Every principle costs something. Listing them without the trade-off is marketing copy. Listing them with the trade-off is a commitment we can be held to. Customers, prospects, and team members hold us accountable to these regularly.
01
The platform shows its work
Every AI-derived quantity links to its source. No black-box outputs. No 'just trust us.'
Estimators don't trust black-box AI for the same reason controllers don't trust unaudited financials: the cost of error is too high. We build click-to-source UI, audit-log immutability, and grounded recognizer architecture so customers don't have to take our word for anything. The platform shows its work.
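The click-to-source idea can be sketched as a data model in which a derived quantity simply cannot exist without pointers to its evidence. This is a minimal, hypothetical Python sketch; the `Quantity` and `SourceRef` names and fields are illustrative, not the platform's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SourceRef:
    """Pointer back to the evidence a quantity was derived from (illustrative)."""
    sheet_id: str    # drawing sheet the recognizer read
    bbox: tuple      # (x, y, w, h) region on that sheet
    confidence: float  # recognizer confidence for this detection

@dataclass(frozen=True)
class Quantity:
    """An AI-derived takeoff quantity that always carries its sources."""
    label: str
    value: float
    unit: str
    sources: tuple   # one or more SourceRef; never empty

    def __post_init__(self):
        # The invariant that makes 'just trust us' impossible:
        # a quantity without evidence cannot be constructed at all.
        if not self.sources:
            raise ValueError("quantity without evidence is not allowed")

# A diffuser count traced back to two sheet regions:
q = Quantity(
    label="supply diffuser",
    value=14,
    unit="ea",
    sources=(
        SourceRef("M-101", (120, 340, 40, 40), 0.97),
        SourceRef("M-102", (88, 512, 40, 40), 0.94),
    ),
)
```

The design choice is that provenance is enforced at the type level rather than by UI convention, so every downstream surface (exports, audit log, click-to-source UI) can assume evidence exists.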
Trade-off
We move slower on shiny features that don't preserve evidence. We've passed on demos that would have wowed prospects but couldn't survive scrutiny.
02
Paid pilots, not free trials
Free trials produce predictable failure. Scoped, paid pilot windows make both sides accountable.
Free trials sound generous; in practice they incentivize customers to tire-kick without committing the time to onboard the AI to their drawings. We charge for pilots so we can deploy real CSM hours; customers pay, so they show up to weekly check-ins. Paid pilots convert to annual contracts at a meaningfully higher rate than industry free-trial benchmarks; we share specific cohort numbers with prospects under NDA during discovery rather than publishing them.
Trade-off
We lose pipeline at the top of the funnel from prospects who refuse to pay until they've kicked the tires. Some of those prospects are real customers we don't reach. We've decided that's an acceptable trade.
03
Honest about what doesn't work
We list the things we don't do well. We name the failure cases. We refund pilot fees when we miss.
Construction-tech vendors have a reputation for over-promising; we'd rather be the company that under-promises and ships. The pilot-FAQ page lists the cases where we've failed. The implementation-guide page lists what slows things down. The pricing-comparison page admits the incumbents are sometimes cheaper on the headline number. Customers find this refreshing; prospects find it disarming.
Trade-off
Some bid processes penalize honesty. RFP scoring rubrics built around vendor-claim density can mark us down for declining to make bigger claims. We've lost a few RFPs over this and concluded they weren't customers we wanted.
04
Customer slack is sacred
Active customers reach engineering, CSM, and leadership directly. We don't filter through layers.
We don't operate a tier-1 support queue that gates customers away from real engineers. Every active customer is in a private Slack channel with their CSM, the relevant onboarding engineer, and named leadership. Most questions get answered in under two hours during business hours. The relationship is the product.
Trade-off
It's expensive at scale, and it favors customers willing to invest in the relationship over those who aren't. We accept higher CSM cost as a permanent line item; we'll re-architect this only if it becomes infeasible at much higher scale.
05
Active learning > pre-training
The recognizer learns from each customer's drawings, not from a one-time generic corpus.
The construction industry has 'standard' symbols that are not actually standard: every engineering shop has its own dialect. A pre-trained model with fixed accuracy across all customers is worse than an active-learning system that meaningfully improves per customer over the first weeks of bid work. We architected for the latter from day one, even though it meant a steeper onboarding curve.
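The per-customer learning loop can be sketched as a thin override layer on top of a generic symbol map: each correction an estimator makes is folded back in, which is why week-4 accuracy beats week 1 on that shop's drawings. Everything below (class name, glyph encoding, labels) is a toy illustration, not the actual recognizer:

```python
from typing import Optional

class SymbolRecognizer:
    """Toy sketch: a generic symbol corpus plus a per-customer override
    layer learned from estimator corrections during early bids."""

    GENERIC = {
        "circle+S": "supply diffuser",
        "square+R": "return grille",
    }

    def __init__(self):
        self.customer_overrides: dict = {}  # this shop's learned dialect

    def predict(self, glyph: str) -> Optional[str]:
        # Customer-specific corrections win over the generic corpus.
        if glyph in self.customer_overrides:
            return self.customer_overrides[glyph]
        return self.GENERIC.get(glyph)

    def correct(self, glyph: str, true_label: str) -> None:
        # An estimator labels the symbol once; every later sheet benefits.
        self.customer_overrides[glyph] = true_label

r = SymbolRecognizer()
r.predict("hex+E")                # unknown dialect symbol on day one -> None
r.correct("hex+E", "exhaust fan") # estimator corrects it once
r.predict("hex+E")                # recognized on all subsequent sheets
```

The real system learns statistically rather than by exact lookup, but the shape is the same: day-one behavior is the generic floor, and the per-customer layer is what climbs over the first weeks.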
Trade-off
Day-1 accuracy is lower than vendors using generic pre-trained models. Customers see lower numbers in week 1 and higher numbers in week 4. Some prospects evaluate on week-1 demos alone and rule us out; we've decided to live with that.
06
Federal posture, never an afterthought
Audit-log immutability, evidence retention, and compliance posture are foundational, not bolt-on.
Federal contractors and SOX-style audited customers have requirements that retrofitted compliance often misses. We built audit-log immutability with hash-chain validation, evidence-trail object-locked storage, and per-event capture from the platform's first commit. The architecture is designed for audit-readiness from the schema up.
Trade-off
More engineering time goes into audit infrastructure; faster competitors ship features we don't because they don't have to make those features audit-survivable. We accept the slower feature velocity as a brand asset.
07
Sustainable shipping cadence
We ship every 2 weeks. We don't ship 60-hour-week heroics. We don't run weekend deploys.
Construction software needs to be reliable; reliability needs sleep. We don't operate weekend on-call rotations for product changes (only for incidents). Engineers ship features at a sustainable pace; the company aims for engineer tenure of 4+ years rather than 18-month burnout cycles. The roadmap reflects this: large items move predictably across releases.
Trade-off
We can't out-ship a venture-funded competitor that runs on heroic-engineering culture. We've decided to compete on architecture quality and customer trust rather than feature volume.
How we hold ourselves accountable
3 checkpoints
Customer Slack
When we drift from these principles, customers tell us in Slack. We've reversed feature decisions, refunded pilot fees, and re-scoped onboarding plans because customers held us accountable.
Quarterly principles review
Every quarter, leadership reviews each principle against the previous quarter's decisions. If we drifted, the leader who owns the principle owes a 1-page write-up to the company. We've published 3 such retros internally.
Public retrospectives
When we miss a principle in a way customers see, we publish a retrospective. The published retros live on the blog under the 'retro' tag. We don't soften them with PR-speak.
Disagree with one of these?
We'd genuinely like to hear it.
If you think one of these principles is wrong — or that we'd serve customers better with a different commitment — write to randy@. We've revised principles before based on outside feedback. They're not stone tablets.