Raymond’s co-founder, Brandon Abreu Smith, didn’t enter construction with a product thesis; he entered it with a fighter jet video and six years of contracted proximity to firms that needed help he wasn’t sure he could give. That accumulated exposure (200 customer conversations, vendor roundtables where they sat as buyers of feedback rather than sellers of software) is what separates Structured AI’s origin from most AEC AI companies built on pattern-matching from the outside. The structural tension this conversation traces is quiet but consistent: an industry full of clever technical ideas but short on founders willing to admit they don’t yet know enough to build. Raymond is one of those rarer founders, and what follows is a conversation about what that discipline actually costs, and what it buys.

About Raymond
Raymond Zhao is co-founder and CEO of Structured AI, a platform focused on quality assurance and quality control for AEC design firms and general contractors. He built the company alongside Issy Greenslade and Brandon Abreu Smith, a machine learning researcher who spent six years embedded as a contractor inside AEC firms while studying at Oxford. Structured AI raised a seed round in late 2025 and relocated its team to New York in April 2026 to be closer to its growing base of American clients.

“For us that was a gold mine of how we shouldn’t be behaving as vendors.” Raymond Zhao

There’s a version of the Structured AI story that sounds inevitable in retrospect: a technically brilliant co-founder, deep industry contacts from years of embedded contracting, a $177 billion problem waiting to be productised. But the version Raymond actually describes is less tidy. It’s a story about resisting the obvious first idea, running 200 conversations before committing to a direction, and sitting at a vendor complaint roundtable as a vendor, asking people what not to do. The discipline isn’t the footnote. It’s the whole thesis.

The accidental entry that wasn’t accidental

Raymond traces his co-founder’s path into AEC back to a YouTube reinforcement learning tutorial. At 17, Brandon filmed a fighter jet that learned to dodge anti-missile systems, published it as an educational video on OpenAI’s PPO framework, and watched it go viral on Reddit. An American construction contractor saw it and hired him to apply the same logic to 3D construction problems.

His first project was MEP pipe optimisation: routing ducts and pipes in the most efficient spatial configuration possible, reframed as a game that an agent could learn to play. It worked well enough to prove a concept. But it also launched six years of part-time contracting for GC and engineering firms while Brandon studied physics at Oxford, which meant six years of watching which problems actually cost people time and which they’d pay to have solved.

The distinction between the problems people mention and the problems they pay to solve is where most AEC product companies fail. Brandon landed at Syska Hennessy Group as an in-house AI developer before he, Raymond, and Issy founded Structured AI. By then, the pattern from those six years was clear: the constraint wasn’t technical. It was adoption. Information lived in engineers’ heads and didn’t move. Weekly knowledge-capture sessions were being run manually just to extract what people knew. The data problem predated the AI problem.

When Raymond, Brandon, and Issy formed Structured AI, they spent the first six months speaking with more than 200 design engineers across every AEC discipline, from MEP and structural to architecture, general contracting, and owners. They weren’t collecting pain points. They were applying a second filter that Onur Ekinci at CalcTree described in an earlier conversation: not just “does this hurt?” but “would you pay tomorrow if we fixed it?”

“When we moved to QAQC, they were like: yeah, I really want this like tomorrow. Can we give you a set of plans for you guys to find issues?”

That shift from document generation, where the response was polite interest, to quality control, where the response was immediate urgency, is the signal that separated a beachhead from a feature. MEP optimisation was technically impressive and limited in scope. QAQC was universal: every stakeholder in the AEC value chain, from architects to GCs to owners, suffers when errors reach the field. The US construction industry spends $177 billion annually fixing errors that should have been caught earlier. That number isn’t marketing. It’s the structural cost of a quality process that nobody has productised at scale. An a16z analysis of the AEC software market makes the structural cause visible: most commercial buildings are still designed using software built in the late 1990s, a platform inertia that compounds error rates across every project cycle.

Trust as the operating model

The 200 interviews are the visible part of Structured AI’s discovery process. The less visible part is what happened at the industry conferences Raymond, Brandon, and Issy attended across America.

At one of them, they sat at a round table where practitioners were complaining about vendors. The complaints were specific: locked-in contracts, misrepresented capabilities, and sales cycles that overpromised and underdelivered. Raymond, Brandon, and Issy were vendors. They could have defended the category or quietly endured the session. Instead, they introduced themselves honestly and asked the room to tell them exactly what not to do.

“We are on the vendor side. Please, please, please tell us what not to do.”

This is what trust looks like as an operating model rather than a sales tactic. Sam Carigliano at SkyCiv described how trust in AEC compounds slowly: “They’ll watch you from the sidelines for a year or two. They need to see you’re going to stick around.” Raymond’s version of this is more active. Rather than waiting for trust to accumulate over time, Structured AI created the conditions for it through radical transparency at moments when most vendors would have been defensive.

The result is clients who say they believe in the team more than the product. In an early-stage AEC company, that sentence is not a consolation; it’s the go-to-market.

The same posture shaped how they approached their design partners. Rather than running formal feedback sessions, they would go to client offices and “harass engineers,” as Raymond puts it, verifying every assumption in real time and unwilling to proceed on anything they hadn’t confirmed directly. When they needed domain expertise they didn’t have, they hired MEP and structural engineers to join their founding team. The team’s willingness to admit the boundaries of its own knowledge became a trust signal in itself.

What 90 percent accuracy actually means

The QAQC product that emerged from this process operates on 2D drawings at 30, 50, 70, and 100 percent completion, covering drawings, schedules, and specs across the entire project package. The target is 90 percent accuracy, optimised specifically to minimise false positives: if the system flags an error, there should be a 90 percent chance it’s real.

That number isn’t arbitrary. It’s the threshold at which engineers are willing to trust the output enough to act on it, making it less a product metric than a trust metric. Each client receives its own fine-tuned agents; Structured AI doesn’t change the weights of foundation models, which Raymond describes as “fighting the tide.” Instead, they build a layer of custom agents per client that learns from that firm’s feedback and gets better at that firm’s specific standards over time. Some clients even receive project-type-specific agents: a firm with a recurring fast-food restaurant client gets an agent fine-tuned to that firm’s fast-food restaurant projects.
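The 90 percent target described above is, in effect, a precision threshold on flagged issues. A minimal sketch of that framing, assuming a simple list of engineer-confirmed flags (the function names here are illustrative, not Structured AI’s actual API):

```python
# Illustrative sketch, not Structured AI's code: the 90 percent target
# treated as a precision threshold on flagged issues.

def precision(flags: list) -> float:
    """Fraction of flagged issues that engineers confirmed as real."""
    if not flags:
        return 0.0
    return sum(flags) / len(flags)

def meets_trust_threshold(flags: list, target: float = 0.9) -> bool:
    """Output is actionable only if precision clears the trust bar."""
    return precision(flags) >= target

# 18 of 20 flags confirmed real: exactly the 90 percent threshold.
confirmed = [True] * 18 + [False] * 2
print(precision(confirmed))              # 0.9
print(meets_trust_threshold(confirmed))  # True
```

The design choice this encodes is the one Raymond describes: a missed error costs less trust than a false alarm, so the metric that matters is the share of flags that turn out to be real.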

The quality standard itself is non-universal. An architectural engineering firm cares about formatting consistency, deliverable presentation, and referencing standards. A general contractor cares whether the building can actually be built: whether the structural beam is properly specified, whether there’s enough information to price and sequence the work. Title block consistency and north arrow placement are irrelevant to a GC; a missing load calculation is not. The same set of drawings can be “high quality” by one firm’s standards and full of critical errors by another’s. Structured AI helps firms surface and articulate what quality means for them, which turns out to be a more valuable problem than the QC flagging itself.

Raymond’s Palantir analogy becomes useful here. Palantir’s central insight was that you couldn’t sell a powerful AI platform to an organisation that hadn’t first built a coherent ontology of its own data. Structured AI is doing something structurally similar: using the QAQC process to extract the tribal knowledge that lives in senior engineers’ heads and convert it into a consistent, firm-specific standard. Three senior engineers reviewing the same set of drawings will return non-overlapping problem lists. That isn’t a failure of individual engineers. It’s the absence of a shared standard that anyone could actually check against. Annie Liang at Billie Onsite named this as construction’s deeper AI problem: the expert-verified data that would make AI useful was never created in the first place. The QAQC platform creates that standard, and the data it captures in the process becomes the foundation for everything that comes after.

The floor before the skyscraper

Raymond is clear that Structured AI’s current scope is a deliberate starting point, not a final destination. The platform vision is an AI workforce: a seven-person firm taking on projects that would traditionally require a hundred employees, using agents to replace offshore outsourcing with a faster, higher-quality internal capability. The generative phase comes later; once the quality guardrails exist, the same checks that validate human output can validate AI-generated output. The system loops on itself until the generated component meets the firm’s own quality standard, then presents it to a human for a final decision.
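The generate-then-check loop described above can be sketched in a few lines. Everything here is hypothetical structure inferred from the conversation, not Structured AI’s implementation; the function names and convergence behaviour are assumptions:

```python
# Hypothetical sketch of the loop described above: generate a component,
# run it through the same QC checks used on human work, iterate until it
# passes the firm's standard, then hand it to a human for the final call.
from typing import Callable, List, Tuple

def generate_until_passing(
    generate: Callable[[List[str]], str],  # produces a draft from prior QC feedback
    qc_check: Callable[[str], List[str]],  # returns flagged issues (empty = passes)
    max_iterations: int = 5,
) -> Tuple[str, bool]:
    """Loop generation against the firm's own quality standard.

    Returns the final draft and whether it passed QC; either way,
    a human reviewer makes the final decision.
    """
    feedback: List[str] = []
    draft = generate(feedback)
    for _ in range(max_iterations):
        issues = qc_check(draft)
        if not issues:
            return draft, True        # passes: escalate to human sign-off
        feedback = issues
        draft = generate(feedback)    # regenerate with QC feedback folded in
    return draft, False               # did not converge: flag for human review

# Toy example: a "generator" that folds QC feedback into the next draft.
def toy_generate(feedback):
    return "draft" if not feedback else "draft+" + "+".join(feedback)

def toy_qc(draft):
    return [] if "missing_load_calc" in draft else ["missing_load_calc"]

result, passed = generate_until_passing(toy_generate, toy_qc)
```

The key property is that the generative phase reuses the existing quality guardrails rather than inventing new ones, which is exactly why the QAQC floor has to come first.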

But the generative ambition is only defensible because the quality floor came first. Will Meurer made this point in a previous conversation in this series: “You can’t build a skyscraper from left to right. You have to build it from the bottom to the top.” The QAQC product isn’t a stepping stone to the real product. It is the load-bearing foundation without which the AI workforce claim would be a platform ambition without substance. Tom Blomfield, founder of UK challenger bank Monzo and a Y Combinator partner who worked with the team during their batch, put it more practically: “Resist adding features when sales slow. Keep batting singles.” The advice carries a specific logic for AEC markets, where the temptation when sales stall is to broaden scope rather than deepen delivery. The companies that survive are the ones that become genuinely indispensable at one thing before they reach for the next.

The 2D constraint is part of this same logic. It would be technically possible to build on 3D models. But as the CEO of one of their clients told Raymond directly: “BIM is just a gimmick until the field can use it.” Subcontractors have iPads. 3D software on iPads is slow and clunky. 2D drawings are still the primary field deliverable and the legal requirement for permits. That’s a five-to-ten-year reality, not a temporary limitation. Structured AI builds for where the industry actually operates, not where the industry says it wants to go. When the field shifts, the quality guardrails and data layer they’ve built will shift with it; the platform is positioned for that transition without betting on its timeline.

The billing model nobody’s ready to discuss

The business model tension that sits underneath all of this rarely surfaces in product conversations, but Raymond raises it directly. AEC firms that adopt AI can significantly reduce time: tasks that took 20 hours now take 20 minutes. But most firms still bill by the hour. If a firm can do in two hours what used to take eight, and both firms price on time, the AI-enabled firm is billing a quarter of what its slower competitor charges for identical value delivered. Firms like Syska Hennessy Group, where Rob is already working through this transition, illustrate the logic. As Rob puts it: “We might bill less now, but we can take up more contracts and eventually we’re still going to win.” The logic is sound over a long horizon; it’s uncomfortable in the short term. The conversation with clients about value versus time hasn’t caught up with what the technology now makes possible. Raymond sees this as an unsolved structural question for the industry, not a problem Structured AI will solve for its clients. The missing conversation is how firms bring their own clients into the journey differently: not billing for the drawing, but demonstrating the value that better process creates throughout delivery.

The compounding advantage of knowing what quality means

The reason the data layer matters so much is that it compounds. Every quality standard a firm articulates, every error pattern a firm’s agents learn to catch, and every project-type-specific rule that gets encoded creates a body of structured knowledge that didn’t exist before. That knowledge is firm-specific, proprietary, and self-reinforcing: the more projects run through the system, the more precise the standard becomes.

This is the transition from tool to solution: the company that captures customer context doesn’t just provide a service; it becomes the repository of the customer’s own operational intelligence. An AEC firm’s quality standard, once externalised and encoded, is not easily transferred to a competitor’s platform. It belongs to the relationship between the firm and its agents.

The question the industry hasn’t yet fully asked is who holds the coordination leverage in a world where AI shapes what gets checked, what gets built, and what gets considered correct. Structured AI’s bet is that the company that helped firms define quality first will be best positioned to answer it.

The patience required to understand before building

The arc of Raymond’s story closes on a structural insight that the AEC AI conversation rarely surfaces directly. The industry is not short of clever technical ideas. Reinforcement learning applied to pipe routing is clever. Generative AI for room layout is clever. Computer vision for defect detection is clever. What’s scarce is the willingness to spend six months talking to 200 people before building anything, to sit at a complaint roundtable as the person being complained about and ask for more criticism, to hold the platform ambition in reserve until the first floor actually bears weight.

The companies that build durable positions in AEC don’t tend to be the ones with the most sophisticated technology at the starting line. They tend to be the ones that accumulated enough proximity to the problem to understand what quality actually means for this firm, this project type, this stakeholder. The data layer is not a product feature. It is the accumulated cost of having listened long enough to know what to capture.

Three senior engineers reviewing the same drawings return non-overlapping problem lists. That isn’t the problem Structured AI is solving. It’s the condition that makes the problem worth solving, and the reason that whoever helps firms close that gap first will find that the gap itself becomes the moat.