The collapsing distance

The most important thing happening in AEC right now is not the tools. It is what happens to workflows when time and data distance collapse.
McKinsey recently argued that the biggest gains from agentic AI come not from point solutions but from reimagining entire end-to-end workflows. Not automating a task. Rethinking why the task existed in that form at all.
In AEC, the delivery process was always fragmented by necessity. Design, engineering, procurement, construction, and operations were separated not just by organisational boundaries but by time gaps and data handoffs. A coordination problem so deep it shaped the structure of every firm and every contract.
Those gaps are not permanent features of the work. They were constraints of the medium.
When AI compresses the time between a decision and its downstream consequences — when data no longer loses fidelity across handoffs — the logic of those boundaries weakens.
What replaces them is starting to come into view, at least in other industries, and it will come faster for AEC as well.
Engineers will no longer just produce outputs. They will build the automation that produces outputs. The most valuable technical professionals in AEC over the next decade will not be the ones who are best at running the workflow. They will be the ones who design it, encode it, and hand it to an agent.
Project managers are going to shift position. Not inside the loop managing handoffs. Above the loop — setting intent, coordinating between agent workflows, exercising judgment at the points where the work requires it.
McKinsey's framing is useful here: human above the loop does not mean hands-off. It means repositioning human judgment from execution to oversight. Squads of agents running the core process, with humans accountable for the decisions that matter.
This is where Andrew McAfee's observation bites harder in AEC than in other industries. The learning cadence has compressed. What used to take years of iteration now takes months. Firms that start encoding their workflows now will not just be faster; they will be compounding knowledge while their competitors are still debating adoption.
The structural question this creates is the interesting one.
If time and data distance collapse across the delivery process, does integration follow? Does one entity absorb the stack vertically?
The more likely answer is a network. Not one firm owning design through operations, but a set of specialised entities: designers, engineers, contractors, platform operators, coordinating more tightly than was previously possible. Agent-to-agent handoffs are replacing the slow human intermediaries that populated every boundary.
Vertical delivery without vertical ownership. Coherence through shared data layers and interoperable workflows, not through a consolidated hierarchy.
The organisational models being tested at Meta and Block — smaller teams, player-coaches, flatter structures — are symptoms of something broader. You need people who can set direction, build automation, and hold accountability for outcomes.
The distance is collapsing. The question is whether the industry redesigns around that or waits until the new configurations appear without it.


Stop wrestling. Start surfing!

For thirty years, construction has been building files.
RVTs, PDFs, IFCs, contracts, specifications. Terabytes of project knowledge accumulated across tools that were never designed to talk to each other. The US industry alone loses $16 billion annually to poor interoperability. Up to 90% of project data goes unused. Teams spend more than half their day not designing — just translating between systems.
We do not have a data problem. We have an access problem.
The answer is not a new platform. It is a coordination layer — one that wraps what already exists and makes it readable by the agents that can finally help us act on it. That layer is called MCP. Model Context Protocol turns your existing APIs, tools, and documentation from static to surfable.
The thing that gets chosen is not the best product. It is the most readable one.
We saw this play out in software: tools that structured their documentation for agents became the default recommendation across millions of AI interactions — not because they were better, but because they were easier to reach. The same logic applies to AEC. The winners will not be those with the best systems. They will be the most agent-ready data interface in the room.
Autodesk is moving. The APS MCP server is now in beta, with working examples of how to wrap AEC data into a layer that agents can access and act on.
But a single MCP server is a pipe. The real capability comes when you compose it: a Skill that teaches the agent your domain, an MCP server that handles authentication and structured retrieval, and an MCPApp that renders results directly in the conversation. Not three separate tools. One Extension Bundle. No rip-and-replace.
The files are not changing. The models are not changing.
What is changing is that the data is finally becoming surfable.
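To make "surfable" concrete, here is a minimal sketch of what an MCP-style tool server does, with the real protocol plumbing stripped away: it exposes named tools and answers structured JSON-RPC "tools/call" requests that an agent can issue. Every tool name and data value below is invented for illustration; a real server would use the MCP SDK and wrap actual project APIs.

```python
import json

# Invented tool registry: an agent asks for a spec section by name and gets
# structured data back instead of scraping a file.
TOOLS = {
    "get_specification": lambda args: {
        "project": args["project"],
        "section": args["section"],
        # Stand-in for real retrieval from a document store or API.
        "text": f"Sample spec text for {args['section']}",
    }
}

def handle(request_json: str) -> str:
    """Dispatch a JSON-RPC 'tools/call' request to the named tool."""
    req = json.loads(request_json)
    tool = TOOLS[req["params"]["name"]]
    result = tool(req["params"]["arguments"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

response = handle(json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "get_specification",
               "arguments": {"project": "A-102", "section": "acoustics"}},
}))
print(response)
```

The point of the sketch: the files stay where they are; the wrapper gives agents a structured way to reach into them.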


Skills aren’t just for developers anymore

When Anthropic shipped packaged legal skills into Claude, $285 billion was wiped from software stocks in a single day. Thomson Reuters down 16%. LexisNexis down 14%. LegalZoom down 20%. Not because of earnings. Because investors understood something the affected industries hadn't yet: a general AI just upskilled itself into a professional domain overnight.
If law, why not specification review? If contracts, why not BIM coordination?
The software world had already moved. Twelve months of packaged knowledge bundles — reusable, shareable, installable from GitHub with a single command. Code review skills. Testing workflows. Documentation protocols. Not the automation of a task. Encoded expertise.
We kept talking about automating the model.
The cool output. The deliverable that photographs well in a client meeting. The thing AI does that the industry can point to. While the actual intelligence — the calibrated judgment of engineers with fifteen years of project failure behind them — kept walking out the door with every retirement.
One architecture firm. One school a year. Twenty years: twenty schools of accumulated knowledge. Why the acoustic spec on project seven failed. What school boards never mention in the brief. Which thermal detail always generates a change order two years later. That knowledge lives in email threads. In memory. In someone who might not be there next spring.
A skill isn't a report. It's an encoded judgment.
And unlike most competitive advantages in AEC, this one compounds. Every project encoded makes the next engineer faster. Every lesson captured makes the next mistake less likely. The firms that start now aren't just retaining knowledge — they're building a moat their competitors can't see yet.
And we now have the infrastructure to build internal libraries of it. To extract what engineers know from decades of project work. To encode it into something an agent can actually use. To chain those agents into workflows a new hire can query on day one — not a lessons-learned PDF buried in a folder no one opens, but an agent that knows what your firm knows.
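For a sense of what "encoded judgment" looks like in practice: in Anthropic's skill format, a skill is roughly a markdown file with frontmatter that an agent loads on demand. A hypothetical version of the acoustic-spec lesson above might look like this (every name, rule, and number invented for illustration):

```markdown
---
name: school-acoustic-spec-review
description: Review acoustic specifications for school projects against lessons from past project failures.
---

# School acoustic spec review

When reviewing an acoustic specification for a school:

1. Check partition ratings against the activity mix of adjacent rooms,
   not just the room-type labels in the brief.
2. Flag any music room sharing a partition with a quiet teaching space.
3. School boards rarely state supervision sightline needs in the brief;
   ask before finalising glazing between teaching spaces.
```

The file is the judgment. The agent carries it into every future project.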
The research is sobering: human-curated skills improve AI agent performance by 16 percentage points. Self-generated ones barely register. You can't automate the capture. The expertise has to come from inside — from the people who carry it, while they're still in the building.
The industry has been losing knowledge for decades.
Now it has the tools to stop it.
The question is whether it starts before or after the people who could fill those libraries are gone.



More efficiency, more demand

AI is going to make engineers faster.
The fear is that it means we need fewer of them.
History says the opposite.
Geoffrey Hinton predicted in 2016 that radiologists would be obsolete within five years. By 2023, there were 17% more radiologists in the US than in 2014. AI made analysis cheaper, so hospitals ordered more scans, screened earlier, and expanded the total volume of imaging work. This is the Jevons Paradox: when a resource gets cheaper to use, total consumption goes up, not down.
The same logic is already showing up in AEC.
WSP posted a record backlog of $17.1 billion. They're heavily investing in AI and saying the same thing the data says: AI augments engineers, it doesn't replace them. If AI halves the time it takes to process a geotechnical investigation, or run a mechanical load calculation, or model a fire compliance scenario, the work doesn't disappear. Clients who couldn't afford a comprehensive analysis start ordering it. Projects that weren't viable become viable. The bottleneck shifts. It doesn't disappear.
More efficiency doesn't reduce demand for expertise. It expands the surface area where expertise gets applied.
But there's a catch.
More demand for experienced engineers doesn't solve the pipeline that produces them. Junior engineers traditionally learned by doing repetitive work. If you've always reviewed AI output rather than written the first draft yourself, you don't develop the instinct for what good looks like. It's the GPS problem: if you only ever follow directions, you never build a sense of direction. In structural or geotechnical or fire engineering, not having that instinct isn't an inconvenience; it's a liability.
The knowledge crisis in AEC is actually two problems: capturing what the current generation knows before they leave, and making sure the next generation develops the judgment to use that knowledge.
Firms need to ask how they will develop engineers fast enough.



The data flywheel we haven’t built yet

Construction knows it needs to collect data.
It doesn't yet know how.
Not really. We generate enormous amounts of it: models, drawings, specs, site observations, sensor feeds, cost records. But most of it is never captured with intent. It accumulates in silos, in formats that don't talk to each other, carrying labels that mean different things to different teams. We haven't agreed on what "quality" means for a construction dataset. We haven't mapped the relationships between processes, outcomes, and the decisions that connect them.
Karol Hausman, co-founder of Physical Intelligence, spent the last few years solving a deceptively similar problem for robots. To teach a robot to pick up a cup, you can't simulate the physics of the world; it's too complex, too contextual. Instead, you collect real-world data, reach a deployable threshold, and then let deployed robots collect more data while doing useful work. Models improve. More robots are deployed. More data flows in. A flywheel.
The insight isn't about robots. It's about what happens when you finally understand what data you actually need and start collecting it with that in mind.
He also learned something humbling: "quality data" and "diverse data" are easy to say and almost impossible to define until you start building. You don't theorise your way to a definition. You run experiments, the definitions sharpen, and you double down on what works.
AEC needs to go through exactly that process.
That's the ontology problem. Not a technical problem, a literacy problem.
The industry needs to learn what it means to define a thing cleanly, to connect it to related things, and to make those connections machine-readable. That work isn't glamorous. It doesn't ship as a product. But it's the substrate everything else runs on. Without it, AI can chat about your documents. It cannot reason about your buildings, your projects, or your institutional knowledge.
We can start now without waiting for perfect infrastructure. The unstructured data is already there: documents, conversations, drawings, reports. Frameworks like MCP let us start surfing it today. But surfing unstructured data and building structured knowledge aren't alternatives. They're phase one and phase two of the same loop.
Phase one: learn what's in the data you already have.
Phase two: capture the next project's data using the definitions from phase one.
Each project feeds the next.
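As a toy sketch of that two-phase loop, with every field name and threshold invented: phase one mines recurring terms out of documents you already have and promotes them into a schema; phase two checks the next project's capture against that schema.

```python
from collections import Counter

# Phase one input: stand-ins for unstructured project text.
documents = [
    "bearing capacity 150 kPa at pile P1",
    "pile P2 bearing capacity 180 kPa",
]

# Phase one: learn which fields keep recurring in the data you already have.
terms = Counter()
for doc in documents:
    if "bearing capacity" in doc:
        terms["bearing_capacity_kPa"] += 1

# Promote terms that recur across documents into the shared schema.
schema = {term for term, count in terms.items() if count >= 2}

# Phase two: capture the next project's data against that schema, and
# flag anything the definitions say should be there but isn't.
record = {"pile_id": "P3", "bearing_capacity_kPa": 165}
missing = schema - record.keys()
print(sorted(schema), sorted(missing))
```

Trivial on two documents; the point is the shape of the loop, where each project's capture is validated against definitions learned from the last.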
That's the flywheel construction hasn't built yet.



From T&M to outcome

Construction has always been billed for time.
Everyone worked at roughly the same speed. Effort was the scarce thing. So we tracked hours, counted deliverables, measured how many quantities moved through the process. And that was fair enough.
But AI is starting to break that equilibrium.
If one firm can do in two hours what used to take eight, and both are still pricing on time, the faster firm's fee is a quarter of the slower one's. And the answer isn't to just make everything cheaper. You can make a hotel room cheaper and cheaper, but eventually you end up with tents in the streets and that isn't a business.
The hours are being compressed. The conversation with clients hasn't caught up yet.
Autodesk is already moving. Its CEO is starting the shift from per-seat licensing toward consumption- and outcome-based pricing, not out of generosity, but because per-seat collapses the moment headcount isn't the constraint anymore.
The same pressure is hitting professional services. The firms that will capture the value AI creates aren't just the ones that got faster. They're the ones who changed what they're selling.
Not: "How long will this take?"
But: "What is faster delivery worth to you? What does a design with fewer change orders do for your project?"
The client has always cared about the outcome. We were the ones billing for the journey.
It is time to have different conversations with clients and to move from time and materials to outcome-based pricing.



Institutional knowledge

Jensen Huang introduced OpenClaw at GTC, baked into Nvidia's platform. It breaks problems into tasks, spawns sub-agents, connects them to tools, to file systems, to models. The infrastructure for agents to actually coordinate work, not just answer questions in isolation.
But he was just as clear about the other half of the equation.
Agents are probabilistic by nature. They drift. They hallucinate. The only thing that anchors them is a structured, deterministic layer underneath — the ground truth that tells the agent not just what something is, but how it relates to everything else.
This is the necessary first step: defining institutional knowledge. Your policies. Your workflows. The expertise that currently lives in people's heads, in email threads, in the memory of whoever has been on the project longest.
That's the gap our industry needs to work on.
The building blocks to coordinate agents across that data are falling into place. The tools to embed what your firm knows into agents' behaviour are already here.
Now comes the hard work of capturing the relationships, structuring the knowledge, and building the layer that makes decades of engineering expertise readable to the agents waiting to use it.



Workflows and the agent layer

We've been automating tasks for decades.
We haven't touched the gap between them.
Take a geotechnical site investigation handoff, for instance. The field team wraps up, and the design engineer who picks it up next has to dig through boring logs, lab test results, and monitoring reports just to understand what the ground is actually doing — before they can touch the design. Then they hand off their assumptions to the structural team, and the cycle starts again.
That delay, call it 30 minutes, happens at every handoff.
It adds up on every project, between every team, every single quarter. It’s quiet but relentless.
We’ve given folks slicker software and faster tools, made dashboards smarter, and automated the tougher parts. But we’ve totally overlooked the cost lurking in that empty space between the tasks.
What's coming into focus now is about more than just AI speeding up individual work.
We need to ask a different question: what if we put an agent in that gap?
Not to replace the tasks, but to shorten the distance between them. To carry context forward without that annoying 30-minute wait. To kick off the next process before anyone even thinks to ask. This way, we can use all the automation already running under the hood of our workflows, but finally give it some smart direction between the handoffs.
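A sketch of that gap-filling agent, with the model call stubbed out and all data, field names, and thresholds invented: it condenses the field team's raw output into the context the next discipline needs, and notifies them without being asked.

```python
def summarise_investigation(boring_logs):
    """Stand-in for an LLM call: reduce raw logs to design-relevant facts."""
    # Invented rule of thumb: low SPT blow counts flag soft layers.
    soft = [log for log in boring_logs if log["spt_n"] < 10]
    return {
        "holes": len(boring_logs),
        "soft_layers": [(log["hole"], log["depth_m"]) for log in soft],
    }

def handoff(boring_logs, notify):
    """Carry context forward and kick off the next process unprompted."""
    context = summarise_investigation(boring_logs)
    notify("structural-team", context)  # push, don't wait to be asked
    return context

sent = []
ctx = handoff(
    [{"hole": "BH1", "depth_m": 3.0, "spt_n": 6},
     {"hole": "BH2", "depth_m": 5.5, "spt_n": 22}],
    lambda team, context: sent.append((team, context)),
)
print(ctx["soft_layers"])
```

The tasks on either side are unchanged; what changed is that the gap between them now carries context instead of losing it.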
Businesses aren’t just systems of record; they’re systems of processes. The real value comes from how these processes coordinate with each other.
That’s where things can really speed up.
We spent years making each individual step more efficient.
But we never stopped to think about whether those steps still make sense the way they are.
Now is the time to rethink and redesign our workflows.



Don’t do it better, reframe the issue

You've likely spent the last five years selling "better BIM" or "better project management." Slightly faster. Nicer UI. Integrated dashboards.

But here's what Autodesk's data reveals: 86% of AEC businesses want to be cloud-based within 5 years.

Only 3% are today.

That's not a technology gap. That's a problem gap.

Christopher Lochhead proved this with Meta Threads. It launched on top of Instagram's billion-user network, a legendary brand, zero-friction onboarding. It still cratered. Why? Because Threads attacked an existing, solved problem with "just better."

Same story: Amazon Fire Phone vs. iPhone. Red Bull Cola vs. Coca-Cola. Direct competitors against entrenched categories lose every time.

Construction tech is repeating this exact pattern.

Stop selling incrementally better solutions. Start reframing the problem.

Instead of "better BIM collaboration," reframe it as "design-to-fabrication workflow"—eliminating the translation layer between design intent and manufacturing. This isn't competing against BIM. It's damming demand from coordination models to fabrication-ready models.

Instead of "better project management," reframe as "outcome-based delivery"—you price the output (buildings that meet performance criteria), not the input (hours spent).

Otis didn't build a "better elevator." They reframed it as a "vertical railway." Language created a new problem frame. The solution became obvious. An entirely new category opened.

Are you solving yesterday's problem slightly better, or are you naming and claiming a problem the industry doesn't know it has yet?

The first path leads to pilots and product iterations. The second leads to category creation.



PDFs as the source of knowledge

In 1993, a Gartner consultant called the PDF "the dumbest idea I've ever heard."

Today, 2.5 trillion PDFs are in circulation.

In AEC, the format became the Rosetta Stone: the one format everyone eventually converges on.

AI's biggest payoff right now is coordination: extracting structure from fragmented, unstructured data and turning it into alignment. PDFs are AEC's largest source of unstructured data. They should be perfect partners.

Instead: LLMs hallucinate when reading them. Column layouts get parsed horizontally. Headers become noise. A spec document becomes a wall of confusion. David Spergel described it recently: "Every drawing and PDF tells a story." Most of them just get uploaded and never looked at again.

In construction, they carry everything: invoices, specifications, RFIs, contracts, site communications, and permit applications. Every decision that moves through a project eventually lands in a PDF. When AI can genuinely extract structure from them, not hallucinate it, but read it, coordination stops being a negotiation and starts being a function. The institutional knowledge locked in project archives becomes queryable. The decision made on Project A finally informs Project B.

The reader is here. It's learning fast.



Make readable to agents

The interoperability problem isn't going away.

It's changing shape.

For at least the past 10 years, we've been asking: how do we make systems talk to each other? IFC, open BIM, CDEs, connected project platforms. Real work. Real progress. And still — a geotechnical report in a PDF, a structural spec in a Word doc, a contract variation buried in someone's inbox.

We haven't finished that battle.

I've been watching something play out in the software world that I can't stop thinking about in the context of construction.

Resend — an email API — quietly became the default recommendation from Claude and ChatGPT across millions of interactions. Not because it was the best email tool. Because it structured its documentation so that agents could read it. Clean markdown. Code snippets at every step. A dedicated file that told LLMs exactly what the product does and how to use it.

Meanwhile, Groq — a faster and cheaper alternative for AI transcription — kept losing to older tools. Not because of performance. Because its docs were harder to parse. Agents couldn't find the right answer quickly, so they recommended something else.

The thing that gets chosen is not the best product. It's the most readable one.
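One concrete pattern behind that readability is the emerging `llms.txt` convention: a plain-markdown file at the root of your documentation that tells agents what the product does and where to look. A hypothetical AEC vendor's version might look like this (every name and path invented for illustration):

```markdown
# Acme Structural API

Acme exposes structural models, load calculations, and compliance
checks over a REST API.

## Docs

- [Authentication](/docs/auth.md): API keys and project scopes
- [Load calculations](/docs/loads.md): run and retrieve load cases
- [Compliance](/docs/compliance.md): check a model against code presets
```

Nothing in it is clever. It simply removes the parsing work between an agent and your product, which is exactly the work agents punish you for leaving in.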

This is the conversation I think we're not having in AEC.

We have data. Enormous amounts of it. Models, specifications, reports, contracts, drawings, codes. Decades of accumulated project knowledge. But almost none of it is structured for the agents that will increasingly coordinate, check compliance, procure materials, and surface decisions across our projects.

We know what it looks like to start solving this. The industry has been inching toward machine-readable building data for years — the instinct has always been right. The question now is whether we take the next step: not just making data portable between systems, but making it genuinely surfable by agents.

The next 10 years of interoperability won't be about making systems talk to each other.

It will be about making our data readable to the things that will coordinate, check compliance, procure materials, and make decisions — automatically, across every project.

We have the same fragmentation issue, but there's a great opportunity to address it using a different approach!



77% of underground construction projects suffer from insufficient or inadequately interpreted data

Source: Rajat Gangrade

The problem isn't a lack of data. It's fragmentation.

Geotechnical reports in PDFs. Designs in BIM. Cost data in spreadsheets. Schedules in Procore. Each silo is optimised individually, none talking to each other, and now we are looking to solve this by adding "AI analysis" to each silo.

That's optimising the wrong problem.

We're asking the wrong question about AI. Instead of "How do we add AI to existing processes?" ask "How do we redesign workflows around what AI enables?"

Right now, we're rushing to automate tasks: faster clash detection, quicker takeoffs, automated compliance. This just makes fragmented workflows slightly faster.

The problem: Every "AI-powered" feature gets commoditised as foundation models improve. Your competitive advantage evaporates every 18 months.

There's a different playbook.

In Reshuffle, Sangeet Paul describes how containerisation transformed global trade, not by making shipping faster, but by unbundling and rebundling the entire workflow.

Before containers, cargo was packed/unpacked manually at every port. Containerisation unpacked the process by separating cargo handling from transport, storage, and documentation. Then we repacked everything around the standardised container.

The result wasn't 20% faster shipping. It was the foundation of global supply chains.

AI needs that same mindset in construction.

Right now, two trends are forming:

Path 1: Adding AI features
What AI researcher Rich Sutton calls "the bitter lesson." Companies that encode human expertise get short-term wins but lose to those building learning systems that leverage computation at scale. Hand-encoded features get outdated as models improve.

Path 2: Rethinking workflows
Instead of just adding AI, companies that focus on what models need (like training data and infrastructure) or find new ways to work with it will lead the way.

Here's what it looks like in action:

Unpack: Break down "coordination." Design intent in heads, decisions in meeting notes, models in isolated tools, consensus through endless meetings.

Repack: Design with AI capabilities in mind: coordination that doesn't hinge on getting everyone to agree first.

Instead of months of negotiating standards:
- Teams model in their preferred tools
- AI translates between formats as a semantic layer
- Design intent captured in machine-readable format
- Real-time conflict resolution
- Coordination emerges from continuous learning, not consensus-building

It's coordination as a continuous learning system.



Two moves that look different are actually the same strategy

The best analogy is Sephora.

Sephora did not win by making every product. It won by owning the moment of uncertainty: helping customers decide what to trust and buy.

That same dynamic is now playing out in enterprise AI.

In construction and other regulated industries, the key decision is not "Which model is smartest?" It is:

"Can I trust this decision when budget, schedule, compliance, and liability are on the line?"

This is why Palantir and Anthropic matter in the same conversation.

Both are moving toward the same control point: the high-friction decision layer.

  • Palantir: unify fragmented operational data and orchestrate decisions across teams.
  • Anthropic: combine model capability with distribution, consulting channels, and safety infrastructure to enter regulated workflows.

So what should companies do?

Yes, invest in your horizontal layer, but be precise about what that means.

  • Buy the Commodity: The foundation models, the OCR, and the general infrastructure components.
  • Build the Platform: The data model, the governance and permissions, the decision logic tied to your real workflows, and the integration architecture.

The reflection to extract is simple:

AI features are not your moat.

The moat is the connective tissue that turns fragmented data into trusted decisions.



The Jevons paradox

The Jevons paradox, where efficiency creates more demand rather than less, works in radiology because AI-enabled imaging feeds directly back into more imaging orders.

In AEC, however, this logic breaks down; the industry is so fragmented across disciplines that efficiency in one part, such as faster design documentation, doesn’t generate demand across the whole chain. Instead, it merely shifts the bottleneck elsewhere, usually to the physical construction site, where labour shortages act as a hard cap.

For AI to truly expand demand, it must move beyond simple "chat" interfaces and establish an operational ontology: a unified digital architecture that bridges the gap between a design model and physical execution. This allows the AI to vertically integrate across fragmentation points by connecting design decisions directly to operational outcomes.

The strongest market pull comes when this integration helps clients make more money, not just spend less: consider performance-based services, predictive maintenance contracts, or digital twins that turn building data into billable insights and proprietary feedback loops.

Without that revenue link, AI in AEC risks being just another layer of technology that improves one silo while the rest of the process absorbs the gains. Ultimately, the industry must stop looking to rebuild commodities that merely accelerate existing tasks and focus on capturing the value of efficiency by monetising the superior outcomes, not just the speed, that these technologies create.



The digital skills gap goes beyond tools

I've been following recent discussions on the Bricks & Bytes podcast about the digital skills gap, and it's clear the industry faces a challenge that goes far beyond "learning new software."

We often frame this gap as a technical hurdle, but the more we examine it, the more it appears to be a cultural and capability transition. The reality is that today's technology evolves too rapidly for the old model of periodic training to keep up.

To effectively close this gap, we should consider three key shifts in how we approach "skilling" in the AEC space:

  1. From static training to fluid re-orientation
    In a rapidly changing market, shifting from commercial real estate to data centres or advanced manufacturing, our skills cannot remain static. Instead of generic education, skilling should focus on rebundling an expert's domain knowledge with new digital workflows to address immediate market needs. It's less about merely learning a tool and more about reorienting expertise toward where value is headed.
  2. The "expert intermediary" framework
    Much of the hesitation around AI stems from the fear that it will replace human judgment. The solution isn't to bypass the expert but to empower them. The most critical skill we can teach today is oversight. We should train our qualified architects and engineers to act as "Expert Intermediaries," using AI as a powerful assistant while their judgment remains the ultimate safety net. This approach reinforces that humans ultimately manage risk.
  3. Shifting the commercial conversation
    Perhaps the biggest gap lies in how we utilise digital tools in the marketplace. We are accustomed to selling "brains by the hour," but digital fluency enables us to sell clarity. By leveraging AI as a decision-support system, we can help clients navigate complex trade-offs between carbon emissions, cost, and speed. The skill being developed here isn't just technical; it’s the commercial confidence to shape value-based deals.

The tools are already available. The real challenge now is helping thousands of individuals transform how they engage with projects, clients, and the technology itself.



Context problem

"We have the solution. We just can't talk to each other."

It's like travelling to different countries. Every country has different power outlets. You either bring multiple adapters or multiple chargers.

The same thing happens in the construction and manufacturing worlds. Different vendors, different teams, different software. Without a common format, files won't plug into each other.

I just read Lesley G.'s piece on why AI hasn't transformed manufacturing design.

Same exact problem.

Manufacturing engineers can't deploy AI because proprietary "geometry kernels," the mathematical foundation of 3D design, don't talk to each other. Each vendor speaks a different language.
Construction has IFC (Industry Foundation Classes) as the universal adapter. Manufacturing doesn't even have that.

Interoperability doesn't benefit you. It benefits the ecosystem.
The project engineer who needs quick quantity takeoffs. The contractor working across multiple software platforms. The asset owner managing the building for 30 years.

When you optimise for your workflow, you break everyone else's.

Lesley says:
"Whoever controls that language will shape how the next generation of factories and nations are built."

China gets it. They're investing in open-source alternatives while the US protects proprietary systems.

The engineers who understand interoperability and can make systems talk to each other are the ones who unlock entire ecosystems.
Not the ones who master a single tool. The ones who connect tools.



Every project makes the next one harder unless it doesn’t

You've probably felt this: Opening a Revit file from five years ago is more work than starting fresh. A problem you solved on Project A gets rediscovered on Project B. Your firm's knowledge exists only in people's heads—not in reusable systems.

That's not laziness. That's architecture.

Anthropic has a concept called compounding engineering: Normal engineering makes future work harder (technical debt). Compounding engineering makes future work easier.

The difference isn't effort—it's whether you extract and codify what you learn.

When Anthropic ships a feature, they don't just close the ticket. They ask: What pattern can we extract that makes the next feature easier? Every solved problem becomes infrastructure for future problems.

Construction does the opposite. Matt Goldsberry at HDR: "No firm has all project data in a single data lake. Each new project starts from the same baseline rather than building on past work."

Your firm completes 50 projects. Those 50 projects should make project 51 dramatically easier. Instead, it's almost as hard as project 1.

Here's what flips the switch: Shift from project-based delivery to capability-based delivery.

After you solve a problem, extract the generalisable solution. When your team figures out a healthcare patient room layout, don't just archive the model—codify it as a reusable template. When you crack a facade coordination strategy, extract it as a plugin that future projects can use.

Make "contribution to firm knowledge base" part of performance reviews. Currently, only project delivery is rewarded, so knowledge extraction never happens.

The compounding effect: Each project adds capability, not just complexity. Five years in, your firm has 50 solved problems accessible instantly. Project 51 becomes genuinely easier because you're not rediscovering solutions—you're leveraging them.
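One low-tech version of capability-based delivery is simply a searchable registry of solved problems, contributed to after every delivery. A hypothetical sketch, with all names invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Capability:
    """A solved problem, codified so the next project can reuse it."""
    name: str
    tags: set[str]
    playbook: str  # pointer to the reusable template, plugin, or write-up

@dataclass
class KnowledgeBase:
    capabilities: list[Capability] = field(default_factory=list)

    def contribute(self, cap: Capability) -> None:
        # The step most firms skip: extraction after delivery.
        self.capabilities.append(cap)

    def search(self, tag: str) -> list[str]:
        return [c.name for c in self.capabilities if tag in c.tags]

kb = KnowledgeBase()
kb.contribute(Capability("patient-room layout", {"healthcare", "layout"},
                         "parametric template v3"))
kb.contribute(Capability("facade coordination", {"facade", "clash"},
                         "clash-check plugin"))

# Project 51 starts by searching, not rediscovering.
hits = kb.search("healthcare")
```

The data structure is trivial on purpose: the hard part is the incentive (rewarding contribution), not the software.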



Why construction is trapped in delivery mode

Melissa Perri recently shared a powerful insight: the confusion between product and project management isn't a role definition problem—it's a leadership problem.

Her core observation: When executives ask "are we on schedule?" instead of "are we solving the right problems?", they create organisations where delivery matters more than direction. Teams get reduced to coordinators, and discovery gets squeezed into sprint zeros.

But here's what's specific to construction:

This isn't just a cultural preference. It's baked into the business model.

Construction operates on hourly billing and project-based pricing. This creates a perverse incentive: the more you improve at your job, the less you earn. Finishing a design faster means billing fewer hours. Delivering a project on schedule means the revenue window closes sooner.

So the industry developed rational cultural norms around "using available hours" because that maximises revenue. It's not laziness—it's a logical response to the incentive structure.

And because the business model only rewards delivery, project managers dominate. Leadership naturally asks operational questions: "What's the schedule? How many hours? What's the margin?" Not strategic questions: "What are we building capabilities toward? How do we compound value across projects? What outcomes are we guaranteeing?"

This cascades into everything:

  • How firms scope work (discrete projects, not capabilities)
  • How teams think about solutions ("better tools" instead of "rethinking the problem")
  • How innovation gets funded ("500 hours to deliver" instead of "2 weeks to learn if this works")
  • What metrics leadership tracks (billable utilisation instead of customer outcomes)

The system makes perfect sense—until you realise the business model is misaligned with where value actually lives.

Here's the thing:

When AI collapses effort but maintains value, input-based pricing becomes nonsensical. Other industries are already making the shift. Legal firms are moving from hourly billing to outcome-based pricing.
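The arithmetic, with made-up numbers: if AI collapses a 500-hour design to 100 hours, hourly billing punishes the improvement, while outcome-based pricing lets the firm keep the value it created. Rates, prices, and costs below are purely illustrative.

```python
rate = 150               # hypothetical billing rate per hour
cost_per_hour = 100      # hypothetical fully loaded cost per hour
outcome_price = 75_000   # hypothetical fixed price for the guaranteed outcome

hours_before, hours_after = 500, 100   # AI collapses effort 5x

# Input-based pricing: revenue shrinks in step with the hours.
hourly_rev_before = rate * hours_before   # 75,000
hourly_rev_after = rate * hours_after     # 15,000

# Output-based pricing: revenue holds, margin expands as effort drops.
outcome_margin_before = outcome_price - cost_per_hour * hours_before  # 25,000
outcome_margin_after = outcome_price - cost_per_hour * hours_after    # 65,000
```

Same improvement, opposite consequences: under hourly billing the efficient firm earns 80% less; under outcome pricing its margin more than doubles.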

A question for construction leaders:

What would change if you priced outputs instead of inputs? If you guaranteed the outcome (buildings that meet performance criteria) rather than consumed hours? What questions would your teams ask? What would leadership measure?

The problem Melissa identified—organisations choosing delivery over direction—isn't inevitable in construction. It's structural.

Which means it's fixable.



Failing at brewing coffee

"I wish I would have learned how much coffee you wanted in that first experiment."

Bryan Bischof said this halfway through his talk on building AI applications. He'd just failed to brew coffee four times in a row—on stage, in front of hundreds of people.

Each failure revealed a different mistake. Wrong filter. Forgot to grind the beans. Never asked about temperature preference. Poured before measuring the ratio.

But here's what struck me: he never got to evaluate the downstream problems because he failed so early in the upstream steps.

This is exactly what happens when building products for construction.

A team builds a sophisticated tool—six months of development. Launch day: zero adoption.

They skipped what good consultants do: sitting with the teams, asking what the actual problem is, what workflow they're trying to improve, and what success looks like at each milestone. Seeing the friction points first-hand.

I know why this happens. Utilisation pressure. Every hour must be billable. Taking time to ask questions feels expensive.

But building the wrong thing costs far more.

In AI, they call this "evals"—structured evaluation at every dependency point. This translates directly to our industry: decompose your process into checkpoints, and evaluate at each stage.

Don't wait until the end to discover your assumptions were wrong. Ask the critical questions at step one. Plan slow, act fast: invest the time upfront to ask the hard questions, then execute rapidly.
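The checkpoint idea can be sketched generically: run a check at every dependency point and stop at the first failure, instead of discovering at the end that step one was wrong. The brewing steps and the `state` flags below are invented stand-ins for any workflow's checkpoints.

```python
def eval_pipeline(steps):
    """Run (name, check) pairs in order; stop at the first failing checkpoint."""
    for name, check in steps:
        if not check():
            return f"failed at: {name}"   # fail fast, upstream
    return "all checkpoints passed"

# Hypothetical state mirroring the on-stage brewing mistakes.
state = {"asked_quantity": False, "beans_ground": True,
         "filter_correct": True, "ratio_measured": True}

steps = [
    ("ask how much coffee the user wants", lambda: state["asked_quantity"]),
    ("grind the beans",                    lambda: state["beans_ground"]),
    ("fit the right filter",               lambda: state["filter_correct"]),
    ("measure the ratio before pouring",   lambda: state["ratio_measured"]),
]

result = eval_pipeline(steps)
```

Because the very first checkpoint fails, none of the downstream steps even run, which is exactly the "never got to evaluate the downstream problems" failure mode.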

Because just like brewing coffee, if you don't know how much your user wants in the first experiment, you'll waste a lot of time brewing the wrong thing.



Rigorous where it counts

Some thoughts I’ve been sitting with. Engineering demands precision because the deliverables must be flawless. Yet the industry treats communication as secondary, resolving issues only as they surface. And weekly iterations bring small setbacks you have to accept and adapt to without resenting the changes.

Engineers are trained to get it right.
But here's the paradox: this mindset of precision—essential for deliverables—has repercussions for how we develop solutions.

I watch the pattern unfold constantly. Someone presents a problem. We listen. Then everyone disappears to work alone, afraid to share half-formed thinking.

The irony? By refusing to iterate internally—by not sharing incomplete work, not prototyping quickly, not collaborating early—we deliver exactly what we feared: solutions that are overthought, overcomplicated, and sometimes don't fully address the problem because we never reduced complexity or tested assumptions with others.

We spent all the time perfecting an answer in isolation instead of discovering the right question together.

There's a massive difference between iteration and delivery, and confusing the two kills both speed and quality.

Adrienne Tan writes about this in her article on Perfect vs Possible. She says perfectionism is counterintuitive to good product management—and I'd argue the same dynamic is crippling engineering consulting.

Our industry won't accept buggy solutions. Nor should it. But we've confused that external standard with how we should work internally.

Here's the mindset shift:

Separate internal collaboration from external delivery.

Internally: prototype rapidly, share incomplete thinking, ask stupid questions, challenge assumptions, iterate messily. This is where you discover what you're actually solving.

Between these stages is where education happens. This is where we teach engineers that new tools and solutions aren't threats—they're opportunities to evolve faster. That weekly iterations will bring changes, and that's exactly the point.

Yes, feedback means revisions. But these small course corrections prevent the catastrophic failures that come from pursuing perfection in isolation.

Externally: be rigorous, check everything, review with fresh eyes, deliver with confidence. This is where precision matters.

Don't fall in love with your solution before you've tested it with your team.

The moment you disappear to perfect your approach alone, you've limited what's possible. Your solution cannot become its best version without input from others.

Treat internal communication as discovery, not judgment.

When you share work-in-progress, you're not asking for approval. You're extracting information, testing assumptions, finding gaps you couldn't see alone.

This is as much a teaching challenge as a process one. Engineers need permission to distinguish between internal collaboration and external delivery.

Tan shares a reflection from Simonetta Batteiger: "One of the main pillars of resilience is optimism—not blind positivity, but the ability to be realistic about where you are, to accept it, and then to create from there."

That's the shift. Grounded optimism.

Be realistic about where you are in the process. If you're still figuring out the problem, share incomplete thinking. If you're ready to deliver, then review everything and get it right.

The industry will never accept half-finished solutions. So stop delivering them by spending all your time polishing in isolation instead of discovering the right problem through collaboration.

Be rigorous where it counts. Iterate everywhere else.