March 7, 2026
Don’t do it better, reframe the issue
You've likely spent the last five years selling "better BIM" or "better project management." Slightly faster. Nicer UI. Integrated dashboards.
But here's what Autodesk's data reveals: 86% of AEC businesses want to be cloud-based within 5 years.
Only 3% are today.
That's not a technology gap. That's a problem gap.
Christopher Lochhead made this case with Meta Threads. It had access to over a billion Instagram users, a legendary brand, zero-friction onboarding. It still cratered. Why? Because Threads attacked an existing, solved problem with "just better."
Same story: Amazon Fire Phone vs. iPhone. Red Bull Cola vs. Coca-Cola. Direct competitors against entrenched categories lose every time.
Construction tech is repeating this exact pattern.
Stop selling incrementally better solutions. Start reframing the problem.
Instead of "better BIM collaboration," reframe it as "design-to-fabrication workflow"—eliminating the translation layer between design intent and manufacturing. This isn't competing against BIM. It's damming demand from coordination models to fabrication-ready models.
Instead of "better project management," reframe as "outcome-based delivery"—you price the output (buildings that meet performance criteria), not the input (hours spent).
Otis didn't build a "better elevator." They reframed it as a "vertical railway." Language created a new problem frame. The solution became obvious. An entirely new category opened.
Are you solving yesterday's problem slightly better, or are you naming and claiming a problem the industry doesn't know it has yet?
The first path leads to pilots and product iterations. The second leads to category creation.
March 6, 2026
PDFs as the source of knowledge
In 1993, a Gartner consultant called the PDF "the dumbest idea I've ever heard."
Today, 2.5 trillion PDFs are in circulation.
In AEC, it became the Rosetta Stone: the one format every user eventually touches.
AI's biggest payoff right now is coordination: extracting structure from fragmented, unstructured data and turning it into alignment. PDFs are AEC's largest source of unstructured data. They should be perfect partners.
Instead: LLMs hallucinate when reading them. Column layouts get parsed horizontally. Headers become noise. A spec document becomes a wall of confusion. David Spergel described it recently: "Every drawing and PDF tells a story." Most of them just get uploaded and never looked at again.
In construction, they carry everything: invoices, specifications, RFIs, contracts, site communications, and permit applications. Every decision that moves through a project eventually lands in a PDF. When AI can genuinely extract structure from them, not hallucinate it, but read it, coordination stops being a negotiation and starts being a function. The institutional knowledge locked in project archives becomes queryable. The decision made on Project A finally informs Project B.
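What "extracting structure, not hallucinating it" looks like in miniature: below is a sketch that turns flattened spec text, the kind a PDF extractor emits as one stream of lines, into queryable records. The section codes follow the CSI MasterFormat convention; the raw text, field names, and parser are illustrative assumptions, not any particular product's pipeline.

```python
import re

# Hypothetical raw text as a PDF extractor might emit it: MasterFormat
# section codes, titles, and body text flattened into one stream.
raw = """
09 91 23 Interior Painting
Apply two coats minimum. Dry film thickness: 1.5 mils per coat.
03 30 00 Cast-in-Place Concrete
Compressive strength: 35 MPa at 28 days.
"""

SECTION = re.compile(r"^(\d{2} \d{2} \d{2})\s+(.+)$")

def parse_spec(text):
    """Group flattened spec text into {code, title, body} records."""
    sections, current = [], None
    for line in text.strip().splitlines():
        m = SECTION.match(line.strip())
        if m:
            current = {"code": m.group(1), "title": m.group(2), "body": []}
            sections.append(current)
        elif current:
            current["body"].append(line.strip())
    return sections

specs = parse_spec(raw)
print(specs[1]["code"], "->", specs[1]["title"])
```

Once specs live as records instead of pixels, "what strength concrete did we use on Project A?" becomes a query, not an archaeology dig.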
The reader is here. It's learning fast.
February 27, 2026
Make it readable to agents
The interoperability problem isn't going away.
It's changing shape.
For at least the past 10 years, we've been asking: how do we make systems talk to each other? IFC, open BIM, CDEs, connected project platforms. Real work. Real progress. And still — a geotechnical report in a PDF, a structural spec in a Word doc, a contract variation buried in someone's inbox.
We haven't finished that battle.
I've been watching something play out in the software world that I can't stop thinking about in the context of construction.
Resend — an email API — quietly became the default recommendation from Claude and ChatGPT across millions of interactions. Not because it was the best email tool. Because it structured its documentation so that agents could read it. Clean markdown. Code snippets at every step. A dedicated file that told LLMs exactly what the product does and how to use it.
Meanwhile, Groq — a faster and cheaper alternative for AI transcription — kept losing to older tools. Not because of performance. Because its docs were harder to parse. Agents couldn't find the right answer quickly, so they recommended something else.
The thing that gets chosen is not the best product. It's the most readable one.
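The "dedicated file" pattern Resend used has since been formalised as the `llms.txt` convention: a plain-markdown file at a site's root that tells an agent what the product does and where the docs live. A hypothetical example for an AEC tool (product name, endpoints, and URLs all invented) might look like:

```markdown
# Acme Takeoff API
> Quantity takeoff from IFC models via REST.

## Quick start
- POST /v1/models with an .ifc file to receive a model_id
- GET /v1/models/{model_id}/quantities returns per-element quantities as JSON

## Docs
- [Authentication](https://example.com/docs/auth): API keys, scopes
- [Quantities](https://example.com/docs/quantities): units, rounding rules
```

Nothing clever: just structure an agent can parse in one pass, with a runnable snippet at every step.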
This is the conversation I think we're not having in AEC.
We have data. Enormous amounts of it. Models, specifications, reports, contracts, drawings, codes. Decades of accumulated project knowledge. But almost none of it is structured for the agents that will increasingly coordinate, check compliance, procure materials, and surface decisions across our projects.
We know what it looks like to start solving this. The industry has been inching toward machine-readable building data for years — the instinct has always been right. The question now is whether we take the next step: not just making data portable between systems, but making it genuinely surfable by agents.
The next 10 years of interoperability won't be about making systems talk to each other.
It will be about making our data readable to the things that will coordinate, check compliance, procure materials, and make decisions — automatically, across every project.
We have the same fragmentation issue, but there's a great opportunity to address it using a different approach!
February 26, 2026
77% of underground construction projects suffer from insufficient or inadequately interpreted data
Source: Rajat Gangrade
The problem isn't a lack of data. It's fragmentation.
Geotechnical reports in PDFs. Designs in BIM. Cost data in spreadsheets. Schedules in Procore. Each silo is optimised individually, none talking to the others, and now we are trying to solve this by adding "AI analysis" to each silo.
That's optimising the wrong problem.
We're asking the wrong question about AI. Instead of "How do we add AI to existing processes?" ask "How do we redesign workflows around what AI enables?"
Right now, we're rushing to automate tasks: faster clash detection, quicker takeoffs, automated compliance. This just makes fragmented workflows slightly faster.
The problem: Every "AI-powered" feature gets commoditised as foundation models improve. Your competitive advantage evaporates every 18 months.
There's a different playbook.
In Reshuffle, Sangeet Paul describes how containerisation transformed global trade, not by making shipping faster, but by unbundling and rebundling the entire workflow.
Before containers, cargo was packed/unpacked manually at every port. Containerisation unpacked the process by separating cargo handling from transport, storage, and documentation. Then we repacked everything around the standardised container.
The result wasn't 20% faster shipping. It was the foundation of global supply chains.
AI needs that same mindset in construction.
Right now, two trends are forming:
Path 1: Adding AI features
What AI researcher Rich Sutton calls "the bitter lesson": companies that encode human expertise get short-term wins but lose to those building learning systems that leverage computation at scale. Hand-coded features get outdated as models improve.
Path 2: Rethinking workflows
Instead of just adding AI, companies that focus on what models need (like training data and infrastructure) or find new ways to work with it will lead the way.
Here's what it looks like in action:
Unpack: Break down "coordination." Design intent in heads, decisions in meeting notes, models in isolated tools, consensus through endless meetings.
Repack: Design with AI capabilities in mind: coordination that doesn't hinge on getting everyone to agree first.
Instead of months of negotiating standards:
- Teams model in their preferred tools
- AI translates between formats as a semantic layer
- Design intent captured in machine-readable format
- Real-time conflict resolution
- Coordination emerges from continuous learning, not consensus-building
It's coordination as a continuous learning system.
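A toy version of that semantic layer: two made-up vendor schemas are translated into one shared vocabulary, and conflicts surface automatically instead of through a coordination meeting. The vendor names, field mappings, and tolerance are illustrative assumptions, not any real format's schema.

```python
# Per-vendor field mappings into a common schema (both vendors invented).
MAPPINGS = {
    "vendor_a": {"wall_height_mm": "height_mm", "fire_class": "fire_rating"},
    "vendor_b": {"h": "height_mm", "rating_fire": "fire_rating"},
}

def to_common(vendor, element):
    """Translate one element's fields into the shared vocabulary."""
    mapping = MAPPINGS[vendor]
    return {common: element[native]
            for native, common in mapping.items() if native in element}

def find_conflicts(a, b, tolerance_mm=5):
    """Flag disagreements between two translated elements."""
    conflicts = []
    if abs(a["height_mm"] - b["height_mm"]) > tolerance_mm:
        conflicts.append("height mismatch")
    if a["fire_rating"] != b["fire_rating"]:
        conflicts.append("fire rating mismatch")
    return conflicts

# Architect and structural engineer model the same wall in different tools.
arch = to_common("vendor_a", {"wall_height_mm": 3000, "fire_class": "EI60"})
struct = to_common("vendor_b", {"h": 3050, "rating_fire": "EI60"})
print(find_conflicts(arch, struct))  # heights differ by 50 mm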
February 19, 2026
Two moves that look different are actually the same strategy
- Palantir is launching a construction-specific stack (Ontology + AI workflows across precon, procurement, execution, and closeout).
- Anthropic is moving upmarket with SIs, industry-specific GTM teams, and heavy investment in safety infrastructure.
The best analogy is Sephora.
Sephora did not win by making every product. It won by owning the moment of uncertainty: helping customers decide what to trust and buy.
That same dynamic is now playing out in enterprise AI.
In construction and other regulated industries, the key decision is not "Which model is smartest?" It is:
"Can I trust this decision when budget, schedule, compliance, and liability are on the line?"
This is why Palantir and Anthropic matter in the same conversation.
Both are moving toward the same control point: the high-friction decision layer.
- Palantir: unify fragmented operational data and orchestrate decisions across teams.
- Anthropic: combine model capability with distribution, consulting channels, and safety infrastructure to enter regulated workflows.
So what should companies do?
Yes, invest in your horizontal layer, but be precise about what that means.
- Buy the Commodity: The foundation models, the OCR, and the general infrastructure components.
- Build the Platform: The data model, the governance and permissions, the decision logic tied to your real workflows, and the integration architecture.
The reflection to extract is simple:
AI features are not your moat.
The moat is the connective tissue that turns fragmented data into trusted decisions.
February 12, 2026
The Jevons paradox
The Jevons paradox, where efficiency creates more demand rather than less, works in radiology because AI-enabled imaging feeds directly back into more imaging orders.
In AEC, however, this logic breaks down; the industry is so fragmented across disciplines that efficiency in one part, such as faster design documentation, doesn’t generate demand across the whole chain. Instead, it merely shifts the bottleneck elsewhere, usually to the physical construction site, where labour shortages act as a hard cap.
For AI to truly expand demand, it must move beyond simple "chat" interfaces and establish an operational ontology: a unified digital architecture that bridges the gap between a design model and physical execution. This allows the AI to vertically integrate across fragmentation points by connecting design decisions directly to operational outcomes.
The strongest market pull comes when this integration helps clients make more money, not just spend less: consider performance-based services, predictive maintenance contracts, or digital twins that turn building data into billable insights and proprietary feedback loops.
Without that revenue link, AI in AEC risks being just another layer of technology that improves one silo while the rest of the process absorbs the gains. Ultimately, the industry must stop building commodity tools that merely accelerate existing tasks and focus on capturing the value of efficiency by monetising the superior outcomes, not just the speed, that these technologies create.
January 27, 2026
The digital skills gap goes beyond tools
I've been following recent discussions on the Bricks & Bytes podcast about the digital skills gap, and it's clear the industry faces a challenge that goes far beyond "learning new software."
We often frame this gap as a technical hurdle, but the more we examine it, the more it appears to be a cultural and capability transition. The reality is that today's technology evolves too rapidly for the old model of periodic training to keep up.
To effectively close this gap, we should consider three key shifts in how we approach "skilling" in the AEC space:
- From static training to fluid re-orientation
In a rapidly changing market, moving from commercial real estate to data centres or advanced manufacturing, our skills cannot remain static. Instead of generic education, skilling should focus on "#rebundling" an expert's domain knowledge with new digital workflows to address immediate market needs. It's less about merely learning a tool and more about reorienting expertise toward where value is headed. - The "expert intermediary" framework
Much of the hesitation around AI stems from the fear that it will replace human judgment. The solution isn't to bypass the expert but to empower them. The most critical skill we can teach today is oversight. We should train our qualified architects and engineers to act as "Expert Intermediaries," using AI as a powerful assistant while their judgment remains the ultimate safety net. This approach reinforces that humans ultimately manage risk. - Shifting the commercial conversation
Perhaps the biggest gap lies in how we utilise digital tools in the marketplace. We are accustomed to selling "brains by the hour," but digital fluency enables us to sell clarity. By leveraging AI as a decision-support system, we can help clients navigate complex trade-offs between carbon emissions, cost, and speed. The skill being developed here isn't just technical; it’s the commercial confidence to shape value-based deals.
The tools are already available. The real challenge now is helping thousands of individuals transform how they engage with projects, clients, and the technology itself.
January 15, 2026
Context problem
"We have the solution. We just can't talk to each other."
It's like travelling to different countries. Every country has different power outlets. You either bring multiple adapters or multiple chargers.
The same thing happens in the construction and manufacturing world. Different vendors, different teams, different software. Without a common format, files won't plug into each other.
I just read Lesley G.'s piece on why AI hasn't transformed manufacturing design.
Same exact problem.
Manufacturing engineers can't deploy AI because proprietary "geometry kernels," the mathematical foundation of 3D design, don't talk to each other. Each vendor speaks a different language.
Construction has IFC (Industry Foundation Classes) as the universal adapter. Manufacturing doesn't even have that.
Interoperability doesn't benefit you. It benefits the ecosystem.
The project engineer who needs quick quantity takeoffs. The contractor working across multiple software platforms. The asset owner who will manage the building for 30 years.
When you optimise for your workflow, you break everyone else's.
Lesley says:
"Whoever controls that language will shape how the next generation of factories and nations are built."
China gets it. They're investing in open-source alternatives while the US protects proprietary systems.
The engineers who understand interoperability and can make systems talk to each other are the ones who unlock entire ecosystems.
Not the ones who master a single tool. The ones who connect tools.
January 7, 2026
Every project makes the next one harder unless it doesn’t
You've probably felt this: Opening a Revit file from five years ago is more work than starting fresh. A problem you solved on Project A gets rediscovered on Project B. Your firm's knowledge exists only in people's heads—not in reusable systems.
That's not laziness. That's architecture.
Anthropic has a concept called compounding engineering: Normal engineering makes future work harder (technical debt). Compounding engineering makes future work easier.
The difference isn't effort—it's whether you extract and codify what you learn.
When Anthropic ships a feature, they don't just close the ticket. They ask: What pattern can we extract that makes the next feature easier? Every solved problem becomes infrastructure for future problems.
Construction does the opposite. Matt Goldsberry at HDR: "No firm has all project data in a single data lake. Each new project starts from the same baseline rather than building on past work."
Your firm completes 50 projects. Those 50 projects should make project 51 dramatically easier. Instead, it's almost as hard as project 1.
Here's what flips the switch: Shift from project-based delivery to capability-based delivery.
After you solve a problem, extract the generalisable solution. When your team figures out a healthcare patient room layout, don't just archive the model—codify it as a reusable template. When you crack a facade coordination strategy, extract it as a plugin that future projects can use.
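A minimal sketch of what "capability-based delivery" could mean in code: every solved problem becomes a tagged, queryable entry instead of a buried archive. The class names, tags, and artefact paths are all hypothetical, just enough to show the shape of a firm knowledge base.

```python
from dataclasses import dataclass, field

@dataclass
class Solution:
    """One solved problem, captured as reusable firm knowledge."""
    title: str
    project: str
    tags: set = field(default_factory=set)
    artefact: str = ""  # link to template, plugin, or script

class KnowledgeBase:
    def __init__(self):
        self.entries = []

    def add(self, solution):
        self.entries.append(solution)

    def search(self, *tags):
        """Return entries matching every requested tag."""
        wanted = set(tags)
        return [e for e in self.entries if wanted <= e.tags]

kb = KnowledgeBase()
kb.add(Solution("Patient room layout", "Project A",
                {"healthcare", "layout"}, "templates/patient_room.rvt"))
kb.add(Solution("Facade coordination strategy", "Project B",
                {"facade", "coordination"}, "plugins/facade_check.py"))

hits = kb.search("healthcare")
print(hits[0].artefact)
```

The point isn't the data structure; it's that project 51 starts with a query instead of a blank page.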
Make "contribution to firm knowledge base" part of performance reviews. Currently, only project delivery is rewarded, so knowledge extraction never happens.
The compounding effect: Each project adds capability, not just complexity. Five years in, your firm has 50 solved problems accessible instantly. Project 51 becomes genuinely easier because you're not rediscovering solutions—you're leveraging them.
December 2, 2025
Why construction is trapped in delivery mode
Melissa Perri recently shared a powerful insight: the confusion between product and project management isn't a role definition problem—it's a leadership problem.
Her core observation: When executives ask "are we on schedule?" instead of "are we solving the right problems?", they create organisations where delivery matters more than direction. Teams get reduced to coordinators, and discovery gets squeezed into sprint zeros.
But here's what's specific to construction:
This isn't just a cultural preference. It's baked into the business model.
Construction operates on hourly billing and project-based pricing. This creates a perverse incentive: the more you improve at your job, the less you earn. Finishing a design faster means billing fewer hours. Delivering a project on schedule means the revenue window closes sooner.
So the industry developed rational cultural norms around "using available hours" because that maximises revenue. It's not laziness—it's a logical response to the incentive structure.
And because the business model only rewards delivery, project managers dominate. Leadership naturally asks operational questions: "What's the schedule? How many hours? What's the margin?" Not strategic questions: "What are we building capabilities toward? How do we compound value across projects? What outcomes are we guaranteeing?"
This cascades into everything:
- How firms scope work (discrete projects, not capabilities)
- How teams think about solutions ("better tools" instead of "different or re-thinking the problems")
- How innovation gets funded ("500 hours to deliver" instead of "2 weeks to learn if this works")
- What metrics leadership tracks (billable utilisation instead of customer outcomes)
The system makes perfect sense—until you realise the business model is misaligned with where value actually lives.
Here's the thing:
When AI collapses effort but maintains value, input-based pricing becomes nonsensical. Other industries are already making the shift. Legal firms are moving from hourly billing to outcome-based pricing.
A question for construction leaders:
What would change if you priced outputs instead of inputs? If you guaranteed the outcome (buildings that meet performance criteria) rather than consumed hours? What questions would your teams ask? What would leadership measure?
The problem Melissa identified—organisations choosing delivery over direction—isn't inevitable in construction. It's structural.
Which means it's fixable.
November 20, 2025
Failing at brewing coffee
"I wish I would have learned how much coffee you wanted in that first experiment."
Bryan Bischof said this halfway through his talk on building AI applications. He'd just failed to brew coffee four times in a row—on stage, in front of hundreds of people.
Each failure revealed a different mistake. Wrong filter. Forgot to grind the beans. Never asked about temperature preference. Poured before measuring the ratio.
But here's what struck me: he never got to evaluate the downstream problems because he failed so early in the upstream steps.
This is exactly what happens when building products for construction.
A team builds a sophisticated tool—six months of development. Launch day: zero adoption.
They forgot what good consultants do: sit with the teams, asking What's your actual problem? What workflow are you trying to improve? What does success look like at each milestone? See the friction points.
I know why this happens. Utilisation pressure. Every hour must be billable. Taking time to ask questions feels expensive.
But building the wrong thing costs far more.
In AI, they call this "evals"—structured evaluation at every dependency point. This translates directly to our industry: decompose your process into checkpoints, and evaluate at each stage.
Don't wait until the end to discover your assumptions were wrong. Ask the critical questions at step one. Plan slow, act fast: invest the time upfront to ask the hard questions before you start building, then execute rapidly.
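The eval idea can be sketched as a pipeline where each upstream step is checked before the next one runs, so failure surfaces at the step that caused it, not at delivery. The coffee steps below are hypothetical stand-ins mirroring the talk, not anyone's actual eval framework.

```python
def run_pipeline(steps, state):
    """Run (name, action, check) steps; stop at the first failed check."""
    for name, action, check in steps:
        state = action(state)
        if not check(state):
            return f"failed at: {name}", state
    return "ok", state

# Illustrative brewing steps: one checkpoint per upstream dependency.
steps = [
    ("grind beans", lambda s: {**s, "ground": True},
                    lambda s: s["ground"]),
    ("ask dose",    lambda s: {**s, "grams": None},   # forgot to ask!
                    lambda s: s["grams"] is not None),
    ("brew",        lambda s: {**s, "brewed": True},
                    lambda s: s["brewed"]),
]

status, state = run_pipeline(steps, {})
print(status)  # failed at: ask dose
```

The brew step never runs, which is the point: the checkpoint catches the missing question before any downstream effort is wasted.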
Because just like brewing coffee, if you don't know how much your user wants in the first experiment, you'll waste a lot of time brewing the wrong thing.
October 17, 2025
Rigorous where it counts
I've gathered some thoughts. Engineering requires precision because deliverables must be flawless. Yet the industry operates on the principle that issues get resolved as they arise, which makes communication a secondary concern. And weekly iterations bring minor setbacks you have to accept and adapt to without getting frustrated about making changes.
Engineers are trained to get it right.
But here's the paradox: this mindset of precision—essential for deliverables—has repercussions for how we develop solutions.
I watch the pattern unfold constantly. Someone presents a problem. We listen. Then everyone disappears to work alone, afraid to share half-formed thinking.
The irony? By refusing to iterate internally—by not sharing incomplete work, not prototyping quickly, not collaborating early—we deliver exactly what we feared: solutions that are overthought, overcomplicated, and sometimes don't fully address the problem because we never reduced complexity or tested assumptions with others.
We spent all the time perfecting an answer in isolation instead of discovering the right question together.
There's a massive difference between iteration and delivery, and confusing the two kills both speed and quality.
Adrienne Tan writes about this in her article on Perfect vs Possible. She says perfectionism is counterintuitive to good product management—and I'd argue the same dynamic is crippling engineering consulting.
Our industry won't accept buggy solutions. Nor should it. But we've confused that external standard with how we should work internally.
Here's the mindset shift:
Separate internal collaboration from external delivery.
Internally: prototype rapidly, share incomplete thinking, ask stupid questions, challenge assumptions, iterate messily. This is where you discover what you're actually solving.
Between these stages is where education happens. This is where we teach engineers that new tools and solutions aren't threats—they're opportunities to evolve faster. That weekly iterations will bring changes, and that's exactly the point.
Yes, feedback means revisions. But these small course corrections prevent the catastrophic failures that come from pursuing perfection in isolation.
Externally: be rigorous, check everything, review with fresh eyes, deliver with confidence. This is where precision matters.
Don't fall in love with your solution before you've tested it with your team.
The moment you disappear to perfect your approach alone, you've limited what's possible. Your solution cannot become its best version without input from others.
Treat internal communication as discovery, not judgment.
When you share work-in-progress, you're not asking for approval. You're extracting information, testing assumptions, finding gaps you couldn't see alone.
This is as much a teaching challenge as a process one. Engineers need permission to distinguish between internal collaboration and external delivery.
Tan shares a reflection from Simonetta Batteiger: "One of the main pillars of resilience is optimism—not blind positivity, but the ability to be realistic about where you are, to accept it, and then to create from there."
That's the shift. Grounded optimism.
Be realistic about where you are in the process. If you're still figuring out the problem, share incomplete thinking. If you're ready to deliver, then review everything and get it right.
The industry will never accept half-finished solutions. So stop delivering them by spending all your time polishing in isolation instead of discovering the right problem through collaboration.
Be rigorous where it counts. Iterate everywhere else.
