Jeroen de Bruijn spent more than three years as a product manager at Arup, building internal tools for their engineers. Before that, he worked at 12 different AEC organisations during his studies alone: contractors, consultancies, architects, and a municipality. In this conversation, we explore how product management in AEC often begins as an unnamed instinct for fixing broken workflows, why the standard tech playbook breaks when your users are billing hundreds per hour on $500 million projects, and what a layered approach to measurement reveals about the gap between tracking activity and understanding value.

About Jeroen de Bruijn
Jeroen de Bruijn is a product manager with more than a decade of experience in the AEC industry. Based in the Netherlands, he studied at TU Delft and built his career across an unusually wide range of AEC contexts before joining Arup’s digital technology department as a full-time product manager. He is currently on sabbatical, having sailed across the North Atlantic Ocean, and is exploring his next role.

“If I saw things being so inefficient, it made me itch to fix it. I just saw workflows that could be streamlined. Looking back, I was already doing product management in a sense. But never calling it that way.” Jeroen de Bruijn

The AEC industry is developing product management capability without realising it. Somewhere inside every organisation, someone is introducing Excel macros to a structural engineering team that has never automated its calculation sheets, teaching colleagues AutoCAD shortcuts to speed up delivery, or replacing manual drawing markups with Revit schedules because the existing method is painfully slow. The industry calls them “the BIM guy,” “the Dynamo person,” or simply “the one who’s good with computers.” But the work they do, identifying pain points, building solutions, training users, and selling colleagues on new ways of working, is product management, whether anyone recognises it or not.

Jeroen de Bruijn’s story reveals both sides of this tension. His path to a formal product management role was unplanned, built on a decade of instinctive problem-solving across a dozen AEC organisations. And a measurement framework the team developed at Arup addresses a question that haunts every digital team: how do you prove that your tools create real value, not just generate activity?

The path not programmed

Jeroen’s journey into product management follows a pattern familiar to anyone who has watched technical people in AEC gradually drift toward product-adjacent work. As a kid, he built Lego and designed theme parks in Rollercoaster Tycoon. He did C++ game modding and website development. He even made money from a game mod when it became popular. “The pattern was already there,” he reflects. “Build stuff, make it better, share it with others.”

What made those 12 organisations valuable wasn’t the breadth of exposure; it was the pattern that repeated at each one. He introduced Excel macros and Visual Basic to a structural engineering firm that had never automated its calculation sheets. He trained colleagues in AutoCAD to speed up delivery. He taught quantity surveyors to use Revit schedules instead of printing drawings and doing manual markups.

None of this was called product management. It was just Jeroen being unable to watch people do things the slow way when a faster way existed.

This is the pattern that Krzysztof described in a previous conversation about product management identity in AEC: “the role is supposed to deliver clarity, ultimately responsible for building the right thing. In a pure-software company, everybody gets that; in an AEC enterprise, nobody is quite sure what it is.” Jeroen’s story is a concrete example of this identity gap. He was doing product work for years before the title existed for him: understanding user problems, building solutions, driving adoption, and iterating based on feedback.

Matt Wash made a related observation in an earlier conversation about strategic professionals in AEC who get “labelled instead of being recognised for the work they do.” Jeroen avoided that fate by eventually moving into formally defined product roles: first at Haskoning, where he worked on AEC projects while holding a part-time product owner position, and then at Arup, which he joined as a full-time product manager. But his story raises an important question: how many people in the industry are doing this work right now without the recognition, the resources, or the career path to do it properly?

The answer is: a lot. But calling them product managers would miss the point. The industry has computational designers, BIM coordinators, and project engineers who share an instinct for fixing broken workflows. They build internal tools on the side, teach colleagues better methods and streamline processes that nobody asked them to touch. Most would describe themselves as problem solvers; the industry tends to label them that way, too. Yet problem-solving and product thinking are not the same thing. A problem solver fixes what’s broken. Product thinking asks who the user is, what outcome matters, whether the solution scales, and how to measure whether it’s working. The mentality is there. What’s missing is the framework that turns instinct into discipline: the structured approach to discovery, prioritisation, and value measurement that separates ad hoc fixes from sustained product work. The gap isn’t talent or even intent. It’s recognition that these people need a different set of tools and a different kind of support.

The difference becomes concrete when you look at what each mindset holds itself accountable for. The problem solver asks whether the script works. The product thinker asks whether drawing production time dropped by 20 percent, and whether that margin gain justified the investment. The shift from instinct to discipline is not about learning new methods; it is about accepting accountability for outcomes, not outputs. Most of the unnamed product people across AEC never make that shift, not because they lack the ability, but because nobody asks them to. Jeroen made that shift. And a measurement framework the team built at Arup is the clearest evidence of the crossing: a structured attempt to move past “is anyone using this?” and toward “is this creating value that we can prove?”

Why the standard playbook breaks

When I asked what a product manager from a traditional tech company would get wrong if they showed up at an engineering consultancy with their standard playbook, Jeroen’s answer was immediate.

“Move fast and break things. But if you do that in a consultancy space, then you could break the client deliverables. In tech, you can ship buggy things, MVPs, roll them back, iterate. Users might be a bit annoyed, but they’re usually forgiving. But here in AEC, the engineers use your tools to generate output for construction models, for drawings, $500 million projects. And if the tool generates something wrong, incorrect geometry, then someone might build it wrong. And then legal consequences.”

This isn’t an abstract observation. It’s the structural reality that shapes everything about how products work in AEC. David Ward captured the same constraint from the field perspective in a previous conversation: “Building is already complicated and risky enough without putting another technology project into that building project.” The risk tolerance of an industry where errors have physical and legal consequences is fundamentally different from one where you can push a rollback.

Jeroen identified a second constraint that receives less attention: the cost of user access. In a typical tech company, product managers can talk to customers daily. In an engineering consultancy, users are billing hundreds per hour on client projects. “You need to be really particular if you reach out to them,” Jeroen explains. “Just ask for a thirty-minute interview. Say, this is how you can help me. Help me understand your issue. Then in the future, we can develop tools that can alleviate your pain points.”

The value proposition of the conversation itself has to be clear. You’re not just asking for time; you’re asking someone to step away from billable work. The return on that investment needs to be visible, even if it’s a future return.

Discovery inside a consultancy: earning access

With 18,000 people and projects running across the globe, the digital team’s first challenge was knowing where to look. Jeroen built a dashboard that mapped upcoming projects and revenue sources. From there, he and digital leadership could identify where the money was flowing and which projects might benefit from digital tools. They would then target discipline leaders and project leads for conversations.

But the real discovery happened in the workflow mapping sessions that Arup ran before major projects. These sessions were already part of the project checklist: key project people would gather around a Miro board, lay out their workflow, and discuss what they needed. The digital team embedded itself in these sessions. A digital team member who knew the tool portfolio could sit in and say, “at this point in your workflow, this tool could help.”

This is discovery adapted to constraints. Rather than scheduling separate research sessions, the digital team grafted discovery onto existing processes. They joined brainstorming sessions that were already happening. They earned access by being useful in the moment, not by demanding time for abstract research.

For deeper work, they ran dedicated discovery sessions. When mission-critical facilities identified areas for improvement and allocated budget, Jeroen pulled up his dashboard, identified the relevant projects and people, and ran structured interviews, drawing on frameworks from The Mom Test by Rob Fitzpatrick and Continuous Discovery Habits by Teresa Torres. “I looked at all the mission-critical projects, all the big data centre projects, and then identified which people had a broad range of projects. Some people were deep experts on specific projects. So we talked to some key people, and we mapped out their workflow.”

The approach echoes what Krzysztof described about applying The Mom Test in AEC contexts: come with the problem, not the solution. The digital team didn’t arrive at workflow mapping sessions pitching tools. They arrived asking questions, understanding the process, and then connecting the dots.

The parametric workflow case: testing on the non-critical path

One project illustrates how this discovery-to-delivery cycle worked in practice. A project team had already built a parametric workflow using Rhino and Grasshopper with a Human UI interface. Engineers were the end users; computational designers maintained and modified the scripts. The workflow was functional but slow.

The digital team worked with them to improve the process. They had conversations to understand the bottleneck, made changes, gave the improved workflow back for testing, collected feedback, and iterated. The critical detail: they tested on a small subset first, on the non-critical path. Only after several successful tests did the team run the full model through the improved workflow.

The result was an 80 percent reduction in computation time. Over the project lifecycle, with multiple iterations required, this compounded into significant time savings. But what Jeroen found most interesting was what happened with the saved time: “They could do more complex design exploration. So they had a higher quality end result because of the time savings.”

The value wasn’t just speed. It was what speed enabled. This distinction matters because it points toward the measurement challenge that Jeroen would later address systematically: the difference between measuring what a tool does (faster computation) and understanding what it achieves (better design outcomes).

The four layers of measurement

When I asked how Jeroen tracked the performance of internal products, he described four distinct layers, each representing a step closer to understanding real value.

Layer one: usage metrics. Monthly active users, adoption rates and feature usage. “Really necessary, but they don’t tell the full story,” Jeroen explains. These metrics answer the question “is anyone using this?” but say nothing about whether that use creates value. As Sol Amour observed in a previous conversation about product storytelling: “Monthly active users are not a very good metric. Do people use it once a month for five seconds? Do people use it 400 times a month? Quite hard to reason with.”
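Sol Amour’s objection can be made concrete with a small sketch. The usage data below is hypothetical, but it shows how the same monthly-active-user count can hide completely different engagement patterns:

```python
from collections import Counter

# Hypothetical one-month usage log for an internal tool:
# user -> number of sessions that month.
sessions = Counter({"ana": 1, "ben": 400, "cas": 2, "dee": 1})

# Layer-one metric: monthly active users. All four users count equally,
# whether they opened the tool once or relied on it daily.
mau = len(sessions)
print(mau)  # 4

# The distribution of sessions per user tells a different story:
# one power user and three people who barely touched the tool.
depth = sorted(sessions.values())
print(depth)  # [1, 1, 2, 400]
```

A team reporting “4 monthly active users” and a team reporting “one engineer ran this 400 times” are describing the same data, which is exactly why layer one is necessary but insufficient.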

Layer two: behaviour change. This is where measurement gets more interesting but also more qualitative. Jeroen tracked proxy indicators: the evolution of support tickets (from “how do I install this?” and “it crashed” to “can I also use it for this?” and “can we customise it to suit our workflow?”); knowledge transfer happening organically (engineers teaching other engineers, people creating tutorials, running workshops); and the direction of outreach flipping, with project managers coming to the digital team proactively instead of needing to be found.

That last indicator is particularly telling. When people seek you out instead of the other way around, something has shifted. In Jeroen’s context, the equivalent is a project manager proactively asking whether a digital solution would be a good fit for their project, without anyone from the digital team reaching out first.

Layer three: outcome metrics through conversation. Here, measurement becomes fundamentally human. “The real outcome metrics are talking to people. Really understanding them,” Jeroen says. This layer includes direct dialogue with users about what got faster and how they used the saved time, engineer satisfaction surveys, and before-and-after comparisons that users themselves measure.

The parametric workflow case is a perfect example: the team couldn’t measure the baseline without knowing about the project in advance. The 80 percent improvement figure came from the engineers themselves, who compared their process before and after. No dashboard could capture this. No analytics tool could surface it. It required a conversation.

“The most common thing teams do is measure outputs instead of outcomes,” Jeroen observes. “Outputs are what the tool delivers. Outcomes are what the tool achieves.” His third layer is the bridge between these two: using direct conversation to understand outcomes that quantitative metrics miss.

Layer four: story capture rate. The idea: systematically document project case studies where tools made a clear difference. Track not just whether stories exist, but the rate at which new stories are being captured.

“If we could capture these stories, then we knew the project was delivering value. From these case studies, other people would sometimes reach out to us: hey, we want that, but for our project.”

The concept is striking because it treats storytelling as a metric rather than just a communication strategy. Sol Amour articulated the same principle from a different angle: “Storytelling is the base human condition. If your story is better than the story they’re telling themselves, change can happen.” Jeroen’s fourth layer turns this insight into a measurable practice. A high story capture rate means value is being created and recognised. A low rate means either value isn’t there or it’s going undocumented, which, for organisational purposes, amounts to the same thing.

Where most teams get stuck

The four-layer progression represents a maturity curve. Most AEC digital teams are stuck at layer one: counting users, tracking adoption rates, and reporting feature usage. These metrics are easy to collect, simple to present in dashboards, and straightforward for leadership to understand. They’re also insufficient.

Tim Wark identified the root tension in a previous conversation: “When teams care about outputs and executives care about outcomes, the incentives misalign.” A digital team reporting monthly active users is measuring outputs. An executive asking “what’s the return on our digital investment?” wants outcomes. The gap between these two creates a chronic justification problem that digital teams in AEC consistently struggle with.

Jeroen’s framework offers a path through this gap. Layer one satisfies the basic need for quantitative evidence. Layer two provides qualitative signals that the investment is creating behavioural change. Layer three produces the outcome stories that executives need to hear. Layer four creates a self-reinforcing cycle where documented value attracts new demand.

The progression also reflects an increasing investment of human attention. Layer one is automated. Layer two requires observation. Layer three requires conversation. Layer four requires systematic storytelling. Each layer demands more effort; each layer provides deeper understanding. The question for digital teams isn’t whether they should aspire to all four layers. It’s whether they’re willing to invest the human effort required to move beyond the first one.

The Arup context: three paths for internal products

Arup’s structure provides useful context for understanding how internal product development functions at scale. The digital technology department sat under the CIO and comprised roughly 700 people globally, serving an organisation of 18,000. Funding came through three distinct models.

The first is central investment. Jeroen’s team was funded centrally. People across the organisation could propose ideas on an internal platform, where others could comment, show support, and give likes. A digital board would review proposals and allocate investment. “Here’s an interesting idea. Here’s some money. Develop it further. Come back in a bit, see how it goes.”

The second is project-driven funding, where specific projects identified digital needs and allocated budget from their own resources to develop solutions.

The third is external revenue, exemplified by Oasys, a company within Arup that sells commercial software products like GSA and MassMotion. These started as internal capabilities and grew into mature external products.

Between these models, Jeroen identified several instructive examples. Fuse, a project data platform, took an intermediate path: it started charging internal projects for its use, became revenue-positive, and funded its own development team and product manager from internal sales.

One feature of Arup’s structure that Jeroen highlighted was the absence of mandated software. Because Arup is fully employee-owned, with no external investors dictating technology choices, engineers can often choose what works best. Arup develops GSA, but engineers who prefer ETABS may use it. “GSA is recommended because revenue flows back into Arup and we don’t need to pay an expensive external licence. But if people said another tool would be better, that was fine.” This flexibility creates room for experimentation and organic adoption, a sharp contrast to organisations where tool mandates stifle both innovation and genuine product-market fit testing.

The recognition problem

Jeroen’s story, taken as a whole, reveals something about how the AEC industry develops product capabilities. It doesn’t plan for it. It doesn’t recruit for it. It discovers it accidentally, in people who happen to have both domain knowledge and a compulsion to fix broken workflows.

The measurement framework the team developed at Arup is equally revealing. The progression from usage metrics to behaviour change to outcome conversations to story capture reflects a growing sophistication about what “value” means in an AEC digital context. Most teams stop at layer one because that’s where the data is easy. Moving to layers three and four requires a fundamentally different investment: not in analytics infrastructure, but in human attention and organisational storytelling.

For the AEC industry more broadly, these two themes are not parallel observations; they form a reinforcing cycle. Without formal recognition of product roles, there is no organisational mandate to measure outcomes. Without outcome measurement, there is no evidence to justify formal recognition. The people who will build the industry’s digital future are already here: introducing macros, teaching new tools, building prototypes on the side. The measurement systems that will justify continued digital investment are within reach; they require moving past dashboards and into conversations, past activity metrics and into value stories. What remains is the recognition that product management in AEC is not an imported discipline from Silicon Valley. It is a practice that has been hiding in plain sight, and the industry’s inability to name it is the same force that prevents it from measuring its value.