The construction industry has accumulated data for decades without ever deciding what to do with it. Cost records, labour rates, supplier quotes: all of it generated as a byproduct of work, captured without intent, aged on a shelf. Samuel Giffin spent 12 years moving through the machinery of that problem, from equipment valuations at EquipmentWatch to the data operations team that ships RSMeans cost data to an industry that depends on it, to Flash AI, launched March 17, 2026. That arc is not a career story; it is a live demonstration of what happens when someone treats data as the product rather than the residue. What follows is a conversation about the distance between the raw signal and the trusted decision, and why closing it is harder and more consequential than the industry has ever admitted.
—
About Samuel Giffin
Samuel Giffin is Principal Product Manager at Gordian, a Fortive company, where he oversees the product strategy of Gordian’s Estimating products. With a background spanning equipment valuations, industry analysis, and construction cost intelligence, he joined Gordian in 2021 and led the shift from annual to quarterly data publishing.
—
“My through line is shortening the distance between data and the customer. Sometimes it’s physical, sometimes it’s time, sometimes it’s cognitive. And now, with Flash AI, it’s personal: making sure the data solves the user’s own specific job to be done.” – Samuel Giffin, Principal Product Manager at Gordian
Four distances run through Samuel’s career, and each one reveals a different layer of the same problem. Early on, the distance was physical: how do you get the cost books printed in Kansas to every estimator who needs them? Then it became temporal: how do you make sure cost data is current enough to be useful in a market that can reprice in a week? Then, cognitive: how do you design software that helps people make better decisions rather than just surface numbers? With Flash AI, a fourth dimension enters the frame. Personal distance: ensuring data actually solves the specific job the individual user is hired to do. Each reduction is harder than the last. The physical and time problems yield to logistics and engineering. The cognitive and personal problems are harder because they require understanding what estimators actually do with their time, before the data ever appears on screen.
The kitchen table and the data product
The RSMeans origin story is almost too clean to be true, but Samuel tells it with the unselfconscious pleasure of someone who has found in it a genuine philosophy. Robert Snow Means was an estimator in the 1940s, tired of calling subs for prices that were never consistent and never recorded. So he sat down at his kitchen table with a legal pad and started writing down unit costs. He averaged them. His estimating got better. Colleagues noticed. They asked for copies. When competitors started asking for his unit costs, he realised there was a business.
“For sixty-plus years and even still today, we printed books filled with data,” Samuel explains. “But how we’ve collected that data has evolved a lot.”
What matters in this story is not the commercial arc from legal pad to printed books to digital database to AI-assisted product. What matters is the first decision: to treat the data as something worth deliberately capturing, organising, and delivering to someone else. That decision, made at a kitchen table before product management had a name, is what created 75 years of compounding value.
The principle extends well beyond RSMeans. Samuel argues that most people in construction, and in most industries, think of data as an attribute: something that accumulates as a side effect of doing work, not something to be designed, managed, and delivered. His counter-position is blunt:
“If you build it, and you build it in a process, and you deliver it to a consumer, it’s a product. It doesn’t matter if it’s a building, if it’s an Excel file, if it’s software, if it’s a large language model. That’s a product, and it needs to be treated with the same rigour we apply to product management.”
The framework applies equally to a construction project, an Excel file, a large language model, or a costed estimate. Everything built for someone else deserves the same attention to who uses it, how, and to what end.
This is the same argument Jeroen de Bruijn made from a different angle in an earlier conversation: that the AEC industry is full of people already doing product management without realising it. Samuel extends the argument further. It isn’t only that people don’t recognise the practice; it’s that they don’t recognise what they’re building as a product in the first place. Change that framework, and everything downstream changes with it.
He reaches for a reference point that will land with anyone who has watched a Disney film in the last twenty years. “I have young kids,” he says, “so we watch a lot of animated movies.” The film he invokes is Ratatouille, and the line he borrows is its central provocation: anyone can cook. “I feel like we’re moving that way with product thinking. People are going to realise that whatever they’re working on, if they deliver it to someone else, that’s a bit of a product.” The point is not that everyone will cook well. It is that the assumption that cooking is reserved for chefs, or that product thinking is reserved for software companies, is both limiting and increasingly untenable.
The 45-degree axis
For a company whose core product differentiator is data, Gordian operates within a tension that most software companies only encounter at the edges: speed and quality are not just competing priorities; they pull in fundamentally opposite directions, and the only viable path requires constant calibration between them.
Samuel describes this as a 45-degree axis. On the X axis is velocity: the data is more valuable the faster it can be delivered. Cost data that is 12 months old is not just stale; in a market that has absorbed COVID-19 supply chain shocks, dramatic swings in labour rates, and ongoing tariff volatility, it can actively mislead estimators relying on it. On the Y axis is quality. Moving faster generates more opportunities to introduce errors, mismatches, and gaps.
“Just because you ship data faster doesn’t mean you should ship BAD data faster. The odds are, the faster you go, the harder it’s going to be to maintain a reasonable level of quality.”
Two years ago, Gordian made a significant operational commitment: moving from annual to quarterly data publishing. RSMeans cost data is now updated every 90 days. The move required building AI-assisted, human-in-the-loop workflows that help researchers find outliers faster, match data entries more accurately, and move material through the pipeline without sacrificing the verification layer that makes it trustworthy. The goal is to move along that axis in the north-east direction: simultaneously faster and higher quality. Getting there requires a careful, incremental dance: push a little faster, accept a slight quality dip, recover the quality, push velocity again.
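To make the mechanics concrete, here is a minimal sketch of what an AI-assisted, human-in-the-loop outlier check might look like inside a quarterly refresh: lines whose quarter-over-quarter change sits far from the rest of their category are routed to a researcher, everything else flows through. The field names, threshold, and triage logic are illustrative assumptions, not a description of Gordian’s actual pipeline.

```python
# Illustrative sketch only: a minimal human-in-the-loop outlier check for a
# quarterly cost-data refresh. Field names, the threshold, and the review
# split are hypothetical; this is not Gordian's pipeline.
from dataclasses import dataclass
from statistics import median

@dataclass
class CostLineUpdate:
    line_id: str        # a unit-cost line for a material or task
    category: str       # grouping used to compare like with like
    prior_cost: float   # cost published last quarter
    new_cost: float     # cost proposed for this quarter

def pct_change(update: CostLineUpdate) -> float:
    return (update.new_cost - update.prior_cost) / update.prior_cost

def triage(updates: list[CostLineUpdate], band: float = 3.0):
    """Split proposed updates into auto-approved lines and lines routed to a
    human researcher, based on how far each line's quarter-over-quarter change
    sits from the median change of its category."""
    by_category: dict[str, list[float]] = {}
    for u in updates:
        by_category.setdefault(u.category, []).append(pct_change(u))

    auto_approved, needs_review = [], []
    for u in updates:
        changes = by_category[u.category]
        med = median(changes)
        # Median absolute deviation as a robust measure of spread.
        mad = median(abs(c - med) for c in changes) or 1e-9
        score = abs(pct_change(u) - med) / mad
        (needs_review if score > band else auto_approved).append(u)
    return auto_approved, needs_review
```

The robust comparison matters here: a genuine market move, where every line in a category rises together, should not flood the review queue, while a single mismatched entry should still be caught by a human before it ships.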
COVID-19 made the stakes visceral in a way that no product strategy document could. Suppliers who had historically given three-month price quotes were suddenly saying: “Here is today’s price; it is good until Friday.” External shocks of this kind serve a purpose that internal advocacy rarely replicates: they collapse the distance between the internal data teams and the markets they serve, forcing shared urgency where polite alignment tends to fail. “In a lot of ways, you have to walk in your customers’ shoes,” Samuel explains. “When you’re truly customer-facing on the product side, that’s part of your DNA. But when you’re in data engineering or data research or data operations, it’s not always the first thing.” When the data teams felt the same uncertainty their customers felt, watching suppliers they had known for years give quotes that expired in days, the urgency became their own. They got it. They felt it. And they built faster because of it.
The ingestion pipeline that makes this possible is not primarily a technology story; it is a people story. Gordian hires subject-matter experts: engineers, estimators, carpenters, electricians, and mechanics. People who understand the work because they have done it. “The primary issue we see with a lot of data ingestion is you bring in people who don’t understand the end product and the end work, and so that’s what you get out of it,” Samuel explains. Those relationships with suppliers and distributors across North America are still cultivated by email and phone: human interaction. Construction is a relationship-driven industry; even in an era of spam calls, the personal connection still opens the conversation that produces reliable data. Technology, including AI, layers on top of that foundation. It does not replace it.
Flash AI: the zero-to-ten problem
The conversation around estimating tools tends to frame the challenge in terms of accuracy: how do you build a system that produces a reliable number? Flash AI Estimating, which Gordian launched publicly in March 2026, frames the challenge differently. The problem it was designed to solve is not the estimate itself; it is the research that precedes the estimate.
Samuel describes the discovery: “We found that people were spending four to eight hours apiece on early-stage estimates, well before projects were well-defined, well before they needed to be that detailed.” The time was not going into analysis or judgment; it was going into retrieval. Combing through databases. Tracking down a copy of an Excel file from two years ago. Calling around for a baseline reference that someone, somewhere, must have. The estimator’s expertise was being spent on lookup work rather than on scope judgement.
The framing Samuel uses to describe Flash AI’s ambition is precise: “If you think of the spectrum of an estimate as ideation at 0% and bid submitted and won at 100%, I don’t think AI is good enough yet to go from 0 to 100. Not yet. But Flash AI Estimating is going to rapidly accelerate that first 10%.” The commercial bet is not on replacing the estimator; it is on compressing those first four to eight hours down to 30 minutes. The downstream effects are real: seven-plus hours freed to read actual scope documents, proactively review drawings, and apply real judgement at the stage of the project where judgement is cheapest.
The product surface around Flash AI is expanding. A partnership with eTakeoff means that what were previously three separate workflows (AI-assisted scoping, 2D plan takeoff, and cost database look-up) are becoming a single interconnected system. Samuel describes it with an image: three pillars leaning into each other. The output of the AI-assisted estimate connects with the takeoff quantities, which link back to the cost lines, which stay in sync. This is the pattern McKinsey identifies as the highest-value application of agentic AI: not optimising isolated tasks, but redesigning entire end-to-end workflows that span multiple functions, where the connective layer eliminates rework and handoffs. What changed is not the underlying data; what changed is the distance between the estimator and the decision they need to make.
In conversation, this opens a larger question. If the estimating workflow can be broken into discrete research tasks and those tasks can be distributed across a crew of AI agents, does the orchestrating intelligence have to be human? Samuel’s answer is candid: “Is there any reason that the orchestrator has to be a human and not an AI? No.” He adds the qualifier that matters: the limiting factor is how well the agent is designed, written, and managed; the same holds for people. The distinction that matters is less about humans inside the loop, performing handoffs between steps, and more about humans above the loop: providing the judgment overlay on outputs that agents produce at scale. But the principle holds.
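As a thought experiment in that direction, here is a small sketch of the pattern Samuel describes: discrete research tasks fanned out to agents, with a single human review sitting above the loop rather than a handoff inside every step. The task functions are hypothetical stand-ins and do not describe Flash AI’s architecture.

```python
# Hypothetical sketch of the "crew of agents, human above the loop" pattern
# discussed above. The task functions are invented stand-ins; nothing here
# describes Flash AI's actual architecture.
from concurrent.futures import ThreadPoolExecutor
from typing import Callable

def lookup_cost_baseline() -> dict:
    # Stand-in for an agent that queries a cost database for reference lines.
    return {"task": "cost_baseline", "draft": "unit costs pulled from a database"}

def summarise_scope_documents() -> dict:
    # Stand-in for an agent that extracts assumptions from scope documents.
    return {"task": "scope_summary", "draft": "key assumptions extracted"}

def draft_quantity_takeoff() -> dict:
    # Stand-in for an agent that derives rough quantities from 2D plans.
    return {"task": "takeoff", "draft": "rough quantities from plans"}

def human_review(drafts: list[dict]) -> list[dict]:
    # Placeholder for the judgment overlay: in practice an estimator would
    # annotate, remove lines, and value-engineer before anything ships.
    for d in drafts:
        d["approved"] = True
    return drafts

def orchestrate(tasks: list[Callable[[], dict]]) -> list[dict]:
    """Fan research tasks out in parallel, then apply one human review step
    over the combined output ("above the loop") instead of inserting a human
    handoff between every step ("inside the loop")."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(task) for task in tasks]
        drafts = [f.result() for f in futures]
    return human_review(drafts)

if __name__ == "__main__":
    print(orchestrate([lookup_cost_baseline,
                       summarise_scope_documents,
                       draft_quantity_takeoff]))
```

Whether the orchestrate step itself is driven by a person or by another agent is exactly the question Samuel leaves open; the review step is the part he argues stays human for now.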
“Flash AI is not as good as an expert quantity surveyor or estimator. But it’s already better than the worst ones. And it’s always going to be better than the people who say they don’t need an estimate.”
The comparison is not to perfection; it is to the actual distribution of practice, including those who skip estimating altogether. The relevant benchmark for any workflow tool is not the expert who would never need it; it is the practice gap it actually closes in daily use.

From data exhaust to data product
Construction generates data the same way a camera phone generates photographs: indiscriminately, in volume, without a specific use in mind at the moment of capture. The obvious interpretation is that most of this accumulation is noise. Samuel’s interpretation runs in the opposite direction.
“But then you look at the technologies, the benefits, the insights that have come from these photos. The ability for all of our optical recognition workflows.” He pauses on one specific application. Many construction equipment categories, including tower cranes, now carry cameras whose optical recognition systems have been trained on the massive accumulation of photo and video data built up over 30 to 40 years. Those systems can now recognise a dangerous situation in their field of travel and stop it before people are injured or killed. The data that everyone thought was exhaust became the training set for technology that now saves lives on construction sites.
The implications are not subtle. “We (the industry) took something that everybody else thought was exhaust. And as an industry, we say: well, actually, no, that’s not data exhaust. That’s a data product. And we can gain insights from it by analysing it, by running it through data science models and running it through AI workstreams. And then we can change the human outcomes.” The argument is not sentimental; it is structural. Whatever construction is generating right now (project files, sensor feeds, cost records, field observations), the value of those records is not fixed at the moment of creation. It depends on what someone, somewhere, decides to build with them.
Samuel’s conclusion from this is deliberately practical: “We have to find ways to democratise access to that data so people keep building innovative products that turn data exhaust into tools.” The barrier he is pointing at is not technical capability; it is access to the raw material. If the petabytes of data generated during construction remain locked inside project silos or accumulate as unstructured exhaust, the safety-camera equivalent for the next generation of construction problems never gets built.
This connects to what Annie Liang described from the opposite angle: the expert-verified data that would make AI useful in construction was never deliberately created, and what was not captured at the moment of work cannot be reconstructed retroactively. Samuel’s RSMeans pipeline is one of the rare counter-examples in the industry, an organisation that has spent 75 years creating expert-verified data on purpose, as a product, rather than waiting for it to accumulate as a byproduct. The lesson from both sides is the same: capture with intent, or accept that the next problem will go unsolved for lack of raw material.
The forward-deployed model and Schumpeter
Samuel brings up Palantir not as a competitor but as a model. Gordian’s services division, which operates the job order contracting workflow with client organisations, functions as a version of Palantir’s forward-deployed engineers: experts embedded with customers who are running real projects on Gordian software, and reporting back directly on where the product needs to change. Joe Patrois described the same dynamic from the inside by exploring what Palantir’s approach enabled at Thomas Cavanagh Construction. “We have, on average, two to four hundred people associated with Gordian sending us direct feedback on how to improve the products based on their customers’ needs,” Samuel says. That feedback loop, embedded rather than solicited, is an advantage most product teams can only approximate through discovery programmes and user interviews.
The broader question of what happens as more platforms consolidate their workflows brings up Schumpeter. Samuel cites the economist’s concept of creative destruction with the same pragmatic register he brings to everything else: “You have two competing elements. The stubborn inertia of construction – based on compliance and experience. And the ‘move fast and break things’ mantra of tech entrepreneurs. The creative destruction from that collision will increase our capabilities. It might not be pleasant, but it’ll increase our collective ability to build better.”
The SpaceX analogy is instructive here. Spaceflight was an industry that moved slowly and ran on established contracts and established contractors, until someone entered it who treated those constraints as assumptions rather than facts. Things broke, publicly, sometimes spectacularly. But the capability curve shifted in a way that years of incremental improvement had not produced.
The insight here is not that disruption is inevitable or uniformly welcome. It is that creative destruction in construction is already underway, and the companies best positioned to benefit are the ones who have been treating their data as a product long enough to have built something that competitors cannot replicate quickly: expert-verified cost data, built over 75 years, updated quarterly, now embedded into AI-assisted workflows. That is not a feature; it is a coordination layer.
AI isn’t doing VOC
The most quietly urgent moment in the conversation comes late, when Samuel names the one thing that distinguishes the teams that survive the AI wave from those that get displaced by it.
“AI is not doing VOC. AI isn’t calling a contractor and asking how their last project went. As long as we are continuing to know our customers and continue to drive that deep understanding, that’s where we start to drive real outcomes.”
This lands harder than it might initially appear. The concern in AEC product circles about AI is usually framed as replacement: which roles will be automated, which skills will become redundant, which positions will disappear. Samuel reframes it as a question of value location. The output of any individual task is increasingly replicable; the understanding that makes output meaningful is not. Knowing which problem to solve, which workflow to target, which friction is costing estimators seven hours a week: that requires talking to people, watching how they work, staying close enough to their reality to notice the gap between what they say they do and what they actually do.
Research across sectors confirms the pattern. Andrew McAfee’s analysis of AI adoption, presented at the HBR Strategy Summit 2026, points to a consistent dividing line: the organisations most likely to succeed with AI are not those automating most aggressively, but those pairing AI capability with the customer understanding to know which automation is worth building. Voice of the customer, in other words, is not a department; it is the mechanism by which teams find the right problem in the first place.
The race-to-the-bottom concern comes up in this context. If AI compresses the time and cost of building software, and the output becomes cheap, where does the value go? Samuel proposes a three-layer framing. The bottom layer is the product’s direct value. The middle layer is the workflow that the product enables. The top layer is the insight gained from watching what customers do within that workflow, which reveals the next problem to solve. “Your solution should build on itself. You deliver a product, you build insight around that product, you understand the workflow, and then you go solve that next workflow problem.”
This is a description of a compounding loop, not a commodity race. The estimators who use Flash AI will annotate, remove lines, make judgment calls, and value-engineer. Those patterns are observable. What are they correcting for? Where is the AI consistently missing? What does that reveal about the next problem in the workflow? The teams that stay close enough to their customers to see those patterns are the ones that can keep moving up the value stack. The teams that don’t will build features that ship and then stall.
Samuel raises the counterpoint anecdotally: every PM knows the story of a team that spends months of discovery work on a genuine market signal, a space where competing products are already attracting real investment, only to have a stakeholder redirect the effort toward something else entirely. The command-and-control override does not just waste months; it ignores both the discovery work and the market signal those competing products are generating. Samuel’s response is empathetic but diagnostic: the teams that hold their ground do so because they know their customers well enough. When an executive arrives in crisis mode insisting something is wrong, it is the team that understands the customer context, the version mismatch, the misapplied use case, the older data file, that can actually solve the problem. Not the one that reacts to the loudest voice.
One problem, four distances
There is a structural pattern that only becomes visible when you lay all of these threads alongside each other. The four distances Samuel identifies in his career (physical, temporal, cognitive, and personal) are not four separate problems. They are a single compounding problem that each generation of tooling partially solves, only to reveal the next layer.
Printed books solved physical distance. Quarterly data publishing is solving the time distance. Well-designed software with a clean interface solves cognitive distance. Flash AI, which takes an estimator’s specific job to be done and compresses the research phase from hours to minutes, is beginning to solve personal distance: the gap between what a data product can do in general and what it can do for this estimator, on this project, right now.
Each reduction in distance deepens the switching cost. An estimator who uses Flash AI doesn’t just use RSMeans data; she uses RSMeans data embedded in an AI workflow that connects to her 2D takeoff tool and her project platform. The moat is not the accuracy of any individual cost line; it is the coordination infrastructure that makes them all work together.
What the construction industry is beginning to see, through products like Flash AI and integrations like the one with eTakeoff, is that the most consequential shift is not in any single workflow being automated. It is in the collapsing of the distances between workflows: scoping, takeoff, cost lookup, and project data becoming a single connected surface rather than a sequence of separate tools that estimators manually bridge. The industry’s fragmentation problem, where data lives in disconnected silos and each handoff introduces friction and loss, is not solved by better individual tools. It is solved by whoever builds the layer that connects them. Data discoverability, the ability to find and use the right signal at the right moment, has been the industry’s missing infrastructure, not its missing data.
Robert Snow Means sat at a kitchen table and wrote down unit costs because he was tired of not having them when he needed them. What he was actually doing was beginning to build the coordination layer. The distance between that legal pad and a multi-agent estimating system that pulls from a Revit model, a cost database, and a supplier feed to produce a costed table in thirty minutes is enormous in technical terms. In structural terms, it is the same problem, closed one loop at a time.
