Before I start, I want to thank Matt Burton, Digital Design Lead Structures at AECOM, for agreeing enthusiastically to collaborate on this topic. We had many fun conversations!

AEC professionals, we have a scalability issue with computational design solutions. Many firms struggle or lose significant money when attempting to scale their computational design development. While tools like Grasshopper and Dynamo have transformed how workflows are approached, the scalability of the solutions built with them remains elusive.

The typical story starts with a computational designer creating a script to solve a specific problem. At first, it appears to be an effective, efficient solution. However, as more teams or projects start using the workflow, it begins to be discussed like a real “product”. Eventually, situations arise just outside the bounds of the initial problem, or new projects need features that the original script can’t accommodate. This leads to the script creator (repeatedly) returning to fix, update, and tweak it. What was supposed to be an easy win becomes a never-ending maintenance job, overwhelming both the script and its creator, and the expectations that come with a real “product” turn into frustration with the workflow.

This pattern repeats across the industry: workflows get mistaken for tools, fragile scripts pretend to be scalable solutions, and designers are stretched thin and confused about their roles.

“I was wondering if you had a ready-made tool to achieve this specific bespoke task for my unique project that involves moving and transferring data across black box software?” 

“Can’t we use your data movement tool also to transform the data into a completely different thing?”

“The Ailurarctos Generator seamlessly allows users to select any software they want, in any format and perfectly transform it into a completed submission, with just the push of a single button… in a visual scripting platform… with a specific user who might also be the developer.”

Does any of this sound familiar? These real-world questions highlight a fundamental issue: workflows and tools are often misunderstood.
Many computational solutions, having been built to solve a specific problem, are neither sustainable nor scalable. This misunderstanding erodes trust in computational designers and wastes companies’ time, resources, and energy.

What should we be scaling? Scaling a solution isn’t about moving from workflow to tool but about recognising the difference and making the right decision. It’s not about scaling every script or definition but about focusing on the systems and processes that support scalability. We need to turn custom workflows into proper tools when there is a benefit to doing so, and create an environment where computational design can succeed without the metric for success being how many tools were developed or ephemeral time-saving predictions that will never be realised or measured.

The challenges

Scalability often struggles because people lean too much on individual creators and don’t communicate clearly about computational solutions. Most scripts end up being one-person shows, depending on their creators for updates, troubleshooting, and changes. This setup not only holds back innovation but also creates roadblocks that make it harder for these solutions to be widely adopted.

The problem runs deeper than relying on one person; it’s also about how computational designers communicate their work—and how stakeholders interpret and convey these solutions. One of the greatest challenges in scaling computational solutions lies not only in their technical implementation but also in how they are framed. Terms like “workflow” and “tool” carry existing connotations that, if left unclear, lead to misinterpretation. Too many times, scripts are called tools when they aren’t.

Consider the microwave vs. cookbook analogy: both produce meals, but one requires pushing a button, while the other demands preparation and expertise. If we started calling a home-cooked meal a “ready meal,” we’d be misleading people. Similarly, expectations wouldn’t be met if someone were asked to “cook” and simply microwaved a pre-packaged dish. The same logic applies to computational design—without clear communication, workflows might be assumed to function like tools, and tools might be expected to handle complex bespoke processes.

In some regions, “workflow” might be considered rigid or prescriptive, while “tool” is perceived as a flexible enhancement. These nuances matter when driving adoption. What’s essential is not just which term is used but ensuring that all stakeholders—designers, decision-makers, and end users—have a shared understanding of what a solution actually does and how it fits into their processes. Without this clarity, scaling computational solutions remains an uphill battle.

A tool should be easy to use, simple to adopt, and reusable, giving the same results with little guidance. Tools are reusable systems designed to address broader challenges. They feature defined inputs and outputs, repeatable functions, and a consistent approach to solving everyday problems.

A workflow is a bespoke solution that addresses specific needs in a particular environment. It is tailored to solve a single challenge and is often crafted for one-time or highly contextual use. Workflows usually need detailed explanations, step-by-step instructions, and strict adherence to certain conditions.

This distinction resembles Steve Jobs’ philosophy:

“Do not try to do everything. Do one thing well.”

Despite serving different purposes, workflows and tools are often created in the same environments using similar, if not identical, skill sets. To the untrained eye, they might look like the same thing—a script, a definition, or software that automates a task. However, their potential for scaling could not be more different.
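To make the contrast concrete, here is a deliberately simplified, hypothetical Python sketch—the project names, paths, and functions are invented for illustration, not taken from any real codebase. The first function is a “workflow” in the sense above: it solves one job, in one context, with everything baked in. The second packages the same idea as a small “tool”: defined inputs and outputs, documentation, and basic checks so other teams can reuse it without guidance.

```python
# Hypothetical contrast between a "workflow" script and a "tool".
# All names (paths, rooms, columns) are illustrative, not from a real project.
import csv
from pathlib import Path

# --- Workflow: bespoke, context-bound, hard-coded -----------------------
def export_room_areas_workflow():
    """Works for one project only: fixed path, fixed data, no checks."""
    rooms = [("L01-Lobby", 142.7), ("L01-Cafe", 88.3)]      # values baked in
    with open(r"C:\Projects\Tower-A\areas.csv", "w", newline="") as f:
        csv.writer(f).writerows(rooms)                       # silent on errors

# --- Tool: defined inputs/outputs, repeatable, documented ---------------
def export_room_areas(rooms, output_path, unit="m2"):
    """Write (name, area) pairs to a CSV file.

    rooms:       iterable of (str, float) pairs
    output_path: destination file (str or Path)
    unit:        label used in the header row
    Returns the Path written, so callers can chain further steps.
    """
    output_path = Path(output_path)
    output_path.parent.mkdir(parents=True, exist_ok=True)
    with output_path.open("w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["room", f"area_{unit}"])
        for name, area in rooms:
            if area < 0:
                raise ValueError(f"Negative area for {name}")
            writer.writerow([name, round(float(area), 2)])
    return output_path
```

The logic is almost identical in both cases; the investment in interfaces, documentation, and error handling is what changes the scaling potential.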

Tools, by nature, offer a return on investment through consistent performance, cross-practice application, and adaptability across projects. Workflows are made to be specific to the task at hand. When workflows are treated as if they were tools, the result is usually frustration, wasted resources, and a lack of appreciation for the skilled professionals who put them together.

Unlike tools, workflows often lack the clear instructions, flexibility, and reliability needed for scaling up. Calling a workflow a tool sets people up for unrealistic expectations and can damage trust in computational design when it fails to deliver at a larger scale.

This mix-up isn’t just a technical problem; it also reflects a knowledge gap. In many companies, decision-makers might not know how to distinguish workflows from tools. Without understanding the differences—and the unique benefits each offers—leaders find it tough to make smart choices about investing, deploying, or scaling these solutions.

This is nothing new; if we take Rube Goldberg’s classic Self-Operating Napkin illustration and ignore the irony, we have an apt example of the distinction between workflow and tool at play.

The Self-Operating Napkin requires a series of complex, interlinked mechanisms working in sequence to solve the problem of mopping the diner’s face without picking up the napkin. Clearly, the device, however elaborate, accomplishes the goal.
The Self-Operating Napkin sounds like a tool. It is not. It is a workflow: a series of steps acting in a specific sequence to achieve a particular goal. No one looking at this image will ask if it can be scaled into a self-operating cleaning service. However, it may prompt a rethink of the broader goal: is automated cleaning a good idea? How do we achieve that? What resources would be required? Is there anything modular in this solution we could adapt to that broader goal?

Create the conditions for scalability

Scaling computational design isn’t about making every script universal. Instead, it’s about creating the conditions for scalability through three key areas: building an ecosystem, communication and digital literacy, and an intrapreneurial approach.

  • Build an ecosystem: The future lies in modularity—creating reusable libraries and APIs that allow workflows to evolve into tools where needed (a minimal sketch of this pattern follows this list). Documenting solutions and fostering collaboration between federated teams and IT can transform ad hoc scripts into maintainable systems. Training should be embedded in this approach to ensure that all stakeholders understand the framework within which computational design operates. However, we can’t ignore that the biggest challenge to building an ecosystem lies not in the technology itself but in adapting processes, shifting mindsets, and ensuring that innovations serve broader business objectives rather than remaining isolated advancements.
  • Communication and digital literacy: Scaling requires a workforce equipped to engage with computational solutions. Misunderstandings between computational designers and decision-makers are common. While experts play a key role in the early stages of adoption, the full impact of digital transformation is realised only when a larger majority of the organisation is involved. Designers need to articulate the scope and limitations of their work, while business leaders and other stakeholders must be actively brought on board, with clear guidance on their roles in the process and the resources needed to support adoption.
  • An intrapreneurial approach: Scaling computational solutions requires a clear understanding of user needs—and of the benefits of making a solution scalable—rather than trying to predict what will work from the outset. By launching small, testable experiments and learning from real-world data, companies can refine their solutions based on actual user behaviour instead of speculative projections. A disconnect often occurs when companies invest in automation strategies without first aligning them with how success is measured. This is especially important in computational design, where companies often invest heavily in complicated automation projects, only to find out later that their teams’ needs have changed or are different from what they thought. Instead of committing to long-term development cycles that may become obsolete, organisations should prioritise flexibility by adapting computational workflows based on feedback, iteration, and real-world constraints. The ultimate message is that organisations should not strive for perfect foresight but instead cultivate resilience. This means creating a culture that values quick decision-making, learning from mistakes, and maintaining the flexibility to change course when necessary. In a world where predictions are unreliable, the best strategy is not to be “right” about the future but to be prepared to respond effectively when the future unfolds unexpectedly.
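As a rough illustration of the ecosystem and modularity idea in the first point above—all module, function, and value names here are hypothetical assumptions—shared logic can live in a small, documented, versioned library, while each project script stays a thin wrapper around it, whether that wrapper runs in Grasshopper, Dynamo, or from the command line.

```python
# Hypothetical sketch of the "ecosystem" idea: shared logic lives in a
# small, documented, versioned module; project scripts stay thin wrappers.
# Module, function, and value names are illustrative assumptions.

# --- shared_lib/panel_metrics.py ----------------------------------------
__version__ = "0.2.0"   # versioned so downstream scripts can pin a release

def panel_utilisation(panel_areas, stock_area):
    """Return material utilisation (0..1) for a set of panel areas.

    A pure function: no file paths, no UI, no software-specific types,
    so it can be called from Grasshopper, Dynamo, or a unit test alike.
    """
    if stock_area <= 0:
        raise ValueError("stock_area must be positive")
    return sum(panel_areas) / stock_area


# --- project script (the thin, project-specific wrapper) -----------------
if __name__ == "__main__":
    areas = [2.4, 1.8, 3.1]   # in practice these would come from the model
    print(f"Utilisation: {panel_utilisation(areas, stock_area=12.0):.0%}")
```

A handful of unit tests and a short README around such a module are what turn an ad hoc script into a maintainable system, and they give IT and federated teams something concrete to collaborate on.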

The AEC industry must move beyond piecemeal solutions toward integrated systems that bring lasting value by reinforcing these three key areas. By building systems, not silos, and empowering teams to scale intelligently, computational design can achieve its full potential. The question is no longer whether computational solutions can scale—it’s whether we’re ready to create the conditions to make it happen.

A huge thank you to Ben for his invaluable support.