Amir Dezfouli’s path from machine learning research to construction automation reveals a category-defining insight: the barrier to automation was never understanding; it was access. Every construction professional could already describe what they needed automated — clearly, specifically, in plain language. BIMLOGIQ’s bet is that if they can describe it, they can automate it without writing a single line of code.
About Amir Dezfouli
Amir is the co-founder of BIMLOGIQ, an AI-powered automation platform transforming the construction industry. With a PhD in Machine Learning and a decade of specialised research experience, Amir has served as a reviewer and Area Chair for prestigious conferences, including NeurIPS. He is dedicated to bridging the gap between deep technical research and scalable engineering to build next-generation platforms for the AEC sector.
“If they can explain it to us in natural language, why not just put it in a prompt and communicate that with our AI model to automate it for them?” Amir Dezfouli
Amir spent years applying machine learning across verticals such as health, manufacturing, and conservation before his co-founder, Ali, introduced him to the automation challenges in construction. What Amir discovered wasn’t just a sector in need of better technology; it was an entire industry explaining its automation needs in natural language while waiting for consultants to turn those descriptions into code. This observation would become the foundation for a new category: conversational automation that puts design professionals in control, eliminating the need for coding expertise.
The need for automation was everywhere. When Ali began working on construction drawings, he encountered a problem that seemed like a perfect early use case: the documentation required manual tagging and dimensioning. Ali spent three days doing this repetitive work on drawing assets. The task was specific, frustrating, and clearly automatable.
That initial specificity was exactly what drew in early customers. Once the team released the computer vision model, a clear pattern emerged: clients began asking, “Is it possible to automate our other workflows as well?” Having seen the technology’s potential, they sought similar solutions for their own unique processes. However, as Amir observed, these requests were often highly fragmented, “typically specific to one project or for one team.”
Each request made sense individually: valuable to that client, and offering immediate revenue. But the pattern revealed a business model constraint: the team could keep building custom scripts for each customer, with revenue scaling linearly with headcount, or they could accept that building scripts one project at a time would never scale as a product company. The choice was clear.
BIMLOGIQ identified the opportunity of a new category: conversational automation for construction. A category, in this sense, is a new market created by reframing an existing problem and offering a fundamentally different solution approach. Rather than accepting that automation required custom consulting or complex visual programming, they asked: what if construction professionals could automate themselves through natural language, without coding skills or consultants? This category identification, removing the barrier to automation entirely, defined their product strategy and shaped everything that followed.
This is where the strategic shift becomes visible. Not trying to solve a known problem better than others, but recognising a different way to solve an existing one.
The consultancy path
The pattern became obvious immediately. Every customer had something to automate:
“Every single person had something to automate every single day. Engineers, architects, drafters, managers… everyone found workflows they wanted to streamline.”
But the requests were always the same: “Can you develop this script for us? This piece of automation for us?” These were “typically small things that could be for one project or for one team.” Valuable individually, but difficult to turn into a repeatable product.
This pattern is common in AEC technology: custom solutions, bespoke development, consulting revenue tied to headcount. The model was financially viable but imposed a fundamental constraint: each client required ongoing support for their unique implementation. Amir and Ali faced the choice directly: keep growing revenue with headcount, or accept that building scripts one project at a time would never scale as a product company.
What changed the direction was how Amir interpreted these patterns. His neuroscience background gave him a different lens. He had spent years applying machine learning to brain pattern recognition; the techniques were domain-agnostic and transferable. Construction problems looked different through this lens because the pattern recognition approach revealed opportunities others missed.
But the critical insight came from listening closely to customers. In meetings and conversations, Amir noticed something important about how customers already explained their automation needs: “I need automation for breaking views into specific formats.” These weren’t vague requests; they were clear, detailed problem statements. And they were in natural language.
The question was obvious once he heard the pattern: if customers could already explain what they needed in natural language, why should they require coding skills to automate themselves?
This observation became the inflection point.
From consulting to conversational self-service
Automation in construction happened through custom, project-specific development. Each request consumed custom development time, limiting how widely automation could be deployed.
Natural language emerged not as a technology choice, but as an insight about opportunity. The wedge wasn’t “better computer vision for construction” or “faster automation services.” It was “conversational automation that turns description into execution.”
If customers could already articulate their automation needs clearly in meetings, why couldn’t they simply describe their needs to an AI system and have it execute? Why should coding skills be a barrier to accessing automation?
The choice was counterintuitive. Construction is fundamentally visual. Architects work in 3D models. Engineers analyse structural drawings. Contractors coordinate from blueprints. But Amir saw advantages that would define this approach:
“The technology was available and accessible. Fast onboarding with minimal training needed for natural language communication.”
Natural language became the interface not because it was technically superior, but because it matched how customers already communicated. Users wouldn’t need to learn complex software interfaces or master visual programming environments. They could describe their needs using the same language they used when explaining problems to colleagues and consultants. Minimal training. Fast onboarding. Intuitive.
The solution did not ask users to adopt a new way of thinking about automation; it met them where they already were. This approach distinguished the product from traditional consulting services, visual programming tools such as Dynamo or Grasshopper, and general-purpose AI assistants that lacked construction knowledge.
As pilots expanded, the shift became visible in how people talked about automation:
“I don’t need to code to automate! Computational design becoming accessible to everyone.”
Engineers who had never written code were suddenly automating their workflows. Project managers were creating scripts without consulting developers. The barrier to automation, technical expertise, had been removed. This wasn’t theoretical. Automation was no longer a luxury for development teams; it was a capability available to anyone. This was democratisation in practice.
Validating the product through customer obsession
The category only matters if customers recognise themselves in it and validate that the product works. The team ran quick pilots with early partners, giving them access to early versions and observing what happened. The technology wasn’t perfect. The AI made mistakes. Users hit failures. But the behaviour was clear:
“Everyone had something to automate every single day. They were still trying to do it even though the technology had gaps.”
Users continued experimenting, prompting, and trying to make it work. This behaviour validated the insight that customers wanted to automate tasks without coding, and that natural language was their preferred interface. The friction wasn’t the UI; it was the AI’s ability to understand construction-specific context.
The team’s approach differed from typical SaaS practice. Rather than collecting requests and prioritising by revenue, they held weekly and monthly check-ins with all clients: systematic conversations about what was working, what was failing, and where the product excelled in production workflows.
Amir describes the framework:
“On each milestone we identify what are the main reasons for failures of the product. We categorise them into different baskets, then agree on which ones we can solve technology-wise and which have the highest impact if we solve them.”
This failure categorisation system does more than guide product development; it reveals what the product actually needs to be. Early assumptions about natural language interaction evolved through this systematic feedback.
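The basket approach Amir describes can be sketched as a simple triage step: map each failure report to a category, then count the baskets to see where fixes would have the most impact. The categories, keywords, and function names below are hypothetical illustrations, not BIMLOGIQ’s actual taxonomy.

```python
from collections import Counter

# Hypothetical failure "baskets" with keyword triggers (illustrative only).
CATEGORIES = {
    "geometry": ["coordinate", "rotation", "offset", "geometry"],
    "parameters": ["parameter", "type property"],
    "context": ["which view", "current selection", "active model"],
}

def categorise_failure(description: str) -> str:
    """Assign a failure report to the first basket whose keywords match."""
    text = description.lower()
    for category, keywords in CATEGORIES.items():
        if any(kw in text for kw in keywords):
            return category
    return "uncategorised"

def failure_histogram(reports: list[str]) -> Counter:
    """Count failures per basket so the largest baskets get addressed first."""
    return Counter(categorise_failure(r) for r in reports)

# Toy failure reports, for illustration.
reports = [
    "model placed the tag at the wrong coordinate",
    "could not find the shared parameter on the family",
    "assumed the wrong active model view",
    "rotation applied in the wrong direction",
]
hist = failure_histogram(reports)
```

In practice the categorisation would be richer than keyword matching, but the shape of the loop is the point: classify, count, then decide which baskets are solvable and highest-impact.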
The critical insight is the use of internal benchmarking systems to validate fixes. Industry-reported benchmarks often don’t align with construction-specific performance needs, so the team developed vertical-specific benchmarks that measure what matters in production: accuracy in construction terminology, handling of domain-specific workflows, and reliability under real-world conditions.
“We have a large dataset of different tasks that we can ask the platform to do, then automatically evaluate if it was correct or not. Sometimes the reported benchmarks in the industry are not really aligned with the vertical benchmark we developed internally. Sometimes the gains are very marginal but it’s a lot slower compared to previous models.”
Competitors using general-purpose models would show impressive benchmark numbers that didn’t translate to construction performance. The team’s advantage lies in measuring success against construction standards informed by sustained customer engagement and systematic failure analysis.
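The evaluation loop Amir describes, a dataset of tasks with automatic correctness checks plus latency tracking, can be sketched as a small harness. All names here (`BenchmarkTask`, `run_benchmark`, the toy model and tasks) are assumptions for illustration, not BIMLOGIQ’s actual code.

```python
import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class BenchmarkTask:
    """One construction-specific task: a prompt plus a correctness check."""
    prompt: str
    check: Callable[[str], bool]  # True if the model's output is correct

def run_benchmark(model: Callable[[str], str],
                  tasks: list[BenchmarkTask]) -> dict:
    """Run every task through the model; report accuracy and mean latency."""
    correct, latencies = 0, []
    for task in tasks:
        start = time.perf_counter()
        output = model(task.prompt)
        latencies.append(time.perf_counter() - start)
        if task.check(output):
            correct += 1
    return {
        "accuracy": correct / len(tasks),
        "mean_latency_s": sum(latencies) / len(latencies),
    }

# Toy stand-in model: handles tagging, fails on dimensioning.
def toy_model(prompt: str) -> str:
    return "tag_walls" if "tag" in prompt else "unknown"

tasks = [
    BenchmarkTask("tag all walls on level 1", lambda out: out == "tag_walls"),
    BenchmarkTask("dimension the grid lines", lambda out: out == "dimension_grids"),
]
report = run_benchmark(toy_model, tasks)
```

Tracking latency alongside accuracy matters because, as the quote notes, a new model’s marginal accuracy gain can come with a speed regression that makes it worse in production.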
This shift also surfaced a critical contextual challenge:
“Users often use very high-level language to communicate, as if the model sees everything they see, which it doesn’t really.”
The product couldn’t simply translate language into code; it needed contextual awareness of the user’s environment and domain-specific knowledge of construction workflows. The product definition shifted from “natural language interface” to “contextually aware automation that understands construction conventions, families, parameters, and project structures”.
Each failure pattern refined the product definition. Each customer conversation validated the insight: construction professionals want to automate their work without coding. However, the solution had to deliver on that promise with sufficient accuracy, context awareness, and domain intelligence to be effective.

Building competitive advantage in the category
Once a category is identified and the product is built, the question becomes: how do you defend it? Competitors don’t just need to build a better product; they need to replicate all the domain-specific intelligence required to make it work.
The team’s moats emerged not from proprietary technology but from a deep understanding of construction-specific context, one that leaves general-purpose competitors measuring themselves against the wrong metrics.
Consider domain-specific training. Off-the-shelf large language models underperform on construction tasks because they’re optimised for general knowledge, not construction workflows:
“The baseline performance is not great in terms of accuracy. It’s not super surprising because it’s a general-purpose model that can do anything from medical to anything. So one of the ways that we did was to train the models to be more accurate and more aware of not just the Revit data but general knowledge in the sector.”
This training advantage compounds over time. Every customer interaction, every identified failure pattern, and every synthetic data generation effort deepens the understanding of what the product requires. Competitors entering the market start from scratch, testing models against general benchmarks that don’t reflect construction-specific performance.
But there’s a privacy challenge: how do you learn from customer interactions without accessing sensitive project data or proprietary workflows? Many AI companies train on user data, creating intellectual property concerns that construction firms rightly worry about.
The team’s approach focuses on patterns, not individual prompts:
“We focus on patterns instead of prompts. For example, we find a pattern that the model is failing in working with geometry or parameters. Once we find that, we internally start to produce some data for that.”
This protects user privacy while building an advantage through systematic understanding. The product categorises failure modes and addresses them at the architectural level. Many failures aren’t AI training problems; they’re context management challenges:
“Being aware of the context of what the user is talking about in terms of the 3D model is very important. Those things don’t really need training; it just needs the architecture to be changed in terms of how we interact with the data.”
Context awareness is what competitors can’t quickly copy. Users expect the product to understand their working environment: which families are present in their Revit model, which parameters are available, and the conventions their projects follow. This requires a granular understanding of construction workflows. It can’t be licensed or downloaded; it must be built through sustained engagement.
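One way to picture this architectural change is a context-assembly step: before the request reaches the model, the system gathers a snapshot of the user’s working environment so that phrases like “this view” or “these doors” can resolve. The structure, field names, and sample values below are hypothetical, a sketch of the idea rather than BIMLOGIQ’s implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ModelContext:
    """Snapshot of the user's working environment (hypothetical structure)."""
    families: list[str] = field(default_factory=list)
    parameters: list[str] = field(default_factory=list)
    active_view: str = ""

def build_prompt(user_request: str, ctx: ModelContext) -> str:
    """Prepend environment context so high-level references can resolve."""
    return (
        f"Active view: {ctx.active_view}\n"
        f"Families present: {', '.join(ctx.families)}\n"
        f"Available parameters: {', '.join(ctx.parameters)}\n"
        f"Request: {user_request}"
    )

# Toy snapshot of a BIM session, for illustration.
ctx = ModelContext(
    families=["Basic Wall", "Door-Single"],
    parameters=["Mark", "Fire Rating"],
    active_view="Level 1 Floor Plan",
)
prompt = build_prompt("tag all the doors in this view", ctx)
```

The point of the sketch is that no model retraining is involved: the gain comes purely from how the system interacts with the data around the request.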
The internal benchmarks created a feedback loop: a better understanding of construction workflows improves the benchmarks, which in turn guide model selection, which improves performance, which reveals new patterns. Competitors using general-purpose models can’t replicate this because they lack construction-specific benchmarks, domain training, and architectural solutions for context awareness in BIM environments.
Managing boundaries and evolution
Products don’t mature quickly. Managing user expectations requires patience and transparency:
“We are very transparent around these are the things the tool can do and cannot do.”
The 20-minute value threshold reflects product maturity thinking:
“It’s important that we can deliver value quickly to the users. Quickly, I mean in the first 20 minutes of using the tool.”
This isn’t instant perfection; it’s proving the product works for real tasks within a timeframe busy professionals will tolerate. Pilots that work have dedicated time and people; the “try it when you have time” approach fails. Users need bandwidth and internal champions to advocate for adoption.
Managing feature requests without becoming a “Frankenstein product” requires clear boundaries:
“We try to follow the transparency principle. We are transparent that these are all the things we can do now for you, with the possibility of being added along the roadmap in the future.”
Saying no to customer requests isn’t hostile; it’s product discipline. If every request gets built, the product loses coherent identity and competitive differentiation.
But product evolution means expanding capabilities without losing coherence. The multimodal vision represents the next phase:
“For these products to be able to have visual oversight of what’s happening and analyse it, I think it’s going to be a massive improvement. If you want to have a production-ready system, we’re far from that point. At least currently.”
Natural language was the wedge, not the destination. The next phase is visual analysis: analysing views and drawings before generating the PDF. General-purpose visual models currently underperform on construction drawings because the required accuracy is too high. But as visual AI matures with construction-specific training, architects and engineers won’t need specialised technical skills to automate complex visual workflows. The democratisation continues, removing barriers rather than adding complexity.
The team of 13 supporting 100+ customers reflects a deliberate focus. A larger team would demand faster growth and pressure to expand. A smaller team couldn’t validate product expansion. This size creates space for deep work on hard problems before scaling.
Why this matters: the next three to five years
Amir’s journey from the consultancy model to a defensible product reveals consistent patterns. He observed that customers had already described their needs and recognised an alternative way to solve the problem. The shift from custom consulting scripts to self-service automation through natural language worked because systematic customer connection validated it, and domain-specific intelligence defended it.
The pattern is visible across every stage:
Observation became the foundation. Customers explained automation needs in natural language during meetings. That clue revealed how construction professionals had already considered their problems.
Validation required systematic connection. Quick pilots showed demand. Weekly check-ins, failure categorisation, and internal benchmarking refined the product’s requirements. The framework emerged from listening to customer successes and failures.
Competitive advantage compounds through domain expertise. General-purpose models underperform in construction. Domain training, context awareness, and construction-specific benchmarks create defences competitors can’t quickly replicate.
Evolution requires discipline. Clear communication about boundaries builds trust. Saying no protects coherence. Patient expansion into multimodal capabilities prevents premature deployment.
Amir’s long-term bet is clear:
“Human-level interaction with design software becomes possible in the next three to five years.”
The companies that win won’t be those with the best underlying models, but those that understood the category opportunity before the market named it, created clear differentiation from existing alternatives (consulting, visual programming, general AI), connected with customers deeply enough to prove it works, and built advantages competitors can’t quickly copy.
The next three to five years will determine whether conversational construction automation becomes defensible or is absorbed by large platforms. BIMLOGIQ’s advantage lies in building not just technology but also systematic customer understanding and domain intelligence that compound over time. The strategy was elegant: identifying a category opportunity (removing the barrier to automation) and building a product that delivers on it (users automating themselves without coding). Execution is what determines whether that category opportunity becomes a lasting competitive advantage.
