Year round learning for product, design and engineering professionals

Your AI Can’t Engineer (Yet) — Theodoros Galanos at AI Engineer Melbourne 2026
Your AI Can't Engineer (Yet): Where AI Fails in Professional Contexts

The demos are remarkable. An AI system accepts a brief specification and generates a detailed engineering design. It analyses complex problems and proposes solutions. The output looks professional and complete. But when actual engineers try to use these systems on real work, something critical breaks down. The AI has hallucinated compliance requirements, misunderstood contextual constraints, or produced designs that are technically correct but professionally unacceptable.

This gap reveals something important about how AI currently works and what it would need to do to genuinely support professional engineering work. And it's not a gap that better prompting or larger models will necessarily close.

Engineering isn't fundamentally a pattern-matching problem, even though that's largely what large language models do. Engineering requires integrating multiple overlapping constraint domains: physics, materials, regulations, cost, time, environmental impact, professional standards, and client requirements. A professional engineer spends years learning not just how to apply formulas, but which constraints matter most in which contexts, how to negotiate tradeoffs, and when standard solutions are inappropriate.

An AI system trained on engineering documentation and designs can pattern-match against this material. It can propose solutions that look reasonable because they're statistically similar to solutions that worked before. But this resembles engineering work without being engineering work. It's like a system trained on medical literature proposing treatments—it might sound medically plausible, but without the clinical judgment developed through practice and accountability, it's just confident guessing.

Consider regulatory compliance specifically. Professional engineers work within stringent regulatory frameworks. A design might be technically sound but fail to meet building codes, environmental regulations, or professional standards. These requirements vary by jurisdiction, change over time, and often contain edge cases and exceptions that aren't obvious from the regulation text itself. An AI system might miss these entirely, not because it's stupid but because compliance isn't encoded in patterns—it's embedded in expert judgment and professional responsibility.

The same challenge appears with professional accountability. When an engineer signs off on a design, they're accepting liability for their judgment. They must be prepared to defend their decisions, explain tradeoffs, and take responsibility for failures. Can an AI system do this? More importantly, should it? The legal and professional responsibility frameworks don't yet accommodate AI recommendations presented as engineering work. Professional engineers can't realistically stand behind AI-generated designs without reviewing them completely, at which point the AI hasn't actually saved time.

There's also the contextual understanding problem. Real engineering problems are embedded in histories, relationships, and organizational constraints that aren't visible in the problem statement. The client has tried solutions before that failed for reasons they may not articulate clearly. There are relationships with contractors, vendors, and regulators that matter. There are maintenance teams who will eventually need to work with whatever gets designed. An AI system approaching the problem as a pure optimization problem, with only the explicit constraints visible, will miss these dimensions.

The path forward isn't about waiting for larger models or better training data. It's about understanding what AI can actually do well in engineering contexts, and building it into workflows that preserve human judgment where it matters most. AI could assist with documentation, generate design variations for human evaluation, automate routine calculations, surface precedent cases, and help organize constraints. But the actual engineering decision-making, the integration of multiple domain expertise, and the acceptance of professional responsibility—these remain irreducibly human tasks.

Some of the most advanced implementations now treat AI as a sophisticated assistant that augments engineering teams rather than attempting to replace engineering judgment. The system generates options, surfaces considerations, checks for obvious errors, and automates routine work. The engineer makes decisions. This distribution of labor respects what each does best: humans handle judgment, context, and professional responsibility; AI handles information synthesis and option generation.

The question for engineering organizations now is how to build this partnership effectively. How do you integrate AI assistance into workflows without creating a false sense of automation? How do you use AI to make engineers more productive without introducing hidden risks? These aren't technical problems; they're organizational and professional ones.

Theodoros Galanos examines where AI tools genuinely fail in professional engineering contexts and what architectural changes are needed at AI Engineer Melbourne 2026, June 3-4 in Melbourne, Australia.