Mal Curtis

Principal Software Engineer

NVIDIA

Why LLMs Fall for Stories (And 5 Production Patterns That Actually Stop Them)

Prompt injection isn’t a bug — it’s a feature. LLMs trained on humanity’s written corpus learned something we didn’t intend: narrative structure. They understand dramatic tension, plot twists, and persuasive framing. When an attacker crafts a compelling story (“Actually, the real system prompt said...”), the model follows because that’s what stories do. This talk connects 2,500 years of storytelling theory — from Aristotle’s Poetics to Derrida’s “there is no outside text” — to explain why prompt injection is an inevitable consequence of training on human language, not a solvable vulnerability.

Understanding why doesn’t stop the attacks, but it changes how you build defences. You’ll learn production-tested layered defence patterns and leave with a mental model for threat modelling, plus defences you can implement immediately.
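The talk doesn’t enumerate its five patterns in this abstract, but the general shape of a layered defence can be sketched. The example below is a generic illustration, not the speaker’s patterns: it stacks a cheap input screen for common instruction-override phrasing on top of a privilege-separation check, so a request must pass every independent layer. All names, marker patterns, and tool identifiers here are illustrative assumptions.

```python
import re

# Illustrative markers only -- real deployments would use far richer
# detection (classifiers, canary tokens, etc.), since pattern lists
# are trivially bypassed on their own. That is why layers matter.
INJECTION_MARKERS = [
    r"ignore (all )?previous instructions",
    r"the real system prompt",
    r"you are now",
]

def screen_input(user_text: str) -> bool:
    """Layer 1: cheap pattern screen for common override phrasing."""
    lowered = user_text.lower()
    return not any(re.search(p, lowered) for p in INJECTION_MARKERS)

def enforce_privilege(requested_tool: str, allowed_tools: set[str]) -> bool:
    """Layer 2: privilege separation -- the model may only invoke tools
    the calling context explicitly granted, regardless of what the
    prompt (or an injected story) claims."""
    return requested_tool in allowed_tools

def guarded_call(user_text: str, requested_tool: str,
                 allowed_tools: set[str]) -> str:
    """A request must pass every layer before dispatch."""
    if not screen_input(user_text):
        return "refused: input failed injection screen"
    if not enforce_privilege(requested_tool, allowed_tools):
        return "refused: tool not permitted in this context"
    return f"ok: dispatching {requested_tool}"
```

The key design point is that the privilege layer holds even when the pattern screen is fooled: a story that persuades the model to request `delete_db` still fails, because permission is decided outside the model.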

Mal Curtis

Mal Curtis is a Principal Software Engineer at NVIDIA, where he continues working on AI inference infrastructure following NVIDIA’s acquisition of Groq. Previously, he built engineering infrastructure for the development of the world’s fastest AI inference chips, and before that led AI agent development at Kolatr, where prompt injection moved from “interesting research problem” to “thing that breaks production on Tuesdays.”