Kill the God Agent
Your multi-agent system probably has one orchestrator with access to every tool, every database, every API. If a prompt injection lands on that agent, the entire toolchain is compromised. Guardrails won't save you.
In this session, learn three architectural patterns that move agent security from hope to proof: how to isolate agent capabilities so no single agent holds all the keys, how to scope authorization per task using cryptographic tokens that survive prompt injection, and how to enforce policies outside the LLM using a formally verified engine that intercepts actions in microseconds. Walk away with patterns you can apply to your agent architecture this week.
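To make the token pattern concrete, here is a minimal sketch of per-task scoped authorization. All names here are hypothetical, this is not the speaker's or raxIT's implementation, and a real deployment would use a proper token library (JWTs, macaroons) rather than hand-rolled HMAC. The idea: trusted code mints a signed token listing exactly the tools one task may touch, and the tool layer verifies it outside the LLM on every call, so an injection can talk the model into requesting a tool but cannot widen what the token authorizes.

```python
# Hypothetical sketch: task-scoped authorization tokens verified outside the LLM.
import hmac
import hashlib
import json
import time

SECRET = b"server-side-key-never-shown-to-the-llm"  # lives outside the model's context

def mint_task_token(task_id: str, allowed_tools: list[str], ttl_s: int = 300) -> str:
    """Issued by trusted code at task start; the LLM only ever sees the opaque token."""
    claims = {"task": task_id, "tools": sorted(allowed_tools), "exp": time.time() + ttl_s}
    body = json.dumps(claims, sort_keys=True)
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}|{sig}"

def authorize_tool_call(token: str, tool: str) -> bool:
    """Runs in the tool layer, outside the LLM, on every invocation."""
    body, _, sig = token.rpartition("|")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # forged or tampered token
    claims = json.loads(body)
    return tool in claims["tools"] and time.time() < claims["exp"]

token = mint_task_token("ticket-4821", ["read_crm_record"])
assert authorize_tool_call(token, "read_crm_record")      # in scope
assert not authorize_tool_call(token, "delete_database")  # injection cannot add tools
```

The property that matters is that both the signing key and the verification step sit outside the model's context, so there is nothing a malicious prompt can forge, and nothing for it to exfiltrate that would expand its own authority.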
Adesh Gairola
Adesh Gairola is an AI security nerd who has spent over a decade and a half breaking and fixing things across network, cloud, and AI security. At Cisco, he led a 30-person Asia Pacific VPN security team. At AWS, he moved from cloud security engineering to consulting, giving him a rare dual perspective: he knows how to build secure systems and how to assess them against the policies and frameworks enterprises actually care about. He is a contributor to the OWASP Top 10 for LLM Applications. Now he’s building raxIT (raxit.ai), a security platform for AI agents. His open-source work includes GrayZoneBench (an AI safety benchmark at bench.raxit.ai) and Securing Ralph Loop (a security architecture for autonomous coding agents). He also runs the AI Security Circle, a meetup group in Sydney.