Get a GRASP: How to create real risk visibility for AI agents — Hamish Songsmith at AI Engineer Melbourne 2026
If you deploy an AI agent into production, it will make decisions with real consequences. It might approve a loan, schedule critical infrastructure maintenance, route emergency services, or allocate resources worth millions. And yet, most organisations have almost no systematic way to understand the risks involved.
What they have instead is compliance theatre: audit trails showing that an AI system was used, documentation of the algorithm, perhaps a fairness report. These create the appearance of risk management without actually revealing whether the system poses genuine risks in your specific context. They're often created after the system is built, checking boxes rather than preventing problems.
Hamish Songsmith's GRASP framework takes a different approach. It's built on a simple observation: systematic risk visibility requires asking the right questions before, during, and after deployment. Not generic questions about bias or fairness, but specific questions about your system, your context, and your ability to respond when things go wrong.
The framework recognises that AI agents have a particular problem: they make decisions autonomously, which means you can't catch every failure before impact. A software bug might affect thousands of transactions, but a human could theoretically review them all. An AI agent making millions of autonomous decisions? That's not reviewable the same way. Your risk management strategy needs to be different.
Real risk visibility means knowing: What decisions does this agent actually make? In what situations does it make mistakes? How quickly can you detect when it's failing? What's your recovery time? Can you roll back? Can you intervene mid-process? How does failure in this system cascade to other systems?
These aren't abstract questions. They determine whether you can actually manage the risk if something goes wrong. If your agent makes 10,000 decisions a day and you have a manual review capacity of 50, you're reviewing half a percent of what happens. You need to know that in advance and plan accordingly.
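The arithmetic behind that point is worth making explicit. A minimal sketch, using the hypothetical numbers from the example above (10,000 decisions a day, review capacity of 50, and an assumed 1% failure rate):

```python
# Hypothetical figures from the example in the text.
DECISIONS_PER_DAY = 10_000
REVIEW_CAPACITY = 50

# Fraction of decisions a human ever looks at.
coverage = REVIEW_CAPACITY / DECISIONS_PER_DAY
print(f"Manual review covers {coverage:.1%} of decisions")

# If the agent fails on 1% of decisions and reviews are a random
# sample, this many failures slip through unreviewed each day.
ASSUMED_FAILURE_RATE = 0.01  # illustrative assumption, not from the talk
unreviewed_failures = DECISIONS_PER_DAY * ASSUMED_FAILURE_RATE * (1 - coverage)
print(f"Roughly {unreviewed_failures:.0f} failures per day go unreviewed")
```

Changing the failure-rate assumption changes the headline number, but not the conclusion: at this scale, manual review is a spot check, not a safety net.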
The framework also pushes organisations to be honest about confidence. You don't have perfect confidence in the agent. You have some level of confidence in specific scenarios under specific conditions. That confidence changes as you learn more. Building real risk visibility means documenting that uncertainty and updating it as evidence accumulates.
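One way to make that documented, updating confidence concrete is to track per-scenario evidence and derive a confidence estimate from it. GRASP itself does not prescribe a method; the sketch below uses a simple Beta-Bernoulli model as one illustrative option, and the scenario name and counts are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ScenarioConfidence:
    """Confidence in the agent for one specific scenario,
    updated as observed outcomes accumulate."""
    scenario: str
    successes: int = 1  # Beta(1, 1) prior: no evidence yet
    failures: int = 1

    def record(self, succeeded: bool) -> None:
        # Each reviewed outcome updates the evidence counts.
        if succeeded:
            self.successes += 1
        else:
            self.failures += 1

    @property
    def estimated_reliability(self) -> float:
        # Posterior mean of the Beta distribution.
        return self.successes / (self.successes + self.failures)

# Hypothetical scenario: 9 good outcomes, 1 bad one observed so far.
loans = ScenarioConfidence("loan approval, thin credit file")
for outcome in [True] * 9 + [False]:
    loans.record(outcome)
print(f"{loans.scenario}: {loans.estimated_reliability:.0%}")
```

The point isn't the particular statistics; it's that confidence lives per scenario, starts uncertain, and moves with evidence rather than being asserted once at deployment.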
This matters now because deployment of AI agents is accelerating. Teams are moving from prototypes to production systems without always having good mental models of what can go wrong. Compliance frameworks are still catching up. The gap between what's required by regulation and what's actually necessary to manage risk is real.
GRASP is one approach to closing that gap: taking the question of risk seriously, creating frameworks that reveal actual vulnerabilities rather than creating illusions of control, and giving organisations the information they need to make genuine trade-off decisions.
The honest version of "should we deploy this agent?" is never "is this agent perfect?" It's always "can we manage the risks if this agent fails in the ways we expect and some ways we don't?" That's the question GRASP helps you answer.
Hamish Songsmith, Founder of ryora.ai, is presenting this talk at AI Engineer Melbourne 2026 on June 3-4.
