Optimising GenAI at Runtime with Experimentation and Guardrails
Generative AI systems evolve constantly, and the impact of prompt or model changes often isn’t clear until real users interact with them in production. In this session, learn how teams using Amazon Bedrock safely experiment with AI at runtime, testing models and prompts with targeted rollouts, evaluating system outputs online, and optimising against real business results.
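As a taste of the pattern the session covers, here is a minimal Python sketch of runtime experimentation: a LaunchDarkly feature flag decides which Bedrock model and prompt a given user receives, and a tracked metric feeds the online evaluation. The flag key, metric key, and default model configuration are illustrative assumptions, not details taken from the session.

```python
import json
import boto3
import ldclient
from ldclient.config import Config
from ldclient.context import Context

# LaunchDarkly client for flag-driven model/prompt selection (replace with your SDK key).
ldclient.set_config(Config("YOUR_LAUNCHDARKLY_SDK_KEY"))
ld = ldclient.get()

# Bedrock runtime client for model invocation.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def answer(user_key: str, question: str) -> str:
    context = Context.builder(user_key).kind("user").build()

    # Targeted rollout: the flag serves a model/prompt variant to a slice of users.
    variant = ld.variation(
        "genai-model-config",  # hypothetical flag key
        context,
        {"model_id": "anthropic.claude-3-haiku-20240307-v1:0",
         "system_prompt": "You are a concise, helpful assistant."},
    )

    response = bedrock.invoke_model(
        modelId=variant["model_id"],
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "system": variant["system_prompt"],
            "messages": [{"role": "user", "content": question}],
        }),
    )
    completion = json.loads(response["body"].read())["content"][0]["text"]

    # Online evaluation: record a metric so the experiment can compare variants
    # against real business results ("genai-answer-served" is a hypothetical metric key).
    ld.track("genai-answer-served", context, metric_value=1)
    return completion
```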
Aaron Montana
Aaron serves as the Head of Experimentation at LaunchDarkly, leading a practice that integrates experimentation directly into the software delivery lifecycle. He works alongside product and engineering teams to develop and validate technical frameworks that allow developers to ship code with confidence and precision. He brings more than a decade of experience as a software engineer and practitioner across several major technical verticals at Accenture, IBM, Rackspace and Clearhead. His expertise focuses on moving teams beyond basic feature toggles toward advanced Progressive Delivery models.