Edge AI with Direct Device Control
Despite all the hype and promise, we are in the Timeshare Mainframe moment of AI: even our devices rely on the cloud for most inference. As AI moves beyond the cloud and into the physical world, the real opportunity lies at the edge, where local intelligence meets local data and action. In this talk, we explore how AI systems can move from cloud agents to direct device control, reducing round-trip latency, preserving user privacy, and enabling real-time responsiveness without constant cloud dependency. Drawing on experiments with platforms such as the ESP32 and Axera edge AI SoCs, we’ll examine how to architect low-power systems that combine on-device inference with local data and action. This includes running compact speech-to-text and video models on-device and using USB and Bluetooth HID interfaces to translate AI outputs directly into keyboard, mouse, and other human interface device control signals. Attendees will gain insight into tools such as PlatformIO and ready-made modules like those from M5Stack that accelerate edge development.
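To make the HID idea concrete: a USB HID keyboard speaks in fixed 8-byte reports (modifier byte, reserved byte, up to six key usage IDs from the USB HID Usage Tables). The sketch below is a hypothetical illustration, not code from the talk; it shows how a model's text output could be encoded into key-down/key-up report pairs that an edge device would then send over its USB or BLE HID endpoint.

```python
# Hypothetical sketch: encode a model's text output as USB HID keyboard
# reports. Each report is 8 bytes: [modifiers, reserved, key1..key6].
# Usage IDs follow the USB HID Usage Tables (keyboard/keypad page 0x07).

HID_LEFT_SHIFT = 0x02  # left-shift bit in the modifier byte


def char_to_usage(ch: str) -> tuple[int, int]:
    """Map an ASCII character to a (modifier, usage_id) pair."""
    if "a" <= ch <= "z":
        return 0, 0x04 + ord(ch) - ord("a")
    if "A" <= ch <= "Z":
        return HID_LEFT_SHIFT, 0x04 + ord(ch.lower()) - ord("a")
    if "1" <= ch <= "9":
        return 0, 0x1E + ord(ch) - ord("1")
    if ch == "0":
        return 0, 0x27
    if ch == " ":
        return 0, 0x2C  # spacebar
    if ch == "\n":
        return 0, 0x28  # Enter
    raise ValueError(f"unmapped character: {ch!r}")


def reports_for_text(text: str):
    """Yield (key-down, key-up) report pairs, one pair per character."""
    for ch in text:
        mod, usage = char_to_usage(ch)
        down = bytes([mod, 0, usage, 0, 0, 0, 0, 0])
        up = bytes(8)  # an all-zero report releases every key
        yield down, up
```

On real hardware these bytes would be written to the HID endpoint (for instance via TinyUSB on an ESP32-S3, or a BLE HID characteristic); the encoding itself is identical either way.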
Jeremy Kelaher
Jeremy Kelaher is AI Enablement Architect at the Special Broadcasting Service, where they champion smarter, more resilient media workflows across digital and live production at scale, helping systems deliver much-loved content every day.
A lifelong electronics hobbyist and an early coder who cut their teeth building an ELIZA-style chatbot as a teen, Jeremy has been obsessed with practical AI ever since. For this talk, Jeremy brings a blend of hands-on tinkering and professional engineering rigour to Edge AI, especially real-time, privacy-preserving systems that interact directly with the world.