Siri, Alexa, Cortana, and “Hey Google” - the big companies are all building services that can talk to us, and that we can talk back to. These tools are great for setting timers, adding reminders, and playing a song. But compared to the sheer amount of information and interaction available to us on the web, they can still feel quite limited.
How can we, as standards-loving web developers, help shape the future of voice-controlled, conversational user interfaces? How can we use the well-supported Speech Synthesis API, and the still-new Speech Recognition API, to teach our pages to talk, and to listen?
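As a taste of what the talk covers, here is a minimal sketch of both APIs in the browser. It uses feature detection so it degrades gracefully where an API is missing (Speech Recognition still ships prefixed as `webkitSpeechRecognition` in Chromium-based browsers); the phrase and language tag are arbitrary example values.

```javascript
// Speak a phrase with the Speech Synthesis API.
// Returns false where the API is unavailable.
function speak(text, lang = "en-US") {
  if (typeof speechSynthesis === "undefined") return false;
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.lang = lang;
  speechSynthesis.speak(utterance);
  return true;
}

// Listen for speech with the Speech Recognition API and pass the
// best transcript of the latest result to a callback.
// Returns false where the API is unavailable.
function listen(onResult) {
  const Recognition =
    globalThis.SpeechRecognition || globalThis.webkitSpeechRecognition;
  if (!Recognition) return false;
  const recognizer = new Recognition();
  recognizer.lang = "en-US";
  recognizer.onresult = (event) => {
    const latest = event.results[event.results.length - 1];
    onResult(latest[0].transcript);
  };
  recognizer.start();
  return true;
}
```

A page might call `speak("You have three new messages")` to read out key content, and `listen(command => handle(command))` to accept spoken input - the combination the talk builds on.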
And how can we make the key information and interactions on our pages available to people who might be driving, preparing food, or carrying a child, and to people with visual impairments? In this talk we’ll explore how voice-controlled interfaces open up even more possibilities once combined with the natural strengths of the web platform.