Most of our interactions with technology aren’t really intuitive. We’ve had to adapt to it by learning to type, swipe, and execute specific voice commands. But what if we could train technology to adapt to us?
Programming hardware in JavaScript has already been made accessible by frameworks like Johnny-five; by combining it with machine learning, we have the opportunity to create new and smarter interactions.
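As a rough illustration of how approachable that is, a minimal Johnny-five sketch looks something like this (assuming an Arduino flashed with StandardFirmata and an LED wired to pin 13):

```javascript
// Minimal Johnny-five sketch: blink an LED on an Arduino.
// Assumes the board runs StandardFirmata and an LED is on pin 13.
const { Board, Led } = require("johnny-five");

const board = new Board();

board.on("ready", () => {
  const led = new Led(13);
  led.blink(500); // toggle every 500 ms
});
```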
In this presentation, I will talk about how to build a simple gesture recognition system using JavaScript, an Arduino and machine learning.
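A very rough sketch of that idea is below. The hardware choice (an MPU6050 accelerometer) and the simple nearest-neighbour matcher are illustrative assumptions, standing in for whichever sensor and model the talk actually uses:

```javascript
// Sketch of a gesture pipeline: buffer accelerometer samples, then
// match the live window against previously recorded gesture templates
// using a nearest-neighbour distance. Recording the templates (the
// "training" step) is left out for brevity.
const { Board, Accelerometer } = require("johnny-five");

const board = new Board();

board.on("ready", () => {
  const accelerometer = new Accelerometer({ controller: "MPU6050" });

  const buffer = [];        // rolling window of [x, y, z] samples
  const WINDOW_SIZE = 50;
  const templates = [];     // recorded gestures: { label, samples }

  accelerometer.on("data", () => {
    buffer.push([accelerometer.x, accelerometer.y, accelerometer.z]);
    if (buffer.length > WINDOW_SIZE) buffer.shift();
    if (buffer.length === WINDOW_SIZE && templates.length > 0) {
      console.log("Closest gesture:", classify(buffer));
    }
  });

  // Sum of squared differences between two equal-length sample windows.
  function distance(a, b) {
    let d = 0;
    for (let i = 0; i < a.length; i++) {
      for (let j = 0; j < 3; j++) d += (a[i][j] - b[i][j]) ** 2;
    }
    return d;
  }

  // Nearest-neighbour: return the label of the closest recorded template.
  function classify(samples) {
    let best = { label: null, score: Infinity };
    for (const t of templates) {
      const score = distance(samples, t.samples);
      if (score < best.score) best = { label: t.label, score };
    }
    return best.label;
  }
});
```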
Charlie Gerard
Charlie is a developer at Atlassian, a Google Developer Expert and a Mozilla Tech Speaker. Outside of her day job, she is passionate about human-computer interaction and spends her free time experimenting with innovative technologies to build prototypes mixing art, science and tech. She also loves contributing to the community and giving back by building open-source tools, writing tutorials, mentoring junior developers and speaking at conferences.