Voice, gesture, and neural interfaces

If physical buttons and screens become a thing of the past, how will we control and communicate with our devices and the virtual world around us? The proliferation of voice assistants such as Google Home and Amazon Echo in households across the world represents more than a new level of convenience for families and individuals. These devices also capture massive amounts of voice data used to develop the next generation of voice AI, which is expected to understand nuanced language, communicate far more fluently, and let users interact with devices and virtual interfaces hands-free.

Beyond voice commands, touchless interfaces will be navigated through gesture: a user's physical movements are tracked and interpreted to control a device or projected display. Instead of dragging a fingertip across a physical screen to browse content, for example, users could motion with their head or wave a hand in the space in front of them and have their smart glasses scroll through a virtual display accordingly. Meta's Quest Pro, in addition to its touch controllers, recognizes hand gestures such as pinch and zoom. Meta also acquired CTRL-Labs and is developing haptic gloves and wristband inputs that use electromyography, for example to capture typing on an invisible keyboard.

Even further out, R&D efforts are exploring direct brain-computer interfaces such as Elon Musk's Neuralink and Synchron, which is backed by Bill Gates and Jeff Bezos. These devices, however, face years of further development and FDA approval, not to mention user trepidation about the ethical, health, and security risks of direct interfaces.
