The way you interact with the computers in your life, both big and small, has changed dramatically over the past couple of decades and will continue to do so in the coming years. The user interface lies at the heart of the design of some of your favorite products. So, what exactly is a user interface?
In short, a user interface (UI) is the point of contact between human and computer: the space where a user interacts with a computer or machine to complete tasks.
The user interface of your devices may be something you take for granted on a daily basis. Whether you are reading this article on your phone or on the computer, you probably interacted with your device by clicking or touching certain icons via your preferred operating system. These are part of the user interface.
However, have you ever thought about the next generation of user interfaces and how humans will interact with future devices? Half a century ago, pioneering computer scientist Doug Engelbart changed the user interface forever. His work created or laid the foundation for the computer mouse, the graphical user interface, hypertext, video conferencing, and much more, all of which shaped the way people interact with computers today.
Next-generation gesture interfaces will make you feel like Tom Cruise in Minority Report.
Society is closer than you might think to a complete Minority Report-style interface.
Gesture interfaces do not even require you to physically touch the device. With a simple finger, hand, or body movement, users can execute various actions. Imagine you are trying to take a group photo but need to include yourself in it as well. Of course, you can set a timer to capture the perfect shot. With a gesture-controlled device, however, you could simply wave your hand in front of the camera to trigger the shutter.
With gesture-controlled interfaces, there would no longer be a need for a mouse, keyboard, remote control, or even buttons, because all of these functions would be handled through gestures.
In fact, these next-generation user interfaces are already here. People can control drones, art installations, and even mobile devices using gestures. Google is currently among the leaders in this field, building new interaction sensors based on radar technology. These sensors can track sub-millimeter motions at high speed and with high accuracy, and the resulting Soli chip already ships in the Pixel 4.
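As a toy illustration of the idea (not any vendor's actual API), recognizing a "wave" gesture can be as simple as counting direction reversals in a tracked hand's horizontal position over a short window of frames. The function name and thresholds below are invented for this sketch:

```python
def is_wave(x_positions, min_reversals=3, min_travel=0.05):
    """Detect a wave gesture: the hand's x-coordinate (normalized
    to 0..1 across the camera frame) must reverse direction at
    least `min_reversals` times, with each swing covering at
    least `min_travel` of the frame width (to ignore jitter)."""
    reversals = 0
    direction = 0  # +1 = moving right, -1 = moving left, 0 = unknown
    last_extreme = x_positions[0]
    for x in x_positions[1:]:
        delta = x - last_extreme
        if abs(delta) < min_travel:
            continue  # too small a movement to count
        new_dir = 1 if delta > 0 else -1
        if direction != 0 and new_dir != direction:
            reversals += 1  # the hand changed direction
        direction = new_dir
        last_extreme = x
    return reversals >= min_reversals

# A hand sweeping left-right-left-right reverses direction 3 times:
print(is_wave([0.2, 0.8, 0.2, 0.8, 0.2]))  # True
print(is_wave([0.2, 0.3, 0.25]))           # False (just jitter)
```

A real system like Soli works from radar returns rather than camera coordinates and uses far richer signal processing, but the core task is the same: turning a stream of motion measurements into a discrete action.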
Tangible user interfaces (TUI) will be an important part of emerging user interfaces.
Though this may sound like something out of a sci-fi novel, a computer that fuses the physical environment with the digital realm is not too far off from reality.
The tangible user interface, or TUI, will allow computer surfaces to recognize real objects by simply placing the objects on the screen. These new interfaces could even allow you to interact with real-world objects while in the digital world, and vice versa.
Microsoft’s PixelSense is one example of this type of technology.
Shaped like a massive touchscreen table, the prototype lets users manipulate real-world objects and share digital content at the same time, and several people can interact with that content simultaneously.
With the power of a tangible interface, you would be able to do a host of things, including programming the system to recognize sizes and shapes and to interact with embedded tags in various devices.
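A minimal sketch of the embedded-tag idea (not Microsoft's actual API; the tag IDs, objects, and actions here are invented): the surface reads a tag ID from a placed object and looks up what digital content to show next to it.

```python
# Hypothetical registry mapping embedded tag IDs to digital behavior.
TAG_REGISTRY = {
    0x1A: {"object": "camera", "action": "show_photo_library"},
    0x2B: {"object": "phone",  "action": "offer_file_transfer"},
}

def on_object_placed(tag_id, position):
    """Called when the surface detects a tagged object at (x, y).
    Returns the digital action to render at that position."""
    entry = TAG_REGISTRY.get(tag_id)
    if entry is None:
        return {"action": "ignore", "position": position}
    return {"action": entry["action"], "position": position}

# Placing a tagged camera on the table surfaces its photos there:
print(on_object_placed(0x1A, (120, 340))["action"])  # show_photo_library
```

The interesting design question in a real tangible UI is not the lookup itself but sensing: recognizing untagged objects by size and shape requires computer vision in addition to tag reading.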
Hiroshi Ishii, a pioneer of tangible user interfaces who leads the Tangible Media Group at the MIT Media Lab, aims to create a world of tangible UIs he calls "tangible bits," in which digital information is directly manipulable and perceptible.
The aim is to create a seamless coupling between physical objects and virtual data.
Your emotions could control your user interface.
Recommendations on what apps to use, the settings on your devices, or even how you should use your computer could all be dictated by your emotions.
Though this future interface centers primarily on improving your overall user experience, it is not too far-fetched to imagine a world where how you use your devices is completely dictated by the way you feel. Currently, emotion-sensing technology (EST) exists that could change the way you use devices forever.
According to a recent report by MIT Sloan, with the rise of ever more powerful AI, EST could be used to analyze body language such as eye movement, facial expressions, and skin response to assess how a person is feeling.
With EST, computer interfaces would be tailored to fit your mood, and could potentially make recommendations on what apps to use, optimize your computer’s operating system, and even influence what you see and hear.
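To make the idea concrete, here is a toy sketch of an emotion-adaptive interface (the emotional states, profiles, and app names are all invented for illustration; no real EST product works exactly this way). One sensible design choice shown here is to only adapt when the classifier is confident, falling back to a neutral default otherwise:

```python
# Hypothetical mapping from a detected emotional state to UI settings.
UI_PROFILES = {
    "stressed": {"theme": "calm-dark", "notifications": "muted",
                 "suggested_apps": ["meditation", "music"]},
    "focused":  {"theme": "minimal", "notifications": "muted",
                 "suggested_apps": ["notes", "calendar"]},
    "relaxed":  {"theme": "default", "notifications": "on",
                 "suggested_apps": ["news", "social"]},
}

DEFAULT_PROFILE = {"theme": "default", "notifications": "on",
                   "suggested_apps": []}

def adapt_interface(detected_emotion, confidence, threshold=0.7):
    """Adapt the UI only when the emotion classifier is confident;
    otherwise keep the neutral default profile."""
    if confidence >= threshold and detected_emotion in UI_PROFILES:
        return UI_PROFILES[detected_emotion]
    return DEFAULT_PROFILE

# A confidently detected "stressed" state mutes notifications
# and suggests calming apps:
print(adapt_interface("stressed", 0.91)["suggested_apps"])
# ['meditation', 'music']
```

Thresholding on confidence matters here: silently reconfiguring someone's device based on a shaky emotion guess would be far more annoying than doing nothing.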
Get ready to see augmented reality and virtual reality everywhere.
Virtual and augmented reality already seem to be just about everywhere. But what has so far been considered a novelty will likely shape the way future generations complete tasks, take on projects, and eventually even interact with computers.
Though a majority of VR and AR now is used for entertainment and gaming purposes, this could all change in the very near future.
Companies are already using VR to create products, allow designers to collaborate from different locations, build musical experiences, and even create multi-tasking operating systems.
Brain-computer interfaces could arrive in the next few years.
Probably the most futuristic idea on this list, brain-computer interfaces could play a big role in the near future. As the name implies, humans could soon control computers using nothing but their brains. Recently, Elon Musk's company Neuralink demonstrated that such technology could be possible.
Founded in 2016, Neuralink is Elon Musk's most secretive company, and it kept much of its operations covert until recently.
Musk describes the device as a "Fitbit in your skull, with tiny wires." The small, easy-to-install brain-computer interface could someday expand the capabilities of billions of people around the world, changing the way we interact with technology and treat neurological and mobility issues.
Beyond this, it could allow people to easily control computers, mobile phones, and smart devices with a simple thought. Rather than physically flipping a light switch or reaching for the TV remote, you could simply think about the task and it would be done.
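At its simplest, turning a thought into a command is a signal-processing problem. The following is a deliberately naive sketch (not Neuralink's or any real BCI pipeline; the signal values and threshold are made up): if the average power of a window of brain-signal samples crosses a calibrated threshold, fire an action.

```python
def band_power(samples):
    """Mean squared amplitude of one window of signal samples,
    a crude proxy for activity in a frequency band."""
    return sum(s * s for s in samples) / len(samples)

def decode_command(window, threshold):
    """Map a signal window to a command: strong activity fires
    the action, anything else is treated as rest."""
    return "toggle_light" if band_power(window) > threshold else None

rest   = [0.1, -0.2, 0.15, -0.1]    # low-amplitude resting signal
intent = [0.9, -1.1, 1.0, -0.95]    # strong imagined-command signal

print(decode_command(rest, threshold=0.5))    # None
print(decode_command(intent, threshold=0.5))  # toggle_light
```

Real systems replace this single threshold with per-user calibration, filtering, and machine-learned classifiers over many electrode channels, but the end-to-end shape is the same: electrical activity in, a discrete command out.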
A host of other engineers and researchers are also working in this area. Tan Le, founder of Emotiv Lifesciences, has created a futuristic headset that detects the brain waves generated by thoughts, giving users almost "telepathic" powers. Brain-computer interfaces could be here sooner than you think and could one day be an integral part of our everyday lives.
Which interface are you most looking forward to in the near future? Would you like to control a computer with your tongue?