The way you interact with the computers in your life, both big and small, has changed dramatically over the past couple of decades and will continue to do so in the coming years.
New leaps in technology are causing researchers to rethink the user interface, a concept that lies at the heart of the design of some of your favorite products.
In short, a user interface (UI) is the conduit for human-computer interaction: the space where a user interacts with a computer or machine to complete tasks.
The user interface of your devices may be something you overlook on a daily basis. Whether you are reading this article on your phone or on the computer, you probably interacted with your device by clicking or touching certain icons via your preferred operating system.
However, have you ever thought about the next generation of user interfaces and how humans will interact with future devices?
Half a century ago, pioneering computer scientist Doug Engelbart changed the user interface forever by showcasing the computer mouse, the graphical user interface, hypertext, video conferencing, and much more, laying the foundation for the way people interact with computers today.
Gesture Interfaces
Don’t worry, society is not as far as you think from a complete Minority Report-like interface.
Unlike the more common touch interfaces, gesture interfaces do not even require you to physically touch the device. With simple finger, hand, or body movements, you will be able to control various devices.
Imagine you are trying to take a group photo but need to also include yourself in the photo. Of course, you can set up a timer to capture the perfect shot.
However, with a gesture-controlled device, you could simply raise your hand in front of the camera to initiate the shot.
With gesture-controlled interfaces, there will no longer be a need for a mouse, keys, a remote control, or even buttons, because all these functions can be mapped to your body.
It is worth mentioning that some technology, such as drones and certain computer interfaces, already uses gesture controls to complete basic tasks.
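To make the idea concrete, here is a minimal sketch of how a gesture-controlled camera might map a hand pose to a command. The landmark format and the "open palm" rule are hypothetical simplifications invented for this example, not how any particular product works:

```python
# Toy sketch: map a detected hand pose to a camera action.
# Coordinates are in image space, where smaller y means higher up.

def is_open_palm(fingertip_ys, palm_y):
    """All five fingertips above the palm counts as an open palm."""
    return len(fingertip_ys) == 5 and all(tip < palm_y for tip in fingertip_ys)

def gesture_to_command(fingertip_ys, palm_y):
    """Raising an open hand in front of the camera triggers the shot."""
    return "take_photo" if is_open_palm(fingertip_ys, palm_y) else None

print(gesture_to_command([0.20, 0.25, 0.22, 0.24, 0.30], 0.6))  # take_photo
print(gesture_to_command([0.70, 0.70, 0.70, 0.70, 0.70], 0.6))  # None
```

A real system would get those landmark coordinates from a hand-tracking model rather than hard-coded lists, but the mapping from pose to command works the same way.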
Expect to see gesture interfaces everywhere in five years.
Tangible User Interface (TUI)
Though this may sound like something out of a sci-fi novel, the ability to have a computer that fuses the physical environment with the digital realm is not too far off from reality.
The tangible user interface, or TUI, will allow computer surfaces to recognize real objects simply by having them placed on the screen.
Microsoft’s PixelSense is an excellent example of this type of technology.
Shaped like a massive touchscreen table, the prototype lets users place everyday objects on its surface, where a complex array of sensors detects and identifies them.
With the power of a tangible interface, you would be able to do a host of things, from programming the system to recognize sizes and shapes to interacting with tags embedded in various devices.
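As a rough illustration of the embedded-tag idea, here is a toy sketch in which a tabletop surface looks up a detected tag ID and reacts. The tag IDs and object names are invented for the example:

```python
# Toy sketch: a tabletop surface recognizing objects by embedded tags.
# Tag IDs and objects are hypothetical.

TAG_REGISTRY = {
    0x1A: "coffee cup",
    0x2B: "smartphone",
    0x3C: "paintbrush",
}

def on_object_placed(tag_id):
    """React when the surface detects a tagged object."""
    obj = TAG_REGISTRY.get(tag_id)
    if obj is None:
        return "unknown object: ignoring"
    return f"recognized {obj}: opening its menu"

print(on_object_placed(0x2B))  # recognized smartphone: opening its menu
print(on_object_placed(0x99))  # unknown object: ignoring
```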
A pioneer of tangible user interfaces, MIT Media Lab’s Hiroshi Ishii aims to create a world of tangible UIs he calls “tangible bits,” making digital information directly manipulable and perceptible.
In doing so, he aims to create a seamless coupling between physical objects and virtual data.
Emotion-Sensing Technology
The apps recommended to you, the settings on your devices, and even how you use your computer could all be dictated by your emotions.
Though this centers primarily on improving user interfaces, it is not too far-fetched to have a world where devices are completely dictated by the way you feel.
Emotion-sensing technology (EST) already exists and could change the way you use devices forever.
According to a recent report by MIT Sloan, with the rise of even more powerful AI, EST could be used to analyze body language such as eye movement, facial expressions, and skin response to gauge how a person is feeling.
With EST, computer interfaces would be tailored to fit your mood, ready to recommend apps, optimize the operating system, and influence everything you see and hear.
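To sketch how such tailoring might work, here is a toy example that maps a couple of hypothetical sensor readings to a coarse mood and then to interface recommendations. The features, thresholds, and labels are all invented for illustration:

```python
# Toy sketch: infer a coarse mood from sensor readings, then tailor
# the interface. Real EST uses far richer signals and models.

def infer_mood(blink_rate, skin_conductance):
    """Blink rate in blinks/minute; skin conductance normalized to 0..1."""
    if skin_conductance > 0.7 and blink_rate > 20:
        return "stressed"
    if skin_conductance < 0.3 and blink_rate < 10:
        return "relaxed"
    return "neutral"

RECOMMENDATIONS = {
    "stressed": "meditation app, warm screen tint, notifications muted",
    "relaxed": "music app, normal brightness",
    "neutral": "default home screen",
}

def tailor_interface(blink_rate, skin_conductance):
    return RECOMMENDATIONS[infer_mood(blink_rate, skin_conductance)]

print(tailor_interface(25, 0.9))  # meditation app, warm screen tint, notifications muted
```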
Virtual Reality and Augmented Reality
Virtual and augmented reality have been just about everywhere. What has been considered a novelty is sure to impact the way future generations complete tasks, take on projects, and eventually the way humans interact with computers.
Though a majority of VR and AR now is used for entertainment and gaming purposes, this could all change in the very near future.
Companies are already using VR to create products, build musical experiences, and even create multi-tasking operating systems.
AR, meanwhile, has been used to connect the digital and real worlds, sharing some similarities with TUI.
Brain-Computer Interface
Probably the most sci-fi-esque idea on this list, the brain-computer interface could play a big role in the near future.
As the name implies, humans could soon be controlling computers with their brains and nothing else at all.
Your brain produces unique electrical signals every time it “thinks”.
In the future, researchers will be able to map these brainwaves to specific commands. Rather than flipping a light switch or turning on the TV, you could simply think about the task and it would be completed.
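At its simplest, mapping a brainwave to a command can be imagined as comparing signal power against a threshold. The sketch below is a deliberately naive illustration with invented numbers; real brain-computer interfaces use far more sophisticated signal processing:

```python
# Toy sketch: decode a command from an EEG-like signal by comparing
# its average power to a threshold. Signal values, the threshold, and
# the command mapping are all hypothetical.

def band_power(samples):
    """Mean squared amplitude of the signal."""
    return sum(s * s for s in samples) / len(samples)

def decode_command(samples, threshold=0.5):
    """A sufficiently strong signal triggers the mapped command."""
    return "lights_on" if band_power(samples) > threshold else None

focused = [0.9, -0.8, 1.0, -0.9]     # strong "thinking hard" signal
resting = [0.1, -0.1, 0.05, -0.05]   # baseline

print(decode_command(focused))  # lights_on
print(decode_command(resting))  # None
```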
Currently, Tan Le, the founder of Emotiv Lifescience, has created a futuristic headset that detects the brain waves generated by thoughts, giving users almost “telepathic powers”.
Which interface are you most looking forward to in the near future? Leave your comments below.