How Gesture Recognition Will Change Our Relationship With Tech Devices

Computer gesture recognition is showing up in cars and operating theaters.
Marcia Wendorf
Gesture recognition. Source: Wikimedia Commons

Before the 2002 science fiction film "Minority Report", director Steven Spielberg invited 15 experts to a hotel in Santa Monica, California for a three-day "think tank" about the technologies that would be in use by the year 2054, the setting of the film.


The group included the computer scientists Neil Gershenfeld and Jaron Lanier. Director Spielberg said he wanted the interface between human and computer to be like "conducting an orchestra."


The production designer for the film, Alex McDowell, kept what was called the "2054 bible", an 80-page guide that contained all the aspects of the future world. John Underkoffler, the science and technology advisor for the film, said, "We worked so hard to make the gestural interface in the film real. I really did approach the project as if it were an R&D thing."

Today, gesture recognition is a real thing. Companies such as Intel and Microsoft have already created use cases for the technology. Intel produced a white paper on touchless multifactor authentication (MFA) for use by healthcare organizations to mitigate security risks and improve clinician efficiency. Intel proposed combining facial recognition with device recognition to authenticate users.
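At its core, that proposal is a policy requiring two independent, touchless signals before a workstation unlocks. The minimal Python sketch below illustrates that two-factor gate; the check_face and check_device callables are hypothetical stand-ins for real recognizers, not anything taken from Intel's paper.

```python
from typing import Callable


def touchless_mfa(check_face: Callable[[], bool],
                  check_device: Callable[[], bool]) -> bool:
    """Grant access only when both independent, touchless factors pass.

    check_face   -- stand-in for a facial-recognition match against the
                    clinician's enrolled template
    check_device -- stand-in for detecting the clinician's registered
                    badge or phone nearby
    """
    return check_face() and check_device()


# Toy usage with hard-coded results in place of real recognizers.
granted = touchless_mfa(check_face=lambda: True,
                        check_device=lambda: True)
print("Workstation unlocked" if granted else "Access denied")
```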

Microsoft has a project to explore camera-based gesture recognition within surgical settings. This would allow a surgeon to view and manipulate a patient's x-rays or lab reports without having to "break scrub".


With gesture recognition, Microsoft was first through the door with its Kinect for Xbox 360, released in November 2010. It captured body and hand motions in real time, freeing gamers from keyboards and joysticks. Kinect also supported multiple players within a small room setting. Today, Kinect is part of Microsoft's cloud-based service, Azure.

Elliptic Labs uses ultrasound signals that are sent through the air from the speakers already integrated in a smartphone or tablet. The signals then bounce off a hand, an object, or a person's head, and are recorded by the microphones already in the device. This process is similar to how bats use echolocation to navigate.
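The underlying measurement is time of flight: the longer an echo takes to return, the farther away the hand is. The sketch below, which assumes the emitted chirp and the microphone recording are already available as NumPy arrays, shows only that basic calculation; production systems such as Elliptic Labs' are far more sophisticated.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C


def echo_distance(emitted: np.ndarray, recorded: np.ndarray, sample_rate: int) -> float:
    """Estimate the distance to a reflecting hand from a single echo.

    Cross-correlates the recording with the emitted chirp to find the delay
    (in samples) at which the echo arrives, then converts that round-trip
    delay into a one-way distance in metres.
    """
    correlation = np.correlate(recorded, emitted, mode="full")
    delay_samples = np.argmax(correlation) - (len(emitted) - 1)
    round_trip_time = delay_samples / sample_rate
    return SPEED_OF_SOUND * round_trip_time / 2.0


# Toy usage: a 40 kHz chirp whose echo returns after 2 ms (about 34 cm away).
fs = 192_000
t = np.arange(0, 0.001, 1 / fs)
chirp = np.sin(2 * np.pi * 40_000 * t)

recording = np.zeros(int(0.01 * fs))
start = int(0.002 * fs)                              # 2 ms round trip
recording[start:start + len(chirp)] += 0.3 * chirp   # attenuated echo

print(f"Estimated distance: {echo_distance(chirp, recording, fs):.2f} m")
```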

The Swedish company Tobii created the Tobii Rex, an infrared light-based eye-tracking device that makes it possible for disabled people to use their eyes to point and interact with a computer.

American firm Leap Motion makes a sensor that detects hand and finger motions as input. Besides controlling a PC, the sensor also allows for hand tracking in virtual reality.

Companies such as uSens and Gestigon provide software development kits (SDKs) for application developers to incorporate gesture recognition into their apps. Silicon Valley-based uSens builds hardware and software that allow a user to interact with a digital interface, such as a smart TV, by sensing finger movements and hand gestures.
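From a developer's point of view, such SDKs typically boil down to subscribing to named gesture events and mapping them to application actions. The sketch below illustrates that general pattern with a hypothetical dispatcher; the GestureEvent type and its fields are invented for illustration and are not the actual uSens or Gestigon APIs.

```python
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class GestureEvent:
    name: str          # e.g. "swipe_left", "pinch", "two_finger_point"
    confidence: float  # recognizer's confidence in [0, 1]


class GestureDispatcher:
    """Routes recognized gestures to application callbacks."""

    def __init__(self, min_confidence: float = 0.8):
        self.min_confidence = min_confidence
        self.handlers: Dict[str, Callable[[], None]] = {}

    def register(self, gesture_name: str, action: Callable[[], None]) -> None:
        self.handlers[gesture_name] = action

    def on_gesture(self, event: GestureEvent) -> None:
        # Ignore low-confidence detections and unregistered gestures.
        if event.confidence >= self.min_confidence and event.name in self.handlers:
            self.handlers[event.name]()


# Toy usage for a smart-TV style app.
dispatcher = GestureDispatcher()
dispatcher.register("swipe_left", lambda: print("Previous channel"))
dispatcher.register("swipe_right", lambda: print("Next channel"))

dispatcher.on_gesture(GestureEvent("swipe_right", 0.93))  # -> "Next channel"
dispatcher.on_gesture(GestureEvent("swipe_left", 0.40))   # ignored: low confidence
```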

Italian start-up Limix uses gesture recognition to record the sign-language hand movements of deaf people. The movements are then translated into words, which are played by a voice synthesizer on a smartphone.

Gestigon has been using 3D depth data in augmented reality for companies in the auto sector, such as Audi, Renault, and Volkswagen, since 2016. The company is also working on AR/VR headsets as well as consumer electronics and smart home devices.

Since 2016, the BMW 7 Series of cars has had gesture recognition that allows drivers to turn up or turn down the volume, accept or reject a phone call, and change the angle of the multicamera view. There's even a customizable two-finger gesture that you can program to whatever you want – from "navigate home" to "let's get a pizza."
