Imagine a Zoom meeting where you never show your face, yet the entire team on the other end still knows how you're reacting. No webcam or even microphone is needed as you work remotely from home.
The system is called C-Face, short for Contour Face, and it monitors your facial contours before turning them into an emoji.
It's an "ear-mountable wearable sensing technology that uses two cameras to continuously reconstruct facial expressions by deep learning contours of the face," as the study puts it.
A simple, unobtrusive wearable device
"This device is simpler, less obtrusive and more capable than any existing ear-mounted wearable technologies for tracking facial expressions," explained Cheng Zhang, senior author of the study and director of Cornell's SciFi Lab.
"In previous wearable technology aiming to recognize facial expressions, most solutions needed to attach sensors on the face and even with so much instrumentation, they could only recognize a limited set of discrete facial expressions," he continued.
C-Face could be used to create avatars in virtual reality environments that express how the user is feeling without spoken or written words, something that could come in handy for teachers running remote classes, for instance.
The device uses two RGB cameras positioned below each ear, which record changes in cheek contour as the user's facial muscles move. These 2D contour images are reconstructed with computer vision and a deep learning model, then analyzed by a convolutional neural network to recover the facial expression.
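To make the pipeline concrete, here is a toy sketch of the idea: reduce a cheek-contour polyline to a small feature vector, then map it to one of eight expression labels. This is not the authors' model (C-Face uses a convolutional neural network); the feature function, the nearest-centroid stand-in classifier, and the label names are all illustrative assumptions.

```python
import numpy as np

# Hypothetical label set: the study maps expressions to eight emojis,
# but the exact emojis are an assumption here.
EMOJIS = ["neutral", "happy", "sad", "surprised",
          "angry", "sleepy", "wink", "open_mouth"]

def contour_features(contour):
    """Reduce a 2D cheek-contour polyline (N x 2 array of points) to a
    feature vector: each point's distance from the contour centroid,
    normalized by the maximum distance."""
    pts = np.asarray(contour, dtype=float)
    centroid = pts.mean(axis=0)
    dists = np.linalg.norm(pts - centroid, axis=1)
    return dists / (dists.max() + 1e-9)

class NearestCentroidExpressionClassifier:
    """Toy stand-in for C-Face's CNN: stores one mean feature vector
    per expression label and predicts the nearest one."""

    def fit(self, feature_vectors, labels):
        self.centroids = {}
        for label in set(labels):
            rows = [f for f, l in zip(feature_vectors, labels) if l == label]
            self.centroids[label] = np.mean(rows, axis=0)
        return self

    def predict(self, features):
        # Return the label whose centroid is closest in feature space.
        return min(self.centroids,
                   key=lambda l: np.linalg.norm(features - self.centroids[l]))
```

As a usage sketch, two synthetic contours (a circle for "neutral", a stretched ellipse for "happy") sampled at the same number of points can be featurized, fit, and classified; a real system would instead train the network on many labeled contour images per expression.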
So far, C-Face can translate facial expressions into eight different emojis. Only nine participants were able to test the system, as the pandemic slowed trials; even so, emoji recognition reached 88% accuracy and facial-cue detection 85%.
A next step for the team is improving the system's battery capacity, which proved to be a limiting factor.