Researchers Take Computers a Step Closer to Perceiving Human Emotions

With the new machine-learning model, computers could potentially interpret subtle variations in human emotions.

Humanity is in a constant race to make computers and other gadgets smarter, enabling them to carry out elaborate processes and activities that, for now, only we mere mortals can. One such power is perceiving human emotions.

To date, only humans have had the ability to detect and gauge the emotions of the people around us and then act accordingly in a given environment. Now, however, computers may be empowered to do the same, and quite efficiently! Skeptical? Let’s find out how.

Researchers at the MIT Media Lab have succeeded in creating machine-learning models that can “read” facial expressions to understand human emotions. Recent years have seen several developments in robotic technology, but detecting human emotions has always been the missing link in taking it to the next level.

To decode human emotions, the new machine-learning model analyzes and interprets facial expressions.

Human emotions are among the most complicated and inconsistent parts of humanity, on par with global health, war, and environmental destruction. Because we largely take understanding emotions for granted, it can be a challenge even for one person to ascertain the feelings of another, let alone for a computer to do it for us!

However, with advances in robotic technology and machine learning, it seems the answer is no longer far off.

The researchers at the MIT Media Lab have now designed a methodology by which computers can see, gauge, and comprehend basic facial expressions and then translate them into human emotions. According to the team behind the model, with just a little additional training data the system could adapt its methodology to new contexts, such as different people in an environment, without losing its potency.

This new model is quite different from earlier systems devised to reach this objective. Instead of simply mapping a fixed set of facial expressions fed into the system, it works on a combination of individual neural networks.

This technique, known as a “mixture of experts,” is more advanced and flexible: each individual neural network, or “expert,” is trained to specialize in part of the data rather than reproduce a single mapping it has been fed. The model also relies on a “gating network,” which calculates the probability that each expert is the best match for a given input.
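For intuition, here is a minimal PyTorch sketch of a mixture of experts with a gating network. This is not the MIT team’s actual architecture; the feature size, expert count, and the (valence, arousal) output are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MixtureOfExperts(nn.Module):
    """Toy mixture of experts: each expert maps facial features to a
    (valence, arousal) pair, and a gating network weights the experts."""

    def __init__(self, in_dim: int = 128, n_experts: int = 4):
        super().__init__()
        # One small regressor per "expert" (e.g., one per training subject).
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, 2))
             for _ in range(n_experts)]
        )
        # Gating network: probability that each expert fits a given input.
        self.gate = nn.Sequential(nn.Linear(in_dim, n_experts), nn.Softmax(dim=-1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = self.gate(x)                                    # (batch, n_experts)
        preds = torch.stack([e(x) for e in self.experts], dim=1)  # (batch, n_experts, 2)
        # Weighted combination of the experts' predictions.
        return (weights.unsqueeze(-1) * preds).sum(dim=1)         # (batch, 2)

# Usage: score a batch of 32 frames' worth of placeholder features.
model = MixtureOfExperts()
frame_features = torch.randn(32, 128)
valence_arousal = model(frame_features)   # shape (32, 2)
```

The design choice is the key point: because each expert specializes, the gating network can route a new face to whichever expert resembles it most, rather than forcing one monolithic network to cover everyone.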

By combining the mixture-of-experts technique with model-personalization techniques, the researchers were able to develop a model that can mine fine-grained facial-expression data from individuals.

To train the model, the researchers used video recordings of nine subjects conversing on a video-chat platform developed for affective-computing applications. They then evaluated the model on the remaining nine subjects, breaking the videos down into individual frames.

The model scored each frame on valence (how pleasant or unpleasant) and arousal (how excited or calm) to decode the subjects’ emotional states. For further personalization, the researchers fed the model previously unseen video frames of the same subjects.
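To make the frame-by-frame idea concrete, here is a small, hypothetical sketch of how per-frame valence and arousal scores might be summarized over a clip; the scores below are random placeholders, not the study’s output.

```python
import torch

# Placeholder per-frame (valence, arousal) scores for a 90-frame clip,
# each squashed into [-1, 1]; a real system would get these from a model.
frame_scores = torch.tanh(torch.randn(90, 2))

# One simple summary: average the frame scores over the clip.
valence, arousal = frame_scores.mean(dim=0).tolist()
print(f"clip valence={valence:+.2f}, arousal={arousal:+.2f}")
# Roughly: positive valence + high arousal reads as excitement or joy;
# negative valence + low arousal reads as sadness.
```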

With just 5 to 10 percent of data from the new population, the model was able to outperform traditional models at interpreting emotions, demonstrating its ability to adapt to a changing population with very little data.
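As a rough illustration of that adaptation step, and not the paper’s actual procedure, one could fine-tune a pretrained model on a small labeled sample from the new subjects; everything named below is a stand-in.

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained emotion model (e.g., the MoE sketch above).
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))

# Hypothetical small adaptation set: 5-10% of frames from new subjects.
new_frames = torch.randn(100, 128)    # placeholder facial features
new_labels = torch.randn(100, 2)      # placeholder (valence, arousal) targets

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# A few passes over the small sample personalize the model
# without retraining it from scratch.
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(new_frames), new_labels)
    loss.backward()
    optimizer.step()
```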

“Imagine a model set to analyze facial expressions in one culture that needs to be adapted for a different culture. Without accounting for this data shift, those models will underperform. But if you just sample a bit from a new culture to adapt our model, these models can do much better, especially on the individual level. This is where the importance of the model personalization can best be seen,” said Oggi Rudovic, a co-author of the paper.

The researchers’ next goal is to train the model on a larger dataset spanning more diverse cultures.

"This is an unobtrusive way to monitor our moods," Rudovic said. "If you want robots with social intelligence, you have to make them intelligently and naturally respond to our moods and emotions, more like humans."

It is clear that this intuitive method is a handy tool for reading emotional signs in humans, one that could soon turn computers from passive observers into active participants in emotion recognition.

Via: MIT News