Google’s New AI Experiment Compares Your Pose to 80,000 Photos and Finds a Match
Google has unveiled some impressive artificial intelligence (AI) innovations lately, such as its hyper-realistic Google Assistant and its blood-scanning technology. So far, however, all of these applications have served a practical purpose.
In a rather unusual turn of events, the technology firm has launched a new AI experiment with no practical purpose at all. Google's Move Mirror is an online feature, available to anyone, that uses your device's camera to identify your pose and match it against more than 80,000 images of other people in similar stances.
Google makes a playful AI
The application also allows you to make GIFs and share them with loved ones. According to Google Creative Lab creative technologist Irene Alvarado, the purpose of this exercise is to show that AI can be used in "playful" ways.
"It feels like a magical mirror that reflects your moves with images of all kinds of human activity—from sports and dance to martial arts, acting and beyond."
"That’s why we made Move Mirror—an AI Experiment that lets you explore pictures in a fun new way, just by moving around," said Alvardo in a blog. "It feels like a magical mirror that reflects your moves with images of all kinds of human activity—from sports and dance to martial arts, acting and beyond."
Visiting the Move Mirror website leads you to a page asking permission for the system to access your webcam. Once you approve this request, you are instructed to stand far enough away from your device that all of your limbs can be captured inside the camera's frame.
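For context, a browser experiment like this one gets its camera frames through the standard getUserMedia API. The sketch below is not Google's actual code, just a minimal illustration of how a web page typically requests webcam permission and streams the result into a video element.

```typescript
// Minimal sketch (not Move Mirror's code): requesting webcam access in the
// browser and streaming it into a <video> element.
async function startCamera(video: HTMLVideoElement): Promise<void> {
  // Triggers the browser's camera-permission prompt; throws if the user declines.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  video.srcObject = stream;
  await video.play();
}
```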
Dancing on your own
The system can only analyze one person at a time, so this is a solo mission. Once you are properly positioned inside the rectangular frame, the AI scans your body as you move and draws lines to indicate where your limbs are.
It then matches your poses against a catalog of 80,000 pictures of people in similar positions and displays them to the right of your image. Pressing a single green button lets you make a GIF of the process.
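The article does not spell out how the matching itself works, but one common way to compare two poses, used here purely as an illustrative assumption rather than a description of Move Mirror's exact algorithm, is to flatten each set of detected keypoints into a normalized vector and rank catalog images by cosine similarity.

```typescript
// Hypothetical sketch of pose matching; not Move Mirror's actual algorithm.
// Each pose is reduced to a flat vector of keypoint coordinates, and two
// poses are compared with cosine similarity (1 = most similar).
type Keypoint = { x: number; y: number };

function toVector(keypoints: Keypoint[]): number[] {
  const flat = keypoints.flatMap(k => [k.x, k.y]);
  const norm = Math.sqrt(flat.reduce((sum, v) => sum + v * v, 0)) || 1;
  return flat.map(v => v / norm); // L2-normalize so overall scale matters less
}

function cosineSimilarity(a: number[], b: number[]): number {
  return a.reduce((sum, v, i) => sum + v * b[i], 0); // vectors are already unit length
}

// Finding the best match means scanning the catalog for the highest similarity.
function bestMatch(query: Keypoint[], catalog: Keypoint[][]): number {
  const q = toVector(query);
  let best = -Infinity;
  let bestIndex = -1;
  catalog.forEach((pose, i) => {
    const s = cosineSimilarity(q, toVector(pose));
    if (s > best) {
      best = s;
      bestIndex = i;
    }
  });
  return bestIndex; // index of the most similar catalog pose
}
```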
Google says the application is meant to show how computer vision techniques can be made available to anyone with a webcam and to encourage coders to experiment with pose estimation. The system uses a real-time human pose estimation model from Google called PoseNet.
PoseNet can detect human figures in both images and videos by recognizing their key body joints. The application does not process any personally identifiable information about the figures it scans.
Instead, it simply estimates the location of key body joints and from there extrapolates the movements of limbs. PoseNet is powered by a machine learning library that runs models directly in your browser without ever storing your images.
That library is called TensorFlow.js, and Google hopes it will attract many coders and machine learning enthusiasts.
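As a rough illustration rather than the Move Mirror team's own code, browser-side pose estimation with PoseNet on TensorFlow.js looks roughly like the sketch below. The package and method names follow the published @tensorflow-models/posenet library, though exact options and signatures vary between versions.

```typescript
import '@tensorflow/tfjs'; // peer dependency providing the in-browser backend
import * as posenet from '@tensorflow-models/posenet';

async function estimatePose(video: HTMLVideoElement) {
  // Downloads the model weights and runs entirely in the browser;
  // no image data is sent to a server.
  const net = await posenet.load();

  // Returns an overall score plus 17 keypoints (nose, eyes, shoulders, elbows,
  // wrists, hips, knees, ankles), each with an (x, y) position and a confidence.
  const pose = await net.estimateSinglePose(video, { flipHorizontal: true });

  for (const kp of pose.keypoints) {
    console.log(
      `${kp.part}: (${kp.position.x.toFixed(0)}, ${kp.position.y.toFixed(0)}) ` +
      `score=${kp.score.toFixed(2)}`
    );
  }
  return pose;
}
```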
"We’d love to see what you make — and don’t forget to share your awesome projects using #tensorflowjs and #posenet!" said Alvarez in a Medium post last May.
Move Mirror can be accessed at g.co/movemirror. Have fun!
Via: Google