This New AI App Can Tell You the Ingredients in Your Food Just by Looking at Its Picture
MIT has created something like a Shazam for food. The new app uses a deep learning algorithm that can generate a list of ingredients just by looking at photos of food. The system can even suggest recipes and dietary information. While the AI still needs refinement, the team at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has made great progress in the initial training of the system.
[Image Source: Pixabay]
The program draws on a massive database of food images and recipe information to predict ingredients and recipes. In tests, the new system retrieved the correct recipe for a photo of a prepared meal 65 percent of the time.
The database, dubbed Recipe1M, was compiled from common recipe websites like AllRecipes and Food. Each recipe was then annotated with additional information about its ingredients. A neural network was set to work on these recipes, looking for patterns and connections between completed dishes and raw ingredients. When presented with a photo of a muffin, for example, the system could quickly identify key ingredients like butter and flour. The system, called Pic2Recipe, then suggests a popular recipe from its database.
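The matching step behind this kind of system can be pictured as retrieval in a shared embedding space: one network maps photos to vectors, another maps recipes to vectors, and a dish photo is paired with the recipe whose vector lies closest to it. The sketch below illustrates that retrieval step only; the recipe names, vector dimension, and stand-in embeddings are invented for illustration and are not Pic2Recipe's actual model.

```python
import math
import random

random.seed(0)

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Stand-ins for learned recipe embeddings (normally produced by a trained network).
recipe_names = ["muffin", "sushi", "lasagne", "salad", "omelette"]
recipe_embeddings = [
    [random.gauss(0, 1) for _ in range(128)] for _ in recipe_names
]

# Pretend the image encoder placed the query photo near the "muffin" recipe.
query = [x + random.gauss(0, 0.05) for x in recipe_embeddings[0]]

# Retrieval: return the recipe whose embedding is most similar to the photo's.
scores = [cosine(query, r) for r in recipe_embeddings]
best = recipe_names[scores.index(max(scores))]
print(best)
```

In a real system the interesting work is in training the two encoders so that matching image–recipe pairs land close together; the retrieval itself is just this nearest-neighbour lookup.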
CSAIL graduate student Nicholas Hynes, who led the research, explained to Gizmodo that the system is about more than just recognizing food. “That the program recognizes food is really just a side effect of how we used the available data to learn deep representations of recipes and images,” he said. “What we’re really exploring are the latent concepts captured by the model. For instance, has the model discovered the meaning of ‘fried’ and how it relates to ‘steamed?’ We believe that it has and we’re now trying to extract the knowledge from the model to enable downstream applications that include improving people’s health.”
[Image Source: CSAIL]
Hynes is also quick to point out the difference between this system and a reverse image search. “What differentiates this from reverse image search is that we go directly from image to recipe instead of simply returning the recipe associated with the most similar image; in other words, we return the recipe that, according to the model, was most likely to have produced the query image.”
The system handles simple foods well as it stands, but more complex dishes like sushi are more challenging and will require the team to come up with new ways to teach the system. Foods with almost endless variations, such as lasagne, also proved difficult for the system to pin down.
Ultimately, the researchers would like to train the system to understand different cooking methods, such as boiling or frying, and to distinguish more accurately between food types. Another goal is a ‘dinner aid’ system: users would give the system a list of available ingredients, and the AI would offer meal options. Hynes explains that this could also help people figure out what’s in their food even when they don’t have exact nutritional information.
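The proposed dinner aid boils down to a matching problem: suggest the recipes whose required ingredients are all covered by what the user has on hand. A minimal sketch of that lookup, with an invented toy recipe table (the real system would draw on the Recipe1M database):

```python
# Toy recipe data, invented for illustration only.
recipes = {
    "pancakes": {"flour", "egg", "milk", "butter"},
    "omelette": {"egg", "butter", "cheese"},
    "tomato salad": {"tomato", "olive oil", "salt"},
}

def suggest_meals(pantry, recipes):
    """Return the recipes whose every ingredient appears in the pantry."""
    pantry = set(pantry)
    # needed <= pantry is Python's subset test on sets.
    return [name for name, needed in recipes.items() if needed <= pantry]

options = suggest_meals(["egg", "butter", "cheese", "milk", "flour"], recipes)
print(options)  # → ['pancakes', 'omelette']
```

A production version would need fuzzier matching (ingredient synonyms, optional items, quantities), but the subset test captures the core idea of turning a pantry list into meal options.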
The system won’t be hitting the Apple App Store anytime soon, but an app to assist budding chefs and nutritionists could one day be on the cards.