Researchers Fool Image Recognition Software Into Labeling This Turtle as a Rifle

Researchers from MIT and Kyushu University are shedding light on a small but growing concern: the limitations of the algorithms and neural networks behind image recognition software.

Image recognition has entered our world at a very fast rate in the past decade, and it is being developed for a remarkable variety of purposes, from monitoring terrorism suspects to bathroom mirrors that use the software to help plan your day. Perhaps because of its ubiquitous presence in our lives, some critics feel that the technology, which relies largely on algorithms, can produce unpredictable or inconsistent results.

In a pair of studies, researchers from Kyushu University and MIT put this theory to the test, and the results were surprising: by modifying only a few details of an image or object, they were able to fool image recognition software into misclassifying it.

The first team, from Kyushu University, took the approach of altering a single pixel in an image. A carefully chosen one-pixel change was enough to throw off the neural network, so that images of dogs were wrongly labeled as horses, cats, or even cars. It is worth noting that the test images contained only about a thousand pixels each, meaning the same result may not be achievable on larger images in which pixel counts run into the millions.
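For a sense of how such an attack can be set up, here is a minimal sketch. It assumes a hypothetical, untrained stand-in classifier, and it uses plain random search rather than the differential-evolution search the Kyushu team actually uses to pick the pixel.

```python
# Minimal one-pixel attack sketch. The Kyushu paper finds the pixel with
# differential evolution; this toy version tries random single-pixel changes,
# and the classifier is an untrained placeholder rather than a trained network.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in classifier

def one_pixel_attack(image: torch.Tensor, true_label: int, tries: int = 500):
    """Randomly recolor single pixels until the predicted class flips."""
    _, _, h, w = image.shape
    for _ in range(tries):
        candidate = image.clone()
        y, x = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
        candidate[0, :, y, x] = torch.rand(3)             # new RGB value for one pixel
        if model(candidate).argmax().item() != true_label:
            return candidate, (y, x)                      # misclassification achieved
    return None, None                                     # no flip within the budget

img = torch.rand(1, 3, 32, 32)   # 32x32 image, roughly a thousand pixels as in the study
adv, pixel = one_pixel_attack(img, true_label=3)
print("changed pixel:", pixel)
```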

The MIT group employed 3D printing to achieve its results, and in this case the effects were more dramatic: a printed turtle sculpture tricked an algorithm into labeling it as a rifle, while a baseball was perceived as an espresso. While the Kyushu team's mislabeled animals are relatively harmless, the MIT results showed that this kind of blunder could translate into larger problems once the software is applied in other areas of daily life.
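The MIT attack, described in the underlying paper as Expectation Over Transformation, optimizes a perturbation against the loss averaged over a whole distribution of transformations, so the misclassification survives changes in viewpoint. The sketch below illustrates only that averaging idea, using simple brightness and noise jitter and an untrained placeholder classifier rather than the 3D pose and rendering transformations applied to the printed turtle.

```python
# Toy sketch of the Expectation Over Transformation idea: optimize a small
# perturbation against the loss averaged over random transformations, so the
# targeted misclassification persists across them. The brightness/noise jitter
# and the untrained linear "classifier" are placeholders, not the paper's
# 3D rendering pipeline.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

def random_transform(image: torch.Tensor) -> torch.Tensor:
    """Sample a simple, differentiable transformation (brightness jitter + noise)."""
    brightness = 0.8 + 0.4 * torch.rand(1)
    return (image * brightness + 0.02 * torch.randn_like(image)).clamp(0.0, 1.0)

def eot_attack(image, target_label, steps=200, samples=8, lr=0.01):
    """Push the image toward `target_label` on the loss averaged over transforms."""
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = sum(loss_fn(model(random_transform(image + delta)), target_label)
                   for _ in range(samples)) / samples
        loss.backward()
        optimizer.step()
        delta.data.clamp_(-0.1, 0.1)       # keep the perturbation visually small
    return (image + delta).clamp(0.0, 1.0).detach()

x = torch.rand(1, 3, 32, 32)               # dummy "turtle" image
target = torch.tensor([7])                 # dummy "rifle" class index
x_adv = eot_attack(x, target)
```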

The results pointed both teams toward a broader area of concern, particularly adversarial examples, which are “...inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake.” Neural networks, the foundation of these algorithms, would be the target of such attacks.
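As a generic illustration of an adversarial example (not the specific method used by either team), the classic fast gradient sign method nudges every pixel a tiny amount in the direction that most increases the classifier's loss. The model below is again an untrained placeholder.

```python
# Minimal fast gradient sign method (FGSM) sketch: a generic adversarial
# example, not the attack from either study. The classifier is an untrained
# placeholder.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

def fgsm(image: torch.Tensor, label: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """Shift each pixel by +/- epsilon along the sign of the loss gradient."""
    image = image.clone().requires_grad_(True)
    loss = loss_fn(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

x = torch.rand(1, 3, 32, 32)               # dummy image in [0, 1]
y = torch.tensor([3])                      # dummy "true" label
x_adv = fgsm(x, y)
print("max pixel change:", (x_adv - x).abs().max().item())
```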

Anish Athalye, from the MIT team, elaborates: "More and more real-world systems are starting to incorporate neural networks, and it's a big concern that these systems may be possible to subvert or attack using adversarial examples," adding that there is still confusion about the source: "The machine learning community doesn't fully understand what's going on with adversarial examples or why they exist."

Still, he says, though there is no cause for great concern, major web companies such as Amazon, Google, and Facebook are already working to pinpoint the problem. This means, as Athalye indicates, that we are not facing isolated incidents. "It's not some weird 'corner case' either... We've shown in our work that you can have a single object that consistently fools a network over viewpoints, even in the physical world.”

Though the results of these studies undoubtedly reveal some design and developmental flaws in image recognition software, an important question remains: are we accelerating the development, and our expectations, of image recognition beyond its current capacity, or do the results point to very real limitations that will remain regardless of any future R&D efforts? Only time will tell.

Via: arXiv, OpenAI, BBC
