Researchers collect baffling images to expose weak spots in AI

Adversarial Images

In recent years, computer image recognition has improved dramatically, but it is still flawed enough to make serious errors; so much so that there is a dedicated field of research studying images that are routinely misidentified by artificial intelligence, known as Adversarial Images.

“While you see a cat up a tree, AI sees a squirrel”

A group of researchers from UC Berkeley, the University of Washington, and the University of Chicago created a dataset of about 7,500 “natural adversarial examples.” The researchers tested a number of machine vision systems on this dataset and found that their accuracy dropped by about 90 percent, with the software able to correctly identify only two or three percent of the images in some cases.
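
To make the kind of evaluation described above concrete, here is a minimal sketch in Python, assuming PyTorch and torchvision; the directory name "imagenet_a/" and the choice of ResNet-50 are illustrative assumptions, not details from the study.

```python
# Minimal sketch: measuring top-1 accuracy of a pretrained ImageNet
# classifier on a folder of natural adversarial examples.
# Assumes PyTorch/torchvision and an ImageFolder-style layout (one
# subdirectory per class); "imagenet_a/" is a placeholder path.
import torch
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Note: ImageFolder assigns class indices alphabetically, so a real
# evaluation would also need a mapping from these folder labels to
# the model's 1,000 ImageNet class indices; omitted here for brevity.
dataset = datasets.ImageFolder("imagenet_a/", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.eval()

correct, total = 0, 0
with torch.no_grad():
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()

print(f"Top-1 accuracy: {correct / total:.1%}")
```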

This kind of research is much needed as we put machine vision at the heart of new technologies like AI security cameras and autonomous vehicles, which means we are trusting computers to see the world the same way we do. Adversarial Images are proving that trust misplaced.

Below are examples of Adversarial Images that were misidentified by AI.

[Image: examples of Adversarial Images]

In a paper accompanying the research, the researchers write that the dataset will hopefully help train more robust vision systems, and explain that the images exploit “deep flaws” that arise from the software’s “over-reliance on color, texture, and background cues” to identify what it sees.

The failures revealed by this dataset support the suggestion that, rather than looking at images holistically and considering their overall shape and content, algorithms fixate on specific textures and details. For example, pictures that show clear shadows on a brightly lit surface are misidentified as sundials. AI is, crucially, missing the wood for the trees.
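
As a rough illustration of the “over-reliance on color” the paper describes, the sketch below compares a model’s prediction on an image and on a grayscale copy of it; a prediction that flips suggests the model was leaning on color rather than shape. The filename "cat.jpg" and the choice of ResNet-50 are placeholders, not details from the research.

```python
# Minimal sketch: probing a classifier's reliance on color cues by
# comparing its prediction on an image and a grayscale copy.
# Assumes PyTorch/torchvision; "cat.jpg" is a placeholder filename.
import torch
from PIL import Image
from torchvision import models, transforms

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("cat.jpg").convert("RGB")
# Remove color information while keeping shape and texture intact.
gray = transforms.functional.to_grayscale(img, num_output_channels=3)

with torch.no_grad():
    for name, im in [("original", img), ("grayscale", gray)]:
        pred = model(preprocess(im).unsqueeze(0)).argmax(dim=1).item()
        print(name, "-> predicted class index", pred)
```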
