Researchers from Adobe Research and the University of California, Berkeley have created a new Artificial Intelligence tool that can detect manipulated images roughly twice as accurately as humans.
Research
The new tool uses Machine Learning to identify whether images have been altered. The researchers trained the deep-learning tool on thousands of images scraped from the internet, and it was able to correctly identify altered images 99% of the time, compared with a 53% success rate for humans.
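To give a rough sense of how a detector of this kind can be framed, the sketch below shows a tiny binary image classifier in PyTorch that is trained to output "manipulated" or "original" for a face crop. This is only an illustrative assumption of the general approach; the architecture, input size, and training setup are invented for the example and are not Adobe's actual model.

```python
# Minimal sketch of a binary "edited vs. original" image classifier.
# NOT Adobe's model: architecture and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class ManipulationDetector(nn.Module):
    """Tiny CNN that predicts whether a face crop has been warped/edited."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64 -> 32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # global average pooling
        )
        self.classifier = nn.Linear(64, 1)        # single logit: edited or not

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)                 # raw logit; apply sigmoid at inference

# Toy training step on random tensors standing in for real image batches.
model = ManipulationDetector()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

images = torch.randn(8, 3, 128, 128)              # batch of 8 RGB 128x128 crops
labels = torch.randint(0, 2, (8, 1)).float()      # 1 = manipulated, 0 = original

logits = model(images)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
print(f"toy loss: {loss.item():.4f}")
```

In practice, a real detector would be trained on pairs of original and edited images (for example, faces before and after automated warping) rather than random tensors, which is broadly how the researchers generated their training data.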
The Context
Fake images and deepfake videos are spreading around the world at a rapid rate; however, Machine Learning and Artificial Intelligence are expected to play a massive role in detecting such hoaxes, as well as in halting their creation.
Red Flag
Adobe is keen to be seen acting on this issue, as it has long been criticized because its own tools are widely used to alter images, which often end up fueling fake news.
The major drawback of the new tool is that it can detect only images that have been manipulated using Adobe Photoshop’s Face Aware Liquify feature.
Conclusion
It is still just a prototype, but the company says it plans to take this research further and provide tools to identify and discourage the misuse of its products across the board.