Spotting Image Tampering with AI

July 23, 2018

Tags: AI & Machine Learning, Computer Vision, Imaging & Video, Content Intelligence, Data Intelligence

Faked images can be powerful tools of deception, and as digital editing tools improve, so does the opportunity to use them to manipulate images without the consent of their authors or subjects. Researchers are seeking new ways to more clearly identify images that have been tampered with.

New work by an Adobe Research scientist and collaborators shows a promising approach to detect tampered regions of images. Vlad Morariu, who recently joined Adobe Research, partnered with scholars and former colleagues from the University of Maryland on the experimental research, which was published at the CVPR 2018 conference this June.

In “Learning Rich Features for Image Manipulation Detection,” the authors employ a two-stream neural network to detect tampered regions of a manipulated image. One stream focuses on RGB colors, extracting features such as strong color contrast and unnatural boundaries. The second stream seeks out differences in noise distribution across the image.
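To make that division of labor concrete, here is a minimal sketch of a two-stream setup, written in PyTorch for illustration. The class name and the small placeholder convolution stacks are ours; the paper builds each stream on a full object-detection backbone, which this toy version only stands in for:

```python
import torch
import torch.nn as nn

class TwoStreamFeatures(nn.Module):
    """Toy two-stream extractor: one branch sees the RGB pixels, the
    other sees a noise-residual map. The tiny conv stacks here are
    placeholders for the detection backbones used in the paper."""

    def __init__(self):
        super().__init__()
        # RGB stream: learns visual cues such as strong color contrast
        # and unnatural boundaries around a pasted region.
        self.rgb_stream = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Noise stream: same architecture, but its input is a noise
        # residual rather than raw pixels (see the SRM sketch below).
        self.noise_stream = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )

    def forward(self, rgb, noise):
        return self.rgb_stream(rgb), self.noise_stream(noise)
```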

The use of this noise data is a novel approach, Morariu explains, and it makes the system more effective at identifying faked images. In digital imaging, “noise” refers to random variation in the brightness or color information of a photo. That noise amounts to a signature captured by the camera at a specific moment, so a region pasted in from another photo carries a different signature than its surroundings.

To examine the photos’ noise, the research team used a filter layer borrowed from a different kind of study: steganalysis, the investigation of data or messages hidden within images or files. That specialized layer allowed the network to pin down noise inconsistency between real and tampered regions of an image.
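As a concrete illustration, the sketch below applies one well-known 5×5 SRM kernel from the steganalysis literature as a fixed, non-learned convolution (the paper uses three such kernels; the function name and the PyTorch framing are ours):

```python
import torch
import torch.nn.functional as F

# One well-known 5x5 SRM kernel from steganalysis (the paper uses three).
# It approximates a local second-order derivative, suppressing image
# content and leaving a noise-residual map behind.
SRM_KERNEL = torch.tensor([
    [-1.,  2.,  -2.,  2., -1.],
    [ 2., -6.,   8., -6.,  2.],
    [-2.,  8., -12.,  8., -2.],
    [ 2., -6.,   8., -6.,  2.],
    [-1.,  2.,  -2.,  2., -1.],
]) / 12.0

def noise_residual(image: torch.Tensor) -> torch.Tensor:
    """Apply the fixed SRM kernel to each channel of a (B, 3, H, W) image.

    The weights are never trained; the layer simply exposes the noise
    pattern so downstream layers can spot inconsistencies between
    authentic and tampered regions."""
    weight = SRM_KERNEL.repeat(3, 1, 1, 1)  # one copy per input channel
    return F.conv2d(image, weight, padding=2, groups=3)

# Example: residual = noise_residual(torch.rand(1, 3, 224, 224))
```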

The features from the RGB and noise streams were then “fused” in another neural network layer. As its output, the network drew a colored box around the manipulated area of a photo.
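One standard way to fuse two feature streams, and the one the paper reports for this step, is bilinear pooling: the two feature maps are combined through an outer product, so co-occurring RGB and noise cues reinforce each other. Below is a minimal sketch with illustrative names and shapes, not the authors’ exact layer:

```python
import torch
import torch.nn.functional as F

def bilinear_fuse(rgb_feat: torch.Tensor, noise_feat: torch.Tensor) -> torch.Tensor:
    """Fuse per-region (B, C, H, W) features from the two streams.

    Returns a (B, C*C) vector capturing co-occurrences between RGB and
    noise features for the region."""
    b, c, h, w = rgb_feat.shape
    x = rgb_feat.reshape(b, c, h * w)         # (B, C, HW)
    y = noise_feat.reshape(b, c, h * w)       # (B, C, HW)
    fused = torch.bmm(x, y.transpose(1, 2))   # sum of outer products -> (B, C, C)
    fused = fused.reshape(b, c * c)
    # Signed square root and L2 normalization, standard after bilinear
    # pooling, keep the fused feature magnitudes stable.
    fused = torch.sign(fused) * torch.sqrt(fused.abs() + 1e-8)
    return F.normalize(fused, dim=1)
```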

The team’s results outperformed other methods in showing which images had been faked. Remarkably, the network also learned to identify what kind of manipulation had occurred in each photo: it labels each detection box as object removal, copy-move (copying an object and pasting it elsewhere in the same image), or splicing (pasting in content from another image).
