Perceptual hashing

Perceptual hashing is the use of a fingerprinting algorithm that produces a snippet, hash, or fingerprint of various forms of multimedia. A perceptual hash is a type of locality-sensitive hash: hashes are analogous if the features of the multimedia are similar. This is in contrast to cryptographic hashing, which relies on the avalanche effect, whereby a small change in the input value creates a drastic change in the output value. Perceptual hash functions are widely used in finding cases of online copyright infringement as well as in digital forensics, because hashes of similar data are themselves similar, so near-duplicate content can be found (for instance, copies that differ only in a watermark).
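The idea can be sketched with the simple "average hash" variant, one common perceptual hash. This is an illustrative toy, not any particular production algorithm: real implementations first resize the image to a small grayscale grid (e.g. 8×8); the tiny hand-made grids below stand in for that step so the example needs no image libraries.

```python
# Minimal sketch of an "average hash" (aHash), one simple perceptual hash.
# Unlike a cryptographic hash, small input changes leave the output unchanged.

def average_hash(pixels):
    """Return a bit string: 1 where a pixel is above the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance means perceptually similar."""
    return sum(a != b for a, b in zip(h1, h2))

# A tiny grayscale "image" (stand-in for a downscaled photograph).
image = [[200, 200, 50, 50],
         [200, 200, 50, 50],
         [50, 50, 200, 200],
         [50, 50, 200, 200]]

# A slightly brightened copy (as after recompression or a faint watermark).
similar = [[210, 205, 60, 55],
           [205, 210, 55, 60],
           [60, 55, 210, 205],
           [55, 60, 205, 210]]

h1, h2 = average_hash(image), average_hash(similar)
print(hamming_distance(h1, h2))  # 0: identical hashes despite changed pixels
```

A cryptographic hash of the same two inputs would differ in roughly half its bits; here the pixel-level changes never cross the brightness threshold, so every hash bit is preserved.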

Development
The 1980 work of Marr and Hildreth is a seminal paper in this field.

The July 2010 thesis of Christoph Zauner is a well-written introduction to the topic.

In June 2016, Azadeh Amir Asgari published work on robust image hash spoofing. Asgari notes that perceptual hash functions, like any other algorithm, are prone to errors.

Researchers remarked in December 2017 that Google image search is based on a perceptual hash.

In research published in November 2021, investigators focused on a manipulated image of Stacey Abrams that was published to the internet prior to her loss in the 2018 Georgia gubernatorial election. They found that the pHash algorithm was vulnerable to nefarious actors.

Characteristics
Research reported in January 2019 at Northumbria University has shown that, for video, perceptual hashing can be used to simultaneously identify similar content for video copy detection and detect malicious manipulations for video authentication. The proposed system performs better than current video hashing techniques in terms of both identification and authentication.

Research reported in May 2020 by the University of Houston on deep-learning-based perceptual hashing for audio has shown better performance than traditional audio fingerprinting methods in detecting similar or copied audio subjected to transformations.

In addition to its uses in digital forensics, research by a Russian group reported in 2019 has shown that perceptual hashing can be applied to a wide variety of situations. Similar to comparing images for copyright infringement, the group found that it could be used to compare and match images in a database. Their proposed algorithm proved to be not only effective but also more efficient than standard means of database image searching.

A Chinese team reported in July 2019 that they had developed a perceptual hash for speech encryption that proved effective. They were able to create a system in which the encryption was not only more accurate but also more compact.

As early as August 2021, Apple Inc. reported a system for detecting Child Sexual Abuse Material (CSAM), which it calls NeuralHash. A technical summary document, which explains the system with copious diagrams and example photographs, states that "Instead of scanning images [on corporate] iCloud [servers], the system performs on-device matching using a database of known CSAM image hashes provided by [the National Center for Missing and Exploited Children] (NCMEC) and other child-safety organizations. Apple further transforms this database into an unreadable set of hashes, which is securely stored on users’ devices."

In an essay entitled "The Problem With Perceptual Hashes", Oliver Kuederle presents a startling collision generated by a piece of commercial neural-net software of the NeuralHash type. A photographic portrait of a real woman (Adobe Stock #221271979) reduces, through the test algorithm, to the same hash as a photograph of a piece of abstract art (from the "deposit photos" database). Both sample images are in commercial databases. Kuederle is concerned by collisions like this: "These cases will be manually reviewed. That is, according to Apple, an Apple employee will then look at your (flagged) pictures... Perceptual hashes are messy. When such algorithms are used to detect criminal activities, especially at Apple scale, many innocent people can potentially face serious problems... Needless to say, I’m quite worried about this."

Researchers subsequently published a comprehensive analysis entitled "Learning to Break Deep Perceptual Hashing: The Use Case NeuralHash", in which they investigate the vulnerability of NeuralHash, as a representative of deep perceptual hashing algorithms, to various attacks. Their results show that hash collisions between different images can be achieved with minor changes applied to the images. According to the authors, these results demonstrate that such attacks are practical and could enable the flagging and possible prosecution of innocent users. They also state that the detection of illegal material can easily be avoided, and the system outsmarted, by simple image transformations such as those provided by free-to-use image editors. The authors expect their results to apply to other deep perceptual hashing algorithms as well, questioning their overall effectiveness and functionality in applications such as client-side scanning and chat controls.
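The brittleness these attacks exploit can be illustrated with a toy analogue. The sketch below uses a simple average hash, not NeuralHash or the paper's actual method: shifting a single pixel across the image's mean brightness flips a hash bit, so a visually minor edit already changes the hash and defeats exact matching.

```python
# Toy analogue of hash evasion (not the actual NeuralHash attack):
# one pixel pushed below the mean brightness flips one hash bit.

def average_hash(pixels):
    """1 where a pixel is brighter than the image mean, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

original = [[130, 120, 60, 50],
            [120, 130, 50, 60],
            [60, 50, 130, 120],
            [50, 60, 120, 130]]

# Copy the image and darken one pixel past the mean brightness (90).
evasive = [row[:] for row in original]
evasive[0][1] = 80

h1, h2 = average_hash(original), average_hash(evasive)
diff = sum(a != b for a, b in zip(h1, h2))
print(diff)  # 1: a small local edit already changes the hash
```

Deep perceptual hashes are more robust than this toy to individual pixel edits, but the cited research shows that optimization-based perturbations achieve the same effect against them.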