Physical unclonable function

A physical unclonable function (sometimes also called a physically unclonable function, which refers to a weaker security metric than a physical unclonable function), or PUF, is a physical object whose operation cannot be reproduced ("cloned") in a physical way (by making another system using the same technology). For a given input and conditions (challenge), a PUF provides a physically defined "digital fingerprint" output (response) that serves as a unique identifier, most often for a semiconductor device such as a microprocessor. PUFs are often based on unique physical variations occurring naturally during semiconductor manufacturing. A PUF is a physical entity embodied in a physical structure. PUFs are implemented in integrated circuits, including FPGAs, and can be used in applications with high security requirements, most notably cryptography, Internet of Things (IoT) devices and privacy protection.

History
Early references about systems that exploit the physical properties of disordered systems for authentication purposes date back to Bauder in 1983 and Simmons in 1984. Naccache and Frémanteau provided an authentication scheme in 1992 for memory cards. PUFs were first formally proposed in a general fashion by Pappu in 2001, under the name Physical One-Way Function (POWF), with the term PUF being coined in 2002, whilst describing the first integrated PUF where, unlike PUFs based on optics, the measurement circuitry and the PUF are integrated onto the same electrical circuit (and fabricated on silicon).

Starting in 2010, PUF gained attention in the smartcard market as a promising way to provide "silicon fingerprints", creating cryptographic keys that are unique to individual smartcards.

PUFs are now established as a secure alternative to battery-backed storage of secret keys in commercial FPGAs, such as the Xilinx Zynq UltraScale+ and Altera Stratix 10.

Concept
PUFs depend on the uniqueness of their physical microstructure. This microstructure depends on random physical factors introduced during manufacturing. These factors are unpredictable and uncontrollable, which makes it virtually impossible to duplicate or clone the structure.

Rather than embodying a single cryptographic key, PUFs implement challenge–response authentication to evaluate this microstructure. When a physical stimulus is applied to the structure, it reacts in an unpredictable (but repeatable) way due to the complex interaction of the stimulus with the physical microstructure of the device. This exact microstructure depends on physical factors introduced during manufacture, which are unpredictable (like a fair coin). The applied stimulus is called the challenge, and the reaction of the PUF is called the response. A specific challenge and its corresponding response together form a challenge-response pair or CRP. The device's identity is established by the properties of the microstructure itself. As this structure is not directly revealed by the challenge-response mechanism, such a device is resistant to spoofing attacks.
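The challenge-response idea above can be sketched in a few lines of code. This is a software caricature, not a real PUF design: the device "microstructure" is represented by a fixed random value (in a real PUF this randomness comes from uncontrollable manufacturing variation and cannot be chosen or read out directly), and a hash stands in for the complex physical interaction.

```python
import hashlib

class ToyPUF:
    """Toy stand-in for a PUF: a fixed hidden value plus a repeatable
    challenge-to-response mapping. Illustration only."""

    def __init__(self, microstructure: bytes):
        # In a real device this randomness is physical and unreadable.
        self._microstructure = microstructure

    def respond(self, challenge: bytes) -> bytes:
        # Repeatable: the same challenge yields the same response
        # on the same device.
        return hashlib.sha256(self._microstructure + challenge).digest()[:4]

# Two hypothetical devices with different manufacturing randomness.
device_a = ToyPUF(b"\x13\x37\xca\xfe")
device_b = ToyPUF(b"\x42\x42\x42\x42")

challenge = b"challenge-01"
# A challenge and its response together form one CRP. The same device
# is repeatable; different devices diverge.
assert device_a.respond(challenge) == device_a.respond(challenge)
assert device_a.respond(challenge) != device_b.respond(challenge)
```

Note that the toy version is perfectly noise-free, whereas a physical PUF response varies slightly between evaluations, which is why the error-correction machinery discussed later is needed.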

Using a fuzzy extractor or the fuzzy commitment scheme (both provably suboptimal in terms of storage and privacy leakage), or nested polar codes (which can be made asymptotically optimal), one can extract a unique, strong cryptographic key from the physical microstructure. The same unique key is reconstructed every time the PUF is evaluated. The challenge-response mechanism is then implemented using cryptography.
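The core trick behind fuzzy commitment can be sketched with a simple code-offset construction. The sketch below uses a 5x repetition code purely for illustration (real schemes use stronger codes such as BCH): at enrollment, public helper data is computed as the PUF reading XORed with an encoding of the key; later, a noisy re-reading of the PUF still recovers the key as long as each code block has few enough bit flips.

```python
import secrets

R = 5  # repetition factor, chosen only for this example

def encode(key_bits):
    # Repetition code: each key bit becomes R identical code bits.
    return [b for b in key_bits for _ in range(R)]

def decode(code_bits):
    # Majority vote inside each block of R bits corrects up to 2 flips.
    return [int(sum(code_bits[i:i + R]) > R // 2)
            for i in range(0, len(code_bits), R)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

# Enrollment: derive public helper data from the PUF reading and a key.
puf_reading = [secrets.randbelow(2) for _ in range(20)]  # 20 PUF bits
key = [1, 0, 1, 1]                                       # 4-bit toy key
helper = xor(puf_reading, encode(key))                   # stored publicly

# Reconstruction: a noisy re-reading (one flipped bit per block)
# still yields the same key.
noisy = puf_reading[:]
for i in (0, 6, 12, 18):
    noisy[i] ^= 1
recovered = decode(xor(noisy, helper))
assert recovered == key
```

The helper data reveals nothing useful on its own in this construction as long as the code and PUF parameters are chosen properly; as the article notes, the basic scheme is provably suboptimal in storage and leakage compared with nested polar codes.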

PUFs can be implemented with a very small hardware investment compared to other cryptographic primitives that provide unpredictable input/output behavior, such as pseudo-random functions. In some cases, PUFs can even be built from existing hardware with the right properties.

Unclonability means that each PUF device has a unique and unpredictable way of mapping challenges to responses, even if it was manufactured with the same process as a similar device, and it is infeasible to construct a PUF with the same challenge-response behavior as another given PUF because exact control over the manufacturing process is infeasible. Mathematical unclonability means that it should be very hard to compute an unknown response given the other CRPs or some of the properties of the random components from a PUF. This is because a response is created by a complex interaction of the challenge with many or all of the random components. In other words, given the design of the PUF system, without knowing all of the physical properties of the random components, the CRPs are highly unpredictable. The combination of physical and mathematical unclonability renders a PUF truly unclonable.

Note that a PUF is "unclonable" only with respect to its physical implementation; once a PUF key is extracted, there is generally nothing preventing the key – the output of the PUF – from being copied by other means. For "strong PUFs", one can train a neural network on observed challenge-response pairs and use it to predict unobserved responses.

Because of these properties, PUFs can be used as a unique and untamperable device identifier. PUFs can also be used for secure key generation and storage and for a source of randomness.

Strong/Weak

 * Weak PUFs can be considered a kind of memory that is randomly initialized during manufacture. A challenge can be considered an address within the memory, and the response the random value stored at that address. The number of unique challenge-response pairs (CRPs) therefore scales linearly with the number of random elements in the PUF. The advantage of such PUFs is that they are true random oracles and so are immune to machine-learning attacks. The weakness is that the number of CRPs is small and can be exhausted, either by an adversary who can probe the PUF directly, or during authentication protocols over insecure channels, in which case the verifier has to keep track of the challenges already known to the adversary. The main application of weak PUFs is therefore as a source of randomness for deriving cryptographic keys.


 * Strong PUFs are systems that perform computation based on their internal structure. Their number of unique CRPs scales faster than linearly with the number of random elements because of interactions between the elements. The advantage is that the CRP space can be made large enough that exhausting it is practically impossible and collisions between two randomly chosen elements of the space are improbable, allowing the verifying party not to keep track of used challenges but simply to choose them at random. Another advantage is that randomness can be stored not only within the elements but also within their interactions, which sometimes cannot be read out directly. The weakness is that the same elements and interactions are reused across different challenges, which opens the possibility of deriving information about the elements and their connections and using it to predict the system's response to unobserved challenges.
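The scaling difference between the two classes can be made concrete. In the weak case, the PUF behaves like an n-cell random memory, so there are exactly n CRPs; in the strong case, an n-stage arbiter-style PUF accepts any n-bit challenge, so the CRP space has 2**n elements even though the device contains only n random elements. (The arbiter-style example is an illustrative assumption, not tied to any specific product.)

```python
def weak_puf_crps(n_cells: int) -> int:
    # One response per memory address: CRPs scale linearly.
    return n_cells

def strong_puf_crps(n_stages: int) -> int:
    # One response per n-bit challenge: CRPs scale exponentially,
    # because responses depend on interactions between elements.
    return 2 ** n_stages

assert weak_puf_crps(128) == 128            # trivially exhaustible
assert strong_puf_crps(128) == 2 ** 128     # practically inexhaustible
```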

Implicit/explicit
All instances of a given PUF within a given device are created uniformly using scalable processes. For example, when a cryptoprocessor based on a silicon chip is produced, many processors are created on the same silicon wafer. Foundry equipment applies the same operations to all the chips on a wafer and tries to make them as reproducible as possible in order to achieve predictable, high performance and reliability across all the chips. Despite this, randomness must be generated to make the PUF in each chip unique.


 * Explicit PUF randomness is created in a separate technological operation. This is a disadvantage both because a separate operation imposes additional costs and because the manufacturer can intentionally replace that operation with something else, which can reduce the randomness and compromise the security characteristics.
 * Implicit PUFs use technology imperfections as a source of randomness: the PUF is designed as a device whose operation is strongly affected by those imperfections (rather than insensitive to them, as ordinary circuitry is) and is fabricated simultaneously with the rest of the device. Since foundries themselves cannot defeat the imperfections of their technology, despite a strong economic incentive to fabricate faster and more reliable chips, this gives some protection against a foundry backdooring such PUFs. Backdooring a PUF by tampering with the lithographic masks can be detected by reverse engineering the resulting devices. Fabricating the PUF as part of the rest of the device also makes it cheaper than an explicit PUF.

Intrinsic/extrinsic

 * Extrinsic PUFs rely on sensors to measure a system containing the randomness. Such sensors are a weak point, since they can be replaced with fakes that report the needed measurements.
 * Intrinsic PUFs' operation is affected by randomness contained within the system itself.

Types
Over 40 types of PUF have been suggested. These range from PUFs that evaluate an intrinsic element of a pre-existing integrated electronic system to concepts that involve explicitly introducing random particle distributions to the surface of physical objects for authentication. All PUFs are subject to environmental variations such as temperature, supply voltage and electromagnetic interference, which can affect their performance. Therefore, rather than just being random, the real power of a PUF is its ability to be different between devices but simultaneously to be the same under different environmental conditions on the same device.
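The dual requirement described above is commonly quantified with fractional Hamming distances between response bit-strings: the inter-device distance (uniqueness, ideally near 50%) and the intra-device distance across re-readings under varying conditions (reliability, ideally near 0%). A minimal sketch, using made-up bit-strings:

```python
def frac_hamming(a: str, b: str) -> float:
    # Fraction of positions where two equal-length bit-strings differ.
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Hypothetical responses to the same challenge.
resp_dev1_run1 = "10110010"
resp_dev1_run2 = "10110110"   # same device, one noisy bit
resp_dev2      = "01011100"   # a different device

reliability_err = frac_hamming(resp_dev1_run1, resp_dev1_run2)  # low is good
uniqueness      = frac_hamming(resp_dev1_run1, resp_dev2)       # ~0.5 is ideal

assert reliability_err == 0.125
assert uniqueness == 0.75
```

In practice these metrics are averaged over many devices, many challenges, and the full range of temperature and supply-voltage conditions.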

Error correction
In many applications, it is important that the output is stable. If the PUF is used for a key in cryptographic algorithms, it is necessary that error correction be done to correct any errors caused by the underlying physical processes and reconstruct exactly the same key each time under all operating conditions. In principle there are two basic concepts: Pre-Processing and Post-Processing Error Correction Code (ECC).
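A common pre-processing step (one instance of the first concept, sometimes called temporal majority voting) reads each PUF bit several times and keeps the majority value, lowering the raw error rate before any post-processing ECC is applied. A minimal sketch, with fabricated readings:

```python
def temporal_majority(readings):
    """Majority-vote each bit position across repeated evaluations.

    readings: list of equal-length bit-lists, one per evaluation.
    """
    n = len(readings)
    return [int(sum(bits) > n // 2) for bits in zip(*readings)]

# Three noisy evaluations of the same 5-bit PUF response.
readings = [
    [1, 0, 1, 1, 0],
    [1, 0, 0, 1, 0],   # bit 2 flipped by noise
    [1, 0, 1, 1, 1],   # bit 4 flipped by noise
]
assert temporal_majority(readings) == [1, 0, 1, 1, 0]
```

The residual errors that survive this step are then handled by the post-processing ECC, so the two concepts are usually combined rather than used in isolation.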

On-chip ECC units increase size, power, and data processing time overheads; they also expose vulnerabilities to power analysis attacks that attempt to model the PUF mathematically. Alternatively, some PUF designs like the EC-PUF do not require an on-chip ECC unit.

Strategies have been developed which lead SRAM PUF to become more reliable over time without degrading the other PUF quality measures such as security and efficiency.

Research at Carnegie Mellon University into various PUF implementations found that some error reduction techniques reduced errors in PUF response in a range of ~70 percent to ~100 percent.

Research at the University of Massachusetts Amherst to improve the reliability of SRAM PUF-generated keys posited an error correction technique to reduce the error rate.

Joint reliability–secrecy coding methods based on transform coding are used to obtain significantly higher reliabilities for each bit generated from a PUF, such that low-complexity error-correcting codes such as BCH codes suffice to satisfy a block error probability constraint of one bit error in one billion bits.

Nested polar codes are used for vector quantization and error correction jointly. Their performance is asymptotically optimal in terms of, for a given blocklength, the maximum number of secret bits generated, the minimum amount of private information leaked about the PUF outputs, and minimum storage required. The fuzzy commitment scheme and fuzzy extractors are shown to be suboptimal in terms of the minimum storage.

Availability

 * PUF technology can be licensed from several companies including eMemory, or its subsidiary, PUFsecurity, Enthentica, ICTK, Intrinsic ID, Invia, QuantumTrace, Granite Mountain Technologies and Verayo.
 * PUF technology has been implemented in several hardware platforms including Microsemi SmartFusion2, NXP SmartMX2, Coherent Logix HyperX, InsideSecure MicroXsafe, Altera Stratix 10, Redpine Signals WyzBee and Xilinx Zynq UltraScale+.

Vulnerabilities
In 2011, university research showed that delay-based PUF implementations are vulnerable to side-channel attacks and recommended that countermeasures be employed in the design to prevent this type of attack. Also, improper implementation of a PUF could introduce "backdoors" to an otherwise secure system. In June 2012, Dominik Merli, a scientist at the Fraunhofer Research Institution for Applied and Integrated Security (AISEC), further claimed that PUFs introduce more entry points for hacking into a cryptographic system and that further investigation into the vulnerabilities of PUFs is required before PUFs can be used in practical security-related applications. The presented attacks are all on PUFs implemented in insecure systems, such as FPGAs or static RAM (SRAM). It is also important to ensure that the environment is suitable for the needed security level, as otherwise attacks taking advantage of temperature and other variations may be possible.

In 2015, some studies claimed it is possible to attack certain kinds of PUFs with low-cost equipment in a matter of milliseconds. A team at Ruhr University Bochum, Germany, demonstrated a method to create a model of XOR arbiter PUFs and thus predict their response to any challenge. Their method requires only 4 CRPs, which even on resource-constrained devices should take no more than about 200 ms to produce. Using this method and a $25 device or an NFC-enabled smartphone, the team was able to clone PUF-based RFID cards stored in a user's wallet while it was in their back pocket.
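The principle behind such modeling attacks can be illustrated on a single arbiter PUF, which is well approximated by a linear "additive delay" model: the response is the sign of a weighted sum of parity features of the challenge. The sketch below simulates such a PUF with made-up delay weights, then trains a plain perceptron on observed CRPs to predict responses to unseen challenges. (Attacking XOR arbiter PUFs, as in the work cited above, needs more sophisticated ML, but the idea is the same.)

```python
import random

random.seed(1)
N = 16  # number of arbiter stages (illustrative)

def features(challenge):
    # Standard parity transform for the additive delay model:
    # phi_i is the product of (1 - 2*c_j) over all stages j >= i.
    phi, prod = [], 1.0
    for c in reversed(challenge):
        prod *= 1 - 2 * c
        phi.append(prod)
    return list(reversed(phi)) + [1.0]  # plus a bias term

# Secret stage delays of the simulated PUF (unknown to the attacker).
true_w = [random.gauss(0, 1) for _ in range(N + 1)]

def puf_response(challenge):
    s = sum(w * f for w, f in zip(true_w, features(challenge)))
    return 1 if s > 0 else 0

# Attacker: observe CRPs and fit a perceptron to the delay model.
crps = [[random.randint(0, 1) for _ in range(N)] for _ in range(2000)]
w = [0.0] * (N + 1)
for _ in range(20):  # training epochs
    for c in crps:
        phi = features(c)
        pred = 1 if sum(a * b for a, b in zip(w, phi)) > 0 else 0
        err = puf_response(c) - pred
        if err:
            w = [wi + err * fi for wi, fi in zip(w, phi)]

# The learned model now predicts responses to fresh challenges.
test = [[random.randint(0, 1) for _ in range(N)] for _ in range(500)]
acc = sum(
    puf_response(c) == (1 if sum(a * b for a, b in zip(w, features(c))) > 0 else 0)
    for c in test
) / len(test)
```

Because the data is linearly separable in the feature space, accuracy on unseen challenges is high, which is exactly why a plain arbiter PUF on its own is not a secure strong PUF.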

Provable machine learning attacks
The attacks mentioned above range from invasive to non-invasive. One of the most celebrated types of non-invasive attack is the machine learning (ML) attack. From the beginning of the PUF era, it has been questioned whether these primitives are subject to this type of attack. In the absence of thorough analysis and mathematical proofs of the security of PUFs, ad hoc attacks against PUFs have been introduced in the literature, and consequently the countermeasures presented to cope with these attacks are less effective. In line with these efforts, it has been asked whether PUFs can be considered circuits that are provably hard to break. In response, a mathematical framework has been suggested, within which provable ML algorithms against several known families of PUFs have been introduced.

Along with this provable ML framework, to assess the security of PUFs against ML attacks, property testing algorithms have been reintroduced in the hardware security community and made publicly accessible. These algorithms trace their roots back to well-established fields of research, namely property testing, machine learning theory, and Boolean analysis.

ML attacks can also apply to PUFs because most of the pre- and post-processing methods applied so far ignore the effect of correlations between PUF-circuit outputs. For instance, obtaining one bit by comparing two ring-oscillator outputs is a method to decrease the correlation. However, this method does not remove all correlations. Therefore, classic transforms from the signal-processing literature are applied to raw PUF-circuit outputs to decorrelate them before the outputs are quantized in the transform domain to generate bit sequences. Such decorrelation methods can help overcome correlation-based information leakage about the PUF outputs even when the ambient temperature and supply voltage change.
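The comparison step mentioned above can be sketched as follows: one response bit is extracted per pair of ring-oscillator frequencies, and comparing disjoint pairs (rather than all pairs) avoids reusing an oscillator in several bits, which would correlate them. The frequencies below are made-up numbers for illustration.

```python
def ro_bits(freqs):
    """One response bit per disjoint oscillator pair (f0,f1), (f2,f3), ...

    Disjoint pairing keeps each oscillator's randomness in exactly one
    bit; reusing oscillators across pairs would leak correlations that
    an ML attacker could exploit.
    """
    return [int(freqs[i] > freqs[i + 1]) for i in range(0, len(freqs) - 1, 2)]

freqs = [101.3, 99.8, 100.2, 100.9, 98.7, 99.1]  # MHz, hypothetical
assert ro_bits(freqs) == [1, 0, 0]
```

Even with disjoint pairing, residual correlations (for example, systematic frequency gradients across the chip) remain, which is what the transform-domain decorrelation methods above aim to remove.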

Optical PUFs
Optical PUFs rely on a random optical multiple-scattering medium, which serves as a token. Optical PUFs offer a promising approach to developing entity authentication schemes that are robust against many of the aforementioned attacks. However, their security against emulation attacks can be ensured only in the case of quantum readout (see below), or when the database of challenge-response pairs is somehow encrypted.

Optical PUFs can be made very easily: a varnish containing glitter, a metallic paint, or a frosted finish obtained by sandblasting a surface, for example, are practically impossible to clone. Their appearance changes depending on the point of view and the lighting.

Authentication of an optical PUF requires a photographic acquisition to measure the luminosity of several of its parts and the comparison of this acquisition with another previously made from the same point of view. This acquisition must be supplemented by an additional acquisition either from another point of view, or under different lighting to verify that this results in a modification of the appearance of the PUF.

This can be done with a smartphone, without additional equipment, using optical means to determine the position in which the smartphone is in relation to the PUF.

Theoretical investigations suggest that optical PUFs with nonlinear multiple-scattering media may be more robust than their linear counterparts against the potential cloning of the medium.