Single-pixel imaging



Single-pixel imaging is a computational imaging technique for producing spatially-resolved images using a single detector instead of an array of detectors (as in conventional camera sensors). A device that implements such an imaging scheme is called a single-pixel camera. Combined with compressed sensing, the single-pixel camera can recover images from fewer measurements than the number of reconstructed pixels.

Single-pixel imaging differs from raster scanning in that multiple parts of the scene are imaged at the same time, in a wide-field fashion, by using a sequence of mask patterns either in the illumination or in the detection stage. A spatial light modulator (such as a digital micromirror device) is often used for this purpose.

Single-pixel cameras were developed to be simpler, smaller, and cheaper alternatives to conventional, silicon-based digital cameras, with the ability to also image a broader spectral range. Since then, the technique has been adapted and demonstrated to be suitable for numerous applications in microscopy, tomography, holography, ultrafast imaging, FLIM and remote sensing.

History
The origins of single-pixel imaging can be traced back to the development of dual photography and compressed sensing in the mid 2000s. The seminal paper written by Duarte et al. in 2008 at Rice University concretised the foundations of the single-pixel imaging technique. It also presented a detailed comparison of different scanning and imaging modalities in existence at that time. These developments were also one of the earliest applications of the digital micromirror device (DMD), developed by Texas Instruments for their DLP projection technology, for structured light detection.

Soon, the technique was extended to computational ghost imaging, terahertz imaging, and 3D imaging. Systems based on structured detection were often termed single-pixel cameras, whereas those based on structured illumination were often referred to as computational ghost imaging. By using pulsed lasers as the light source, single-pixel imaging was applied to time-of-flight measurements for depth-mapping LiDAR applications. Apart from the DMD, other light-modulation schemes, such as liquid crystal devices and LED arrays, were also explored.

In the early 2010s, single-pixel imaging was exploited in fluorescence microscopy for imaging biological samples. Coupled with the technique of time-correlated single photon counting (TCSPC), the use of single-pixel imaging for compressive fluorescence lifetime imaging microscopy (FLIM) has also been explored. Since the late 2010s, machine learning techniques, especially deep learning, have been increasingly used to optimise the illumination, detection, or reconstruction strategies of single-pixel imaging.

Theory


In conventional sampling, digital data acquisition involves uniformly sampling an analog signal at or above the Nyquist rate. For example, in a digital camera, the sampling is done with a 2-D array of $$N$$ pixelated detectors on a CCD or CMOS sensor ($$N$$ is usually millions in consumer digital cameras). Such a sample can be represented by a vector $$x$$ with elements $$x_i, i = 1, 2, ..., N$$. Any such vector can be expressed in terms of the coefficients $$\{a_i\}$$ of an orthonormal basis expansion:

$$x = \sum_{i=1}^{N} a_i \psi_i$$

where $$\psi_i$$ are the $$N \times 1$$ basis vectors. Or, more compactly:

$$x = \Psi a$$

where $$\Psi$$ is the $$N \times N$$ basis matrix formed by stacking the $$\psi_i$$. It is often possible to find a basis in which the coefficient vector $$a$$ is sparse (with only $$K \ll N$$ non-zero coefficients) or $$r$$-compressible (the sorted coefficients decay as a power law). This is the principle behind compression standards like JPEG and JPEG-2000, which exploit the fact that natural images tend to be compressible in the DCT and wavelet bases, respectively. Compressed sensing aims to bypass this conventional "sample-then-compress" framework by directly acquiring a condensed representation with $$M < N$$ linear measurements. Mathematically:

$$y = \Phi x = \Phi \Psi a$$

where $$y$$ is an $$M \times 1$$ measurement vector and $$\Phi$$ is the $$M \times N$$ measurement matrix. Because $$M < N$$, the measurement is under-determined, and the inverse problem is, in general, ill-posed. Compressed sensing, however, exploits the fact that with a proper design of $$\Phi$$, the compressible signal $$x$$ can be exactly or approximately recovered using computational methods. It has been shown that incoherence between $$\Phi$$ and $$\Psi$$ (along with the existence of sparsity in $$\Psi$$) is sufficient for such a scheme to work.
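The measurement model above can be sketched numerically. The following is a minimal illustration (sizes, the random seed, and the choice of a DCT basis are all illustrative assumptions, not specific to any particular system): a signal that is $$K$$-sparse in an orthonormal basis $$\Psi$$ is measured with $$M < N$$ random projections.

```python
# Sketch of the compressed-sensing measurement model y = Phi Psi a,
# using an N-point signal that is K-sparse in a DCT basis.
# All sizes and the random seed are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 64, 32, 4          # signal length, measurements, sparsity

# Orthonormal DCT-II basis matrix Psi (columns are the basis vectors psi_i).
n = np.arange(N)
Psi = np.sqrt(2.0 / N) * np.cos(np.pi * (n[:, None] + 0.5) * n[None, :] / N)
Psi[:, 0] /= np.sqrt(2.0)    # normalise the DC column

# K-sparse coefficient vector a, and the signal x = Psi a.
a = np.zeros(N)
a[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
x = Psi @ a

# M x N random measurement matrix Phi, and the M linear measurements y.
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x                  # equivalently, Phi @ Psi @ a

print(y.shape)               # M measurements, with M < N
```

Recovering $$x$$ (or equivalently $$a$$) from the under-determined $$y$$ is the reconstruction problem discussed next.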
Popular choices of $$\Phi$$ are random matrices or random subsets of basis vectors from the Fourier, Walsh-Hadamard, or Noiselet bases. It has also been shown that the $$\mathcal{L}_1$$ optimisation

$$\hat{a} = \arg\min_{a'} \|a'\|_1 \quad \text{subject to} \quad y = \Phi \Psi a'$$

retrieves the signal $$x = \Psi \hat{a}$$ from the random measurements $$y$$ better than other common methods like least-squares minimisation. An improvement to the $$\mathcal{L}_1$$ optimisation algorithm, based on total-variation minimisation, is especially useful for reconstructing images directly in the pixel basis.
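As a concrete (and deliberately simple) illustration of $$\mathcal{L}_1$$-based recovery, the sketch below uses iterative soft-thresholding (ISTA), one standard solver for the relaxed lasso form of the problem; the sizes, seed, regularisation weight, and iteration count are all illustrative assumptions rather than recommended settings.

```python
# Minimal sketch of L1-based recovery via ISTA (iterative soft-
# thresholding), solving min 0.5*||y - A a||^2 + lam*||a||_1,
# with A = Phi @ Psi. Sizes and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 64, 32, 3

A = rng.standard_normal((M, N)) / np.sqrt(M)    # combined matrix Phi @ Psi
a_true = np.zeros(N)
a_true[rng.choice(N, K, replace=False)] = [2.0, -1.5, 1.0]
y = A @ a_true                                  # noiseless measurements

def ista(A, y, lam=1e-3, steps=5000):
    """Gradient step on the quadratic term, then soft-threshold."""
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of grad
    a = np.zeros(A.shape[1])
    for _ in range(steps):
        z = a - A.T @ (A @ a - y) / L           # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrink
    return a

a_hat = ista(A, y)
print(np.abs(a_hat - a_true).max())             # small recovery error
```

With $$M = 32$$ Gaussian measurements of a 3-sparse, length-64 vector, the sparse solution is recovered to within the small bias introduced by the $$\mathcal{L}_1$$ penalty; a least-squares solution of the same under-determined system would not be sparse.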

Single-pixel camera
The single-pixel camera is an optical computer that implements the compressed sensing measurement architecture described above. It works by sequentially measuring the inner products $$y_m = \langle x, \phi_m \rangle$$ between the image $$x$$ and the set of 2-D test functions $$\{\phi_m\}$$, to compute the measurement vector $$y$$. In a typical setup, it consists of two main components: a spatial light modulator (SLM) and a single-pixel detector. The light from a wide-field source is collimated and projected onto the scene, and the reflected/transmitted light is focused onto the detector with lenses. The SLM is used to realise the test functions $$\{\phi_m\}$$, often as binary pattern masks, and to introduce them either in the illumination or in the detection path. The detector integrates and converts the light signal into an output voltage, which is then digitised by an A/D converter and analysed by a computer.
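The sequential measurement process can be sketched in software. In this toy simulation (the scene, mask count, and seed are illustrative assumptions), each binary SLM mask integrates part of the scene onto the single detector, and with as many independent masks as pixels the scene follows from a linear solve; the compressed-sensing case would instead use fewer masks and a sparse reconstruction.

```python
# Sketch of the sequential single-pixel measurement y_m = <x, phi_m>,
# simulated with a tiny 8x8 "scene" and random binary masks.
# Scene, mask count, and seed are illustrative.
import numpy as np

rng = np.random.default_rng(2)
side = 8
N = side * side
x = rng.random((side, side))                 # unknown scene

M = N                                        # fully determined case, for clarity
masks = rng.integers(0, 2, size=(M, N)).astype(float)
while np.linalg.matrix_rank(masks) < N:      # ensure the masks are independent
    masks = rng.integers(0, 2, size=(M, N)).astype(float)

# Each measurement integrates the masked scene onto the single detector.
y = masks @ x.ravel()

# With M = N independent masks, the scene follows by solving Phi x = y.
x_rec = np.linalg.solve(masks, y)
print(np.allclose(x_rec, x.ravel()))         # exact in this noiseless case
```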

Rows from a randomly permuted (for incoherence) Walsh-Hadamard matrix, reshaped into square patterns, are commonly used as binary test functions in single-pixel imaging. Since the SLM can produce only binary patterns with 0 (off) and 1 (on) states, both positive and negative values (±1 in this case) can be obtained by subtracting the mean light intensity from each measurement. An alternative is to split the positive and negative elements into two sets, measure both with the negative set inverted (i.e., -1 replaced with +1), and subtract the measurements in the end. Values between 0 and 1 can be obtained by dithering the DMD micromirrors during the detector's integration time.
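The split-and-subtract trick can be verified directly. The sketch below (sizes and seed are illustrative) builds a ±1 Hadamard matrix by the Sylvester construction, realises each row as two 0/1 masks as described above, and checks that the differential measurement equals the ideal ±1 measurement.

```python
# Sketch of the differential trick for realising +/-1 Hadamard patterns
# on a binary (0/1) SLM: measure the positive and negative parts with
# separate masks and subtract. Sizes and seed are illustrative.
import numpy as np

def hadamard(n):
    """Sylvester construction; n must be a power of two."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N = 16
H = hadamard(N)                          # rows are +/-1 test functions
x = np.random.default_rng(3).random(N)   # scene (flattened)

# On the SLM: H_plus is 1 where H = +1, H_minus is 1 where H = -1.
H_plus = (H > 0).astype(float)
H_minus = (H < 0).astype(float)

y = H_plus @ x - H_minus @ x             # two measurements, then subtract
print(np.allclose(y, H @ x))             # equals the ideal +/-1 measurement
```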

Examples of commonly used detectors include photomultiplier tubes, avalanche photodiodes, and hybrid photomultipliers (a sandwich of photon-amplification stages). For multispectral imaging, a spectrometer can be used along with an array of detectors, one for each spectral channel. Another common addition is a time-correlated single photon counting (TCSPC) board to process the detector output, which, coupled with a pulsed laser, enables lifetime measurement and is useful in biomedical imaging.

Advantages and drawbacks
The most important advantage of the single-pixel design is the reduced size, complexity, and cost of the photon detector (just a single unit). This enables the use of exotic detectors capable of multi-spectral, time-of-flight, photon counting, and other fast detection schemes, and makes single-pixel imaging suitable for various fields, ranging from microscopy to astronomy.

The quantum efficiency of a photodiode is also higher than that of the pixel sensors in a typical CCD or CMOS array. Coupled with the fact that each single-pixel measurement receives about $$N/2$$ times more photons than an average pixel sensor, this significantly reduces image distortion from dark noise and read-out noise. Another important advantage is the fill factor of SLMs like the DMD, which can reach around 90% (compared to only around 50% for a CCD/CMOS array). In addition, single-pixel imaging inherits the theoretical advantages that underpin the compressed sensing framework, such as its universality (the same measurement matrix $$\Phi$$ works for many sparsifying bases $$\Psi$$) and robustness (measurements have equal priority, and thus the loss of a measurement does not corrupt the entire reconstruction).

The main drawback of the single-pixel imaging technique is the tradeoff between acquisition speed and spatial resolution. Fast acquisition requires projecting fewer patterns (since each of them is measured sequentially), which lowers the resolution of the reconstructed image. A method of "fusing" the low-resolution single-pixel image with a high-spatial-resolution CCD/CMOS image (dubbed "Data Fusion") has been proposed to mitigate this problem. Deep-learning methods that learn the optimal set of patterns for imaging a particular category of samples are also being developed to improve the speed and reliability of the technique.

Applications
Some of the research fields that are increasingly employing and developing single-pixel imaging are listed below:


 * Multispectral and hyperspectral imaging
 * Infrared imaging spectroscopy
 * Diffuse optics and imaging through scattering media
 * Time-resolved and lifetime microscopy
 * Fluorescence spectroscopy
 * X-ray diffraction tomography
 * Biomedical imaging
 * Terahertz and ultrafast imaging
 * Magnetic resonance imaging
 * Photoacoustic imaging
 * Holography and phase imaging
 * Long-range imaging and remote sensing
 * Cytometry and polarimetry
 * Real-time and post-processed video