Serial time-encoded amplified microscopy

Time stretch microscopy, also known as serial time-encoded amplified imaging/microscopy or stretched time-encoded amplified imaging/microscopy (STEAM), is a fast real-time optical imaging method that provides MHz frame rates, ~100 ps shutter speed, and ~30 dB (×1000) optical image gain. Based on the photonic time stretch technique, STEAM holds world records for shutter speed and frame rate in continuous real-time imaging. STEAM employs photonic time stretch with internal Raman amplification to realize optical image amplification, circumventing the fundamental trade-off between sensitivity and speed that affects virtually all optical imaging and sensing systems. The method uses a single-pixel photodetector, eliminating the need for a detector array and its readout time limitations. By avoiding this bottleneck and using optical image amplification to improve sensitivity at high image acquisition rates, STEAM achieves a shutter speed at least 1000 times faster than state-of-the-art CCD and CMOS cameras. Its frame rate is 1000 times faster than the fastest CCD cameras and 10–100 times faster than the fastest CMOS cameras.

History
Time stretch microscopy and its application to microfluidics for the classification of biological cells were invented at UCLA. It combines the concept of spectrally encoded illumination with photonic time stretch, an ultrafast real-time data acquisition technology developed earlier in the same lab to create a femtosecond real-time single-shot digitizer and a single-shot stimulated Raman spectrometer. The first demonstration was a one-dimensional version, followed later by a two-dimensional version. A fast imaging vibrometer was subsequently created by extending the system to an interferometric configuration. The technology was then extended to time stretch quantitative phase imaging (TS-QPI) for label-free classification of blood cells, and combined with artificial intelligence (AI) for the classification of cancer cells in blood with over 96% accuracy. The system measured 16 biophysical parameters of cells simultaneously in a single shot and performed hyper-dimensional classification using a deep neural network (DNN). The results were compared with other machine learning classification algorithms such as logistic regression and naive Bayes, with the highest accuracy obtained with deep learning. This was later extended to "deep cytometry", in which the computationally intensive tasks of image processing and feature extraction before deep learning were avoided by feeding the time-stretch line scans, each representing one laser pulse, directly into a deep convolutional neural network. This direct classification of raw time-stretched data reduced the inference time by orders of magnitude, to 700 microseconds on a GPU-accelerated processor. At a flow speed of 1 m/s, a cell moves less than a millimetre in this time, so this ultrashort inference time is fast enough for cell sorting.

Background
Fast real-time optical imaging technology is indispensable for studying dynamical events such as shockwaves, laser fusion, chemical dynamics in living cells, neural activity, laser surgery, microfluidics, and MEMS. Conventional CCD and CMOS cameras are inadequate for capturing fast dynamical processes with high sensitivity and speed, for two reasons: a technological limitation, since it takes time to read out the data from the sensor array, and a fundamental trade-off between sensitivity and speed, since at high frame rates fewer photons are collected during each frame, a problem that affects nearly all optical imaging systems.

The streak camera, used for diagnostics in laser fusion, plasma radiation, and combustion, operates in burst mode only (providing just several frames) and requires synchronization of the camera with the event to be captured. It is therefore unable to capture random or transient events in biological systems. Stroboscopes play a complementary role: they can capture the dynamics of fast events, but only if the event is repetitive, such as rotations, vibrations, and oscillations. They are unable to capture non-repetitive random events that occur only once or at irregular intervals.

Principle of operation
The basic principle involves two steps, both performed optically. In the first step, the spectrum of a broadband optical pulse is converted by a spatial disperser into a rainbow that illuminates the target. The rainbow pulse consists of many subpulses of different colors (frequencies), meaning that the different frequency components of the rainbow pulse are incident onto different spatial coordinates on the object. The spatial information (image) of the object is therefore encoded into the spectrum of the resultant reflected or transmitted rainbow pulse. The image-encoded pulse returns to the same spatial disperser, or enters another spatial disperser, which combines the colors of the rainbow back into a single pulse. STEAM's shutter speed, or exposure time, corresponds to the temporal width of the rainbow pulse. In the second step, the spectrum is mapped into a serial temporal signal that is stretched in time using the dispersive Fourier transform, slowing it down so that it can be digitized in real-time. The time stretch happens inside a dispersive fibre that is pumped to create internal Raman amplification; the image is optically amplified by stimulated Raman scattering to overcome the thermal noise level of the detector. The amplified time-stretched serial image stream is detected by a single-pixel photodetector and the image is reconstructed in the digital domain. Subsequent pulses capture subsequent frames, so the laser pulse repetition rate corresponds to the frame rate of STEAM. The second step is performed by the time stretch analog-to-digital converter, also known as the time stretch recording scope (TiSER).
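The two mappings described above, wavelength to space and then wavelength to time, can be sketched numerically. The following minimal illustration uses assumed, illustrative parameter values (bandwidth, fibre dispersion, fibre length), not the specifications of any actual STEAM system: a 1-D object profile is encoded onto the pulse spectrum, group-velocity dispersion maps wavelength to arrival time, and inverting the monotonic map recovers the image from the single-pixel detector's serial stream.

```python
import numpy as np

# Minimal sketch of the two STEAM mappings (illustrative values only).
n_pixels = 64
lambda0_nm = 1550.0
wavelengths_nm = np.linspace(1540.0, 1560.0, n_pixels)  # "rainbow" illumination

# Step 1: the spatial disperser assigns each wavelength to one spatial
# coordinate, so the reflected spectrum carries the object's 1-D profile.
object_profile = np.exp(-((np.arange(n_pixels) - 40) / 6.0) ** 2)
encoded_spectrum = object_profile.copy()

# Step 2: dispersive Fourier transform. Group-velocity dispersion maps
# wavelength to arrival time, t = D * L * (lambda - lambda0).
D_ps_per_nm_km = -100.0   # dispersion of the stretch fibre (assumed)
L_km = 10.0               # fibre length (assumed)
arrival_time_ps = D_ps_per_nm_km * L_km * (wavelengths_nm - lambda0_nm)

# The single-pixel photodetector records intensity versus time; because the
# wavelength-to-time map is monotonic, inverting it recovers the image.
order = np.argsort(arrival_time_ps)
serial_stream = encoded_spectrum[order]            # what the digitizer sees
reconstructed = serial_stream[np.argsort(order)]   # undo the map

# With these numbers, the 20 nm bandwidth is stretched over
# |D| * L * 20 nm = 20 ns of detector time.
```

With the assumed dispersion, the sub-nanosecond rainbow pulse is slowed to a 20 ns waveform, which is comfortably within the reach of real-time electronic digitizers.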

Amplified dispersive Fourier transformation
The simultaneous stretching and amplification is also known as amplified time stretch dispersive Fourier transformation (TS-DFT). The amplified time stretch technology was developed earlier to demonstrate analog-to-digital conversion with a femtosecond real-time sampling rate and to demonstrate single-shot stimulated Raman spectroscopy at millions of frames per second. Amplified time stretch is a process in which the spectrum of an optical pulse is mapped by large group-velocity dispersion into a slowed-down temporal waveform and simultaneously amplified by stimulated Raman scattering. Consequently, the optical spectrum can be captured with a single-pixel photodetector and digitized in real-time. Pulses are repeated for repetitive measurements of the optical spectrum. An amplified time stretch DFT system consists of a dispersive fibre pumped by lasers and wavelength-division multiplexers that couple the pump lasers into and out of the dispersive fibre. Amplified dispersive Fourier transformation was originally developed to enable ultra-wideband analog-to-digital converters and has also been used for high-throughput real-time spectroscopy. The resolution of the STEAM imager is mainly determined by the diffraction limit, the sampling rate of the back-end digitizer, and the spatial dispersers.
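As a back-of-the-envelope illustration of how the digitizer's sampling rate constrains the required dispersion, consider the sketch below. All numbers (ADC rate, bandwidth, sample count) are assumptions chosen for illustration, not the parameters of a reported system.

```python
# Illustrative arithmetic: how much total dispersion D*L is needed so that a
# real-time ADC resolves a pulse's spectrum with n_points samples.
adc_rate_gsps = 50.0        # ADC sampling rate in GSa/s (assumed)
bandwidth_nm = 20.0         # optical bandwidth of the pulse (assumed)
n_points = 512              # desired spectral samples per pulse (assumed)

dt_per_sample_ps = 1e3 / adc_rate_gsps           # 20 ps between ADC samples
record_length_ps = n_points * dt_per_sample_ps   # 10.24 ns per stretched pulse
required_DL_ps_per_nm = record_length_ps / bandwidth_nm

print(required_DL_ps_per_nm)  # 512.0 ps/nm, e.g. ~5 km of 100 ps/(nm km) fibre
```

The calculation shows the basic scaling: a faster ADC or a wider optical bandwidth reduces the total dispersion, and hence the fibre length, needed for a given number of spectral samples per pulse.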

Time stretch quantitative phase imaging
Time-stretch quantitative phase imaging (TS-QPI) is an imaging technique based on time-stretch technology for simultaneous measurement of phase and intensity spatial profiles. Developed at UCLA, it has led to the development of the time stretch artificial intelligence microscope.

Time stretched imaging
In time stretched imaging, the object's spatial information is encoded in the spectrum of laser pulses, each of sub-nanosecond duration. Each pulse, representing one frame of the camera, is then stretched in time so that it can be digitized in real-time by an electronic analog-to-digital converter (ADC). The ultra-fast pulse illumination freezes the motion of high-speed cells or particles in flow to achieve blur-free imaging. Detection sensitivity is challenged by the low number of photons collected during the ultra-short shutter time (the optical pulse width) and by the drop in peak optical power resulting from the time stretch. These issues are solved in time stretch imaging by implementing a low-noise-figure Raman amplifier within the dispersive device that performs the time stretching. Moreover, the warped stretch transform can be used in time stretch imaging to achieve optical image compression and nonuniform spatial resolution over the field-of-view.
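The acquisition model can be made concrete with a small sketch (all parameter values are assumptions for illustration): each laser pulse yields one line scan in the serial digitizer stream, so frames are recovered by reshaping, and the sub-nanosecond shutter makes motion blur negligible at typical microfluidic flow speeds.

```python
import numpy as np

# Sketch of frame recovery from the serial digitizer stream (assumed sizes).
samples_per_line = 128     # ADC samples per stretched pulse (assumed)
n_lines = 100              # number of laser pulses recorded (assumed)

rng = np.random.default_rng(0)
serial_stream = rng.random(n_lines * samples_per_line)  # stand-in for ADC data

# One laser pulse = one line of the image, so reconstruction is a reshape:
lines = serial_stream.reshape(n_lines, samples_per_line)

# Why the motion is "frozen": with a ~100 ps shutter (the rainbow pulse
# width) and a 1 m/s flow, a cell moves only 0.1 nm during the exposure,
# far below the optical resolution, so each line is blur-free.
flow_speed_m_per_s = 1.0
shutter_s = 100e-12
blur_m = flow_speed_m_per_s * shutter_s   # 1e-10 m
```

In a real system the object (or the illumination line) moves between pulses, so successive rows correspond to successive positions on the object rather than random data as in this stand-in.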

In the coherent version of the time-stretch camera, the imaging is combined with spectral interferometry to measure quantitative phase and intensity images in real-time and at high throughput. Integrated with a microfluidic channel, the coherent time stretch imaging system measures both the quantitative optical phase shift and the loss of individual cells as a high-speed imaging flow cytometer, capturing millions of line images per second at flow rates as high as a few meters per second, reaching a throughput of up to one hundred thousand cells per second. Time stretch quantitative phase imaging can be combined with machine learning to achieve highly accurate label-free classification of the cells.
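The phase readout in such an interferometric configuration can be illustrated with standard Fourier-domain fringe demodulation. The sketch below uses synthetic data and assumed parameters (carrier period, phase profile); it is not the UCLA implementation, only a minimal example of recovering a slowly varying phase from fringes riding on a carrier.

```python
import numpy as np

# Synthetic interferogram: a cell-induced phase bump modulates fringes that
# ride on a carrier set by the interferometer delay (assumed values).
n = 1024
t = np.arange(n)
carrier_period = 16                                     # samples per fringe
phase_true = 0.8 * np.exp(-((t - 512) / 120.0) ** 2)    # phase to recover
fringes = 1.0 + np.cos(2 * np.pi * t / carrier_period + phase_true)

# Fourier-domain demodulation: keep only positive frequencies to form the
# analytic signal, then subtract the known carrier ramp from its argument.
spectrum = np.fft.fft(fringes)
half = np.zeros_like(spectrum)
half[1 : n // 2] = 2 * spectrum[1 : n // 2]   # positive-frequency sideband
analytic = np.fft.ifft(half)
phase_est = np.unwrap(np.angle(analytic)) - 2 * np.pi * t / carrier_period
phase_est -= phase_est[0]                     # remove the constant offset
```

Because the carrier frequency is well separated from the bandwidth of the phase profile, the estimate closely tracks the true phase across the record.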

Applications
This method is useful for a broad range of scientific, industrial, and biomedical applications that require high shutter speeds and frame rates. The one-dimensional version can be employed for displacement sensing, barcode reading, and blood screening; the two-dimensional version for real-time observation, diagnosis, and evaluation of shockwaves, microfluidic flow, neural activity, MEMS, and laser ablation dynamics. The three-dimensional version is useful for range detection, dimensional metrology, and surface vibrometry and velocimetry.

Image compression in optical domain
Big data brings not only opportunities but also challenges to biomedical and scientific instruments, as acquisition and processing units are overwhelmed by a torrent of data. The need to compress massive volumes of data in real time has fueled interest in nonuniform stretch transformations, operations that reshape the data according to its sparsity.

Recently, researchers at UCLA demonstrated image compression performed in the optical domain and in real time. Using nonlinear group delay dispersion and time-stretch imaging, they optically warped the image so that the information-rich portions are sampled at a higher density than the sparse regions. This was achieved by restructuring the image before optical-to-electrical conversion, followed by a uniform electronic sampler. Reconstruction of the nonuniformly stretched image shows that the resolution is higher where information is rich and lower where it is sparse. The information-rich region at the centre is well preserved at the same overall sampling rate as the uniform case, without down-sampling. Image compression was demonstrated at 36 million frames per second in real time.
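A one-dimensional sketch of the idea follows, with an illustrative warp function and test signal (not the actual group delay profile of the demonstrated system): uniform samples taken after a nonlinear coordinate warp land densely in the information-rich centre and sparsely at the edges.

```python
import numpy as np

# A line with detail concentrated at the centre (illustrative test signal).
n_in = 4096                      # dense "optical" representation of one line
n_out = 256                      # samples the electronic digitizer can take
x = np.linspace(-1.0, 1.0, n_in)
line = np.exp(-(x / 0.1) ** 2) * np.cos(60 * x)

# Warp: a tangent-shaped map sends uniform output samples u to nonuniform
# input positions; small |u| lands densely near x = 0, large |u| is spread
# toward the edges (the warp profile here is an assumption).
a = 1.4
u = np.linspace(-1.0, 1.0, n_out)
x_warped = np.tan(a * u) / np.tan(a)

# Uniform sampling after the warp = nonuniform sampling of the original line.
compressed = np.interp(x_warped, x, line)

# Reconstruction: interpolate back onto the uniform grid; the centre is
# recovered faithfully, the sparse edges at reduced resolution.
reconstructed = np.interp(x, x_warped, compressed)
```

The same 256 samples, taken uniformly, would spread resolution evenly over the line; the warp instead concentrates it where the signal carries information, which is the essence of the optical-domain compression described above.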