
High-speed video is widely used to view fast-moving objects in slow motion. Captured video is played back at a speed much slower than the real-time record rate. Real-time to most people means 30 frames per second (fps), but in high-speed video applications, rates from 1,000 to over 1,000,000,000 fps are what we mean by real-time, at-speed recording. Image resolution has steadily improved from the early 256 x 256 pixel sensors to today's 2048 x 2048 pixel sensors. As resolution increases, so does the volume of information recorded; in fact, a high-speed camera typically pushes over 10 gigapixels per second.
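The throughput figure above is easy to check with simple arithmetic. The sketch below uses the 2048 x 2048 resolution from the text and a hypothetical frame rate of 2,500 fps (an assumption chosen for illustration, not a figure from the source) to show how quickly the pixel rate crosses 10 gigapixels per second.

```python
# Rough pixel-throughput estimate for a high-speed camera.
# Resolution (2048 x 2048) comes from the text; the 2,500 fps frame
# rate is a hypothetical figure chosen only to illustrate the math.

def pixel_throughput(width, height, fps):
    """Pixels per second streamed off the sensor."""
    return width * height * fps

gp_per_s = pixel_throughput(2048, 2048, 2500) / 1e9
print(f"{gp_per_s:.2f} gigapixels/s")  # about 10.49 Gpix/s
```

At even modest multi-kilohertz frame rates, a megapixel-class sensor already exceeds the 10 gigapixel-per-second figure quoted above.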

The majority of high-speed cameras use CMOS sensor technology, which has many advantages for high-speed imaging. Most scientific cameras used for applications that require 14 or 16 bits of dynamic range are based on CCD sensors, which operate at much slower clock speeds and frame rates and often use cooling techniques not found in high-speed cameras. Frame averaging and binning have been used with CCD sensors to reduce the noise level while greatly increasing the signal level, yielding a higher SNR and dynamic range. Binning in a CMOS sensor does not reduce the noise or increase the SNR or dynamic range. Frame averaging would have the same results as with a CCD sensor; however, for high-speed applications, frame averaging leaves undesirable image artifacts such as blur and edge displacement.

CMOS sensor technology is not often used in scientific imaging requiring 14-bit or 16-bit dynamic range because the linearity and noise of a CMOS Active Pixel Sensor (APS) are not sufficient for these demanding applications. CMOS APS is becoming more mature, producing 10-bit and in some cases 12-bit HDR (high dynamic range) images that are linear. Some CMOS sensors can reset the level of each pixel to a preset level during integration. This technique, originally called WDR (wide dynamic range), was developed at JPL years ago. The resulting image may have a wider range of values, as the name implies, but the range is not linear. This makes it very difficult to produce accurate color, as well as good stop-motion photography, because the reset varies from pixel to pixel. Some companies market WDR as Extended or Extreme Dynamic Range (EDR), or Dual Slope; all use the same technique. The majority of high-speed sensors come from two semiconductor companies, so the differences in dynamic range have more to do with pixel size, full-well capacity, output conversion, and the various noise sources.
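The SNR benefit of frame averaging mentioned above follows from random noise falling as the square root of the number of averaged frames. A minimal simulation, using assumed signal and noise levels (the 100 DN signal and 10 DN read noise are hypothetical figures, not from the source), makes the effect concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

signal = 100.0      # assumed constant signal level, in digital numbers (DN)
read_noise = 10.0   # assumed per-frame RMS random noise (DN)
n_frames = 16       # number of frames to average

# Simulate n_frames noisy captures of the same static scene
# (100,000 samples stand in for pixels so the statistics are stable).
frames = signal + rng.normal(0.0, read_noise, size=(n_frames, 100_000))

single_snr = signal / frames[0].std()
averaged = frames.mean(axis=0)           # frame averaging
avg_snr = signal / averaged.std()

# Averaging N frames cuts random noise by ~sqrt(N): ~4x SNR gain for N=16.
print(f"single-frame SNR ~{single_snr:.1f}, 16-frame average SNR ~{avg_snr:.1f}")
```

This is exactly why averaging is attractive for slow CCD-based scientific imaging and problematic for high-speed work: the averaged frames must depict the same scene, which a fast-moving subject violates, producing the blur and edge displacement noted above.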
Claiming a camera has 14-bit dynamic range simply because it has a 14-bit ADC (analog-to-digital converter) is misleading, because the noise and the capacity of the pixel well to support such a dynamic range have not been considered. My advice is to look closely at what is being claimed as dynamic range and to judge by the image quality you actually get from the camera.
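One common way to sanity-check such a claim is to compare the full-well capacity against the noise floor: the usable dynamic range in bits is roughly log2(full well / noise floor). The pixel figures below (20,000 e- full well, 20 e- noise floor) are hypothetical assumptions chosen for illustration:

```python
import math

def dynamic_range_bits(full_well_e, noise_floor_e):
    """Usable dynamic range in bits: log2(full well / noise floor)."""
    return math.log2(full_well_e / noise_floor_e)

# Hypothetical pixel: 20,000 e- full well, 20 e- noise floor.
bits = dynamic_range_bits(20_000, 20)
print(f"~{bits:.1f} bits of usable dynamic range")  # ~10 bits
```

For this example pixel, a 14-bit ADC would simply digitize noise in its bottom four bits; the extra ADC resolution does not create dynamic range the pixel cannot deliver.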

The first area of discussion will explain the signal-processing steps in converting light into digital numbers for both digital and analog imaging sensors. An explanation will be given of the signal-to-noise ratio, which determines the range of values a pixel can produce, also known as the dynamic range. The next area of discussion will focus on the CMOS sensor technology used in most high-speed megapixel cameras. An example will be given of what would be required of a CMOS sensor to achieve an SNR corresponding to 16, 14, 12, 10, and 8 bits of dynamic range.
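As a preview of that bit-depth example, the same log2 relationship can be inverted: for a given noise floor, each target bit depth implies a minimum full-well capacity of noise x 2^bits. The 20 e- noise floor below is an assumed, hypothetical figure used only to show the scale of the numbers:

```python
# Minimum full-well capacity needed for a target bit depth,
# assuming a hypothetical 20 e- noise floor (not a figure from the text).
noise_floor_e = 20

required_full_well = {bits: noise_floor_e * 2**bits
                      for bits in (16, 14, 12, 10, 8)}

for bits, well in required_full_well.items():
    print(f"{bits:2d}-bit -> {well:>9,} e- full well needed")
```

Under this assumption, true 16-bit range would demand a full well of over a million electrons, far beyond what the small pixels of a fast CMOS sensor can hold, which is why the 8- to 12-bit end of the table is where real high-speed cameras live.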

This paper will provide end users of high-speed cameras with enough information to make informed decisions on camera specs and what to expect. Let’s begin with a bird’s eye view of a classical high-speed digital imaging system’s components.