OmniVision Technologies

OmniVision Technologies Inc. is an American subsidiary of the Chinese semiconductor device and mixed-signal integrated circuit design house Will Semiconductor. The company designs and develops digital imaging products for use in mobile phones, laptops, netbooks, webcams, security, entertainment, automotive and medical imaging systems. Headquartered in Santa Clara, California, OmniVision Technologies has offices in the United States, Western Europe and Asia.

In 2016, OmniVision was acquired by a consortium of Chinese investors consisting of Hua Capital Management Co., Ltd., CITIC Capital and Goldstone Investment Co., Ltd.

History
OmniVision was founded in 1995 by Aucera Technology (TAIWAN:奧斯來科技).

Some company milestones:
 * 1999: First application-specific integrated circuit (ASIC)
 * 2000: Initial public offering (IPO)
 * 2005: Acquired CDM-Optics, a company founded to commercialize wavefront coding
 * 2010: Acquired Aurora Systems, adding liquid crystal on silicon (LCOS) to its product line
 * 2011: Acquired Kodak patents
 * 2015: Signed an agreement in April to be acquired by a group of Chinese investors, including Hua Capital Management, CITIC Capital Holdings and GoldStone Investment, for about $1.9 billion in cash
 * 2016: Became a private company following the buyout by the Chinese private equity consortium
 * 2018/2019: Acquired by Will Semiconductor (for $2.178 billion), which merged it with SuperPix Micro Technology to form OmniVision Group
 * 2019: Earned a Guinness World Record for the world's smallest commercially available image sensor, the OV6948, used in the CameraCubeChip

OmniPixel3-HS
OmniVision's front-side illumination (FSI) technology is used to manufacture compact cameras for mobile handsets, notebook computers and other applications that require low-light performance without the need for flash.

OmniPixel3-GS expands on its predecessor and is used for eye tracking, facial authentication, and other computer vision applications.

OmniBSI
Backside-illuminated (BSI) image sensor technology differs from FSI architectures in how light reaches the photosensitive area of the sensor. In FSI architectures, the light must first pass through transistors, dielectric layers, and metal circuitry. In contrast, OmniBSI technology turns the image sensor upside down and applies color filters and micro lenses to the backside of the pixels, so that light is collected through the backside of the sensor.

OmniBSI-2
The second-generation BSI technology, developed in cooperation with Taiwan Semiconductor Manufacturing Company Limited (TSMC), is built using custom 65 nm design rules and a 300 mm copper process. These changes were made to improve low-light sensitivity, dark current, and full-well capacity, and to produce a sharper image.

CameraCubeChip
In this camera module, sensor and lens manufacturing processes are combined using semiconductor stacking methodology. Wafer-level optical elements are fabricated in a single step by combining CMOS image sensors, chip-scale packaging (CSP) processes, and wafer-level optics (WLO). These fully integrated chip products provide camera functionality and are intended to produce thin, compact devices.

RGB-Ir technology
RGB-Ir technology uses a color filter process to improve color fidelity. By committing 25% of its pixel array pattern to infrared (IR) and 75% to RGB, it can simultaneously capture both RGB and IR images, making it possible to capture both day and night images with the same sensor. It is used in battery-powered home security cameras as well as for biometric authentication, such as gesture and facial recognition.
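The 25%/75% split described above can be illustrated with a small sketch. This assumes a hypothetical 2×2 unit cell in which one pixel of a Bayer-style pattern is replaced by an IR pixel; actual OmniVision filter layouts may differ.

```python
import numpy as np

# Hypothetical 2x2 RGB-Ir unit cell: one pixel of a Bayer-style
# pattern is replaced by an IR pixel, giving 25% IR coverage.
# (Illustrative layout only; real OmniVision patterns may differ.)
UNIT_CELL = np.array([["R", "G"],
                      ["IR", "B"]])

def mosaic(height, width):
    """Tile the unit cell over a sensor of even height x width."""
    return np.tile(UNIT_CELL, (height // 2, width // 2))

def channel_fraction(cfa, name):
    """Fraction of pixels assigned to a given channel."""
    return float(np.mean(cfa == name))

cfa = mosaic(8, 8)
print(channel_fraction(cfa, "IR"))      # 0.25 -> 25% IR pixels
print(1 - channel_fraction(cfa, "IR"))  # 0.75 -> 75% RGB pixels
```

Separating the `IR` positions from the rest of the mosaic is what lets one sensor deliver both a visible-light image and a night-vision IR image.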

PureCel technologies
OmniVision developed its PureCel and PureCel Plus image sensor technology to provide added camera functionality to smartphones and action cameras. The technical goal was to provide smaller camera modules that enable larger optical formats and offer improved image quality, especially in low-light conditions.

Both of these technologies are offered in a stacked-die format (PureCel-S and PureCelPlus-S). This stacked-die methodology separates the imaging array from the image sensor processing pipeline, allowing additional functionality to be implemented on the sensor while yielding much smaller die sizes than non-stacked sensors. PureCelPlus-S uses partial backside deep trench isolation (B-DTI) structures comprising an interfacial oxide; deposited HfO, TaO and oxide layers; a Ti-based liner; and a tungsten core. This is OmniVision's first DTI structure, and the first metal-filled B-DTI trench since 2013.

PureCel Plus uses a buried color filter array (BCFA) to improve tolerance to varying incident light angles. Deep trench isolation reduces crosstalk by creating isolation walls between pixels within the silicon. In PureCel Plus Gen 2, OmniVision set out to improve deep trench isolation for better pixel isolation and low-light performance; its target application is smartphone video cameras.

Nyxel
Developed to address the low-light and night-vision performance requirements of advanced machine vision, surveillance, and automotive camera applications, OmniVision's Nyxel NIR imaging technology combines thick-silicon pixel architectures and careful management of the wafer surface texture to improve quantum efficiency (QE). In addition, extended deep trench isolation helps retain modulation transfer function without affecting the sensor's dark current, further improving night vision capabilities. Performance improvements include image quality, extended image-detection range and a reduced light-source requirement, leading to overall lower system power consumption.

Nyxel 2
This second-generation near-infrared technology improves on the first by increasing the silicon thickness to improve imaging sensitivity. Deep trench isolation was extended to address crosstalk without impacting modulation transfer function, and the wafer surface was refined to extend the photon path and increase photon-electron conversion. The sensor achieves a 25% improvement at the invisible 940 nm NIR wavelength and a 17% increase at the barely visible 850 nm NIR wavelength over the first-generation technology.

LED flicker mitigation and high dynamic range
High-dynamic-range (HDR) imaging relies on algorithms that combine several image captures into one, creating a higher-quality image than a single native capture. LED lighting can create a flicker effect under HDR. This is a problem for machine vision systems, such as those used in autonomous vehicles, because LEDs are ubiquitous in automotive environments, from headlights to traffic lights, road signs and beyond. While the human eye can adapt to LED flickering, machine vision cannot. To mitigate this effect, OmniVision uses split-pixel technology: a large photodiode captures the scene using a short exposure time, while a small photodiode simultaneously captures the LED signal using a long exposure. The two images are then combined into a final, flicker-free picture.
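A minimal sketch of the flicker problem and the split-pixel remedy, under simplified assumptions (a 90 Hz pulsed LED, idealized exposures, and a naive merge rule of the author's own choosing rather than OmniVision's actual pipeline):

```python
import numpy as np

def led(t, freq=90.0, duty=0.5):
    """Pulsed (PWM) LED: 1.0 while on, 0.0 while off."""
    return ((t * freq) % 1.0 < duty).astype(float)

def exposure(signal, start, duration, dt=1e-4):
    """Average light collected over an exposure window."""
    t = np.arange(start, start + duration, dt)
    return float(np.mean(signal(t)))

# Short exposure (large photodiode) can fall entirely inside the
# LED's off phase and miss the light source completely.
short = exposure(led, start=0.006, duration=0.001)   # 1 ms window

# Long exposure (small photodiode) spans a full PWM period, so it
# always integrates some of the LED signal.
long_ = exposure(led, start=0.006, duration=1 / 90)  # ~11.1 ms window

# Naive merge rule for illustration: trust the long capture wherever
# the short capture saw no light, keeping the LED visible.
merged = short if short > 0 else long_
print(short, long_, merged)  # short is 0.0; long_ and merged are ~0.5
```

The short capture here records zero signal (the LED was off for the whole window), while the long capture records roughly the LED's 50% duty cycle, which is why the merged result remains flicker-free.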

CMOS image sensors
OmniVision CMOS image sensors range in resolution from below one megapixel to 64 megapixels. In 2009, the company received orders from Apple for both 3.2-megapixel and 5-megapixel CMOS image sensors.

ASIC
OmniVision also manufactures application-specific integrated circuits (ASICs) as companion products for its image sensors used in automotive, medical, augmented reality and virtual reality (AR/VR), and IoT applications.

CameraCubeChip
OmniVision's CameraCubeChip is a fully packaged, wafer-level camera module measuring 0.65 mm × 0.65 mm. It is being integrated into disposable endoscopes and catheters with diameters as small as 1.0 mm, used for a range of medical procedures from diagnostics to minimally invasive surgery. The OV6948 sensor used measures 0.575 mm × 0.575 mm and has a resolution of 200 × 200 pixels.

LCOS
OmniVision manufactures liquid crystal on silicon (LCOS) projection technology for display applications.

In 2018, Magic Leap used OmniVision's LCOS technology and its sensor-bridge ASIC in the Magic Leap One augmented reality headset.

Markets and applications
The digital imaging market has converged into two paths: digital photography and machine vision. While smartphone cameras drove the market for some time, machine vision applications have driven new developments since 2017. Autonomous vehicles, medical devices, miniaturized security cameras, and internet of things (IoT) devices all rely on advanced imaging technologies. OmniVision's image sensors are designed for all imaging market segments, including:
 * Mobile
 * Automotive
 * Security
 * IoT/emerging
 * Computing
 * Medical

The following are examples of OmniVision products that have been adopted by end users:
 * The iPhone 5 front-facing camera is an OV2C3B unit.
 * The Official 5.0 megapixel camera for the Raspberry Pi released in 2013 uses an OV5647.
 * In 2014, Google developed Project Tango, a 3D mapping technology intended to bring AR/VR to mobile applications. Tango contains a number of OmniVision products, including a 4 MP RGB-Ir sensor that enables high-resolution photo and video as well as depth perception in its standard camera, plus a low-power CameraChip.
 * The Arlo home security camera by Netgear is a battery-operated, wireless camera security system. It contains several OmniVision products, including the OV00788 as the camera's image signal processor and the OV9712, a 1 MP progressive-scan CMOS image sensor with video capture capability.
 * The Ring doorbell uses an HD camera containing an OmniVision OV9712 1 MP image sensor and an OmniVision H.264 video compression chip used for video processing.
 * Sony's PlayStation Camera contains two OV9713 CMOS image sensors, as well as two USB bridge ASICs. It also appears to include an OV580 ASIC made specifically for Sony.
 * Automotive system supplier ZF included OmniVision CMOS image sensors in its fourth-generation S-Cam, in both the mono-camera and triple-camera set-ups.
 * As of June 2020, the rear Autopilot camera on the Tesla Model S/X/3/Y uses the OV10635 720p CMOS sensor.
 * All five models of Asus' ZenFone 4 smartphone line include dual-camera setups. The mid-range model uses an 8-megapixel OV8856 for both the front camera and the secondary sensor, which provides a 120-degree super-wide view. The ZenFone 4 Selfie uses a lower-resolution 5-megapixel OV5670 as its secondary sensor, also for a super-wide view.
 * The Microsoft Surface Pro 4 comes with an 8 MP rear camera using the OV8865 image sensor and a 5 MP front-facing camera using the OV5693. The rear camera has 1.4 μm pixels and an f/2 aperture for low-light scenarios; the front camera offers a wider field of view for video conferencing, though its image quality is somewhat grainy.
 * Qualcomm's virtual reality design kit (VRDK) was developed to give consumer electronics manufacturers a foundation for building VR headsets based on Qualcomm's Snapdragon VR hardware. To achieve positional tracking, Qualcomm designed in on-board cameras based on the OV9282 global shutter image sensor, which can capture 1,280 × 800 images at 120 Hz, or 640 × 480 images at 180 Hz. Qualcomm chose it citing the sensor's low latency as a good fit for VR headsets.