User:BorisMitrovic

Image Division, also known as Pixel Division or Rationing, is an arithmetic image operation. It is a topic in computer vision, discussed in CVonline.

Introduction
Image division is a pixelwise arithmetic image operation which, given two input images of the same size, produces a resulting image of the same size, where each pixel value in the resulting image is the quotient of the two corresponding pixel values from the input images. Because the result is a ratio of the two images, this operation is also known as rationing.

A pixelwise image operation acts only on positionally corresponding pixels from the two input images, so each pixel in the resulting image is computed independently of all the others. Compared to other image operations, pixelwise operations are therefore very fast. Image division is often used to detect changes in a sequence of images (for example, in a video sequence) and for foreground extraction (a process which extracts the foreground from an image, given a background).

Theory
In pixel division, the resulting image is obtained by applying the mathematical division operation to each (positionally) corresponding pixel pair of the two input images. This requires the images to have the same dimensions. The formula for computing the resulting image can be written in the following way:

$$Q_{ij} = I_{ij} \div D_{ij},$$

for every pixel at location (i, j), where Q is the resulting image, I is the dividend image, and D is the divisor image.

The pixel values of the resulting image do not generally span the original value range, so image normalisation is commonly applied to map the result back onto that range.
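The formula and normalisation step can be sketched in NumPy. This is a minimal illustration, assuming grayscale 8-bit inputs; the function name and the small `eps` guard against division by zero are assumptions of the sketch, not part of the formula above.

```python
import numpy as np

def divide_images(dividend, divisor, eps=1e-8):
    """Pixelwise image division followed by linear normalisation.

    dividend, divisor: 2-D uint8 arrays of equal shape (grayscale images).
    eps is a small constant added to the divisor to avoid division by
    zero; it is an assumption of this sketch, not part of the formula.
    """
    # Q_ij = I_ij / D_ij, computed in floating point.
    q = dividend.astype(np.float64) / (divisor.astype(np.float64) + eps)
    # Linear normalisation: stretch the quotient back onto [0, 255].
    q_min, q_max = q.min(), q.max()
    if q_max > q_min:
        q = (q - q_min) / (q_max - q_min) * 255.0
    else:
        q = np.zeros_like(q)
    return q.astype(np.uint8)

# Toy 2x2 example (assumed data, for illustration only).
I = np.array([[100, 200], [50, 250]], dtype=np.uint8)
D = np.array([[50, 50], [50, 50]], dtype=np.uint8)
print(divide_images(I, D))
```

Without the normalisation step, most quotients would cluster near small values and the result would look almost uniformly dark when displayed.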

If the images have multiple channels (e.g. RGB, HSV, CMYK), the operation is simply performed on each channel independently.
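In an array library such as NumPy, no explicit per-channel loop is needed: elementwise division over a height x width x channels array already applies the operation to every channel. A small sketch with assumed random data:

```python
import numpy as np

# Assumed toy data: two 2x2 three-channel (e.g. RGB) images.
rng = np.random.default_rng(0)
a = rng.integers(1, 256, size=(2, 2, 3)).astype(np.float64)
b = rng.integers(1, 256, size=(2, 2, 3)).astype(np.float64)

# Elementwise division covers all channels at once: the operation is
# applied independently to every (i, j, c) entry.
q = a / b

# Equivalent channel-by-channel computation, shown for comparison.
q_loop = np.stack([a[..., c] / b[..., c] for c in range(3)], axis=-1)
assert np.allclose(q, q_loop)
```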

Example
The following images come from a static camera stationed above a table football pitch, which was used by two robots playing football in order to estimate their position and orientation relative to the pitch. The images used here have been converted to grayscale, so there is only one channel. This example shows foreground extraction, also known as background removal.

The first image is a background image of the football pitch, i.e. it was taken before the match took place, and does not contain foreground objects such as the robots and the ball. The second image is a sample image taken during the game, and both the robots and the ball are present in it. The third image results from the division of the sample image by the background image, after linear normalisation. The fourth image was created by the process of image subtraction rather than image division, showing the result of an alternative (more commonly used) change detection method.

The division was performed on the images after converting them to a floating-point type: if 8-bit integer images were used directly, integer division would be performed, resulting in a significant loss of precision.
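The precision loss from integer division is easy to demonstrate. In the assumed toy data below, ratios such as 1.2 and 0.9 collapse to 1 and 0 under integer division but survive in floating point:

```python
import numpy as np

# Assumed toy data: a sample patch and a flat background patch.
sample = np.array([[120, 90], [60, 200]], dtype=np.uint8)
background = np.array([[100, 100], [100, 100]], dtype=np.uint8)

# Integer division truncates: 120 // 100 -> 1, 90 // 100 -> 0, etc.
int_quotient = sample // background

# Floating-point division preserves the ratios.
float_quotient = sample.astype(np.float64) / background.astype(np.float64)

print(int_quotient)    # ratios collapse to 0, 1 or 2
print(float_quotient)  # 1.2, 0.9, 0.6, 2.0
```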

Applications
Pixel division can be used to detect changes in a sequence of images. An example of this is the process known as foreground extraction. In order to perform foreground extraction on an image, it is necessary to have an image of the background. A background image is an image taken from the exact same camera position as the main image, but in which foreground objects are absent.

Another common technique used to detect changes in a sequence of images is image subtraction, which is faster than image division. However, image division can perform better than image subtraction when lighting varies considerably across the image, as it better models illumination: illumination tends to act multiplicatively on pixel values, and division cancels a multiplicative factor whereas subtraction does not. For this reason image division is sometimes used in microscopy, where it is quite simple to obtain a background image and the lighting conditions of the scene can vary considerably.
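The illumination argument can be made concrete with a toy model in which each pixel value is reflectance times illumination. Under a purely multiplicative lighting change, the ratio image is constant while the difference image is not. All data below is assumed for illustration:

```python
import numpy as np

# Assumed toy scene: a reflectance pattern under spatially varying light.
reflectance = np.array([[0.2, 0.8], [0.5, 0.4]])
light_a = np.array([[1.0, 1.0], [0.5, 0.5]])   # lighting at background frame
light_b = light_a * 1.5                        # same pattern, 50% brighter

background = reflectance * light_a
sample = reflectance * light_b  # no foreground change, only lighting changed

# Division cancels the multiplicative lighting factor:
ratio = sample / background    # 1.5 everywhere -> "no change" detected
# Subtraction does not: the difference still depends on the scene content.
diff = sample - background

print(ratio)
print(diff)
```

A change detector thresholding the ratio image would correctly report no change here, while one thresholding the difference image could flag bright scene regions as spurious changes.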