
Context-adaptive binary arithmetic coding (CABAC) is a form of entropy encoding used in H.264/MPEG-4 AVC video encoding; it is also part of the draft HEVC standard. It is a lossless compression technique, notable for providing much better compression than most other entropy encoding algorithms used in video coding, and it is one of the primary advantages of the H.264/AVC encoding scheme. CABAC is supported only in the Main and higher profiles of the H.264/MPEG-4 AVC standard, requires considerably more processing to decode than comparable algorithms, and is difficult to parallelize and vectorize. As a result, context-adaptive variable-length coding (CAVLC), a lower-efficiency entropy encoding scheme, is used instead to improve performance on slower playback devices.

Background
The basis for CABAC is arithmetic coding, but several modifications and simplifications have been made to adapt it to the requirements of H.264.

Notably:

 * Use of binary symbols
 * Use of a few quantized states to represent the range and probabilities

The use of binary symbols has several benefits:
 * Context modelling on a sub-symbol level, which allows modelling of only frequently used bins or bits of a symbol
 * Simplified modelling and coding using multiplier-free schemes

The use of a few quantized states to represent probability ranges and symbol probabilities allows calculations to be done with a look-up table, and hence avoids multiplications.
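The idea can be sketched as follows. This is an illustrative example, not the normative H.264 tables: the table values, the number of states, and the quantization of the range are all assumptions made for the sketch.

```python
# With the range and the symbol probability each quantized to a few states,
# the sub-range split that a full arithmetic coder would compute as R * p_LPS
# becomes a table lookup instead of a multiplication.

# Hypothetical 4x4 table: rows index a quantized probability state,
# columns index the top two bits of the current (9-bit) range R.
RANGE_LPS = [
    [128, 176, 208, 240],  # state 0: LPS probability near 0.5
    [64,  112, 144, 176],  # state 1
    [32,   48,  64,  80],  # state 2
    [16,   24,  32,  40],  # state 3: LPS very unlikely
]

def lps_subrange(state: int, R: int) -> int:
    """Return the LPS sub-range via lookup instead of computing R * p_LPS."""
    q = (R >> 6) & 3  # quantize the 9-bit range to 4 buckets
    return RANGE_LPS[state][q]
```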

Algorithm
CABAC has multiple probability modes for different contexts. It first converts all non-binary symbols to binary. Then, for each bit, the coder selects which probability model to use, then uses information from nearby elements to optimize the probability estimate. Arithmetic coding is finally applied to compress the data.



The context modeling provides estimates of the conditional probabilities of the coding symbols. By selecting suitable context models, inter-symbol redundancy can be exploited: the coder switches between different probability models according to already-coded symbols in the neighborhood of the current symbol. Context modeling is responsible for most of CABAC’s roughly 10% bit-rate savings over the CAVLC entropy coding method.

Coding a data symbol involves the following stages.


 * Binarization: CABAC uses Binary Arithmetic Coding which means that only binary decisions (1 or 0) are encoded. A non-binary-valued symbol (e.g. a transform coefficient or motion vector) is “binarized” or converted into a binary code prior to arithmetic coding. This process is similar to the process of converting a data symbol into a variable length code but the binary code is further encoded (by the arithmetic coder) prior to transmission.


 * Context model selection: A “context model” is a probability model for one or more bins of the binarized symbol. This model may be chosen from a selection of available models depending on the statistics of recently coded data symbols. The context model stores the probability of each bin being “1” or “0”.

 * Arithmetic encoding: An arithmetic coder encodes each bin according to the selected probability model. Note that there are just two sub-ranges for each bin (corresponding to “0” and “1”).

 * Probability update: The selected context model is updated based on the actual coded value (e.g. if the bin value was “1”, the frequency count of “1”s is increased).

The last three stages are repeated for each bit (or “bin”) of the binarized symbol.
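The stages above can be sketched for one symbol as follows. This is a minimal illustration, not the normative H.264 procedure: the helper names (`Context`, `binarize`, `code_symbol`), the plain unary binarization, and the frequency-count probability model are all simplifying assumptions.

```python
class Context:
    """An adaptive probability model for a bin position, tracked as
    frequency counts of observed 0s and 1s."""
    def __init__(self):
        self.count = [1, 1]

    def p_one(self) -> float:
        return self.count[1] / sum(self.count)

    def update(self, bin_val: int):
        self.count[bin_val] += 1  # probability update stage

def binarize(value: int) -> list:
    return [1] * value + [0]  # simple unary binarization

def code_symbol(value: int, contexts: list) -> list:
    """Return (bin, P(bin=1)) pairs; a real coder would feed each pair
    to the arithmetic encoding engine."""
    out = []
    for i, b in enumerate(binarize(value)):
        ctx = contexts[min(i, len(contexts) - 1)]  # context model selection
        out.append((b, ctx.p_one()))               # input to arithmetic encoding
        ctx.update(b)                              # probability update
    return out
```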

Example
1. Binarize the value MVDx (the x-component of the motion vector difference).



The first bit of the binarized codeword is bin 1; the second bit is bin 2; and so on.

2. Choose a context model for each bin. One of 3 models is selected for bin 1, based on previously coded MVD values. The L1 norm of two previously coded values, e_k, is calculated:

e_k = |MVD_A| + |MVD_B|

where A and B are two neighboring, previously coded blocks.
If e_k is small, then there is a high probability that the current MVD will have a small magnitude; conversely, if e_k is large then it is more likely that the current MVD will have a large magnitude. We select a probability table (context model) accordingly. The remaining bins are coded using one of 4 further context models.
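The selection rule for bin 1 can be sketched as below; the specific thresholds (3 and 32) are illustrative values in the spirit of the H.264 design, not quoted from the standard.

```python
def select_mvd_context(e_k: int) -> int:
    """Pick one of 3 context models for bin 1 of an MVD component,
    based on e_k, the L1 norm of two neighboring coded MVD values."""
    if e_k < 3:
        return 0   # small neighbors: current MVD is likely small
    elif e_k > 32:
        return 2   # large neighbors: current MVD is likely large
    else:
        return 1   # in between
```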



3. Encode each bin. The selected context model supplies two probability estimates: the probability that the bin contains “1” and the probability that the bin contains “0”. These estimates determine the two sub-ranges that the arithmetic coder uses to encode the bin.
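One encoding step can be sketched with floating-point arithmetic for clarity; real CABAC uses integer arithmetic with a quantized range and renormalization, not Python floats.

```python
def encode_bin(low: float, rng: float, p_one: float, bin_val: int):
    """Split the current interval [low, low + rng) into two sub-ranges in
    proportion to P(0) and P(1), and keep the sub-range matching the bin."""
    split = rng * (1.0 - p_one)          # width of the "0" sub-range
    if bin_val == 0:
        return low, split                # keep the lower sub-range
    return low + split, rng - split      # keep the upper sub-range
```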

4. Update the context models. For example, if context model 2 was selected for bin 1 and the value of bin 1 was “0”, the frequency count of “0”s is incremented. This means that the next time this model is selected, the probability of a “0” will be slightly higher. When the total number of occurrences of a model exceeds a threshold value, the frequency counts for “0” and “1” are scaled down, which in effect gives higher priority to recent observations.
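The update-and-rescale step can be sketched with a simple frequency-count model; the rescaling threshold of 64 is an illustrative constant, and the actual H.264 engine uses a 64-state transition table rather than explicit counts.

```python
def update_counts(counts: list, bin_val: int, threshold: int = 64) -> list:
    """Increment the count for the observed bin value; once the total exceeds
    the threshold, halve both counts so recent observations weigh more."""
    counts = counts.copy()
    counts[bin_val] += 1
    if sum(counts) > threshold:
        counts = [max(1, c // 2) for c in counts]
    return counts
```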

The arithmetic coding engine
The arithmetic decoder is described in some detail in the Standard. It has three distinct properties:

1. Probability estimation is performed by a transition process between 64 separate probability states for “Least Probable Symbol” (LPS, the least probable of the two binary decisions “0” or “1”).
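A state machine of this kind can be sketched as below. The transition rules here are made up for illustration: the real engine uses two 64-entry tables (one for an MPS observation, one for an LPS observation) defined in the standard.

```python
def next_state(state: int, is_lps: bool) -> int:
    """Step a 64-state LPS-probability estimator.
    Convention assumed here: state 0 means LPS probability near 0.5,
    state 63 means the LPS is very unlikely."""
    if is_lps:
        return max(0, state - 3)   # LPS observed: raise its estimated probability
    return min(63, state + 1)      # MPS observed: lower the LPS probability
```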

2. The range R representing the current state of the arithmetic coder is quantized to a small range of pre-set values before calculating the new range at each step, making it possible to calculate the new range using a look-up table (i.e. multiplication-free).

3. A simplified encoding and decoding process is defined for data symbols with a near-uniform probability distribution. The definition of the decoding process is designed to facilitate low-complexity implementations of arithmetic encoding and decoding. Overall, CABAC provides improved coding efficiency compared with VLC at the expense of greater computational complexity.
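This simplified path can be sketched as follows, again in floating point for clarity: with P(0) = P(1) = 0.5 the interval split is an exact halving, so no probability state or lookup table is needed, and an integer implementation reduces to a shift.

```python
def bypass_encode(low: float, rng: float, bin_val: int):
    """Encode a near-uniform bin: split the interval exactly in half and
    keep the half matching the bin value."""
    half = rng / 2
    return (low, half) if bin_val == 0 else (low + half, half)
```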