Introduction
Deep neural networks have enjoyed much success in all manner of tasks, but these networks are commonly complex, requiring large amounts of energy-intensive memory and floating-point operations. Therefore, to use state-of-the-art networks in applications with limited energy or tight hardware packaging constraints, such as anything not connected to the power grid, the energy costs must be reduced while preserving as much performance as practical.

Most existing methods focus on reducing energy requirements during inference rather than training: since training with SGD requires accumulating many small updates, it usually demands higher precision than inference, so most existing methods compress a model for inference only. This paper proposes a framework that reduces complexity during both training and inference by using integers instead of floats. The authors address how to quantize all operations and operands, and examine the bitwidth requirements for SGD computation and accumulation. Using integers instead of floats saves energy because integer operations are more efficient than floating-point ones. Moreover, dedicated deep learning hardware that uses integer operations already exists (such as the first-generation Google TPU), so understanding the best way to use integers is well motivated. The authors call the framework WAGE because they consider how best to handle the Weights, Activations, Gradients, and Errors separately.

Weight and Activation
Existing works on binary weights and activations still use high-precision accumulation for SGD. Ternary weight networks offer more expressive power than binary weight networks.

Gradient Computation and Accumulation
Some methods quantize gradients in the backward pass, but the weights are still stored in float32, and batch normalization is ignored.

WAGE Quantization
The core idea of the proposed method is to constrain the following quantities to low-bitwidth integers at each layer:
 * W: weight in inference
 * a: activation in inference
 * e: error in backpropagation
 * g: gradient in backpropagation

The error and gradient are defined as

$$e^i = \frac{\partial L}{\partial a^i}, g^i = \frac{\partial L}{\partial W^i}$$

where L is the loss function.

The precisions in bits of the errors, activations, gradients, and weights are $$k_E$$, $$k_A$$, $$k_G$$, and $$k_W$$, respectively. Each quantity also has a quantization operator to reduce the bitwidth increases caused by multiply-accumulate (MAC) operations. Also, note that since this is a layer-by-layer approach, each layer may be preceded or followed by a layer with different precision, or even a layer using floating-point math.

Shift-Based Linear Mapping and Stochastic Mapping
The proposed method makes use of a linear mapping where continuous, unbounded values are discretized for each bitwidth $$k$$ with a uniform spacing of

$$\sigma(k) = 2^{1-k}, \quad k \in \mathbb{N}_+$$

With this, the full quantization function is

$$Q(x,k) = Clip\left \{ \sigma(k) \cdot round\left [ \frac{x}{\sigma(k)} \right ], -1 + \sigma(k), 1 - \sigma(k) \right \}$$

Note that this function is only used when simulating integer operations on floating-point hardware; on native integer hardware, this discretization happens automatically.
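
As a concrete illustration, here is a minimal NumPy sketch of the step size and quantization function; the names `sigma` and `quantize` are ours, not taken from the paper's code.

```python
import numpy as np

def sigma(k):
    # Uniform quantization step size: sigma(k) = 2^(1 - k)
    return 2.0 ** (1 - k)

def quantize(x, k):
    # Round x to the nearest multiple of sigma(k), then clip the
    # result into the representable range [-1 + sigma(k), 1 - sigma(k)]
    s = sigma(k)
    return np.clip(s * np.round(x / s), -1 + s, 1 - s)
```

For example, `quantize(0.7, 2)` yields 0.5, one of the ternary values {-0.5, 0, 0.5}.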

A distribution scaling factor is used in some quantization operators to preserve as much variance as possible when applying the quantization function above. The scaling factor is defined below.

$$Shift(x) = 2^{round(log_2(x))}$$
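
Continuing the sketch, the scaling factor rounds its argument to the nearest integer power of two (again, `shift` is our naming):

```python
def shift(x):
    # Nearest power of two; on integer hardware, multiplying or
    # dividing by this factor reduces to a bit shift
    return 2.0 ** np.round(np.log2(x))
```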

Finally, stochastic rounding substitutes for small, real-valued updates during gradient accumulation.

Weight Initialization
In this work, batch normalization is simplified to a constant scaling layer in order to sidestep the problem of normalizing outputs without floating-point math, and to remove the extra memory that batch normalization requires. As such, some care must be taken when initializing weights. The authors use a modified initialization method based on MSRA.

$$W \sim U(-L, +L), \quad L = max \left \{ \sqrt{6/n_{in}}, L_{min} \right \}, \quad L_{min} = \beta \sigma$$

$$n_{in}$$ is the layer fan-in number, and $$U$$ denotes the uniform distribution. The original MSRA initialization is modified by adding the condition that the distribution width be at least $$L_{min} = \beta \sigma$$, where $$\beta$$ is a constant greater than 1 and $$\sigma$$ is the minimum step size introduced above. This prevents the weights from being initialized to all zeros when the bitwidth is low or the fan-in number is high.
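
A minimal sketch of this initialization, assuming a fully connected layer with fan-in along the first axis, and using $$\beta = 1.5$$ purely as an example value (the paper only requires $$\beta > 1$$):

```python
def init_weights(shape, k_W, beta=1.5):
    # shape = (n_in, n_out); beta > 1 widens the distribution so that
    # low-bitwidth quantization does not zero out all weights
    n_in = shape[0]
    L_min = beta * sigma(k_W)
    L = max(np.sqrt(6.0 / n_in), L_min)
    return np.random.uniform(-L, L, size=shape)
```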

Weight $$Q_W(\cdot)$$
$$W_q = Q_W(W) = Q(W, k_W)$$

The quantization operator is simply the quantization function previously introduced.

Activation $$Q_A(\cdot)$$
The authors note that quantizing the weights scales their variance relative to the variance at initialization. To prevent this effect from blowing up the network outputs, they introduce a scaling factor $$\alpha$$, which is constant for each layer.

$$\alpha = max \left \{ Shift(L_{min} / L), 1 \right \}$$

The quantization operator is then

$$a_q = Q_A(a) = Q(a/\alpha, k_A)$$

The scaling factor approximates batch normalization.
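
In the terms of the running sketch, the activation operator could be written as below, where `L` and `L_min` are the per-layer constants from the initialization section:

```python
def quantize_activation(a, k_A, L, L_min):
    # alpha is a per-layer constant fixed at initialization time
    alpha = max(shift(L_min / L), 1.0)
    return quantize(a / alpha, k_A)
```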

Error $$Q_E(\cdot)$$
The magnitude of the error can vary greatly across layers. A previous approach (DoReFa-Net) handles this by using an affine transform to map the error to the range $$[-1, 1]$$, applying quantization, and then applying the inverse transform. However, the authors argue that this approach still requires float32, and that the magnitude of the error is unimportant: what matters is its orientation. Thus, they only scale the error distribution into the range $$\left [ -\sqrt2, \sqrt2 \right ]$$ and quantize:

$$e_q = Q_E(e) = Q(e/Shift(max\{|e|\}), k_E)$$

Here, $$max\{|e|\}$$ denotes the maximum absolute value over all elements of the error. Note that this discards any error elements smaller than the minimum step size.
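
The corresponding sketch of the error operator:

```python
def quantize_error(e, k_E):
    # Dividing by the power of two closest to max|e| scales the error
    # roughly into [-sqrt(2), sqrt(2)] before quantization
    return quantize(e / shift(np.max(np.abs(e))), k_E)
```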

Gradient $$Q_G(\cdot)$$
Similar to the activations and errors, the gradients are rescaled:

$$g_s = \eta \cdot g/Shift(max\{|g|\})$$

$$\eta$$ is a shift-based learning rate and an integer power of 2. The shifted gradients are represented in units of the minimum step size $$\sigma(k_G)$$. When reducing the bitwidth of the gradients (recall that the gradients come out of a MAC operation, so their bitwidth may have grown), stochastic rounding is used as a substitute for accumulating small gradients.

$$\Delta W = Q_G(g) = \sigma(k_G) \cdot sgn(g_s) \cdot \left \{ \lfloor | g_s | \rfloor + Bernoulli(|g_s| - \lfloor | g_s | \rfloor) \right \}$$

This randomly rounds the result of the MAC operation up or down to the nearest quantization level for the given gradient bitwidth. The weights are then updated with the resulting discrete increments:

$$W_{t+1} = Clip \left \{ W_t - \Delta W_t, -1 + \sigma(k_G), 1 - \sigma(k_G) \right \}$$
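
Putting the gradient operator and the weight update together in the running sketch, with `eta` the shift-based learning rate (an integer power of two):

```python
def quantize_gradient(g, k_G, eta):
    # Rescale so gradients are expressed in units of sigma(k_G)
    gs = eta * g / shift(np.max(np.abs(g)))
    mag = np.abs(gs)
    frac = mag - np.floor(mag)
    # Stochastic rounding: round up with probability equal to the
    # fractional part, otherwise round down
    rounded = np.floor(mag) + (np.random.rand(*np.shape(gs)) < frac)
    return sigma(k_G) * np.sign(gs) * rounded

def update_weights(W, delta_W, k_G):
    # Discrete update, clipped to the representable weight range
    s = sigma(k_G)
    return np.clip(W - delta_W, -1 + s, 1 - s)
```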

Miscellaneous
To train WAGE networks, the authors use pure SGD, because more elaborate techniques such as momentum or RMSProp increase memory consumption and are complicated by the rescaling that happens within each quantization operator.

The quantization and stochastic rounding are a form of regularization.

The authors do not use the traditional softmax with cross-entropy loss in their experiments because no softmax layer for low-bit integers yet exists. Instead, they use a sum-of-squared-error loss. This works for tasks with a small number of categories but does not scale well.
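
For illustration, such a loss is just the squared distance between the network outputs and one-hot targets (a sketch, not the authors' exact implementation):

```python
def sse_loss(outputs, one_hot_labels):
    # Sum of squared errors over all output units
    return np.sum((outputs - one_hot_labels) ** 2)
```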

Experiments
For all experiments, the default layer bitwidth configuration is 2-8-8-8 for the weight, activation, gradient, and error bits. The weight bitwidth is set to 2 because that yields ternary weights, and therefore no multiplication during inference. The authors argue that the bitwidths for activations and errors should be equal because their computation graphs are similar and might share the same hardware. During training, the weight bitwidth is 8; for inference, the weights are ternarized.

Implementation Details
MNIST: Network is LeNet-5 variant

SVHN & CIFAR10: VGG variant

ImageNet: AlexNet variant

Training Curves and Regularization
The authors compare the 2-8-8-8 WAGE configuration introduced above, a 2-8-f-f configuration (where f denotes float32), and a completely floating-point version on CIFAR10, plotting test error against training epoch. For these networks, the learning rate is divided by 8 at the 200th epoch and again at the 250th. The 2-8-8-8 configuration converges comparably to the vanilla CNN and outperforms the 2-8-f-f variant; the authors speculate that the extra discretization acts as a regularizer.

Bitwidth of Errors
The authors plot CIFAR10 test accuracy against the error bitwidth $$k_E$$.

Bitwidth of Gradients
The authors also examined the effect of gradient bitwidth on the ImageNet implementation.

In the reported results, C denotes 12 bits (hexadecimal C) and BN indicates that batch normalization was added.

Discussion
The authors have a few areas they believe this approach could be improved.

MAC Operation: The 2-8-8-8 configuration was chosen because the low weight bitwidth means there is no multiplication during inference. However, this does not remove the need for multiplication during training. A 2-2-8-8 configuration would satisfy that requirement, but it is difficult to train and detrimental to accuracy.

Non-linear Quantization: The linear mapping used in this approach is simple, but there might be a more effective mapping. For example, a logarithmic mapping could be more effective if the weights and activations have a log-normal distribution.

Normalization: Normalization layers (softmax, batch normalization) were not used in this paper. Quantized versions are an area of future work.

Conclusion
A framework for training and inference without the use of floating-point representation is presented. Future work may further improve compression and memory requirements.