
The guided filter is a kind of edge-preserving smoothing filter. Like the bilateral filter, it can filter out noise or texture while retaining sharp edges.

Compared with the bilateral filter, the guided image filter has two advantages. First, the bilateral filter has high computational complexity, whereas the guided image filter avoids complicated mathematical operations and has linear computational complexity. Second, because of its mathematical model, the bilateral filter sometimes produces unwanted gradient reversal artifacts that distort the image. The guided image filter, being based on a local linear combination, produces an output whose gradient direction is consistent with that of the guidance image, so gradient reversal does not occur.

Introduction
Derived from a local linear model, the guided filter computes the filtering output by considering the content of a guidance image, which can be the input image itself or another different image. The guided filter can be used as an edge-preserving smoothing operator like the popular bilateral filter, but has better behaviors near edges. The guided filter is also a more generic concept beyond smoothing: it can transfer the structures of the guidance image to the filtering output, enabling new filtering applications like dehazing and guided feathering. Moreover, the guided filter naturally has a fast and non-approximate linear time algorithm, regardless of the kernel size and the intensity range. Currently it is one of the fastest edge-preserving filters. Guided filter is both effective and efficient in a great variety of computer vision and computer graphics applications including edge-aware smoothing, detail enhancement, HDR compression, image matting/feathering, dehazing, joint upsampling, etc.

Definition
The key assumption of the guided filter is a local linear model between the guidance $$I$$ and the filtering output $$q$$. Assume that $$q$$ is a linear transform of $$I$$ in a window $$\omega_k$$ centered at the pixel $$k$$. In order to determine the linear coefficients $$(a_k, b_k)$$, constraints from the filtering input $$p$$ are required. The output $$q$$ is modeled as the input $$p$$ minus some unwanted components $$n$$, such as noise or textures.

The following is the basic model of the guided image filter:

(1)　　$$q_i = a_k I_i + b_k, \forall i \in \omega_k$$

(2)　　$$q_{i} = p_{i} - n_{i}$$

In the above formulas, $$q_{i}$$ is the $$i$$-th output pixel, $$p_i$$ is the $$i$$-th input pixel, $$n_{i}$$ is the $$i$$-th pixel of the noise component, $$I_i$$ is the $$i$$-th guidance image pixel, and $$(a_k, b_k)$$ are linear coefficients assumed to be constant in $$\omega_k$$.

The reason for using a linear combination is that the boundary of an object is related to its gradient. This local linear model ensures that $$q$$ has an edge only if $$I$$ has an edge, because $$ \nabla q = a \nabla I$$.
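The gradient relation follows directly from the linear model: within a window, $$q = aI + b$$, so the offset $$b$$ cancels when taking differences. A tiny numerical check (the pixel values and coefficients here are made up for illustration):

```python
import numpy as np

# Within one window, q = a*I + b (Eq. 1), so finite differences satisfy
# grad(q) = a * grad(I); the offset b cancels out.
a, b = 0.7, 12.0                        # hypothetical linear coefficients
I_row = np.array([10., 20., 40., 45.])  # one row of guidance pixels
q_row = a * I_row + b                   # corresponding output pixels

# The output gradient is the guidance gradient scaled by a.
grads_match = np.allclose(np.diff(q_row), a * np.diff(I_row))
```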

Substituting (1) into (2) gives formula (3); to minimize the noise term while keeping the linear model, define the cost function (4):

(3)　　$$n_{i} = p_{i} - a_k I_{i} - b_k $$

(4)　　$$E(a_{k},b_{k})=\sum_{i\in\omega_{k}}\left((a_{k}I_{i} + b_{k} - p_{i})^{2} + \epsilon a_{k}^{2}\right)$$

In the above formula, $$\epsilon$$ is a regularization parameter penalizing large $$a_{k}$$, and $$\omega_{k}$$ is a window centered at the pixel $$k$$. The solution minimizing the cost function is given by:

(5)　　$$a_{k} = \frac{\frac{1}{\left|\omega\right|}\sum_{i\in\omega_{k}}I_{i}p_{i} - \mu_{k}\bar{p}_{k}}{\sigma^{2}_{k}+\epsilon}$$

(6)　　$$b_{k} = \bar{p}_{k} - a_{k}\mu_{k}$$

Here, $$\mu_{k}$$ and $$\sigma^{2}_{k}$$ are the mean and variance of $$I$$ in $$\omega_{k}$$, $$\left|\omega\right|$$ is the number of pixels in $$\omega_{k}$$, and $$\bar{p}_{k} = \frac{1}{\left|\omega\right|}\sum_{i\in\omega_{k}}p_{i}$$ is the mean of $$p$$ in $$\omega_{k}$$.

Having obtained the linear coefficients $$(a_k, b_k)$$, we can compute the filtering output $$q_i$$ by (1).
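For illustration, equations (5) and (6) can be evaluated directly on a single window with NumPy. This is a sketch with made-up pixel values, using the guidance image itself as the input ($$I = p$$, self-guidance):

```python
import numpy as np

# Hypothetical 3x3 window; I = p (self-guidance).
I = np.array([[10., 10., 10.],
              [10., 50., 10.],
              [10., 10., 10.]])
p = I.copy()
eps = 100.0                      # regularization parameter

mu_k = I.mean()                  # mean of I in the window
sigma2_k = I.var()               # variance of I in the window
p_bar = p.mean()                 # mean of p in the window

# Eq. (5): a_k, with the correlation term computed as a window mean.
a_k = ((I * p).mean() - mu_k * p_bar) / (sigma2_k + eps)
# Eq. (6): b_k.
b_k = p_bar - a_k * mu_k

# Eq. (1): output for the center pixel of the window.
q_center = a_k * I[1, 1] + b_k
```

With $$I = p$$, the numerator of (5) reduces to the window variance, so $$a_k = \sigma_k^2/(\sigma_k^2+\epsilon)$$.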

Algorithm
With the above formulas, the algorithm can be written as:

'''Algorithm 1. Guided Filter'''

input: filtering input image $$p$$, guidance image $$I$$, window radius $$r$$, regularization $$\epsilon$$

output: filtering output $$q$$

1. $$mean_{I} = f_{mean}(I)$$, $$mean_{p} = f_{mean}(p)$$, $$corr_{I} = f_{mean}(I.*I)$$, $$corr_{Ip} = f_{mean}(I.*p)$$

2. $$var_{I} = corr_{I} - mean_{I}.*mean_{I}$$, $$cov_{Ip} = corr_{Ip} - mean_{I}.*mean_{p}$$

3. $$a = cov_{Ip}./(var_{I} + \epsilon)$$, $$b = mean_{p} - a.*mean_{I}$$

4. $$mean_{a} = f_{mean}(a)$$, $$mean_{b} = f_{mean}(b)$$

5. $$q = mean_{a}.*I + mean_{b}$$

Here $$f_{mean}$$ is a mean (box) filter, for which a variety of O(N) time methods exist, such as those based on integral images; the running time is therefore independent of the window radius $$r$$.
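A minimal NumPy/SciPy sketch of Algorithm 1 for grayscale images might look as follows. Here $$f_{mean}$$ is realized with `scipy.ndimage.uniform_filter`, an O(N) box mean; the function name and parameter layout are choices for this sketch, not part of the original algorithm:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r, eps):
    """Guided filter following Algorithm 1 for grayscale images.

    I:   guidance image (2-D float array)
    p:   filtering input image (same shape as I)
    r:   window radius; f_mean averages over a (2r+1) x (2r+1) box
    eps: regularization parameter
    """
    fm = lambda x: uniform_filter(x, size=2 * r + 1)  # f_mean

    mean_I = fm(I)                       # step 1: window means and correlations
    mean_p = fm(p)
    corr_I = fm(I * I)
    corr_Ip = fm(I * p)

    var_I = corr_I - mean_I * mean_I     # step 2: variance and covariance
    cov_Ip = corr_Ip - mean_I * mean_p

    a = cov_Ip / (var_I + eps)           # step 3: per-window linear coefficients
    b = mean_p - a * mean_I

    mean_a = fm(a)                       # step 4: average coefficients over
    mean_b = fm(b)                       #         overlapping windows

    return mean_a * I + mean_b           # step 5: filtering output q
```

Calling `guided_filter(img, img, r, eps)` (self-guidance) yields edge-preserving smoothing; a larger `eps` gives stronger smoothing.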

Properties
When the guidance $$I$$ is identical to the filtering input $$p$$, the guided filter behaves as an edge-preserving smoothing operator.
 * Edge-Preserving Filtering

Specifically, the criterion of a “flat patch” or a “high variance” one is given by the parameter $$\epsilon$$. The patches with variance $$(\sigma^2)$$ much smaller than $$\epsilon$$ are smoothed, whereas those with variance much larger than $$\epsilon$$ are preserved. The effect of $$\epsilon$$ in the guided filter is similar to the range variance $$\sigma_r^2$$ in the bilateral filter. Both determine “what is an edge/a high variance patch that should be preserved.”
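This behavior can be checked numerically: under self-guidance ($$I = p$$), equation (5) reduces to $$a_k = \sigma_k^2/(\sigma_k^2+\epsilon)$$, so a flat patch gives $$a_k \approx 0$$ (the pixel is pulled toward the window mean) while a high-variance patch gives $$a_k \approx 1$$ (the pixel is preserved). A small sketch with made-up patches:

```python
import numpy as np

eps = 0.01  # regularization parameter

# Flat patch: variance far below eps -> a_k ~ 0, i.e. smoothing.
flat = np.full(9, 0.5)
flat[1] += 1e-3
flat[2] -= 1e-3
var_flat = flat.var()
a_flat = var_flat / (var_flat + eps)    # a_k under self-guidance, Eq. (5)

# High-variance patch (a step edge): variance far above eps -> a_k ~ 1,
# i.e. the edge is preserved.
edge = np.array([0., 0., 0., 0., 0., 1., 1., 1., 1.])
var_edge = edge.var()
a_edge = var_edge / (var_edge + eps)
```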


 * Gradient-Preserving Filtering


 * Structure-Transferring Filtering