User:ParaTechNoid/ABfilter

An alpha beta filter (or alpha-beta filter) is a simplified form of state observer for estimation, data smoothing and control applications. It is closely related to Kalman filters and to state observers used in control theory.

Overview
An alpha beta filter presumes that a sampled data set results from measuring a system having two states, where the first state is obtained by integrating the value of the second state over time. This is an adequate approximation for many simple systems, for example, mechanical systems where position is obtained as the time integral of velocity. Based on a mechanical system analogy, the two states will be called position x and velocity v. Assuming that velocity remains approximately constant for the small increment of time T between measurements, the position state can be projected forward to the next time interval using

$$ \hat{\textbf{x}}_{k} = \hat{\textbf{x}}_{k-1} + \textrm{T}\ \hat{\textbf{v}}_{k-1} $$

If additional information about a driving function for the v state is known, it can be used to project v forward in the same manner as the x state. Otherwise, the usual assumption is that v is approximately constant:

$$ \hat{\textbf{v}}_{k} = \hat{\textbf{v}}_{k-1} $$

The system output corresponds to the measured value of the state x. The deviation (also called the residual or innovation) between the measured x and the projected value can be calculated. Call this difference r.

$$ \textbf{r}_{k} = \textbf{x}_{k} - \hat{\textbf{x}}_{k} $$

Suppose that the residual r is positive. This could mean that the previous x estimate was low, that the previous v estimate was low, or some combination of the two. The alpha beta filter takes selected alpha and beta constants (from which the filter gets its name), uses alpha times the deviation r to correct the position estimate, and uses beta times the deviation r to correct the velocity estimate. An extra T factor conventionally serves to approximately normalize the magnitudes of the two multipliers. The alpha and beta gains are selected to trade off noise suppression against tracking lag, yielding the following estimates.

$$ \hat{\textbf{x}}_{k} = \hat{\textbf{x}}_{k} + \alpha\ \textbf{r}_{k} $$

$$ \hat{\textbf{v}}_{k} = \hat{\textbf{v}}_{k} + (\beta / \textrm{T})\ \textbf{r}_{k} $$

Over a large number of updates, the effects of the corrections accumulate. The state estimates often track actual state values closely, even when the measurements of x contain substantial levels of zero-mean random noise. This ability to estimate states more accurately than they can be measured directly is the motivation for calling the alpha-beta process a filter. Sometimes the results are sufficiently good that more sophisticated and elaborate methods do not offer any significant improvement.
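The predict-and-correct cycle described above can be sketched in a few lines of code. This is a minimal illustration, not a prescribed implementation; the gain values, sample interval, and noisy-ramp test signal are illustrative choices.

```python
import random

def alpha_beta_filter(measurements, dt, alpha, beta, x0=0.0, v0=0.0):
    """Return position estimates for a sequence of noisy position measurements."""
    x, v = x0, v0
    estimates = []
    for z in measurements:
        # Predict: project the position forward assuming constant velocity.
        x += dt * v
        # Residual (innovation) between the measurement and the prediction.
        r = z - x
        # Correct position and velocity using the alpha and beta gains.
        x += alpha * r
        v += (beta / dt) * r
        estimates.append(x)
    return estimates

# Usage: track an object moving at 1 unit/s from noisy position readings.
random.seed(0)
truth = [t * 0.1 for t in range(100)]              # dt = 0.1 s, velocity = 1
noisy = [p + random.gauss(0, 0.5) for p in truth]
smoothed = alpha_beta_filter(noisy, dt=0.1, alpha=0.3, beta=0.01)
```

Note that the velocity estimate is never measured directly; it is inferred entirely from the sequence of position residuals.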

Relationship to general state observers
More general state observers, such as the Luenberger observer for linear control models, use a rigorous system model. As a generalization of the alpha and beta gains, linear observers use a gain matrix to determine state estimate corrections from multiple observed deviations. As with alpha beta filters (but unlike Kalman filters), there is no general theory for determining the best observer gain terms.

In linear control applications, the Luenberger observer equations reduce to the alpha beta filter by applying the following specializations and simplifications.

 * The discrete state transition matrix A is a square matrix of dimension 2, with all main diagonal terms equal to 1, and T on the super-diagonal.
 * The observation equation matrix C has one row that selects the value of the first state variable for output.
 * The filter correction gain matrix L has one column containing the alpha and beta gain values.
 * Any known driving signal for the second state term is represented as part of the input signal vector u; otherwise, the u vector is set to zero.
 * The input coupling matrix B has a non-zero gain term as its last element if u is non-zero.
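A one-step numerical check makes the reduction concrete: with the specialized A, C, and L matrices listed above (and u set to zero), the matrix observer update reproduces the scalar alpha-beta update exactly. The numerical values chosen for T, the gains, and the states are arbitrary illustrations.

```python
import numpy as np

T, alpha, beta = 0.1, 0.5, 0.1

A = np.array([[1.0, T],        # constant-velocity transition matrix
              [0.0, 1.0]])
C = np.array([[1.0, 0.0]])     # observe only the first state (position)
L = np.array([[alpha],         # alpha and beta gains in one column
              [beta / T]])

x_hat = np.array([[2.0], [1.0]])   # current estimate [position; velocity]
z = 2.4                            # new position measurement

# Observer form: predict, then correct with the gain matrix (B u = 0).
x_pred = A @ x_hat
r = z - (C @ x_pred).item()        # scalar residual
x_new = x_pred + L * r

# Scalar alpha-beta form of the same step.
xp = x_hat[0, 0] + T * x_hat[1, 0]
rr = z - xp
ab_x = xp + alpha * rr
ab_v = x_hat[1, 0] + (beta / T) * rr
```

Both forms yield the same corrected position and velocity, confirming that the alpha beta filter is this observer with the matrices specialized as described.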

Relationship to Kalman filters
A Kalman filter estimates the values of state variables and corrects them in a manner similar to a state observer or an alpha beta filter. However, a Kalman filter does this in a much more formal and rigorous manner. The principal differences between Kalman filters and alpha beta filters are the following:

 * Like a state observer, a Kalman filter uses a detailed dynamic system model that is not restricted to two states.
 * A Kalman filter uses covariance noise models for the states and the observations.
 * Like observers, Kalman filters can use multiple observed variables to correct state variable estimates; alpha beta filtering uses only one residual.
 * The state estimate covariance is time-varying and adjusted automatically. Uncertainty about the initial state of the system can be represented by an appropriate choice of the initial state covariance matrix.
 * The alpha and beta gains are generalized to a Kalman gain matrix, computed automatically from the system and covariance models.
 * Within certain limitations, a Kalman filter is Wiener optimal.
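The connection between the two filters can be seen numerically: for the two-state constant-velocity model, iterating the Kalman filter's covariance recursion drives the gain to a constant vector whose two entries play the roles of alpha and beta/T. The process and measurement noise magnitudes below are illustrative assumptions, not values from the article.

```python
import numpy as np

T = 0.1
A = np.array([[1.0, T], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.array([[T**3 / 3, T**2 / 2],   # process noise (white acceleration)
                     [T**2 / 2, T]])
R = np.array([[0.25]])                        # measurement noise variance

P = np.eye(2)          # initial state covariance
for _ in range(500):   # iterate predict/update until the gain settles
    P = A @ P @ A.T + Q
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    P = (np.eye(2) - K @ C) @ P

alpha_eq = K[0, 0]          # position gain, playing the role of alpha
beta_eq = K[1, 0] * T       # velocity gain, playing the role of beta
```

Once the gain has converged, running the Kalman filter is equivalent to running an alpha beta filter with these steady-state gains; the fixed-gain filter simply skips the covariance bookkeeping.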

The alpha beta gamma extension
It is sometimes useful to extend the assumptions of the alpha beta filter one level. The second state variable v is presumed to be obtained from integrating a third acceleration a state, analogous to the way that the first state is obtained by integrating the second. An equation for integrating the a state to obtain v is added to the equation system. A third multiplier, gamma, is selected for applying corrections to the new a state estimates. The alpha beta gamma update equations then become

$$ \hat{\textbf{x}}_{k} = \hat{\textbf{x}}_{k} + \alpha\ \textbf{r}_{k} $$

$$ \hat{\textbf{v}}_{k} = \hat{\textbf{v}}_{k} + (\beta / \textrm{T})\ \textbf{r}_{k} $$

$$ \hat{\textbf{a}}_{k} = \hat{\textbf{a}}_{k} + (\gamma / 2\textrm{T}^{2})\ \textbf{r}_{k} $$

Extensions to additional higher orders are possible, but typically are not as useful.
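One predict/correct cycle of the alpha beta gamma filter can be sketched as follows. The prediction step integrates the acceleration state into the velocity and position projections, and the corrections follow the update equations above; the gain values and the constant-acceleration test trajectory are illustrative choices.

```python
def abg_step(x, v, a, z, dt, alpha, beta, gamma):
    """One predict/correct cycle of the alpha-beta-gamma filter."""
    # Predict assuming constant acceleration over the interval dt.
    x += dt * v + 0.5 * dt * dt * a
    v += dt * a
    # Residual against the new position measurement.
    r = z - x
    # Corrections per the alpha, beta, gamma update equations.
    x += alpha * r
    v += (beta / dt) * r
    a += (gamma / (2 * dt * dt)) * r
    return x, v, a

# Usage: track a noise-free trajectory with constant acceleration 2 units/s^2.
dt = 0.1
x_est, v_est, a_est = 0.0, 0.0, 0.0
for k in range(1, 301):
    t = k * dt
    z = 0.5 * 2.0 * t * t     # true position under constant acceleration
    x_est, v_est, a_est = abg_step(x_est, v_est, a_est, z, dt,
                                   alpha=0.5, beta=0.4, gamma=0.1)
```

With the third state included, the filter can follow a constant-acceleration trajectory without the steady-state lag that a plain alpha beta filter would exhibit.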