Additive state decomposition

Additive state decomposition is the decomposition of a system into two or more subsystems, each with the same dimension as the original system. It contrasts with the decomposition commonly used in the control field, in which a system is decomposed into two or more lower-order subsystems (called lower-order subsystem decomposition here).

Take a system $P$ with $\dim(P) = n$ as an example, and suppose it is decomposed into two subsystems $P_{p}$ and $P_{s}$ with $\dim(P_{p}) = n_{p}$ and $\dim(P_{s}) = n_{s}$, respectively. The lower-order subsystem decomposition satisfies


 * $$n = n_p + n_s\text{ and } P = P_p \oplus P_s$$

By contrast, the additive state decomposition satisfies


 * $$n = n_p = n_s \text{ and } P = P_p + P_s$$

On a dynamical control system
Consider an 'original' system as follows:
 * $$\dot{x}=f(t,x,u),$$ $$x(0)=x_0$$

where $$x\in\R^n$$.

First, a 'primary' system is brought in, having the same dimension as the original system:
 * $$\dot{x}_p=f_p(t,x_p,u_p),$$ $$x_p(0)=x_{p,0}$$

where $$x_p\in\R^n.$$

From the original system and the primary system, the following 'secondary' system is derived:
 * $$\dot{x} - \dot{x}_p = f(t,x,u) - f_p (t,x_p,u_p),$$ $$x(0)-x_p(0) = x_0 - x_{p,0}$$

A new variable $$x_s\in\R^n$$ is defined as follows:
 * $$x_s := x - x_p,$$ $$x_s(0) = x_0 - x_{p,0}$$
Then the secondary system can be further written as follows:
 * $$\dot{x}_s = f(t,x_p+x_s,u) - f_p(t,x_p,u_p),$$ $$x_s(0) = x_0 - x_{p,0}$$
From the definition of $$x_s$$, it follows that
 * $$x(t)=x_p(t)+x_s(t), $$ $$t\geq 0.$$
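This identity is easy to check numerically. The sketch below is a hypothetical illustration, not part of the original construction: the nonlinear $f$, the linear $f_p$, the inputs, and the initial conditions are all arbitrary choices. It integrates the original, primary, and secondary systems with forward Euler and confirms that $x(t) = x_p(t) + x_s(t)$ up to round-off:

```python
import math

# Hypothetical dynamics, chosen for illustration only.
def f(t, x, u):      # original dynamics (nonlinear)
    return -x**3 + u

def f_p(t, x, u):    # primary dynamics (linear, chosen freely)
    return -x + u

u   = lambda t: math.sin(t)   # common input (assumed)
u_p = lambda t: 0.0           # primary input (assumed)

dt, steps = 1e-3, 5000
x, x_p = 1.0, 0.4             # x(0) = x_0, x_p(0) = x_{p,0}
x_s = x - x_p                 # x_s(0) = x_0 - x_{p,0}

t, max_err = 0.0, 0.0
for _ in range(steps):
    dx   = f(t, x, u(t))
    dx_p = f_p(t, x_p, u_p(t))
    dx_s = f(t, x_p + x_s, u(t)) - f_p(t, x_p, u_p(t))  # secondary dynamics
    x, x_p, x_s = x + dt * dx, x_p + dt * dx_p, x_s + dt * dx_s
    t += dt
    max_err = max(max_err, abs(x - (x_p + x_s)))

print(max_err)  # stays at floating-point round-off level
```

Note that the secondary system evaluates $f$ at $x_p + x_s$, not at its own state alone, which is exactly what makes the sum reproduce the original trajectory.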

The process is shown in this picture:

Example 1
In fact, the idea of additive state decomposition has been implicitly used in the existing literature. One example is tracking controller design, which often requires a reference system to derive the error dynamics. The reference system (primary system) is assumed to be given as follows:


 * $$\dot{x}_r=f(t,x_r,u_r),$$ $$x_r(0)=x_{r,0}$$

Based on the reference system, the error dynamics (secondary system) are derived as follows:
 * $$\dot{x}_e =f(t,x_e+x_r,u)-f(t,x_r,u_r),$$ $$x_e(0)=x_0-x_{r,0}$$

where $$x_e=x-x_r$$.

This is a commonly used step to transform a tracking problem to a stabilization problem when adaptive control is used.
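This step can be checked numerically. In the sketch below (a hypothetical example: the dynamics $f$, inputs, and initial conditions are assumptions), the original system, the reference system, and the error dynamics are integrated side by side, and $x_e(t) = x(t) - x_r(t)$ is verified along the way:

```python
import math

def f(t, x, u):                 # shared dynamics (assumed for illustration)
    return -x + math.tanh(u)

u   = lambda t: math.cos(t)     # plant input (assumed)
u_r = lambda t: 0.5             # reference input (assumed)

dt, steps = 1e-3, 5000
x, x_r = 2.0, 1.0               # x(0) = x_0, x_r(0) = x_{r,0}
x_e = x - x_r                   # x_e(0) = x_0 - x_{r,0}

t, max_err = 0.0, 0.0
for _ in range(steps):
    dx   = f(t, x, u(t))
    dx_r = f(t, x_r, u_r(t))
    dx_e = f(t, x_e + x_r, u(t)) - f(t, x_r, u_r(t))  # error dynamics
    x, x_r, x_e = x + dt * dx, x_r + dt * dx_r, x_e + dt * dx_e
    t += dt
    max_err = max(max_err, abs(x_e - (x - x_r)))

print(max_err)
```

This is the additive state decomposition in the special case $f_p = f$, $u_p = u_r$: the reference system plays the role of the primary system and the error dynamics play the role of the secondary system.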

Example 2
Consider a class of systems as follows:

Choose the system above as the original system and design the primary system as follows:

Then the secondary system is determined by the subtraction rule described above:

By additive state decomposition
 * $$e(t)=e_p(t)+e_s(t)$$

Since
 * $$ \| e(t) \| \le \| e_p(t) \| + \| e_s(t) \|$$

the tracking error $e(t)$ can be analyzed through $e_{p}(t)$ and $e_{s}(t)$ separately. If $e_{p}(t)$ and $e_{s}(t)$ are bounded and small, then so is $e(t)$. Fortunately, the primary system is linear time-invariant and independent of the secondary system, so many tools, such as the transfer function, are available for its analysis. By contrast, the transfer function tool cannot be applied directly to the original system, since it is time-varying.

Example 3
Consider a class of nonlinear systems as follows:

where $x, y, u$ represent the state, output, and input, respectively, and the function $\phi(\cdot)$ is nonlinear. The objective is to design $u$ such that $y − r → 0$ as $t → ∞$. Choose the system above as the original system and design the primary system as follows:

Then the secondary system is determined by the subtraction rule described above:

where $u_{s} = u_{p}$. Then $x = x_{p} + x_{s}$ and $y = y_{p} + y_{s}$. Here, the task $y_{p} − r → 0$ is assigned to the linear time-invariant primary system (a linear time-invariant system being simpler than a nonlinear one), while the task $x_{s} → 0$ is assigned to the nonlinear secondary system (a stabilizing control problem being simpler than a tracking problem). If the two tasks are accomplished, then $y − r = (y_{p} − r) + y_{s} → 0$. The basic idea is to decompose an original system into two subsystems in charge of simpler subtasks, design a controller for each subtask, and finally combine the controllers to achieve the original control task. The process is shown in this picture:

Comparison with superposition principle
A well-known example implicitly using additive state decomposition is the superposition principle, widely used in physics and engineering. The superposition principle states: For all linear systems, the net response at a given place and time caused by two or more stimuli is the sum of the responses which would have been caused by each stimulus individually. For a simple linear system:
 * $$\dot{x}=Ax+B(u_1+u_2)$$, $$x(0)=0$$

the statement of the superposition principle means $x = x_{p} + x_{s}$, where
 * $$\dot{x}_p = A x_p + Bu_1, x_p(0) = 0$$
 * $$\dot{x}_s = A x_s + Bu_2, x_s(0) = 0$$

Obviously, this result can also be derived from the additive state decomposition. Moreover, the two are related as follows: the superposition principle holds only for linear systems, whereas additive state decomposition can be applied to both linear and nonlinear systems.
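The superposition identity can also be verified numerically. In the sketch below (the stable matrices $A$, $B$ and the stimuli are arbitrary illustrative choices), one copy of the system is driven by $u_1 + u_2$ and two copies by $u_1$ and $u_2$ separately, all from zero initial state, and $x = x_p + x_s$ is checked at every step:

```python
import math

# Example system matrices for x' = A x + B u (assumptions for illustration).
A = [[0.0, 1.0], [-2.0, -3.0]]
B = [0.0, 1.0]

def step(x, u, dt):
    # One forward-Euler step of x' = A x + B u.
    dx0 = A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u
    dx1 = A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u
    return [x[0] + dt * dx0, x[1] + dt * dx1]

u1 = lambda t: math.sin(t)   # stimuli (assumed)
u2 = lambda t: 1.0

dt, steps = 1e-3, 4000
x, x_p, x_s = [0.0, 0.0], [0.0, 0.0], [0.0, 0.0]
t, max_err = 0.0, 0.0
for _ in range(steps):
    x   = step(x, u1(t) + u2(t), dt)   # response to both stimuli
    x_p = step(x_p, u1(t), dt)         # response to u1 alone
    x_s = step(x_s, u2(t), dt)         # response to u2 alone
    t += dt
    max_err = max(max_err,
                  abs(x[0] - x_p[0] - x_s[0]),
                  abs(x[1] - x_p[1] - x_s[1]))

print(max_err)
```

Because the dynamics are linear and all three copies start from the zero state, the decomposition holds exactly, and the simulated discrepancy stays at round-off level.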

Applications
Additive state decomposition is used in stabilizing control, and can be extended to additive output decomposition.