Harmonic balance

Harmonic balance is a method used to calculate the steady-state response of nonlinear differential equations, and is mostly applied to nonlinear electrical circuits. It is a frequency domain method for calculating the steady state, as opposed to the various time-domain steady-state methods. The name "harmonic balance" is descriptive of the method, which starts with Kirchhoff's Current Law written in the frequency domain and a chosen number of harmonics. A sinusoidal signal applied to a nonlinear component in a system will generate harmonics of the fundamental frequency. Effectively the method assumes a linear combination of sinusoids can represent the solution, then balances current and voltage sinusoids to satisfy Kirchhoff's law. The method is commonly used to simulate circuits which include nonlinear elements, and is most applicable to systems with feedback in which limit cycles occur.

Microwave circuits were the original application for harmonic balance methods in electrical engineering. Microwave circuits were well-suited because, historically, they consist of many linear components, which can be directly represented in the frequency domain, plus a few nonlinear components, and system sizes were typically small. For more general circuits, the method was considered impractical for all but very small systems until the mid-1990s, when Krylov subspace methods were applied to the problem. The application of preconditioned Krylov subspace methods allowed much larger systems to be solved, both in the size of the circuit and in the number of harmonics. This made practical the present-day use of harmonic balance methods to analyze radio-frequency integrated circuits (RFICs).

Example
Consider the differential equation $$\ddot x + x^3 = 0$$. We use the ansatz solution $$x = A \cos(\omega t)$$, and plugging in, we obtain $$-A\omega^2 \cos(\omega t) + A^3 \frac 14 (\cos(3\omega t) + 3\cos(\omega t) ) = 0.$$

Then by matching the $$\cos(\omega t)$$ terms, we have $$\omega = \sqrt{\frac 34} A$$, which yields approximate period $$T = \frac{2\pi}{\omega} \approx \frac{7.2552}{A}$$.
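The one-harmonic balance above can be checked numerically. The sketch below (amplitude $$A = 1$$ is an arbitrary choice) spot-checks the triple-angle identity used in the expansion and evaluates the matched frequency and period:

```python
import math

# One-harmonic ansatz x = A*cos(w*t) for x'' + x^3 = 0.
# cos^3(u) = (cos(3u) + 3*cos(u))/4, so matching the cos(w*t) terms gives
# -A*w**2 + (3/4)*A**3 = 0, i.e. w = sqrt(3/4)*A.
u = 0.7                       # spot-check the triple-angle identity
assert abs(math.cos(u)**3 - 0.25 * (math.cos(3*u) + 3*math.cos(u))) < 1e-12

A = 1.0                       # arbitrary amplitude
w = math.sqrt(0.75) * A
T = 2 * math.pi / w
print(round(T, 4))            # -> 7.2552
```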

For a more exact approximation, we use the ansatz $$x = A_1 \cos(\omega t) + A_3 \cos(3\omega t)$$. Plugging this in and matching the $$\cos(\omega t)$$ and $$\cos(3\omega t)$$ terms, we obtain after routine algebra: $$\omega = \sqrt{\frac 34} A_1 \sqrt{1 + y + 2y^2}, \quad y = A_3/A_1, \quad 51y^3 + 27 y^2 + 21 y - 1 = 0.$$

The cubic equation for $$y$$ has only one real root, $$y \approx 0.0448$$. Writing the amplitude as $$A = A_1 + A_3 = A_1(1+y)$$, we obtain the approximate period $$T = \frac{2\pi(1+y)}{\sqrt{\frac 34} A \sqrt{1 + y + 2y^2}} \approx \frac{7.402}{A}.$$ Thus we approach the exact solution $$T = 7.4163\cdots/A$$.
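The two-harmonic refinement can also be verified numerically. The sketch below solves the cubic for $$y$$ by bisection (the bracket $$[0, 0.1]$$ is chosen by inspection), evaluates the resulting period for $$A = 1$$, and compares against a direct RK4 integration of $$\ddot x + x^3 = 0$$, using the fact that the time from $$x = A$$, $$\dot x = 0$$ to $$x = 0$$ is a quarter period:

```python
import math

# Solve 51*y^3 + 27*y^2 + 21*y - 1 = 0 for y = A3/A1 by bisection.
f = lambda y: 51 * y**3 + 27 * y**2 + 21 * y - 1
lo, hi = 0.0, 0.1                     # f(0) < 0 < f(0.1); f is monotone here
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if f(mid) > 0 else (mid, hi)
y = 0.5 * (lo + hi)

A = 1.0
T2 = 2 * math.pi * (1 + y) / (math.sqrt(0.75) * A * math.sqrt(1 + y + 2 * y**2))

# Reference period by RK4 on the system x' = v, v' = -x^3:
# integrate from x = A, v = 0 until x crosses zero (a quarter period).
x, v, h, t = A, 0.0, 1e-4, 0.0
while x > 0:
    k1x, k1v = v, -x**3
    k2x, k2v = v + 0.5 * h * k1v, -(x + 0.5 * h * k1x)**3
    k3x, k3v = v + 0.5 * h * k2v, -(x + 0.5 * h * k2x)**3
    k4x, k4v = v + h * k3v, -(x + h * k3x)**3
    x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
    v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    t += h
Texact = 4 * t
print(round(y, 4), round(T2, 3), round(Texact, 3))
```

The printed values match the root $$y \approx 0.0448$$, the two-harmonic period $$\approx 7.402$$, and the exact period $$\approx 7.416$$ quoted above.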

Algorithm
The harmonic balance algorithm is a special version of Galerkin's method. It is used for the calculation of periodic solutions of autonomous and non-autonomous differential-algebraic systems of equations. The treatment of non-autonomous systems is slightly simpler than the treatment of autonomous ones. A non-autonomous DAE system has the representation

$$0=F(t,x,\dot x)$$ with a sufficiently smooth function $$F:\mathbb{R}\times\mathbb{C}^n\times\mathbb{C}^n\rightarrow\mathbb{C}^n$$, where $$n$$ is the number of equations and $$t,x,\dot x$$ are placeholders for time, the vector of unknowns, and the vector of time derivatives.

The system is non-autonomous if the function $$t\in\mathbb{R}\mapsto F(t,x,\dot x)$$ is not constant for (some) fixed $$x$$ and $$\dot x$$. Nevertheless, we require that there is a known excitation period $$T>0$$ such that $$t\in\mathbb{R}\mapsto F(t,x,\dot x)$$ is $$T$$-periodic.

A natural candidate set for the $$T$$-periodic solutions of the system equations is the Sobolev space $$H^1_{\rm per}((0,T),\mathbb{C}^n)$$ of weakly differentiable functions on the interval $$[0,T]$$ with periodic boundary conditions $$x(0)=x(T)$$. We assume that the smoothness and the structure of $$F$$ ensures that $$F(t,x(t),\dot x(t))$$ is square-integrable for all $$x\in H^1_{\rm per}((0,T),\mathbb{C}^n)$$.

The system $$B:=\left\{\psi_k \mid k\in\mathbb{Z}\right\}$$ of harmonic functions $$\psi_k:=\exp\left(i k\frac{2\pi t}{T}\right)$$ is a Schauder basis of $$H^1_{\rm per}((0,T),\mathbb{C}^n)$$ and forms a Hilbert basis of the Hilbert space $$H:=L^2([0,T],\mathbb{C})$$ of square-integrable functions. Therefore, each solution candidate $$x\in H^1_{\rm per}((0,T),\mathbb{C}^n)$$ can be represented by a Fourier series $$x(t)=\sum_{k=-\infty}^\infty \hat x_k \exp\left(i k\frac{2\pi t}{T}\right)$$ with Fourier coefficients $$\hat x_k:=\frac1T\int_0^T\psi^*_k(t)\cdot x(t)\,dt$$, and the system equation is satisfied in the weak sense if, for every basis function $$\psi\in B$$, the variational equation

$$0=\langle \psi, F(t,x,\dot x)\rangle_H := \frac 1 T \int_0^T \psi^*(t) \cdot F(t,x,\dot x)\, dt$$ is fulfilled. This variational equation represents an infinite sequence of scalar equations, since it has to be tested for the infinite number of basis functions $$\psi$$ in $$B$$.
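As a quick numerical check of the coefficient formula $$\hat x_k = \frac1T\int_0^T\psi^*_k(t)\, x(t)\,dt$$, the sketch below recovers the Fourier coefficients of a known trial signal by approximating the integral with a Riemann sum on a uniform grid (the signal, period, and grid size are arbitrary choices):

```python
import cmath, math

# Trial signal x(t) = 2 + cos(2*pi*t/T): its coefficients are
# x_hat_0 = 2 and x_hat_{+-1} = 1/2, all others zero.
T, M = 2.0, 1000

def x(t):
    return 2 + math.cos(2 * math.pi * t / T)

def coeff(k):
    # (1/T) * integral of conj(psi_k) * x over one period, as a Riemann sum.
    s = sum(cmath.exp(-1j * k * 2 * math.pi * m / M) * x(T * m / M)
            for m in range(M))
    return s / M

print(abs(coeff(0) - 2), abs(coeff(1) - 0.5), abs(coeff(2)))  # all near zero
```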

The Galerkin approach to harmonic balance is to project both the candidate set and the test space for the variational equation onto the finite-dimensional subspace spanned by the finite basis $$B_N:=\{\psi_k \mid k\in\mathbb{Z}\text{ with } -N \leq k \leq N\}$$.

This gives the finite-dimensional solution $$ x(t) = \sum_{k=-N}^N \hat x_k \psi_k(t) = \sum_{k=-N}^N \hat x_k \exp\left(i k \frac{2\pi t}{T}\right)$$ and the finite set of equations

$$0 = \langle \psi_k, F(t,x,\dot x)\rangle\quad\text{ with }k=-N,\ldots,N,$$ which can be solved numerically.
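One way to solve such a truncated system is sketched below for a hypothetical scalar test equation $$\dot x + x + x^3 = 0.1\cos t$$ with $$T = 2\pi$$: the linear part is inverted exactly in the frequency domain, the cubic nonlinearity is evaluated in the time domain, and the iterate is projected onto the retained harmonics $$|k|\le N$$ after each step. The equation, drive amplitude, and grid sizes are all illustrative choices; the weak drive keeps this simple fixed-point iteration contractive.

```python
import numpy as np

# Harmonic balance for x' + x + x^3 = 0.1*cos(t), period T = 2*pi.
M, N = 64, 5                           # time samples, retained harmonics
t = 2 * np.pi * np.arange(M) / M
k = np.fft.fftfreq(M) * M              # integer harmonic indices, FFT order
keep = np.abs(k) <= N                  # projection onto the finite basis B_N
E = np.fft.fft(0.1 * np.cos(t)) / M    # excitation spectrum: 0.05 at k = +-1

x = np.zeros(M)
for _ in range(50):
    C = np.fft.fft(x**3) / M           # nonlinear term, frequency domain
    X = (E - C) / (1 + 1j * k)         # balance: (i*k + 1) * X_k = E_k - C_k
    X[~keep] = 0                       # drop harmonics outside B_N
    x = np.real(np.fft.ifft(X * M))

# Check the balance: residual of the ODE on the time grid.
xdot = np.real(np.fft.ifft(1j * k * np.fft.fft(x)))
residual = np.max(np.abs(xdot + x + x**3 - 0.1 * np.cos(t)))
print(residual)                        # limited only by the truncation at N
```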

In the special context of electronics, the algorithm starts with Kirchhoff's current law written in the frequency-domain. To increase the efficiency of the procedure, the circuit may be partitioned into its linear and nonlinear parts, since the linear part is readily described and calculated using nodal analysis directly in the frequency domain.

First, an initial guess is made for the solution, then an iterative process continues:

1. Voltages $$V$$ are used to calculate the currents of the linear part, $$I_\text{linear}$$, in the frequency domain.
2. Voltages $$V$$ are then used to calculate the currents in the nonlinear part, $$I_\text{nonlinear}$$. Since nonlinear devices are described in the time domain, the frequency-domain voltages $$V$$ are transformed into the time domain, typically using the inverse fast Fourier transform. The nonlinear devices are then evaluated using the time-domain voltage waveforms to produce their time-domain currents, which are transformed back into the frequency domain.
3. According to Kirchhoff's circuit laws, the sum of the currents must be zero: $$\epsilon = I_\text{linear} + I_\text{nonlinear} = 0$$. An iterative process, usually Newton's method, is used to update the network voltages $$V$$ so that the current residual $$\epsilon$$ is reduced. This step requires formulation of the Jacobian $$\tfrac{d\epsilon}{dV}$$.

Convergence is reached when $$\epsilon$$ is acceptably small, at which point all voltages and currents of the steady-state solution are known, most often represented as Fourier coefficients.
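The Newton step can be sketched for a hypothetical one-node circuit: a unit capacitor and unit conductance in parallel with a cubic resistor, driven by a current source $$0.1\cos t$$, so the KCL residual is $$\epsilon(t) = \dot v + v + v^3 - 0.1\cos t$$. The unknowns are the real Fourier coefficients of the node voltage, the residual is projected onto the retained harmonics, and the Jacobian $$\tfrac{d\epsilon}{dV}$$ is approximated by finite differences (all component values and sizes are illustrative):

```python
import numpy as np

M, N = 64, 5                           # time samples, retained harmonics
t = 2 * np.pi * np.arange(M) / M

def synth(p):
    """Build v(t) and v'(t) from parameters [Re V0, Re V1, Im V1, ...]."""
    v, vd = np.full(M, p[0]), np.zeros(M)
    for k in range(1, N + 1):
        a, b = p[2 * k - 1], p[2 * k]
        v += 2 * (a * np.cos(k * t) - b * np.sin(k * t))
        vd += 2 * k * (-a * np.sin(k * t) - b * np.cos(k * t))
    return v, vd

def resid(p):
    """Project the KCL residual eps(t) onto the retained harmonics."""
    v, vd = synth(p)
    eps = vd + v + v**3 - 0.1 * np.cos(t)
    out = [np.mean(eps)]
    for k in range(1, N + 1):
        c = np.mean(eps * np.exp(-1j * k * t))
        out += [c.real, c.imag]
    return np.array(out)

p = np.zeros(2 * N + 1)                # initial guess: all coefficients zero
for _ in range(8):                     # Newton with finite-difference Jacobian
    r, J, e = resid(p), np.zeros((2 * N + 1, 2 * N + 1)), 1e-7
    for j in range(2 * N + 1):
        dp = np.zeros_like(p); dp[j] = e
        J[:, j] = (resid(p + dp) - r) / e
    p -= np.linalg.solve(J, r)
print(np.max(np.abs(resid(p))))        # projected residual, near machine zero
```

In production simulators the Jacobian is assembled analytically from the device models rather than by finite differences, which is what makes large harmonic-balance systems tractable.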