Broyden's method

In numerical analysis, Broyden's method is a quasi-Newton method for finding roots in $k$ variables. It was originally described by C. G. Broyden in 1965.

Newton's method for solving $f(x) = 0$ uses the Jacobian matrix, $J$, at every iteration. However, computing this Jacobian at every step is a difficult and expensive operation. The idea behind Broyden's method is to compute the whole Jacobian at most once, at the first iteration, and to perform rank-one updates at the other iterations.

In 1979 Gay proved that when Broyden's method is applied to a linear system of size $n × n$, it terminates in $2 n$ steps, although like all quasi-Newton methods, it may not converge for nonlinear systems.

Solving a single-variable equation
In the secant method, we replace the first derivative $f′$ at $x_{n}$ with the finite-difference approximation:


 * $$f'(x_n) \simeq \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n - 1}},$$

and proceed in a manner similar to Newton's method:


 * $$x_{n + 1} = x_n - \frac{f(x_n)}{f^\prime(x_n)}$$

where $n$ is the iteration index.
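The secant iteration above can be sketched in a few lines of Python; the function name `secant` and the stopping test on $|f(x_n)|$ are illustrative choices, not part of the original description:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Find a root of f by the secant method, starting from x0 and x1."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1) < tol:
            return x1
        # finite-difference approximation of f'(x_n)
        fprime = (f1 - f0) / (x1 - x0)
        # Newton-like step using the approximate derivative
        x0, x1 = x1, x1 - f1 / fprime
    return x1

# example: root of x^2 - 2, i.e. sqrt(2)
root = secant(lambda x: x * x - 2.0, 1.0, 2.0)
```

Only the two most recent iterates are kept, so no derivative of $f$ is ever evaluated.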

Solving a system of nonlinear equations
Consider a system of $k$ nonlinear equations
 * $$\mathbf f(\mathbf x) = \mathbf 0 ,$$

where $\mathbf f$ is a vector-valued function of the vector $\mathbf x$:


 * $$\mathbf x = (x_1, x_2, x_3, \dotsc, x_k),$$
 * $$\mathbf f(\mathbf x) = \big(f_1(x_1, x_2, \dotsc, x_k), f_2(x_1, x_2, \dotsc, x_k), \dotsc, f_k(x_1, x_2, \dotsc, x_k)\big).$$

For such problems, Broyden gives a generalization of the one-dimensional Newton's method, replacing the derivative with the Jacobian $J$. The Jacobian matrix is determined iteratively, based on the secant equation in the finite-difference approximation:


 * $$\mathbf J_n (\mathbf x_n - \mathbf x_{n - 1}) \simeq \mathbf f(\mathbf x_n) - \mathbf f(\mathbf x_{n - 1}),$$

where $n$ is the iteration index. For clarity, let us define:


 * $$\mathbf f_n = \mathbf f(\mathbf x_n),$$
 * $$\Delta \mathbf x_n = \mathbf x_n - \mathbf x_{n - 1},$$
 * $$\Delta \mathbf f_n = \mathbf f_n - \mathbf f_{n - 1},$$

so the above may be rewritten as


 * $$\mathbf J_n \Delta \mathbf x_n \simeq \Delta \mathbf f_n.$$

The above equation is underdetermined when $k$ is greater than one. Broyden suggests using the current estimate of the Jacobian matrix $\mathbf J_{n - 1}$ and improving upon it by taking the solution to the secant equation that is a minimal modification to $\mathbf J_{n - 1}$:


 * $$\mathbf J_n = \mathbf J_{n - 1} + \frac{\Delta \mathbf f_n - \mathbf J_{n - 1} \Delta \mathbf x_n}{\|\Delta \mathbf x_n\|^2} \Delta \mathbf x_n^{\mathrm T}.$$

This minimizes the following Frobenius norm:


 * $$\|\mathbf J_n - \mathbf J_{n - 1}\|_{\rm F} .$$

We may then proceed in the Newton direction:


 * $$\mathbf x_{n + 1} = \mathbf x_n - \mathbf J_n^{-1} \mathbf f(\mathbf x_n) .$$
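The whole iteration can be sketched in Python with NumPy. As a hedged illustration, the initial Jacobian is approximated once by forward differences, and the helper names (`fd_jacobian`, `broyden_good`) and the example system are choices made here, not part of the original method:

```python
import numpy as np

def fd_jacobian(f, x, eps=1e-7):
    """Forward-difference Jacobian, computed once at the starting point."""
    fx = f(x)
    J = np.empty((fx.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (f(xp) - fx) / eps
    return J

def broyden_good(f, x0, tol=1e-10, max_iter=100):
    """'Good' Broyden: one Jacobian at the start, rank-one updates after."""
    x = np.asarray(x0, dtype=float)
    J = fd_jacobian(f, x)
    fx = f(x)
    for _ in range(max_iter):
        if np.linalg.norm(fx) < tol:
            break
        dx = np.linalg.solve(J, -fx)          # Newton-like step: J dx = -f(x)
        x = x + dx
        f_new = f(x)
        df = f_new - fx
        # secant update: J_n = J_{n-1} + (df - J dx) dx^T / ||dx||^2
        J += np.outer(df - J @ dx, dx) / (dx @ dx)
        fx = f_new
    return x

# example system: x^2 + y^2 = 2 and x - y = 0, with a root at (1, 1)
F = lambda v: np.array([v[0]**2 + v[1]**2 - 2.0, v[0] - v[1]])
sol = broyden_good(F, [1.5, 0.5])
```

After the first step, each iteration costs one function evaluation and one linear solve; the Jacobian is never recomputed.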

Broyden also suggested using the Sherman–Morrison formula to update the inverse of the Jacobian matrix directly:


 * $$\mathbf J_n^{-1} = \mathbf J_{n - 1}^{-1} + \frac{\Delta \mathbf x_n - \mathbf J^{-1}_{n - 1} \Delta \mathbf f_n}{\Delta \mathbf x_n^{\mathrm T} \mathbf J^{-1}_{n - 1} \Delta \mathbf f_n} \Delta \mathbf x_n^{\mathrm T} \mathbf J^{-1}_{n - 1}.$$

This first method is commonly known as the "good Broyden's method".
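One can check numerically that the two forms agree: updating $\mathbf J$ directly and then inverting gives the same matrix as applying the Sherman–Morrison update to $\mathbf J^{-1}$. A small sketch with randomly generated data, purely illustrative:

```python
import numpy as np

# random well-conditioned test data (illustrative, not a real iteration)
rng = np.random.default_rng(1)
J_prev = np.eye(3) + 0.1 * rng.standard_normal((3, 3))
Jinv_prev = np.linalg.inv(J_prev)
dx, df = rng.standard_normal(3), rng.standard_normal(3)

# direct rank-one update of J ("good" Broyden)
J_new = J_prev + np.outer(df - J_prev @ dx, dx) / (dx @ dx)

# Sherman–Morrison update of the inverse
Jinv_new = Jinv_prev + np.outer(dx - Jinv_prev @ df,
                                dx @ Jinv_prev) / (dx @ Jinv_prev @ df)
```

Working with $\mathbf J^{-1}$ replaces the linear solve in each step with a matrix-vector product.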

A similar technique can be derived by using a slightly different modification to $\mathbf J_{n - 1}$. This yields a second method, the so-called "bad Broyden's method":
 * $$\mathbf J_n^{-1} = \mathbf J_{n - 1}^{-1} + \frac{\Delta \mathbf x_n - \mathbf J^{-1}_{n - 1} \Delta \mathbf f_n}{\|\Delta \mathbf f_n\|^2} \Delta \mathbf f_n^{\mathrm T}.$$

This minimizes a different Frobenius norm:


 * $$\|\mathbf J_n^{-1} - \mathbf J_{n - 1}^{-1}\|_{\rm F}.$$
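The bad method can be sketched analogously; since it maintains the inverse directly, each step needs no linear solve. As an illustrative assumption (not prescribed by the method), the iteration below is seeded with the exact inverse Jacobian of the example system at the starting point:

```python
import numpy as np

def broyden_bad(f, x0, Jinv0, tol=1e-10, max_iter=100):
    """'Bad' Broyden: rank-one updates of the inverse Jacobian estimate."""
    x = np.asarray(x0, dtype=float)
    Jinv = np.asarray(Jinv0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        if np.linalg.norm(fx) < tol:
            break
        dx = -Jinv @ fx                       # step without any linear solve
        x = x + dx
        f_new = f(x)
        df = f_new - fx
        # inverse update: Jinv += (dx - Jinv df) df^T / ||df||^2
        Jinv += np.outer(dx - Jinv @ df, df) / (df @ df)
        fx = f_new
    return x

# same example system as before, seeded with the exact inverse Jacobian at x0
F = lambda v: np.array([v[0]**2 + v[1]**2 - 2.0, v[0] - v[1]])
Jinv0 = np.linalg.inv(np.array([[3.0, 1.0], [1.0, -1.0]]))
sol = broyden_bad(F, [1.5, 0.5], Jinv0)
```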

Many other quasi-Newton schemes have been suggested in optimization, where one seeks a maximum or minimum by finding the root of the first derivative (the gradient in multiple dimensions). The Jacobian of the gradient is called the Hessian and is symmetric, adding further constraints to its update.

The Broyden class of methods
In addition to the two methods described above, Broyden defined a whole class of related methods. In general, methods in the Broyden class are given in the form

 * $$\mathbf J_{k+1} = \mathbf J_k - \frac{\mathbf J_k s_k s_k^{\mathrm T} \mathbf J_k}{s_k^{\mathrm T} \mathbf J_k s_k} + \frac{y_k y_k^{\mathrm T}}{y_k^{\mathrm T} s_k} + \phi_k \left(s_k^{\mathrm T} \mathbf J_k s_k\right) v_k v_k^{\mathrm T},$$

where

 * $$y_k := \mathbf f(\mathbf x_{k+1}) - \mathbf f(\mathbf x_k),$$
 * $$s_k := \mathbf x_{k+1} - \mathbf x_k,$$
 * $$v_k = \left[\frac{y_k}{y_k^{\mathrm T} s_k} - \frac{\mathbf J_k s_k}{s_k^{\mathrm T} \mathbf J_k s_k}\right],$$

and $\phi_k \in \mathbb{R}$ for each $k = 1, 2, \dotsc$. The choice of $\phi_k$ determines the method.
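A useful sanity check: every member of the class satisfies the secant equation $\mathbf J_{k+1} s_k = y_k$ regardless of $\phi_k$, because the first two terms cancel when applied to $s_k$, the third contributes $y_k$, and $v_k^{\mathrm T} s_k = 0$. A minimal numerical sketch (random data, purely illustrative):

```python
import numpy as np

def broyden_class_update(J, s, y, phi):
    """One Broyden-class update of J; phi selects the family member."""
    Js = J @ s
    sJs = s @ Js
    v = y / (y @ s) - Js / sJs
    return (J - np.outer(Js, Js) / sJs
              + np.outer(y, y) / (y @ s)
              + phi * sJs * np.outer(v, v))

# random data; the secant equation J_new s = y holds for any phi
rng = np.random.default_rng(0)
J = np.eye(3) + 0.1 * rng.standard_normal((3, 3))
s, y = rng.standard_normal(3), rng.standard_normal(3)
J_new = broyden_class_update(J, s, y, phi=0.5)
```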

Other methods in the Broyden class have been introduced by other authors.
 * The Davidon–Fletcher–Powell (DFP) method is the only member of this class that was published before the two methods defined by Broyden. For the DFP method, $\phi_k = 1$.
 * Schubert's or sparse Broyden algorithm – a modification for sparse Jacobian matrices.
 * Klement (2014) – uses fewer iterations to solve many equation systems.