
Information field theory (IFT) is a Bayesian statistical field theory relating to signal reconstruction, cosmography, and other related areas. IFT summarizes the information available on a physical field using Bayesian probabilities. It uses computational techniques developed for quantum field theory and statistical field theory to handle the infinite number of degrees of freedom of a field and to derive algorithms for the calculation of field expectation values. For example, the posterior expectation value of a field generated by a known Gaussian process and measured by a linear device with known Gaussian noise statistics is given by a generalized Wiener filter applied to the measured data. IFT extends such known filter formulas to situations with nonlinear devices, non-Gaussian field or noise statistics, dependence of the noise statistics on the field values, and partly unknown measurement parameters. For this it uses Feynman diagrams, renormalization flow equations, and other methods from mathematical physics.

Motivation
Fields play an important role in science, technology, and economy. They describe the spatial variations of a quantity, like the air temperature as a function of position. Knowing the configuration of a field can be of large value. Measurements of fields, however, can never provide the precise field configuration with certainty. Physical fields have an infinite number of degrees of freedom, but the data of any measurement device is always finite, providing only a finite number of constraints on the field. Thus, an unambiguous deduction of such a field from measurement data alone is impossible and only probabilistic inference remains as a means to make statements about the field. Fortunately, physical fields exhibit correlations and often follow known physical laws. Such information is best fused into the field inference in order to overcome the mismatch between field degrees of freedom and measurement points. To handle this, an information theory for fields is needed, and that is what information field theory is.

Bayesian inference
Let $$s(x)$$ be the field value at a location $$x\in\Omega$$ in a space $$\Omega$$. The prior knowledge about the unknown signal field $$s$$ is encoded in the probability distribution $$\mathcal{P}(s)$$. The data $$d$$ provides additional information on $$s$$ via the likelihood $$\mathcal{P}(d|s)$$ that gets incorporated into the posterior probability$$\mathcal{P}(s|d) = \frac{\mathcal{P}(d|s)\,\mathcal{P}(s)}{\mathcal{P}(d)}$$according to Bayes theorem.

Information Hamiltonian
In IFT Bayes theorem is usually rewritten in the language of a statistical field theory, with the information Hamiltonian$$\mathcal{H}(d,s) \equiv -\ln \mathcal{P}(d,s) = -\ln \mathcal{P}(d|s)-\ln \mathcal{P}(s) \equiv \mathcal{H}(d|s)+\mathcal{H}(s),$$the negative logarithm of the joint probability of data and signal, and with the partition function$$\mathcal{Z}(d) \equiv \int \mathcal{D}s\, \mathcal{P}(d,s) = \mathcal{P}(d),$$such that the posterior reads$$\mathcal{P}(s|d) = \frac{e^{-\mathcal{H}(d,s)}}{\mathcal{Z}(d)}.$$This reformulation of Bayes theorem permits the usage of methods of mathematical physics developed for the treatment of statistical field theories and quantum field theories.
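
For a toy model with a single field pixel and Gaussian prior and likelihood, the identification of the partition function with the evidence, $$\mathcal{Z}(d) = \mathcal{P}(d)$$, can be checked by brute-force quadrature. The following Python sketch does this; the numerical parameter values are illustrative assumptions of this example, not part of the theory.
<syntaxhighlight lang="python">
# Single-pixel toy check that Z(d) = \int ds exp(-H(d,s)) equals the evidence P(d).
# All model parameters below are illustrative assumptions.
import numpy as np
from scipy.integrate import quad

S, R, N, d = 2.0, 1.5, 0.5, 1.0   # prior variance, response, noise variance, datum

def hamiltonian(s):
    """H(d,s) = -ln P(d|s) - ln P(s) for Gaussian prior and likelihood."""
    H_likelihood = 0.5 * (d - R * s)**2 / N + 0.5 * np.log(2 * np.pi * N)
    H_prior = 0.5 * s**2 / S + 0.5 * np.log(2 * np.pi * S)
    return H_likelihood + H_prior

Z, _ = quad(lambda s: np.exp(-hamiltonian(s)), -np.inf, np.inf)

# For this linear-Gaussian model the evidence is Gaussian: P(d) = G(d, R S R + N).
P_d = np.exp(-0.5 * d**2 / (R**2 * S + N)) / np.sqrt(2 * np.pi * (R**2 * S + N))
print(Z, P_d)   # the two numbers agree to quadrature accuracy
</syntaxhighlight>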

Fields
As fields have an infinite number of degrees of freedom, the definition of probabilities over spaces of field configurations has subtleties. Identifying physical fields as elements of function spaces of mathematics provides the problem that no Lebesgue measure is defined over the latter and therefore probability densities cannot be defined there. However, physical fields have much more regularity than most elements of function spaces, as they are continuous and smooth at most of their locations. Therefore, less general, but sufficiently flexible constructions can be used to handle the infinite number of degrees of freedom of a field.

A pragmatic approach is to regard the field as discretized in terms of pixels. Each pixel carries a single field value that is assumed to be constant within the pixel volume. All statements about the continuous field then have to be cast into its pixel representation. This way, one deals with finite dimensional field spaces, over which probability densities are well defined.

In order for this description to be a proper field theory, it is further required that the pixel resolution $$\Delta x$$ can always be refined, while expectation values of the discretized field $$s_{\Delta x}$$ converge to finite values:$$\langle f(s) \rangle_{(s|d)}\equiv \lim_{\Delta x \rightarrow 0} \int ds_{\Delta x} f(s_{\Delta x})\, \mathcal{P}(s_{\Delta x}|d). $$

Path integrals
If this limit exists, one can talk about the field configuration space integral or path integral irrespective of the resolution at which it might be evaluated numerically.
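
The convergence of discretized expectation values can be illustrated with a small numerical experiment. The following Python sketch (the domain, covariance kernel, and test functional are assumptions of this example) evaluates the prior expectation of $$f(s) = (\int_\Omega dx\, s(x))^2$$, which equals $$\int\!\!\int dx\, dy\, S(x,y)$$ for a zero mean Gaussian field, on a sequence of ever finer pixelizations of the unit interval.
<syntaxhighlight lang="python">
# Sketch: expectation values of a pixelized field converge as the resolution is
# refined. Domain, kernel, and test functional are illustrative assumptions.
import numpy as np

def covariance(x, y, length=0.2):
    """Assumed smooth prior covariance kernel S(x, y) on Omega = [0, 1]."""
    return np.exp(-0.5 * (x - y)**2 / length**2)

for n_pix in [8, 32, 128, 512]:
    dx = 1.0 / n_pix                        # pixel volume
    x = (np.arange(n_pix) + 0.5) * dx       # pixel centers
    S = covariance(x[:, None], x[None, :])  # discretized covariance matrix
    # <(\int dx s)^2> = sum_ij dx^2 S_ij at this resolution; converges with n_pix
    print(n_pix, dx * dx * S.sum())
</syntaxhighlight>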

Gaussian prior
The simplest prior for a field is that of a zero mean Gaussian probability distribution$$\mathcal{P}(s) = \mathcal{G}(s,S)\equiv \frac{1}{\sqrt{|2\pi S|}}\,e^{-\frac{1}{2}\,s^\dagger S^{-1}\, s}.$$The determinant in the denominator might be ill-defined in the continuum limit $$\Delta x \rightarrow 0$$; however, all that is necessary for IFT to be consistent is that this determinant can be estimated for any finite resolution field representation with $$\Delta x > 0$$ and that this permits the calculation of convergent expectation values.

A Gaussian probability distribution requires the specification of the field two point correlation function $$S \equiv \langle s\, s^\dagger\rangle_{(s)} $$ with coefficients$$S_{xy} = \langle s_x\, \overline{s_y}\rangle_{(s)}$$and a scalar product for continuous fields$$a^\dagger b \equiv \int_\Omega dx \, \overline{a(x)}\, b(x),$$ with respect to which the inverse signal field covariance $$S^{-1}$$ is constructed, i.e. $$(S^{-1} S)_{xy} \equiv \int_\Omega dz\, (S^{-1})_{xz} S_{zy} = \mathbb{1}_{xy} \equiv \delta(x-y).$$

The corresponding prior information Hamiltonian reads$$\mathcal{H}(s) = -\ln \mathcal{G}(s,S)\, \widehat{=}\, \frac{1}{2}\,s^\dagger S^{-1}\, s,$$up to $$s$$-independent additive constants.
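
To make this concrete, the following Python sketch (grid size, covariance kernel, and jitter term are assumptions of this example) draws a sample field from $$\mathcal{G}(s,S)$$ on a pixelized interval and evaluates its prior information Hamiltonian. The pixel volumes carried by the discretized scalar product and by $$(S^{-1})_{xy}$$ cancel, leaving a plain matrix expression.
<syntaxhighlight lang="python">
# Sketch: sampling s ~ G(s, S) on a pixel grid and evaluating H(s) = 1/2 s'S^-1 s.
# Grid size and covariance kernel are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_pix = 64
dx = 1.0 / n_pix
x = (np.arange(n_pix) + 0.5) * dx

# Smooth prior covariance; a tiny diagonal jitter keeps the Cholesky factor stable.
S = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 0.1**2) + 1e-10 * np.eye(n_pix)

s = np.linalg.cholesky(S) @ rng.standard_normal(n_pix)   # one prior sample, S = L L'

# The dx factors of the scalar product cancel against the 1/dx^2 carried by the
# discretized (S^-1)_{xy}, so the continuum expression reduces to a matrix solve.
H_prior = 0.5 * s @ np.linalg.solve(S, s)
print(H_prior)
</syntaxhighlight>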

Measurement equation
The measurement data $$d$$ was generated with the likelihood $$\mathcal{P}(d|s)$$. In case the instrument was linear, a measurement equation of the form$$d = R\,s + n$$can be given, in which $$R$$ is the instrument response, which describes how the data on average reacts to the signal, and $$n$$ is the noise, simply the difference between data $$d$$ and linear signal response $$R\,s$$. It is essential to note that the response translates the infinite dimensional signal vector into the finite dimensional data space. In components this reads $$d_i = \int_\Omega dx \, R_{ix}\,s_x + n_i,$$

where a vector component notation was also introduced for signal and data vectors.

If the noise follows a signal independent zero mean Gaussian statistics with covariance $$N$$, $$\mathcal{P}(n|s)=\mathcal{G}(n,N),$$ then the likelihood is Gaussian as well,$$\mathcal{P}(d|s) = \mathcal{G}(d-R\,s,\,N),$$and the likelihood information Hamiltonian is$$\mathcal{H}(d|s)\, \widehat{=}\, \frac{1}{2}\,(d-R\,s)^\dagger N^{-1}\,(d-R\,s).$$A linear measurement of a Gaussian signal subject to Gaussian, signal independent noise leads to a free IFT.

Free Hamiltonian
The joint information Hamiltonian of the Gaussian scenario described above is$$\begin{align} \mathcal{H}(d,s) &= \mathcal{H}(d|s)+\mathcal{H}(s)\\ &\widehat{=} \frac{1}{2}\,(d-R\,s)^\dagger N^{-1}\, (d-R\,s) + \frac{1}{2}\,s^\dagger S^{-1}\, s\\ & \widehat{=} \frac{1}{2}\,\left[s^\dagger \underbrace{(S^{-1}+R^\dagger N^{-1}R)}_{D^{-1}} \, s - s^\dagger \underbrace{R^\dagger N^{-1} d}_j - \underbrace{d^\dagger N^{-1} R}_{j^\dagger}\, s \right] \\ &\equiv \frac{1}{2}\,\left[s^\dagger D^{-1} s - s^\dagger j - j^\dagger s \right]\\ &= \frac{1}{2}\,\left[s^\dagger D^{-1} s - s^\dagger D^{-1}\underbrace{D\,j}_m - \underbrace{j^\dagger D}_{m^\dagger}\,D^{-1} s \right] \\ & \widehat{=} \frac{1}{2}\,(s-m)^\dagger D^{-1} (s-m) , \end{align} $$where $$\widehat{=} $$ denotes equality up to irrelevant constants, which means here expressions that are independent of $$s $$. From this it is clear that the posterior must be a Gaussian with mean $$m $$ and variance $$D $$, $$\mathcal{P}(s|d) \propto e^{-\mathcal{H}(d,s)} \propto e^{-\frac{1}{2}\,(s-m)^\dagger D^{-1} (s-m)} \propto \mathcal{G}(s-m,D) $$where equality between the right and left hand sides holds as both distributions are normalized, $$\int \mathcal{D}s\,\mathcal{P}(s|d) =1= \int \mathcal{D}s\,\mathcal{G}(s-m,D) $$.

Generalized Wiener filter
The posterior mean$$m=D\,j=(S^{-1}+R^\dagger N^{-1} R)^{-1} R^\dagger N^{-1} d $$is also known as the generalized Wiener filter solution and the uncertainty covariance $$D =(S^{-1}+R^\dagger N^{-1} R)^{-1} $$as the Wiener variance.

In IFT, $$j = R^\dagger N^{-1} d $$ is called the information source, as it acts as a source term to excite the field (knowledge), and $$D $$ the information propagator, as it propagates information from one location to another in$$m_x = \int_\Omega dy\, D_{xy}\, j_y. $$
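
The following Python sketch implements the measurement equation and the generalized Wiener filter for a small pixelized example; the grid, covariance kernel, mask response, and noise level are all illustrative assumptions, not prescriptions from the IFT literature.
<syntaxhighlight lang="python">
# Sketch: simulate d = R s + n and apply the generalized Wiener filter m = D j.
# All model choices (grid, kernel, mask, noise) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_pix = 128
x = np.arange(n_pix) / n_pix

# Prior covariance S and a "true" signal sample drawn from it.
S = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 0.05**2) + 1e-10 * np.eye(n_pix)
s_true = np.linalg.cholesky(S) @ rng.standard_normal(n_pix)

# Linear response R: a mask that observes only 60 of the 128 pixels.
observed = np.sort(rng.choice(n_pix, size=60, replace=False))
R = np.eye(n_pix)[observed]

# Gaussian noise with covariance N and the data d = R s + n.
N = 0.1 * np.eye(len(observed))
d = R @ s_true + rng.multivariate_normal(np.zeros(len(observed)), N)

j = R.T @ np.linalg.solve(N, d)                                    # source j = R'N^-1 d
D = np.linalg.inv(np.linalg.inv(S) + R.T @ np.linalg.solve(N, R))  # propagator
m = D @ j                                  # generalized Wiener filter solution
sigma = np.sqrt(np.diag(D))                # pointwise posterior uncertainty
</syntaxhighlight>
The explicit matrix inversions are only feasible for small pixelizations; realistic applications work with implicit operators and iterative solvers, as provided for example by the NIFTy library.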

Interaction Hamiltonian
If any of the assumptions that lead to the free theory is violated, IFT becomes an interacting theory, with terms that are of higher than quadratic order in the signal field. This happens when the signal or the noise do not follow Gaussian statistics, when the response is non-linear, when the noise depends on the signal, or when response or covariances are uncertain.

In this case, the information Hamiltonian might be expandable in a Taylor-Fréchet series,$$\mathcal{H}(d,s)\, \widehat{=}\, \underbrace{\frac{1}{2}\,s^\dagger D^{-1} s - j^\dagger s}_{\mathcal{H}_{\text{free}}(d,\,s)} + \underbrace{\sum_{n=3}^{\infty} \frac{1}{n!} \int_{\Omega^n} dx_1 \cdots dx_n\, \Lambda^{(n)}_{x_1 \ldots x_n}\, s_{x_1} \cdots s_{x_n}}_{\mathcal{H}_{\text{int}}(d,\,s)},$$

where $$\mathcal{H}_{\text{free}}(d,\,s)$$ is the free Hamiltonian, which alone would lead to a Gaussian posterior, and $$\mathcal{H}_{\text{int}}(d,\,s)$$ is the interacting Hamiltonian, which encodes non-Gaussian corrections. The first and second order Taylor coefficients are often identified with the (negative) information source $$-j$$ and the inverse information propagator $$D^{-1}$$, respectively. The higher coefficients $$\Lambda^{(n)}_{x_1 \ldots x_n}$$ are associated with non-linear self-interactions.
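
A one-pixel toy example shows how such interaction terms arise. The sketch below (the Poisson likelihood with exponential response is an assumed example scenario, with arbitrary parameter values) expands the information Hamiltonian with sympy; the terms beyond quadratic order are the self-interaction coefficients $$\Lambda^{(n)}$$.
<syntaxhighlight lang="python">
# Sketch: Taylor expansion of a single-pixel information Hamiltonian for an
# assumed Poisson likelihood with rate exp(s) and a Gaussian prior of variance S.
import sympy as sp

s = sp.symbols('s', real=True)
S_prior = sp.Rational(1)    # assumed prior variance
d = 3                       # assumed photon count

# H(d,s) = -ln P(d|s) - ln P(s), with s-independent constants dropped:
# the Poisson log-likelihood gives exp(s) - d*s, the Gaussian prior gives s**2/(2 S).
H = sp.exp(s) - d * s + s**2 / (2 * S_prior)

# Linear term <-> -j, quadratic term <-> the (inverse) propagator;
# the cubic and higher terms are the interaction coefficients Lambda^(n)/n!.
print(sp.series(H, s, 0, 6))
</syntaxhighlight>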

Classical field
The classical field $$s_{\text{cl}}$$ minimizes the information Hamiltonian,$$\left. \frac{\partial \mathcal{H}(d,s)}{\partial s} \right|_{s=s_{\text{cl}}} =0, $$ and therefore maximizes the posterior:$$\left. \frac{\partial \mathcal{P}(s|d)}{\partial s} \right|_{s=s_{\text{cl}}} =0. $$The classical field $$s_{\text{cl}}$$ is therefore the maximum a posteriori estimator of the field inference problem.
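
Numerically, the classical field can be found by any gradient-based minimization of the information Hamiltonian. The Python sketch below does this with scipy for an assumed nonlinear (exponential) response with Gaussian noise and prior; all model settings are illustrative.
<syntaxhighlight lang="python">
# Sketch: maximum a posteriori field via numerical minimization of H(d,s),
# for an assumed exponential response R(s) = exp(s). Settings are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n_pix = 32
x = np.arange(n_pix) / n_pix
S = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 0.1**2) + 1e-10 * np.eye(n_pix)
S_inv = np.linalg.inv(S)
noise_var = 0.05

s_true = np.linalg.cholesky(S) @ rng.standard_normal(n_pix)
d = np.exp(s_true) + np.sqrt(noise_var) * rng.standard_normal(n_pix)

def hamiltonian(s):
    """H(d,s) for Gaussian noise and prior, nonlinear response exp(s)."""
    residual = d - np.exp(s)
    return 0.5 * residual @ residual / noise_var + 0.5 * s @ S_inv @ s

s_cl = minimize(hamiltonian, np.zeros(n_pix), method='L-BFGS-B').x  # classical field
</syntaxhighlight>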

Critical filter
The Wiener filter problem requires the two point correlation $$S \equiv \langle s\, s^\dagger\rangle_{(s)} $$ of a field to be known. If it is unknown, it has to be inferred alongside the field itself. This requires the specification of a hyperprior $$\mathcal{P}(S)$$. Often, statistical homogeneity can be assumed, implying that $$S$$ is diagonal in Fourier space (for $$\Omega = \mathbb{R}^u$$ being a $$u$$-dimensional Cartesian space). In this case, only the Fourier space power spectrum $$P_s(\vec{k})$$ needs to be inferred. In the case of statistical isotropy, this spectrum depends only on the length $$k=|\vec{k}|$$ of the Fourier vector $$\vec{k}$$ and only a one dimensional spectrum $$P_s(k)$$ has to be determined. The prior field covariance then reads in Fourier space coordinates $$S_{\vec{k}\vec{q}}=(2\pi)^u \delta(\vec{k}-\vec{q})\,P_s(k)$$.

If the prior on $$P_s(k)$$ is flat, the joint probability of data and spectrum is$$\mathcal{P}(d,P_s) \propto \sqrt{\frac{|2\pi D|}{|2\pi S|}}\, e^{\frac{1}{2}\,j^\dagger D\, j},$$where the notation of the information propagator $$D =(S^{-1}+R^\dagger N^{-1} R)^{-1} $$ and source $$j= R^\dagger N^{-1} d$$ of the Wiener filter problem was used again. The corresponding information Hamiltonian is$$\mathcal{H}(d,P_{s}) \widehat{=}\frac{1}{2}\left[\ln|S\,D^{-1}|-j^{\dagger}D\,j\right] =\frac{1}{2}\mathrm{Tr}\left[\ln\left(S\,D^{-1}\right)-j\,j^{\dagger}D\right],$$where $$\widehat{=}$$ denotes equality up to irrelevant constants (here: constant with respect to $$P_s$$). Minimizing this with respect to $$P_s$$, in order to get its maximum a posteriori power spectrum estimator, yields$$0=\frac{\partial \mathcal{H}(d,P_{s})}{\partial P_{s}(k)}=\frac{1}{2}\,\mathrm{Tr}\left[\left(S - m\,m^{\dagger} - D\right)S^{-1}\,\mathbb{P}_k\, S^{-1}\right],$$where the Wiener filter mean $$m=D\,j$$ and the spectral band projector $$(\mathbb{P}_k)_{\vec{q}\vec{q}'}\equiv (2\pi)^u\delta(\vec{q}-\vec{q}')\, \delta(|\vec{q}|-k)$$ were introduced. The latter commutes with $$S^{-1}$$, since $$(S^{-1})_{\vec{k}\vec{q}}=(2\pi)^u \delta(\vec{k}-\vec{q})\,[P_s(k)]^{-1}$$ is diagonal in Fourier space. The maximum a posteriori estimator for the power spectrum is therefore$$P_s(k) = \frac{1}{\varrho_k}\,\mathrm{Tr}\left[\left(m\,m^\dagger + D\right)\mathbb{P}_k\right], \text{ with } \varrho_k \equiv \mathrm{Tr}\left[\mathbb{P}_k\right].$$It has to be calculated iteratively, as $$m=D\,j$$ and $$D=(S^{-1}+R^\dagger N^{-1} R)^{-1}$$ both depend on $$P_s$$ themselves. In an empirical Bayes approach, the estimated $$P_s$$ would be taken as given. As a consequence, the posterior mean estimate for the signal field is in this approximation the corresponding $$m$$ and its uncertainty the corresponding $$D$$.

The resulting non-linear filter is called the critical filter. The generalization of the power spectrum estimation formula as$$P_s(k) = \frac{1}{\varrho_k}\,\mathrm{Tr}\left[\left(m\,m^\dagger + \delta\,D\right)\mathbb{P}_k\right]$$exhibits filters with perception thresholds for $$\delta < 1$$, meaning that the data variance has to exceed the expected noise level by a certain threshold before the signal reconstruction $$m$$ becomes non-zero. Whenever the data variance exceeds this threshold only slightly, the signal reconstruction jumps to a finite excitation level, similar to a first order phase transition in thermodynamic systems. For filters with $$\delta = 1$$, perception of the signal starts continuously as soon as the data variance exceeds the noise level. The disappearance of the discontinuous perception at $$\delta = 1$$ is similar to a thermodynamic system going through a critical point. Hence the name critical filter.

The critical filter, and extensions thereof to non-linear measurements and the inclusion of non-flat spectrum priors, permitted the application of IFT to real world signal inference problems, for which the signal covariance is usually unknown a priori.
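
The critical filter iteration described above can be demonstrated on a small periodic toy model in which the response is the identity and the noise is white, so that all operators are diagonal in Fourier space. The following Python sketch (spectrum shape, noise level, and grid are assumptions of this example) alternates the Wiener filter with the spectrum update until the spectrum stabilizes.
<syntaxhighlight lang="python">
# Sketch of the critical-filter iteration on a 1D periodic toy model with unit
# response and white noise. All settings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
n_pix, noise_var = 256, 0.1
k = np.abs(np.fft.fftfreq(n_pix, d=1.0 / n_pix)).astype(int)   # |k| band per mode

# Simulated data: stationary signal with an assumed power-law spectrum, plus noise.
p_true = 1.0 / (1.0 + np.arange(n_pix // 2 + 1))**2            # one value per band
xi = rng.standard_normal(n_pix)
s = np.fft.ifft(np.sqrt(p_true[k]) * np.fft.fft(xi, norm='ortho'), norm='ortho').real
d = s + np.sqrt(noise_var) * rng.standard_normal(n_pix)
d_k = np.fft.fft(d, norm='ortho')

p_est = np.ones(n_pix // 2 + 1)                  # flat starting spectrum
for _ in range(50):
    D = 1.0 / (1.0 / p_est[k] + 1.0 / noise_var)  # diagonal information propagator
    m_k = D * d_k / noise_var                     # Wiener filter mean, mode by mode
    # spectrum update P_s(k) = Tr[(m m' + D) P_k] / Tr[P_k], a band average here
    p_est = np.array([np.mean(np.abs(m_k[k == kk])**2 + D[k == kk])
                      for kk in range(n_pix // 2 + 1)])

m = np.fft.ifft(m_k, norm='ortho').real          # final signal reconstruction
</syntaxhighlight>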

IFT Application Examples
The generalized Wiener filter, which emerges in free IFT, is in broad usage in signal processing. Algorithms explicitly based on IFT were derived for a number of applications. Many of them are implemented using the [https://gitlab.mpcdf.mpg.de/ift/NIFTy Numerical Information Field Theory] (NIFTy) library.
 * D³PO is a code for ''Denoising, Deconvolving, and Decomposing Photon Observations''. It reconstructs images from individual photon count events, taking the Poisson statistics of the counts and an instrument response function into account. It splits the sky emission into an image of diffuse emission and one of point sources, exploiting the different correlation structure and statistics of the two components for their separation. D³PO has been applied to data of the Fermi and the RXTE satellites.
 * RESOLVE is a Bayesian algorithm for aperture synthesis imaging in radio astronomy. RESOLVE is similar to D³PO, but it assumes a Gaussian likelihood and a Fourier space response function. It has been applied to data of the Very Large Array.
 * PySESA is a ''Python framework for Spatially Explicit Spectral Analysis'' of point clouds and geospatial data.

Advanced Theory
Many techniques from quantum field theory can be used to tackle IFT problems, like Feynman diagrams, effective actions, and the field operator formalism.

Feynman diagrams
In case the interaction coefficients $$\Lambda^{(n)}$$ in a Taylor-Fréchet expansion of the information Hamiltonian are small, the log partition function, or Helmholtz free energy,

$$\ln \mathcal{Z}(d) = \ln \int \mathcal{D}s \,e^{-\mathcal{H}(d,s)}=\sum_{c\in C}c $$can be expanded asymptotically in terms of these coefficients. The free Hamiltonian specifies the mean and variance of the Gaussian distribution $$\mathcal{G}(s-m,D)$$ over which the expansion is integrated. This leads to a sum over all connected Feynman diagrams $$C$$. From the Helmholtz free energy, any connected moment of the field can be calculated via$$\langle s_{x_1}\cdots s_{x_n}\rangle^{\text{c}}_{(s|d)} = \frac{\delta^n \ln \mathcal{Z}(d)}{\delta j_{x_1}\cdots\delta j_{x_n}}.$$Situations where small expansion parameters exist, as needed for such a diagrammatic expansion to converge, are given by nearly Gaussian signal fields, where the non-Gaussianity of the field statistics leads to small interaction coefficients $$\Lambda^{(n)}$$. For example, the statistics of the Cosmic Microwave Background is nearly Gaussian, with small amounts of non-Gaussianities believed to be seeded during the inflationary epoch in the early Universe.
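
The role of $$\ln \mathcal{Z}$$ as generating functional of connected moments can be verified numerically on a single-pixel toy model. The Python sketch below (the cubic interaction strength and the quadrature range are assumptions of this example; the cubic term makes the integral only asymptotically defined, with the finite integration range acting as a regularizer) differentiates $$\ln \mathcal{Z}(j)$$ by finite differences.
<syntaxhighlight lang="python">
# Sketch: connected moments as derivatives of ln Z(j) for a single-pixel field
# with a small assumed cubic interaction Lambda^(3).
import numpy as np
from scipy.integrate import quad

D0, lam = 1.0, 0.1   # free propagator and assumed interaction strength

def log_Z(j):
    """ln Z(j) = ln int ds exp(-(s**2/(2 D0) - j s + lam s**3 / 3!))."""
    integrand = lambda s: np.exp(-(0.5 * s**2 / D0 - j * s + lam * s**3 / 6))
    return np.log(quad(integrand, -15, 15)[0])   # finite range regularizes

eps = 1e-3
mean = (log_Z(eps) - log_Z(-eps)) / (2 * eps)               # <s>    = d ln Z / dj
var = (log_Z(eps) - 2*log_Z(0) + log_Z(-eps)) / eps**2      # <s s>_c = d2 ln Z / dj2
print(mean, var)   # reduce to 0 and D0 when the interaction is switched off
</syntaxhighlight>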

Effective Action
In order to have stable numerics for IFT problems, a field functional that, if minimized, provides the posterior mean field is needed. Such is given by the effective action or Gibbs free energy of a field. The Gibbs free energy $$G$$ can be constructed from the Helmholtz free energy via a Legendre transformation. In IFT, it is given by the difference of the internal information energy$$U(m,D)=\langle \mathcal{H}(d,s)\rangle_{\mathcal{P}'(s|d)}$$and the Shannon entropy$$\mathcal{S}(m,D)=-\langle \ln \mathcal{P}'(s|d)\rangle_{\mathcal{P}'(s|d)}$$for temperature $$T=1$$, where a Gaussian posterior approximation $$\mathcal{P}'(s|d')=\mathcal{G}(s-m,D)$$ is used with the approximate data $$d' = (m,D)$$ containing the mean and the dispersion of the field. The Gibbs free energy is then $$\begin{align} G(m,D)&=U(m,D)-T\,\mathcal{S}(m,D)\\&=\langle\mathcal{H}(d,s)+\ln\mathcal{P}'(s|d)\rangle_{\mathcal{P}'(s|d)}\\&=\int\mathcal{D}s\,\mathcal{P}'(s|d)\,\ln\frac{\mathcal{P}'(s|d)}{\mathcal{P}(d,s)}\\&=\int\mathcal{D}s\,\mathcal{P}'(s|d)\,\ln\frac{\mathcal{P}'(s|d)}{\mathcal{P}(s|d)\,\mathcal{P}(d)}\\&=\int\mathcal{D}s\,\mathcal{P}'(s|d)\,\ln\frac{\mathcal{P}'(s|d)}{\mathcal{P}(s|d)}-\ln\,\mathcal{P}(d)\\&=\text{KL}(\mathcal{P}'(s|d)||\mathcal{P}(s|d))-\ln\mathcal{Z}(d), \end{align}$$the Kullback-Leibler divergence $$\text{KL}(\mathcal{P}',\mathcal{P})$$ between approximate and exact posterior minus the Helmholtz free energy. As the latter does not depend on the approximate data $$d' = (m,D)$$, minimizing the Gibbs free energy is equivalent to minimizing the Kullback-Leibler divergence between approximate and exact posterior. Thus, the effective action approach of IFT is equivalent to variational Bayesian methods, which also minimize the Kullback-Leibler divergence between approximate and exact posteriors.

Minimizing the Gibbs free energy provides approximately the posterior mean field $$\langle s \rangle_{(s|d)} = \int \mathcal{D}s\,s\,\mathcal{P}(s|d), $$whereas minimizing the information Hamiltonian provides the maximum a posteriori field. As the latter is known to over-fit noise, the former is usually the better field estimator.
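
For the free theory the exact posterior is itself Gaussian, so the Kullback-Leibler divergence to the Gaussian approximation can be evaluated in closed form and vanishes exactly at the Wiener filter solution. The following Python sketch (test problem dimensions and covariances are assumptions of this example) illustrates this.
<syntaxhighlight lang="python">
# Sketch: for the free theory, the Gibbs free energy / KL divergence is
# minimized exactly at the Wiener filter solution. The test problem is assumed.
import numpy as np

def kl_gaussian(m1, D1, m2, D2):
    """KL( G(s-m1, D1) || G(s-m2, D2) ) between multivariate Gaussians."""
    D2_inv = np.linalg.inv(D2)
    delta = m2 - m1
    return 0.5 * (np.trace(D2_inv @ D1) + delta @ D2_inv @ delta
                  - len(m1) + np.log(np.linalg.det(D2) / np.linalg.det(D1)))

rng = np.random.default_rng(5)
n = 8
A = rng.standard_normal((n, n))
S = A @ A.T + n * np.eye(n)            # assumed prior covariance
N = 0.5 * np.eye(n)                    # noise covariance, response R = 1
d = rng.standard_normal(n)             # some data

D = np.linalg.inv(np.linalg.inv(S) + np.linalg.inv(N))   # propagator (R = 1)
m = D @ np.linalg.solve(N, d)                            # Wiener filter mean

print(kl_gaussian(m, D, m, D))         # 0 at the optimum
print(kl_gaussian(m + 0.1, D, m, D))   # positive for any displaced mean
</syntaxhighlight>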

Operator formalism
The calculation of the Gibbs free energy requires the calculation of Gaussian integrals over the information Hamiltonian, since the internal information energy is$$U(m,D)=\langle \mathcal{H}(d,s)\rangle_{\mathcal{P}'(s|d')} = \int \mathcal{D}s\,\mathcal{H}(d,s)\,\mathcal{G}(s-m,D).$$Such integrals can be calculated via a field operator formalism, in which$$O_x = m_x + \int_\Omega dy\, D_{xy}\,\frac{\delta}{\delta m_y}$$is the field operator. This generates the field expression $$s_x$$ within the integral if applied to the Gaussian distribution function,$$O_x\,\mathcal{G}(s-m,D) = s_x\,\mathcal{G}(s-m,D),$$and any higher power of the field if applied several times,$$O_x\,O_y\,\mathcal{G}(s-m,D) = s_x\,s_y\,\mathcal{G}(s-m,D).$$If the information Hamiltonian is analytical, all its terms can be generated via the field operator,$$\mathcal{H}(d,s)\,\mathcal{G}(s-m,D) = \mathcal{H}(d,O)\,\mathcal{G}(s-m,D).$$As the field operator does not depend on the field $$s$$ itself, it can be pulled out of the path integral of the internal information energy construction,$$U(m,D) = \mathcal{H}(d,O)\int \mathcal{D}s\,\mathcal{G}(s-m,D) = \mathcal{H}(d,O)\,1_m,$$where $$1_m=1$$ should be regarded as a functional of $$m$$ that always returns the value $$1$$. The resulting expression can be calculated by commuting the mean field annihilator $$D\,\frac{\mathrm{d}}{\mathrm{d}m}$$ to the right of the expression, where it vanishes since $$\frac{\mathrm{d}}{\mathrm{d}m}\,1_m =0$$. The mean field annihilator $$D\,\frac{\mathrm{d}}{\mathrm{d}m}$$ commutes with the mean field as $$\left[ D\,\frac{\mathrm{d}}{\mathrm{d}m},m\right] = D\,\frac{\mathrm{d}}{\mathrm{d}m}\, m- m\,D\,\frac{\mathrm{d}}{\mathrm{d}m} =D+m\,D\,\frac{\mathrm{d}}{\mathrm{d}m} -m\,D\,\frac{\mathrm{d}}{\mathrm{d}m}=D.$$
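
Both operator identities can be checked symbolically for a single field pixel. The following sympy sketch (one degree of freedom; the symbol names are this example's choice) verifies $$O\,\mathcal{G}(s-m,D)=s\,\mathcal{G}(s-m,D)$$ and the commutator $$[D\,\frac{\mathrm{d}}{\mathrm{d}m},m]=D$$.
<syntaxhighlight lang="python">
# Sketch: single-pixel symbolic check of the field operator identity and of the
# mean field annihilator commutator.
import sympy as sp

s, m = sp.symbols('s m', real=True)
D = sp.symbols('D', positive=True)
G = sp.exp(-(s - m)**2 / (2 * D)) / sp.sqrt(2 * sp.pi * D)

O_G = m * G + D * sp.diff(G, m)          # O G with O = m + D d/dm
print(sp.simplify(O_G - s * G))          # 0, i.e. O G(s-m,D) = s G(s-m,D)

f = sp.Function('f')(m)                  # arbitrary test function of m
commutator = D * sp.diff(m * f, m) - m * D * sp.diff(f, m)
print(sp.simplify(commutator - D * f))   # 0, i.e. [D d/dm, m] = D
</syntaxhighlight>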

Using the field operator formalism, the Gibbs free energy can be calculated, which permits the (approximate) inference of the posterior mean field via a numerically robust functional minimization.

History
The book of Norbert Wiener might be regarded as one of the first works on field inference. The usage of path integrals for field inference was proposed by a number of authors, e.g. Edmund Bertschinger or William Bialek and A. Zee. The connection of field theory and Bayesian reasoning was made explicit by Jörg Lemm. The term information field theory was coined by Torsten Enßlin. See the latter reference for more information on the history of IFT.