Frame (linear algebra)

In linear algebra, a frame of an inner product space is a generalization of a basis of a vector space to sets that may be linearly dependent. In the terminology of signal processing, a frame provides a redundant, stable way of representing a signal. Frames are used in error detection and correction and the design and analysis of filter banks and more generally in applied mathematics, computer science, and engineering.

History
Because of the various mathematical components surrounding frames, frame theory has roots in harmonic and functional analysis, operator theory, linear algebra, and matrix theory.

The Fourier transform has been used for over a century as a way of decomposing and expanding signals. However, the Fourier transform masks key information regarding the moment of emission and the duration of a signal. In 1946, Dennis Gabor was able to solve this using a technique that simultaneously reduced noise, provided resiliency, and created quantization while encapsulating important signal characteristics. This discovery marked the first concerted effort towards frame theory.

The frame condition was first described by Richard Duffin and Albert Charles Schaeffer in a 1952 article on nonharmonic Fourier series as a way of computing the coefficients in a linear combination of the vectors of a linearly dependent spanning set (in their terminology, a "Hilbert space frame"). In the 1980s, Stéphane Mallat, Ingrid Daubechies, and Yves Meyer used frames to analyze wavelets. Today frames are associated with wavelets, signal and image processing, and data compression.

Motivating example: computing a basis from a linearly dependent set
Suppose we have a vector space $$V$$ over a field $$F$$ and we want to express an arbitrary element $$\mathbf{v} \in V$$ as a linear combination of the vectors $$\{\mathbf{e}_{k}\} \subset V$$, that is, to find coefficients $$\{c_k\} \subset F$$ such that


 * $$ \mathbf{v} = \sum_k c_k \mathbf{e}_k.$$

If the set $$\{ \mathbf{e}_{k} \}$$ does not span $$V$$, then such coefficients do not exist for every such $$\mathbf{v}$$. If $$\{ \mathbf{e}_{k} \}$$ spans $$V$$ and also is linearly independent, this set forms a basis of $$V$$, and the coefficients $$c_{k}$$ are uniquely determined by $$\mathbf{v}$$. If, however, $$\{\mathbf{e}_{k}\}$$ spans $$V$$ but is not linearly independent, the question of how to determine the coefficients becomes less apparent, in particular if $$V$$ is of infinite dimension.

Given that $$\{\mathbf{e}_k\}$$ spans $$V$$ and is linearly dependent, one strategy is to remove vectors from the set until it becomes linearly independent and forms a basis. There are some problems with this plan:


 * 1) Removing arbitrary vectors from the set may cause it to be unable to span $$V$$ before it becomes linearly independent.
 * 2) Even if it is possible to devise a specific way to remove vectors from the set until it becomes a basis, this approach may become unfeasible in practice if the set is large or infinite.
 * 3) In some applications, it may be an advantage to use more vectors than necessary to represent $$\mathbf{v}$$.  This means that we want to find the coefficients $$c_k$$ without removing elements in $$\{\mathbf{e}_k\}$$. The coefficients $$c_k$$ will no longer be uniquely determined by $$\mathbf{v}$$. Therefore, the vector $$\mathbf{v}$$ can be represented as a linear combination of $$\{\mathbf{e}_{k}\}$$ in more than one way.
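The non-uniqueness described in point 3) can be illustrated numerically. The following is a minimal sketch using NumPy; the three-vector spanning set of $$\mathbb{R}^2$$ and the null-space shift are illustrative choices, not taken from the text above:

```python
import numpy as np

# A linearly dependent spanning set of R^2, stacked as the columns of E.
E = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
v = np.array([2.0, 3.0])

# One choice of coefficients: the minimum-norm solution of E c = v.
c, *_ = np.linalg.lstsq(E, v, rcond=None)
assert np.allclose(E @ c, v)

# Any null-space vector of E yields a second, different set of
# coefficients representing the same v.
null = np.array([1.0, 1.0, -1.0])      # satisfies E @ null == 0
c2 = c + 0.5 * null
assert np.allclose(E @ c2, v)
assert not np.allclose(c, c2)
```

Since the columns of $$E$$ are linearly dependent, the coefficient vector is determined only up to the null space of $$E$$.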

Definition
Let $$V$$ be an inner product space and $$\{\mathbf{e}_k\}_{k \in \mathbb{N}}$$ be a set of vectors in $$V$$. The set $$\{\mathbf{e}_k\}_{k \in \mathbb{N}}$$ is a frame of $$V$$ if it satisfies the so-called frame condition. That is, if there exist two constants $$ 0<A \le B < \infty $$ such that
 * $$A \left\| \mathbf{v} \right\| ^2 \leq \sum_{k \in \mathbb{N}} \left| \langle \mathbf{v}, \mathbf{e}_k \rangle \right| ^2 \leq B \left\| \mathbf{v} \right\| ^2, \quad \forall \mathbf{v}\in V.$$

A frame is called overcomplete (or redundant) if it is not a Riesz basis for the vector space. The redundancy of the frame is measured by the lower and upper frame bounds (or redundancy factors) $$A$$ and $$B$$, respectively. That is, a frame of $$K \geq N$$ normalized vectors $$\|\mathbf{e}_k\| = 1$$ in an $$N$$-dimensional space $$V$$ has frame bounds which satisfy
 * $$ 0 < A \leq \frac{1}{N}\sum_{k=1}^{K}\left\|\mathbf{e}_k\right\|^2 =\frac{K}{N} \leq B < \infty.$$

If the frame is a Riesz basis and is therefore linearly independent, then $$A\leq 1 \leq B$$.

The frame bounds are not unique because numbers less than $$A$$ and greater than $$B$$ are also valid frame bounds. The optimal lower bound is the supremum of all lower bounds and the optimal upper bound is the infimum of all upper bounds.

Analysis operator
If the frame condition is satisfied, then the linear operator defined as
 * $$ \mathbf{T}: V \to \ell^2, \quad \mathbf{v} \mapsto \mathbf{T}\mathbf{v} = \{\langle \mathbf{v},\mathbf{e}_k\rangle\}_{k \in \mathbb{N}},$$

mapping $$\mathbf{v} \in V$$ to the sequence of frame coefficients $$c_k = \langle \mathbf{v},\mathbf{e_k}\rangle$$, is called the analysis operator. Using this definition, the frame condition can be rewritten as
 * $$A \left\| \mathbf{v} \right\| ^2 \leq \left\| \mathbf{T} \mathbf{v} \right\| ^2 = \sum_k \left| \langle \mathbf{v}, \mathbf{e}_k \rangle \right| ^2 \leq B \left\| \mathbf{v} \right\| ^2.$$

Synthesis operator
The adjoint of the analysis operator is called the synthesis operator of the frame and defined as
 * $$ \mathbf{T}^*: \ell^2 \to V, \quad \{c_k\}_{k \in \mathbb{N}} \mapsto \sum_k c_k\mathbf{e}_k.$$

Frame operator
The composition of the analysis operator and the synthesis operator leads to the frame operator defined as
 * $$\mathbf{S} : V \rightarrow V, \quad \mathbf{v}\mapsto \mathbf{S} \mathbf{v} = \mathbf{T}^*\mathbf{T}\mathbf{v} = \sum_{k} \langle \mathbf{v}, \mathbf{e}_{k} \rangle \mathbf{e}_{k}.$$

From this definition and linearity in the first argument of the inner product, the frame condition now yields
 * $$A \left\| \mathbf{v} \right\| ^2 \leq \left\| \mathbf{T} \mathbf{v} \right\| ^2 = \langle \mathbf{S} \mathbf{v}, \mathbf{v} \rangle \leq B \left\| \mathbf{v} \right\| ^2 .$$

When the frame condition holds, the frame operator $$\mathbf{S}$$ is a bounded, self-adjoint, positive definite operator with a bounded inverse $$\mathbf{S}^{-1}$$, and the optimal frame bounds $$A$$ and $$B$$ are the infimum and supremum of the spectrum of $$\mathbf{S}$$. In finite dimensions the frame operator is automatically trace-class, and $$A$$ and $$B$$ correspond to the smallest and largest eigenvalues of $$\mathbf{S}$$ or, equivalently, the squares of the smallest and largest singular values of $$\mathbf{T}$$.
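In finite dimensions these relationships are easy to verify directly. A minimal NumPy sketch, using a hypothetical frame of $$K=3$$ vectors in $$\mathbb{R}^2$$ (the particular vectors are an illustrative choice):

```python
import numpy as np

# Frame vectors as the columns of E; the analysis operator is v -> E.T @ v.
E = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

# Frame operator S = T*T = sum_k e_k e_k^T.
S = E @ E.T

# Optimal frame bounds: smallest/largest eigenvalues of S, equal to the
# squared smallest/largest singular values of the analysis operator.
eigvals = np.linalg.eigvalsh(S)
A, B = eigvals[0], eigvals[-1]
svals = np.linalg.svd(E.T, compute_uv=False)
assert np.isclose(A, svals.min() ** 2)
assert np.isclose(B, svals.max() ** 2)

# Spot-check the frame condition A||v||^2 <= ||Tv||^2 <= B||v||^2.
rng = np.random.default_rng(0)
for _ in range(100):
    v = rng.normal(size=2)
    quad = np.sum((E.T @ v) ** 2)
    assert A * (v @ v) - 1e-9 <= quad <= B * (v @ v) + 1e-9
```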

Relation to bases
The frame condition is a generalization of Parseval's identity that maintains norm equivalence between a signal in $$V$$ and its sequence of coefficients in $$\ell^2$$.

If the set $$\{\mathbf{e}_k\}$$ is a frame of $$V$$, it spans $$V$$. Otherwise there would exist at least one non-zero $$\mathbf{v} \in V$$ which would be orthogonal to all $$\mathbf{e}_k$$ such that

 * $$A \left\| \mathbf{v} \right\| ^2 \leq 0 \leq B \left\| \mathbf{v} \right\| ^{2},$$

either violating the frame condition or the assumption that $$\mathbf{v} \neq 0$$.

However, a spanning set of $$V$$ is not necessarily a frame. For example, consider $$V = \mathbb{R}^2$$ with the dot product, and the infinite set $$\{\mathbf{e}_k\}$$ given by
 * $$\left\{ (1,0), \, (0,1), \, \left(0,\tfrac{1}{\sqrt{2}}\right) , \, \left(0,\tfrac{1}{\sqrt{3}}\right), \dotsc \right\}.$$

This set spans $$V$$ but since
 * $$\sum_k \left| \langle \mathbf{e}_k, (0,1)\rangle \right| ^2 = 0 + 1 + \tfrac{1}{2} + \tfrac{1}{3} +\dotsb = \infty,$$

we cannot choose a finite upper frame bound B. Consequently, the set $$\{\mathbf{e}_k\}$$ is not a frame.
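A quick numerical check (plain Python; stopping at $$10^4$$ terms is an arbitrary cutoff) confirms that the partial sums follow the divergent harmonic series:

```python
# Partial sums of |<e_k, (0,1)>|^2 = 0 + 1 + 1/2 + 1/3 + ...
terms = [0.0] + [1.0 / n for n in range(1, 10_001)]
partial_sum = sum(terms)
assert partial_sum > 9.0  # harmonic growth ~ log(10^4); no finite B exists
```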

Dual frames
Let $$\{\mathbf{e}_{k} \}$$ be a frame satisfying the frame condition. Then the dual analysis operator is defined as
 * $$\widetilde{\mathbf{T}}\mathbf{v} = \{\langle \mathbf{v},\tilde{\mathbf{e}}_k\rangle\}_{k \in \mathbb{N}},$$

with
 * $$\tilde{\mathbf{e}}_{k} = (\mathbf{T}^*\mathbf{T})^{-1} \mathbf{e}_{k} = \mathbf{S}^{-1} \mathbf{e}_{k},$$

called the dual frame (or conjugate frame). It is the canonical dual of $$\{\mathbf{e}_{k}\}$$ (similar to a dual basis of a basis), with the property that

 * $$\mathbf{v} = \sum_k \langle \mathbf{v}, \mathbf{e}_k \rangle \mathbf{\tilde{e}}_k = \sum_k \langle \mathbf{v}, \mathbf{\tilde{e}}_k \rangle \mathbf{e}_k,$$

and the corresponding frame condition
 * $$\frac{1}{B}\|\mathbf{v}\|^2 \leq \sum_{k} |\langle \mathbf{v},\tilde{\mathbf{e}}_k\rangle|^2 = \langle \mathbf{T}\mathbf{S}^{-1}\mathbf{v},\mathbf{T}\mathbf{S}^{-1}\mathbf{v} \rangle = \langle \mathbf{S}^{-1}\mathbf{v},\mathbf{v}\rangle \leq \frac{1}{A}\|\mathbf{v}\|^2, \quad \forall \mathbf{v} \in V.$$

Canonical duality is a reciprocity relation, i.e. if the frame $$\{ \mathbf{\tilde{e}}_{k} \}$$ is the canonical dual of $$\{ \mathbf{e}_{k} \},$$ then the frame $$\{ \mathbf{e}_{k} \}$$ is the canonical dual of $$\{ \mathbf{\tilde{e}}_{k} \}.$$ To see that this makes sense, let $$\mathbf{v}$$ be an element of $$V$$ and let

 * $$\mathbf{u} = \sum_{k} \langle \mathbf{v}, \mathbf{e}_{k} \rangle \tilde{\mathbf{e}}_{k}.$$

Thus

 * $$\mathbf{u} = \sum_{k} \langle \mathbf{v}, \mathbf{e}_{k} \rangle ( \mathbf{S}^{-1} \mathbf{e}_{k} ) = \mathbf{S}^{-1} \left ( \sum_{k} \langle \mathbf{v}, \mathbf{e}_{k} \rangle \mathbf{e}_{k} \right ) = \mathbf{S}^{-1} \mathbf{S} \mathbf{v} = \mathbf{v},$$

proving that
 * $$\mathbf{v}= \sum_{k} \langle \mathbf{v}, \mathbf{e}_{k} \rangle \tilde{\mathbf{e}}_{k}.$$

Alternatively, let

 * $$\mathbf{u} = \sum_{k} \langle \mathbf{v}, \tilde{\mathbf{e}}_{k} \rangle \mathbf{e}_{k}.$$

Applying the properties of $$\mathbf{S}$$ and its inverse then shows that

 * $$\mathbf{u} = \sum_{k} \langle \mathbf{v}, \mathbf{S}^{-1} \mathbf{e}_{k} \rangle \mathbf{e}_{k} = \sum_{k} \langle \mathbf{S}^{-1} \mathbf{v}, \mathbf{e}_{k} \rangle \mathbf{e}_{k} = \mathbf{S} (\mathbf{S}^{-1} \mathbf{v}) = \mathbf{v},$$

and therefore



 * $$\mathbf{v} = \sum_{k} \langle \mathbf{v}, \tilde{\mathbf{e}}_{k} \rangle \mathbf{e}_{k}.$$

An overcomplete frame $$\{\mathbf{e}_{k}\}$$ allows some freedom in the choice of coefficients $$c_{k}\neq \langle \mathbf{v}, \tilde{\mathbf{e}}_{k} \rangle$$ such that $$\mathbf{v} = \sum_{k} c_{k} \mathbf{e}_{k}$$. That is, there exist dual frames $$\{\mathbf{g}_{k}\} \neq \{\tilde{\mathbf{e}}_{k}\}$$ of $$\{\mathbf{e}_{k}\}$$ for which

 * $$\mathbf{v} = \sum_{k} \langle \mathbf{v}, \mathbf{g}_{k} \rangle \mathbf{e}_{k}, \quad \forall \mathbf{v} \in V.$$
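Both reconstruction formulas, and the reciprocity of canonical duality, can be verified numerically. A minimal NumPy sketch with a hypothetical frame of $$\mathbb{R}^2$$:

```python
import numpy as np

# Frame vectors as columns of E; canonical dual vectors are S^{-1} e_k.
E = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
S = E @ E.T                        # frame operator
E_dual = np.linalg.inv(S) @ E      # columns are the dual frame vectors

v = np.array([2.0, -1.0])

# v = sum_k <v, e_k> ~e_k  and  v = sum_k <v, ~e_k> e_k
v1 = E_dual @ (E.T @ v)
v2 = E @ (E_dual.T @ v)
assert np.allclose(v1, v)
assert np.allclose(v2, v)

# Reciprocity: the canonical dual of the dual frame is the original frame.
S_dual = E_dual @ E_dual.T
assert np.allclose(np.linalg.inv(S_dual) @ E_dual, E)
```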

Dual frame synthesis and analysis
Suppose $$V$$ is a subspace of a Hilbert space $$H$$ and let $$\{\mathbf{e}_k\}_{k\in \mathbb{N}}$$ and $$\{\tilde{\mathbf{e}}_k\}_{k\in \mathbb{N}}$$ be a frame and dual frame of $$V$$, respectively. If $$\{\mathbf{e}_k\}$$ does not depend on $$f \in H$$, the dual frame is computed as
 * $$\tilde{\mathbf{e}}_{k} = (\mathbf{T}^*\mathbf{T}_V)^{-1} \mathbf{e}_{k},$$

where $$\mathbf{T}_V$$ denotes the restriction of $$\mathbf{T}$$ to $$V$$ such that $$\mathbf{T}^*\mathbf{T}_V$$ is invertible on $$V$$. The best linear approximation of $$f$$ in $$V$$ is then given by the orthogonal projection of $$f \in H$$ onto $$V$$, defined as
 * $$P_V f = \sum_k \langle f, \mathbf{e}_k \rangle \mathbf{\tilde{e}}_k = \sum_k \langle f, \mathbf{\tilde{e}}_k \rangle \mathbf{e}_k.$$

The dual frame synthesis operator is defined as
 * $$P_V f = \widetilde{\mathbf{T}}^*\mathbf{T} f = (\mathbf{T}^*\mathbf{T}_V)^{-1}\mathbf{T}^*\mathbf{T} f =\sum_k \langle f, \mathbf{e}_k \rangle \mathbf{\tilde{e}}_k,$$

and the orthogonal projection is computed from the frame coefficients $$\langle f,\mathbf{e}_k \rangle$$. In dual analysis, the orthogonal projection is computed from $$\{\mathbf{e}_k\}$$ as
 * $$P_V f = \mathbf{T}^*\widetilde{\mathbf{T}}f = \sum_k \langle f, \mathbf{\tilde{e}}_k \rangle \mathbf{e}_k$$

with dual frame analysis operator $$\{\widetilde{\mathbf{T}}f\}_k = \langle f,\tilde{\mathbf{e}}_k\rangle$$.
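A minimal NumPy sketch of dual-frame synthesis, with a hypothetical frame of the plane $$V = \{z=0\}$$ inside $$H=\mathbb{R}^3$$; the pseudoinverse plays the role of $$(\mathbf{T}^*\mathbf{T}_V)^{-1}$$, since the frame operator vanishes on the orthogonal complement of $$V$$:

```python
import numpy as np

# Frame of the subspace V = {(x, y, 0)} of R^3, columns of E.
E = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])

S = E @ E.T                        # frame operator; singular off V
E_dual = np.linalg.pinv(S) @ E     # dual frame vectors, columns

f = np.array([2.0, 3.0, 5.0])

# Dual-frame synthesis P_V f = sum_k <f, e_k> ~e_k gives the orthogonal
# projection of f onto V (the z-component is discarded).
Pf = E_dual @ (E.T @ f)
assert np.allclose(Pf, [2.0, 3.0, 0.0])
```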

Applications and examples
In signal processing, it is common to represent signals as vectors in a Hilbert space. In this interpretation, a vector expressed as a linear combination of the frame vectors is a redundant signal. Representing a signal strictly with a set of linearly independent vectors may not always be the most compact form. Using a frame, it is possible to create a simpler, more sparse representation of a signal as compared with a family of elementary signals. Frames, therefore, provide "robustness". Because they provide a way of producing the same vector within a space, signals can be encoded in various ways. This facilitates fault tolerance and resilience to a loss of signal. Finally, redundancy can be used to mitigate noise, which is relevant to the restoration, enhancement, and reconstruction of signals.

Non-harmonic Fourier series
From harmonic analysis it is known that the complex trigonometric system $\{\frac{1}{\sqrt{2\pi}}e^{ikx}\}_{k\in\mathbb{Z}}$ forms an orthonormal basis for $L^2(-\pi,\pi)$. As such, $\{e^{ikx}\}_{k\in\mathbb{Z}}$ is a (tight) frame for $L^2(-\pi,\pi)$ with bounds $$A=B=2\pi$$.

The system remains stable under "sufficiently small" perturbations $$\{\lambda_k - k\}$$, and the perturbed system $\{e^{i\lambda_k x}\}_{k\in\mathbb{Z}}$ still forms a Riesz basis for $L^2(-\pi,\pi)$. Accordingly, every function $$f$$ in $L^2(-\pi,\pi)$ has a unique non-harmonic Fourier series representation
 * $$f(x)=\sum_{k\in\mathbb{Z}} c_k e^{i\lambda_k x},$$

with $\sum |c_k|^2 < \infty$ and $\{e^{i\lambda_k x}\}_{k\in\mathbb{Z}}$  is called the Fourier frame (or frame of exponentials). What constitutes "sufficiently small" is described by the following theorem, named after Mikhail Kadets.

Kadec's 1/4 theorem. If $$\{\lambda_k\}_{k\in\mathbb{Z}}$$ is a sequence of real numbers such that $$|\lambda_k - k| \leq L < \tfrac{1}{4}$$ for all $$k\in\mathbb{Z}$$, then $$\{e^{i\lambda_k x}\}_{k\in\mathbb{Z}}$$ is a Riesz basis for $$L^2(-\pi,\pi)$$.

The theorem can be extended to frames, replacing the integers by another sequence of real numbers $\{\mu_k\}_{k\in\mathbb{Z}}$ for which $\{e^{i\mu_k x}\}_{k\in\mathbb{Z}}$ is a frame with bounds $$A$$ and $$B$$: if
 * $$|\lambda_k - \mu_k|\leq L < \frac{1}{4}, \quad \forall k \in \mathbb{Z}, \quad \text{and}\quad 1 - \cos (\pi L) + \sin(\pi L) < \sqrt{\frac{A}{B}},$$

then $\{e^{i\lambda_k x}\}_{k\in\mathbb{Z}}$ is a frame for $L^2(-\pi,\pi)$  with bounds
 * $$A(1 - \sqrt{\frac{B}{A}}(1-\cos(\pi L) + \sin(\pi L)))^2, \quad B(2-\cos(\pi L) + \sin(\pi L))^2.$$

Frame projector
Redundancy of a frame is useful in mitigating added noise from the frame coefficients. Let $$\mathbf{a} \in \ell^2(\mathbb{N})$$ denote a vector computed with noisy frame coefficients. The noise is then mitigated by projecting $$\mathbf{a}$$ onto the image of $$\mathbf{T}$$.

 * $$(P\mathbf{a})_k = (\mathbf{T}\mathbf{S}^{-1}\mathbf{T}^*\mathbf{a})_k = \sum_{p} \langle \mathbf{S}^{-1}\mathbf{e}_p, \mathbf{e}_k \rangle a_p,$$

where $$P := \mathbf{T}\mathbf{S}^{-1}\mathbf{T}^*$$ denotes the orthogonal projection onto $$\operatorname{im}(\mathbf{T})$$.

The $$\ell^2$$ sequence space and $$\operatorname{im}(\mathbf{T})$$ (as $$\operatorname{im}(\mathbf{T}) \subseteq \ell^2$$) are reproducing kernel Hilbert spaces with a kernel given by the matrix $$M_{k,p}=\langle \mathbf{S}^{-1}\mathbf{e}_p,\mathbf{e}_k\rangle$$. As such, the above equation is also referred to as the reproducing kernel equation and expresses the redundancy of frame coefficients.
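A minimal NumPy sketch of this noise mitigation, using a hypothetical frame of $$\mathbb{R}^2$$ made of $$K = 20$$ equally spaced unit vectors (an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(1)
K = 20
theta = np.linspace(0.0, np.pi, K, endpoint=False)
E = np.vstack([np.cos(theta), np.sin(theta)])   # frame vectors as columns

S = E @ E.T
v = np.array([1.0, -2.0])
coeffs = E.T @ v                    # clean coefficients, lie in im(T)
noisy = coeffs + 0.1 * rng.normal(size=K)

# Orthogonal projector P = T S^{-1} T* onto im(T) (a K x K matrix here).
P = E.T @ np.linalg.inv(S) @ E
assert np.allclose(P @ P, P)        # idempotent, as a projector must be

# Projecting the noisy coefficients removes the part of the noise that is
# inconsistent with every signal, shrinking the coefficient error.
err_raw = np.linalg.norm(noisy - coeffs)
err_proj = np.linalg.norm(P @ noisy - coeffs)
assert err_proj <= err_raw
```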

Tight frames
A frame is a tight frame if $$A=B$$. A tight frame $\{\mathbf{e}_k\}_{k =1}^{\infty}$ with frame bound $$A$$ has the property that
 * $$ \mathbf{v} = \frac{1}{A} \sum_k \langle \mathbf{v},\mathbf{e}_k\rangle \mathbf{e}_k, \quad \forall \mathbf{v}\in V.$$

For example, the union of $$k$$ disjoint orthonormal bases of a vector space is an overcomplete tight frame with $$A=B=k$$. A tight frame is a Parseval frame if $$A=B=1$$. Each orthonormal basis is a (complete) Parseval frame, but the converse is not necessarily true.
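A minimal NumPy sketch with the well-known Mercedes-Benz frame, three unit vectors in $$\mathbb{R}^2$$ separated by 120 degrees, which is tight with $$A = B = K/N = 3/2$$:

```python
import numpy as np

# Mercedes-Benz frame: three unit vectors at 120-degree angles.
angles = np.pi / 2 + np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
E = np.vstack([np.cos(angles), np.sin(angles)])

# Tightness: the frame operator is A times the identity.
A = 1.5
S = E @ E.T
assert np.allclose(S, A * np.eye(2))

# Reconstruction v = (1/A) sum_k <v, e_k> e_k.
v = np.array([0.3, -1.2])
v_rec = (1.0 / A) * E @ (E.T @ v)
assert np.allclose(v_rec, v)
```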

Equal norm frame
A frame is an equal norm frame if there is a constant $$c$$ such that $$\|\mathbf{e}_k\| = c$$ for each $$k$$. An equal norm frame is a normalized frame (sometimes called a unit-norm frame) if $$c=1$$. A unit-norm Parseval frame is an orthonormal basis; such a frame satisfies Parseval's identity.

Equiangular frames
A frame is an equiangular frame if there is a constant $$c$$ such that $$| \langle \mathbf{e}_i, \mathbf{e}_j \rangle | = c$$ for all $$i \neq j$$. In particular, every orthonormal basis is equiangular.

Exact frames
A frame is an exact frame if no proper subset of the frame spans the inner product space. Each basis for an inner product space is an exact frame for the space (so a basis is a special case of a frame).

Semi-frame
Sometimes it may not be possible to satisfy both frame bounds simultaneously. An upper (respectively lower) semi-frame is a set that satisfies only the upper (respectively lower) frame inequality. A Bessel sequence is an example of a set of vectors that satisfies only the upper frame inequality.

For any vector $$ \mathbf{v} \in V $$ to be reconstructed from the coefficients $$ \{ \langle \mathbf{v},\mathbf{e}_k\rangle \}_{k \in \mathbb{N}} $$, it suffices that there exists a constant $$ A > 0 $$ such that
 * $$ A \| x-y \|^2 \le \| \mathbf{T}x-\mathbf{T}y \|^2, \quad \forall x,y \in V.$$

By setting $$ \mathbf{v}=x-y $$ and applying the linearity of the analysis operator, this condition is equivalent to
 * $$ A \| \mathbf{v} \|^2 \le \| \mathbf{T}\mathbf{v} \|^2, \quad \forall \mathbf{v}\in V,$$

which is exactly the lower frame bound condition.

Fusion frame
A fusion frame is best understood as an extension of the dual frame synthesis and analysis operators where, instead of a single subspace $$V\subseteq H$$, a set of closed subspaces $$\{W_i\}_{i\in\mathbb{N}}\subseteq H$$ with positive scalar weights $$\{w_i\}_{i\in\mathbb{N}}$$ is considered. A fusion frame is a family $$\{W_i,w_i\}_{i\in\mathbb{N}}$$ that satisfies the frame condition
 * $$A\|f\|^2 \leq \sum_i w_i^2\|P_{W_i}f\|^2 \leq B\|f\|^2, \quad \forall f\in H,$$

where $$P_{W_{i}}$$ denotes the orthogonal projection onto the subspace $$W_{i}$$.
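A minimal NumPy sketch of the fusion frame condition, using two hypothetical coordinate planes in $$H = \mathbb{R}^3$$ with unit weights:

```python
import numpy as np

# Orthogonal projections onto W1 = span{x, y} and W2 = span{y, z}.
P1 = np.diag([1.0, 1.0, 0.0])
P2 = np.diag([0.0, 1.0, 1.0])
w1, w2 = 1.0, 1.0

# Fusion frame operator S f = sum_i w_i^2 P_Wi f; its extreme eigenvalues
# are the optimal fusion frame bounds.
S = w1**2 * P1 + w2**2 * P2
eig = np.linalg.eigvalsh(S)
A, B = eig[0], eig[-1]
assert np.isclose(A, 1.0)   # directions covered by only one subspace
assert np.isclose(B, 2.0)   # the shared y-direction is counted twice
```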

Continuous frame
Suppose $$H$$ is a Hilbert space, $$X$$ a locally compact space, and $$\mu$$ a locally finite Borel measure on $$X$$. Then a set of vectors $$\{f_x\}_{x\in X}$$ in $$H$$, together with the measure $$\mu$$, is said to be a continuous frame if there exist constants $$0<A\leq B<\infty$$ such that
 * $$A||f||^2\leq \int_{X}|\langle f,f_x\rangle|^2d\mu(x)\leq B||f||^2, \quad \forall f \in H.$$

To see that continuous frames are indeed the natural generalization of the frames mentioned above, consider a discrete set $$\Lambda\subset X$$ and a measure $$\mu= \delta_\Lambda$$ where $$\delta_\Lambda$$ is the Dirac measure. Then the continuous frame condition reduces to
 * $$A||f||^2\leq \sum_{\lambda\in \Lambda}|\langle f,f_{\lambda}\rangle|^2\leq B||f||^2, \quad \forall f\in H.$$

Just like in the discrete case we can define the analysis, synthesis, and frame operators when dealing with continuous frames.

Continuous analysis operator
Given a continuous frame $$\{f_x\}_{x\in X}$$ the continuous analysis operator is the operator mapping $$f$$ to a function on $$X$$ defined as follows:

 * $$T:H \to L^2(X,\mu)$$ by $$f \mapsto \langle f,f_x\rangle _{x\in X}$$.

Continuous synthesis operator
The adjoint operator of the continuous analysis operator is the continuous synthesis operator, which is the map


 * $$T^*:L^2(X,\mu) \to H$$ by $$a_x \mapsto\int_X a_x f_x d\mu(x)$$.

Continuous frame operator
The composition of the continuous analysis operator and the continuous synthesis operator is known as the continuous frame operator. For a continuous frame $$\{f_x\}_{x\in X}$$, it is defined as follows:
 * $$S:H\to H$$ by $$Sf:=\int_X \langle f,f_x\rangle f_x d\mu(x).$$

In this case, the continuous frame projector $$P : L^2(X,\mu) \to \operatorname{im}(T)$$ is the orthogonal projection defined by
 * $$P := TS^{-1}T^*.$$

The projector $$P$$ is an integral operator with reproducing kernel $$K(x,y) = \langle S^{-1}f_x,f_y\rangle$$, thus $$\operatorname{im}(T)$$ is a reproducing kernel Hilbert space.

Continuous dual frame
Given a continuous frame $$\{f_x\}_{x\in X}$$, and another continuous frame $$\{g_x\}_{x\in X}$$, then $$\{g_x\}_{x\in X}$$ is said to be a continuous dual frame of $$\{f_x\}$$ if it satisfies the following condition for all $$f, h\in H$$:


 * $$\langle f, h\rangle =\int_X\langle f,f_x\rangle \langle g_x, h\rangle d\mu(x).$$

Framed positive operator-valued measure
Just as a frame is a natural generalization of a basis to sets that may be linearly dependent, a positive operator-valued measure (POVM) is a natural generalization of a projection-valued measure (PVM) in that elements of a POVM are not necessarily orthogonal projections.

Suppose $$(X, M)$$ is a measurable space with $$M$$ a Borel σ-algebra on $$X$$ and let $$F$$ be a POVM from $$M$$ to the space of positive operators on $$H$$ with the additional property that
 * $$0< A I \leq F(M) \leq B I < \infty,$$

where $$I$$ is the identity operator. Then $$F$$ is called a framed POVM. In case of the fusion frame condition, this allows for the substitution
 * $$ F(m) = \sum _{i\in m} w_i P_{W_i}, \quad m \in M.$$

For the continuous frame operator, the framed POVM would be
 * $$\langle F(M)f_x,f_y\rangle = \int_{M} \langle Sf_x,f_y\rangle d\mu(x).$$