Compressed sensing in speech signals

In communications technology, the technique of compressed sensing (CS) may be applied to the processing of speech signals under certain conditions. In particular, CS can be used to reconstruct a sparse vector from a small number of measurements, provided the signal can be represented in a sparse domain. A "sparse domain" is a domain in which only a few coefficients have non-zero values.

Theory
Suppose a signal $${x \in R^{N}}$$ can be represented in a domain where only $${\it M}$$ coefficients out of $${\it N}$$ (where $${M \ll N}$$) are non-zero; the signal is then said to be sparse in that domain. If the sparse domain of the signal is known, the original signal can be reconstructed from the recovered sparse vector. CS can therefore be applied to a speech signal only if its sparse domain is known.

Consider a speech signal $${x}$$ that can be represented in a domain $${\Psi}$$ as $${x} = {\Psi \boldsymbol\alpha}$$, where the speech signal $${x \in R^{\it N}}$$, the dictionary matrix $${\Psi \in R^{\it {N \times N}}}$$, and the sparse coefficient vector $${\boldsymbol\alpha \in R^{\it N}}$$. This speech signal is said to be sparse in the domain $${\Psi}$$ if the number of significant (non-zero) coefficients in the sparse vector $${\boldsymbol\alpha}$$ is $${\it{K}}$$, where $${\it{K \ll N}}$$.
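The synthesis model $${x = \Psi \boldsymbol\alpha}$$ can be illustrated with a short sketch. This is not from the article: it assumes, purely for illustration, an orthonormal DCT matrix as the dictionary $${\Psi}$$ and a synthetic coefficient vector with $${K}$$ non-zero entries standing in for a speech frame's sparse representation.

```python
import numpy as np
from scipy.fft import idct

# Illustrative sketch: a length-N signal that is K-sparse in an assumed
# DCT domain. Psi is the inverse-DCT synthesis matrix, so x = Psi @ alpha.
N, K = 256, 5
rng = np.random.default_rng(0)

alpha = np.zeros(N)
support = rng.choice(N, size=K, replace=False)  # locations of the K non-zero coefficients
alpha[support] = rng.standard_normal(K)

Psi = idct(np.eye(N), norm="ortho", axis=0)     # orthonormal DCT dictionary (assumption)
x = Psi @ alpha                                 # synthesized signal, x = Psi alpha

print(np.count_nonzero(alpha))                  # → 5, i.e. K significant coefficients
```

Any orthonormal basis (or overcomplete dictionary) in which speech frames are approximately sparse could play the role of $${\Psi}$$ here; the DCT is only a convenient stand-in.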

The observed signal $${x}$$ is of dimension $${\it{N \times 1}}$$. To reduce the complexity of solving for $${\boldsymbol\alpha}$$ using CS, the speech signal is observed through a measurement matrix $${\Phi}$$ such that

$${y = \Phi x = \Phi \Psi \boldsymbol\alpha \qquad (1)}$$

where $${y \in R^{\it M}}$$ and the measurement matrix $${\Phi \in R^{\it M \times N}}$$ with $${\it {M \ll N}}$$. The sparse decomposition problem for eq. 1 can be solved as a standard $${l_1}$$ minimization:

$${\hat{\boldsymbol\alpha} = \arg\min_{\boldsymbol\alpha} \|\boldsymbol\alpha\|_1 \quad \text{subject to} \quad y = \Phi \Psi \boldsymbol\alpha \qquad (2)}$$

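The measurement and recovery steps can be sketched end to end. This is an illustrative example, not the article's method: it assumes a random Gaussian $${\Phi}$$, the DCT dictionary as $${\Psi}$$, and recasts the $${l_1}$$ minimization as a linear program (basis pursuit) solved with `scipy.optimize.linprog`.

```python
import numpy as np
from scipy.fft import idct
from scipy.optimize import linprog

# Sketch of CS recovery by l1 minimization, under assumed choices:
# DCT dictionary Psi, random Gaussian measurement matrix Phi.
N, M, K = 128, 40, 4
rng = np.random.default_rng(1)

Psi = idct(np.eye(N), norm="ortho", axis=0)      # dictionary matrix (assumption)
alpha = np.zeros(N)
alpha[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
x = Psi @ alpha                                  # original K-sparse signal

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random measurement matrix
y = Phi @ x                                      # M << N measurements, eq. 1

# Basis pursuit as a linear program: split alpha = u - v with u, v >= 0,
# then minimize sum(u + v) subject to Phi Psi (u - v) = y.
A = Phi @ Psi
res = linprog(c=np.ones(2 * N),
              A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None), method="highs")
alpha_hat = res.x[:N] - res.x[N:]                # recovered sparse vector
x_hat = Psi @ alpha_hat                          # reconstructed signal

print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))  # relative reconstruction error
```

With $${M}$$ sufficiently large relative to $${K}$$ (as here), the $${l_1}$$ solution recovers the sparse vector and the reconstruction error is negligible; dedicated solvers (OMP, LASSO-type methods) are normally used instead of a generic LP for larger problems.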
If the measurement matrix $${\Phi}$$ satisfies the restricted isometry property (RIP) and is incoherent with the dictionary matrix $${\Psi}$$, then the reconstructed signal is close to the original speech signal. Various types of measurement matrices, such as random matrices, can be used for speech signals. Estimating the sparsity of a speech signal is difficult because speech varies greatly over time, so its sparsity also varies highly over time. Ideally, the sparsity would be tracked over time without much computational cost; where this is not possible, a worst-case sparsity can be assumed for a given speech signal. The sparse vector $${\hat{\boldsymbol\alpha}}$$ for a given speech signal is reconstructed from as few measurements ($${y}$$) as possible using $${l_1}$$ minimization. The original speech signal is then reconstructed from the calculated sparse vector $${\hat{\boldsymbol\alpha}}$$ using the fixed dictionary matrix $${\Psi}$$ as $${\hat{x}}$$ = $${\Psi}$$$${\hat{\boldsymbol\alpha}}$$. Estimating both the dictionary matrix and the sparse vector from random measurements alone has been done iteratively; the speech signal reconstructed from the estimated sparse vector and dictionary matrix is close to the original. Further iterative approaches that compute both the dictionary matrix and the speech signal from random measurements alone have also been developed.

Applications
The application of structured sparsity for joint speech localization-separation in reverberant acoustics has been investigated for multiparty speech recognition. Further applications of the concept of sparsity in speech processing remain to be studied. The idea behind applying CS to speech signals is to formulate algorithms or methods that use only the random measurements ($${y}$$) to carry out various forms of application-based processing, such as speaker recognition and speech enhancement.