
Compressive sensing (CS) is a novel sampling paradigm that acquires signals far more efficiently than the rate dictated by the Nyquist sampling theorem. CS exploits the sparsity of the signal to find sparse solutions of underdetermined linear systems, reconstructing the signal from far fewer samples. Many efficient algorithms in the literature recover the coefficients iteratively rather than identifying all of the largest coefficients at once.

Reconstruction algorithm
There are many reconstruction algorithms for sparse signal recovery in compressive sensing; they may be roughly divided into six classes, elaborated as follows.

Convex Relaxation
This class of algorithms recasts reconstruction as a convex optimization problem, typically solved by linear programming. The number of measurements required for exact reconstruction is small, but the methods are computationally expensive. Examples include basis pursuit (BP), BP de-noising (BPDN), the least absolute shrinkage and selection operator (LASSO), and least angle regression (LARS).
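As an illustration of the linear-programming route, basis pursuit ($$\min \|x\|_1$$ subject to $$\Theta x = Y$$) can be rewritten as an LP by splitting $$x$$ into nonnegative positive and negative parts. The sketch below uses SciPy's `linprog` solver; the function and variable names are illustrative, not from any particular CS library.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Theta, y):
    """Solve min ||x||_1  s.t.  Theta @ x = y  as a linear program.

    Split x = u - v with u, v >= 0, so minimizing sum(u) + sum(v)
    equals minimizing ||x||_1 at the optimum.
    """
    m, n = Theta.shape
    c = np.ones(2 * n)                    # objective: sum(u) + sum(v)
    A_eq = np.hstack([Theta, -Theta])     # Theta @ (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

# Recover a 2-sparse vector from 12 random Gaussian measurements
rng = np.random.default_rng(0)
Theta = rng.standard_normal((12, 20))
x_true = np.zeros(20)
x_true[[3, 11]] = [1.5, -2.0]
y = Theta @ x_true
x_hat = basis_pursuit(Theta, y)
print(np.max(np.abs(x_hat - x_true)))
```

With enough random measurements relative to the sparsity level, the LP solution coincides with the true sparse vector.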

Greedy Iterative Algorithm
This class of algorithms solves the reconstruction problem step by step, in an iterative fashion. The idea is to select columns of the sensing matrix $$\Theta$$ greedily: at each iteration, the column of $$\Theta$$ that correlates most strongly with the current residual of $$Y$$ is selected, a least-squares fit over the selected columns is computed, and that column's contribution is subtracted from $$Y$$. Iterations continue on the residual until the correct set of columns is identified, which is usually achieved within $$M$$ iterations. These algorithms have low implementation cost and high recovery speed; examples include orthogonal matching pursuit (OMP), regularized OMP, and compressive sampling matching pursuit (CoSaMP).
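The select/fit/subtract loop described above can be sketched as a minimal OMP in NumPy. This is an illustrative implementation assuming the sparsity level `k` is known and the columns of $$\Theta$$ are normalized; names are hypothetical.

```python
import numpy as np

def omp(Theta, y, k):
    """Orthogonal matching pursuit: at each step pick the column of
    Theta most correlated with the residual, then re-fit all selected
    columns by least squares and update the residual."""
    m, n = Theta.shape
    support = []
    residual = y.copy()
    for _ in range(k):
        # column most correlated with the current residual
        j = int(np.argmax(np.abs(Theta.T @ residual)))
        support.append(j)
        # least-squares fit on the selected columns
        coef, *_ = np.linalg.lstsq(Theta[:, support], y, rcond=None)
        residual = y - Theta[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
Theta = rng.standard_normal((20, 40))
Theta /= np.linalg.norm(Theta, axis=0)   # unit-norm columns
x_true = np.zeros(40)
x_true[[5, 17, 30]] = [1.5, -2.0, 1.0]
y = Theta @ x_true
x_hat = omp(Theta, y, k=3)
print(np.max(np.abs(x_hat - x_true)))
```

In the noiseless case, once the correct support is identified the least-squares step recovers the coefficients exactly.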

Iterative Thresholding Algorithms
Iterative thresholding approaches to the CS recovery problem are faster than convex optimization. In this class of algorithms, the correct coefficients are recovered by soft or hard thresholding, with a thresholding function that depends on the iteration number. Expander matching pursuit, sparse matching pursuit, and sequential sparse matching pursuit are recently proposed algorithms in this domain that achieve near-linear recovery time.
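A minimal sketch of the hard-thresholding idea is iterative hard thresholding (IHT): a gradient step on the data-fit term followed by keeping only the $$k$$ largest entries. The step size and names below are illustrative assumptions, not prescribed by any specific paper.

```python
import numpy as np

def iht(Theta, y, k, n_iter=300):
    """Iterative hard thresholding: gradient step on ||y - Theta x||^2,
    then keep only the k largest-magnitude entries of the iterate."""
    n = Theta.shape[1]
    step = 1.0 / np.linalg.norm(Theta, 2) ** 2   # conservative step size
    x = np.zeros(n)
    for _ in range(n_iter):
        x = x + step * Theta.T @ (y - Theta @ x)
        # hard threshold: zero out all but the k largest entries
        keep = np.argpartition(np.abs(x), -k)[-k:]
        mask = np.zeros(n, dtype=bool)
        mask[keep] = True
        x[~mask] = 0.0
    return x

rng = np.random.default_rng(2)
Theta = rng.standard_normal((20, 40))
Theta /= np.linalg.norm(Theta, axis=0)
x_true = np.zeros(40)
x_true[[4, 12, 33]] = [2.0, -1.5, 1.0]
y = Theta @ x_true
x_hat = iht(Theta, y, k=3)
print(np.linalg.norm(y - Theta @ x_hat))
```

Each iterate is exactly $$k$$-sparse by construction, which is what distinguishes hard thresholding from the soft (shrinkage) variant.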

Combinatorial/Sublinear Algorithms
This class of algorithms recovers the sparse signal through group testing. They are extremely fast and efficient compared to convex relaxation or greedy algorithms, but they require a specific structure in the measurement matrix: $$\Phi$$ needs to be sparse. Representative algorithms include the Fourier sampling algorithm, chaining pursuit, and heavy hitters on steroids (HHS) pursuit.

Non Convex Minimization Algorithms
Non-convex local minimization techniques recover CS signals from far fewer measurements by replacing the $$l_1$$-norm with the $$l_p$$-norm, where $$p<1$$. Non-convex optimization is mostly used in medical imaging tomography, network state inference, and streaming data reduction. Algorithms in the literature that use this technique include the focal underdetermined system solver (FOCUSS), iteratively re-weighted least squares, sparse Bayesian learning algorithms, and Monte-Carlo-based algorithms.
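A FOCUSS-style iteratively reweighted least-squares scheme illustrates the $$l_p$$ idea: each iteration solves a weighted minimum-norm problem whose weights $$|x|^{2-p}$$ push small entries toward zero. This is a sketch under assumed defaults ($$p=0.5$$, a small `eps` regularizer); it is not a faithful reproduction of any single published variant.

```python
import numpy as np

def focuss(Theta, y, p=0.5, n_iter=30, eps=1e-8):
    """Iteratively reweighted least squares for min ||x||_p s.t. Theta x = y,
    with 0 < p < 1 (non-convex). Starts from the minimum l2-norm solution."""
    x = np.linalg.pinv(Theta) @ y
    for _ in range(n_iter):
        w = np.abs(x) ** (2 - p) + eps    # reweighting; eps avoids 0/0
        WTt = w[:, None] * Theta.T        # diag(w) @ Theta.T
        # weighted minimum-norm solution consistent with the measurements
        x = WTt @ np.linalg.solve(Theta @ WTt, y)
    return x

rng = np.random.default_rng(3)
Theta = rng.standard_normal((20, 40))
x_true = np.zeros(40)
x_true[[7, 19, 28]] = [1.0, -2.0, 1.5]
y = Theta @ x_true
x_hat = focuss(Theta, y)
print(np.max(np.abs(x_hat - x_true)))
```

Because each iterate is a weighted minimum-norm solution, the measurement constraint $$\Theta x = Y$$ is satisfied throughout; only the weighting, and hence the sparsity pattern, evolves.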

Bregman Iterative Algorithms
These algorithms provide a simple and efficient way of solving the $$l_1$$ minimization problem: the exact solution of the constrained problem is obtained by iteratively solving a sequence of unconstrained sub-problems generated by a Bregman iterative regularization scheme. When applied to CS problems, the iterative approach using Bregman distance regularization achieves reconstruction in four to six iterations. The computational speed of these algorithms is particularly appealing compared to other existing algorithms.
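The flavor of the approach can be conveyed by a linearized Bregman iteration, one common member of this family: a gradient-style accumulation step followed by soft thresholding. The step size and `mu` below are assumed illustrative values, and the code is a sketch rather than the specific scheme of any one paper.

```python
import numpy as np

def shrink(v, mu):
    """Soft thresholding, the proximal map of mu * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman(Theta, y, mu=5.0, n_iter=500):
    """Linearized Bregman iteration, approximately solving
    min ||x||_1  s.t.  Theta x = y  for sufficiently large mu."""
    n = Theta.shape[1]
    delta = 1.0 / np.linalg.norm(Theta, 2) ** 2   # safe step size
    v = np.zeros(n)                               # accumulated gradients
    x = np.zeros(n)
    for _ in range(n_iter):
        v += Theta.T @ (y - Theta @ x)            # Bregman update
        x = delta * shrink(v, mu)                 # soft-threshold step
    return x

rng = np.random.default_rng(4)
Theta = rng.standard_normal((20, 40))
x_true = np.zeros(40)
x_true[[2, 15, 31]] = [2.0, -1.0, 1.5]
y = Theta @ x_true
x_hat = linearized_bregman(Theta, y)
print(np.linalg.norm(y - Theta @ x_hat))
```

The soft-thresholding step keeps the iterates sparse while the accumulated Bregman updates drive the residual toward zero.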