
Detection of a Known Continuous Signal S(t)
In communication, we often must decide whether the output of a noisy channel contains useful information. The following hypothesis test is used to detect a continuous signal S(t) from the channel output X(t), where N(t) is the channel noise, usually assumed to be a zero-mean Gaussian process with correlation function $$R_{N}(t, s) = E[N(t)N(s)]$$: $$H: X(t) = N(t)$$,

$$K: X(t) = N(t)+s(t), t\in(0,T)$$.

Signal Detection in White Noise
When the channel noise is white, its correlation function is $$R_{N}(\tau) = \frac{N_0}{2} \delta (\tau)$$ and its power spectral density is constant. In a physically realizable channel the noise power is finite, so we use the band-limited model $$S_{N}(f) = \frac{N_{0}}{2} \text{ for } |f| < W$$. The noise correlation function is then a sinc function with zeros at $$\frac{n}{2W}, n = \dots,-1,0,1,\dots$$. Since samples taken at these spacings are uncorrelated and Gaussian, they are independent. Thus we can take samples from X(t) with time spacing $$\Delta t = \frac{1}{2W}$$ within (0,T). Let $$X_i = X(i\Delta t)$$. We have a total of $$n = \frac{T}{\Delta t} = 2WT$$ i.i.d. samples $$\{X_1, X_2,\dots,X_n\}$$ with which to develop the likelihood ratio test. Let $$S_i = S(i\Delta t)$$; the problem becomes $$H: X_i = N_i$$,

$$K: X_i = N_i + S_i, i = 1,2,\dots,n.$$ The log-likelihood ratio is $$\mathcal{L}(\underline{x}) = \frac{\sum^n_{i=1} (2S_i x_i - S_i^2)}{2\sigma^2} \Leftrightarrow \Delta t \sum ^n_{i = 1} S_i x_i = \sum^n_{i=1} S(i\Delta t)x(i\Delta t)\Delta t \gtrless \lambda_2$$. As $$\Delta t \rightarrow 0, \text{ let } G = \int^T_0 S(t)x(t)dt$$. Then G is the test statistic and the Neyman-Pearson optimum detector is: $$G(\underline{x}) > G_0 \Rightarrow K, \quad G(\underline{x}) < G_0 \Rightarrow H$$. As G is Gaussian, we can characterize it by its mean and variance: $$E[G|H] = 0$$, $$E[G|K] = \int^T_0 S(t)^2 dt = E$$, and $$var[G|H] = var[G|K] = \frac{N_0 E}{2}$$. The false alarm constraint $$\alpha$$ fixes the threshold $$G_0 = \sqrt{\frac{N_0 E}{2}}\,\Phi^{-1}(1-\alpha)$$, and the probability of detection is $$\beta = \int^{\infty}_{G_0} N\left(E, \frac{N_0 E}{2}\right)dG = 1-\Phi\left(\frac{G_0 - E}{\sqrt{\frac{N_0 E}{2}}}\right) = \Phi \left[\sqrt{\frac{2E}{N_0}} - \Phi^{-1}(1-\alpha)\right]$$, where $$\Phi(\cdot)$$ is the cdf of a standard normal random variable.
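The white-noise correlator detector can be checked numerically. Below is a minimal Monte Carlo sketch in Python (the sinusoidal signal, the values of N0 and alpha, the grid size, and the trial count are arbitrary illustrative choices, not taken from the text): it discretizes $$G = \int^T_0 S(t)x(t)dt$$, sets the threshold from the false-alarm constraint, and compares the empirical detection rate with the theoretical $$\beta$$.

```python
import math
import random
from statistics import NormalDist

random.seed(0)
T, n = 1.0, 200                    # observation interval and grid size (arbitrary)
dt = T / n
N0 = 2.0                           # two-sided noise PSD is N0/2
sigma = math.sqrt((N0 / 2) / dt)   # per-sample std for discretized white noise
S = [math.sin(2 * math.pi * 5 * i * dt) for i in range(n)]  # hypothetical signal
E = sum(s * s for s in S) * dt     # signal energy
alpha = 0.05
G0 = math.sqrt(N0 * E / 2) * NormalDist().inv_cdf(1 - alpha)  # NP threshold

def G(x):
    """Discretized test statistic: integral of S(t) x(t) dt over (0, T)."""
    return sum(s * xi for s, xi in zip(S, x)) * dt

trials = 5000
false_alarms = detections = 0
for _ in range(trials):
    noise = [random.gauss(0, sigma) for _ in range(n)]
    if G(noise) > G0:                              # hypothesis H: noise only
        false_alarms += 1
    if G([s + w for s, w in zip(S, noise)]) > G0:  # hypothesis K: signal + noise
        detections += 1

beta_theory = NormalDist().cdf(math.sqrt(2 * E / N0)
                               - NormalDist().inv_cdf(1 - alpha))
print(false_alarms / trials, detections / trials, beta_theory)
```

The empirical false-alarm rate should sit near the design value alpha, and the empirical detection rate near the theoretical power.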

Signal Detection in Colored Noise
When N(t) is colored (correlated in time) Gaussian noise with zero mean and covariance function $$R_N(t,s) = E[N(t)N(s)],$$ we cannot obtain independent discrete observations by evenly spaced sampling. Instead, we can use the K-L expansion to decorrelate the noise process and obtain independent Gaussian observation 'samples'. The K-L expansion of N(t) is $$N(t) = \sum^{\infty}_{i=1} N_i \Phi_i(t), 0<t<T$$, where $$N_i =\int^T_0 N(t)\Phi_i(t)dt$$ and the orthonormal bases $$\{\Phi_i(t)\}$$ are generated by the kernel $$R_N(t,s)$$, i.e., they are solutions to $$ \int ^T _0 R_N(t,s)\Phi_i(s)ds = \lambda_i \Phi_i(t)$$, with $$var[N_i] = \lambda_i$$. Expand the signal as well: $$S(t) = \sum^{\infty}_{i = 1}S_i\Phi_i(t)$$, where $$S_i = \int^T _0 S(t)\Phi_i(t)dt, 0<t<T$$. The log-likelihood ratio test reduces to the statistic $$G = \sum^{\infty}_{i=1}\frac{S_i}{\lambda_i} x_i$$ with $$G > G_0 \Rightarrow K, \quad G < G_0 \Rightarrow H.$$ Define $$k(t) = \sum^{\infty}_{i=1} \frac{S_i}{\lambda_i} \Phi_i(t), 0<t<T;$$ then $$G = \int^T _0 k(t)x(t)dt$$. If k(t) is available, the optimum detector can be implemented in practice as a correlator of x(t) with k(t).
 * $$\{N_i\}$$ are independent Gaussian r.v.'s with variance $$\lambda_i$$
 * under H: $$\{X_i\}$$ are independent Gaussian r.v.'s. $$f_H[x(t)|0<t<T] = f_H(\underline{x}) = \prod^{\infty} _{i=1} \frac{1}{\sqrt{2\pi \lambda_i}}\exp\left[-\frac{x_i^2}{2 \lambda_i}\right]$$
 * under K: $$\{X_i - S_i\}$$ are independent Gaussian r.v.'s. $$f_K[x(t)|0<t<T] = f_K(\underline{x}) = \prod^{\infty} _{i=1} \frac{1}{\sqrt{2\pi \lambda_i}}\exp\left[-\frac{(x_i - S_i)^2}{2 \lambda_i}\right]$$
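A finite-dimensional analogue makes the K-L idea concrete. The sketch below (all parameter values are arbitrary illustrative choices) takes a correlated zero-mean Gaussian pair with covariance matrix [[1, r], [r, 1]], whose orthonormal eigenvectors are (1,1)/sqrt(2) and (1,-1)/sqrt(2) with eigenvalues 1+r and 1-r, and verifies by Monte Carlo that the projections onto the eigenvectors are uncorrelated with variances equal to the eigenvalues:

```python
import math
import random

random.seed(1)
r = 0.8                             # correlation of the Gaussian pair
lam = [1 + r, 1 - r]                # eigenvalues of [[1, r], [r, 1]]
s2 = 1 / math.sqrt(2)
phi = [(s2, s2), (s2, -s2)]         # orthonormal eigenvectors

def sample():
    """Zero-mean Gaussian pair with covariance [[1, r], [r, 1]]."""
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    return z1, r * z1 + math.sqrt(1 - r * r) * z2

trials = 50000
m = [[0.0, 0.0], [0.0, 0.0]]        # second moments of the K-L coefficients
for _ in range(trials):
    n1, n2 = sample()
    c = [phi[i][0] * n1 + phi[i][1] * n2 for i in range(2)]  # N_i = <N, phi_i>
    for i in range(2):
        for j in range(2):
            m[i][j] += c[i] * c[j] / trials

print(m)  # approximately diag(1 + r, 1 - r): uncorrelated, variances lambda_i
```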

How to find k(t)?
Since $$\int^T_0 R_N(t,s)k(s)ds = \sum^{\infty}_{i=1} \frac{S_i}{\lambda_i} \int^T _0 R_N(t,s)\Phi_i (s) ds = \sum^{\infty}_{i=1} S_i \Phi_i(t) = S(t)$$, k(t) is the solution to $$\int^T_0 R_N(t,s)k(s)ds = S(t)$$. If N(t) is wide-sense stationary, this becomes $$\int^T_0 R_N(t-s)k(s)ds = S(t) $$, which is known as the Wiener-Hopf equation. The equation can be solved by taking the Fourier transform, but this is not practically realizable in general, since it requires spectral factorization. A special case in which k(t) is easy to calculate is white Gaussian noise: $$\int^T_0 \frac{N_0}{2}\delta(t-s)k(s)ds = S(t) \Rightarrow k(t) = \frac{2}{N_0} S(t) = C\, S(t), 0<t<T$$. The corresponding matched-filter impulse response is $$h(t) = k(T-t) = C\, S(T-t)$$. Up to the constant C, which can be absorbed into the threshold, this is exactly the result we arrived at in the previous section for detection of a signal in white noise.
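When no closed form for k(t) is available, the integral equation $$\int^T_0 R_N(t,s)k(s)ds = S(t)$$ can be solved approximately by discretizing it on a grid, which turns it into a linear system. A self-contained sketch (the exponential kernel, the sinusoidal signal, and the grid size are arbitrary illustrative choices):

```python
import math

n, T = 40, 1.0                       # grid size and interval (arbitrary)
dt = T / n
t = [(i + 0.5) * dt for i in range(n)]                    # midpoint grid
R = [[math.exp(-abs(ti - tj)) for tj in t] for ti in t]   # hypothetical kernel
S = [math.sin(2 * math.pi * ti) for ti in t]              # hypothetical signal

# Solve the discretized system (R * dt) k = S by Gaussian elimination
# with partial pivoting.
A = [[R[i][j] * dt for j in range(n)] + [S[i]] for i in range(n)]
for col in range(n):
    p = max(range(col, n), key=lambda row: abs(A[row][col]))
    A[col], A[p] = A[p], A[col]
    for row in range(col + 1, n):
        f = A[row][col] / A[col][col]
        for c in range(col, n + 1):
            A[row][c] -= f * A[col][c]
k = [0.0] * n
for i in reversed(range(n)):
    k[i] = (A[i][n] - sum(A[i][j] * k[j] for j in range(i + 1, n))) / A[i][i]

# Residual of the integral equation on the grid.
res = max(abs(sum(R[i][j] * k[j] * dt for j in range(n)) - S[i])
          for i in range(n))
print(res)  # tiny residual: k solves the discretized equation
```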

How to find the test threshold for the Neyman-Pearson optimum detector?
Since X(t) is a Gaussian process, $$G = \int^T_0 k(t)x(t)dt$$ is a Gaussian random variable that can be characterized by its mean and variance. The second moments are $$E[G^2|H] = \int^T_0\int^T_0 k(t)k(s) R_N(t,s)\,dt\,ds = \int^T_0 k(t)\left(\int^T_0 k(s)R_N(t,s)ds\right)dt = \int^T_0 k(t)S(t)dt = \rho$$ and $$E[G^2|K]=\int^T_0\int^T_0k(t)k(s)E[x(t)x(s)]\,dt\,ds = \int^T_0\int^T_0k(t)k(s)\left(R_N(t,s) +S(t)S(s)\right)dt\,ds = \rho + \rho^2,$$ so that
 * $$E[G|H] = \int^T_0 k(t)E[x(t)|H]dt = 0$$
 * $$E[G|K] = \int^T_0 k(t)E[x(t)|K]dt = \int^T_0 k(t)S(t)dt \equiv \rho$$
 * $$var[G|H] = E[G^2|H] - (E[G|H])^2 = \rho$$
 * $$var[G|K] = E[G^2|K] - (E[G|K])^2 = \rho + \rho^2 -\rho^2 = \rho. $$

Hence, the false alarm probability is $$\alpha = \int^{\infty}_{G_0} N(0,\rho)dG = 1 - \Phi\left(\frac{G_0}{\sqrt{\rho}}\right),$$ so the test threshold for the Neyman-Pearson optimum detector is $$G_0 = \sqrt{\rho}\, \Phi^{-1} (1-\alpha)$$. The power of detection is $$\beta = \int^{\infty}_{G_0} N(\rho, \rho)dG = \Phi \left[\sqrt{\rho} - \Phi^{-1}(1 - \alpha)\right]$$. When the noise is a white Gaussian process, $$k(t) = \frac{2}{N_0}S(t)$$ and $$\rho = \int^T_0 k(t)S(t)dt = \frac{2}{N_0}\int^T_0 S(t)^2 dt = \frac{2E}{N_0}$$, where E is the signal energy, recovering the result of the previous section.
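For a given rho and alpha, the threshold and power are straightforward to evaluate numerically. A small sketch using Python's standard-library normal distribution (the values of rho and alpha are arbitrary illustrative choices):

```python
import math
from statistics import NormalDist

Phi = NormalDist()                 # standard normal cdf / quantile
rho, alpha = 4.0, 0.05             # arbitrary illustrative values
G0 = math.sqrt(rho) * Phi.inv_cdf(1 - alpha)             # NP threshold
beta = Phi.cdf(math.sqrt(rho) - Phi.inv_cdf(1 - alpha))  # detection power
print(G0, beta)
```

Larger rho (more signal energy relative to the noise) pushes beta toward 1 at the same false-alarm level.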

Prewhitening
For some types of colored noise, a typical practice is to add a prewhitening filter before the matched filter to transform the colored noise into white noise. For example, let N(t) be a wide-sense stationary colored noise with correlation function $$R_N(\tau) = \frac{B N_0}{4} e^{-B|\tau|}$$ and power spectral density $$S_N(\omega) = \frac{N_0}{2\left(1+(\frac{\omega}{B})^2\right)}$$. The transfer function of the prewhitening filter is $$H(\omega) = 1 + j \frac{\omega}{B}$$, since then $$|H(\omega)|^2 S_N(\omega) = \frac{N_0}{2}$$.
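A quick numerical check (the values of B and N0 are arbitrary illustrative choices) confirms that this filter flattens the spectrum, i.e. that the output noise spectrum equals N0/2 at every frequency:

```python
import math

B, N0 = 3.0, 2.0                   # arbitrary illustrative values
freqs = [0.0, 0.5, 1.0, 5.0, 20.0]
flat = []
for w in freqs:
    S_N = (N0 / 2) / (1 + (w / B) ** 2)   # colored-noise spectrum
    H2 = abs(complex(1, w / B)) ** 2      # |H(w)|^2 = 1 + (w/B)^2
    flat.append(H2 * S_N)                 # spectrum after prewhitening
print(flat)  # every entry equals N0/2 = 1.0
```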

Detection of a Gaussian Random Signal in AWGN
When the signal we want to detect from the noisy channel is itself random, for example a zero-mean Gaussian process X(t), we can still use the K-L expansion to obtain an independent sequence of observations. Denoting the channel output by Y(t), the detection problem is $$H_0: Y(t) = N(t)$$, $$H_1: Y(t) = N(t) + X(t), t\in(0,T)$$. The K-L expansion of X(t) is $$X(t) = \sum^{\infty}_{i=1} X_i \Phi_i(t)$$, where $$X_i =\int^T_0 X(t) \Phi_i(t)dt$$ and the $$\Phi_i(t)$$'s are solutions to $$ \int^T_0 R_X(t,s)\Phi_i(s)ds= \lambda_i \Phi_i(t)$$. So the $$X_i$$'s are an independent sequence of r.v.'s with zero mean and variance $$\lambda_i$$. Expanding Y(t) and N(t) in the same basis $$\Phi_i(t)$$, we get $$Y_i = \int^T_0 Y(t)\Phi_i(t)dt = \int^T_0 [N(t) + X(t)]\Phi_i(t)dt = N_i + X_i$$, where $$N_i = \int^T_0 N(t)\Phi_i(t)dt.$$ As N(t) is Gaussian white noise, the $$N_i$$'s are an i.i.d. sequence of r.v.'s with zero mean and variance $$\frac{N_0}{2}$$, so the problem reduces to testing $$H_0: Y_i = N_i$$ against $$H_1: Y_i = N_i + X_i$$. The Neyman-Pearson optimal test uses $$\Lambda = \frac{f(\underline{y}|H_1)}{f(\underline{y}|H_0)} = C\exp\left[\sum^{\infty}_{i=1}\frac{y_i^2}{2}\, \frac{\lambda_i}{\frac{N_0}{2}(\frac{N_0}{2} + \lambda_i)}\right]$$, so the log-likelihood ratio is $$\mathcal{L} = \ln\Lambda = K + \sum^{\infty}_{i=1}\frac{y_i^2}{2}\, \frac{\lambda_i}{\frac{N_0}{2}(\frac{N_0}{2} + \lambda_i)}$$. Since $$\hat{X_i} = \frac{\lambda_i}{\frac{N_0}{2} + \lambda_i}\, Y_i$$ is just the minimum-mean-square estimate of $$X_i$$ given $$Y_i$$, we can write $$\mathcal{L} = K + \frac{1}{N_0} \sum^{\infty}_{i=1} Y_i \hat{X_i}$$. The K-L expansion has the following property: if $$f(t) = \sum f_i \Phi_i(t)$$ and $$g(t) = \sum g_i \Phi_i(t)$$, where $$f_i = \int_0^T f(t) \Phi_i(t)dt$$ and $$g_i = \int_0^T g(t)\Phi_i(t)dt$$, then $$\sum^{\infty}_{i=1} f_i g_i = \int^T_0 f(t)g(t)dt$$. So letting $$\hat{X}(t|T) = \sum^{\infty}_{i=1} \hat{X_i}\Phi_i(t)$$, we obtain $$\mathcal{L} = K + \frac{1}{N_0} \int^T_0 Y(t) \hat{X}(t|T)dt$$, an estimator-correlator structure. A noncausal filter Q(t,s) can be used to obtain the estimate through $$\hat{X}(t|T) = \int^T_0 Q(t,s)Y(s)ds$$.
By the orthogonality principle, Q(t,s) satisfies $$\int^T_0 Q(t,s)R_X(s,\lambda)ds + \frac{N_0}{2} Q(t, \lambda) = R_X(t, \lambda), \quad 0 < \lambda < T, \; 0 < t < T.$$ A causal (realizable) filter h(t,s), which operates only on the observations Y(s) for s < t, can instead be used to get the estimate $$\hat{X}(t|t)$$. The two filters are related by $$Q(t,s) = h(t,s) + h(s, t) - \int^T_0 h(\lambda, t)h(s, \lambda)d\lambda$$.
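The estimator inside the test statistic can be illustrated one coefficient at a time: for Gaussian X with variance lambda observed as Y = X + N with noise variance N0/2, the minimum-mean-square estimate is $$\hat{X} = \frac{\lambda}{\lambda + N_0/2} Y$$. A Monte Carlo sketch (the values of lambda, N0, and the trial count are arbitrary illustrative choices):

```python
import math
import random

random.seed(2)
lam, half_N0 = 2.0, 0.5            # arbitrary values for Var[X] and N0/2
gain = lam / (lam + half_N0)       # MMSE gain: X_hat = gain * Y
trials = 100000
mse_mmse = mse_raw = 0.0
for _ in range(trials):
    X = random.gauss(0, math.sqrt(lam))
    Y = X + random.gauss(0, math.sqrt(half_N0))
    mse_mmse += (gain * Y - X) ** 2 / trials   # error of the MMSE estimate
    mse_raw += (Y - X) ** 2 / trials           # error of using Y directly

# Theoretical MMSE: lam * half_N0 / (lam + half_N0) = 0.4, versus 0.5 for Y.
print(mse_mmse, mse_raw)
```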