User:JFB80/sandbox

== Properties of time-like vectors ==

 * $$1 + z = \gamma \left(1 + \frac{v_{\parallel}}{c}\right) = \sqrt{\frac{1+\frac{v_{\parallel}}{c}}{1-\frac{v_{\parallel}}{c}}} = e^{w},$$ &ensp;where $$w$$ is the rapidity
 * $$z \approx \frac{v_{\parallel}}{c}$$ &ensp;for small $$v_{\parallel}$$

Scalar product
A scalar product of two events $(x_1, y_1, z_1, t_1)$ and $(x_2, y_2, z_2, t_2)$ may be defined either in space-like form as

$$ x_1x_2+y_1y_2+z_1z_2-c^2t_1t_2 $$

or in time-like form as

$$ c^2t_1t_2-x_1x_2-y_1y_2-z_1z_2 $$

Two vectors with zero scalar product were called normal by Minkowski. The condition resembles orthogonality, but that term is inappropriate because right angles are not preserved under Lorentz transformations.

The time-like form of the scalar product satisfies the reversed Cauchy inequality, valid for two time-like events that are similarly directed (both having t > 0 or both t < 0):

$$ c^2t_1t_2-x_1x_2-y_1y_2-z_1z_2 \ge \sqrt{(c^2t_1^2-x_1^2-y_1^2-z_1^2)(c^2t_2^2-x_2^2-y_2^2-z_2^2)}$$

From this it follows that the scalar product of two similarly directed time-like events is positive; reversing the direction of one event changes only the sign, so the scalar product of any two time-like events is non-zero and two time-like events cannot be normal. It follows that if two events are normal, at least one must be space-like (and it is possible for both to be space-like).
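As a concrete check of the reversed Cauchy inequality (an illustrative example with $c = 1$), take the similarly directed time-like events $(t, x, y, z) = (2, 1, 0, 0)$ and $(3, 1, 0, 0)$:

 * $$t_1t_2 - x_1x_2 = 6 - 1 = 5 \ge \sqrt{(2^2 - 1^2)(3^2 - 1^2)} = \sqrt{24} \approx 4.9.$$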

Scalar product of similarly directed vectors
Two event vectors are called similarly directed if both have t > 0 or both have t < 0.

A scalar product of two similarly directed time-like event vectors $u = (t, x, y, z)$ and $v = (t', x', y', z')$ may be defined as
 * $$\left(u, v \right) = c^2 t t' - x x' - y y' - z z' $$

This is always positive: $tt' > 0$ since $u$ and $v$ are similarly directed, and because both vectors are time-like, Cauchy's inequality gives

 * $$ x x' + y y' + z z' \le \sqrt{x^2 + y^2 + z^2}\, \sqrt{x'^2 + y'^2 + z'^2} < c|t| \cdot c|t'| = c^2 t t' $$

A similar result does not apply to similarly directed space-like event vectors: the analogous space-like form of the scalar product need not be positive. For example, if the spatial parts $(x, y, z)$ and $(x', y', z')$ are orthogonal then (taking $c = 1$)
 * $$x x' + y y' + z z' - tt' = - tt' < 0 $$

Norm and reversed triangle inequality

For time-like event vectors $v = (t, x, y, z)$ a norm $$\left\| v \right\| $$ may be defined as


 * $$\left\| v \right\| = \sqrt{c^2 t^2 - x^2 - y^2 - z^2}$$

This does not satisfy the usual triangle inequality. Instead, it satisfies the reversed triangle inequality. If $v$ and $w$ are both future-directed ($t > 0$) or both past-directed ($t < 0$) time-like event vectors, then:

 * $$\left\| v + w \right\| \ge \left\| v \right\| + \left\| w \right\|.$$

The result may be proved from the algebraic definition: square both sides, take all terms to the left-hand side, and observe that what remains is positive by the reversed Cauchy inequality.
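In outline, using the bilinearity of the scalar product together with the reversed Cauchy inequality $(v, w) \ge \left\| v \right\| \left\| w \right\|$:

 * $$\left\| v + w \right\|^2 = \left\| v \right\|^2 + 2(v, w) + \left\| w \right\|^2 \ge \left\| v \right\|^2 + 2\left\| v \right\| \left\| w \right\| + \left\| w \right\|^2 = \left(\left\| v \right\| + \left\| w \right\|\right)^2.$$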

No similar result holds for space-like events.

Chronological and causality relations

Let $x, y ∈ M$. We say that


 * 1) $x$ chronologically precedes $y$ if $y − x$ is future-directed timelike. This relation is transitive and so can be written $x < y$.
 * 2) $x$ causally precedes $y$ if $y − x$ is future-directed null or future-directed timelike. It gives a partial ordering of space-time and so can be written $x ≤ y$.

Suppose $x ∈ M$ is timelike. Then the simultaneous hyperplane for $x$ is $$\{y : \eta(x, y) = 0\},$$ where $\eta$ is the Minkowski bilinear form. Since this hyperplane varies as $x$ varies, there is a relativity of simultaneity in Minkowski space.
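For example (with $c = 1$), for an observer at rest, $x = (1, 0, 0, 0)$ and the simultaneous hyperplane is $\{y : t = 0\}$; for an observer boosted to velocity $v$ along the first axis, $x = \gamma(1, v, 0, 0)$ and the condition $\eta(x, y) = 0$ becomes

 * $$\gamma t - \gamma v\, y^1 = 0, \quad \text{i.e.} \quad t = v\, y^1,$$

a hyperplane tilted in proportion to the velocity.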


Wiener filter problem setup
The input, $$x(t)$$, to the Wiener filter consists of an unknown signal, $$s(t)$$, corrupted by additive noise, $$n(t)$$:


 * $$ x(t) \,=\, s(t) \,+\, n(t)$$

The output, $$\hat{s}(t)$$, is calculated by means of a filter, $$ g(t)$$, using the following convolution:
 * $$\hat{s}(t) = g(t) * x(t) = (g * [s + n])(t) = \int\limits_{-\infty}^{\infty}{g(\tau)\left[s(t - \tau) + n(t - \tau)\right]\,d\tau},$$

where
 * $$s(t)$$ is the original signal (not exactly known; to be estimated),
 * $$ n(t)$$ is the noise, which is uncorrelated with $$s(t)$$,
 * $$ x(t) \,=\, s(t) \,+\, n(t)$$ is the observed or measured signal,
 * $$ \hat{s}(t)$$ is the estimated signal (intended to equal $$ s(t + \alpha)$$), and
 * $$ g(t)$$ is the Wiener filter's impulse response.

The error is defined as
 * $$e(t) = s(t + \alpha) - \hat{s}(t),$$

where the constant $$\alpha$$ is the delay of the Wiener filter (since it is causal). In other words, the error is the difference between the estimated signal and the true signal shifted by the filter delay $$\alpha$$.

The squared error is
 * $$e^2(t) = s^2(t + \alpha) - 2s(t + \alpha)\hat{s}(t) + \hat{s}^2(t),$$

where $$ s(t \,+\, \alpha)$$ is the desired output of the filter and $$ e(t)$$ is the error. Depending on the value of $$\alpha$$, the problem can be described as follows:
 * if $$\alpha \,>\, 0$$ then the problem is that of prediction (error is reduced when $$\hat{s}(t)$$ is similar to a later value of s),
 * if $$\alpha \,=\, 0$$ then the problem is that of filtering (error is reduced when $$\hat{s}(t)$$ is similar to $$ s(t)$$), and
 * if $$\alpha \,<\, 0$$ then the problem is that of smoothing (error is reduced when $$\hat{s}(t)$$ is similar to an earlier value of s).

Taking the expected value of the squared error results in
 * $$\mathrm{E}(e^2) = R_s(0) - 2\int\limits_{-\infty}^{\infty}{g(\tau)R_{xs}(\tau + \alpha)\,d\tau} + \int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}{g(\tau)g(\theta)R_x(\tau - \theta)\,d\tau\,d\theta},$$

where $$ x(t) \,=\, s(t) \,+\, n(t)$$ is the observed signal, $$ R_s$$ is the autocorrelation function of $$ s(t)$$, $$ R_x$$ is the autocorrelation function of $$ x(t)$$, and $$ R_{xs}$$ is the cross-correlation function of $$ x(t)$$ and $$ s(t)$$. If the signal $$ s(t)$$ and the noise $$ n(t)$$ are uncorrelated (i.e., the cross-correlation $$ R_{sn}$$ is zero), then this means that $$ R_{xs} \,=\, R_s$$ and $$ R_x \,=\, R_s \,+\, R_n$$. For many applications, the assumption of uncorrelated signal and noise is reasonable.

The goal is to minimize $$ \mathrm{E}(e^2)$$, the expected value of the squared error, by finding the optimal $$ g(\tau)$$, the Wiener filter impulse response function. The minimum may be found by calculating the first-order incremental change in the mean-square error resulting from an incremental change in $$ g$$ for positive time. This is

 * $$ \delta \mathrm{E}(e^2) = -2 \int\limits_{0}^{\infty}{\delta g(\tau)\left(R_{xs}(\tau + \alpha)- \int\limits_{0}^{\infty} {g(\theta) R_{x}(\tau - \theta)\,d\theta}\right)}\, d\tau.$$

For a minimum, this must vanish identically for all $$ \delta g(\tau)$$ with $$ \tau>0$$, which leads to the Wiener–Hopf equation:

 * $$ R_{xs}(\tau + \alpha) = \int\limits_{0}^{\infty} {g(\theta) R_{x}(\tau - \theta)\,d\theta}, \qquad \tau \ge 0.$$

This is the fundamental equation of the Wiener theory. The right-hand side resembles a convolution but is only over the semi-infinite range. The equation can be solved to find the optimal filter $$ g$$ by a special technique due to Wiener and Hopf.

Wiener filter solutions
The Wiener filter problem has solutions for three possible cases: one where a noncausal filter is acceptable (requiring an infinite amount of both past and future data), the case where a causal filter is desired (using an infinite amount of past data), and the finite impulse response (FIR) case where a finite amount of past data is used. The first case is simple to solve but is not suited for real-time applications. Wiener's main accomplishment was solving the case where the causality requirement is in effect, and in an appendix of Wiener's book Levinson gave the FIR solution.
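In discrete time the FIR case reduces to a finite linear system: truncating the Wiener–Hopf equation to $$N$$ past samples gives the normal equations $$\sum_k g_k R_x(j - k) = R_{xs}(j)$$ for $$j = 0, \dots, N-1$$ (taking $$\alpha = 0$$), whose coefficient matrix is Toeplitz. A minimal numerical sketch, with illustrative signal and noise choices not taken from the original:

```python
import numpy as np

rng = np.random.default_rng(0)
n, taps = 20000, 32

# Illustrative data: a low-pass "unknown" signal s plus uncorrelated white noise.
s = np.convolve(rng.normal(size=n), np.ones(8) / 8, mode="same")
x = s + rng.normal(scale=0.5, size=n)

def corr(a, b, lags):
    """Biased estimate of E[a(t) b(t - k)] for k = 0 .. lags-1."""
    return np.array([np.dot(a[k:], b[:n - k]) / n for k in range(lags)])

R_x = corr(x, x, taps)    # autocorrelation of the observed signal
R_sx = corr(s, x, taps)   # cross-correlation of signal and observation

# Toeplitz normal equations: sum_k g_k R_x(j - k) = R_sx(j).
R = R_x[np.abs(np.subtract.outer(np.arange(taps), np.arange(taps)))]
g = np.linalg.solve(R, R_sx)

# Apply the causal FIR filter s_hat(t) = sum_k g_k x(t - k).
s_hat = np.convolve(x, g)[:n]
mse_raw = float(np.mean((s - x) ** 2))
mse_fir = float(np.mean((s - s_hat) ** 2))
print(mse_raw, mse_fir)   # the FIR Wiener filter should lower the MSE
```

For the filtering problem this Toeplitz system is exactly what Levinson's recursion solves in $$O(N^2)$$ operations rather than the $$O(N^3)$$ of a general solver.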

Noncausal solution

 * $$G(s) = \frac{S_{x,s}(s)}{S_x(s)}e^{\alpha s},$$

where the $$ S$$ are spectral densities. Provided that $$ g(t)$$ is optimal, the minimum mean-square error equation reduces to
 * $$E(e^2) = R_s(0) - \int_{-\infty}^{\infty}{g(\tau)R_{x,s}(\tau + \alpha)\,d\tau},$$

and the solution $$ g(t)$$ is the inverse two-sided Laplace transform of $$ G(s)$$.
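The noncausal solution is easy to exercise numerically: in discrete time, with signal and noise uncorrelated, $$S_{x,s} = S_s$$ and $$S_x = S_s + S_n$$, so $$G = S_s/(S_s + S_n)$$ at each frequency (taking $$\alpha = 0$$). A sketch assuming the spectral densities are known; the test signal and noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096
t = np.arange(n)

# Signal concentrated at two known frequency bins; unit-variance white noise.
s = np.sin(2 * np.pi * 40 * t / n) + 0.5 * np.sin(2 * np.pi * 120 * t / n)
x = s + rng.normal(size=n)

# Known spectral densities on the FFT grid: S_s from the clean signal,
# and a flat S_n = n * sigma^2 for white noise (periodogram scaling).
S_s = np.abs(np.fft.fft(s)) ** 2
S_n = n * np.ones(n)

# Noncausal Wiener filter applied in the frequency domain.
G = S_s / (S_s + S_n)
s_hat = np.fft.ifft(G * np.fft.fft(x)).real

mse_raw = float(np.mean((s - x) ** 2))
mse_wiener = float(np.mean((s - s_hat) ** 2))
print(mse_raw, mse_wiener)   # filtering should reduce the MSE substantially
```

Because all data, past and future, enter through the FFT, this filter is noncausal and suited only to offline processing, as noted above.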

Causal solution

 * $$G(s) = \frac{H(s)}{S_x^{+}(s)},$$

where
 * $$ H(s)$$ consists of the causal part of $$ \frac{S_{x,s}(s)}{S_x^{-}(s)}e^{\alpha s}$$ (that is, that part of this fraction having a positive time solution under the inverse Laplace transform)
 * $$ S_x^{+}(s)$$ is the causal component of $$ S_x(s)$$ (i.e., the inverse Laplace transform of $$ S_x^{+}(s)$$ is non-zero only for $$ t \,\ge\, 0$$)
 * $$ S_x^{-}(s)$$ is the anti-causal component of $$ S_x(s)$$ (i.e., the inverse Laplace transform of $$ S_x^{-}(s)$$ is non-zero only for $$ t < 0$$)

This general formula is complicated and deserves a more detailed explanation. To write down the solution $$ G(s)$$ in a specific case, one should follow these steps:


 * 1) Start with the spectrum $$ S_x(s)$$ in rational form and factor it into causal and anti-causal components: $$S_x(s) = S_x^{+}(s) S_x^{-}(s)$$ where $$ S^{+}$$ contains all the zeros and poles in the left half plane (LHP) and $$ S^{-}$$ contains the zeroes and poles in the right half plane (RHP). This is called the Wiener–Hopf factorization.
 * 2) Divide $$ S_{x,s}(s)e^{\alpha s}$$ by $$ S_x^{-}(s)$$ and write out the result as a partial fraction expansion.
 * 3) Select only those terms in this expansion having poles in the LHP.  Call these terms $$ H(s)$$.
 * 4) Divide $$ H(s)$$ by $$ S_x^{+}(s)$$.  The result is the desired filter transfer function $$ G(s)$$.
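As an illustration of these steps (a toy example assumed here, not taken from the original), let $$\alpha = 0$$, let the noise be white with $$ S_n(s) = 1$$, and let $$ S_s(s) = \frac{3}{1 - s^2}$$, so that $$ S_{x,s}(s) = S_s(s)$$ and

 * $$S_x(s) = \frac{3}{1-s^2} + 1 = \frac{4-s^2}{1-s^2} = \frac{2+s}{1+s} \cdot \frac{2-s}{1-s} = S_x^{+}(s)\, S_x^{-}(s).$$

Dividing by $$ S_x^{-}(s)$$ and expanding in partial fractions,

 * $$\frac{S_{x,s}(s)}{S_x^{-}(s)} = \frac{3}{(1-s)(1+s)} \cdot \frac{1-s}{2-s} = \frac{3}{(1+s)(2-s)} = \frac{1}{1+s} + \frac{1}{2-s},$$

so the LHP-pole (causal) part is $$ H(s) = \frac{1}{1+s}$$ and

 * $$G(s) = \frac{H(s)}{S_x^{+}(s)} = \frac{1}{1+s} \cdot \frac{1+s}{2+s} = \frac{1}{2+s},$$

giving the causal impulse response $$ g(t) = e^{-2t}$$ for $$ t \ge 0$$.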

