
Stochastic Process
Given a probability space $$(\Omega, \Sigma, P)$$ and a measurable space $$(E,\mathcal{E})$$, a stochastic process is a family of stochastic variables $$(X_t:\Omega \to E)_{t \in I}$$, that is, a map
 * $$X:\Omega \times I \to E, \; (\omega, t) \mapsto X_t(\omega)\;$$,

such that for all $$t \in I$$ the map $$X_t:\;\omega \mapsto X_t(\omega)$$ is $$\Sigma$$-$$\mathcal{E}$$-measurable.

If $$E$$ is finite or countable, $$(X_t)_{t \in I}$$ is called a point process.

Example: Poisson Process

Point Process
A Poisson process is a counting process, that is, a stochastic process $$\{N(t),\, t \geq 0\}$$ whose values are non-negative integers and non-decreasing in time:


 * 1) $$N(t) \geq 0$$.
 * 2) $$N(t)$$ is an integer.
 * 3) If $$s \leq t$$ then $$N(s) \leq N(t)$$.
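
As an illustration (not part of the original definition), the sketch below simulates a realization of such a counting process, assuming a homogeneous Poisson process, whose inter-arrival times are exponentially distributed with the given rate; all names in the snippet are illustrative.

```python
import random

def simulate_event_times(rate, t_max, seed=0):
    """Event times of a homogeneous Poisson process with the given rate on [0, t_max].

    Inter-arrival times are i.i.d. exponential with mean 1/rate (a standard
    property of the homogeneous Poisson process, assumed here).
    """
    rng = random.Random(seed)
    times = []
    t = rng.expovariate(rate)
    while t <= t_max:
        times.append(t)
        t += rng.expovariate(rate)
    return times

def N(t, event_times):
    """Counting process: number of events up to and including time t."""
    return sum(1 for s in event_times if s <= t)

events = simulate_event_times(rate=5.0, t_max=10.0)
# Properties 1)-3): N(t) is a non-negative integer and non-decreasing in t.
for s, t in [(1.0, 2.0), (3.0, 7.0), (0.0, 10.0)]:
    assert 0 <= N(s, events) <= N(t, events)
print(N(10.0, events))  # on average about rate * t_max = 50
```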

Poisson Distribution
The Poisson distribution with intensity $$\lambda$$ of a stochastic variable $$X: \Omega \to \mathbb{N}_0$$ is the probability distribution given by the probability mass function


 * $$P_{\lambda}(k):=\operatorname{Pr}[X=k]= \frac{\lambda^k e^{-\lambda}}{k!}.$$

For the Poisson distribution to be a well-defined distribution, we need to check that $$\Pr[\mathbb{N}_0]=1$$. Indeed,


 * $$\Pr[\mathbb{N}_0]=\Pr[\bigcup_{k \in \mathbb{N}_0}\{k\}]= \sum_{k \in \mathbb{N}_0} \Pr[\{k\}] = \underbrace{\sum_{k \in \mathbb{N}_0} \frac{\lambda^k}{k!}}_{=e^{\lambda}} e^{-\lambda} = 1.$$

It also follows that $$\Pr[S]$$ exists for every subset $$S \subseteq \mathbb{N}_0$$: the partial sums of $$\sum_{k \in S} \Pr[\{k\}]$$ are bounded by one and monotonically increasing, since $$\Pr[\{k\}]$$ is positive for all $$k \in \mathbb{N}_0$$.
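
This normalization can also be checked numerically (an illustrative sketch, not part of the original text; the intensity and the truncation point are arbitrary):

```python
import math

def poisson_pmf(k, lam):
    """P[X = k] for X ~ Poisson(lam)."""
    return lam**k * math.exp(-lam) / math.factorial(k)

lam = 3.5
# The series sum_k lam^k / k! * e^(-lam) equals 1; truncating at k = 100
# already captures essentially all of the probability mass for this lam.
total = sum(poisson_pmf(k, lam) for k in range(100))
print(total)  # ~1.0 up to floating-point error
```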

The expected value of a stochastic variable $$X$$ following the Poisson distribution is computed as (link: Expectation value of a discrete random variable):


 * $$\operatorname{E}[X] = \sum_{k \in \mathbb{N}_0} k \operatorname{Pr}[X=k] = \sum_{k \in \mathbb{N}_0} k \frac{\lambda^k}{k!}\,\mathrm{e}^{-\lambda} = \lambda\, \mathrm{e}^{-\lambda} \underbrace{\sum_{k=1}^{\infty}\frac{\lambda^{k-1}}{(k-1)!}}_{=\mathrm{e}^{\lambda}} = \lambda.$$
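
The result $$\operatorname{E}[X]=\lambda$$ can also be checked empirically by sampling, as in the following sketch using NumPy (the sample mean approaches $$\lambda$$ by the law of large numbers):

```python
import numpy as np

lam = 3.5
rng = np.random.default_rng(seed=0)
samples = rng.poisson(lam=lam, size=1_000_000)
print(samples.mean())  # close to lam = 3.5
```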

Expectation value of a discrete random variable
Let $$X: \Omega \to \mathbb{R}$$ be a discrete stochastic variable taking values in $$\mathbb{N}$$. Then the expected value of $$X$$ can be calculated as


 * $$\operatorname{E}[X] = \sum_{n \in \mathbb{N}} n P^{X}(n).$$

Proof:

Since $$X^{-1}(\{r\})= \emptyset$$ for $$r \notin \mathbb{N}$$, we have


 * $$X^{-1}(\mathbb{N})=X^{-1}(\mathbb{R})$$.

Thus


 * $$\begin{align} \operatorname{E}[X] & =\int_{\Omega} X \,\mathrm{d}P = \int_{\bigcup_{n \in \mathbb{N}} X^{-1}(\{n\})} X \,\mathrm{d}P =\sum_{n \in \mathbb{N}} \int_{X^{-1}(\{n\})} X \,\mathrm{d}P \\ &=\sum_{n \in \mathbb{N}} \int_{X^{-1}(\{n\})} n \,\mathbf{1}_{X^{-1}(\{n\})} \,\mathrm{d}P \,\, \stackrel{\mathrm{Def.}}{=} \,\, \sum_{n \in \mathbb{N}} n P(X^{-1}(\{n\})) \,\, \stackrel{\mathrm{Def.}}{=} \,\, \sum_{n \in \mathbb{N}} n P^{X}(n). \end{align}$$
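
For a concrete discrete example (illustrative only, not from the original text), the formula reduces to a weighted sum over the values of $$X$$, here for a fair six-sided die:

```python
from fractions import Fraction

# Image distribution P^X of a fair six-sided die: P^X(n) = 1/6 for n = 1,...,6.
pmf = {n: Fraction(1, 6) for n in range(1, 7)}

# E[X] = sum over n of n * P^X(n)
expectation = sum(n * p for n, p in pmf.items())
print(expectation)  # 7/2
```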

Binomial Distribution
The binomial distribution with parameters $$n$$ and $$p$$ of a stochastic variable $$X: \Omega \to \mathbb{N}_0$$ is the probability distribution given by the probability mass function


 * $$\operatorname{Pr}[X=k]= {n \choose k}\, p^k\,(1-p)^{n-k}, \qquad k = 0, 1, \dots, n.$$

If $$X$$ follows the binomial distribution with parameters $$n$$, the number of independent experiments, and $$p$$, the probability for one experiment to give the answer "yes", we write $$X \sim B(n, p)$$.

We have


 * $$\Pr[\mathbb{N}_0]=\Pr[\bigcup_{k \in \mathbb{N}_0}\{k\}]= \sum_{k \in \mathbb{N}_0} \Pr[\{k\}] =\sum_{k \in \mathbb{N}_0} \binom nk p^k (1-p)^{n-k} \,\, \stackrel{\mathrm{Binom.}}{=}\,\, (p+(1-p))^n = 1.$$
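
This normalization can be verified numerically as well (an illustrative sketch; the parameter values are arbitrary):

```python
import math

def binomial_pmf(k, n, p):
    """P[X = k] for X ~ B(n, p)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 20, 0.3
# By the binomial theorem the pmf sums to (p + (1 - p))^n = 1 over k = 0,...,n.
print(sum(binomial_pmf(k, n, p) for k in range(n + 1)))  # ~1.0
```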

The expected value of a stochastic variable $$X$$ following the binomial distribution is calculated as (link: Expectation value of a discrete random variable):


 * $$\begin{align} \operatorname{E}[X]&= \sum_{k \in \mathbb{N}_0} k \operatorname{Pr}[X=k] \\ &= \sum_{k=0}^n k\binom nk p^k (1-p)^{n-k}\\ &= np\sum_{k=0}^n k\frac{(n-1)!}{(n-k)!k!}p^{k-1} (1-p)^{(n-1)-(k-1)}\\ &= np\sum_{k=1}^n \frac{(n-1)!}{(n-k)!(k-1)!}p^{k-1} (1-p)^{(n-1)-(k-1)}\\ &= np\sum_{k=1}^n \binom{n-1}{k-1} p^{k-1} (1-p)^{(n-1)-(k-1)}\\ &= np\sum_{\ell=0}^{n-1} \binom{n-1}\ell p^\ell (1-p)^{(n-1)-\ell}\quad\text{with } \ell:=k-1\\ &= np\sum_{\ell=0}^m \binom m\ell p^\ell (1-p)^{m-\ell}\qquad\text{with } m:=n-1\\ &= np\left(p+\left(1-p\right)\right)^m=np\cdot 1^m=np. \end{align}$$
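
The result $$\operatorname{E}[X]=np$$ can be checked by sampling (a sketch using NumPy; parameter values are arbitrary):

```python
import numpy as np

n, p = 20, 0.3
rng = np.random.default_rng(seed=0)
samples = rng.binomial(n=n, p=p, size=1_000_000)
print(samples.mean(), n * p)  # sample mean is close to np = 6.0
```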

Its variance is given by


 * $$\operatorname{Var}(X) =\sum_{k=0}^n k^2\binom nk p^k (1-p)^{n-k}-n^2p^2 = np(1-p),$$

where we used the computational formula for the variance, $$\operatorname{Var}(X)=\operatorname{E}\left[X^2\right]-\left(\operatorname{E}[X]\right)^2$$. The missing computation of $$\operatorname{E}[X^2]$$ is sketched below.
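
One way to complete the proof (a standard argument, supplied here for completeness) is to evaluate $$\operatorname{E}[X(X-1)]$$ by the same index shift used for the expected value:


 * $$\operatorname{E}[X(X-1)] = \sum_{k=0}^n k(k-1)\binom nk p^k (1-p)^{n-k} = n(n-1)p^2 \underbrace{\sum_{k=2}^n \binom{n-2}{k-2} p^{k-2} (1-p)^{(n-2)-(k-2)}}_{=(p+(1-p))^{n-2}=1} = n(n-1)p^2.$$

Since $$\operatorname{E}\left[X^2\right] = \operatorname{E}[X(X-1)] + \operatorname{E}[X]$$, it follows that


 * $$\operatorname{Var}(X) = n(n-1)p^2 + np - n^2p^2 = np(1-p).$$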

Spike trains and instantaneous firing rate (article)
Reference: Poisson Model of Spike Generation - David Heeger

A spike train of $$n$$ spikes occurring at times $$t_i,\ i=1,\dots,n$$, is given as the function


 * $$\rho(t)=\sum_{i=1}^{n} \delta(t-t_i),$$

which is more formally known as the neural response function.

The number of spikes $$N$$ occurring between two points in time $$t_1 < t_2$$ is computed as


 * $$N=\int_{t_1}^{t_2} \rho(t)\, \mathrm{d}t.$$

Because the sequence of action potentials generated by a given stimulus typically varies from trial to trial, neuronal responses are usually treated probabilistically. One (very simple) way to characterize the probabilistic behaviour of the firing of a neuron is the spike count rate $$r$$, which is given by


 * $$r = \frac{N}{T} = \frac{1}{T} \int_0^T \rho(t)\, \mathrm{d}t.$$

The spike count rate can be determined for a single trial period, or it can be averaged over several trials. Another possible way of characterization
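
The quantities above can be illustrated in a short simulation (a sketch assuming a homogeneous Poisson spike train, not taken from the Heeger reference; names are illustrative). Since each delta in $$\rho$$ contributes one to the integral, integrating $$\rho$$ over a window reduces to counting the spikes that fall into it:

```python
import random

def poisson_spike_train(rate, t_max, seed=0):
    """Spike times of a homogeneous Poisson process with the given rate (Hz)."""
    rng = random.Random(seed)
    spikes, t = [], rng.expovariate(rate)
    while t <= t_max:
        spikes.append(t)
        t += rng.expovariate(rate)
    return spikes

T = 10.0                      # trial duration in seconds
spikes = poisson_spike_train(rate=20.0, t_max=T)

# N = integral of rho(t) over [t1, t2]: each delta contributes 1,
# so the integral is just the number of spikes in the window.
t1, t2 = 2.0, 5.0
N = sum(1 for t in spikes if t1 <= t <= t2)

# Spike count rate over the whole trial:
r = len(spikes) / T
print(N, r)  # r fluctuates around the true rate of 20 Hz
```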