Intensity of counting processes

The intensity $$\lambda$$ of a counting process is a measure of the rate of change of its predictable part. If a stochastic process $$\{N(t), t\ge 0\}$$ is a counting process, then it is a submartingale and admits the Doob-Meyer decomposition


 * $$N(t) = M(t) + \Lambda(t) $$

where $$ M(t) $$ is a martingale and $$\Lambda(t)$$ is a predictable increasing process. $$\Lambda(t)$$ is called the cumulative intensity of $$N(t)$$ and it is related to $$\lambda$$ by


 * $$\Lambda(t) = \int_{0}^{t} \lambda(s)ds$$.
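
For example, if $$N$$ is a homogeneous Poisson process with constant rate $$\lambda > 0$$, then $$\Lambda(t) = \lambda t$$ and $$M(t) = N(t) - \lambda t$$ is a martingale; this is the simplest instance of the decomposition above.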

Definition
Given a probability space $$ (\Omega, \mathcal{F}, \mathbb{P})$$ and a counting process $$\{N(t), t\ge 0\}$$ adapted to the filtration $$\{\mathcal{F}_t, t\ge 0\}$$, the intensity of $$N$$ is the process $$\{\lambda(t), t\ge 0\}$$ defined by the following limit:


 * $$\lambda(t) = \lim_{h\downarrow 0} \frac{1}{h} \mathbb{E}[N(t+h) - N(t) | \mathcal{F}_t] $$.

The right-continuity of counting processes allows this limit to be taken from the right.
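
As a numerical illustration of this definition, the sketch below (a minimal example assuming NumPy is available; the rate and window size are arbitrary choices) estimates the right-hand side by Monte Carlo for a homogeneous Poisson process, whose intensity is a known constant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example values: a homogeneous Poisson process with constant
# intensity lam, observed over a small window of length h.
lam = 2.0
h = 0.01
n_paths = 200_000

# For a homogeneous Poisson process the increment N(t+h) - N(t) is
# Poisson(lam * h) and independent of F_t, so the conditional expectation
# in the definition reduces to an ordinary expectation.
increments = rng.poisson(lam * h, size=n_paths)
estimate = increments.mean() / h

print(f"(1/h) E[N(t+h) - N(t)] ~ {estimate:.3f}   (true intensity {lam})")
```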

Estimation
In statistical learning, the deviation between $$\lambda$$ and an estimator $$\hat{\lambda}$$ can be bounded using oracle inequalities.

If a counting process $$N(t)$$ is restricted to $$t\in [0,1]$$ and $$n$$ i.i.d. copies $$ N_1, N_2, \ldots, N_n $$ are observed on that interval, then the least-squares functional for the intensity is


 * $$ R_n(\lambda) = \int_{0}^{1} \lambda(t)^2dt - \frac{2}{n} \sum_{i=1}^n \int_{0}^{1}\lambda(t)dN_i(t)$$

which involves an Itô (stochastic) integral with respect to each $$N_i$$. If $$\lambda(t)$$ is assumed to be piecewise constant on $$[0,1]$$, i.e. it is determined by a vector of constants $$ \beta = (\beta_1, \beta_2, \ldots, \beta_m) \in \R_+^m $$ and can be written


 * $$ \lambda_\beta = \sum_{j=1}^m \beta_j \lambda_{j,m}, \;\;\;\;\;\; \lambda_{j,m} = \sqrt{m} \mathbf{1}_{(\frac{j-1}{m}, \frac{j}{m}]} $$,

where the $$\lambda_{j,m}$$ have a factor of $$\sqrt{m}$$ so that they are orthonormal under the standard $$L^2$$ norm, then by choosing appropriate data-driven weights $$\hat{w}_j$$ which depend on a parameter $$x>0$$ and introducing the weighted norm


 * $$ \|\beta\|_{\hat{w}} = \sum_{j=2}^m\hat{w}_j|\beta_j - \beta_{j-1}| $$,

an estimator for $$\beta$$ can be given:


 * $$ \hat{\beta} = \arg\min_{\beta\in \R_+^m} \left\{R_n(\lambda_\beta) + \|\beta\|_{\hat{w}}\right\} $$.
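
Under this parametrization, the orthonormality of the $$\lambda_{j,m}$$ reduces $$R_n(\lambda_\beta)$$ to an explicit function of $$\beta$$ and the observed counts,


 * $$ R_n(\lambda_\beta) = \sum_{j=1}^m \beta_j^2 - \frac{2\sqrt{m}}{n} \sum_{j=1}^m \beta_j \sum_{i=1}^n \left( N_i\!\left(\tfrac{j}{m}\right) - N_i\!\left(\tfrac{j-1}{m}\right) \right) $$,

so the minimization defining $$\hat{\beta}$$ is a finite-dimensional convex program (a computational sketch is given at the end of this section).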

The estimator of the intensity is then $$\hat{\lambda} = \lambda_{\hat{\beta}}$$. With these preliminaries, an oracle inequality bounding the $$L^2$$ distance $$\|\hat{\lambda} - \lambda\|$$ is as follows: for an appropriate choice of the $$\hat{w}_j(x)$$,


 * $$ \|\hat{\lambda} - \lambda\|^2 \le \inf_{\beta \in \R_+^m} \left\{ \|\lambda_\beta - \lambda\|^2 + 2\|\beta\|_{\hat{w}} \right\} $$

with probability greater than or equal to $$ 1-12.85e^{-x} $$.
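
The estimator can be computed as a finite-dimensional convex program, using the reduction of $$R_n(\lambda_\beta)$$ noted above. The sketch below is only illustrative: the true intensity, sample size, grid size and penalty weights are arbitrary choices (in particular, the constant weights stand in for the data-driven $$\hat{w}_j(x)$$ required by the oracle inequality), and the external library cvxpy is used merely as one convenient convex solver.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)

# Hypothetical true intensity on [0, 1]: 3 on [0, 0.5], 8 on (0.5, 1].
PIECES = [(0.0, 0.5, 3.0), (0.5, 1.0, 8.0)]

def sample_event_times():
    """Draw the jump times of one Poisson process with the piecewise rate."""
    times = []
    for a, b, rate in PIECES:
        k = rng.poisson(rate * (b - a))          # number of events in [a, b]
        times.extend(rng.uniform(a, b, size=k))  # event times uniform given k
    return np.sort(np.asarray(times))

n, m = 500, 20                                   # sample size, number of bins
paths = [sample_event_times() for _ in range(n)]

# c_j = (sqrt(m)/n) * sum_i N_i(I_j), with I_j = ((j-1)/m, j/m],
# so that R_n(lambda_beta) = sum_j beta_j^2 - 2 * c @ beta.
edges = np.linspace(0.0, 1.0, m + 1)
counts = sum(np.histogram(events, bins=edges)[0] for events in paths)
c = np.sqrt(m) * counts / n

# Penalized least squares over beta >= 0; constant weights stand in for
# the data-driven weights w_hat_j(x) of the oracle inequality.
w = 0.5 * np.ones(m - 1)
beta = cp.Variable(m, nonneg=True)
objective = cp.sum_squares(beta) - 2 * c @ beta + w @ cp.abs(cp.diff(beta))
cp.Problem(cp.Minimize(objective)).solve()

# The estimated intensity on I_j is sqrt(m) * beta_j, since each basis
# function lambda_{j,m} has height sqrt(m) on its interval.
lambda_hat = np.sqrt(m) * beta.value
print(np.round(lambda_hat, 2))
```

For the hypothetical intensity used here, the printed values should lie roughly near the two true levels (3 and 8) when $$n$$ is large, with the penalty term smoothing the estimate across adjacent bins.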