Recurrent event analysis

Recurrent event analysis is a branch of survival analysis that studies the time until events that can occur repeatedly, such as recurrences of diseases or other traits. Recurrent events are often analyzed in the social sciences and in medical studies, for example recurrent infections, depressive episodes, or cancer recurrences. Recurrent event analysis addresses questions such as: how many recurrences occur on average within a certain time interval? Which factors are associated with a higher or lower risk of recurrence?

Processes that generate events repeatedly over time are referred to as recurrent event processes. They differ from the processes studied in time-to-event analysis: whereas time-to-event analysis focuses on the time to a single terminal event, in recurrent event analysis individuals remain at risk for subsequent events after the first, until they are censored.

Introduction
Objectives of recurrent event analysis include:


 * Understanding and describing individual event processes
 * Identifying and characterizing variation across a population of processes
 * Comparing groups of processes
 * Determining the relationship of fixed covariates, treatments, and time-varying factors to event occurrence

Notation and frameworks
For a single recurrent event process starting at $$t = 0$$, let $$0 \leq T_1 < T_2 < \dots $$ denote the event times, where $$T_k$$ is the time of the $$k$$th event. The associated counting process $$\{N(t), t \geq 0\}$$ records the cumulative number of events generated by the process; specifically, $$N(t) = \sum_{k=1}^{\infty}I(T_k \leq t)$$ is the number of events occurring over the time interval $$[0, t]$$.

Models for recurrent events can be specified by considering the probability distribution of the number of recurrences in short intervals $$[t, t + \Delta t)$$, given the history of event occurrence before time $$t$$. The intensity function, which describes the instantaneous probability of an event occurring at time $$t$$ conditional on the process history, characterizes the process mathematically. Define the process history as $$H(t) = \{N(s): 0 \leq s < t \}$$; then the intensity is formally defined as$$\lambda(t|H(t)) = \lim_{\Delta t \downarrow 0}\frac{P(N(t + \Delta t) - N(t) = 1 \mid H(t))}{\Delta t}.$$When a heterogeneous group of individuals or processes is considered, the assumption of a common event intensity is no longer plausible. Greater generality can then be achieved by incorporating fixed or time-varying covariates in the intensity function.
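The counting process $$N(t)$$ and a constant-intensity process can be illustrated with a short simulation. The sketch below (illustrative only; the function names and parameter values are hypothetical) simulates homogeneous Poisson processes, for which the expected count satisfies $$E[N(t)] = \lambda t$$, and checks this empirically:

```python
import random

def simulate_poisson_process(rate, t_end, rng):
    """Simulate event times of a homogeneous Poisson process on [0, t_end]
    by drawing exponential inter-event gaps (a standard construction)."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate)
        if t > t_end:
            return times
        times.append(t)

def count_N(t, event_times):
    """Counting process N(t): number of events in [0, t]."""
    return sum(1 for tk in event_times if tk <= t)

rng = random.Random(42)
# Average N(t_end) over many simulated processes; for constant intensity
# lambda, E[N(t)] = lambda * t, so the average should be near rate * t_end.
rate, t_end, n_sim = 2.0, 5.0, 2000
mean_count = sum(count_N(t_end, simulate_poisson_process(rate, t_end, rng))
                 for _ in range(n_sim)) / n_sim
print(mean_count)  # should be close to rate * t_end = 10
```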

Description of recurrent event data
As a counterpart of the Kaplan–Meier curve, which is used to describe the time to a terminal event, recurrent event data can be described using the mean cumulative function (MCF): the average cumulative number of events experienced per individual in the study by each point in time since the start of follow-up.
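A simple nonparametric estimate of the mean cumulative function increments, at each observed event time, by the number of events divided by the number of subjects still under follow-up. The sketch below is a minimal illustration on hypothetical data (subject records and time points are invented for the example):

```python
def mean_cumulative_function(subjects, t):
    """Estimate the MCF at time t: at each event time s <= t, add
    (events at s) / (subjects still under follow-up at s).
    `subjects` is a list of (event_times, censor_time) pairs."""
    event_times = sorted(s for times, c in subjects
                         for s in times if s <= min(t, c))
    mcf = 0.0
    for s in event_times:
        at_risk = sum(1 for _, c in subjects if c >= s)
        mcf += 1.0 / at_risk
    return mcf

subjects = [([1.0, 3.0], 5.0),   # two events, censored at t = 5
            ([2.0], 4.0),        # one event, censored at t = 4
            ([], 6.0)]           # no events, censored at t = 6
print(mean_cumulative_function(subjects, 6.0))  # 1.0 (3 events / 3 at risk)
```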

Poisson model
The Poisson model is a popular model for recurrent event data that models the number of recurrences that have occurred by a given time. Poisson regression assumes that the number of recurrences follows a Poisson distribution with a recurrence rate that is constant over time; the logarithm of the expected number of recurrences is then modeled as a linear combination of explanatory variables.
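With a single binary covariate (say, treated vs. control) and recorded follow-up time, the Poisson model reduces to a closed form: each group's maximum-likelihood rate is total events divided by total person-time, and the exponentiated coefficient is the rate ratio. The counts below are hypothetical, chosen only to make the arithmetic visible:

```python
import math

# Hypothetical event counts and person-time per group (illustrative only).
events = {"control": 30, "treated": 18}
person_time = {"control": 100.0, "treated": 120.0}

# MLE of each group's recurrence rate: events / follow-up time.
rate = {g: events[g] / person_time[g] for g in events}

beta0 = math.log(rate["control"])            # intercept: log baseline rate
beta1 = math.log(rate["treated"]) - beta0    # coefficient: log rate ratio
print(math.exp(beta1))  # rate ratio: 0.15 / 0.30 = 0.5
```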

Marginal means/rates model
The marginal means/rates model considers all recurrent events of the same subject as a single counting process and does not require time-varying covariates to reflect the past history of the process, which makes it a more flexible model. Instead, the full history of the counting process may still influence the mean function of recurrent events.

Multi-state model
In multi-state models, the recurrent event processes of individuals are described by different states. The different states may describe the recurrence number, or whether the subject is at risk of recurrence. A change of state is called a transition (or an event) and is central in this framework, which is fully characterized by estimating the transition probabilities between states and the transition intensities, defined as the instantaneous hazards of progression to one state conditional on occupying another state.
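Under a Markov assumption, a transition intensity can be estimated as the number of observed transitions out of a state divided by the total time spent in that state. The sketch below illustrates this on hypothetical records for a simple two-state setup (state names and sojourn times are invented for the example):

```python
# Each record: (state occupied, time spent in that state, next state or
# None if the subject was censored while in that state).
records = [
    ("at_risk", 2.0, "recurrence"),
    ("at_risk", 3.5, "recurrence"),
    ("at_risk", 4.0, None),          # censored while at risk
    ("at_risk", 1.5, "recurrence"),
]

# Occurrence/exposure estimate of the at_risk -> recurrence intensity:
# observed transitions divided by total time spent in the origin state.
n_transitions = sum(1 for state, _, nxt in records
                    if state == "at_risk" and nxt == "recurrence")
time_at_risk = sum(dt for state, dt, _ in records if state == "at_risk")
intensity = n_transitions / time_at_risk
print(intensity)  # 3 transitions / 11.0 time units
```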

Extended Cox proportional hazards (PH) models
Extensions of the Cox proportional hazards model are popular in the social sciences and in medicine for assessing associations between variables and the risk of recurrence, and for predicting recurrent event outcomes. Many extensions of survival models based on the Cox proportional hazards approach have been proposed to handle recurrent event data. These models can be characterized by four model components:


 * Risk intervals
 * Baseline hazard
 * Risk set
 * Correction for within-subject correlation

Well-known examples of Cox-based recurrent event models are the Andersen–Gill model, the Prentice–Williams–Peterson model, and the Wei–Lin–Weissfeld model.

Correlated event times within subjects
Time to recurrence is often correlated within subjects, as some subjects are frailer, i.e., more prone to experiencing recurrences. If the correlated nature of the data is ignored, the confidence intervals (CI) for the estimated rates can be artificially narrow, which may lead to false positive results.

Robust variance
It is possible to use robust 'sandwich' estimators for the variance of regression coefficients. These estimators, which can be based on a jackknife estimate, allow for correlation within subjects and provide standard errors that remain valid under such correlation.
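The jackknife idea can be sketched in a stripped-down setting: estimating the standard error of a mean event count by leaving out one subject (the independent unit, rather than the individual event) at a time. The per-subject counts below are hypothetical, and this is an illustration of the jackknife principle only, not of a full regression sandwich estimator:

```python
# Hypothetical number of recurrences observed per subject.
counts = [0, 2, 1, 4, 0, 3, 1, 2]

n = len(counts)
full_mean = sum(counts) / n

# Leave-one-subject-out means.
loo_means = [(sum(counts) - c) / (n - 1) for c in counts]

# Jackknife variance of the mean: ((n - 1) / n) * sum of squared
# deviations of the leave-one-out means from the full-sample mean.
jack_var = (n - 1) / n * sum((m - full_mean) ** 2 for m in loo_means)
print(jack_var ** 0.5)  # jackknife standard error of the mean
```

For the sample mean, this reproduces the familiar s²/n; its value lies in extending the same leave-one-subject-out scheme to estimators without a simple closed-form variance.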

Frailty models
In frailty models, a random effect is included in the recurrent event model which describes the individual excess risk that cannot be explained by the included covariates. The frailty term induces dependence among the recurrence times within subjects.
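The effect of a frailty term can be illustrated by simulation: if each subject draws a multiplicative gamma frailty with mean 1 and then generates events at a rate scaled by it, event counts become overdispersed relative to a Poisson process (variance exceeding the mean). The sketch below is a minimal illustration with hypothetical parameter values:

```python
import random

rng = random.Random(7)

def simulate_subject(base_rate, t_end, shape):
    """Count events for one subject under a gamma frailty: draw
    Z ~ Gamma(shape, scale=1/shape) (mean 1), then generate events
    at rate Z * base_rate until time t_end."""
    z = rng.gammavariate(shape, 1.0 / shape)
    t, count = 0.0, 0
    while True:
        t += rng.expovariate(z * base_rate)
        if t > t_end:
            return count
        count += 1

counts = [simulate_subject(base_rate=1.0, t_end=10.0, shape=0.5)
          for _ in range(2000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
# Under a plain Poisson process var/mean would be near 1; the shared
# frailty within each subject pushes it well above 1 (overdispersion).
print(var / mean)
```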