Generalized linear mixed model

In statistics, a generalized linear mixed model (GLMM) is an extension of the generalized linear model (GLM) in which the linear predictor contains random effects in addition to the usual fixed effects. Equivalently, GLMMs extend linear mixed models to non-normal data, just as generalized linear models extend linear models.

Generalized linear mixed models provide a broad range of models for the analysis of grouped data, since the differences between groups can be modelled as a random effect. These models are useful in the analysis of many kinds of data, including longitudinal data.

Model
Generalized linear mixed models are generally defined such that, conditioned on the random effects $u$, the dependent variable $y$ is distributed according to an exponential family distribution with its expectation related to the linear predictor $X\beta+Zu$ via a link function $g$:
 * $$g(E[y\vert u])=X\beta+Zu$$.

Here $X$ and $\beta$ are the fixed-effects design matrix and the fixed-effects vector, respectively; $Z$ and $u$ are the random-effects design matrix and the random-effects vector. Understanding this brief definition requires familiarity with the definitions of a generalized linear model and of a mixed model.
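The structure above can be illustrated by simulating from a concrete GLMM. The following sketch (an illustration, not taken from the article) draws grouped count data from a Poisson GLMM with a log link and a single random intercept per group; the values of $\beta$ and the random-effect standard deviation are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

n_groups, n_per_group = 10, 20
beta = np.array([0.5, 1.2])      # fixed effects (intercept, slope) -- arbitrary
sigma_u = 0.8                    # random-intercept standard deviation -- arbitrary

group = np.repeat(np.arange(n_groups), n_per_group)
x = rng.normal(size=n_groups * n_per_group)
X = np.column_stack([np.ones_like(x), x])     # fixed-effects design matrix
u = rng.normal(scale=sigma_u, size=n_groups)  # random effects u ~ N(0, sigma_u^2)

# Linear predictor X*beta + Z*u; here Z merely selects each group's intercept.
eta = X @ beta + u[group]
mu = np.exp(eta)       # inverse link: g^{-1} = exp for the log link
y = rng.poisson(mu)    # conditional on u, y is Poisson with mean mu
```

Conditioned on `u`, the responses within a group are independent Poisson counts; marginally they are overdispersed and correlated within groups, which is exactly what the random effect models.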

Generalized linear mixed models are special cases of hierarchical generalized linear models in which the random effects are normally distributed.

The complete likelihood
 * $$p(y)=\int p(y\vert u)\,p(u)\,du$$

has no closed form in general, and integrating over the random effects is usually extremely computationally intensive. In addition to numerically approximating this integral (e.g. via Gauss–Hermite quadrature), methods motivated by the Laplace approximation have been proposed. For example, the penalized quasi-likelihood method, which essentially involves repeatedly fitting a weighted normal mixed model with a working variate (i.e. it is doubly iterative), is implemented by various commercial and open-source statistical programs.
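For a single random intercept, the integral above is one-dimensional and Gauss–Hermite quadrature approximates it well. The hypothetical helper below (an illustrative sketch, not code from any particular package) computes one group's marginal likelihood $p(y)=\int p(y\mid u)\,p(u)\,du$ for the Poisson/log-link case with $u\sim N(0,\sigma^2)$, using the change of variables $u=\sqrt{2}\,\sigma t$ that maps the Gaussian density onto the Hermite weight function.

```python
import numpy as np
from scipy.stats import poisson

def group_marginal_likelihood(y, x, beta, sigma, n_nodes=20):
    """Gauss-Hermite approximation of p(y) = int p(y|u) p(u) du
    for one group's Poisson responses with random intercept u ~ N(0, sigma^2)."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    u_vals = np.sqrt(2.0) * sigma * nodes  # change of variables for N(0, sigma^2)
    total = 0.0
    for u_i, w_i in zip(u_vals, weights):
        mu = np.exp(beta[0] + beta[1] * x + u_i)     # log link
        total += w_i * np.prod(poisson.pmf(y, mu))   # p(y|u_i) times weight
    return total / np.sqrt(np.pi)
```

Summing the log of this quantity over groups gives the (approximate) marginal log-likelihood that maximum-likelihood fitting must optimize; in higher random-effect dimensions the quadrature grid grows exponentially, which is why Laplace-type approximations are attractive.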

Fitting a model
Fitting generalized linear mixed models via maximum likelihood (as required, for example, by the Akaike information criterion (AIC)) involves integrating over the random effects. In general, those integrals cannot be expressed in analytical form. Various approximate methods have been developed, but none has good properties for all possible models and data sets (e.g. ungrouped binary data are particularly problematic). For this reason, methods involving numerical quadrature or Markov chain Monte Carlo have increased in use, as increasing computing power and advances in methods have made them more practical.

The Akaike information criterion is a common criterion for model selection. Estimates of the Akaike information criterion for generalized linear mixed models based on certain exponential family distributions have recently been obtained.
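The criterion itself is simple to compute once a marginal log-likelihood is available; the subtlety for GLMMs lies in obtaining that log-likelihood, which must come from (an approximation of) the integrated likelihood. A minimal sketch, with entirely hypothetical log-likelihood and parameter-count values:

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: AIC = 2k - 2*ln(L)."""
    return 2 * n_params - 2 * log_likelihood

# Comparing two hypothetical fitted models: lower AIC is preferred.
aic_simple = aic(-120.3, n_params=3)   # e.g. random intercept only
aic_complex = aic(-118.9, n_params=6)  # e.g. random intercept and slope
```

Note that for mixed models the effective number of parameters $k$ is itself ambiguous (random effects are neither fully free nor fully fixed), which is one reason AIC variants for GLMMs are an active research topic.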

Software

 * Several contributed packages in R provide functionality for generalized linear mixed models, including lme4 and glmmTMB.
 * Generalized linear mixed models can be fitted using SAS and SPSS.
 * MATLAB also provides a function, fitglme, to fit generalized linear mixed models.
 * The Python package Statsmodels supports binomial and Poisson implementations.
 * The Julia package MixedModels.jl provides a function, GeneralizedLinearMixedModel, that fits a generalized linear mixed model to provided data.
 * DHARMa: an R package providing residual diagnostics for hierarchical (multi-level/mixed) regression models.