Bayesian vector autoregression

In statistics and econometrics, Bayesian vector autoregression (BVAR) uses Bayesian methods to estimate a vector autoregression (VAR) model. BVAR differs from standard VAR models in that the model parameters are treated as random variables, with prior probabilities, rather than as fixed values.

Vector autoregressions are flexible statistical models that typically include many free parameters. Given the limited length of standard macroeconomic datasets relative to the vast number of parameters available, Bayesian methods have become an increasingly popular way of dealing with the problem of over-parameterization. As the ratio of variables to observations increases, the role of prior probabilities becomes increasingly important.

The general idea is to use informative priors to shrink the unrestricted model towards a parsimonious naïve benchmark, thereby reducing parameter uncertainty and improving forecast accuracy.
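The shrinkage idea can be illustrated in a few lines of code. The sketch below (an illustration assumed for this article, not taken from it) fits an AR(1) coefficient with a conjugate normal prior centered on the random-walk value of 1; the posterior mean is a precision-weighted average of the prior mean and the unrestricted OLS estimate, so it is pulled toward the naïve benchmark:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a persistent AR(1) series: y_t = 0.9 * y_{t-1} + e_t
T = 50
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.9 * y[t - 1] + rng.standard_normal()

x, z = y[:-1], y[1:]
beta_ols = (x @ z) / (x @ x)       # unrestricted OLS estimate

# Normal prior centered on the random-walk value beta = 1.
prior_mean, prior_var = 1.0, 0.05  # smaller prior_var => stronger shrinkage
sigma2 = np.var(z - beta_ols * x)  # plug-in residual variance

# Conjugate posterior: precision-weighted average of prior and data.
post_prec = 1.0 / prior_var + (x @ x) / sigma2
post_mean = (prior_mean / prior_var + (x @ z) / sigma2) / post_prec
# post_mean lies between beta_ols and the prior mean of 1.
```

Tightening the prior (reducing `prior_var`) moves the posterior mean toward 1, trading a little bias for a large reduction in parameter uncertainty; this is the same trade-off the informative priors exploit in the full VAR.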

A typical example is the shrinkage prior proposed by Robert Litterman (1979) and subsequently developed by other researchers at the University of Minnesota (e.g. Sims, 1989), which is known in the BVAR literature as the "Minnesota prior". The informativeness of the prior can itself be set as an additional parameter, based on a hierarchical interpretation of the model.

In particular, the Minnesota prior assumes that each variable follows a random walk process, possibly with drift. It therefore consists of a normal prior on a set of parameters with a fixed and known covariance matrix, which is estimated with one of three techniques: Univariate AR, Diagonal VAR, or Full VAR.
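A minimal sketch of these prior moments, under common textbook conventions (the function name and hyperparameter values `lam1`, `lam2`, `decay` are illustrative assumptions, not from the article): the prior mean puts 1 on each variable's own first lag (the random walk) and 0 elsewhere, while the prior variances tighten on longer lags and on cross-variable coefficients, scaled by residual standard deviations `sigma` such as those from univariate AR fits:

```python
import numpy as np

def minnesota_prior(n_vars, n_lags, sigma, lam1=0.2, lam2=0.5, decay=1.0):
    # Prior mean: own first lag = 1 (random walk), everything else = 0.
    mean = np.zeros((n_vars, n_vars * n_lags))
    mean[:, :n_vars] = np.eye(n_vars)

    # Prior variances: tighter on longer lags and on other variables' lags.
    var = np.zeros_like(mean)
    for lag in range(1, n_lags + 1):
        for i in range(n_vars):          # equation
            for j in range(n_vars):      # regressor variable
                col = (lag - 1) * n_vars + j
                if i == j:
                    var[i, col] = (lam1 / lag ** decay) ** 2
                else:
                    var[i, col] = (lam1 * lam2 * sigma[i]
                                   / (lag ** decay * sigma[j])) ** 2
    return mean, var
```

For example, `minnesota_prior(2, 2, np.array([1.0, 2.0]))` returns a 2x4 prior mean with ones on the own first lags, and variances that shrink by a factor of four between lag 1 and lag 2 with the default `decay=1.0`.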

This type of model can be estimated with EViews, Stata, Python, or R statistical packages.
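In Python, a basic BVAR posterior can be computed with numpy alone. The self-contained sketch below (an assumed illustration, not a specific package's API) estimates a bivariate VAR(1) equation by equation under an independent normal prior with a random-walk mean, using a plug-in residual variance:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a bivariate VAR(1): y_t = A y_{t-1} + e_t
A_true = np.array([[0.8, 0.1],
                   [0.0, 0.7]])
T, n = 200, 2
Y = np.zeros((T, n))
for t in range(1, T):
    Y[t] = A_true @ Y[t - 1] + rng.standard_normal(n)

X, Z = Y[:-1], Y[1:]       # lagged regressors, targets

# Random-walk prior mean with a common prior variance
# (Minnesota-style shrinkage in its simplest form).
prior_mean = np.eye(n)
prior_var = 0.1

A_post = np.zeros((n, n))
for i in range(n):         # one conjugate regression per equation
    b_ols, *_ = np.linalg.lstsq(X, Z[:, i], rcond=None)
    sigma2 = np.mean((Z[:, i] - X @ b_ols) ** 2)
    # Posterior precision and posterior mean for a normal prior.
    P = np.eye(n) / prior_var + X.T @ X / sigma2
    A_post[i] = np.linalg.solve(P, prior_mean[i] / prior_var
                                + X.T @ Z[:, i] / sigma2)
```

With 200 observations the data dominate and `A_post` sits close to the OLS estimates; with short macroeconomic samples, the same formula pulls the coefficients toward the random-walk prior mean instead.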

Recent research has shown that Bayesian vector autoregression is an appropriate tool for modelling large data sets.