Risk matrix

A risk matrix is a matrix used during risk assessment to define the level of risk by considering the category of likelihood (often confused with one of its possible quantitative metrics, i.e. the probability) against the category of consequence severity. It is a simple mechanism to increase the visibility of risks and to assist management decision making.

Definitions
Risk is the lack of certainty about the outcome of making a particular choice. Statistically, the level of downside risk can be calculated as the product of the probability that harm occurs (e.g., that an accident happens) multiplied by the severity of that harm (i.e., the average amount of harm or more conservatively the maximum credible amount of harm). In practice, the risk matrix is a useful approach where either the probability or the harm severity cannot be estimated with accuracy and precision.
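The product definition above can be computed directly when both quantities are estimable. The sketch below uses made-up figures purely for illustration; it also shows why the quantitative view matters, since two very different hazards can carry the same expected downside risk:

```python
# Downside risk as probability of harm times severity of harm.
# All figures below are illustrative, not taken from any standard.

def downside_risk(p_harm: float, severity: float) -> float:
    """Expected downside risk = P(harm occurs) * severity of that harm."""
    return p_harm * severity

# Two hypothetical hazards: rare-but-severe vs. common-but-mild.
rare_severe = downside_risk(p_harm=1e-4, severity=1_000_000)
common_mild = downside_risk(p_harm=0.1, severity=1_000)

# Both work out to roughly the same expected risk, yet a risk matrix
# may rate them very differently depending on how its category
# boundaries are drawn.
print(rare_severe, common_mild)
```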

Although standard risk matrices exist in certain contexts (e.g. US DoD, NASA, ISO), individual projects and organizations may need to create their own or tailor an existing risk matrix. For example, the harm severity can be categorized as:
 * Catastrophic: death or permanent total disability, significant irreversible environmental impact, total loss of equipment
 * Critical: accident level injury resulting in hospitalization, permanent partial disability, significant reversible environmental impact, damage to equipment
 * Marginal: injury causing lost workdays, reversible moderate environmental impact, minor accident damage level
 * Minor: injury not causing lost workdays, minimal environmental impact, damage less than a minor accident level

The likelihood of harm occurring might be categorized as 'certain', 'likely', 'possible', 'unlikely' and 'rare'. However, it should be borne in mind that estimates of very low likelihoods may not be reliable.

The resulting risk matrix combines these likelihood and severity categories, assigning each cell a qualitative rating such as low, medium, high or extreme.
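One possible assignment of ratings to cells can be sketched as a lookup table. The category names follow the text above, but the rating placed in each cell is a hypothetical example, not drawn from any standard:

```python
# Illustrative 5 x 4 risk matrix. The likelihood and severity category
# names match the text; the cell ratings are an invented example.

LIKELIHOODS = ["rare", "unlikely", "possible", "likely", "certain"]
SEVERITIES = ["minor", "marginal", "critical", "catastrophic"]

# Rows: likelihood (rare -> certain); columns: severity (minor -> catastrophic).
RATINGS = [
    ["low",    "low",    "medium",  "medium"],   # rare
    ["low",    "medium", "medium",  "high"],     # unlikely
    ["medium", "medium", "high",    "high"],     # possible
    ["medium", "high",   "high",    "extreme"],  # likely
    ["high",   "high",   "extreme", "extreme"],  # certain
]

def risk_rating(likelihood: str, severity: str) -> str:
    """Look up the qualitative risk rating for a hazard."""
    return RATINGS[LIKELIHOODS.index(likelihood)][SEVERITIES.index(severity)]

print(risk_rating("likely", "critical"))  # -> high
```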

The company or organization then decides what levels of risk it can accept for different events, weighing the risk of an event occurring against the cost of implementing safety measures and the benefit gained from them.

For example, a matrix of possible personal injuries would allocate particular accidents to the appropriate cells within the matrix.

The risk matrix is approximate and can often be challenged. For example, the odds of dying in an aircraft crash (about 1 in 11 million) are far lower than the odds of dying in a motor vehicle accident (about 1 in 5,000), yet an aircraft crash is far more catastrophic because few occupants usually survive one.

Development
On January 30, 1978, a new version of US Department of Defense Instruction 6055.1 ("Department of Defense Occupational Safety and Health Program") was released. It is said to have been an important step towards the development of the risk matrix.

In August 1978, business textbook author David E Hussey defined an investment "risk matrix" with risk on one axis, and profitability on the other. The values on the risk axis were determined by first determining risk impact and risk probability values in a manner identical to completing a 7 x 7 version of the modern risk matrix.

A 5 x 4 version of the risk matrix was defined by the US Department of Defense on March 30, 1984, in "MIL-STD-882B System Safety Program Requirements".

The risk matrix was in use by the acquisition reengineering team at the US Air Force Electronic Systems Center in 1995.

Huihui Ni, An Chen and Ning Chen proposed some refinements of the approach in 2010.

In 2019, the three most popular forms of the matrix were:
 * a 3 x 3 risk matrix (OHSAS 18001)
 * a 5 x 5 risk matrix (MIL-STD-882B)
 * a 4 x 4 risk matrix (AS/NZS 4360:2004)

Other standards are also in use.

Problems
In his article 'What's Wrong with Risk Matrices?', Tony Cox argues that risk matrices have several problematic mathematical properties that make it harder to assess risks. These are:

 * Poor resolution. Typical risk matrices can correctly and unambiguously compare only a small fraction (e.g., less than 10%) of randomly selected pairs of hazards. They can assign identical ratings to quantitatively very different risks ("range compression").
 * Errors. Risk matrices can mistakenly assign higher qualitative ratings to quantitatively smaller risks. For risks with negatively correlated frequencies and severities, they can be "worse than useless," leading to worse-than-random decisions.
 * Suboptimal resource allocation. Effective allocation of resources to risk-reducing countermeasures cannot be based on the categories provided by risk matrices.
 * Ambiguous inputs and outputs. Categorizations of severity cannot be made objectively for uncertain consequences. Inputs to risk matrices (e.g., frequency and severity categorizations) and resulting outputs (i.e., risk ratings) require subjective interpretation, and different users may obtain opposite ratings of the same quantitative risks. These limitations suggest that risk matrices should be used with caution, and only with careful explanations of embedded judgments.

Thomas, Bratvold, and Bickel demonstrate that risk matrices produce arbitrary risk rankings. Rankings depend on the design of the risk matrix itself, such as how large the bins are and whether one uses an increasing or decreasing scale. In other words, changing the scale can change the answer.
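The scale-dependence is easy to reproduce. In the sketch below (hazards and ranks are invented for illustration), scoring the same two hazards on an ascending scale and then on the reversed scale flips which one ranks as riskier:

```python
# Two hazards on a 5 x 5 matrix, given as (likelihood rank, severity rank).
# The hazards and rank values are made up for illustration.
hazard_a = (2, 5)   # unlikely but catastrophic
hazard_b = (4, 3)   # likely but moderate

def score_ascending(hazard):
    """Risk score with 1 = lowest rank; a HIGHER score means riskier."""
    likelihood, severity = hazard
    return likelihood * severity

def score_descending(hazard):
    """The same matrix with the scale reversed (5 = lowest rank);
    now a LOWER score means riskier."""
    likelihood, severity = hazard
    return (6 - likelihood) * (6 - severity)

# Ascending scale: B (12) outranks A (10).
print(score_ascending(hazard_a), score_ascending(hazard_b))    # 10 12
# Descending scale: A (4) outranks B (6), since lower = riskier.
print(score_descending(hazard_a), score_descending(hazard_b))  # 4 6
```

Merely reversing the direction of the index scale has changed which hazard appears riskier, which is the arbitrariness Thomas, Bratvold, and Bickel describe.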

An additional problem is the imprecision of the likelihood categories. For example, 'certain', 'likely', 'possible', 'unlikely' and 'rare' are not hierarchically related. A better choice might use the same base term, such as 'extremely common', 'very common', 'fairly common', 'less common', 'very uncommon' and 'extremely uncommon', or a similar hierarchy built on a base "frequency" term.

Another common problem is to assign rank indices to the matrix axes and multiply the indices to get a "risk score". While this seems intuitive, the resulting scores are unevenly distributed.
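The uneven distribution is straightforward to demonstrate. Multiplying 1-5 rank indices over a 5 x 5 matrix (a generic sketch, not tied to any particular standard) yields scores that cluster at the low end and skip many values entirely:

```python
from collections import Counter

# Multiply 1-5 likelihood and severity rank indices into a "risk score"
# for every cell of a 5 x 5 matrix.
scores = [i * j for i in range(1, 6) for j in range(1, 6)]

counts = Counter(scores)
print(sorted(counts.items()))

# Of the 25 cells, only 14 distinct scores occur; values such as
# 7, 11, 13 and 14 can never appear, and low scores are
# heavily over-represented.
print(len(counts))  # -> 14
missing = sorted(set(range(1, 26)) - set(scores))
print(missing)
```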

Cybersecurity
Douglas W. Hubbard and Richard Seiersen take the general research from Cox, Thomas, Bratvold, and Bickel, and provide specific discussion in the realm of cybersecurity risk. They point out that since 61% of cybersecurity professionals use some form of risk matrix, this can be a serious problem. Hubbard and Seiersen consider these problems in the context of other measured human errors and conclude that "The errors of the experts are simply further exacerbated by the additional errors introduced by the scales and matrices themselves. We agree with the solution proposed by Thomas et al. There is no need for cybersecurity (or other areas of risk analysis that also use risk matrices) to reinvent well-established quantitative methods used in many equally complex problems."