
Risk score (or risk scoring) is the name given to a general practice in applied statistics, bio-statistics, econometrics and other related disciplines of creating an easily calculated number (the score) that reflects the level of risk in the presence of some risk factors (e.g. risk of mortality or disease in the presence of symptoms or a genetic profile, risk of financial loss considering credit and financial history, etc.).

Risk scores are designed to be:
 * Simple to calculate: In many cases all you need to calculate a score is a pen and a piece of paper (although some scores rely on more sophisticated or less transparent calculations that require a computer program).
 * Easily interpreted: The result of the calculation is a single number, and a higher score usually means higher risk. Furthermore, many scoring methods enforce some form of monotonicity along the measured risk factors to allow a straightforward interpretation of the score (e.g. risk of mortality only increases with age, risk of payment default only increases with the amount of total debt the customer has, etc.).
 * Actionable: Scores are designed around a set of possible actions that should be taken as a result of the calculated score. Effective score-based policies can be designed and executed by setting thresholds on the value of the score and associating them with escalating actions.

Formal definition
A typical scoring method is composed of 3 components:
 * 1) A set of consistent rules (or weights) that assigns a numerical value ("points") to each risk factor, reflecting our estimation of the underlying risk.
 * 2) A formula (typically a simple sum of all accumulated points) that calculates the score.
 * 3) A set of thresholds that helps translate the calculated score into a level of risk, or an equivalent formula or set of rules to translate the calculated score back into probabilities (leaving the nominal evaluation of severity to the practitioner).

Items 1 & 2 can be achieved by using some form of regression, which provides both the risk estimation and the formula to calculate the score. Item 3 requires setting an arbitrary set of thresholds and will usually involve expert opinion.
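
The three components can be sketched in plain Python; the point values, risk factors and thresholds below are hypothetical and chosen only for illustration:

```python
# Component 1: rules assigning "points" to each (hypothetical) risk factor
POINTS = {
    "age_over_65": 2,
    "smoker": 3,
    "high_blood_pressure": 1,
}

# Component 3: thresholds translating the score into a level of risk
THRESHOLDS = [(0, "low"), (3, "medium"), (5, "high")]

def score(risk_factors):
    """Component 2: a simple sum of all accumulated points."""
    return sum(POINTS[f] for f in risk_factors if f in POINTS)

def risk_level(s):
    """Translate the calculated score into a risk level via the thresholds."""
    level = THRESHOLDS[0][1]
    for cutoff, name in THRESHOLDS:
        if s >= cutoff:
            level = name
    return level

s = score(["smoker", "high_blood_pressure"])  # 3 + 1 = 4
print(s, risk_level(s))
```

The calculation is simple enough to perform with pen and paper, which is exactly the design goal described above.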

Estimating risk with GLM
Risk scores are designed to represent an underlying probability of an adverse event, denoted $$ \lbrace Y = 1 \rbrace $$, given a vector of $$ P $$ explaining variables $$ \mathbf{X} $$ containing measurements of the relevant risk factors. In order to establish the connection between the risk factors and the probability, a set of weights $$ \beta $$ is estimated using a generalized linear model:


 * $$\begin{align} \operatorname{E}(\mathbf{Y} | \mathbf{X}) = \mathbf{P}(\mathbf{Y} = 1 | \mathbf{X}) = g^{-1}(\mathbf{X} \beta) \end{align}$$

Where $$g^{-1}: \mathbb{R} \rightarrow [0,1]$$ is a monotonically increasing function that maps the values of the linear predictor $$ \mathbf{X} \beta $$ to the interval $$ [0,1] $$. GLM methods typically use the logit or probit as the link function.
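
With a logit link, $$ g^{-1} $$ is the logistic function, so the estimated probability is obtained by applying it to the linear predictor. A minimal sketch, assuming a set of weights $$ \beta $$ that has already been estimated (the numbers are hypothetical):

```python
import math

def logistic(z):
    """Inverse of the logit link: g^{-1}(z) = 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical estimated weights beta (intercept first), as if fitted
# by logistic regression on historical data
beta = [-4.0, 0.05, 1.2]   # intercept, age coefficient, smoker indicator
x = [1.0, 60.0, 1.0]       # one observation: intercept term, age 60, smoker

linear_predictor = sum(b * xj for b, xj in zip(beta, x))  # X beta
p = logistic(linear_predictor)                            # P(Y = 1 | X)
print(round(p, 3))
```

The same structure works for a probit link by replacing `logistic` with the standard normal CDF.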

Estimating risk with other methods
While it is possible to estimate $$ \mathbf{P}(\mathbf{Y} = 1 | \mathbf{X}) $$ using other statistical or machine learning methods, the requirements of simplicity and easy interpretation (and monotonicity per risk factor) make most of these methods difficult to use for scoring in this context:
 * With more sophisticated methods it becomes difficult to attribute simple weights to each risk factor and to provide a simple formula for the calculation of the score. A notable exception is tree-based methods like CART, which can provide a simple set of decision rules and calculations, but cannot ensure the monotonicity of the scale across the different risk factors.
 * Because we are estimating underlying risk across the population, and therefore cannot label people in advance on an ordinal scale (we cannot know in advance whether a person belongs to a "high risk" group; we only see observed incidences), classification methods are relevant only if we want to classify people into 2 groups or 2 possible actions.

Constructing the score
When using GLM, the set of estimated weights $$ \beta $$ can be used to assign different values (or "points") to different values of the risk factors in $$ \mathbf{X} $$ (continuous or nominal as indicators). The score can then be expressed as a weighted sum:


 * $$\begin{align} \text{Score} = \mathbf{X} \beta = \sum_{j=1}^{P} \mathbf{X}_{j} \beta_{j} \end{align}$$


 * While some scoring methods use the values of the score "as is", others (e.g. the ABCD² score) will translate the score into probabilities by using $$ g^{-1} $$ or a look-up table. This makes the process of obtaining the score more complicated computationally, but has the advantage of translating an arbitrary number to a more familiar scale of 0 to 1.
 * The columns of $$ \mathbf{X} $$ can represent complex transformations of the risk factors (including multiple interactions) and not just the risk factors themselves.
 * The values of $$ \beta $$ are sometimes scaled or rounded to allow working with integers instead of very small fractions (making the calculation simpler). While scaling has no impact on the ability of the score to estimate risk, rounding has the potential of disrupting the "optimality" of the GLM estimation.
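
The effect of scaling and rounding the weights can be sketched as follows; the coefficients are hypothetical, standing in for estimates from a fitted GLM:

```python
# Hypothetical GLM coefficients for three risk factors
beta = [0.031, 0.012, 0.055]

# Scale so the smallest weight maps to roughly 1 point, then round to integers
scale = 1.0 / min(beta)
points = [round(b * scale) for b in beta]   # integer "points" per risk factor

x = [1, 0, 1]                               # indicator values of the risk factors
raw_score = sum(b * xi for b, xi in zip(beta, x))       # exact weighted sum X beta
simple_score = sum(p * xi for p, xi in zip(points, x))  # pen-and-paper version
print(points, raw_score, simple_score)
```

The integer points make hand calculation trivial; the cost is that rounding slightly distorts the relative weights the GLM estimated.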

Making score-based decisions
Let $$ \mathbf{A} = \lbrace \mathbf{a}_{1}, \ldots, \mathbf{a}_{m} \rbrace $$ denote a set of $$ m \geq 2 $$ "escalating" actions available to the decision maker (e.g. for credit risk decisions: $$ \mathbf{a}_{1} $$ = "approve automatically", $$ \mathbf{a}_{2} $$ = "require more documentation and check manually", $$ \mathbf{a}_{3} $$ = "decline automatically"). In order to define a decision rule, we want to define a map between different values of the score and the possible decisions in $$ \mathbf{A} $$. Let $$ \tau = \lbrace \tau_1, \ldots, \tau_{m-1} \rbrace $$ be a set of thresholds that partitions $$ \mathbb{R} $$ into $$ m $$ consecutive, non-overlapping intervals, such that $$ \tau_1 < \tau_2 < \ldots < \tau_{m-1} $$.

The map is defined as follows:


 * $$\begin{align} \text{If Score} \in [\tau_{j-1},\tau_{j}) \rightarrow \text{Take action } \mathbf{a}_{j} \end{align}$$


 * The values of $$ \tau $$ are set based on expert opinion, the type and prevalence of the measured risk, the consequences of misclassification, etc. For example, a risk of 9 out of 10 will usually be considered "high risk", but a risk of 7 out of 10 can be considered either "high risk" or "medium risk" depending on context.
 * The intervals here are defined as right-open, but the map can be equivalently defined using left-open intervals $$ (\tau_{j-1},\tau_{j}] $$.
 * For scoring methods that already translate the score into probabilities, we can either define the partition $$ \tau $$ directly on the interval $$ [0,1] $$ or translate the decision criteria into $$ [g^{-1}(\tau_{j-1}),g^{-1}(\tau_{j})) $$; the monotonicity of $$ g $$ ensures a 1-to-1 translation.
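
The threshold map can be sketched with Python's standard `bisect` module; the thresholds and action names below are hypothetical, following the credit-risk example:

```python
import bisect

# Hypothetical thresholds tau_1 < tau_2 partitioning scores into m = 3 intervals
thresholds = [4, 8]
actions = ["approve automatically",
           "require more documentation and check manually",
           "decline automatically"]

def decide(score):
    """Map a score to action a_j: Score in [tau_{j-1}, tau_j) -> a_j."""
    # bisect_right returns the index of the right-open interval containing
    # the score, so a score exactly equal to a threshold escalates upward
    return actions[bisect.bisect_right(thresholds, score)]

print(decide(3), decide(4), decide(9))
```

Using `bisect_left` instead of `bisect_right` would implement the equivalent left-open convention $$ (\tau_{j-1},\tau_{j}] $$.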

Biostatistics

 * Framingham Risk Score
 * QRISK
 * TIMI
 * Rockall score
 * ABCD² score
 * CHA2DS2–VASc score

Financial industry
The primary use of scores in the financial sector is for Credit scorecards, or credit scores:
 * In many countries (such as the US) credit scores are calculated by commercial entities, and therefore the exact method is not public knowledge (for example the Bankruptcy risk score, the FICO score and others). Credit scores in Australia and the UK are often calculated using logistic regression to estimate the probability of default, and are therefore a type of risk score.
 * Other financial industries, such as the insurance industry, also use variants of credit scores.