Gordon–Loeb model

The Gordon–Loeb model is a mathematical economic model analyzing the optimal investment level in information security.

The primary benefits from cybersecurity investments result from the cost savings associated with cyber breaches that are prevented due to the investment. However, as with any investment, it is important to compare the benefits to the costs when deciding how to invest. The Gordon-Loeb model provides a valuable framework for deriving, on a cost-benefit basis, the appropriate amount to invest in cybersecurity related activities.

The basic components of the Gordon-Loeb model are as follows:
 1. Data (information) sets of organizations that are vulnerable to cyber-attacks. This vulnerability, denoted as $v$ ($0 \leq v \leq 1$), represents the probability that a breach of a specific information set will occur under current conditions.
 2. If an information set is breached, the value of the information set represents the potential loss (i.e., the cost of the breach) and can be expressed as a monetary value, denoted as $L$. Thus, $vL$ is the expected loss from a cyber breach prior to an investment in additional cybersecurity activities.
 3. An investment in cybersecurity, denoted as $z$, reduces $v$ based on the productivity of the cybersecurity investment. This productivity is captured by what the Gordon–Loeb model calls the security breach probability function.
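The components above can be sketched numerically. The function below, $S(z, v) = v/(\alpha z + 1)^\beta$, is the form of one of the two broad classes of security breach probability functions Gordon and Loeb analyze; the parameter values ($\alpha$, $\beta$, $v$, $L$) are hypothetical, chosen only for illustration.

```python
import math

# Numerical sketch of the Gordon-Loeb setup with hypothetical parameters.
# S(z, v) = v / (alpha * z + 1) ** beta is one of the two broad classes
# of security breach probability functions analyzed in the model.

def breach_probability(z, v, alpha=0.001, beta=1.0):
    """Remaining probability of a breach after investing z in security."""
    return v / (alpha * z + 1) ** beta

def expected_net_benefit(z, v, L, alpha=0.001, beta=1.0):
    """Reduction in expected loss minus the cost of the investment z."""
    return (v - breach_probability(z, v, alpha, beta)) * L - z

v, L = 0.6, 1_000_000  # vulnerability and monetary loss (assumed values)
print("expected loss before investment vL:", v * L)

# Coarse grid search for the investment level z* maximizing net benefit.
z_star = max(range(0, int(v * L), 50),
             key=lambda z: expected_net_benefit(z, v, L))
print("approximately optimal investment z*:", z_star)
print("Gordon-Loeb upper bound vL/e:", round(v * L / math.e))
```

With these assumed parameters the optimal investment comes out far below the expected loss $vL$, consistent with the bound discussed below.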

Gordon and Loeb showed that, for two broad classes of security breach probability functions, the optimal level of investment in information security, $z^*$, would not exceed roughly 37% of the expected loss from a security breach. More specifically: $z^*(v) \leq (1/e)vL$, where $1/e \approx 0.37$.
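The $1/e$ bound can be checked numerically for the second of the two classes, $S(z, v) = v^{\alpha z + 1}$, which admits a closed-form optimum from the first-order condition. The parameter values below are hypothetical.

```python
import math

# Checking the 1/e bound for the second class of security breach
# probability functions, S(z, v) = v ** (alpha * z + 1) (assumed alpha).

def optimal_investment(v, L, alpha=0.001):
    """Maximizer of (v - v**(alpha*z + 1)) * L - z, from the first-order
    condition -alpha * ln(v) * v**(alpha*z + 1) * L = 1."""
    z = (math.log(-1.0 / (alpha * L * math.log(v))) / math.log(v) - 1) / alpha
    return max(z, 0.0)  # invest nothing if the interior optimum is negative

L = 1_000_000
for v in (0.2, 0.5, 0.9):
    z_star = optimal_investment(v, L)
    assert z_star <= v * L / math.e  # never more than ~37% of expected loss
    print(f"v={v}: z*={z_star:,.0f}, bound vL/e={v * L / math.e:,.0f}")
```

For every vulnerability level tried, the optimal investment stays below $vL/e$, as the model predicts for this class of functions.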

The Gordon–Loeb model was first published by Lawrence A. Gordon and Martin P. Loeb in their 2002 paper in ACM Transactions on Information and System Security, entitled "The Economics of Information Security Investment". The paper was reprinted in the 2004 book Economics of Information Security. Gordon and Loeb are both professors at the University of Maryland's Robert H. Smith School of Business.

The Gordon–Loeb Model is one of the most well-accepted analytical models for the economics of cyber security. The model has been widely referenced in the academic and practitioner literature. The model has also been empirically tested in several different settings. Research by mathematicians Marc Lelarge and Yuliy Baryshnikov generalized the results of the Gordon–Loeb Model.

The Gordon–Loeb model has been featured in the popular press, such as The Wall Street Journal and The Financial Times.

However, subsequent research showed that, even within the initial assumptions of the model, some security breach probability functions require an optimal investment of no less than $1/2$ the expected loss, contradicting the hypothesis that the $1/e$ factor is universal. Furthermore, under another mathematization of the Gordon–Loeb requirements (more precisely, one in which the second derivative of the security breach probability function need not be continuous), one can construct functions for which the optimal investment equals 100% of the expected loss.