
Info-gap decision theory is a non-probabilistic decision theory seeking to optimize robustness to failure, or opportuneness for windfall, under severe uncertainty.

In many fields, including engineering, economics, management, biological conservation, medicine, homeland security, and more, analysts use models and data to evaluate and formulate decisions. An info-gap is the disparity between what is known and what needs to be known in order to make a reliable and responsible decision. Info-gaps are Knightian uncertainties: a lack of knowledge, an incompleteness of understanding. Info-gaps are non-probabilistic and cannot be insured against or modelled probabilistically. A common info-gap, though not the only kind, is uncertainty in the value of a parameter or of a vector of parameters, such as the durability of a new material or the future rates or return on stocks. Another common info-gap is uncertainty in the shape of a probability distribution. Another info-gap is uncertainty in the functional form of a property of the system, such as friction force in engineering, or the Phillips curve in economics. Another info-gap is in the shape and size of a set of possible vectors or functions. For instance, one may have very little knowledge about the relevant set of cardiac waveforms at the onset of heart failure in a specific individual.

Info-gap robustness analysis evaluates each feasible decision by asking: how much can we deviate from a given estimate of the true value of the parameter, or function, or set, and still guarantee that the performance over the respective region of uncertainty surrounding the estimate will be acceptable? In other words, the robustness of a decision is a measure of the size of the area surrounding the estimate over which a decision meets pre-specified performance requirements. It is sometimes difficult to judge how much robustness is needed or sufficient. However, the ranking of feasible decisions in terms of their degree of robustness is independent of such judgments.

Info-gap theory also proposes an opportuneness function which evaluates the potential for windfall outcomes resulting from favorable uncertainty.

Some Applications of Info-Gap Theory
Info-gap theory has been studied or applied in a range of applications including engineering, biological conservation, theoretical biology, homeland security, economics, project management and statistics. Foundational issues related to info-gap theory have also been studied.

A typical engineering application is the vibration analysis of a cracked beam, where the location, size and shape of the crack are unknown and greatly influence the vibration dynamics. Another example is the structural design of a building subject to uncertain loads such as from wind or earthquakes. Another engineering application involves the design of a neural net for detecting faults in a mechanical system, based on real-time measurements. A major difficulty is that faults are highly idiosyncratic, so that training data for the neural net will tend to differ substantially from data obtained from real-time faults after the net has been trained. The info-gap robustness strategy enables one to design the neural net to be robust to the disparity between training data and future real events.

Biological systems are vastly more complex and subtle than our best models, so the conservation biologist faces substantial info-gaps in using biological models. For instance, Levy et al. use an info-gap robust-satisficing "methodology for identifying management alternatives that are robust to environmental uncertainty, but nonetheless meet specified socio-economic and environmental goals." They use info-gap robustness curves to select among management options for spruce-budworm populations in Eastern Canada. Burgman uses the fact that the robustness curves of different alternatives can intersect to illustrate a change in preference between conservation strategies for the orange-bellied parrot.

Project management is another area where info-gap uncertainty is common. The project manager often has very limited information about the duration and cost of some of the tasks in the project, and info-gap robustness can assist in project planning and integration. Financial economics is another area where the future is fraught with surprises, which may be either pernicious or propitious. Info-gap robustness and opportuneness analyses can assist in portfolio design, credit rationing, and other applications.

A number of authors have noted and discussed similarities and differences between info-gap robustness and minimax or worst-case methods. Sniedovich has demonstrated formally that the info-gap robustness function can be represented as a minimax optimization, and is thus related to Wald's minimax theory. Sniedovich has claimed that info-gap's robustness analysis is conducted in the neighborhood of an estimate that is likely to be substantially wrong, concluding that the resulting robustness function is equally likely to be substantially wrong. On the other hand, the estimate is the best one has, so it is useful to know whether it can err greatly and still yield an acceptable outcome. This is related to the issue of how much robustness is needed in order to obtain adequate confidence.

Info-gap models
Info-gaps are quantified by info-gap models of uncertainty. An info-gap model is an unbounded family of nested sets. A frequently encountered example is a family of nested ellipsoids all having the same shape. The structure of the sets in an info-gap model derives from the information about the uncertainty. In general terms, the structure of an info-gap model of uncertainty is chosen to define the smallest or strictest family of sets whose elements are consistent with the prior information. Since there is, usually, no known worst case, the family of sets is unbounded.

A common example of an info-gap model is the fractional error model. The best estimate of an uncertain function $$u(x)\!\,$$ is $${\tilde{u}}(x)$$, but the fractional error of this estimate is unknown. The following unbounded family of nested sets of functions is a fractional-error info-gap model:

$$ \mathcal{U}(\alpha, {\tilde{u}}) = \left \{ u(x): \ | u(x) - {\tilde{u}}(x) | \le \alpha {\tilde{u}}(x), \ \mbox{for all}\ x \right \}, \ \ \ \alpha \ge 0 $$

At any horizon of uncertainty $$\alpha$$, the set $$\mathcal{U}(\alpha, {\tilde{u}})$$ contains all functions $$u(x)\!\,$$ whose fractional deviation from $${\tilde{u}}(x)$$ is no greater than $$\alpha$$. However, the horizon of uncertainty is unknown, so the info-gap model is an unbounded family of sets, and there is no worst case or greatest deviation.
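The fractional-error model is easy to experiment with numerically. The following sketch (function names, grid and parameter values are illustrative, not taken from the info-gap literature) tests whether a sampled function belongs to $$\mathcal{U}(\alpha, {\tilde{u}})$$ on a discrete grid, and checks the nesting behavior discussed below:

```python
import numpy as np

def in_fractional_error_set(u, u_est, alpha):
    """Membership test for the fractional-error info-gap model:
    u is in U(alpha, u_est) iff |u(x) - u_est(x)| <= alpha * u_est(x)
    for all x. Here u and u_est are arrays of function values on a
    common grid, and u_est is assumed positive."""
    u = np.asarray(u, dtype=float)
    u_est = np.asarray(u_est, dtype=float)
    return bool(np.all(np.abs(u - u_est) <= alpha * u_est))

x = np.linspace(0.0, 1.0, 101)
u_est = 1.0 + x                            # illustrative best estimate
u = u_est * (1.0 + 0.1 * np.sin(6 * x))    # deviates by at most 10 percent

print(in_fractional_error_set(u, u_est, 0.2))   # True: within horizon 0.2
print(in_fractional_error_set(u, u_est, 0.05))  # False: exceeds horizon 0.05

# Nesting: membership at a horizon implies membership at any larger horizon.
assert in_fractional_error_set(u, u_est, 0.3)
```

Because the sets grow with $$\alpha$$, any function admitted at one horizon is admitted at every larger horizon, which is the nesting axiom in action.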

There are many other types of info-gap models of uncertainty. All info-gap models obey two basic axioms:


 * Nesting. The info-gap model $$\mathcal{U}(\alpha, {\tilde{u}})$$ is nested if $$\alpha < \alpha^\prime$$ implies that:

$$ \mathcal{U}(\alpha, {\tilde{u}}) \ \subseteq \ \mathcal{U}(\alpha^\prime, {\tilde{u}}) $$


 * Contraction. The info-gap model $$\mathcal{U}(0,{\tilde{u}})$$ is a singleton set containing its center point:

$$ \mathcal{U}(0,{\tilde{u}}) = \{ {\tilde{u}} \} $$

The nesting axiom imposes the property of "clustering" which is characteristic of info-gap uncertainty. Furthermore, the nesting axiom implies that the uncertainty sets $$\mathcal{U}(\alpha, {\tilde{u}})$$ become more inclusive as $$\alpha$$ grows, thus endowing $$\alpha$$ with its meaning as a horizon of uncertainty. The contraction axiom implies that, at horizon of uncertainty zero, the estimate $${\tilde{u}}$$ is correct.

Recall that the uncertain element $$u$$ may be a parameter, vector, function or set. The info-gap model is then an unbounded family of nested sets of parameters, vectors, functions or sets.

Robustness and opportuneness
Uncertainty may be either pernicious or propitious. That is, uncertain variations may be either adverse or favorable. Adversity entails the possibility of failure, while favorability is the opportunity for sweeping success. Info-gap decision theory is based on quantifying these two aspects of uncertainty, and choosing an action which addresses one or the other or both of them simultaneously. The pernicious and propitious aspects of uncertainty are quantified by two "immunity functions": the robustness function expresses the immunity to failure, while the opportuneness function expresses the immunity to windfall gain.

The robustness function expresses the greatest level of uncertainty at which failure cannot occur; the opportuneness function is the least level of uncertainty which entails the possibility of sweeping success. The robustness and opportuneness functions address, respectively, the pernicious and propitious facets of uncertainty.

Let $$q$$ be a decision vector of parameters such as design variables, time of initiation, model parameters or operational options. We can verbally express the robustness and opportuneness functions as the maximum or minimum of a set of values of the uncertainty parameter $$\alpha$$ of an info-gap model:
$$ {\hat{\alpha}}(q) = \max \{ \alpha: \ \mbox{minimal requirements are always satisfied}\} \ \ \ \mbox{(robustness)} \ \ \ (1) $$

$$ {\hat{\beta}}(q) = \min \{ \alpha: \ \mbox{sweeping success is possible}\} \ \ \ \mbox{(opportuneness)} \ \ \ (2) $$

We can "read" eq. (1) as follows. The robustness $${\hat{\alpha}}(q)$$ of decision vector $$q$$ is the greatest value of the horizon of uncertainty $$\alpha$$ for which specified minimal requirements are always satisfied. $${\hat{\alpha}}(q)$$ expresses robustness — the degree of resistance to uncertainty and immunity against failure — so a large value of $${\hat{\alpha}}(q)$$ is desirable.

Eq. (2) states that the opportuneness $${\hat{\beta}}(q)$$ is the least level of uncertainty $$\alpha$$ which must be tolerated in order to enable the possibility of sweeping success as a result of decisions $$q$$. $${\hat{\beta}}(q)$$ is the immunity against windfall reward, so a small value of $${\hat{\beta}}(q)$$ is desirable. A small value of $${\hat{\beta}}(q)$$ reflects the opportune situation that great reward is possible even in the presence of little ambient uncertainty.

The immunity functions $${\hat{\alpha}}(q)$$ and $${\hat{\beta}}(q)$$ are complementary and are defined in an anti-symmetric sense. Thus "bigger is better" for $${\hat{\alpha}}(q)$$ while "big is bad" for $${\hat{\beta}}(q)$$. The immunity functions — robustness and opportuneness — are the basic decision functions in info-gap decision theory.

The robustness function involves a maximization, but not of the performance or outcome of the decision. The greatest tolerable uncertainty is found at which decision $$q$$ satisfices the performance at a critical survival-level. One may establish one's preferences among the available actions $$q, \, q^\prime,\, \ldots $$ according to their robustnesses $${\hat{\alpha}}(q),\, {\hat{\alpha}}(q^\prime), \, \ldots $$, whereby larger robustness engenders higher preference. In this way the robustness function underlies a satisficing decision algorithm which maximizes the immunity to pernicious uncertainty.

The opportuneness function in eq. (2) involves a minimization, however not, as might be expected, of the damage which can accrue from unknown adverse events. The least horizon of uncertainty is sought at which decision $$q$$ enables (but does not necessarily guarantee) large windfall gain. Unlike the robustness function, the opportuneness function does not satisfice, it "windfalls". Windfalling preferences are those which prefer actions for which the opportuneness function takes a small value. When $${\hat{\beta}}(q)$$ is used to choose an action $$q$$, one is "windfalling" by optimizing the opportuneness from propitious uncertainty in an attempt to enable highly ambitious goals or rewards.

Given a scalar reward function $$R(q,u)$$, depending on the decision vector $$q$$ and the info-gap-uncertain function $$u$$, the minimal requirement in eq. (1) is that the reward $$R(q,u)$$ be no less than a critical value $${r_{\rm c}}$$. Likewise, the sweeping success in eq. (2) is attainment of a "wildest dream" level of reward $${r_{\rm w}}$$ which is much greater than $${r_{\rm c}}$$. Usually neither of these threshold values, $${r_{\rm c}}$$ and $${r_{\rm w}}$$, is chosen irrevocably before performing the decision analysis. Rather, these parameters enable the decision maker to explore a range of options. In any case the windfall reward $${r_{\rm w}}$$ is greater, usually much greater, than the critical reward $${r_{\rm c}}$$:

$$ {r_{\rm w}} > {r_{\rm c}} $$

The robustness and opportuneness functions of eqs. (1) and (2) can now be expressed more explicitly:
$$ {\hat{\alpha}}(q, {r_{\rm c}}) = \max \left \{ \alpha: \ \left ( \min_{u \in \mathcal{U}(\alpha, \tilde{u})} R(q,u) \right ) \ge {r_{\rm c}} \right \} \ \ \ (3) $$

$$ {\hat{\beta}}(q, {r_{\rm w}}) = \min \left \{ \alpha: \ \left ( \max_{u \in \mathcal{U}(\alpha, \tilde{u})} R(q,u) \right ) \ge {r_{\rm w}} \right \} \ \ \ (4) $$

$${\hat{\alpha}}(q, {r_{\rm c}})$$ is the greatest level of uncertainty consistent with guaranteed reward no less than the critical reward $${r_{\rm c}}$$, while $${\hat{\beta}}(q, {r_{\rm w}})$$ is the least level of uncertainty which must be accepted in order to facilitate (but not guarantee) windfall as great as $${r_{\rm w}}$$. The complementary or anti-symmetric structure of the immunity functions is evident from eqs. (3) and (4).
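For a scalar uncertain parameter the inner minimum and maximum in eqs. (3) and (4) can often be evaluated in closed form, so the immunity functions can be computed directly. The sketch below uses a hypothetical reward model, $$R(q,u) = qu$$ with $$q > 0$$, together with the fractional-error info-gap set $$[{\tilde{u}}(1-\alpha),\ {\tilde{u}}(1+\alpha)]$$; all numerical values are illustrative:

```python
u_est = 2.0   # best estimate of the uncertain scalar u (illustrative)

def robustness(q, r_c):
    """alpha_hat(q, r_c), in the spirit of eq. (3): the greatest horizon
    alpha at which the worst-case reward over [u_est*(1-alpha),
    u_est*(1+alpha)] still meets the critical reward r_c. For q > 0 the
    worst case is the lower endpoint, so q*u_est*(1-alpha) >= r_c."""
    alpha = 1.0 - r_c / (q * u_est)
    return max(alpha, 0.0)   # zero robustness if even the estimate fails

def opportuneness(q, r_w):
    """beta_hat(q, r_w), in the spirit of eq. (4): the least horizon
    alpha at which the best-case reward q*u_est*(1+alpha) reaches r_w."""
    alpha = r_w / (q * u_est) - 1.0
    return max(alpha, 0.0)   # zero if the estimate already delivers r_w

q = 1.0
print(robustness(q, r_c=1.0))     # 0.5, i.e. 1 - r_c/(q*u_est)
print(opportuneness(q, r_w=3.0))  # 0.5, i.e. r_w/(q*u_est) - 1

# Trade-off: demanding more reward lowers robustness.
assert robustness(q, 1.5) < robustness(q, 1.0)
```

Note the anti-symmetry: robustness falls as the critical reward $${r_{\rm c}}$$ rises, while opportuneness falls (improves) as the windfall aspiration $${r_{\rm w}}$$ falls.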

These definitions can be modified to handle multi-criterion reward functions. Likewise, analogous definitions apply when $$R(q,u)$$ is a loss rather than a reward.

The robustness function generates robust-satisficing preferences on the options. A robust-satisficing decision maker will prefer a decision option $$q\,\!$$ over an alternative $$q^\prime$$ if the robustness of $$q\,\!$$ is greater than the robustness of $$q^\prime$$ at the same value of critical reward $${r_{\rm c}}$$. That is:
$$ q \succ_{\rm r} q^\prime \ \ \ \mbox{if} \ \ \ {\hat{\alpha}}(q, {r_{\rm c}}) > {\hat{\alpha}}(q^\prime, {r_{\rm c}}) \ \ \ (5) $$

Let $$\mathcal{Q}$$ be the set of all available or feasible decision vectors $$q$$. A robust-satisficing decision is one which maximizes the robustness on the set $$\mathcal{Q}$$ of available $$q$$-vectors and satisfices the performance at the critical level $${r_{\rm c}}$$:

$$ \hat{q}_{\rm c}({r_{\rm c}}) = \arg \max_{q \in \mathcal{Q}} {\hat{\alpha}}(q, {r_{\rm c}}) $$

Usually, though not invariably, the robust-satisficing action $$\hat{q}_{\rm c}({r_{\rm c}})$$ depends on the critical reward $${r_{\rm c}}$$.

The opportuneness function generates opportune-windfalling preferences on the options. An opportune-windfalling decision maker will prefer a decision $$q$$ over an alternative $$q^\prime$$ if $$q$$ is more opportune than $$q^\prime$$ at the same level of reward $${r_{\rm w}}$$. Formally:
$$ q \succ_{\rm o} q^\prime \ \ \ \mbox{if} \ \ \ {\hat{\beta}}(q, {r_{\rm w}}) < {\hat{\beta}}(q^\prime, {r_{\rm w}}) \ \ \ (6) $$

The opportune-windfalling decision, $$\hat{q}_{\rm w}({r_{\rm w}})$$, minimizes the opportuneness function on the set of available decisions:

$$ \hat{q}_{\rm w}({r_{\rm w}}) = \arg \min_{q \in \mathcal{Q}} {\hat{\beta}}(q, {r_{\rm w}}) $$

The two preference rankings, eqs. (5) and (6), as well as the corresponding optimal decisions $$\hat{q}_{\rm c}({r_{\rm c}})$$ and $$\hat{q}_{\rm w}({r_{\rm w}})$$, may be different.

The robustness and opportuneness functions have many properties which are important for decision analysis. Robustness and opportuneness both trade-off against aspiration for outcome: robustness and opportuneness deteriorate as the decision maker's aspirations increase. Robustness is zero for model-best anticipated outcomes. Robustness curves of alternative decisions may cross, implying reversal of preference depending on aspiration. Robustness may be either sympathetic or antagonistic to opportuneness: a change in decision which enhances robustness may either enhance or diminish opportuneness. Various theorems have also been proven which identify conditions in which the probability of success is enhanced by enhancing the info-gap robustness, without of course knowing the underlying probability distribution.
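The crossing of robustness curves, and the resulting reversal of preference, can be seen in a toy computation. In the sketch below (all plans and numbers are hypothetical) each plan's worst-case reward degrades linearly with the horizon of uncertainty, $$R_{\min}(\alpha) = a - b\alpha$$, so its robustness at critical reward $${r_{\rm c}}$$ is $$(a - {r_{\rm c}})/b$$, clipped at zero:

```python
# Two hypothetical plans: plan 1 promises a higher nominal reward but is
# more sensitive to uncertainty; plan 2 promises less but degrades slowly.

def alpha_hat(a, b, r_c):
    """Robustness of a plan whose worst-case reward is a - b*alpha."""
    return max((a - r_c) / b, 0.0)

plans = {"plan 1": (10.0, 2.0), "plan 2": (8.0, 0.5)}

def robust_satisficing_choice(r_c):
    """argmax of robustness over the available plans (robust-satisficing)."""
    return max(plans, key=lambda name: alpha_hat(*plans[name], r_c))

# Zero robustness at each plan's nominal (best-estimate) reward:
assert alpha_hat(10.0, 2.0, 10.0) == 0.0

# Preference reversal: the two robustness curves cross at r_c = 22/3.
print(robust_satisficing_choice(6.0))  # plan 2 wins for modest demands
print(robust_satisficing_choice(9.0))  # plan 1 wins for ambitious demands
```

Both curves are zero at their plan's nominal reward and rise as aspirations are relaxed; because they cross, which plan is "most robust" depends on the required reward, exactly the preference reversal described above.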

Example: Managing an epidemic

The impact of an epidemic is evaluated with data and models from epidemiology and other fields. These models predict morbidity and mortality, and underlie strategies for prevention and response. Severe uncertainties accompany these models, and information is often too sparse to represent these uncertainties probabilistically, and even worst-case scenarios are hard to identify reliably. The rate of infection is central in determining the spread of the epidemic, but it is hard to predict. Population studies of infection rates in normal circumstances may be of limited value when applied to stressful situations. Also, the disease may be poorly understood, as in SARS or avian flu. Population-mixing behavior, which influences the infection rate, can be influenced by public health announcements, but this is also hard to predict. Other properties are important, for instance, the number of initial infections and the identity of the disease, both of which may be hard to know.

The spread of disease can be managed through prophylactic or remedial treatment, quarantine, public information, and other means. Consider an example in which public officials must choose whether to quarantine the affected population, and if so, what population size to include. Also, planners must determine how quickly medical treatment will be dispensed upon detecting an emerging epidemic. Thousands or millions of individuals cannot be treated immediately, and plans must allow time for implementing large scale treatment.

We will consider three different quarantine options: no quarantine in which case the vulnerable population is one million; moderate quarantine which limits the vulnerable population to fifty thousand; and strict quarantine which limits the vulnerables to one thousand. We wish to analyze each option and then pick one, together with a rate of deployment of medical assistance. Our models are incomplete and uncertain, but they nonetheless represent the best available understanding of the processes involved. We will use the models as the starting point in the policy-selection process, followed by an analysis of the robustness to model-uncertainty and its implication for selecting a plan. We will illustrate the info-gap robust-satisficing strategy for selecting a plan.

Specifically, for each quarantine scenario, we will use an epidemiological model to predict the morbidity as a function of the time required to deploy and dispense medical treatment. From such an analysis one finds that stricter quarantine allows medical assistance to be deployed more slowly without increasing morbidity. Conversely, stricter quarantine results in lower morbidity at the same deployment rate. For instance, one might find that the morbidity reached in the no-quarantine case is reached in a quarantined sub-population of 50 thousand only when deployment is approximately 20 times slower. If the disease can be contained within a population of 1000, then the deployment-duration is extended by a further factor of 50 without increasing morbidity.

Clearly, quarantine is valuable, but how robust are these conclusions to modelling error? These conclusions depend on models and data which entail many info-gaps. Even small errors can result in performance which is substantially worse than predicted. We seek a strategy for which we are confident that the outcomes are acceptable. For any given choice of quarantine size and deployment duration, we must ask: by how much can the models err while an acceptable outcome is still guaranteed? (Note that this is different from asking: 'how wrong are the models', or 'what is the worst case'?) The answer to the question we are asking is expressed by the robustness function. In comparing two options, we will prefer the one with greater robustness; we will prefer the option which guarantees a specified level of morbidity for the widest range of contingencies. Incidentally, this preference-ranking of the options is independent of whether or not we judge the robustness of either option to be large enough.

Before discussing how the robustness function is used to select a plan, let's recall that large robustness means that the corresponding morbidity is obtained for a large range of contingencies. The robustness of a plan is a result of how, and how much, information is exploited, and how uncertain that information is as expressed by an info-gap model of uncertainty. Further understanding of the roots of robustness can be obtained by mathematical analysis.

Two attributes of any option are expressed by the robustness function. (The theorems which underlie these statements depend on the nesting axiom of info-gap models of uncertainty, discussed earlier.) First, the robustness function quantifies the irrevocable trade-off between robustness and morbidity: for any plan, lower morbidity entails lower robustness; higher morbidity, higher robustness. This suggests that one is less confident in obtaining low morbidity than high morbidity, with any given plan, because the robustness to uncertainty is lower for low morbidity. The planner who uses uncertain models to choose a plan for achieving low morbidity, must also accept low robustness. This is useful in identifying feasible values of morbidity: those values for which the robustness is very low are clearly unfeasible, large robustness suggests greater feasibility, etc. Second, when choosing between several plans, which plan is most robust may depend on the acceptable level of morbidity (in a way which is revealed by the mathematical representation of robustness). The preference-ranking of the plans (based on robustness) can change as the planner considers different levels of morbidity. In other words, there need not be a unique robust-optimal plan, but instead the most robust plan may depend on the planner's (or the society's) preferences on the outcome (morbidity in this example). By identifying feasible goals, and plans which can reliably achieve them, the robustness analysis supports debate and decision about resource allocation. In short, the info-gap analysis of robustness assists strategic planners to weigh the pros and cons of available options.

Example: Managing an epidemic: Long
Consider the management of an epidemic. Infectious diseases have seriously afflicted humanity for centuries. The Black Death killed an estimated 30 to 60 percent of Europe's population in the mid-14th century. 50 to 100 million people died in the 1918-1920 influenza pandemic. SARS, AIDS, mad-cow disease, and avian flu have caused serious injury, economic loss, and death in recent decades. Civilian populations also face serious threat of attack with biological agents by rogue states or terror groups.

The impact of a potential epidemic is evaluated with data and models from epidemiology and other fields. These models predict morbidity and mortality, and underlie strategies for prevention and response. The spread of disease can be managed through prophylactic or remedial treatment, quarantine, public information, and other means. Severe uncertainties accompany these models.

We will consider a simple example in which public officials must choose whether to quarantine the affected population, and if so, what population size to include. Also, planners must determine how quickly medical treatment will be dispensed upon detecting an emerging epidemic. Thousands or millions of individuals cannot be treated immediately, and plans must allow time for implementing large scale treatment.

Our simple epidemic model assumes constant population (no deaths, or death occurs much more slowly than infection), and infected individuals continue to infect the remaining susceptible population. This simple model will illustrate the info-gap robust-satisficing analysis of decisions facing strategic planners. The model contains four central parameters. $$\beta$$ determines the rate of infection of susceptible by infected individuals. $$t_{\rm c}$$ is the duration from detection of the epidemic until medical treatment has been dispensed. $$y_0$$ is the number of initially infected people. $$N$$ is the size of the population within which the disease can spread.

The rate of mixing and infection, $$\beta$$, is hard to predict. Population studies in normal circumstances may be of limited value when applied to stressful situations. Also, the disease may be poorly understood, as in SARS or avian flu. Population-mixing behavior can be influenced by public health announcements, but this is also hard to predict. We focus in this example only on uncertainty in $$\beta$$, though other uncertainties are important, for instance, the number and identity of initial infections is hard to know.

Consider the choice of $$t_{\rm c}$$ (delay until treatment) and $$N$$ (quarantine size). Suppose the vulnerable population is one million people. The best estimate of our model predicts the number of new infections as a function of the time required to deploy and dispense medical treatment. Now suppose that, by quarantine, the disease can be contained within a sub-population of 50 thousand. The model predicts that the same morbidity will be reached in the sub-population only after a duration approximately 20 times longer than for one million people. In other words, mass quarantine allows slower medical deployment, or, conversely, lower morbidity at the same deployment rate. If the disease can be contained within a population of 1000, then the deployment-duration is extended by a further factor of 50 without increasing morbidity. Clearly, quarantine is valuable, but how robust are these conclusions to modelling error?
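A minimal version of such a model treats the epidemic as logistic growth, $$dy/dt = \beta y (N - y)$$, whose early-phase growth rate is $$\beta N$$, so shrinking the vulnerable population slows the spread roughly in proportion. The sketch below (all parameter values are hypothetical and not calibrated to any disease or to fig. 1) computes the time for morbidity to reach a given level under the three quarantine sizes:

```python
import math

def time_to_morbidity(beta, N, y0, y_c):
    """Time for the simple SI model dy/dt = beta * y * (N - y) to grow
    from y0 initial infections to y_c infections, using the closed-form
    logistic solution. All arguments are illustrative values."""
    return math.log(y_c * (N - y0) / (y0 * (N - y_c))) / (beta * N)

beta = 1e-6   # infection rate per contact pair per day (hypothetical)
y0 = 10       # initial infections
y_c = 500     # morbidity level of interest

t_none = time_to_morbidity(beta, 1_000_000, y0, y_c)  # no quarantine
t_mod = time_to_morbidity(beta, 50_000, y0, y_c)      # moderate quarantine
t_strict = time_to_morbidity(beta, 1_000, y0, y_c)    # strict quarantine

print(t_mod / t_none)    # ~20: moderate quarantine slows spread ~20-fold
print(t_strict / t_mod)  # a further slowdown, roughly 50-60-fold here
```

With these illustrative numbers the moderate quarantine extends the time to a given morbidity by about the ratio of population sizes (a factor of ~20), consistent with the kind of result described above; the strict quarantine extends it further still.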



Considerations such as these may contribute to a choice of quarantine size, $$N$$, and deployment time, $$t_{\rm c}$$. However, the models entail many info-gaps. Even small errors in the models and data can result in performance substantially worse than predicted. We seek a strategy for which we are confident that the outcomes are not unacceptable. For any given plan, $$(N, t_{\rm c})$$, we must ask: by how much can the models err, while acceptable outcomes are still guaranteed? The answer to this question, for various levels of morbidity and three different plans, is given by the robustness curves shown in fig. 1. The horizontal axis is the morbidity, $$y_{\rm c}$$, and the vertical axis is the robustness $$\widehat{\alpha}(N_i, t_{{\rm c},i}, y_{\rm c})$$: the greatest fractional error in the estimated mixing rate, $$\overline{\beta}$$, up to which the corresponding morbidity, $$y_{\rm c}$$, will not be exceeded.

We see two important features of the robustness curves in fig. 1. First, the slopes are positive, indicating trade-off between robustness and morbidity: morbidity, $$y_{\rm c}$$, can be reduced only by also reducing the robustness against error in the models and data, $$\widehat{\alpha}$$. Second, the robustness becomes zero at the value of morbidity predicted by the best available models and data. For instance, the best estimate of the mixing rate, $$\overline{\beta}$$, indicates that morbidity in plans 1, 2 and 3 will not exceed 268, 182 and 227 infections, respectively. However, even tiny errors could result in greater morbidity. Best-model predictions are a poor basis for evaluating a plan. We should evaluate plans in terms of how much added morbidity we must tolerate, in order to gain added robustness against surprises, errors, and data deficiencies. For instance, in plan 2 (intermediate quarantine) we can "purchase" robustness to 0.5 fractional error in the estimated mixing coefficient, $$\overline{\beta}$$, by accepting the possibility of as many as 339 infections (point $$A$$), as opposed to the best-estimate of 182 (point $$B$$).

Note that plan 2 has lower estimated morbidity than the other two plans displayed in fig. 1, and that plan 2 is also more robust up to robustness of about 120 percent (1.2 fractional error). However, in light of the many unknown factors which can influence the infectious-mixing rate, $$\beta$$, the decision-maker may well want far greater robustness. In this case plan 1---severe quarantine---is indicated due to its greater immunity to uncertainty. Nonetheless it must be recognized that this greater robustness is obtained only at the expense of accepting the possibility of greater morbidity. The large robustness premium for plan 1 is particularly striking since this plan has the largest best-estimated morbidity.

This example illustrates how info-gap analysis of robustness assists strategic planners to weigh the pros and cons of available options.
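Robustness curves of this qualitative shape can be computed from the simple SI model $$dy/dt = \beta y (N - y)$$ in closed form. The sketch below (all parameter values are hypothetical and do not reproduce the plans or numbers of fig. 1) finds, for a fixed plan $$(N, t_{\rm c})$$, the greatest fractional error in the estimated infection rate for which morbidity at time $$t_{\rm c}$$ stays at or below $$y_{\rm c}$$:

```python
import math

def morbidity(beta, N, y0, t):
    """Infections at time t in the SI model dy/dt = beta*y*(N - y)."""
    g = math.exp(beta * N * t)
    return N * y0 * g / (N - y0 + y0 * g)

def robustness(N, t_c, y_c, beta_est, y0=10):
    """Greatest fractional error alpha in beta_est such that morbidity
    at time t_c stays at or below y_c for every beta in
    [beta_est*(1-alpha), beta_est*(1+alpha)]. Morbidity grows with beta,
    so the worst case is the upper endpoint, and the critical beta
    solves morbidity(beta, N, y0, t_c) = y_c in closed form."""
    beta_crit = math.log(y_c * (N - y0) / (y0 * (N - y_c))) / (N * t_c)
    return max(beta_crit / beta_est - 1.0, 0.0)

beta_est = 1e-6          # hypothetical best estimate of the infection rate
N, t_c = 50_000, 70.0    # moderate quarantine, 70-day deployment (illustrative)

# Trade-off: accepting higher morbidity buys more robustness to error in beta.
for y_c in (350, 500, 2000):
    print(y_c, robustness(N, t_c, y_c, beta_est))
```

The positive slope of the resulting curve is the robustness-morbidity trade-off, and the robustness hits zero at the morbidity predicted by the best-estimate $$\overline{\beta}$$, exactly the two features noted for fig. 1.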

Treatment of severe uncertainty
Sniedovich has challenged the validity of info-gap theory for making decisions under severe uncertainty. He questions the effectiveness of info-gap theory in situations where the best estimate $$\displaystyle \tilde{u}$$ is a poor indication of the true value of $$\displaystyle u$$. Sniedovich notes that the info-gap robustness function is "local" to the region around $$\displaystyle \tilde{u}$$, where $$\displaystyle \tilde{u}$$ is likely to be substantially in error. He concludes that therefore the info-gap robustness function is an unreliable assessment of immunity to error. There are several possible responses to this concern.

Simon introduced the idea of bounded rationality. Limitations on knowledge, understanding, and computational capability constrain the ability of decision makers to identify optimal choices. Simon advocated satisficing rather than optimizing: seeking adequate (rather than optimal) outcomes given available resources. Schwartz, Conlisk and others discuss extensive evidence for the phenomenon of bounded rationality among human decision makers, as well as for the advantages of satisficing when knowledge and understanding are deficient. The info-gap robustness function provides a means of implementing a satisficing strategy under bounded rationality. For instance, in discussing bounded rationality and satisficing in conservation and environmental management, Burgman notes that "Info-gap theory ... can function sensibly when there are 'severe' knowledge gaps." The info-gap robustness and opportuneness functions provide "a formal framework to explore the kinds of speculations that occur intuitively when examining decision options." Burgman then proceeds to develop an info-gap robust-satisficing strategy for protecting the endangered orange-bellied parrot. Similarly, Vinot, Cogan and Cipolla discuss engineering design and note that "the downside of a model-based analysis lies in the knowledge that the model behavior is only an approximation to the real system behavior. Hence the question of the honest designer: how sensitive is my measure of design success to uncertainties in my system representation? ... It is evident that if model-based analysis is to be used with any level of confidence then ... [one must] attempt to satisfy an acceptable sub-optimal level of performance while remaining maximally robust to the system uncertainties." They proceed to develop an info-gap robust-satisficing design procedure for an aerospace application.

Thus it is correct that the info-gap robustness function is local, with respect to $$\tilde{u}$$. However, the value judgment of whether this neighborhood of robustness is small, too small, large, large enough, etc., is characteristic of all decisions under uncertainty. A major purpose of quantitative decision analysis is to provide focus for the subjective judgments which must be made.