Talk:Probability bounds analysis

Numerical example
Consider the sum of two uncertain numbers, a and b, characterized by the p-boxes A = normal(mean = [0.5, 0.6], variance = [0.001, 0.01]) = {f : f(x) = exp(−(x − μ)²/(2σ²))/(σ√(2π)), μ ∈ [0.5, 0.6], σ² ∈ [0.001, 0.01]}, and B = p-box(min = 0, max = 1, mean = mode = 0.1) = {f : f(x) = 0 whenever x ∉ [0, 1], ∫ x f(x) dx = 0.1, mode(f) = 0.1}.

Because the mode of B is 0.1, each corresponding cumulative distribution function is convex on (−∞, 0.1] and concave on [0.1, ∞).

That is, A denotes the set of all probability distribution functions of normally distributed random variables whose mean is between 0.5 and 0.6 and whose variance is between 0.001 and 0.01. Likewise, B denotes the set of all probability distribution functions ranging between 0 and 1 whose mean and mode are both 0.1.
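As an illustrative sketch (the helper names here are mine, not Risk Calc syntax), the bounds that make up the p-box A can be computed as the envelope of the normal CDFs over the corners of the parameter box, since for fixed x the normal CDF is monotone in the mean and monotone in the standard deviation:

```python
# Sketch of the p-box A = normal(mean=[0.5, 0.6], variance=[0.001, 0.01]).
# Because the normal CDF is monotone in each parameter separately, its
# envelope over the parameter box is attained at the four corners.
import math

def norm_cdf(x, mu, sigma):
    # standard normal CDF evaluated via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def normal_pbox(x, means=(0.5, 0.6), variances=(0.001, 0.01)):
    """Bounds (lower, upper) on Pr(A <= x) over the parameter box."""
    corners = [norm_cdf(x, mu, math.sqrt(v))
               for mu in means for v in variances]
    return min(corners), max(corners)

lo, hi = normal_pbox(0.55)   # the box is widest near the middle
```

By symmetry of the mean interval around 0.55, the two bounds there sum to one.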

a = normal([0.5, 0.6], sqrt([0.001, 0.01]))
b = minmaxmeanismode(0, 1, 0.1)
e = a |+| b
f = a + b

aa = max(0.001, min(0.999, a))
bb = max(0.001, min(0.999, b))
clear; show aa

show e on '1111'
show f on '1111'
clear; show e in red; show f

What can be inferred about the sum of these uncertain numbers depends on what is assumed about the stochastic dependence between the quantities. If, for instance, they can be assumed to be independent, or related according to some other specific copula or dependence function, the bounds on the sum will be tighter than they would be if their dependence were only imprecisely specified (e.g., that they are positively related, or that their interaction can be characterized by a particular correlation coefficient). Making no assumption whatever about the dependence between the quantities leads to the broadest bounds. Thus the sum can be computed in different ways, depending on the assumptions made about the dependence among the quantities.
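A small sketch with hypothetical discrete distributions (the values and probabilities are mine, not the ones in the example above) makes the contrast concrete: under independence, Pr(A + B ≤ z) is a single number, while with no dependence assumption only Fréchet-style bounds on it can be given:

```python
# Contrast independent convolution of two discrete distributions with
# Frechet-style bounds on their sum when nothing is assumed about
# dependence.  (Hypothetical two-point distributions for illustration.)
import itertools

A = [(0.5, 0.5), (0.75, 0.5)]   # (value, probability) pairs
B = [(0.0, 0.5), (0.25, 0.5)]

def cdf(dist, x):
    return sum(p for v, p in dist if v <= x)

def independent_cdf(z):
    # Pr(A + B <= z) assuming independence
    return sum(pa * pb for (a, pa), (b, pb) in itertools.product(A, B)
               if a + b <= z)

def frechet_cdf_bounds(z, grid):
    # For every split z = x + y,
    #   Pr(A+B <= z) >= max(F(x) + G(y) - 1, 0)  and
    #   Pr(A+B <= z) <= min(F(x) + G(y), 1),
    # so optimizing over a grid of splits gives valid (if possibly
    # non-tight) bounds.
    vals = [cdf(A, x) + cdf(B, z - x) for x in grid]
    return (max(max(v - 1.0, 0.0) for v in vals),
            min(min(v, 1.0) for v in vals))

grid = [i / 16 for i in range(-16, 33)]
lo, hi = frechet_cdf_bounds(0.75, grid)   # [0.5, 1.0]
p_ind = independent_cdf(0.75)             # 0.75, inside the bounds
```

The independent answer always lies inside the no-assumption interval, which is exactly the tightening described above.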

In this example, we compute bounds on the sum from only partial information about each of the respective random variables. There is no way to do these calculations with a sampling strategy such as Monte Carlo simulation.

Shown below are the bounds on each of the four inputs and bounds on the sum, both with an assumption of independence and without any assumption about the dependence among the variables. The dotted curves represent the inputs and answers that might have been used in a traditional probabilistic assessment that did not acknowledge the uncertainty about the distributions and dependencies. Compare them with the solid edges of the p-boxes to quantify how much the tail risks would have been underestimated.

When the quantities are independent and their p-boxes are degenerate so they define particular distribution functions, the result of the probability bounds analysis is the same as would be obtained in a traditional probabilistic convolution such as is commonly implemented with Monte Carlo simulation.
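For instance, a minimal Monte Carlo sketch of this degenerate special case (the particular normal and uniform inputs here are hypothetical) just convolves two precise distributions by independent sampling:

```python
# When both p-boxes collapse to single distributions and independence
# is assumed, probability bounds analysis agrees with an ordinary
# Monte Carlo convolution, sketched here with made-up inputs.
import random

random.seed(0)
N = 100_000
a = [random.gauss(0.55, 0.05) for _ in range(N)]  # precise normal input
b = [random.uniform(0.0, 0.2) for _ in range(N)]  # precise uniform input
s = [x + y for x, y in zip(a, b)]                 # independent sum

mean_s = sum(s) / N
# E[a + b] = 0.55 + 0.1 = 0.65, so the sample mean should be close
```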


Figure 7. Example calculation of a sum of four addends characterized by p-boxes.

a = lognormal1([.5, .6], sqrt([.001, .01]))
b = minmaxmode(0, 1, .3)
c = hist(0, 1, .2, .5, .6, .7, .75, .8)
d = uniform(0, 1)
e = a |+| b

The table below lists the summary statistical measures yielded by three analyses of this hypothetical calculation. The second column gives the results that might be obtained by a standard Monte Carlo analysis under an independence assumption (the dotted lines in the figure above). The third and fourth columns give results from probability bounds analyses, either with or without an assumption of independence.

Summary          Monte Carlo   Independence     General
95th percentile  2.45          [2.1, 2.9]       [1.3, 3.2]
median           1.87          [1.4, 2.3]       [0.79, 2.8]
mean             1.88          [1.4, 2.3]       [1.4, 2.3]
variance         0.135         [0.086, 0.31]    [0, 0.90]

Notice that, while the Monte Carlo simulation produces point estimates, the bounding analyses yield intervals for the various measures. These intervals are sure bounds on the respective statistics, and they reveal just how unsure the answers given by the Monte Carlo simulation actually are. Looking at the last column (no dependence assumption), for instance, we see that the variance might actually be more than six times larger than the Monte Carlo estimate of 0.135.


Schemes to control dependency problems
A special case of the dependency problem in probability bounds analysis is the problem of repeated uncertain variables. The issue affects all uncertainty calculi, and it is variously known as the dependence problem or the repeated-variable problem.

Use of dependence operators

Rearranging to reduce repeated uncertainties
The dependency problem can sometimes be eliminated by replacing the expression to be evaluated with an algebraically equivalent expression in which no variable appears more than once. For instance, in an expression such as a/x + b/x + c/x + d/x, where x denotes an uncertain quantity, the perfect dependence among the various instantiations of x is difficult to account for in the computation. But when this expression is replaced by the equivalent expression (a + b + c + d)/x, the dependency problem disappears because the rearranged expression contains no repeated variables.
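A Monte Carlo sketch (with made-up numbers) shows why the repetition matters probabilistically: wrongly treating each occurrence of x as an independent draw understates the spread of the result, which the rearranged expression avoids:

```python
# Sketch: if each occurrence of the repeated variable x is (wrongly)
# sampled independently, the computed spread of a/x + b/x + c/x + d/x
# is too narrow; the rearranged form (a + b + c + d)/x samples x
# exactly once and keeps the full spread.  (All numbers are made up.)
import random

random.seed(1)
a = b = c = d = 1.0
N = 50_000

def draw():
    # one realization of the uncertain quantity x
    return random.uniform(1.0, 2.0)

wrong = [a/draw() + b/draw() + c/draw() + d/draw() for _ in range(N)]
right = [(a + b + c + d) / draw() for _ in range(N)]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((v - m) ** 2 for v in xs) / len(xs)
# the means agree, but the naive version has about 1/4 of the variance
# (four independent terms average out instead of moving together)
```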

Likewise, the expression x² − x can be replaced by the algebraically equivalent expression (x − 1/2)² − 1/4, in which x appears only once.
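The effect of this rearrangement is easy to see with ordinary interval arithmetic (a sketch; the same phenomenon arises in p-box arithmetic): evaluating x·x − x naively on x = [0, 1] yields [−1, 1], while the single-occurrence form recovers the exact range [−1/4, 0]:

```python
# Minimal interval arithmetic on (lo, hi) pairs, enough to show the
# repeated-variable overestimation in x*x - x versus (x - 1/2)**2 - 1/4.
def mul(a, b):
    cs = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(cs), max(cs))

def sub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def sqr(a):
    # proper interval square: never negative, unlike mul(a, a)
    lo, hi = a
    if lo <= 0.0 <= hi:
        return (0.0, max(lo*lo, hi*hi))
    return (min(lo*lo, hi*hi), max(lo*lo, hi*hi))

x = (0.0, 1.0)
naive = sub(mul(x, x), x)                           # (-1.0, 1.0): too wide
tight = sub(sqr(sub(x, (0.5, 0.5))), (0.25, 0.25))  # (-0.25, 0.0): exact
```

The naive form treats the two occurrences of x as if they could vary independently; the rearranged form cannot overestimate because x appears only once.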

Subinterval reconstitution



Affine arithmetic 



-- Magioladitis (talk) 12:35, 8 November 2013 (UTC)