
Simulation-based Optimization
In a simulation experiment, the goal is to evaluate the effect of different values of the input variables on a system; this is called running simulation experiments. Sometimes, however, the interest lies in finding the values of the input variables that are optimal with respect to the system outcomes. One approach would be to run simulation experiments for all possible values of the input variables, but this is not always practical: there may be too many possible values, or the simulation model may be too complicated and expensive to run for suboptimal input values. In these cases, the goal is to find optimal values for the input variables without trying every possible value. This process is called simulation optimization.

A specific simulation-based optimization method can be chosen according to figure 1, based on the types of the decision variables. Optimization appears in two main branches of operational research:

Parametric (static) optimization – the objective is to find the values of the parameters, which are “static” for all states, that maximize or minimize a function. Mathematical programming, such as linear programming, is used in this case. Here, simulation helps when the parameters contain noise or when evaluating the problem would demand excessive computer time because of its complexity.

Control (dynamic) optimization – used largely in computer science and electrical engineering, which has produced many papers and projects in these fields. The optimal control is computed per state, and the result changes in each state. Both mathematical programming and dynamic programming can be used. In this scenario, simulation can generate random samples and help solve complex and large-scale problems.

Methods
The main methods of simulation-based optimization are discussed below:

Response surface methodology (RSM)
In response surface methodology, the objective is to find the relationship between the input variables and the response variables. The process starts by trying to fit a linear regression model. If the P-value indicates a poor fit, a higher-degree polynomial regression model, usually quadratic, is fitted instead. This process of finding a good relationship between input and response variables is repeated for each simulation test. In simulation optimization, the response surface method can be used to find the input values that produce the desired outcomes in terms of the response variables.
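As an illustration, the two-step fit can be sketched in Python; the `simulate` function below is a hypothetical stand-in for an expensive simulation model, and all numbers are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(x):
    """Stand-in for an expensive simulation run: a noisy quadratic
    response with its true optimum at x = 2 (hypothetical model)."""
    return (x - 2.0) ** 2 + rng.normal(scale=0.1)

# Run a small designed experiment over the input variable.
xs = np.linspace(0.0, 4.0, 9)
ys = np.array([simulate(x) for x in xs])

# Step 1: try a linear (first-order) model.
lin_coef = np.polyfit(xs, ys, 1)
lin_resid = ys - np.polyval(lin_coef, xs)

# Step 2: the linear fit is poor here, so move to a quadratic model.
quad_coef = np.polyfit(xs, ys, 2)
quad_resid = ys - np.polyval(quad_coef, xs)

# The fitted quadratic surface yields an estimated optimal input:
a, b, _ = quad_coef
x_opt = -b / (2 * a)  # vertex of the fitted parabola
```

The quadratic surface fits the simulated responses far better than the linear one, and its vertex estimates the optimal input without exhaustively simulating every candidate value.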

Heuristic methods
Heuristic methods trade accuracy for speed. Their goal is to find a good solution faster than traditional methods when those are too slow or fail to solve the problem. They usually find a local optimum instead of the global optimum; however, the values found are considered close enough to the final solution. Examples of this kind of method are tabu search and genetic algorithms.
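A minimal sketch of the speed-for-accuracy trade, using random-restart hill climbing on a hypothetical function with two minima (the function, step sizes, and iteration counts are all illustrative, not a prescribed method):

```python
import random

random.seed(1)

def objective(x):
    # Illustrative function with two local minima; the global one is
    # near x = -1.04 with a value of about -0.31.
    return (x * x - 1.0) ** 2 + 0.3 * x

def hill_climb(x, step=0.05, iters=200):
    """Greedy local search: accept a random neighbor only if it improves."""
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if objective(candidate) < objective(x):
            x = candidate
    return x

# Each single run may stop at a local optimum; restarting from several
# random points and keeping the best result is usually good enough.
starts = [random.uniform(-2, 2) for _ in range(10)]
best = min((hill_climb(s) for s in starts), key=objective)
```

Each hill climb is cheap but myopic; the restarts are what make the final answer likely to land in the global basin.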

Stochastic approximation
Stochastic approximation is used when the function cannot be computed directly and can only be estimated via noisy observations. In this scenario, this method (or family of methods) looks for the extrema of the function. The objective function would be:

$$\underset{x\in\theta}{\min}f\bigl(x\bigr) = \underset{x\in\theta}{\min}\operatorname{E}\bigl[F\bigl(x,y\bigr)\bigr]$$

$$y$$  is a random variable that represents the noise.

$$x$$ is the parameter that minimizes $$f\bigl(x\bigr)$$.

$$\theta$$ is the domain of the parameter $$x$$.
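A minimal Robbins-Monro-style sketch, assuming a hypothetical noisy objective $$F(x,y) = (x-y)^2$$ with Gaussian $$y$$, so that $$f(x) = \operatorname{E}[F(x,y)]$$ is minimized at the mean of $$y$$:

```python
import random

random.seed(0)

def noisy_gradient(x):
    """Noisy observation of the gradient of E[F(x, y)] for
    F(x, y) = (x - y)^2 with y ~ N(5, 1); the minimizer is x = 5."""
    y = random.gauss(5.0, 1.0)
    return 2.0 * (x - y)

# Stochastic approximation iteration with diminishing step sizes:
# a_k -> 0 but sum(a_k) = infinity, the classic convergence condition.
x = 0.0
for k in range(1, 5001):
    a_k = 1.0 / k
    x -= a_k * noisy_gradient(x)
```

No gradient of the true expectation is ever computed; the diminishing steps average out the observation noise, and the iterate drifts to the minimizer.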

Derivative-free optimization methods
Derivative-free optimization is a subject of mathematical optimization. These methods are applied to optimization problems whose derivatives are unavailable or unreliable. A derivative-free method either builds a model based on sample function values or works directly with a sample set of function values, without exploiting a detailed model. Since no derivatives are needed, these methods differ fundamentally from derivative-based methods.

For unconstrained optimization problems, it has the form:

$$\underset{x\in\mathbb{R}^n}{\min}f\bigl(x\bigr)$$

The limitations of derivative-free optimization:

1. It usually cannot handle optimization problems with more than a few tens of variables, and the results obtained via this method are usually not very accurate.

2. When confronted with minimizing non-convex functions, it shows its limitations.

3. Derivative-free optimization methods are relatively simple and easy to implement; however, their performance guarantees, in theory and in practice, are limited.
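The basic idea can be sketched with a simple pattern (compass) search, one of the classic derivative-free schemes: probe along coordinate directions using function values only, and shrink the step when no probe improves. The convex test function below is illustrative:

```python
def compass_search(f, x, step=1.0, tol=1e-6):
    """Pattern search using only function evaluations: try moves of
    size `step` along each coordinate axis; if none improves the
    current point, halve the step until it falls below `tol`."""
    n = len(x)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(n):
            for s in (+step, -step):
                y = list(x)
                y[i] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5
    return x, fx

# Minimize a smooth convex function, minimum at (1, -2), without
# ever evaluating a derivative.
x_min, f_min = compass_search(
    lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0])
```

On this small, smooth problem the search converges quickly; on high-dimensional or non-convex problems it runs into exactly the limitations listed above.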

Dynamic programming
Dynamic programming deals with situations where decisions are made in stages. The key to this kind of problem is to trade off present and future costs.

A basic dynamic programming model has two features:

1) Has a discrete time dynamic system.

2) The cost function is additive over time.

The discrete-time dynamic system has the form:

$$x_{k+1} = f_k(x_{k},u_{k},w_{k}), k=0,1,...,N-1$$

$$k$$ represents the index of discrete time.

$$x_k$$ is the state at time k; it summarizes the past information relevant for future optimization.

$$u_k$$ is the control variable.

$$w_k$$ is the random parameter.

The cost function has the form:

$$g_N(x_N) + \sum_{k=0}^{N-1} g_k(x_k,u_k,w_k)$$

$$g_N(x_N)$$ is the cost at the end of the process.

Because of the random parameter $$w_k$$, the cost cannot be optimized meaningfully as written, so the expected value is used instead:

$$\operatorname{E}\Bigl\{g_N(x_N) + \sum_{k=0}^{N-1} g_k(x_k,u_k,w_k) \Bigr\}$$
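The expected cost can be minimized stage by stage with backward induction. The following sketch uses a hypothetical toy inventory problem (states, costs, dynamics, and probabilities are all made up for illustration):

```python
# Backward induction for a toy inventory problem:
# state x_k = stock level, control u_k = units ordered, random demand w_k.
STATES = range(0, 5)          # feasible stock levels
CONTROLS = range(0, 3)        # order 0, 1, or 2 units
DEMANDS = {0: 0.5, 1: 0.5}    # demand w_k and its probability
N = 3                         # horizon

def g(x, u, w):
    # Stage cost g_k: ordering cost plus a penalty for unmet demand.
    return u + 4 * max(w - x - u, 0)

def f(x, u, w):
    # Dynamics x_{k+1} = f_k(x_k, u_k, w_k), clipped to the state set.
    return min(max(x + u - w, 0), 4)

# Terminal cost g_N(x_N): leftover stock is worth nothing here.
J = {x: 0.0 for x in STATES}
policy = {}
for k in reversed(range(N)):
    J_new, policy_k = {}, {}
    for x in STATES:
        # Expected cost-to-go of each control, averaging over w_k.
        costs = {u: sum(p * (g(x, u, w) + J[f(x, u, w)])
                        for w, p in DEMANDS.items())
                 for u in CONTROLS}
        u_best = min(costs, key=costs.get)
        J_new[x], policy_k[x] = costs[u_best], u_best
    J, policy[k] = J_new, policy_k
```

After the loop, `J[x]` is the optimal expected cost from initial state `x`, and `policy[k][x]` is the optimal order quantity at stage `k`; the trade-off between present and future costs is exactly the `g(...) + J[f(...)]` term.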

Neuro-dynamic programming
Neuro-dynamic programming is the same as dynamic programming except that the former uses the concept of approximation architectures. It combines artificial intelligence, simulation-based algorithms, and function approximation techniques. “Neuro” in this term originates from the artificial intelligence community. It means learning how to make improved decisions for the future via a built-in mechanism based on current behavior. The most important part of neuro-dynamic programming is building a trained neural network for the optimization problem.
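The approximation idea can be sketched as follows: instead of tabulating the cost-to-go for every state, fit a parametric approximation to costs observed in simulation. Here a polynomial in the state stands in for the neural network, and the system and policy are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_cost(x, steps=20):
    """Simulated cost-to-go from state x under a fixed policy:
    each step costs x**2 and the state drifts toward 0 with noise
    (a hypothetical system chosen for illustration)."""
    total = 0.0
    for _ in range(steps):
        total += x * x
        x = 0.8 * x + rng.normal(scale=0.1)
    return total

# Fit an approximation J(x) ~ theta0 + theta1*x + theta2*x^2 from
# simulated trajectories; in neuro-dynamic programming proper, a
# trained neural network would replace these polynomial features.
xs = rng.uniform(-2, 2, size=200)
costs = np.array([rollout_cost(x) for x in xs])
features = np.column_stack([np.ones_like(xs), xs, xs ** 2])
theta, *_ = np.linalg.lstsq(features, costs, rcond=None)
```

The fitted `theta` compresses many simulated outcomes into a small approximation architecture that can then be evaluated at states never visited in simulation.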

Limitations
Simulation-based optimization has some limitations, such as the difficulty of creating a model that imitates the dynamic behavior of the system well enough to represent it. Another problem is the complexity of determining the uncontrollable parameters of both the real-world system and the simulation. Moreover, only a statistical estimate of the real values can be obtained. It is also not easy to determine the objective function, since it is the result of measurements, which can be harmful to the solutions.

Examples
The literature presents many uses of simulation-based optimization. Nguyen et al., for example, discuss in their paper the use of simulation-based optimization for supporting the design of high-performance buildings, such as green buildings. Figure 2 presents a simplified view of their method.

Saif et al. present another possible use of simulation-based optimization: allocating energy resources in an imperfect power distribution system in an optimal way, considering location and capacity.