
Particle swarm optimization (PSO) is a relatively recent population-based stochastic method for constrained global optimization, introduced by James Kennedy and Russell C. Eberhart in 1995 [1] and based on the simulation of social behavior. PSO is inspired by the ability of flocks of birds, schools of fish, and herds of animals to adapt to their environment, find rich sources of food, and avoid predators by implementing an information-sharing approach, thus gaining an evolutionary advantage. [1*]

PSO shares many similarities with evolutionary computation (EC) techniques in general and Genetic Algorithms (GA) in particular [5*]. As in GA and evolution strategies (ES), PSO exploits a population of potential solutions to explore a multidimensional search space, looking for the global optimum (minimum or maximum) by updating generations [2*] [3*] [4*]. However, in contrast to GA and ES, PSO has no evolution operators such as crossover and mutation [2*]. Instead, PSO relies on the exchange of information between individuals, called particles, of the population, called a swarm [3*]. Each particle "flies" through the problem space and adjusts its trajectory towards its own previous best position, and towards the best previous position attained by any member of its neighborhood [9] [3*].

PSO can be easily implemented and is computationally inexpensive, since its memory and CPU requirements are low. It does not require any gradient information about the function being optimized and has few parameters to tune. PSO has proven to be an efficient method for many optimization problems, and in certain cases it does not suffer from the problems encountered by other EC techniques [2*]. Moreover, the algorithm's performance does not deteriorate severely as the dimensionality of the search space grows [9*]. PSO has been successfully applied in many areas: function optimization, artificial neural network training, fuzzy system control, and others [2*]. An extensive survey of PSO applications has been made by Poli [4] [5].

Problem Definition
Optimization is central to any problem involving decision making, whether in mathematics, engineering, or economics. The area of optimization has received enormous attention in recent years, primarily because of the rapid progress in computer technology, including the development and availability of user-friendly software and of high-speed and parallel processors [6*].

The task of global optimization is to find, under a set of constraints, a solution for which the objective (fitness) function attains its smallest value, the global minimum. Unfortunately, most traditional optimization techniques are centered around evaluating derivatives to locate the optima on a given constrained surface. Because of the difficulty of evaluating derivatives for many rough and discontinuous functions, several derivative-free optimization algorithms have emerged in recent times. Nowadays, the optimization problem is often represented as an intelligent search problem, in which one or more agents are employed to determine the optima on a search landscape [7*].

In 1995, Eberhart and Kennedy proposed an alternative solution to the complex non-linear optimization problem by emulating social behavior, and called their brainchild particle swarm optimization (PSO) [9*]. "Swarm Intelligence" by Kennedy and Eberhart [3] describes many philosophical and sociological aspects of PSO and swarm intelligence.

Neighborhood Topology
As mentioned above, while exploring the search space, particles tend to be influenced by the best success of anyone they are connected to, i.e. the member of their neighborhood that has had the most success so far. Individuals can be connected to one another according to a great number of schemes, but most particle swarm implementations use one of two simple sociometric principles. The first, called gbest, conceptually connects all members of the population to one another, so that each particle is influenced by the very best performance of any member of the entire population. The second, called lbest (g and l stand for "global" and "local"), creates a neighborhood for each individual comprising itself and its $$k$$ nearest neighbors in the population. For instance, if $$k = 2$$, then each individual $$i$$ will be influenced by the best performance among a group made up of particles $$i - 1$$, $$i$$, and $$i + 1$$. Different neighborhood topologies may result in different kinds of effects.
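As an illustration, the two topologies reduce to simple index computations. The following is a minimal sketch, assuming the particles are indexed on a ring; the function names are illustrative, not part of any standard API:

```python
def lbest_neighbors(i, n, k=2):
    """Indices of particle i's lbest neighborhood on a ring:
    the particle itself plus its k nearest neighbors (k/2 per side)."""
    half = k // 2
    return [(i + offset) % n for offset in range(-half, half + 1)]

def gbest_neighbors(i, n):
    """In the gbest topology, every particle sees the whole swarm."""
    return list(range(n))

# With k = 2 and a swarm of 10, particle 4 is influenced by 3, 4 and 5.
print(lbest_neighbors(4, 10, k=2))  # [3, 4, 5]
```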

Continuous Particle Swarm Optimization
Suppose the global optimum of a $$d$$-dimensional function is to be located. The function may be mathematically represented as:


 * $$~f(x_1, x_2, x_3, \ldots, x_d) = f(\vec{x})$$

where $$\vec{x}$$ is the search-variable vector, which represents the set of independent variables of the given function. The task is to find a $$\vec{x}$$ such that the function value $$f(\vec{x})$$ is either a minimum or a maximum over the search range. If the components of $$\vec{x}$$ assume real values, then the task is to locate a particular point in the $$d$$-dimensional space, which is a continuum of such points. [9*]
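For illustration only, such an objective function can be represented directly in code; the sphere function below is an arbitrary stand-in for $$f$$:

```python
import numpy as np

# The objective maps a search-variable vector x in R^d to one fitness
# value. The sphere function is used here purely as an example of f.
def sphere(x):
    return float(np.sum(x ** 2))

x = np.array([0.5, -1.2, 3.0])  # a point in a 3-dimensional search space
print(sphere(x))                # 10.69
```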

PSO is a multi-agent search technique. Particles are conceptual entities which fly through the multidimensional search space. At any particular instant, each particle has a position and a velocity. The position vector of a particle represents a potential solution of the search problem. Each particle $$i$$ has a current position $$\vec{x_i}(t)$$ and a current velocity $$\vec{v_i}(t)$$, which is added to the position in order to move the particle from one time step to the next.

The particle is also equipped with a small memory comprising its previous best position $$\vec{p_i}(t)$$ and the best position found so far in the neighborhood of the particle $$\vec{g_i}(t)$$.

At the beginning, a population of particles is initialized with random positions and random velocities. Initial settings for $$\vec{p_i}(t)$$ and $$\vec{g_i}(t)$$ are $$\vec{p_i}(0) = \vec{g_i}(0) = \vec{x_i}(0)$$ for all particles.
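A minimal initialization sketch under these conventions; the range used for the random velocities is an assumption, since the text does not prescribe one:

```python
import numpy as np

rng = np.random.default_rng(0)

def initialize_swarm(n, d, lo, hi):
    """n particles in a d-dimensional search box [lo, hi]^d."""
    x = rng.uniform(lo, hi, (n, d))            # random positions x_i(0)
    v = rng.uniform(lo - hi, hi - lo, (n, d))  # random velocities v_i(0)
    p = x.copy()                               # p_i(0) = x_i(0)
    g = x.copy()                               # g_i(0) = x_i(0)
    return x, v, p, g
```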

Once all the particles have been initialized, an iterative optimization process begins, in which the positions and velocities of all particles are updated by the following recursive equations, written for dimension $$d$$ of the position and velocity of particle $$i$$ (a code sketch of this update follows the parameter definitions below):


 * $$~ v_{id}(t + 1) = \chi \cdot[ w \cdot v_{id}(t) + c_1 \cdot \varphi _1 \cdot (p_{id}(t) - x_{id}(t)) + c_2 \cdot \varphi _2 \cdot (g_{id}(t) - x_{id}(t)) ]$$
 * $$~ x_{id}(t + 1) = x_{id}(t) + v_{id}(t + 1)$$

where
 * $$w$$ is the inertia weight, which controls the influence of the previous velocity vector on the new one;
 * $$c_1$$ and $$c_2$$ are constants called "self-confidence" and "swarm confidence" respectively ($$c_1$$ contributes to the self-exploration of a particle, while $$c_2$$ contributes to the motion of the particles in the global direction);
 * $$\varphi _1$$ and $$\varphi _2$$ are random numbers uniformly distributed within the range $$[0, 1]$$;
 * $$\chi$$ is a constant called the constriction factor, which is used to control and constrict the velocities.
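Expressed in code, the two update equations might look as follows; the default parameter values are illustrative assumptions, not values prescribed by the text:

```python
import numpy as np

def update_particle(x, v, p, g, w=0.7, c1=2.0, c2=2.0, chi=1.0, rng=None):
    """One application of the velocity and position update equations for
    a single particle; x, v, p, g are d-dimensional NumPy arrays."""
    if rng is None:
        rng = np.random.default_rng()
    phi1 = rng.uniform(0.0, 1.0, size=x.shape)  # fresh uniform randoms,
    phi2 = rng.uniform(0.0, 1.0, size=x.shape)  # drawn for every dimension
    v_new = chi * (w * v + c1 * phi1 * (p - x) + c2 * phi2 * (g - x))
    x_new = x + v_new
    return x_new, v_new
```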

After the velocities and positions have been calculated for the next time step $$t + 1$$, the first iteration of the algorithm is complete. Typically, this process is repeated for a certain number of time steps, or until some acceptable solution has been found. The algorithm can be summarized in the flowchart below.

[Flowchart of the PSO algorithm]


As an example, consider minimizing the Rastrigin function:

 * $$~ f(\vec{x}) = 10d + \sum^{d}_{i = 1}{(x^2_i - 10 \cos{(2 \pi x_i)})}$$
 * $$~ x_i \in [-5.12,\ 5.12]$$
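A minimal end-to-end sketch, assuming a gbest topology and illustrative settings (the swarm size, $$w$$, $$c_1$$, $$c_2$$, $$\chi$$, and the iteration count are assumptions chosen for this example, not values prescribed by the text), applied to the Rastrigin function above:

```python
import numpy as np

rng = np.random.default_rng(42)

def rastrigin(x):
    """The Rastrigin test function from the equations above."""
    d = x.shape[-1]
    return 10 * d + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x), axis=-1)

# Illustrative settings; none of these values are prescribed by the text.
n, d, lo, hi = 30, 2, -5.12, 5.12
w, c1, c2, chi = 0.7, 1.5, 1.5, 1.0

x = rng.uniform(lo, hi, (n, d))            # random initial positions
v = rng.uniform(lo - hi, hi - lo, (n, d))  # random initial velocities
p, p_val = x.copy(), rastrigin(x)          # personal bests and their values

for t in range(200):
    g = p[np.argmin(p_val)]                # gbest: best of the whole swarm
    phi1 = rng.uniform(size=(n, d))
    phi2 = rng.uniform(size=(n, d))
    v = chi * (w * v + c1 * phi1 * (p - x) + c2 * phi2 * (g - x))
    x = np.clip(x + v, lo, hi)             # keep particles in the search box
    val = rastrigin(x)
    better = val < p_val                   # update improved personal bests
    p[better], p_val[better] = x[better], val[better]

print("best value found:", p_val.min())   # 0 is the global minimum
```

With settings of this kind, the swarm typically converges close to the global minimum of the Rastrigin function at the origin.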