
The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is a multi-criteria decision analysis method, originally developed by Hwang and Yoon in 1981, with further developments by Yoon in 1987 and by Hwang, Lai and Liu in 1993. TOPSIS is based on the concept that the chosen alternative should have the shortest geometric distance from the positive ideal solution and the longest geometric distance from the negative ideal solution. It is a method of compensatory aggregation that compares a set of alternatives by identifying weights for each criterion, normalising scores for each criterion, and calculating the geometric distance between each alternative and the ideal alternative, which has the best score on each criterion. TOPSIS assumes that each criterion is monotonically increasing or decreasing. Normalisation is usually required because the criteria are often of incongruous dimensions in multi-criteria problems. Compensatory methods such as TOPSIS allow trade-offs between criteria, where a poor result in one criterion can be negated by a good result in another. This provides a more realistic form of modelling than non-compensatory methods, which include or exclude alternatives based on hard cut-offs.

TOPSIS method
The TOPSIS process is carried out as follows:


 * Step 1: Create an evaluation matrix consisting of $$m$$ alternatives and $$n$$ criteria, with the intersection of each alternative and criterion given as $$x_{ij}$$; we therefore have a matrix $$( x_{ij} )_{m \times n}$$.


 * Step 2: The matrix $$( x_{ij} )_{m \times n}$$ is then normalised to form the matrix

$$R = ( r_{ij} )_{m \times n}$$, using the normalisation method

$$ r_{ij} = x_{ij} / p_{\max}(v_j), \quad i = 1, 2, \ldots, m, \quad j = 1, 2, \ldots, n, $$ where $$ p_{\max}( v_j )$$ is the maximum possible value of the indicator $$v_j$$, $$j = 1, 2, \ldots, n$$.
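This linear normalisation can be sketched in a few lines of Python; the matrix and the per-criterion maxima below are illustrative assumptions, not values from the text:

```python
# Linear (max) normalisation as in Step 2: divide each score by the
# maximum possible value of its criterion.

def linear_normalise(x, pmax):
    """x is an m x n matrix of scores; pmax[j] is the maximum
    possible value of criterion j."""
    return [[x[i][j] / pmax[j] for j in range(len(pmax))]
            for i in range(len(x))]

# Hypothetical 3-alternative, 2-criterion example.
x = [[250, 16], [200, 16], [300, 32]]
pmax = [300, 32]
r = linear_normalise(x, pmax)  # every entry now lies in [0, 1]
```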

 * Step 3: Calculate the weighted normalised decision matrix

$$ T = (t_{ij})_{m \times n} = ( w_j r_{ij} )_{m \times n}, \quad i = 1, 2, \ldots, m, $$


 * where $$w_j = W_j / \sum_{j=1}^{n} W_j$$, $$j = 1, 2, \ldots, n$$, so that $$ \sum_{j=1}^{n} w_j = 1$$, and $$W_j$$ is the original weight given to the indicator $$v_j$$, $$j = 1, 2, \ldots, n$$.
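A minimal sketch of this weighting step, assuming an already-normalised matrix and raw weights chosen purely for illustration:

```python
# Step 3: normalise raw criterion weights W_j so they sum to 1,
# then apply them column-wise to a normalised matrix r.

def weighted_matrix(r, W):
    total = sum(W)
    w = [Wj / total for Wj in W]      # w_j = W_j / sum of all W_j
    return [[w[j] * row[j] for j in range(len(w))] for row in r]

# Hypothetical normalised scores and raw weights.
r = [[0.8, 0.5], [0.6, 1.0]]
W = [3, 1]                            # raw weights, total = 4
t = weighted_matrix(r, W)             # weighted normalised matrix
```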


 * Step 4: Determine the worst alternative $$(A_w)$$ and the best alternative $$(A_b)$$:

$$ A_w = \{ \langle \max(t_{ij} \mid i = 1, 2, \ldots, m) \mid j \in J_- \rangle, \langle \min(t_{ij} \mid i = 1, 2, \ldots, m) \mid j \in J_+ \rangle \} \equiv \{ t_{wj} \mid j = 1, 2, \ldots, n \}, $$

$$ A_b = \{ \langle \min(t_{ij} \mid i = 1, 2, \ldots, m) \mid j \in J_- \rangle, \langle \max(t_{ij} \mid i = 1, 2, \ldots, m) \mid j \in J_+ \rangle \} \equiv \{ t_{bj} \mid j = 1, 2, \ldots, n \}, $$


 * where,


 * $$ J_+ = \{\, j = 1, 2, \ldots, n \mid j \text{ is associated with a criterion having a positive impact} \,\} $$, and


 * $$ J_- = \{\, j = 1, 2, \ldots, n \mid j \text{ is associated with a criterion having a negative impact} \,\} $$.
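Step 4 amounts to taking a column-wise minimum or maximum depending on the criterion's direction. A sketch, where the weighted matrix and the choice of benefit criteria are hypothetical:

```python
# Step 4: determine the worst (A_w) and best (A_b) alternatives.
# J_plus holds column indices of benefit (positive-impact) criteria;
# every other column is treated as a cost (negative-impact) criterion.

def worst_best(t, J_plus):
    n = len(t[0])
    A_w, A_b = [], []
    for j in range(n):
        col = [row[j] for row in t]
        if j in J_plus:               # benefit: worst = min, best = max
            A_w.append(min(col))
            A_b.append(max(col))
        else:                         # cost: worst = max, best = min
            A_w.append(max(col))
            A_b.append(min(col))
    return A_w, A_b

# Hypothetical weighted matrix: criterion 0 is benefit, 1 is cost.
t = [[0.6, 0.2], [0.4, 0.1]]
A_w, A_b = worst_best(t, J_plus={0})
```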


 * Step 5: Calculate the L2-distance between the target alternative $$i$$ and the worst condition $$A_w$$


 * $$ d_{iw} = \sqrt{\sum_{j=1}^{n}(t_{ij} - t_{wj})^2}, \quad i = 1, 2, \ldots, m, $$

and the distance between the alternative $$i$$ and the best condition $$A_b$$


 * $$ d_{ib} = \sqrt{\sum_{j=1}^{n}(t_{ij} - t_{bj})^2}, \quad i = 1, 2, \ldots, m $$


 * where $$d_{iw}$$ and $$d_{ib}$$ are L2-norm distances from the target alternative $$i$$ to the worst and best conditions, respectively.
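These L2 distances are straightforward to compute; the matrix and ideal points below are illustrative assumptions:

```python
import math

# Step 5: L2 (Euclidean) distance of each alternative to a
# reference point A (either A_w or A_b).

def l2_distances(t, A):
    return [math.sqrt(sum((tij - aj) ** 2 for tij, aj in zip(row, A)))
            for row in t]

# Hypothetical weighted matrix and worst/best points.
t = [[0.6, 0.1], [0.4, 0.2]]
A_w, A_b = [0.4, 0.2], [0.6, 0.1]
d_w = l2_distances(t, A_w)   # distance of each alternative to A_w
d_b = l2_distances(t, A_b)   # distance of each alternative to A_b
```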

 * Step 6: Calculate the similarity to the worst condition:

$$ s_{iw} = d_{iw} / (d_{iw} + d_{ib}), \quad 0 \le s_{iw} \le 1, \quad i = 1, 2, \ldots, m, $$

where $$s_{iw} = 1$$ if and only if the alternative solution has the best condition, and

$$s_{iw} = 0$$ if and only if the alternative solution has the worst condition.


 * Step 7: Rank the alternatives according to $$s_{iw}$$, $$i = 1, 2, \ldots, m$$.
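The whole procedure can be sketched end to end. The scores, weights, and criterion directions below are illustrative assumptions, and the ranking uses the conventional closeness coefficient $$d_{iw}/(d_{iw}+d_{ib})$$, under which the best alternative scores highest:

```python
import math

# End-to-end TOPSIS sketch (Steps 1-7) on a hypothetical problem.
# x: m x n score matrix, W: raw weights, J_plus: benefit-criterion columns.

def topsis(x, W, J_plus):
    m, n = len(x), len(x[0])
    # Step 2: linear normalisation by the column maximum.
    pmax = [max(row[j] for row in x) for j in range(n)]
    r = [[row[j] / pmax[j] for j in range(n)] for row in x]
    # Step 3: weighted normalised matrix.
    total = sum(W)
    t = [[(W[j] / total) * r[i][j] for j in range(n)] for i in range(m)]
    # Step 4: worst and best alternatives, column-wise.
    A_w = [min(row[j] for row in t) if j in J_plus
           else max(row[j] for row in t) for j in range(n)]
    A_b = [max(row[j] for row in t) if j in J_plus
           else min(row[j] for row in t) for j in range(n)]
    # Steps 5-6: L2 distances and similarity to the worst condition.
    s = []
    for row in t:
        d_w = math.sqrt(sum((v - a) ** 2 for v, a in zip(row, A_w)))
        d_b = math.sqrt(sum((v - a) ** 2 for v, a in zip(row, A_b)))
        s.append(d_w / (d_w + d_b))
    # Step 7: rank alternatives, best (largest s) first.
    return sorted(range(m), key=lambda i: s[i], reverse=True), s

# Hypothetical scores: two benefit criteria, three alternatives.
x = [[250, 16], [200, 16], [300, 32]]
ranking, s = topsis(x, W=[1, 1], J_plus={0, 1})
```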

Normalisation
Two methods of normalisation that have been used to deal with incongruous criteria dimensions are linear normalisation and vector normalisation.

Linear normalisation can be calculated as in Step 2 of the TOPSIS process above. Vector normalisation was used in the original development of the TOPSIS method, and is calculated using the following formula:

$$ r_{ij} = \frac {x_{ij}} {\sqrt{\sum_{k=1}^{m} x_{kj}^2 }}, \quad i = 1, 2, \ldots, m, \quad j = 1, 2, \ldots, n, $$

When vector normalisation is used, the non-linear distances between single-dimension scores and ratios should produce smoother trade-offs.
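A sketch of vector normalisation, dividing each column by its Euclidean norm taken over the alternatives; the input matrix is a hypothetical example:

```python
import math

# Vector normalisation: each column of x is divided by the Euclidean
# norm of that column over the m alternatives, so every normalised
# column has unit L2 norm.

def vector_normalise(x):
    n = len(x[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in x)) for j in range(n)]
    return [[row[j] / norms[j] for j in range(n)] for row in x]

x = [[3, 1], [4, 1]]          # hypothetical scores
r = vector_normalise(x)       # column norms are 5 and sqrt(2)
```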