Multicriteria classification

In multiple criteria decision aiding (MCDA), multicriteria classification (or sorting) involves problems where a finite set of alternative actions should be assigned into a predefined set of preferentially ordered categories (classes). For example, credit analysts classify loan applications into risk categories (e.g., acceptable/unacceptable applicants), customers rate products and classify them into attractiveness groups, candidates for a job position are evaluated and their applications are approved or rejected, technical systems are prioritized for inspection on the basis of their failure risk, clinicians classify patients according to the extent to which they have a complex disease, etc.

Problem statement
In a multicriteria classification problem (MCP) a set


 * $$ X=\{\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_m\}$$

of m alternative actions is available. Each alternative is evaluated over a set of n criteria. The goal of the analysis is to assign each alternative to one of a given set of categories (classes) C = {c1, c2, ..., ck}. It is therefore a kind of classification problem.

The categories are defined in an ordinal way. Assuming (without loss of generality) an ascending order, this means that category c1 consists of the worst alternatives whereas ck includes the best (most preferred) ones. The alternatives in each category cannot be assumed to be equivalent in terms of their overall evaluation (the categories are not equivalence classes).

Furthermore, the categories are defined independently of the set of alternatives under consideration. In that regard, MCPs are based on an absolute evaluation scheme. For instance, a predefined specific set of categories is often used to classify industrial accidents (e.g., major, minor, etc.). These categories are not related to a specific event under consideration. Of course, in many cases the definition of the categories is adjusted over time to take into consideration the changes in the decision environment.

Relationship to pattern recognition
In comparison to statistical classification and pattern recognition in a machine learning sense, two main distinguishing features of MCPs can be identified:


 * 1) In MCPs the categories are defined in an ordinal way. This ordinal definition of the categories implicitly defines a preference structure. In contrast, machine learning usually deals with nominal classification problems, where classes of observations are defined in a nominal way (i.e., collections of cases described by some common patterns), without any preferential implications.
 * 2) In MCPs, the alternatives are evaluated over a set of criteria. A criterion is an attribute that incorporates preferential information. Thus, the decision model should have some form of monotonic relationship with respect to the criteria. This kind of information is explicitly introduced (a priori) in multicriteria methods for MCPs.
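As an illustration of this monotonicity requirement, the following sketch (with invented data and hypothetical function names) flags pairs of alternatives where one dominates another on every criterion yet has been assigned to a strictly worse class, which a monotonic decision model should never produce:

```python
def dominates(a, b):
    """True if a scores at least as well as b on every criterion
    (all criteria assumed to be maximized)."""
    return all(ai >= bi for ai, bi in zip(a, b))

def monotonicity_violations(alternatives, labels):
    """Pairs (i, j) where alternative i dominates j but received a
    strictly worse class (higher label = better class)."""
    violations = []
    for i, a in enumerate(alternatives):
        for j, b in enumerate(alternatives):
            if i != j and dominates(a, b) and labels[i] < labels[j]:
                violations.append((i, j))
    return violations

# made-up evaluations on two criteria and class assignments
alts = [(7, 8), (5, 6), (6, 9)]
labels = [1, 2, 2]
print(monotonicity_violations(alts, labels))  # alternative 0 dominates 1 yet got a worse class
```

Checks of this kind are used, for instance, to detect inconsistencies in a decision-maker's example classifications before a model is fitted.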

Methods
The most popular modeling approaches for MCPs are based on value function models, outranking relations, and decision rules:


 * In a value function model, the classification rules can be expressed as follows: Alternative i is assigned to group cr if and only if


 * $$ t_r < V(\mathbf{x}_i) \le t_{r-1} $$ where V is a value function (non-decreasing with respect to the criteria) and t1 > t2 > ... > tk−1 are thresholds defining the category limits (by convention, t0 = +∞ and tk = −∞).


 * An important example of this approach is the use of the potentially all pairwise rankings of all possible alternatives (PAPRIKA) method to create models for classifying patients according to the extent to which they have a disease or not – e.g. Sjögren syndrome, gout, systemic sclerosis, etc.


 * Examples of outranking techniques include the ELECTRE TRI method and its variants, models based on the PROMETHEE method such as the FlowSort method, and the Proaftn method. Outranking models are expressed in a relational form. In a typical setting used in ELECTRE TRI, the assignment of the alternatives is based on pairwise comparisons of the alternatives to predefined category boundaries.
 * Rule-based models are expressed in the form of "If ... then ..." decision rules. The condition part involves a conjunction of elementary conditions on the set of criteria, whereas the conclusion of each rule provides a recommendation for the assignment of the alternatives that satisfy the conditions of the rule. The dominance-based rough set approach is an example of this type of model.
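The threshold-based classification rule of the value function approach can be sketched as follows, assuming a simple weighted-average value function with made-up weights and thresholds (all numbers invented for illustration):

```python
def value(x, weights):
    """Weighted-average value V(x) = w1*x1 + ... + wn*xn."""
    return sum(w * xi for w, xi in zip(weights, x))

def assign(v, thresholds):
    """Assign a value v to a class using descending thresholds
    t1 > t2 > ... > t(k-1): class c_r when t_r < v <= t_(r-1).
    Returns r in 1..k, where 1 denotes the top class here."""
    for r, t in enumerate(thresholds, start=1):
        if v > t:
            return r
    return len(thresholds) + 1  # below every threshold: worst class c_k

weights = [0.5, 0.3, 0.2]   # assumed trade-off constants, summing to 1
thresholds = [0.7, 0.4]     # t1 > t2, defining k = 3 classes
x = (0.9, 0.6, 0.5)
v = value(x, weights)       # 0.45 + 0.18 + 0.10 = 0.73
print(assign(v, thresholds))  # → 1, since 0.73 exceeds t1 = 0.7
```

Outranking and rule-based models replace the single value score with pairwise comparisons to boundary profiles or with "If ... then ..." rules, respectively, but the sorting output has the same form.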

Model development
The development of MCP models can be carried out through either direct or indirect approaches. Direct techniques involve the specification of all parameters of the decision model (e.g., the weights of the criteria) through an interactive procedure, where the decision analyst elicits the required information from the decision-maker. This can be a time-consuming process, but it is particularly useful in strategic decision making.

Indirect procedures are referred to as preference disaggregation analysis. The preference disaggregation approach analyzes the decision-maker's global judgments in order to specify the parameters of the criteria aggregation model that best fits the decision-maker's evaluations. In the case of MCP, the decision-maker's global judgments are expressed by classifying a set of reference alternatives (training examples). The reference set may include: (a) some decision alternatives evaluated in similar problems in the past, (b) a subset of the alternatives under consideration, (c) some fictitious alternatives, consisting of performances on the criteria that can be easily judged by the decision-maker to express his/her global evaluation. Disaggregation techniques provide an estimate β* for the parameters of a decision model $$f$$ based on the solution of an optimization problem of the following general form:

$$ \beta^* = \underset{\beta\in B}{\operatorname{argmin}} \, L[D(X), D'(X, f_\beta)] $$ where X is the set of reference alternatives, D(X) is the classification of the reference alternatives by the decision-maker, D'(X,fβ) are the recommendations of the model for the reference alternatives, L is a function that measures the differences between the decision-maker's evaluations and the model's outputs, and B is the set of feasible values for the model's parameters.
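A toy illustration of this general scheme (all data, the parameter grid, and the simple model invented for the example): β ranges over a coarse grid B, f_β is a two-criteria weighted average with a single cutoff, and L counts the reference alternatives that the model misclassifies:

```python
# reference alternatives (two criteria) and the decision-maker's classes D(X)
X = [(0.9, 0.8), (0.8, 0.9), (0.3, 0.4), (0.2, 0.1)]
D = [2, 2, 1, 1]                      # 2 = preferred class, 1 = worse class

def model_class(x, w1, cutoff):
    """f_beta: weighted-average value with one cutoff threshold."""
    v = w1 * x[0] + (1 - w1) * x[1]
    return 2 if v >= cutoff else 1

def loss(w1, cutoff):
    """L: number of reference alternatives the model misclassifies."""
    return sum(model_class(x, w1, cutoff) != d for x, d in zip(X, D))

# B: a coarse grid of feasible (w1, cutoff) pairs
grid = [(w1 / 10, c / 10) for w1 in range(11) for c in range(1, 10)]
best = min(grid, key=lambda b: loss(*b))  # beta* = argmin over B of L
print(best, loss(*best))                  # a zero-loss parameter pair exists here
```

In practice the 0-1 loss and the grid search are replaced by continuous error measures and mathematical programming, as in the linear program below.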

For example, the following linear program can be formulated in the context of a weighted average model V(xi) = w1xi1 + ... + wnxin with wj being the (non-negative) trade-off constant for criterion j (w1 + ... + wn = 1) and xij being the data for alternative i on criterion j:
 * $$ \begin{align}
& \text{minimize} && \sum_i (s_i^+ + s_i^-) \\
& \text{subject to:} && w_1x_{i1}+\cdots+w_nx_{in}-t_r+s_i^+\ge\delta & \quad\text{for all reference alternatives in class } c_r\ (r=1,\ldots,k-1)\\
& && w_1x_{i1}+\cdots+w_nx_{in}-t_{r-1}-s_i^-\le-\delta & \quad\text{for all reference alternatives in class } c_r\ (r=2,\ldots,k)\\
& && w_1+\cdots+w_n=1\\
& && w_j,s_i^+,s_i^-,t_r\ge 0
\end{align} $$

where δ is a small positive constant ensuring the strict separation of the categories and s_i^+, s_i^- are non-negative slack variables measuring the classification errors. This linear programming formulation can be generalized in the context of additive value functions. Similar optimization problems (linear and nonlinear) can be formulated for outranking models, whereas decision rule models are built through rule induction algorithms.
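A small numeric instance of this linear program (invented reference data, n = 2 criteria, k = 2 classes, δ = 0.05) can be solved with SciPy's `linprog`; the decision variables are [w1, w2, t1, s1+, s2+, s3-, s4-] and the objective is the sum of the slacks:

```python
from scipy.optimize import linprog

delta = 0.05
top = [(0.9, 0.8), (0.7, 0.9)]     # reference alternatives the DM put in the top class
bottom = [(0.3, 0.4), (0.5, 0.2)]  # reference alternatives the DM put in the bottom class

c = [0, 0, 0, 1, 1, 1, 1]          # minimize the sum of s_i^+ and s_i^-

A_ub, b_ub = [], []
for i, (x1, x2) in enumerate(top):       # -(w.x_i) + t_1 - s_i^+ <= -delta
    row = [-x1, -x2, 1, 0, 0, 0, 0]
    row[3 + i] = -1
    A_ub.append(row)
    b_ub.append(-delta)
for i, (x1, x2) in enumerate(bottom):    # (w.x_i) - t_1 - s_i^- <= -delta
    row = [x1, x2, -1, 0, 0, 0, 0]
    row[5 + i] = -1
    A_ub.append(row)
    b_ub.append(-delta)

A_eq, b_eq = [[1, 1, 0, 0, 0, 0, 0]], [1]   # w_1 + w_2 = 1

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 7, method="highs")
print(res.fun, res.x[:3])   # objective 0: this reference set is perfectly separable
```

Because the two groups are linearly separable (e.g., equal weights with t1 anywhere in [0.40, 0.75] satisfy every constraint with zero slack), the optimal objective value is 0; with inconsistent reference assignments the slacks would absorb the errors instead.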