Makridakis Competitions

The Makridakis Competitions (also known as the M Competitions or M-Competitions) are a series of competitions organized by teams led by forecasting researcher Spyros Makridakis and intended to evaluate and compare the accuracy of different forecasting methods.

First competition in 1982
The first Makridakis Competition, held in 1982, and known in the forecasting literature as the M-Competition, used 1001 time series and 15 forecasting methods (with another nine variations of those methods included). According to a later paper by the authors, the following were the main conclusions of the M-Competition:

 1. Statistically sophisticated or complex methods do not necessarily produce more accurate forecasts than simpler ones.
 2. The relative ranking of the various methods varies according to the accuracy measure being used.
 3. Combining the forecasts of several methods outperforms, on average, the individual methods being combined and compares well with the other methods.
 4. The accuracy of the various methods depends on the length of the forecasting horizon involved.
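Conclusions 2 and 3 can be illustrated with a short sketch. The simple benchmark methods below (naive, historical mean, drift) are common textbook forecasters of the kind entered in the competitions, not the competitions' actual code, and the toy series and equal-weight combination are illustrative assumptions; MAPE is one of several accuracy measures the studies used.

```python
def naive_forecast(series, horizon):
    """Repeat the last observed value for every future period."""
    return [series[-1]] * horizon

def mean_forecast(series, horizon):
    """Repeat the historical mean for every future period."""
    m = sum(series) / len(series)
    return [m] * horizon

def drift_forecast(series, horizon):
    """Extrapolate the average per-period change of the series."""
    slope = (series[-1] - series[0]) / (len(series) - 1)
    return [series[-1] + slope * h for h in range(1, horizon + 1)]

def combined_forecast(series, horizon):
    """Equal-weight average of the three individual forecasts."""
    forecasts = [f(series, horizon)
                 for f in (naive_forecast, mean_forecast, drift_forecast)]
    return [sum(vals) / len(vals) for vals in zip(*forecasts)]

def mape(actual, predicted):
    """Mean absolute percentage error, one common accuracy measure."""
    return 100 * sum(abs((a - p) / a)
                     for a, p in zip(actual, predicted)) / len(actual)

# Toy usage: hold out the last observation and compare errors.
history, holdout = [10, 12, 13, 15, 16, 18], [20.0]
for name, f in [("naive", naive_forecast), ("mean", mean_forecast),
                ("drift", drift_forecast), ("combined", combined_forecast)]:
    print(name, round(mape(holdout, f(history, 1)), 1))
```

Rerunning the comparison with a different accuracy measure (e.g. mean absolute error instead of MAPE) can reorder the methods, which is the point of conclusion 2.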

The findings of the study have been verified and replicated through the use of new methods by other researchers.

Newbold (1983) was critical of the M-competition, and argued against the general idea of using a single competition to attempt to settle such a complex issue.

Second competition, published in 1993
The second competition, called the M-2 Competition or M2-Competition, was conducted on a grander scale. A call to participate was published in the International Journal of Forecasting, announcements were made at the International Symposium of Forecasting, and written invitations were sent to all known experts on the various time series methods. The M2-Competition was organized in collaboration with four companies, included six macroeconomic series, and was conducted on a real-time basis. The data were from the United States. The results of the competition were published in a 1993 paper and were claimed to be statistically identical to those of the M-Competition.

Fildes and Makridakis (1995) argue that despite the evidence produced by these competitions, the implications continued to be ignored by theoretical statisticians.

Third competition, published in 2000
The third competition, called the M-3 Competition or M3-Competition, was intended both to replicate and to extend the features of the M-Competition and M2-Competition, through the inclusion of more methods and researchers (particularly researchers in the area of neural networks) and more time series. A total of 3003 time series were used. The paper documenting the results of the competition was published in the International Journal of Forecasting in 2000, and the raw data were also made available on the International Institute of Forecasters website. According to the authors, the conclusions from the M3-Competition were similar to those from the earlier competitions.

A number of other papers have been published with different analyses of the data set from the M3-Competition.