
In statistics, the term linear model is used in different ways according to the context. The most common occurrence is in connection with regression models, and the term is often taken as synonymous with linear regression model. However, the term is also used in time series analysis with a different meaning. Although linear models were developed and became most popular during the 1950s, they remain closely related to many aspects of modern statistical learning. In each case, the designation "linear" is used to identify a subclass of models for which substantial reduction in the complexity of the related statistical theory is possible.

Linear regression models
For the regression case, the statistical model is as follows. Given a (random) sample $$ (Y_i, X_{i1}, \ldots, X_{ip}), \, i = 1, \ldots, n $$ the relation between the observations Yi and the independent variables Xij is formulated as


 * $$Y_i = \beta_0 + \beta_1 \phi_1(X_{i1}) + \cdots + \beta_p \phi_p(X_{ip}) + \varepsilon_i \qquad i = 1, \ldots, n $$

where $$ \phi_1, \ldots, \phi_p $$ may be nonlinear functions. In the above, the quantities εi are random variables representing errors in the relationship. The "linear" part of the designation refers to the regression coefficients βj entering the above relationship linearly. Alternatively, one may say that the predicted values corresponding to the above model, namely
 * $$\hat{Y}_i = \beta_0 + \beta_1 \phi_1(X_{i1}) + \cdots + \beta_p \phi_p(X_{ip}) \qquad (i = 1, \ldots, n), $$

are linear functions of the βj.

Given that estimation is undertaken on the basis of a least squares analysis, estimates of the unknown parameters βj are determined by minimising a sum of squares function
 * $$S = \sum_{i = 1}^n \left(Y_i - \beta_0 - \beta_1 \phi_1(X_{i1}) - \cdots - \beta_p \phi_p(X_{ip})\right)^2 .$$

From this, it can readily be seen that the "linear" aspect of the model means the following:
 * the function to be minimised is a quadratic function of the βj for which minimisation is a relatively simple problem;
 * the derivatives of the function are linear functions of the βj making it easy to find the minimising values;
 * the minimising values βj are linear functions of the observations Yi;
 * the minimising values βj are linear functions of the random errors εi which makes it relatively easy to determine the statistical properties of the estimated values of βj.
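The closed-form character of this minimisation can be seen concretely in the simplest case. The following is a minimal sketch in pure Python, assuming a single predictor with φ(x) = x; the function name and the toy data are illustrative, not part of the article:

```python
def fit_least_squares(x, y):
    """Minimise S = sum((y_i - b0 - b1*x_i)^2) in closed form.

    Setting the derivatives of S with respect to b0 and b1 to zero
    (they are linear in the coefficients, as noted above) yields the
    familiar normal-equation solution.
    """
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    # b1 = sum((x-mx)(y-my)) / sum((x-mx)^2);  b0 = my - b1*mx
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b1 = sxy / sxx
    b0 = my - b1 * mx
    return b0, b1

# Exactly linear toy data y = 1 + 2x: the minimiser recovers b0=1, b1=2.
b0, b1 = fit_least_squares([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

Because S is quadratic in the βj, no iterative optimisation is needed; the same structure extends to several coefficients via the normal equations in matrix form.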

Time series models
An example of a linear time series model is an autoregressive moving average model. Here the model for values {Xt} in a time series can be written in the form


 * $$ X_t = c + \varepsilon_t + \sum_{i=1}^p \phi_i X_{t-i} + \sum_{i=1}^q \theta_i \varepsilon_{t-i}.\,$$

where again the quantities εt are random variables representing innovations, which are new random effects that appear at a certain time but also affect values of X at later times. In this instance, the use of the term "linear model" refers to the structure of the above relationship in representing Xt as a linear function of past values of the same time series and of current and past values of the innovations. This particular aspect of the structure means that it is relatively simple to derive relations for the mean and covariance properties of the time series. Note that here the "linear" part of the term "linear model" does not refer to the coefficients φi and θi, as it would in the case of a regression model, which looks structurally similar.
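The recursion above can be sketched directly. The following pure-Python function is illustrative (names and data are not from the article); with a fixed innovation sequence the recursion is deterministic, which makes the linearity in the innovations easy to check:

```python
def arma(c, phi, theta, eps):
    """Generate X_t = c + eps_t + sum_i phi_i X_{t-i} + sum_i theta_i eps_{t-i}.

    phi and theta are the AR and MA coefficient lists; eps is the
    innovation sequence.  Pre-sample values of X and eps are taken as zero.
    """
    x = []
    for t in range(len(eps)):
        ar = sum(p * x[t - 1 - i] for i, p in enumerate(phi) if t - 1 - i >= 0)
        ma = sum(q * eps[t - 1 - i] for i, q in enumerate(theta) if t - 1 - i >= 0)
        x.append(c + eps[t] + ar + ma)
    return x

# AR(1) with phi = 0.5 and a single unit innovation at t = 0:
# the effect of eps_0 decays geometrically through later X values.
series = arma(c=0.0, phi=[0.5], theta=[], eps=[1.0, 0.0, 0.0, 0.0])
```

With c = 0, doubling the innovation sequence doubles every Xt, exactly the linear-in-the-innovations structure the text describes.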

Linear regression and linear classification
A linear regression model embodies the hypothesis that the regression output has a linear relation with its input. This hypothesis is simple enough that, with a cost function such as the root mean square deviation (RMSD), the solution can be obtained analytically. Linear regression models were most prevalent in the precomputer age of statistical analysis; however, there are still good reasons to apply them today. First, they are easy to implement even for people who do not have a strong background in statistics and programming, providing an initial glimpse of the dataset. The performance of a linear model can also serve as a baseline for comparison with a more complicated model. Second, although very simple, linear models are not guaranteed to perform worse than fancier models such as neural networks, especially when only a small amount of training data is available or when the training data are very noisy. This situation often holds in experimental research, such as chemistry or biology experiments, in which the data density can be low and the noise in the data can be large. Finally, linear models can be generalized to what are called basis-function methods, which expands their scope considerably.
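The basis-function generalisation mentioned above amounts to transforming the input with a nonlinear φ and then fitting an ordinary linear model in the transformed variable. A minimal pure-Python sketch, with the hypothetical choice φ(x) = x² and illustrative data:

```python
def fit_linear(z, y):
    """Closed-form least squares for y = b0 + b1*z (one predictor)."""
    n = len(z)
    mz, my = sum(z) / n, sum(y) / n
    b1 = (sum((zi - mz) * (yi - my) for zi, yi in zip(z, y))
          / sum((zi - mz) ** 2 for zi in z))
    return my - b1 * mz, b1

# The nonlinear relation y = 2 + 3*x**2 becomes linear in the basis
# function phi(x) = x**2, so the linear fit recovers it exactly.
x = [1.0, 2.0, 3.0]
y = [5.0, 14.0, 29.0]
b0, b1 = fit_linear([xi ** 2 for xi in x], y)
```

The model is still linear in the coefficients, so the analytic least-squares machinery applies unchanged even though the fitted curve is nonlinear in x.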

Linear methods are also related to other statistical methods, such as principal component analysis and neural networks. It is often argued that understanding the behavior of linear models is essential to understanding more complicated nonlinear ones. Besides being used in regression tasks, linear models can also be applied in classification tasks with only minor changes. Usually, a sigmoid function is applied to the output of the linear function, turning the output into the probability of a certain class. The cost functions used in classification tasks, such as cross entropy, also differ from those used in regression tasks. Despite these differences, the spirit of linear models in regression and classification tasks is the same: the hypothesis of a linear relationship between the inputs and the target.
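The classification variant just described (a linear function inside a sigmoid, trained under cross entropy) can be sketched as follows. This is an illustrative pure-Python implementation, not a reference one; the toy data and learning-rate choices are assumptions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(x, y, lr=0.5, steps=2000):
    """Fit p(y=1|x) = sigmoid(w*x + b) by gradient descent on the mean
    cross-entropy loss; the hypothesis inside the sigmoid is linear."""
    w, b = 0.0, 0.0
    n = len(x)
    for _ in range(steps):
        # Gradient of mean cross-entropy with respect to w and b.
        gw = sum((sigmoid(w * xi + b) - yi) * xi for xi, yi in zip(x, y)) / n
        gb = sum((sigmoid(w * xi + b) - yi) for xi, yi in zip(x, y)) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# Toy 1-D data: class 1 for positive x, class 0 for negative x.
xs, ys = [-2.0, -1.0, 1.0, 2.0], [0.0, 0.0, 1.0, 1.0]
w, b = fit_logistic(xs, ys)
labels = [1 if sigmoid(w * xi + b) > 0.5 else 0 for xi in xs]
```

Unlike the regression case, cross entropy has no closed-form minimiser, so an iterative method is used; the decision boundary, however, is still a linear function of the input.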

Relation to principal component analysis
Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of linearly correlated variables into a set of linearly uncorrelated variables called principal components. This transformation is implemented such that
 * Any pair of principal components is orthogonal.
 * The principal components are sorted in decreasing order of their variances.
 * Each principal component is a linear combination of all input variables.

In this sense, PCA is connected to linear models. When simply treated as a set of new input variables for a linear model, the principal components cannot make the model perform better, because the transformation is itself linear. However, when principal components are used as inputs to nonlinear models, such as kernel methods and neural networks, performance can possibly be improved. Thus, PCA is commonly used for feature selection and dimension reduction, although features selected in this way may be difficult to interpret.
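The three properties listed above can be verified directly in the two-variable case, where the 2×2 covariance matrix can be eigendecomposed in closed form. A minimal pure-Python sketch with illustrative data (the function name and data are assumptions for this example):

```python
import math

def pca_2d(data):
    """PCA for two variables: centre the data, form the 2x2 covariance
    matrix [[a, b], [b, d]], and eigendecompose it in closed form.
    Returns the eigenvalues (variances, descending) and the
    corresponding unit eigenvectors (the principal directions)."""
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    centred = [(p[0] - mx, p[1] - my) for p in data]
    a = sum(x * x for x, _ in centred) / n   # var(x)
    d = sum(y * y for _, y in centred) / n   # var(y)
    b = sum(x * y for x, y in centred) / n   # cov(x, y)
    # Eigenvalues via the quadratic formula for the characteristic polynomial.
    disc = math.sqrt((a - d) ** 2 + 4 * b * b)
    l1, l2 = (a + d + disc) / 2, (a + d - disc) / 2

    def unit_eigvec(l):
        # (a - l)*vx + b*vy = 0  gives the direction (b, l - a).
        vx, vy = (b, l - a) if abs(b) > 1e-12 else (1.0, 0.0)
        norm = math.hypot(vx, vy)
        return (vx / norm, vy / norm)

    return (l1, l2), (unit_eigvec(l1), unit_eigvec(l2))

# Correlated toy data roughly along the line y = x.
(l1, l2), (v1, v2) = pca_2d([(0, 0), (1, 1), (2, 2), (3, 3.5)])
```

Each principal direction is a linear combination of the original variables, the two directions are orthogonal, and the variances come out sorted, matching the list above.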

Relation to neural networks
The neural network itself is not a single algorithm, but rather a framework of machine learning algorithms. Several of the most popular machine learning models today, such as the artificial neural network, the convolutional neural network, and the recurrent neural network, belong to this category. They usually consist of a series of non-linear transformations of the input data, which in principle can approximate any function. Although the architectures of neural networks can be wildly different, they share one thing in common: the last layer of the neural network is a linear model, such as linear regression. Therefore, one way of viewing neural networks is that each network can be split into two parts: the non-linear transformations before the last layer, and the last layer of linear regression. The first part works as a process of feature engineering, transforming the input data so that at the very last layer the label is almost linearly dependent on the engineered features. With this idea, various feature-extraction models have been developed. One representative is the autoencoder model, which is frequently used for dimensionality reduction.
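The two-part view described above can be made concrete with a tiny network: a nonlinear hidden layer acting as the feature map, followed by a purely linear output layer. The weights below are fixed illustrative values, not trained parameters:

```python
import math

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """One tanh hidden layer (the nonlinear feature map h(x)),
    followed by linear regression on the engineered features h."""
    h = [math.tanh(w * x + b) for w, b in zip(w_hidden, b_hidden)]
    y = sum(wo * hi for wo, hi in zip(w_out, h)) + b_out
    return y, h

# Illustrative fixed weights; the point is the structure, not the fit.
w_hidden, b_hidden = [1.0, -0.5], [0.0, 0.25]
w_out, b_out = [2.0, 1.0], 0.5
y, h = forward(1.5, w_hidden, b_hidden, w_out, b_out)

# The output is a linear function of the hidden features: doubling the
# last-layer weights exactly doubles the feature contribution.
y2, _ = forward(1.5, w_hidden, b_hidden, [2 * w for w in w_out], b_out)
```

Everything before the last layer is the learned feature engineering; the last step is ordinary linear regression on h, which is why the linear-model theory discussed earlier still applies there.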

Other uses in statistics
There are some other instances where "nonlinear model" is used to contrast with a linearly structured model, although the term "linear model" is not usually applied. One example of this is nonlinear dimensionality reduction.