An Approach to Multidimensional Interpolation, Approximation and Machine Learning on the Basis of the Theory of Random Functions.
Bahvalov J.N., Malinin A.P.

This paper investigates the problem of multivariate interpolation and approximation. It is shown that these problems can be solved on the basis of the theory of random functions. The paper proposes a machine-learning method that ensures optimal results in terms of the mathematical apparatus considered. The method is simple to implement and reduces a variety of machine-learning tasks to the solution of systems of linear equations.

Introduction.
This article is divided into two parts.
 * The first part is devoted to the foundations of the theory of random functions as they relate to the problems of multivariate interpolation and approximation, as well as to machine learning and its theoretical basis. The purpose of the theoretical part is to show that machine learning in the "supervised learning" paradigm, being a multidimensional interpolation and approximation problem, can be treated within the theory of random functions.


 * The second part draws practical conclusions from the provisions of the first part, in the form of a complete machine-learning method (a method of multivariate interpolation and approximation).


 * The proposed method yields an exact solution of multivariate interpolation and approximation ("supervised learning") that is guaranteed optimal, under certain assumptions, with respect to the criterion specified in the theoretical part. The method is simple and reduces to a system of linear equations, yet this does not limit its capabilities. Use of the "special function" obtained in the theoretical part allows the multidimensional interpolation and approximation problem, and a variety of machine-learning tasks (under certain assumptions), to be reduced to a system of linear equations, guaranteeing optimality of the solution and the absence of overfitting, oscillation of the interpolant, and other undesirable effects.


The mathematical apparatus of random functions is convenient for the theoretical justification of machine-learning methods, and the notion of the "probability of a realization" can serve as a criterion justifying the generalizing ability of a learning method.
 * Using the concept of a random function and the related theoretical constructs provides exact solutions of machine-learning problems posed as multidimensional interpolation (or approximation), with solutions guaranteed optimal in terms of the probability criterion. With minimal assumptions, an optimal solution of multivariate interpolation and approximation can be obtained by solving a system of linear equations, together with a proof of its optimality. This yields an exact solution and eliminates the problems of finding a global optimum and of choosing a network topology, which occur in many existing iterative "supervised learning" algorithms for neural networks.

The theory of random processes is a mathematical science that studies the laws of random phenomena in the dynamics of their development. Just as the notion of a random process generalizes that of a random variable, the random function generalizes the random process, since functions may depend not on one but on several arguments.


 * A realization of a random function is the particular function it takes on under observation or during an experiment. A random function is the set of its realizations, with a corresponding probability distribution. Any function can be considered a realization of some random function without additional assumptions.

In solving the problem of multidimensional interpolation or approximation, or in machine learning, we want to associate the values of input and output variables with a function. Solving these problems requires finding a suitable function satisfying the training vectors, and there may be infinitely many such functions.


 * Consider the problem of machine learning, or of multidimensional interpolation or approximation. Assume that the desired function can be regarded as a realization of a random function, with the available information contained in the set of training vectors (or interpolation nodes).

In this approach the criterion for the search is the probability of a realization: it is necessary to find the most probable realization of the random function that satisfies the training examples. This idea is the basis of the method proposed in this paper.


 * The adequacy of this approach and its range of applicability are entirely determined by the specific characteristics given a priori to the random function.

Multidimensional interpolation.
Consider the problem of multidimensional interpolation, assuming that the interpolation nodes belong to a realization of a random function.
 * Assume that the desired function (1) can be regarded as a realization of a random function $$F(x)$$, which can be written as (2): $$F(x)=f_m(x)+\sum^{m}_{i=1} {V_i\varphi_i(x)}$$ where $$\varphi_i (x)$$ are the coordinate functions, $$V_i$$ are uncorrelated normal random variables with mean zero and unit variance, and $$f_m(x)$$ is the mathematical expectation.


 * Any arbitrary functions may serve as the coordinate functions in (2). The only condition is that there exist coordinate functions such that the random function can be written in the form (2). The number of terms in (2) can be arbitrary. For example, as a special case of (2) one can consider the expansion into an infinite series of sines and cosines, in which case the realizations of the random function can be the set of all continuous real functions.
 * Assume that the random variables in (2) have a normal distribution. The requirement of a normal law with mean zero and unit variance is not necessary (but will be convenient later). For the following transformations it would suffice to require that the joint probability distribution of the random variables in (2) be symmetric about the origin and decrease monotonically with distance from the origin.
 * The requirements in (2) of zero mean and unit variance are made to simplify the transformations. If the variance of a random variable is not equal to one, it can be replaced, with the corresponding coordinate function simultaneously rescaled (multiplied by a factor), to obtain an expression of the form (2). If the mean is not zero, the expectation function $$f_m(x)$$ can be changed so as to obtain an expression of the form (2).
 * Consider the function $$f_m(x)$$ in more detail. In the particular case it may be zero or constant.
 * If this function is known beforehand, we can exclude it from further transformations by subtracting its values at the interpolation nodes and deleting it from expression (2). After interpolation of the modified nodes is carried out, we can add this function back to the resulting solution to obtain the final result.
 * If this function is not known beforehand, we can regard it as a realization of another random function, which can again be expressed in the form (2), and repeat this procedure if necessary.
 * Eliminating the expectation function, we obtain the expression (3) for the random function: $$F(x)=\sum^{m}_{i=1} {V_i\varphi_i(x)}$$


 * Introduce the notation $$v_1,v_2,...,v_m$$ for specific values of the random variables in (3). Then the probability distribution of realizations over the parameter space $$v_1,v_2,...,v_m$$ is written as (4):

$$P(v_1,v_2,...,v_m)=\frac{1}{{(2\pi)}^{\frac{m}{2}}}e^{-\frac{v_1^2+v_2^2+...+v_m^2}{2}}$$

Let the sequences (5) $$x_1,x_2,...,x_k\;(x_i\in R^n)$$ and $$y_1,y_2,...,y_k\;(y_i\in R)$$ be the coordinates and the corresponding values of the interpolation nodes.


 * Assuming that the interpolation nodes belong to one of the realizations of the random function (3), corresponding to a vector $$v_1,v_2,...,v_m$$, they can be expressed in the form of the equations (6):

$$\begin{cases} \sum^{m}_{i=1} {v_i\varphi_i(x_1)}=y_1\\ \sum^{m}_{i=1} {v_i\varphi_i(x_2)}=y_2\\ \vdots\\ \sum^{m}_{i=1} {v_i\varphi_i(x_k)}=y_k\\\end{cases}$$

Then the most probable realization of the random function (3) is the one at which the probability density (4) takes its maximum value subject to the constraints (6). Finding the maximum of (4) is equivalent to finding the minimum of (7): $$v_1^2+v_2^2+...+v_m^2\rightarrow \min$$


 * This means that searching for the solution of multivariate interpolation as the most probable realization of the random function (2) or (3) satisfying the interpolation nodes reduces to a quadratic programming problem with equality constraints.
 * To find the minimum of (7) under the system of constraints (6), use the method of Lagrange multipliers.
 * Write the Lagrange function (8):

$$L(v_1,...,v_m,\lambda_1,...,\lambda_k)=v_1^2+v_2^2+...+v_m^2+\sum^{k}_{j=1} {\lambda_j ((\sum^{m}_{i=1} {v_i\varphi_i(x_j)})-y_j)}$$

Denote the solution by $$V^*(v_1^*,v_2^*,...,v_m^*)$$. From the condition $$\frac {dL}{d\lambda_j}=0$$ we recover the equations (6). From the condition $$\frac {dL}{dv_i}=0$$ we get (9): $$v_i^*=-0.5 \sum^{k}_{j=1} {\lambda_j \varphi_i (x_j)}\;,\;i=1,...,m$$


 * Denote the sequence of values of the coordinate functions $$(\varphi_1(x_i),\varphi_2(x_i),...,\varphi_m(x_i))$$ as a vector $$L_i\;,\;i=1,...,k\;,\;L_i\in R^m$$.
 * Introduce the coefficients $$q_1,q_2,...,q_k$$, denoting $$q_i=-0.5\lambda_i\;,\;i=1,...,k$$. Then (9) can be written in the new notation as a linear combination of vectors (10):

$$V^*=q_1L_1+q_2L_2+...+q_kL_k$$

Expression (10) can be interpreted geometrically. Finding the minimum of (7) under the system of constraints (6) is equivalent to dropping the perpendicular from the origin onto the hyperplane formed by the intersection of the hyperplanes defined by the equations (6); therefore $$V^*$$ will be a linear combination of the perpendiculars. The system of constraints (6) can be written in terms of scalar products of vectors (11): $$\begin{cases} L_1V^*=y_1\\ L_2V^*=y_2\\ \vdots\\ L_kV^*=y_k\\\end{cases}$$

Replacing $$V^*$$ by the expression (10), obtain the system of equations (12): $$\begin{cases} q_1L_1L_1+q_2L_1L_2+...+q_kL_1L_k=y_1\\ q_1L_2L_1+q_2L_2L_2+...+q_kL_2L_k=y_2\\ \vdots\\ q_1L_kL_1+q_2L_kL_2+...+q_kL_kL_k=y_k\\\end{cases}$$

Denote by $$L_*$$ the vector of values $$(\varphi_1(x),\varphi_2(x),...,\varphi_m(x))\;,\;x\in R^n$$. Then the most probable realization $$y=f^*(x)$$ of the random function (3) can be expressed as (13): $$f^*(x)=\sum^{m}_{i=1} {v_i^*\varphi_i(x)}=L_*V^*=q_1L_*L_1+q_2L_*L_2+...+q_kL_*L_k$$

Consider the scalar product $$L_iL_j$$ in more detail (14): $$L_iL_j=\sum^{m}_{t=1} {\varphi_t(x_i) \varphi_t(x_j)}=K_f(x_i,x_j)\;,\;x_i,x_j \in R^n$$ which is nothing other than the canonical decomposition of the correlation function.

Then the system of equations (12) can be written as (15): $$\begin{cases} q_1K_f(x_1,x_1)+q_2K_f(x_1,x_2)+...+q_kK_f(x_1,x_k)=y_1\\ q_1K_f(x_2,x_1)+q_2K_f(x_2,x_2)+...+q_kK_f(x_2,x_k)=y_2\\ \vdots\\ q_1K_f(x_k,x_1)+q_2K_f(x_k,x_2)+...+q_kK_f(x_k,x_k)=y_k\\\end{cases}$$ and (13), for the most probable realization, in the form (16): $$f^*(x)=q_1K_f(x,x_1)+q_2K_f(x,x_2)+...+q_kK_f(x,x_k)$$
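As a concrete illustration of this reduction, the sketch below (with an assumed cubic polynomial basis and made-up nodes, not taken from the paper) checks numerically that the minimum-norm solution of (6)-(7) and the Gram-system route (15)-(16) give the same answer:

```python
import numpy as np

# Illustrative coordinate functions (a cubic polynomial basis) and
# interpolation nodes; these are assumed examples, not from the paper.
phi = [lambda x: np.ones_like(x), lambda x: x, lambda x: x**2, lambda x: x**3]
x_nodes = np.array([0.0, 1.0, 2.0])   # k = 3 nodes
y_nodes = np.array([1.0, 2.0, 0.0])

# Row j of L is the vector L_j of coordinate-function values at x_j.
L = np.column_stack([f(x_nodes) for f in phi])          # shape (k, m)

# Route 1: minimum-norm solution of (6)-(7),
# v* = L^T (L L^T)^{-1} y  (the Lagrange-multiplier result (10)).
v_star = L.T @ np.linalg.solve(L @ L.T, y_nodes)

# Route 2: Gram system (15) with K_f(x_i, x_j) = L_i . L_j, then (10).
G = L @ L.T                                             # K_f(x_i, x_j)
q = np.linalg.solve(G, y_nodes)
v_from_q = L.T @ q                                      # expression (10)

# Both routes satisfy the constraints (6) and coincide.
assert np.allclose(L @ v_star, y_nodes)
assert np.allclose(v_star, v_from_q)

def f_star(x):
    """Most probable realization (16), via the vector L_* of basis values."""
    L_x = np.array([f(np.array(x)) for f in phi]).ravel()
    return float(L_x @ v_star)
```

The same inverse of the Gram matrix can be reused for any right-hand side, which is what makes the multi-output generalization mentioned below cheap.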


 * Thus it can be concluded that the most probable realization of the random function (3) satisfying the interpolation nodes is a linear combination of sections of its correlation function.
 * Therefore, knowing the correlation function and solving the system of equations (15), we obtain the most probable realization (16) of the random function.
 * If interpolation is now considered as a variant of machine learning, this result is easily generalized to the multi-output case: one can compute a single inverse of the matrix of the system (15) and use it for interpolation with all possible values of the output variables.
 * However, to carry out such calculations it is necessary to know the correlation function, and also to know the mathematical expectation in (2) or to convert correctly to (3).
 * Consider the case where there is no a priori information about the random function with which to solve the interpolation problem. The uncertainty is eliminated by selecting the correlation function on the basis of certain assumptions.
 * Suppose that initially the potential interpolants (unknown functions) can be any continuous real functions with unknown a priori probability distribution. Suppose that in this case all points, as well as all directions in the space of the interpolation nodes, are equivalent. We therefore require that the interpolation not depend on the choice of the coordinate system of the interpolation nodes: on rotations, on the position of the origin, and on scale.
 * These requirements can be met by introducing a set of propositions on the a priori probabilities of realizations of the random function. Assume that realizations transformed into each other by shift, rotation or scale change are a priori equiprobable.
 * The requirements outlined above are enough to uniquely determine the correlation function needed for the expressions (15) and (16).

State the requirements as a set of propositions:
 1. The probability densities of realizations that are transformed into each other by parallel translation of the coordinate system must be the same. Suppose there are two realizations $$f_1(x)$$ and $$f_2(x)$$ such that (17): $$\forall x \;f_1(x)=f_2(x+t),\;x,t \in R^n\;,\;t=const$$ Then the probability densities of the realizations $$f_1(x)$$ and $$f_2(x)$$ must be the same.
 2. The probability densities of realizations that are transformed into each other by the same scaling in all directions must be the same. Suppose there are two realizations $$f_1(x)$$ and $$f_2(x)$$ such that (18): $$\forall x \;f_1(x)=kf_2(x/k),\;x \in R^n$$ where $$k$$ is a coefficient. Then the probability densities of the realizations $$f_1(x)$$ and $$f_2(x)$$ must be the same.
 3. The probability densities of realizations that are transformed into each other by rotation of the coordinate system must be the same. Suppose there are two realizations $$f_1(x)$$ and $$f_2(x)$$ such that (19): $$\forall x_1 \;f_1(x_1)=f_2(x_2),\; x_2=Ax_1,\;x_1,x_2 \in R^n$$ where $$A$$ is a rotation matrix. Then the probability densities of the realizations $$f_1(x)$$ and $$f_2(x)$$ must be the same.


 * Fulfilment of the first proposition means that the random function is stationary. In this case it can be written as a multidimensional spectral decomposition (20). Denote by $$S(\omega)$$ its spectral density (a nonnegative real function, symmetric with respect to the frequencies $$\omega$$ and $$-\omega$$).

$$f(x)=\frac {1}{{(2\pi)}^{\frac {n}{2}}} \int\limits_{R^n}S^*(\omega)\sqrt{S(\omega)}e^{ix\omega}d\omega$$ where $$x,\omega \in R^n$$ and $$x\omega$$ is the scalar product.


 * In (20), $$S^*(\omega)$$ is a realization of a complex random function. Each of its values is an independent random variable having a normal distribution with unit variance and mean zero for both the real and the imaginary part. Expression (20) can be considered a special case of (3) in which $$m=\infty$$, the role of the coordinate functions is played by $$\sqrt{S(\omega)}e^{ix\omega}$$, and $$S^*(\omega)$$ is the analog of the values of the random variables.
 * Impose an additional condition on $$S^*(\omega)$$. Since for the moment interpolation of real functions is considered, the values of $$S^*(\omega)$$ at the frequencies $$\omega$$ and $$-\omega$$ must be complex conjugates. Divide the frequency space $$R^n$$ into two parts $$R_1^n$$ and $$R_2^n$$. The division must be carried out in such a way that any pair of frequencies $$\omega$$ and $$-\omega$$ lie in different parts (for example, on either side of any hyperplane passing through the origin). Then $$S^*(\omega)$$ can be written as (21):

$$S^*(\omega)=\begin{cases} R^*(\omega)+iI^*(\omega),\;\omega\in R_1^n\\ R^*(\omega)-iI^*(\omega),\;\omega\in R_2^n\\\end{cases}$$ <br\>


 * Here $$R^*(\omega)$$ and $$I^*(\omega)$$ can be regarded as realizations of real random functions, each value of which is the same for the frequencies $$\omega$$ and $$-\omega$$. Each such value is an independent random variable having a normal distribution with mean zero and unit variance.

If the condition of reality of the realizations is fulfilled, the random function (20) can be written as (22): $$f(x)=\frac {1}{{(2\pi)}^{\frac {n}{2}}} (\int\limits_{R_1^n}(R^*(\omega)+iI^*(\omega))\sqrt{S(\omega)}e^{ix\omega}d\omega+\int\limits_{R_2^n}(R^*(\omega)-iI^*(\omega))\sqrt{S(\omega)}e^{ix\omega}d\omega)$$

Since $$R_1^n$$ and $$R_2^n$$ are symmetric with respect to $$\omega$$ and $$-\omega$$, in the second integral the domain of integration can be changed to $$R_1^n$$ by replacing the sign of the frequency, giving (23): $$f(x)=\frac {1}{{(2\pi)}^{\frac {n}{2}}} (\int\limits_{R_1^n}(R^*(\omega)+iI^*(\omega))\sqrt{S(\omega)}e^{ix\omega}d\omega+\int\limits_{R_1^n}(R^*(-\omega)-iI^*(-\omega))\sqrt{S(-\omega)}e^{-ix\omega}d\omega)$$

Performing further transformations, obtain (24): $$f(x)=\frac {1}{{(2\pi)}^{\frac {n}{2}}} \int\limits_{R_1^n}\sqrt{S(\omega)}(R^*(\omega)(e^{i\omega x}+e^{-i\omega x})+iI^*(\omega)(e^{i\omega x}-e^{-i\omega x}))d\omega=$$ $$=\frac {2}{{(2\pi)}^{\frac {n}{2}}} \int\limits_{R_1^n}\sqrt{S(\omega)}(R^*(\omega)\cos(\omega x)-I^*(\omega)\sin(\omega x))d\omega$$


 * Each value of $$S^*(\omega)$$ is an independent complex random variable with mean zero and unit variance; therefore one obtains (25), the analog of (7). When the constraints at the interpolation nodes are fulfilled, minimizing (25) yields the most probable realization:

$$\int\limits_{R_1^n}\mid S^*(\omega)\mid^2 d\omega\rightarrow\min$$

For any two realizations of equal a priori probability, expression (25) must take the same value. Expression (25) can be written as (26): $$\int\limits_{R_1^n} (R^*(\omega)^2+I^*(\omega)^2) d\omega\rightarrow\min$$

Let the sequences $$x_1,x_2,...,x_k\;(x_i\in R^n)$$ and $$y_1,y_2,...,y_k\;(y_i\in R)$$ be the interpolation nodes. Then, using expression (24), the constraints at these nodes form the system of equations (27): $$\begin{cases} \frac {2}{{(2\pi)}^{\frac {n}{2}}} \int\limits_{R_1^n}\sqrt{S(\omega)}(R^*(\omega)\cos(\omega x_1)-I^*(\omega)\sin(\omega x_1))d\omega=y_1\\ \frac {2}{{(2\pi)}^{\frac {n}{2}}} \int\limits_{R_1^n}\sqrt{S(\omega)}(R^*(\omega)\cos(\omega x_2)-I^*(\omega)\sin(\omega x_2))d\omega=y_2\\ \vdots\\ \frac {2}{{(2\pi)}^{\frac {n}{2}}} \int\limits_{R_1^n}\sqrt{S(\omega)}(R^*(\omega)\cos(\omega x_k)-I^*(\omega)\sin(\omega x_k))d\omega=y_k\\\end{cases}$$

Finding the most probable realization amounts to determining functions $$R^*(\omega)$$ and $$I^*(\omega)$$ such that (27) holds and the value of expression (26) is minimized.


 * Further arguments proceed as in (8)-(16).
 * The Lagrange function (28) becomes:

$$\int\limits_{R_1^n}(R^*(\omega)^2+I^*(\omega)^2)d\omega+\frac {2}{{(2\pi)}^{\frac {n}{2}}} \sum^{k}_{i=1}\lambda_i (\int\limits_{R_1^n}\sqrt{S(\omega)}(R^*(\omega)\cos(\omega x_i)-I^*(\omega)\sin(\omega x_i))d\omega-y_i)$$

Differentiating (28) with respect to $$\lambda_i$$ recovers the system (27). Since each value of the functions $$R^*(\omega)$$ and $$I^*(\omega)$$ is an independent variable for every $$\omega$$, each can be differentiated independently, giving (29) and (30): $$R^*(\omega)=-\frac{\sqrt{S(\omega)}}{{(2\pi)}^{\frac {n}{2}}}\sum^{k}_{i=1}\lambda_i\cos(\omega x_i)$$ $$I^*(\omega)=\frac{\sqrt{S(\omega)}}{{(2\pi)}^{\frac {n}{2}}}\sum^{k}_{i=1}\lambda_i\sin(\omega x_i)$$

Substituting the solutions (29) and (30) into (24), obtain the most probable realization (31): $$f(x)=-\frac {2}{{(2\pi)}^{n}}\sum^{k}_{i=1}\lambda_i \int\limits_{R_1^n}S(\omega)(\cos(\omega x_i)\cos(\omega x)+\sin(\omega x_i)\sin(\omega x))d\omega$$

Expression (31) is converted into (32): $$f(x)=-\frac {2}{{(2\pi)}^{n}}\sum^{k}_{i=1}\lambda_i \int\limits_{R_1^n}S(\omega)\cos(\omega(x_i-x))d\omega$$

Changing the domain of integration to $$R^n$$ (the integrand is even in $$\omega$$) gives (33): $$f(x)=-\frac {1}{{(2\pi)}^{n}}\sum^{k}_{i=1}\lambda_i \int\limits_{R^n}S(\omega)\cos(\omega(x_i-x))d\omega$$

Denote the coefficients (34): $$q_i=-\frac{\lambda_i}{(2\pi)^n}$$

The expression for the correlation function (now proportional to the correlation function) is (35): $$K_f(x_1,x_2)=\int\limits_{R^n}S(\omega)\cos(\omega(x_1-x_2))d\omega,\;x_1,x_2\in R^n$$

As can be seen from (35), the spectral density of the random function plays the role of the spectrum of its correlation function. The correlation function itself turns into the autocorrelation function, as expected from stationarity (36): $$K_f(\tau)=\int\limits_{R^n}S(\omega)\cos(\omega\tau)d\omega,\;\tau\in R^n$$


 * The absence of sinusoidal components in (35)-(36) shows that the autocorrelation function must be symmetric about the origin. If the spectral density of the random function is known, the autocorrelation function can be calculated.
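A quick one-dimensional numerical illustration of (36): the autocorrelation function is the cosine transform of the spectral density. The Gaussian density below is an assumed example chosen because its transform is known in closed form; it is not the density derived later in the paper.

```python
import numpy as np

# Spectral density (an assumed Gaussian example, not the paper's) on a
# grid wide enough that the tails are negligible.
omega = np.linspace(-12.0, 12.0, 240001)
S = np.exp(-omega**2)

def autocorrelation(tau):
    """K(tau) = integral of S(w) * cos(w * tau) dw, by the trapezoid rule."""
    f = S * np.cos(omega * tau)
    return float(np.sum((f[1:] + f[:-1]) * np.diff(omega)) / 2.0)

tau = 1.5
K_num = autocorrelation(tau)
# Known closed form of the cosine transform of exp(-w^2).
K_exact = np.sqrt(np.pi) * np.exp(-tau**2 / 4.0)
assert abs(K_num - K_exact) < 1e-6
# Symmetry about the origin, as noted above.
assert abs(autocorrelation(-tau) - K_num) < 1e-9
```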

Then the most probable realization can be expressed as (37): $$f^*(x)=q_1K_f(x-x_1)+q_2K_f(x-x_2)+...+q_kK_f(x-x_k)$$

To find a system of equations for the coefficients in (37), substitute (29) and (30) into (27), obtaining (38): $$\begin{cases} -\frac {2}{{(2\pi)}^{n}}\sum^{k}_{i=1}\lambda_i \int\limits_{R_1^n}S(\omega)(\cos(\omega x_i)\cos(\omega x_1)+\sin(\omega x_i)\sin(\omega x_1))d\omega=y_1\\ -\frac {2}{{(2\pi)}^{n}}\sum^{k}_{i=1}\lambda_i \int\limits_{R_1^n}S(\omega)(\cos(\omega x_i)\cos(\omega x_2)+\sin(\omega x_i)\sin(\omega x_2))d\omega=y_2\\ \vdots\\ -\frac {2}{{(2\pi)}^{n}}\sum^{k}_{i=1}\lambda_i \int\limits_{R_1^n}S(\omega)(\cos(\omega x_i)\cos(\omega x_k)+\sin(\omega x_i)\sin(\omega x_k))d\omega=y_k\\\end{cases}$$

Using (34) and (36) in (38), obtain the system of equations (39) for the coefficients of (37) (the analog of (15)): $$\begin{cases} q_1K_f(x_1-x_1)+q_2K_f(x_1-x_2)+...+q_kK_f(x_1-x_k)=y_1\\ q_1K_f(x_2-x_1)+q_2K_f(x_2-x_2)+...+q_kK_f(x_2-x_k)=y_2\\ \vdots\\ q_1K_f(x_k-x_1)+q_2K_f(x_k-x_2)+...+q_kK_f(x_k-x_k)=y_k\\\end{cases}$$


 * The expressions (37) and (39) are thus the counterparts of (16) and (15), and expression (36) gives the correlation function. To resolve the remaining uncertainty it remains to determine the spectral density of the random function.

Denote by $$S_f(\omega)$$ the Fourier spectral decomposition of the realizations of the random function under consideration. From expression (20), the spectral decomposition of the random function, it can be seen that the spectrum of a realization equals (40): $$S_f(\omega)=S^*(\omega)\sqrt{S(\omega)}$$

Now return to the second proposition (expression (18)). Denote by $$S_f^1(\omega)$$ the spectrum of the function $$f_1(x)$$ and by $$S_f^2(\omega)$$ the spectrum of the function $$f_2(x)$$, and assume (41): $$f_2(x)=kf_1(x/k),\;x\in R^n$$

Whereas (42): $$f_1(x)=\frac {1}{{(2\pi)}^{\frac{n}{2}}}\int\limits_{R^n}S_f^1(\omega)e^{ix\omega}d\omega$$ where $$x,\omega\in R^n$$ and $$x\omega$$ is the scalar product, obtain (43): $$f_2(x)=kf_1(x/k)=k\frac {1}{{(2\pi)}^{\frac{n}{2}}}\int\limits_{R^n}S_f^1(\omega)e^{\frac{ix\omega}{k}}d\omega=$$ $$=\frac {1}{{(2\pi)}^{\frac{n}{2}}}\int\limits_{R^n}k^{(n+1)}S_f^1(\omega)e^{\frac{ix\omega}{k}}d\frac{\omega}{k}=\frac {1}{{(2\pi)}^{\frac{n}{2}}}\int\limits_{R^n}k^{(n+1)}S_f^1(\omega k)e^{ix\omega}d\omega$$

From this the ratio of the spectra of such functions can be derived (44): $$S_f^2(\omega)=k^{n+1}S_f^1(\omega k)$$

For both of the functions considered, expression (25) must take the same value.
The function $$S^*(\omega)$$ in (25) can be expressed through the spectra of the realizations and the spectral density of the random function (45): $$S^*(\omega)=\frac{S_f(\omega)}{\sqrt{S(\omega)}}$$

Equating expression (25) for the two realizations considered above and using expressions (44) and (45), obtain (46): $$\int\limits_{R^n}\frac{\mid S_f^1(\omega)\mid^2}{S(\omega)}d\omega=\int\limits_{R^n}\frac{\mid S_f^2(\omega)\mid^2}{S(\omega)}d\omega=\int\limits_{R^n}\frac{k^{2n+2}\mid S_f^1(\omega k)\mid^2}{S(\omega)}d\omega=$$ $$=\int\limits_{R^n}\frac{k^{n+2}\mid S_f^1(\omega k)\mid^2}{S(\omega)}d(\omega k)=\int\limits_{R^n}\frac{k^{n+2}\mid S_f^1(\omega)\mid^2}{S(\omega/k)}d\omega$$

Comparing the beginning and the end of the chain of equalities (46), obtain a condition on the spectral density (47): $$\frac{S(\omega/k)}{S(\omega)}=k^{n+2}$$

To satisfy the third proposition (19), the autocorrelation function, and accordingly the spectral density, must have radial symmetry. Then from (47) the spectral density is obtained as (48): $$S(\omega)=\alpha \mid\omega\mid^{-(n+2)},\;\omega\in R^n$$ where $$\alpha$$ is a coefficient, or, equivalently, (49): $$S(\omega_1,\omega_2,...,\omega_n)=\alpha (\omega_1^2+\omega_2^2+...+\omega_n^2)^{-\frac{n+2}{2}}$$ $$\omega_1,\omega_2,...,\omega_n\in R$$
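A one-line check (with arbitrary illustrative values of $$\alpha$$, $$n$$, $$k$$ and $$\omega$$) that the density (48) indeed satisfies the scale condition (47):

```python
# Check that S(w) = alpha * |w|^(-(n+2)) from (48) satisfies
# S(w/k) / S(w) = k^(n+2), i.e. condition (47).
# alpha, n, k and w below are arbitrary illustrative values.
alpha, n, k, w = 0.7, 3, 2.5, 1.3

def S(omega):
    return alpha * abs(omega) ** (-(n + 2))

ratio = S(w / k) / S(w)
assert abs(ratio - k ** (n + 2)) < 1e-9
```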


 * Thus (48)-(49) give the desired spectral density of the random function. Using the correlation function with this spectrum and solving the system of linear equations (39), one is guaranteed to obtain the most probable realization of the random function (if the propositions (17)-(19) are taken as the basis).
 * The coefficient in (48)-(49) can be arbitrary. If this coefficient is changed, the coefficients on the left side of the system (39) and in (37) also change, and as a result the same function is obtained.
 * To evaluate the correlation function with the spectrum (49), one can use its symmetry property: first consider the one-dimensional version and then obtain the multidimensional version using the symmetry requirement.

In the one-dimensional version the required spectral density is (50): $$S(\omega)=\mid\omega\mid^{-3}$$

Then in the one-dimensional version the expression (36) for the autocorrelation function becomes (51): $$K(\tau)=\int\limits_{-\infty}^{\infty}\frac{\cos(\omega\tau)}{\mid\omega\mid^3}d\omega$$

Expression (51) can be written as (52): $$K(\tau)=\lim_{\omega_0\rightarrow+0}(2\int\limits_{\omega_0}^{\infty}\frac{\cos(\omega\tau)}{\omega^3}d\omega)$$

Successively integrating by parts, obtain (53): $$K(\tau)=\lim_{\omega_0\rightarrow+0}(((-\frac{\cos(\omega\tau)}{\omega^2}+\tau^2\frac{\sin(\omega\tau)}{\omega\tau})\mid_{\omega_0}^{\infty})-\tau^2\int\limits_{\omega_0}^{\infty}\frac{\cos(\omega\tau)}{\omega}d\omega)$$

The last term on the right in (53) contains the cosine integral. Consider it in more detail (54): $$-\int\limits_{\omega_0}^{\infty}\frac{\cos(\omega\tau)}{\omega}d\omega=-\int\limits_{\omega_0\tau}^{\infty\tau}\frac{\cos(\omega\tau)}{\omega\tau}d(\omega\tau)=-\int\limits_{\omega_0\mid\tau\mid}^{\infty}\frac{\cos(\omega\mid\tau\mid)}{\omega\mid\tau\mid}d(\omega\mid\tau\mid)$$

The cosine integral can be expanded as (55): $$-\int\limits_{\omega_0\mid\tau\mid}^{\infty}\frac{\cos(\omega\mid\tau\mid)}{\omega\mid\tau\mid}d(\omega\mid\tau\mid)=\gamma+\ln(\omega_0\mid\tau\mid)+\int\limits_{0}^{\omega_0\mid\tau\mid}\frac{\cos(t)-1}{t}dt$$ where $$\gamma$$ is Euler's constant.

Using (53) and (55), obtain (56): $$K(\tau)=\lim_{\omega_0\rightarrow+0}(\tau^2\gamma+\tau^2\ln(\omega_0\mid\tau\mid)+\tau^2\int\limits_{0}^{\omega_0\mid\tau\mid}\frac{\cos(t)-1}{t}dt+\frac{\cos(\omega_0\tau)}{\omega_0^2}-\tau^2\frac{\sin(\omega_0\tau)}{\omega_0\tau})=$$ $$=\lim_{\omega_0\rightarrow+0}(\tau^2\ln(\omega_0 e^{\gamma-1}\mid\tau\mid)+\frac{\cos(\omega_0\tau)}{\omega_0^2})$$

Thus the result is (57): $$K(\tau)=\int\limits_{-\infty}^{\infty}\frac{\cos(\omega\tau)}{\mid\omega\mid^3}d\omega=\lim_{\omega_0\rightarrow+0}(\tau^2\ln(\omega_0 e^{\gamma-1}\mid\tau\mid)+\frac{\cos(\omega_0\tau)}{\omega_0^2})$$
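The chain (52)-(56) can be spot-checked numerically: for a finite $$\omega_0$$, the integration by parts and the cosine-integral expansion give an exact identity before the limit is taken. The sketch below verifies it by brute-force quadrature, with illustrative values $$\omega_0=0.5$$, $$\tau=1$$:

```python
import numpy as np

# Verify, for finite omega_0, the identity obtained from (53)-(55):
# 2 * Int_{w0}^{inf} cos(w*tau)/w^3 dw
#   = tau^2*gamma + tau^2*ln(w0*|tau|)
#     + tau^2 * Int_0^{w0*|tau|} (cos t - 1)/t dt
#     + cos(w0*tau)/w0^2 - tau*sin(w0*tau)/w0
# omega_0 and tau are illustrative values.
w0, tau = 0.5, 1.0
gamma = np.euler_gamma

def trapz(y, x):
    """Trapezoid rule on an arbitrary grid."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Left side: truncate the upper limit at W; the neglected tail is
# bounded in magnitude by 1/W^2.
W = 400.0
w = np.exp(np.linspace(np.log(w0), np.log(W), 1_000_000))
lhs = 2.0 * trapz(np.cos(w * tau) / w**3, w)

# Right side: the small integral has a removable singularity at t = 0.
t = np.linspace(0.0, w0 * abs(tau), 10001)
g = np.zeros_like(t)
g[1:] = (np.cos(t[1:]) - 1.0) / t[1:]
rhs = (tau**2 * (gamma + np.log(w0 * abs(tau)) + trapz(g, t))
       + np.cos(w0 * tau) / w0**2 - tau * np.sin(w0 * tau) / w0)

assert abs(lhs - rhs) < 1e-3
```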
 * Although $$\omega_0=0$$ cannot be used in actual calculations, its value can be taken arbitrarily close to zero, bringing the spectrum arbitrarily close to expression (50). For calculations, instead of (57) the simplified version (58) can be used.
 * Given that the coefficient can be chosen arbitrarily (take it equal to 0.5) and using the symmetry property, obtain the version of the function for the multidimensional case (58): $$K_f(\tau)=\lim_{n,t\rightarrow\infty}(\parallel\tau\parallel^2\ln(\frac{\parallel\tau\parallel^2}{t})+n)$$ where $$\parallel\tau\parallel^2$$ is the squared norm of the vector, $$\tau\in R^n$$.
 * Let us write again the expressions (39) and (37), now as (59) and (60): $$\begin{cases} q_1K_f(x_1-x_1)+q_2K_f(x_1-x_2)+...+q_kK_f(x_1-x_k)=y_1\\ q_1K_f(x_2-x_1)+q_2K_f(x_2-x_2)+...+q_kK_f(x_2-x_k)=y_2\\ \vdots\\ q_1K_f(x_k-x_1)+q_2K_f(x_k-x_2)+...+q_kK_f(x_k-x_k)=y_k\\\end{cases}$$ $$f^*(x)=q_1K_f(x-x_1)+q_2K_f(x-x_2)+...+q_kK_f(x-x_k)$$
 * Thus the final expressions of the multivariate interpolation method are (58)-(60). In the limit of the function (58), the method gives the most probable realization of a random function satisfying the propositions (17)-(19).

 * In actual computations (as shown by computational experiments), $$t$$ and $$n$$ can be taken of approximately the same order, many orders of magnitude greater than the range of variation of $$\tau$$. We will return to this issue below.
 * As Figure 1 shows, the system of linear equations using the function (58) allows high-quality interpolation of nonlinear functions.
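The complete procedure (58)-(60) can be sketched in a few lines. The node layout, the target values and the finite parameters $$t=n=10^6$$ below are illustrative choices; the example of Figure 1 is not reproduced here.

```python
import numpy as np

# Interpolation by (58)-(60) with finite parameters t and n in (58).
t_par, n_par = 1e6, 1e6

def K_f(tau):
    """Function (58) with finite t, n; tau is a displacement vector."""
    r2 = float(np.dot(tau, tau))
    if r2 == 0.0:
        return n_par                      # limit of r2*ln(r2/t) at r2 = 0
    return r2 * np.log(r2 / t_par) + n_par

# Illustrative 1-D nodes and a nonlinear target function.
x_nodes = np.array([[0.0], [2.0], [5.0], [7.0], [10.0]])
y_nodes = np.sin(x_nodes[:, 0])

# System (59) and its solution.
G = np.array([[K_f(xi - xj) for xj in x_nodes] for xi in x_nodes])
q = np.linalg.solve(G, y_nodes)

def f_star(x):
    """Most probable realization (60)."""
    return float(sum(qj * K_f(np.asarray(x) - xj)
                     for qj, xj in zip(q, x_nodes)))

# The interpolant reproduces the node values.
for xi, yi in zip(x_nodes, y_nodes):
    assert abs(f_star(xi) - yi) < 1e-5
```

Note that the single dense matrix `G` depends only on the node positions, so, as observed earlier for (15), one factorization can serve many output variables.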

The variance of the random function. The set of realizations satisfying the interpolation nodes.
The variance of the random function (3) is (61): $$D_F(x)=\sum^{m}_{i=1}\varphi_i^2(x)=K_f(x,x),\;x\in R^n$$ and for the function (58), (62): $$D_F=K_f(0)=n$$

Consider the value a realization of the random function (3) takes at $$\tilde{x}\in R^n$$. Denote $$(\varphi_1(\tilde{x}),\varphi_2(\tilde{x}),...,\varphi_m(\tilde{x}))$$ by $$\tilde{L}$$ and the vector $$(V_1,V_2,...,V_m)$$ of the random variables in (3) by $$V$$. Then (63): $$F(\tilde{x})=V\tilde{L}$$

The value of the random function (3) or (63) at a specific $$\tilde{x}$$ is a normal random variable with variance defined by (61)-(62). Decompose $$\tilde{L}$$ as (64): $$\tilde{L}=L_a+L_b$$ where (65): $$L_a=\sum^{k}_{i=1} a_iL_i$$

Moreover, require that the vector $$L_b$$ be perpendicular to each of the vectors $$L_i$$. Then the expansion (64) is individual and unique for each $$\tilde{x}$$. The coefficients $$a_i$$ in (65) can always be found so as to obtain the expansion (64). The vector $$L_a$$ is the projection of the vector $$\tilde{L}$$ onto the hyperplane spanned by the vectors $$L_i$$, and the vector $$L_b$$ is its remaining part, perpendicular to this hyperplane. Then (63) can be written as (66): $$F(\tilde{x})=V(L_a+L_b)=VL_a+VL_b=F_a(\tilde{x})+F_b(\tilde{x})$$

(66) shows that in this representation the random variable $$F(\tilde{x})$$ can be considered the sum of two random variables $$F_a(\tilde{x})$$ and $$F_b(\tilde{x})$$, and that they are independent. Let us prove it. Decompose (67): $$V=V_{a^*}+V_{b^*}$$ where (68): $$V_{a^*}=\sum^{k}_{i=1}a_i^*L_i$$ and require also that the vector $$V_{b^*}$$ be perpendicular to each of the vectors $$L_i$$. Then obtain (69)-(71): $$F(\tilde{x})=V\tilde{L}=(V_{a^*}+V_{b^*})(L_a+L_b)=V_{a^*}L_a+V_{b^*}L_b$$ $$F_a(\tilde{x})=V_{a^*}L_a$$ $$F_b(\tilde{x})=V_{b^*}L_b$$

Part of the terms in (69) cancel because the scalar product of perpendicular vectors is zero. Consider $$F_a(\tilde{x})$$ (72): $$F_a(\tilde{x})=V_{a^*}L_a=(a_1^*L_1+a_2^*L_2+...+a_k^*L_k)\tilde{L}$$

For known values at the interpolation nodes, (72) becomes the most probable realization.
Suppose that the values of the unknown function are known at the interpolation nodes. Then (66) takes the following form: $$\tilde{F}(\tilde{x})=f^*(\tilde{x})+F_b(\tilde{x})$$ <br/><br/> where $$f^*(\tilde{x})$$ – the most probable realization calculated in (15)-(16). Expression (73) can be understood as a random function expressing the set of realizations that satisfy the interpolation nodes (with the corresponding probability distribution over them). Using (69), the variance (61) of the random function can be written as: $$D_F(\tilde{x})=K_f(\tilde{x},\tilde{x})=\parallel L_a\parallel^2+\parallel L_b\parallel^2=D_a+D_b$$ The variance of the random function (73) is equal to: $$D_{\tilde{F}}(\tilde{x})=0+\parallel L_b\parallel^2=K_f(\tilde{x},\tilde{x})-D_a$$ Consider $$K_f(\tilde{x},x_j)$$, where $$x_j$$ is the coordinate of one of the interpolation nodes. $$K_f(\tilde{x},x_j)=\tilde{L}L_j=(L_a+L_b)L_j=L_aL_j=\sum^{k}_{i=1}a_iL_iL_j=\sum^{k}_{i=1}a_iK_f(x_i,x_j)$$ The system of equations: $$\begin{cases} a_1K_f(x_1,x_1)+a_2K_f(x_1,x_2)+...+a_kK_f(x_1,x_k)=K_f(\tilde{x},x_1)\\ a_1K_f(x_2,x_1)+a_2K_f(x_2,x_2)+...+a_kK_f(x_2,x_k)=K_f(\tilde{x},x_2)\\ \vdots\\ a_1K_f(x_k,x_1)+a_2K_f(x_k,x_2)+...+a_kK_f(x_k,x_k)=K_f(\tilde{x},x_k)\\\end{cases}$$ Express the variance $$D_a$$: $$D_a=L_aL_a=\sum^{k}_{i=1} a_iL_iL_a=a_1K_f(\tilde{x},x_1)+a_2K_f(\tilde{x},x_2)+...+a_kK_f(\tilde{x},x_k)$$ Recall the function (58): $$K_f(\tau)=\lim_{n,t\rightarrow\infty}(\parallel\tau\parallel^2\ln(\frac{\parallel\tau\parallel^2}{t})+n)$$<br/><br/> where $$\parallel\tau\parallel$$ – the norm of the vector $$\tau$$, $$\tau\in R^n$$. If the obtained autocorrelation function (58) is used, we arrive at the following system of equations: $$\begin{cases} a_1K_f(x_1-x_1)+a_2K_f(x_1-x_2)+...+a_kK_f(x_1-x_k)=K_f(\tilde{x}-x_1)\\ a_1K_f(x_2-x_1)+a_2K_f(x_2-x_2)+...+a_kK_f(x_2-x_k)=K_f(\tilde{x}-x_2)\\ \vdots\\ a_1K_f(x_k-x_1)+a_2K_f(x_k-x_2)+...+a_kK_f(x_k-x_k)=K_f(\tilde{x}-x_k)\\\end{cases}$$ As can be seen from (79), this is a system of equations whose left side coincides with (59), but whose right side contains the values of the autocorrelation function instead of the values at the interpolation nodes. The expression for the variance $$D_a$$: $$D_a=a_1K_f(\tilde{x}-x_1)+a_2K_f(\tilde{x}-x_2)+...+a_kK_f(\tilde{x}-x_k)$$ Then the variance of the set of realizations that satisfy the interpolation nodes: $$D_{\tilde{F}}(\tilde{x})=n-D_a$$ $$\delta_{\tilde{F}}(\tilde{x})=\sqrt{n-D_a}$$ $$K_f(\tau)=C_K(\parallel\tau\parallel^2\ln(\frac{\parallel\tau\parallel^2}{t})+n)$$<br/><br/> where $$C_K$$ – the calibration coefficient. Let us calculate the value of the calibration coefficient, for given $$t$$ and $$n$$, such that the variance at unit distance from the node equals the desired value $$D^1$$. From (79) we obtain: $$a_1K_f(0)=K_f(1)$$ Substituting (83) into (84): $$a_1 C_K n=C_K(\ln(\frac{1}{t})+n)$$ Express $$a_1$$: $$a_1=\frac{\ln(\frac{1}{t})+n}{n}$$ Then: $$D_a=C_K\frac{(\ln(\frac{1}{t})+n)^2}{n}$$ Express the desired variance at unit distance: $$D^1=K_f(0)-D_a$$ Performing the transformations, we obtain: $$D^1=C_K\frac{2n-\ln(t)}{n}\ln(t)$$ From which the calibration coefficient can be calculated: $$C_K=\frac{D^1n}{(2n-\ln(t))\ln(t)}$$ <br/> If, instead of the unit distance, it is more convenient to take another calibration distance $$d$$ and estimate the variance $$D^d$$ there, calibration is also possible. Performing transformations similar to (84) - (90), we obtain an expression for the calibration coefficient: $$C_K=\frac{D^dn}{(2n-d^2\ln(\frac{t}{d^2}))d^2\ln(\frac{t}{d^2})}$$ Example:<br/> Consider the case of one-dimensional interpolation. Suppose that interpolation is performed in the range from 0 to 10. We use the following values for the function (83): $$t=10^6,\;n=10^6,\;D^1=1$$. Compute the calibration coefficient: $$C_K=0.036191456826998$$
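The calibration reduces to a single closed-form formula. As a quick numerical check, the following sketch (a Python transcription of expression (90) and its generalization for an arbitrary calibration distance $$d$$; the function name is ours) reproduces the coefficient from the example:

```python
import math

def calibration_coefficient(t, n, D=1.0, d=1.0):
    """Calibration coefficient C_K: expression (90) for d = 1,
    and its generalization for an arbitrary calibration distance d.
    D is the desired variance of realizations at distance d from the node."""
    u = d * d * math.log(t / (d * d))   # d^2 * ln(t / d^2); equals ln(t) for d = 1
    return D * n / ((2 * n - u) * u)

# One-dimensional example from the text: t = n = 10^6, D^1 = 1
print(calibration_coefficient(t=1e6, n=1e6))  # ≈ 0.036191456826998
```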
 * In the above case, the solution of the interpolation problem (58) - (60) means finding the most probable realization of a random function satisfying the interpolation nodes. However, the set of realizations that satisfy the interpolation nodes is infinite. It is necessary to characterize this set in order to pass from multidimensional interpolation to approximation.
 * Express the variance of the random function (3) at some arbitrary point, knowing that the variance of the random variables in (3) is equal to one:
 * As can be seen from (60), the variance of the random function (3) at the point $$x$$ equals the value of the correlation function taken at two identical values of this point. For a random function with the autocorrelation function (51), (57) or (58), the variance is the same at all points and equal to the value at zero. For (58) the variance equals $$n$$:
 * Because in (58) the value $$n$$ tends to infinity, in the limit the variance of this random function is infinite.
 * Let the sequence $$x_1,x_2,...,x_k\;(x_i\in R^n)$$ and the corresponding sequence $$y_1,y_2,...,y_k\;(y_i\in R)$$ – are the interpolation nodes.
 * Denote the sequence $$(\varphi_1(x_i),\varphi_2(x_i),...,\varphi_m(x_i)),\;i=1,...,k$$, as $$L_i$$.
 * Then the vector $$\tilde{L}$$ can be represented as the sum of two perpendicular vectors:
 * Any realization of the random vector $$V$$ can also be decomposed into two perpendicular vectors:
 * The coefficients $$a_i^*$$ from (68) are the coefficients $$q_i$$ from (10) or (15).
 * The vectors $$V_{a^*}$$ and $$V_{b^*}$$ are perpendicular and sum to the vector $$V$$, which has a multivariate normal distribution. Therefore, using the properties of the normal distribution, (66) can be considered as an expansion of the original random function into the sum of two independent random functions $$F_a(\tilde{x})$$ and $$F_b(\tilde{x})$$.
 * Thus, the random function (3) can be represented as the sum of two independent random functions. $$F_a(\tilde{x})$$ is a random function whose realizations are the most probable realizations of the original random function (3), computed by (15) and (16). $$F_b(\tilde{x})$$ is a random function representing the difference ("error") between the most probable realization and the actual realization of the random function, which is unknown.
 * If $$F_b(\tilde{x})$$ is considered as a random variable at a particular point $$\tilde{x}$$, it is normally distributed with zero mean.
 * Since the expansion (64) - (65) does not depend on the specific values at the interpolation nodes, the probable interpolation error $$F_b(\tilde{x})$$ at any given $$\tilde{x}$$ does not depend on the values at the interpolation nodes, but does depend on the coordinates of the nodes themselves.
 * The standard deviation of the set of realizations that satisfy the interpolation nodes:
 * As noted earlier, multiplying the functions in equations (15) - (16) or (58) - (60) by any finite non-zero coefficient does not change the most probable realization.
 * However, the variance is affected by the value of the autocorrelation function at zero. In addition, as will be seen below, this affects the transition to the multidimensional approximation problem.
 * Since multiplying the autocorrelation function by a finite non-zero coefficient has no effect on the interpolation results, the variance of the autocorrelation function can be "calibrated" in accordance with a priori preferences.
 * The calibration can be carried out in various ways. Below is one of the options.
 * The calibration can be performed on the basis of a priori preferences by specifying a coefficient of the autocorrelation function.
 * Suppose the value of a realization of the random function is known at a single interpolation node (Fig. 2). If it is possible to estimate the variance of realizations at unit distance from the node, then calibration can be performed.
 * Let the a priori random function satisfy the requirement that the variance of the realizations passing through the node equals some desired value $$D^1$$ at unit distance from the known node.
 * Assume that the function (58) is used with given values of $$t$$ and $$n$$. Introduce the calibration coefficient:
 * A one-dimensional interpolation example is shown in Figure 3. The result was obtained using the function (83) and equations (59) - (60). However, as mentioned above, the calibration coefficient $$C_K$$ does not affect the interpolation result.
 * Let us find the standard deviation of the random function that represents the subset of realizations of the random function satisfying the interpolation nodes (expressions (79) - (82)). The result is shown in Figure 4.
 * The random function (73) at an arbitrary $$\tilde{x}$$ is a random variable with a normal distribution. With a known standard deviation, the resulting set of realizations can be displayed graphically (Fig. 5).
 * As seen from Figure 4, the standard deviation at the interpolation nodes is zero, which matches expectations, since the values of the unknown function at these nodes are known and there can be no uncertainty.
 * Above was an example of one-dimensional interpolation and, for this particular case, the calculation of the standard deviation over the remaining set of realizations satisfying the nodes. The one-dimensional case was considered because of the clarity of its graphical representation; the proposed transformations place no restriction on the dimension of the interpolation.
 * If such an interpolation method is considered as machine learning, then calculating the standard deviation $$\delta_{\tilde{F}}(\tilde{x})$$ makes it possible not only to compute the values of the output variables, but also to evaluate their reliability, i.e. the potential spread of their values.
 * Although computational experiments strongly support the efficiency of the method using $$t=10^6,\;n=10^6$$, the question of the impact of these values on the interpolation error, compared with the calculated "ideal function" (57) – (58), remains open.
 * Obviously, this error is associated with the deviation of the spectrum of the autocorrelation function used from the reference spectrum (49) – (50), which can also distort the resulting interpolant in the same frequency range. The main distortion lies near frequencies $$\omega\approx\sqrt{n}$$ or $$\omega\approx\sqrt{t}$$, which for $$t=10^6,\;n=10^6$$ and the interpolation domain considered should not significantly affect the results.
 * However, the question of this error remains open; a separate study is planned to be devoted to it in the future.
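The interpolation procedure described above can be sketched numerically. The kernel (83), the system (59) with right-hand side $$y_i$$, and the interpolant (60) are taken from the text; the NumPy realization and the node positions and values are our illustrative assumptions:

```python
import numpy as np

def K_f(tau, C_K, t, n):
    """Autocorrelation function (58)/(83); the limit at tau = 0 is C_K * n."""
    r2 = float(np.dot(tau, tau))
    return C_K * n if r2 == 0.0 else C_K * (r2 * np.log(r2 / t) + n)

def fit(nodes, y, C_K, t, n):
    """Solve the linear system (59) for the coefficients q_1..q_k."""
    k = len(nodes)
    G = np.array([[K_f(nodes[i] - nodes[j], C_K, t, n) for j in range(k)]
                  for i in range(k)])
    return np.linalg.solve(G, y)

def predict(x, nodes, q, C_K, t, n):
    """Most probable realization (60): f*(x) = sum_i q_i K_f(x - x_i)."""
    return float(sum(qi * K_f(x - xi, C_K, t, n) for qi, xi in zip(q, nodes)))

# One-dimensional example on [0, 10], values as in the calibration example
t, n, C_K = 1e6, 1e6, 0.036191456826998
nodes = [np.array([v]) for v in [0.0, 2.0, 4.0, 7.0, 10.0]]
y = np.array([0.0, 1.0, 0.5, -1.0, 2.0])       # hypothetical node values
q = fit(nodes, y, C_K, t, n)
# The interpolant reproduces the nodes (up to numerical error)
print([round(predict(xi, nodes, q, C_K, t, n), 6) for xi in nodes])
```

Despite the kernel matrix entries being dominated by the constant term $$C_K n$$, the system remains solvable in double precision for well-separated nodes.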

Multidimensional approximation.
$$F(x)=\sum^{m}_{i=1} {V_i \varphi_i (x)}+\sum^{k}_{j=1} V_{(m+j)}E_j(x)$$ where $$E_j(x)=\begin{cases} s,\;x=x_j\\ 0,\;x\ne x_j\\\end{cases}$$<br/> $$x_j\in Z^n,\;j=1,..,k$$ – updated coordinates of the interpolation nodes. Expressions (91) and (92) completely mimic the presence of errors in the values at the interpolation nodes. Using the random function (91) is equivalent to the presence of noise in the interpolation values with a normal distribution and standard deviation equal to $$s$$. At the same time, the functions $$E_j(x)$$ can be used as additional coordinate functions in expression (3). All the transformations (3) - (16) can then be performed without changes. The changes affect only the correlation function; denote it as $$\hat{K_f}(x_1,x_2)$$: $$\hat{K_f}(x_1,x_2)=\sum^{m}_{i=1}\varphi_i(x_1)\varphi_i(x_2)+\sum^{k}_{j=1}E_j(x_1)E_j(x_2)$$<br/> $$x_1,x_2\in Z^n$$ – some arbitrary values. <br/> $$\hat{K_f}(x_i,x_j)=\begin{cases} K_f(x_i,x_j)+s^2,\;x_i=x_j\\ K_f(x_i,x_j),\;x_i\ne x_j\\\end{cases}$$ <br/> $$\hat{K_f}(x_1,x_2)=K_f(x_1,x_2)$$ For the autocorrelation function (58) or (83) we obtain the system of equations: <br/> $$\begin{cases} q_1(K_f(x_1-x_1)+s^2)+q_2K_f(x_1-x_2)+\;...\;+q_kK_f(x_1-x_k)=y_1\\ q_1K_f(x_2-x_1)+q_2(K_f(x_2-x_2)+s^2)+\;...\;+q_kK_f(x_2-x_k)=y_2\\ \vdots \\q_1K_f(x_k-x_1)+q_2K_f(x_k-x_2)+\;...\;+q_k(K_f(x_k-x_k)+s^2)=y_k\\\end{cases}$$ <br/> The equation for the most probable realization remains unchanged: $$f^*(x)=q_1K_f(x-x_1)+q_2K_f(x-x_2)+...+q_kK_f(x-x_k)\;,\;x\in R^n$$ An example of approximation, with a graphic display of the corresponding set of realizations, is shown in Figure 7. All the arguments concerning the variance of the random function (61) - (62) remain valid.
The changes affect only the system of equations (79), which can now be written as (98): $$\begin{cases}a_1(K_f(x_1-x_1)+s^2)+a_2K_f(x_1-x_2)+\;...\;+a_kK_f(x_1-x_k)=K_f(x-x_1)\\ a_1K_f(x_2-x_1)+a_2(K_f(x_2-x_2)+s^2)+\;...\;+a_kK_f(x_2-x_k)=K_f(x-x_2)\\ \vdots \\a_1K_f(x_k-x_1)+a_2K_f(x_k-x_2)+\;...\;+a_k(K_f(x_k-x_k)+s^2)=K_f(x-x_k)\\\end{cases}$$ $$E_j(x)=\begin{cases} S_m(x),\;x=x_j\\ 0,\;x\ne x_j\\\end{cases}$$<br/> where $$S_m(x)$$ – the standard deviation of the noise at $$x$$. Then expression (96) becomes: $$\begin{cases}q_1(K_f(x_1-x_1)+S_m^2(x_1))+q_2K_f(x_1-x_2)+\;...\;+q_kK_f(x_1-x_k)=y_1\\ q_1K_f(x_2-x_1)+q_2(K_f(x_2-x_2)+S_m^2(x_2))+\;...\;+q_kK_f(x_2-x_k)=y_2\\ \vdots \\q_1K_f(x_k-x_1)+q_2K_f(x_k-x_2)+\;...\;+q_k(K_f(x_k-x_k)+S_m^2(x_k))=y_k\\\end{cases}$$ <br/> and expression (98) becomes: $$\begin{cases}a_1(K_f(x_1-x_1)+S_m^2(x_1))+a_2K_f(x_1-x_2)+\;...\;+a_kK_f(x_1-x_k)=K_f(x-x_1)\\ a_1K_f(x_2-x_1)+a_2(K_f(x_2-x_2)+S_m^2(x_2))+\;...\;+a_kK_f(x_2-x_k)=K_f(x-x_2)\\ \vdots \\a_1K_f(x_k-x_1)+a_2K_f(x_k-x_2)+\;...\;+a_k(K_f(x_k-x_k)+S_m^2(x_k))=K_f(x-x_k)\\\end{cases}$$ <br/> Example. Consider the one-dimensional case. Assume that the distribution of the noise (its standard deviation) in the approximation is given by: $$S_m(x)=\frac{x^2}{40},\;x\in R$$ I.e., in the region near zero the noise is almost absent, and it increases with distance.
 * In solving the multidimensional interpolation problem it was assumed that the required realization must exactly match the interpolation nodes. In machine-learning terms, this corresponds to training with zero training error. In many cases it is not required to reproduce the training set exactly, and doing so may even harm the generalization ability: the data may be inaccurate, contain errors, or even be contradictory. Consider the approximation of multidimensional data.
 * Suppose that the values at the interpolation nodes differ from the values of the realization by some random variable.
 * Let the sequence $$x_1,x_2,...,x_k$$ and the corresponding sequence $$y_1,y_2,...,y_k$$ be the interpolation nodes. Assume that the node coordinates $$x_1,x_2,...,x_k$$ are complex numbers to which an infinitesimal imaginary part has been added. The node coordinates then cannot be exactly equal either to each other or to any real number.
 * In this case, expression (3) can be written as follows:
 * In (91), random variables and a set of functions $$E_j(x)$$ have been added; these functions take the value $$s$$ at the corresponding interpolation nodes and zero at all other points.
 * Then the correlation function at the interpolation nodes $$x_i,x_j$$ can be expressed as:
 * For arbitrary real values $$x_1,x_2\in R^n$$ the correlation function remains unchanged (14).
 * Using (94), we find that accounting for noise in the interpolation values with a normal distribution and standard deviation $$s$$ is equivalent to adding the value $$s^2$$ to the main diagonal of system (15).
 * It should be noted that the standard deviation of the realizations satisfying the nodes (displayed in Figure 7) reflects only the distribution of the generating realizations of the random function, without regard to possible errors in their values.
 * In expression (92) the functions $$E_j(x)$$ take a value equal to $$s$$ or zero. However, if the noise distribution of the approximation is known, this distribution can be accounted for in (92).
 * Figure 8 shows an approximation using equations (100) and the expression for the standard deviation (102).
 * Figure 9 for a visual comparison with Figure 8 displays the set of realizations, including the noise (102).
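The approximation scheme above differs from interpolation only by the diagonal term $$S_m^2(x_i)$$ in the system (100). A small NumPy sketch using the noise model (102); the node positions and values are invented for illustration:

```python
import numpy as np

def K_f(tau, C_K=0.036191456826998, t=1e6, n=1e6):
    """Autocorrelation function (83); the limit at tau = 0 is C_K * n."""
    r2 = float(np.dot(tau, tau))
    return C_K * n if r2 == 0.0 else C_K * (r2 * np.log(r2 / t) + n)

def fit_approx(nodes, y, S_m):
    """System (100): kernel matrix plus S_m(x_i)^2 on the main diagonal."""
    k = len(nodes)
    G = np.array([[K_f(nodes[i] - nodes[j]) for j in range(k)] for i in range(k)])
    G += np.diag([S_m(xi) ** 2 for xi in nodes])
    return np.linalg.solve(G, y)

def f_star(x, nodes, q):
    """Most probable realization (97): unchanged by the noise model."""
    return float(sum(qi * K_f(x - xi) for qi, xi in zip(q, nodes)))

# Noise model (102) from the text: almost no noise near zero, growing with x
S_m = lambda x: float(x[0]) ** 2 / 40.0
nodes = [np.array([v]) for v in [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]]
y = np.array([0.0, 0.9, 0.1, -0.8, 0.3, 1.5])   # hypothetical noisy values
q = fit_approx(nodes, y, S_m)
# At x = 0 the noise model gives S_m = 0, so the approximant still passes
# through that node; far from zero it may deviate from the noisy values.
print(abs(f_star(nodes[0], nodes, q) - y[0]))
```

By construction $$f^*(x_i)=y_i-S_m^2(x_i)\,q_i$$, so wherever $$S_m$$ vanishes the approximant reproduces the node exactly.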

Conclusions.
The arguments and transformations presented show that interpolation and approximation can be generalized to the theory of random functions as a search for the most probable realizations satisfying the information about the nodes. The result is a method that can be viewed as a method of "machine learning" with an exact solution, ensuring results that are optimal by the criterion of probability, subject to the assumptions made in the transformations.

The method of machine learning.
Write down some key expressions obtained in the first part as a complete method. The function that relates the input and output values of the training set is determined by expression (103): <br/> $$f^*(x)=q_1K_f(x-x_1)+q_2K_f(x-x_2)+...+q_kK_f(x-x_k)\;,\;x\in R^n$$ <br/> The function $$K_f(\tau)$$ in (103) is defined as (104): <br/> $$K_f(\tau)=C_K(\parallel\tau\parallel^2\ln(\frac{\parallel\tau\parallel^2}{t})+n)$$ <br/> where $$C_K,\;t,\;n\;$$ - coefficients <br/>$$\parallel\tau\parallel$$ - norm of the vector $$\tau$$ <br/> The coefficients $$q_1,q_2,...,q_k$$ in (103) are computed from the system of linear equations (105): <br/><br/> $$\begin{cases} q_1(K_f(x_1-x_1)+S_m^2(x_1))+q_2K_f(x_1-x_2)+\;...\;+q_kK_f(x_1-x_k)=y_1\\ q_1K_f(x_2-x_1)+q_2(K_f(x_2-x_2)+S_m^2(x_2))+\;...\;+q_kK_f(x_2-x_k)=y_2\\ \vdots \\q_1K_f(x_k-x_1)+q_2K_f(x_k-x_2)+\;...\;+q_k(K_f(x_k-x_k)+S_m^2(x_k))=y_k\\\end{cases}$$<br/><br/> where $$S_m(x)$$ - standard deviation of the noise at $$x$$ <br/> Now consider the coefficients in expression (104). The "calibration" coefficient $$C_K$$ is associated with the properties of the a priori random function. Although random functions are not used directly in the representation (103) - (105), these expressions are derived on their basis. (If $$S_m(x)=0$$, the value of $$C_K$$ can be taken arbitrarily: it cancels out when solving the system (105) and evaluating the function (103).)
The calculation procedure for $$C_K$$ (proposed version): <br/> $$C_K=\frac{D^1 n}{(2n-\ln{(t)})\ln{(t)}}$$ <br/> $$C_K=\frac{D^dn}{(2n-d^2\ln(\frac{t}{d^2}))d^2\ln(\frac{t}{d^2})}$$ <br/> $$\delta_{\tilde{F}}(x)=\sqrt{(n-D_a)}$$ $$D_a=a_1K_f(x-x_1)+a_2K_f(x-x_2)+...+a_kK_f(x-x_k)$$ <br/> $$\begin{cases}a_1(K_f(x_1-x_1)+S_m^2(x_1))+a_2K_f(x_1-x_2)+\;...\;+a_kK_f(x_1-x_k)=K_f(x-x_1)\\ a_1K_f(x_2-x_1)+a_2(K_f(x_2-x_2)+S_m^2(x_2))+\;...\;+a_kK_f(x_2-x_k)=K_f(x-x_2)\\ \vdots \\a_1K_f(x_k-x_1)+a_2K_f(x_k-x_2)+\;...\;+a_k(K_f(x_k-x_k)+S_m^2(x_k))=K_f(x-x_k)\\\end{cases}$$ $$\delta_{\tilde{F}}(x)=\sqrt{(n-D_a+S_m^2(x))}$$ <br/> Example. Consider the one-dimensional case. $$S_m(x)=\frac{x^2}{40},\;x\in R$$
 * Let the sequence $$x_1,x_2,...,x_k\;(x_i\in R^n)$$ be the set of input vectors for training, and let the corresponding sequence $$y_1,y_2,...,y_k\;(y_i\in R)$$ be the set of output values. If the output is not a single value but a vector, the transformation can be applied sequentially to each of the output parameters.
 * The method allows determining the function that relates the input and output values. It also allows, for arbitrary input values, determining the standard deviation of the errors.
 * The values of $$S_m(x)$$ at the interpolation nodes define a priori the anticipated level of noise (errors) in the training data, and hence the degree of approximation with which the function (103) reproduces the training data.
 * $$S_m(x)=0$$ corresponds to the special case in which errors or inconsistencies in the data are absent and the desired function must reproduce the training set exactly, i.e. the learning problem becomes a multidimensional interpolation problem.
 * If $$S_m(x)=const>0$$, it is assumed that the training data contain noise uniformly across the whole sample, with a normal distribution and standard deviation equal to this constant. The training task can then be viewed as multidimensional approximation.
 * If the noise level is uneven but its distribution is known, its presence can be specified by the function $$S_m(x)$$ (more precisely, only by its values at the interpolation nodes).
 * The coefficients $$t$$ and $$n$$ in the "ideal case" should be roughly equal and tend to infinity. In actual calculations, values $$t\approx n \approx 10^5 \div 10^6$$ can be used, provided the input values are normalized to the range [-1,1] (the coefficients must be several orders of magnitude greater than the range of variation of the input values). With increasing $$t$$ and $$n$$, the difference between the function (104) and the "ideal function" shifts toward frequencies of order $$\omega\approx\sqrt{t}\approx\sqrt{n}$$ (in the spectral decomposition of the desired function). Thus, the difference has less impact on the learning results in the region of the parameter space where the training set is located.
 * Calibration is required to find a balance between possible errors, inaccuracies and contradictions in the training data and the possible nonlinearity of the function.
 * Suppose it is known that the unknown function (103) passes through a node. For calibration, the desired level of variance $$D^1$$ of the set of possible realizations at unit distance from the node can be set a priori. With increasing $$D^1$$, the calculations (103) - (105) give greater weight to possible nonlinearity of the function; with decreasing $$D^1$$, to the presence of inaccuracies in the training data. With a chosen $$D^1$$, the required coefficient $$C_K$$ can be calculated:
 * If, instead of the unit distance, it is more convenient to take another calibration distance $$d$$ and estimate the variance $$D^d$$ there, calibration is also possible. Performing transformations similar to (84) - (90), we obtain an expression for the calibration coefficient:
 * Figure 11 shows an example of one-dimensional interpolation ($$S_m(x)=0$$). As can be seen from the figure, the interpolation is of high quality (no oscillations, which, for example, often occur when using the Lagrange interpolation polynomial).
 * If $$S_m(x)=const>0$$, the interpolation is easily converted into an approximation – Fig. 12.
 * Figs. 11 - 14 are examples of interpolation and approximation in the one- and two-dimensional cases, for illustration of the results. Expressions (103) - (105) may be used without restriction for a space of any dimension.
 * To determine the standard deviation of the errors, the following expression was obtained:
 * The values of $$D_a$$ are determined by the expression:
 * The coefficients $$a_1,a_2,...,a_k$$ in (108) are the solution of the system of equations (109):
 * It should be noted that when calculating the standard deviation, for each value of $$x$$ it is necessary to calculate an individual value of $$D_a$$ and to re-solve the system of equations (109) (although the calculations can be simplified by computing the inverse matrix once).
 * The expected error of the function (103), as a random variable, has a normal distribution with the standard deviation calculated from (107) and zero expectation. Knowing this, it can be displayed graphically, as shown in Figure 17.
 * In other words, the possible error can be understood as the distribution of values of the realizations of the random function that satisfy the training data.
 * The standard deviation in (107) reflects the range of values associated with the possible distribution of realizations. If possible noise in the data itself needs to be taken into account, the standard deviation can be calculated using the expression:
 * Assume that the distribution of the noise (its standard deviation) in the approximation is given by:
 * I.e., in the region near zero the noise is almost absent, and it increases with distance.
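The whole method of this section fits in a few lines: train once by solving (105), evaluate (103) for predictions, and re-use the inverted matrix for the error estimate (107) - (109), as suggested above. A NumPy sketch under the following assumptions: the node data are invented, and $$K_f(0)=C_K n$$ from (104) is used in place of $$n$$ in (107), since with the calibrated function this is the variance that vanishes at the nodes; this reading is ours:

```python
import numpy as np

C_K, t, n = 0.036191456826998, 1e6, 1e6    # calibrated as in the text

def K_f(tau):
    """Function (104); the limit at tau = 0 is C_K * n."""
    r2 = float(np.dot(tau, tau))
    return C_K * n if r2 == 0.0 else C_K * (r2 * np.log(r2 / t) + n)

def train(X, y, S_m):
    """Coefficients q from system (105); the inverse is kept for re-use."""
    k = len(X)
    G = np.array([[K_f(X[i] - X[j]) for j in range(k)] for i in range(k)])
    G += np.diag([S_m(x) ** 2 for x in X])
    G_inv = np.linalg.inv(G)               # invert once, reuse for every query
    return G_inv @ y, G_inv

def f_star(x, X, q):
    """Most probable output value, expression (103)."""
    return float(sum(qi * K_f(x - xi) for qi, xi in zip(q, X)))

def std_dev(x, X, G_inv, S_m):
    """Standard deviation of the expected error, expressions (107)-(109)."""
    kvec = np.array([K_f(x - xi) for xi in X])
    a = G_inv @ kvec                       # system (109), reusing the inverse
    D_a = float(a @ kvec)                  # expression (108)
    # K_f(0) = C_K * n plays the role of n in (107) for the calibrated kernel
    return float(np.sqrt(max(C_K * n - D_a, 0.0) + S_m(x) ** 2))

S_m = lambda x: 0.0                        # interpolation case, S_m(x) = 0
X = [np.array([v]) for v in [0.0, 3.0, 6.0, 10.0]]
y = np.array([0.0, 1.0, -0.5, 2.0])        # hypothetical training outputs
q, G_inv = train(X, y, S_m)
print(f_star(X[1], X, q))                  # ≈ 1.0, reproduces the node value
print(std_dev(X[1], X, G_inv, S_m))        # small at a node, grows between nodes
```

Setting `S_m` to a positive constant, or to the function $$x^2/40$$ from the example, turns the same code into the approximation variant without any other change.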