Wikipedia:Reference desk/Archives/Mathematics/2020 May 19

= May 19 =

Why isn't the multivariable antiderivative defined?
The antiderivative is defined and used for functions R->R. Why isn't the antiderivative defined for functions R^m->R^n, in a way similar to how the derivative is? Thanks.--Exx8 (talk) 19:36, 19 May 2020 (UTC)
 * Courtesy link: Multiple integral:
 * Since the concept of an antiderivative is only defined for functions of a single real variable, the usual definition of the indefinite integral does not immediately extend to the multiple integral.
 * I await my betters to explain the why. -- ToE 20:11, 19 May 2020 (UTC)
 * Yeah, I read it, but I see no reason why.--Exx8 (talk) 20:48, 19 May 2020 (UTC)
 * I think you mean you want to solve an ordinary differential equation.  The article Picard–Lindelöf theorem starts out technical, but the example further down can be applied to multi-dimensional systems straightforwardly.  In other words, what you want exists and is important, but is usually not called an integral. 2601:648:8202:96B0:3567:50D5:8BFF:4588 (talk) 22:15, 19 May 2020 (UTC)
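To illustrate the ODE viewpoint mentioned above: a multi-dimensional system y' = f(t, y) can be integrated numerically from an initial condition. The following is a minimal sketch (the forward-Euler helper and the toy oscillator system are my own illustration, not taken from the article):

```python
import math

def euler(f, y0, t0, t1, n=20000):
    """Forward Euler for y' = f(t, y), where y is a point in R^m."""
    h = (t1 - t0) / n
    t, y = t0, list(y0)
    for _ in range(n):
        dy = f(t, y)
        y = [yi + h * di for yi, di in zip(y, dy)]
        t += h
    return y

# Toy two-dimensional system: y1' = y2, y2' = -y1.
# From (1, 0) the exact solution is (cos t, -sin t).
f = lambda t, y: (y[1], -y[0])
y1, y2 = euler(f, (1.0, 0.0), 0.0, 1.0)
# y1 should be close to cos(1), y2 close to -sin(1)
```

This is "antidifferentiation" in the sense the IP describes: recovering a function from its derivative data, one dimension per component, without ever calling the result an integral.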
 * I really see no connection. Why isn't there, for every f in C, an F such that F(a)-F(b)=integral(f,a,b)?--Exx8 (talk) 07:18, 20 May 2020 (UTC)


 * An antiderivative of a given function $$f$$ is any function $$F$$ such that $$F$$ has a derivative, which then has to be equal to $$f$$. What is the derivative of a function $$F:\mathbf{R}^m\rightarrow \mathbf{R}^n$$? If $$m=1$$, there is no issue; it can be defined in a straightforward way (see Derivative § Vector-valued functions). But what if $$m>1$$? What is "the derivative" of a function in several variables? There is a notion of total derivative. Using the Jacobian-matrix formulation, the total derivative of a function $$F:\mathbf{R}^m\rightarrow \mathbf{R}^n$$ is some function $$DF:\mathbf{R}^m\rightarrow \mathbf{R}^{n\times m}$$, where $$DF(x,y,\ldots)\in\mathbf{R}^{n\times m}$$ is an $$n\times m$$ matrix. A Jacobian matrix satisfies rather specific conditions; it is rare for a matrix-valued function to arise naturally that happens to be a Jacobian other than as the Jacobian matrix of a given function, so defining a notion of "anti-Jacobian" is not very interesting. So we would need to define some new notion of "derivative" of a function in several variables. Indeed, one could define, say, the "star derivative" of $$F:\mathbf{R}^m\rightarrow \mathbf{R}^n$$ as the function $$D^\ast F:\mathbf{R}^m\rightarrow \mathbf{R}^n$$, if it exists, defined by $$D^\ast F(x_1,\ldots,x_m) = \tfrac{\partial}{\partial x_1}\cdots\tfrac{\partial}{\partial x_m}F(x_1,\ldots,x_m)$$. Then it is meaningful to seek the "anti-star derivative" (or "star antiderivative"?) of a given function, which, if it exists, will be the multiple indefinite integral. The reason that these notions have not been defined is probably that they are insufficiently useful for doing interesting mathematics. --Lambiam 07:47, 20 May 2020 (UTC)
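The "star derivative" above can be checked numerically. A minimal sketch in Python (the example function and the finite-difference helper are my own, for illustration): for F(x, y) = x²y², the mixed partial ∂²F/∂x∂y is 4xy, so x²y² is a "star antiderivative" of 4xy.

```python
def F(x, y):
    return x**2 * y**2

def star_derivative(F, x, y, h=1e-3):
    """Mixed partial d^2 F / (dx dy) via a central finite difference."""
    return (F(x + h, y + h) - F(x + h, y - h)
            - F(x - h, y + h) + F(x - h, y - h)) / (4 * h * h)

val = star_derivative(F, 1.0, 2.0)   # should be close to 4*1*2 = 8
```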
 * Actually, for every invertible function there is an M such that MD=I locally. I would expect that F to be exactly M^(-1)f, locally.--Exx8 (talk) 08:12, 20 May 2020 (UTC)
 * I'm going to concur with Lambiam on this. If n=1 then the derivative would basically be the gradient (See the article for how they aren't quite the same.) Only certain functions can be gradients, so the gradient as a "function" isn't invertible, or at least you'd have to restrict its range to make it work (see Closed and exact differential forms.) It might be possible to define an "antigradient" with the right conditions and caveats, but apparently it's not really that useful to do so and generally mathematicians don't look for things to define just to make new definitions. --RDBury (talk) 09:47, 20 May 2020 (UTC)
 * PS. I just did a quick search, and the word antigradient does appear in a couple of articles, so the idea does make sense in certain contexts. There doesn't seem to be much information on this in general, though. --RDBury (talk) 09:55, 20 May 2020 (UTC)

I do understand that it isn't defined because it can't be defined generally. The question is why. Why can't I define a function F such that F(a)-F(b)=integral(f,a,b) for every a and b?--Exx8 (talk) 12:21, 20 May 2020 (UTC)


 * I understand the meaning of $$F(x_{2},y_{2})-F(x_{1},y_{1})$$. But what is the meaning of $${\textstyle\int} (f,(x_{1},y_{1}),(x_{2},y_{2}))$$? If it is $$\textstyle\int_{x_1}^{x_2} \int_{y_1}^{y_2} f~dy~dx$$, then – except for uninteresting cases like  $$f(x,y)=0$$ everywhere – no such $$F$$ exists. For a simple example, take the function defined by $$f(x,y)=4xy$$. Its double integral is $$(x_2^2 - x_1^2)(y_2^2 - y_1^2)$$. If $$F$$ exists, it is (like an antiderivative) defined up to an additive constant (see constant of integration), so we may fix $$F$$ by setting $$F(0,0)=0$$. So assume $$F$$ with the desired property exists. Then we find $$F(x,y)=F(x,y)-F(0,0)=$$ $$(x^2 - 0^2)(y^2 - 0^2)=$$ $$x^2 y^2.$$ Now we compute $$F(2,2)-F(1,1)=2^2{\cdot}2^2 - 1^2{\cdot}1^2=15$$. However, the double integral evaluates to $$(2^2 - 1^2) (2^2 - 1^2) = 9$$. These two outcomes are not the same, so no such $$F$$ exists.  --Lambiam 14:40, 20 May 2020 (UTC)
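Lambiam's counterexample can be verified numerically. A minimal sketch in Python (the helper names are mine, for illustration): the only candidate F consistent with F(0,0) = 0 is F(x, y) = x²y², and it fails to reproduce the double integral between (1,1) and (2,2).

```python
def double_integral(x1, y1, x2, y2):
    """Exact double integral of f(x, y) = 4*x*y over [x1, x2] x [y1, y2]."""
    return (x2**2 - x1**2) * (y2**2 - y1**2)

def F(x, y):
    # The only candidate consistent with F(0, 0) = 0 is
    # F(x, y) = double_integral(0, 0, x, y) = x^2 * y^2.
    return x**2 * y**2

diff = F(2, 2) - F(1, 1)                 # 16 - 1 = 15
integral = double_integral(1, 1, 2, 2)   # (4 - 1) * (4 - 1) = 9
# 15 != 9, so no function F with the desired property exists.
```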


 * You can do that in a gradient field-- otherwise the value of the line integral will be path dependent. 2601:648:8202:96B0:3567:50D5:8BFF:4588 (talk) 01:12, 21 May 2020 (UTC)
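The IP's point about gradient fields can be checked numerically: for a gradient field, the line integral of ∇F·dr depends only on the endpoints. A sketch in Python (the potential F(x, y) = x²y and the midpoint-rule helper are my own illustration):

```python
def work_integral(grad, points, n=10000):
    """Approximate the line integral of grad . dr along a polyline (midpoint rule)."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = (x1 - x0) / n, (y1 - y0) / n
        for k in range(n):
            t = (k + 0.5) / n
            x = x0 + t * (x1 - x0)
            y = y0 + t * (y1 - y0)
            gx, gy = grad(x, y)
            total += gx * dx + gy * dy
    return total

# Gradient of the illustrative potential F(x, y) = x^2 * y.
grad_F = lambda x, y: (2 * x * y, x * x)

W1 = work_integral(grad_F, [(0, 0), (1, 0), (1, 2)])
W2 = work_integral(grad_F, [(0, 0), (0, 2), (1, 2)])
# Both approximate F(1, 2) - F(0, 0) = 2: path independence for a gradient field.
```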
 * This corresponds to the interpretation of "integral(f,a,b)" as a line integral $$\textstyle\int_{\mathcal{C}} f(\mathbf{r})\, ds$$ along a path $$\mathcal{C}$$ connecting $$\mathbf{a}$$ to $$\mathbf{b}$$. To illustrate the path-dependence, take $$f(x,y) = xy$$, $$\mathbf{a}= (0,0)$$ and $$\mathbf{b}= (1,2)$$. We consider two paths, both of which are two-segment polylines: $$\mathcal{C}_1$$ going from $$\mathbf{a}$$ to $$(1,0)$$ to $$\mathbf{b}$$, and $$\mathcal{C}_2$$ going from $$\mathbf{a}$$ to $$(0,2)$$ to $$\mathbf{b}$$. Then $$\textstyle\int_{\mathcal{C}_1} f(\mathbf{r})\, ds = 2$$, while $$\textstyle\int_{\mathcal{C}_2} f(\mathbf{r})\, ds = 1$$. --Lambiam 09:35, 21 May 2020 (UTC)
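Lambiam's two path values can be reproduced numerically; a minimal midpoint-rule sketch in Python (the helper name is mine, for illustration):

```python
import math

def line_integral(f, points, n=10000):
    """Approximate the scalar line integral of f ds along a polyline (midpoint rule)."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        length = math.hypot(x1 - x0, y1 - y0)
        for k in range(n):
            t = (k + 0.5) / n
            x = x0 + t * (x1 - x0)
            y = y0 + t * (y1 - y0)
            total += f(x, y) * length / n
    return total

f = lambda x, y: x * y
I1 = line_integral(f, [(0, 0), (1, 0), (1, 2)])  # path C1: close to 2
I2 = line_integral(f, [(0, 0), (0, 2), (1, 2)])  # path C2: close to 1
```

The two results differ, confirming that the value depends on the path and not just on the endpoints (0,0) and (1,2).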


 * I wonder if any insight might be extracted if we direct our reader to the articles on Calculus of variations or functional variational methods? Once we start doing work on functionals rather than values of functions, the math gets a lot more abstract and the notation gets a lot messier - so we have to be careful about how we use and abuse our terminology.
 * The indefinite integral of a functional is expressible as a functional; we have an article on functional integration; we sometimes see this kind of abstruse notation in some of the more advanced discussions of classical and modern physics. It is possible to define the functional for the indefinite integral of a multi-parameter function.
 * So: it is possible to construct the integral of a function, and to express it in the notation of a functional; it is sometimes useful; but it will really stretch your mind. The derivative, and the antiderivative, must themselves be expressed as functionals (not "functions"); and one can no longer operate in the mind-set where the domain (or the codomain) of that functional is R; rather, the domain is the space of all possible functions; the codomain of the antiderivative is, in the same sense, the space of all possible functions; and in special cases, we can restrict these domains and codomains.
 * It has been a long time, but I seem to recall that all the fourth-year mathematicians at my college - a group of individuals who love mathematics and excel in the pursuit of its finer points - all dreaded that course in mathematical analysis in which their human brains must come to terms with these bizarre ideas about all possible permutations of all possible possibilities, transmogrified by the space of all possible transformative mappings; because, while it may seem that anything is possible, it turns out that real mathematics is about formally and correctly specifying these possibilities.
 * Naturally, the physicists did not so dread this class, because our brains work a little bit differently. There are, of course, at least one or two practical applications for this stuff; the least famous of these, perhaps, is the solution to the infamous brachistochrone, or, "how long will the ball take to roll down this hill?"  Here is a lovely 22-page treatment from the fine folks at the mathematical physics department of CalTech: Variational Calculus - which not only walks you through the math, but formally extends it to multiple dimensions; and then even provides a handy quantitative, numerical goalpost ("15%"), which one may use to decide when to "declare victory."  Knowing-when-to-quit turns out to be an actual, incredibly important theoretical problem for students of the practical application of the functionals that operate over the domain of all other possible functions.
 * I have, many times in the past, recommended the following book to our readers: A Transition to Advanced Mathematics - which I feel will help the university-level student of engineering or science to connect the proverbial dots between their already-developed knowledge of elementary calculus and the more general formalities of modern mathematics.
 * Nimur (talk) 14:29, 21 May 2020 (UTC)