Wikipedia:Reference desk/Archives/Mathematics/2015 July 29

= July 29 =

General solution of heat equation in polar coordinates
As a follow-up to my earlier question, I figured that polar coordinates offer the most promising route, because this region is most easily described in them. First, if the rink is longer than it is wide (as real rinks are), a simple change of variables (scaling) transforms it to the "square" (as long as it is wide) case that I can treat using ordinary polar coordinates. Second, it is easier to deal with edges defined by trigonometric functions than with the square roots you would need in Cartesian coordinates.

Still, to better understand the equation itself, I'm attempting to find a general solution for the circular case, to see an example of a generalized Fourier series. After separation of variables, however, it looks like I must place some artificial restrictions on the angular (theta) component for the solution to be meaningful, and even that turns out to be the least of my problems. Here's my work.

Assume a solution in separated variables $$T(r, \theta, t) = R(r)F(\theta)G(t)$$. Then $$\frac{FG}{r} \frac{\partial}{\partial r} ( r \frac{\partial R}{\partial r}) + \frac{RG}{r^2} \frac{\partial^2 F}{\partial \theta^2} = \frac{RF}{\alpha}\frac{\partial G}{\partial t}$$.

Divide both sides by the product $$RFG$$ and it becomes clear that, since the left side is now a function of r and theta only and the right side a function of t only, both must equal some constant, which I will call $$-\lambda_1$$: $$\frac{1}{Rr} \frac{\partial}{\partial r} ( r \frac{\partial R}{\partial r}) + \frac{1}{Fr^2} \frac{\partial^2 F}{\partial \theta^2} = -\lambda_1 = \frac{1}{G\alpha}\frac{\partial G}{\partial t}$$ (that constant will later be the cause of most of my headaches). We now have for G, $$\lambda_1\alpha G + \frac{dG}{dt} = 0$$, and for R and F, $$\frac{1}{Rr} \frac{d}{dr} ( r \frac{dR}{dr}) + \frac{1}{Fr^2} \frac{d^2 F}{d\theta^2} = -\lambda_1$$. Multiply the latter equation by $$r^2$$ on both sides and subtract the first term on the left-hand side from both sides: $$\frac{1}{F}\frac{d^2F}{d\theta^2} = -\lambda_1r^2 - \frac{r}{R}\frac{d}{dr}(r\frac{dR}{dr})$$. The left side depends only on theta and the right side only on r, so again both must equal a constant $$-\lambda_2$$: $$\frac{1}{F}\frac{d^2F}{d\theta^2} = -\lambda_2 = -\lambda_1r^2 - \frac{r}{R}\frac{d}{dr}(r\frac{dR}{dr})$$. So we have three separate linear ordinary differential equations, one for each of G, F and R:
 * 1) $$\lambda_1\alpha G + \frac{dG}{dt} = 0$$
 * 2) $$\lambda_2 F + \frac{d^2F}{d\theta^2} = 0$$
 * 3) $$r^2\frac{d^2R}{dr^2} + r\frac{dR}{dr} + (\lambda_1r^2 - \lambda_2)R = 0$$
The solution of the first two is of course easy. The third is a bit trickier; the change of variables $$x = \sqrt{\lambda_1}r$$ transforms it into Bessel's equation, and the Bessel function I want is that of the first kind: $$R_{\lambda_1, \lambda_2} = C_0J_{\sqrt{\lambda_2}}(\sqrt{\lambda_1}r)$$. Here is one part I am not sure of: I don't want weird behavior at the origin, so I don't want the Bessel function of the second kind. Is this really wise, though?
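Equation 3 can be sanity-checked numerically. The sketch below (using SciPy; the names `nu` and `lam1` are my own, with `nu` playing the role of $$\sqrt{\lambda_2}$$) verifies that $$R(r) = J_{\sqrt{\lambda_2}}(\sqrt{\lambda_1}\,r)$$ satisfies the radial ODE to floating-point accuracy:

```python
# Numerical check that R(r) = J_nu(sqrt(lambda1) * r) satisfies
#   r^2 R'' + r R' + (lambda1 * r^2 - lambda2) R = 0,  with lambda2 = nu^2.
import numpy as np
from scipy.special import jvp  # n-th derivative of the Bessel function J_nu

nu = 2.0                # Bessel order, so lambda2 = nu**2
lam1 = 3.0              # lambda1 (taken nonnegative so the argument is real)
k = np.sqrt(lam1)

r = np.linspace(0.1, 5.0, 200)
R   = jvp(nu, k * r, 0)          # R(r)   = J_nu(k r)
Rp  = k * jvp(nu, k * r, 1)      # R'(r)  = k * J_nu'(k r)  (chain rule)
Rpp = k**2 * jvp(nu, k * r, 2)   # R''(r) = k^2 * J_nu''(k r)

residual = r**2 * Rpp + r * Rp + (lam1 * r**2 - nu**2) * R
print(np.max(np.abs(residual)))  # ~0 (only floating-point noise)
```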

Proceeding further, it becomes clear that F must be $$2\pi$$-periodic, so that there is no jump discontinuity at $$\theta = 0, 2\pi$$. This rules out $$\lambda_2 < 0$$, because a sum of two real exponentials cannot have this property. So I must have $$F_{\lambda_2} = C_1\sin(\sqrt{\lambda_2}\theta) + C_2\cos(\sqrt{\lambda_2}\theta)$$. For this to be $$2\pi$$-periodic, $$\sqrt{\lambda_2}$$ must be an integer; hence it is a nonnegative integer. This is also good news because the Bessel function of the first kind is now entire (while that of the second kind still has an undesirable singularity).
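The periodicity restriction is easy to illustrate numerically: $$\sin(\nu\theta)$$ repeats after $$2\pi$$ only when $$\nu$$ is an integer (a sketch; `nu` here stands in for $$\sqrt{\lambda_2}$$):

```python
# Illustration: F(theta) = sin(nu * theta) is 2*pi-periodic iff nu is an integer.
import math

def periodicity_defect(nu, theta=0.7):
    """|F(theta + 2*pi) - F(theta)| for F(theta) = sin(nu * theta)."""
    return abs(math.sin(nu * (theta + 2 * math.pi)) - math.sin(nu * theta))

print(periodicity_defect(3.0))   # integer order: defect ~ 0
print(periodicity_defect(2.5))   # non-integer order: defect is O(1)
```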

There's no such restriction on $$\lambda_1$$, although to avoid complex numbers it too would have to be nonnegative.

The issue (along with that of not using Bessel functions of the second kind) is that a Fourier-Bessel series, which I wanted to use for the radial component, can only be used for a Bessel function of a single order, not of variable order. How would I decompose an initial value function using what I obtained here? It would appear that I have to appeal to an "inner product" of these functions involving a double integral over the disk rather than relying on being able to calculate univariate generalized Fourier series in each variable alone. Does anyone know which inner product on the disk these are orthogonal with respect to?--Jasper Deng (talk) 06:34, 29 July 2015 (UTC)
 * Why not call the constants $$-\lambda_1^2$$ and $$-\lambda_2^2$$ to obtain a tiny simplification at no cost? Bo Jacoby (talk) 07:10, 29 July 2015 (UTC).
 * Generally when I separate variables, I have no expectation that the square root is the only way the constants show up. Indeed, for the equation for G there is no square root taken. When I build a generalized Fourier series, I use something other than them anyways. This is mainly a notational convention I was taught to use, especially when I had no expectation of the form of the solutions.--Jasper Deng (talk) 07:18, 29 July 2015 (UTC)
 * When the square roots show up, you may come to regret your choice before submitting the problem to the public:


 * 1) $$\lambda^2\alpha G + \frac{dG}{dt} = 0$$
 * 2) $$\mu^2 F + \frac{d^2F}{d\theta^2} = 0$$
 * 3) $$r^2\frac{d^2R}{dr^2}+r\frac{dR}{dr}+(\lambda^2 r^2-\mu^2)R=0$$
 * 4) $$x = \lambda r$$
 * 5) $$R_{\lambda, \mu} = C_0 J_{\mu}(\lambda r)$$
 * Bo Jacoby (talk) 16:55, 29 July 2015 (UTC).
 * You fed in that the angular component needs to be periodic, which gives a strong restriction on $$\lambda_2$$. You need to do something similar with the radial component, and impose zero boundary conditions (the "free" Bessel functions appear because the Fourier transform on R^2 has a continuous spectrum in the radial directions, but this discretizes when we restrict to a disc). It seems to me that the "natural" Bessel functions for this problem are J_0, because these are the eigenfunctions of the radial Laplacian. The differential operator $$L(f) = r^{-1} (rf')'$$ is self-adjoint with respect to the inner product $$\int_0^1 f(r)g(r)\,r\,dr$$ that one gets by integrating radial functions in polar coordinates. So there's a basis of eigenfunctions, of the form $$J_0(u_n r)$$, where the $$u_n$$ are the positive zeros of J_0 (here I'm assuming that the radius of the disc has been normalized to 1). The general solution will then be of the form $$A+B$$, where A is a particular solution with nonzero boundary conditions (which we can use ordinary Fourier series to write in the form $$J_0(r)\sum c_n e^{-n^2t+in\theta}$$), and B is a solution with zero boundary conditions that will be of a Fourier-Bessel form. I should add that to do this in dimension four and higher, you will use Bessel functions J_{n/2-1}, because these are the eigenfunctions of the radial Laplacian in n dimensions. It seems like we can solve the problem without Bessel functions in three dimensions, since the radial eigenfunctions are basically just exponentials in that case. Sławomir Biały 13:27, 29 July 2015 (UTC)
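The orthogonality claimed above can be checked directly: with $$u_n$$ the positive zeros of $$J_0$$, the functions $$J_0(u_n r)$$ are orthogonal on $$[0,1]$$ with respect to the weight $$r$$ (a SciPy sketch; the helper `inner` is my own):

```python
# Check: the functions J_0(u_n r), with u_n the positive zeros of J_0,
# are orthogonal on [0, 1] with the weight r:
#   integral_0^1 J_0(u_m r) J_0(u_n r) r dr = 0   for m != n.
import numpy as np
from scipy.special import jv, jn_zeros
from scipy.integrate import quad

u = jn_zeros(0, 3)  # first three positive zeros of J_0

def inner(m, n):
    """Weighted inner product <J_0(u_m r), J_0(u_n r)> with weight r."""
    val, _ = quad(lambda r: jv(0, u[m] * r) * jv(0, u[n] * r) * r, 0.0, 1.0)
    return val

print(inner(0, 1))  # ~0: distinct zeros give orthogonal functions
print(inner(0, 0))  # positive: the squared norm, equal to J_1(u_0)^2 / 2
```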
 * Would this not then put further restrictions on $$\lambda_1, \lambda_2$$? I feared that by using a Bessel function of only a single order I could lose some generality. In particular, wouldn't choosing $$J_0$$ imply $$\lambda_2 = 0$$, making my angular component constant? I'm still a beginner at solving in more than one spatial dimension (I have usually not been asked to perform separation of variables in more than one spatial dimension).--Jasper Deng (talk) 20:41, 29 July 2015 (UTC)