Wikipedia:Reference desk/Archives/Mathematics/2013 March 30

= March 30 =

Incorrect proof of boundary property
Hi, I know that in $$R^n$$, given a set $$A$$ equal to the open ball of radius $$r$$ around a point $$x$$, i.e. $$A=B_r(x)$$, the boundary is $$B(A)=\{y:d(x,y)=r\}$$. I know that in a general metric space $$(X,d)$$ with an open ball $$A$$ it is not necessarily the case that $$\{y:d(x,y)=r\}\subset B(A)$$, but I cannot see where my proof below assumes that the metric space is $$R^n$$.

Since $$B(A)=X\setminus(\text{ext}A\cup\text{int}A)$$, $$\text{int}A=A,\text{ and } \{y:d(x,y)>r\}\subset\text{ext}A$$,

$$x\in X\setminus(\text{ext}A\cup\text{int}A) \Rightarrow x\in B(A)$$,

$$x\notin \text{ext}A\text{ and }x\notin \text{int}A\Rightarrow x\in B(A)$$,

$$x\notin \{y:d(x,y)>r\}\text{ and }x\notin \{y:d(x,y)<r\} \Rightarrow x\in B(A)$$,

$$x\in \{y:d(x,y)\leq r\}\text{ and }x\in \{y:d(x,y)\geq r\} \Rightarrow x\in B(A)$$,

$$x\in \{y:d(x,y)=r\} \Rightarrow x\in B(A)$$,

so $$\{y:d(x,y)=r\}\subset B(A)$$,

Help very much appreciated.

Neuroxic (talk) 10:02, 30 March 2013 (UTC)
 * Enlighten me please… what does your italic capital "R" letter denote? A metric ring, I guess? Incnis Mrsi (talk) 11:23, 30 March 2013 (UTC)
 * Oh, I should have said the real numbers. I tried typing in \mathbb{R} but it didn't work so I just went with plain R. Neuroxic (talk) 12:07, 30 March 2013 (UTC)
 * Do you have a counterexample? You could try your method with a concrete example to see where it goes wrong. --Salix (talk): 18:20, 30 March 2013 (UTC)
 * The statement
 * $$x\notin \text{ext}A\text{ and }x\notin \text{int}A\Rightarrow x\in B(A)$$,
 * does not imply
 * $$x\notin \{y:d(x,y)>r\}\text{ and }x\notin \{y:d(x,y)<r\}\Rightarrow x\in B(A)$$, because $$x\notin\{y:d(x,y)>r\}$$ is a weaker condition than $$x\notin \operatorname{ext}A$$.  Sławomir Biały  (talk) 18:31, 30 March 2013 (UTC)
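To see concretely how the inclusion can fail, here is a minimal sketch (the finite set and the discrete metric are an illustrative choice, not from the thread): the sphere $$\{y:d(x,y)=r\}$$ is nonempty while the boundary of the open ball is empty.

```python
# Assumed counterexample: X = {0,1,2,3} with the discrete metric,
# where every subset is clopen, so every boundary is empty.
X = {0, 1, 2, 3}

def d(p, q):
    """Discrete metric: distance 0 to itself, 1 to everything else."""
    return 0 if p == q else 1

x, r = 0, 1
A = {y for y in X if d(x, y) < r}        # open ball B_r(x) = {0}
sphere = {y for y in X if d(x, y) == r}  # {y : d(x,y) = r} = {1, 2, 3}

# Interior: points whose radius-1 ball lies inside A.
# Closure: points at distance 0 from A.  With the discrete metric
# both equal A itself, so the boundary B(A) is empty.
interior = {y for y in X if {z for z in X if d(y, z) < 1} <= A}
closure = {y for y in X if min(d(y, a) for a in A) == 0}
boundary = closure - interior

print(sphere)    # {1, 2, 3}
print(boundary)  # set() -- so the sphere is NOT contained in B(A)
```

The step that breaks down is exactly the one identified above: with the discrete metric, $$\{y:d(x,y)>r\}$$ is empty while $$\operatorname{ext}A$$ is not.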

Treating linear differential operators like matrices
Matrices are linear transformations from $$\mathbb{R}^n$$ to $$\mathbb{R}^m$$. Differential operators like $$\frac{d}{dx}$$ are also linear transformations, this time from differentiable functions to functions. Does it make sense to speak of matrix properties of linear differential operators, like the "determinant" or "transpose" or things like that?

I was experimenting and noticed this example. Suppose you restrict your set of functions to polynomials of degree at most $$n$$ (in this example I will take $$n=2$$).

The derivative operator $$\frac{d}{dx}$$ sends the quadratic

$$ax^2+bx+c$$

to

$$2ax+b$$,

essentially it sends the coefficients

$$a \rightarrow 0$$

$$b \rightarrow 2a$$

$$c \rightarrow b$$

and the matrix

$$\begin{bmatrix}0 & 0 & 0 \\2 & 0 & 0 \\0 & 1 & 0 \end{bmatrix}$$

does exactly the same thing when acting on the vector $$\begin{bmatrix}a \\b \\c \end{bmatrix}$$.

This would suggest $$\det(\frac{d}{dx}) = 0$$ and $$\left(\frac{d}{dx}\right)^T(ax^2+bx+c) = 2bx^2+cx$$.

Is there a way to do this in general?

150.203.115.98 (talk) 12:55, 30 March 2013 (UTC)
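The construction in the question works for any degree; here is a sketch (the helper name `derivative_matrix` is an assumed one) that builds the matrix of $$\frac{d}{dx}$$ in the basis $$(x^n, \dots, x, 1)$$ and reproduces the $$n=2$$ case:

```python
import numpy as np

def derivative_matrix(n):
    """Matrix of d/dx on polynomials of degree <= n, written in the
    basis (x^n, ..., x, 1), acting on coefficient vectors [a_n, ..., a_0]."""
    D = np.zeros((n + 1, n + 1))
    for k in range(n):
        # the coefficient of x^(n-k) is multiplied by its exponent n-k
        # and shifted down one slot, since d/dx x^(n-k) = (n-k) x^(n-k-1)
        D[k + 1, k] = n - k
    return D

D = derivative_matrix(2)
print(D)                             # [[0,0,0],[2,0,0],[0,1,0]] as above

# d/dx (3x^2 + 5x + 7) = 6x + 5, i.e. coefficients (0, 6, 5)
print(D @ np.array([3., 5., 7.]))    # [0. 6. 5.]

print(np.linalg.det(D))              # 0.0, matching det(d/dx) = 0
```

The matrix is strictly lower triangular in this basis, which is why its determinant vanishes for every $$n$$.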


 * You can define the transpose as the formal adjoint of the differential operator. The determinant usually needs regularization before it is well-defined.  See functional determinant.  Sławomir Biały  (talk) 13:08, 30 March 2013 (UTC)
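For concreteness, the formal adjoint mentioned above comes from integration by parts; assuming the boundary terms vanish (e.g. compactly supported functions),

$$\int f'(x)\,g(x)\,dx = \big[f(x)g(x)\big] - \int f(x)\,g'(x)\,dx = -\int f(x)\,g'(x)\,dx,$$

so $$\left(\frac{d}{dx}\right)^{*} = -\frac{d}{dx}$$.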


 * As a pointer, the derivative is commonly treated as an infinite-dimensional matrix operating on a Hilbert space or a similar infinite-dimensional space of functions. It is not invertible, though. Looie496 (talk) 14:51, 30 March 2013 (UTC)


 * Of course, the key reason that the derivative matrix is not invertible is that the derivative maps the constant term to zero. But you can define a pseudo-inverse anti-derivative straightforwardly, exactly like a matrix pseudo-inverse, that will get all the other terms right.
 * The key concept here is basis function. You have used (1, x, x^2, ...) above, but there are lots of other choices you could have made -- for example the Fourier basis (1, cos(x), sin(x), cos(2x), sin(2x)...); or various families of orthogonal polynomials; or a set of regularly spaced boxcar functions; or a set of cubic spline polynomials; or a set of wavelet functions.  Each set of basis functions can be particularly useful in particular applications.  Once you have chosen your set of basis functions, you can then represent any function of your original space as rather a big vector.
 * The mathematicians further up-thread have jumped straight to the infinite-dimensional case. But in engineering maths and in mathematical physics we're often quite happy with (or, at any rate, may very often have to make do with) a finite number of basis functions, exactly as you were doing above, though of course you get different coefficients in your derivative matrix, depending on which set of basis functions you use.
 * Using a finite number of basis functions to approximate a set of continuous equations is called the Galerkin method. It's also the basis of finite element analysis, used e.g. to predict in a computer the vibration modes of an airliner or a Formula 1 car (or whatever).  Or in signal processing it's how you think about and design digital filters.  And in physics, it's very heavily used in quantum mechanics.  (You may remember that the first "modern" form of quantum mechanics was Heisenberg's matrix mechanics -- which initially was rather a mystery.  But Hilbert asked Heisenberg whether there was a differential equation they could be related to.  Heisenberg didn't take the hint.  But if he had, it's entirely possible he might have beaten Schrödinger to the Schrödinger equation.  In the end it was Dirac who showed how the two systems were equivalent, in just the sort of way you've written out above, and that synthesis has been the bedrock of quantum mechanics ever since.)
 * So if you look inside, for example, a big numerical weather forecasting installation, you'll basically find the entire world's weather represented as a big vector, which all the differential operators act on like a big matrix. So, having defined your basis, you can think of the map that carries today's weather forward to tomorrow as essentially again a very big matrix.  You can then use standard matrix techniques like singular value decomposition to see what sort of vectors are least stable when you apply that matrix -- i.e. the vector directions such that a little change added in that direction today will be blown up the most by the matrix into the biggest possible vector tomorrow.  That's basically how the ECMWF chooses which perturbations to run for Ensemble forecasting -- in this case, the unstable vector found by the SVD may typically correspond to explosive formation of an entire weather system.
 * So in short, yes, it's no coincidence that you can represent differential operators by matrices; and this has huge relevance in the real world. Jheald (talk) 17:02, 30 March 2013 (UTC)
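The pseudo-inverse anti-derivative described above can be checked directly on the $$n=2$$ matrix from the question (a sketch using NumPy's Moore-Penrose pseudo-inverse):

```python
import numpy as np

# Derivative matrix from the question, in the basis (x^2, x, 1)
D = np.array([[0., 0., 0.],
              [2., 0., 0.],
              [0., 1., 0.]])

D_pinv = np.linalg.pinv(D)   # Moore-Penrose pseudo-inverse

p = np.array([3., 5., 7.])   # 3x^2 + 5x + 7
dp = D @ p                   # coefficients of 6x + 5

# The pseudo-inverse anti-differentiates, picking the anti-derivative
# with constant term 0 -- the constant is exactly what d/dx destroys:
print(D_pinv @ dp)           # [3. 5. 0.]
```

All terms except the constant are recovered, as promised: $$D^{+}$$ inverts $$D$$ on the range of $$D$$ and sends the lost constant direction to zero.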


 * You can also consider the linear space spanned by cos(x) and sin(x), or simply the one-dimensional complex vector space of functions A exp(i x). Then the differential operator is equivalent to rotating over 90 degrees. You then do have an inverse, and it's then clear what the square root of the differential operator should be. You can then generalize this and define fractional powers of the differential operator, or indeed any analytic function of it. Count Iblis (talk) 18:20, 30 March 2013 (UTC)
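The rotation picture on span{cos(x), sin(x)} is easy to verify numerically; here is a sketch in which a 45-degree rotation serves as a square root of $$\frac{d}{dx}$$ on that space:

```python
import numpy as np

# With (a, b) the coefficients of a*cos(x) + b*sin(x),
# d/dx sends a*cos + b*sin to b*cos - a*sin, i.e. (a, b) -> (b, -a):
D = np.array([[0., 1.],
              [-1., 0.]])    # a rotation by 90 degrees

# A square root of d/dx on this space is then a 45-degree rotation:
half = np.array([[ np.cos(np.pi/4), np.sin(np.pi/4)],
                 [-np.sin(np.pi/4), np.cos(np.pi/4)]])

print(np.allclose(half @ half, D))   # True

# d/dx (2 cos x + 3 sin x) = 3 cos x - 2 sin x
print(D @ np.array([2., 3.]))        # [ 3. -2.]
```

Note that $$D^4$$ is the identity here, matching the fact that the fourth derivative of both cos and sin returns the original function.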