Wikipedia:Reference desk/Archives/Mathematics/2012 May 22

= May 22 =

Convolution identity
Is it true that $$\int_\mathbb{R} f(x-a)g(x)dx = f(x-a)$$ if $$\int_\mathbb{R}g(x) = 1$$? Widener (talk) 03:06, 22 May 2012 (UTC)
 * Certainly not — the definite integral can't even have x in it. --Tardis (talk) 04:15, 22 May 2012 (UTC)
 * Take, for instance, $$g$$ to be the normalized characteristic function of the interval $$[-r,r]$$, that is, $$g:=\frac{1}{2r}\chi_{[-r,r]}$$. Then $$\int_\mathbb{R} f(a-x)g(x)dx = \frac{1}{2r}\int_{a-r}^{a+r}f(u)du$$, that is, it is the integral mean of f over the interval $$[a-r,a+r]$$. If $$f$$ is continuous, this certainly converges to $$f(a)$$ as $$r\to 0$$. With another $$g$$ you would get a weighted integral mean. --pm a  05:31, 22 May 2012 (UTC)
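A quick numerical sketch of this point (our own illustration; the choices of f, a and the radii are arbitrary): averaging a continuous f against the normalized box kernel recovers f(a) as r shrinks.

```python
# Midpoint-rule approximation of (1/2r) * integral of f over [a-r, a+r],
# i.e. the convolution of f with the box kernel g = (1/2r) chi_[-r,r].
import math

def box_average(f, a, r, n=10_000):
    """Integral mean of f over [a-r, a+r] by the midpoint rule."""
    h = 2 * r / n
    return sum(f(a - r + (k + 0.5) * h) for k in range(n)) * h / (2 * r)

f = math.sin   # any continuous function will do
a = 1.0
for r in (1.0, 0.1, 0.001):
    print(r, abs(box_average(f, a, r) - f(a)))   # error shrinks with r
```

The printed error decays like r^2 here because sin is smooth; continuity alone already gives convergence.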
 * This problem is really annoying. Is it true that $$\int_\mathbb{R} f(x - \frac{u}{t})g(u)du$$ converges to f(x) as t approaches infinity, and if so, why? And why is the convergence uniform if f is uniformly continuous? I can't figure this out; I would really like an answer soon; this is annoying me. Widener (talk) 06:04, 22 May 2012 (UTC)

For small values of $$\frac{|u|}{t}$$ you have $$f(x - \frac{u}{t})\approx f(x)$$. For big values of |u| you have $$g(u)\approx 0$$. Now $$\int_{-\infty}^\infty f(x - \frac{u}{t})g(u)du=\int_{-\infty}^{-W} f(x - \frac{u}{t})g(u)du+\int_{-W}^W f(x - \frac{u}{t})g(u)du+\int_W^\infty f(x - \frac{u}{t})g(u)du$$ $$\approx \epsilon + f(x)+\epsilon $$. Bo Jacoby (talk) 06:37, 22 May 2012 (UTC).


 * Wait, how do you get $$\int_{-W}^W f(x - \frac{u}{t})g(u)du = f(x)$$? Widener (talk) 16:25, 22 May 2012 (UTC)


 * $$\int_{-W}^W f(x - \frac{u}{t})g(u)du \to f(x)\int_{-W}^W g(u)du$$ as $$t\to\infty$$ because of continuity, and $$\int_{-W}^W g(u)du \to \int_{-\infty}^\infty g(u)du=1$$ as $$W\to\infty$$. Bo Jacoby (talk) 13:04, 25 May 2012 (UTC).


 * That's the idea; let's write it down more formally. Let $$f$$ be continuous and bounded, and $$g$$ non-negative and integrable with $$\int_\R g(y)dy=1$$; let's denote $$g^t(x):=tg(tx)$$ and $$f^t(x):=\int_\R f(x-y)g^t(y)dy.$$  We have, for any $$x\in\R$$ and for any $$r>0$$:
 * $$|f^t(x)-f(x)|=\left|\int_\R f(x-y)g^t(y)dy - f(x)\right|= \left|\int_\R f(x-y)g^t(y)dy - f(x)\int_\R g^t(y)dy \right|= \left|\int_\R \left(f(x-y)-f(x)\right) g^t(y)dy \right|$$


 * $$\le \int_\R \left|f(x-y)-f(x)\right| g^t(y)dy =\int_{|y|\le r } \left|f(x-y)-f(x)\right| g^t(y)dy + \int_{|y| > r } \left|f(x-y)-f(x)\right| g^t(y)dy$$


 * $$\le \sup_{|y|\le r } \left|f(x-y)-f(x)\right| \int_\R g^t(y)dy + 2\|f\|_\infty \int_{|y| > r }  g^t(y)dy$$


 * $$\le\sup_{|y|\le r} \left|f(x-y)-f(x)\right|  + 2\|f\|_\infty   \int_{\R\setminus [-tr,tr] }g(u)du .$$


 * For the pointwise convergence: take the limit superior as $$t\to\infty$$ and note that the last integral goes to zero, giving


 * $$\limsup_{t\to \infty}\left|f^t(x) - f(x)\right| \le\sup_{|y|\le r} \left|f(x-y)-f(x)\right|$$.
 * Since this holds for all $$r>0$$, by the continuity of $$f$$ we also get
 * $$\limsup_{t\to \infty}\left|f^t(x) - f(x)\right| \le \inf_{r>0}\sup_{|y|\le r} \left|f(x-y)-f(x)\right|=0.$$
 * For the uniform convergence: assume $$f$$ uniformly continuous, and let $$\omega$$ be a modulus of continuity for $$f$$. Then
 * $$|f^t(x)-f(x)|\le \omega(r)+2\|f\|_\infty  \int_{\R\setminus [-tr,tr] }g(u)du .$$
 * Now first take a $$\sup_{x\in\R}$$ and then the $$\limsup_{t\to\infty}$$ as before, so:
 * $$\limsup_{t\to \infty} \|f^t - f\|_\infty \le \omega(r).$$
 * So, as before, since this holds for any $$r>0 ,$$
 * $$\limsup_{t\to \infty} \|f^t - f\|_\infty=0 ,$$
 * showing the uniform convergence. Please ask if any step needs further explanation. --pm a 17:47, 22 May 2012 (UTC)
 * Cool! I think you have a few too many $$\le$$ signs though. I don't know about this modulus of continuity; I have a different version of uniform continuity. Widener (talk) 18:27, 22 May 2012 (UTC)
 * Surplus $$\le$$'s removed!! Having a modulus of continuity is equivalent to the epsilon-delta definition of uniform continuity. It is useful in doing estimates. --pm a 18:59, 22 May 2012 (UTC)
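The limit proved above can also be checked numerically (a sketch of ours, not from the thread; the Gaussian kernel and the test function below are arbitrary choices): with g a probability density, $$\int f(x-\frac{u}{t})g(u)du \to f(x)$$ as t grows.

```python
# Numerical check that the smoothed value f^t(x) tends to f(x) as t -> infinity.
import math

def g(u):
    """Standard normal density, an arbitrary probability density."""
    return math.exp(-u * u / 2) / math.sqrt(2 * math.pi)

def smoothed(f, x, t, W=8.0, n=20_000):
    """Midpoint rule for the integral of f(x - u/t) g(u) du over [-W, W]."""
    h = 2 * W / n
    total = 0.0
    for k in range(n):
        u = -W + (k + 0.5) * h
        total += f(x - u / t) * g(u)
    return total * h

f = abs   # continuous (even uniformly so), not differentiable at 0
for t in (1, 10, 100):
    print(t, smoothed(f, 0.0, t))   # approaches f(0) = 0
```

With this kernel the error at x = 0 is exactly E|Z|/t, so it decays like 1/t, matching the proof's estimate.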

Matrices and infinite sums
I need help understanding something out of a paper I'm reading.

If you have the identity

$$\mathbf{Y=F+DY}$$ for N×1 vectors Y and F, and an N×N matrix D, then you can express the i-th entry of Y as

$$Y_i=F_i+\sum_{j=1}^N d_{ij}Y_j$$

where $$d_{ij}$$ is the (i,j)-th element of D.

Now this can be iterated as an infinite sequence:

$$Y_i=F_i+\sum_{j=1}^N d_{ij}F_j + \sum_{j=1}^N \sum_{k=1}^N d_{ik}d_{kj}F_j + \sum_{j=1}^N \sum_{k=1}^N \sum_{l=1}^N d_{il}d_{lk}d_{kj}F_j + \dots$$

and the authors of this paper are specifying a certain measure $$U_{1i}$$, which is a weighted average of terms in the above equation, defined as:

$$U_{1i}= 1 \times \frac{F_i}{Y_i} + 2 \times \frac{\sum_{j=1}^N d_{ij}F_j}{Y_i} + 3 \times \frac{\sum_{j=1}^N \sum_{k=1}^N d_{ik}d_{kj}F_j}{Y_i} + 4 \times \frac{\sum_{j=1}^N \sum_{k=1}^N \sum_{l=1}^N d_{il}d_{lk}d_{kj}F_j}{Y_i}+\dots$$

and then say that (provided $$\sum_{j=1}^N d_{ij}<1$$) the numerator of the above measure equals the i-th element of the N×1 vector $$[I-D]^{-2} F$$

(or equivalently, it's equal to the i-th element of $$[I-D]^{-1}Y$$, since $$Y=(I-D)^{-1}F$$)

I cannot figure out how they reached this conclusion.

I tried to manually verify it where N=2, but my $$[I-D]^{-2} F$$ didn't end up looking like the numerator. What special matrix tricks do I need to follow this?

Thorstein90 (talk) 03:27, 22 May 2012 (UTC)


 * The mathematical part of that paper seems quite weak. In any case, you can solve Y=F+DY without writing down the coordinates. You need an assumption on D for solvability (if, e.g., D is the identity, there's no solution unless F=0). If D has spectral radius less than 1, then I-D is invertible, and the Neumann series formula holds. In this case you also have the expansion $$(I-D)^{-2}=I+2D+3D^2+\dots $$, so, yes, the i-th coordinate of $$(I-D)^{-2}F$$ is the numerator of that expression. That doesn't mean it makes sense to introduce that "measure" $$U_{1i}$$; actually the whole section is quite murky. --pm a  06:36, 22 May 2012 (UTC)
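The expansion can be verified numerically; here is a small sketch (the random matrix below is an arbitrary example, not from the paper). When D has spectral radius below 1, the weighted series 1·F + 2·DF + 3·D²F + … matches $$(I-D)^{-2}F$$, which in turn equals $$(I-D)^{-1}Y$$.

```python
# Check the Neumann-type identity sum_{n>=1} n D^{n-1} F = (I - D)^{-2} F.
import numpy as np

rng = np.random.default_rng(0)
N = 4
D = rng.random((N, N))
D /= 2 * D.sum(axis=1).max()      # force row sums <= 1/2, so rho(D) < 1
F = rng.random(N)

target = np.linalg.matrix_power(np.linalg.inv(np.eye(N) - D), 2) @ F

series = np.zeros(N)
term = F.copy()                   # term holds D^(n-1) F
for n in range(1, 200):
    series += n * term
    term = D @ term

print(np.allclose(series, target))                            # True

Y = np.linalg.solve(np.eye(N) - D, F)                         # Y = (I-D)^{-1} F
print(np.allclose(np.linalg.solve(np.eye(N) - D, Y), target)) # True
```

If a hand check with N=2 fails, the usual culprit is an index slip in the second-order term (it should be $$\sum_j\sum_k d_{ik}d_{kj}F_j$$, the (i,j) entry of D² against F).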

Scale Symmetry
On the symmetry page it says "Scale symmetry refers to the idea that if an object is expanded or reduced in size, the new object has the same properties as the original"

from the context I understand this is referring to something akin to a grain of sand and a cliff, where they are statistically similar but not identical, or alternatively where some but not all properties are the same when scaled.

There can, however, be objects which are identical when scaled: e.g. a square with another square half its size inside it, and a square half the size of that inside it, and so on ad infinitum, as well as squares double the size outside it, ad infinitum. This figure is identical when scaled by 2^n around the center of these squares. So why isn't that version of the symmetry talked about?

I can think of countless more examples, including a few in elliptic space that use only a single finite shape yet are identical when scaled in certain ways. — Preceding unsigned comment added by 86.147.224.178 (talk) 10:00, 22 May 2012 (UTC)


 * Please note there are also figures, which preserve their properties also for continuous scaling, for example a point alone or a plane. --CiaPan (talk) 11:04, 22 May 2012 (UTC)


 * That type of symmetry is talked about -- it is known as self-similarity. The weaker type of symmetry is basically a generalization of self-similarity. Looie496 (talk) 18:17, 22 May 2012 (UTC)


 * They are not identical when scaled: they are double the size when scaled by a scale factor of 2. I'm talking about something that, when scaled, is the same size and indistinguishable from the original (although in Euclidean space this requires an infinite object, or an infinite number of objects; in elliptic space it doesn't even need that). — Preceding unsigned comment added by 86.147.224.178 (talk) 10:22, 23 May 2012 (UTC)


 * The set $$\mathbb Q$$ of rational numbers in the number line remains identical after scaling with any rational scale (and with any rational number as a transformation's center point). --CiaPan (talk) 12:27, 23 May 2012 (UTC)
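A finite sanity check of this last remark (it cannot, of course, exhaust the infinite set $$\mathbb Q$$; the scale factor and center below are arbitrary rational choices): scaling a rational number by a rational factor about a rational center yields a rational number again, so the map sends $$\mathbb Q$$ into itself.

```python
# Exact rational arithmetic: x -> center + scale * (x - center)
# keeps every rational input rational.
from fractions import Fraction

scale = Fraction(3, 7)    # arbitrary rational scale factor
center = Fraction(1, 2)   # arbitrary rational center of the scaling

sample = [Fraction(p, q) for p in range(-5, 6) for q in range(1, 6)]
scaled = [center + scale * (x - center) for x in sample]

print(all(isinstance(y, Fraction) for y in scaled))   # True
```

Since the map is invertible with a rational inverse scale, the image is in fact all of $$\mathbb Q$$, not just a subset.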

Orbits of GL(n,F) on bivectors?
Hello, I was looking at Exterior product and bivector, and this question came up: What are the orbits of $$GL(n,F)$$, where F is some field, on bivectors?

I'm sure the answer is quite simple, but still I couldn't find a reference. As I see it, simple non-zero bivectors of the form $$v\wedge w$$ would form just one orbit?

Many thanks, Evilbu (talk) 12:18, 22 May 2012 (UTC)


 * Projectively, this can be identified with the space of two-planes and GL is transitive on 2-planes. Sławomir Biały  (talk) 12:47, 22 May 2012 (UTC)


 * Thank you, but what exactly do you mean by 2-planes? By "projectively", are you saying this transitivity only works up to scalars? Evilbu (talk) 14:23, 22 May 2012 (UTC)


 * See Plucker embedding. Sławomir Biały  (talk) 14:55, 22 May 2012 (UTC)
 * Thanks, but maybe there is a misunderstanding... does transitivity on 2-planes say anything about transitivity on all bivectors? Not every bivector can be realized as the image under the Plucker embedding? Evilbu (talk) 20:20, 23 May 2012 (UTC)
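The last point can be made concrete in dimension 4 (our own illustration; the identification below is standard but the names are ours): writing a bivector as an antisymmetric matrix B, it is simple, i.e. of the form $$v\wedge w$$ and hence in the Plucker image, exactly when $$\omega\wedge\omega=0$$, which for n=4 means the Pfaffian of B vanishes.

```python
# e1^e2 is simple; e1^e2 + e3^e4 is not (its Pfaffian is nonzero).
import numpy as np

def pfaffian4(B):
    """Pfaffian of a 4x4 antisymmetric matrix."""
    return B[0, 1] * B[2, 3] - B[0, 2] * B[1, 3] + B[0, 3] * B[1, 2]

def wedge(v, w):
    """Antisymmetric matrix representing the simple bivector v ^ w."""
    return np.outer(v, w) - np.outer(w, v)

e = np.eye(4)
simple = wedge(e[0], e[1])                          # e1 ^ e2
nonsimple = wedge(e[0], e[1]) + wedge(e[2], e[3])   # e1^e2 + e3^e4

print(pfaffian4(simple), pfaffian4(nonsimple))      # 0.0 1.0
```

So already for n=4 the simple bivectors form a proper subset (the Plucker quadric), and the nonzero bivectors split into more than one GL-orbit, classified by the rank of B.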

Point probability of a continuous distribution
Hi,

I was reading a textbook on probability, and at one place, while talking about continuous probability distributions, it suddenly says without much of an explanation that the probability mass function (pmf), denoted p(x), has a value of zero for all real x. Is there any intuitive explanation for this?

Thanks, Gulielmus estavius (talk) —Preceding undated comment added 18:34, 22 May 2012 (UTC).


 * More or less by definition. Check here the definition of continuous probability distribution: the PMF is zero just because $$\int_x^x f(t)dt=0$$ for all x. --pm a 18:51, 22 May 2012 (UTC)


 * It was slightly confusing, since the function $$ f(t) $$ can have a nonzero positive value at any $$ t $$; what exactly does the value $$ f(t) $$ signify?

Gulielmus estavius (talk) 18:43, 23 May 2012 (UTC)
 * The value of f is the probability density at that point, or generally speaking, the derivative of the cumulative distribution function. If you want an intuitive reason why there is zero chance of any given number, consider this. Imagine you select a real number between 0 and 1, and I generate a random real number between 0 and 1 (from a uniform distribution). Now each digit in the decimal expansion of my random number is a random digit from 0 to 9. There are infinitely many digits. I could get 0.5, but that means, by random fluke, I got 0.5000000....., or a zero for every decimal place after the first. Now, regardless of the number you pick, what are the chances that our numbers will match in every decimal place? IBE (talk) 12:45, 24 May 2012 (UTC)
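This digit-matching argument is easy to simulate (a Monte Carlo sketch of ours, not from the thread): the chance that two independent uniform(0,1) draws agree in their first k decimal digits is 10^-k, which vanishes as k grows, so "agree in every digit" has probability zero.

```python
# Estimate P(two uniform(0,1) draws share their first k decimal digits).
import random

random.seed(1)

def match_freq(k, trials=100_000):
    """Fraction of trials in which the first k decimal digits coincide."""
    return sum(
        int(random.random() * 10**k) == int(random.random() * 10**k)
        for _ in range(trials)
    ) / trials

for k in (1, 2, 3):
    print(k, match_freq(k))   # roughly 10**-k, shrinking toward 0
```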

Continuous distributions are simple approximations to complicated discrete distributions. Consider an urn filled with colored pebbles. It contains Ki pebbles of color number i. The total number of pebbles in the urn is N = ΣKi. The number Xi = Ki/N is the probability that a randomly picked pebble has color number i. Take a handful of pebbles - a sample - out of the urn. It contains ki pebbles of color number i. Knowing N and ki you can consider the pmf pi(x) = Pr(Xi = x | N, ki) for each color i. For big values of N the individual probabilities pi(x) are small. So it is convenient to consider continuous distribution functions fi(x) such that fi(x) dx = Pr(x < Xi < x+dx | N→∞, ki) for values of dx that are small enough that the pdf is constant on the interval [x, x+dx], and still dx ≫ 1/N. Bo Jacoby (talk) 06:28, 25 May 2012 (UTC).


 * The function $$f$$ here represents a density. You may think of a lot of analogies from physics etc. For instance: think of a distribution of mass along a unit thread. In the mathematical description, we say that this distribution has a (linear) density f(x), or that "it is an (absolutely) continuous distribution", if the mass in each interval [a,b] is given by the integral $$\scriptstyle\int_a^b f(x) dx$$. In this case, the total mass at a single point {x} is zero. A distribution does not necessarily admit a density: it could be totally concentrated at a point, for instance; then there is no representation as an integral of some f(x) in dx.  --pm a  06:56, 25 May 2012 (UTC)