Wikipedia:Reference desk/Archives/Mathematics/2010 March 26

= March 26 =

unsolved problems
are these problems still unsolved? —Preceding unsigned comment added by 208.79.15.130 (talk) 01:01, 26 March 2010 (UTC)
 * 1) Equichordal Points: Can a closed curve in the plane have more than one equichordal point?
 * 2) Is pi+e irrational?
 * 3) (Tiling the Unit Square) Will all the 1/k by 1/(k+1) rectangles, for k>0, fit together inside a 1 × 1 square?


 * no(no) yes yes. I'm being minimalist today :) Dmcq (talk) 11:28, 26 March 2010 (UTC)
 * Dmcq, are you saying that the problem of whether pi plus e is irrational has been solved? Can you be specific?  Who and when? Michael Hardy (talk) 02:42, 27 March 2010 (UTC)
 * I think he says yes, it is still unsolved. 66.127.52.47 (talk) 07:49, 27 March 2010 (UTC)

OK, if pi+e is still an unsolved problem, does this apply to all irrational numbers? And where can one look in Wikipedia for this subject? Respectfully. —Preceding unsigned comment added by 208.79.15.130 (talk) 19:47, 27 March 2010 (UTC)
 * See Irrational number. -- Meni Rosenfeld (talk) 21:05, 27 March 2010 (UTC)
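As a side note on problem 3, the rectangles' areas telescope to exactly the area of the unit square, which is what makes the packing question delicate. A quick sketch with exact arithmetic (the helper name `total_area` is just for illustration):

```python
from fractions import Fraction

def total_area(N):
    # Exact total area of the 1/k x 1/(k+1) rectangles for k = 1..N.
    # The terms telescope: 1/(k(k+1)) = 1/k - 1/(k+1), so the partial
    # sum is N/(N+1), which approaches the unit square's area of 1.
    return sum(Fraction(1, k * (k + 1)) for k in range(1, N + 1))

print(total_area(10))    # 10/11
print(total_area(1000))  # 1000/1001
```

So the rectangles fit area-wise with nothing to spare; whether they can actually be arranged inside the square is the open question.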

Navier-Stokes equation
A University of Sydney student with not much spare time asks another question. If a metric is defined on solutions to the Navier-Stokes equation, then do 'distances' that diverge allow classification as turbulence? Then is fractal geometry or nonlinear dynamics the way to classify turbulence of Navier-Stokes solutions, a 'diverging' metric allowing turbulence to be inferred? Has this approach already been tried long ago? 122.152.132.156 (talk) 05:53, 26 March 2010 (UTC)
 * Just about everything has been tried. I don't understand much of this article but you might find it interesting.  66.127.52.47 (talk) 08:44, 28 March 2010 (UTC)

Wiener process properties
Hi, do you know how to calculate the expected value of the absolute value of the correlation between the time and the value of a Wiener process of some length t? I.e. for a sample function f what I mean would be

$$\left\vert \frac{Cov(time,value)}{Var(time) \cdot Var(value)} \right\vert = \left\vert \frac{\frac{1}{t} \cdot \int_0^t{ds \ s \cdot f(s)} - \frac{t}{2} \cdot \frac{1}{t} \cdot \int_0^t{ds \ f(s)}}{(\frac{t^2}{3} - (\frac{t}{2})^2) \cdot (\frac{1}{t} \cdot \int_0^t{ds \ f(s)^2} - (\frac{1}{t} \cdot \int_0^t{ds \ f(s)})^2)} \right\vert$$

Thank you very much in advance. Hurugu (talk) 07:38, 26 March 2010 (UTC)


 * Two remarks that don't answer the question: (1) The correlation between X and Y is
 * $$ \frac{\text{cov}(X,Y)}{\sqrt{\text{var}(X)\text{var}(Y)}}, $$
 * with the radical in the denominator. That makes correlation dimensionless.  (2) In order to speak of the correlation between time and something else, you'd need to think of time as a random variable, with a probability distribution.  Without that, it's unclear at best what the question means. Michael Hardy (talk) 19:15, 26 March 2010 (UTC)
 * .... OK, from your formula is looks as if you're thinking of time as uniformly distributed between 0 and t. Michael Hardy (talk) 19:18, 26 March 2010 (UTC)
 * Your proposed way of computing the variance of the value of the Wiener process looks weird. Squaring the density function (if that's what you intend ƒ to be) is not done.  And why integrate from 0 to the time?  The value of a Wiener process is not constrained to lie in that interval.  And do you mean the value at the time that's uniformly distributed in that interval?  Or the value at time t?  Or what?  The question is unclear. Michael Hardy (talk) 19:21, 26 March 2010 (UTC)

Thank you for trying to answer, I am sorry for the confusing phrasing (and the missing square root in the denominator). $$f$$ is not meant to be interpreted as a density function. Maybe I can explain the question better in terms of a discrete problem: Take a sample function of the Wiener process, and then pick the values at the times $$\frac{1}{2}\frac{t}{n}$$, $$(1 + \frac{1}{2})\frac{t}{n}$$, $$(2 + \frac{1}{2})\frac{t}{n}$$, ..., $$(n - \frac{1}{2})\frac{t}{n}$$ for some natural number $$n$$. Together with the corresponding times they can be thought of as points in a plane, like the patterns in the top right image in the article Correlation. I am looking for the expected value of the absolute value of this correlation, in the limit $$n \rightarrow \infty$$. Hurugu (talk) 22:19, 26 March 2010 (UTC)
 * OK, I think the question is clear now. In effect the time is uniformly distributed between 0 and t and you mean the value of the Wiener process at that random time.  Maybe more later.... Michael Hardy (talk) 00:17, 27 March 2010 (UTC)

One of the complications is that you said "absolute value".

If (capital) T is the time that is a random variable uniformly distributed between 0 and (lower-case) t, then cov(T, B_T) = 0. But for any particular sample path, the correlation is not 0; rather it is a random variable whose expected value is 0. But now we want the expected value of its absolute value. To be continued..... Michael Hardy (talk) 01:43, 27 March 2010 (UTC)
 * ....and now I see that by ƒ you say you mean the sample function. Now that that's clear, the correlation you wrote above makes sense. Michael Hardy (talk) 02:36, 27 March 2010 (UTC)

This now seems like a harder problem than I initially thought it was. Possibly it can only be done numerically. More later maybe.... Michael Hardy (talk) 18:54, 27 March 2010 (UTC)
 * OK, maybe I'll have something shortly.... "Hurugu", you haven't enabled Wikipedia email.  Are you still there? Michael Hardy (talk) 19:45, 30 March 2010 (UTC)


 * Still here, and enabled email. Hurugu (talk) 20:55, 30 March 2010 (UTC)

Here's something whose details I haven't worked out yet. Consider the line y = ms, where m is chosen so as to minimize the sum of squares of residuals:
 * $$ \int_0^t (B_s - ms)^2\,ds, $$

where B_s is the value of the Brownian motion at time s. Then we can partition the total sum of squares
 * $$ \int_0^t B_s^2 \, ds $$

as the sum of an "explained" part
 * $$ \int_0^t (ms)^2 \, ds $$

and an "unexplained" part
 * $$ \int_0^t (B_s - ms)^2\,ds. $$

Since m is so chosen as to minimize the unexplained part, some consequences follow, one of which is that
 * $$ \int_0^t (B_s - ms)\, s \, ds = 0 $$

(which explains why the total sum of squares really is the sum of the two other expressions I put above), and another consequence is that the square of the correlation should be the explained part of the sum of squares divided by the total sum of squares. Michael Hardy (talk) 00:37, 31 March 2010 (UTC)


 * .....OK, a bit more detail. The value of m that minimizes
 * $$ \int_0^t (B_s - ms)^2 \, ds $$
 * can be found by observing that
 * $$ \int_0^t (B_s - ms)^2 \, ds = \int_0^t B_s^2 \, ds - 2m\int_0^t s B_s \, ds + m^2 \int_0^t s^2 \,ds, $$
 * then differentiating with respect to m, setting that equal to 0, and solving for m, getting
 * $$ m = \frac{\int_0^t s B_s \, ds}{\int_0^t s^2 \, ds} $$
 * Now substitute that for m in the expression
 * $$ \int_0^t (B_s - ms)ms \, ds, $$
 * (But write it as
 * $$ \frac{\int_0^t r B_r \, dr}{\int_0^t r^2 \, dr} $$
 * so that the letter s won't be overworked), and you find that
 * $$ \int_0^t (B_s - ms)ms \, ds = 0 \, $$
 * because of the way m was chosen. So now

\begin{align} \text{total sum of squares} & = \int_0^t B_s^2 \, ds \\ & = \int_0^t ( (B_s - ms) + ms)^2 \, ds \\ & = \int_0^t (B_s - ms)^2 \, ds + 2m\int_0^t (B_s - ms)s \, ds + m^2 \int_0^t s^2 \,ds \\ & = \int_0^t (B_s - ms)^2 \, ds + 0 + m^2 \int_0^t s^2 \,ds \\ & = \text{unexplained (or residual) sum of squares} + 0 + \text{explained sum of squares}. \end{align}
 * Now

\begin{align} \text{square of correlation} = R^2 & = \frac{\text{explained sum of squares}}{\text{total sum of squares}} \\ & = \frac{ m^2 \int_0^t s^2 \, ds}{\int_0^t B_s^2 \, ds} = \left( \frac{\int_0^t s B_s \,ds}{ \int_0^t s^2 \, ds} \right)^2 \cdot \frac{\int_0^t s^2 \, ds}{\int_0^t B_s^2 \, ds} \\[6pt] & = \frac{\left( \int_0^t s B_s \, ds \right)^2}{ \int_0^t s^2 \, ds \int_0^t B_s^2 \,ds}. \end{align}
 * So the thing that is random and Gaussian gets squared, and appears in both the numerator and the denominator, so we've got some non-linearity here. How then, do we find the probability distribution of R = the absolute value of the correlation?  I haven't worked that out and I still don't know if it's going to require numerical methods. Michael Hardy (talk) 02:32, 1 April 2010 (UTC)
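The decomposition above can be sanity-checked numerically on a discretized Brownian path. This is only an illustrative sketch; the grid size, horizon t, and random seed are arbitrary choices, not anything from the thread:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretize a Brownian path B on [0, t] (midpoint grid) and check the
# regression-through-the-origin decomposition from the derivation above.
n, t = 10_000, 1.0
ds = t / n
s = (np.arange(n) + 0.5) * ds
B = np.cumsum(rng.normal(0.0, np.sqrt(ds), n))  # cumulative Gaussian increments

m = np.sum(s * B) / np.sum(s * s)  # least-squares slope with no intercept

total = np.sum(B ** 2) * ds
explained = np.sum((m * s) ** 2) * ds
unexplained = np.sum((B - m * s) ** 2) * ds

# The cross term vanishes by the choice of m, so the parts add up exactly...
assert np.isclose(total, explained + unexplained)

# ...and explained/total equals the squared correlation through the origin.
R2 = np.sum(s * B) ** 2 / (np.sum(s ** 2) * np.sum(B ** 2))
assert np.isclose(explained / total, R2)
print(R2)
```

Both identities hold exactly (up to floating point) for any path, since they are algebraic consequences of the choice of m.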

....and now I'm thinking, maybe the proposed identity
 * $$ \text{square of correlation} = R^2 = \frac{\text{explained sum of squares}}{\text{total sum of squares}} $$

isn't true in the usual sense of correlation unless you do the least-squares fit of both the slope and the intercept. But also, maybe the usual sense of correlation isn't the one that one should use in this context, where one knows that B_0 = 0 exactly. Michael Hardy (talk) 04:00, 1 April 2010 (UTC)


 * Wow, thanks for all the effort! Numerically, by running a kind of random walk, I get a value of about 0.595. Hurugu (talk) 17:05, 1 April 2010 (UTC)
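For reference, a Monte Carlo experiment in the spirit of Hurugu's random walk might look like the following sketch (the path count and step count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_abs_corr(paths=2000, n=1000):
    # Average, over many simulated random walks, of the absolute Pearson
    # correlation between the step index (time) and the walk's value.
    s = np.arange(1, n + 1)
    acc = 0.0
    for _ in range(paths):
        B = np.cumsum(rng.normal(size=n))
        acc += abs(np.corrcoef(s, B)[0, 1])
    return acc / paths

print(mean_abs_corr())
```

Hurugu reported a value of about 0.595 from a similar experiment.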

Opposite Category of Sets
Let S be the category of sets. Consider the category with sets as objects and morphisms A -> B given by preimages of functions f from B to A. Is this the same as the opposite of S? Thanks:) 66.202.66.78 (talk) 10:26, 26 March 2010 (UTC)
 * What do you mean by preimage of f?—Emil J. 11:16, 26 March 2010 (UTC)

OK, here's a guess: If ƒ: B → A, what if we say
 * P(B) is the set of all subsets of B,
 * P(A) is the set of all subsets of A,
 * P(ƒ) is the function from P(A) to P(B) defined by
 * $$ P(f)(C) = \{ b \in B : f(b) \in C \}\text{ for }C \in P(A). \, $$

Does that give us an opposite category? Michael Hardy (talk) 02:27, 27 March 2010 (UTC)

PS: I don't know any standard definition of the concept of "preimage" of a function. Michael Hardy (talk) 02:30, 27 March 2010 (UTC)


 * Michael Hardy, I think you're talking about the contravariant power set functor P from SET to SET. This is not the same as the opposite of SET, which has exactly the same objects and arrows as SET, with the domain of a function f:A->B equal to B instead of A. Composition is reversed, so fg (in the opposite category) is equal to gf (in SET). However, P is covariant as a functor from the opposite of SET to SET. I don't know what the OP meant by preimage of a function. Money is tight (talk) 06:40, 27 March 2010 (UTC)

Do you generally take it to be part of the definition of opposite category that it has exactly the same objects? I had thought the category of Boolean algebras and Boolean homomorphisms is the opposite of the category of Stone spaces and continuous functions. Michael Hardy (talk) 18:56, 27 March 2010 (UTC)
 * The opposite of a category is just the same category with arrows reversed. Stone duality states that the category of Boolean algebras is equivalent to the opposite of the category of Stone spaces (and vice versa), not equal. Algebraist 19:04, 27 March 2010 (UTC)


 * Op here, sorry I was very sleepy when I posted this. What I meant was this,

let f:A -> B, define a function D(f):P(B) -> P(A) so that D(f)(r) is the set of all x in A such that f(x) is in r. Then D(fg) = D(g)D(f). Define a category with Hom(B, A) the image of Hom(A, B) under D. Is this new category the opposite category of sets? By "preimage of functions", I meant taking sets in B to their preimages in A for a given f; sorry this was so unclear. I apologize if this still lacks sense:) 66.202.66.78 (talk) 11:11, 31 March 2010 (UTC)
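The OP's D is the contravariant power set construction discussed above, and its composition-reversing property D(fg) = D(g)D(f) can be checked on a small example. The sets and functions below are arbitrary toy choices:

```python
from itertools import combinations

def powerset(X):
    X = list(X)
    return [frozenset(c) for r in range(len(X) + 1)
            for c in combinations(X, r)]

def D(f, A, B):
    # The preimage map P(B) -> P(A) induced by f: A -> B (a dict on A),
    # returned as a dict keyed by every subset of B.
    return {S: frozenset(a for a in A if f[a] in S) for S in powerset(B)}

# Toy sets and functions, chosen arbitrarily for the check.
A, B, C = {0, 1}, {'x', 'y', 'z'}, {True, False}
f = {0: 'x', 1: 'y'}                     # f: A -> B
g = {'x': True, 'y': True, 'z': False}   # g: B -> C
gf = {a: g[f[a]] for a in A}             # the composite g o f: A -> C

Df, Dg, Dgf = D(f, A, B), D(g, B, C), D(gf, A, C)

# D reverses composition: D(gf) = D(f) o D(g) on every subset of C.
assert all(Dgf[S] == Df[Dg[S]] for S in powerset(C))
print("composition reversed on all subsets of C")
```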

Finite differences in exponential powers
Hi. I took positive integers starting from 1 to the power of y, in the order 2^y - 1^y, 3^y - 2^y, and so on. I did this up to an exponent of 6, found the differences between the powers (first differences), then the differences between those (second differences), and so on, always subtracting the previous number from the next. For example, the first differences x_2^2 - x_1^2, ..., starting from 1^2 - 0, were 1, 3, 5, 7, 9, 11, and so on. Here are some of the things I've found by doing this:


 * x^2:
 * First differences are odd numbers.
 * Second differences are all 2.


 * x^3:
 * First differences are prime numbers, or multiples of prime numbers.
 * Second differences begin at 6 and are all multiples of 6.
 * Third differences are all 6.


 * x^4:
 * First differences have a final digit of 1, 5, or 9.
 * The remaining inverse pyramid of differences appears to be almost random; some examples include 14, 40, 306, 820, 154, 38, 112, 148, -74, 24, 4, -106, 324, -576, -1116, etc.
 * Fourth differences end with 4, 8 or 6.


 * x^5:
 * All first differences have a final digit of 1.
 * All second, third and fourth differences have a final digit of 0.
 * All fourth differences are multiples of 120, and start at 240.
 * All fifth differences are 120.


 * x^6:
 * All first differences are odd.
 * All second differences have a final digit of 2, and a second-final digit of 6, 0 or 1.
 * All remaining differences end with 0.
 * All sixth differences are >700 and <800.

So, my question is, what is the significance of this? Can it be applied to other areas of mathematics, and do we have an article explicitly on this phenomenon? Thanks. ~ A H  1 (TCU) 11:47, 26 March 2010 (UTC)


 * Here's an idea: work out $$(n+1)^2 - n^2$$. Use that to work out why the first differences are odd, and the second differences are 2. Then try the same thing for other powers.  If you start out with n^k and take differences, what is the highest power of n that appears?  What does this mean if you take differences k times? Tinfoilcat (talk) 12:12, 26 March 2010 (UTC)


 * ... and check your working. The 4th differences of the x^4 sequence all have the same value; and, in general, the nth differences of the x^n sequence will be constant. Gandalf61 (talk) 13:04, 26 March 2010 (UTC)


 * Could this problem have any applications for dimensions above 3, or even manifolds? ~ A H  1 (TCU) 00:13, 27 March 2010 (UTC)

"Multiples of prime numbers"?? What number is not a "multiple of a prime number"? Michael Hardy (talk) 01:44, 27 March 2010 (UTC)

Try Finite difference as a starting point. These patterns are well known but are rarely covered in standard math curricula, at least at an elementary level. This leads to them being rediscovered frequently. The nth differences of x^n are constantly n!.--RDBury (talk) 03:03, 27 March 2010 (UTC)


 * This can also be understood with derivatives, in particular the fact that d(x^n)/dx = nx^(n-1). Rckrone (talk) 06:59, 28 March 2010 (UTC)
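RDBury's and Gandalf61's observations can be verified directly: differencing the sequence k^n exactly n times leaves the constant n!. A small sketch (the helper `diffs` is just for illustration):

```python
from math import factorial

def diffs(seq):
    # One round of forward differences: each entry minus the previous one.
    return [b - a for a, b in zip(seq, seq[1:])]

for n in range(1, 7):
    seq = [k ** n for k in range(15)]
    for _ in range(n):
        seq = diffs(seq)
    # After n rounds, every remaining entry equals n!.
    assert all(v == factorial(n) for v in seq)
    print(n, seq[0])
```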

I need your knowledge in all mathematical realms you are familiar with.
Let's assume that the concept of "even fraction" is defined as an irreducible fraction whose denominator is even. Unfortunately, this definition does not yield a unique definiendum, nor does any of its sub-definitions referring to a given even denominator. Suppose we would like to receive a unique definiendum. Fortunately, we know that there is a 'natural' surjection - from the class of pairs of an even denominator with an odd numerator - onto the class of even fractions received by the original definition. Thanks to this surjection, we can obtain a unique definiendum of "even fraction" by replacing the previous "indefinite" definition with a "definite" definition, obtained by dividing the original definition into sub-definitions, each of which defines the (unique) "even fraction" as the (unique) irreducible fraction with a given even denominator and a given odd numerator, in such a way that this division of the original definition into sub-definitions preserves the original class of "even fractions" received by the original "indefinite" definition.

Ignoring the very issue of "even fractions" (and fractions at all), do you know of any (well-known) similar process, in any mathematical realm you are familiar with? I.e., a process in which the classical definition of Y (whatever Y is) does not yield a unique definiendum; however, thanks to the existence of a 'natural' surjection (whatever this 'naturality' means) - from the class of X's - onto the class of Y's received from the original definition, we can obtain a unique Y by replacing the previous "indefinite" definition with a "definite" definition, obtained by dividing the original definition into sub-definitions, each of which refers to a given X (by which Y can be defined uniquely), in such a way that this division of the original definition into sub-definitions preserves the original class of Y's received by the original "indefinite" definition.

HOOTmag (talk) 12:19, 26 March 2010 (UTC)


 * Category theory is the closest, but overall it sounds to me more like All your base are belong to us :) Dmcq (talk) 18:07, 26 March 2010 (UTC)
 * More details? Does category theory deal with the uniqueness of definiendum? HOOTmag (talk) 18:42, 27 March 2010 (UTC)


 * I'd have guessed "even fraction" would mean one where the numerator is even, since then even numbers would be even fractions. Michael Hardy (talk) 19:11, 26 March 2010 (UTC)
 * The examples given here for "odd fractions" are f/3, f/5, f/7, etc. HOOTmag (talk) 18:42, 27 March 2010 (UTC)


 * I'm having trouble understanding what you're getting at, but maybe you're just talking about a "coding" or "construction" (as in constructive logic)? 66.127.52.47 (talk) 23:17, 26 March 2010 (UTC)
 * Any connection between my request and "coding" or "construction"? HOOTmag (talk) 18:42, 27 March 2010 (UTC)
 * In your example, you code the even fraction as a pair of integers, (even, odd). 66.127.52.47 (talk) 18:49, 27 March 2010 (UTC)
 * I code nothing. Look again at my first paragraph, where I explain how I explicitly define the even fraction (without coding anything), so that I receive a unique definiendum. HOOTmag (talk) 19:55, 27 March 2010 (UTC)
 * Hm. We could also say that you gave an axiomatic definition of an even fraction, then described a structure (the set of (even,odd) integer pairs) that interprets the definition.  Does that help?  There is a topic called constructive type theory that might also reach towards what you might be getting at.  Unfortunately, our article about it is very technical.  66.127.52.47 (talk) 02:26, 28 March 2010 (UTC)

Curve Sketching
Hello. For some function f, f(a) = r, where r is a constant; f is not differentiable at x = a; $$\lim_{x\rightarrow a^{-}} f'(x) = -\infty$$; $$\lim_{x\rightarrow a^{+}} f'(x) = \infty$$. While sketching a graph with the information above, is there a plausible function where $$\lim_{x\rightarrow a^{-}} f(x) = -\infty$$ and $$\lim_{x\rightarrow a^{+}} f(x) = -\infty$$ while f(a) = r? If so, name the function. Thanks in advance. --Mayfare (talk) 20:21, 26 March 2010 (UTC)
 * There are definitely functions that satisfy the conditions you mentioned: f is not differentiable at x = a; $$\lim_{x\rightarrow a^{-}} f'(x) = -\infty$$; $$\lim_{x\rightarrow a^{+}} f'(x) = \infty$$. There is the classic $$f(x)=1/(x-a)$$, for example.
 * You can then separately define f(a) = r at the single point x = a, so that you have a piecewise function. --Kvasir (talk) 22:01, 26 March 2010 (UTC)

Can there be a cusp at x = a with the information in the first sentence? --Mayfare (talk) 12:50, 27 March 2010 (UTC)
 * Yeah, that's possible too, and if you want the function to be continuous then it would have to be that way. For example let $$f(x) = \sqrt{R^2 - (x-a+R)^2} + r$$ for a-R ≤ x ≤ a and $$f(x) = \sqrt{R^2 - (x-a-R)^2} + r$$ for a ≤ x ≤ a+R, for some radius R > 0, which is two quarter circles up against each other. Rckrone (talk) 17:45, 27 March 2010 (UTC)
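Rckrone's quarter-circle example can be checked numerically: with sample values a = 0, r = 0, R = 1 (arbitrary choices), the one-sided difference quotients at a blow up with the required signs:

```python
import math

a, r, R = 0.0, 0.0, 1.0  # arbitrary sample values for Rckrone's construction

def f(x):
    # Two quarter circles of radius R meeting in a cusp at (a, r).
    if a - R <= x <= a:
        return math.sqrt(R ** 2 - (x - a + R) ** 2) + r
    if a <= x <= a + R:
        return math.sqrt(R ** 2 - (x - a - R) ** 2) + r
    raise ValueError("x outside [a - R, a + R]")

assert f(a) == r  # continuous through the cusp, with the prescribed value

h = 1e-8
left_slope = (f(a) - f(a - h)) / h   # tends to -infinity as h -> 0
right_slope = (f(a + h) - f(a)) / h  # tends to +infinity as h -> 0
print(left_slope, right_slope)
```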