Wikipedia:Reference desk/Archives/Mathematics/2011 February 28

= February 28 =

Formal proof of equivalences
Hello. I've been asked to prove formally that
 * $$B \subseteq A \Leftrightarrow (A \cap B = B) \Leftrightarrow (A \cup B = A).$$

I'm not entirely sure what a proof of this should look like, but I've taken a stab; how does it hold up?
 * First,
 * $$(A \cap B = B) \Leftrightarrow ( (x \in A \land x \in B) \equiv (x \in B) ) \Leftrightarrow (x \in B \Rightarrow x \in A) \Leftrightarrow B \subseteq A;$$
 * then
 * $$(A \cup B = A) \Leftrightarrow ( (x \in A \lor x \in B) \equiv (x \in A) ) \Leftrightarrow (x \in B \Rightarrow x \in A) \Leftrightarrow B \subseteq A.$$
 * Hence the three are equivalent.

Thanks for the help. — Anonymous Dissident  Talk 11:05, 28 February 2011 (UTC)
 * It doesn't seem to me obvious enough that $$( (x \in A \land x \in B) \equiv (x \in B) ) \Leftrightarrow (x \in B \Rightarrow x \in A)$$. Usually the best way to show that 3 or more things are equivalent is to show a circle of implication, in this case
 * $$B \subseteq A \Rightarrow (A \cap B = B) $$
 * $$(A \cap B = B) \Rightarrow (A \cup B = A)$$
 * $$(A \cup B = A) \Rightarrow B \subseteq A $$
 * To show the first, for example, you need to assume $$B \subseteq A$$ and show that $$A\cap B \subseteq B$$ and $$A\cap B \supseteq B$$. The $$\subseteq$$ part is always true; to show $$\supseteq$$, let $$x\in B$$. Then also $$x\in A$$ (because $$B\subseteq A$$), and so $$x \in A\cap B$$. -- Meni Rosenfeld (talk) 11:22, 28 February 2011 (UTC)
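Meni's three conditions can also be machine-checked on small finite sets. The following is a minimal brute-force sketch (not a proof, and the universe size 5 is an arbitrary choice) verifying that the three conditions always agree:

```python
import itertools

# Brute-force check over all subsets A, B of a small universe that the
# three conditions  B <= A,  A & B == B,  A | B == A  always agree.
universe = range(5)
subsets = [set(c) for n in range(6) for c in itertools.combinations(universe, n)]

for A in subsets:
    for B in subsets:
        c1 = B <= A          # B is a subset of A
        c2 = (A & B) == B    # A intersect B equals B
        c3 = (A | B) == A    # A union B equals A
        assert c1 == c2 == c3
print("all", len(subsets) ** 2, "pairs agree")
```

Of course this only exercises finitely many cases; the circle of implications above is what makes it a theorem.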


 * I would accept $$( (x \in A \land x \in B) \equiv (x \in B) ) \Leftrightarrow (x \in B \Rightarrow x \in A)$$ as an instance of the propositional tautology $$\mathcal{(A \land B \leftrightarrow B) \leftrightarrow (B \rightarrow A)}$$, which is easy to check by truth table (much shorter than giving either a Hilbert-style or natural deduction argument). –Henning Makholm (talk) 14:08, 28 February 2011 (UTC)
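The truth-table check Henning suggests is four rows and is easy to mechanize; a minimal sketch:

```python
from itertools import product

# Verify the propositional tautology ((A and B) <-> B) <-> (B -> A)
# by exhaustive truth table.
for a, b in product([False, True], repeat=2):
    lhs = ((a and b) == b)   # (A and B) <-> B
    rhs = ((not b) or a)     # B -> A, i.e. (not B) or A
    assert lhs == rhs
print("tautology holds for all 4 rows")
```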

Integration by Parts
This question is in keeping with the theme of integration and WolframAlpha. Consider the indefinite integral
 * $$ I := \int \! e^x \sin x \, \operatorname{d}x \, . $$

To solve this integral we apply integration by parts twice and then solve for $$I$$. But instead, let's perform integration by parts repeatedly. We have
 * $$ I = -e^x\cos x + \int \! e^x\cos x \, \operatorname{d}x \,, $$
 * $$ I = -e^x\cos x + e^x\sin x - \int \! e^x \sin x \, \operatorname{d}x \, . $$

At this point, we could solve for $$I$$ and have our answer; but let's continue.
 * $$ I = -e^x\cos x + e^x\sin x + e^x\cos x - \int \! e^x\cos x \, \operatorname{d}x \, . $$

As one proceeds to calculate these leading terms, a simple pattern develops. If Ln denotes the leading terms after (n&thinsp;+&thinsp;1) applications of integration by parts then
 * $$ L_n = \sum_{k=1}^n (-1)^ke^x(\cos x - \sin x) = \left( \sum_{k=1}^n (-1)^k \right) \cdot e^x(\cos x - \sin x) \, . $$

The next question to ask is: what happens to Ln as n tends to infinity? Well, clearly, (&thinsp;Ln&thinsp;) is a divergent sequence. This is where, I think, it starts to get interesting. WolframAlpha mentions the idea of a regularized result for a non-convergent series. I can't seem to find anything on Wikipedia or, for that matter, elsewhere on the web. For a series S, the regularized result R is given by
 * $$ S := \sum_{k=1}^{\infty} a_k \, \mbox{ and } \, R := \lim_{s \to 0} \left( \sum_{k=1}^{\infty} k^sa_k \right) . $$

The amazing thing is that the regularized result corresponding to L∞ is, modulo the constant of integration, exactly the answer to the integral. In other words:
 * $$ \lim_{s \to 0}\left( \sum_{k=1}^{\infty} k^s(-1)^k\right) \cdot e^x(\cos x - \sin x) = \frac{e^x}{2}(\sin x - \cos x) = I \, . $$

— Fly by Night  ( talk )  18:32, 28 February 2011 (UTC)
 * Does anyone know what this regularized result is, and do they have a reference for it?
 * Has anyone any ideas as to what is going on, and why this regularized result gives the integral (modulo a constant)?
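The claimed limit can at least be probed numerically. Below is a sketch of my own (not from the thread): for small negative s the alternating series converges, and averaging two consecutive partial sums accelerates it. The factor tends to -1/2, which multiplied by e^x(cos x - sin x) gives exactly (e^x/2)(sin x - cos x).

```python
def regularized_alternating_sum(s, terms=200000):
    """Estimate sum_{k>=1} (-1)**k * k**s for s < 0.

    For s < 0 the alternating series converges; averaging two
    consecutive partial sums speeds up the convergence.
    """
    partial = 0.0
    prev = 0.0
    for k in range(1, terms + 1):
        prev = partial
        partial += (-1) ** k * k ** s
    return 0.5 * (partial + prev)

for s in (-0.1, -0.01, -0.001):
    print(s, regularized_alternating_sum(s))
# The estimates creep toward -1/2 as s approaches 0 from below.
```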


 * Abel's theorem is probably of interest. Invrnc (talk) 18:42, 28 February 2011 (UTC)


 * Sorry for being dense. But I can't see, from the article at least, how that might be connected. Could you explain, please? — Fly by Night  ( talk )  22:45, 28 February 2011 (UTC)


 * If in $$\textstyle \lim_{s \to 0} \left( \sum_{k=1}^{\infty} k^sa_k \right)$$ you let s approach 0 from below (otherwise the sums don't converge at all), you have an Abelian mean with $$\lambda_n = \ln(n)$$ (so there's at least a connection to Abel). Our article states that Abelian means satisfy axioms (regularity, linearity, stability) which imply that, when they exist, they have to match the value you get by solving for I after two unfoldings.
 * Other interesting articles include Summation of Grandi's series and Zeta function regularization. –Henning Makholm (talk) 00:44, 1 March 2011 (UTC)

Henning: thanks for your, as ever, thoughtful and insightful reply. But it doesn't really help me to understand the main point of my question. Why does this regularized sum give me, modulo a constant, the answer to the integral? The sum I have is divergent; but we add some (seemingly) random factors, take a limit, and hey presto. I know you'll know the answer... — Fly by Night  ( talk )  23:10, 1 March 2011 (UTC)


 * I'm not sure I can offer a full understanding, but here's what I've got, in more detail: First you integrated by parts twice to get
 * $$\int e^x\sin x\,dx = -e^x\cos x + k_1 + e^x\sin x + k_2 - \int e^x\sin x\,dx$$
 * where $$k_1$$ and $$k_2$$ are constants of integration. Usually one silently takes these to be zero without loss of generality, because they can be subsumed into the residual integral -- but showing them explicitly allows us to declare that the two identical-looking indefinite integrals in the equation are in fact the same solution, which we can then give a name:
 * $$F(x) = -e^x\cos x + k_1 + e^x\sin x + k_2 - F(x)$$
 * Now, once we fix $$k_1$$ and $$k_2$$ we get a concrete equation we could solve to find $$F(x)$$, and which particular solution we end up with will then depend on which k's we choose. (Sorry if I'm belaboring this point too much -- it's unclear to me whether "modulo a constant" is part of what puzzles you or just routine proactive pedantry). In any case, once the $$k$$'s have concrete values, everything else happens completely pointwise, and we can consider x to be a constant too, and be left with
 * $$F = D - F$$
 * where $$D = -e^x\cos x + k_1 + e^x\sin x + k_2$$ is just a number that depends on x in some way.
 * What we've done up to now is remove every trace of the problem having to do with an integral. There's just a simple linear equation left, before any infinite series has entered the picture. Now we can start unfolding the equation repeatedly:
 * $$F = D - F = D - (D - F) = D - (D - (D - F)) = D - D + D - D + D - \cdots$$
 * There's a divergent series, and we then apply a regularization operator
 * $$A(a_1, a_2, a_3, \ldots) = \lim_{s\to 0^-}\sum_{n=1}^{\infty} n^s a_n$$
 * to the sequence $$(D, -D, D, -D, \ldots)$$. According to the Divergent series article, this $$A$$ operator has, among others, two properties
 * Linearity: $$A$$ is a linear map from a subspace of R&infin; to R.
 * Stability: $$A$$ satisfies $$A(a_1, a_2, a_3, \ldots) = a_1 + A(a_2, a_3, a_4, \ldots)$$.
 * That it is linear is easy enough to prove; for stability I'm trusting the article's claim. With these two properties we can set $$X = A(D,-D,D,-D,\ldots)$$ and calculate:
 * $$X = A(D,-D,D,-D,\ldots) = D + A(-D,D,-D,D,\ldots) = D + (-1)A(D,-D,D,-D,\ldots) = D - X$$
 * So, if the limit in $$A(\cdots)$$ exists at all, it must solve the very same equation that the original indefinite integral $$F$$ does, and since this equation in fact determines F, the only possible value of $$X$$ is $$F$$ itself.
 * Is that the kind of explanation you were expecting? –Henning Makholm (talk) 02:30, 2 March 2011 (UTC)
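Henning's fixed-point argument has a tiny numerical illustration (my own sketch, with D chosen arbitrarily): direct iteration of F -> D - F just oscillates, but the regularized value of the unfolded series D - D + D - ... is D/2, the unique solution of F = D - F.

```python
# The unfolded series D - D + D - ... has partial sums D, 0, D, 0, ...,
# which do not converge, but their Cesaro average (and likewise the
# Abelian mean) is D/2 -- the unique fixed point of F = D - F.
D = 3.7  # stands in for -e^x cos x + k1 + e^x sin x + k2 at some fixed x

partials = []
total = 0.0
for n in range(10000):
    total += D if n % 2 == 0 else -D
    partials.append(total)

cesaro = sum(partials) / len(partials)
fixed_point = D / 2  # solving F = D - F directly
print(cesaro, fixed_point)
```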


 * That's excellent; thanks. The "modulo constant" was pedantry. If I didn't say that then someone would add "don't forget the constant of integration". The reason I started to think about it is because I thought that the leading terms (with a different integrand) might give a convergent power series, and then I'd get a formula for the sum of that series in terms of integrals. Thanks again Henning. — Fly by Night  ( talk )  03:19, 2 March 2011 (UTC)

FWIW, this kind of argument is sometimes called a "swindle" in mathematics. See Eilenberg-Mazur swindle. Sławomir Biały (talk) 18:33, 2 March 2011 (UTC)

Series question
Write the function $$e^z$$ as $$1-q$$, with $$q = -\sum^{\infty}_{n=1}\frac{z^n}{n!}$$. Taking the reciprocal of this gives $$\frac{1}{1-q}$$, which can be expanded as a binomial series in $$q$$. This series clearly should only converge for $$|q| < 1 $$; however, writing $$q$$ as a series in $$z$$, I end up with the standard series for $$e^{-z}$$, which converges for any $$z$$ and furthermore any $$q \neq 1$$. What is the source of this ostensible paradox?--Leon (talk) 19:23, 28 February 2011 (UTC)


 * When you express the geometric series for $$(1-q)^{-1}$$ in terms of z, you need to rearrange the terms of a divergent series. Divergence and (conditional) convergence are not stable under rearrangement because rearranging terms can produce new cancellations.  See Riemann series theorem for context.   Sławomir Biały  (talk) 20:02, 28 February 2011 (UTC)
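A concrete numerical illustration (my own; z = 2 is an arbitrary choice): there q = 1 - e^2 is about -6.39, so the geometric series in q diverges wildly, while the rearranged power series in z is just the convergent expansion of e^{-z}.

```python
import math

z = 2.0
q = 1.0 - math.exp(z)   # |q| is about 6.39 > 1: geometric series diverges

# Partial sums of the geometric series 1 + q + q^2 + ... blow up:
geom = [sum(q ** k for k in range(n)) for n in range(1, 11)]

# The same expression rearranged as a power series in z converges to e^{-z}:
series = sum((-z) ** n / math.factorial(n) for n in range(50))

print(abs(geom[-1]))              # already enormous after ten terms
print(series, math.exp(-z))       # these two agree
```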

Number of solutions in a system of linear equations
Is there a proof that a system of linear equations can only have zero, one or infinitely many solutions? Widener (talk) 19:44, 28 February 2011 (UTC)
 * If $$\mathrm A\mathbf x=\mathbf b$$ and $$\mathrm A\mathbf y=\mathbf b$$, then $$\mathrm A(\mathbf y-\mathbf x)=0$$, and $$\forall c\in\mathbb R\;\;\mathrm A\bigl(\mathbf x+c(\mathbf y-\mathbf x)\bigr)=\mathbf b$$. If $$\mathbf x\neq\mathbf y$$ (two solutions), this is an infinite family of solutions.  --Tardis (talk) 19:53, 28 February 2011 (UTC)
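Tardis's argument is easy to check numerically. A minimal sketch (the rank-deficient matrix and the two solutions are invented for illustration) showing that every point on the line through two solutions is again a solution:

```python
# Check: if A x = b and A y = b, then x + c(y - x) solves the system too.
# Hypothetical 2x2 example: a rank-1 matrix with a whole line of solutions.

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1.0, 1.0], [2.0, 2.0]]
b = [3.0, 6.0]
x = [1.0, 2.0]   # one solution
y = [3.0, 0.0]   # a second solution

assert matvec(A, x) == b and matvec(A, y) == b
for c in (-2.0, 0.5, 10.0):
    z = [xi + c * (yi - xi) for xi, yi in zip(x, y)]
    assert matvec(A, z) == b
print("every point on the line through x and y is a solution")
```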
 * Thanks; however, that only proves that if there are at least two solutions, then there must be infinitely many. It doesn't prove that there indeed exist systems of linear equations with only one solution. Widener (talk) 20:21, 28 February 2011 (UTC)
 * Although I guess it's pretty obvious that there exist systems of linear equations with only one solution; $$x=1$$ for example! Widener (talk) 20:25, 28 February 2011 (UTC)
 * Exactly. To prove existence you just need to show an example. $$0x=0$$ and $$0x=1$$ are examples for the cases of infinitely many and no solutions, respectively. -- Meni Rosenfeld (talk) 20:54, 28 February 2011 (UTC)
 * You didn't ask for a proof that the three cases $$\{0,1,\infty\}$$ occurred, merely that no others did. The Math desk (myself included) is wont to be (excessively) literal about such things.  --Tardis (talk) 21:46, 28 February 2011 (UTC)

Taylor series question
If I have an arbitrary number of terms from the Taylor series of an unknown function (which we can assume is infinitely differentiable/analytic) at a known point, how can I find the original function (again assuming that it is a combination of elementary functions such as sin, cos, tan, arcsin, log, exp, etc, etc) if I do not recognize this particular combination (for example I might know the series for sin(x) on sight but not sin(sin(x)) or sin^2(x), you get the idea)? 72.128.95.0 (talk) 23:07, 28 February 2011 (UTC)
 * It's not possible to know, if you have only finitely many terms (I assume that's what you mean) of the series. There's no way to know if every remaining term is 0, in which case it's a polynomial, or if the remaining terms are nonzero, in which case it's something else. Staecker (talk) 23:23, 28 February 2011 (UTC)


 * Exactly as Staecker says. As another example, let ƒ be an analytic function, for a fixed n define
 * $$ \left[g_n(f)\right](x) := f(x) - \sum_{k=0}^n \frac{f^{(k)}(0)}{k!}\, x^k \, . $$
 * The function gn(ƒ) has a zero n-jet at x = 0, for any choice of analytic function ƒ. Allowing ƒ to vary over the space of analytic functions, we see that the functions gn(ƒ) all have the same Taylor series up to and including terms of order n; while the functions gn(ƒ) themselves are all (modulo n-jets) very different. — Fly by Night  ( talk )  23:53, 28 February 2011 (UTC)

what if I know (for argument's sake) an infinite number of terms? —Preceding unsigned comment added by 72.128.95.0 (talk) 02:06, 1 March 2011 (UTC)
 * Even then you cannot recover the function, unless you know that the function is analytic (which just means that it is equal to its Taylor series in a neighborhood). This will be true in many cases of practical interest, but is not always true (even for reasonably decent functions) so you will need to be careful.  An example of a function that is not analytic is
 * $$f(x) = \begin{cases} e^{-1/x^2} & \text{if}\ x\not=0\\ 0 &\text{if}\ x=0\end{cases}$$
 * This function is differentiable infinitely many times at the origin and all of its derivatives are zero there (which basically follows from L'Hopital's rule). So the Taylor series is zero at the origin, but the function is not the zero function.   Sławomir Biały  (talk) 02:28, 1 March 2011 (UTC)
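One can see numerically how flat this function is at the origin (a quick sketch of my own, not part of the reply above): even divided by a high power of x it still vanishes as x approaches 0, which is why every Taylor coefficient at 0 is zero.

```python
import math

def f(x):
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# f vanishes faster than any power of x at the origin:
for n in (1, 5, 10):
    print(n, f(0.1) / 0.1**n)
# All tiny: e^{-100}, about 3.7e-44, swamps even x^10 = 1e-10.
```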

but we have assumed the function is analytic 72.128.95.0 (talk) 04:20, 1 March 2011 (UTC)
 * I don't think there is a general method for that, any more than there is a method to find a symbolic expression for a number given its digits. However, Plouffe's inverter exists as a practical solution for the latter, and it is conceivable to construct a lookup table of series expansions where you can put in some terms and see if there's a match.
 * Right now, the closest approximation I know for this is OEIS, where you can put in the numerators or denominators of the first few terms. This works, for example, for 1, 2, 8, 16, 128, 256, 1024, 2048, the denominators of the expansion of $$\sqrt{1+x}$$.
 * Keeping a practical perspective, it is best not to think about "finding the only function given infinitely many terms", but rather "finding the simplest function given several terms". By Occam's razor, this will usually be the one you want. This is especially true if you have enough terms, in which case the simplest function will be ahead of the next best by a huge margin. -- Meni Rosenfeld (talk) 09:12, 1 March 2011 (UTC)
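Meni's OEIS example can be reproduced with exact rational arithmetic; a minimal sketch computing the binomial coefficients $$\binom{1/2}{k}$$ of $$\sqrt{1+x}$$ and reading off their denominators:

```python
from fractions import Fraction

# Taylor coefficients of sqrt(1+x) at 0, i.e. binom(1/2, k), exactly.
coeffs = []
c = Fraction(1)
for k in range(8):
    coeffs.append(c)
    c = c * (Fraction(1, 2) - k) / (k + 1)   # next binomial coefficient

print([c.denominator for c in coeffs])
# -> [1, 2, 8, 16, 128, 256, 1024, 2048], the sequence quoted above
```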

There does exist a general method for this; you can e.g. use the Lenstra–Lenstra–Lovász lattice basis reduction algorithm in a straightforward way. In practice, knowing just a single high order Taylor series coefficient suffices to find the correct linear combination of many hundreds of your elementary functions. Count Iblis (talk) 15:03, 1 March 2011 (UTC)
 * Do I understand correctly that this is to find a linear combination from a given set of basis functions? I don't think this is what the OP is talking about. He wants to be able to find a function like $$\exp(\sin(\log(1+x))\sqrt{1+x})$$ from its coefficients, without knowing a priori that this particular function should be considered. -- Meni Rosenfeld (talk) 15:14, 1 March 2011 (UTC)
 * I see, yes, the LLL algorithm works for a linear set of basis functions. However, you are free to choose a large set of basis functions. The method will not yield an answer if you choose this set too large compared to the information present in the Taylor coefficients. Count Iblis (talk) 01:11, 2 March 2011 (UTC)
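Count Iblis's idea, stripped down to a toy version (my own sketch, using ordinary linear algebra rather than LLL): if the unknown function is assumed to be a linear combination of a known basis, the weights can be read off from a few Taylor coefficients by solving a linear system. The target 2·exp(x) + 3·sin(x) and the two-function basis are invented for illustration.

```python
from fractions import Fraction as F

# Recover weights w1, w2 in  f = w1*exp(x) + w2*sin(x)  from the first
# two Taylor coefficients of f at 0, via a 2x2 system (Cramer's rule).
# Real applications replace this with LLL to search for integer
# relations among many basis functions at once.

# Taylor coefficients (orders 0 and 1) of the basis functions at 0:
exp_c = [F(1), F(1)]   # exp(x) = 1 + x + ...
sin_c = [F(0), F(1)]   # sin(x) = 0 + x - ...

# Coefficients of the "unknown" function 2*exp(x) + 3*sin(x):
target = [2 * e + 3 * s for e, s in zip(exp_c, sin_c)]

det = exp_c[0] * sin_c[1] - exp_c[1] * sin_c[0]
w1 = (target[0] * sin_c[1] - target[1] * sin_c[0]) / det
w2 = (exp_c[0] * target[1] - exp_c[1] * target[0]) / det
print(w1, w2)   # recovers the weights 2 and 3
```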