Wikipedia:Reference desk/Archives/Mathematics/2009 August 12

= August 12 =

Using the δ/ε method to show that lim(x→0) of 1/x does not exist
So just when I thought I was getting the hang of the δ/ε method, I got stuck on showing that $$\lim_{x \to 0}1/x$$ doesn't exist. I started by assuming the contrary, i.e., that there was a limit L such that $$\lim_{x \to 0}1/x = L$$. After setting up the basic δ/ε inequalities, I got

a) $$L-\epsilon < 1/x < L + \epsilon$$ whenever b) $$-\delta < x < \delta$$. I then decided to look at two different cases for L, namely when L ≥ 0 and when L < 0.

For L ≥ 0, the rightmost side of a) is always positive (since we choose ε > 0), so I thought it would be safe to say that $$ \frac{1}{L + \epsilon}$$ is always positive as well. It then seemed logical to use $$ \frac{1}{L + \epsilon}$$ as an effective delta, i.e., to say that $$0 < x < \frac{1}{L + \epsilon}$$ (however, I could not satisfactorily explain to myself why it seemed logical to do that - explanations appreciated =] ). After multiplying this latest inequality by $$\frac{L + \epsilon}{x}$$, the contradiction between the resulting statement and inequality a) led me to deduce that there isn't an L ≥ 0 that satisfies the limit, and that I'd done something correct.

So then I moved on to the case where L < 0.

I noticed the leftmost side of a) would always be negative, and hence $$ \frac{1}{L - \epsilon}$$ would always be negative as well. Then I got stuck. I know I'm looking for a contradiction between a) and some manipulation of this last inequality, but I'm not entirely sure how to get there, especially since multiplication by negative terms changes the direction of the < signs...

Would somebody be able to explain the completion of this proof to me, please? Korokke (talk) 06:37, 12 August 2009 (UTC)
 * You can make a similar argument as you did in the first case. Specifically, for any δ>0, there is always an x with -δ < x < δ such that $$\frac{1}{L - \epsilon} < x < 0$$.  Note that in both cases it's not enough just to argue that a specific δ doesn't work, but that there is no possible δ that works.  Rckrone (talk) 07:41, 12 August 2009 (UTC)
 * And you only need to show that for a single ε, although in this case no ε will work. For example, it's sufficient to show that ε=2 will not work, by showing that, no matter how small δ is made, 1/x will still get further than ε from a given L. For δ≥1, you can use x = ±1/2 (depending on the value of L), and for δ<1, you can use x = ±δ/2 (likewise). I'll let you complete the argument from there. --COVIZAPIBETEFOKY (talk) 13:09, 12 August 2009 (UTC)
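The witness choice in the reply above can be sanity-checked numerically. A minimal Python sketch (the helper name `witness` and the particular sample values of L and δ are my illustrative choices, not part of the original argument): for any candidate L, picking x with sign opposite to L and |x| = min(δ, 1)/2 keeps 1/x at least 2 away from L, so ε = 2 can never be satisfied.

```python
def witness(L, delta):
    """Pick x with 0 < |x| < delta such that |1/x - L| >= 2.

    Choose the sign of x opposite to the sign of L, so 1/x and L lie on
    opposite sides of 0 and cannot cancel; use |x| = min(delta, 1) / 2,
    which makes |1/x| >= 2 (this matches the delta>=1 / delta<1 split above).
    """
    sign = -1 if L >= 0 else 1
    return sign * min(delta, 1) / 2

# Whatever L and delta we try, the witness x breaks the ε = 2 bound.
for L in (-10, -1, 0, 0.5, 3):
    for delta in (2, 1, 0.1, 1e-6):
        x = witness(L, delta)
        assert 0 < abs(x) < delta and abs(1/x - L) >= 2
```

Of course this only spot-checks finitely many cases; the sign argument in the function's docstring is what makes it work for every L and δ.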

Random Walk
I was reading up on the random walk article, which says that in a situation where someone flips a coin to see if they will step right or left, and does this repeatedly, eventually their expected distance from the starting point should be sqrt(n), where n = number of flips. This is derived from saying that D_n = D_(n-1) + 1 or D_n = D_(n-1) - 1, then squaring the two and adding them, then dividing by two to get (D_n)^2 = (D_(n-1))^2 + 1. So if (D_1)^2 = 1, it follows that (D_n)^2 = n, and that D_n = sqrt(n). But if we instead work with absolute values, we seem to get a different result: abs(D_n) = abs(D_(n-1)) + 1 or abs(D_n) = abs(D_(n-1)) - 1, therefore abs(D_n) = abs(D_(n-1)), so D_n should stay around 0 and 1. Is there a way around this apparent contradiction? —Preceding unsigned comment added by 76.69.240.190 (talk) 17:19, 12 August 2009 (UTC)
 * For one thing, your argument fails when D_(n-1) is 0. Algebraist 17:23, 12 August 2009 (UTC)
 * It is true that D_n = D_(n-1) + 1 or D_n = D_(n-1) - 1, but squaring the two and adding them, then dividing by two to get (D_n)^2 = (D_(n-1))^2 + 1 is not legitimate. Multiplying the two gives (D_n)^2 = (D_(n-1))^2 − 1, which is not correct. Sometimes a bad argument leads to a good result. Bo Jacoby (talk) 04:21, 13 August 2009 (UTC).


 * Adding the two then dividing by two amounts to computing the expectation, as both possibilities have probability 1/2. So, as far as I can see, this makes a valid argument that the expected value of $$D_n^2$$ is n (which is however different from the expected value of $$|D_n|$$ being $$\sqrt n$$; indeed the latter is false by Michael Hardy's comment below). The subsequent computation directly with $$|D_n|$$ is wrong, because the two possibilities do not always have probability 1/2: as Algebraist pointed out, it fails when $$D_{n-1} = 0$$. — Emil J. 10:53, 13 August 2009 (UTC)


 * Nevertheless, the argument is fixable. $$|D_n| = |D_{n-1}| + 1$$ when $$D_{n-1} = 0$$; otherwise the difference is +1 or −1 with equal probability. Thus the expectation of $$|D_n| - |D_{n-1}|$$ equals


 * $$\Pr(D_{n-1}=0)=\begin{cases}\binom{n-1}{(n-1)/2}2^{-(n-1)}&n\text{ odd,}\\0&n\text{ even.}\end{cases}$$


 * Using Stirling's approximation and linearity of expectation,


 * $${\operatorname E|D_n|=\sum_{k<n/2}\binom{2k}k2^{-2k}=\sum_{k<n/2}\frac1{\sqrt{\pi k}}+O(1)=\frac1{\sqrt\pi}\int_1^{n/2}\frac{dx}{\sqrt x}+O(1)=\sqrt{\frac{2n}\pi}+O(1).}$$


 * — Emil J. 11:40, 14 August 2009 (UTC)

If you read carefully, you see that it says
 * $$ \lim_{n\to\infty} \frac{\text{expected distance}}{\sqrt{n}} =\sqrt{\frac{2}{\pi}}. $$

Michael Hardy (talk) 10:36, 13 August 2009 (UTC)
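Both the corrected recursion and the limit above can be checked exactly with a small computation. A minimal Python sketch (the helper names `E_abs` and `P_zero` are mine; the underlying facts — that $$D_n = 2H - n$$ for H the number of heads in n fair flips, and that $$D_n = 0$$ requires exactly n/2 heads — are standard, not from the thread):

```python
import math

def E_abs(n):
    # Exact E|D_n|: after n fair +-1 steps, D_n = 2*H - n with H ~ Binomial(n, 1/2).
    return sum(abs(2*k - n) * math.comb(n, k) for k in range(n + 1)) / 2**n

def P_zero(n):
    # Exact Pr(D_n = 0): possible only for even n, with exactly n/2 heads.
    return math.comb(n, n // 2) / 2**n if n % 2 == 0 else 0.0

# The corrected recursion: the increment of E|D_n| is exactly Pr(D_{n-1} = 0).
for n in range(1, 30):
    assert math.isclose(E_abs(n) - E_abs(n - 1), P_zero(n - 1))

# The limit: E|D_n| / sqrt(n) approaches sqrt(2/pi) ~ 0.798, not 1.
for n in (100, 1000):
    print(n, E_abs(n) / math.sqrt(n))
```

The printed ratios creep up toward sqrt(2/π) from below, consistent with the O(1) error term in the derivation above.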

Using the identity theorem
Here's the problem I'm working on:

Let A be the annulus $$\left\{z \,|\, 1\leq |z| \leq2\right\}$$. Then there exists a positive real number r such that for every entire function f, $$\max_{z\in A}\left| f(z) - \frac{1}{z} \right| \geq r$$.

So, if the claim is not true, then we can get a sequence of entire functions $$f_n, n\in\mathbb{N}$$ with $$\max_{z\in A}\left| f_n(z) - \frac{1}{z} \right| < \frac{1}{n}$$. These functions converge uniformly on the annulus A, so their limit f is also analytic on A, and agrees on that set with $$g(z)=\frac{1}{z}$$. Therefore, by the identity theorem, f and g have the same Taylor series expansion around any point in A, and this expansion converges in the largest radius possible, avoiding singularities. We know that g has one simple pole at the origin, so our function is defined and unbounded on the disk $$\left\{z \,|\, \left|z-z_0\right| < |z_0| \right\}$$, where $$z_0$$ is any point inside the annulus. This disk lies inside the disk $$\left\{z\, | \, \left|z\right| \leq 2\right\}$$, where any entire function would have to be bounded...

Here I'm stuck. I think I just proved that f isn't entire, but what's the contradiction, exactly? Why can't f be the uniform limit of entire functions in some domain, without itself being an entire function? -GTBacchus(talk) 18:58, 12 August 2009 (UTC)
 * It is possible for f to be the uniform limit of entire functions in some domain, without being entire. The geometry here (with the domain enclosing the singularity of f) is crucial. Try taking contour integrals around a circle in the annulus. Algebraist 19:10, 12 August 2009 (UTC)
 * That does it; thank you. The integral for each $$f_n$$ is zero, while that for 1/z is $$2\pi i$$. The identity theorem isn't needed here; I just didn't think to integrate. Complex integration sure does a lot of stuff that real integration doesn't. I think I'm still getting used to that. As for the example where entire functions uniformly converge to a non-entire function, I can just take the Taylor expansion of 1/z around 1, and then look at the domain B(1,1/2). The partial sums of the series are polynomials, and therefore entire, but their uniform limit has its singularity just over the horizon to the west. Is that right? -GTBacchus(talk) 21:58, 12 August 2009 (UTC)
 * Looks OK to me. Michael Hardy (talk) 23:56, 12 August 2009 (UTC)
 * Thanks. :) -GTBacchus(talk) 00:49, 13 August 2009 (UTC)
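The contour-integral argument above is easy to see numerically. A minimal Python sketch (the midpoint discretization, the radius 1.5, and the sample entire integrand are my illustrative choices): around a circle inside the annulus, any entire integrand integrates to 0, while 1/z integrates to $$2\pi i$$ — so no entire function can be uniformly close to 1/z on A.

```python
import cmath

def contour_integral(f, radius=1.5, n=4096):
    # Midpoint rule for the contour integral of f around |z| = radius,
    # parameterized as z(t) = radius * exp(i t), so dz = i z dt.
    total = 0j
    dt = 2 * cmath.pi / n
    for k in range(n):
        z = radius * cmath.exp(1j * (k + 0.5) * dt)
        total += f(z) * 1j * z * dt
    return total

print(contour_integral(lambda z: 1/z))         # close to 2*pi*i
print(contour_integral(lambda z: z**3 - 2*z))  # close to 0 (entire integrand)
```

For 1/z each term of the sum is exactly i·dt, so the discrete answer is $$2\pi i$$ up to rounding; for the polynomial, the equally spaced sample points make the powers of z sum to essentially zero.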