User:TakuyaMurata/Intro to Analysis/Sequence and continuity

This chapter succinctly covers sequences, series and continuity. We assume that readers have gained some familiarity with these concepts in advanced calculus. To give a general view, we develop the material in a logical rather than pedagogical order. The emphasis is on the connections with other related topics.

Sequence
Definition: ''A map on a countable set is called a "sequence", denoted by $$\{s\}$$. Each element of a sequence is called a "term".''

Since every countably infinite set has the same cardinality, we usually index the terms of a sequence by natural numbers, writing $$\{s_n\}$$ instead of $$s(n)$$ for a natural number $$\mathit{n}$$. For example, the identity map on the set of prime numbers, of which there are countably many, is a sequence. A sequence can be constructed recursively as well. For example, let $$s_0 = a_0$$ and $$s_n = a_n x^n + s_{n - 1}$$; the sequence $$\{s_n\}$$ of partial sums is called a power series.

The definition implies that a sequence is at most countable.

The terms of a sequence can be functions as well. For example, let $$f_n(x) = {x \over n}$$.
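Pointwise, this sequence of functions tends to the zero function: for each fixed x, the values $$f_n(x)$$ shrink as n grows. A quick numerical sketch in Python (the names are ours, chosen for illustration):

```python
# f_n(x) = x / n: for each fixed x, f_n(x) -> 0 as n -> infinity.
def f(n, x):
    return x / n

x = 5.0
values = [f(n, x) for n in (1, 10, 100, 1000)]
print(values)  # the terms shrink toward 0 for this fixed x
```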

Definition: ''If $$\{s\}$$ is a sequence on S, then a subsequence $$\{a\}$$ is a sequence on a countable subset A of S such that $$a_n = s_n$$ for every n in A.''

Countability implies that a subsequence is itself a sequence. To give an example, let $$s_n = 1/n$$ for each natural number n. Restricting n to the even numbers yields a subsequence of $$\{s_n\}$$.

We say that a sequence $$\{s\}$$ "converges to $$\mathit{p}$$", denoted by $$s \to p$$, when every subsequence of $$\{s\}$$ intersects every neighborhood N of p. Such a p is called a limit point. The next lemma follows immediately; it is the key to showing that a limit point is unique.

Lemma: If $$\{s\}$$ converges to p, then for every neighborhood N of p there exists a subsequence $$\{a\}$$ that is contained in $$\mathit{N}$$.

Theorem: A limit point of a convergent sequence is unique.

Proof: Suppose a sequence converges to two distinct limits p and q. Since neighborhoods can be taken as small as we want, choose disjoint neighborhoods $$N_1$$ of p and $$N_2$$ of q. The lemma gives subsequences {a} contained in $$N_1$$ and {b} contained in $$N_2$$. But then {a} does not intersect $$N_2$$, contradicting the definition of convergence to q. $$\Box$$


Theorem: A sequence $$\{s\}$$ converges to p if and only if (1) for every $$\epsilon > 0$$ it has a subsequence $$\{a\}$$ with $$|a_n - p| < \epsilon$$ for every term, and (2) every subsequence converges to p as well.

Suppose two sequences {a} and {b}, and let {s} be the sequence in which the terms of {a} and {b} occur alternately. If {a} and {b} converge to the same point, so does {s}. If the two converge to different points, {s} diverges. This motivates the following.

Theorem: A sequence $$\{s\}$$ converges to a point $$\mathit{p}$$ if and only if all of its subsequences converge to $$\mathit{p}$$.

Proof: Suppose $$\{s\}$$ converges to p, and let $$\{a\}$$ be a subsequence. Every subsequence of $$\{a\}$$ is also a subsequence of $$\{s\}$$, so it intersects every neighborhood of p; hence $$\{a\}$$ converges to p. Conversely, $$\{s\}$$ is a subsequence of itself. $$\Box$$

The next theorem establishes elementary algebraic properties of limits.

Theorem: Given sequences $$\{a_i\}$$ converging to $$\mathit{a}$$ and $$\{b_i\}$$ converging to $$\mathit{b}$$ and a complex number c, we have:
 * (1) (additivity) $$a_i + b_i$$ converges to $$a + b$$;
 * (2) (scalar multiplication) $$ca_i$$ converges to $$ca$$.

Proof: (1) The convergence implies that there exist subsequences $$\{a_{k_i}\}$$ of $$\{a_i\}$$ and $$\{b_{k_i}\}$$ of $$\{b_i\}$$ such that $$|a_{k_i} - a| < {\epsilon \over 2}$$ and $$|b_{k_i} - b| < {\epsilon \over 2}$$. Thus $$|(a_{k_i} + b_{k_i}) - (a + b)| < \epsilon$$, which shows that $$a_i + b_i$$ converges to $$a + b$$. (2) The case $$c = 0$$ is clear, so suppose $$c \ne 0$$. Since $$\{a_i\}$$ converges, it has a subsequence $$\{s_i\}$$ such that $$|s_i - a| < {\epsilon \over |c|}$$. Then $$|cs_i - ca| < \epsilon$$, while $$\{cs_i\}$$ is a subsequence of $$\{ca_i\}$$. $$\Box$$

In other words, the theorem shows that convergent sequences of complex numbers form a linear space.

Continuity
Definition: The derivative of a function f at a point x is defined as $$\lim_{h \to 0}{f(x + h) - f(x)\over h}.$$

Theorem: The derivative of a function is identically zero if and only if the function is constant.

Proof: (1) Suppose f(x) = c for a constant c. Then $$\lim_{h \to 0}{f(x + h) - f(x)\over h} = \lim_{h \to 0}{c - c\over h} = 0.$$ (2) The converse follows from the mean value theorem.

Theorem (Uniqueness of limits)
A sequence can have at most one limit. In other words: if $$x_n\to a$$ and $$x_n\to b$$ then $$a=b$$.

Proof
Suppose the sequence has two distinct limits, so $$a\not=b$$. Let $$\epsilon=|a-b|/3$$.

Certainly $$\epsilon>0$$, so we can apply the defining property of the sequence (twice) to obtain

$$\exists N_a:\forall n\geq N_a:|x_n-a|\leq\epsilon$$

and also

$$\exists N_b:\forall n\geq N_b:|x_n-b|\leq\epsilon$$

So, in particular, if we take $$n=\max(N_a,N_b)$$ then both of these conditions hold for $$n$$, and so we deduce that $$|x_n-a|\leq\epsilon$$ and also $$|x_n-b|\leq\epsilon$$. Applying the triangle inequality for $$\mathbb R$$, we deduce that $$|a-b|\leq 2\epsilon=(2/3)|a-b|$$, which is a contradiction.

Thus, we deduce that our original assumption (that the sequence had two distinct limits) was false, and so any sequence can have at most one limit. $$\Box$$

Theorem (Convergent Sequences Bounded)
If $$(x_n)_{n=1}^{\infty}$$ is a convergent sequence, then it is bounded.

Proof
Let $$a=\lim_{n\to\infty}x_n$$, and let $$\epsilon=1$$.

From the definition of convergence there exists a corresponding $$N$$ such that

$$\forall n\geq N:|x_n-a|\leq 1$$

The sequence $$(x_n)_{n=N+1}^{\infty}$$ is bounded above by $$a+\epsilon$$ and below by $$a-\epsilon$$. Now consider the finite sequence $$(x_1,x_2,\ldots,x_{N})$$. Let $$M=\max(|x_1|,|x_2|,|x_3|,\ldots,|x_N|, |a|+\epsilon)$$. It follows that $$\forall n:-M\leq x_n\leq M$$ and the sequence is thus bounded. $$\Box$$

Subsequences
Given a sequence $$(x_n)_{n=1}^\infty$$, a subsequence of this sequence is a sequence $$(x_{n_j})_{j=1}^\infty$$, where $$(n_j)_{j=1}^\infty$$ is itself a strictly increasing sequence of natural numbers.

For example, taking $$n_j=2j$$ would provide a subsequence consisting of every other element of the original sequence, that is $$(x_2, x_4, x_6, \ldots)$$.
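Extracting such a subsequence is just index selection; a small Python sketch of the $$n_j=2j$$ example (the sequence $$x_n = 1/n$$ is our choice for illustration):

```python
# Original sequence x_n = 1/n for n = 1..12, stored 0-based.
x = [1 / n for n in range(1, 13)]

# Subsequence with n_j = 2j: x_2, x_4, x_6, ... (0-based index 1, step 2).
sub = x[1::2]
print(sub)  # [0.5, 0.25, 1/6, 1/8, 1/10, 1/12]
```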

Theorem (Subsequence Convergence)
If $$(x_n)_{n=1}^{\infty}$$ converges to $$a\in\mathbb{R}$$, every subsequence $$(x_{n_j})_{j=1}^{\infty}$$ converges to $$a$$.

Proof
Let $$\epsilon>0$$.

From the definition of convergence, there exists a $$N\in\mathbb{N}$$ such that $$n\geq N$$ implies $$|x_n-a|\leq\epsilon$$.

Since $$(n_j)$$ is a strictly increasing sequence of natural numbers, $$n_j\geq j$$ (Exercise: prove this fact. There is a simple proof by induction).

It follows that $$j\geq N$$ implies $$|x_{n_j}-a|\leq\epsilon$$. This is the required property. $$\Box$$

Algebraic Operations on Sequences
We can also perform algebraic operations on sequences. In other words, we can add, negate, subtract, multiply, invert and divide sequences. This is useful because, when the sequences converge, taking the limits commutes with the algebraic operation --- in other words, the limit of a sum of two sequences is equal to the sum of the individual limits, and similarly for the other operations. The following theorem makes this explicit.

Theorem (Algebraic Operations)
If $$(x_n)$$ and $$(y_n)$$ are convergent sequences and $$a\in\mathbb{R}$$, the following properties hold:
 * $$\lim_{n\to\infty}(x_n+y_n)=\lim_{n\to\infty}(x_n)+\lim_{n\to\infty}(y_n)$$.
 * $$\lim_{n\to\infty}(x_ny_n)=\left(\lim_{n\to\infty}x_n\right)\left(\lim_{n\to\infty}y_n\right)$$.
 * $$\lim_{n\to\infty}(ax_n)=a\lim_{n\to\infty}(x_n)$$.
 * If $$\forall n:y_n\neq0$$ and $$\lim_{n\to\infty}y_n\neq0$$, then $$\lim_{n\to\infty}\left(\frac{x_n}{y_n}\right)=\frac{\lim_{n\to\infty}x_n}{\lim_{n\to\infty}y_n}$$.
 * If $$\forall n: x_n \leq y_n $$, then $$\lim_{n\to\infty}x_n \leq \lim_{n\to\infty}y_n$$.

Proof
 * Let $$a=\lim_{n\to\infty}x_n$$ and $$b=\lim_{n\to\infty}y_n$$. We need to show that $$\forall\epsilon>0:\exists N:\forall n \geq N:|(x_n + y_n)-(a + b)|\leq\epsilon$$.

Given any $$\epsilon>0$$ we have $$\epsilon/3>0$$, so from the definition of convergence $$\exists N_x:\forall n \geq N_x:|x_n-a|\leq\epsilon/3$$ and $$\exists N_y:\forall n \geq N_y:|y_n-b|\leq\epsilon/3$$.

Letting $$N=\max(N_x ,N_y)$$, we have from the triangle inequality for $$\mathbb R$$,

$$\forall n \geq N:| (x_n-a) + (y_n-b) |\leq \epsilon /3 + \epsilon /3,$$ that is, $$\forall n \geq N:| (x_n+y_n)- (a+b) |\leq 2\epsilon /3<\epsilon$$, which is what we had to prove.

TODO
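The limit laws can be spot-checked numerically. A sketch in Python, using the sequences $$x_n = 1 + 1/n \to 1$$ and $$y_n = 2 - 1/n \to 2$$ (our choice, not from the text):

```python
# For large n, the terms of x_n + y_n, x_n * y_n and x_n / y_n are
# already close to the limits 1 + 2, 1 * 2 and 1 / 2 respectively.
n = 10**6
x_n = 1 + 1 / n
y_n = 2 - 1 / n
assert abs((x_n + y_n) - 3) < 1e-5    # sum law
assert abs((x_n * y_n) - 2) < 1e-5    # product law
assert abs((x_n / y_n) - 0.5) < 1e-5  # quotient law (limit of y_n is nonzero)
```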

Theorem (Squeeze/Sandwich Limit Theorem)
Given sequences $$(x_n)$$, $$(y_n)$$, and $$(w_n)$$, if $$(x_n)$$ and $$(y_n)$$ converge to $$a$$ and $$x_n\leq w_n\leq y_n$$ for all n, then $$(w_n)$$ converges to $$a$$.

Proof
Let $$\epsilon > 0$$.

We need to find an $$ N $$ such that $$\forall n \geq N, |w_n - a| < \epsilon $$. Since $$(x_n) \rightarrow a$$ and $$(y_n) \rightarrow a$$, from the definition of convergence

$$\exists N_1, N_2: \forall n\geq N_1, |x_n - a| < \epsilon $$ and $$\forall n\geq N_2, |y_n - a| < \epsilon $$.

Let $$N=\max\{N_1,N_2\}$$. Then $$\forall n \geq N, -\epsilon < x_n - a $$ and $$ y_n - a < \epsilon $$

Since $$ x_n \leq w_n \leq y_n$$, we have $$ x_n - a \leq w_n - a \leq y_n - a $$.

Thus $$\forall n \geq N,$$ $$ -\epsilon < x_n - a \leq w_n - a \leq y_n - a < \epsilon $$.

In other words, $$|w_n - a|<\epsilon$$, which is what we wanted.$$\Box$$
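A classic use of the squeeze theorem is $$\sin(n)/n \to 0$$, since $$-1/n \leq \sin(n)/n \leq 1/n$$ and both bounds tend to 0. A numerical sketch in Python:

```python
import math

# w_n = sin(n)/n is squeezed between -1/n and 1/n, both of which -> 0.
n = 10**6
w_n = math.sin(n) / n
assert -1 / n <= w_n <= 1 / n  # the squeeze holds term by term
assert abs(w_n) < 1e-5         # and w_n is already tiny for large n
```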

Other Completeness Axioms
As promised earlier, here are several equivalent formulations of the completeness axiom.

Theorem (Monotone Convergence)
Any monotone, bounded sequence converges.

Proof
Let $$(x_n)$$ be any monotone sequence that is bounded by a real number M. Without loss of generality, let $$(x_n)$$ be increasing.

First, we must use the completeness axiom. Since $$(x_n)$$ is bounded above, it has a least upper bound.

Let $$x=\sup\{x_n\}$$. Now we must show that $$(x_n) \rightarrow x$$.

It follows from the definition of supremum (prove it!) that if $$s = \sup(A)$$ then $$ \forall \epsilon > 0: \exists a \in A : s-\epsilon < a \leq s$$.

Let $$\epsilon > 0$$. Applying this to the current situation, we see that $$\exists N: x-\epsilon < x_N \leq x $$.

Since $$(x_n)$$ is increasing, $$x- \epsilon < x_N \leq x_n \leq x$$ for all $$n \geq N$$.

Thus $$\forall n \geq N, |x-x_n|<\epsilon$$. By the definition of convergence, $$(x_n)$$ converges to $$x$$. $$\Box$$
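The theorem can be illustrated with the increasing sequence $$x_n = 1 - 1/2^n$$, which is bounded above by 1 and converges to its supremum 1. A sketch in Python:

```python
# x_n = 1 - 1/2**n: monotone increasing, bounded above by 1, sup = 1.
xs = [1 - 1 / 2**n for n in range(1, 30)]
assert all(a < b for a, b in zip(xs, xs[1:]))  # monotone increasing
assert all(x < 1 for x in xs)                  # bounded above by 1
assert 1 - xs[-1] < 1e-8                       # terms approach the sup
```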

Theorem (Nested Interval Property)
If there exists a sequence of closed intervals $$I_n = [a_n, b_n] = \{x:a_n \leq x \leq b_n\}$$ such that $$I_{n+1} \subset I_n$$ for all n, then $$\cap _{n=1}^{\infty} I_n$$ is nonempty.

Proof
Note that this is obvious if you know some basic topology. However, that isn't necessary.

Since $$I_{n+1} \subset I_n$$, we have $$a_{n+1}, b_{n+1} \in I_n $$. So $$a_n \leq a_{n+1}$$ and $$b_n \geq b_{n+1}$$.

Thus $$(a_n)$$ and $$(b_n)$$ are monotone sequences; they are also bounded, since every term lies in $$I_1$$, so they converge by the previous theorem.

Since $$a_n \leq b_n$$ for all n, $$\lim_{n\to\infty}a_n \leq \lim_{n\to\infty}b_n $$.

Furthermore, $$ \lim_{n\to\infty}a_n = \sup\{a_n\} $$ and $$\lim_{n\to\infty}b_n = \inf\{b_n\}$$, by the previous proof.

Thus $$a_n \leq \lim_{n\to\infty}a_n \leq \lim_{n\to\infty}b_n \leq b_n $$, so $$\lim_{n\to\infty}a_n \in [a_n,b_n]$$.

Since this holds for all n, $$\lim_{n\to\infty}a_n \in \cap _{n=1}^{\infty} I_n $$, showing that the intersection is indeed nonempty.$$\Box$$

Theorem (Bolzano–Weierstrass)
Every bounded sequence has a convergent subsequence.

Proof
Let $$(x_n)$$ be any sequence bounded by M. The idea is to confine a subsequence to successively smaller intervals whose length approaches 0. Construct a sequence of nested intervals $$I_n = [a_n,b_n]$$ and a subsequence $$x_{n_k}$$ inductively (make sure you know what this means) as follows:

1) Let $$a_0 = -M, b_0 = M $$. Since $$(x_n)$$ is bounded by M, infinitely many terms of $$(x_n)$$ lie in $$I_0$$.  Pick one of these and call it $$x_{n_0}$$.

2) Given $$I_k = [a_k,b_k]$$, consider the intervals $$J = \left[a_k, \frac{a_k+b_k}{2}\right], K = \left[\frac{a_k+b_k}{2},b_k\right]$$. By induction, infinitely many terms of $$(x_n)$$ lie in $$I_k$$.  Thus one of the intervals J, K must contain infinitely many terms of $$(x_n)$$.  Call this interval $$I_{k+1}$$.  Pick one of the terms of $$(x_n)$$ that lie in $$I_{k+1}$$, and call it $$x_{n_{k+1}}$$

By the Nested Interval Property, $$\cap _{n=1}^{\infty} I_n$$ is non-empty. Say it contains the number $$x$$. This number is unique, but we don't need to prove it. We only have to show that $$(x_{n_k})$$ converges to x.

By the construction of the intervals, $$|b_k - a_k|=\frac{2M}{2^k}$$: the initial interval has length 2M and each step halves it. Since both $$x_{n_k}$$ and $$x$$ lie in $$I_k$$, $$|x - x_{n_k}| \leq |b_k - a_k| = \frac{2M}{2^k}$$.

Thus for every $$\epsilon > 0$$ there exists K such that $$|x - x_{n_k}| \leq \frac{2M}{2^k} < \epsilon$$ for all $$k \geq K$$. By definition, $$(x_{n_k}) \rightarrow x$$, which is what we wanted. $$\Box$$
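The bisection step in the proof is essentially an algorithm. The sketch below mimics it on a finite prefix of a bounded sequence: at each step it keeps the half-interval containing more terms (a stand-in for "infinitely many terms"; the function name and example sequence are ours):

```python
# Repeatedly halve [-M, M], keeping a half that contains the most terms
# of the finite sample; ties go to the left half.
def bisect_limit_point(xs, M, steps=40):
    a, b = -M, M
    for _ in range(steps):
        mid = (a + b) / 2
        left = sum(1 for x in xs if a <= x <= mid)
        right = sum(1 for x in xs if mid <= x <= b)
        if left >= right:
            b = mid
        else:
            a = mid
    return (a + b) / 2

# The bounded sequence (-1)**n * (1 + 1/n) clusters near -1 and 1;
# with ties going left, the bisection homes in on the cluster near -1.
xs = [(-1) ** n * (1 + 1 / n) for n in range(1, 2001)]
limit_point = bisect_limit_point(xs, M=2)
print(limit_point)  # close to -1, up to the finite-sample resolution
```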

Cauchy Sequences
If you read about the construction of the real numbers, you may already be familiar with Cauchy sequences. If not, we repeat the definition here:

A sequence $$(x_n)$$ is Cauchy if $$\forall \epsilon > 0: \exists N: \forall n,m \geq N: |x_n - x_m| < \epsilon $$.

Although this seems like a weaker property than convergence, it is actually equivalent, as the following theorem shows:

Theorem (Cauchy Criterion)
A sequence converges if and only if it is Cauchy.

Proof
($$\rightarrow$$) Let $$(x_n) \rightarrow x$$.

Then $$\forall \epsilon > 0: \exists N: \forall n \geq N: |x_n - x|< \frac{\epsilon}{2}$$. Applying the triangle inequality, we see that

$$\forall n,m \geq N: |x_n - x_m| \leq |x_n - x| + |x_m - x| < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon$$.

Thus $$(x_n)$$ is Cauchy.

($$\leftarrow$$) Let $$(x_n)$$ be Cauchy, and let $$\epsilon > 0$$.

By definition of a Cauchy Sequence, $$\exists N: \forall n,m \geq N: |x_n - x_m| < \frac{\epsilon}{2}$$. Now consider the subsequence $$(x_n)_{n=N}^{\infty}$$. If $$x_n \in (x_n)_{n=N}^{\infty}$$, then $$|x_n| \leq |x_n - x_N| + |x_N| \leq \frac{\epsilon}{2} + |x_N|$$.

Thus $$(x_n)_{n=N}^{\infty}$$ is bounded. By the Bolzano–Weierstrass Theorem, it has a convergent subsequence, $$(x_{n_k}) \rightarrow x$$.

Since $$(x_{n_k}) \rightarrow x$$, $$\exists M > N: \forall n_k \geq M: |x_{n_k} - x| < \frac{\epsilon}{2}$$. Thus, fixing such an $$n_k$$, $$\forall n \geq M: |x_n - x| \leq |x_n - x_{n_k}| + |x_{n_k} - x| < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon$$. Thus by definition of convergence $$(x_n) \rightarrow x$$. $$\Box$$
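A concrete Cauchy sequence is the sequence of partial sums $$s_n = \sum_{k=1}^n 1/k^2$$: its tails are uniformly small, since $$|s_m - s_n| \leq 1/n$$ for $$m > n$$. A numerical spot check in Python (the function name is ours):

```python
# Partial sums of sum 1/k**2 form a Cauchy sequence.
def partial_sum(n):
    return sum(1 / k**2 for k in range(1, n + 1))

N = 1000
tail_gap = abs(partial_sum(5000) - partial_sum(N))
assert tail_gap < 1 / N  # |s_m - s_n| <= 1/n for m > n
print(tail_gap)
```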

Types of Convergence
There are many different types of convergence, of which three are listed here (there may be more in the future). The first, as we have already seen, is logically equivalent to the original definition of convergence, whereas the other two are (logically) weaker.


 * Cauchy convergence: See above.


 * Cesàro mean convergence: We say a sequence $$(x_n)$$ converges to $$x$$ in the sense of Cesàro means if the sequence of averages $$\left(\frac{x_1 + x_2 + \cdots + x_n}{n}\right)$$ converges to $$x$$.


 * Limit superior: We define $$\limsup_{n\to\infty} x_n$$ for any bounded sequence $$(x_n)$$ as follows: let $$y_n = \sup\{ x_k : k \geq n \}$$. Then $$\limsup_{n\to\infty} x_n = \lim_{n\to\infty} y_n$$. The limit inferior is defined similarly, replacing "sup" with "inf".
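Both notions are easy to compute for the divergent sequence $$1, -1, 1, -1, \ldots$$: its Cesàro means tend to 0, while its limit superior is 1. A sketch in Python (on a finite prefix, so the tail suprema are approximations):

```python
# The divergent sequence 1, -1, 1, -1, ...
xs = [(-1) ** (n + 1) for n in range(1, 1001)]

# Cesaro means: averages of the first n terms converge to 0.
means = [sum(xs[:n]) / n for n in range(1, 1001)]
assert abs(means[-1]) <= 1 / 1000

# y_n = sup of the tail {x_k : k >= n}; here every tail still contains a 1,
# so y_n = 1 for all n and limsup x_n = 1.
y = [max(xs[n:]) for n in range(900)]
assert all(v == 1 for v in y)
```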

As a self-test, the reader might try to prove the following theorems on their own before continuing.

Theorem (Cesàro Mean Convergence)
If $$(x_n)\rightarrow x $$, then $$(x_n)\rightarrow x $$ in the sense of Cesàro means.

Proof
TODO

Theorem (Limit Superior and Inferior)
$$\limsup_{n\to\infty} x_n$$ exists for any bounded sequence $$(x_n)$$. Furthermore, whenever $$\lim_{n\to\infty}x_n$$ exists, $$\limsup_{n\to\infty} x_n = \lim_{n\to\infty}x_n $$.

Proof
TODO

Recursively Defined Sequences
We can define a sequence recursively by specifying an initial value and a recursion relation:

$$x_0 = a$$

$$x_n = f(x_{n-1})$$

In this case, if the sequence converges to a limit L and f is continuous at L, then $$L = f(L)$$ (as an exercise, prove this rigorously). Recursive sequences are very useful for this reason: the limit depends only on the recursion relation. Thus, in order to find a sequence with a certain limit, we only have to find an algebraic expression satisfied by the limit and specify an initial condition that ensures convergence of the sequence. For example, if we want a sequence that converges to the golden ratio $$\phi = (1 + \sqrt{5})/2$$, we notice that $$\phi$$ is a root of the polynomial $$x^2 - x - 1$$, and define the sequence as follows:

$$x_0 = 1$$

$$x_n = 1 + \frac{1}{x_{n-1}}$$

If L is the limit of this sequence, then $$L = 1 + \frac{1}{L} \implies L^2 = L + 1 \implies L^2 - L - 1 = 0$$. Thus the limit must be a root of this quadratic. Since the sequence is clearly always positive, the only possible limit is $$\phi$$. A bit more work shows that the sequence is bounded and contains two monotone subsequences, $$x_{2n}$$ and $$x_{2n+1}$$, each of which converges by the monotone convergence theorem, and whose difference converges to 0 (exercise: prove this). Thus both subsequences have the same limit, which is the limit of the sequence and must be $$\phi$$. In fact, this sequence converges very rapidly, faster than $$\frac{1}{q_n^2}$$, where $$q_n$$ is the denominator of the rational number $$x_n$$. It is somewhat amazing that we can prove such things given little more than a recursion formula.
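The rapid convergence is easy to see by iterating the recursion directly. A sketch in Python:

```python
# Iterate x_n = 1 + 1/x_{n-1} from x_0 = 1; the limit L satisfies
# L = 1 + 1/L, whose positive root is the golden ratio phi.
phi = (1 + 5 ** 0.5) / 2

x = 1.0
for _ in range(40):
    x = 1 + 1 / x
print(x)  # agrees with phi to machine precision after a few dozen steps
```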

The type of recursive sequence used above is called a "continued fraction" (write out each term $$x_n$$ explicitly to see why). These sequences are very important in number theory. A class of examples that is perhaps more important to analysis is the sequences defined by Newton's method for approximating zeros of differentiable functions:

$$x_n = x_{n-1} - \frac{f(x_{n-1})}{f'(x_{n-1})}$$

We will discuss these later, after we define the derivative.
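As a preview, here is Newton's recursion applied to $$f(x) = x^2 - c$$, giving a sequence that converges to $$\sqrt{c}$$ (a sketch; the function name and step count are ours):

```python
# Newton's method x_n = x_{n-1} - f(x_{n-1})/f'(x_{n-1}) with
# f(x) = x**2 - c, so the update is x - (x*x - c) / (2*x).
def newton_sqrt(c, x0=1.0, steps=20):
    x = x0
    for _ in range(steps):
        x = x - (x * x - c) / (2 * x)
    return x

print(newton_sqrt(2.0))  # approx 1.41421356...
```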

Exercises
Use the definition of convergence to prove that the following sequences converge/diverge and find their limits:

1)$$\left(\frac{x}{n}\right)$$; $$x \in \mathbb{R}$$

2)$$\left(\frac{n-1}{n}\right)$$

3)$$\left(\frac{2n-7}{n^2+1}\right)$$

4)$$\left(x^n\right)$$; $$x > 0$$

5)$$\left(\frac{1}{n^x}\right)$$; $$x \in \mathbb{R}$$

6)$$\left(\frac{\sqrt{2n-1} + 2}{\sqrt{n+3}}\right)$$

7)The recursive sequence defined by the following:

$$x_0 = 2 $$

$$ x_n = \sqrt{x_{n-1}}$$

8)The sequence of Cesaro Means for (1, 1, -1, 1, 1, -1...)

9)$$\limsup_{n\to\infty} x_n$$ where $$(x_n) = \left(\frac{1}{2}, \frac{1}{3}, \frac{2}{3}, \frac{1}{4}, \frac{3}{4},\ldots\right)$$

10)Given any real number $$c > 0$$, find a recursively defined sequence that converges to $$\sqrt{c}$$. (Hint: use Newton's method to approximate the zeros of the function $$x^2-c$$. If you don't know about derivatives yet, just skip this and come back to it when you get there.)

Prove the following statements about limits:

1) If every convergent subsequence of $$(x_n)$$ converges to x, then $$(x_n) \rightarrow x$$

2) If $$x_n > 0$$ for all n, then $$\lim_{n \rightarrow \infty}x_n = 0$$ if and only if $$\lim_{n \rightarrow \infty}\frac{1}{x_n} = \infty $$

3) If $$(x_n) \rightarrow 0$$ and $$(y_n)$$ is bounded, then $$\lim_{n \rightarrow \infty}(x_ny_n) = 0$$.

4) Show that each of the equivalent formulations of completeness (Monotone Convergence Theorem, Nested Interval Property, Bolzano–Weierstrass, Cauchy Convergence) are, in fact, equivalent. That is, show that we can arbitrarily take one of them as an axiom and directly deduce all the others. If you don't have that kind of time on your hands, just show that the original completeness axiom follows from Cauchy convergence (this takes care of most of the cases anyway, if you go back and look at the proofs).


 * (a) Prove that the set of all rational numbers is countable. Hint: first show it is not finite, and then arrange the numbers in a triangle. (b) Prove, by induction, that the set of n-tuples of elements of a countable set is countable.


 * In most analysis books, convergence is defined as follows: if $$\{a_n\}$$ is a sequence, then $$\{a_n\}$$ converges to p if for all $$\epsilon > 0$$ there is some real-valued function N such that $$|a_n - p| < \epsilon$$ whenever $$n > N(p, \epsilon)$$. (a) Show this is equivalent to the definition given in the text. (b) Show that the real-valued function N can be replaced by an integer-valued function N.