User:Tethys sea/sandbox

Corrections to section 3.3

$$ \gamma $$ an integer and $$ \gamma \ne 1 $$

The case where $$ \gamma $$ is zero or a negative integer

To begin with, we shall simplify matters by concentrating on a particular value of $$ \gamma $$ and generalise the result at a later stage. We shall use the value $$ \gamma=-2 $$. The indicial equation has a root at $$ c=0 $$, and we see from the recurrence relation $$ a_r = \frac{(r + c + \alpha - 1)(r + c + \beta - 1)}{(r + c)(r + c -3)} a_{r - 1}, $$ that when $$ r=3 $$ the denominator has a factor $$ c $$ which vanishes when $$ c=0 $$. In this case, a solution can be obtained by putting $$ a_0= b_0 c $$ where $$ b_0 $$ is a constant. With this substitution, the coefficients of $$ x^r $$ vanish when $$c=0$$ and $$ r < 3 $$. The factor of $$ c $$ in the denominator of the recurrence relation cancels with that of the numerator when $$ r \ge 3 $$. Hence, our solution takes the form $$ y_1= \frac{b_0}{(-2)\times (-1)} \left( \frac{(\alpha)_{3} (\beta)_{3}}{3! \, 0!} x^{3} + \frac{(\alpha)_{4} (\beta)_{4}}{4! \, 1!} x^{4} + \frac{(\alpha)_{5} (\beta)_{5}}{5! \, 2!} x^{5}+\cdots \right) $$ $$ =\frac{b_0}{ (-2)_2} \sum_{r=3}^\infty \frac{(\alpha)_r (\beta)_r}{r! (r-3)!} x^r =\frac{b_0}{ (-2)_2} \frac{(\alpha)_3 (\beta)_3}{3!} \sum_{r=3}^\infty \frac{(\alpha+3)_{r-3} (\beta+3)_{r-3}}{(1+3)_{r-3} (r-3)!} x^r. $$ If we start the summation at $$r=0$$ rather than $$r=3$$ we see that $$ y_1=b_0 \frac{(\alpha)_3 (\beta)_3}{(-2)_2 \times 3!}  x^3 {_2 F_1} (\alpha+3, \beta+3; (1+3); x). $$ The result (as we have written it) generalises easily. For $$\gamma=1-m$$, with $$m=1,2,3,\cdots$$, $$ y_1=b_0 \frac{(\alpha)_m (\beta)_m}{(1-m)_{m-1} \times m!}  x^m {_2 F_1} (\alpha+m, \beta+m; (1+m); x). $$ Obviously, if $$ \gamma=-2 $$, then $$ m=3$$. The expression for $$y_1(x)$$ we have just given looks a little inelegant, since we have a multiplicative constant apart from the usual arbitrary multiplicative constant $$b_0$$. Later, we shall see that we can recast things in such a way that this extra constant never appears.

The other root of the indicial equation is $$ c=1-\gamma=3$$, but this gives us (apart from a multiplicative constant) the same result as found using $$ c=0$$. This means we must take the partial derivative (w.r.t. $$ c$$) of the usual trial solution in order to find a second independent solution. If we define the linear operator $$ L$$ as $$ L=x(1-x)\frac{d^2}{d x^2}-(\alpha+\beta+1) x\frac{d}{d x}+\gamma \frac{d}{d x}-\alpha \beta, $$ then since $$ \gamma=-2 $$ in our case, $$ L \, c \sum_{r=0}^\infty b_r(c) x^{r+c} = b_0 c^2(c-3) x^{c-1}. $$ (We insist that $$ b_0 \ne 0 $$.) Taking the partial derivative w.r.t. $$c$$, $$ L \frac{\partial}{\partial c} c \sum_{r=0}^\infty b_r(c) x^{r+c} = b_0 \left [ (3 c^2-6c) + c^2(c-3)\log x \right ] x^{c-1}. $$ Note that we must evaluate the partial derivative at $$c=0$$ (and not at the other root $$c=3$$). Otherwise the right hand side of the above is non-zero, and we do not have a solution of $$Ly(x)=0$$. The factor $$ c $$ is not cancelled for $$ r=0,1 $$ and $$ r=2 $$. This part of the second independent solution is $$ {\bigg [} \frac{\partial}{\partial c} b_0 \bigg ( c\, x^c + c\frac{(c+\alpha)(c+\beta)}{(c+1) (c-2)} x^{1+c} + c\frac{(c+\alpha)(c+\alpha+1)(c+\beta)(c+\beta+1)}{(c+1)(c+2) (c-2)(c-1)} x^{2+c} {\bigg )} {\bigg ]} {\bigg \vert}_{c=0} $$ $$ = b_0 \left ( 1 +\frac{\alpha \beta}{1! \times (-2)} x +\frac{\alpha (\alpha+1) \beta(\beta+1)}{2! \times (-2)\times (-1)}x^2 \right ) = b_0 \sum_{r=0}^{3-1} \frac{(\alpha)_r (\beta)_r}{r! (1-3)_r } x^r. $$ (The $$\log x$$ contributions vanish here, because each coefficient carries a factor of $$c$$, which is zero at $$c=0$$.) Now we can turn our attention to the terms where the factor $$ c $$ cancels. First $$ c b_3= \frac{b_0}{(c-1)(c-2)} \cancel{c} \frac{ (c+\alpha)(c+\alpha+1)(c+\alpha+2) (c+\beta)(c+\beta+1)(c+\beta+2) }{\cancel{c}(c+1)(c+2)(c+3)}. $$ After this, the recurrence relations give us $$ c b_4=c b_3(c)\frac{ (c+\alpha+3)(c+\beta+3) }{(c+1) (c+4)}, $$ $$ c b_5=c b_3(c) \frac{ (c+\alpha+3)(c+\alpha+4)(c+\beta+3)(c+\beta+4) }{(c+2)(c+1) (c+5)(c+4)}. 
$$ So, if $$ r \ge 3 $$ we have $$ c b_r= \frac{b_0}{(c-1)(c-2)} \frac{ (c+\alpha)_r(c+\beta)_r }{(c+1)_{r-3} (c+1)_r}. $$ We need the partial derivatives $$ \frac {\partial c b_3(c)}{\partial c} {\bigg \vert}_{c=0}= \frac{b_0}{(1-3)_{3-1}} \frac{(\alpha)_3 (\beta)_3}{0! \, 3!} {\bigg [} \frac{1}{1}+\frac{1}{2}+ \frac{1}{\alpha}+\frac{1}{\alpha+1}+\frac{1}{\alpha+2} $$ $$ + \frac{1}{\beta}+\frac{1}{\beta+1}+\frac{1}{\beta+2}-\frac{1}{1}-\frac{1}{2}-\frac{1}{3} {\bigg ]}. $$ Similarly, we can write $$ \frac {\partial c b_4(c)}{\partial c} {\bigg \vert}_{c=0}= \frac{b_0}{(1-3)_{3-1}} \frac{(\alpha)_4 (\beta)_4}{1! \, 4!} {\bigg [} \frac{1}{1}+\frac{1}{2} $$ $$ +\sum_{k=0}^{3}\frac{1}{\alpha+k}+\sum_{k=0}^{3}\frac{1}{\beta+k} -\frac{1}{1}-\frac{1}{2}-\frac{1}{3} -\frac{1}{4}-\frac{1}{1} {\bigg ]}, $$ and $$ \frac {\partial c b_5(c)}{\partial c} {\bigg \vert}_{c=0}= \frac{b_0}{(1-3)_{3-1}} \frac{(\alpha)_5 (\beta)_5}{2! \, 5!} {\bigg [} \frac{1}{1}+\frac{1}{2} $$ $$ +\sum_{k=0}^{4}\frac{1}{\alpha+k}+\sum_{k=0}^{4}\frac{1}{\beta+k} -\frac{1}{1}-\frac{1}{2}-\frac{1}{3}-\frac{1}{4}-\frac{1}{5} -\frac{1}{1}-\frac{1}{2} {\bigg ]}. $$ It becomes clear that for $$ r \ge 3 $$ $$ \frac {\partial c b_r(c)}{\partial c} {\bigg \vert}_{c=0}= \frac{b_0}{(1-3)_{3-1}} \frac{(\alpha)_r (\beta)_r}{(r-3)!\,r!} {\bigg [} H_2 +\sum_{k=0}^{r-1}\frac{1}{\alpha+k}+\sum_{k=0}^{r-1}\frac{1}{\beta+k} -H_r -H_{r-3} {\bigg ]}. $$ Here, $$ H_k $$ is the $$k$$th partial sum of the harmonic series, with $$H_0=0 $$ and $$H_1=1 $$.
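The closed form just given for $$\partial (c\, b_r)/\partial c$$ can be checked against a direct numerical derivative of $$c\, b_r(c)$$. The following sketch (again with arbitrary test values for $$\alpha$$ and $$\beta$$, and $$b_0=1$$) does this for $$r=3,\dots,6$$.

```python
# Compare the closed form for d(c b_r)/dc at c = 0 (gamma = -2 case)
# against a central-difference derivative of c*b_r(c).
# alpha and beta are arbitrary test values.
from math import factorial

def poch(a, n):
    """Pochhammer symbol (a)_n."""
    p = 1.0
    for k in range(n):
        p *= a + k
    return p

def H(n):
    """n-th partial sum of the harmonic series; H(0) = 0."""
    return sum(1.0 / k for k in range(1, n + 1))

alpha, beta, b0 = 0.7, 1.3, 1.0

def c_br(c, r):
    # c*b_r(c) for r >= 3, with the factor c already cancelled
    return (b0 / ((c - 1) * (c - 2))
            * poch(c + alpha, r) * poch(c + beta, r)
            / (poch(c + 1, r - 3) * poch(c + 1, r)))

def d_c_br(r):
    # the closed form derived in the text, evaluated at c = 0
    bracket = (H(2)
               + sum(1.0 / (alpha + k) for k in range(r))
               + sum(1.0 / (beta + k) for k in range(r))
               - H(r) - H(r - 3))
    return (b0 / poch(1 - 3, 3 - 1)
            * poch(alpha, r) * poch(beta, r) / (factorial(r - 3) * factorial(r))
            * bracket)

h = 1e-6
for r in (3, 4, 5, 6):
    numeric = (c_br(h, r) - c_br(-h, r)) / (2 * h)
    print(abs(numeric - d_c_br(r)) < 1e-5 * abs(d_c_br(r)))
```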

Putting these together, for the case $$ \gamma=-2 $$ we have a second solution $$ y_2(x)= \log x \times \frac{b_0}{ (-2)_2} \sum_{r=3}^\infty \frac{(\alpha)_r (\beta)_r}{r! (r-3)!} x^r + b_0 \sum_{r=0}^{3-1} \frac{(\alpha)_r (\beta)_r}{r! (1-3)_r } x^r $$ $$ +\frac{b_0}{ (-2)_2} \sum_{r=3}^\infty \frac{(\alpha)_r (\beta)_r}{(r-3)!\,r!} {\bigg [} H_2 +\sum_{k=0}^{r-1}\frac{1}{\alpha+k}+\sum_{k=0}^{r-1}\frac{1}{\beta+k} -H_r -H_{r-3} {\bigg ]} x^r. $$ The two independent solutions for $$ \gamma=1-m $$ (where $$ m $$ is a positive integer) are then $$ y_1(x)=\frac{1}{(1-m)_{m-1}} \sum_{r=m}^\infty \frac{(\alpha)_r (\beta)_r}{r! (r-m)!} x^r $$ and $$ y_2(x)= \log x \times y_1(x) + \sum_{r=0}^{m-1} \frac{(\alpha)_r (\beta)_r}{r! (1-m)_r } x^r $$ $$ +\frac{1}{ (1-m)_{m-1}} \sum_{r=m}^\infty \frac{ (\alpha)_r (\beta)_r}{(r-m)!\,r!} {\bigg [} H_{m-1}+\sum_{k=0}^{r-1}\frac{1}{\alpha+k} +\sum_{k=0}^{r-1}\frac{1}{\beta+k} -H_r -H_{r-m} {\bigg ]} x^r. $$ The general solution is, as usual, $$ y(x)=A y_1(x)+B y_2(x) $$ where $$A$$ and $$B$$ are arbitrary constants. Now, if the reader consults a ``standard solution" for this case, such as the one given by Abramowitz and Stegun in §15.5.21 (which we shall write down at the end of the next section), it will be found that the $$ y_2 $$ solution we have found looks nothing like the standard solution. In our solution for $$y_2$$, the first term in the infinite series part of $$y_2$$ is a term in $$x^m$$. The first term in the corresponding infinite series in the standard solution is a term in $$x^{m+1}$$; the $$x^m$$ term is missing from the standard solution. Nonetheless, the two solutions are entirely equivalent.

Comparison with the standard solution

The reason for the apparent discrepancy between the solution given above and Abramowitz and Stegun's standard solution §15.5.21 is that there are an infinite number of ways in which to represent the two independent solutions of the hypergeometric ODE. In the last section, for instance, we replaced $$a_0$$ with $$ b_0 c$$. Suppose, though, we are given some function $$h(c)$$ which is continuous and finite everywhere in an arbitrarily small interval about $$c=0$$. Suppose we are also given $$ h(c) \vert_{c=0} \ne 0, $$ and $$ \frac{d h}{d c} {\bigg \vert}_{c=0} \ne 0. $$ Then, if instead of replacing $$a_0$$ with $$ b_0 c$$ we replace $$a_0$$ with $$ b_0 h(c) c$$, we still find we have a valid solution of the hypergeometric equation. Clearly, we have an infinity of possibilities for $$h(c)$$. There is, however, a ``natural choice" for $$h(c)$$. Suppose that $$c b_N(c) =b_0 f(c)$$ is the first non-zero term in the first $$y_1(x)$$ solution with $$c=0$$. If we make $$h(c)$$ the reciprocal of $$f(c)$$, then we won't have a multiplicative constant involved in $$y_1(x)$$ as we did in the previous section. From another point of view, we get the same result if we ``insist" that $$a_N$$ is independent of $$c$$, and find $$a_0(c)$$ by using the recurrence relations backwards.

For the first $$(c=0)$$ solution, the function $$ h(c)$$ gives us (apart from a multiplicative constant) the same $$y_1(x)$$ as we would have obtained using $$h(c)=1$$. Suppose that using $$h(c)=1$$ gives rise to two independent solutions $$y_1(x)$$ and $$y_2(x)$$. In the following we shall denote the solutions arrived at given some $$h(c)\ne 1$$ as $${\tilde y}_1(x)$$ and $${\tilde y}_2(x)$$.

The second solution requires us to take the partial derivative w.r.t. $$c$$, and substituting the usual trial solution gives us $$ L \frac{\partial}{\partial c} \sum_{r=0}^\infty c h(c) b_r x^{r+c} = b_0 \left [ \frac{d h}{d c} c^2 (c+\gamma-1)+ 2 c h(c) (c+\gamma-1)+ h(c) c^2 + h(c) c^2 (c+\gamma-1) \log x \right ] x^{c-1}. $$ (In our case $$\gamma=-2$$, so $$c+\gamma-1=c-3$$, and the right hand side vanishes at $$c=0$$.) The operator $$L$$ is the same linear operator discussed in the previous section. That is to say, the hypergeometric ODE is represented as $$Ly(x)=0$$.

Evaluating the left hand side at $$c=0$$ gives us a second independent solution. Note that this second solution $${\tilde y}_2 $$ is in fact a linear combination of $$y_1(x)$$ and $$y_2(x)$$. Any two independent linear combinations ($${\tilde y}_1$$ and $${\tilde y}_2$$) of $$y_1$$ and $$y_2$$ are independent solutions of $$Ly=0$$. The general solution can be written as a linear combination of $${\tilde y}_1$$ and $${\tilde y}_2$$ just as well as a linear combination of $$y_1$$ and $$y_2$$.

We shall review the special case where $$\gamma=1-3=-2$$ that was considered in the last section. If we ``insist" that $$a_3(c)=const.$$, then the recurrence relations yield $$ a_2=a_3 \frac{ c (3+c)}{(2+\alpha+c)(2+\beta+c) }, $$ $$ a_1=a_3 \frac{ c (2+c)(3+c)(c-1)}{(1+\alpha+c)(2+\alpha+c)(1+\beta+c)(2+\beta+c)}, $$ and $$ a_0=a_3 \frac{ c (1+c)(2+c)(3+c)(c-1)(c-2)}{(\alpha+c)_3 (\beta+c)_3}=b_0 c h(c). $$ These three coefficients are all zero at $$c=0$$, as expected. Three terms are involved in $$y_2(x)$$ when we take the partial derivative w.r.t. $$c$$; we denote the sum of the three terms involving these coefficients as $$S_3$$, where $$ S_3=\left [ \frac{\partial }{\partial c} \left (a_0(c) x^c+a_1(c) x^{c+1}+a_2(c) x^{c+2} \right ) \right ]_{c=0} $$ $$ =a_3 \left [\frac{3\times 2 \times 1 \times (-2)\times (-1)}{(\alpha)_3 (\beta)_3 }x^{3-3} + \frac{3 \times 2 \times (-1)}{(\alpha+1)(\alpha+2)(\beta+1)(\beta+2)}x^{3-2} + \frac{3 }{(\alpha+2) (\beta+2) }x^{3-1}\right ] . $$ The reader may confirm that we can tidy this up and make it easy to generalise by putting $$ S_3=-a_3 \sum_{r=1}^3 \frac{ (-3)_r (r-1)!}{(1-\alpha-3)_r (1-\beta-3)_r} x^{3-r}. $$ Next we can turn to the other coefficients; the recurrence relations yield $$a_4=a_3 \frac{(3+c+\alpha)(3+c+\beta)}{(4+c)(1+c)},$$ $$a_5=a_3 \frac{(3+c+\alpha)(4+c+\alpha)(3+c+\beta)(4+c+\beta)}{(5+c)(4+c)(1+c)(2+c)}.$$ Setting $$c=0$$ gives us $$ {\tilde y}_1(x)=a_3 x^3 \sum_{r=0}^\infty \frac{(\alpha+3)_r (\beta+3)_r}{(3+1)_r r!} x^r =a_3 x^3 {_2 F_1}(\alpha+3,\beta+3;(1+3);x). $$ This is (apart from the multiplicative constant $$(\alpha)_3 (\beta)_3/\left((-2)_2 \times 3!\right)$$) the same as $$y_1(x)$$. 
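The tidied form of $$S_3$$ is easy to verify numerically against the explicit three-term expression; here is a short sketch with arbitrary test values for $$\alpha$$, $$\beta$$ and $$x$$, and $$a_3=1$$.

```python
# Check that the compact form of S_3 reproduces the explicit three-term
# expression. alpha, beta and x are arbitrary test values; a3 = 1.
from math import factorial

def poch(a, n):
    """Pochhammer symbol (a)_n."""
    p = 1.0
    for k in range(n):
        p *= a + k
    return p

alpha, beta, a3, x = 0.7, 1.3, 1.0, 0.4

explicit = a3 * (3 * 2 * 1 * (-2) * (-1) / (poch(alpha, 3) * poch(beta, 3))
                 + 3 * 2 * (-1) / ((alpha + 1) * (alpha + 2) * (beta + 1) * (beta + 2)) * x
                 + 3 / ((alpha + 2) * (beta + 2)) * x**2)

compact = -a3 * sum(poch(-3, r) * factorial(r - 1)
                    / (poch(1 - alpha - 3, r) * poch(1 - beta - 3, r)) * x**(3 - r)
                    for r in range(1, 4))

print(abs(explicit - compact) < 1e-12)
```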
Now, to find $${\tilde y}_2$$ we need the partial derivatives $$\frac{\partial a_4 }{\partial c}{\bigg \vert}_{c=0}= a_3 {\bigg [}  \frac{(3+c+\alpha)(3+c+\beta)}{(4+c)(1+c)} {\bigg (} \frac{1}{\alpha+3+c}+\frac{1}{\beta+3+c}-\frac{1}{4+c}-\frac{1}{1+c} {\bigg )} {\bigg ]}_{c=0} $$ $$ = a_3 \frac{(3+\alpha)_1(3+\beta)_1}{(1+3)_1 \times 1!} {\bigg (} \frac{1}{\alpha+3}+\frac{1}{\beta+3}-\frac{1}{4}-\frac{1}{1} {\bigg )}. $$ Then $$ \frac{\partial a_5 }{\partial c}{\bigg \vert}_{c=0} = a_3 \frac{(3+\alpha)_2(3+\beta)_2}{(1+3)_2 \times 2!} {\bigg (} \frac{1}{\alpha+3}+\frac{1}{\alpha+4}+\frac{1}{\beta+3} +\frac{1}{\beta+4}-\frac{1}{4}-\frac{1}{5}-\frac{1}{1} -\frac{1}{2} {\bigg )}. $$ We can re-write this as $$ \frac{\partial a_5 }{\partial c}{\bigg \vert}_{c=0} =a_3 \frac{(3+\alpha)_2(3+\beta)_2}{(1+3)_2 \times 2!} {\bigg [} \sum_{k=0}^1\left ( \frac{1}{\alpha+3+k}+\frac{1}{\beta+3+k} \right ) +\sum_{k=1}^3 \frac{1}{k}-\sum_{k=1}^5 \frac{1}{k}-\frac{1}{1} -\frac{1}{2} {\bigg ]}. $$ The pattern soon becomes clear, and for $$r=1,2,3,\cdots$$ $$ \frac{\partial a_{r+3} }{\partial c}{\bigg \vert}_{c=0} =a_3 \frac{(3+\alpha)_{r}(3+\beta)_r}{(1+3)_r \times r!} {\bigg [} \sum_{k=0}^{r-1} \left (  \frac{1}{\alpha+3+k}+\frac{1}{\beta+3+k} \right ) +\sum_{k=1}^3 \frac{1}{k}-\sum_{k=1}^{r+3} \frac{1}{k}-\sum_{k=1}^r\frac{1}{k} {\bigg ]}. $$ Clearly, for $$r=0$$, $$\frac{\partial a_{3} }{\partial c}{\bigg \vert}_{c=0} =0.$$ The infinite series part of $${\tilde y}_2$$ is $$S_\infty $$, where $$ S_\infty=x^3 \sum_{r=1}^\infty \frac{\partial a_{r+3} }{\partial c}{\bigg \vert}_{c=0} x^r. $$ Now we can write (disregarding the arbitrary constant) for $$\gamma=1-m$$ $$ {\tilde y}_1(x)= x^m {_2 F_1}(\alpha+m,\beta+m;1+m;x) $$ $$ {\tilde y}_2(x)={\tilde y}_1(x) \log x -\sum_{r=1}^m \frac{ (-m)_r (r-1)!}{(1-\alpha-m)_r (1-\beta-m)_r} x^{m-r} 
$$ $$ +x^m \sum_{r=0}^\infty \frac{(\alpha+m)_{r}(\beta+m)_r}{(1+m)_r \times r!} {\bigg [} \sum_{k=0}^{r-1} \left ( \frac{1}{\alpha+m+k}+\frac{1}{\beta+m+k} \right ) +\sum_{k=1}^m \frac{1}{k}-\sum_{k=1}^{r+m} \frac{1}{k}-\sum_{k=1}^r\frac{1}{k} {\bigg ]} x^r. $$ Some authors prefer to express the finite sums in this last result using digamma functions $$\psi(x)$$. In particular, the following results are used: $$ H_n=\psi(n+1)+\gamma_{em}. $$ Here, $$ \gamma_{em} =0.5772156649\cdots=-\psi(1)$$ is the Euler–Mascheroni constant. Also, $$ \sum_{k=0}^{n-1} \frac{1}{z+k}=\psi(z+n)-\psi(z). $$ With these results we obtain the form given in Abramowitz and Stegun §15.5.21, namely $$ {\tilde y}_2(x)={\tilde y}_1(x) \log x -\sum_{r=1}^m \frac{ (-m)_r (r-1)!}{(1-\alpha-m)_r (1-\beta-m)_r} x^{m-r} $$ $$ +x^m \sum_{r=0}^\infty \frac{(\alpha+m)_{r}(\beta+m)_r}{(1+m)_r \times r!} {\bigg [} \psi(\alpha+r+m)-\psi(\alpha+m)+ \psi(\beta+r+m)-\psi(\beta+m) $$ $$ -\psi(r+1+m)-\psi(r+1)+\psi(1+m)+\psi(1) {\bigg ]} x^r. $$
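The two digamma identities used above are easy to confirm numerically. The following sketch approximates $$\psi$$ by a central difference of `math.lgamma` (accurate enough for a sanity check, though a proper digamma routine would be used in real work).

```python
# Check H_n = psi(n+1) + gamma_em (in particular psi(1) = -gamma_em),
# and the difference identity sum_{k=0}^{n-1} 1/(z+k) = psi(z+n) - psi(z),
# using a finite-difference digamma built from math.lgamma.
from math import lgamma

def psi(x, h=1e-6):
    """Digamma via central difference of lgamma; adequate for a sanity check."""
    return (lgamma(x + h) - lgamma(x - h)) / (2 * h)

gamma_em = 0.5772156649015329  # Euler-Mascheroni constant

print(abs(psi(1.0) + gamma_em) < 1e-6)  # psi(1) = -gamma_em

for n in (1, 2, 5, 10):
    H_n = sum(1.0 / k for k in range(1, n + 1))
    print(abs(H_n - (psi(n + 1) + gamma_em)) < 1e-6)

z, n = 0.7, 6
lhs = sum(1.0 / (z + k) for k in range(n))
print(abs(lhs - (psi(z + n) - psi(z))) < 1e-6)
```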

The case where $$ \gamma $$ is an integer greater than 1

In this section, we shall concentrate on the ``standard solution", and we shall not replace $$ a_0 $$ with $$ b_0 (c-1+\gamma)$$. We shall put $$ \gamma =1+m $$ where $$m=1,2,3, \cdots$$. For the root $$c=1-\gamma$$ of the indicial equation we had $$ A_r= \left [ A_{r-1}\frac{ (r+\alpha-1+c)(r+\beta-1+c)}{(r+c)(r+c+\gamma-1)} \right ]_{c=1-\gamma} =A_{r-1}\frac{ (r+\alpha-\gamma)(r+\beta-\gamma)}{(r+1-\gamma)(r)}, $$ where $$ r\ge 1$$, in which case we are in trouble if $$ r=\gamma-1=m$$. For instance, if $$ \gamma=4 $$, the denominator in the recurrence relations vanishes for $$r=3$$. We can use exactly the same methods that we have just used for the standard solution in the last section. We shall not (in the instance where $$ \gamma=4 $$) replace $$ a_0 $$ with $$ b_0 (c+3)$$, as this will not give us the standard form of solution that we are after. Rather, we shall ``insist" that $$ A_3 =const.$$, as we did in the standard solution for $$ \gamma=-2 $$ in the last section. (Recall that this defined the function $$ h(c)$$ and that $$ a_0 $$ will now be replaced with $$ b_0 (c+3)h(c)$$.) Then we may work out the coefficients of $$ x^0 $$ to $$ x^2 $$ as functions of $$ c $$ using the recurrence relations backwards. There is nothing new to add here, and the reader may use the same methods as used in the last section to replicate the results given by Abramowitz and Stegun §15.5.18 and §15.5.19; these are $$ y_{1}={_2F_1}(\alpha,\beta;1+m;x), $$ and $$ y_{2}=  {_2F_1}(\alpha,\beta;1+m;x) \log x + \sum_{r=1}^\infty \frac{ (\alpha)_r (\beta)_r}{r! (1+m)_r} [ \psi(\alpha+r)-\psi(\alpha) +\psi(\beta+r)-\psi(\beta) $$ $$ -\psi(m+1+r)+\psi(m+1)-\psi(r+1)+\psi(1) ] x^r -\sum_{k=1}^m \frac{ (k-1)! (-m)_k }{ (1-\alpha)_k (1-\beta)_k} x^{-k}. $$ Note that the powers of $$x$$ in the finite sum part of $$y_2(x)$$ are now negative, so that this sum diverges as $$x \rightarrow 0$$.