Wikipedia:Reference desk/Archives/Mathematics/2012 May 7

= May 7 =

Beta Function
For the beta function, I have the relation $$ \frac{u+v}{u}\,\mathrm{B}(u+1,v) = \mathrm{B}(u,v)$$ for Re(u)>0 and Re(v)>0. Given this, I have to show that the beta function is analytic except for simple poles and then find the residues at the poles. Now, it is clear from this relation that $$ \mathrm{B}(u,v) = \frac{u+v}{u}\cdot\frac{u+1+v}{u+1}\cdots\frac{u+n-1+v}{u+n-1}\,\mathrm{B}(u+n,v),$$ so we see that the beta function has simple poles at the non-positive integers. Does this suffice for showing that it is analytic except for the poles that we have located? And how does one go about determining the values of the residues at the poles? Thanks. meromorphic  [talk to me]  10:50, 7 May 2012 (UTC)
 * I think this argument only works if you know that B(u,v) has no poles for Re(u)>0 and Re(v)>0. Then for any point in C^2 you can pull this pole-free region back to it the way you suggested. Edit: Actually, if you only have this relation for Re(u)>0 and Re(v)>0, then it tells you nothing about the values of B(u,v) for Re(u)<0.  Edit again: Although I guess the two definitions do agree for Re(u)<0 when you define those values by analytic continuation, but I don't really remember much about analytic continuation. Rckrone (talk) 01:37, 8 May 2012 (UTC)
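For what it's worth, the residues can be sanity-checked numerically. This is a hedged sketch, not part of the thread: it assumes the Gamma-function representation B(u,v) = Γ(u)Γ(v)/Γ(u+v), from which the residue of B(·,v) at u = -m should be (-1)^m Γ(v)/(m! Γ(v-m)), inherited from the residue of Γ at -m. The values of m and v below are arbitrary test choices.

```python
# Numerical check of the residue of B(u, v) at u = -m, assuming
# B(u, v) = Gamma(u) Gamma(v) / Gamma(u + v) (not stated in the thread).
from math import gamma, factorial

def beta(u, v):
    return gamma(u) * gamma(v) / gamma(u + v)

m, v, h = 2, 2.5, 1e-6
# Near the simple pole, (u + m) * B(u, v) approaches the residue.
numeric = h * beta(-m + h, v)
expected = (-1) ** m / factorial(m) * gamma(v) / gamma(v - m)
print(numeric, expected)  # the two values should agree to several digits
```

For m = 2, v = 2.5 the expected residue works out to Γ(2.5)/(2·Γ(0.5)) = 0.375.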

Riemann P function
I have established, through manipulations of P symbols, that

$$\;(z-1)^{-a}P \left\{ \begin{matrix} 0 & 1 & \infty & \; \\ 0 & 0 & a & z(z-1)^{-1} \\ 1-c & b-a & c-b & \; \end{matrix} \right\} = P \left\{ \begin{matrix} 0 & 1 & \infty & \; \\ 0 & 0 & a & z \\ 1-c & c-a-b & b & \; \end{matrix} \right\}$$.

I believe this shows that $$(z-1)^{-a}F(a,c-b;c;z(z-1)^{-1})$$ is a branch of the second P function above (is this correct?). I now have to find a linearly independent branch, which I am unsure of how to do. Can someone help me please? Thanks. meromorphic  [talk to me]  12:17, 7 May 2012 (UTC)
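For reference (not from the thread): the other exponent of the second P symbol at z = 0, namely 1-c, yields the standard second solution of the hypergeometric equation there, which is a linearly independent branch provided c is not an integer:

```latex
% Standard second solution at z = 0, valid when c is not an integer;
% it carries the exponent 1-c of the P symbol at 0.
$$ z^{1-c}\,F(a-c+1,\; b-c+1;\; 2-c;\; z) $$
```

Linear independence from the branch carrying exponent 0 at z = 0 follows because the two behave like z^0 and z^{1-c} as z → 0.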

Table problem
Given a table with 2 columns and n rows (n from 2 to 10), plus a sum for each row and a sum for each column, is it possible to fill the table, such that the values in each column add up to its sum, and for each row, the value in the first cell plus double the value in the second cell equals the sum for that row? If not possible, why? If possible, is there an efficient algorithm for filling the table with suitable values? Thanks. — Preceding unsigned comment added by 220.255.1.139 (talk) 15:12, 7 May 2012 (UTC)


 * You have 2n unknowns and only n+2 constraints, so yes, it is possible, but if n > 2 the solution is not unique. One approach is to put 0 in the second column for the first n-2 rows and put the sum of each row in the first column. Then you are left with 4 linear equations in 4 unknowns, which is not difficult to solve. Gandalf61 (talk) 16:51, 7 May 2012 (UTC)
 * A solution only exists if the first column sum plus twice the second column sum is equal to the total of all row sums. -- Meni Rosenfeld (talk) 20:42, 7 May 2012 (UTC)
 * So in fact if this condition mentioned by Meni Rosenfeld is satisfied, there is a linear dependence among the constraints and there are n-1 degrees of freedom. You can put 0 (or whatever values you like) in all but one of the entries in the second column (or all but one in the first column, or mix and match as you like), and the rest will be determined in the obvious way. Rckrone (talk) 22:11, 7 May 2012 (UTC)
 * You didn't say if the row/column sums and entries have to be nonnegative integers. If they do, Meni's condition is not enough.  It is also required that the first column sum is at least equal to the number of row sums that are odd. McKay (talk) 08:09, 11 May 2012 (UTC)
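Rckrone's construction above can be written out as a short routine. This is a sketch assuming real-valued entries are allowed (so McKay's extra parity condition for nonnegative integers is not checked); the function name and the choice to dump all of column 2 into the last row are arbitrary.

```python
def fill_table(row_sums, col1_sum, col2_sum):
    """Return rows (a_i, b_i) with a_i + 2*b_i = row_sums[i],
    sum of a_i = col1_sum and sum of b_i = col2_sum, or None if
    the consistency condition fails."""
    # Meni Rosenfeld's condition: col1 + 2*col2 must equal the row-sum total.
    if col1_sum + 2 * col2_sum != sum(row_sums):
        return None
    b = [0] * len(row_sums)
    b[-1] = col2_sum                 # put all of column 2 in the last row
    a = [r - 2 * bi for r, bi in zip(row_sums, b)]
    return list(zip(a, b))

print(fill_table([5, 7, 9], 13, 4))  # [(5, 0), (7, 0), (1, 4)]
```

Since the constraints have n-1 degrees of freedom, any other choice of the first n-1 entries of column 2 works equally well.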

Probability ball
Let $$f : \mathbb{R} \rightarrow [0,1]$$ be continuous, and such that $$\int_\mathbb{R} f(x) dx = 1$$, and $$\lim_{x \rightarrow \pm\infty} x^2 f(x) = 0$$. Prove that, for all $$\epsilon > 0$$, there exist $$\delta >0, N \in \mathbb{N}, (x_j)_{j=0}^N \in \mathbb{R}^{N+1}$$ increasing, and $$(p_j)_{j=0}^N \in [0,1]^{N+1}$$, such that $$\int_{\mathbb{R}} |f(x) - \sum_{j=0}^N \frac{p_j}{\delta} 1_{[x_j - \delta/2, x_j + \delta/2)} (x)|dx \le \epsilon$$. Widener (talk) 22:07, 7 May 2012 (UTC)
 * Should there be an additional factor of 1/(N+1) on the sum? The step function should also integrate to 1. Rckrone (talk) 22:21, 7 May 2012 (UTC)
 * Ah, the thing is $$\sum_{j=0}^N p_j = 1$$. Widener (talk) 22:25, 7 May 2012 (UTC)
 * Oh yeah, I forgot they are weighted by pj. For any non-negative integrable function f, you can find some bounded interval that has most of the weight. Then on that interval f is uniformly continuous so you can approximate it with these boxes of width δ.  Since $$\sum_{j=0}^N p_j = 1$$, the last box's height is determined by the previous ones, but the error there should be small since the two functions both integrate to 1.  I'm not sure where $$\lim_{x \rightarrow \pm\infty} x^2 f(x) = 0$$ comes in, so maybe I'm missing something. Rckrone (talk) 22:33, 7 May 2012 (UTC)
 * That seems very informal - is that a correct proof? Can you make it more formal? Widener (talk) 23:24, 7 May 2012 (UTC)
 * Yes, I'm fairly sure that I can. ;) Rckrone (talk) 23:36, 7 May 2012 (UTC)
 * You can? Then why haven't you shown me what it is already? And do you use $$\lim_{x \rightarrow \pm \infty} x^2 f(x) = 0$$? Widener (talk) 23:55, 7 May 2012 (UTC)
 * Seriously, what's the answer? I can't figure it out and it's due in an hour. Widener (talk) 05:48, 8 May 2012 (UTC)
 * Ok I don't think I'm supposed to do this, but: Choose M so that $$\int_{-M}^M f(x) dx > 1 - \epsilon/3$$. By uniform continuity of f on [-M,M], we can find δ such that for all |x - y| < δ, |f(x)-f(y)| < ε/6M.  Now pick N+1 points xi so the δ/2 balls cover [-M,M].  Choose each pi so that pi/δ is the average value of f on [xi-δ/2, xi+δ/2].  Letting p be the step function defined by these values of pi, show that |f(x) - p(x)| < ε/6M for all x in [-M,M].  This is good but these pis don't sum to 1, so let $$p'_0 = 1 - \sum_{i=1}^N p_i$$ and let p' be the step function defined by replacing p0 with p'0.  Use the fact that $$\int_{-M}^M f(x) dx = \int_{-M}^M p(x) dx$$ and $$\int_{-M}^M p'(x) dx = 1$$, to show that |p0/δ - p'0/δ| < ε/3δ.  Then $$\int_{-M}^M |f(x) - p'(x)| dx < 2\epsilon/3$$ and finally $$\int_\mathbb{R} |f(x) - p'(x)| dx < \epsilon.$$ Rckrone (talk) 06:21, 8 May 2012 (UTC)
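The construction above can be illustrated numerically. This is a hedged sketch, not the proof: it uses a triangular density as an example f (my choice, not from the thread), builds the step function with box heights equal to local averages of f, and checks that the L1 error shrinks as the boxes get narrower.

```python
# Numerical illustration of the step-function approximation: boxes of
# width delta centered at x_j, with height = average of f on the box.

def f(x):
    # Example density: triangle on [-1, 1], integrates to 1.
    return max(0.0, 1.0 - abs(x))

def l1_error(N, M=1.5, samples_per_box=50, grid=20000):
    delta = 2.0 * M / (N + 1)
    centers = [-M + (j + 0.5) * delta for j in range(N + 1)]
    # Height of box j is the average of f over [x_j - delta/2, x_j + delta/2).
    heights = []
    for xj in centers:
        avg = sum(f(xj - delta / 2 + (i + 0.5) * delta / samples_per_box)
                  for i in range(samples_per_box)) / samples_per_box
        heights.append(avg)

    def step(x):
        j = int((x + M) / delta)
        return heights[j] if 0 <= j <= N else 0.0

    # Midpoint-rule estimate of the L1 distance over [-M, M]
    # (f vanishes outside [-1, 1], so nothing is lost outside).
    h = 2.0 * M / grid
    pts = [-M + (i + 0.5) * h for i in range(grid)]
    return sum(abs(f(x) - step(x)) for x in pts) * h

print(l1_error(10), l1_error(100))  # the second error is much smaller
```

The weights p_j = height * delta here need not sum exactly to 1; the proof's final step (replacing p0 by p'0) is the renormalization that fixes this.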

How do you show completeness?
Show $$(C[0,T],\rho(x,y) = \sup_{0 \le t \le T} e^{-Lt}|x(t) - y(t)|)$$ is a complete metric space ($$T >0, L \ge 0$$). Widener (talk) 23:39, 7 May 2012 (UTC)
 * More generally, how do you show that metric spaces are complete? Widener (talk) 00:26, 8 May 2012 (UTC)
 * A metric space is complete if every Cauchy sequence converges to a limit in the space. I think here you need to show that if you have a Cauchy sequence of continuous functions, that (a) they converge point-wise, and (b) that the function described by these point-wise limits is continuous. Rckrone (talk) 00:34, 8 May 2012 (UTC)


 * All right; it is quite easy to show that every Cauchy sequence converges to some function under this metric. How do we know this limit is continuous? Hmm, is this convergence uniform perhaps? How would you show that? Widener (talk) 00:52, 8 May 2012 (UTC)
 * Yes, it's uniform. You can find an explicit bound on |x(t) - y(t)| for all t in terms of ρ(x,y). Rckrone (talk) 01:11, 8 May 2012 (UTC)
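Spelling out the bound hinted at above: since $$e^{-LT} \le e^{-Lt} \le 1$$ for all t in [0,T], the weighted metric is equivalent to the sup metric:

```latex
% rho is equivalent to the uniform (sup) metric on C[0,T]:
$$ e^{-LT} \sup_{0 \le t \le T} |x(t) - y(t)|
   \;\le\; \rho(x,y)
   \;\le\; \sup_{0 \le t \le T} |x(t) - y(t)| $$
```

So $$|x(t) - y(t)| \le e^{LT} \rho(x,y)$$ for every t, a ρ-Cauchy sequence is uniformly Cauchy, and since a uniform limit of continuous functions on [0,T] is continuous, completeness follows from that of C[0,T] with the sup metric.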