Wikipedia:Reference desk/Archives/Mathematics/2010 September 5

= September 5 =

Question on polynomials
Hey guys, how are ya all? Anyway, I'm stuck on a question and your help would be appreciated.

The question is this: '''Let k be a natural number and let r be a real number such that |r| < 1. Prove (by induction on k) that for any polynomial P of degree k there's a polynomial Q of degree k s.t. Q(n+1)r^(n+1) - Q(n)r^n = P(n)r^n.'''

The hint is this: Consider differences of successive terms for n^k r^n, and use the inductive hypothesis.

OK. So that's the question and hint. I've been trying hard at this question since I woke up, but to no avail; I've been at it for about 5 hours now. I really would appreciate an answer. For example, I've tried it when P(x) = x^2 and r = 1/2, and I found that Q(x) = -2x^2 - 4x - 6, and it works. The reason I did this is because it helps me to sum the series n^2 2^(-n). But I'm really stumped on this one. Help??? Thanks guys ... I've worked out about 7 pages of rough work, so please don't say I'm lazy and want my homework done for me. This isn't homework, just independent study for my own benefit, but I'd really like a decent hint or an answer please. —Preceding unsigned comment added by 114.72.228.12 (talk) 05:02, 5 September 2010 (UTC)

Also you don't have to use the hint if you don't want to. Thanks guys ...


 * I've just run through the argument, but I only have a mess (though I think it's a correct mess). The point you might be missing (which is basically the hint restated) is that you don't want to evaluate what Q is if P = x^k; it suffices to show you can deal with x^k (in much the same way as the base case of the induction), and then the leftovers are of lower degree, so they can be absorbed into the rest of P and dealt with by the inductive hypothesis. Also, I don't see that you need |r| < 1 (though you're in trouble if r = 1). Hope that helps. 95.150.22.63 (talk) 14:17, 5 September 2010 (UTC)


 * Following from 95.150.22.63's point about induction, I think it is pretty clear that for P(n) = n^k we can rewrite the equation as rQ(n+1)r^n - Q(n)r^n = n^k r^n. We ignore the r = 0 case, since it's rather meaningless, and so we divide through by r^n. Q is a polynomial of degree k, so let us express it as
 * $$Q(n) = \sum_{i=0}^k a_i n^i$$, so our equation is $$r\sum_{i=0}^k a_i (n+1)^i - \sum_{i=0}^k a_i n^i = n^k$$
 * Then expand the first term by the binomial theorem, and by equating coefficients of n^i for i = 0, ..., k, you get k+1 linear equations to solve for the k+1 coefficients a_0, ..., a_k. To calculate, you can start from a_k and work back; a_k = 1/(r-1) always, for example (a_i can be solved for directly once you know the values of a_{i+1}, ..., a_k). In any case, you have k+1 linear equations in k+1 unknowns, and these always have a unique solution for r ≠ 1, since the system is triangular with every diagonal entry equal to r - 1. Invrnc (talk) 14:31, 5 September 2010 (UTC)
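Invrnc's back-substitution can be sketched in a few lines of Python (the function names here are my own, and this is only an illustration of the scheme, not part of the proof). It recovers the questioner's worked example P(n) = n^2, r = 1/2, where Q(n) = -2n^2 - 4n - 6:

```python
from fractions import Fraction
from math import comb

def q_coeffs(p, r):
    # p[i] is the coefficient of n^i in P; r must not equal 1.
    # Equating coefficients of n^j in r*Q(n+1) - Q(n) = P(n) gives
    #   (r - 1)*a_j + r * sum_{i > j} C(i, j) * a_i = p_j,
    # which we solve by back-substitution from a_k down to a_0.
    k = len(p) - 1
    a = [Fraction(0)] * (k + 1)
    for j in range(k, -1, -1):
        s = sum(r * comb(i, j) * a[i] for i in range(j + 1, k + 1))
        a[j] = (Fraction(p[j]) - s) / (r - 1)
    return a

def poly(coeffs, n):
    return sum(c * n**i for i, c in enumerate(coeffs))

# P(n) = n^2, r = 1/2 -- the questioner's worked example.
r = Fraction(1, 2)
a = q_coeffs([0, 0, 1], r)
print(a)  # [-6, -4, -2], i.e. Q(n) = -2n^2 - 4n - 6

# Check r*Q(n+1) - Q(n) = P(n) for a few values of n.
for n in range(10):
    assert r * poly(a, n + 1) - poly(a, n) == n**2
```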

Jensen's inequality?
Is this a case of Jensen's inequality, or some other inequality?

$$(\mathbb{E}(|a-c|^2))^{1/2} \leq (\mathbb{E}(|a-b|^2))^{1/2}+(\mathbb{E}(|b-c|^2))^{1/2}$$

—Preceding unsigned comment added by 130.102.158.15 (talk • contribs) 08:14, 5 September 2010 (UTC)


 * That's Minkowski's inequality, with $$p=2$$. —Bkell (talk) 15:43, 5 September 2010 (UTC)
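Reading E as a sample mean, the p = 2 case of Minkowski's inequality is the triangle inequality for the root-mean-square norm; a quick numerical spot check (illustrative only, variable names are my own):

```python
import random, math

def l2(xs):
    # (E|x|^2)^(1/2), with E taken as the mean over the samples
    return math.sqrt(sum(x * x for x in xs) / len(xs))

random.seed(0)
a = [random.gauss(0, 1) for _ in range(1000)]
b = [random.gauss(0, 1) for _ in range(1000)]
c = [random.gauss(0, 1) for _ in range(1000)]

lhs = l2([x - z for x, z in zip(a, c)])
rhs = l2([x - y for x, y in zip(a, b)]) + l2([y - z for y, z in zip(b, c)])
assert lhs <= rhs  # Minkowski with p = 2
```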


 * Thanks! —Preceding unsigned comment added by 130.102.158.15 (talk) 22:32, 5 September 2010 (UTC)

Limit Question
I know it's possible to reason out that $$\lim_{(x,y)\rightarrow (0,0)} \exp{\tfrac{-1}{x^2+y^2}} = 0$$, but is there any way to do it algebraically? Thanks. --Basho: banana tree (talk) 20:42, 5 September 2010 (UTC)
 * For every ε>0, you can choose an appropriate δ>0 (you will be able to express this value in terms of ε) such that for every point (x,y) within a distance of δ of (0,0), $$|\exp{\tfrac{-1}{x^2+y^2}}| < \epsilon$$. This is formally how you demonstrate a limit. Rckrone (talk) 22:34, 5 September 2010 (UTC)
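For this particular function the δ can be written down explicitly: writing r^2 = x^2 + y^2, for 0 < ε < 1 we have exp(-1/r^2) < ε exactly when r < 1/√(ln(1/ε)). A small numerical sanity check of that choice (a sketch of the ε-δ bookkeeping, not a proof):

```python
import math, random

def delta(eps):
    # For 0 < eps < 1: exp(-1/r^2) < eps  <=>  r < 1/sqrt(ln(1/eps)),
    # so this delta witnesses the epsilon-delta definition of the limit.
    return 1.0 / math.sqrt(math.log(1.0 / eps))

random.seed(1)
for eps in (0.5, 1e-3, 1e-9):
    d = delta(eps)
    for _ in range(1000):
        # sample a random radius strictly inside the delta-disc (origin excluded)
        r = random.uniform(1e-12, d * 0.999)
        assert math.exp(-1.0 / r**2) < eps
```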


 * (edit conflict) You are interested in how the function behaves as (x,y) tends towards (0,0). But there are many ways in which a point (x,y) can tend towards (0,0). Let's assume that (x,y) follows a line towards (0,0). In other words x = r⋅cos(θ) and y = r⋅sin(θ), where θ is a fixed parameter and r is the variable along the line. We consider:


 * $$\lim_{(x,y) \to (0,0)} e^{-1/(x^2+y^2)} = \lim_{r \to 0} e^{-1/((r\cos\theta)^2+(r\sin\theta)^2)} = \lim_{r \to 0} e^{-1/r^2} = 0. $$


 * Notice that the penultimate expression is independent of θ and does not depend upon the sign of r since 1/(+r)^2 = 1/(−r)^2. — Fly by Night  ( talk )  23:04, 5 September 2010 (UTC)
 * Note that this is not a proof. There are functions which converge along every line towards the origin, and yet do not converge at the origin.--203.97.79.114 (talk) 09:53, 6 September 2010 (UTC)
 * That's the whole point of a limit. The function may not be defined at the origin, but you calculate the limit of the function as you tend towards the origin. In this example the function is undefined at the origin; but the limit exists and is well defined. — Fly by Night  ( talk )  12:59, 6 September 2010 (UTC)
 * I didn't say such functions were undefined; I said they did not converge. Consider the function $$\tfrac{xy}{x + y^3}$$.  The limit as $$(x,y)\rightarrow (0,0)$$ along any line is $$0$$.  The limit as $$(x,y)\rightarrow (0,0)$$ along the curve $$x = y^2$$ is $$1$$.  The limit in this case does not converge, yet if you only considered approaching along a line, you would think it does.  —Preceding unsigned comment added by 203.97.79.114 (talk) 13:18, 6 September 2010 (UTC)
 * I don't think that example works. Surely if we have $$x = y^2$$ then $$\tfrac{xy}{x + y^3} = \tfrac{y}{1+y}$$ and then $$\lim_{y \rightarrow 0}\tfrac{y}{1+y} = 0$$ ? (However, Meni's example below illustrates the point).Gandalf61 (talk) 13:36, 6 September 2010 (UTC)
 * Woops. $$\tfrac{xy^2}{x^2+y^4}$$ works if you want something elementary.  Or Meni's, as you say. --203.97.79.114 (talk) 13:48, 6 September 2010 (UTC)


 * [ec] 203's point is that only looking at straight lines is insufficient; you need it to work for every curve. The canonical example is
 * $$f(x,y)=\begin{cases}1&x>0,y=x^2\\0&\textrm{otherwise}\end{cases}$$
 * This function has no limit at the origin; but along any line, it converges to 0 at the origin.
 * Of course, in the OP's case, this objection can be considered a nitpick, since you have demonstrated that the function depends only on r, so finding the limit for a single curve suffices. -- Meni Rosenfeld (talk) 13:27, 6 September 2010 (UTC)
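The path dependence is easy to see numerically with 203.97.79.114's rational example xy^2/(x^2 + y^4): along any straight line through the origin the values shrink to 0, but along the parabola x = y^2 they sit at 1/2. A numerical illustration (not a proof):

```python
def f(x, y):
    # 203.97.79.114's example: no limit at the origin,
    # yet the limit along every straight line is 0.
    return x * y**2 / (x**2 + y**4)

# Along any line y = m*x the values tend to 0 as x -> 0 ...
for m in (1, 2, -3):
    t = 1e-6
    assert abs(f(t, m * t)) < 1e-3

# ... but along the parabola x = y^2 the value is identically 1/2.
for y in (0.1, 1e-3, 1e-6):
    assert abs(f(y**2, y) - 0.5) < 1e-9
```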


 * Strictly speaking, you cannot prove this statement (or any other statement involving limits) algebraically, because it is a statement of analysis, not of algebra. The result depends on the topology that you use - with the usual topology on R^2 the statement is true, but with the discrete topology (for example) it is false. The choice of topology does not affect the algebraic properties of R^2 - therefore the result cannot be derived from algebraic properties alone. Gandalf61 (talk) 08:34, 6 September 2010 (UTC)
 * True, but we often teach an algebraic approach to limits that relies on analysis only for the fact that certain basic functions (addition, division where defined, etc) are continuous. Presumably that's what was being asked for.--203.97.79.114 (talk) 13:29, 6 September 2010 (UTC)


 * So what do we think the OP is looking for? What would a proof of the OP's statement look like under this algebraic approach to limits? Presumably it would not involve any δs, εs, open intervals or neighbourhoods? Gandalf61 (talk) 13:51, 6 September 2010 (UTC)
 * I think it would be something like $$\lim_{(x,y)\rightarrow (0,0)} \exp{\tfrac{-1}{x^2+y^2}} = \exp\lim_{(x,y)\rightarrow (0,0)} {\tfrac{-1}{x^2+y^2}} = \exp(-\infty)=0$$. -- Meni Rosenfeld (talk) 14:55, 6 September 2010 (UTC)

dL
In physics you express uncertainty as dL or delta-L. What does this have to do with derivatives? 76.229.214.25 (talk) —Preceding undated comment added 23:00, 5 September 2010 (UTC).
 * Because if y = f(x), then dy = f'(x) dx is an approximate computation of the uncertainty of y based on the uncertainty of x. Bo Jacoby (talk) 08:13, 6 September 2010 (UTC).
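Bo Jacoby's formula dy = f'(x) dx is the standard first-order error-propagation rule. A small Python sketch (the helper name is my own, and the derivative is estimated by a central difference rather than computed symbolically):

```python
def propagate(f, x, dx, h=1e-6):
    # First-order (linear) error propagation: dy ~ |f'(x)| * dx,
    # with f'(x) estimated by a central difference.
    fprime = (f(x + h) - f(x - h)) / (2 * h)
    return abs(fprime) * dx

# Example: side L = 2.00 m measured with uncertainty dL = 0.01 m;
# uncertainty in the area A = L^2 of a square of side L:
dA = propagate(lambda L: L**2, 2.0, 0.01)
print(dA)  # ~0.04, matching dA = 2*L*dL
```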