Wikipedia:Reference desk/Archives/Mathematics/2009 August 16

= August 16 =

Bounded linear operator on sigma-finite measure space
Let $$(X, \mu)$$ be a $$\sigma$$-finite measure space. Let $$1 \leq p \leq \infty$$ and $$g \in L^\infty$$. Show that the operator $$T : L^p \to L^p$$, $$Tf = gf$$, is bounded, and $$\|T\| = \|g\|_\infty$$.

Proof starts:

T is bounded: If $$1 \leq p < \infty$$, then
 * $$\|Tf\|_p = \left( \int_X |fg|^p \right)^{1/p} \leq \|g\|_\infty \left( \int_X |f|^p \right)^{1/p} = \|g\|_\infty \cdot \|f\|_p$$.

If $$p = \infty$$, it's even easier: $$\|Tf\|_\infty = \|fg\|_\infty \leq \|f\|_\infty \|g\|_\infty$$. So, for $$1 \leq p \leq \infty$$, we have $$\|T\| = \sup_{f \neq 0} \frac{\|Tf\|_p}{\|f\|_p} \leq \|g\|_\infty < \infty$$. So T is bounded, and this gives one direction of the equality.

I'm not exactly sure where to go from here. I have a friend's solution but they do something which is obviously wrong at this point. Any ideas? Thanks! StatisticsMan (talk) 00:56, 16 August 2009 (UTC)
 * For simplicity, assume $$\|g\|_\infty=1$$. By definition of the $$L^\infty$$ norm, there is a set $$E$$, with $$0<\mu(E)<\infty$$, such that $$|g|\ge 1-\epsilon$$ on $$E$$. Let us put $$f_\epsilon=1/\mu(E)^{1/p}$$ on $$E$$, and 0 otherwise. Then $$\|f_\epsilon g \|_p \ge 1-\epsilon$$, and $$\|f_\epsilon\|_p=1$$. Phil s 02:45, 16 August 2009 (UTC)
 * I was thinking about something similar, but if $$g \in L^\infty$$, that just means it's essentially bounded (we may as well say bounded). So $$g = 1$$ is such a $$g$$, even satisfying $$\|g\|_\infty = 1$$. But then $$\mu(E) = \infty$$, no matter what $$\epsilon > 0$$ is. StatisticsMan (talk) 03:25, 16 August 2009 (UTC)
 * Oh, I get it. You're not defining $$E$$ to be the whole set where $$|g| \geq 1 - \epsilon$$; if that set is very big, just take a part of it that has finite measure. Since $$X = \cup X_n$$ with $$\mu(X_n) < \infty$$, we can intersect the set where $$|g| \geq 1 - \epsilon$$ with some $$X_n$$ to get a subset on which it still holds and which has finite (and, for suitable $$n$$, positive) measure, just as you said. Thanks! StatisticsMan (talk) 03:28, 16 August 2009 (UTC)
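For completeness, the lower-bound half sketched in this thread can be written out as follows (a sketch assuming, after rescaling, that $$\|g\|_\infty = 1$$, with $$E$$ a set of finite positive measure on which $$|g| \ge 1-\epsilon$$):

```latex
% With f_\epsilon = \mu(E)^{-1/p}\chi_E, so that \|f_\epsilon\|_p = 1:
\|Tf_\epsilon\|_p^p
  = \int_E |g|^p \, |f_\epsilon|^p \, d\mu
  \ge (1-\epsilon)^p \int_E \frac{d\mu}{\mu(E)}
  = (1-\epsilon)^p .
```

Hence $$\|T\| \ge \|Tf_\epsilon\|_p \ge 1-\epsilon$$ for every $$\epsilon > 0$$, so $$\|T\| \ge \|g\|_\infty$$, which combines with the bound above to give $$\|T\| = \|g\|_\infty$$. (For $$p = \infty$$, take $$f_\epsilon = \chi_E$$ instead.)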

Entropy
What is a simple derivation for the formula for calculating entropy? Mo-Al (talk) 08:14, 16 August 2009 (UTC)
 * In what context? Do you mean in statistical mechanics? In that case, entropy is $$S = - k\sum_i P_i \ln (P_i)$$, where the sum runs over the different microstates that correspond to a given macrostate. But I wouldn't call this "derivable"; rather, it's a definition of entropy.--Leon (talk) 14:38, 16 August 2009 (UTC)
 * Well I suppose my question is really what motivates the definition -- why is this formula a natural thing to use? Mo-Al (talk) 19:03, 16 August 2009 (UTC)
 * Ah! I was taught that the definition allows one to "correctly" derive ALL the thermodynamics of any particular system; that is, a non-trivially different definition would lead to different thermodynamic predictions that disagree with experiment, whereas this definition allows one to predict thermodynamic behaviour correctly. There is also an intuitive "logic" to it: the more microstates corresponding to a given macrostate, the less information the macrostate variables communicate. For instance, in a lowest-entropy configuration, with one microstate corresponding to the macrostate in question, the macrostate tells us EVERYTHING about the system. For a high-entropy configuration, the system contains much more information than the macrostate variables (temperature, pressure etc.) can communicate.

Does that make any sense?--Leon (talk) 19:20, 16 August 2009 (UTC)


 * And see this.--Leon (talk) 19:25, 16 August 2009 (UTC)
 * You may also want to take a look at Entropy (information theory). -- Meni Rosenfeld (talk) 20:02, 17 August 2009 (UTC)
 * In particular, Claude Shannon's 1948 paper "A Mathematical Theory of Communication" gives a derivation/plausibility argument for the entropy formula: he lists some properties such a formula should have, and they naturally lead to the familiar result. —Preceding unsigned comment added by 87.102.1.156 (talk) 18:36, 19 August 2009 (UTC)
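As a quick illustration of the formula discussed in this thread, here is a minimal Python sketch (the function name is mine, and the choice k = 1, i.e. measuring entropy in nats, is an arbitrary assumption) showing that a uniform distribution over W microstates gives S = k ln W, Boltzmann's special case:

```python
import math

def entropy(probs, k=1.0):
    """Gibbs/Shannon entropy S = -k * sum_i p_i ln p_i.

    Terms with p_i = 0 contribute nothing (the limit p ln p -> 0).
    k = 1 measures entropy in nats; use Boltzmann's constant for physics.
    """
    return -k * sum(p * math.log(p) for p in probs if p > 0)

# A uniform distribution over W microstates gives S = ln W (with k = 1).
W = 8
print(entropy([1.0 / W] * W))      # equals ln 8

# A single microstate: the macrostate tells us everything, S = 0.
print(entropy([1.0]))

# A non-uniform distribution has strictly lower entropy than uniform.
print(entropy([0.5, 0.25, 0.25]))
```

This also matches the information-theoretic reading in the thread: the more microstates compatible with a macrostate, the larger the sum.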

Please fill the gaps in a table
Please help me to fill the gaps in this table: Derivatives and integrals of elementary functions in alternative calculi--MathFacts (talk) 08:25, 16 August 2009 (UTC)


 * This sort of thing should be filled in by looking things up in a book or journal - and I doubt you'll find much of that for those systems; they're pretty obscure! Anyway, they can mostly be filled in fairly automatically from the formulae in those systems once you can do the normal version, so overall I'm not certain about the point. Dmcq (talk) 12:50, 16 August 2009 (UTC)
 * My computer algebra systems do not give expressions for the empty cells.--MathFacts (talk) 13:39, 16 August 2009 (UTC)


 * I don't think an algebra system producing results from things you feed in counts as a notable source. And I had to deal with such a result stuck in an article just a day or so ago, where the results were not quite right, had ambiguities, and besides weren't in simplest terms. Dmcq (talk) 13:45, 16 August 2009 (UTC)
 * A source cannot be notable or non-notable; it is reliable or unreliable. It is the topic that may be notable or non-notable.--MathFacts (talk) 13:56, 16 August 2009 (UTC)


 * Running expressions you think of through a program and sticking them in a table counts as original research. The stuff in wikipedia really does have to have some bit of notability, and if you can't find some tables giving the expressions, or something very similar, then the subject really doesn't satisfy the notability criteria. Wikipedia isn't for publishing facts you dreamt might be useful and worked out; they have to be notable. Dmcq (talk) 14:25, 16 August 2009 (UTC)
 * Nobody will publish something that can be derived with a machine - that would simply be ridiculous. Only scientific discoveries get published. Using a machine or a calculator is not research, of course.--MathFacts (talk) 15:16, 16 August 2009 (UTC)
 * What you put in is original research as far as wikipedia is concerned. Please read the lead of that article; it is quite specific, and it is a core wikipedia policy. I know maths doesn't always follow it to the letter, and it shouldn't for straightforward things. However, you have set up an article that reflects nothing in the published literature, is full of things you thought of yourself, and has results generated by a program with no sources to check them against. That really is going way beyond the bounds. Interesting articles I would have preferred to keep - where the person had cited the facts but the synthesis was not something that had been written about - have been removed because of that rule. Dmcq (talk) 16:11, 16 August 2009 (UTC)
 * There are published rules for how to compute such things, and anyone can prove them, either by themselves or using mathematical software. As for the integrals, anyone can take a derivative to verify them.--MathFacts (talk) 16:19, 16 August 2009 (UTC)
 * I would like to point out that there is a link at the top of each column in this article (except for one) which takes you to the article on that specific subject, and those articles likely have most of the formulas in the table. So it's not original research, at least mostly. StatisticsMan (talk) 16:29, 16 August 2009 (UTC)
 * Check them yourself and you'll see they don't. Dmcq (talk) 16:31, 16 August 2009 (UTC)
 * And the ones which do have them seem to have been filled in by MathFacts, presumably the same way as he did this list: generating content using his computer without looking things up. Dmcq (talk) 16:41, 16 August 2009 (UTC)


 * (ec) One of the gaps to be filled in your table asks for the "discrete integral" of $$f(x):=a^\frac{1}{x}$$. Checking your link, I see you want a solution of the functional equation $$F(x+1)-F(x)=f(x)$$. By the way, the definition there is a bit misleading: you should be aware that the solution is unique only up to a 1-periodic function, not just up to an additive constant $$C$$ (and the analogous remark holds for your "multiplicative discrete integral"). That said, a particular solution is
 * $$F(x):=x+\sum_{k=0}^\infty\left(a^\frac{1}{k+1}-a^\frac{1}{k+x} \right),$$
 * not particularly relevant information, as far as I know.--pma (talk) 16:39, 16 August 2009 (UTC)
 * Yes, there is an inconsistency in that article; it needs clarification. I know about it but still have not had enough time to fix it. The abovementioned equation alone is not enough to define the sum, but it is usually defined through Faulhaber's formula or an equivalent.--MathFacts (talk) 17:33, 16 August 2009 (UTC)
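pma's particular solution can be checked numerically. The following Python sketch (the truncation level and the sample values of a and x are arbitrary choices of mine) verifies the functional equation $$F(x+1)-F(x)=a^{1/x}$$ at one point; the truncated sum telescopes, leaving an error of roughly $$(\ln a)/N$$ for $$N$$ terms:

```python
def F(x, a=2.0, terms=100000):
    """Partial sum of the particular solution
       F(x) = x + sum_{k>=0} (a**(1/(k+1)) - a**(1/(k+x))),
       a candidate antidifference of f(x) = a**(1/x), for x > 0."""
    s = x
    for k in range(terms):
        s += a ** (1.0 / (k + 1)) - a ** (1.0 / (k + x))
    return s

# Check the functional equation F(x+1) - F(x) = a**(1/x) numerically.
x, a = 2.5, 2.0
print(F(x + 1, a) - F(x, a))  # should be close to...
print(a ** (1.0 / x))
```

The summand decays like $$1/k^2$$, so the series converges; any 1-periodic function could be added, as noted above.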


 * So, I don't see a real case of original research, but I do not see any reason for the name "alternative calculi", either. --pma (talk) 18:46, 16 August 2009 (UTC)
 * Any suggestions?--MathFacts (talk) 18:47, 16 August 2009 (UTC)
 * "A table of formulas"? Maybe it's too generic... ;) --pma (talk) 07:51, 17 August 2009 (UTC)

Learning about multiple regression online
It is many years since I was last conversant with multiple regression, and I need a refresher. I have also never used any recent MR software. Could anyone recommend any easy online learning materials, please? I want to do multiple regression on a number of economic time series with the aim of forecasting the dependent variable. Forecasting, not modelling - I think this means that correlations between the variables are not important, as they would be if I were modelling; but I'm not sure. I'm also aware of the different types of MR and unsure which would be best to use. Thanks. 78.144.207.41 (talk) 17:12, 16 August 2009 (UTC)
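Not a substitute for a tutorial, but as a minimal sketch of what ordinary least-squares multiple regression looks like in practice, here is a toy Python example using numpy (the data are synthetic and all coefficients are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two predictor series and a response built from them plus noise.
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.5 + 2.0 * x1 - 0.7 * x2 + 0.1 * rng.normal(size=n)

# Design matrix with an intercept column; solve the least-squares problem.
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # estimated [intercept, coefficient of x1, coefficient of x2]

# Forecast the response for new predictor values.
x_new = np.array([1.0, 0.5, -1.0])  # [intercept, x1, x2]
print(x_new @ beta)
```

For real time-series forecasting one would use lagged predictors and worry about autocorrelation, which plain OLS ignores; this only shows the mechanics.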

Uniformly convergent sequence of polynomials
Characterize those sequences $$\{p_n(x)\}_{n=1}^\infty$$ of polynomials such that the sequence converges uniformly on the real line.

Here's another qual question for which I have no solution. If you happen to know there is no nice characterization, or that the solution is very complicated, you can just say that. If you know of a somewhat elementary solution, any help would be great. Thanks. StatisticsMan (talk) 19:27, 16 August 2009 (UTC)


 * Any such sequence must be a sequence of either constant polynomials, or polynomials with identical leading coefficients after finitely many terms, no? Otherwise, for $$m, n > N$$, $$|p_n(x) - p_m(x)| \sim |ax^p-bx^p| = |(a-b) x^p|$$, which for $$a \ne b$$, $$p > 0$$ is arbitrarily large as $$|x| \to \infty$$. Fredrik Johansson 19:45, 16 August 2009 (UTC)


 * By the same reasoning, wouldn't the next coefficient also need to agree after finitely many terms? If the leading terms of the polynomials have the same coefficient, they cancel and you're left with a polynomial of degree one less. So, would the answer be that the polynomials in the sequence must differ only in the constant term after finitely many terms, and those constant terms must converge to some real number? StatisticsMan (talk) 20:16, 16 August 2009 (UTC)
 * Yes --pma (talk) 20:21, 16 August 2009 (UTC)


 * Of course; my blunder. Fredrik Johansson 22:09, 16 August 2009 (UTC)
 * I don't think the above is correct. The polynomials don't even have to be bounded in degree (see Taylor series example below) as long as the high order terms decrease in size fast enough.  67.117.147.249 (talk) 04:57, 17 August 2009 (UTC)
 * What pma said (both above and below) is correct. Eric. 216.27.191.178 (talk) 03:05, 18 August 2009 (UTC)

Since you were asking complex analysis questions earlier, this current question may be getting at the idea that the Taylor series of an analytic function is uniformly convergent. Those are sequences of polynomials whose terms look like (x-a)^k/k! so the higher order coefficients if you treat them as polynomials in x don't stay the same as you change a slightly, but those terms get squashed down by the k! divisor as the order gets higher. 67.117.147.249 (talk) 00:18, 17 August 2009 (UTC)
 * Is a Taylor series of an analytic function uniformly convergent everywhere, rather than just on compact sets? --Tango (talk) 00:29, 17 August 2009 (UTC)
 * It's only convergent inside the circle of convergence, and (maybe) at some points on the boundary. (Or another possibility is that I've gotten confused and am thinking of something else--I don't remember this stuff very well any more).  67.117.147.249 (talk) 04:51, 17 August 2009 (UTC)
 * Actually I probably do have it wrong and am mixing several ideas together incorrectly. The article uniform convergence gives exp z as an example of a function whose Taylor series is uniformly convergent, but it doesn't say that's the case for all analytic functions (within the radius of convergence around a given point).  Analytic functions are uniformly continuous but I guess that's not enough.  Maybe some expert here can straighten this out.  67.117.147.249 (talk) 05:46, 17 August 2009 (UTC)
 * What they say is correct; the thing is very simple. The only polynomials that are uniformly bounded on R are the constants. Hence two polynomials have a finite uniform distance ||p-q||∞ if and only if they differ at most by the constant term. So a sequence of polynomials uniformly convergent on R is, up to finitely many polynomials, a sequence of polynomials that differ at most by the constant term. (The exponential series is not uniformly convergent on C, but only uniformly convergent on compact sets of C) --pma (talk) 06:21, 17 August 2009 (UTC)
 * Most analytic functions are not uniformly continuous either; $$f(z)=z^2$$ is an example. Algebraist 12:40, 17 August 2009 (UTC)
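The characterization above is easy to see numerically: if two polynomials differ in any non-constant coefficient, their sup-norm distance over [-R, R] blows up as R grows, so no such pair can be close in the uniform norm on all of R. A small Python sketch (the specific polynomials are arbitrary choices of mine):

```python
import numpy as np

# p and q differ in the x^2 coefficient (and in the constant term).
# Coefficients are listed low degree to high: c0 + c1*x + c2*x^2.
p = np.polynomial.Polynomial([0.0, 1.0, 1.0])
q = np.polynomial.Polynomial([5.0, 1.0, 1.01])

# The sup of |p - q| over [-R, R] grows without bound as R grows,
# since p(x) - q(x) = -5 - 0.01 x^2.
for R in [10, 100, 1000]:
    xs = np.linspace(-R, R, 10001)
    print(R, np.max(np.abs(p(xs) - q(xs))))
```

By contrast, polynomials sharing every non-constant coefficient are at uniform distance exactly the difference of their constant terms, which is why uniform convergence on R forces the sequence to stabilize except in the constant term.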


 * Here's an intuitive way to think about it, which may or may not be correct. If we're talking about nonconstant entire functions, then the derivative is also an entire function. Only if the original function is a linear polynomial is the derivative a constant; otherwise, the derivative is unbounded. So, it would seem to me that any nonconstant entire function, other than a linear polynomial, is obviously not uniformly continuous on the entire complex plane. Of course, a linear polynomial isn't either, as has already been discussed. It is true for sure that if a holomorphic function has a power series converging inside some open disk, then the power series is uniformly convergent on any compact subset of that disk. You can't really say anything more than that without a specific function. StatisticsMan (talk) 12:43, 17 August 2009 (UTC)


 * Linear polynomials are uniformly continuous, in fact Lipschitz continuous. Conversely, if f is an entire uniformly continuous function, we can find a δ > 0 such that |f(z) − f(w)| ≤ 1 whenever |z − w| ≤ δ, which implies |f(z)| ≤ |f(0)| + |z|/δ + 1, i.e., f is bounded by a linear function, and therefore it is itself linear by (one of the variants of) Liouville's theorem. — Emil J. 12:53, 17 August 2009 (UTC)


 * Good point. Let $$\epsilon > 0$$ be given. If $$f(x) = ax + b$$ with $$a \neq 0$$ and $$|x - y| < \epsilon/|a|$$, then $$|f(x) - f(y)| = |(ax + b) - (ay + b)| = |a||x - y| < |a| \cdot \epsilon/|a| = \epsilon$$. I take back what I said. StatisticsMan (talk)