Wikipedia:Reference desk/Archives/Mathematics/2008 March 3

= March 3 =

Indefinite Integral
I'm a high school freshman in geometry, just trying to get acquainted with calculus so it won't be such a shock when I get there. So the following may sound like a homework question, but it's not. For the function $$f(x)=x^3+4\,\!$$ and its integral from 2 to 5, $$\int_2^5x^3+4\ \operatorname{d}x$$, what are $$F(5)\,\!$$, $$F(2)\,\!$$, and $$\int{x^3+4}\ \operatorname{d}x$$? I'm not sure what to do with the constant when taking the indefinite integral. (I'm trying to get the definite integral from 2 to 5.) Will someone please explain to me how to treat the constant when antidifferentiating? Zrs 12 (talk) 00:01, 3 March 2008 (UTC)

Is it $$\int{x^3+4}\ \operatorname{d}x=\frac{x^4}{4}+4x+C$$, and then are $$F(2)\,\!$$ and $$F(5)\,\!$$ just 2 and 5 plugged into that expression? Zrs 12 (talk) 00:55, 3 March 2008 (UTC)


 * Yes, that's it exactly. As for the constant in an indefinite integral, you don't do anything with it, it just stays there. Your answer will have "+ c" at the end. --Tango (talk) 01:04, 3 March 2008 (UTC)


 * Note that if you have a definite integral, then you can ignore the constant since it will add the same value to F(5) and F(2), and thus when you take the difference to find the definite integral it doesn't matter whether there's a constant or not. -mattbuck (Talk) 01:17, 3 March 2008 (UTC)
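The worked example above can be checked directly; here is a minimal Python sketch using exact rational arithmetic (the function name `F` is just the thread's antiderivative, with the "+C" omitted since it cancels in the difference):

```python
from fractions import Fraction

def F(x):
    """The thread's antiderivative of f(x) = x^3 + 4, with the "+C" omitted."""
    x = Fraction(x)
    return x**4 / 4 + 4 * x

# The definite integral from 2 to 5 is F(5) - F(2); any constant C would
# appear in both terms and cancel, which is why it can be ignored here.
value = F(5) - F(2)
print(value)  # 657/4, i.e. 164.25
```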

Alright, thanks. I have one more question, though. I saw a video on how to reverse the chain rule with substitution. The integral was $$\int[(\sin{x})^3\cos{x}]\operatorname{d}x$$. Then u was substituted for the sine of x, so it became $$\int{u^3}\frac{du}{dx}dx$$. The dx's canceled, so $$\int{u^3}du=\frac{1}4u^4$$. What happens to the du? Why does it just disappear? (I know that du is just like dx when there is no substitution, but I don't know why dx disappears either.) Zrs 12 (talk) 01:39, 3 March 2008 (UTC)
 * You can think of the dx as "with respect to x". It's kind of like 1+1=2 - where did the + sign go?  It's a notational thing, simply put.  x42bn6 Talk Mess  04:04, 3 March 2008 (UTC)
 * Mostly it is notation. Informally, however, the differential terms are produced by the chain rule and behave like variables, so you could think of it like $$y = x^2, dy = 2x\,dx, \frac{dy}{dx} = \frac{2x\,dx}{dx} = 2x$$, which is the answer you would expect. As another example, let's say we have the equation $$\frac{dy}{dx} = \frac{2x}{y}$$. We can rearrange this (using the technique of separation of variables) into $$y\,dy = 2x\,dx$$. We can then integrate both sides, which is a valid operation because we have the necessary differential elements produced by the chain rule. If we didn't have those, there would be no function which would produce that equation when differentiated, and thus we could not take an antiderivative. —Preceding unsigned comment added by Aaron Rotenberg (talk • contribs) 05:27, 3 March 2008 (UTC)
 * If you want the more technical answer, the dx or du or whatever represents the width of the rectangles you're adding up. When you integrate, you divide the area under the graph into tiny rectangles, the area of each rectangle is the height, f(x), times the width, dx. You then add all those areas together to get the answer (the integral sign was originally an S for Sum), so the dx is absorbed into the answer. --Tango (talk) 11:35, 3 March 2008 (UTC)
 * As I remember it, substitution was confusing to me until I figured out that the du kind of signifies a temporary sub-equation that you're doing. You can think of the du as just making sure that you don't mix up your temporary sub-equation with the whole one. -- User:Mac_Davis 00:06, 4 March 2008 (UTC) —Preceding unsigned comment added by 216.120.217.92 (talk)
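The substitution example above can also be sanity-checked numerically, in the spirit of Tango's rectangles; here is a minimal Python sketch using a midpoint Riemann sum (the exact value, $$\frac{1}{4}\sin^4 x$$ evaluated from 0 to π/2, is 1/4):

```python
import math

# Midpoint Riemann sum for the integral of (sin x)^3 cos x from 0 to pi/2.
# The substitution u = sin x turns it into the integral of u^3 du from 0
# to 1, which is 1/4, so the sum should approach 0.25.
n = 100_000
a, b = 0.0, math.pi / 2
h = (b - a) / n
total = h * sum(
    math.sin(a + (i + 0.5) * h) ** 3 * math.cos(a + (i + 0.5) * h)
    for i in range(n)
)
print(total)  # ≈ 0.25
```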

Ok. Thanks all, that makes a little more sense to me now. Yet, alas, I am once again confused. I was watching a video about integrating with partial fractions. The integral read $$\int\frac{\operatorname{d}P}{P(10-P)}$$; the guy then said that dP must equal 1. What? I apparently need some major clarification on this whole notation bit (like the difference between d/dx and dy/dx). Yeah, calculus is confusing me. (By the way, sorry to ask so many questions in such rapid succession, but not understanding this is somewhat troublesome.) Thanks much :-), Zrs 12 (talk) 02:32, 4 March 2008 (UTC)


 * Apparently, the guy meant that
 * $$\int\frac{\operatorname{d}P}{P(10-P)}=\int\frac{1}{P(10-P)}\operatorname{d}P$$
 * Such factors of 1 are normally (and rightly) omitted. Hence, you decompose
 * $$\frac{1}{P(10-P)}$$
 * into (for instance) $$\frac{A}{P}+\frac{B}{10-P}...$$
 * Then integrate. Pallida  Mors  03:38, 4 March 2008 (UTC)

Regarding "the difference between d/dx and dy/dx": the basic notation for a differential is $$d(f(x, y, z, ...))$$. So, for example, when we write $$d(e^{2x})$$, we apply the chain rule and get $$e^{2x} \cdot d(2x) = e^{2x} \cdot 2 \cdot d(x)$$. We don't know what $$d(x)$$ is (x might itself be a function of some other variables), so we leave it alone. It is conventional to omit the parentheses when differentiating a single variable, so $$d(x) = dx$$. Also, $$\frac{d}{dx} (f(x)) = \frac{d(f(x))}{dx}$$ and $$\frac{d}{dx} (y) = \frac{dy}{dx}$$. It's just easier to write it the first way sometimes. « Aaron Rotenberg « Talk « 07:16, 4 March 2008 (UTC)
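The decomposition mentioned above can be pinned down concretely; here is a short Python sketch (A and B found by matching coefficients by hand, then checked with exact rationals):

```python
from fractions import Fraction

# Partial fractions: 1/(P(10-P)) = A/P + B/(10-P).  Clearing denominators
# gives A(10-P) + B*P = 1 for all P; matching coefficients yields A = B = 1/10.
A = Fraction(1, 10)
B = Fraction(1, 10)

# Check the identity exactly at a few rational sample points.
for P in (Fraction(1), Fraction(3), Fraction(7, 2)):
    assert 1 / (P * (10 - P)) == A / P + B / (10 - P)
print(A, B)  # 1/10 1/10
```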

JC-1 H2 Mathematics: Sequences and Series question
Hello! I don't know if you guys remember me. I transferred to a different junior college, so my IP is different.

Last week's H2 Maths lecture was about the topic "Sequences and Series". There is something I don't understand about series. A series is defined as "the sum of the terms in a sequence". Does that mean that the series of the first 5 odd positive integers is 25, since 1+3+5+7+9=25?

--166.121.36.232 (talk) 05:15, 3 March 2008 (UTC)


 * Sure is. And in fact, the sum of any run of consecutive odd integers starting from 1 is a perfect square. Fun fact. Black Carrot (talk) 05:47, 3 March 2008 (UTC)


 * Although, perhaps to come closer to the question you were asking, the word also refers to the string of symbols representing that summation. Depending on the context, it can refer to the form of the sum or its value. Black Carrot (talk) 05:48, 3 March 2008 (UTC)
 * I agree, but would prefer to express this as: a series is a mathematical expression given in the form of a sum. As is usual in mathematical discourse, the words "the value of" may be omitted, so that "the series" can also mean "the value of the series", just as with "(the value of) the sum" and "(the value of) the integral". --Lambiam 14:05, 3 March 2008 (UTC)


 * $$\sum_{n=0}^{4}2n+1=1+3+5+7+9=25$$ --wj32 t/c 07:34, 3 March 2008 (UTC)


 * You're absolutely correct. Visit me at Ftbhrygvn (Talk|Contribs|Log|Userboxes) 07:51, 3 March 2008 (UTC)
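The identity under discussion, that the sum of the first n odd positive integers is n², is easy to check by brute force; a minimal Python sketch:

```python
# The sum of the first n odd positive integers, 1 + 3 + ... + (2n-1), is n^2.
def odd_series(n):
    return sum(2 * k + 1 for k in range(n))

print(odd_series(5))  # 25, matching 1+3+5+7+9
```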

Friedrichs' Inequality
Hey guys, I am working with the following inequality (listed in our appendix without a proof of course), $$||v||_{L^2(\Omega)}\leq C(||\nabla v||^2_{L^2(\Omega)}+||v||^2_{L^2(\Gamma)})^{1/2}$$ where $$v \in C^1(\bar{\Omega})$$ and $$\Omega$$ is a bounded domain in $$\mathbb{R}^n$$ with smooth boundary $$\Gamma$$. The question is, the inequality seems obvious to me using Poincare's Inequality. Would this be a valid proof? I start with Poincare's inequality and then simply add a nonnegative quantity on the right side and end up with Friedrichs' inequality. If not, what is a proper proof of this inequality. Thanks! A Real Kaiser (talk) 08:48, 3 March 2008 (UTC)


 * It is basically like that, but with a small twist. Poincaré either requires that the function have average value 0, or have boundary values 0.  So that extra term is there to take care of "zeroing" the function.  If you've proven the average-value-0 version of Poincaré, then you are gold: subtract the average of v from v to get a function that satisfies the Poincaré inequality, and then add back the average of v times the volume of Omega.  You have to fiddle with the Hölder inequality, but only the really trivial version applying to ordered pairs of real numbers (also known as L^p of a two-point set). JackSchmidt (talk) 16:49, 3 March 2008 (UTC)

Well, we only have Poincare's inequality with boundary values zero. I didn't even know that there was an inequality with average value zero. In addition, the book gives a "hint" that in order to prove it, "integrate by parts in the identity $$\int_{\Omega} v^2 dx=\int_{\Omega} v^2 \nabla \phi dx$$, where $$\phi(x)=\frac{1}{2n}|x|^2$$ and $$\Omega$$ is a bounded domain in $$\mathbb{R}^n$$". I don't understand how this hint is helpful.

A Real Kaiser (talk) 19:07, 4 March 2008 (UTC)
 * Small correction: "integrate by parts in the identity $$\int_{\Omega} v^2 dx=\int_{\Omega} v^2 \Delta \phi dx\,$$, where $$\phi(x)=\tfrac{1}{2n}|x|^2$$ and $$\Omega$$ is a bounded domain in $$\mathbb{R}^n$$"
 * If that correction is enough to help, don't read further. I include the standard steps you should learn to do as part of a PDE course, then I include the cheap trick that should teach you what dirty cheats analysts are.


 * Set u = v·v = v², then integrate by parts to get:
 * $$\int_{\Omega} \nabla u \cdot \nabla \phi ~dx = \int_{\partial\Omega} u \frac{\partial\phi}{\partial\nu}~d\nu - \int_{\Omega} u \Delta \phi ~dx$$
 * Now $$\nabla u = 2v\nabla v$$, $$\nabla\phi = x/n$$, $$\Delta\phi = 1$$, so substituting we get:
 * $$\int_{\Omega} 2v \frac{\partial v}{\partial x} \frac{|x|}{n} ~dx = \int_{\partial\Omega} v^2 \frac{|x|}{n}~d\nu(x) - \int_{\Omega} v^2 ~dx$$
 * Now rearrange to get the integrand in roughly the right place:
 * $$\int_{\Omega} v^2 ~dx = \int_{\partial\Omega} v^2 \frac{|x|}{n}~d\nu(x) - \int_{\Omega} 2v \frac{\partial v}{\partial x} \frac{|x|}{n} ~dx$$
 * Now replace those bare |x|/n with the maximum of |x|/n over Omega, to get some positive constant depending only on the domain.
 * $$\int_{\Omega} v^2 ~dx \leq C \left( \int_{\partial\Omega} v^2 ~d\nu(x) + \int_{\Omega} \left|2v \frac{\partial v}{\partial x}\right| ~dx\right)$$
 * Now be too eager, and rewrite things as norms to see what is left:
 * $$\|v\|_{L^2(\Omega)}^2 \leq C \left( \|v\|_{L^2(\partial\Omega)}^2 + \int_{\Omega} \left| 2v \frac{\partial v}{\partial x}\right| ~dx\right)$$
 * Aha, only one integral to fix, and Cauchy-Schwarz will give us the right integrand
 * $$\left|2v \frac{\partial v}{\partial x}\right| \leq |v|^2 + |\nabla v|^2$$
 * Substitute this in to get:
 * $$\|v\|_{L^2(\Omega)}^2 \leq C \left( \|v\|_{L^2(\partial\Omega)}^2 + \int_{\Omega} |v|^2 + |\nabla v|^2 ~dx\right)$$
 * Then write them as norms:
 * $$\|v\|_{L^2(\Omega)}^2 \leq C \left( \|v\|_{L^2(\partial\Omega)}^2 + \|v\|_{L^2(\Omega)}^2 + \|\nabla v\|_{L^2(\Omega)}^2 \right)$$
 * Genius! Now if C < 1 (which it is for small domains!), we just subtract C |v|_2 from both sides, and we are done!  Except, what if C is big (which it is for big domains)?  Like if C=1, then we have |v| ≤ |v| + |Dv| + ... Well duh! You add some positive things to |v| and you don't make it smaller.
 * I recommend making sure you've come this far, since basically everything up to here has been "intuitive", just plug-and-chug PDE. Since math is unbelievably hard, it might take a little time to even get to where these steps are plausible, much less intuitive.  Once they seem slightly reasonable, you might want to ask your prof, "What now?!", since I think that is a reasonable reaction.


 * At any rate, if you want to finish, instead of using "Cauchy-Schwarz for Gullible Bystanders", we use an evil scaling technique which pretty much lets you claim anything you ever want to claim. I call it "Cauchy Schwarz for Dirty Rotten Scoundrels".  Set A=ae, B=b/e, then 2ab = 2AB &le; A^2 + B^2 = e^2 a^2 + (1/e)^2 b^2.  In particular, choose e=sqrt(1/(2C)), then
 * $$\left|2v \frac{\partial v}{\partial x}\right| \leq \frac{1}{2C}|v|^2 + 2C|\nabla v|^2$$
 * continuing as before ends up with
 * $$\|v\|_{L^2(\Omega)}^2 \leq C \|v\|_{L^2(\partial\Omega)}^2 + \tfrac{1}{2} \|v\|_{L^2(\Omega)}^2 + 2C^2\|\nabla v\|_{L^2(\Omega)}^2$$
 * Now move the 1/2|v| over to get:
 * $$\tfrac{1}{2}\|v\|_{L^2(\Omega)}^2 \leq C \|v\|_{L^2(\partial\Omega)}^2 + 2C^2\|\nabla v\|_{L^2(\Omega)}^2$$
 * Clear denominators, choose the larger of two constants, and sqrt to get:
 * $$\|v\|_{L^2(\Omega)} \leq \max(\sqrt{2C},2C) \left(  \|v\|_{L^2(\partial\Omega)}^2 + \|\nabla v\|_{L^2(\Omega)}^2\right)^{\tfrac{1}{2}}$$


 * At any rate, hope it helps. JackSchmidt (talk) 20:44, 4 March 2008 (UTC)
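Not part of the proof, but the shape of the inequality can be sanity-checked numerically in the 1D analogue on Ω = (0,1), where the boundary L² norm is just the values at the two endpoints. A rough Python sketch (the sample functions and the midpoint-rule quadrature are my own choices; the only claim tested is that the ratio of the two sides, with no constant, stays below 1 for these samples):

```python
import math

def friedrichs_ratio(v, dv, n=10_000):
    """||v||_{L^2(0,1)} divided by (||v'||^2_{L^2} + v(0)^2 + v(1)^2)^(1/2),
    approximated with a midpoint rule.  In 1D the boundary "L^2 norm" is
    just the function values at the two endpoints."""
    h = 1.0 / n
    xs = [(i + 0.5) * h for i in range(n)]
    l2 = math.sqrt(h * sum(v(x) ** 2 for x in xs))
    grad2 = h * sum(dv(x) ** 2 for x in xs)
    bnd2 = v(0.0) ** 2 + v(1.0) ** 2
    return l2 / math.sqrt(grad2 + bnd2)

samples = [
    (lambda x: math.sin(math.pi * x), lambda x: math.pi * math.cos(math.pi * x)),
    (lambda x: 1.0, lambda x: 0.0),
    (lambda x: x * x, lambda x: 2.0 * x),
]
ratios = [friedrichs_ratio(v, dv) for v, dv in samples]
print(ratios)  # all three ratios stay below 1 on this domain
```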

Wow, thanks Jack. It must have taken some time to type that all up but it certainly made some things clear. But I am going to continue. In the same paragraph, the next line says that it can also be proven that $$||v||\leq C(||\nabla v||^2+(\int_{\Omega}v dx)^2)^{1/2}$$ for $$v \in C^1$$ where $$\Omega$$ is the unit square in $$\mathbb{R}^2$$. Then I think to myself that this is obvious from Friedrichs' inequality so all that I have to show somehow is that $$||v||_{L_{2}(\Gamma)} \leq \int_{\Omega}v dx$$ but I don't think that this is true. Am I right?

A Real Kaiser (talk) 16:49, 5 March 2008 (UTC)


 * The L^2([-1,1]) norm of f(x)=x is sqrt(2/3), and the L^2({-1,1}) norm of f is sqrt(2), but int( x, x=-1..1) is 0, so no, |f|_2 need not be less than int(f).  Basically int(|f|) >= |int(f)| is so true (and backwards from what you want) that even adding some exponents is not going to change anything.
 * One nice thing about this version of the Poincare (Friedrichs) inequality is that it gives you the average value version I mentioned earlier. If the average value of f is zero on Omega, then the int(f)^2 term disappears, and you are left with the standard poincare inequality valid for all H^1 functions with average value 0.  I think it still requires (piecewise) smooth(ish) boundary of the domain.
 * I don't know about the square, but for balls there are very explicit, very elementary calculus arguments that work. The Friedrichs inequality on a ball is just polar integration and the fundamental theorem of calculus (an analyst here showed me this when I asked about the inequality earlier).  I suspect a similar trick should work for a cube; basically integrate v(x,y) = v(x,0) + int( dv/dy(x,t) dt, t=0..y) over the cube, and then switch some orders of integration to get an integral over the boundary, etc. JackSchmidt (talk) 18:14, 5 March 2008 (UTC)
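The counterexample above is easy to verify numerically; a minimal Python sketch (midpoint-rule quadrature is my choice; note the L² norm of f(x)=x on [-1,1] is sqrt(2/3) ≈ 0.816 while its plain integral vanishes):

```python
import math

# f(x) = x on [-1, 1]: its L^2 norm is sqrt(2/3) > 0, while its integral
# is 0, so the L^2 norm cannot be bounded by (a power of) the plain integral.
n = 100_000
h = 2.0 / n
xs = [-1.0 + (i + 0.5) * h for i in range(n)]
l2_norm = math.sqrt(h * sum(x * x for x in xs))
integral = h * sum(xs)
print(l2_norm, integral)  # ≈ 0.8165 and ≈ 0.0
```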

Well, thanks for all your help. I will fiddle with it some more. A Real Kaiser (talk) 03:05, 8 March 2008 (UTC)

special functions
I've been reviewing the list of special functions here, and I want to add another special function. I also need your opinions about it.

Function no. 1: Assume we write any real number in the form (a.b). This new function "L" transforms that number to (0.b). For example, if we take the function f(x)=x, then L(x)=0 if x is an integer, and L(a.b)=0.b otherwise; for example, L(1.2)=0.2, and so on. Obviously the value of "L" never reaches one.

Function no. 2: this function is about the area we can enclose with a given length. Assume we have a length s, and we want to start generating different areas by closing up this length in a way that keeps all the SUBLENGTHS WE CUT EQUAL. Obviously we will start with a triangle where each sublength = (s\3). The function A(number of sublengths) has a minimum value at the triangle, A(3)=[(s^2)\36](3)^(1\2), and a maximum value at the circle, A(n)=(s^2)\4pi, where n→infinity. Now if we give up the condition that the sublengths should be equal, and start with A(1) to represent the triangle, A(2) to represent the square, and so on, we may put A(n)=nA(1).

thank you. Husseinshimaljasimdini (talk) 11:22, 3 March 2008 (UTC)


 * Your first function is called the fractional part function, and is described in our article on floor and ceiling functions. I don't know of a name for your second function. However, if you intend to write a Wikipedia article about it, remember our "no original research" policy. To write a Wikipedia article you must find a reliable published source that describes this function and its properties - you cannot use Wikipedia to write about your own thoughts and discoveries (unless they have been published). Gandalf61 (talk) 12:16, 3 March 2008 (UTC)
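For what it's worth, the fractional part function is a one-liner in most languages; a minimal Python sketch (the convention below, x − floor(x), matches the question's (a.b) → (0.b) description for nonnegative x):

```python
import math

def frac(x):
    """Fractional part via x - floor(x); for nonnegative x this sends
    a.b to 0.b, and its values always lie in the half-open interval [0, 1)."""
    return x - math.floor(x)

print(frac(1.2), frac(7), frac(2.75))  # ≈ 0.2, 0.0, 0.75
```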


 * Function 2: if I read it correctly, your function is
 * the area of a regular polygon as a function of its number of sides (where the perimeter is fixed).
 * I don't think this is specialised enough for inclusion, as it can easily be derived from other equations and is found at Regular_polygon.

87.102.93.158 (talk) 12:59, 3 March 2008 (UTC)
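The fixed-perimeter areas in question can be computed directly; a short Python sketch (the closed form s²/(4n·tan(π/n)) is the standard regular-polygon area formula, not something stated in this thread, but for n = 3 it reduces to the poster's (s²/36)·√3, and it increases toward the circle's s²/(4π)):

```python
import math

# Area of a regular n-gon with fixed perimeter s, via the standard closed
# form s^2 / (4 n tan(pi/n)).  As n grows, the area increases toward the
# area of a circle with circumference s, namely s^2 / (4 pi).
def polygon_area(n, s=1.0):
    return s * s / (4.0 * n * math.tan(math.pi / n))

areas = [polygon_area(n) for n in (3, 4, 6, 12, 1000)]
circle_limit = 1.0 / (4.0 * math.pi)  # limiting area for perimeter s = 1
print(areas, circle_limit)
```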


 * I didn't fully understand your last statement "Now if we give up the conditions about the sublengths should be equal,and start with, A(1), to represent the triangle,A(2),to represent the square and so on,we may put,A(n)=nA(1)."
 * Could you explain? The final formula would be correct if the perimeter depended on the value of n. It looks like part is missing. 87.102.93.158 (talk) 13:04, 3 March 2008 (UTC)
 * Also, did you always mean regular polygons?? 87.102.93.158 (talk) 13:05, 3 March 2008 (UTC)

Ok sir, thank you for your concern. Obviously it would always be regular polygons, sliced into identical triangles if n is even and bigger than 4; and when n is odd, like a heptagon, it is also easy to slice it into identical triangles starting from the center. Husseinshimaljasimdini (talk) 11:43, 4 March 2008 (UTC)
 * It's still not that clear to me what you are describing - is it that you make N-sided polygons out of N identical triangles - possibly making a shape that is not planar? 87.102.44.156 (talk) 14:57, 4 March 2008 (UTC)

Another combinations problem
Is there a way to solve the following problem without diagrams?

A secretary types 4 letters and then addresses the 4 corresponding envelopes. In how many ways can the secretary place the letters in the envelopes so that NO letter is placed in its correct envelope?

Thanks. Imagine Reason (talk) 21:44, 3 March 2008 (UTC)


 * See derangement --tcsetattr (talk / contribs) 23:58, 3 March 2008 (UTC)


 * No way, the exact problem? Thanks! Imagine Reason (talk) 01:36, 4 March 2008 (UTC)
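For reference, the count for the four-envelope problem can be checked by brute force; a short Python sketch (the function name is mine):

```python
from itertools import permutations

# Brute-force count of derangements: permutations of n letters in which
# no letter lands in its own envelope.
def count_derangements(n):
    return sum(
        1
        for p in permutations(range(n))
        if all(p[i] != i for i in range(n))
    )

print(count_derangements(4))  # 9
```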

The really amazing thing about mathematics is not the solution but the fact that the mathemagicians of the past could find the solution by inventing new (or using existing) mathematical principles using nothing but pencil and paper. 202.168.50.40 (talk) 00:57, 4 March 2008 (UTC)


 * Yes, it's beautiful. Imagine Reason (talk) 01:36, 4 March 2008 (UTC)