Wikipedia:Reference desk/Archives/Mathematics/2008 June 9

= June 9 =

20th Century Maths
How have developments in 20th century maths affected our living and social life?

nb: This question has been asked before, but I need some starters. Thanks, 220.244.76.78 (talk) 00:36, 9 June 2008 (UTC)

This is an interesting question. I am sure I will miss some important items, but here is a list of some effects that jump to mind:
 * 1) Encryption, and specifically public-key encryption schemes, has enabled a lot of what we do on the web today (e-commerce, e-banking,...)
 * 2) Game theory and mathematical economics affect commerce, and to a lesser extent law and politics. For example, the idea of emissions trading can be seen as a game theory idea.
 * 3) Communication, image compression and, for example, DVD technology are based on mathematics such as Fourier analysis, information theory (entropy coding), and coding theory
 * 4) Statistics plays a very significant role in many different disciplines, e.g., health, political planning,...
 * 5) There is a lot of sophisticated math that goes into various engineering projects: cars, planes, bridges, buildings... —Preceding unsigned comment added by OdedSchramm (talk • contribs) 01:37, 9 June 2008 (UTC)
 * Calculus is very much involved in optimization in industry. Most calculus is 19th century, though.
 * Computers and these here internets are founded in many ways on electrical engineering and computer science but mathematics has a large place as well, including the pseudo-randomization used in many web and computer apps. Chris M. (talk) 02:15, 10 June 2008 (UTC)
 * What 20th century developments are you thinking of in calculus? Algebraist 07:37, 10 June 2008 (UTC)
 * Guess I thought calculus was newer than it is; it's mostly 19th century, not 20th. Oh well, nice catch. Chris M. (talk) 03:46, 11 June 2008 (UTC)
 * I guess that depends on what you mean by "most". The fundamentals were set down in the 17th century by Newton and Leibniz. Plenty of people have built on it since then, but I'm not sure if the work done in the 19th century really outweighs Newton's and Leibniz's contributions. --Tango (talk) 13:41, 11 June 2008 (UTC)
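To make item 1 in the list above concrete, here is a toy sketch of RSA, the textbook public-key scheme. The numbers here are hypothetical and far too small to be secure; real keys use primes hundreds of digits long.

```python
# Toy RSA sketch (illustrative only -- these tiny numbers are insecure).
p, q = 61, 53                      # two secret primes
n = p * q                          # public modulus, 3233
phi = (p - 1) * (q - 1)            # Euler's totient of n, 3120
e = 17                             # public exponent, coprime to phi
d = pow(e, -1, phi)                # private exponent: modular inverse (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # only the private-key holder can decrypt
print(recovered)                   # 42
```

The point of the scheme is that (e, n) can be published while d stays secret, which is what makes e-commerce and e-banking possible over an open network.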

question about equations
How do you know when an equation has infinitely many solutions? How do you know when an equation has no solutions? —Preceding unsigned comment added by Lighteyes22003 (talk • contribs) 14:02, 9 June 2008 (UTC)

FORMULATION OF ENGINEERING OPTIMIZATION PROBLEMS —Preceding unsigned comment added by 59.94.72.225 (talk) 14:18, 9 June 2008 (UTC)


 * An equation f(x)=0, where f(x) is a polynomial of degree n, generally has n solutions. If the degree is zero, the equation is 1=0, which has no solutions, as no value of x is a solution. If the degree is minus infinity (the zero polynomial), the equation is 0=0, which has infinitely many solutions, as every value of x is a solution. If the degree is one, such as x+1=0, the equation has one solution, $$x=-1$$. If the degree is two, such as $$x^2+1=0$$, the equation has two solutions, $$x=i$$ and $$x=-i$$. The equation $$x^2=0$$ actually has only one solution, x=0, but because the polynomial $$x^2$$ can be written as a product of two polynomials of degree one, namely $$x^2=(x-0)\cdot(x-0)$$, the solution x=0 is often counted twice and called a double root. Is this helpful? Bo Jacoby (talk) 14:36, 9 June 2008 (UTC).
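Bo's degree-two examples are easy to check by machine; a minimal sketch using the quadratic formula over the complex numbers:

```python
import cmath

def quadratic_roots(a, b, c):
    """Return both complex roots of a*x^2 + b*x + c = 0 (assumes a != 0)."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

print(quadratic_roots(1, 0, 1))   # x^2 + 1 = 0  ->  the two roots i and -i
print(quadratic_roots(1, 0, 0))   # x^2 = 0      ->  the double root 0, twice
```

The second call returns the root 0 twice, matching the "counted twice" convention for a double root.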
 * And, if the equation isn't a polynomial, then I don't know of any general method for telling how many solutions it's likely to have. Telling that it has no solutions is often quite easy, but will depend on the equation - you basically just have to show that the equation implies something impossible (1=0, say). The obvious way for an equation to have an infinite number of solutions is for it to be periodic, such as $$\sin x = 0$$: then you can just find all the solutions in one period and generalise to the other periods. --Tango (talk) 14:46, 9 June 2008 (UTC)


 * In complex analysis it can be easier: if you have an entire function of a single variable, it takes at most one finite value only finitely often, and takes all others infinitely often. There are ways to extend this to more general single-variable meromorphic functions using Nevanlinna theory. -mattbuck (Talk) 16:03, 9 June 2008 (UTC)


 * Hey there, fellow question answerers. Let's not get too advanced.


 * A typical way of showing that an equation has no solutions is to demonstrate that any solution would lead to trouble (i.e., to a contradiction). As an example, suppose we want to find all real numbers $$x$$ that solve the equation $$x^2=-1$$. We know that for any real number $$x$$, $$x^2$$ is nonnegative, but the right hand side of the equation is negative. Therefore there can be no solutions.


 * One way of knowing that there are an infinite number of solutions is to actually find an infinite number of solutions. Suppose we want to find all real numbers $$y$$ that solve the equation $$0\cdot y=0$$. Since zero multiplied by any real number is zero, $$y$$ can be any real number, of which there are an infinite number. —Bromskloss (talk) 17:16, 9 June 2008 (UTC)
 * I don't know if this is what the OP had in mind, but questions such as these are usually asked when we have a system of linear equations. -- Meni Rosenfeld (talk) 17:56, 9 June 2008 (UTC)
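For the linear-system case Meni mentions, the standard criterion (the Rouché–Capelli theorem) compares the rank of the coefficient matrix with the rank of the augmented matrix and with the number of unknowns. A minimal pure-Python sketch using exact fractions:

```python
from fractions import Fraction

def rank(rows):
    """Row-reduce a matrix (list of row lists) and return its rank."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def classify(A, b):
    """Rouché–Capelli: compare rank(A), rank([A|b]) and the unknown count."""
    n = len(A[0])
    ra = rank(A)
    rab = rank([row + [bi] for row, bi in zip(A, b)])
    if ra < rab:
        return "no solutions"
    return "unique solution" if ra == n else "infinitely many solutions"

print(classify([[1, 1], [1, 1]], [1, 2]))   # parallel lines: no solutions
print(classify([[1, 1], [2, 2]], [1, 2]))   # the same line twice: infinitely many
print(classify([[1, 1], [1, -1]], [2, 0]))  # crossing lines: unique solution
```

Geometrically, the three cases for two equations in two unknowns are parallel lines, coincident lines, and crossing lines.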

Arc square
I'm intrigued by a figure called an arc square, which reminded me of some 7th grade math questions...

I found an arc square in

http://www.mathematische-basteleien.de/arcfigures.htm

the picture is

http://www.mathematische-basteleien.de/kreis406.gif

the question is: how to calculate the green area shown? It seems a question that requires me to add and subtract a lot of shapes.

1: Suppose I merge the green area and two adjacent shapes as the red one shown below,

http://www.mathematische-basteleien.de/kreis206.gif

then let me suppose that X, X, and the green area (i.e. the same shape) form this red area.

3: Then, suppose I take a shape which equals the area of the square minus a quarter circle. This shape should consist of Y, Y, and X.

4: Suppose the width of the square is r cm, which is also the radius of any quarter circle inscribed in it.

Is it only possible to calculate the green area when one knows X and Y? I tried multiple times and it seems the three unknowns cannot be deduced by ordinary additions and subtractions of shapes. Is it necessary to partition the square to figure it out?--61.92.239.42 (talk) 15:18, 9 June 2008 (UTC)


 * The green area can be straightforwardly attacked with coordinate geometry: put the lower-left corner of the square at the origin and scale it to have sides of length 1. Then consider only the upper-left quadrant of the "arc square": the equation for its bounding circle is $$y=\sqrt{1-(x-1)^2}$$, and its other two sides are of course the lines $$x=1/2$$ and $$y=1/2$$.  The x coordinate of the left corner is thus $$x=1-\sqrt3/2$$: all we need is $$4a^2\int_{1-\sqrt3/2}^{1/2}\left(\sqrt{1-(x-1)^2}-\frac12\right)\,dx=\left(1-\sqrt3+\frac\pi3\right)a^2$$.  The red area is even easier: it is bilaterally symmetric, and half of it is the quarter circle minus the half square, so its area is $$\left(\frac\pi2-1\right)a^2$$.  --Tardis (talk) 16:29, 9 June 2008 (UTC)
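For anyone who wants to double-check the closed forms above numerically (taking a = 1), here is a short sketch applying composite Simpson's rule to the quadrant integral from the coordinate-geometry setup:

```python
import math

def simpson(f, lo, hi, n=10_000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (hi - lo) / n
    total = f(lo) + f(hi)
    total += 4 * sum(f(lo + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    total += 2 * sum(f(lo + 2 * i * h) for i in range(1, n // 2))
    return total * h / 3

# Four times the upper-left quadrant piece, as in the integral above.
green = 4 * simpson(lambda x: math.sqrt(1 - (x - 1) ** 2) - 0.5,
                    1 - math.sqrt(3) / 2, 0.5)
print(green, 1 - math.sqrt(3) + math.pi / 3)   # the two values agree
```

Both come out to about 0.3151, and the red area formula gives π/2 − 1 ≈ 0.5708.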


 * Thanks. But is there any way that a "7th grade" kid can do it? In the textbook where it comes up (I have to teach a kid), the learner is supposed to use only the basic formula for circles, i.e. the area of a circle = $$\pi r^2$$. I can't remember exactly, but when I was doing the same question, I learned that it was possible to do it using this formula only - though it also involves a lot of adding and subtracting work, with all four quadrants.

confused
$$e^{-\infty}=0$$ obviously, but:

$$e^{-x}=1-x+\frac{x^2}{2!}-\frac{x^3}{3!}+\frac{x^4}{4!}-\cdots$$

$$e^{-x}=(1-x)+\frac{3x^2-x^3}{3!}+\frac{5x^4-x^5}{5!}+\cdots$$

$$e^{-x}=(1-x)+\frac{(3-x)x^2}{3!}+\frac{(5-x)x^4}{5!}+\cdots$$ (equation A)

if $$x=\infty$$ then all the terms in equation A are negative (and infinite, except at the limit).. Where did I go wrong? 87.102.86.73 (talk) 17:10, 9 June 2008 (UTC)

In general I was trying to find a way to find the value of an infinite power series where the nth coefficient is $$(-1)^n f_n(x)$$, where $$f_n(x)$$ has similar properties to the above, i.e. eventually decreases (converges) for finite x. I need to work out such a sum for $$x=\infty$$ - any suggestions? 87.102.86.73 (talk) 17:14, 9 June 2008 (UTC)
 * What you're trying to do is extend the Taylor series for $$e^x$$ to the extended real numbers. Unfortunately, as you've observed, it just doesn't work. The underlying mathematics used to derive the Taylor series doesn't all work on the extended reals. --Trovatore (talk) 17:35, 9 June 2008 (UTC)
 * I think it is possible to view this in the following way: Every power series has a radius of convergence. Convergence is only guaranteed within this radius, not on the boundary. The Taylor series of $$e^x$$ has an infinite radius of convergence, which means that convergence is not guaranteed at infinity.
 * A somewhat similar example: The Taylor series of $$\frac1{1-x}$$ around 0 is $$1+x+x^2+x^3+\cdots$$. This "gives" us the paradoxical $$-1=\frac{1}{1-2}=1+2+4+8+16+\cdots$$. The classical explanation is that 2 is simply outside the radius. -- Meni Rosenfeld (talk) 17:45, 9 June 2008 (UTC)
 * Of course, that makes perfect sense $$\mod2^n$$ for some natural number n. -mattbuck (Talk) 18:43, 9 June 2008 (UTC)
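mattbuck's remark checks out mechanically: modulo $$2^n$$, the partial sum $$1+2+4+\cdots+2^{n-1}=2^n-1$$ really is congruent to $$-1$$. A quick check:

```python
# Modulo 2^n, the geometric partial sum 1 + 2 + 4 + ... + 2^(n-1) equals -1.
n = 16
partial = sum(2 ** i for i in range(n))     # this is 2^n - 1
print(partial % 2 ** n == (-1) % 2 ** n)    # True
```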

Thanks - that makes sense. I have another problem below - and related...


 * Another way to think of it (see big O notation) is that, for the series to "start converging", the terms are going to have to start getting small. $$O \left( \frac{x^n}{n!} \right) = O \left( \frac{x^n}{n^n} \right) = O \left( \left( \frac{x}{n} \right)^n \right)$$, so we'd be looking for the point where $$n \gg |x|$$.  Now what happens as $$|x| \to \infty$$?  --Prestidigitator (talk) 20:59, 9 June 2008 (UTC)
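To see Prestidigitator's point concretely, here is a small sketch (x = 10.5 chosen arbitrarily): the terms $$x^n/n!$$ grow until n passes |x| and only then start to shrink, so as $$|x|\to\infty$$ the series never reaches the shrinking regime.

```python
import math

x = 10.5
terms = [(-x) ** n / math.factorial(n) for n in range(60)]
peak = max(range(len(terms)), key=lambda n: abs(terms[n]))
print(peak)                 # the largest term sits near n = x, here n = 10
s = sum(terms)
print(s, math.exp(-x))      # once n >> x, the partial sums do settle down
```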

power series - integral at infinity
I have a power series..

the zeroth coefficient $$a_0$$ is a 'number' (I want to normalise this function, i.e. make its integral between 0 and infinity equal to 1); for now $$a_0$$ can be 1 or whatever is easier..

$$a_1=ka_0/2$$

$$a_2=(ka_1+Ma_0)/(2\cdot3)$$

in general (not $$a_1$$ above)

$$a_{n+2}(n+2)(n+3) = ka_{n+1}+Ma_n$$

ie $$a_{n+2} = \frac{ka_{n+1}+Ma_n}{(n+2)(n+3)}$$

so the coefficients are known. k and M are constants. I might be interested in solutions for different values of these constants .. but for now I'd just like to get to 'step1' - that is evaluating the integral.

Integrating is no problem - but I can't evaluate the integral at infinity - what to do? I guess I should try to convert this infinite polynomial into something I can integrate..

Question.. How to go about this? I'd really like to do this analytically rather than approximate it numerically.. Clues or links please - I don't think I've yet learnt the tools to do this. Clearly I expect the function to converge, and converge at infinity (radius of convergence at infinity). (It's not that important that I solve it - please don't 'bust a nut' over it if it's 'difficult' - as I don't intend to..) Thanks. 87.102.86.73 (talk) 18:38, 9 June 2008 (UTC)
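The recurrence is easy to sketch in code: build the coefficients and evaluate the partial sum at a point. The sample values k = −2, M = 1 below are purely illustrative; for that choice the coefficients work out to $$(-1)^n/n!$$, i.e. the series for $$e^{-x}$$, which makes a handy sanity check.

```python
import math

def coefficients(a0, k, M, count):
    """Coefficients from a1 = k*a0/2 and a[n+2]*(n+2)*(n+3) = k*a[n+1] + M*a[n]."""
    a = [a0, k * a0 / 2]
    while len(a) < count:
        n = len(a) - 2
        a.append((k * a[n + 1] + M * a[n]) / ((n + 2) * (n + 3)))
    return a

def partial_sum(a0, k, M, x, count=60):
    """Evaluate the truncated power series at x."""
    return sum(c * x ** i for i, c in enumerate(coefficients(a0, k, M, count)))

print(partial_sum(1, -2, 1, 1.0), math.exp(-1))   # these agree closely
```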

As a short answer: am I right in thinking that the antiderivative of this function, evaluated at infinity, (for finite $$a_0$$, k, M) will always be finite? 87.102.86.73 (talk) 18:58, 9 June 2008 (UTC)


 * The function itself isn't even guaranteed to be finite in the limit x -> infinity, in fact, it is trivial to show that it is not if a0, k, M > 0. To have a finite integral over the range 0 to infinity, you need the value of the function itself to go to 0 at infinity, which seems implausible.  Dragons flight (talk) 19:28, 9 June 2008 (UTC)
 * I agree - I found (by another route, i.e. the back door) that if M=-1, k=-2 the function is $$e^{-x}$$ .. that's one solution, also I have function=0 (not helpful here).. but are there any others? I'd like to be able to show one way or another.. A link to 'writing power series as sums of ' is what I need! 87.102.86.73 (talk) 20:00, 9 June 2008 (UTC)
 * As long as the series converges uniformly (I think... pointwise convergence may be enough, but I don't think so), you can integrate it by just integrating term by term. That will give you the integral as another power series, which you can then play around with so see if you can get anywhere (I'm not quite sure what it is you're trying to do, so I'm not sure where you would go from there). It's probably best to just assume uniform convergence and see how far you can get - there will be constraints on the constants but, with any luck, what they are will become obvious since they'll be the only values which give meaningful answers. --Tango (talk) 21:53, 9 June 2008 (UTC)
 * Everywhere pointwise-convergent power series are locally uniformly convergent, so you're fine with term-by-term integration (as long as the series converges at all, anyway). Algebraist 22:25, 9 June 2008 (UTC)
 * Yes, I can integrate each term, but can't evaluate that integral at infinity - any ideas? 87.102.86.73 (talk) 16:40, 10 June 2008 (UTC)
 * To evaluate an improper integral you evaluate it between 0 and a finite M, and then take the limit as M tends to infinity. How easy that is will depend on the constants. For some, it will be very easy; for others it will be completely impossible, since the limit won't exist; there will probably also be choices that are possible but very difficult (you could attempt to do it numerically in those cases). --Tango (talk) 17:59, 10 June 2008 (UTC)
 * Yes - I'd sort of got that far. I was wondering if there were any alternative series I could convert to. I was avoiding numerical methods... e.g. suppose I have $$e^{-x}$$ as a power series (but don't recognise it as $$e^{-x}$$) - could anyone suggest a way to show (analytically/proof) that the limit as x tends to infinity is zero? Even a guess. 87.102.86.73 (talk) 19:05, 10 June 2008 (UTC)
 * I seem to be banging my head on the 'radius of convergence' mentioned in the above section. Anyone know of any methods that deal with evaluation on the radius of convergence? I'm clutching at straws.87.102.86.73 (talk) 19:08, 10 June 2008 (UTC)


 * I don't know if this is of any help, but if y denotes the value of the power series evaluated at x, then y satisfies the linear ODE
 * $$xy''+2y'-(Mx+k)y = 0\,.$$
 * In the special case that $$k^2 = 4M$$, the series sums to $$y = a_0\exp(kx/2)$$. (The "back door" statement above requires M = +1.)
 * In the special case that $$k = 0$$, the series sums to $$y = a_0S(M^{1/2}x)$$, where $$S(x) = \sinh(x)/x$$, a hyperbolic counterpart to the sinc function. --Lambiam 21:17, 10 June 2008 (UTC)
 * That's the function I started with (unsurprisingly)... normally I'd be satisfied with the relationship above, but for this 'homework' I also need $$\int_0^\infty y\,dx$$ to be finite (or maybe $$\int_0^\infty y^2\,dx$$ to be finite). In either case I'm stuck at the integral for general values of k and M (both of which are non-zero and likely negative). Thanks anyway. 87.102.86.73 (talk) 15:30, 11 June 2008 (UTC)
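As a numeric cross-check of Lambiam's k = 0 closed form (taking a0 = 1 and the illustrative choice M = 4), the series generated by the recurrence does match $$\sinh(\sqrt{M}\,x)/(\sqrt{M}\,x)$$:

```python
import math

def series(a0, k, M, x, count=40):
    """Sum the truncated power series whose coefficients obey the recurrence."""
    a = [a0, k * a0 / 2]
    for n in range(count - 2):
        a.append((k * a[n + 1] + M * a[n]) / ((n + 2) * (n + 3)))
    return sum(c * x ** i for i, c in enumerate(a))

x, M = 1.5, 4.0
lhs = series(1, 0, M, x)
rhs = math.sinh(math.sqrt(M) * x) / (math.sqrt(M) * x)
print(lhs, rhs)   # the two values agree
```

With k = 0 every odd coefficient vanishes and the even ones come out to $$M^m/(2m+1)!$$, which is exactly the $$\sinh$$-based series.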