Wikipedia:Reference desk/Archives/Mathematics/2011 October 31

= October 31 =

1 x 10 to the power of nought
Excuse my lack of notation - it mirrors my lack of understanding of maths. I believe (could be wrong) that 1 x 10 to the power of nought=1. This seems counter-intuitive to me. Surely 10*itself zero times is 0, x1=0? Thanks in advance and even more thanks if you dumb down to my level (if possible) with any answers. --Dweller (talk) 12:35, 31 October 2011 (UTC)


 * It is a basic property of powers: any number to the power of one is equal to itself, and any number to the power of zero is equal to one. Ten to the power of zero is not the same as ten times zero; your logic is flawed. To illustrate this:


 * $$a^b / a^b = a^{b-b} = a^0$$
 * $$a^b / a^b = 1 = a^0$$

Plasmic Physics (talk) 13:00, 31 October 2011 (UTC)
 * This is a bit counter-intuitive. But consider that powers of numbers are always a bit weird unless the exponent is a positive whole number. $$10^n$$ is indeed "10 times itself n times", but this only makes sense when n is a whole number greater than 0. The phrase "10 times itself zero times" doesn't really mean anything, and certainly $$10^{-1}$$, which equals .1, is not really "10 times itself -1 times". Not to mention $$10^{1/2}$$, which equals $$\sqrt{10}$$ ("10 times itself 1/2 times"), or $$10^\pi\approx 1385$$ ("10 times itself pi times"). All of these weird ones need to be defined in useful ways which are consistent with how people generally want powers to behave. These are really just conventions, though the convention that $$10^0=1$$ is so useful that there's (almost?) never any reason to do it any other way. We do it this way so that this rule works: $$a^b a^c = a^{b+c}$$ when b or c is zero. Staecker (talk) 13:27, 31 October 2011 (UTC)

Thank you both, although Plasmic you lost me very quickly. Staecker, I think I follow what you're saying - you're saying it doesn't really equal 1, we just say it does because it's convenient to do so, but in reality, the expression is meaningless as a reflection of reality. --Dweller (talk) 13:50, 31 October 2011 (UTC)
 * I would say $$10^0$$ equals whatever we say it equals - when we define abstract mathematical concepts, we get to say what they mean however we want to. Your offhand reasoning "because it's convenient to do so" is actually the way that everything in mathematics is defined. (Consider the number zero itself, which from a simplistic point of view doesn't "really" represent anything - it is so overwhelmingly convenient to allow 0 as a number that we universally do it.) As for reflecting reality, the exponential function $$y=10^x$$ is one of the most commonly occurring functions in nature, and is reflected in many many "real-world" systems. See exponential growth. In all these cases, defining $$10^0=1$$ is the "right" way to do it, if you want your definition to correspond to the real world. — Preceding unsigned comment added by Staecker (talk • contribs) 14:30, 31 October 2011 (UTC)
 * If you prefer a simpler but less general argument: $$10^4 = 10000$$, $$10^3 = 1000$$, $$10^2 = 100$$, $$10^1 = 10$$. Each time the exponent is reduced by 1, we divide by 10. So $$10^0 = 10/10 = 1$$. Continuing to negative exponents, $$10^{-1} = 1/10$$, $$10^{-2} = 1/100$$, and so on. See also Exponentiation.
 * Mathematicians love to generalize things. In order to do this they sometimes reformulate traditional definitions so the new definition gives the same result as the old definition in the cases covered by the old definition. In cases not covered by the old definition, they usually want the new definition to keep some properties of the old definition. $$10^n$$ = "10 multiplied by itself n times" is a traditional old definition. That particular wording makes best sense when n is a positive integer above 1 ($$10^1$$ = "10 multiplied by itself 1 time" is already a stretch to many non-mathematicians). There are other ways to formulate the definition of $$10^n$$ which make sense in more cases such as n=0.
 * Another way to look at it: 0 is the identity element of addition (a+0 = 0+a = a), so an empty sum is given the value 0. But 1 is the identity element of multiplication (a×1 = 1×a = a), so an empty product is given the value 1. To a mathematician, "10 multiplied by itself 0 times" can actually make sense as an empty product, and that is 1. PrimeHunter (talk) 14:22, 31 October 2011 (UTC)
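PrimeHunter's identity-element point can be checked directly in Python (an illustrative sketch; `math.prod`, available in Python 3.8+, returns the multiplicative identity for an empty iterable, just as `sum` returns the additive identity):

```python
import math

# The empty sum is 0, the identity element of addition:
print(sum([]))        # 0

# The empty product is 1, the identity element of multiplication:
print(math.prod([]))  # 1

# "10 multiplied by itself 0 times" read as an empty product:
print(math.prod([10] * 0))  # 1 -- a product with no factors at all
print(10 ** 0)              # 1 -- and Python's exponentiation agrees
```

The same convention shows up in any language with a fold or reduce: folding multiplication over an empty list must start from 1, or multiplying a nonempty list would come out wrong.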


 * You can also appeal to your intuition about graphs. If you plot a graph of, say, $$2^x$$, you will get a graph that looks like this one here. Now, if you agree that all the points to the right of the Y axis are correct ($$2^1=2$$, $$2^2=4$$ etc.) then you'd see that the graph would look pretty funny if it suddenly changed at zero. And, in general, "funny graphs" that change suddenly aren't likely to be graphing functions that are useful and generalizable in mathematics. Now, that's obviously not a formal proof, but it might appeal to your intuition. If not, one other question: what would $$10^{0.1}$$ be? And $$10^{0.01}$$? And $$10^{0.000001}$$? — Sam 76.118.180.196 (talk) 16:49, 31 October 2011 (UTC)
 * Think of exponentiation as working like $$10^3=1\times10\times10\times10=1000$$, $$10^2=1\times10\times10=100$$, $$10^1=1\times10=10$$, and $$10^0=1$$. Notice the ever-present 1. Also, graphing $$y=10^x$$ forms a curve that is without any breaks or jumps, which is how nature behaves. --Melab±1 &#9742; 20:57, 31 October 2011 (UTC)


 * Dweller, early statements about it being "just a convention" may have misled you if you concluded that, in reality, the expression is meaningless as a reflection of reality. It is "just a convention" only in the sense that all of mathematics is human construction. We did, however, construct mathematics to be consistent, and the results consistently display strong reflections of reality (even when we choose such poor names as imaginary numbers). $$10^0 = 1$$ is not an arbitrary choice; it is the only answer that can possibly make sense, whether it comes from a fundamental definition of exponentiation and an understanding of the empty product, as described above by PrimeHunter, or as an extension of exponentiation initially described with only positive integer exponents, extended to include all integer exponents in such a way that preserves the laws of exponentiation, as described (tersely) above by Plasmic Physics. -- 110.49.227.102 (talk) 02:05, 1 November 2011 (UTC)


 * Sadly, according to his reply at 13:50, 31 October 2011 (UTC), either he does not understand my working, or the properties of powers. Plasmic Physics (talk) 02:25, 1 November 2011 (UTC)

If you still want to think of it as repeated multiplication, then think of it like this. You don't start with 0 at the beginning. Otherwise $$10^1$$ would be 0*10 = 0. And $$10^2$$ would be 0*10*10 = 0. You start counting at 1, because, with multiplication, 1 is the identity element, just like 0 is for addition. PrimeHunter and Melab's explanations about 1000->100->10->1 are also very straightforward. Lastly, $$x^a \cdot x^b = x^{a+b}$$. This fits with the 'repeated multiplication' concept. Now, if you set b to 0, you get $$x^a \cdot x^0 = x^{a+0} = x^a$$. The only number that $$x^0$$ could be is 1. KyuubiSeal (talk) 02:44, 1 November 2011 (UTC)
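KyuubiSeal's rule can be checked numerically in Python (a small sketch; the base 10 and the exponent values are arbitrary illustrative choices):

```python
# Check the addition-of-exponents law x^a * x^b == x^(a+b) for small
# non-negative integer exponents, where integer arithmetic is exact:
x, a = 10, 4
for b in range(0, 4):
    assert x ** a * x ** b == x ** (a + b)

# Setting b = 0 gives x^a * x^0 == x^a, which only works if x^0 == 1:
print(x ** 0)  # 1
```

Note the restriction to non-negative exponents: with negative b, Python returns floats and the comparison would need a rounding tolerance rather than exact equality.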

Ha. I thank you all for your kindness and erudition. Melab's explanation of 10 to the two =1x10x10... makes sense. I'd love to understand the equation KyuubiSeal gives... as well as Plasmic's stuff, but I don't, which is my problem! Thanks again. --Dweller (talk) 13:12, 2 November 2011 (UTC)

Hi Dweller. Look at it this way. $$10^2=100$$. That is: "ten power two is one with two zeroes behind". That rule also applies for $$10^3=1000$$ = one with three zeroes, and $$10^4=10000$$ = one with four zeroes. So it makes sense that $$10^0$$ = one without zeroes = 1. Bo Jacoby (talk) 20:01, 6 November 2011 (UTC).

Differentiation
Is differentiation an unbound operator on the Schwartz space? If so can someone give me a proof of why? Money is tight (talk) 16:13, 31 October 2011 (UTC)
 * Do you mean unbounded? Looie496 (talk) 16:50, 31 October 2011 (UTC)
 * Yes. Money is tight (talk) 21:41, 31 October 2011 (UTC)
 * It is bounded on the unit ball and continuous. Verify with the metric
 * $$d(f,g) = \sum_{n,m=1}^\infty 2^{-n}2^{-m}\frac{p_{n,m}(f-g)}{1+p_{n,m}(f-g)}$$
 * where $$p_{n,m}(f)=\sup_x |x^nf^{(m)}(x)|.$$ (Similarly in higher dimensions.) 108.3.75.33 (talk) 22:15, 31 October 2011 (UTC)

Lorenz attractor—continuous or not?
Are the equations for the Lorenz attractor continuous or do they involve calculating each triplet (X,Y,Z) recursively, in which case the Lorenz attractor would be discrete? I've read that only differential equations with three or more dimensions could exhibit chaotic behavior and I thought the Lorenz attractor was an example of this. --Melab±1 &#9742; 19:18, 31 October 2011 (UTC)
 * In spite of my previous efforts you are still confusing the Lorenz attractor with the Lorenz oscillator. The Lorenz attractor is not a set of equations, it is the limit that solutions to the Lorenz oscillator equations converge to as t goes to infinity.  If you don't get that distinction straight you are never going to be able to make sense of this. Looie496 (talk) 22:27, 31 October 2011 (UTC)
 * Lorenz oscillator redirects to Lorenz attractor. I am just confused on how the associated equations are solved. If a simpler system would be of better use explaining how these types of problems are solved, could you explain how the van der Pol oscillator is solved? The articles say things like "x is a function of t", which I understand as being parametric but I don't see "t" in the equations. How are they solved? --Melab±1 &#9742; 23:31, 31 October 2011 (UTC)
 * The equations for the Lorenz oscillator can't be solved in closed form -- that is, it is not possible to write down a formula for the solution. They can be solved approximately using a computer, for specific initial conditions:  if you assume that x, y, and z have particular values at time t, then you can use the differential equations to tell you approximately their values at time $$t + \Delta t$$.  The smaller $$\Delta t$$ is, the better the approximation.  This is called the Euler method -- for more information you could consult our article on Numerical ordinary differential equations, or any textbook on differential equations.  Looie496 (talk) 01:07, 1 November 2011 (UTC)
 * Then how do I figure out the values of x, y, and z from $$t + \Delta t$$? --Melab±1 &#9742; 02:50, 1 November 2011 (UTC)
 * I don't really know your background -- have you taken calculus yet? Looie496 (talk) 03:28, 1 November 2011 (UTC)
 * I am currently taking calculus. --Melab±1 &#9742; 19:59, 1 November 2011 (UTC)
 * Okay, then you presumably know the definition of a derivative. Take the first equation for the Lorenz oscillator (the one for dx/dt), and rewrite it using the approximation $$\frac{dx}{dt} \approx \frac{x(t + \Delta t) - x(t)}{\Delta t}$$, then solve for $$x(t + \Delta t)$$.  Do the same for the equations for dy/dt and dz/dt. Looie496 (talk) 01:43, 2 November 2011 (UTC)
 * I don't know where to start, though, because there is no explicit function $$x(t)$$. How are $$x(t+\Delta t)$$ and $$x(t)$$ found then, by solving an implicit equation? And, yes, I do know the limit definition of a derivative. --Melab±1 &#9742; 02:10, 2 November 2011 (UTC)
 * The problem here is that you're really operating beyond the level you are prepared for. The people who wrote those pages assumed readers would know, after seeing dx/dt, dy/dt, and dz/dt appear, that x, y, and z are functions of t -- that is, x stands for x(t) etc. Looie496 (talk) 02:26, 2 November 2011 (UTC)
 * I understand that x, y, and z are functions of t, but I don't see how someone can work with them without a single t in the equations. --Melab±1 &#9742; 02:42, 2 November 2011 (UTC)
 * The Lorenz oscillator equations are continuous - they are a system of non-linear ordinary differential equations. We know that a solution to the Lorenz oscillator exists for any initial set of conditions x_0, y_0, z_0 - in other words, we know there is a set of functions x(t), y(t), z(t) which satisfy the Lorenz equations and for which x(0) = x_0, y(0) = y_0 and z(0) = z_0. But it is not possible to write x(t), y(t), z(t) in terms of simple functions like sin, cos, log etc. (or, at least, this is very unlikely to be the case). This is not surprising - it is very easy to write down a system of NLDEs for which a "closed form" solution cannot be found.
 * Even though a closed form solution is not known, the Lorenz oscillator can still be studied using numerical methods which produce an approximate solution. One such method is the Euler method that Looie496 mentioned above. To use the Euler method, you calculate dx/dt, dy/dt and dz/dt at time 0 using the initial values x_0, y_0, z_0. You then assume that dx/dt, dy/dt and dz/dt stay constant during some small time interval h (we know this isn't quite true, which is why this is an approximation). So you can multiply the initial value of dx/dt by h to find the approximate change in x between time 0 and time h, and add this change to x_0 to get a value x_1 which is the approximate value of x at time h. Do the same thing for y and z to find y_1 and z_1. Then use x_1, y_1 and z_1 to find the (approximate) values of dx/dt, dy/dt and dz/dt at time h, assume these values are constant again between time h and time 2h, calculate the approximate changes in x, y and z between h and 2h, and add these to x_1, y_1, z_1 to find the approximate values of x,y,z at time 2h. Continue until you run out of patience. In effect, you are approximating the system of continuous differential equations
 * $$\frac{dx}{dt} = f_1(x,y,z) \quad \frac{dy}{dt} = f_2(x,y,z) \quad \frac{dz}{dt} = f_3(x,y,z)$$
 * by the system of discrete difference equations
 * $$x_{n+1} = x_n + hf_1(x_n,y_n,z_n) \quad y_{n+1} = y_n + hf_2(x_n,y_n,z_n) \quad z_{n+1} = z_n + hf_3(x_n,y_n,z_n)$$
 * The Euler method is simple to implement, but it is not always stable - sometimes the error terms in this approximation can grow very large. There are more stable and more accurate numerical approximation methods, such as the ones on this list. Edward Lorenz was using numerical methods to study the Lorenz equations on an early computer in 1961 when he noticed that a small change in the initial conditions led to a very large change in the behaviour of the numerical solution - the now-famous butterfly effect. Lorenz could have dismissed this as a programming error or a problem with his approximation method, but instead he looked more closely and found it was a feature of the equations themselves. Gandalf61 (talk) 09:37, 1 November 2011 (UTC)
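The step-by-step recipe above can be sketched in Python (an illustrative sketch, not a robust solver; the parameter values σ=10, ρ=28, β=8/3 are the classic Lorenz choices, and the step size, step count, and initial point here are arbitrary assumptions):

```python
def lorenz_derivatives(x, y, z, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand sides f1, f2, f3 of the Lorenz equations."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return dx, dy, dz

def euler_lorenz(x0, y0, z0, h=0.01, steps=1000):
    """Approximate x(t), y(t), z(t) with fixed-step Euler updates."""
    x, y, z = x0, y0, z0
    trajectory = [(x, y, z)]
    for _ in range(steps):
        dx, dy, dz = lorenz_derivatives(x, y, z)
        # Pretend the derivatives stay constant over the interval h:
        x, y, z = x + h * dx, y + h * dy, z + h * dz
        trajectory.append((x, y, z))
    return trajectory

path = euler_lorenz(1.0, 1.0, 1.0)
print(path[-1])  # approximate (x, y, z) after steps * h time units
```

Plotting the x, y, z columns of `trajectory` against each other is what produces the familiar butterfly-shaped picture; shrinking `h` (or switching to a higher-order method such as Runge-Kutta) improves the approximation.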
 * So what are $$f_1(x,y,z)$$, $$f_2(x,y,z)$$, and $$f_3(x,y,z)$$ defined as in this case, $$f_1(x,y,z)= \sigma (y - x)$$, $$f_2(x,y,z)=x (\rho - z) - y$$, and $$f_3(x,y,z)=xy - \beta z$$ (these probably aren't right considering there is no z in the first function)? Or are they undefined since there is no closed-form solution? And in the notation, what are dx/dt, dy/dt, and dz/dt supposed to be representing the derivative of, the solved forms of x, y, and z? I'm unsure as to whether or not the result of the differential/difference equations is what is plotted on the graph for x, y, and z. Also, are there any continuous chaotic systems with closed-form solutions that you know of? --Melab±1 &#9742; 20:10, 1 November 2011 (UTC)
 * I looked at the Euler's method article. I had assumed secants were used in it. --Melab±1 &#9742; 20:23, 1 November 2011 (UTC)
 * Yes, for the Lorenz oscillator case $$f_1(x,y,z)$$, $$f_2(x,y,z)$$, and $$f_3(x,y,z)$$ are the three expressions on the right hand side of the Lorenz equations. I was stating a slightly more general case, where the right hand side expressions could be any given functions of x,y and z. Yes, dx/dt, dy/dt and dz/dt are the first derivatives of our unknown functions x(t), y(t) and z(t). The Euler method uses tangents rather than secants. Geometrically, you can think of it as drawing a tangent line to the curve at the point t=0, approximating the next piece of the curve by moving a short distance along this tangent line, then repeating the process - so the curve (x(t), y(t), z(t)) is approximated by a sequence of short straight lines. Gandalf61 (talk) 08:50, 2 November 2011 (UTC)

Okay, Random Weird Question
Has anyone ever proved a result by non-constructively proving that a proof exists? --COVIZAPIBETEFOKY (talk) 20:56, 31 October 2011 (UTC)
 * Gödel's completeness theorem comes to mind. I don't know any "everyday natural language" examples though. Money is tight (talk) 21:43, 31 October 2011 (UTC)
 * How is that even possible? Suppose A is a proof that there exists a proof of B. That means that A is itself a proof of B. Therefore A is a constructive proof because it involves finding an example of a proof of B, specifically, itself. Widener (talk) 05:50, 1 November 2011 (UTC)
 * Good point, although A could happen in a different formal system than the one in which the statement B and the alleged proof A shows to exist reside. --COVIZAPIBETEFOKY (talk) 14:19, 1 November 2011 (UTC)
 * Well, that depends on how you interpret the meaning of "proof". To me, if you manage to constructively prove the statement "there exists a proof of B", it's still a non-constructive proof of B because it asserts existence without saying what it actually is. Money is tight (talk) 17:24, 1 November 2011 (UTC)

That question came up on MO a while back, and the answers there are pretty good. (And I miss seeing EmilJ around here). 69.228.170.114 (talk) 09:39, 2 November 2011 (UTC)
 * Awesome, Thanks! --COVIZAPIBETEFOKY (talk) 18:42, 2 November 2011 (UTC)

I've heard of a supposedly significant theorem in number theory, whose proof goes something like this: suppose the extended Riemann hypothesis (ERH) is true. Then by [complicated argument involving ERH], X is true. Suppose on the other hand that ERH is false. Then by [completely different complicated argument involving not-ERH] X is true. Therefore by the law of the excluded middle we conclude that X is true. I don't know what theorem this is about though. 69.228.170.114 (talk) 09:44, 2 November 2011 (UTC)
 * I remember reading about that, and it doesn't qualify, because it's still a proof. --COVIZAPIBETEFOKY (talk) 18:42, 2 November 2011 (UTC)