Wikipedia:Reference desk/Archives/Mathematics/2008 October 13

= October 13 =

Volume ratio
The ratio of the volume of a cube to that of a sphere which will exactly fit inside the cube is? —Preceding unsigned comment added by NAKABBOSS (talk • contribs) 00:20, 13 October 2008 (UTC)
 * Well, the formulae for the volume of a cube and a sphere are fairly simple; you just need to know how the radius relates to the side length. -mattbuck (Talk) 00:34, 13 October 2008 (UTC)


 * The ratio is: click here. -hydnjo talk 00:39, 13 October 2008 (UTC)
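Since the linked answer is not preserved in the archive, here is a minimal sketch of the computation: a sphere that exactly fits inside a cube has diameter equal to the side length, so the ratio comes out to 6/π regardless of the cube's size.

```python
import math

# An inscribed sphere touches all six faces, so its diameter equals the
# cube's side length s, i.e. r = s/2.
s = 1.0                                        # side length (cancels out)
v_cube = s ** 3
v_sphere = (4 / 3) * math.pi * (s / 2) ** 3    # = pi * s^3 / 6
ratio = v_cube / v_sphere                      # = 6 / pi
print(ratio)                                   # ≈ 1.90986
```

Any value of `s` gives the same ratio, since both volumes scale as s³.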

1/x integral
Is the integral of 1/x from 0 to 1 infinity or undefined? —Preceding unsigned comment added by 165.124.138.143 (talk) 02:06, 13 October 2008 (UTC)

Infinity (you might find it interesting to prove this formally using the epsilon-delta definition).

Topology Expert (talk) 02:20, 13 October 2008 (UTC)


 * If you want to get pedantic, then the Riemann integral does not exist, but the improper integral (and the Lebesgue integral, IIRC) evaluates to +infinity. Confusing Manifestation (Say hi!) 04:44, 13 October 2008 (UTC)
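The divergence is easy to see numerically: the improper integral is the limit of ∫ from ε to 1 of dx/x = −ln ε as ε → 0⁺, which grows without bound. A quick sketch:

```python
import math

# The improper integral of 1/x on (0, 1] is the limit of
# ∫_eps^1 dx/x = ln(1) - ln(eps) = -ln(eps)  as eps -> 0+.
for eps in (1e-1, 1e-3, 1e-6, 1e-12):
    integral = -math.log(eps)    # exact value of the truncated integral
    print(eps, integral)         # grows without bound as eps shrinks
```

Halving ε only adds ln 2 to the integral, so the growth is slow but unbounded.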

help in editing gelfand pairs
I hope this is the right place to ask this request. I'm currently editing the article Gelfand pair and I want to add a section about its applications, so I wrote it and put it on the discussion page. However, my English and writing style are too poor to put it in the article, so I'll be grateful if somebody will fix this section and put it in the article. Aizenr (talk) 10:47, 13 October 2008 (UTC)

Calculating a growth (or interest) rate
If I have a starting population of X, a number of periods P, and a final population of Y, how do I calculate the population growth per period assuming it's a smooth growth rate? — PhilHibbs | talk 15:41, 13 October 2008 (UTC)
 * Assuming you are talking about true exponential growth, it's easiest to manipulate your compounding period to satisfy f(1)=e. Setting up the problem with no aim is difficult and unintuitive...

The population P at any time is given by the formula... $$P(t)=Ae^{kt} \,$$

A represents the starting value (you chose X, which is no problem); however, k is very tricky. I'll give you several examples of k until you see a pattern.

For a bacteria population that triples every seven years, $$k=\frac{\ln 3}{7} \,$$ and your period is 7 years.

For a sample of Uranium that has a half-life of 4.5 billion years, $$k=\frac{\ln \frac{1}{2}}{4.5\times 10^9} \,$$ and your period is 4.5 billion years.

For a human population that goes up by a factor of 1.001400763 each generation, $$k=\frac{\ln 1.001400763}{1} \,$$ and your period is 1 generation.

For a rabbit population that goes up 10-fold, year over year, $$k=\frac{\ln 10}{1} \,$$ and your period is 1 year.
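The pattern in these examples (k = ln(growth factor) / period length) can be checked numerically; a small sketch for the bacteria case:

```python
import math

A = 1000.0                 # arbitrary starting value
k = math.log(3) / 7        # bacteria population tripling every 7 years

def P(t):
    """Population at time t (in years) under P(t) = A * e^(k*t)."""
    return A * math.exp(k * t)

print(P(7) / A)    # ≈ 3: tripled after one period
print(P(14) / A)   # ≈ 9: tripled twice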

Now let's answer your question...


 * starting population, we'll assign the symbol A = 5,000
 * final population, we'll assign the symbol Z = 6,000,000,000
 * Final population divided by Starting population = 1,200,000 (because $$\frac{Z}{A}$$ = 1.2 million)
 * number of periods, we'll assign the symbol N = 10,000
 * length of period, we'll assign the symbol L = 1 generation
 * The growth rate per period is $$e^k \, $$
 * e is called Euler's number
 * $$e^k \, $$ is the value that, when multiplied by itself ten thousand times, gives 1.2 million
 * $$e^k \, $$ = 1.001400763 and you're done!

This can be verified by checking the original equation. $$P(t)=Ae^{kt} \,$$ I needed this refresher--use it or lose it! Sentriclecub (talk) 16:38, 13 October 2008 (UTC)
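The worked example above can be verified directly; a minimal sketch using the same numbers (k = ln(Z/A)/N, so that A·e^(kN) = Z):

```python
import math

A = 5_000            # starting population
Z = 6_000_000_000    # final population
N = 10_000           # number of periods (generations)

k = math.log(Z / A) / N          # chosen so that A * e^(k*N) = Z
growth_per_period = math.exp(k)  # equivalently (Z/A)**(1/N)
print(growth_per_period)         # ≈ 1.001400763, matching the answer above
```

Raising `growth_per_period` to the 10,000th power recovers the ratio Z/A = 1.2 million.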


 * Yes, I meant a smooth exponential, e.g. starting with 2000, over 1000 generations, a final population of 42,000,000 requires a growth rate of 1%. I calculated this example the easy way around! 1% growth actually gives 41,918,311. Oh, and this isn't homework, I left school 22 years ago. I could probably have done this in my head back then. — PhilHibbs | talk 15:51, 13 October 2008 (UTC)
 * A little more info - I'm trying to demonstrate how many "Mitochondrial Eve"s there should be given a given population bottleneck at any point in the past. So for a population bottleneck of 5,000 people 10,000 generations ago, I need a population growth rate that would result in 6,000,000,000 people after 10,000 generations of growth. I can trial-and-error it and get 0.149267437184% but it takes ages of fiddling around to get each answer. — PhilHibbs | talk 16:04, 13 October 2008 (UTC)
 * $$Y=Xr^P$$
 * You know Y, X and P, so solve to find r. Gandalf61 (talk) 15:58, 13 October 2008 (UTC)
 * Yeah, I don't know how to reverse that $$r^P$$ bit. — PhilHibbs | talk 16:05, 13 October 2008 (UTC)
 * *click* - it's $$r^{-P}$$, isn't it? So that gives $$r=(Y/X)^{-P}$$? — PhilHibbs | talk 16:08, 13 October 2008 (UTC)
 * D'oh, no, it's r^(1/P) — PhilHibbs | talk 16:13, 13 October 2008 (UTC)
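The closed-form rate r = (Y/X)^(1/P) that Gandalf61's equation leads to replaces the trial-and-error fiddling; a quick sketch with the two scenarios from this thread:

```python
# Direct formula instead of trial and error: r = (Y/X)**(1/P)
X, Y, P = 2_000, 42_000_000, 1_000        # PhilHibbs' easy example
r = (Y / X) ** (1 / P)
print(r)                                  # ≈ 1.01, i.e. about 1% per generation

# The population-bottleneck scenario: 5,000 -> 6 billion over 10,000 generations
X, Y, P = 5_000, 6_000_000_000, 10_000
r = (Y / X) ** (1 / P)
print((r - 1) * 100)                      # growth rate per generation, in percent
```

Note the second result is about 0.1401% per generation, which is worth re-checking against the trial-and-error figure quoted above.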


 * (ec) Suppose it grows by a factor k each period. Then
 * $$Y = X k^P$$
 * so
 * $$k = \left(\frac{Y}{X}\right)^{\frac{1}{P}}$$
 * That gives you the growth factor over a particular period.


 * Alternatively, if you want the current instantaneous rate of growth, consider
 * $$Y(t) = X k^\frac{t}{T_P} $$
 * $$\qquad \qquad = X e^\frac{t \ln k}{T_P} $$
 * where $$T_P$$ is the duration of each time period.


 * Therefore, at any time t the rate of growth is
 * $$\frac{d Y(t)}{dt} = X \frac{\ln k}{T_P} e^\frac{t \ln k}{T_P} $$


 * And so the proportional rate of growth is
 * $$\frac{1}{Y}\frac{d Y(t)}{dt} = \frac{\ln k}{T_P} $$


 * Substituting in k from above gives
 * $$\qquad \qquad = \frac{\ln Y - \ln X}{P T_P}$$


 * You can get to the second-to-last step more quickly, once you've realised that that is what you want, by noting that
 * $$\frac{1}{Y}\frac{d Y(t)}{dt} = \frac{d \ln Y(t)}{dt} $$


 * Jheald (talk) 16:17, 13 October 2008 (UTC)
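Jheald's two expressions for the proportional rate of growth, ln(k)/T_P and (ln Y − ln X)/(P·T_P), can be checked numerically with the figures from this thread:

```python
import math

X, Y, P, T_P = 5_000.0, 6_000_000_000.0, 10_000, 1.0

k = (Y / X) ** (1 / P)                           # growth factor per period
rate1 = math.log(k) / T_P                        # ln(k) / T_P
rate2 = (math.log(Y) - math.log(X)) / (P * T_P)  # (ln Y - ln X) / (P * T_P)
print(rate1, rate2)   # both ≈ 0.0014: the instantaneous proportional rate
```

The two agree because ln k = ln(Y/X)/P by construction.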

Dumb question about infinity
Are there an infinite number of infinities? Sorry if this is a dumb question. Jooler (talk) 20:21, 13 October 2008 (UTC)


 * There are no dumb questions about infinity, it's a very confusing topic! The answer is "it depends". There are different things "infinity" can mean. It can be used to mean "going on forever" (when doing limits and things), so if you're talking about a sequence indexed by natural numbers, there's just one infinity. If you're talking about a function on the real numbers, there are two, positive and negative, if you're talking about a function on the plane or some high dimensional space, then there are infinitely many, one in each direction (the infinity being the same as the number of real numbers or the "cardinality of the continuum"). On the other hand, there are infinite ordinals (numbers used for ordering things) and cardinals (numbers used for counting things), there are infinitely many of each of those too (I'm not sure which infinity - I think there are more ordinals than any infinity you can think of, so I guess that's a kind of super-infinity!). --Tango (talk) 18:07, 13 October 2008 (UTC)


 * So, as the set of real numbers is larger than the set of integers, and the set of complex numbers is larger still, can you logically say that the infinity that represents the count of one set is larger than the infinity that represents the count of another set?


 * $$\infty_r > \infty_i $$


 * Jooler (talk) 20:22, 13 October 2008 (UTC)


 * The infinity which is the cardinality ("size") of the real numbers is, indeed, larger than the cardinality of the integers. The complex numbers are actually the same size as the real numbers in terms of cardinality. For example, you can pair them up by taking a complex number with its real and imaginary parts written as decimal expansions and then form a single real number by interspersing the digits, so 0.25+0.36i would become 0.2356 (there are details to be worked out, but the basic idea works). Real numbers can obviously be mapped to complex numbers by simple inclusion - a real number is already a complex number. --Tango (talk) 20:30, 13 October 2008 (UTC)
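The digit-interspersing idea can be sketched for terminating decimals (a toy version only; the full bijection has to handle infinite expansions and trailing 9s, which this ignores):

```python
# Toy version of the digit-interleaving pairing for numbers in [0, 1):
# interleave the decimal digits of the real and imaginary parts.
def interleave(re_digits: str, im_digits: str) -> str:
    """Alternate digits from the two fractional parts, real part first."""
    return "".join(a + b for a, b in zip(re_digits, im_digits))

# 0.25 + 0.36i maps to 0.2356, as in the example above.
print("0." + interleave("25", "36"))   # 0.2356
```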


 * Most definitely. Consider the natural numbers - it's obvious that there are an infinite number of these, and that infinity is $$\aleph_0$$, and it's called a countable infinity. However, by Cantor's diagonal argument, we know that there is no bijection between the set of natural numbers and the set of real numbers (though there is between the naturals and rationals). It's clear that there are more real numbers than rational numbers, so the Cardinality of the continuum is $$> \aleph_0$$, and could therefore be said to be more infinite. We call this an uncountable infinity. In fact, $$\vert\mathbb{R}\vert = 2^{\aleph_0}$$, though opinion is divided as to whether this is the smallest uncountable infinity (which is $$\aleph_1$$). For more information, look at the article on the Continuum Hypothesis. -mattbuck (Talk) 20:32, 13 October 2008 (UTC)

Ok, so as $$\infty_r > \infty_i $$, can it be concluded that there must be an infinite number of infinities between those two infinities? Or is this becoming absurd? Jooler (talk) 20:57, 13 October 2008 (UTC)


 * As for whether there can be an infinity between those two, read what mattbuck had to say above (with more at Continuum Hypothesis, which is exactly the question you asked). As for an infinite number of infinities, if we are talking about cardinalities, then for any set S, the power set of S is necessarily larger, so we can always get a larger and larger infinity by taking the power set over and over again.  Eric.  131.215.45.106 (talk) 21:13, 13 October 2008 (UTC)
 * Right, but what he literally asked was whether there were infinitely many cardinals between $$\aleph_0$$ and $$2^{\aleph_0}$$. That's not exactly the same question as CH, although it has a similar answer (namely, the question can't be settled from the accepted axioms of set theory alone, and whether that's a defect of the question, or just indicates the weakness of the accepted axioms, is a bone of contention). --Trovatore (talk) 21:17, 13 October 2008 (UTC)
 * Indeed, it's closely related to CH - it's a stronger version of the negation of CH. --Tango (talk) 21:26, 13 October 2008 (UTC)
 * Well, you can look at it that way if you want, although I think that these days a significant current of thought suggests $$\aleph_2$$ for the cardinality of the continuum (for example, this is the value that follows from the proper forcing axiom, and it's the value that shows up in certain models related to Woodin's Ω-logic, though I've never quite got it straight whether Woodin's anti-CH argument specifically supports the value $$\aleph_2$$). --Trovatore (talk) 22:12, 13 October 2008 (UTC)
 * My bad, I misread what he wrote. I seem to be in the habit of that lately (a sign I should be sleeping instead of being on WP?).  My apologies to Jooler for the mistake.  Eric.  131.215.159.210 (talk) 06:59, 14 October 2008 (UTC)
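Eric's power-set remark above can be illustrated for finite sets, where |P(S)| = 2^|S| is always strictly larger than |S|; Cantor's theorem extends this strict inequality to infinite sets as well. A small sketch:

```python
from itertools import chain, combinations

def power_set(s):
    """All subsets of s, as a list of sets."""
    s = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

S = {1, 2, 3}
P = power_set(S)
print(len(S), len(P))   # 3 8  -- |P(S)| = 2**|S| > |S|
```

Iterating `power_set` keeps producing strictly larger sets, which is the finite shadow of the "no largest infinity" argument.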

Thanks for all the answers guys. Unfortunately my maths education stopped at A-Level, and that was 25 years ago, so some of the above is over my head, but it's all helpful! Jooler (talk) 22:21, 13 October 2008 (UTC)


 * I've seen arguments for variants of a new axiom which would imply that there was exactly one new infinity between $$\aleph_0$$ and $$2^{\aleph_0}$$. So the subject is by no means settled by saying one could assume this either way. Dmcq (talk) 09:45, 14 October 2008 (UTC)
 * Yes, that's what Trovatore was talking about above. Algebraist 10:26, 14 October 2008 (UTC)
 * Sorry, true - I should read through all the answers more carefully Dmcq (talk) 18:16, 14 October 2008 (UTC)

Solving equations
Hi, I'm a hobbyist programmer writing games, and I need a way to solve equations of the form (with complex constants A, B, C, D, E):

$$AB^x + Cx + D = E^x$$

Is there a good (i.e., fast) way of telling quickly whether there is a real solution (because there might not be), and, if so, finding it? 79.78.73.72 (talk) 18:24, 13 October 2008 (UTC)


 * To start with, we will need to write
 * $$B^x = \exp[x \cdot \log B]$$
 * and similarly with $$E^x$$. In general $$\log B$$ will have both real and imaginary parts, making things messy.  If you have some notion of a bound on the value that x could take, then you can approximate the exponentials with a Taylor series and numerically solve the resulting polynomial with what I presume to be standard techniques.  This polynomial will have a finite number of complex solutions, with no reason a priori to suppose any of them to be real, so that will have to be tested for (approximately, of course).  Eric.  131.215.45.106 (talk) 19:50, 13 October 2008 (UTC)


 * Some questions: What sort of accuracy are you hoping for for x, and if you are able to reasonably bound x, what sort of bounds might you have?  Do you have any additional information about A, B, C, D, E, or can they be arbitrary / very messy?  How fast is fast enough (e.g., are you hoping to solve this equation thousands of times per second, or once a minute?)?  Eric.  131.215.45.106 (talk) 19:50, 13 October 2008 (UTC)


 * We can use
 * $$|E|^x = |Cx + D + AB^x| \geq |Cx| - |D| - |AB^x| = |C| \cdot |x| - |D| - |A| \cdot |B|^x $$
 * $$x \ln |E| \geq \ln (|C| \cdot |x| - |D| - |A| \cdot |B|^x)$$
 * to get a lower bound on x, assuming |E| > 1. If |E| > |B|, then similarly
 * $$x \ln |E| \leq \ln (|A| \cdot |B|^x + |C| \cdot |x| + |D|)$$
 * should yield an upper bound on x. Although writing an algorithm to find these lower/upper bounds seems rather difficult, I think the computation time would be sort of reasonable.  Eric. 131.215.45.106 (talk) 20:03, 13 October 2008 (UTC)


 * Hi, this is part of the collision detection code, so several hundred times a second. At the moment, 0 < x <= 1, but other bounds are possible if they're more convenient. As for accuracy, it's not a great concern. A relative error of 5% would be acceptable. B and E have magnitude 1; the rest are entirely arbitrary. 79.78.73.72 (talk) 20:13, 13 October 2008 (UTC)


 * Oh, also $$B^x$$ is ambiguous; for example, if B = 1 and x = 0.25, then $$B^x$$ could be $$\pm 1, \pm i$$.  If x is irrational then the situation is rather worse;  how do you want this ambiguity resolved?  Equivalently, I need you to choose a branch of the log function, e.g., choose $$\log B \in i \cdot [0, 2\pi)$$ or $$\log B \in i \cdot [-\pi, \pi)$$, etc.  Eric.  131.215.45.106 (talk) 21:06, 13 October 2008 (UTC)
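The branch ambiguity Eric describes is easy to see numerically: every logarithm of B = 1 has the form 2πin, and each choice of n gives a different value of B^0.25 = exp(0.25 · log B). A quick sketch:

```python
import cmath

# B = 1: every branch log B = 2*pi*i*n is a valid logarithm of 1,
# and each branch gives a different value of B**x = exp(x * log B).
x = 0.25
for n in range(4):
    log_B = 2j * cmath.pi * n
    print(cmath.exp(x * log_B))   # 1, i, -1, -i (up to rounding)
```

Fixing a branch (say log B ∈ i·[−π, π)) picks out exactly one of these values.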


 * Consider the line segment L connecting D to C + D in the complex plane; this represents the values attainable by the function Cx + D for the given range of x.  Let $$\mathcal B$$ be the annulus of outer radius |A| + 1 and inner radius |A| - 1 centered at the origin (if |A| < 1, then $$\mathcal B$$ is a disc of radius |A| + 1).  If L does not intersect $$\mathcal B$$, then there is no solution x.  If a portion of L does pass through the interior of $$\mathcal B$$ (say, for $$a < x < b$$), then for every irrational x between a and b and for every positive $$\epsilon$$ there exist branches of the complex logarithm such that your equation is satisfied within an error of $$\epsilon$$.  However, if you wish to fix upon a particular branch ahead of time (which I assume you do) then approximate solutions are no longer guaranteed, and the problem becomes much more interesting.  Eric.  131.215.45.106 (talk) 21:46, 13 October 2008 (UTC)


 * Here are some more ideas... let
 * $$f(x) = Cx + D + AB^x - E^x = Cx + D + Ae^{x \log B} - e^{x \log E}$$,
 * so that we wish to find the zeroes of f. If we choose the $$\log B \in i [-\pi, \pi)$$ branch of the logarithm, we find that
 * $$f'(x) = C + A (\log B) e^{x \log B} - (\log E) e^{x \log E}$$
 * $$|f'(x)| \leq |C| + |A| |\log B| + |\log E| $$
 * $$\leq |C| + (|A| + 1) \pi = M $$.
 * Therefore we can easily find an upper bound on the rate of change in f. This allows us to easily determine that f is bounded away from 0 for large intervals of x;  for example, if we discover that |f(0.5)| = 0.1 M, this tells us that $$f(x) \neq 0$$ for all $$x \in (0.4, 0.6)$$.
 * So, I would approach the problem roughly as follows. You wish to solve for x several hundred times a second;  with a fast computer, you get an allotment of a million "simple" computations for each time you wish to solve for x (assuming that you have close to a billion "simple" computations available per second).  If evaluating f takes 100 "simple" computations, then that means we may evaluate f 10000 times within our allotted time.
 * First, we can easily narrow down the range of x to be considered. Construct that line segment L I mentioned above, and the annulus $$\mathcal B$$, and find their intersection;  this amounts to finding points of line-circle intersection, which is a minor hassle to program but computationally trivial.  This intersection gives us a reduced range (or possibly a pair of ranges) of the form [a, b];  we know that all solutions to f(x) = 0 will occur for x within that range.
 * Now, since we can evaluate f 10000 times, evaluate f for (say) 1000 points uniformly distributed in the interval [a, b]. Each time we evaluate f(x), with $$x \in [a, b]$$, this evaluation gives us a range of numbers in which we know that no solutions exist:
 * If $$x - \frac {|f(x)|}M < y < x + \frac {|f(x)|}M$$, then $$f(y) \neq 0$$.
 * Assuming that M is reasonably small (but A and C are arbitrary, so we can't really assume that), we should be able to eliminate most or all of the interval [a, b] as candidates for solutions to f(x) = 0, leaving us with a small number of intervals [a',b'] where the zero or more roots x may lie. Since we still have 9000 evaluations of f left available, we repeat the process 9 more times to get highly accurate values of the roots of f.
 * Making this general approach workable would likely require carefully coding up the algorithm and examining its behavior in practice for actual inputs; furthermore, it depends on having reasonably bounded inputs, as M must be "small" for the approach to work.
 * If M is not small enough for the above to be feasible, but close, then there are obvious ways to improve on the above, though possibly at the cost of being rather difficult to code.
 * Clearly my proposed idea owes much to Newton's method. Unfortunately Newton's method only works as described in that article for functions from real numbers to real numbers, or complex numbers to complex numbers, but not real numbers to complex numbers.  If you treat f as a function from complex numbers to complex numbers, then doing Newton's method (which, by the way, is far computationally faster than what I was suggesting above) will find complex roots to f without respect to your interest only in roots lying in the real interval (0, 1], and furthermore may have undesirable convergence properties.  Since f, in general, will have an infinite number of roots in the complex plane, the direct approach of using Newton's method to find all roots and checking if any of them are of interest is not possible, although some variation on the idea may work.
 * See root-finding algorithm for more ideas. Eric.  131.215.159.226 (talk) 10:27, 15 October 2008 (UTC)
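A minimal sketch of the sample-and-refine idea described above (not Eric's exact algorithm: this version just repeatedly zooms in on the grid sample minimizing |f|, using the principal branch of the logarithm, and the test constants below are contrived so that f(x) reduces to x − 0.5):

```python
import cmath

def find_real_root(A, B, C, D, E, a=0.0, b=1.0, samples=200, rounds=8, tol=1e-6):
    """Sample-and-zoom search for a real x in [a, b] with
    f(x) = A*B**x + C*x + D - E**x close to zero (principal log branch).
    Returns an approximate root, or None if no near-root is found."""
    logB, logE = cmath.log(B), cmath.log(E)
    f = lambda x: A * cmath.exp(x * logB) + C * x + D - cmath.exp(x * logE)
    for _ in range(rounds):
        xs = [a + (b - a) * i / samples for i in range(samples + 1)]
        best = min(xs, key=lambda x: abs(f(x)))
        h = (b - a) / samples
        a, b = max(a, best - h), min(b, best + h)   # zoom in around best sample
    return best if abs(f(best)) < tol else None

# Contrived check: with A=1, B=E=i, C=1, D=-0.5, the exponentials cancel
# and f(x) = x - 0.5 exactly, so the root is x = 0.5.
print(find_real_root(1, 1j, 1, -0.5, 1j))   # ≈ 0.5
```

This does no interval elimination with the Lipschitz bound M, so it can miss roots when f has several; it is only meant to show the sampling skeleton.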

cardinality of unions
hi, I know that: $$|A \cup B |= |A|+ |B|- |A \cap B|$$
 * what is the rule for 3 sets? (i.e. $$|A \cup B \cup C| = ?$$)
 * what is the rule for n-sets?

thx, 87.68.26.172 (talk) 20:59, 13 October 2008 (UTC)
 * See inclusion-exclusion principle. --Trovatore (talk) 21:01, 13 October 2008 (UTC)
 * thank you87.68.26.172 (talk) 21:13, 13 October 2008 (UTC)
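For three sets, the inclusion-exclusion principle linked above gives |A ∪ B ∪ C| = |A| + |B| + |C| − |A ∩ B| − |A ∩ C| − |B ∩ C| + |A ∩ B ∩ C|; a quick check with small finite sets:

```python
A, B, C = {1, 2, 3, 4}, {3, 4, 5}, {4, 5, 6, 7}

lhs = len(A | B | C)
rhs = (len(A) + len(B) + len(C)
       - len(A & B) - len(A & C) - len(B & C)
       + len(A & B & C))
print(lhs, rhs)   # 7 7
```

For n sets the pattern continues: add the sizes of all odd-order intersections and subtract all even-order ones.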


 * Note also that the rule above does not necessarily apply to infinite cardinalities. For infinite sets (for example, non-degenerate intervals of real numbers) and Cantor's notion of cardinality (instead of some finite measure, like an interval length or a probability measure), you would get a subtraction of infinities, which is undefined. --CiaPan (talk) 05:20, 14 October 2008 (UTC)


 * You're fine with infinities as long as you change all the subtractions to additions on the other side. Given the trivial nature of infinite cardinal addition, though, the result isn't very interesting. Algebraist 09:11, 14 October 2008 (UTC)