Wikipedia:Reference desk/Archives/Mathematics/2010 November 11

= November 11 =

Irrational power
Is there a more accurate way to calculate an irrational power of a number than to approximate the exponent with a fraction (rational number), take the [denominator]th root, and raise it to the [numerator]th power? 24.92.78.167 (talk) 00:06, 11 November 2010 (UTC)


 * I'm not certain what you're saying, but if you want lots more digits than a standard calculator gives, just look for a high-precision one on the web and stick in the figures. The first one I looked at said it did 50 figures, which is way more accuracy than even most mathematicians will know what to do with. Dmcq (talk) 01:46, 11 November 2010 (UTC)


 * Irrational exponents are often defined by first defining rational powers and then taking the irrational case to be the continuous extension of the rational case. That is, irrational powers are defined to be the limit of rational power approximations, as the approximations approach the irrational exponent. There's no way to get more accurate than the definition.... I'm also puzzled by what you might have meant. 67.158.43.41 (talk) 02:31, 11 November 2010 (UTC)

I mean this: to evaluate, say, $$2^\sqrt{2}$$ we can approximate $$\sqrt{2} \approx 1.41421$$, and so $$2^\sqrt{2} \approx 2^{\frac{141421}{100000}}=(\sqrt[100000]{2})^{141421}$$. I know we can approximate these numbers closely enough for all practical purposes, but is there a way to find the exact value? Thanks 24.92.78.167 (talk) 02:40, 11 November 2010 (UTC)
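A quick numerical sketch of the scheme just described (plain Python floats, so only about 15 significant digits): truncate $$\sqrt{2}$$ to k decimal places and compute the corresponding rational power of 2, watching the error shrink as k grows.

```python
import math

target = 2 ** math.sqrt(2)  # what the rational-power approximations should approach

# Truncate sqrt(2) to k decimal places, i.e. use the fraction n / 10^k,
# and compute the (10^k)-th root of 2 raised to the n-th power.
for k in range(1, 6):
    n = int(math.sqrt(2) * 10 ** k)      # numerator of the truncated fraction
    approx = (2 ** (1 / 10 ** k)) ** n   # ((10^k)-th root of 2)^n
    print(k, approx, abs(approx - target))
```

Each extra decimal place of the exponent buys roughly one more correct digit of the result, which is the "limit of rational powers" definition in action.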
 * Well, in special cases, sure. For example you can certainly get closer to $$2^{\log_2 3}$$ directly than you could get by using rational approximations to $$\log_2 3$$.  I don't see any obvious better way of getting $$2^\sqrt{2}$$, though. --Trovatore (talk) 02:42, 11 November 2010 (UTC)


 * (ec) Well, there's no way to write down a decimal expansion of the exact value. But the same is true of $$\sqrt{2}$$.  For all irrational numbers, the decimal expansion goes on forever and never repeats.  But, writing $$2^\sqrt{2}$$ or $$\sqrt{2}$$ is writing down the exact value, because we have defined exactly what these expressions mean.  If you're more comfortable with $$\sqrt{2}$$, that's probably just because it's more familiar.  One nice way to rewrite messy exponents is: $$a^b = e^{b\ln a}$$.  (That's how calculators would compute the value for you, actually.)  One thing I have to add for fun-- it is possible to write down the exact value of the decimal expansion of $$(2^\sqrt{2})^\sqrt{2}$$. 140.114.81.55 (talk) 02:51, 11 November 2010 (UTC)


 * According to some perspectives, $$a^b = e^{b\ln a}$$ is how $$a^b$$ is defined, if b is not rational. At least according to my calculus textbook, prior to the definition of ln x through the integral of 1/x, there wasn't any operation defined over all the real numbers which functioned like exponentiation, so taking something to an irrational power was nonsensical. The inverse of ln x had the same properties as exponents, assuming a particular irrational base (henceforth called e), and so was taken as the definition of exponentiation to irrational powers. Irrational powers of other bases followed from the basic properties of powers and logarithms. - 174.31.204.207 (talk) 16:34, 11 November 2010 (UTC)


 * You may be interested in computable numbers. What precisely do you mean by "find[ing] the exact value"? Writing the full decimal expansion? Having a way to compute the full decimal expansion to any desired precision? Something else? 67.158.43.41 (talk) 04:11, 11 November 2010 (UTC)

Try http://www.wolframalpha.com/input/?i=2^(sqrt+2). Bo Jacoby (talk) 22:00, 11 November 2010 (UTC).

Direct sum
Let f:A->B be a map between projective modules. Is A the direct sum of ker(f) and im(f)? Money is tight (talk) 05:03, 11 November 2010 (UTC)


 * Provided $$\text{im}(f)$$ is projective, the short exact sequence $$0 \to \ker(f) \to A \to \text{im}(f) \to 0$$ splits (so that A would be the direct sum of ker(f) and im(f)). However, B being projective does not necessarily imply $$\text{im}(f)$$ is projective, though I don't have an example on hand.  I doubt A being projective makes any difference.  Eric.  82.139.80.73 (talk) 05:37, 11 November 2010 (UTC)
 * You're right that A being projective doesn't make a difference. Take a projective B with a non-projective submodule K, and consider the composite f:F(K)->K->B, where F(K) is the free module on K generators. If the sequence splits, then K must be projective, because direct summands of projectives are projective.
 * My real question was the following: can every map ker(f)->C be extended to a map A->C? This is implied by my above question, but I bet it's false as well (I have very bad intuition in algebra). Money is tight (talk) 07:11, 11 November 2010 (UTC)
 * It may interest you to know we have a sort-of converse. Suppose A is any module, and K is a submodule of A.  Suppose any map $$K \to C$$ can be extended to $$A \to C$$.  Then A is the direct sum of K and A/K.  For consider the map $$K \to (K \times A / K)$$ given by $$k \mapsto (k, 0)$$, and extend it to a map $$g : A \to (K \times A/K)$$.  Although g is not necessarily an isomorphism, we can see that A is the direct sum of K and $$g^{-1}(0 \times A / K)$$, from which $$A = K \oplus A / K$$ follows with a little work.
 * As for your question, I suspect that once you find an example of a projective module which has a non-projective submodule, you will be able to construct a counterexample to your original question. Wikipedia says there exist such modules but doesn't give any examples, and I can't think of how one might be constructed.  Eric.  82.139.80.197 (talk) 04:11, 12 November 2010 (UTC)
 * Hmm, as for your "Suppose any map $$K \to C$$ can be extended to $$A \to C$$. Then A is the direct sum of K and A/K.", wouldn't this proof be faster:
 * Consider the identity $$1 : K \to K$$ and extend it to a map $$f : A \to K$$; then $$fi=1$$, where i is the inclusion of K into A, so the short exact sequence $$0 \to K \to A \to A/K \to 0$$ splits.
 * This will also show my second question has answer "no": suppose it's true. Take a projective module A and a non-projective submodule K (I'll take Wikipedia's word on this one). Consider the map $$f : F(K) \to K \to A $$, where F(K) is the free module on K generators. Now by the above lemma, if maps from ker(f) always extend, we have $$F(K) = \ker(f) \oplus K $$. But this is impossible, because direct summands of projectives are projective, contradicting our assumption that K isn't projective. Thanks for your help. Money is tight (talk) 09:57, 12 November 2010 (UTC)
 * Yeah, that's an easier proof, and I agree with your argument that (assuming Wikipedia is right) the answer to the original question is "no". Eric. 82.139.80.197 (talk) 17:01, 12 November 2010 (UTC)
 * To find an example: ideals are just submodules of the simplest free module, the ring itself. One characterization of Dedekind domains (one-dimensional, noetherian, integrally closed domains; friendly rings like Z, Z[i], C[x]) is that they are the domains in which all ideals are projective modules, and in which all torsion-free modules (in particular, submodules of projective or free modules) are sums of rank-one projectives. So you aren't going to find an example there. Look at (ideals in) higher-dimensional rings or non-domains. John Z (talk) 13:12, 13 November 2010 (UTC)
 * I guess the smallest (cardinality-wise) example one can construct is $$\mathbb Z/4$$ as a module over itself, with submodule $$ 2(\mathbb Z/4) \simeq \mathbb Z/2 $$. Aenar (talk) 13:31, 13 November 2010 (UTC)
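To spell out why this example works (a sketch of the standard argument, filling in the step the thread leaves implicit): over $$R = \mathbb Z/4$$, the submodule $$K = 2(\mathbb Z/4) \simeq \mathbb Z/2$$ is not projective.

```latex
% The multiplication-by-2 map gives a short exact sequence of Z/4-modules:
0 \longrightarrow 2(\mathbb{Z}/4) \longrightarrow \mathbb{Z}/4
  \xrightarrow{\;\cdot 2\;} 2(\mathbb{Z}/4) \longrightarrow 0
% If 2(Z/4) ~ Z/2 were projective, this sequence would split, giving
%   Z/4 \cong Z/2 \oplus Z/2   as abelian groups.
% But Z/4 has an element of additive order 4, while every element of
% Z/2 + Z/2 has order at most 2 -- contradiction.
```

So the inclusion $$2(\mathbb Z/4) \hookrightarrow \mathbb Z/4$$ plays the role of the non-projective submodule K of a projective (indeed free) module asked about above.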

growth versus "now"
Suppose I have a time-discrete function C(t+1) = gC(t). The parameters can be set up in such a way that x% of the function goes to "growth" and y% goes to "present benefit". Thus B(t) = (1-g)*C(t). Of course, growth has no utility while benefit does. At 100% growth, the function doubles.

Assuming the function is allowed to proceed indefinitely, is there a cutoff point for the growth allocation where the accumulated benefit, as t goes to infinity, for some high growth rate will actually be less than the accumulated benefit for some function with a low growth rate?

I notice that at 100% growth and 0% benefit, B(t) = 0 for all t, so NO benefit accumulation occurs, so clearly 100% growth is not an optimum point, and 99% growth will give you more benefit at t=infinity. Is 100% a singularity, or is there an optimum somewhere?

I suppose the proportion must stay the same at all times. How do things change if the proportions are variable, or if 100% growth corresponds to multiplying the current function value by 1.5 or 4 to get the next function value?

Does this problem have a name?John Riemann Soong (talk) 07:18, 11 November 2010 (UTC)
 * First you'll have to define what your target is. Any $$0\le g<1$$ will give you an infinite accumulated benefit. You need a way to distinguish different outcomes.
 * For any reasonable such way which preserves the idea that you're only interested in the infinite run, you'll find that for $$g_1 < g_2 < 1$$, the accumulated benefit with $$g_2$$ eventually exceeds that with $$g_1$$.
 * Ramsey growth model might be relevant. -- Meni Rosenfeld (talk) 09:23, 11 November 2010 (UTC)


 * If this were a finance problem, you would choose whichever values gave you the highest NPV. As this continues into the infinite future, the Gordon model might apply. It also reminds me of the St. Petersburg paradox. It would be interesting to see this illustrated graphically, for different parameters. Why limit growth to 100%? Growth is unlimited. 92.29.120.164 (talk) 14:33, 12 November 2010 (UTC)
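The crossover behavior described in the thread can be illustrated directly. A minimal simulation, under my reading of the question's model (the function name and the recursion C(t+1) = (1+g)·C(t), chosen so that g = 1 doubles the function, are my assumptions):

```python
def cumulative_benefit(g, steps, c0=1.0):
    """Total benefit after `steps` periods with growth allocation g in [0, 1].

    Assumed model (my reading of the question): C(t+1) = (1 + g) * C(t),
    so 100% growth doubles the function, and per-period benefit is
    B(t) = (1 - g) * C(t).
    """
    c, total = c0, 0.0
    for _ in range(steps):
        total += (1 - g) * c  # consume the benefit share this period
        c *= 1 + g            # reinvest the growth share
    return total

# A low allocation wins early, but any higher g < 1 overtakes it eventually,
# while g = 1 yields zero benefit forever:
print(cumulative_benefit(0.1, 1), cumulative_benefit(0.9, 1))
print(cumulative_benefit(0.1, 30), cumulative_benefit(0.9, 30))
print(cumulative_benefit(1.0, 30))
```

This matches the replies above: for any horizon there is an optimal g, but in the infinite run every g < 1 is beaten by a larger g < 1, with a discontinuous drop to zero at g = 1.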

Calculating the central angle in polyhedra
Hello, I asked a question on the science reference desk here: Reference_desk/Science regarding how to calculate the bond angle between a central atom and its nearest neighbours in polyhedral structural units. I was referred here since it is essentially a geometry question. The article Molecular_geometry states that the central angle for a tetrahedral unit is cos^-1(-1/3) ≈ 109.47 degrees. My question is: how does one obtain this formula? I am looking for a reference that clearly and concisely describes how the central angle can be calculated in various polyhedra, not just a tetrahedron. For example, how would I calculate the central angle for a cuboctahedron? Or even the simpler octahedron or cubic cases? I'm certain this is described in some fundamental geometry textbook or maybe an online resource somewhere, but so far I have not been able to find a reference that describes the necessary procedure. Many thanks in advance. 88.219.44.165 (talk) 11:01, 11 November 2010 (UTC)
 * The first step is to find the coordinates u and v of two adjacent vertices, with the center of the polyhedron at the origin. Then the angle is given by
 * $$\cos\alpha = \frac{u\cdot v}{\|u\|\cdot\|v\|}$$
 * where $$u\cdot v$$ is the dot product and $$\|u\|$$ is the norm. For the cuboctahedron, taking vertices (1, 1, 0) and (1, 0, 1) you have
 * $$\cos\alpha = \frac{(1,1,0)\cdot(1,0,1)}{\|(1,1,0)\|\cdot\|(1,0,1)\|} = \frac{1}{\sqrt{2}\sqrt{2}}=\frac12$$
 * So the angle is 60°. -- Meni Rosenfeld (talk) 13:58, 11 November 2010 (UTC)
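The recipe above is mechanical enough to script; a small sketch (pure standard-library Python, vertex coordinates are the usual textbook ones for solids centered at the origin):

```python
import math

def central_angle(u, v):
    """Angle at the origin between vertex vectors u and v, in degrees."""
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(dot / norms))

# Cuboctahedron: the two adjacent vertices from the reply above -> 60 degrees
print(central_angle((1, 1, 0), (1, 0, 1)))

# Tetrahedron: alternate corners of a cube centered at the origin
# -> arccos(-1/3), approximately 109.47 degrees
print(central_angle((1, 1, 1), (1, -1, -1)))

# Octahedron: adjacent vertices on the coordinate axes -> 90 degrees
print(central_angle((1, 0, 0), (0, 1, 0)))
```

The tetrahedral case shows where cos^-1(-1/3) comes from: the dot product of (1,1,1) and (1,-1,-1) is -1 while each norm is √3, giving cos α = -1/3.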


 * The central angles, and many other properties, of various polyhedra are considered at K.J.M. MacLean, A Geometric Analysis of the Five Platonic Solids and Other Semi-Regular Polyhedra →86.132.164.178 (talk) 13:33, 13 November 2010 (UTC)

Commutativity property on "two-stage" transformations
I'm trying to prove a property about transformations on a set of objects G (labelled directed graphs in my particular case, but not necessarily relevant), and got stumped, so was hoping for help. One property of the transformations I'm looking at is that they are split into two-stages; that is, transformations are functions $$T: G \rightarrow \Delta$$ where $$\Delta$$ is then the set of functions $$\delta: G \rightarrow G$$. So for a given $$g \in G$$, you might compute $$t=T(g)(g)$$, where $$t \in G$$. I've been using $$T^+(g)$$ as shorthand for $$T(g)(g)$$.

I've also got a notion of dependence or independence between two transformations. Given two transformations $$T_1$$ and $$T_2$$, $$T_2$$ is dependent on $$T_1$$ if $$T_2 \circ T_1^+ \neq T_2$$. If $$T_1$$ is not dependent on $$T_2$$ and vice versa, then they are independent.

The property I'd like to prove is this: if $$T_1$$ and $$T_2$$ are independent, then $$T_1^+ \circ T_2^+ = T_2^+ \circ T_1^+$$; or, failing that, what additional conditions I might need on $$T_1$$ and $$T_2$$ to ensure that it is true.

Thanks in advance for any help! &mdash; Matt Crypto 16:40, 11 November 2010 (UTC)
 * Hmm. Unwinding the definitions, you are assuming that $$T_1((T_2(x))(x))=T_1(x)$$ and $$T_2((T_1(x))(x))=T_2(x)$$ for every $$x\in G$$, and you want to infer that $$(T_1((T_2(x))(x)))((T_2(x))(x))=(T_2((T_1(x))(x)))((T_1(x))(x))$$. Is that correct? If so, then the property does not hold in general: for a counterexample, take G = {1,2}, and define $$(T_1(x))(y) = 1$$ and $$(T_2(x))(y) = 2$$ for every x and y.


 * Under your assumptions, the identity you want simplifies to $$(T_1(x))((T_2(x))(x))=(T_2(x))((T_1(x))(x))$$, so you might want to use that as an additional assumption.—Emil J. 17:07, 11 November 2010 (UTC)
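The counterexample above is small enough to verify mechanically. A sketch encoding transformations as curried functions (the helper names `plus`, `fn_equal`, and `independent` are mine, not from the thread):

```python
G = [1, 2]

# The counterexample: both transformations ignore their arguments entirely.
T1 = lambda x: (lambda y: 1)
T2 = lambda x: (lambda y: 2)

def plus(T):
    """T^+(g) = T(g)(g), collapsing the two stages."""
    return lambda x: T(x)(x)

def fn_equal(f, g):
    """Compare two functions G -> G pointwise."""
    return all(f(x) == g(x) for x in G)

def independent(S, T):
    """S is not dependent on T, i.e. S o T^+ = S (equality pointwise in Delta)."""
    Tp = plus(T)
    return all(fn_equal(S(Tp(x)), S(x)) for x in G)

# T1 and T2 are independent of each other...
print(independent(T1, T2) and independent(T2, T1))  # True

# ...yet T1^+ and T2^+ do not commute: one always lands on 1, the other on 2.
print(all(plus(T1)(plus(T2)(x)) == plus(T2)(plus(T1)(x)) for x in G))  # False
```

Since each $$T_i(x)$$ is the same constant function for every x, both independence conditions hold vacuously, while $$T_1^+ \circ T_2^+$$ is constantly 1 and $$T_2^+ \circ T_1^+$$ is constantly 2.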
 * Thanks for taking the time to unravel the definitions -- and you were quite correct in working out what I meant -- and for the counterexample (pretty obvious now you point it out!). I'll go and think some more, but just in case it triggers any further suggestions, I'll share a little more about my specific problem domain. G is a set of labelled directed graphs, and the transformations $$T_i$$ are rules which find matches of a pattern graph in an input graph and produce a graph edit as output. A graph edit is a consistent set of atomic edits on the input graph -- remove particular specified edges and vertices, relabel vertices, add edges and previously unseen vertices. You can also apply a graph edit to a different graph than that with which it was created, and the effect is that it simply ignores any atomic edits that don't apply to the given graph. I've got a method of calculating dependencies (with the meaning given above) between rules, and, assuming that the dependency graph is a DAG, I wanted to show that if I executed the rules in any topological order then the output graph would be the same. &mdash; Matt Crypto 19:11, 11 November 2010 (UTC)