Wikipedia:Reference desk/Archives/Mathematics/2011 September 27

= September 27 =

Multiply 3-dimensional matrices
I was wondering if there was a defined method for this sort of thing. I would guess that it would be something like this: if you had a 3x3x3 matrix A, with these slices (looking at it straight on, going from front to back)

$$ F_{1} = \left[ \begin{array}{ccc} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{array} \right] \quad F_{2} = \left[ \begin{array}{ccc} 10 & 11 & 12 \\ 13 & 14 & 15 \\ 16 & 17 & 18 \end{array} \right] \quad F_{3} = \left[ \begin{array}{ccc} 19 & 20 & 21 \\ 22 & 23 & 24 \\ 25 & 26 & 27 \end{array} \right] $$

and these slices looking at it from the right, from back to front (same matrix)

$$ R_{1} = \left[ \begin{array}{ccc} 1 & 10 & 19 \\ 4 & 13 & 22 \\ 7 & 16 & 25 \end{array} \right] \quad R_{2} = \left[ \begin{array}{ccc} 2 & 11 & 20 \\ 5 & 14 & 23 \\ 8 & 17 & 26 \end{array} \right] \quad R_{3} = \left[ \begin{array}{ccc} 3 & 12 & 21 \\ 6 & 15 & 24 \\ 9 & 18 & 27 \end{array} \right] $$

then you would do something like this to multiply it by itself:

$$ F_{1} \cdot R_{1} + F_{1} \cdot R_{2} + F_{1} \cdot R_{3} \text{ becomes the new } F_{1} $$
$$ F_{2} \cdot R_{1} + F_{2} \cdot R_{2} + F_{2} \cdot R_{3} \text{ becomes the new } F_{2} $$
$$ F_{3} \cdot R_{1} + F_{3} \cdot R_{2} + F_{3} \cdot R_{3} \text{ becomes the new } F_{3} $$

I don't know if this works (I just extended the rows and columns to 3x3 matrices) or if you need to multiply even more matrices, but I was wondering if you guys had any thoughts? Aacehm (talk) 00:40, 27 September 2011 (UTC)


 * You may be looking for tensors. Bobmath (talk) 02:34, 27 September 2011 (UTC)
 * Agree with Bobmath. There is a tensor product which works differently from the scheme you present above. You could make up your own product rule for 3-dimensional matrices but it might not be useful for anything. Tensors can be given a physical interpretation. See Tensor. EdJohnston (talk) 02:46, 27 September 2011 (UTC)
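For what it's worth, the rule proposed in the question is easy to try out numerically. A NumPy sketch (the array layout and the names A, R, new_F are just one way to encode the front/right slice conventions described above):

```python
import numpy as np

# Build the 3x3x3 array from the question: A[k] is the front slice F_{k+1}.
A = np.arange(1, 28).reshape(3, 3, 3)

# Right-hand slices as described: R_j collects column j of each front slice.
R = [A[:, :, j].T for j in range(3)]  # R[j][r][k] == A[k][r][j]

# The proposed product rule: new F_i = sum over j of F_i @ R_j,
# which is the same as F_i @ (R_1 + R_2 + R_3).
new_F = [sum(A[i] @ R[j] for j in range(3)) for i in range(3)]
```

This confirms the scheme is well defined, though as the replies note it is an ad hoc rule rather than a standard tensor operation.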

Easy way to judge if a holomorphic function is an injection and/or has no branch points
Suppose I have a function that I know is holomorphic almost everywhere, and I know its power series expansion about some point, but further that this series doesn't converge everywhere and thus the function has singularities. Is there an easy way to judge: (a) whether the function is an injection, or otherwise; and (b) if the function has a branch point. The reason I'm interested is that I'd like to try it for a global-optimization procedure (inverting the series, which obviously will work properly only if the function is injective; I think the branch point stuff could mess with it as well): as far as I know, Newton-type methods are local-optimization ones, not global. Thanks. --Leon (talk) 06:06, 27 September 2011 (UTC)


 * Or, and excuse me if my moment of realization is but a moment of madness: does a function not being an injection imply that its inverse will have a branch point, and vice versa? And further, will this method only work with bijective functions, those being but the Möbius transformations?


 * In any case, are there any rules relating the presence of singularities/derivatives equal to zero/series expansions at particular points? To cut to the chase, is there any mileage to be had in my suggested method of finding a global minimum?--Leon (talk) 06:21, 27 September 2011 (UTC)
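As a concrete sketch of the injectivity/branch-point connection raised above (not from the thread): $$f(z) = z^2$$ is holomorphic but not injective, and its inverse, the square root, is two-valued with a branch point at 0, which shows up numerically as a sign flip across the branch cut:

```python
import cmath

# f(z) = z**2 is holomorphic but not injective: z and -z share an image,
# so the inverse (the square root) is two-valued, with a branch point at 0.
z = 1 + 2j
assert z**2 == (-z)**2

# cmath.sqrt returns the principal branch; crossing the negative real axis
# (the branch cut) flips the sign of the returned root.
above = cmath.sqrt(-4 + 1e-12j)  # just above the cut: close to  2j
below = cmath.sqrt(-4 - 1e-12j)  # just below the cut: close to -2j
```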

Are polynomial functions invertible
I apologize if it is too trivial a question, but is an arbitrary polynomial invertible as a function if the domain is suitably restricted? It seems to me they are, but I can't quite justify it. Can someone explain why? Thanks -Shahab (talk) 07:27, 27 September 2011 (UTC)


 * Doesn't the fundamental theorem of algebra state that, over the complex plane, every polynomial function is a surjection? If so, every polynomial function has an inverse, albeit, in general, a non-unique one, and thus the most general inverse will be a many-valued function (so not strictly a function).--Leon (talk) 08:02, 27 September 2011 (UTC)
 * Strictly speaking, the FToA does not state, but implies that. Although it is a relatively trivial implication. — Fly by Night  ( talk )  20:14, 29 September 2011 (UTC)


 * (Assuming a real domain:) Polynomials only have a finite number of critical points. Near any other point you can restrict the domain so that there is a smooth inverse using the inverse function theorem. Staecker (talk) 11:42, 27 September 2011 (UTC)
 * (Over the complex plane:) Polynomials only have a finite number of critical points. The derivative of a polynomial is a polynomial. — Fly by Night  ( talk )  19:52, 7 October 2011 (UTC)


 * Polynomial maps (as opposed to functions) are invertible when the Jacobian is nonzero in dimensions 1 and 2. The higher-dimensional case is believed to be true, and is known as the Jacobian conjecture. Sławomir Biały  (talk) 11:14, 28 September 2011 (UTC)
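The real-domain answer can be made concrete (an illustrative sketch; the polynomial and interval are arbitrary choices): $$p(x) = x^3 - 3x$$ has critical points at $$x = \pm 1$$, so it is strictly monotone on $$[1, \infty)$$ and can be inverted there, e.g. by bisection:

```python
# Sketch: invert a polynomial on an interval free of critical points.
# p(x) = x**3 - 3x has p'(x) = 3x**2 - 3, so its critical points are x = ±1
# and p is strictly increasing on [1, 10].
def p(x):
    return x**3 - 3*x

def p_inverse(y, lo=1.0, hi=10.0):
    """Invert p on [lo, hi] by bisection (valid because p is monotone there)."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if p(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = p_inverse(p(2.0))  # recovers 2.0
```

On an interval containing a critical point (say $$[-1, 1]$$) this would fail, matching the inverse function theorem argument above.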

ultrafilters
Can someone give me an example of an ultrafilter which is not principal? — Preceding unsigned comment added by 93.173.34.90 (talk) 10:37, 27 September 2011 (UTC)


 * Yes, but it will require some form of the axiom of choice, meaning it won't be easy to visualize. Are you familiar with Zorn's lemma? From that, you can show that every filter is contained in a maximal filter, and it's not hard to see that a maximal filter is an ultrafilter. So begin with the filter of all subsets of N which have finite complement (it's not hard to check that this is a filter on N). Then by Zorn, there is an ultrafilter containing this filter. Since it contains all cofinite sets, it cannot contain any finite sets, so it cannot be principal.--Antendren (talk) 10:46, 27 September 2011 (UTC)

In a sense, nobody can show you such an example. The set of all cofinite subsets of an infinite set is a filter. Now look at some infinite subset whose complement is also infinite. Decide whether to add it or its complement to the filter. Once you've included it, all of its supersets are included and all subsets of its complement are excluded, and all of its intersections with sets already included are included, and all unions of its complement with sets already excluded are excluded, etc. Next, look at some infinite subset whose complement is also infinite and that has not yet been included or excluded, and make the same decision. And keep going... "forever". That last step is where Zorn's lemma or the axiom of choice gets cited. Michael Hardy (talk) 17:32, 27 September 2011 (UTC)
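For reference, the starting filter in both answers is the cofinite (Fréchet) filter on $$\mathbb{N}$$, and the "decide for each set" step is exactly the ultrafilter condition:

$$ \mathcal{F} = \{ A \subseteq \mathbb{N} : \mathbb{N} \setminus A \text{ is finite} \}, \qquad \mathcal{U} \text{ is an ultrafilter} \iff \text{for every } A \subseteq \mathbb{N}, \text{ either } A \in \mathcal{U} \text{ or } \mathbb{N} \setminus A \in \mathcal{U}. $$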

I see. Thank you Both! — Preceding unsigned comment added by 93.173.34.90 (talk) 17:41, 27 September 2011 (UTC)

Explicit Runge-Kutta methods
What is the highest possible order of an explicit Runge-Kutta method? --84.62.204.7 (talk) 20:27, 27 September 2011 (UTC)

What is the highest order of a known explicit Runge-Kutta method? --84.62.204.7 (talk) 12:58, 28 September 2011 (UTC)


 * WP:RDTROLL No such user (talk) 13:53, 29 September 2011 (UTC)

Quick question on exponentials
Hi wikipedians: I'm no mathematician, and I came across a formula in a paper I'm reading that I can't make sense of. Could someone help me with this? It says that for small values of p:

$$ (1-p)^N \approx e^{-Np} $$

Why is this? Any help would be appreciated. I don't have a digital copy of the paper or I would post a link. Thanks! Registrar (talk) 21:12, 27 September 2011 (UTC)


 * Because $$\log\left((1-p)^N\right) = N \log(1-p) \approx -N p$$. The approximation step simply replaces the log function with a tangent line. 130.76.64.109 (talk) 21:20, 27 September 2011 (UTC)


 * Alternatively, if you expand $$(1-p)^N$$ with the binomial theorem, the first two terms are $$1 - Np$$. All the rest have $$p^2$$ in them, so since $$p$$ is small, the remaining terms are tiny.  Simultaneously, the first two terms of the power series for $$e^x$$ are $$1+x$$, so plugging in $$(-Np)$$ for that gives $$1 - Np$$.--Antendren (talk) 21:25, 27 September 2011 (UTC)

Thanks both of you! The theory behind the first explanation isn't perfectly clear to me, but I can see from graphing $$\log(1-x)$$ that it works. The second explanation makes perfect sense. So thanks very much. Registrar (talk) 21:37, 27 September 2011 (UTC)


 * Glad you're happy. Note that the second explanation depends on $$Np \ll 1$$. For p=0.01 and N=200, $$(1-0.01)^{200}\approx 0.1340$$ and $$e^{-200\times 0.01}\approx 0.1353$$, but $$1-Np=-1$$. 130.76.64.121 (talk) 22:36, 27 September 2011 (UTC)
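The figures quoted above are easy to reproduce (a quick check using the same p and N):

```python
import math

p, N = 0.01, 200
exact = (1 - p)**N          # (0.99)**200
approx = math.exp(-N * p)   # e**(-2)
# exact ≈ 0.1340 and approx ≈ 0.1353, matching the numbers above:
# the approximation stays decent even though N*p = 2 is not small,
# because the per-factor error log(1-p) + p is only O(p**2).
```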

The approximation is actually better than either explanation suggests, because


 * $$e = \lim_{n \rightarrow \infty} (1 + 1/n)^n$$

or in other words,


 * $$e = \lim_{p \rightarrow 0} (1 + p)^\frac{1}{p}$$


 * Looie496 (talk) 06:04, 28 September 2011 (UTC)

Series
Under what circumstances is this equality always true: $$\sum_{x=0}^\infty f(x) + \sum_{x=0}^\infty g(x) = \sum_{x=0}^\infty (f(x) + g(x))$$? Do the series $$\sum_{x=0}^\infty f(x)$$ and $$\sum_{x=0}^\infty g(x)$$ have to be absolutely convergent or just convergent? Widener (talk) 21:30, 27 September 2011 (UTC)


 * $$F(N) = \sum_{x=0}^{N} f(x) $$


 * $$G(N) = \sum_{x=0}^{N} g(x) $$

If the limits of both as N goes to infinity exist (and these limits are, by definition, the summations to infinity), then, because the sum of the limits is the limit of the sum, the summation to infinity of the sum of the two functions is equal to the sum of the two summations to infinity. Count Iblis (talk) 22:47, 27 September 2011 (UTC)
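A numerical sketch of this (the two convergent series $$2^{-x}$$ and $$3^{-x}$$ are arbitrary choices, with sums 2 and 3/2):

```python
# Partial sums of f(x) = 2**-x and g(x) = 3**-x, and of their termwise sum.
# Both limits exist, so the combined series converges to their sum.
N = 60
F = sum(2.0**-x for x in range(N + 1))          # -> 2
G = sum(3.0**-x for x in range(N + 1))          # -> 1.5
H = sum(2.0**-x + 3.0**-x for x in range(N + 1))  # -> 3.5
```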

You can also use this in case of divergent summations. Suppose e.g. that $$\sum_{n=0}^{\infty}c_{n}$$ is convergent and we write $$c_{n}= a_{n} + b_{n}$$, but both $$\sum_{n=0}^{\infty}a_{n}$$ and $$\sum_{n=0}^{\infty}b_{n}$$ are divergent. Then define the functions:


 * $$f(z) = \sum_{n=0}^{\infty}a_{n} z^{n}$$


 * $$g(z) = \sum_{n=0}^{\infty}b_{n} z^{n}$$


 * $$h(z) = \sum_{n=0}^{\infty}c_{n} z^{n}$$

If f(z) and g(z) can be analytically continued to z = 1, then h(z) = f(z) + g(z) and you can put z = 1 in here, despite the series for f(z) and g(z) not converging there. If f(z) and g(z) have poles at z = 1, then you can evaluate h(1) by computing the limit of f(z) + g(z) as z tends to 1. Count Iblis (talk) 23:16, 27 September 2011 (UTC)
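A toy instance of this (not from the thread): take $$a_n = 1$$ and $$b_n = -1$$, so every $$c_n = 0$$. Then

$$ f(z) = \sum_{n=0}^{\infty} z^n = \frac{1}{1-z}, \qquad g(z) = -\frac{1}{1-z}, \qquad h(z) = f(z) + g(z) = 0, $$

and although $$\sum a_n$$ and $$\sum b_n$$ both diverge and f and g have poles at z = 1, the limit of f(z) + g(z) as z tends to 1 is 0 = h(1).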