Wikipedia:Reference desk/Archives/Mathematics/2011 March 13

= March 13 =

Matrices Question
Okay, suppose you have an n x n matrix such that there is only one '1' in every row and every column and the rest of the matrix is filled with zeros. Take this 8 x 8 matrix, for example, that I'll call R:

[0 0 0 0 0 0 1 0]
[0 0 0 0 0 0 0 1]
[0 0 0 0 0 1 0 0]
[1 0 0 0 0 0 0 0]
[0 0 0 0 1 0 0 0]
[0 0 1 0 0 0 0 0]
[0 0 0 1 0 0 0 0]
[0 1 0 0 0 0 0 0]

If you raise R to the sixth power (R^6), you get the 8 x 8 identity matrix. For any multiple of 6, you also get the identity matrix. Now take this smaller 4 x 4 matrix, which I will call V:

[0 1 0 0]
[0 0 0 1]
[0 0 1 0]
[1 0 0 0]

V^6 yields the 4 x 4 identity matrix. For any multiple of 3 that is >= 6, you also get the 4 x 4 identity matrix.

Is there any reason for this? Any rule that can be used to describe this? — Preceding unsigned comment added by Aacehm (talk • contribs) 01:02, 13 March 2011 (UTC)
 * Not sure if this is what you're looking for, but if A^n = I, then A^(kn) = I for any positive integer k. This is because you can write A^(kn) = A^n ⋅ A^n ⋅ ... ⋅ A^n (k factors), which is I ⋅ I ⋅ ... ⋅ I = I. In your examples you have R^6 = I, and so you'll get I for any exponent that's a multiple of 6. You also have V^3 = I, and so you'll get I for any exponent that's a multiple of 3. Staecker (talk) 01:36, 13 March 2011 (UTC)
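Staecker's argument is easy to check directly. A minimal sketch in Python (my own illustration, not part of the original thread), using the 8 x 8 matrix R from the question:

```python
# Plain-Python matrix multiplication, enough to check small powers.
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, e):
    n = len(A)
    P = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    for _ in range(e):
        P = mat_mul(P, A)
    return P

R = [[0, 0, 0, 0, 0, 0, 1, 0],
     [0, 0, 0, 0, 0, 0, 0, 1],
     [0, 0, 0, 0, 0, 1, 0, 0],
     [1, 0, 0, 0, 0, 0, 0, 0],
     [0, 0, 0, 0, 1, 0, 0, 0],
     [0, 0, 1, 0, 0, 0, 0, 0],
     [0, 0, 0, 1, 0, 0, 0, 0],
     [0, 1, 0, 0, 0, 0, 0, 0]]

I = [[int(i == j) for j in range(8)] for i in range(8)]
assert mat_pow(R, 6) == I                               # R^6 = I ...
assert all(mat_pow(R, 6 * k) == I for k in (2, 3, 5))   # ... and so is R^(6k)
assert mat_pow(R, 3) != I                               # R also contains 2-cycles
```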


 * The matrices you're speaking about are called permutation matrices. When one of them is used to represent a linear transformation on column vectors, it acts by permuting the components of the argument vector in a particular pattern. Any such permutation has a finite order: repeat it that many times and its total effect is to do nothing, which is represented by an identity matrix. This is a consequence of the fact that a permutation is a product of independent cycles, which lets you calculate the exponent needed to reach the identity as the least common multiple of the cycle lengths. –Henning Makholm (talk) 02:14, 13 March 2011 (UTC)
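The lcm-of-cycle-lengths rule can be sketched in a few lines of Python (an illustration of the rule, not code from the thread); the permutations below are the 0-indexed versions of R and V from the question:

```python
from math import gcd
from functools import reduce

def lcm(a, b):
    return a * b // gcd(a, b)

def permutation_order(perm):
    """Smallest e > 0 with perm^e = identity: the lcm of the cycle lengths."""
    n = len(perm)
    seen = [False] * n
    lengths = []
    for start in range(n):
        if not seen[start]:
            length, i = 0, start
            while not seen[i]:
                seen[i] = True
                i = perm[i]
                length += 1
            lengths.append(length)
    return reduce(lcm, lengths, 1)

# R from the question, 0-indexed: cycles (0 6 3)(1 7)(2 5)(4), order lcm(3,2,2,1) = 6.
R_perm = [6, 7, 5, 0, 4, 2, 3, 1]
assert permutation_order(R_perm) == 6

# V from the question, 0-indexed: cycles (0 1 3)(2), order lcm(3,1) = 3.
V_perm = [1, 3, 2, 0]
assert permutation_order(V_perm) == 3
```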

Calculation of s in Lucas–Lehmer primality test
Hi there, I haven't been to Wiki for a long time. Nice to be back! I have been wondering about the s in Lucas–Lehmer primality test. In the article, it was proven that $$s_i = \omega^{2^i} + \bar{\omega}^{2^i}$$ where $$\omega = 2 + \sqrt{3}$$ and $$\bar{\omega} = 2 - \sqrt{3}$$. However, the calculation of $$s_i$$ involves $$\omega$$ and $$\bar{\omega}$$, neither of which is an integer. Is there a formula for each and every value of $$s_i$$, starting with $$i = 0$$, defined purely in terms of integers? Thanks. voidnature 03:11, 13 March 2011 (UTC)


 * Um, the definition of the $$s_i$$'s given first is purely in terms of integers. I suppose you don't think it is a "formula", but if we write it as $$s_i=(\lambda x.x^2-2)^i(4)$$ using Church's lambda notation, it looks perfectly like a formula to me. What exactly do you want that it doesn't have? –Henning Makholm (talk) 03:58, 13 March 2011 (UTC)
 * Thanks for such a quick reply, Henning; however, I am quite unfamiliar with Church's lambda notation. I have read the Wikipedia article, but I still don't really get it. Can you please explain some of it? By the way, by "formula" I meant that I didn't want a recursive relation but instead a single equation, such as $$n_i = 2^i$$, so that each and every value of the sequence can be computed immediately, without computing other values like $$n_{i-1}$$. Also, I am quite familiar with programming (specifically, mostly with C#.NET), and from the article about Church's lambda notation it seems there is a deep relationship between programming languages and the notation, so don't mind explaining things in terms of programming if required. Thanks!
 * voidnature 04:32, 13 March 2011 (UTC)
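For what it's worth, here is a Python rendering of the same idea (Python rather than C#, purely as an illustration): the definition is just the anonymous function x ↦ x² − 2 applied i times to 4.

```python
f = lambda x: x * x - 2      # λx.x²−2 as an anonymous function

def s(i):
    """Apply f to s_0 = 4, i times; every intermediate value is an integer."""
    x = 4
    for _ in range(i):
        x = f(x)
    return x

assert [s(i) for i in range(4)] == [4, 14, 194, 37634]
```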


 * In practical programming terms, a lambda expression is just an anonymous function. The C# syntax for $$\lambda x.x^2-2$$ would be something like "x => x*x - 2" or "(double x) => x*x - 2", I think. I'm then taking the liberty of mixing in the conventional mathematical notation of using an exponent to indicate iterating the function i times.
 * Yes, there are deep connections to programming languages. The anonymous functions in modern programming languages trace their ancestry directly to Church's work in the 1930's on logic and number theory. In fact, the lambda calculus is held in higher esteem by computer scientists than by mathematicians in general. It is the favorite theory of programming language researchers, and everyone working with process-oriented languages, or parallel languages, or network-oriented or aspect-oriented or mixin-oriented languages or whatever wishes they could find a core calculus as general and well-fitting as the lambda calculus -- something that would make their favorite feature look like a fundamental primitive of computing rather than an ad-hoc graft-on.
 * If you define "formula" as something that avoids computing, specifically, the previous s_i's, may I suggest
 * $$s_i = (\lambda x.x^2 - 4x - 2)^i(6)-2$$
 * ? Less whimsically, my point is that there is no real principled distinction between a "formula for" something and an "algorithm for computing" it. Sometimes you may be familiar with notation that allows you to write one down more succinctly than the other, but that's more a function of which notation you know, not an intrinsic property of the thing you're writing down. –Henning Makholm (talk) 05:14, 13 March 2011 (UTC)
 * No wonder the name "lambda expressions" for delegates! Thanks, but what would the argument "x" in $$s_i = (\lambda x.x^2 - 4x - 2)^i(6)-2$$ be? Also, to be honest, I posed this question because I wanted to calculate the s for a really large i without calculating the previous values. For such a huge value, it would probably be at least a few thousand times faster if the previous values were not required. Even if 100 iterations of i could always be done in 1 second, the final s would take slightly more than 1 whole year to compute. Thanks, Henning, you're great!
 * voidnature 05:26, 13 March 2011 (UTC)


 * Sorry, that should be $$(\lambda x.x^2-4x+4)$$, i.e. $$(x-2)^2$$, which my revised formula applies to each $$s_i+2$$.
 * As for finding a fast algorithm, notice that the largest known Mersenne prime has an index of only several dozen million. If there were a known algorithm for testing a candidate as fast as you desire, surely much larger Mersenne primes would be known by now (or, alternatively, the unexpected absence of Mersenne primes above i=43112609 would be all over the news). This strongly suggests that there isn't any known algorithm that will satisfy you. –Henning Makholm (talk) 14:09, 13 March 2011 (UTC)
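As a side note, the shifted formula is easy to sanity-check numerically. With $$t_i = s_i + 2$$, applying $$x \mapsto x^2-2$$ to $$s_i$$ corresponds to $$t \mapsto t^2-4t+4 = (t-2)^2$$, so the constant term in the posted correction reads as a sign slip. A quick check (my own illustration):

```python
g = lambda t: t * t - 4 * t + 4      # (t - 2)^2, acting on t_i = s_i + 2
f = lambda x: x * x - 2              # the original map, for comparison

def s_shifted(i):
    t = 6                            # t_0 = s_0 + 2 = 6
    for _ in range(i):
        t = g(t)
    return t - 2                     # s_i = t_i - 2

def s_direct(i):
    x = 4
    for _ in range(i):
        x = f(x)
    return x

assert all(s_shifted(i) == s_direct(i) for i in range(8))
```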


 * If one simply puts in a symbol like r instead of the square root of 3 and changes r^2 to 3 wherever it occurs, you'll never have to deal with non-integer values. The expressions will become quite big though. Far easier to just plug the non-integer values into a calculator, and if you see a .995 at the end, round it to the nearest integer, because the result has to be an integer. Dmcq (talk) 10:57, 13 March 2011 (UTC)
 * It strikes me that using r like this might be quite an efficient method for computing very large values, see Exponentiation by squaring. I don't know what they actually do but I'd guess it would be something like this and doing a modulo operation at each step. Dmcq (talk) 11:11, 13 March 2011 (UTC)
 * Thanks for the answer, Dmcq. I had actually considered this method before I posted the question. However, it requires at least $$\frac{i}{2}$$ addition operations, and considering the sheer size of the "i" I have in mind, and taking into account the cost of calculating the binomial coefficients of $$\frac{i}{2}$$ values, many of them extremely large for the same reason, I concluded the method was impractical. Thanks for your reply nevertheless.
 * voidnature 11:23, 13 March 2011 (UTC)
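The symbolic-r suggestion amounts to exact arithmetic on pairs (a, b) representing a + b√3, with r² = 3 substituted during each multiplication. A sketch (my own illustration, not anyone's posted code) combining it with exponentiation by squaring:

```python
def mul(p, q):
    """(a + b*r)(c + d*r) with r^2 = 3 substituted, so everything stays an integer."""
    a, b = p
    c, d = q
    return (a * c + 3 * b * d, a * d + b * c)

def power(base, e):
    """Exponentiation by squaring in Z[sqrt(3)]."""
    result = (1, 0)
    while e:
        if e & 1:
            result = mul(result, base)
        base = mul(base, base)
        e >>= 1
    return result

omega, omega_bar = (2, 1), (2, -1)   # 2 + sqrt(3) and 2 - sqrt(3)

def s(i):
    """s_i = omega^(2^i) + conjugate; the sqrt(3) parts cancel exactly."""
    a1, b1 = power(omega, 2 ** i)
    a2, b2 = power(omega_bar, 2 ** i)
    assert b1 + b2 == 0              # the irrational parts always cancel
    return a1 + a2

assert [s(i) for i in range(4)] == [4, 14, 194, 37634]
```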


 * Yes I can see looking at that article that at best one would be replacing i big multiplies with 3i big multiplies so that doesn't save anything! Dmcq (talk) 12:17, 13 March 2011 (UTC)


 * Ah, and yes, I forgot to mention that if the type "double" could handle numbers of the size I am considering, I would have considered it too. The type "double" has to be used since nearly all methods in the System.Math class and virtually every other math library use doubles.
 * voidnature 14:42, 13 March 2011 (UTC)
 * In a bit someone removed, I pointed out that it would require 3i big multiplies rather than i using exponentiation by squaring, so yes, it is no gain. The additions would be inconsequential in comparison in real life using exponentiation by squaring. I was answering your question as posed, which was not about producing an efficient algorithm but about how to avoid the non-integers when using that formula. Dmcq (talk) 15:04, 13 March 2011 (UTC)


 * Your bit disappeared here, and I reinserted it later. Seems to have been an uncaught edit conflict; such glitches are not unheard-of with the MediaWiki software. –Henning Makholm (talk) 15:11, 13 March 2011 (UTC)


 * You can also write it as a series: $$s_i = 4s_{i-1}-s_{i-2}$$ where $$s_0 = 2$$ and $$s_1 = 4$$ but that's probably not what you were looking for. 93.136.39.209 (talk) 20:17, 13 March 2011 (UTC)
 * No; I think you missed the 2-to-the-i exponent in the OP's original formula. –Henning Makholm (talk) 20:33, 13 March 2011 (UTC)
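Numerically, the distinction is that the linear recurrence generates $$v_i = \omega^i + \bar{\omega}^i$$ for every i, whereas the $$s_i$$ are only the terms at indices $$2^i$$. A quick check (my own illustration):

```python
# v_i = omega^i + conj(omega)^i satisfies v_i = 4*v_{i-1} - v_{i-2}:
v = [2, 4]
for _ in range(20):
    v.append(4 * v[-1] - v[-2])

# s_i = omega^(2^i) + conj(omega)^(2^i) satisfies s_i = s_{i-1}^2 - 2:
s = [4]
for _ in range(4):
    s.append(s[-1] ** 2 - 2)

# s_i equals v_{2^i}, not v_i: the 2-to-the-i makes all the difference.
assert all(s[i] == v[2 ** i] for i in range(5))
assert s[2] != v[2]
```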


 * Ah, Henning, as to your post above stating the impossibility of an algorithm as quick as I desire, I reject. This is because all I am asking is an algorithm to efficiently determine the value of s for a particular i, faster for large values but not to outrun the recursive algorithm for incrementing. A reasonable non recursive algorithm would be able to complete the calculation for huge values far faster than applying the recursive algorithm from the bottom up. For example, I want to start at the i value of 3321928097. If I use the recursive method, if 100 recursions could always be completed in 1 second, it would take 385 complete days to complete the calculation. What I am seeking is a faster method to initialize the s value. For incrementing, the square and minus 2 recursive method would be certainly quicker. I am just looking for a quick method to calculate any value, not attempting to be more efficient than the recursive algorithm while incrementing. Thankyou.
 * voidnature 10:26, 14 March 2011 (UTC)


 * As I said, if such a method were known, projects such as the Great Internet Mersenne Prime Search would already be using it to look for primes in the range of $$2^{3321928097}-1$$, instead of mucking around in the forty-or-fifty-millions. They don't need the intermediate $$s_i$$'s either (at least not modulo $$2^p-1$$ for a different p than the one they're testing at the moment) -- if they are computing them it must be because they can't think of anything better. Note that computing the $$s_i$$ without doing it modulo some $$M_p$$ is a fool's errand; already $$s_{1000}$$ is far too large to represent explicitly on any computer that would fit in the observable universe. –Henning Makholm (talk) 13:04, 14 March 2011 (UTC)
 * Thanks, Henning! Now I kind of get why you say my question is impossible. However, there are two points I wish to state. Firstly, to represent $$M_{3321928097}$$, all you need are 3321928096 binary bits (the last bit must be false), about 3 GB, and RAMs of such a size are perfectly practical. To leave space for the operating system, an 8 GB RAM would be perfectly good for the purpose. Also, I really LOVE the .NET framework. In the System.Numerics namespace, there is the BigInteger class, allowing one to represent an integer of any size. Secondly (I've discovered a mistake and take it back: it really is impossible to represent on a modern computer), I thought I could maybe resort to the method of expanding the whole thing. In expanding, about half of the terms are cancelled. After that, I'll try to put the terms back together in terms like $$(2 + \sqrt{3})^a$$. I'll search for all possible methods to combine the terms, such as adding and subtracting the same terms and making heavy use of the many identities. Hopefully, this method will turn out to be practical. Even if the calculation takes days, it would still be about 100 times faster than the original 385 complete days. Henning, can you please tell me some of your thoughts on this method?
 * voidnature 14:28, 14 March 2011 (UTC)
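For reference, the point about working modulo $$M_p$$ is what makes the test practical at all: the recurrence is reduced mod $$M_p$$ at every step, so the numbers never exceed p bits. A minimal Python sketch of the Lucas–Lehmer test (my own illustration, not anyone's production code):

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test: for an odd prime p, M_p = 2^p - 1 is prime
    iff s_{p-2} == 0 (mod M_p), where s_0 = 4 and s_i = s_{i-1}^2 - 2."""
    m = (1 << p) - 1             # M_p; reducing mod m keeps numbers p bits wide
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Known Mersenne prime exponents versus a composite case:
assert all(lucas_lehmer(p) for p in (3, 5, 7, 13, 17, 19, 31))
assert not lucas_lehmer(11)      # 2^11 - 1 = 2047 = 23 * 89
```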

Multinomial terms
Hi. Is there a "famous" example of the multinomial "choose" function? I want a numerical example that is known for some reason. Perhaps it appears in a proof, or perhaps some famous mathematician used it in a book, or maybe has some unusual property. I could use a random example such as $${10\choose 1\ 2\ 3\ 4}=12600$$ (which I just made up) but would much prefer a famous example. Anyone? Robinh (talk) 06:43, 13 March 2011 (UTC)
 * (OP) It doesn't have to be just integers. $${\pi+e\choose \pi\ e}$$ would be fine (if it were famous). Robinh (talk) 06:48, 13 March 2011 (UTC)


 * You mean something like how many ways can a bridge hand be dealt or something more complicated? Dmcq (talk) 15:10, 13 March 2011 (UTC)


 * That's what I thought at first, but 12600 does not match. It rather seems Robinh is referring to the multinomial theorem, where there is more than one number in the bottom half of the bracket. –Henning Makholm (talk) 15:14, 13 March 2011 (UTC)


 * How about $$\tbinom{52}{13,13,13,13}$$, though? –Henning Makholm (talk) 20:30, 13 March 2011 (UTC)
 * Thanks Henning and Dmcq: bridge is perfect. Everybody can interpret this. Sweet! Anyone got a non-integer famous example? Or an integer example which crops up in the combinatorics literature? Robinh (talk) 21:08, 13 March 2011 (UTC)
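The bridge example is easy to evaluate. A small Python sketch (my own, using the standard factorial formula for the multinomial coefficient):

```python
from math import factorial

def multinomial(*ks):
    """(k1+...+km)! / (k1! * ... * km!), the multinomial 'choose' function."""
    out = factorial(sum(ks))
    for k in ks:
        out //= factorial(k)
    return out

assert multinomial(1, 2, 3, 4) == 12600              # Robinh's made-up example
deals = multinomial(13, 13, 13, 13)                  # 52 cards into 4 bridge hands
assert deals == factorial(52) // factorial(13) ** 4  # roughly 5.36e28
```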

Form work of Plinth Beam or Shuttering of Plinth Beam
How to Calculate Form work of plinth beam in Running Feet —Preceding unsigned comment added by 182.178.71.47 (talk) 20:12, 13 March 2011 (UTC)


 * Note: this question was also asked on the science desk. I don't understand it well enough to figure out which desk is the proper one, but whoever can offer an answer, please add a pointer to the other copy. –Henning Makholm (talk) 20:47, 13 March 2011 (UTC)


 * I don't have an answer, but these links might help people understand the question better: form work and shuttering, plinth. StuRat (talk) 11:41, 14 March 2011 (UTC)

Quadratic Forms
I have a real quadratic form, say Q, in six variables. (These variables are actually coefficients in the Monge form expansion of a surface.) I wanted to think about the zeros of this quadratic form, so I calculated the Hessian matrix. It turns out that the Hessian is singular. Moreover, the Hessian has a kernel of dimension three. Let's write the quadratic form as Q(x) = xHx^T, where H is the Hessian matrix and x = (x_1, ..., x_6). I'm interested in the solution set { x ∈ R^6 : Q(x) = 0 }. So I have a degenerate quadric that consists of a three-dimensional linear space. The problem is that, or at least Maple claims that, Q(x) doesn't factor over R. Can anyone suggest a way of using the kernel to simplify and further understand Q(x)? The kernel is given by
 * $$\ker(H) = {\mathbf R}\langle (2,0,0,1,0,0), (0,1,0,-6,0,1), (0,0,-1,0,1,0) \rangle . $$ — Fly by Night  ( talk )  20:45, 13 March 2011 (UTC)


 * The bit about not factoring over R sounds strange. A real quadratic form ought to diagonalize nicely, but perhaps that is not what it means? (One possibility is just that the diagonalization is not unique because there is a multiple eigenvalue).
 * What do you know about the three nonzero eigenvalues of H? –Henning Makholm (talk) 21:32, 13 March 2011 (UTC)
 * Factoring over R and having a Hessian matrix which is diagonalisable over R are different things. For example, the quadratic form Q(x,y) = x^2 + y^2 doesn't factorise over R and gives the identity as its Hessian matrix. I think it might be because I have an eigenspace of dimension more than one. I'll post more details shortly. — Fly by Night  ( talk )  22:42, 13 March 2011 (UTC)
 * Ah, so by factoring you actually mean to write the form as a product of two linear forms? As far as I can see, that is impossible unless the positive-definite and the negative-definite subspaces both have dimension at most 1. –Henning Makholm (talk) 21:02, 14 March 2011 (UTC)

There are three other eigenvectors, two with negative eigenvalues and one with a positive eigenvalue. They are given below. The subscript denotes the eigenvalue:
 * $$ E_{154} = \langle (1,-6,0,-2,0,-6) \rangle \,, $$
 * $$ E_{-36} = \langle (0,0,1,0,1,0) \rangle \,, $$
 * $$ E_{-144} = \langle (0,-1,0,0,0,1) \rangle \, . $$

I've managed to partially factorise the quadratic form. I get
 * $$Q(\sigma,a_0,\ldots,a_4) = (\sigma-2a_2-12a_4)(\sigma-2a_2-12a_0)-9(a_1+a_3)^2 \, . $$ — Fly by Night  ( talk )  18:47, 14 March 2011 (UTC)

Okay, so what I had in mind (which may or may not be helpful to you) would be to take $$(\tfrac{1}{77}E_{154}, \tfrac{1}{6}E_{-36}, \tfrac{1}{12}E_{-144})$$, together with three spanning vectors for the kernel, and express the x vectors in that basis. Then
 * $$Q(z_1\tfrac{1}{77}E_{154} + z_2\tfrac{1}{6}E_{-36} + z_3\tfrac{1}{12}E_{-144} + k) = 2(z_1^2-z_2^2-z_3^2)$$ for any k from the kernel

(bit of a shame about that factor of 2 -- it may be because I'm working from your formula Q(x) = xHx^T -- but if H is actually the Hessian as defined in our article, made of raw second derivatives, then the right formula is Q(x) = ½xHx^T and everything will look nicer).

Then the zeroes of Q are the vectors for which $$z_1^2 = z_2^2 + z_3^2$$ which form a cone in $$(z_1,z_2,z_3)$$-space, direct-summed with the kernel of H. –Henning Makholm (talk) 20:55, 14 March 2011 (UTC)

recent old SAT I exams
Dear Wikipedians:

I am wondering: are there compilations of recent authentic old SAT I exams (after the recent SAT reform) for sale on Amazon? I'm also wondering if I could download recent authentic old SAT I exams online. I am especially interested in the quantitative sections.

Any help would be much appreciated!

174.88.32.181 (talk) 23:38, 13 March 2011 (UTC)
 * It wouldn't surprise me if some are circulating illicitly, but I'm fairly sure they are never officially released to the public. There are plenty of study guides available whose questions are similar to the real ones, though. IMO the main thing is to get plenty of sleep the night before and have an energizing breakfast. The questions are pretty easy but tedious, and it's easy to make dumb errors if you're not well rested. 75.57.242.120 (talk) 04:43, 14 March 2011 (UTC)