Wikipedia:Reference desk/Archives/Mathematics/2009 November 9

= November 9 =

Random binary digits from random numbers?
What is the best way to get random binary digits from a sequence of random decimal or base-six digits?

For example, if I toss a six-sided die a total of 8 times, how can I convert that sequence of 8 tosses into a sequence of random binary digits (0s and 1s), each having 50% probability? I know I could simply map 1, 2, 3 on the die to 0 and 4, 5, 6 to 1, but that yields only one binary digit per roll; I want to do it in a way that minimises the number of die rolls required per bit.

--Dacium (talk) 02:58, 9 November 2009 (UTC)


 * You could get 2 random bits from each roll 2/3 of the time and 1 bit 1/3 of the time, for an average of 5/3 bits per roll: 1 counts as 00, 2 as 01, 3 as 10, 4 as 11, all with equal probability, so the output is uniform. Then 5 counts as 0 and 6 as 1. StatisticsMan (talk) 04:30, 9 November 2009 (UTC)
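StatisticsMan's mapping can be written out as a short Python sketch (the helper name and the test rolls are illustrative, not from the thread):

```python
def d6_to_bits(rolls):
    """StatisticsMan's scheme: faces 1-4 give two bits, 5-6 give one bit."""
    bits = []
    for r in rolls:
        if r <= 4:
            # 1 -> 00, 2 -> 01, 3 -> 10, 4 -> 11, each face with probability 1/6
            bits += [(r - 1) >> 1, (r - 1) & 1]
        else:
            # 5 -> 0, 6 -> 1, each face with probability 1/6
            bits.append(r - 5)
    return bits

print(d6_to_bits([1, 2, 3, 4, 5, 6]))  # [0, 0, 0, 1, 1, 0, 1, 1, 0, 1]
```

Over the six equally likely faces this emits 4×2 + 2×1 = 10 bits, i.e. 5/3 bits per roll on average, matching the figure above.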


 * The most bits you can get out per roll is $$x = \ln (6) / \ln (2) \approx 2.585$$. With only a finite number of rolls, you can only get a rational number of bits per roll on average, as arbitrarily close to x as you'd like (with the closer you get to x requiring more rolls total and a more complex algorithm).  For example, following StatisticsMan's method, you can get $$7/3 \approx 2.33$$ bits per roll as follows:  every pair of rolls has 36 possibilities, so list these possibilities in some order;  the first 32 possibilities yield the bits 00000, 00001, ..., 11111, and the last 4 possibilities yield the bits 00, 01, 10, 11.  Eric.  131.215.159.109 (talk) 05:26, 9 November 2009 (UTC)
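Eric's pairing scheme can be sketched the same way (illustrative Python, not from the thread): each pair of rolls becomes a uniform index 0-35; the first 32 indices emit five bits and the last four emit two.

```python
def pairs_to_bits(rolls):
    """Eric's scheme: each pair of d6 rolls is a uniform index in 0..35;
    indices 0..31 encode five bits, indices 32..35 encode two bits."""
    bits = []
    for r1, r2 in zip(rolls[::2], rolls[1::2]):
        idx = (r1 - 1) * 6 + (r2 - 1)           # uniform on 0..35
        if idx < 32:
            bits += [(idx >> k) & 1 for k in range(4, -1, -1)]
        else:
            idx -= 32                            # uniform on 0..3
            bits += [(idx >> 1) & 1, idx & 1]
    return bits

# Over all 36 equally likely pairs the scheme emits 32*5 + 4*2 = 168 bits,
# i.e. 168/72 = 7/3 bits per roll on average.
```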
 * Of course, for best results you should use all the rolls at once. I think the following works: Write $$6^8=\sum_{k=0}^na_k2^k$$ where $$a_k\in\{0,1\}$$. After rolling, write the result as an integer $$0\le L<6^8$$. Find the maximal m such that $$\sum_{k=m}^na_k2^k > L$$. Then $$0\le L-\sum_{k=m+1}^na_k2^k<2^m$$ and is uniformly distributed over this range. This gives you m bits, and happens with probability $$\tfrac{2^m}{6^8}$$ (and forces $$a_m=1$$). The expected number of bits is $$\frac{\sum_{k=0}^na_k2^kk}{6^8} = \frac{42424}{2187}\approx 19.398$$, which is 2.42478 per roll. -- Meni Rosenfeld (talk) 06:19, 9 November 2009 (UTC)
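A sketch of this batch method in Python (names are illustrative, not from the thread): interpret the rolls as a uniform integer L in [0, 6^8), scan the set bits of 6^8 from the top, and emit m bits once L is trapped under a set bit a_m.

```python
def batch_extract(rolls):
    """Whole-batch extraction: treat the rolls as a uniform integer L in
    [0, N) with N = 6^len(rolls), then peel bits off using N's binary
    expansion, as described above."""
    N = 6 ** len(rolls)
    L = 0
    for r in rolls:
        L = L * 6 + (r - 1)                  # L is uniform on [0, N)
    upper = 0                                # running sum of a_k * 2^k for k > m
    for m in range(N.bit_length() - 1, -1, -1):
        if not (N >> m) & 1:                 # a_m = 0: tail sum unchanged, skip
            continue
        if upper + (1 << m) > L:
            # upper <= L < upper + 2^m, so L - upper is uniform on [0, 2^m)
            v = L - upper
            return [(v >> k) & 1 for k in range(m - 1, -1, -1)]
        upper += 1 << m
    return []                                # unreachable: the full sum is N > L
```

For example, all ones (L = 0, below the top bit 2^20 of 6^8) yields twenty 0 bits, while all sixes (L = 6^8 - 1) lands under the lowest set bit and yields eight 1 bits.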


 * Remark on infinite sequences. If you interpret a sequence of digits in p := {0,1,..,p-1} as the expansion in base p of a real number x, you get a measure-preserving map from p^N (with the product of countably many copies of the uniform probability measure on p) into the interval [0,1] with the Lebesgue measure. If you then expand x in base q, you get a measure-preserving map into the sequences of digits in q := {0,1,..,q-1}. These maps are bijective up to a countable null set (due to the double representation of some rationals). So any set of p-sequences gives rise this way to a set of q-sequences with the same probability. --pma (talk) 07:32, 9 November 2009 (UTC)
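pma's remark is about infinite sequences, but a finite version of the base-p-to-base-q map can be sketched in Python with exact interval arithmetic (illustrative, not from the thread): each base-6 digit narrows the interval known to contain x, and a binary digit is emitted as soon as it is determined no matter how the digit stream continues.

```python
from fractions import Fraction

def base6_to_base2(digits):
    """Read base-6 digits (0-5) as the expansion of x in [0,1) and emit
    every binary digit of x that is already fixed however the stream
    continues."""
    lo, hi = Fraction(0), Fraction(1)        # x lies in [lo, hi)
    bits, B, step = [], Fraction(0), Fraction(1, 2)
    for d in digits:
        w = (hi - lo) / 6
        lo, hi = lo + d * w, lo + (d + 1) * w
        while True:
            mid = B + step                   # midpoint of the current dyadic cell
            if hi <= mid:                    # whole interval in the lower half
                bits.append(0)
            elif lo >= mid:                  # whole interval in the upper half
                bits.append(1)
                B = mid
            else:
                break                        # next bit not yet determined
            step /= 2
    return bits

print(base6_to_base2([3]))  # [1, 0]: x in [1/2, 2/3) starts 0.10... in binary
```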


 * Dacium, what are you actually trying to do? If you're really rolling dice by hand and trying to get a long bit stream, then StatisticsMan's proposal is good and it does use the entropy optimally unless I'm missing something.  If you're trying to program a computer to convert some non-binary entropy source to a bit stream, the mathematical theory of randomness extractors is way more complicated than you need.  Just make a conservative estimate of the amount of min-entropy in your input source, and collect enough input that the output of a CSPRNG seeded from that input can't be feasibly distinguished from true random bits.  If you can be more specific about your requirements you will probably get more useful answers.  69.228.171.150 (talk) 10:55, 9 November 2009 (UTC)

With simple single rolls of a d10 converted by hand, one easy way would be to convert 10 and 1-7 into three-digit binary numbers (000 to 111) and code 8s as 0 and 9s as 1; this would be analogous to StatisticsMan's original suggestion with a d6. An average string of 100 decimal digits would give you 260 zeroes and ones. Of course, if you have dice other than a d6 available (e.g., from an RPG set), your best option would be to use a d8, to give you a three-digit binary number every time, or a d20 for 16 four-digit binary numbers and four two-digit ones - an average of 3.6 binary digits per roll. Grutness...wha?  11:31, 9 November 2009 (UTC)


 * With decimal digits I would take 3 at a time, and if the number is more than 500 the binary digit is 1, less than 500 and it's 0, and this should reduce any off-by-one error from my sloppy thinking to at most 2/500 or maybe 3/500 skew depending on how unlucky I am. If this is unacceptably high, I would increase it to 8 digits at a time, so that if the number is more than 50,000,000 it's a 1 and less than 50,000,000 it's a 0, thereby reducing any off-by-one skew to a negligible amount.  Why only 8 digits?  As that is a safe distance (2 orders of magnitude) from the overflow of the signed integers I'd probably be using by accident to store the result....  can you tell I'm an adherent of Murphy's law?  —Preceding unsigned comment added by 92.224.205.24 (talk) 16:06, 9 November 2009 (UTC)

mathematics
{1+x+x*x}{1+5x+x}{5+x+x}=64 —Preceding unsigned comment added by 59.164.3.176 (talk) 08:23, 9 November 2009 (UTC)
 * It helps if you also tell us what you are wondering about. Taemyr (talk) 08:35, 9 November 2009 (UTC)
 * In the future, your question may not be answered unless it is unambiguous. Although you are most certainly encouraged to ask questions here (and we are happy to answer them), it is important to note that phrasing your question in a simple manner increases the chance that someone will answer it (after all, this is a free service). Let me assume that the following is your question:


 * $$(1 + x + x^2)(1 + 5x + x)(5 + x + x) = 64$$
 * $$(1 + x + x^2)(1 + 6x)(5 + 2x) = 64$$
 * $$(1 + x + x^2)(5 + 32x + 12x^2) = 64$$
 * $$5 + x + 5x^2 + 32x + 32x^2 + 32x^3 + 12x^2 + 12x^3 + 12x^4 = 64$$
 * $$12x^4 + 44x^3 + 49x^2 + 33x - 59 = 0$$


 * The solutions to which are x = -2.7825809, x = 0.66558396. -- PS T  11:35, 9 November 2009 (UTC)


 * Coefficient of x should be 37 i.e.
 * $$12x^4 + 44x^3 + 49x^2 + 37x - 59 = 0 \, .$$ Gandalf61 (talk) 12:53, 9 November 2009 (UTC)
 * and so the two real solutions are &minus;2.82568 and 0.650129 and the two nonreal solutions are &minus;0.74556±1.4562i. Bo Jacoby (talk) 14:47, 9 November 2009 (UTC).
 * Both of you are right. Maybe I need a break from the reference desk... -- PS T  03:35, 10 November 2009 (UTC)
 * None of us are perfect and your contributions are appreciated. Bo Jacoby (talk) 19:40, 10 November 2009 (UTC).

Your equation expands to $$12x^4 + 44x^3 + 49x^2 + 37x - 59 = 0$$. The left-hand side of this equation is a polynomial in the variable x. In fact it is a quartic polynomial, i.e. the highest power of x is four. A corollary of the fundamental theorem of algebra tells us that your equation has, counting multiplicities, four solutions over the complex numbers. As is the case with quadratic polynomials, it is possible to find the exact solutions in terms of radicals, i.e. expressions involving nth roots, although in practice this is very difficult to do by hand for anything other than the simplest of quartic polynomials. If you are satisfied with approximate solutions then there are a few things you could do. The first is to use Newton's method to find approximations to the solutions. With Newton's method you start with an educated guess for one of the solutions, say x0, and feed this into a certain function giving a new value, say x1. You then put x1 into the same function to get x2, and so on. This process of iteration gives a sequence of numbers (x0, x1, x2, &hellip;) which will eventually (given a few extra conditions) converge to a solution of your equation. You won't need more than a calculator, a little knowledge of complex numbers and the ability to differentiate a polynomial to use this method. Alternatively, you could just cheat and use a computer algebra package like Maple or Matlab to generate some numerical approximations to the solutions; but where's the fun in that? :o) Dr Dec  (Talk)   17:35, 10 November 2009 (UTC)
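The iteration described here, with the "certain function" being x - f(x)/f'(x), can be sketched in Python (illustrative, not from the thread):

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Newton's method: iterate x_{k+1} = x_k - f(x_k)/df(x_k)
    until the step size falls below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

f  = lambda x: 12*x**4 + 44*x**3 + 49*x**2 + 37*x - 59
df = lambda x: 48*x**3 + 132*x**2 + 98*x + 37   # derivative of f

print(newton(f, df, 1.0))    # ~ 0.650129
print(newton(f, df, -3.0))   # ~ -2.82568
```

The same code finds the complex pair if you start from a complex guess, since Python's arithmetic operators work on complex numbers unchanged.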


 * The fun is in mastering a tool that enables you to solve hard problems easily. J (programming language):

   >{:p.(((1 1 1&p.)*(1 6&p.)*(5 2&p.))-64&p.)t.i. 5
_2.82568 _0.74556j1.4562 _0.74556j_1.4562 0.650129

Bo Jacoby (talk) 19:40, 10 November 2009 (UTC).

Commutative diagram
I have an n x n diagram (n rows and n columns) with the property that each (1 x 1) cell of the diagram is a commutative diagram. Is the whole diagram commutative? What's the name for this result (if it's true)? —Preceding unsigned comment added by 58.161.138.117 (talk) 12:24, 9 November 2009 (UTC)
 * You mean you have a rectangular grid with all arrows between neighbouring nodes going (say) left-to-right and top-to-bottom? Yes, the diagram is commutative. You can prove it easily for any m-by-n grid by induction on m, with inner induction on n. I have no idea whether it has a name. — Emil J. 14:21, 9 November 2009 (UTC)

Why don't we have a complete mathematical language?
Mathematical texts still use natural languages. Couldn't mathematicians develop a complete language, complete in the sense that you can express things like "why" or "imagine a"? Students would learn how to encode their native natural languages into this mathematical language.--Quest09 (talk) 12:57, 9 November 2009 (UTC)


 * Well, you could use the Mizar system or Metamath, but mathematicians are human too and these definitely wouldn't help students. I seem to remember a funny story in the Mathematical Intelligencer where students did everything this way. Dmcq (talk) 13:12, 9 November 2009 (UTC)


 * There is not much use in developing a language unless there are semantics for it, but it is not at all obvious what the semantics for "imagine a" would be. Thus the formal languages that are used in math are restricted to a class in which we can develop a good semantics. For example, we have a good sense of what "A and B" means in terms of "A" and "B". Philosophers and linguists spend a long time studying the meanings of natural-language phrases, but we do not understand them well enough to try to formalize them.


 * On the other hand, many students do learn how to express mathematical claims in "mathematical language" which corresponds roughly to first-order logic. But this does not include the entire range of expression that is possible in a natural language. &mdash; Carl (CBM · talk) 13:43, 9 November 2009 (UTC)
 * Also see Loglan/Lojban. --Stephan Schulz (talk) 14:38, 9 November 2009 (UTC)

Because mathematicians have a right brain as well as a left brain. The right brain does not respond in the least to rigorous symbolic manipulation, so expressing everything symbolically is a great way to take the right brain out of the equation and deduce rigorous truths even when they are grossly unintuitive. But only a few such deductions are unintuitive; the vast majority of the truths mathematicians manipulate make sense to them on an intuitive, right-brain level as well (and this is why they are mathematicians). If you were to remove language, you could no longer easily communicate with the right brain (imagine if you removed DirectX/OpenGL: you could no longer easily program the GPU) and many kinds of math would become hard to fathom. This is especially true for appeals to visual intuition, as there is no rigor there at all. A visual demonstration of the Pythagorean theorem has no rigor at all; you must 'add in' all the rigor bit by bit. But, to a mathematician, it might immediately make total sense to see it. Now, was the picture, which by definition has no rigor at all (it's just pixels), the right way to communicate the idea, instead of through coordinate geometry, which would have taken you 3-5 minutes to translate into a picture? Obviously it was. It's simply easier to show the picture. Similarly, it's just easier to use natural language to appeal to your intuition, and the left brain can then proceed to flesh out the formalities. For example, primes get harder and harder to come by as you go up through larger and larger primes. Couldn't there be a largest prime, couldn't the set of primes be finite rather than unending? The simple (and famous) natural-language appeal that there cannot be a largest prime is this: "there can't be a largest prime, because if there were, you could take it and all the primes smaller than it, multiply them together, and add one. The new number would fail to divide by "every" prime (it would leave a remainder of 1 for each attempt), but if it fails to divide by any prime, then its only factors must be 1 and itself -- i.e. it must be a prime, and since we got it by multiplying "all" the primes together, it must be greater than all of them. So, it is a new "greatest" prime, clearly a contradiction with the possibility that there were a finite number of primes to begin with." This was a natural-language appeal, and it uses your intuition. I could have phrased it in a more complex and rigorous way that would have been harder to understand, but if I had done so, you simply wouldn't have understood the truth of that statement so spontaneously (assuming you did). By the way, if you DIDN'T follow my logic, imagine how much farther away still you would have been from 'getting' it if I had used all symbols! Now, once you have understood the intuitive appeal (or thought of it yourself), you might sit down and go through it rigorously. You might even say whoops, with greater rigor I notice that the new "greatest" prime might not really be a prime, but just relatively prime to all the primes -- in fact, it might have another prime factor we left out of "all" primes. These kinds of ancillary effects often come out only under greater rigor. But natural language is quite important for even doing the kind of intuitive reasoning/communication that it takes to get to the point where you even know what to try symbolically/with great rigor. 92.224.205.24 (talk) 15:41, 9 November 2009 (UTC)
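The caveat above, that the "product of all primes plus one" need not itself be prime, can be checked concretely (a small Python illustration, not part of the original post):

```python
def is_prime(n):
    """Trial division up to the square root."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

primes = [2, 3, 5, 7, 11, 13]   # pretend these were "all" the primes
N = 1
for p in primes:
    N *= p
N += 1                          # N = 2*3*5*7*11*13 + 1 = 30031

print(all(N % p == 1 for p in primes))  # True: remainder 1 for each listed prime
print(is_prime(N))                      # False: 30031 = 59 * 509
```

So N is coprime to every prime on the list without being prime itself; the argument only forces some prime outside the list to exist (here 59 and 509).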
 * 92, your post would be much easier to read if you divided it into paragraphs. -- Meni Rosenfeld (talk) 19:09, 9 November 2009 (UTC)

Question regarding order of an element in a group
I have the following question. Suppose G is a finite group and K is a normal subgroup of G. Also suppose G/K has an element of order n. Then prove that G also has an element of order n.

I understand that finiteness is essential because if G = Z and K = 2Z, clearly G/K has an element of order 2 but G doesn't. Other than that I can't think of a thing! I'll appreciate any help--Shahab (talk) 16:20, 9 November 2009 (UTC)
 * (Finiteness is not quite essential; it suffices if G is a torsion group.) Just do the obvious thing: take an element gK of G/K of order n, and try to see if this gives any constraints on the order of g in G. Then you should be able to find an element of order n in G with no difficulty. — Emil J. 16:35, 9 November 2009 (UTC)


 * All I can think of is that o(g) ≥ n, as n is the least positive integer for which $$g^n$$ is in K. Can you please be a bit more explicit? Thanks--Shahab (talk) 17:15, 9 November 2009 (UTC)
 * n is the least positive integer for which $$g^n$$ is in K, yes. And more holds: for any integer a, $$g^a$$ is in K if and only if n divides a. Now, $$g^{o(g)} = 1$$ is in K, hence o(g) is not just greater than or equal to n, but actually divisible by n. — Emil J. 18:01, 9 November 2009 (UTC)
 * Let $$g \in G$$ with $$g^n \in K$$ and let n be the least positive integer with this property. Then, if m is a positive integer with $$g^m \in K$$, $$\frac{m}{n}\in \mathbb{Z}$$ since K is a subgroup. Consequently, $$\frac{|g|}{n}\in \mathbb{Z}$$ since 1 is an element of K, and thus $$g^{|g|/n}\in G$$ has order n, as desired. Note that I think my proof overlaps with EmilJ's, but hopefully I was a bit more explicit. -- PS T  03:34, 10 November 2009 (UTC)
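As a finite sanity check of the argument (a toy Python example, not from the thread): take G = Z_12 under addition and K = {0, 4, 8}, so G/K has order 4. For every g, the order n of the coset g + K divides the order of g, and (o(g)/n)·g has order exactly n, which is the additive form of the claim above.

```python
def order_mod(g, m):
    """Order of g in the additive group Z_m."""
    k, x = 1, g % m
    while x != 0:
        x = (x + g) % m
        k += 1
    return k

m = 12                # G = Z_12 under addition
K = {0, 4, 8}         # subgroup of index 4, so G/K has order 4

for g in range(m):
    # order n of the coset g + K: least n > 0 with n*g in K
    n = next(j for j in range(1, m + 1) if (j * g) % m in K)
    og = order_mod(g, m)
    assert og % n == 0                        # n divides o(g)
    assert order_mod((og // n) * g, m) == n   # (o(g)/n)*g has order exactly n
```

Running the loop without an assertion failure confirms the claim for every element of this particular G.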