Wikipedia:Reference desk/Archives/Mathematics/2008 October 9

= October 9 =

Order of subgroups
I am studying abstract algebra and group theory, and came across a problem I'm not sure how to solve. Given subgroups H and K of a finite group G, we want to show that |HK| = (|H||K|)/|H ∩ K|, where |H| denotes the order (number of elements) of H. I haven't studied cosets yet, and have been told by several math professors that we need them to solve this. Does anyone have advice on where to start, without using cosets? Many thanks. 134.129.57.188 (talk) 02:16, 9 October 2008 (UTC)

The question uses the notion of left cosets anyway! Left cosets are very simple to understand once you know what a subgroup is. Basically, left cosets are defined as follows: Let H be a subgroup of G. Then a left coset of H in G is:

a*H = {a*h | h is in H}

for some a in G. A right coset is defined similarly, by multiplying every element of H by a on the right. Note that the collection of all left (or right) cosets forms a partition of G. Basically this means:


 * The union of all the left cosets of H equals G


 * If two left cosets intersect, they must be equal

The first property is very easy to see (try it yourself). To see the second property, suppose that two left cosets a*H and b*H intersect. Then there exist h1 in H and h2 in H such that a*h1 = b*h2 (don't make the mistake of assuming that h1 = h2; this need not be the case). It follows that a*h1*h2^(-1) = b. From this one can deduce that a*H equals b*H (try to see this yourself by showing that b*H is a subset of a*H and conversely).

Now, to solve your problem, notice that HK = ∪ {h*K | h is in H}, so the cardinality of HK is just the cardinality of K multiplied by the number of distinct cosets h*K (counting h1 and h2 only once if h1*K = h2*K). This is just your expression. To see this formally, define an equivalence relation on H by h1 ~ h2 iff h1*K = h2*K. Then the cardinality of K times the number of equivalence classes is simply the cardinality of HK. Furthermore, this expression equals yours (I will leave you to work out why, since I am not really allowed to help you with your homework: just show that the cardinality of H divided by the cardinality of H ∩ K is the number of equivalence classes).

Topology Expert (talk) 05:54, 9 October 2008 (UTC)
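As a quick sanity check of the formula (my own illustration, not part of the original discussion), here it is verified in Python for two order-2 subgroups of the symmetric group S3, with permutations represented as tuples of images:

```python
def compose(p, q):
    """Composition (p after q) of permutations given as tuples of images."""
    return tuple(p[q[i]] for i in range(len(q)))

e = (0, 1, 2)              # identity on {0, 1, 2}
H = {e, (1, 0, 2)}         # subgroup generated by the transposition (0 1)
K = {e, (2, 1, 0)}         # subgroup generated by the transposition (0 2)

HK = {compose(h, k) for h in H for k in K}
inter = H & K              # here just {e}

# |HK| = |H| * |K| / |H ∩ K|
assert len(HK) == len(H) * len(K) // len(inter)
print(len(H), len(K), len(inter), len(HK))   # 2 2 1 4
```

Note that HK here has 4 elements, so it is not itself a subgroup of S3 (4 does not divide 6); the counting formula holds regardless of whether HK is a subgroup.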

Just one note. If H intersection K is empty, the problem does not make any sense. Are there any other additional assumptions involved in the problem (i.e., assuming that K and H are not disjoint)?

Topology Expert (talk) 06:04, 9 October 2008 (UTC)
 * Their intersection contains the identity. 76.126.116.54 (talk) 06:28, 9 October 2008 (UTC)


 * How could I have said that! The problem is OK. Just go ahead and use my method (or any other method) and you should come up with the answer (note: the only possible methods to solve this problem would be to use cosets since that is how the problem is defined).

Topology Expert (talk) 09:37, 9 October 2008 (UTC)

Polarities
Can someone point me to a proof that every polarity of a finite projective plane has absolute points? In other words, that any bijection between the points and lines of the plane which preserves incidence takes some point to a line containing it. Black Carrot (talk) 06:43, 9 October 2008 (UTC)


 * Have you tried looking in Coxeter's books? I don't really know projective geometry so I can't help. – b_jonas 11:57, 15 October 2008 (UTC)

Combinations
Hello. Is there a way to find the number of combinations of r elements from a set of n elements when some of the elements are identical? Sort of like choosing a combination of three letters from the word CANADA. I know how to do it for a given problem, but I wonder if there is something general. Thx Brusegadi (talk) 06:30, 10 October 2008 (UTC)
 * If you want non-identical elements, you just have to remove the duplicates from the set and then choose from that, so instead of rCn it's (r-a)Cn where a is the # of elements removed. -mattbuck (Talk) 12:09, 9 October 2008 (UTC)
 * So in the example above, if I want to choose three, I will do 4 choose 3, which is 4. But I can come up with more than that: AAA, AAC, AAN, AAD, CND... The real problem is cases like AAN, which I would count twice if I took out one A and inserted another A. Did I miss something obvious? Brusegadi (talk) 06:30, 10 October 2008 (UTC)
 * Since you are concerned about identical elements in combinations, don't you mean that you would not count AAN twice? Since if you do count AAN multiple times, once for each pair of distinct A's that could be chosen, you end up with the standard combination count $$\binom{6}{3}=20$$. 84.239.160.166 (talk) 09:59, 12 October 2008 (UTC)
 * There is a formula for exactly what you want, but I can't remember it. I think it's the usual choose formula but with an extra factor in the denominator. I'll try and find it/work it out. --Tango (talk) 17:09, 9 October 2008 (UTC)

Check out the multinomial theorem. 84.239.160.166 (talk) 20:10, 9 October 2008 (UTC)
 * Note, that is talking about permutations rather than combinations, so doesn't exactly answer the OP's question. --Tango (talk) 21:02, 9 October 2008 (UTC)
 * Thanks for your help. I also remember seeing a formula somewhere, and it is killing me!  I will try to remember during the weekend, since I am preoccupied with work :)  Brusegadi (talk) 06:30, 10 October 2008 (UTC)

Ok, sorry about the confusion. Given that there are specific numbers of identical elements, the number of combinations can be counted as follows. Suppose there are elements of $$k$$ types, with the number of elements of each type given by $$n_1,\dots,n_k$$, and you want to count combinations of $$m$$ elements. In the CANADA example we have $$k=4$$ types of elements, $$n_1=3$$ for the letter A and $$n_2=n_3=n_4=1$$ for the letters C, N and D.

Each combination of $$m$$ elements can be identified with a vector of non-negative integers $$i_1,\dots,i_k$$ such that $$\sum_{j=1}^k i_j=m$$ and $$i_j \le n_j$$ for all $$j=1,\dots,k$$. Denoting the number of such combinations by $$C(m;n_1,\dots,n_k)$$, and by considering the effect of removing all elements of type $$k$$ we get the recursion
 * $$C(m;n_1,\dots,n_k)=\sum_{i=0}^{\min(n_k, m)} C(m-i;n_1,\dots,n_{k-1})$$

with the base cases $$C(m;n_1)=1$$ for $$m \le n_1$$ and $$C(m;n_1)=0$$ for $$m > n_1$$.

The recursion formula isn't as simple as one might hope for, but it is relatively easy to implement on a computer. In the CANADA example even computation by hand is feasible:
 * $$C(3;3,1,1,1)=C(3;3,1,1)+C(2;3,1,1)$$
 * $$=C(3;3,1)+2C(2;3,1)+C(1;3,1)$$
 * $$=C(3;3)+3C(2;3)+3C(1;3)+C(0;3)$$
 * $$=1+3\cdot 1+3\cdot 1+1=8$$

The 8 different combinations are AAA, AAC, AAN, AAD, ACN, ACD, AND, and CND. In this particular case, the fact that $$n_2=n_3=n_4=1$$ makes the first three steps resemble a binomial expansion, and the fourth step follows directly from the base case of the recursion. In fact, this gives a hint about a less general formula that would directly address the original question... but I'll leave that as an exercise for the reader. 84.239.160.166 (talk) 09:59, 12 October 2008 (UTC)
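For what it's worth, the recursion really is only a few lines of Python (my own sketch; the function name and the memoisation are mine, not from any standard library):

```python
from functools import lru_cache

def count_combinations(m, counts):
    """Number of m-element combinations from a multiset that has
    counts[j] identical copies of the j-th element type."""
    counts = tuple(counts)

    @lru_cache(maxsize=None)
    def C(m, k):
        # C(m; n_1, ..., n_k): choose m elements from the first k types.
        if k == 1:
            return 1 if m <= counts[0] else 0   # base case of the recursion
        # Split on i = how many elements of the k-th type are used.
        return sum(C(m - i, k - 1) for i in range(min(counts[k - 1], m) + 1))

    return C(m, len(counts))

print(count_combinations(3, [3, 1, 1, 1]))   # CANADA example: 8
print(count_combinations(3, [1] * 6))        # all distinct: C(6,3) = 20
```

With all counts equal to 1 it reduces to the ordinary binomial coefficient, which is a useful check.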
 * Thanks. This problem was bothering me one night because I was convince there was a simple plug and chug out there.  But it seems like the summation is as general as it gets.  Thanks for generalizing it, by the way.  You were very helpful.  Good day, Brusegadi (talk) 07:24, 14 October 2008 (UTC)

simultaneous equations


Is substitution the best way to find the solution for this set of simultaneous equations or is there a better method? (all results are zero) -- Taxa  (talk) 20:08, 9 October 2008 (UTC)


 * You have far more variables than you do equations, so there won't be a unique solution. Adding and subtracting equations or multiples of equations from each other is the standard way of solving simultaneous equations, although substitution can also help. You'll only be able to get so far though since you'll have used up all the equations and will still have lots of variables left. --Tango (talk) 21:00, 9 October 2008 (UTC)


 * Probably a little more efficient would be to rearrange the equations so that the right-hand sides all equal zero, then put that into a matrix and apply row reduction to help you determine the solution space (which, as Tango says, will involve a whole continuum of possible solutions, not just one unique one). Confusing Manifestation (Say hi!) 22:18, 9 October 2008 (UTC)
 * To be clear, it's more efficient notationally, but it's the same actual method. --Tango (talk) 22:43, 9 October 2008 (UTC)
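As a generic illustration of ConMan's row-reduction suggestion (the original system is in an image I can't reproduce, so this uses a made-up homogeneous system with more unknowns than equations):

```python
def rref(M, eps=1e-12):
    """Reduce a matrix (a list of rows) to reduced row-echelon form."""
    M = [row[:] for row in M]
    nrows, ncols = len(M), len(M[0])
    r = 0
    for c in range(ncols):
        if r == nrows:
            break
        # Find a pivot in column c at or below row r.
        pivot = next((i for i in range(r, nrows) if abs(M[i][c]) > eps), None)
        if pivot is None:
            continue                        # no pivot: a free column
        M[r], M[pivot] = M[pivot], M[r]     # swap the pivot row into place
        pv = M[r][c]
        M[r] = [x / pv for x in M[r]]       # scale the pivot to 1
        for i in range(nrows):              # eliminate column c elsewhere
            if i != r:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

# x + y + z = 0 and x - y = 0: two equations, three unknowns.
R = rref([[1.0, 1.0, 1.0], [1.0, -1.0, 0.0]])
# The reduced system reads x + z/2 = 0 and y + z/2 = 0, so z is a free
# variable and every solution has the form (x, y, z) = (-t/2, -t/2, t).
```

The free columns (those without a pivot) correspond exactly to the variables you may choose freely, which makes the continuum of solutions explicit.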

Actually, in this particular case, substitution probably is fastest. Unless I've made a mistake, there is only one variable, u, that appears on both the left and right -- it appears on the left in the first equation, and on the right in the last and second-to-last equation. If you just substitute for u, eliminating the first equation, you're done. A solution is characterized by choosing any values you like for the variables remaining on the right, and then plugging those values in to get the remaining variables on the left. --Trovatore (talk) 00:42, 10 October 2008 (UTC)
 * No, the mistake is mine and corrected. The u on the left in the first equation should have been xx since u rather than t was the last variable used on the right side of the equations. All variables on the left should have been zeros instead as now shown in the corrected image. -- Taxa  (talk) 02:44, 10 October 2008 (UTC)

Limit of an α-sequence
In my Introductory Set Theory course, we just started talking about α-sequences. We defined a limit to be $$\lim_{\gamma \rightarrow \alpha}\beta_\gamma = \sup\{\beta_\gamma | \gamma < \alpha\}$$. This seems counterintuitive to me, since if we talk about the identity sequence, $$\lim_{\gamma \rightarrow 3}\beta_\gamma = 2$$, while $$\beta_3 = 3$$. I think. I know that, for example, $$\lim_{\gamma \rightarrow \omega}\beta_\gamma = \omega$$ and $$\beta_\omega = \omega$$. Am I confused, or is this what's expected? Thanks! Djk3 (talk) 22:31, 9 October 2008 (UTC)


 * I've never heard of an α-sequence, but my guess would be that that less than should be a less than or equal to, then the problem goes away. Double check the definition in your textbook. --Tango (talk) 22:46, 9 October 2008 (UTC)


 * An α-sequence is generally only defined for γ < α. The notation is counterintuitive, but only for successor ordinals. For limit ordinals, you clearly need strict inequality to get anything remotely interesting. siℓℓy rabbit (talk) 23:04, 9 October 2008 (UTC)

What you've been given isn't really the definition; it's something equivalent to the definition when β is nondecreasing and α is a limit ordinal. Topologically, successor ordinals are isolated points; the notion of the limit of a function as its argument approaches an isolated point is not well-defined. And if the sequence β fails to be nondecreasing, then it could do something silly like take its maximum value at $$\beta_0$$ and then never get close to it again, and then the supremum is clearly different from the limit. --Trovatore (talk) 23:15, 9 October 2008 (UTC)
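In symbols (my restatement of the convention described above, hedged as the usual textbook one): for a limit ordinal α and a nondecreasing α-sequence,
 * $$\lim_{\gamma \rightarrow \alpha}\beta_\gamma = \sup\{\beta_\gamma | \gamma < \alpha\}$$
while for a successor ordinal the same formula misbehaves, e.g. with the identity sequence $$\beta_\gamma = \gamma$$ one gets
 * $$\sup\{\beta_\gamma | \gamma < 3\} = \sup\{0, 1, 2\} = 2 \neq 3 = \beta_3$$
which is exactly the discrepancy the original question points out.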
 * Okay, sure. That makes sense. I see now that I have "If β is nondecreasing," and I probably just missed "and α is a limit ordinal." Thanks a lot. Djk3 (talk) 00:10, 10 October 2008 (UTC)