Wikipedia:Reference desk/Archives/Mathematics/2008 August 6

= August 6 =

Ramsey's Theorem Proof
The section on Proof of Ramsey's Theorem for 2 colors contains the following sentence:
 * Now |M| ≥ R(r − 1, s) or |N| ≥ R(r, s − 1), again by the pigeonhole principle.

Can someone please explain how the inequalities follow from the pigeonhole principle? Thanks--Shahab (talk) 04:44, 6 August 2008 (UTC)


 * The graph contains $$ |M| + |N| + 1 = R(r - 1, s) + R(r, s - 1) $$ vertices. Therefore one or the other of the stated inequalities must hold.  Eric.  89.51.105.61 (talk) 08:25, 6 August 2008 (UTC)


 * (After edit conflict) I believe it follows from the fact that vertex v lies in a complete graph with R(r-1, s) + R(r, s-1) vertices, so


 * $$|M|+|N|=R(r-1, s)+R(r, s-1)-1$$


 * but if |M| < R(r − 1, s) and |N| < R(r, s − 1) then


 * $$|M|+|N| \le R(r-1, s)+R(r, s-1)-2$$


 * which is a contradiction. Invoking the pigeonhole principle doesn't seem to be the most natural way to explain this. Gandalf61 (talk) 08:48, 6 August 2008 (UTC)


 * That's what I had arrived at. So this means that the reference to the pigeonhole principle is incorrect. Thanks--Shahab (talk) 09:21, 6 August 2008 (UTC)


 * I'll clarify that point in the proof given in the article. Eric.  89.51.105.61 (talk) 10:00, 6 August 2008 (UTC)
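The counting argument above can be checked mechanically. Here is a minimal sketch (not part of the original thread) that exhaustively verifies the step for small values, writing a for R(r − 1, s) and b for R(r, s − 1): if |M| + |N| = a + b − 1, then |M| ≥ a or |N| ≥ b, since otherwise |M| ≤ a − 1 and |N| ≤ b − 1 would force |M| + |N| ≤ a + b − 2.

```python
# Numerical sanity check of the counting step (not the full Ramsey proof):
# if |M| + |N| = a + b - 1 then |M| >= a or |N| >= b must hold,
# where a stands for R(r - 1, s) and b for R(r, s - 1).

def counting_holds(a, b):
    """Check every split of a + b - 1 vertices into sets M and N."""
    total = a + b - 1
    return all(m >= a or (total - m) >= b for m in range(total + 1))

# Exhaustively verify for all small values of a and b.
assert all(counting_holds(a, b) for a in range(1, 20) for b in range(1, 20))
print("counting argument verified for a, b < 20")
```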

Trigonometry: Secant to Cosecant
I'm having a lot of trouble understanding this problem from Teach Yourself Trigonometry:

"The value of $$\alpha$$ is acute, and $$\sec(\alpha) = k$$. Find in terms of k the value of $$\operatorname{cosec}(\alpha)$$."

Now,

$$\sec(\theta) = \frac{1}{\cos(\theta)}$$

and

$$\operatorname{cosec}(\theta) = \frac{1}{\sin(\theta)}$$

So if I start with

$$k = \frac{1}{\cos(\alpha)}$$

Multiply both sides by cos(α) and divide by k:

$$\cos(\alpha) = \frac{1}{k}$$

The sine is 90 degrees minus the cosine:

$$\frac{\pi}{2} - \frac{1}{k} = \sin(\alpha)$$

So it seems to me that the answer should be:

$$\frac{1}{\frac{\pi}{2} - \frac{1}{k}} = \frac{2k}{\pi k - 2} = \operatorname{cosec}(\alpha)$$

But the book says that the real answer is:

$$\frac{k}{\sqrt{k^{2}-1}}$$

Can someone tell me where my thinking went wrong, and what the right way to approach this problem is? - 209.242.51.68 (talk) 05:23, 6 August 2008 (UTC)


 * The error is with "sine is 90 degrees minus the cosine". It's actually $$\sin(\theta+\frac\pi2)=\cos(\theta)$$. --Tango (talk) 05:42, 6 August 2008 (UTC)


 * Or $$\sin(\frac\pi2-\theta)=\cos(\theta)$$, but not $$\sin(\theta)=\frac\pi2-\cos(\theta)$$. -- BenRG (talk) 11:47, 6 August 2008 (UTC)
 * What you want is the Pythagorean trigonometric identity. Algebraist 12:49, 6 August 2008 (UTC)
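The book's answer can be confirmed numerically. A minimal sketch (my own check, not from the thread): with α acute, cos(α) = 1/k, and the Pythagorean identity gives sin(α) = √(1 − 1/k²), which is positive since α is acute, so cosec(α) = k/√(k² − 1).

```python
import math

# Check the book's answer: if sec(a) = k with a acute, then
# cos(a) = 1/k, sin(a) = sqrt(1 - 1/k^2) (positive for acute a),
# so cosec(a) = 1/sin(a) = k / sqrt(k^2 - 1).
def cosec_from_sec(k):
    return k / math.sqrt(k * k - 1)

a = 1.1                         # an arbitrary acute angle, in radians
k = 1 / math.cos(a)             # sec(a)
expected = 1 / math.sin(a)      # cosec(a) computed directly
assert abs(cosec_from_sec(k) - expected) < 1e-12
print("book's formula agrees with direct computation")
```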

Polynomial that generates ALL primes
I have seen in a book (The Music of the Primes, Marcus du Sautoy, 2004 ed.) a polynomial in 26 variables whose positive outputs constitute the complete set of prime numbers. Can anyone tell me who proved it and when?

Thanks in advance, Fish. (talk) 17:28, 6 August 2008 (UTC)
 * According to formula for primes, it was Jones, Sato, Wada, and Wiens, in 1976. Algebraist 18:05, 6 August 2008 (UTC)
 * In case du Sautoy doesn't make this clear, you should be aware that this is in no way an interesting fact about the primes. It is an interesting fact about the expressive power of integer polynomials. In fact, for any recursively enumerable set of positive integers, there is a polynomial which takes exactly those positive values. Thus the result for primes is a trivial special case. Algebraist 18:56, 6 August 2008 (UTC)
 * Thank you very much for a model answer: timely, concise and accurate. As to your second point, the polynomial is not discussed in any detail at all (which I presume is why he does not quote the source himself), so the article you directed me to contained much of interest. I don't understand it all, but it's interesting anyway! Thanks. Fish. (talk) 22:39, 6 August 2008 (UTC)

list methods which can demonstrate ability to perform a given function
Optimal classification is an arrangement of attributes for a set of elements in an attribute-value system which minimizes the number of attribute queries necessary to identify a particular element within an element set. Because there may be more than one arrangement of attributes which minimize the number of queries, the function is termed optimal rather than minimal. By this definition what method(s) in mathematics can clearly demonstrate their capability of performing this function? (Not a homework question BTW, merely doing article research to determine the method(s) which can perform this function such as when one might collect a list of sort routines.) —Preceding unsigned comment added by 71.100.162.249 (talk) 20:17, 6 August 2008 (UTC)
 * Riiiight! hydnjo talk 22:19, 6 August 2008 (UTC) Acknowledging failure to AGF  :(   -hydnjo talk 15:49, 7 August 2008 (UTC)


 * Let's see if I understand what you're saying. You have a list of attributes for each element. Maybe "Green", or "11010", or something. You want to know which attributes are helpful in telling the elements apart quickly, and you want references for algorithms that are provably effective in finding these attributes. Am I right so far? You say, "a particular element". Are you looking for the fewest attributes necessary to distinguish this from the rest, or to distinguish all elements from each other? Black Carrot (talk) 07:49, 7 August 2008 (UTC)
 * List of attributes... correct.
 * Which attributes are helpful in telling the elements apart quickly... correct.
 * Want references for algorithms that are provably effective in finding these attributes... correct.
 * Am I looking for the fewest attributes necessary to distinguish a particular element from the rest or to distinguish all elements from each other... the latter, if I understand your question correctly.
 * In general, all attributes are assumed to be shared by a common group of elements, the purpose of the algorithm being to distinguish any one element from all the other elements in the list. If you threw in a foreign element with a single attribute permitting that element's identification with only one query, and added this attribute to the list of attributes being considered for all, the algorithm would treat it no differently than any other attribute, nor the new element any differently than the other elements. In other words, if the group of elements were varieties of trees and you threw in a tin can and added the attribute "totally made of metal", it would make no difference to the order in which the algorithm places the attributes to permit the fewest queries to identify either the trees or the tin can, except that identifying the tin can might take a greater, or the greatest, number of queries than if it were part of a list of metal objects with attributes more closely related to identifying them, such as the percentage of each metal in each object. —Preceding unsigned comment added by 71.100.162.249 (talk) 09:38, 7 August 2008 (UTC)


 * Note that "Optimal classification" was the subject of a former article and AfD debate. Gandalf61 (talk) 12:15, 7 August 2008 (UTC)
 * Yes it was, and it has since survived a second attempt at deletion by the same user, who started an account for no other purpose than to pursue the book's deletion after it was moved to Wikibooks. The purpose of this question is in fact to give those who made claims in the deletion discussion here an opportunity to prove that other methods exist. Unlike here on Wikipedia, in the Wikibooks project, if other methods do in fact exist then they are easily enough included in the book as additional chapters. My feeling, however, is that these were false claims, and that the nomination for deletion itself was the equivalent of an entomologist seeing a new creature he had never seen before and attempting to squish it into the ground for that reason. —Preceding unsigned comment added by 71.100.162.249 (talk) 12:25, 7 August 2008 (UTC)


 * If I can stay awake through all the long descriptions... you're looking for a 20Q algorithm? --tcsetattr (talk / contribs) 20:33, 7 August 2008 (UTC)
 * Similar idea, but no. I'm looking for algorithms intended to address bounded problems, i.e., to find the order of attributes which will in combination define a specific class of elements with the fewest attributes rather than algorithms to find the order of fewest attributes which will in combination define an unbounded class of elements. —Preceding unsigned comment added by  71.100.162.249 (talk) 04:50, 8 August 2008 (UTC)


 * Offhand I'd guess this is what they call an NP-hard problem like the travelling salesman problem and you won't even get a quick way to check a solution is correct. It would be nice to be proved wrong but I think you'll probably have to make do with heuristics for a good solution. There's probably a degree in it either way for somebody ;-) Dmcq (talk) 22:18, 7 August 2008 (UTC)
 * Actually, there is at least one algorithm that does this already which has been rejected recently as an article in the Wikipedia on claim that other methods exist which already solve this problem, such as methods for trimming decision trees. Out of scientific curiosity and for the sake of scientific objectivity I seek only to examine such claims to reach an empirical determination of validity or invalidity for myself. —Preceding unsigned comment added by  71.100.162.249 (talk) 04:50, 8 August 2008 (UTC)
 * Certainly 'brute force' is one such method; as to others, I can't provide you with a list, and in fact I didn't find an algorithm in the links http://en.wikibooks.org/wiki/Optimal_Classification - perhaps the method described does give the minimum number of steps (I have no reason to doubt it or 'un-doubt' it) - but I'd generally expect to see a description of the method whereby the choice of question order is found... perhaps I missed it in the text. The flag recognition example seems as good an example as any http://en.wikibooks.org/wiki/Optimal_Classification/Application_Example/Flag_Recognition
 * Since the least number of attributes needed for optimization may admit a variety of arrangements, brute force is very likely to find at least one. In the absence of an alternate means of correlating results with arrangements, which would permit a heuristic method rather than trial and error, the "algorithm" is merely a repetition of the separatory equations over alternate arrangements, allowing the most effective to be found within constraints such as available processing time. The method is fully described in detail in the primary reference; the text only elucidates the equations and paraphrases that reference. In future the Wikibooks version may include a flowchart, expanded text and other diagrams, but consultation with the primary reference will still be advised.
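The brute-force idea described above can be sketched concretely. This is a hedged illustration only: it is not Rypka's separatory-value method, just an exhaustive search over attribute combinations in increasing size, using a hypothetical attribute-value table, until a combination gives every element a distinct pattern of answers.

```python
from itertools import combinations

# Illustrative brute force (NOT Rypka's method): try attribute-column
# subsets in increasing size until one subset assigns every element a
# distinct attribute pattern, i.e. fully separates the bounded set.

elements = {          # hypothetical attribute-value table
    "oak":    (1, 0, 1, 0),
    "pine":   (1, 1, 0, 0),
    "maple":  (1, 0, 0, 1),
    "tincan": (0, 1, 1, 1),
}

def separating_subset(table):
    """Smallest set of attribute columns whose values distinguish all elements."""
    n = len(next(iter(table.values())))
    for size in range(1, n + 1):
        for cols in combinations(range(n), size):
            patterns = {tuple(row[c] for c in cols) for row in table.values()}
            if len(patterns) == len(table):   # all elements separated
                return cols
    return None

print(separating_subset(elements))   # the first minimal separating column set
```

With this toy table, two of the four attributes already suffice to tell all four elements apart, so the search stops at size two.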
 * When I think of an algorithm I would generally be thinking of a method that reduces the amount of work required below trying all combinations of question order (the brute force method). If this is what you seek, the computing desk might be a good place to ask - but I expect it's unlikely that someone would know off-hand...87.102.45.156 (talk) 02:32, 10 August 2008 (UTC)
 * I am consulting the reference desk to examine existing methods claimed in the deletion discussion to perform the OC function rather than looking for new methods or improvements. A heuristic method to perform initial selection of attributes over the existing trial and error method by random selection or by selection based on sequential combinations might represent such an improvement but this is not the purpose of this inquiry unless such methods already exists.
 * I wonder if you include "shortest average number of queries", and if that topic expands to include situations where the likelihood of one object is greater than another ('object' meaning 'flag' in the example linked above)...
 * Not average but absolute for the bounded class. If you eliminate an object or flag then recomputation results in an entirely different order again based on the bounded class which makes up the table and is currently present.
 * If so, the question would seem practically similar to one of data compression, with topics such as Huffman encoding particularly applicable.
 * I'll take a look at it.
 * I think it would require additional research to make it applicable to your problem - I just wanted to point out a vague similarity - generally in data compression one gets to choose the 'questions' - nevertheless the concept might (strong emphasis) be useful.87.102.45.156 (talk) 21:15, 10 August 2008 (UTC)
 * As you say, if the number of questions must be absolutely limited, most forms of numerical encoding/Huffman would not be applicable in the slightest. However, data compression is a massive subject - and a well published one - due to its application in computer science etc. I wouldn't be completely surprised if something very similar or even identical exists (under a different name) - I think it's clear at a very basic level how reducing the number of questions is very similar to a data compression algorithm.87.102.45.156 (talk) 21:22, 10 August 2008 (UTC)
 * If the question is strictly "the least number of queries to get a clear-cut decision in ALL cases" then I don't think I know of any examples. sorry. But I still think a true expert in data compression may be able to...87.102.45.156 (talk) 19:11, 10 August 2008 (UTC)
 * Apart for multiple state logical equation reduction I have left methods of data compression alone, since they were addressed much earlier than my interest in the topic.
 * Also, as a result of finding multiple solutions when set size was expanded and realizing that separatory values change per attribute added I felt expansion might be an invalid although heuristic approach compared to sequential combinations or random sequence trials. Besides, optimal order is usually accomplished within a very few attributes regardless of group size. However, as a further test I will progressively increase the set size to see if it offers a completely heuristic approach.
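To make the data-compression analogy from the exchange above concrete, here is a small sketch (my own illustration, not from the thread) of how a Huffman tree minimizes the average number of yes/no questions when objects have unequal likelihoods. Note it optimizes the average, not the worst case, which is why it may not fit the absolute, bounded-class requirement discussed above.

```python
import heapq

# Huffman-style question tree: repeatedly merge the two least likely
# groups; each merge adds one question level above the merged weights.
# The result minimizes the AVERAGE number of yes/no questions, not the
# worst case.

def average_questions(weights):
    """Expected question count for a Huffman tree over the given weights."""
    total = sum(weights)
    heap = list(weights)
    heapq.heapify(heap)
    cost = 0
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        cost += a + b            # every member of the merged group answers one more question
        heapq.heappush(heap, a + b)
    return cost / total

# Four equally likely objects need 2 questions on average; skewing the
# likelihoods lets the common object be identified in fewer questions.
print(average_questions([1, 1, 1, 1]))
print(average_questions([8, 1, 1, 1]))
```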

edit break
(edit conflict)
 * Looking at the deletion discussion it seems that the above comments will not be of much use to you in your quest.
 * Decision-tree pruning is in a sorry state, Alpha-beta pruning looks better but doesn't really cover your topic (both are simplistic) - the suggestion that these two articles are sufficient to cover the topic as a whole is insulting; there really doesn't seem to be a similar article. As you note in the deletion discussion, Wikipedia is indeed a place where 'easy verifiability' outweighs 'factual accuracy'; it is more common than it should be for factually inaccurate articles to exist, and near-impossible to get rid of them, because factually inaccurate references exist.
 * Typically the body of Wikipedia consists of copies from textbooks; if the textbooks contain errors, so does Wikipedia. Peer review counts for nothing. I really wouldn't expect much luck in this area in this place.
 * Many editors are most likely students, with their world views based around the books their professors recommend; if your subject is not covered in those books, expect some hostility; please accept this explanation as an apology. No doubt in years to come 'they' too will come to understand in what way 'they' were wrong. But I suggest you 'let it go' (in terms of Wikipedia articles - not questions on the ref. desks etc.). There are more constructive places to go. Personally I thank you for bringing this topic up; I can't add more than that.87.102.45.156 (talk) 21:15, 10 August 2008 (UTC)
 * HOWEVER If you find the data compression similarities good enough to suggest the possibility of the same method being used under a different name, or at least having been researched by someone in the data-compression field then I'd suggest asking another question about that - I've no idea if the maths or computing desk would give the best chance of success. 87.102.45.156 (talk) 21:22, 10 August 2008 (UTC)


 * Thanks very much for all comments added above. I personally think that Dr. Rypka's method is unique and that the function of "Optimal Classification" is not duplicated by any other method, despite comments in the deletion discussion and elsewhere to the contrary. What surprises me is the extent of control over Wikipedia at the student level, as in this case of deletion, juxtaposed with both the fine and the completely unacceptable articles it contains. BTW - progressive increase in set size is the heuristic used by the method, i.e., calculating the empirical separatory value for each combination of columns, starting with the combinations of a two-column set, followed by the combinations of a three-column set, etc. The order of significance of the characteristics of the previous set remains the same, so combinations with the added characteristic in the next size set are all that is required, rather than permutations of all characteristics in the set.
 * It's been a while since I worked on the program. Even though the minimum set size requirement is within the capability of a modern personal computer (three columns with a radix of five), the way I personally implemented the method was to maximize the number of columns to match the number of columns in the group, if possible. I used randomly (guess) and sequentially (step by step) generated permutations rather than progressive combinations. In fact I even included a trigger to switch between these methods after a lengthy period of either method not coming up with additional results. However, by any method complete separation usually occurs within the minimum number of columns, so nothing is to be gained from further processing once complete separation has been achieved. At the moment I cannot remember my motive for using maximum set size and permutations rather than progressive (heuristic) set size and combinations.