Wikipedia:Reference desk/Archives/Mathematics/2007 July 28

= July 28 =

Integers with no equal differences
How do I choose N integers so that no differences are equal, and the largest difference is minimal? To start with, for N=5? --mglg(talk) 08:29, 28 July 2007 (UTC)


 * Start with a chosen number, then work as follows: a0 := k, ai := ai−1 + i. I believe you could argue that you are incrementing by the least amount possible at each step, so the largest difference is minimal, and of course no two differences are equal. This is assuming that you mean difference as ai − ai−1.

—The preceding unsigned comment was added by User: (talk • contribs) – Please sign your posts!
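The construction described above can be sketched in a few lines (a minimal Python illustration; the function name is arbitrary, and the sequence and starting value k are taken from the comment):

```python
# Sketch of the construction above: a_0 = k, a_i = a_{i-1} + i,
# so the consecutive differences are 1, 2, ..., N-1 (all distinct).
def increasing_differences(n, k=0):
    seq = [k]
    for i in range(1, n):
        seq.append(seq[-1] + i)
    return seq

seq = increasing_differences(5)                     # [0, 1, 3, 6, 10]
diffs = [b - a for a, b in zip(seq, seq[1:])]       # [1, 2, 3, 4]
print(seq, diffs)
```

Note that only the consecutive differences are distinct here; pairwise differences can repeat (e.g. 3 − 0 = 6 − 3), which is where the Golomb rulers mentioned below come in.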


 * I believe such sets are known as Golomb rulers. Finding minimal ones (and proving their minimality) is difficult, and has been done with the aid of global distributed computing projects such as distributed.net.  —Ilmari Karonen (talk) 09:54, 28 July 2007 (UTC)
 * Yes, Golomb rulers is precisely what I was looking for. Thanks Ilmari! --mglg(talk) 10:35, 28 July 2007 (UTC)
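For small N a brute-force search for an optimal Golomb ruler is feasible; this is only a sketch (the function name and search strategy are mine, and as noted above, optimal rulers of large order required distributed searches such as distributed.net's OGR project):

```python
from itertools import combinations

def shortest_golomb_ruler(n):
    """Brute-force search for an order-n Golomb ruler of minimal length:
    marks at 0 and `length`, with all pairwise differences distinct.
    Feasible only for small n; runtime grows very quickly."""
    length = n - 1
    while True:
        # Fix marks at 0 and `length`; choose the n-2 interior marks.
        for interior in combinations(range(1, length), n - 2):
            marks = (0,) + interior + (length,)
            diffs = [b - a for a, b in combinations(marks, 2)]
            if len(diffs) == len(set(diffs)):
                return marks
        length += 1

print(shortest_golomb_ruler(5))  # → (0, 1, 4, 9, 11), the optimal length-11 ruler
```

For N = 5 this finds the well-known optimal ruler {0, 1, 4, 9, 11} of length 11, answering the original question.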

Quadratic equation
How can we solve a quadratic equation? Please mail me here (email address commented out to prevent spam)

There are several ways of solving quadratic equations, assuming that they have solutions in the reals; many polynomials, and therefore many quadratics, can be factorised. Firstly, there is an extensive article on completing the square, which explains how this can be done: http://en.wikipedia.org/wiki/Completing_the_square.

Secondly, you can factorise the quadratic by simplifying as much as possible and using some system of guessing, as follows (sorry, I don't know how to use math symbols).

E.g. there exist real solutions to y = f(x) = x^2 + 3x + 2 = 0. What are the factors? Since the coefficient of the highest power of x (here x^2) is 1, the coefficient of x in each factor must also be 1. We also know that x^2 is the highest power, so there must be exactly two linear factors multiplied together (which also follows from the equation being quadratic).

So, we have deduced that the factorisation looks like this: f(x) = (x + ?)(x + ?). Since (x + a)(x + b) = x^2 + xa + xb + ab = x^2 + (a + b)x + ab, we can put our equation into this form: f(x) = x^2 + 3x + 2 = x^2 + x + 2x + 2.

The only values for the question marks are 1 and 2, since we need a + b = 3 and ab = 2.

This means that (x + 1)(x + 2) = 0. In order for this product to equal zero, one or both of the brackets must equal zero. So we know that x + 1 = 0 (from the first bracket) or x + 2 = 0 (from the second).

This gives us the solutions x = −1 or x = −2, with y = 0. I hope this solves your problem, and that someone puts some of this into a page about quadratic factorisation, because I couldn't find one. 82.35.196.165 21:39, 28 July 2007 (UTC) Ben
 * See quadratic equation. -- Meni Rosenfeld (talk) 12:53, 28 July 2007 (UTC)
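As a supplement to the factorisation walkthrough above, the general quadratic formula x = (−b ± √(b² − 4ac)) / 2a can be sketched in a few lines of Python (a minimal illustration; the function name is arbitrary):

```python
import math

def solve_quadratic(a, b, c):
    """Solve a*x^2 + b*x + c = 0 over the reals via the quadratic formula."""
    disc = b * b - 4 * a * c          # the discriminant decides how many real roots exist
    if disc < 0:
        return []                     # no real roots
    root = math.sqrt(disc)
    if disc == 0:
        return [-b / (2 * a)]         # one repeated root
    return [(-b - root) / (2 * a), (-b + root) / (2 * a)]

print(solve_quadratic(1, 3, 2))  # → [-2.0, -1.0], matching the factorisation above
```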

Intuition in Maths
I just wanted to know the role of intuition in mathematics.


 * No mathematically sound solution should ever be found by intuition alone, but sometimes intuition can suggest solutions to a problem. For example, if you were given a set of points and told to find a function that generates them, without intuition you'd be looking at an infinite number of possibilities. However, by looking at a graph of them, your intuition might suggest that, say, a linear or sinusoidal function is possible, and then it's a case of finding the right parameters. Some of the more famous examples of intuition include Fermat's Last Theorem (since he didn't provide a proof, and many scholars believe he didn't have one, but suspected the result was true), and conjectures such as Goldbach's Conjecture, where no formal proof has been found but where mathematicians have a strong suspicion that it's true. Confusing Manifestation 11:34, 28 July 2007 (UTC)

The role of intuition in mathematics is the same as the role of mutation in biological evolution: it's the source of the variations that are culled by the selective pressure. The selective pressures in mathematics are (a) the need to prove your results, and (b) other mathematicians' aesthetic opinion of your results. -- BenRG 15:40, 28 July 2007 (UTC)


 * If you take intuition in its philosophical sense, then there is the school of Intuitionism as an approach to mathematics. --Lambiam 20:14, 28 July 2007 (UTC)

Many (probably even most) practicing mathematicians rely on intuition to decide which questions to pursue and which techniques to use. Intuition comes into the equation much more frequently, and on a much lower level: "I think this result should generalize to the noncommutative case", "If triangles look like hyperbolic triangles, then some properties of hyperbolic geometry should hold here", "There should be some quantifiable idea of geometric complexity that can be used here", etc. Confusing Manifestation's example is not something often considered in mathematics; it sounds more like the role of intuition in engineering. The difficulty of Fermat's Last Theorem and the Goldbach Conjecture makes it clear that these claims were closer to guesses; it is very unlikely that either Fermat or Goldbach had any intuition relating to the actual solutions of these problems. The ability to make accurate guesses about what problems are feasible is one of the most important skills for a mathematician to possess; in some fields, a certain intuition about the kinds of problems and proofs encountered is considered almost a necessity. I've heard it said, for example, that someone without geometric intuition could never be a great geometer (though this may say more about the field, and the role of rigor in geometry, than about intuition).

Intuition is also useful to bridge the gap between seemingly unrelated fields; even though a theorem in one field is probably not strictly true in another, many of the same ideas often work. Consider the importance of geometric intuition to algebraic geometers, who very often work on problems where geometric ideas are completely out of place; nevertheless, the intuition informed by geometry is invaluable. From Geometry: "From a naïve perspective, these objects are just finite sets of points, but by invoking powerful geometric imagery and using well developed geometric techniques, it is possible to find structure and establish properties that make them somewhat analogous to the ordinary spheres or cones." I alluded above to the ideas of synthetic geometry. A CAT(-1) space, for example, is a geodesic metric space where triangles are "as thin" as triangles in hyperbolic space. It turns out that such spaces have many properties in common with negatively curved manifolds (in fact we can extend this even to hyperbolic groups), and the techniques used to study them are often motivated by familiar arguments about negatively-curved manifolds. Tesseran 05:01, 29 July 2007 (UTC)


 * Developing a good mathematical intuition is not always helpful, for example, see pathological cases in mathematics. There are some results in mathematics which are counterintuitive.


 * That just means you need to develop a better intuition. Intuition is absolutely indispensable in mathematics; without it you simply can't make progress. But you have to adapt your intuition to the facts as they become clear. --Trovatore 16:17, 29 July 2007 (UTC)


 * Intuition is crucial to mathematics success. Mathematics describes patterns. Intuition is human pattern recognition. Shall we invent axioms at random and make deductions with no purpose? Of course not. Shall we see the hint of a pattern and assume our first impression is universally true? Of course not. We are Polynesian navigators on the wide Pacific, finding our clues to direction in the clouds and birds and floating debris. If we find land we are mathematicians; if not, we choose another career. --KSmrqT 17:08, 29 July 2007 (UTC)


 * Intuition is critical in math because it separates us from a computer. Our ability to prove mathematical facts intuitively is much faster than what a computer can do using automated theorem proving. Indeed123 02:05, 31 July 2007 (UTC)
 * Are you implying that intuition was not critical before computers were invented, and that it will cease to be critical when theorem proving by computer becomes faster than humans can prove theorems (which it many cases already is)? --Lambiam 04:47, 31 July 2007 (UTC)

Tiling applet
I came across a problem requiring a given set of rectangles to be fitted without overlap into the minimal-area rectangle, which I solved by cutting out shapes and trying to fit them into various possibilities of increasing size, until I reached one which would suffice. Does anyone know of an applet which will do this, preferably by finding the smallest rectangle which will contain an arbitrary set of user-defined tiles? The tiles need not be rectangular, but would be made up of unit squares. 81.159.14.237 14:15, 28 July 2007 (UTC)


 * I'm pretty certain this is NP-complete. If it is, finding the optimal solution will take exponentially longer as the size of the problem increases. It is still solvable, but if you don't need it to be perfect you can find good approximations much faster. It should be somewhere in the list of NP-complete problems, but that's a pretty long list and you're probably better off waiting for someone who knows what it's called and can direct you to that page. — Daniel 16:39, 28 July 2007 (UTC)


 * Being NP-complete certainly does not imply that the solution takes exponential time. It is quite possible that a sub-exponential algorithm is known, and unless P=NP has been solved negatively and I wasn't informed, it is also possible that a polynomial algorithm exists. -- Meni Rosenfeld (talk) 17:55, 28 July 2007 (UTC)


 * Is a sub-exponential algorithm known for any NP-complete problem? Is there even anything between polynomial time and exponential? As for P=NP, if nobody solved it for a million dollars, nobody will solve it just so you can make your applet fast. — Daniel 19:54, 28 July 2007 (UTC)


 * No sub-exponential algorithm is known for any NP-complete problem, but many NP-complete problems can be solved efficiently in practice (just not in the worst case). There are super-polynomial, sub-exponential algorithms; one example is the general number field sieve. -- BenRG 20:58, 28 July 2007 (UTC)


 * Hmm... I guess I figured that since there are known subexponential algorithms for problems not known to be in P (like the integer factorization that BenRG mentioned), there would probably be such algorithms for NP-complete problems. But I wasn't thinking straight: if there were one, there would be such an algorithm for all NP-complete problems, and I would probably know about it. I withdraw that part. -- Meni Rosenfeld (talk) 22:13, 28 July 2007 (UTC)


 * I assume that you require the axes to be aligned (rectangles can be rotated by 90° but not by arbitrary amounts). This does make a difference even if all rectangles are unit squares. The term tiling usually means tessellation, in which the area is also fully covered; packing may be a better term here. I don't see an obvious way of doing it, but perhaps it is possible to adapt Knuth's fast pentomino packer (see ref. 2 in the article) to this problem. --Lambiam 20:27, 28 July 2007 (UTC)

As the OP, I did mean that all shapes including the enclosing rectangle should be aligned to XY axes. The problem I was looking at had only 6 rectangles, and I assume that not many more would be involved in anything similar, so no great efficiency would be needed in the fitting algorithm.

As an aside, the 6 rectangles to be fitted were of size 1 X 2, 2 X 3, ..., 6 X 7. Starting with the smallest and successively adding the next smallest, it is easy to form a rectangle (which therefore just contains the pieces) up to the stage of including the 5 X 6 piece, when one of size 5 X 14 results. At this point, there is nowhere to place the 6 X 7 piece and still remain rectangular, so I addressed the problem from scratch and managed to fit the 6 pieces into a 6 X 19 rectangle with a 1 X 2 gap, which I decided was the best fit. I wonder if any larger problem of the same pattern ( fitting one rectangle of each of the sizes 1 X 2, 2 X 3, ..., n X (n+1), for n>6) has an enclosing rectangle with no lost area? 81.132.236.205 10:26, 29 July 2007 (UTC)


 * You might be interested in Squaring the square. The total area of your 6 pieces is 112 which factorises as 2^4 * 7. So if a perfect fit were to exist, it would be 16*7 or 8*14 - because of the 6*7 piece, the shortest side possible is 6. The obvious packing approach is to start with the biggest piece and add smaller pieces to it, this results in a wasted space of 2*4=8: "place 6*7; join 6*5 along 6 edge; join 5*4 along 5 edge; join 4*3 along 4 edge; now fit 3*2 and 1*2 in the 4*4 space remaining". I presume your 6*19 (allowing for repositioning of 6*7 and 5*6) is: "place 6*7; join 5*6 along 6; join 5*4 using 5 along part of 6 of 5*6; join 4*3 using 4 along part of 5 of 4*5; join 3*2 along 3; fit 1*2 in remaining 1*4 hole". -- SGBailey 14:22, 30 July 2007 (UTC)
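SGBailey's area argument can be checked mechanically. The following sketch (the names are mine) computes the total area of the pieces 1×2, 2×3, ..., n×(n+1) and lists the factor pairs whose shorter side is at least n, which are the only candidate shapes for a perfect enclosing rectangle; whether the pieces actually pack into such a rectangle is of course the hard part:

```python
def candidate_rectangles(n):
    """Total area of the pieces 1x2, 2x3, ..., nx(n+1), and the factor
    pairs (w, h) with w <= h and w >= n.  The largest piece is n x (n+1),
    so the shorter side of a perfect enclosing rectangle must be >= n."""
    area = sum(k * (k + 1) for k in range(1, n + 1))
    pairs = [(w, area // w)
             for w in range(n, int(area ** 0.5) + 1)
             if area % w == 0]
    return area, pairs

print(candidate_rectangles(6))  # → (112, [(7, 16), (8, 14)]), as stated above
```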

Connect 4
What placement of countery things would make the greatest number of lines of four for both sides? What if the number of lines of four had to be equal? Would there be a difference? Vitriol 18:21, 28 July 2007 (UTC)


 * If there are five consecutive pieces of the same colour in a row, would you count them as two lines of four (1234 and 2345 each count as a line of four), or one (1 line of four-or-more), or none (there is 1 line of five, not of four)? Also, do we have a 6×7 arrangement, as in the standard Connect Four game? If the answer to the first question is "two", the greatest number for both sides together is obtained by putting pieces of the same colour on all positions, which gives you 69 lines of four. (In general, for an m×n arrangement, where m and n are at least 3, a total of 4mn − 9(m+n) + 18.) This gives an upper bound of 34+34 if both sides must have equal numbers; that upper bound is certainly not sharp. --Lambiam 21:06, 28 July 2007 (UTC)
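Lambiam's count can be verified directly. This sketch counts the length-4 windows in each of the four directions on a rows×cols board and compares the result with the closed form (a minimal illustration, assuming a run of five counts as two lines of four):

```python
def lines_of_four(rows, cols):
    """Count all length-4 lines (horizontal, vertical, and both diagonal
    directions) on a rows x cols board; a run of 5 counts as two lines."""
    horiz = rows * (cols - 3)             # 4-wide windows in each row
    vert = (rows - 3) * cols              # 4-tall windows in each column
    diag = 2 * (rows - 3) * (cols - 3)    # both diagonal directions
    return horiz + vert + diag

m, n = 6, 7
print(lines_of_four(m, n))               # 69 on the standard board
print(4 * m * n - 9 * (m + n) + 18)      # the closed form above, also 69
```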


 * [[Image:Connect4_diag.svg|70px|right]] (ec) The best I could come up with is 18 lines of four for each side (on a 6×7 board), achieved by dividing the board along a diagonal and putting each color on one side, as in the image on the right. Of course, a definite answer could be obtained by running an exhaustive search of all the possible configurations.  —Ilmari Karonen (talk) 21:34, 28 July 2007 (UTC)

Probability
Here is a probability question: where does the mathematician even begin to try to answer it? At the end of the day, there are only three things that can happen with the Dow Jones Industrial Average: (A) the value can rise (i.e., the difference between the opening value and the closing value is a positive number); (B) the value can fall (i.e., the difference between the opening value and the closing value is a negative number); or (C) the value can remain unchanged (i.e., the difference between the opening value and the closing value is zero). What is the probability of A happening? of B? of C?  Where does one begin to answer a question such as this? Thanks. (JosephASpadaro 20:21, 28 July 2007 (UTC))


 * Mathematicians don't try to answer that question; they just work with probability as an abstract theory. The mathematical theory of probability is perfectly consistent within itself, and it can be shown that if there are any sensible rules of probability, then they have to be the standard rules. (I can't remember the details, but you can show that anyone following different rules is obliged to accept a certain wager which it's logically impossible for them to win.) Why probability is useful in the real world is a deep metaphysical mystery, closely related to the problems of existence and qualia. The book Littlewood's Miscellany includes an essay about this, and Leonard Susskind has a short essay about it here: . -- BenRG 20:39, 28 July 2007 (UTC)


 * I would expect that you could answer with the average probability for all days, and would get something like 51% of the time it increases, 49% of the time it decreases, and it almost never stays exactly the same, if you calculate the Dow to sufficient precision. (Of course, if you include days the stock exchange is closed, you'd get a lot more days in the last category.) The real trick, of course, is knowing which days will go up and which will go down, and that's not so easy to calculate. Note that large numbers of people using a formula to calculate such a thing and investing on days they expect an increase would also change the market's behavior. StuRat 21:34, 28 July 2007 (UTC)


 * Well, you could measure the proportion of days over some given historical period that the DJ index did (A), (B) or (C). This gives you a descriptive statistic, which you interpret as a frequentist probability over that period. But this doesn't really tell you anything about what determines how the index behaves. Its value on one day could depend in some complex way on its values on previous days, and its behaviour in the future (if that is what you are trying to predict) could be entirely different from its behaviour over the period that you measured. Gandalf61 13:35, 29 July 2007 (UTC)
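The descriptive-statistic approach can be sketched as follows; the closing values here are made up purely for illustration, not real DJIA data:

```python
# Frequentist estimate of P(up), P(down), P(unchanged) from day-to-day
# differences of closing values (illustrative numbers, not real data).
closes = [100.0, 101.2, 100.8, 102.5, 102.5, 103.1, 101.9]
changes = [b - a for a, b in zip(closes, closes[1:])]

n = len(changes)
up = sum(c > 0 for c in changes) / n
down = sum(c < 0 for c in changes) / n
flat = sum(c == 0 for c in changes) / n
print(f"up {up:.0%}, down {down:.0%}, unchanged {flat:.0%}")
```

Note that 7 closing values give only 6 differences, the same off-by-one discussed for the 249-day sample below.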


 * For the general interest, in the past year (249 data from 31 July 06 to 26 July 07), the DJIA closing value went up 60.08% of the time, never stayed the same value, and went down 39.92% of the time.  These numbers, of course, will vary from year to year.  They are not the "true" probabilities of going up or down (nor, a priori, are they necessarily estimates of the probabilities of what will happen tomorrow).  Prediction into the future requires some kind of assumption about the processes acting from day to day.  If you assume that every day is an identical random event, then these probabilities are estimates of the probabilities for what will happen tomorrow.  But this is probably not a safe assumption.  --TeaDrinker 17:20, 29 July 2007 (UTC)
 * Should that be 248? For no integer U, 100U/249 rounds to 60.08, the closest being 100·150/249 = 60.24... . --Lambiam 00:00, 30 July 2007 (UTC)
 * Well spotted! 249 days of closing values, so 248 differences.  I also rounded a bit. --TeaDrinker 02:31, 30 July 2007 (UTC)
 * And of course, the fact that it goes up 60% of the time and down 40% of the time doesn't mean it is higher now than it was then. It could have gone up 1 point 60 times and down 10 points 40 times, thus making it a lot lower overall. -- SGBailey 14:03, 30 July 2007 (UTC)
 * Right, but I think it would not be implausible to assume that the change between successive days could be approximately modelled as a normally distributed variable. Under that assumption, as well as reasonable assumptions about the prior distribution of the parameters, increasing in 60% of 248 days does imply, with very high probability, a significant increase between the two endpoints of the time period. -- Meni Rosenfeld (talk) 17:36, 30 July 2007 (UTC)
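Meni's point can be illustrated with a simple one-sided sign test. Assuming, purely as a null model, that each day is independently up or down with probability 1/2, the chance of seeing 149 or more up days out of 248 (roughly the 60% reported above) is an exact binomial tail. This ignores dependence between days and says nothing about the sizes of the moves, so it is only a sketch:

```python
from math import comb

# One-sided sign test against a symmetric null model: each of the 248
# day-to-day changes is independently up or down with probability 1/2.
# How likely are 149 or more up days (about the 60% observed above)?
n, k = 248, 149
p_tail = sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n
print(p_tail)  # well under 1%, so a symmetric model looks implausible
```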

Great input! Very helpful! Thanks. (JosephASpadaro 19:27, 30 July 2007 (UTC))