Wikipedia:Reference desk/Archives/Mathematics/2008 April 30

= April 30 =

Exponent Circle
When y=x^n (n an even number) is graphed, no matter how large n gets, three points stay the same: (-1,1), (0,0), and (1,1). But as n increases, the line segment between x=-1 and x=1 becomes more curved until it looks like a right angle - if you look at it from far away. So here's my question: what value of n gives you a half-circle line segment between -1 and 1 on the x-axis? Sorry if I explained it wrong. 68.5.35.135 (talk) 00:14, 30 April 2008 (UTC)


 * It will never be an actual semicircle, because that's defined by a different equation (namely x^2 + (y-1)^2 = 1). What you can do is try to find the value of n that will minimise some measure of the difference between the polynomial and the semicircle. Unfortunately I don't think there's a particularly easy way to solve that exactly for most decent measures (and it may make more sense to work between x = 0 and x = 1 if you want to get an exact value of n), but I suspect the solution lies somewhere between n = 2 and n = 4 for most measures. Confusing Manifestation (Say hi!) 01:02, 30 April 2008 (UTC)


 * It's also somewhat a question of what would make something a better approximation over the whole interval (0,1). Specifically, note that for the semicircle the slope approaches positive infinity as x approaches 1 from the left, but y=x^n for finite n has a finite slope at x=1. So arguably the higher the degree, the better the approximation; but just looking at the curve, one would probably think ConMan's answer more reasonable. If you know some calculus you might try defining the error to be


 * $$r=\int_{0}^{1}|f(x)-y|dx$$


 * where $$y=x^n$$, r is the error, and $$f(x)$$ is the equation for the appropriate semicircle. A math-wiki (talk) 05:20, 30 April 2008 (UTC)
 * If you're still interested, I find the minimum for real exponent to be at about n = 3.6598. If looking at only even exponents, it's a choice between 2 and 4, and 4 gives a closer curve. -- Xedi (talk) 12:42, 30 April 2008 (UTC)
 * It would certainly be more orthodox to consider $$\int_0^1(f(x)-y)^2\ dx$$ instead. -- Meni Rosenfeld (talk) 22:39, 30 April 2008 (UTC)
 * One can also ask for the best approximation of the form $$y = x^{1/n}$$ to the quarter circle $$y = (2x-x^2)^{1/2}$$, for 0 ≤ x ≤ 1. From a geometrical point of view, i.e., considering only the degree of agreement between the shapes of the curves, following the A math-wiki approach of minimizing the integral of the absolute difference, and thereby the combined area of the difference shapes, will give the same result (n ≈ 3.45) as before, whereas the squaring approach does not. Another geometric criterion is making the two difference shapes have equal areas, which leads to $$n = \pi/(4-\pi)$$ ≈ 3.6598, suspiciously close to Xedi's best real value (for which the criterion was not revealed). --Lambiam 13:25, 2 May 2008 (UTC)
 * And that integral is actually calculable, even if it does require you to head into hypergeometric functions. A math-wiki's integral comes out nicer, as long as you can find the point of intersection of the two curves (which involves finding the root of a degree 2n-1 polynomial, no simple feat). Confusing Manifestation (Say hi!) 23:23, 30 April 2008 (UTC)


 * FYI, Mathematica evaluates this integral to $$\frac{\sqrt{\pi } \Gamma \left(\frac{n+1}{2}\right)}{2 \Gamma \left(\frac{n}{2}+2\right)}-\frac{2}{n+1}+\frac{1}{2 n+1}-\frac{\pi }{2}+\frac{5}{3}$$. The minimum can easily be located numerically as n ≈ 3.78064841874389536239. So it's not surprising that n = 4 gives the best integer fit. Fredrik Johansson 14:43, 1 May 2008 (UTC)
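The minimization discussed above is easy to check numerically. Below is a rough sketch (my own, not from the thread): it grid-searches for the real exponent minimizing A math-wiki's absolute-error integral, with the semicircle through (-1,1), (0,0), (1,1) written as f(x) = 1 − √(1 − x²). The step counts and search range are arbitrary choices.

```python
import math

def f(x):
    # lower half of the circle x^2 + (y - 1)^2 = 1, which passes
    # through (-1, 1), (0, 0) and (1, 1)
    return 1 - math.sqrt(1 - x * x)

def abs_error(n, steps=1000):
    # midpoint-rule approximation of r = integral_0^1 |f(x) - x^n| dx
    h = 1.0 / steps
    return h * sum(abs(f((i + 0.5) * h) - ((i + 0.5) * h) ** n)
                   for i in range(steps))

# crude grid search over real exponents n in [2, 4]
best_n = min((2 + k / 100 for k in range(201)), key=abs_error)
print(best_n)
```

Finer grids or a proper one-dimensional minimizer would sharpen the answer; this is only meant to confirm that the minimizing exponent lies strictly between 3 and 4, consistent with the values quoted in the thread.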

Proving that a graph coloring algorithm produces a minimum color solution
A few years ago I came up with an algorithm for coloring the vertices of a graph (graph theory). What I want to know is how to go about formally proving that it produces a minimal-color solution; I suspect the algorithm does, and I have so far failed to find any counterexamples. I know fairly little about graph theory, only that which was covered in HS Discrete Math. Any help would be appreciated, thanks. A math-wiki (talk) 05:07, 30 April 2008 (UTC)
 * I doubt we'll be able to give much useful help without details of the algorithm. All I can say is that the problem is NP-complete, so if your algorithm does work, it's probably unreasonably slow. Algebraist 10:03, 30 April 2008 (UTC)
 * Unreasonably? I’d think an NP solution to an NP-complete problem is reasonable. GromXXVII (talk) 11:06, 30 April 2008 (UTC)
 * NP is a class of problems, not of solutions (and does not stand for non-polynomial, as you appear to think). I assume you mean "a super-polynomial solution to an NP-complete problem is reasonable". --196.209.177.235 (talk) 13:13, 30 April 2008 (UTC)
 * To my knowledge the problems are classified by the complexity of an algorithm that correctly solves the problem. I was assuming it wasn’t a clearly exponential nor polynomial algorithm in runtime, and so is as far as I know, offhand, as much as could be desired from an NP-complete problem – which is almost what I meant by an NP solution. I actually meant, by NP solution, a solution different from an optimal runtime solution by a polynomial transformation. GromXXVII (talk) 17:54, 30 April 2008 (UTC)
 * Problems are classified by the complexity of an algorithm that correctly solves them in a given computational model. NP is the class of those problems which have a polynomial solution, but on a nondeterministic machine. Since we have no physical nondeterministic computer available, I expect math-wiki's algorithm is for a deterministic one. To the best of my knowledge, there is no known subexponential solution to an NP-Complete problem, thus math-wiki's solution is probably exponential, qualifying as unreasonably slow. -- Meni Rosenfeld (talk) 22:35, 30 April 2008 (UTC)
 * I don’t know whether there are any algorithms at all with runtime that is not polynomial but is faster than every exponential. There are algorithms, such as I think Buchberger's algorithm, which are not so clear, and may or may not be polynomial. A solution to an NP-complete problem that might be polynomial would not, I think, be unreasonably slow. (Granted, it either is or isn’t; that question can be far from trivial.) GromXXVII (talk) 23:24, 30 April 2008 (UTC)
 * Of course there are. GNFS runs essentially at $$e^{\sqrt[3]{n}}$$ (which is not exponential in n). I think there are some that run at $$e^{(\ln n)^2}$$, too. -- Meni Rosenfeld (talk) 07:04, 1 May 2008 (UTC)
 * One method to showing it finds an optimal coloring would be to show that it gives a coloring with the same (or at most) number of colors as a known algorithm that gives an optimal coloring.
 * Or you might show it works for some classes of graphs, at least, by figuring out when it gives you the clique number – which is a lower bound on the chromatic number, and sometimes actually equals it. I don’t know how many graphs, relative to all of them, actually have the clique number as their chromatic number, though.
 * Or, and this would probably be really tough: every graph has an optimal greedy coloring (although a greedy algorithm need not yield an optimal coloring). So, given a graph and an optimal greedy coloring of it, show that your algorithm produces an equivalent coloring. – I have no idea how you could do that unless you know more about the graph, though. GromXXVII (talk) 11:06, 30 April 2008 (UTC)

Well, my algorithm is very straightforward and easy to apply to all but a select few graphs. Hopefully the following is an accurate reproduction of my algorithm (I haven't written in proper algorithm form in a long time, but I remember how to do it).

The Benfield Algorithm:

1. Pick an arbitrary vertex, and give this vertex color (1), exclude color (1) from its adjacent vertices. (Exclude means mark the vertex to indicate that (1) is not an allowable color, usually by writing (1) and putting a slash through it.)

2. Select an arbitrary vertex adjacent to the 1-colored vertex, and give this vertex color (2); exclude color (2) from its adjacent vertices.

3. Select another arbitrary uncolored vertex adjacent to a colored vertex. If there are no uncolored vertices then you have colored the entire graph.

4. If this vertex has no available colors left, add a new color (such as color (3)), give this vertex the new color, and goto step 3.

5. If this vertex has exactly one color available, give the vertex this color, goto step 3.

6. If the vertex has multiple colors available, choose another vertex adjacent to a colored vertex, goto step 4.

7. If all uncolored vertices adjacent to 1 or more colored vertex(es) have multiple available colors, select an arbitrary uncolored vertex adjacent to 1 or more colored vertex(es), goto step 8.

8. Compare what happens to the neighbors of this vertex, if this vertex is colored with each available color.

9. If there exists a single choice of color for this vertex that results in fewer additional colors for this vertex's neighbors (including potentially zero additional colors) than any other choice, then color this vertex with that color. Goto step 3.

10. If there does not exist a single choice, eliminate those choices taking more additional colors than the minimum number of additional colors, and goto step 11.

11. Apply steps 8-11 to the neighbors of this vertex, and repeat if necessary until either you have exhausted the set of uncolored vertices or you have obtained a minimal choice of coloration. Goto step 12.

12. If you exhausted the set of uncolored vertices, then you may arbitrarily choose a coloration for this vertex. Goto step 3.

13. If you found a singular minimal choice of coloration, color the vertex with this color, goto step 3.

The algorithm looks very long and complicated, but it's really not. And if you stay within steps 1-6, which usually happens, it's very computationally efficient. The procedure in steps 7-13 is best explained by example, and I call it The Concept of nth Order Exclusions. Suppose you were applying my algorithm and your only uncolored vertex adjacent to a colored vertex has multiple choices for its coloration (not very common at all, but it can happen). Let's suppose you have two choices for coloring this vertex, (2) and (3). What you do in step 7 is ask: what if I used (2), and what if I used (3) — what would happen to its uncolored neighbors? It would eliminate either color (2) or color (3) from all of its neighbors (this is a First Order Exclusion). Now the question is, does either of the choices require us to add a new color to our possible choices for coloring a vertex? If neither requires any, or they require the same number to be added, then the First Order Exclusions are inconclusive. But if one requires more to be added than the other, then the First Order Exclusion is conclusive and we choose the one that requires us to add fewer additional colors. If the First Order Exclusions are inconclusive, then we look at the effect our choices for the original color have on the vertex's neighbors' neighbors (Second Order Exclusions); that is, we now consider what happens to them if we colored its neighbors as well, for color (2) or for color (3). This procedure is repeated as many times as necessary, and if we run out of neighbors then we conclude the choice is arbitrary and choose as we please for this vertex. This computationally intensive process is avoided by my algorithm when possible, by step 6, so it is really exceedingly rare that one needs to use this procedure. Most of the time, just choosing a different vertex and coming back to the problematic vertex later solves the problem.
The method in steps 7-13 seems to me to be akin to branch and bound techniques, though I know little about them. Hopefully the above isn't too confusing. A math-wiki (talk) 07:27, 1 May 2008 (UTC)
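For what it's worth, the easy part of the procedure (steps 1-6, without the nth-order-exclusion machinery of steps 7-13) can be sketched in code. This is a loose reading of the steps, not A math-wiki's exact algorithm: it grows the colored region from a start vertex, always colors a forced vertex (exactly one available color) before a free one, and introduces a new color only when a vertex has none available.

```python
def benfield_greedy(adj, start):
    """Loose sketch of steps 1-6 only: grow the colored region from
    `start`, coloring forced vertices (exactly one available color)
    before free ones, and adding a new color only when none is left.
    `adj` maps each vertex to the set of its neighbours."""
    colors = {start: 0}
    palette = 1                       # number of colors introduced so far

    def available(v):
        used = {colors[u] for u in adj[v] if u in colors}
        return [c for c in range(palette) if c not in used]

    while len(colors) < len(adj):
        # step 3: uncolored vertices adjacent to the colored region
        frontier = [v for v in adj if v not in colors
                    and any(u in colors for u in adj[v])]
        if not frontier:              # disconnected graph: restart anywhere
            frontier = [v for v in adj if v not in colors]
        # prefer a forced vertex (step 5) over one with many options (step 6)
        v = min(frontier, key=lambda w: len(available(w)))
        options = available(v)
        if options:
            colors[v] = options[0]
        else:                         # step 4: introduce a new color
            colors[v] = palette
            palette += 1
    return colors
```

On a 5-cycle this uses three colors, which is optimal; but it deliberately omits the tie-breaking of steps 7-13, so on some graphs it may use more colors than the full procedure would.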

I should mention that I have tried and failed to find counterexamples where the above algorithm does not find the minimum color solution. This is really the fourth iteration of this method; I went through several versions before deciding that this one was the most efficient and that it didn't seem to fail to produce the optimal coloring. For the first two variants I found counterexamples, graphs where they failed to produce the minimal solution. The third variant, which simply doesn't bother avoiding those vertices with several choices for coloration, is much less computationally efficient, but otherwise quite possibly produces an optimal coloration of the graph. A math-wiki (talk) 07:46, 1 May 2008 (UTC)


 * I don't have advice for proving anything, but I do think that this is similar to branch-and-bound, and therefore takes exponential time in the worst case. Note that in assigning complexity classes to algorithms, we always use the worst case.  In your algorithm, that means having to do steps 7-13 through the entire depth of the graph for every point.  Note that many theoretically exponential-time algorithms are often very fast in many, if not most, useful problems.  Take the simplex algorithm for linear programming.  It is usually very fast, finding the minimum in a few iterations.  However, it is possible to construct a problem for which the simplex method visits every corner of the feasible space, thus making it an exponential method.  The ellipsoid method solves linear programs in polynomial time but is very slow in practice.  You'll find many more people use the simplex method in practice than any polynomial-time method, and almost no one uses the ellipsoid method.  moink (talk) 13:55, 2 May 2008 (UTC)

Markov chains

 * The probability tomorrow will be rainy if today is rainy = 0.1
 * The probability tomorrow will be rainy if today is not rainy = 0.2

What is the probability that two days ago was rainy if we know today is not rainy?

This question is the opposite of the first example here: Examples of Markov chains. Should I look at the inverse of the transition matrix or use Bayes' theorem? —Preceding unsigned comment added by 89.0.54.162 (talk) 10:40, 30 April 2008 (UTC)


 * Use Bayes' theorem. Note that:
 * P(today is rainy and tomorrow will be rainy) = P(today is rainy) x P(tomorrow will be rainy|today is rainy)
 * P(today is rainy and tomorrow will be rainy) = P(tomorrow will be rainy) x P(today is rainy|tomorrow will be rainy)
 * so if we assume that the unconditional probability of rain (i.e. the probability that it will be rainy given no further information) is the same on all days, then P(today is rainy) = P(tomorrow will be rainy), and so P(today is rainy|tomorrow will be rainy) = P(tomorrow will be rainy|today is rainy) i.e. Markov process probabilities are the same in either direction. Gandalf61 (talk) 11:50, 30 April 2008 (UTC)
 * How can you assume unconditional probability?
 * P(today is rainy and tomorrow will be rainy) != P(today is rainy) x P(tomorrow will be rainy)
 * because P(tomorrow will be rainy) depends on P(today is rainy)
 * Am I wrong?
 * He's not assuming it's independent. He's assuming that, given no conditions, the probability of rain is constant on any day. Just because we know it's dependent doesn't mean we can't find the unconditional probability. --Tango (talk) 13:32, 30 April 2008 (UTC)
 * What is the result (in numbers)? —Preceding unsigned comment added by 89.0.54.162 (talk) 14:00, 30 April 2008 (UTC)
 * We don't generally give answers here. You don't learn anything from being told the answer. If you have an answer and want to know if it's correct, we can say "yes" or "no", but we won't just give you the numerical answer. Did you understand what Gandalf61 was saying about how to apply Bayes' theorem? --Tango (talk) 16:14, 30 April 2008 (UTC)
 * Note that--since we only care about two days ago--you have to consider both states for yesterday. --Prestidigitator (talk) 17:45, 30 April 2008 (UTC)
 * Never mind. No idea where my head was.  That was implicit in nature of the problem.  --Prestidigitator (talk) 22:39, 30 April 2008 (UTC)

P(today is not rainy|tomorrow will be rainy) = P(tomorrow will be rainy|today is not rainy) ? so the result is 0.11 ? —Preceding unsigned comment added by 89.0.11.77 (talk) 18:59, 30 April 2008 (UTC)
 * I don't think so, no. P(A|B)=P(B|A) is certainly not true in general (Bayes' theorem gives you the correct formula). It's possible that's it happens to be true in this case, I haven't checked. --Tango (talk) 23:27, 30 April 2008 (UTC)

P(today is not rainy|tomorrow will be rainy) = 1-P(today is rainy|tomorrow will be rainy) = 1-P(tomorrow will be rainy|today is rainy) = 1-0.12 = 0.88 ? 0.88 for a rainy day - doesn’t make sense. It should be 0.11 or 0.12. —Preceding unsigned comment added by 89.0.234.231 (talk) 08:50, 1 May 2008 (UTC)
 * That looks right - you were calculating the probability of today NOT being rainy, so 0.88 is reasonable. --Tango (talk) 09:31, 1 May 2008 (UTC)


 * Where does the value 0.12 come from? Isn't it given that P(tomorrow will be rainy|today is rainy) = 0.1? Also, the original question was for P(the day before yesterday was rainy|today is not rainy), which is not the same as P(today is not rainy|tomorrow will be rainy). --Lambiam 13:54, 2 May 2008 (UTC)
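For the record, Gandalf61's approach can be carried out exactly once one adopts the stated assumption that the chain is at its stationary distribution (the unconditional probability of rain is the same on all days). A sketch with exact rational arithmetic, using the two transition probabilities from the question:

```python
from fractions import Fraction

# transition probabilities from the question (R = rainy, N = not rainy)
p_RR = Fraction(1, 10)            # P(rain tomorrow | rain today)
p_NR = Fraction(2, 10)            # P(rain tomorrow | no rain today)
p_RN = 1 - p_RR                   # P(no rain tomorrow | rain today)
p_NN = 1 - p_NR                   # P(no rain tomorrow | no rain today)

# stationary distribution: pi_R = pi_R*p_RR + (1 - pi_R)*p_NR
pi_R = p_NR / (1 - p_RR + p_NR)   # = 2/11
pi_N = 1 - pi_R                   # = 9/11

# forward two-step probability P(not rainy in two days | rainy now)
p2_RN = p_RR * p_RN + p_RN * p_NN # = 81/100

# Bayes: P(rainy two days ago | not rainy today)
answer = pi_R * p2_RN / pi_N
print(answer)                     # 9/50, i.e. 0.18
```

Using Fraction avoids rounding and makes it easy to see that the answer here happens to differ from both 0.11 and 0.12 suggested above.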

Length of graph through function
Is there a way to solve for the length of a graph? Do we have articles related to studies of this? I've been going through a lot of books lately that I'd have thought would cover this subject, but none have as of yet. A linear graph from x=a to x=b, say (0,0) in a line to (5,5), can be solved easily enough using trigonometry, but these are hardly the saffron in the collection of spices. :) Thank you for your help! Scaller (talk) 16:42, 30 April 2008 (UTC)
 * PS! The kind we see in Function (mathematics), not Graph. Scaller (talk) 16:44, 30 April 2008 (UTC)
 * You may find arc length of use. Baccyak4H (Yak!) 16:55, 30 April 2008 (UTC)
 * Thank you, I do. It is also a rather saddening find; I was hoping the procedure called for only common algebraic methods (multiplication, addition, and so forth). :) Scaller (talk) 17:37, 30 April 2008 (UTC)
 * Well for a function with a given form you may be able to do that. For example, you can find a general solution for a quadratic polynomial ($$f(x) = a x^2 + b x + c$$) without a great amount of hassle, and then plug in concrete values for coefficients and endpoints for a specific problem.  Also, as for any such problem you can always approximate with a numerical solution by taking small steps and summing $$s = \sum_i \sqrt{{\Delta x}_i^2+{\Delta y}_i^2} = \sum_i \sqrt{(x_i-x_{i-1})^2+(f(x_i)-f(x_{i-1}))^2}$$.  --Prestidigitator (talk) 18:34, 30 April 2008 (UTC)
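Prestidigitator's chord-summing approximation is straightforward to put into code. This sketch (the function and step count are arbitrary choices) approximates the arc length of y = f(x) on [a, b]:

```python
import math

def arc_length(f, a, b, steps=10_000):
    """Approximate the length of the curve y = f(x) on [a, b] by
    summing the straight chords between consecutive sample points."""
    xs = [a + (b - a) * i / steps for i in range(steps + 1)]
    return sum(math.hypot(xs[i] - xs[i - 1], f(xs[i]) - f(xs[i - 1]))
               for i in range(1, steps + 1))

# the straight line from (0,0) to (5,5) recovers the Pythagorean answer
print(arc_length(lambda x: x, 0, 5))   # 7.0710678... = 5*sqrt(2)
```

For curves, increasing `steps` makes the chord sum converge to the arc-length integral; for a straight line the answer is exact regardless of the step count.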

Principal Ideal Domains
My notes say that a principal ideal domain is an integral domain, say R, in which every ideal is of the form $$x$$R, for some $$x$$ in R. Does this imply that all ideals are equal to R, since $$x$$R is just the set of $$xr$$, for all $$r$$ in R, and R is closed under multiplication? Our article on PIDs doesn't state the definition in quite the same way. I feel as though I'm not understanding something - surely if the definition were as simple as "the only non-trivial ideal is R", it wouldn't be stated in such a roundabout way? The course I'm doing mostly focuses on groups, with a bit of rings and fields tacked on the end, so rings are still a bit of a mystery to me. Thanks a lot, Icthyos (talk) 17:00, 30 April 2008 (UTC)
 * No, being closed under multiplication means xR is always a subset of R, but it might not be the whole thing. Consider the integers: 2Z is the set of even integers, which isn't all the integers. --Tango (talk) 17:25, 30 April 2008 (UTC)
 * Aside: an integral domain R in which the only non-zero ideal is R is called a field. Algebraist 17:28, 30 April 2008 (UTC)
 * Oh of course, I can't believe I didn't see that! Thanks a lot, Icthyos (talk) 17:29, 30 April 2008 (UTC)
 * (oh I see now, I was getting confused with the group idea of a coset - like for H a subgroup of G, $$g$$H = H if and only if $$g$$ is in H. Thanks again, Icthyos (talk) 17:58, 30 April 2008 (UTC))
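A quick computational illustration of Tango's point, using Z/12Z as a small stand-in for Z (it is not an integral domain, but the containment behaves the same way): xR is all of R exactly when x is a unit, and otherwise is a proper subset.

```python
def principal_ideal(x, n):
    """The principal ideal xR = {x*r : r in R} in the ring R = Z/nZ."""
    return {(x * r) % n for r in range(n)}

# 2R is a proper subset of R, just as 2Z is a proper subset of Z:
print(sorted(principal_ideal(2, 12)))   # [0, 2, 4, 6, 8, 10]

# 5 is a unit mod 12 (5*5 = 25 = 1 mod 12), so 5R is all of R:
print(len(principal_ideal(5, 12)))      # 12
```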