Wikipedia:Reference desk/Archives/Mathematics/2015 June 30

= June 30 =

Usefulness vs. naturalness
There is one special mathematical equation that is defined as such because it is useful, rather than because it is natural. This is that:

$$ 0^0 = 1 $$

Are there any other equations of this kind?? Georgia guy (talk) 15:14, 30 June 2015 (UTC)


 * Sort of similar to 0!=1. See Factorial and empty product. I get what you mean, but I don't think the useful/natural distinction is very compatible with the axiomatic nature of math. Do you think the irrational numbers are "unnatural"? What about the axiom of choice? Perhaps a better distinction would be "definition by convention" as opposed to "properties that directly follow from axioms and inference". It is by convention that we define the empty product to be 1, but it is also useful, and in some sense natural (what other choice could be more natural?) SemanticMantis (talk) 16:03, 30 June 2015 (UTC)
 * 0!=1 is actually very natural. It is consistent with two facts: 1! is 1, and n! is related to (n+1)! simply by dividing by n+1; since 1 divided by 1 is 1, this gives 0! = 1, and it's not an indeterminate form. Georgia guy (talk) 16:09, 30 June 2015 (UTC)
 * They both come down to the empty product though... SemanticMantis (talk) 18:47, 30 June 2015 (UTC)
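The empty-product convention the thread keeps returning to can be checked directly; Python's standard library follows it (a quick illustration):

```python
import math

# math.prod of no factors returns the empty product, 1:
assert math.prod([]) == 1

# The same convention gives 0! = 1 ...
assert math.factorial(0) == 1

# ... and Python's integer power also takes 0**0 to be 1:
assert 0**0 == 1
```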


 * Oh boy here we go again. Anyway, as I see it:  whether $$ 0^0 = 1 $$ is natural depends on what sort of exponentiation you're thinking of.  If you're thinking of repeated multiplication, that is natural-number-to-natural-number or real-number-to-natural-number or complex-number-to-natural-number exponentiation, then $$ 0^0 = 1 $$ is quite natural.  That's because it's an empty product.
 * However, real-number-to-real-number exponentiation is a conceptually different operation. It cannot be viewed as repeated multiplication.  In that context, the argument for the naturalness of $$ 0^0 = 1 $$ loses its force.  (So, by the way, do most of the arguments for usefulness.) --Trovatore (talk) 16:27, 30 June 2015 (UTC)
 * You're implying that 0 is a natural number here, aren't you?? Georgia guy (talk) 16:42, 30 June 2015 (UTC)
 * Oh, there are levels to that question :-). But in the sense I think you mean it, yes, like most set theorists and C programmers, I start counting with 0.  It's the natural choice :-) --Trovatore (talk) 17:30, 30 June 2015 (UTC)
 * You mean, to you June is the fifth month of the year?? (More generally, January is the zeroth and February is the first.) Georgia guy (talk) 17:45, 30 June 2015 (UTC)
 * No no; I'm still speaking English. No one would understand me if I adopted that convention.  But my loop indices start with 0, and the base case of most inductions/recursions is naturally indexed by 0. --Trovatore (talk) 18:06, 30 June 2015 (UTC)


 * An exponentiation
 * $$V^A$$
 * for cardinal numbers A and V represents the cardinality of a set of functions from a domain of cardinality A ('A' for 'arguments') into a codomain of cardinality V ('V' for 'values'). For example there are $$10^2$$ functions from a two-element set into a ten-element set (those functions may be represented e.g. as 2-digit decimal strings '00', '01',... '99'). Then $$0^0$$ is the number of functions from an empty set (whose cardinality is zero, $$|\{\}|=0$$) into itself—and there is exactly one such function: the empty function. Hence naturally $$ 0^0 = 1 $$.--CiaPan (talk) 08:40, 1 July 2015 (UTC)
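The counting argument above can be verified by brute force for small sets; the sketch below (the function name is mine) enumerates every function from A into V as a tuple of chosen values:

```python
from itertools import product

def count_functions(domain, codomain):
    """Count the functions from `domain` into `codomain`: one choice of
    a value in the codomain for each element of the domain."""
    return sum(1 for _ in product(codomain, repeat=len(domain)))

# 10^2 functions from a two-element set into a ten-element set
# (the 2-digit decimal strings '00' ... '99' in the example above):
assert count_functions({'a', 'b'}, set(range(10))) == 100

# Exactly one function from the empty set into itself (the empty
# function), so 0^0 = 1:
assert count_functions(set(), set()) == 1
```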

It helps to distinguish between an ordinal number and an index: Ordinal numbers (first, second, third, . . . ) start with 1 while indexes may start with 0.

Consider the polynomial
 * $$f(x)=\sum_{k=0}^n a_k x^k$$

The first coefficient $$a_0$$ has index zero, and the first term $$a_0 x^0$$ has degree zero.

The recursive definition for real-or-complex-number-to-nonnegative-integer exponentiation
 * $$x^0 = 1$$ and $$x^{1+n} = x\cdot x^n$$

defines $$0^0 = 1$$ and $$0^1 = 0$$.

The formula for nonzero-number-to-negative-integer exponentiation
 * $$x^{-n} = 1/x^n$$

defines $$(-1)^{-1} = -1$$ but $$0^{-1}$$ remains undefined.

The formula for positive-real-number-to-real-or-complex-number exponentiation
 * $$a^x=e^{x\log a}$$

defines none of $$0^0$$, $$0^1$$ and $$(-1)^{-1}$$. Some people leave $$0^0$$ undefined. Strangely, they don't leave $$0^1$$ and $$(-1)^{-1}$$ undefined as well. Bo Jacoby (talk) 09:12, 3 July 2015 (UTC).
 * Of course $$0^1$$ and $$(-1)^{-1}$$ are defined, it's not strange at all. $$\lim_{x\to 0, y\to 1}x^y=0$$ so $$0^1=0$$ follows from continuity (you get $$0^1=\exp(1\cdot \log0) = \exp(-\infty)$$ which is 0 however you approach it). For $$(-1)^{-1}$$ you don't need continuity, the direct definition is single-valued: $$(-1)^{-1} = \exp(-\log (-1)) = \exp (-\pi i+2k\pi i)$$ which is -1 whichever branch you choose. Whereas $$\lim_{x\to 0, y\to 0}x^y$$ depends on how you approach and doesn't exist, so can't be used for a definition by continuity of $$0^0$$ - and the direct definition results in the indeterminate form $$0\cdot\infty$$. -- Meni Rosenfeld (talk) 10:00, 5 July 2015 (UTC)
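The continuity argument above can be illustrated numerically; in the snippet below the paths into (0, 0) are chosen purely for illustration:

```python
import math

# 0^1 = 0 by continuity: x^y -> 0 as (x, y) -> (0, 1).
for x in (1e-3, 1e-6, 1e-9):
    assert x**1.0 < 1e-2

# 0^0 has no limit. Along the path y = x the values of x^y approach 1 ...
assert abs(1e-9 ** 1e-9 - 1.0) < 1e-6

# ... but along y = 1/ln(x), where y also tends to 0, they stay at e:
for x in (1e-3, 1e-6, 1e-9):
    assert abs(x ** (1.0 / math.log(x)) - math.e) < 1e-9
```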

Very fast growing function
Define g(n) to be the factorial of n taken n times (again with the convention that g(0) = 1 due to the empty product). For example, g(3) = 3!!! = 6!! = 720! and g(5) = 5!!!!! = 120!!!! etc. (the factorial signs mean iterated factorials, not the double factorial or any related kind). Has anyone investigated this function before? In particular, is there an analytic continuation of it? Obviously it's an extremely fast-growing function; I might even conjecture that it's faster than tetration.

I came up with this when considering the generating function of the sequence of reciprocals, $$f(x) = \sum_{k = 0}^\infty \frac{x^k}{g(k)}$$. This surely has to be an entire function as I can't come up with any singularities for it and it converges absolutely and extremely quickly. My question for it is, is it elementary? More generally, how do we show whether a given function is elementary or not?--Jasper Deng (talk) 18:15, 30 June 2015 (UTC)
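A sketch of g and of the partial sums of f, using exact rational arithmetic since the k = 3 term, 1/720!, underflows any float (g(0) = 1 here follows the convention in the question):

```python
import math
from fractions import Fraction

def g(n):
    """Apply the factorial n times starting from n (g(0) = 1 by the
    empty-product convention used in the question)."""
    if n == 0:
        return 1
    result = n
    for _ in range(n):
        result = math.factorial(result)
    return result

def f_partial(x, kmax=3):
    """Partial sum of f(x) = sum_k x^k / g(k). Already the k = 3 term is
    x^3 / 720!, so convergence is extraordinarily fast."""
    return sum(Fraction(x) ** k / g(k) for k in range(kmax + 1))

assert g(1) == 1                      # 1! = 1
assert g(2) == 2                      # (2!)! = 2
assert g(3) == math.factorial(720)    # 3!!! = 6!! = 720!

# f(1) = 1 + 1 + 1/2 + 1/720! + ...: the tail beyond k = 2 is negligible.
assert f_partial(1) - Fraction(5, 2) == Fraction(1, g(3))
```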
 * I'm not sure about the "faster than tetration" thing (though it might depend on what you mean exactly). It's a well-known riddle to show that a tower of powers of 9's is bigger than applying ! to 9 the same number of times. Of course $$x!>9^x$$, but $$9^9>9!$$, and since the argument of the function is much more important than the choice of function, the gap is enough to make sure the factorials never catch up. This result should generalize easily. -- Meni Rosenfeld (talk) 20:26, 30 June 2015 (UTC)
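The mechanism of the riddle is easy to check through logarithms once the numbers outgrow machine arithmetic (lgamma gives log Γ, so log10 of a factorial is cheap to estimate):

```python
import math

def log10_factorial(x):
    """log10(x!) computed via the log-gamma function."""
    return math.lgamma(x + 1) / math.log(10)

# One step in, the tower is already ahead: 9^9 > 9!.
assert 9**9 > math.factorial(9)

# Two steps in, compared via log10 since the values no longer fit:
# log10(9^(9^9)) = 9^9 * log10(9), versus log10((9!)!).
log10_tower = 9**9 * math.log10(9)
log10_facts = log10_factorial(math.factorial(9))
assert log10_tower > log10_facts

# The gap is enormous: the tower value has roughly 3.7e8 digits, the
# twice-iterated factorial only about 2e6.
assert log10_tower > 100 * log10_facts
```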
 * By "faster than tetration" I'm considering the question of $$\lim_{n\to\infty} \frac{g(n)}{^na}$$ for any fixed value of a (not $$^nn$$ which I think grows faster than g). If g grows faster, then this limit increases without bound. For example, I am pretty sure that g(15) is greater than $${}^{15}9$$.--Jasper Deng (talk) 21:01, 30 June 2015 (UTC)
 * Discussion with others suggests that this is incorrect. Based on this calculator it would seem that g(9) is on the "order" of $${}^{8}10$$ and $${}^{9}9$$ is on the order of $${}^{7}10$$.--Jasper Deng (talk) 08:13, 4 July 2015 (UTC)
 * Wolfram Alpha also gives estimates, e.g., with g(9) > $${}^99$$. Dragons flight (talk) 17:05, 4 July 2015 (UTC)
 * It's not $${}^99$$, it's $${}^{10}9$$. I didn't elaborate because I thought it would be clear from context, but the idea is that in both cases you start with 9 and then apply a function 9 times, the function is either $$x!$$ or $$9^x$$. The latter results in $${}^{10}9$$, which is bigger than the former, which is $$g(9)$$. -- Meni Rosenfeld (talk) 17:57, 4 July 2015 (UTC)
 * But is this really $${}^nn$$ then?--Jasper Deng (talk) 19:46, 4 July 2015 (UTC)
 * It's not, but in $${}^nn$$ you are starting with n and doing $$n-1$$ operations, so it's unfair to compare it with $$g(n)$$ in which you are starting with n and doing n operations. It appears that $${}^nn$$ grows faster than $$g(n)$$ anyway. -- Meni Rosenfeld (talk)
 * To extend g, define h(x, 0) = x and h(x, n) = h(x, n − 1)! for n > 0. Then g(n) = h(n, n).--Jasper Deng (talk) 21:48, 30 June 2015 (UTC)
 * Here's more information about extending g(n) to g(x). Since 1 and 2 are fixed points of the factorial function, I believe that $$g'(1) = 1 - \gamma \approx 0.422784$$ which is the derivative of $$\Gamma(x+1)$$ at 1, and $$g'(2) = (3 - 2\gamma)^2 \approx 3.406124$$ which is the square of the derivative of $$\Gamma(x+1)$$ at 2. It should be possible to calculate g(x) for half-integers by using the functional square root of $$\Gamma(x+1)$$. Using the formula on this page I get g(1.5) ≈ 1.253. Getting more digits is hard because of huge cancellations in the formula, but it might be possible using multiprecision. Egnau (talk) 14:23, 2 July 2015 (UTC)
 * I guess, but I have my doubts that the functional roots can really provide a complete analytic continuation of g (what about g(π)?). It certainly would help to have a better closed form than just that. Also, I'm unsure of the derivatives you propose because for one, as the number of factorials is not constant one must also consider values in the neighborhood of 1 and 2, where the factorial isn't fixed; I also would probably not be satisfied with that interpolation because $$1.5! = \Gamma(1.5 + 1) = \frac{3\sqrt{\pi}}{4} > 1.253$$, and the functional square root of the factorial should be an increasing function. --Jasper Deng (talk) 05:22, 3 July 2015 (UTC)
 * I think that studying functional roots is your best hope of getting an analytic continuation. g(π) isn't a problem, Schröder's equation works for any real iterate.
 * I don't understand your issue with my derivatives. The derivation is elementary: let $$F(x) = \Gamma(x+1)$$ and ignore terms in $$\epsilon^2$$ or higher. The effect of iterating F in the neighborhood of 2 is as follows:
 * $$F(2+\epsilon) = 2+c\epsilon$$ where c = F'(2)
 * $$F^{[2]}(2+\epsilon) = 2+c^2\epsilon$$
 * $$F^{[3]}(2+\epsilon) = 2+c^3\epsilon$$.
 * We're interested in $$g(2+\epsilon) = F^{[2+\epsilon]}(2+\epsilon) = 2+c^{2+\epsilon}\epsilon = 2+c^2\epsilon$$
 * so $$g'(2) = c^2$$.
 * Also I don't understand your issue with my interpolation. Do you agree that
 * h(1.5, 0) = 1.5
 * h(1.5, 1) ≈ 1.329340
 * h(1.5, 2) ≈ 1.187710 ?
 * Why is my estimate h(1.5, 1.5) ≈ 1.253 out of place? Egnau (talk) 08:50, 3 July 2015 (UTC)
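The first-order computation above can be checked numerically with finite differences (the step size and tolerances here are chosen purely for illustration): F(2) = 2 is a fixed point, c = F'(2) = 3 − 2γ, and iterating F doubles the exponent on c:

```python
import math

def F(x):
    return math.gamma(x + 1)   # F(x) = x!

EULER_GAMMA = 0.5772156649015329
h = 1e-6

# 2 is a fixed point of F:
assert abs(F(2.0) - 2.0) < 1e-12

# Central-difference estimate of c = F'(2); analytically c = 3 - 2*gamma.
c = (F(2 + h) - F(2 - h)) / (2 * h)
assert abs(c - (3 - 2 * EULER_GAMMA)) < 1e-6

# One application of F near the fixed point multiplies the offset by c,
# two applications by c^2, matching F^[2](2 + eps) = 2 + c^2 eps:
assert abs(F(F(2 + h)) - (2 + c * c * h)) < 1e-9

# Hence the claimed g'(2) = c^2:
assert abs(c * c - 3.406124) < 1e-4
```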
 * On further examination your interpolation seems to be consistent with my definition, though I still find it very hard to believe that g is not monotonically increasing for all n > 1... you would seem to imply that it is meaningful to consider $$\lim_{n\to\infty} h(1.5, n)$$. Then there has got to be a turning point. Where is that? I'm still unconvinced by the derivatives (why is $$\lim_{\epsilon\to0}\frac{F^{[2+\epsilon]}(2+\epsilon) - F^{[2]}(2)}{\epsilon} = \lim_{\epsilon\to0}\frac{F^{[2]}(2+\epsilon) - F^{[2]}(2)}{\epsilon} = (F^{[2]})'(2)$$?) on the grounds that I'm 100% sure g(x) > x! for all x > 2, so it should be greater than that.
 * Note that I also realize that there are infinitely many possible ways to interpolate this analytically, although not necessarily with all the properties I desire.--Jasper Deng (talk) 17:07, 3 July 2015 (UTC)
 * My goal was only to compute enough values of h(1.5, n) to surround h(1.5, 1.5). I didn't consider taking the limit, but feel free to investigate.
 * I think that g(x) is increasing for x > 1. My g'(1) > 0 and g'(2) > 0 strongly support that. What did I write that contradicts that it's increasing?
 * I also believe that g(x) > x! for x > 2. I reported g'(2) > F'(2) which strongly supports that. What did I write that contradicts the inequality?
 * I agree that the way I get the derivatives is not completely rigorous, and I discovered that it even gives the wrong result in some cases. For example, $$4x(1-x)$$ has a fixed point at 3/4 with derivative -2. The continuous iterates for it are known and given in Schröder's equation. Computing the equivalent of g'(3/4) for it doesn't agree with my formula which gives a complex solution, probably because of the negative derivative. On the other hand, $$x/(2-x)$$ has a fixed point at 1 with derivative 2, and $$\sqrt{\frac{1}{2} x^2 + 2}$$ has a fixed point at 2 with derivative 1/2. I believe that these functions are complicated enough to test my formulas for bugs. Continuous iterates for these are given in Schröder's equation and Iterated function respectively, and the equivalent of g'(1) and g'(2) for them agree with my formulas: $$g'(1) = F'(1)$$, and $$g'(2) = (F'(2))^2$$
 * I couldn't help it and I pushed my method to get formulas for the second derivatives at fixed points 1 and 2: $$g''(1) = 2 a_1 \ln a_1 + b_1$$ and $$g''(2) = 2 a_2^2 \ln a_2 + a_2 b_2 + a_2^2 b_2$$, where $$a_i = F'(i)$$ and $$b_i = F''(i)$$. I tested the formulas with the same two functions above with known continuous iterates, and they gave correct results. Applying the formulas to your problem gives $$g''(1) = 2 (1 - \gamma) \ln (1 - \gamma) -2\gamma + \gamma^2 + \pi^2/6 \approx 0.09573644$$ and $$g''(2) \approx 17.266561$$ (with a more complicated closed form). Assuming that my formulas work for your problem, it's pretty exciting to get closed forms for some derivatives. Egnau (talk) 21:52, 3 July 2015 (UTC)
 * It would seem that I forgot that for 1 < x < 2, x! < x. To me (not to nitpick), it would still suggest that g'(1) != F'(1), because it seems that g < F for 1 < x < 2. It remains that I'm unconvinced about the way said derivatives were derived.--Jasper Deng (talk) 08:16, 4 July 2015 (UTC)
 * I agree that g < F for 1 < x < 2. It's fine to have g'(1) = F'(1) if g"(1) < F"(1) and according to my numbers the inequality holds for the second derivative: 0.0957364 < 0.823680.
 * I'm sorry but you've been extremely negative so far. It's as if your first reaction to a post is "ok, why is this wrong?", and you keep searching for something wrong until you make a mistake, and then you post your mistaken reason without double-checking it, ad nauseam. How about you change your approach and ask "ok, why is this right?" Sometimes it's better to replicate a result than to search for holes in someone else's.
 * Here's homework for you: start from formula (6) from Iterated function. Set n = x because we're interested in $$f^x(x)$$, and expand as a polynomial in (x - a). Inside of the expression you'll need to expand $$f'(a)^x$$ as a Taylor series at x=a, but it shouldn't be very hard. Egnau (talk) 19:28, 4 July 2015 (UTC)
 * I'm sorry if I'm negative but I'm skeptical of a result that's not entirely satisfying for my original intents and purposes. Nothing more, nothing less. I'm not satisfied with a result that, at least to me, is counter to my intuition. Before saying anything else I will say that I am well aware that intuition doesn't fly in a lot of mathematics, unlike physics. But I want a "natural" result.
 * And with that I'm still going to contend that g'(1) != F'(1). By the way it's good that you admitted that the way you got the derivatives isn't completely rigorous. I'm not ready to consider an arbitrarily small application of F so I can't really say much in terms of taking the limit myself, but my intuition suggests that since g < F for x > 1 and g > F for 0 < x < 1, F's derivative should be greater there.
 * As for a Taylor series I think it would make sense if we expand around x = 1 or x = 2 as you seem to be doing. I think by now I need to throw away my result that g(0) = 1 because it would create a discontinuity.--Jasper Deng (talk) 19:46, 4 July 2015 (UTC)
 * I think that "obviously" $$g(0)=0$$, since you start with 0 and apply factorial 0 times, i.e. you apply nothing at all, so you still have 0. I'm not sure why you thought it should be 1. -- Meni Rosenfeld (talk) 11:16, 5 July 2015 (UTC)

Distance matrix
I have a set of strings and I want to compute the edit distance between all pairs in the set. Is there a more efficient way to create the matrix of distances than computing the Levenshtein distance for each pair individually? (besides the obvious distance(i,j) = distance(j,i)). 70.190.182.236 (talk) 23:08, 30 June 2015 (UTC)


 * Assuming that by "edit distance" you are referring to "Levenshtein edit distance", it is important to note that the order of the strings in comparison does not matter. Levenshtein(A,B) = Levenshtein(B,A). So, for strings A, B, C, D..., the worst you could do is compare A to everything, then compare B to everything but A, then compare C to everything but A and B... The issue is trying to compare one string to more than one string. Doing so would not be faster using the standard Wagner-Fischer dynamic matrix solution - which is what most people use. You *could* compare one string to N strings, but you would still make 3N comparisons per cell of the matrix. There is no benefit. To get an improvement, you need to sort the strings. Then, you have to get the longest common initial substring. If you have N strings that begin with "AHEKKDEG", you can compare a string to "AHEKKDEG" first. From there, you compare to the rest of the strings for each one. Then, you have a mostly complete matrix to work with from that point on. I personally wouldn't do it that way - too much work. What I would do...
 * Sort the strings
 * Compare string A to B (assuming that you call the first one A and the second one B after sorting).
 * Get distance A-B and B-A from that.
 * Replace the top of the matrix, currently holding B, with C - but note how many columns are the same as B (a lot since I sorted the strings first).
 * Start my calculation of A-C and C-A from the first column of difference between C and B.
 * Repeat for every string left - reusing as many precalculated columns as possible for each new string.
 * There is a problem here... Levenshtein functions are usually as optimized as they can possibly be. If you are writing your own, it will likely NOT be optimized. You need to look at the header file that includes your Levenshtein function and really understand the code so you can write nearly the same code. If you are using a scripting language, such as PHP, you simply cannot write code as efficient because the built-in function will be compiled, not scripted. Hopefully that is helpful. 199.15.144.250 (talk) 17:44, 2 July 2015 (UTC)
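A runnable sketch of the sort-and-reuse idea described above (the function name and test strings are mine): for a fixed string a, each Wagner-Fischer DP row depends only on the prefix of b processed so far, so after sorting, each new comparison string can keep the rows belonging to the prefix it shares with the previous one.

```python
def distance_matrix(strings):
    """All-pairs Levenshtein distances. For each fixed string a, the
    other strings are visited in sorted order and the DP rows computed
    for a shared prefix are reused instead of recomputed."""
    n = len(strings)
    order = sorted(range(n), key=lambda i: strings[i])
    dist = [[0] * n for _ in range(n)]
    for ii in range(n):
        i = order[ii]
        a = strings[i]
        rows = [list(range(len(a) + 1))]   # row for the empty prefix of b
        prev_b = ""
        for jj in range(ii + 1, n):
            j = order[jj]
            b = strings[j]
            # Length of the prefix b shares with the previous b; rows
            # 0..p stay valid because they depend only on b[:p] and a.
            p = 0
            while p < min(len(b), len(prev_b)) and b[p] == prev_b[p]:
                p += 1
            del rows[p + 1:]
            for r in range(p, len(b)):     # extend the table row by row
                prev = rows[-1]
                cur = [prev[0] + 1]
                for c in range(1, len(a) + 1):
                    cost = 0 if a[c - 1] == b[r] else 1
                    cur.append(min(prev[c] + 1,          # delete b[r]
                                   cur[-1] + 1,          # insert a[c-1]
                                   prev[c - 1] + cost))  # substitute
                rows.append(cur)
            dist[i][j] = dist[j][i] = rows[-1][-1]
            prev_b = b
    return dist                            # diagonal stays 0

m = distance_matrix(["kitten", "sitting", "kitchen", "kittens"])
assert m[0][1] == m[1][0] == 3   # kitten -> sitting: the classic example
assert m[0][3] == 1              # kitten -> kittens: one insertion
assert m[0][2] == 2              # kitten -> kitchen
```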

Least Symmetric Triangle?
Let T be the set of all triangles in the x-y plane with one vertex at (0,0), one vertex at (1,0) and one at (x,y) where y is positive and x >= .5. Let L be the set of lines in the x-y plane. Let t in T and l in L be chosen. S(t,l) is the percentage symmetry that t has in l, defined as the percentage of t whose mirror image across l is also in t. (So if you chose an isosceles triangle ABC where AB = AC and l is the line through A and the midpoint of BC, the value of S(ABC, line(A->MidpointBC)) would be 1. Any other line would of course be inferior unless the triangle was equilateral.)

Let MS(t) = maximum over all l in L of S(t,l) (so MS(t) is 1 for any isosceles triangle t). What triangle t in T has the smallest MS(t) and what is that MS(t)?Naraht (talk) 23:21, 30 June 2015 (UTC)
 * I have a hard time following your question, since "percentage symmetry" and "mirror across l" are both ill-defined to me.--Jasper Deng (talk) 23:47, 30 June 2015 (UTC)
 * The percentage symmetry for a triangle for a given l is the percentage of the triangle's mirror image in l that overlaps the original triangle. If l doesn't intersect the triangle then the value will be zero.Naraht (talk) 00:03, 1 July 2015 (UTC)
 * That's still a bit ill-defined (it would be helpful if you could link an article on defining those terms). What does it mean for a triangle to overlap l? Are you talking about the ratio of area in t on one side of l to the area on the other side of l? In that case, the answer to your original question can be found by computing S as a function of the parameters that define l (namely its slope and y-intercept, or equivalently, two points along it) and the parameters that define t (the "free" point) and solving for $$\nabla S = \mathbf{0}$$ for l that crosses t; any l that does not intersect t need not be considered. On obtaining the maximum of S for a given t, it then provides an expression for MS(t), which can be similarly solved as a function of the triangle's parameters.--Jasper Deng (talk) 00:25, 1 July 2015 (UTC)
 * I think it's fairly clear what the OP is asking for. If T is a triangle and l is a line, let Tl be the reflection of T through l and define S(T, l) as Area(T∩Tl)/Area(T). Now define S(T) as the maximum over all lines l of S(T, l). The value of S(T) for a given triangle is well defined and there is some l for which S(T, l)=S(T). To see this, it's clear that you only need to consider lines which intersect T, so if l is parametrized as l(t, r) = {(x,y): x cos t + y sin t = r} the set of pairs (t, r) with 0≤t≤π and l(t, r) ∩ T ≠ ∅ is bounded and therefore compact. Then the question is, what is the minimum possible value of S(T) over all triangles T. We know 0 is a lower bound for S(T) so there is a greatest lower bound, S say. But it's not clear that there is a triangle T that achieves it. It's conceivable that there is a sequence of triangles that get longer and thinner where S(T) approaches S as a limit. In practical terms, finding the area of intersection of two triangles is a bit tricky, so S(T, l) would probably have a complicated formula and not be differentiable. So unless there is some simple way of finding l for a given T, finding S(T) for a given T would involve some kind of optimization algorithm. On top of that, you want to minimize over T, so it sounds like a difficult problem. Not that it couldn't be done with some effort and a computer. Maybe a good first step would be to find a better lower bound for S than 0.--RDBury (talk) 14:36, 1 July 2015 (UTC)
 * OP here, thank you. That's exactly what I meant: Area(T∩Tl)/Area(T). The restriction on which triangles can be chosen is there because similar triangles have the same S(T); it is unchanged by translation and expansion/contraction. For starters (to see if this can be attacked reasonably), how would S(T) be calculated for a 3,4,5 right triangle?Naraht (talk) 18:08, 1 July 2015 (UTC)
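One way to attack the 3-4-5 question numerically (all names below are mine, and this is a sketch rather than a full optimizer): reflect T across a candidate line, intersect the two convex polygons with Sutherland-Hodgman clipping, and take the ratio of areas. For the 3-4-5 triangle with legs along the axes, reflecting across the bisector of the right angle (the line y = x) gives an overlap fraction of exactly 6/7.

```python
def area(poly):
    """Signed shoelace area; positive for counter-clockwise order."""
    s = 0.0
    for k in range(len(poly)):
        x1, y1 = poly[k]
        x2, y2 = poly[(k + 1) % len(poly)]
        s += x1 * y2 - x2 * y1
    return 0.5 * s

def ccw(poly):
    return poly if area(poly) > 0 else poly[::-1]

def reflect(pt, a, b):
    """Reflect point pt across the line through a and b."""
    ax, ay = a
    dx, dy = b[0] - ax, b[1] - ay
    px, py = pt
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    fx, fy = ax + t * dx, ay + t * dy      # foot of the perpendicular
    return (2 * fx - px, 2 * fy - py)

def line_intersect(p, q, a, b):
    """Intersection of line p-q with line a-b (assumed non-parallel)."""
    x1, y1 = p; x2, y2 = q; x3, y3 = a; x4, y4 = b
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def clip(subject, clipper):
    """Sutherland-Hodgman: clip `subject` against a convex CCW `clipper`."""
    out = list(subject)
    n = len(clipper)
    for k in range(n):
        if not out:
            break
        a, b = clipper[k], clipper[(k + 1) % n]
        def inside(p):
            return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0
        inp, out = out, []
        for j in range(len(inp)):
            prev, cur = inp[j - 1], inp[j]
            if inside(cur):
                if not inside(prev):
                    out.append(line_intersect(prev, cur, a, b))
                out.append(cur)
            elif inside(prev):
                out.append(line_intersect(prev, cur, a, b))
    return out

def S(tri, a, b):
    """Overlap fraction Area(T ∩ Tl)/Area(T) for the line through a, b."""
    t = ccw(tri)
    refl = ccw([reflect(p, a, b) for p in tri])
    inter = clip(refl, t)
    if len(inter) < 3:
        return 0.0
    return abs(area(inter)) / abs(area(t))

# Reflecting an isosceles triangle across its axis of symmetry: ratio 1.
iso = [(0.0, 0.0), (1.0, 0.0), (0.5, 2.0)]
assert abs(S(iso, (0.5, 0.0), (0.5, 1.0)) - 1.0) < 1e-9

# 3-4-5 right triangle, reflected across the bisector of the right angle:
t345 = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
assert abs(S(t345, (0.0, 0.0), (1.0, 1.0)) - 6.0 / 7.0) < 1e-9
```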


 * We can define a unique placement for each triangle; OP presented one attempt, here is another one: let a ≥ b ≥ c denote the triangle's side lengths; place vertex C at (0,0) and B on the positive X axis; then A is in a blue area bounded by the X axis, the line x = a/2 and the circle arc.
[Image: Seeking asymmetric triangle 1.png]
 * If we want the reflected triangle to overlap the original one, the reflection axis must intersect the triangle. A triangle is a convex figure, so the reflection axis has to meet the triangle's edge at two different points. We can define a uniform coordinate along the triangle circumference, so that each axis is described with two real numbers – coordinates of intersection points. Then the common area is a $${\mathbb R}^2\to \mathbb R$$ function, continuous and piece-wise differentiable (the domain pieces will depend on vertices of Tl passing the T edges as l changes). However the number of pieces and their boundaries may depend on a:b:c ratios and it may be difficult to give a general algebraic description. --CiaPan (talk) 19:22, 1 July 2015 (UTC)


 * My understanding of the problem:
 * for any triangle $$t$$ with fixed area and arbitrarily chosen reflection axis $$L$$, let $$t(L)$$ be an image of $$t$$ in the reflection;
 * let $$A(t,L)$$ be an area of the intersection $$t\cap t(L)$$, and $$A(t)$$ be a maximum possible area: $$A(t)=\max_L\{ A(t,L)\}$$; then:
 * what triangle $$t$$ minimizes $$A(t)$$?
 * Am I right? Is that what you mean? --CiaPan (talk) 18:05, 1 July 2015 (UTC)
 * More or less, I was using two fixed points to try to cut down on the fact that similar triangles would have the same answer, but using fixed area works as well. (Rotations also give the same answer).Naraht (talk) 18:13, 1 July 2015 (UTC)
 * Here is a partial result: For any triangle T there is a line l so that if Tl is the reflection of T through l then Area(T∩Tl) > Area(T)/φ, where φ is the golden ratio. Proof: Let the triangle have sides a, b, c with a ≤ b ≤ c. Then a+b > c. Suppose b/c ≥ a/b. Let A be the vertex opposite a and let l be the bisector of A. Then both T and Tl contain the isosceles triangle with vertex A and equal sides b. The area of this triangle is (b²/2)sin A, so Area(T∩Tl)/Area(T) ≥ ((b²/2)sin A)/((bc/2)sin A) = b/c. Now c/b − 1 = (c−b)/b < a/b ≤ b/c. Putting x = c/b we get x > 0 and x − 1 < 1/x, so x² − x − 1 < 0 and x < φ. ∴ b/c > 1/φ. Now suppose b/c ≤ a/b. Let C be the vertex opposite c and let l be the bisector of C. As before, both T and Tl contain the isosceles triangle with vertex C and equal sides a. Computing areas as before, Area(T∩Tl)/Area(T) ≥ a/b. Now b/(a+b) < b/c ≤ a/b, so a/b + 1 = (a+b)/b > b/a. Putting y = b/a we get y > 0 and 1/y + 1 > y, so y² − y − 1 < 0 and y < φ. ∴ a/b > 1/φ.
 * Judging from the constructions here it seems to me that attention should be focused on long thin triangles where the ratio of the shorter sides is around φ.--RDBury (talk) 08:06, 2 July 2015 (UTC)
 * Belated reply to interesting question. I speculate that, for a given triangle, the largest intersection is achieved by reflecting about a line that either bisects one of the triangle's angles or is perpendicular to one of the triangle's sides. I also speculate that the OP's "least symmetric" triangle is a vanishingly slender one having the ratio of the sides tending to 1 : 1/√2 : 1 − 1/√2. This has largest possible intersection proportion of 2/(√2 + 1) ≈ 0.8284, which is achieved equally by reflecting about a line bisecting the smallest angle and reflecting about a suitable perpendicular line. The vanishingly slender triangle with the ratio of the shorter sides having the value φ suggested by RDBury has a larger maximum possible intersection proportion of 2/(2φ − 1) ≈ 0.8944. 109.153.232.17 (talk) 01:38, 12 July 2015 (UTC)