Wikipedia:Reference desk/Archives/Mathematics/2012 December 10

= December 10 =

Impossible math problems
Hi reference desk,

Recently, I came up with a few problems on my own that nobody else could solve. I came up with them partly to test my own progress in learning new concepts in calculus, only to find, much to my dismay, that I could not solve them myself. Thus, I'll be posting them here. No, these are not assigned homework problems. In fact, it may turn out that these problems are completely meaningless, have a ridiculously simple and elegant solution or associated theorem, are actually impossible to solve, or all of the above. If a question has no solution or is stupid, please clearly state why. Surely my conception of these problems is so simple that someone else must have thought of them already; if relevant articles exist, please point me to them. I have thought up similar but easier-to-solve problems in the past, and may post one or two of those in the following weeks, if time permits. For now, I will list only three.


 * 1) Evaluate $$\int_{1}^{e}\left(\ln x - 2x^2 + \cos(x^e)\right)\, dx$$ using the Riemann integral sum method.
 * 2) If $$x, y$$ are any real finite numbers, find the absolute probability that $$\frac{x}{y} \,>\, x^y$$.
 * 3) If $$g'(x) = f(x)^{-f(x)^{+f(x)^{-f(x)^{+f(x)^{\cdots}}}}}$$ ($$f(x)$$ times), compute all possible solutions of $$\frac{\int f(x)\, dx}{g(x)}$$.

Clearly, we may benefit from the following hints:

 * 1) For a Riemann sum, $$R(f,p) = \frac{b-a}{n} \sum_{i=1}^{n} f(s_i)$$. From my limited understanding, I turned this into $$R(f,p) = \frac{e-1}{n} \left[\sum_{i=1}^{n} \ln\frac{i}{n} \,-\, \sum_{i=1}^{n} 2\left(\frac{i}{n}\right)^2 \,+\, \sum_{i=1}^{n} \cos\left(\left(\frac{i}{n}\right)^e\right)\right]$$, which simplified to $$\frac{e - 1}{n} \sum_{i=1}^{n} [\ln(i) - \ln(n)] - \frac{2e - 2}{n^3} \sum_{i=1}^{n} i^2 + \frac{e - 1}{n} \sum_{i=1}^{n} \cos\left(\left(\frac{i}{n}\right)^e\right)$$. This is where I got stuck. Obviously, the Riemann sum method would be very cumbersome to carry out with an integral like this, and I've already made numerous mistakes. For one thing, we would need to know the general formula for the arithmetic sum of $$i^m$$, where m is any positive integer. Note that i is an index over subintervals, not the complex number $$\sqrt{-1}$$. For the second sum of our terms, $$\sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6}$$, so we can simplify to $$- \frac{2e - 2}{n^3} \sum_{i=1}^{n} i^2 = \frac{(1 - e)(n+1)(2n+1)}{3n^2}$$. However, the problem remains: what can we do with the arithmetic sum of logarithmic expressions like ln, or sinusoidal expressions like cos? The only thing that I know is that 0 < ln(x) < 1 when 1 < x < e. WolframAlpha tells me that the integral of this total function is negative over [1, e], so perhaps we should reverse the integral. In either case, we can use the Fundamental theorem of calculus, part II, to show that $$\int_{1}^{e}\ln x -2x^2 + \cos(x^e)\, dx$$ equals, erm, ignoring all constants, and after evaluating the third term not by substitution, but once again by Wolfram... $$x\ln(x) - x - \frac{2x^3}{3} - \frac{x(x^{2e})^{- \frac{1}{e}} \left((-ix^e)^\frac{1}{e} \Gamma(\frac{1}{e},ix^e)+(ix^e)^\frac{1}{e} \Gamma(\frac{1}{e},-ix^e)\right)}{2e}$$, where Γ(a,x) is the incomplete gamma function. Again, I have no idea what this means.
 * 2) We know that if we set closed intervals for x and y, we could find the probability that one term is greater than the other in terms of an absolute ratio. However, we soon see that this is no regular diophantine equation. Plotting on WolframAlpha, we get a three-dimensional quasi-supersymmetrical surface! We could similarly select two numbers x and y, for instance x = 1/2 and y = 3, and find that $$x/y > x^y$$. When we set $$x = \frac{1}{2}, y = \sqrt{2}$$, we find $$x/y < x^y$$. However, this tells us nothing about probability over a given double interval, nor what happens when the intervals overlap, and certainly not enough information about the real number line as a whole, unless we can apply the reversibility of 1 and the transcendence of prime-irrationals.
 * 3) This last one may well be a hoax, and if it is, you may disregard it, but if it is not, then continuity does not imply differentiability. In fact, I think it was a botched attempt at creating a partial differential equation.
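As a numerical sanity check on question 1, the Riemann sum can at least be evaluated by machine. Here is a minimal Python sketch (my own, not from the thread), using the right-endpoint sample points $$s_i = 1 + i\frac{e-1}{n}$$ appropriate to the interval [1, e]:

```python
import math

def f(x):
    """Integrand of question 1: ln(x) - 2x^2 + cos(x^e)."""
    return math.log(x) - 2 * x**2 + math.cos(x**math.e)

def riemann_right(a, b, n):
    """Right-endpoint Riemann sum of f over [a, b] with n equal subintervals."""
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(1, n + 1))

approx = riemann_right(1.0, math.e, 200_000)
print(approx)  # ≈ -11.8675, negative as WolframAlpha reports
```

This is consistent with the exact pieces $$\int_1^e \ln x\,dx = 1$$ and $$\int_1^e 2x^2\,dx = \frac{2(e^3-1)}{3} \approx 12.724$$, together with the ≈ −0.1438 value for the cosine term mentioned later in the thread.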

Any insight would be greatly appreciated here. Thanks. ~ AH1 (discuss!) 07:31, 10 December 2012 (UTC)


 * 2) I'd think the key would be to find the intersections of z = x/y and z = x^y. That should give you some insight.  I'd guess that one or the other of those equations is only lower over a finite range of x and y values, with the other being lower for the rest.  Thus, the probability will be either 0 or 1. StuRat (talk) 08:03, 10 December 2012 (UTC)
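One way to probe this numerically is a Monte Carlo sketch over a finite box. The bound of 10 and the uniform distribution are my assumptions, not from the thread, and x is kept positive so that x^y stays real:

```python
import random

random.seed(0)  # reproducible sketch

def fraction_holds(trials=100_000, bound=10.0):
    """Estimate the fraction of the box (0, bound) x (-bound, bound)
    on which x/y > x**y, sampling uniformly."""
    hits = 0
    for _ in range(trials):
        x = random.uniform(1e-9, bound)   # keep x > 0 so x**y is real
        y = random.uniform(-bound, bound)
        if y != 0 and x / y > x ** y:
            hits += 1
    return hits / trials

frac = fraction_holds()
print(frac)  # strictly between 0 and 1 on this box
```

On this particular box the empirical fraction is strictly between 0 and 1, so whether a limiting "probability" comes out 0, 1, or something else depends entirely on how the region is grown and how the points are distributed.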


 * For number 3, the "f(x) times" part doesn't make much sense for non-integer values of f(x). Your "continuity does not imply differentiability" makes me think you would be interested in reading Weierstrass function. 209.131.76.183 (talk) 19:26, 10 December 2012 (UTC)


 * Re #1, clearly the inclusion of 3 summands is a red herring, as each can be integrated separately. The polynomial part is easy. For the logarithmic part, you could use the fact that $$\sum_{i=1}^n\log i=\log n!$$ and Stirling's approximation $$n!=\sqrt{2\pi n}n^ne^{-n}(1+O(1/n))$$, but the standard way to derive that is by using the integral of log, so I'm not sure how to handle it when crippled and unable to use integrals. For the third part, the definite integral is -0.14383459316195129, but isc ("the new" Plouffe's inverter) doesn't find any matches for it. -- Meni Rosenfeld (talk) 10:33, 16 December 2012 (UTC)
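Meni's Stirling route is easy to check numerically. A small sketch, using the standard-library `math.lgamma` for the exact value of $$\log n!$$:

```python
import math

def log_factorial_stirling(n):
    """Stirling's approximation: log n! ≈ (1/2)log(2*pi*n) + n*log(n) - n."""
    return 0.5 * math.log(2 * math.pi * n) + n * math.log(n) - n

n = 1000
exact = math.lgamma(n + 1)                 # log(n!) via the log-gamma function
diff = exact - log_factorial_stirling(n)   # error of the approximation
print(diff)  # small and positive, on the order of 1/(12n)
```

The error term matches the $$1+O(1/n)$$ factor in the formula above: the leading correction is $$\log(1 + \tfrac{1}{12n}) \approx \tfrac{1}{12n}$$.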

Ratio of x / y > x^y for integers excluding 0
If I plot this ratio for increasing values of x and y, I get something approaching 0.372 as x and y approach infinity. Is there a closed form expression for the value of this ratio as x and y approach infinity? 149.169.218.138 (talk) 18:23, 10 December 2012 (UTC)
 * What do you mean by x and y approaching infinity? If x is fixed and y goes to infinity, then the lhs tends to 0, and the rhs to infinity. If y is fixed and x goes to infinity, then both lhs and rhs go to infinity, and the ratio lhs/rhs goes to 0 (as long as y > 1). If x and y approach infinity along the line x = ay, where a > 0, then the lhs is constant a, and the rhs goes to infinity, so the ratio again tends to 0.—Emil J. 18:42, 10 December 2012 (UTC) Never mind
 * Sorry, what I mean is, for all pairs {x, y} for x in range(-h, h) for y in range(-h, h) as h approaches infinity. 149.169.218.138 (talk) 18:43, 10 December 2012 (UTC)
 * I just had a look one question above, and it struck me that I probably completely misunderstood your question. Are you in fact not asking about the ratio of x/y to x^y, but a discrete version of AH1’s question 2? Is that a coincidence?—Emil J. 19:06, 10 December 2012 (UTC)
 * No, it's not a coincidence. Like StuRat said, the continuous version is probably just 1 or 0, so not terribly interesting. The discrete version however, appears to converge. 149.169.218.138 (talk) 19:17, 10 December 2012 (UTC)
 * If true, I would rather take that as an indication that StuRat may be wrong. However, note that even though it was not stated in the question, the continuous version only makes sense when x is restricted to be positive, and it is not clear from the wording whether y is intended to be positive as well or not. This may lead to a different discretization than you suggest.—Emil J. 20:00, 10 December 2012 (UTC)
 * Can you describe the process or calculation you used to arrive at this conclusion? If the number is 0.372 or thereabouts is that some special number? ~ AH1 (discuss!) 22:27, 13 December 2012 (UTC)
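The empirical figure can be reproduced with a short exhaustive count. This is a minimal sketch of my own (the function names and the bound are mine, not the poster's); exact rational arithmetic via `fractions` sidesteps floating-point trouble with negative bases:

```python
from fractions import Fraction

def holds(x, y):
    """True when x/y > x**y, computed exactly for nonzero integers x, y."""
    return Fraction(x, y) > Fraction(x) ** y

def ratio(h):
    """Fraction of nonzero-integer pairs (x, y) in [-h, h]^2 satisfying holds."""
    vals = [v for v in range(-h, h + 1) if v != 0]
    return sum(holds(x, y) for x in vals for y in vals) / len(vals) ** 2

print(ratio(40))  # ≈ 0.366; the value creeps upward as h grows
```

The slow upward creep is consistent with the reported 0.372 at moderate h. A quadrant-by-quadrant count (the inequality holds for almost all pairs with x, y both negative, and for x negative with y odd and positive) suggests the limit may be exactly 3/8 = 0.375, but that is my own back-of-envelope, not from the thread.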