Wikipedia:Reference desk/Archives/Mathematics/2015 April 7

= April 7 =

Generalization of a function
Let f : R+ → R be a monotonically increasing function on the positive real line, and require that f be greater than 1. Define g : N → R by $$g(n) = \prod_{k = 1}^n f(k) $$. Finally, let h : N → R be defined by $$h(n) = \sum_{k=1}^n g(k) j(k) $$ where j is a positive continuous function defined on the positive real line. Under suitable conditions, how do I generalize g and h to the real numbers rather than the natural numbers N? Let f and j be themselves real analytic if that helps.--Jasper Deng (talk) 05:05, 7 April 2015 (UTC)
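For concreteness, the discrete g and h defined above can be computed directly. A minimal Python sketch; the choices of f and j here are hypothetical examples satisfying the stated hypotheses (f increasing and greater than 1, j positive):

```python
def g(n, f):
    """Discrete g(n) = product of f(k) for k = 1..n, as defined above."""
    prod = 1.0
    for k in range(1, n + 1):
        prod *= f(k)
    return prod

def h(n, f, j):
    """Discrete h(n) = sum of g(k) * j(k) for k = 1..n."""
    return sum(g(k, f) * j(k) for k in range(1, n + 1))

# Illustrative choices only: f(k) = k + 1 gives g(n) = 2 * 3 * ... * (n + 1) = (n + 1)!
f = lambda x: x + 1.0
j = lambda x: 1.0 / x   # positive and continuous on the positive reals

print(g(3, f))      # → 24.0
print(h(3, f, j))   # → 13.0  (2 + 6/2 + 24/3)
```

The question is then how to interpolate these step-like functions to real arguments.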
 * $$g(n+x) = \prod_{k = 1}^n f(k+x) $$ for 0≤x<1. Bo Jacoby (talk) 07:22, 7 April 2015 (UTC).
 * That's not continuous. -- Meni Rosenfeld (talk) 11:26, 7 April 2015 (UTC)
 * The first question reduces to $$g(n)=\sum_{k=1}^nf(k)$$ by way of logarithms (take logs to turn the product into a sum, then exponentiate at the end), and the second reduces to it by taking the summand $$g(k)j(k)$$ as the new f (you may give up some of the assumptions in the transformation, but I'm not sure it matters). So I'll focus on that form.
 * You'd be in a better position if you had $$g(n)=\sum_{k=-\infty}^{n}f(k)$$. Then you could use Bo's construction $$g(n+x)=\sum_{k=-\infty}^nf(k+x)$$. If $$S=\sum_{k=-\infty}^0f(k)$$ converges, you can still make that work: $$g(n)=\sum_{k=1}^nf(k)=\sum_{k=-\infty}^{n}f(k)-S$$. Then $$g(n+x)=\sum_{k=-\infty}^nf(k+x)-S$$.
 * If instead you have f decreasing, there is a possibility that g has a power series around ∞ which converges for every n. Then it converges for every sufficiently large x and is unique, and can be used to define g for every x. -- Meni Rosenfeld (talk) 11:26, 7 April 2015 (UTC)
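The shifted-sum construction in the preceding post can be tested numerically. In this sketch the choice f(k) = e^k is my own illustration: it is increasing and greater than 1 on the positive reals, and its tail sum S is a convergent geometric series, so the construction applies:

```python
import math

def tail_sum(f, upper, x, terms=400):
    """Truncated approximation of sum_{k=-inf}^{upper} f(k + x)."""
    return sum(f(upper - m + x) for m in range(terms))

def g_interp(t, f, terms=400):
    """Shifted-sum construction (a sketch, valid when S converges):
    g(n + x) = sum_{k=-inf}^{n} f(k + x) - S,  S = sum_{k=-inf}^{0} f(k)."""
    n = math.floor(t)
    x = t - n
    S = tail_sum(f, 0, 0.0, terms)
    return tail_sum(f, n, x, terms) - S

f = math.exp  # illustrative f(k) = e^k

# Agrees with the discrete g(3) = f(1) + f(2) + f(3) at an integer point:
print(g_interp(3, f) - (math.e + math.e**2 + math.e**3))  # ≈ 0

# Unlike the naive shift g(n+x) = sum_{k=1}^{n} f(k+x), it is continuous
# across the integer t = 3:
print(g_interp(3 - 1e-9, f) - g_interp(3 + 1e-9, f))      # ≈ 0
```

For this f the interpolant works out to $$g(t)=(e^{t+1}-e)/(e-1)$$, which is real analytic and matches the partial sums at every integer.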
 * What if I extended the domain of f such that it's 1 on the negative real line? Then its logarithm there is always 0 and S = 0. Is my original question then not a special case of what you describe?--Jasper Deng (talk) 00:55, 9 April 2015 (UTC)
 * Yes. But you're looking for a "natural-seeming" generalization - you might not get one if you extend f in this arbitrary, non-smooth way. E.g., $$f(k)=k$$ wouldn't give you the gamma function. -- Meni Rosenfeld (talk) 09:03, 9 April 2015 (UTC)
 * I'm not exactly sure what counts as a generalization, but one approach would be to use product integrals. Let $$g(x) = \exp \left(\int_0^x \log f(t)\,dt\right)$$ and $$h(x)=\int_0^x g(t)j(t)\,dt$$.  This does not interpolate the g and h from the discrete case, unless one assumes that j and f are step functions.   Sławomir Biały  (talk) 12:04, 7 April 2015 (UTC)
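The product-integral construction above is easy to check numerically. A small sketch; the trapezoidal rule and the choice f(t) = e^t are purely illustrative:

```python
import math

def g_pi(x, f, steps=10000):
    """Product-integral generalization g(x) = exp(integral_0^x log f(t) dt),
    approximated with the trapezoidal rule."""
    dt = x / steps
    area = sum(0.5 * (math.log(f(i * dt)) + math.log(f((i + 1) * dt))) * dt
               for i in range(steps))
    return math.exp(area)

# With the illustrative choice f(t) = e^t, log f(t) = t, so the integral
# is x^2 / 2 and g(x) = e^{x^2 / 2} in closed form:
print(g_pi(2.0, math.exp))   # ≈ e^2 ≈ 7.389
```

Note that at integer arguments this need not equal the discrete product of f(1), …, f(n), which is the sense in which it fails to interpolate unless f is a step function.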
 * What happens when the function $$f(x) = x + 1$$ is plugged into your respective suggestions? Anything for $$g$$ that reduces to the gamma function ($$\Gamma(x + 2)$$, to be precise) in this case will pass a sanity test in my eyes. YohanN7 (talk) 14:06, 7 April 2015 (UTC)
 * Well, I'm not sure about the j business or that the goal should be to reproduce the gamma function. But my construction does not reduce to the gamma function.  The most natural generalization of the gamma function is probably something like a Mellin transform, which also is not very helpful here.  I think something along the lines of fractional calculus could work instead.  For instance, if one can construct an auxiliary function k on [0,1] such that $$J^n k(1) = g(n)$$ (here $$J^n$$ is the n-th iterated integral), then we could interpolate using the fractional integrals.  I'm not sure how the details would work, but it might be worth thinking about it this way.   Sławomir Biały  (talk) 13:00, 9 April 2015 (UTC)
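A quick numerical check of the sanity test above, confirming that the product-integral construction does not reproduce the gamma function for f(t) = t + 1 (the closed form $$\exp(2\log 2 - 1) = 4/e$$ follows from integrating log(1 + t) by parts; the trapezoidal approximation is my own sketch):

```python
import math

def g_pi(x, steps=10000):
    """Product integral exp(integral_0^x log(1 + t) dt) for f(t) = t + 1,
    approximated with the trapezoidal rule."""
    f = lambda t: t + 1.0
    dt = x / steps
    area = sum(0.5 * (math.log(f(i * dt)) + math.log(f((i + 1) * dt))) * dt
               for i in range(steps))
    return math.exp(area)

# Discrete g(1) = f(1) = 2 = Gamma(3), but the product integral gives 4/e:
print(g_pi(1.0))        # ≈ 1.4715 (= 4/e)
print(math.gamma(3.0))  # 2.0
```

Since the two disagree already at n = 1, any gamma-like interpolation has to come from a different construction, e.g. the fractional-calculus idea above.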
 * Basically, a logical interpolation was what I was looking for, in the style of that done to the factorial by the gamma function.--Jasper Deng (talk) 18:08, 8 April 2015 (UTC)

Can the transitivity of multiplication be proven by a computer?
It was conjectured by Roger Penrose (in The Emperor's New Mind) that a computer could never prove that a×b always equals b×a. Has he been proven wrong? Can a computerized mathematical proof of this be generated from more elementary axioms? Same question for Penrose tiling - he asserts that a computer can never prove that Penrose tiling will fill a plane, since it does not repeat. — PhilHibbs | talk 21:12, 7 April 2015 (UTC)
 * I'm not sure what Penrose means by "computerized proof". To someone who accepts the Church–Turing thesis, the qualifier seems redundant.  Certainly a computer can simply enumerate all proofs until it finds one.  So since commutativity can be proven in first-order Peano arithmetic, which is arguably more elementary, a computer can find a proof.  Similarly, there exists a proof that the Penrose tiling is a tiling of the plane, so a computer can find a proof.--80.109.80.31 (talk) 21:48, 7 April 2015 (UTC)
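In fact modern proof assistants certify exactly this statement mechanically. A minimal illustration in Lean 4 (the core library's `Nat.mul_comm` is itself proved by induction on the natural numbers):

```lean
-- A machine-checked proof that a × b = b × a for natural numbers.
-- `Nat.mul_comm` is the core-library lemma, proved by induction;
-- the kernel verifies the proof term with no human judgment involved.
example (a b : Nat) : a * b = b * a := Nat.mul_comm a b
```

Whether this counts against Penrose's claim depends on whether one accepts the induction rule as something a computer may legitimately use, which is the point taken up below.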
 * More context for the first statement would be helpful. Penrose defines multiplication using the lambda calculus.  So, $$a\times b$$ is something equivalent but not identical to $$b\times a$$.  If induction is not allowed as a rule of inference (so we're confined to something weaker than Peano arithmetic), then we might not even be able to prove equivalence.  It depends on the rules.   Sławomir Biały  (talk) 11:33, 8 April 2015 (UTC)
 * A little more... As Sławomir Biały says, it depends on the rules. However, Penrose is actually making his remark in a wider context. His underlying point is that he thinks of computers as being entirely bound by rules, and that these rules restrict the conclusions that a computer can arrive at. In particular, he is, I think, echoing (or being heavily influenced by) the ideas of John Lucas in his famous paper Minds, Machines and Gödel. Lucas argued that computers are unable to "see" the truth of statements that human mathematicians can see and prove. The reasoning is closely linked to the work of Alonzo Church and Alan Turing in their negative answer to David Hilbert's Entscheidungsproblem; in effect, they proved that there is no decision procedure that a computer can carry out that determines whether an arbitrarily chosen formula in the first-order predicate calculus (i.e. the normal rules of logical deduction) is valid. That is, Penrose joins Lucas in thinking that computers can't always decide whether a randomly chosen mathematical or logical statement is true or false, and thus that the "mind" of a computer is in some way qualitatively different from the mind of a human.
 * For what it's worth, there is substantial academic debate on the validity of Penrose's/Lucas' conclusions. RomanSpa (talk) 17:50, 9 April 2015 (UTC)


 * The statement listed by the OP appears to be a statement of commutativity rather than of transitivity. Is the OP really asking about a*b =? b*a, or about (a*b)*c =? a*(b*c)?  Robert McClenon (talk) 19:03, 9 April 2015 (UTC)
 * Wait a minute. The second example is of associativity.  Transitivity applies to relationships, not operations.  Robert McClenon (talk) 19:32, 9 April 2015 (UTC)


 * Penrose and Lucas do appear to make the same argument. Lucas's paper starts out "Gödel's theorem states that in any consistent system which is strong enough to produce simple arithmetic there are formulae which cannot be proved-in-the-system, but which we can see to be true", which is already wrong. Gödel's argument shows that the Gödel sentence G is provable iff ¬G is provable. If both are provable, the system is inconsistent (and G is false in the standard interpretation). If neither is provable, the system is incomplete (and G is true in the standard interpretation). Saying that we "see" that G is true (in the standard interpretation) is the same as saying that we "see" that the system is consistent, and that's merely an educated guess based on the information available to us. A computer limited to reasoning within a Gödelizable system knows as much as we know about its own Gödel sentence—that it's provable iff its negation is provable iff the system is inconsistent iff it's false in the standard interpretation, etc.—because all of that is provable within the system. There's no evidence that "seeing" (i.e. guessing like humans do) the consistency of the system is not also formalizable within the system. It's simply not defined well enough for anyone to argue anything about it. All of this is obvious to most mathematicians, and I don't think there are many who think the argument of Penrose and Lucas is valid or interesting. -- BenRG (talk) 23:37, 9 April 2015 (UTC)
 * Whilst I'm largely of the view that the Penrose/Lucas reasoning is ultimately unconvincing, because of a misunderstanding of the nature of what computers can do, I'd like to suggest that BenRG's comments don't seem to me to provide a rigorous critique of their position. I'm reluctant to act as apologist for Lucas, though, for personal reasons. RomanSpa (talk) 01:45, 10 April 2015 (UTC)