Talk:Hyperoperation

"Hyper-0"
There is no such operation as "add one". Addition is the most basic operation. The "add one" function, or counting, is merely an example of an arithmetic progression with an initial term and common difference of 1. And even if we look at it as something different, it cannot be called a binary, or arithmetic, operation: the operations of addition and subtraction, multiplication and division, and exponentiation and root (and logarithm, if you wish) each involve two numbers, and the result of the operation depends on both of those numbers. But the "add one" function takes only one number as input. It's not a binary operation, just as sine, cosine, and other such functions are not. Majopius (talk) 21:44, 9 February 2010 (UTC)


 * See Addition or Peano axioms. The add 1 function is normally called the successor function and addition is defined in terms of it. Dmcq (talk) 23:46, 9 February 2010 (UTC)


 * You guys are both right in a way. The successor (or "add one") function is very different from the rest of the hyper operations; as Majopius noted, it is not really a binary function (although we treat it that way in this article). Before Peano and his peers, I believe that addition was considered to be a basic operation. However, Peano and others did find successor useful for defining addition axiomatically. It is also a clear extension of the hyper operation series of functions below addition. Your argument is basically the same as that between people who say that you cannot subtract 5 from 3 ( 3-5 ) and those who say it is -2. Neither is wrong; it just depends on your system. Cheers, — sligocki (talk) 04:24, 10 February 2010 (UTC)


 * I still do not agree that there is such an operation. Addition need not be defined in terms of a simpler operation - what could be simpler than addition? The "add one" function is, in this sense, merely a special case of addition, and the fact that it was given a verbal, non-mathematical name does not change its true identity. Majopius (talk) 01:27, 26 February 2010 (UTC)


 * You may be biased by language. Consider a visual approach to numbers: a number line with sequential marks representing the numbers 0, 1, 2, 3, ... from left to right. Now, the successor function is defined as moving to the next mark to the right. This is clearly a very basic operation. How would you define addition? You could do it by measuring the length from 0 to n and going that far past m to get n+m (assuming that the marks are evenly spaced), but that is rather complicated and you would need some sort of device that measures distance. Alternatively, you could define addition based upon the successor function: to add n to m, start with one finger at n and the other at 0, and apply successor to both repeatedly until your second finger is on m; then your first finger is on n+m. This requires only 2 fingers and the ability to move them to the right. This is roughly the reason that we sometimes think of successor as a more basic operation than addition. Cheers, — sligocki (talk) 03:36, 4 March 2010 (UTC)
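The two-finger procedure described above can be written out directly; here is a minimal Python sketch (illustrative only, for natural numbers) in which successor is the only primitive operation:

```python
def successor(n):
    """The 'add one' step: move one mark to the right on the number line."""
    return n + 1

def add(n, m):
    """Addition of natural numbers defined purely from successor, mirroring
    the two-finger procedure: one counter starts at n, the other at 0, and
    both advance together until the second counter reaches m."""
    first, second = n, 0
    while second != m:
        first = successor(first)
        second = successor(second)
    return first

print(add(3, 5))  # 8
```

The loop never consults the value of `n + m` directly; it only needs equality testing and successor, which is the sense in which successor is "more basic" here.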


 * What helps clear up this matter is logarithms. A logarithm acts like a mirror image of the number, because everything is the same, but one step lower - when the numbers multiply, their logarithms add. When they divide, the logarithms are subtracted. When a number is raised to a power, its logarithm is multiplied by the power. Furthermore, we get relationships between the two identity elements - 0 and 1: since logarithms reduce each operation a step down, multiplication reduces to addition, and so the logarithm of 1 (the multiplicative identity) is 0 (the additive identity). Finally, the logarithms of multiplicative inverses are additive inverses. So, if the logarithms behaved in some definite way when the numbers are added, then that behavior would give a Hyper0 operation. But there is no such formula for the logarithm of a sum (log(x + y)) - at least, it is not the successor function. Majopius (talk) 23:32, 8 March 2010 (UTC)


 * You are oversimplifying things, log(a*b) = log(a) + log(b) and log(a^b) = log(a)*b, but this does not work for Hyper-4 (tetration), log(a^^b) is not log(a)^log(b) or log(a)^b or anything like that. Just because log seems to "reduce each operation a step down" for a few examples does not mean that it does for all generalizations. Cheers, — sligocki (talk) 00:33, 14 March 2010 (UTC)
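These identities, and the failure one level up, are easy to check numerically. A small Python sketch (natural log and a naive integer-height tetration; the function name `tetrate` is just illustrative):

```python
import math

a, b = 2.0, 3

# log drops multiplication to addition and exponentiation to multiplication:
assert math.isclose(math.log(a * b), math.log(a) + math.log(b))
assert math.isclose(math.log(a ** b), math.log(a) * b)

def tetrate(a, n):
    """a^^n: a right-associative tower of n copies of a (integer heights only)."""
    result = 1.0
    for _ in range(n):
        result = a ** result
    return result

# but no analogous identity drops tetration down a level:
lhs = math.log(tetrate(a, b))  # log(2^^3) = log(16)
assert not math.isclose(lhs, math.log(a) ** b)
assert not math.isclose(lhs, math.log(a) ** math.log(b))
```

The two final assertions check only the specific candidate formulas named in the comment above, which is all a numerical spot-check can do.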


 * Furthermore, I do not believe in tetration or any other such things further on. Tetration cannot be consistently defined for all inputs, and is not that necessary. I made an attempt to define it for non-integer heights, but got nothing, and so abandoned that idea. But the idea of defining addition a + b as a + 1 + 1 + ... + 1 is completely wrong. To be perfectly honest, I hate it. Defining multiplication ab as a + a + ... + a, though incorrect, is still bearable, but not "add one". The operations after addition are each defined recursively as the previous operation applied to b copies of a. But here addition is defined in terms of b copies of 1 and an a separately. To me, it looks pretty ugly - breaking one summand into ones makes the picture look asymmetrical. Majopius (talk) 00:57, 26 March 2010 (UTC)

(unindent) Well, you don't need to consider generalizations of addition and multiplication, but this page is all about those generalizations. I agree that some of these things are not the most pretty or intuitive, but this is how the hyper operations are defined. Cheers, — sligocki (talk) 21:20, 27 March 2010 (UTC)
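For concreteness, the recursion under discussion (successor at level 0, with each level iterating the one below, right-associatively) can be sketched in a few lines of Python; this is an illustrative toy for small nonnegative integers, not an efficient implementation:

```python
def hyper(n, a, b):
    """a[n]b for nonnegative integers: n=0 successor, n=1 addition,
    n=2 multiplication, n=3 exponentiation, n=4 tetration, ..."""
    if n == 0:
        return b + 1  # level 0 is the unary "add one"; it ignores a
    if b == 0:
        return {1: a, 2: 0}.get(n, 1)  # base cases: a+0=a, a*0=0, a^0=1, ...
    # each level iterates the level below it, right-associatively
    return hyper(n - 1, a, hyper(n, a, b - 1))

print(hyper(1, 4, 3), hyper(2, 4, 3), hyper(3, 4, 3), hyper(4, 2, 3))
# 7 12 64 16
```

Note how the recursive case does break the computation into b applications of the lower operation, exactly the asymmetry Majopius objects to.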


 * Alright, alright. Those who believe in those strange things like "add one", tetration, pentation, hexation, heptation,... may continue to do so. But I don't - and neither would any mathematician in their right mind. I don't mean to be crude, but the idea I stick to is this: there are two primary operations of two different degrees: addition (degree 1) and multiplication (degree 2). The other two operations - subtraction and division - naturally appear as inverses of addition and multiplication, and have the same degree as their primaries. Next, by iterating multiplication, we get exponentiation (degree 3). Because it is non-commutative (as it is the first operation to be defined recursively in terms of repeated multiplication), it has two inverses - root (degree 3, of course) (however, root is different in that it gives non-unique results depending on its index) and logarithm, whose status as an operation remains questionable due to its nature - "taking the logarithm" does not seem like a meaningful operation - it is interpreted more like a property of the number itself. And plus, it is root that naturally replaces division as a higher operation (for example, when switching from arithmetic to geometric progression), not logarithm. Majopius (talk) 03:10, 28 March 2010 (UTC)


 * Any mathematician "in their right mind" would not speak about "believing in" an algebraic function. These functions are not empirical.  They exist by definition, not by physical discovery, and they are defined for utilitarian purpose.  Hence, there are multiple definitions for the same operation in different contexts (such as how 0^0 is defined differently in different domains).  Consider that many branches of math which have no obvious connection to reality are actually quite useful for modeling real processes: complex numbers in electrical engineering, or tetration in computational analysis.  Whether you "believe in tetration" is irrelevant, because it has a use.  Your point of view has been very rare for over a century and is not relevant to the article. TricksterWolf (talk) 17:32, 21 May 2012 (UTC)
 * "But I don't - and neither would any mathematician in their right mind."
 * Very nearly every mathematician in the past hundred years would say you are the dumb one, not the people studying tetration.
 * What does it even mean to "believe in" a function? It's not a god; it doesn't save you from your sins or accept sacrifices (although I guess you could give sacrifices to a function). It just maps an input to an output. If by "believe in" you mean that you consider a function useful: whether a function is useful or not is a different question.
 * But before you go calling mathematicians "out of their mind", bear in mind there are many things that do not look useful at first but in fact are very useful, so it's not stupid to study something that's seemingly useless; plus, it can be fun. Math can be done just for fun or because you think it might be interesting. It seems Euler knew this well: some of his results are just neat facts, not "I just invented calculus" things. To invent calculus one must come up with some weird ideas such as "infinitesimals". Of course tetration is not defined for all the reals, at least not in a standard way, but it does have a standard definition for all of the integers. At no point in the definition of a function does it say "a function must be defined for all the reals". JGHFunRun (talk) 16:13, 28 October 2022 (UTC)

Even though this discussion was last active a WWII ago (6 years, 1 day), I wish to note that the fun thing about math is that, like science, it doesn't matter whether you agree such an operation exists. It's true whether you accept it or not. I also want to point out that math is... strange in the way existence works: things exist by definition; essentially, if you asked Euclid to explain why all right angles are equal, the answer would be "because fuck you that's why" (pardon my illustrative profanity). Math is done by saying certain things are true without justification and seeing what results; it might not be applicable to the real world, but it isn't wrong.

You can define addition in terms of successor, or you can make it a primitive notion; the former is more popular in abstract mathematics.

(and nevertheless, yes there is a 'plus one' function. Even if you don't like defining addition in terms of it, it still exists.) Hppavilion1 (talk) 18:55, 22 May 2018 (UTC)
 * OK, I missed when Majopius said "I do not believe in tetration". Seriously, tetration is defined right there in the article, so it exists. You might not see the use for it, but it exists because we just showed you a definition. Hppavilion1 (talk) 18:57, 22 May 2018 (UTC)

"Ennation" listed at Redirects for discussion
An editor has identified a potential problem with the redirect Ennation and has thus listed it for discussion. This discussion will occur at Redirects for discussion/Log/2022 May 8 until a consensus is reached, and readers of this page are welcome to contribute to the discussion. 1234qwer1234qwer4 10:31, 8 May 2022 (UTC)

Fractional extension
If we define tetration via the Kneser method, build up pentation via the Kneser method, and so on, would we be able to interpolate and find fractional values for hyperoperations, such as 2[1.5]3? Kwékwlos (talk) 11:34, 22 March 2023 (UTC)


 * I conjecture that 3[0.5]3 is approx. 5.4, 3[1.5]3 is approx. 7, and 3[2.5]3 is approx. 13. See https://commons.wikimedia.org/wiki/File:Hyperoperation_3_and_3_with_real_number.svg and https://commons.wikimedia.org/wiki/File:Hyperoperation_3_and_n_with_real_number.svg. Kwékwlos (talk) 12:52, 28 March 2023 (UTC)
 * This is not a forum for conducting original research. The article's content should reflect what is written in reliable sources.  --JBL (talk) 17:21, 28 March 2023 (UTC)
 * I managed to dig up a reliable source (https://www.hindawi.com/journals/mpe/2016/4356371/) that tries to construct a[3/2]b using the arithmetic-geometric mean. But I don't see how this could be a practical way of extending hyperoperations to non-integer values. Kwékwlos (talk) 11:50, 22 April 2023 (UTC)
 * I wonder why we don't go into negatives (like 4[-1]5) since we are already talking about numbers that are not strictly "positive integers" Taureonn (talk) 23:14, 7 December 2023 (UTC)

Question about Modulus
I may be completely wrong about this, but I was under the impression that modulus is an inverse of multiplication, similarly to how logarithms are an inverse of exponentiation; should I be correct about this, shouldn't modulo warrant attention in the article and in the Hyperoperations template? ThrowawayEpic1000 (talk) 21:41, 8 December 2023 (UTC)
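For what it's worth, modulo is usually tied to division rather than treated as a standalone inverse of multiplication: floor division and remainder together satisfy the identity a == (a // b) * b + a % b. A quick Python illustration of that standard relationship:

```python
a, b = 17, 5

# divmod returns the floor quotient and the remainder in one call
q, r = divmod(a, b)
assert q == a // b and r == a % b

# the division identity that characterizes the remainder
assert a == q * b + r

print(q, r)  # 3 2
```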

Right associativity vs. left
In this article, the hyperoperations are defined to be right-associative, as in $a[n]3 = a[n−1](a[n−1]a)$ rather than $(a[n−1]a)[n−1]a$. That makes sense to me for integers. However, when multiplication and exponentiation are defined for ordinal arithmetic, the definitions of these two operations are instead left-associative, where the distinction matters because neither addition nor multiplication is commutative for ordinal numbers. The transfinite induction steps for $b > 0$ are

$a × b = sup(\{(a × c) + a : c < b\})$ and $a ↑ b = sup(\{(a ↑ c) × a : c < b\})$.

That is, for ordinal numbers we use left associativity for these operations; but, whether for integers or ordinal numbers, we use right associativity when $n = 2$. Why is it sometimes left associativity and sometimes right associativity for ordinal numbers? I'd like to see an explanation of this in the article. — Q uantling (talk | contribs) 01:00, 14 July 2024 (UTC)
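As an aside, even for ordinary integers the left/right distinction shows up one level above exponentiation: folding a tower of identical exponents from the left versus from the right gives different values. A minimal Python sketch (the fold order stands in for the two recursion rules discussed here; this says nothing about the ordinal case):

```python
from functools import reduce

xs = [2, 2, 2, 2]

# left-associative: ((2**2)**2)**2
left = reduce(lambda acc, x: acc ** x, xs)

# right-associative: 2**(2**(2**2)), the convention used for hyperoperations
right = reduce(lambda acc, x: x ** acc, reversed(xs))

print(left, right)  # 256 65536
```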


 * For example, with $n = 2$ and the product $1 [2] (ω + 1)$, correctly using left associativity gives $a [n] b = sup(\{(a [n] c) [n−1] a : c < b\})$, which agrees with right associativity for the first $ω$ summands and then gives $ω + 1$, the correct final answer. However, if we incorrectly use right associativity (under the belief that that is the correct rule for all hyperoperations) then we instead get $a [n] b = sup(\{a [n−1] (a [n] c) : c < b\})$, which equals $ω$, an incorrect answer for $1 × (ω + 1)$. — Q uantling (talk | contribs) 21:26, 14 July 2024 (UTC)