In mathematics, the hyperoperation sequence[nb 1] is an infinite sequence of arithmetic operations (called hyperoperations in this context)[1][11][13] that starts with a unary operation (the successor function with n = 0). The sequence continues with the binary operations of addition (n = 1), multiplication (n = 2), and exponentiation (n = 3).

After that, the sequence proceeds with further binary operations extending beyond exponentiation, using right-associativity. For the operations beyond exponentiation, the nth member of this sequence is named by Reuben Goodstein after the Greek prefix of n suffixed with -ation (such as tetration (n = 4), pentation (n = 5), hexation (n = 6), etc.)[5] and can be written using n − 2 arrows in Knuth's up-arrow notation. Each hyperoperation may be understood recursively in terms of the previous one by:

a[n]b = a [n − 1] (a [n − 1] ( ⋯ [n − 1] (a [n − 1] a) ⋯ )), with b copies of a, for n ≥ 2.

It may also be defined according to the recursion rule part of the definition, as in Knuth's up-arrow version of the Ackermann function:

a[n]b = a [n − 1] (a [n] (b − 1)), for n ≥ 1.
This notation can be used to express numbers far larger than those that scientific notation can conveniently handle, such as Skewes's number and googolplexplex, but there are some numbers which even it cannot easily express, such as Graham's number and TREE(3).[14]

This recursion rule is common to many variants of hyperoperations.

Definition


Definition, most common


The hyperoperation sequence is the sequence of binary operations Hn(a, b) on the nonnegative integers, indexed by n ≥ 0 and defined recursively as follows:

Hn(a, b) =
   b + 1, when n = 0
   a, when n = 1 and b = 0
   0, when n = 2 and b = 0
   1, when n ≥ 3 and b = 0
   Hn−1(a, Hn(a, b − 1)), otherwise.

(Note that for n = 0, the binary operation essentially reduces to a unary operation (successor function) by ignoring the first argument.)

For n = 0, 1, 2, 3, this definition reproduces the basic arithmetic operations of successor (which is a unary operation), addition, multiplication, and exponentiation, respectively, as

H0(a, b) = b + 1,
H1(a, b) = a + b,
H2(a, b) = a · b,
H3(a, b) = a^b.

The Hn operations for n ≥ 3 can be written in Knuth's up-arrow notation, using n − 2 arrows.
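A direct transcription of this recursive definition into Python reads as follows (a minimal illustrative sketch; the function name hyperop is not standard, and the naive recursion is practical only for very small arguments):

def hyperop(n, a, b):
    """Hyperoperation Hn(a, b) for nonnegative integers, by direct recursion."""
    if n == 0:
        return b + 1                                  # successor ignores a
    if b == 0:
        return a if n == 1 else (0 if n == 2 else 1)  # base cases for n = 1, 2, >= 3
    # general case: Hn(a, b) = Hn-1(a, Hn(a, b - 1))
    return hyperop(n - 1, a, hyperop(n, a, b - 1))

# sanity checks against the basic operations
assert hyperop(1, 4, 3) == 4 + 3
assert hyperop(2, 4, 3) == 4 * 3
assert hyperop(3, 4, 3) == 4 ** 3
assert hyperop(4, 2, 3) == 2 ** 2 ** 2    # tetration: 2^(2^2) = 16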

So what will be the next operation after exponentiation? We defined multiplication so that H2(a, 3) = a · 3 = a + a + a, and defined exponentiation so that H3(a, 3) = a^3 = a · a · a, so it seems logical to define the next operation, tetration, so that H4(a, 3) = a^(a^a), with a tower of three 'a'. Analogously, the pentation of (a, 3) will be tetration(a, tetration(a, a)), with three "a" in it.

H4(a, b) = a^(a^( ⋯ ^a)), a tower of b copies of a, written a ↑↑ b in Knuth's notation.

Knuth's notation could be extended to negative indices ≥ −2 in such a way as to agree with the entire hyperoperation sequence, except for the lag in the indexing:

Hn(a, b) = a ↑^(n − 2) b, for n ≥ 0,

so that a ↑^(−2) b = b + 1, a ↑^(−1) b = a + b, a ↑^0 b = a · b, and a ↑^1 b = a^b.

The hyperoperations can thus be seen as an answer to the question "what's next" in the sequence: successor, addition, multiplication, exponentiation, and so on. Noting that

a + b = a + (1 + 1 + ⋯ + 1)   (b copies of 1),
a · b = a + a + ⋯ + a   (b copies of a),
a^b = a · a · ⋯ · a   (b copies of a),

the relationship between the basic arithmetic operations is illustrated, allowing the higher operations to be defined naturally as above. The parameters of the hyperoperation hierarchy are sometimes referred to by their analogous exponentiation term;[15] so a is the base, b is the exponent (or hyperexponent),[12] and n is the rank (or grade),[6] and moreover, Hn(a, b) is read as "the bth n-ation of a", e.g. H4(7, 9) is read as "the 9th tetration of 7", and H123(456, 789) is read as "the 789th 123-ation of 456".

In common terms, the hyperoperations are ways of compounding numbers that increase in growth based on the iteration of the previous hyperoperation. The concepts of successor, addition, multiplication and exponentiation are all hyperoperations; the successor operation (producing x + 1 from x) is the most primitive, addition specifies the number of times 1 is to be added to a number to produce a final value, multiplication specifies the number of times a number is to be added to itself, and exponentiation specifies the number of times a number is to be multiplied by itself.
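For example, H4(2, 4) = 2^(2^(2^2)) = 2^16 = 65536, and H5(2, 3) = H4(2, H5(2, 2)) = H4(2, 4) = 65536: each level of the hierarchy is reached by iterating the level below it.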

Definition, using iteration


Define iteration of a function f of two variables as

f^0(a, x) = x,
f^(n + 1)(a, x) = f(a, f^n(a, x)).

The hyperoperation sequence can be defined in terms of iteration, as follows. For all nonnegative integers a, b and n, define

 

As iteration is associative, the last line can be replaced by

 
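The idea of defining each level as a b-fold iteration of the level below can be sketched in Python (an illustrative sketch only; the function name hyperop_iter is not standard, and the starting values a, 0, 1 for n = 1, 2, ≥ 3 are those of the recursive definition above):

def hyperop_iter(n, a, b):
    """Hn(a, b) phrased as b-fold iteration of x -> Hn-1(a, x)."""
    if n == 0:
        return b + 1
    x = a if n == 1 else (0 if n == 2 else 1)   # starting value Hn(a, 0)
    for _ in range(b):                          # iterate the next-lower operation b times
        x = hyperop_iter(n - 1, a, x)
    return x

# small checks: 3^4 as four multiplications by 3, and 2[4]3 = 2^(2^2)
assert hyperop_iter(3, 3, 4) == 81
assert hyperop_iter(4, 2, 3) == 16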

Computation


The definitions of the hyperoperation sequence can naturally be transposed to term rewriting systems (TRS).

TRS based on definition sub 1.1


The basic definition of the hyperoperation sequence corresponds with the reduction rules

(r1)  H0(a, b)  →  b + 1
(r2)  H1(a, 0)  →  a
(r3)  H2(a, 0)  →  0
(r4)  Hn(a, 0)  →  1,  for n ≥ 3
(r5)  Hn(a, b)  →  Hn−1(a, Hn(a, b − 1)),  for n ≥ 1 and b ≥ 1

To compute Hn(a, b) one can use a stack, which initially contains the three values n, a and b.

Then, repeatedly until no longer possible, three elements are popped and replaced according to the rules[nb 2]

 

Schematically, starting from the initial stack:

WHILE stackLength <> 1
{
   POP 3 elements;
   PUSH 1 or 5 elements according to the rules r1, r2, r3, r4, r5;
}
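A runnable Python version of this stack scheme might look as follows (an illustrative sketch: the encoding of each pending operation as three stack entries with b on top, and the function name hyperop_stack, are assumptions rather than a transcription of the rules above; each rewrite pops three entries and pushes one or five, matching the schematic loop):

def hyperop_stack(n, a, b):
    """Evaluate Hn(a, b) with an explicit stack instead of recursion."""
    stack = [n, a, b]                                # bottom -> top, b on top
    while len(stack) > 1:
        b, a, n = stack.pop(), stack.pop(), stack.pop()
        if n == 0:                                   # successor
            stack.append(b + 1)
        elif b == 0:                                 # base cases for n = 1, 2, >= 3
            stack.append(a if n == 1 else (0 if n == 2 else 1))
        else:                                        # Hn(a, b) -> Hn-1(a, Hn(a, b - 1))
            stack += [n - 1, a, n, a, b - 1]         # outer frame below, inner triple on top
    return stack[0]

assert hyperop_stack(3, 2, 5) == 32
assert hyperop_stack(4, 2, 3) == 16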

Example

Compute  .[16]

The reduction sequence is[nb 2][17]

 
     
     
     
     
     
     
     
     
     

When implemented using a stack, on the same input,

the successive stack configurations represent the equations
   
           
           
           
           
           
           
           
           
           

TRS based on definition sub 1.2


The definition using iteration leads to a different set of reduction rules

 

As iteration is associative, instead of rule r11 one can define

 

As in the previous section, the computation of Hn(a, b) can be implemented using a stack.

Initially the stack contains the four elements  .

Then, until termination, four elements are popped and replaced according to the rules[nb 2]

 

Schematically, starting from the initial stack:

WHILE stackLength <> 1
{
   POP 4 elements;
   PUSH 1 or 7 elements according to the rules r6, r7, r8, r9, r10, r11;
}

Example

Compute  .

On the given input, the successive stack configurations are

 

The corresponding equalities are

 

When reduction rule r11 is replaced by rule r12, the stack is transformed according to

 

The successive stack configurations will then be

 

The corresponding equalities are

 

Remarks

  • 0⁰ is a special case. See below.[nb 3][nb 4]
  • The computation of Hn(a, b) according to the rules {r6 - r10, r11} is heavily recursive. The culprit is the order in which iteration is executed: the outermost application is expanded first and disappears only after the whole inner sequence has been unfolded. For instance, the computation of 2[4]4 converges to 65536 in 2863311767 steps, and the maximum depth of recursion[18] is 65534.
  • The computation according to the rules {r6 - r10, r12} is more efficient in that respect. Implementing the iteration as repeated application of H, one application per pass of a loop, mimics the repeated execution of a procedure H.[19] The depth of recursion, (n+1), matches the loop nesting. Meyer & Ritchie (1967) formalized this correspondence. The computation of 2[4]4 according to the rules {r6 - r10, r12} also needs 2863311767 steps to converge on 65536, but the maximum depth of recursion is only 5, as tetration is the 5th operator in the hyperoperation sequence; see also the sketch after this list.
  • The considerations above concern the recursion depth only. Either way of iterating leads to the same number of reduction steps, involving the same rules (when the rules r11 and r12 are considered "the same"). As the example shows, the reduction converges in 9 steps: 1 X r7, 3 X r8, 1 X r9, 2 X r10, 2 X r11/r12. The manner of iterating only affects the order in which the reduction rules are applied.
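The point about recursion depth can be illustrated with a small Python experiment that computes 2[4]3 in both styles while tracking the depth of nested calls (an illustrative sketch, not a transcription of the reduction rules; the function names are hypothetical, and the correspondence with r11 and r12 follows the description in the remarks above):

def hyperop_head(n, a, b, depth=1, stats=None):
    """Hn(a, b), unfolding the iteration head-first (the behaviour attributed to r11)."""
    if stats is not None:
        stats["max_depth"] = max(stats["max_depth"], depth)
    if n == 0:
        return b + 1
    if b == 0:
        return a if n == 1 else (0 if n == 2 else 1)
    inner = hyperop_head(n, a, b - 1, depth + 1, stats)      # unfold the whole inner sequence first
    return hyperop_head(n - 1, a, inner, depth + 1, stats)

def hyperop_tail(n, a, b, depth=1, stats=None):
    """Hn(a, b), performing the b-fold iteration as an explicit loop (the behaviour attributed to r12)."""
    if stats is not None:
        stats["max_depth"] = max(stats["max_depth"], depth)
    if n == 0:
        return b + 1
    x = a if n == 1 else (0 if n == 2 else 1)
    for _ in range(b):
        x = hyperop_tail(n - 1, a, x, depth + 1, stats)      # only one extra level per operator
    return x

s_head, s_tail = {"max_depth": 0}, {"max_depth": 0}
assert hyperop_head(4, 2, 3, stats=s_head) == 16
assert hyperop_tail(4, 2, 3, stats=s_tail) == 16
print(s_head["max_depth"], s_tail["max_depth"])   # the loop-based version stays at depth 5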

Examples


Below is a list of the first seven (0th to 6th) hyperoperations (0⁰ is defined as 1).

n Operation, Hn(a, b) Names Domain
0 b + 1 or a[0]b Increment, successor, zeration, hyper0 Arbitrary
1 a + b or a[1]b Addition, hyper1
2 a · b or a[2]b Multiplication, hyper2
3 a^b or a[3]b Exponentiation, hyper3 b real, with some multivalued extensions to complex numbers
4 a ↑↑ b or a[4]b Tetration, hyper4 a ≥ 0 or an integer, b an integer ≥ −1 [nb 5] (with some proposed extensions)
5 a ↑↑↑ b or a[5]b Pentation, hyper5 a, b integers ≥ −1 [nb 5]
6 a ↑↑↑↑ b or a[6]b Hexation, hyper6

Special cases


Hn(0, b) =

b + 1, when n = 0
b, when n = 1
0, when n = 2
1, when n = 3 and b = 0 [nb 3][nb 4]
0, when n = 3 and b > 0 [nb 3][nb 4]
1, when n > 3 and b is even (including 0)
0, when n > 3 and b is odd

Hn(1, b) =

b, when n = 2
1, when n ≥ 3

Hn(a, 0) =

0, when n = 2
1, when n = 0, or n ≥ 3
a, when n = 1

Hn(a, 1) =

a, when n ≥ 2

Hn(a, a) =

Hn+1(a, 2), when n ≥ 1

Hn(a, −1) =[nb 5]

0, when n = 0, or n ≥ 4
a − 1, when n = 1
−a, when n = 2
1/a , when n = 3

Hn(2, 2) =

3, when n = 0
4, when n ≥ 1, easily demonstrable recursively.
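For small arguments, several of these identities can be checked mechanically with the recursive sketch from the Definition section (repeated here so that the snippet is self-contained):

def hyperop(n, a, b):          # same recursive sketch as in the Definition section
    if n == 0:
        return b + 1
    if b == 0:
        return a if n == 1 else (0 if n == 2 else 1)
    return hyperop(n - 1, a, hyperop(n, a, b - 1))

# spot checks of the special cases above (feasible only because the arguments are tiny)
assert all(hyperop(n, 1, 5) == 1 for n in range(3, 7))   # Hn(1, b) = 1 for n >= 3
assert all(hyperop(n, 7, 1) == 7 for n in range(2, 7))   # Hn(a, 1) = a for n >= 2
assert all(hyperop(n, 2, 2) == 4 for n in range(1, 7))   # Hn(2, 2) = 4 for n >= 1
assert hyperop(2, 3, 3) == hyperop(3, 3, 2) == 9         # Hn(a, a) = Hn+1(a, 2) for n >= 1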

History


One of the earliest discussions of hyperoperations was that of Albert Bennett in 1914, who developed some of the theory of commutative hyperoperations (see below).[6] About 12 years later, Wilhelm Ackermann defined the function φ(a, b, n), which somewhat resembles the hyperoperation sequence.[20]

In his 1947 paper,[5] Reuben Goodstein introduced the specific sequence of operations that are now called hyperoperations, and also suggested the Greek names tetration, pentation, etc., for the extended operations beyond exponentiation (because they correspond to the indices 4, 5, etc.). As a three-argument function, e.g., G(n, a, b) = Hn(a, b), the hyperoperation sequence as a whole is seen to be a version of the original Ackermann function φ(a, b, n), which is recursive but not primitive recursive, as modified by Goodstein to incorporate the primitive successor function together with the other three basic operations of arithmetic (addition, multiplication, exponentiation), and to make a more seamless extension of these beyond exponentiation.

The original three-argument Ackermann function φ uses the same recursion rule as does Goodstein's version of it (i.e., the hyperoperation sequence), but differs from it in two ways. First, φ(a, b, n) defines a sequence of operations starting from addition (n = 0) rather than the successor function, then multiplication (n = 1), exponentiation (n = 2), etc. Secondly, the initial conditions for φ result in φ(a, b, 3) = a[4](b + 1), thus differing from the hyperoperations beyond exponentiation.[7][21][22] The significance of the b + 1 in the previous expression is that φ(a, b, 3) is a power tower of b + 1 copies of a, in which b counts the number of operators (exponentiations) rather than the number of operands ("a"s), as does the b in a[4]b, and so on for the higher-level operations. (See the Ackermann function article for details.)

Notations


This is a list of notations that have been used for hyperoperations.

Name Notation equivalent to Hn(a, b) Comment
Knuth's up-arrow notation a ↑^(n − 2) b Used by Knuth [23] (for n ≥ 3), and found in several reference books.[24][25]
Hilbert's notation   Used by David Hilbert.[26]
Goodstein's notation G(n, a, b) Used by Reuben Goodstein.[5]
Original Ackermann function   Used by Wilhelm Ackermann (for n ≥ 1)[20]
Ackermann–Péter function A(n, b − 3) + 3 This corresponds to hyperoperations for base 2 (a = 2)
Nambiar's notation   Used by Nambiar (for n ≥ 1) [27]
Superscript notation   Used by Robert Munafo.[21]
Subscript notation (for lower hyperoperations)   Used for lower hyperoperations by Robert Munafo.[21]
Operator notation (for "extended operations")   Used for lower hyperoperations by John Doner and Alfred Tarski (for n ≥ 1).[28]
Square bracket notation a[n]b Used in many online forums; convenient for ASCII.
Conway chained arrow notation a → b → (n − 2) Used by John Horton Conway (for n ≥ 3)

Variant starting from a


In 1928, Wilhelm Ackermann defined a 3-argument function φ(a, b, n) which gradually evolved into a 2-argument function known as the Ackermann function. The original Ackermann function was less similar to modern hyperoperations, because his initial conditions start with φ(a, 0, n) = a for all n > 2. Also he assigned addition to n = 0, multiplication to n = 1 and exponentiation to n = 2, so the initial conditions produce very different operations for tetration and beyond.

n Operation Comment
0 φ(a, b, 0) = a + b
1 φ(a, b, 1) = a · b
2 φ(a, b, 2) = a^b
3 φ(a, b, 3) = a[4](b + 1) An offset form of tetration. The iteration of this operation differs from the iteration of tetration.
4 φ(a, b, 4) Not to be confused with pentation.

Another initial condition that has been used is the one due to Rózsa Péter (where the base is held constant at a = 2), which does not form a hyperoperation hierarchy.
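A Python sketch of Ackermann's three-argument function, using the recursion φ(a, b, n) = φ(a, φ(a, b − 1, n), n − 1) together with the initial conditions described above (the values 0 and 1 at b = 0 for n = 1 and n = 2 are the standard ones; illustrative only and practical just for very small arguments):

def ackermann_phi(a, b, n):
    """Ackermann's original three-argument function (sketch, small inputs only)."""
    if n == 0:
        return a + b                                  # addition at level 0
    if b == 0:
        return 0 if n == 1 else (1 if n == 2 else a)  # initial conditions
    return ackermann_phi(a, ackermann_phi(a, b - 1, n), n - 1)

assert ackermann_phi(2, 3, 1) == 2 * 3
assert ackermann_phi(2, 3, 2) == 2 ** 3
assert ackermann_phi(2, 2, 3) == 2 ** 2 ** 2     # phi(a, b, 3) behaves like tetration offset by one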

Variant starting from 0


In 1984, C. W. Clenshaw and F. W. J. Olver began the discussion of using hyperoperations to prevent computer floating-point overflows.[29] Since then, many other authors [30][31][32] have renewed interest in the application of hyperoperations to floating-point representation. (Since Hn(a, b) are all defined for b = −1.) While discussing tetration, Clenshaw et al. assumed the initial condition a[4]0 = 0, which makes yet another hyperoperation hierarchy. As in the previous variant, the fourth operation is very similar to tetration, but offset by one.

n Operation Comment
0 F0(a, b) = b + 1
1 F1(a, b) = a + b
2 F2(a, b) = a · b
3 F3(a, b) = a^b
4 F4(a, b) = a[4](b − 1) An offset form of tetration. The iteration of this operation differs markedly from the iteration of tetration.
5 F5(a, b) Not to be confused with pentation.

Lower hyperoperations


An alternative for these hyperoperations is obtained by evaluation from left to right.[9] Since

a + b = (a + (b − 1)) + 1,
a · b = (a · (b − 1)) + a,
a^b = (a^(b − 1)) · a,

define (with ° or subscript)

a ∘(n+1) b = (a ∘(n+1) (b − 1)) ∘(n) a,  for b ≥ 2,

with

a ∘(1) b = a + b,   and   a ∘(n) 1 = a  for n ≥ 2.

This was extended to ordinal numbers by Doner and Tarski,[33] by :

 

It follows from Definition 1(i), Corollary 2(ii), and Theorem 9 that, for a ≥ 2 and b ≥ 1, [original research?]

 

But this suffers a kind of collapse, failing to form the "power tower" traditionally expected of hyperoperators:[34][nb 6]

a ∘(4) b = a^(a^(b − 1))

If α ≥ 2 and γ ≥ 2,[28][Corollary 33(i)][nb 6]

 
n Operation Comment
0 a ∘(0) b = b + 1 Increment, successor, zeration
1 a ∘(1) b = a + b
2 a ∘(2) b = a · b
3 a ∘(3) b = a^b
4 a ∘(4) b = a^(a^(b − 1)) Not to be confused with tetration.
5 a ∘(5) b Not to be confused with pentation. Similar to tetration.
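A Python sketch of this left-to-right evaluation (illustrative only; the base cases a ∘(1) b = a + b and a ∘(n) 1 = a for n ≥ 2 are assumptions consistent with the collapse identity above):

def lower_hyperop(n, a, b):
    """Left-to-right ("lower") hyperoperation a o_n b, for b >= 1."""
    if n == 0:
        return b + 1
    if n == 1:
        return a + b
    if b == 1:
        return a                                        # assumed base case for n >= 2
    # evaluate from left to right: a o_n b = (a o_n (b - 1)) o_(n-1) a
    return lower_hyperop(n - 1, lower_hyperop(n, a, b - 1), a)

assert lower_hyperop(3, 2, 5) == 2 ** 5                 # up to exponentiation nothing changes
assert lower_hyperop(4, 2, 4) == 2 ** (2 ** 3) == 256   # a o_4 b = a^(a^(b-1))
# ordinary tetration is already far larger: 2[4]4 = 2^(2^(2^2)) = 65536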

Commutative hyperoperations


Commutative hyperoperations were considered by Albert Bennett as early as 1914,[6] which is possibly the earliest remark about any hyperoperation sequence. Commutative hyperoperations are defined by the recursion rule

Fn+1(a, b) = exp(Fn(ln a, ln b)),

which is symmetric in a and b, meaning all hyperoperations are commutative. This sequence does not contain exponentiation, and so does not form a hyperoperation hierarchy.

n Operation Comment
0 F0(a, b) = ln(e^a + e^b) Smooth maximum
1 F1(a, b) = a + b
2 F2(a, b) = a · b = e^(ln a + ln b) This is due to the properties of the logarithm.
3 F3(a, b) = a^(ln b) = e^((ln a)(ln b)) In a finite field, this is the Diffie–Hellman key exchange operation.
4 F4(a, b) = e^(e^((ln ln a)(ln ln b))) Not to be confused with tetration.
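A Python sketch of this construction for real arguments greater than 1 (illustrative only; anchoring the recursion at F1(a, b) = a + b is an assumption consistent with the table above):

import math

def commutative_hyperop(n, a, b):
    """Bennett-style commutative hyperoperation Fn(a, b) for real a, b > 1."""
    if n == 1:
        return a + b                                    # anchor: ordinary addition
    if n > 1:
        # F_{n+1}(a, b) = exp(F_n(ln a, ln b))
        return math.exp(commutative_hyperop(n - 1, math.log(a), math.log(b)))
    # n == 0: running the rule downwards gives the smooth maximum
    return math.log(math.exp(a) + math.exp(b))

assert math.isclose(commutative_hyperop(2, 3.0, 4.0), 12.0)                  # F2 is multiplication
assert math.isclose(commutative_hyperop(3, 3.0, 4.0), 3.0 ** math.log(4.0))  # F3(a, b) = a^(ln b)
assert math.isclose(commutative_hyperop(3, 3.0, 4.0),
                    commutative_hyperop(3, 4.0, 3.0))                        # commutative in a and b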

Numeration systems based on the hyperoperation sequence


R. L. Goodstein [5] used the sequence of hyperoperators to create systems of numeration for the nonnegative integers. The so-called complete hereditary representation of integer n, at level k and base b, can be expressed as follows using only the first k hyperoperators and using as digits only 0, 1, ..., b − 1, together with the base b itself:

  • For 0 ≤ nb − 1, n is represented simply by the corresponding digit.
  • For n > b − 1, the representation of n is found recursively, first representing n in the form
b [k] xk [k − 1] xk − 1 [k − 2] ... [2] x2 [1] x1
where xk, ..., x1 are the largest integers satisfying (in turn)
b [k] xk ≤ n
b [k] xk [k − 1] xk − 1 ≤ n
...
b [k] xk [k − 1] xk − 1 [k − 2] ... [2] x2 [1] x1 ≤ n
Any xi exceeding b − 1 is then re-expressed in the same manner, and so on, repeating this procedure until the resulting form contains only the digits 0, 1, ..., b − 1, together with the base b.

Unnecessary parentheses can be avoided by giving higher-level operators higher precedence in the order of evaluation; thus,

level-1 representations have the form b [1] X, with X also of this form;
level-2 representations have the form b [2] X [1] Y, with X,Y also of this form;
level-3 representations have the form b [3] X [2] Y [1] Z, with X,Y,Z also of this form;
level-4 representations have the form b [4] X [3] Y [2] Z [1] W, with X,Y,Z,W also of this form;

and so on.

In this type of base-b hereditary representation, the base itself appears in the expressions, as well as "digits" from the set {0, 1, ..., b − 1}. This compares to ordinary base-2 representation when the latter is written out in terms of the base b; e.g., in ordinary base-2 notation, 6 = (110)2 = 2 [3] 2 [2] 1 [1] 2 [3] 1 [2] 1 [1] 2 [3] 0 [2] 0, whereas the level-3 base-2 hereditary representation is 6 = 2 [3] (2 [3] 1 [2] 1 [1] 0) [2] 1 [1] (2 [3] 1 [2] 1 [1] 0). The hereditary representations can be abbreviated by omitting any instances of [1] 0, [2] 1, [3] 1, [4] 1, etc.; for example, the above level-3 base-2 representation of 6 abbreviates to 2 [3] 2 [1] 2.

Examples: The unique base-2 representations of the number 266, at levels 1, 2, 3, 4, and 5 are as follows:

Level 1: 266 = 2 [1] 2 [1] 2 [1] ... [1] 2 (with 133 2s)
Level 2: 266 = 2 [2] (2 [2] (2 [2] (2 [2] 2 [2] 2 [2] 2 [2] 2 [1] 1)) [1] 1)
Level 3: 266 = 2 [3] 2 [3] (2 [1] 1) [1] 2 [3] (2 [1] 1) [1] 2
Level 4: 266 = 2 [4] (2 [1] 1) [3] 2 [1] 2 [4] 2 [2] 2 [1] 2
Level 5: 266 = 2 [5] 2 [4] 2 [1] 2 [5] 2 [2] 2 [1] 2
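The construction above can be sketched in Python; the following illustrative implementation produces unabbreviated representations (it does not apply the abbreviation rules, keeps redundant parentheses, and is practical only for small bases and levels; the function names are hypothetical):

def H(n, a, b):
    """Hyperoperation a [n] b, with closed forms for n <= 3 to keep it fast."""
    if n == 0:
        return b + 1
    if n == 1:
        return a + b
    if n == 2:
        return a * b
    if n == 3:
        return a ** b
    return 1 if b == 0 else H(n - 1, a, H(n, a, b - 1))

def hereditary(n, b, k):
    """Unabbreviated level-k, base-b complete hereditary representation of n, as a string."""
    if n <= b - 1:
        return str(n)                                  # plain digit
    parts, acc = [], b
    for level in range(k, 0, -1):
        x = 0
        while H(level, acc, x + 1) <= n:               # largest x keeping the partial value <= n
            x += 1
        acc = H(level, acc, x)
        parts.append((level, x))
    assert acc == n                                    # the greedy choice is exact at level 1
    pieces = [str(b)]
    for level, x in parts:
        rep = hereditary(x, b, k)                      # re-express any x exceeding b - 1
        pieces.append(f"[{level}] {'(' + rep + ')' if x > b - 1 else rep}")
    return " ".join(pieces)

print(hereditary(266, 2, 2))   # level-2 base-2 representation of 266
print(hereditary(266, 2, 3))   # level-3 base-2 representation of 266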

See also


Notes

  1. ^ Sequences similar to the hyperoperation sequence have historically been referred to by many names, including: the Ackermann function [1] (3-argument), the Ackermann hierarchy,[2] the Grzegorczyk hierarchy[3][4] (which is more general), Goodstein's version of the Ackermann function,[5] operation of the nth grade,[6] z-fold iterated exponentiation of x with y,[7] arrow operations,[8] reihenalgebra[9] and hyper-n.[1][9][10][11][12]
  2. ^ a b c This implements the leftmost-innermost (one-step) strategy.
  3. ^ a b c For more details, see Powers of zero.
  4. ^ a b c For more details, see Zero to the power of zero.
  5. ^ a b c Let x = a[n](−1). By the recursive formula, a[n]0 = a[n − 1](a[n](−1)) ⇒ 1 = a[n − 1]x. One solution is x = 0, because a[n − 1]0 = 1 by definition when n ≥ 4. This solution is unique because a[n − 1]b > 1 for all a > 1, b > 0 (proof by recursion).
  6. ^ a b Ordinal addition is not commutative; see ordinal arithmetic for more information

References

  1. ^ a b c Geisler 2003.
  2. ^ Friedman 2001.
  3. ^ Campagnola, Moore & Félix Costa 2002.
  4. ^ Wirz 1999.
  5. ^ a b c d e Goodstein 1947.
  6. ^ a b c d Bennett 1915.
  7. ^ a b Black 2009.
  8. ^ Littlewood 1948.
  9. ^ a b c Müller 1993.
  10. ^ Munafo 1999a.
  11. ^ a b Robbins 2005.
  12. ^ a b Galidakis 2003.
  13. ^ Rubtsov & Romerio 2005.
  14. ^ Townsend 2016.
  15. ^ Romerio 2008.
  16. ^ Bezem, Klop & De Vrijer 2003.
  17. ^ In each step the underlined redex is rewritten.
  18. ^ The maximum depth of recursion refers to the number of levels of activation of a procedure which exist during the deepest call of the procedure. Cornelius & Kirby (1975)
  19. ^ LOOP n TIMES DO H.
  20. ^ a b Ackermann 1928.
  21. ^ a b c Munafo 1999b.
  22. ^ Cowles & Bailey 1988.
  23. ^ Knuth 1976.
  24. ^ Zwillinger 2002.
  25. ^ Weisstein 2003.
  26. ^ Hilbert 1926.
  27. ^ Nambiar 1995.
  28. ^ a b Doner & Tarski 1969.
  29. ^ Clenshaw & Olver 1984.
  30. ^ Holmes 1997.
  31. ^ Zimmermann 1997.
  32. ^ Pinkiewicz, Holmes & Jamil 2000.
  33. ^ Doner & Tarski 1969, Definition 1.
  34. ^ Doner & Tarski 1969, Theorem 3(iii).

Bibliography

  • Ackermann, Wilhelm (1928). "Zum Hilbertschen Aufbau der reellen Zahlen". Mathematische Annalen. 99: 118–133. doi:10.1007/BF01459088. S2CID 123431274.
  • Bennett, Albert A. (December 1915). "Note on an Operation of the Third Grade". Annals of Mathematics. Second Series. 17 (2): 74–75. doi:10.2307/2007124. JSTOR 2007124.
  • Bezem, Marc; Klop, Jan Willem; De Vrijer, Roel (2003). "First-order term rewriting systems". Term Rewriting Systems by "Terese". Cambridge University Press. pp. 38–39. ISBN 0-521-39115-6.
  • Weisstein, Eric W. (2003). CRC concise encyclopedia of mathematics, 2nd Edition. CRC Press. pp. 127–128. ISBN 1-58488-347-2.
  • Zwillinger, Daniel (2002). CRC standard mathematical tables and formulae, 31st Edition. CRC Press. p. 4. ISBN 1-58488-291-3.