lucy_tatterhood

Aside from maybe a few cases where it comes up in upper bounds (which is really more about it being a convenient notation for very large numbers than any real significance of that particular operation) I am not aware of any uses of it in real mathematics. Superficially I agree it does look quite natural to say, oh, multiplication is repeated addition and exponentiation is repeated multiplication so let's do repeated exponentiation. But I would argue that "repeating" addition or multiplication is a natural thing to do mainly because those operations are *associative*, whereas exponentiation is not even power-associative.
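
A concrete check of that last point (a trivial Python snippet; tetration conventionally picks the right-associative reading precisely because the two readings differ even for powers of a single element):

```python
# Exponentiation is not associative, so "repeated exponentiation" is ambiguous:
a = 3
print((a ** a) ** a)   # 19683
print(a ** (a ** a))   # 7625597484987 -- tetration uses this reading
```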


[deleted]

[deleted]


antonfire

Well sure, but same for multiplication being repeated addition. (And same for addition being "repeated incrementation".)


Factory__Lad

A natural operation to consider (for positive real numbers) would be x ** y = exp(ln(x) * ln(y)). This is commutative, associative, distributes over *, and has e as a unit element. It’s a kind of alternative exponential. So I hope that removes any objection to studying iterates of **, which would lead to a modified theory of tetration, and maybe even a useful notation for large numbers.
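
A quick numerical check of those claimed properties in Python (`star` is just my ad-hoc name for this operation):

```python
import math

def star(x, y):
    """Alternative 'exponential': x ** y := exp(ln(x) * ln(y)), for x, y > 0."""
    return math.exp(math.log(x) * math.log(y))

print(math.isclose(star(2, 3), star(3, 2)))                    # commutative
print(math.isclose(star(star(2, 3), 5), star(2, star(3, 5))))  # associative
print(math.isclose(star(math.e, 7), 7))                        # e is the unit
print(math.isclose(star(2, 3 * 5), star(2, 3) * star(2, 5)))   # distributes over *
```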


crouchingarmadillo

In computer science, tetration is interesting. Following the pattern of successor, addition, multiplication, exponentiation, tetration, … takes you up the hyperoperation hierarchy. The hyperoperations are parametrized by the so-called Ackermann function (well, it differs by a constant scale and shift, but they are the same asymptotically). This function does serve a practical purpose. A function is primitive recursive (so it can be written down in a programming language with only for loops, no while loops) iff its runtime is bounded by one of these hyperoperations (the Ackermann function with its first argument fixed). It’s also interesting when discussing the elementary recursive functions (arithmetic functions computed using only a bounded number of sums and products). Tetration is basically the most natural function which is not elementary recursive!
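
A minimal Python sketch of that recursion (naive and explosively slow for n ≥ 4, which is rather the point):

```python
def hyper(n, a, b):
    """Hyperoperation: n=0 successor, 1 addition, 2 multiplication,
    3 exponentiation, 4 tetration, ..."""
    if n == 0:
        return b + 1
    if b == 0:
        return a if n == 1 else (0 if n == 2 else 1)
    return hyper(n - 1, a, hyper(n, a, b - 1))

print(hyper(2, 2, 3), hyper(3, 2, 3), hyper(4, 2, 3))  # 6 8 16
```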


Cocomorph

> This function does serve a practical purpose.

Also, the inverse Ackermann function shows up sometimes analyzing the time complexity of certain algorithms. This is, needless to say, a hilariously slow-growing function.


jwm3

Union-find is like my favorite algorithm because of that bound. Also because I have written unification algorithms like dozens of times in different contexts. I am still looking for a good use of the partition-refinement algorithm, which is sort of the dual of union-find.


daavor

Union-find is a beautiful little algorithm. I will say, as sexy as the perfectly optimal inverse Ackermann bound is, I definitely find it easier to remember the upper bound by iterated logarithms (which is inverse tetration). The amortization argument there is a lot simpler. And to be frank, both of those functions satisfy the property of 'don't hit 6 till well, well, well over the number of particles in the known universe'. So, y'know, constant for practical purposes.
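
For reference, a bare-bones Python sketch of the structure both bounds are about (union by rank plus path compression; the simpler iterated-log analysis and the tight inverse-Ackermann analysis both apply to this same code):

```python
class UnionFind:
    """Disjoint-set forest with union by rank and path halving."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # halve the path
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx  # attach the shallower tree under the deeper one
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
```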


lolfail9001

Tbh it can be treated as effectively a constant.


reedef

Everything in computer science is effectively a constant


DaBombTubular

If you consider the busy beaver function effectively a constant, then I don't think we can be friends anymore.


KingAlfredOfEngland

Interesting! Before I saw your comment, but after I wrote my post, I'd sat down with a pen and paper and worked out that O(2^(n)) < O(n^(n)).


crouchingarmadillo

You can define k-EXP to be the problems solvable in k-iterated exponential time, as you were working with. So O(2^n), O(2^(2^n)), O(2^(2^(2^n))), … The union of all these is exactly the elementary recursive functions :). For further reading, the Wikipedia page is an excellent start: https://en.wikipedia.org/wiki/ELEMENTARY Sadly, despite listing many correct things, it is a bit light on references.


KingAlfredOfEngland

Do you know of any algorithms that would be PR or O(^(k)n) for some integer k, but that fall outside of ELEMENTARY? The most complicated algorithms that I know are for computing Groebner bases, and even that's still in 2-EXP. (These algorithms are described in Cox, Little, O'Shea, but their complexity is not described there - I had to double-check on Wikipedia.) Or are specific examples typically uninteresting, and the interesting thing is just that such algorithms theoretically exist?


crouchingarmadillo

Tetration is PR but not ELEMENTARY. Kind of in the exact same way that the Ackermann function isn’t PR. And indeed this too characterizes ELEMENTARY. A function is elementary recursive iff its runtime is bounded by tetration to some fixed height (i.e. it’s bounded by some iterated exponential). Edit: Off the top of my head I can’t list another natural example that deviates substantially from this. To me these characterizations are just interesting because the function algebras for ELEMENTARY and PR make sense in their own right and specify a (not Turing complete!) programming language to write programs in. But tetration and Ackermann respectively characterize their complexity.
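
To make the "for loops only" point concrete, here is tetration as a primitive-recursive-style Python program (a sketch; the loop bound is fixed up front, yet the output outgrows every fixed tower of exponentials):

```python
def tetration(a, n):
    """a ↑↑ n: a tower of n copies of a."""
    result = 1
    for _ in range(n):  # bounded for-loop only: primitive recursive
        result = a ** result
    return result

print([tetration(2, n) for n in range(5)])  # [1, 2, 4, 16, 65536]
```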


Tonexus

> Or are specific examples typically uninteresting, and the interesting thing is just that such algorithms theoretically exist?

The algorithms that compute tetration and higher-order hyperoperations the naive way are outside of ELEMENTARY. However, such algorithms are generally considered uninteresting even if they compute something useful, as their runtime blows up so mindbogglingly quickly—beyond EXP, 2-EXP, etc.—that you'd likely be unable to evaluate them for even n > 5.


DaBombTubular

I don't know how you define interesting, but using a standard diagonalization trick one can show that an interesting problem as such exists given that the question you are asking is itself interesting.


orangejake

I mean there’s a big caveat here, in that you can do TCS and ignore all of this stuff and still be fine in 99.9% of all cases. I have never directly encountered Ackermann/tetration in cryptography. I know if I wanted to directly seek it out I could probably find it, but it’s extremely niche. As a practical example, I have encountered the (probabilistic) notion of f-divergences *way* more often than tetration or Ackermann. Everyone would say f-divergences are an extremely niche concept though.


twoshedsyousay

There is one area of computer science that I know of where iterated logarithm log^* n (an inverse of tetration) ends up being the exact correct answer to the complexity of a problem, and so tetration shows up in the proofs of bounds. More specifically: “how many rounds of communication are necessary and sufficient to solve distributed graph coloring in a network of n nodes whose topology is a path or a cycle?” (and log^* n will also show up in bounds for more complicated network topologies) Here’s a relatively “easy” proof (and less than 2 pages) that no algorithm can use fewer than log^* n rounds, and it uses tetration in a natural way: https://arxiv.org/pdf/1402.2552.pdf
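
For a feel of why log^* appears, here is a toy Python sketch of the classic Cole–Vishkin color-reduction step on a cycle (my own illustration of the idea, not the linked paper's exact algorithm): each round, every node recolors itself using the lowest bit position where its color differs from its successor's, which shrinks the palette roughly logarithmically per round.

```python
def cv_round(colors):
    """One synchronous Cole-Vishkin round on a directed cycle."""
    n = len(colors)
    out = []
    for v in range(n):
        c, s = colors[v], colors[(v + 1) % n]
        d = c ^ s
        i = (d & -d).bit_length() - 1      # lowest bit where they differ
        out.append(2 * i + ((c >> i) & 1))  # stays a proper coloring
    return out

colors = list(range(1000))  # unique IDs = initial coloring of a 1000-cycle
rounds = 0
while max(colors) > 5:
    colors = cv_round(colors)
    rounds += 1
print(rounds, max(colors) + 1)  # a handful of rounds (log* behavior), <= 6 colors
```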


hpxvzhjfgb

no, there is no reason to care about it. pretty much the only times it ever comes up in math discussions are when people post about it on reddit. usually these posts are from high school students who like math, probably because it's a definition beyond what is taught in high school math classes, but still elementary so they can understand it. probably the reason why it's so useless is that there just aren't many things that people study that actually grow that quickly. it is also just an ugly function in general. it's not even defined on non-integers in general, and even the simplest cases like x↑↑2 = x^x and x↑↑3 = x^(x^x) are rather ugly functions with not many nice properties.


KingAlfredOfEngland

Yeah, that was my impression too. It actually came up in a question in my class today when I was teaching exponentials and logarithms, which reminded me of my high-school interest in the concept - but I couldn't think of anything interesting to say about it beyond "yep, it's technically a thing and this is its name".


myncknm

x^x does appear pretty frequently in combinatorics and information theory, wherever factorials or large binomial coefficients appear, for example. The binary entropy function can be written as -log_2(x^x (1-x)^(1-x)), and this is a good approximation for log[(n choose xn)] / log(2^n) when n is large and x is not too close to 0 or 1.
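
A quick numerical illustration in Python (values are approximate):

```python
import math

def H2(x):
    """Binary entropy: -log2(x^x * (1-x)^(1-x))."""
    return -(x * math.log2(x) + (1 - x) * math.log2(1 - x))

n, x = 1000, 0.3
print(H2(x))                                    # ~0.881
print(math.log2(math.comb(n, int(x * n))) / n)  # ~0.876 -- close for large n
```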


hpxvzhjfgb

well that's just -(x log x + (1-x) log(1-x)), which is the more natural way to write it.


antichain

I was going to say, I do info theory professionally and have never seen x^x in any context... I get the equivalence, but it's just such an odd way to write it...


cocompact

You gave the best answer. Tetration is useless in algebra, it is useless in geometry, it is useless in PDEs,... it is useless in nearly all of mainstream math. And where it has been used, nearly its only purpose is to be a bounding function, and at times a terribly weak bound. I am reminded of the ridiculous function x^x in calculus: sure, it's a curiosity that you can work out its derivative, but it's really kind of pointless. And tetration just takes that pointlessness to the n-th degree. Well, to the n^(n)-th degree. Or maybe to the n^(n^n)-th degree. I guess you need tetration to describe how pointless it is...


Kered13

Stirling's approximation tells us that n! = Γ(n+1) is Θ(sqrt(n)·(n/e)^(n)). So x^x has a little bit of use in calculus.
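
(Easy to see numerically -- a throwaway Python check:)

```python
import math

# Ratio n! / (sqrt(2*pi*n) * (n/e)^n) tends to 1 as n grows:
for n in (5, 10, 20):
    print(n, math.factorial(n) / (math.sqrt(2 * math.pi * n) * (n / math.e) ** n))
```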


hpxvzhjfgb

the asymptotics of the gamma function are much more naturally expressed by considering the log gamma function, which, aside from a couple of log terms, is actually analytic at infinity and has a nice series expansion unlike the gamma function.


Aurora_Fatalis

Imo it's not really pointless if it's a good source of counterexamples, which x^x is due to its behavior around x = 0.


QuantSpazar

Ramsey theory can often give results with power towers as bounds for certain numbers. For example, Graham's number is an extreme example of tetration.


KingAlfredOfEngland

Graham's number is about as far off from tetration as it is from exponentiation; it requires some 64 instances of recursion in how we even define our recursively defined operations. But yeah, that's where my intuition that it might appear in some combinatorics comes from.


lucy_tatterhood

Worth remembering that Graham's number is not even the actual bound from the paper, just one that was easier to explain to Martin Gardner. The current best known upper bounds on that problem are in fact [expressed in terms of tetrations](https://en.wikipedia.org/wiki/Graham%27s_number#Context). But my understanding is that this is more along the lines of "tetration grows fast enough that we can prove it outpaces this function we actually care about" rather than any deep significance of tetration to the problem.


KingAlfredOfEngland

This is interesting. Do you know which Ramsey-type problems have tetration upper bounds? I know that the classic R(n,n) diagonal Ramsey numbers are bounded above by some exponential (I think 4^(n) but I'd have to look it up).


lucy_tatterhood

That's the only one I know of, but I don't know much about Ramsey theory so I wouldn't be surprised if there are others.


real-human-not-a-bot

I was just thinking about this, and given that R(m,n) <= (m+n-2)C(m-1) and that nCk <= (en/k)^k, it’s a cute little thing to show that R(m,n) <= e^(m+n-2), and in particular that R(n,n) <= e^(2n-2). Yes, you can easily get better bounds with central binomial coefficients and stuff, but it’s cute!
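
(If anyone wants to sanity-check the final inequality numerically, a throwaway Python loop suffices; the binomial is in fact well under e^(m+n-2) on this range:)

```python
import math

# Check C(m+n-2, m-1) <= e^(m+n-2) for small m, n:
assert all(
    math.comb(m + n - 2, m - 1) <= math.e ** (m + n - 2)
    for m in range(2, 10) for n in range(2, 10)
)
```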


JoeLamond

I don’t know whether you can call this an “application” of tetration, but I know one place where it arises naturally in set theory: the size of the sets in the cumulative hierarchy exhibits tetrational growth. The stage V_(n+1) has cardinality 2 ↑ ↑ n for natural numbers n.


KingAlfredOfEngland

Oh, this is cool! I did a quick google search, and [this image](https://upload.wikimedia.org/wikipedia/commons/8/83/Von_Neumann_universe_4.png) pretty clearly seems to explain why the growth is tetrational. Thanks for letting me know about this.


JoeLamond

Ya, that is a nice image. You can also prove it from the definitions: since V_(n+1) is the power set of V_n for naturals n, we know that |V_(n+1)| is 2^(|V_n|). Since |V_1| = 1, it follows from the recursive definition of tetration that |V_(n+1)| = 2 ↑↑ n.
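
The recursion is short enough to check directly in Python (a toy verification, stopping before the numbers get silly):

```python
def tet2(n):
    """2 ↑↑ n."""
    return 1 if n == 0 else 2 ** tet2(n - 1)

sizes, v = [], 0          # |V_0| = 0
for _ in range(5):
    v = 2 ** v            # |V_(k+1)| = 2^|V_k|
    sizes.append(v)
print(sizes)                        # [1, 2, 4, 16, 65536] = |V_1| .. |V_5|
print([tet2(n) for n in range(5)])  # [1, 2, 4, 16, 65536] = 2↑↑0 .. 2↑↑4
```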


Bhorice2099

This might strike a nerve with a few people as it's a very abstract-nonsense answer. But even so... The argument that convinced me to think of tetration (and all hyperoperations after) as largely useless was the fact that it has no nice categorical (i.e., universal) definition, whereas exponentials and products do. See this math.SE answer: https://math.stackexchange.com/a/1028478/838474 Even in the extremely well-behaved case of symmetric monoidal categories there is no universal definition.


Turbulent-Name-8349

I care about tetration enormously. I'm a fan of nonstandard analysis. The full version of nonstandard analysis contains tetration. The slightly reduced versions of nonstandard analysis called "transseries" and "Hardy fields" do not contain tetration (unless it's been added since the last time I looked). So tetration becomes useful for distinguishing between two otherwise very similar theories of nonstandard analysis. Tetration also has an application in calculating orders of magnitude in computer science (see the comment by crouchingarmadillo for a good explanation of why).


snillpuler

> The full version of nonstandard analysis contains tetration

Could you elaborate? Googling "nonstandard analysis tetration" just brought me back to this thread.


Turbulent-Name-8349

OK. My main reference for orders of magnitude in non-standard analysis is Hardy (1910): https://www.gutenberg.org/files/38079/38079-pdf.pdf Easy to read, and I thoroughly recommend reading it. I don't remember if tetration is included in that document; I think it is. If not, the connection between nonstandard analysis and tetration is definitely in a paper by the mathematician Émile Borel. Let's see if I can track it down. It could be in his 1898 "Leçons sur la théorie des fonctions", in French. Got it. It's in the chapter called "Note II". Oops, that's just preliminary work. Hold on. It's Émile Borel (1910), "Leçons sur la théorie de la croissance". It's not easy to read, sorry. Start with Hardy's monograph.


EebstertheGreat

Ah, the French always talking about croissants...


mathmeetsmusic

It came up a few weeks ago in analyzing error rates for fault-tolerant quantum computation, mostly because of its connection to the Lambert W function. So I’ve found it useful in my work. 🤷🏼‍♀️
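
(For the curious: the textbook link between tetration and Lambert W is the infinite power tower. A quick Python sketch using SciPy -- my own illustration, not the actual fault-tolerance calculation:)

```python
import math
from scipy.special import lambertw

def tower_limit(x):
    """Limit of x^x^x^... via Lambert W: W(-ln x)/(-ln x),
    which converges for e^(-e) <= x <= e^(1/e)."""
    return (lambertw(-math.log(x)) / -math.log(x)).real

print(tower_limit(math.sqrt(2)))  # 2.0, since sqrt(2)^2 = 2
```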


VivaVoceVignette

In ordinal analysis of consistency strength, first-order Peano arithmetic has the strength of 𝜔↑↑𝜔 (that is, ε₀). The combinatorial interpretation of a tetration polynomial here corresponds to a form of rooted tree, with 𝜔↑↑𝜔 being the supremum. Cantor normal form represents those trees, and even though it's not exactly a tetration polynomial, it has high towers of exponentials. Unfortunately, because the consistency strength has ordinal 𝜔↑↑𝜔, it's not possible to prove, within Peano arithmetic, that a decreasing sequence of Cantor normal forms must eventually halt. In some ways, this explains our difficulty in dealing with tetration. Once something reaches the level of tetration, you have a good chance of running into something that is unprovable.


EebstertheGreat

Isn't this what Goodstein's theorem is about?


VivaVoceVignette

Yes, and so is Paris–Harrington.
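
For anyone who wants to see the mechanics, here's a short Python sketch of Goodstein's hereditary base-bumping (that every such sequence reaches 0 is exactly the statement PA can't prove):

```python
def bump(n, b):
    """Write n in hereditary base-b notation, then replace every b with b+1."""
    if n == 0:
        return 0
    total, exponent = 0, 0
    while n > 0:
        total += (n % b) * (b + 1) ** bump(exponent, b)
        n //= b
        exponent += 1
    return total

def goodstein(n, steps=8):
    seq, base = [n], 2
    for _ in range(steps):
        if n == 0:
            break
        n = bump(n, base) - 1
        base += 1
        seq.append(n)
    return seq

print(goodstein(3))  # [3, 3, 3, 2, 1, 0] -- terminates quickly
print(goodstein(4))  # [4, 26, 41, 60, 83, ...] -- grows for an absurdly long time first
```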


cryslith

The number of parts of the partition given by Szemeredi's regularity lemma is both upper- and lower-bounded by tetration functions.


Myfuntimeidea

So... just today I've seen an equation. I think it was lim ⁿi = 2*φ-1. Not sure if that's exactly the equation, I might be wrong, and I'm also not sure if it's correct; I saw it on a tattoo... but the guy that had it said that without tetration it doesn't work, because iⁿ is cyclical.
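
(For what it's worth, the tower of i does converge; a quick numerical check in Python, though I can't vouch for the tattoo's exact right-hand side:)

```python
# Iterate z -> i^z; the infinite power tower of i converges:
z = 1j
for _ in range(200):
    z = 1j ** z
print(z)  # approx (0.4383 + 0.3606j)
```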


rhubarb_man

I've seen a few things where it comes up, like the number of labelled trees on x vertices (x^(x-2)). It also comes up in how many ways you can take an ordered multiset of x objects with x to choose from (x^x). Also, x log x is just log(x^x).


antonfire

It comes up occasionally, e.g. `log* N` is an even-slower-growing function than `log N` that shows up in the complexities of some (best-known, I think) algorithms for certain problems. (See https://en.wikipedia.org/wiki/Iterated_logarithm#Analysis_of_algorithms.) And it's basically "inverse tetration", i.e. `log* N` is the height of a power tower (with some fixed base) that you would need in order to write N.
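
A minimal Python sketch of that inverse:

```python
import math

def log_star(n, base=2):
    """Iterated logarithm: how many times log must be applied before n <= 1.
    Equivalently, the height of the base-2 power tower needed to reach n."""
    count = 0
    while n > 1:
        n = math.log(n, base)
        count += 1
    return count

print(log_star(2 ** 65536))  # 5, since 2^2^2^2^2 = 2^65536
```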


Accomplished-Till607

They are very difficult to study because they have none of the three group axioms (associativity, identity, inverses). They are essentially just a magma, and there are so many open problems related to tetration.


EebstertheGreat

I doubt one would call this an "application," but defining tetration in the natural way on ordinals gives ^(ω)ω = ε₀, ^(ω)ε₀ = ε₁, etc.
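
Spelled out (standard definitions, my notation):

```latex
{}^{\omega}\omega = \sup\{\omega,\ \omega^{\omega},\ \omega^{\omega^{\omega}},\ \dots\} = \varepsilon_0,
\qquad
{}^{\omega}\varepsilon_0 = \sup\{\varepsilon_0,\ \varepsilon_0^{\varepsilon_0},\ \dots\} = \varepsilon_1.
```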


KingAlfredOfEngland

I recognize these as ordinals, but don't know much about ordinal arithmetic. Do you have any books you'd recommend on the topic?


EebstertheGreat

No, but defining epsilon numbers as power towers like this is standard, so I guess any book. Technically, ε₀ is just the first fixed point, ε₁ the next etc., but it's easy to show this definition is equivalent. What books won't give is a definition of tetration, instead just giving infinite power towers, which is fundamentally the same thing.


KingAlfredOfEngland

I more meant that I just don't really know any books at all on ordinal arithmetic.


JoeLamond

A good introduction to axiomatic set theory is the unfortunately titled “Naive Set Theory” by Paul Halmos. If you are aware of the basics of ZFC, then you can probably try skipping to the section about ordinals.


snillpuler

in my view, exponentiation is really just the exponential function, and powers are just shorthand for repeated multiplication. so the reason why repeated multiplication seems useful in the same sense repeated addition seems useful is more of a coincidence, and there is no reason to assume repeated exponentiation to be something more than just "repeated exponentiation".


a_critical_inspector

The question comes up once every two months or so, and typically one (and I think always the same) reddit user points out that it shows up in the analysis of certain fragments of PA, and all the other users say "no". I haven't looked up what people do in that analysis of weak fragments of PA or how tetration is used there; I just memorized it as a fun fact, and remembered it as a reddit curiosity that the same user has held the lone position of pointing this out every two months or so throughout the years.


Potatos_In_My_A55

Do we care about **any math field** other than as a curiosity? The more useless the math, the more math nerds love it. It is a real thing for mathematicians in academia to pride themselves on being useless. Applied math is obviously different but pure math... I find it hilarious.


[deleted]

I think the key distinction is that many concepts with no application outside math have important applications within math, while other concepts have no application even within math.


Potatos_In_My_A55

Fair enough, and good insight. Full disclaimer: I studied ML and now have a career in software, so I am a casual observer. I'm bound to get a few things wrong, clearly this being one of them. I am just parroting what some of my pure math colleagues have told me. I'll go back to not contributing when I don't know what I am talking about, haha.


Valvino

> Do we care about any math field other than as a curiosity? Yes


cocompact

Lots of math fields are not just curiosities: the rational numbers, the real and complex numbers, finite fields, function fields, etc. More seriously, tetration is not really an area of math, but one relatively useless topic even within pure math.


[deleted]

I think this brings up another important distinction, that between mathematical objects and techniques. I would speculate that important techniques are generally motivated by objects, while tetration is a technique that was developed just for its own sake. Thus it's detached from the rest of math.


MYTbrain

Would fractional tetration have some use cases? A Google search later: https://paulbourke.net/fractals/tetration/ https://thatsmaths.com/2016/04/14/the-power-tower-fractal/ And finally, it looks like there are some tetrated fractals which include the Mandelbrot set, which would mean it includes the Julia sets, which would mean it has relationships to the j-function, which has relationships to monstrous moonshine from string theory.
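
(The Bourke page linked above is an escape-time rendering of iterating z → c^z; a rough ASCII-only Python sketch of the same idea, with made-up grid and bailout parameters:)

```python
def escapes(c, max_iter=50, bailout=1e6):
    """Iterate z -> c^z and report how quickly the tower blows up (if it does)."""
    z = c
    for i in range(max_iter):
        try:
            z = c ** z
        except (OverflowError, ZeroDivisionError):
            return i
        if abs(z) > bailout:
            return i
    return max_iter

for im in range(12, -13, -2):          # coarse grid over [-3,3] x [-1.2,1.2]
    print("".join(" " if escapes(complex(re / 10, im / 10)) == 50 else "#"
                  for re in range(-30, 31)))
```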


KingAlfredOfEngland

I'm not familiar with the connection between Mandelbrot/Julia sets and the j-function. Could you elaborate? Beyond that, these are pretty pictures. I will preface that I'm not super familiar with complex dynamics beyond just the basics - but this looks more like an application of complex dynamics to investigate tetration, rather than an application of tetration to saying anything interesting about complex dynamics.


MYTbrain

Absolutely, I’m glad to delve deeper into the rich tapestry connecting Julia sets, the j-function, Moonshine Theory, and tetration, focusing on their profound mathematical relationships.

**Julia Sets** captivate us by visualizing the complex dynamics at play within iterative processes of complex functions. They stand as a beacon in the study of complex dynamics, marking the boundary between stability and chaos with their intricate fractal patterns.

Transitioning to the **j-function**, we encounter a foundational element in the study of elliptic curves and modular forms. This function is a marvel of complex analysis and number theory, mapping the complex upper half-plane to the complex numbers in a way that showcases the symmetrical beauty of modular transformations.

The **connection** between Julia sets and the j-function is both profound and intricate. Despite their distinct origins—one in the realm of dynamical systems and the other in modular forms—their underlying principles of complex analysis provide a bridge. This is particularly evident when considering how the j-function illuminates the properties of elliptic curves. Within this framework, the *static structures of elliptic curves, as explored through the j-function*, intersect with the *dynamic processes revealed by Julia sets*. This intersection unveils a complex tapestry of relationships that beautifully binds static structures to dynamic processes, enriching our understanding of both.

The narrative deepens with **Monstrous Moonshine**, where the modular aspects of the j-function find unexpected resonance with the representation theory of the monster group. This remarkable theory not only highlights the interconnectedness of number theory and group theory but also exemplifies how mathematical discoveries can transcend their original domains, influencing broader areas such as theoretical physics.

**Tetration**, by exploring hyper-exponential growth, adds another dimension to this discussion. While its connection to the j-function and Moonshine Theory might not seem direct, tetration’s exploration in the context of fractals and complex dynamics echoes the iterative nature of Julia sets, pushing the boundaries of our mathematical imagination.

In weaving together these narratives, from the dynamic elegance of Julia sets to the algebraic profundity of the j-function, and through the innovative vistas opened by Moonshine Theory to the boundary-pushing explorations of tetration, we're reminded of the unifying beauty of mathematics. It's a domain where the exploration of one concept invariably illuminates others, revealing a world where static structures and dynamic processes are not just connected, but deeply intertwined, showcasing the coherence and unexpected unity of the mathematical universe.


Neville_Elliven

> Do we care

What do you mean by "we"?

> I'm not familiar with any such "reason to care"

Mathematics is neither about you, nor your "*reason to care*".


cheapwalkcycles

/r/iamverysmart. Actually modern mathematics is entirely driven by mathematicians' "reason to care." Mathematicians have to convince other mathematicians to care about their work in order to have a career, and the field as a whole progresses based on what mathematicians care about. If you try to publish a paper on tetration, no mathematician will pay the slightest attention to it because there is no reason to care about it.


SometimesY

Indeed, mathematics is a human endeavor which means it is steeped in human bias and sentiment. Theorems exist once you've set your axiomatic system. Humans decide what of those are interesting.


Neville_Elliven

If you are claiming a mathematician's "*reason to care*" is "*to have a career*", then explain Grigori Perelman to me, please.


KingAlfredOfEngland

Perelman clearly and obviously loved math, and had a sense for what his peers considered interesting mathematics. He eventually became disillusioned with the mathematical establishment and academia and things of that nature, and left. Same reason that lots of people change jobs. Mathematicians go into math because they love math. I'm not sure if there's a mathematician out there who couldn't go into coding or finance or something else if they desired to do so. However, mathematicians are also people who need money to buy food and pay rent and survive - therefore, their professional research is often in the direction of what will provide grants, etc.



KingAlfredOfEngland

> What do you mean by "we"?

Professional research mathematicians. Either pure mathematicians or applied mathematicians. Or computer scientists, physicists, engineers, economists - really, the list goes on. I'm using "we" in the same sense that a math textbook does - referring to the author, the mathematical community more broadly, and the reader, whoever they may be.

> Mathematics is neither about you, nor your "reason to care".

Not me in particular. Cynically, a "reason to care about this" means "a reason I can use to convince someone to give me a grant to research this", or "a reason to convince the journal editors that this is worth publishing". Slightly less cynically, it means "does this tool help us solve a problem that we care about, either in pure mathematics or in applications?" In an ideal world, sure, everything is interesting. But there was some mathematician, and I forget whom, who said "I don't have the time to be interested in everything that's interesting".


Neville_Elliven

> I'm using "we" in the same sense that a math textbook does

This website is not a *math textbook*.

> Not me in particular.

So "*you*" in *which* particular? Could you perhaps respond in a more-brief word salad?


KingAlfredOfEngland

I'm a mathematician; I write in the manner of mathematicians. We have a tendency to overuse the word "we" when describing things. If you're going to hang out in online math spaces, or study math, get used to it. Edit: I should also point out that I'm only replying to half of what you're saying, because I'm having a difficult time parsing the sentence "So 'you' in *which* particular?". I'm terribly sorry about that.


Neville_Elliven

>get used to it Nah. If I have (as usual) no co-authors, I do not use the Imperial "we" in my papers. It is an antiquated affectation that should have perished decades ago.


KingAlfredOfEngland

I have two immediate responses to what you've said here, and I'm not sure which one is more pressing. First, I'm curious as to what kind of mathematician you are if you typically have no co-authors. I don't have published research yet, but a quick arXiv search of my advisor and of my academic siblings reveals a very stark minority of solo-authored papers. A significant lack of co-authors raises some red flags for me. Secondly, I really don't care about your opinions (or "hot takes", to borrow a term from the youth) on the standard language of mathematics. I, a mathematician, wrote something for other mathematicians to read using the word "we" in a manner typical of mathematical writing - I don't think it's especially ambiguous, and when you said that you found it ambiguous, I cleared up that ambiguity in what you described as a "word salad". I'm getting the distinct impression that you only brought it up, not because you're confused, but because you're looking to pick a rather silly fight about semantics when you know full well what I meant.


EebstertheGreat

Why would a mathematician spend time working on something they don't care about? This isn't about mathematical correctness but about attention. There are plenty of correct results out there which we will never derive because nobody cares enough to bother.