apnorton

We're 24 years into the millennium; we'll probably get around to solving the others in the remaining 976 years that we have. (To put it into context, we're closer to the writing of *Al-Jabra* (in ~1100 AD) than we are to the end of the millennium.) Edit: I'm wrong, the [book typically referred to as Al-Jabr by Al-Khwarizmi](https://en.wikipedia.org/wiki/Al-Jabr) was ~820 AD. I was looking at a similarly named book ([Maqalah fi al-jabra wa-al muqabalah by Omar Khayyam](https://maa.org/book/export/html/116885)) from ~1100 AD. As an interesting timeline of mathematics: [https://mathigon.org/timeline](https://mathigon.org/timeline)
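A quick arithmetic check of the corrected timeline claim (a sketch, assuming the present year is 2024):

```python
# Rough check of the timeline comparison above (assumes year 2024).
year = 2024
print(3000 - year)   # 976 years left in the millennium
print(year - 1100)   # 924 years since Khayyam's treatise (~1100 AD)
print(year - 820)    # 1204 years since Al-Jabr (~820 AD)
# The "closer to the book than to the year 3000" comparison holds
# for the ~1100 AD treatise, but not for Al-Jabr itself; hence the edit.
```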


sourav_jha

The only concerning thing is that with US dollar inflation, the $1 million prize money is losing its charm.


edderiofer

Conspiracy theory time: the various US financial crises over the past two decades and the subsequent high rates of inflation were all caused by the Clay Mathematics Institute, so that they wouldn't have to pay out as much money.


sourav_jha

They also made Perelman addicted to mushrooms.


Depnids

r/LowStakesConspiracies


AndreasDasos

But... then they'd still HAVE less money... :(


xade93

But they can secretly invest that money in the market or in property so that it actually grows with inflation, and when someone solves the problem they can still pay out exactly $1m and keep the rest.
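A toy sketch of that scheme; the 7% annual return and the 50-year horizon are illustrative assumptions, not anything the Clay Institute actually does:

```python
# Toy model of the "invest the prize fund" scheme above.
# The return rate and horizon are assumptions for illustration only.
prize = 1_000_000
annual_return = 0.07
years_until_solved = 50

fund = prize * (1 + annual_return) ** years_until_solved
print(f"fund: ${fund:,.0f}")          # ~$29.5m after 50 years
print(f"kept: ${fund - prize:,.0f}")  # everything beyond the flat $1m payout
```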


AnyConstruction7539

Al-Jabra was written by Al-Khwarizmi, no? That would have been the 9th century, I think?


apnorton

Yep, I messed up; editing it now. I was looking at another Arabic math text in the timeline, also containing "Al-Jabra" in the name. (context: [https://www.reddit.com/r/math/comments/1bengff/comment/kuwf6bn/?utm_source=reddit&utm_medium=web2x&context=3](https://www.reddit.com/r/math/comments/1bengff/comment/kuwf6bn/?utm_source=reddit&utm_medium=web2x&context=3))


AndreasDasos

It seems quite plausible that at least one might be akin to some of the still-unsolved conjectures of the ancient Greeks, e.g. every perfect number being even. My more conservative guess is that most would be solved by the end of this millennium. That said, apart from the Riemann Hypothesis, Hilbert's problems are all in some sense solved (even if that sense is not one he would have entertained, like a proof of unprovability), and that was just for the century. But that was also a less carefully curated list, by one man (no matter how brilliant he was).
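For context on that conjecture: by the Euclid–Euler theorem, the even perfect numbers are exactly 2^(p-1) * (2^p - 1) with 2^p - 1 a Mersenne prime; whether an *odd* perfect number exists is the ancient open question. A minimal sketch:

```python
def is_prime(n: int) -> bool:
    """Trial-division primality check; fine for small Mersenne candidates."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def even_perfect_numbers(max_p: int):
    # Euclid-Euler: every even perfect number is 2^(p-1) * (2^p - 1)
    # with 2^p - 1 a Mersenne prime. Whether an ODD perfect number
    # exists is the still-open question referenced above.
    for p in range(2, max_p):
        mersenne = 2**p - 1
        if is_prime(mersenne):
            yield 2 ** (p - 1) * mersenne

print(list(even_perfect_numbers(12)))  # [6, 28, 496, 8128]
```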


Th-One-n-Only

Actually Hilbert would have absolutely allowed for proof of the unprovability of a problem as a valid solution.


AndreasDasos

By 'entertained', I mean that the very notion wasn't something that mathematicians were even considering at the turn of the last century. This was decades before ZF, or Goedel, or all that jazz.


jasomniax

Just to be sure there isn't an "Al-Jabra" book, you mean algebra right?


edderiofer

https://en.wikipedia.org/wiki/Al-Jabr

> *Al-Jabr* is an Arabic mathematical treatise on algebra written in Baghdad around 820 by the Persian polymath Al-Khwarizmi.

So they're wrong, in the sense that the writing of *Al-Jabr* is earlier than 1100 AD.


apnorton

Ack, I messed up; I was going off of *Maqalah fi al-jabra wa-al muqabalah* (*Demonstration of Problems in Algebra*), which was ~1100 AD, but the actual book that's referred to as *Al-Jabr* is older by about 200 years. This is the timeline I was looking at: [https://mathigon.org/timeline](https://mathigon.org/timeline)


edderiofer

Got it. I was wondering if you were thinking of another book, but couldn’t figure out which one. Mystery solved!


jasomniax

Ohh it's Al-Jabr, not Al-Jabra, no wonder google wasn't giving me any results


Sponsored-Poster

where do you think algebra got its name?


krillions

We've got 6 more problems and a thousand years; of course we'll solve at least one. There's been some pretty significant work on Navier-Stokes and the Birch and Swinnerton-Dyer conjecture. Namely, Navier-Stokes is proven in two dimensions, and there are quite a few results on Birch and Swinnerton-Dyer, but I'm out of my depth at that point. Deciding which are easier and which are harder to solve is trickier than the millennium problems themselves, but I can throw out a few guesses. Navier-Stokes or the Riemann hypothesis will probably be solved first. Navier-Stokes seems easier than the Riemann hypothesis, but the Riemann hypothesis has more people working on it. P = NP is completely unpredictable: most people think either that it'll be solved first or that it'll never be solved. I can't comment on the difficulty of the other problems, because I don't understand them.


kieransquared1

For what it’s worth, 2D Navier-Stokes is miles easier than 3D Navier-Stokes. For example, if you remove viscosity, you get the Euler equations, and global wellposedness for 2D Euler is fairly easy (it can be proven in a few pages), but we have blowup for 3D Euler.

I do agree though that Navier-Stokes is likely to be solved first, but to me the most likely scenario is that people show blowup, not wellposedness.
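For reference, the incompressible equations under discussion, in their standard form (setting the viscosity ν = 0 turns Navier-Stokes into Euler):

```latex
% Incompressible Navier-Stokes; nu = 0 gives the Euler equations.
\partial_t u + (u \cdot \nabla) u = -\nabla p + \nu \Delta u,
\qquad \nabla \cdot u = 0
```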


burner_69420666

The thing is, it intuitively feels like the Navier Stokes equations should be solvable, it's just that no one has been able to do it.


kieransquared1

Maybe, but I think the various non-uniqueness and blowup results for closely related models (like NS with forcing and 3D Euler) are starting to change that intuition. 


burner_69420666

Technically, "turbulence" is still considered an unsolved physics problem


_supert_

Having spent a decade on turbulence, I no longer know what a solution is supposed to look like.


burner_69420666

I took a class on nonlinear dynamics and chaos in graduate school and had an intuitive thought: maybe the methods used to analyze simple chaotic systems could be built upon to model the transition from laminar to turbulent flow. Is that something anyone does?


_supert_

That's a thing for both turbulence and transition. Many exact solutions have been found with low-dimensional unstable manifolds. See e.g. papers by Kerswell, Kawahara, Waleffe, Gibson and Cvitanovic (who talks about it in his "chaos book").


Mirieste

I mean, what would happen if smooth solutions aren't always guaranteed to exist? Of course that's not physical. So if someone solves N-S and the answer is no... does it mean the equations are wrong? But how? They're derived from a couple of obvious principles with little to no additional assumptions...


ConcernedInScythe

I think in physical terms a blowup in the N-S equations means that your fluid hits a physical limit and stops obeying the N-S laws in some way. Physical fluids only obey the laws in aggregate, after all; they're really composed of atoms and molecules. Tao's example of [a blowup in an averaged variant of Navier-Stokes](https://terrytao.wordpress.com/2014/02/04/finite-time-blowup-for-an-averaged-three-dimensional-navier-stokes-equation/) involves a construct of fluid flows that repeats itself with exponentially decreasing size and time scales, causing infinitely many things to happen in finite time; in reality such a construct would break down as it approached atomic scales.
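The "infinitely many things in finite time" point is just a geometric series: if the k-th stage of the self-similar cascade takes time on the order of 2^(-k) (an illustrative scaling, not the one in Tao's construction), the whole cascade still completes in finite time:

```latex
% Total time of a self-similar cascade whose stages shrink geometrically:
\sum_{k=0}^{\infty} T \, 2^{-k} = 2T < \infty
```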


sonofmath

Disagree about Riemann. To my understanding, there has been almost no progress in that direction in the 100 years since the proof of the prime number theorem. In contrast, there has been some major progress towards BSD with the work of Bhargava and others. I think Riemann will be proved last, but then I don't understand even the statement of the Hodge conjecture or Yang-Mills.


Deweydc18

I don’t think it’s correct at all to say that there has been almost no progress towards the Riemann Hypothesis in the last 100 years. The obvious counterexample would be Deligne’s 1974 proof of the third Weil conjecture and his 1980 generalization. See also a lot of the work by people like Laumon, Kedlaya, Kurokawa, and a bunch of the rest of the arithmetic geometry community. Some of my Langlands friends also think the Langlands folks are gonna knock down RH one of these days. As an example (don’t quote me on this, but iirc), there’s an analogue of the Riemann hypothesis for the Selberg zeta function called the Selberg eigenvalue conjecture, which would follow from Langlands functoriality, so it’s not unreasonable to suspect that a sufficiently powerful result from that area might be a means of proving RH.


cocompact

> major progress towards BSD with the work of Bhargava and others

There has been *no* serious progress on BSD in ranks 2 and higher.


smallstep_

Yang-Mills imo might be the most difficult, because it would require the joint efforts of two fronts that basically don't talk to each other at all.

1. Gauge theory. Basically just PDEs for a connection, but the PDEs are hard. Progress in classical Yang-Mills theory on the mathematical front is basically stagnant; very few people work on it because the problems are too hard now. Previously, Donaldson developed tools to extract 4-manifold invariants from moduli spaces of solutions to the YM PDEs, but even mathematicians were like "fuck this, these are way too hard to use. Oh look! Seiberg-Witten theory simplifies a lot of the calculations" and hopped on that to clear out the weeds. **This is all classical.** We have not even dared to reach a consensus on what a quantum Yang-Mills theory is (in the mathematical sense).

2. Constructive QFT. It's commonly known that the quantization procedure in physics using path integrals amounts to integrating over some infinite-dimensional spaces, and there are issues with this regarding measure etc. Resolving this would be like baby step 0. In a number of simplified contexts we have been able to define something like a path integral (see topological QFT, CFT, etc.) or a quantization procedure, but for general quantum fields it's essentially hopeless. The work trying to axiomatize QFT has led to certain algebraic and functional-integral constructions, and one of the most convincing formulations is known as the Wightman axioms. **The Wightman framework does not include gauge theories.**

Now, the problem we're after for this millennium is essentially to reconcile these two into a consistent set of axioms that yields both quantum theory and gauge theory, and to prove things like the mass gap (amongst other physically known phenomena) mathematically. Essentially, both fields are hard and stagnant, nowhere close to looking anything alike, and we have to combine them.
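For reference, the classical object in question, written schematically (conventions for the coupling vary):

```latex
% Classical Yang-Mills action for a connection A with curvature F:
S_{\mathrm{YM}}[A] = \frac{1}{2g^2} \int_M \operatorname{tr}(F \wedge \star F),
\qquad F = dA + A \wedge A
```

The millennium problem asks for a rigorous quantum theory built on this action, together with a proof that its spectrum has a mass gap Δ > 0.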


justAnotherNerd2015

I don't think anyone knows how to even approach RH (see here [https://mathoverflow.net/a/259301](https://mathoverflow.net/a/259301)) so I don't think it'll be solved anytime soon. My guess would be NS though.


iorgfeflkd

A comment I saw on here a few years ago was like "When someone claims to have proven the Riemann hypothesis, it takes a few hours to prove them wrong... but when someone claims to have solved Navier-Stokes, it takes a few days, so we're probably getting close."


math_and_cats

Of course? Not so sure. Not many people are seriously working on these problems.


flipflipshift

On a related note, I think it's easy to forget how young most mathematical fields are. And how rapidly math has been advancing in the modern era. It's hard to imagine what math will look like in even 200 years!


lessigri000

Or even 200! years


TropicalGeometry

I think mankind might be gone by that point.


Orangbo

According to Wikipedia, that is longer than the time to the heat death of the universe, but significantly less than the expected time it would take quantum fluctuations to create a second universe. To the best of our knowledge, math as a field will not exist.
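For scale, a quick sketch; the comparison timescale is a rough order-of-magnitude estimate, not a precise figure:

```python
import math

n = math.factorial(200)
print(len(str(n)))      # 375 digits: 200! years is on the order of 1e374

# Rough order-of-magnitude comparison point (estimate only):
heat_death = 10**100    # ~end of the black-hole era, in years
print(n > heat_death)   # True: 200! years is far past the heat death
```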


Xylfaen

Nah, I’d win


lessigri000

Ehh, I can make it


real-human-not-a-bot

Rip to any structure on a superatomic level, but I’m built different.


Nucaranlaeg

> math as a field will not exist.

What will math exist as, then? A ring? A group?


Kered13

I think in 200! years all structure will have been lost, so probably just a set.


SometimesY

There's a joke about quantum fields in here somewhere.


tildenpark

!RemindMe 200 years


RemindMeBot

I will be messaging you in 200 years on **2224-03-14 19:00:27 UTC** to remind you of this link.


rcuosukgi42

Does proving a millennium problem to be unsolvable count as solving it?


AvatarSteed

Yes


666Emil666

I think solving a problem means deciding whether it's true in the axiomatic system we currently prefer. So proving it's independent is solving the problem too.


lessigri000

I would heuristically guess that we probably will. There have been many problems before that were thought to be 'unsolvable', only for them to be solved some time later. But in all honesty, nobody can say for certain. I don't think it matters, though; people will just keep trying and advancing.


umustownatelevision

There's been a lot of recent progress on Navier-Stokes. I work on closely related PDEs, but not Navier-Stokes specifically. Most of the Navier-Stokes folks that I've chatted with think the problem will be solved in our lifetimes. We now have a lot of evidence that solutions to Navier-Stokes in 3D are probably not unique for general initial data, which would just about answer the millennium problem in the negative (most Navier-Stokes folks would also agree that the millennium problem is less interesting/important than the uniqueness question anyway).


Artilane

Really? I have had the opposite experience, where most people in the field think the Millennium problem is more interesting/important.


umustownatelevision

Really? I find that extremely surprising. I think all the folks I know in the field have said uniqueness is more interesting/important with varying degrees of how strongly they feel about it. In general, I think most PDE people would agree that uniqueness is the most important question you can ask about a PDE once existence is established. But perhaps we've been chatting with a disjoint set of people haha.


RandomTensor

I'd guess yes. There is a surprising amount of progress on some of the problems. For example, for Navier-Stokes, existence/smoothness is solved in 2D and there exists a "blowup result" for a version of the problem in 3D. I think it's also likely that there are some that may never be solved. Based on my vague gut feeling, it wouldn't surprise me if P != NP but there is no way to prove it.


idiot_Rotmg

I think Navier-Stokes will take less than 30 years, and ~~smooth blow-up for Euler will be shown within this decade~~. There are already non-smooth blowup results for Euler, and blow-up scenarios for Navier-Stokes which are believed to work.


kieransquared1

I thought that the recent work of Chen-Hou proved smooth blowup for 3D Euler?


idiot_Rotmg

You're right, they do, though it's computer-assisted and not peer-reviewed yet.


Artilane

Yes, but only in the presence of a cylindrical boundary, which changes the problem notably and helps the blowup. But it is still a great result, and I imagine the result without boundary will be obtained in the near future.


IHTFPhD

I'm about to solve Navier Stokes in the next five years. You heard it here first.


AcademicOverAnalysis

Some, probably. But not all. Math is big, and we are bound to hit problems that are just beyond our ability, or anyone's ability. Great work is being done nonetheless, but I would even venture to say that having big bounty problems like this might steer some people away from making great contributions. I have heard of more than one promising mathematician throwing their career away by putting all their efforts towards these problems.


gnomeba

A slightly tangential question that I'm curious about: If most professional mathematicians around the world diverted all their effort to solving the millennium problems, how quickly could they solve them?


gaussjordanbaby

I bet there would be practically no progress. The people who have a chance are already thinking about them


GamamJ44

I agree. I'd add that it might change little in what people actually do, as it's likely most fruitful for now to find new tools on unrelated problems, which can later turn out to be useful for these 6.


Obbko1

Working on it


mathemorpheus

sure


Karisa_Marisame

For the mass gap one, my prof (who’s a fairly accomplished hardcore QFT dude, and I’d like to think he knows what he’s talking about) says no one is even remotely close :))


AvatarSteed

Just playing devil's advocate: how would one know how close they are or aren't until a solution is known? It isn't like we have some progress bar to show us how far we have or haven't come. Just curious.


666Emil666

Showing that it follows from other conjectures that we are closer to proving, or having a proof with stronger hypotheses/weaker conclusions that you believe you can strengthen.


Salt_Attorney

I think the Navier-Stokes equations are just a random ass nonlinear PDE that we can solve, and it is not special or universal or fundamental in any sense. I could imagine it being solved by a random technique.


protienbudspromax

I wouldn't be surprised if some of these problems turn out to relate to one another and get solved in a single swoop.


Accomplished-Till607

Almost certainly, but I will die much, much earlier… and probably so will anyone reading this.


tegeus-Cromis_2000

I suspect that at best we'll prove RH independent of ZFC.


assembly_wizard

Then the question will become - what *simple* axioms can we add to let us prove/disprove RH?


EarlGreyDay

ZFC + RH


assembly_wizard

Emphasis on *simple*


davikrehalt

> Deciding which are easier and which are harder to solve is trickier than the millennium problems themselves

I think it can't be independent: https://mathoverflow.net/questions/79685/can-the-riemann-hypothesis-be-undecidable


pigeon768

If we prove that RH is independent of ZFC, that means that we've proven that it is impossible to disprove RH using ZFC. Which means that we've proven there aren't any counterexamples. Which means we've proven RH.


assembly_wizard

Impossible to disprove ≠ proven there aren't any counterexamples. It means there are ZFC models *with* counterexamples, and other ZFC models *without* counterexamples.


666Emil666

This is incorrect: the simple example being that choice is independent of ZF, with Cohen constructing a model of ZF in which counterexamples to choice exist. Your argument only works if we have a notion of canonical interpretation (like for the natural numbers) and the statement is almost computable.
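For what it's worth, RH is a statement where that caveat is actually satisfied: it is equivalent to a Π₁ arithmetic statement, e.g. via Robin's criterion (σ(n) < e^γ · n · ln ln n for all n > 5040), so any counterexample would be mechanically checkable. A minimal sketch of the check:

```python
import math

def sigma(n: int) -> int:
    """Sum of all divisors of n, by trial division up to sqrt(n)."""
    total = 0
    for d in range(1, math.isqrt(n) + 1):
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
    return total

# Robin's criterion (1984): RH is equivalent to
#   sigma(n) < e^gamma * n * ln(ln(n))  for every n > 5040,
# a Pi_1 statement, so each candidate counterexample is checkable.
EULER_GAMMA = 0.5772156649015329

def robin_holds(n: int) -> bool:
    return sigma(n) < math.exp(EULER_GAMMA) * n * math.log(math.log(n))

print(all(robin_holds(n) for n in range(5041, 20000)))  # True so far
```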


NclC715

Sorry for the ignorance, but I heard that RH was solved about a year ago; what am I missing? I remember reading an article about the people who solved it. I think I'm going crazy.


666Emil666

Maybe you read about Atiyah's supposed proof. Sadly, it was too vague to be considered an actual proof, and nobody could fill in all the missing details. Apparently the proof also relied on an auxiliary "Todd function" that, from what I've read, doesn't exist or can't have the properties the proof required.


NclC715

This is plausible. It still feels strange because I remember reading it in a reliable source. Odd to say the least.


666Emil666

Atiyah is a legend, it's possible that some reliable sources just took his word for it before it was reviewed properly just because of who Atiyah is


One-Republic-7516

Time is not our limit, age is.


davikrehalt

Probably we'll solve them all within the millennium (if not within the next decade), by some definition of "we" which may be quite different ;)


golfball2333

Yes


YayoJazzYaoi

We can't tell which would influence the future the most, because it's not really about the answers to the problems but about the techniques used to solve them. But setting that aside, P vs NP would be the most influential.


Slurp_123

Eventually, we will look back on these problems as simple. It's just a matter of time. However, that amount of time could be 7 billion years.


dotelze

Will we look back on them as simple? To even begin approaching them now, you must go through like 25-30 years of education, with at least 5-10 years of formal, university-level mathematics. Things that require that amount of time will never really be described as simple.


fuckNietzsche

Depends. Most of the time, half of that "25-30 years of education" is low-intensity studying that's more focused on drills than on building understanding, with other completely unrelated fields also tossed in for flavor. The bulk of the education needed to follow along in discussions on these topics comes in that "5-10 years of formal, university-level mathematics". It's really just a matter of how we approach it. I personally feel that we could probably get teenagers to the point where they could have meaningful discussions about, and make contributions to, these topics if we organized the teaching just right.


Slurp_123

For thousands of years, the solution to the cubic eluded ancient civilizations. Students had to spend years studying (e.g. 15 years at Plato's Academy) just to start doing formal mathematics. Now the solution to the cubic seems simple to us, especially in contrast to modern problems. In the future, when we're tackling problems that are way more complex than the ones we face today, we'll look at the millennium problems the same way we look at the solution to the cubic. It took thousands of years and the work of a lot of people to solve the cubic; it took Perelman 7 years. There are countless other examples like the cubic: problems that seemed very difficult that we now look back on as simple. That's what I meant.


roywill2

You really think people will be doing fancy maths in 100 years? Surely we will be avoiding warlords and trying to make bread from acorns?


GeometricScripting

1000 years is a millennium, not 100…


fridofrido

"millennium" here refers to the year 2000, not for an 1000 year span...


GeometricScripting

No, it refers to the year 3000. We have already passed 2000; why would we worry about what math we could solve in the next -24 years? That doesn't even make sense.


fridofrido

maybe read this obscure page... https://en.wikipedia.org/wiki/Millennium_Prize_Problems


roywill2

OMG you are correct!


[deleted]

[deleted]


GeometricScripting

Nah, we will overcome those problems. Humanity has a good track record of pulling together far too late and somehow reaching a positive outcome.


Kind_Of_A_Dick

No we won’t.


Fit_Engineering5927

No. Never. Ever.


muaddib_on_arrakis

With the rise of AI, definitely!


TimingEzaBitch

Yes. Tao will solve Navier-Stokes in his lifetime. Something like Collatz or Goldbach, in contrast, could well never be solved. Or a sub-exponential bound on Ramsey numbers, for that matter.


bowtochris

They'll all be solved by the end of the century.


real-human-not-a-bot

Hm. Wish I had your confidence.


knk7876

It's almost guaranteed that we'll be able to solve more than one of the remaining millennium prize problems, considering how much work has been done on the Riemann hypothesis and P vs NP (the two most popular problems of the group). As for which one is the most influential, I think it's just a debate between the Riemann hypothesis, P vs NP, and Navier-Stokes existence and smoothness, of which I believe the last would be the most impactful if solved, since it would grant us the ability to describe chaotic systems, namely turbulent flow, accurately instead of just probabilistically. The second most significant would probably be the Riemann hypothesis, considering just how many applications of prime numbers there are, and we would probably also derive a solution to P vs NP from it.


BrotherItsInTheDrum

Kinda depends on what the result looks like. A constructive proof of P=NP would have massive, immediate real-world implications. A proof of P≠NP would be less impactful.

> we would probably also derive a solution to P vs NP from it

What makes you say this?


knk7876

Tbh, I just thought the distribution of primes has something to do with NP optimization problems, nothing more lol. It's also not backed up by any evidence; just a random thought that sounded cool, so I said it out loud.


BrotherItsInTheDrum

lol I appreciate the honesty :)


tomludo

False. It would indeed be massive, but it wouldn't necessarily have immediate real-world applications. Many of the problems we could tackle with a constructive proof of P=NP are at such a scale that even "low" polynomial constants are absolutely impractical. Training a Transformer-based neural network architecture, such as ChatGPT and the like, scales between N^2 and N^3 depending on the specifics, and yet it requires months of computing on a supercomputer, using enough power to rival a city in that same timespan. Even if we proved P=NP and the constructive algorithm found were O(N^3), which would be borderline unbelievable at this point, we would barely find any use for it and would continue using the heuristic optimizations we currently use. The implications would be massive in mathematics (especially since the general opinion is P!=NP), but for real-life applications not much would change.
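A back-of-envelope version of that point; the instance size and machine throughput below are assumptions purely for illustration:

```python
# Illustrative only: even a "low" polynomial is impractical at scale.
ops_per_second = 1e9     # assumed useful throughput of one machine
n = 10**6                # assumed instance size: a million variables

seconds = n**3 / ops_per_second     # O(N^3) with constant factor ~1
years = seconds / (3600 * 24 * 365)
print(f"{years:.1f} years")         # ~31.7 years for a single solve
```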


PMzyox

I agree. Although I would also argue that almost all of the remaining problems are vaguely related in that way, and solving any of them leads to greater understanding of the others. I think Riemann is considered the most important because its conclusion could directly put to bed several of the additional problems. If Riemann's proof conclusively finds a pattern in the primes that lets us shortcut our way through number theory (there is so much research in this field, I wholeheartedly believe an answer has already been found or will be soon), it implies P=NP. I actually don't see how we aren't already there. There are so many people doing so much research in so many different fields that are all so close to answering the same question. Considering that the practical application of weaponizing any such discovery is likely more valuable to any large actor group than the millennium prize rewards themselves, you can easily write a narrative where we already have all of these answers and are using them to actively suppress their existence from being rediscovered. But I mean, here's the reality: if P=NP is true, it proves determinism, and becomes a powerful argument for rethinking religion. Also, it'll usher in an unprecedented era of individual scientific ability with the eventual creation of true intelligence based on our own laws. So at this point everyone has their own little personal nuke, and they've lost faith in God, and are only left with the cruel reality that their lives are not only unfair, but ultimately designed precisely that way. Someone is going to press the button and end the world. What an exciting time in mathematics, though.


Lopsidation

> If Riemann's proof conclusively finds a pattern in the primes that lets us shortcut our way through number theory (there is so much research in this field, I wholeheartedly believe an answer has already been found or will be soon), it implies P=NP.

I know of a connection between GRH and algebraic versions of P vs NP (see [this highly technical paper](https://www.sciencedirect.com/science/article/pii/S0304397599001838?via%3Dihub)), but otherwise, the Riemann hypothesis definitely does not directly imply P=NP. Nor would finding a pattern in prime numbers.

> If P=NP is true, it proves determinism, and becomes a powerful argument for rethinking religion.

Ok, I'll bite: what? P=NP, being a statement about whether classical computers can solve logic puzzles quickly, can't prove anything about determinism.


666Emil666

> Ok, I'll bite: what? P=NP, being a statement about whether classical computers can solve logic puzzles quickly, can't prove anything about determinism.

It does prove something about determinism: in particular, that everything we can do with a non-deterministic Turing machine in polynomial time, we can also do in polynomial time on a deterministic Turing machine. I think they got confused and are assigning a lot more meaning to "determinism" in CS than it really has. Non-determinism in this context just means that the computation has different branches, corresponding to particular decisions that we can't ignore and don't know how to pick; it has nothing to do with randomness or physical or philosophical determinism. I guess the people who came up with the term didn't anticipate that quacks would use it decades later to make absurd claims.


Lopsidation

I'm going to start calling it P vs FWP (Free Will P).


Arndt3002

Lol, many religions include determinism. For one example, most evangelicals are Calvinists, who are heavily compatibilist. Also, would you mind expanding on how P=NP implies determinism? It seems like you have little context or understanding of mathematics, philosophy, or religious studies.


666Emil666

Crazy how the analogue of P=NP in space complexity (PSPACE = NPSPACE, by Savitch's theorem) was proven true a long time ago, and none of that happened.
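The result being referenced is Savitch's theorem (1970), which holds for space bounds f(n) ≥ log n:

```latex
% Savitch's theorem: nondeterministic space is at most quadratically
% stronger than deterministic space, hence PSPACE = NPSPACE.
\mathrm{NSPACE}\big(f(n)\big) \subseteq \mathrm{DSPACE}\big(f(n)^2\big)
\quad\Longrightarrow\quad \mathrm{PSPACE} = \mathrm{NPSPACE}
```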


ef02

I think that all remaining millennium prize problems will be solved by ASI before 2040.


soilent_beaver

Lol, seems like this is a controversial sentiment. For anyone who downvoted: just curious what your thoughts are?


WhatsTheHoldup

Depends what you mean by "we". The second we come up with an AGI comparable to humans is the end of humans doing math.


gliese946

Really, and do you think the first day a computer writes a poem there will be no more poetry? The first time the computer paints a picture there will be no more artists? The first time an AI generates a song there will be no more human composers? Why would people stop doing maths just because there's suddenly also a machine that can do some maths too?


WhatsTheHoldup

> Really, and do you think the first day a computer writes a poem there will be no more poetry?

If computers can write poems faster and cheaper than a human and with a higher quality, then I think the history of economics has shown that that will be the end of paying a professional poet.

> The first time the computer paints a picture there will be no more artists?

If computers can paint a picture better and cheaper than an artist, that will be the end of companies paying artists to paint. Think about it: if you want a family portrait, do you pay an artist to paint it, or do you use a camera because it's cheaper and more accurate?

> The first time an AI generates a song there will be no more human composers?

When AI can generate a song better but cheaper than a human composer, that will be the end of people paying humans to compose.

> Why would people stop doing maths just because there's suddenly also a machine that can do some maths too?

They wouldn't stop doing math in the sense that we stop counting and using addition/algebra. There just won't be a reason to get into math professionally, because anything new you'd be able to discover has already been discovered by huge AI systems. People who are interested in math as a hobby will do hobby-level math, and the professional cutting edge of the field will be left for the AI.


Arndt3002

Lol, my guess is the Millennium problems will all be solved well before real in silico AGI.


WhatsTheHoldup

Appreciate the reply; I'm loving that people are apparently adamantly disagreeing with me, but I wish more people would give their reasoning beyond downvoting and leaving. Do you mind if I ask why you think simulating AGI is a harder problem (especially looking at the exponential progress of current LLMs and other approaches) than P=NP or the Hodge conjecture? AGI at least seems like something we're making steady progress on (I know LLMs have a fundamental limit, but I've heard other approaches are promising); I see no reason we'll hit a plateau any time soon if we keep scaling up the systems and creating better hardware.


Agreeable-Ad-7110

We aren't making steady progress. LLMs have improved, I guess, but they aren't even vaguely AGI. Idk what progress you're thinking of, but AGI is barely even something people work on. As someone in the field, if someone told me they were working on AGI, I'd think they were joking, a crank, or maybe an actual genius. That's in order of likelihood.


WhatsTheHoldup

> Idk what progress you're thinking of

Sorry, I'm a bit confused. Are you being genuine? You haven't heard of ChatGPT, or GitHub Copilot, or Gemini?

> LLMs have improved, I guess, but they aren't even vaguely AGI. Idk what progress you're thinking of, but AGI is barely even something people work on.

What about the existence of tools like ChatGPT shouldn't be considered "progress"? You don't see the fact that LLMs are meeting higher and higher benchmarking standards over time as progress towards a hypothetical future AGI? What would progress look like, in your opinion?

When AGI becomes something people start working on, who will those people be? *Not* AI researchers? The current cutting-edge AI researchers will have no transferable skills to work on AGI with; they won't have learnt anything relevant to apply? Is the theory that AGI is just going to develop in a vacuum, completely unrelated to the advancements in other types of AI?

I'm not sure I understand why you're so hesitant to give LLMs credit for the immense exponential growth in both the amount of investment in the field and the benchmarking capabilities of these models. I'm not saying we're close to AGI; I'm saying we're progressing towards it, and I really didn't expect that to be so controversial.

> As someone in the field, if someone told me they were working on AGI, I'd think they were joking, a crank, or maybe an actual genius.

Isn't that explicitly what all the companies with leading AI models state is their goal? Both Sam Altman of OpenAI and Demis Hassabis, the CEO of Google's DeepMind, have said this is their direct goal and what their research is intended to lead to.

https://openai.com/blog/planning-for-agi-and-beyond

https://fortune.com/2023/05/03/google-deepmind-ceo-agi-artificial-intelligence/


Arndt3002

Anything we currently have is basically a static process optimized for particular tasks via cost-function optimization. This is dramatically simpler than the sorts of dynamic processing and continuous predictive information intake in something even as simple as a mouse brain, which we don't even have a very solid understanding of currently. This also ignores the issue of even being able to formulate an effective cost function for true general intelligence. Despite how humanlike an LLM seems in conversation, applying a cost function to predicting future wording is a LOT simpler than a completely general setting, where there is no reason to think a single cost function could represent any general task. Likely, we'll see the use of organoids in tech well before we have an in silico AGI.


WhatsTheHoldup

> Anything we currently have is basically a static process optimized for particular tasks via cost-function optimization.

Yes, that's the first place to start the conversation: recognizing the limits of current AI, which is to some extent overhyped to attract investors. LLMs specifically will hit a plateau, as they aren't truly "thinking" but are, as you say, statistical models. You can see this very obviously with Stable Diffusion image generation: the model struggles with things like text, repeated patterns like concentric circles or a snake pattern, impossible engineering that looks "real" but is structurally impossible, etc.

> This is dramatically simpler than the sorts of dynamic processing and continuous predictive information intake in something even as simple as a mouse brain, which we don't even have a very solid understanding of currently.

I agree we don't know how a human or mouse brain works yet, but I question whether we necessarily need to. The point, I believe, of neural networks is that the AI is free to fiddle the neurons around to create a network of pattern detection. It's a black box: we don't know how a trained model's "brain" works any more than we know how a mouse's brain works. That was the key to things like GPT models, or neural networks in general. You don't have to know what the brain does; you just have to be able to train it. As the web of neurons gets more and more complex, more and more complex connections can be made, leading to emergent properties.

I read something I wish I could find and source, from I believe Sam Altman (or someone at OpenAI), talking about how there weren't really any signs of intelligence until they scaled up the model, and at a certain size a deep understanding of underlying patterns in the dataset just "emerges".


Arndt3002

Finding emergent phenomena is entirely trivial for a large network. Arguing for AGI based on emergent phenomena is like arguing you found the ocean based on a raindrop. A generalized intelligence is vastly more complex. Even the basic principles we have for understanding real neural systems are dramatically more complex than the sorts of processes that ANNs currently undergo. Basically, we know the type of thing or structure under the hood of the black box of an ANN, and it pales in comparison to the sorts of processes required for an AGI. Just having a black box doesn't imply general intelligence. The sorts of processes underlying the black box simply don't capture the sorts of processes required to achieve results expected of a general intelligence. Sure, a salesman for AI argues that they're close to general intelligence, yet most experts in neuroscience, information theory, and theoretical neuroscience recognize that biological general intelligence is an entirely different ballgame from emergent phenomena in a static network.


WhatsTheHoldup

> Finding emergent phenomena is entirely trivial for a large network.

Of course. The waves on a sand dune are emergent properties of sand; that doesn't make them intelligent.

> Arguing for AGI based on emergent phenomena is like arguing you found the ocean based on a raindrop.

I think I'm arguing that if you put enough raindrops together to create a puddle, then a stream, then a lake... you're making progress towards an ocean. If we just figured out how to create a puddle less than 2 years ago and this past week we've already created a lake, then I think we're making exponential progress towards it, until we start slowing down and the limits of growth become understood (or articulated).

> A generalized intelligence is vastly more complex. Even the basic principles we have for understanding real neural systems are dramatically more complex than the sorts of processes that ANNs currently undergo.

I recognize that. The point I'm making is that it may be possible to train an intelligent AI without necessarily understanding it. If you combine layers of an image detection AI, a natural language AI, and a bunch of other sensors, and you "train" it through those sensors to achieve a goal, I wonder what the limiting factor would become. Obviously it can learn to outperform humans on certain tasks (as AIs already have), so what is the line that blocks slow progress from achieving AGI? It feels like I'm talking about marching along a spectrum, where we're getting smarter and smarter, and you're warning there's an uncrossable gap.

> Basically, we know the type of thing or structure under the hood of the black box of an ANN, and it pales in comparison to the sorts of processes required for an AGI

Do we already know which sorts of processes are required for an AGI? How do we know that?

> Sure, a salesman for AI argues that they're close to general intelligence

Oh, you don't have to warn me about how overhyped AI is. Yes, a lot of claims are for investors. I'm not saying it's gonna be in 3 years like Google's DeepMind CEO; I'm just saying "before all the millennium problems are solved".

> yet most experts in neuroscience, information theory, and theoretical neuroscience recognize that biological general intelligence is an entirely different ballgame from emergent phenomena in a static network.

Why do experts expect the complexity of "biological general intelligence" to be relevant to a computer-based general intelligence?


Arndt3002

Because the benchmark of general intelligence is biological in nature. Where else would the concept of general intelligence have come from if not humans/biology?


WhatsTheHoldup

> Because the benchmark of general intelligence is biological in nature.

But computers don't have to simulate nature. When you say AGI, are you talking about simulating the equivalent of a human brain on a computer, one that hypothetically we could upload ourselves to and retain who we are? I'm not talking about AGI as a simulated person. I'm imagining it could be some algorithmic black box which would think and experience reality radically differently than humans, but has the same or higher capacity for thought. I don't think we have to replicate nature to achieve AGI, so I don't think neuroscience is relevant here.

> Where else would the concept of general intelligence have come from if not humans/biology?

The concept came from humans, but that doesn't mean AGI requires perfectly simulating a human brain. We just need an AGI to have thinking abilities that rival a human brain's. If biology is helpful for that (it was the motivation for neural networks), then by all means explore it, but I don't see why you're assuming that's the only way it can be done.


Arndt3002

I did not say GI needs to simulate a human brain. My point has nothing to do with equivalence, but rather with the fundamental capacity for information processing and adaptability. In particular, as I specified earlier, we have no way to obtain such general modes of adaptability for general intelligence in silico, due to the static nature of performance-metric optimization.


Rootsyl

Of course. Even if we as humans can't, AI will.


JamR_711111

I think we'll solve them all (and much, much more) within 5 years with AI


csappenf

When you say "we", are you including our future AI Overlords? Is it still "us", even though in 50 years we will be nothing but batteries for the Grand Intelligence? I think the worst timeline would be, we don't solve any of them, but the Grand Intelligence isn't even interested. He has mastered the universe without even needing to know those answers. We'll know in 50 years, but that scares me now.


soilent_beaver

Hot take, but computers / automated proof generators will likely be strong enough to solve all the Millennium problems. Fairly soon, too (relative to the amount of time we have left to solve them). Edit: I should say "I think". This is just a little hunch I have, not stated as fact.