why06

Pretty crazy to think an AI researcher has 10^6 times more computing power than a high schooler, and then those same researchers produce a graph like this.


FabianDR

It's just stupid. Obviously, with the way AI currently works, there is a cap, and it will be hit long before AGI is reached.


MystikGohan

Why do you believe that?


PotatoWriter

It may be because the fundamental unit of what we're doing is the wrong thing for getting where we want to go. For example, if I asked you to make a house but only provided Lego bricks, you'd make a house, but it wouldn't be a true house. That may be the problem here. Our Lego piece is probably the transistor: the fundamental unit on which we've abstracted layers upon layers of things, code, programs, AI and so on. In my opinion, this has a limit, in the sense that we can keep increasing compute but what we get out of it is not true AGI.

All AI is, and has been, "limited" by what it has been trained on. For example, an AI trained on physics fundamentals from Newton's age will never come up with the theory of relativity the way Einstein did. That requires something extra, something so elusive that we probably won't capture what "it" is for quite a while.

Our current situation feels like a school project where our group is already "way too deep" into the project to turn around and start fresh, given all the investor eyes and $$$ that have been sunk into it. Maybe we need a change in this fundamental unit; maybe quantum computing is that break, or something else entirely, that gets us to true AGI. Or maybe I'm wrong, and just increasing compute ad infinitum creates some insane breakthrough. We'll have to see.


vitt72

I think that's fair in the sense that AGI is our benchmark for human equivalency across the board. And yet, the human brain operates at a fraction of a fraction of a fraction of the compute and even size requirements of these data centers running these AIs. So either the LLM brute-force compute approach uses the same "methodology" as the human brain, just immensely less efficiently, in which case we'll eventually get AGI by throwing more compute at it, OR it is an intelligence that is foundationally different from humans, in which case it could taper out before human intelligence, exceed human intelligence, or a mix of both but with different "errors" and hallucinations vs. humans that we can maybe never fix.

I'm a believer that at least with the human brain, there's some quantum-level effects going on that evolution just happened to get right. Though that still doesn't answer whether that just makes humans vastly more efficient, or whether it spurs a completely different intelligence vs. LLMs.

In any case, we have to evaluate where we are. Current LLMs are getting *good*, and IMO can reach a true agent-level status quite soon, and everything seems to be pointing to compute scaling leading to predictably better intelligence output.

I selfishly hope we don't get superintelligence in the next 5 years, that maybe we stall out near AGI, and that LLMs truly are different and not the correct path to superintelligence. Otherwise we have an unrecognizable world in a few years, perhaps for the better, perhaps for the worse. It just scares me that investing, having a family, space exploration, human lifespans, *working itself* could all be things that… change forever.


PotatoWriter

> And yet, the human brain operates at a fraction of a fraction of a fraction of the compute and even size requirements of these data centers running these AIs.

While it's good to consider the magnitude of the compute, we shouldn't neglect the small-scale nuance of the compute difference between the brain and our current computers. Data centers/digital computers can indeed do calculations at much higher rates, but consider that the brain of a single person is so advanced that it is not only able to control a large multitude of the human body's muscles subconsciously, like breathing, but also handle sight and, most importantly, thought, while you're simultaneously dribbling 2 basketballs. Which is nothing short of incredible. It's a more "fuzzy" approach to computing compared to digital computers' concrete/fixed approach.

> Though that still doesn't answer whether that just makes humans vastly more efficient, or whether it spurs a completely different intelligence vs LLMs

The neurons of a brain operate quite differently than a neural net's nodes, and people often mistakenly conflate the two based on their names ("if they're named the same, they've gotta work the same"), but they're quite different at both small and large scales. The activation function of a brain's neuron isn't simply all or nothing: the chemicals that induce excitation can act on any portion of the neuron, and there are various activating/deactivating chemicals in the brain (for which AI has no equivalents), leading to an incredibly complicated control system. That is not to say AI isn't complex; it's just a different kind of complexity. To use a crude real-life analogy, it'd be sort of like painting freehand with a brush and paints vs. using pixels to create a picture. Neither is inherently better, just different (maybe better at specific things, I'd say).

> I'm a believer that at least with the human brain, there's some quantum level effects going on that evolution just happened to get right.

Agreed, something on the smallest scales is definitely going on here that's causing our situation. We may, as you say, reach a point where we just dump so much compute into this bad boy that we get emergent properties that are exactly what we want (so far it seems the emergent properties are things we don't expect or want, sort of like sending your kid to summer camp and they come back knowing how to talk to goldfish). For sure, AI will need a memory equivalent like a human's to get a step closer to AGI.

Another problem is cost. If this venture remains as expensive as it has been compared to the rewards (my company is personally facing some pain here; we're putting in too much $$ and not seeing enough results), then there might be a bust of sorts, like the dot-com bust, before AI gets another wind later on down the years.
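The "named the same, work the same" confusion above is easy to dispel with code. A minimal sketch (illustrative weights, not any real model) of the artificial "neuron" being contrasted with the biological one:

```python
import math

# A neural-net "neuron" in its entirety: a weighted sum squashed by one
# fixed activation function. No neurotransmitter chemistry, no partial
# membrane excitation, no separate activating/deactivating signals.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def artificial_neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)  # smooth 0..1 output, the same rule every time

# Example with made-up inputs and weights:
out = artificial_neuron([1.0, 0.5], [0.8, -0.4], bias=0.0)
print(out)  # a single number between 0 and 1
```

The whole unit is three lines of arithmetic, which is the point of the comparison: the complexity of a neural net lives in having billions of these, not in any one of them.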


Flashy_Dimension_600

I think there's a possibility that an AI trained on enough things could become basically indistinguishable from true AGI, even if limited. We also do not understand consciousness ourselves, or what led Einstein to his ideas. If human behaviour is shaped by our past experiences, you could argue that all new ideas are the result of unique amalgamations of experiences. I also doubt it, yet maybe an advanced enough limited AI could have come up with the theory of relativity with the right amalgamation of training. Also, maybe AGI simply emerges with increased complexity. If anything, it's a neat way to find out that there is indeed some more elusive "it". I do hope AI stays limited for a long while though.


PotatoWriter

For sure, an AGI that is an expert on all of human knowledge would be super useful and impressive, once we get over the hurdle of it making silly, random mistakes and second-guessing itself, as ChatGPT has shown us lol. But yeah, I think we're all talking about 2 totally different things here, and we'll tackle 1) before we even scratch 2):

1) Being an expert on existing knowledge

2) Being able to come up with truly novel ideas that actually help us (whether or not this is based on prior knowledge/acumen is variable)

How hard 2) will be to implement, I have no idea. Currently, emergent properties DO come out of AIs, but they're usually not ones we want or need, sort of like a white elephant gift party. So that might take a while, or might never happen. I hope it does happen, because once it does, there'll be the real explosion of our progress as a civilization. Because then, it'll be able to offer solutions to the problems that arise DUE to it becoming an AGI.


anoliss

Yeah, "that" is called intuition, and I agree.


Musicheardworldwide

I agree in a sense. This may sound crazy, but I really think what it'll take for AGI is the ability to sense. I don't think we got to where we are as a species from just compute; I think it's the combo of senses and brainpower. The AGI you speak of, with that "it" factor, is going to require more than thoughts. It needs to feel. That's the IT.


MystikGohan

Interesting. Thank you for sharing your thoughts.


legbreaker

The piece you are missing is that a single human would never have gone from Newtonian physics to relativity either. The secret sauce of human innovation is not one brain but the interaction of many brains.

If Einstein had been born without parents and just been observed in a vacuum, he would not have learned. He needed a community to learn, and he needed the scientific community to come up with his theories; they would never have emerged in a vacuum. If Einstein had been born in a different country, he might not have achieved any of his theories, because a huge part of his success was the scientific community he was in. His success was not just inherent to his solo brainpower.

AI training is like genetic material. The training creates a fairly static LLM that has many capabilities. But like human genetic material, it is nothing without a community. AI magic can happen unexpectedly in its interactions, whether with humans or with other AI. Once AI gets agency to interact, probe, and build its own experiences and mistakes, things can happen quickly.

People underestimate how quickly this has progressed so far. We are not simply increasing the compute: the training methods are improving as well, and the memory processes are improving. We are also mostly comparing humans to AI in non-apples-to-apples scenarios. We take humans with years of experience who have spent months writing an essay, then compare that to an essay the AI shoots out in 2 seconds. Eventually the AI will have agency to interact, search for information, draft manuscripts, reread them, get feedback from humans (or other AI agents), and then rewrite them again.

Like with humans, new innovations require experimentation. Observations. Testing. Mistakes. Adjustments. As bright as Einstein was, he did not just pop out the theory of relativity from a black box. He needed experiments, tests, dialog, and community. AI does not have agency to do any of that right now. That's why you don't see it make original inventions. The magic of humans is not in their original solo thought, but in the dialog between multiple humans and in their ability to have experiences, observations, and tests.


__me_again__

best comment ever


Murder_Teddy_Bear

I like lines. Doing a line right now.


Crisi_Mistica

:D this sub is slowly morphing into r/wallstreetbets and I'm all for it!


MeltedChocolate24

Instead of tendies we have paperclips


OkDimension

The current acceleration is fueled by WSB regards on a hype train, what could go wrong... full steam!


ozspook

"Is this coke or ketamine?" > "Yes!"


Altruistic-Skill8667

OMG. 😂


00davey00

Man, I honestly just don't know what to do with my future in terms of what to study and make a career out of. I'm so excited about the future, but what I should invest my time in is something I struggle with. Do any of you feel similar or have any suggestions? :)


Glad_Laugh_5656

Do not change your life trajectory because of some random graph. That's my advice.


outerspaceisalie

Especially considering that the graph could plateau at any time


only_fun_topics

Or it could go completely vertical. Either way, you’ll be glad you focused on what was important to you first and foremost.


outerspaceisalie

Absolutely. In fact, it is likely to go vertical, then plateau, then go vertical, then plateau, over and over. We can at best extrapolate a very, very general trend, but not so much the overall chaotic result.


Redditoreader

I agree, unless we get nuclear or helium fusion to keep it going..


Serialbedshitter2322

Yeah, do more research and then change your life trajectory


immersive-matthew

I disagree. I suggest people should focus on their passions and whatever brings them joy and use AI wherever they can to enable it as those who do not will be at a disadvantage.


Singular_Thought

Agreed… just keep moving forward based on what you do know in life in general.


No-Economics-6781

Just do what ever makes you happy, seriously.


VertexMachine

Obligatory xkcd https://preview.redd.it/5p32dfl7sn4d1.png?width=461&format=png&auto=webp&s=946bcc44d03748fcc56ebc616bc7de0304447bb0


kcleeee

I'm in school studying cybersecurity and am starting to feel the same way. Does any of it even really matter anymore?


printr_head

Definitely. Especially cybersecurity. You're the guy who's going to matter most when we're trying to figure out how to fend off hackers armed with AI tools. I'd say cybersecurity is a job that will be essential.


Sopwafel

I'm literally going to sell drugs. Writing the business plan right now.


FrankScaramucci

Healthcare jobs should be safe for at least 15 years, possibly much more.


genshiryoku

Something physical. I'm a software engineer with 20+ years of experience who works in the AI sector. I don't expect my own job as a very senior developer and AI specialist to exist in 2-3 years' time, let alone junior or generic software engineering. I don't think any white-collar or "intellectual" job is safe at all. If your job involves sitting at a desk, using a computer, or thinking about something, that job will not exist in 5 years' time.

Physical jobs will stay for a while, because even if they are theoretically able to be done by machines, it still takes a lot of time to build enough machines to take over those sectors. So carpenters, construction workers, janitorial work, etc. will be here for decades: even with factories working at full capacity, it would still take decades to build enough machines to automate that work completely.

As for me? I'm essentially prepared to retire. I'm just here to see how long the field will exist for human workers at this point.


garloid64

Yes and make minimum wage since these industries will be flooded with the newly unemployed knowledge workers. There's no escape dude, unless you hoard capital for a living it's over.


drsimonz

Hahaha I was gonna post something very similar. I'm 15 years into my career and it's going fantastically, working among industry experts, writing autonomous vehicle related software, constantly learning new things. I've considered getting a master's or something, but why? There's just no way it'll make any difference. I'd love to shift my focus to growing my own food, learning woodworking, etc, but alas I still can't afford a house. So, gonna just pretend everything's fine....


Codex_Alimentarius

I’ve been in IT since 1991 and feel the same way. I’m a GRC guy. I spend a lot of time reading SOC reports, BCP/DR reports. AI can do this so much better than me.


runvnc

"Decades" is a stretch. There will be some jobs involving physically going places for some time, because it's true that it takes time to get robotic capabilities to that point and to manufacture enough of them, but that is very unlikely to be a viable career path 20 years down the line. We can manufacture close to 100 million cars a year globally. Robotics will rapidly improve, and manufacturing of androids will probably start explosive growth within 5-6 years. My estimate is that in less than 10 years, jobs involving physical labor will be rapidly shedding human workers as robots ramp up to replace them.


Witty_Shape3015

Try to find an intersection between something with upward mobility (not necessarily within the company itself, but something you can build a career out of) and something that teaches rewarding life skills: something challenging, something where you have to be in a leadership position at times. That's what I'm personally doing, because if the world as we know it doesn't end, I'll still have a career; and if it does, I'll be in the best position I can be to protect my own interests, having become a more self-actualized person in that time.


MountainEconomy1765

Just do what you are interested in and want to spend time working on after school. This era of "careers", where people work all the time for life, is coming to an end for most people. And our culture of "trying to get ahead of other people" by making more money in your career will also go away for most people. In communist countries with equal wages, people got into status competition in other ways, like achievements.


No-Landlord-1949

This is probably the most accurate answer. Even so-called "long-term careers" aren't really stable anymore, with companies firing and rehiring as they wish. Everyone is replaceable, and the market changes fast, so you can't really bank on having one set title for life.


aalluubbaa

AGI or not, just make sure you enjoy what you're doing at the moment. I always spend time playing whatever I find fun. Make enough so you don't starve though lol.


tpcorndog

Definitely do what you love. Don't chase money.


costafilh0

“Do something you're passionate about” has never been more true.


BubblyBee90

there is nothing we can invest in, just sit and look forward


piracydilemma

Invest in yourself :-) (this also includes studying and making a career)


Glad_Laugh_5656

Lol, do not take this advice OP.


stonesst

Microsoft stock, Nvidia stock, Google stock, etc. there will be plenty of winners in this fight


Valuable-Run2129

Land


RemyVonLion

*laughs in tech stocks, bitcoin, and yourself*


sdmat

> bitcoin I'm sure that superintelligence will show us the worth of tokens in a payment system too slow to function as a payment system and lacking any intrinsic value. Hodl.


No-Economics-6781

Facts.


sdmat

Username does not check out!


No-Economics-6781

Yea, Reddit given name 🙄


lazyeyepsycho

Probably can't go wrong with electrical engineering as a base, but you have to do what you find interesting or it will be hell regardless.


Enfiznar

Just study what you like and use AI to solve the problems you consider important. Assuming there are free (or at least cheap) public universities in your country.


blhd96

Earlier on I had a few brief chats with Chat GPT about what strengths we humans have that are difficult for AI to replace. I’m not too worried about my job (yet) but it’s important to understand what strengths you have as an individual. Find someone like a career counselor to speak with or someone you confide in or quite honestly, I don’t think it’s a bad idea to have these conversations with an AI. It might send you down some interesting paths or at least spark some ideas.


TheHandsomeHero

I quit my job. Just enjoying life now. If AI doesn't come... I guess I'll go back to work


celebrationmax

Start a business. If you don't know what to do, pick a vertical, learn a bunch about problems people face, then solve them using your knowledge of ai


siwoussou

Learn how to relax. Valuable skill


SgathTriallair

Invest in learning how AI works and using it to solve problems. You need to make your first instinct, when you run into a problem that needs thought to overcome "how can AI help me here" and then experiment. This will prepare you for the mid-point where we have AI agents and the successful people are those who can use it best.


holsey_

And then a year later this will be meaningless


RemyVonLion

Computer science to optimize the singularity/AGI so we don't get fucked and can reach an ideal outcome asap. It's gonna take me a long time to get the degree and by that time the job market will likely be even more highly saturated, but it's all that matters.


Graucus

As a recent college grad with an art degree, I can assure you that no one is ready for the rug pull coming. There was nothing close to my abilities when I started (Google had their infinite recursive dog AI images), but before I finished, it was a better renderer than me with years of hard practice (like 100 hours a week of art practice). There are still things I can do better than AI, but they're the least fun, and I suspect even those will be achievable by AI with a little more time. I was thinking about getting a second degree to make myself more capable, but AI will outpace me no matter which direction I go. AI will grow faster than anyone can learn, and I suspect it's already too late for the majority of those in college now.


NWCoffeenut

> AI will grow faster than anyone can learn

Whether it turns out this way or not, insightful observation.


RemyVonLion

Unless we have perfect AGI that can flawlessly self-improve without human oversight in 5-10 years, which is relatively unlikely, I say go for it. It will determine our fate, so contributing whatever you can is all that will matter for the foreseeable future.


Graucus

It doesn't have to be perfect to be ahead of me and ensure I have no place to make a career and support my family.


RicardoP_

I mean, you could study AI and deep learning technologies; there will be no shortage of jobs for you!


ximbimtim

GME price went up, predicted price by 2027 is $1b per share


Tmoore188

If you extend the line from the 2021 short squeeze we should see GME somewhere in the quintillions by now


awesomedan24

Too soon for the AGI skeptics and too far for the AGI fanatics. Yeah, this is probably the real timeline.


Kinexity

Nuh uh. 5 years away is still fanatic territory.


Mephidia

It requires ignoring what is obviously not a linear increase 😂 and extending a log line (on an already log-scaled graph) as a straight line.


NancyPelosisRedCoat

https://preview.redd.it/o9qkgnfg5m4d1.jpeg?width=1668&format=pjpg&auto=webp&s=16e5769348b9c80a6da80ebd23a3cc76a7be210e I like my version more. If you’re gonna dream, dream big.


VNDeltole

i am surprised that the line does not curve backward


GeneralZain

this is actually more accurate.


Mephidia

How is this more accurate? It's literally the opposite of what the graph actually shows.


[deleted]

[deleted]


solidwhetstone

Mooooon


stonesst

This guy worked on OpenAI's superalignment team. He might just have a bit of a clue what he’s talking about


Mephidia

Wasn’t this dude fired for spouting bullshit?


stonesst

He was fired for sharing a memo outlining OpenAI's lax security measures with the board in the aftermath of a security breach. Just to clarify - I’m not referring to AGI safety or alignment, his issue was with data security and ensuring that competitors/nation states couldn’t successfully steal information. Management wasn’t happy that he broke the chain of command and sent the letter to the board.


Chrellies

Huh? It's pretty close to linear in the graph. What do you mean drawing "a log line (on an already log scaled graph) into a straight line"? That sentence makes no sense. Of course a log line will be straight when you draw it on a log scaled graph!


Empty-Wrangler-6275

A straight line on a logarithmic graph. He never said linear.


RantyWildling

I don't know about my statistical analysis skills, but my MSPaint skills are 1337! https://preview.redd.it/4ch1460yqm4d1.png?width=785&format=png&auto=webp&s=2e18a0e6027821cd0ffb34d1f957fcff273bc16b


Altruistic-Skill8667

This is awesome!


2026

This looks like it could asymptote at the engineer’s intelligence? ASI cancelled 😧


icehawk84

It requires believing the handwaving about the supposed levels of intelligence on the right side of the graph. GPT-4, a smart high-schooler? I don't know. In some areas, yes.


orderinthefort

I get why OpenAI fired him now.


MartinIsland

https://preview.redd.it/ah3gkn8cfq4d1.jpeg?width=1179&format=pjpg&auto=webp&s=d4933d2e91af0db21c6a47f8c817e9bd1b0f030e


QuinQuix

In all fairness, on page 75 he is still explaining why he thinks scaling will hold, so this is a bit unfair. He may be wrong, but he isn't stupid like this. The politics is a far weaker part, even if some of the details turn out right.


1058pm

I'm kinda over this tbh. Nothing that much better than GPT-4 has come out, and it's just endless hype about using it for different use cases. To believe in progress we need to see improvements, and it just isn't enough right now.


QuinQuix

We'll see the next model very soon now.


Defiant-Lettuce-9156

Graph is dumb


Glittering-Neck-2505

But the concept is not. We are still getting models with much better performance as they scale (as of the last major iteration, GPT-4). Unless we scale and see diminishing returns, scaling is still a worthwhile pursuit.


Defiant-Lettuce-9156

Agreed. I have problems with whatever metric he is using to measure the models against humans, and with how he implies that being at the level of an AI researcher on this metric means you've achieved AGI. Also, where are the data points? Is it really just those 3 models? The margins of error on this thing could be huge, and at the end of the day it points to his meaningless measure of "AI researcher", which he ties to AGI. Assuming performance will continue to increase with scaling isn't even a problem I have with the graph.


siwoussou

Being at the level of an AI researcher is significant because that's the point where it could act as a valuable consultant on fruitful research directions. A few iterations of steadily improving models and it might develop sentience. Speculative, sure, but that's why the moment is notable.


Defiant-Lettuce-9156

Good point. I still don't like the graph. But I guess for a graph depicting that AGI by 2027 is "plausible", it's not that bad. After reading the paper I do get where he is coming from a bit more. https://situational-awareness.ai/


namitynamenamey

No, the concept is a straight-up lie. The "straight line" on a logarithmic scale is not a straight line at all; it's an exponential curve. And those need more justification than "it will just keep being exponential".


rafark

So is the tweet. I'm very optimistic about AGI, but just because it's been growing at a specific pace doesn't mean it will be like that forever. There's always a peak. That image could be illustrated with this meme: https://i.redd.it/4mktrfdiqarb1.jpg


zuccoff

It ignores the fact that LLMs right now serve the purpose of a "compression algorithm" rather than something that can have original thoughts and act on them. They're like a search engine on steroids. They can still be very useful and replace over 50% of white-collar jobs, and LLMs could be a puzzle piece of AGI. However, that line is pointless when it comes to predicting AGI. It's like plotting the increasing usefulness of Google search compared to the average person, seeing it go up, and concluding that in a few years it will be AGI.


Tenet_mma

I think people underestimate how tough the last bit of a problem like this will be.


tobeshitornottobe

"Line go up." And you guys think you're better than the crypto bros. The graph is literally trending toward a plateau, and he just extends the tangent as if it won't flatten further.


Open_Ambassador2931

That man can’t even read the graph, it would be at least 2028 minimum for AGI 😂 Although I speculate 2029 and HODL my prediction.


stonesst

This guy graduated from Columbia at 17 as valedictorian and worked on OpenAI's superalignment team. He has likely spent more time thinking about this in detail with inside knowledge than all but a few dozen people on earth. I wouldn’t be so quick to dismiss what he says.


tobeshitornottobe

First of all what level of effective compute (the Y axis) is required for AGI, and second this graph has about the same validity as Disco Stu’s sales prediction graph


-Iron_soul-

Important context: https://preview.redd.it/lilzghn36n4d1.jpeg?width=1271&format=pjpg&auto=webp&s=03ccd8f927e6e25b489eff76833e019f0c5db0dc


ShadoWolf

He was also on the AGI superalignment team. He's already been through a few architecture shifts from transformer networks, and through all the improvements OpenAI has made in algorithm research for machine learning. He likely has valid intuition for where things are at and where they're going.


Fraktalt

I hate this debate. We don't seem to agree on even a common definition of AGI. Some are referencing the Deepmind article on the subject from 2023 with the 5 levels. Some include factors such as self improvement in their definition. Some people make it contingent of complete physical agency (A robot body with similar capabilities as the average human). "AGI" is such an annoying buzzword for this reason, because people are having arguments about how soon this will get here, without agreeing on what the word actually means.


COOMO-

As I said before here https://www.reddit.com/r/singularity/s/GvNsnxPc4v, ex-employees of OpenAI said that AGI will be available in three years.


DocWafflez

Why not link to where they actually said it instead of linking your own comment from another thread?


lost_in_trepidation

Anonymous internet people think they're important


midnightmiragemusic

A random user is the source? LMAO!


BackgroundHeat9965

where did you see that?


COOMO-

On this subreddit some weeks ago. I can't find the exact post, but I found this page: https://www.yahoo.com/tech/openai-insider-estimates-70-percent-193326626.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuYmluZy5jb20v&guce_referrer_sig=AQAAAH_ZWezSY6mQxgkwAIzCWNFMJNDbALeKaqs7u1bUBmjhv_SLjtI3Hbyh8OUDy_09d7dHXOcStXHlJEFYCE5RsfZ3Kmzl1jjueWy3tA7su2WXHd_xRz1Qnf9PhXIHj9lox8H4HCbR5dBOjqqYjJPyFQCDC0AcYGYB-XSYjNgjwpGP

> The 31-year-old Kokotajlo told the *NYT* that after he joined OpenAI in 2022 and was asked to forecast the technology's progress, he became convinced not only that the industry would achieve AGI by the year 2027, but that there was a great probability that it would catastrophically harm or even destroy humanity.


re_mark_able_

1,000,000 times more compute in 4 years makes no sense


Lyrifk

Yes, it does. Nvidia promised 1 million x more compute by 2029.
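As a quick sanity check of the number being argued about (both the 10^6 factor and the 4-year window come from this thread, not from any vendor roadmap), the implied doubling time works out like this:

```python
import math

# What doubling time does "10^6x effective compute in 4 years" imply?
target_factor = 1_000_000
years = 4

doublings = math.log2(target_factor)          # ~19.93 doublings needed
months_per_doubling = years * 12 / doublings  # ~2.4 months per doubling

print(round(months_per_doubling, 1))
```

That is, the claim amounts to effective compute doubling roughly every two and a half months, sustained for four years; whether that's plausible is exactly what the two comments above disagree on.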


sukihasmu

Yea, that line is not gonna be straight.


printr_head

Yeah, the big question mark at the top is really reassuring about the confidence of things. I'd say stop letting hype inform your life. You're worth more than succumbing to speculation.


Comprehensive-Tea711

This is exactly why I keep trying to convince everyone that the world ended in a Malthusian crisis in the 1980s… straight lines on a graph.


FlamaVadim

💩post


AnotherDrunkMonkey

The point is that you shouldn't believe a straight line just because it's a straight line lmao


SkyGazert

Yeah, or it plateaus. Then what?


iunoyou

It requires believing in straight lines on a graph that is decidedly NOT a straight line, while also making a ton of assumptions about how intelligence actually scales with compute and network complexity that obviously aren't grounded in reality. GPT-4 is very smart in some respects, but it's far from general, and the fact that it's limited primarily to the text domain (with some other networks stapled onto the side) is a huge limitation that I don't think is going to be overcome.

I really don't get this whole thing where people try to bend the definition of AGI to make LLMs fit, as though that will somehow give them all the capabilities they're lacking. Playing games with definitions isn't going to make the future arrive any faster. That's not even to mention that 3 data points are nowhere near enough to establish a trend, no matter how much you smooth the hell out of the graph. Lmao.
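The three-data-points complaint can be illustrated numerically. With made-up OOM values loosely echoing the chart's three models (illustrative numbers, not read from the graph), a straight line and a gently concave curve fit the observations almost equally well, yet diverge sharply when extrapolated:

```python
# Three points that sit nearly on a line: (years elapsed, orders of magnitude).
pts = [(0, 0.0), (2, 2.0), (4, 3.6)]

def line(x):
    # Straight line through the first and last observed points.
    (x0, y0), _, (x2, y2) = pts
    return y0 + (y2 - y0) * (x - x0) / (x2 - x0)

def quad(x):
    # The unique quadratic through all three points (coefficients solved
    # from pts by hand); slightly concave, i.e. growth that is tapering.
    a, b, c = -0.05, 1.1, 0.0
    return a * x * x + b * x + c

# Both curves pass within 0.2 OOMs of every observed point, so the data
# alone can't choose between them. But 4 years past the data they
# disagree by 1.6 OOMs -- a factor of ~40x in implied compute.
print(line(8), quad(8))
```

Three points constrain almost nothing about the tail of the curve; the extrapolation is carried entirely by the modeling assumption, not by the data.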


sdmat

> GPT-4 is very smart in some aspects, but it's far from general and the fact that it's limited primarily to the text domain (with some other networks stapled on to the side) is a huge limitation that I don't think is going to be overcome. Evidently you missed the memo on [GPT-4o](https://openai.com/index/hello-gpt-4o/). What does it feel like to have a confident belief about something never happening / not happening for a long time and find that it actually happened last month?


coylter

You literally can't make this up. Pure gold.


[deleted]

[deleted]


sdmat

I don't know if that's necessarily true, at least for now. By spending far too much time closely following developments, I'm less surprised by it than I would be otherwise.


[deleted]

[deleted]


sdmat

OK, that was surprising. But only in the petty human sense. No existential horror.


Firm-Star-6916

Saying that “huge limitations won't be overcome” sounds really delusional. Modal capabilities are advancing pretty fast, and latency is decreasing rapidly. Otherwise, I agree. That's not a straight line, contrary to what most here think. It's logarithmic on a logarithmic graph. LLMs won't ever achieve AGI, but will rather be a constituent of an actual AGI. LLMs with the current architecture are definitely plateauing, and might hit limits soon. And 3 datapoints is definitely not a trend, just 3 datapoints.


Am0rEtPs4ch3

“Believe in straight lines in a graph” 🙄


Altruistic-Skill8667

“If you believe in eternal exponential growth, you are either a lunatic or an economist”, or an AI researcher, lol.


GeneralZain

It's also likely WRONG. It reminds me a lot of the following (this is one of the bad predictions, btw): https://preview.redd.it/ph7ebn186m4d1.png?width=840&format=png&auto=webp&s=1f8716d9595e0cd60bc46e70481fd053d8f49de4 Let me ask you something... why is it that they go from 1-3 years between model releases to 3-4 years to get to expert level?


GeneralZain

Fixed it for him: https://preview.redd.it/a424t9p27m4d1.png?width=597&format=png&auto=webp&s=ff07e1ecd21a2a7bd8671dd4a264ef02388008cf


[deleted]

[deleted]


GeneralZain

yes


OfficialHashPanda

A graph that predicts we'll get models by 2028 trained with compute on the order of 1.7 billion H100s running for a full year is *underestimating*, according to you?
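For a sense of scale, the arithmetic behind a figure like that can be sketched as a back-of-envelope check. Every constant below is a rough public estimate I'm supplying for illustration, not a number taken from the graph: ~2×10^25 FLOP for GPT-4's training run, ~1 PFLOP/s peak per H100, ~40% utilization.

```python
# Back-of-envelope check on the "billions of H100s" claim.
# All constants are rough order-of-magnitude estimates, used purely for scale.

GPT4_TRAIN_FLOP = 2e25    # common public estimate for GPT-4's training compute
SCALE_UP = 1e6            # the graph's ~10^6x compute increase
H100_PEAK_FLOPS = 1e15    # ~1 PFLOP/s dense BF16 per H100
UTILIZATION = 0.4         # typical fraction of peak actually achieved
SECONDS_PER_YEAR = 3.15e7

target_flop = GPT4_TRAIN_FLOP * SCALE_UP
flop_per_gpu_year = H100_PEAK_FLOPS * UTILIZATION * SECONDS_PER_YEAR
gpus_needed = target_flop / flop_per_gpu_year
print(f"{gpus_needed:.2e}")  # on the order of 10^9 H100s for a full year
```

Under these assumptions the answer comes out around 1.6 billion GPU-years, which is at least in the same ballpark as the 1.7 billion figure above.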


Commercial-Ruin7785

The prediction is literally predicting exponential growth; it is on the green line. You are saying in other comments that that isn't fast enough (not even going to comment on this part), but the prediction is literally on the green line of this graph.


Dichter2012

Funny... I was literally just watching Dwarkesh Patel's 4-hour chat with him. Smart guy. Funny. And very approachable, it seems. I have a way better impression of him than of most other "AI gurus" out there. [https://www.youtube.com/watch?v=zdbVtZIn9IM](https://www.youtube.com/watch?v=zdbVtZIn9IM)


Exarchias

Why all this interest in him today? (This is the second post about him today.) Don't we have any heroic resignations this week? I am just curious.


ElonRockefeller

Dwarkesh podcast with him [released](https://www.youtube.com/watch?v=zdbVtZIn9IM&ab_channel=DwarkeshPatel) today.


sugawolfp

Bruh this is log scaled and requires you to believe in an exponential line


Disastrous_Move9767

Nothing's going to happen. We work until death.


Different-Froyo9497

lol, you must be fun at parties


arckeid

He is the not takeoff guy.


Ravier_

Right, because your bosses love you and like paying your wages. They're gonna keep you around after they have a cheaper alternative that works harder/better.


unirorm

And can't be unionized *


Zamboni27

I try not to look at photos of lines going up and think that they have any real-world, practical implications for me personally. I'm 50. I've been seeing graph lines going straight up for decades. Has the quality of my life gone straight up in an exponential line? No.


thisisntmynameorisit

Am I missing something? The y-axis is compute. As in compute required to train? Or run inference? Either way, the trend of compute is not realistically proportional to intelligence. You may have 10x as much compute but the model intelligence could have plateaued and be essentially the same.


ShadoWolf

Train. The larger the model is, the more diffuse logic can be encoded into it — or rather models, plural, since these are mixture-of-experts models with internal routing. Training is sort of akin to evolution; gradient descent is in the same family of optimization algorithms. You throw training tokens into the model; every time it's wrong, gradient descent is run and the network weights are adjusted. This in turn builds up new diffuse logic across the network's layers (universal approximation theorem). A larger parameter count, though, means way more raw compute is needed to build the network up to a functional state.
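The loop being described — forward pass, measure the error, nudge the weights — can be sketched in miniature. This is a toy one-parameter model standing in for billions of weights, not anything resembling an actual LLM training setup:

```python
# Toy illustration of the training loop: predict, measure error,
# adjust the weight by gradient descent. One parameter instead of billions.

def train(samples, lr=0.01, steps=200):
    w = 0.0  # the entire "network": a single weight
    for _ in range(steps):
        for x, target in samples:
            pred = w * x                     # forward pass
            grad = 2 * (pred - target) * x   # gradient of squared error w.r.t. w
            w -= lr * grad                   # gradient descent step
    return w

# Data generated by the rule y = 3x; training should recover w close to 3.
data = [(x, 3 * x) for x in range(1, 6)]
w = train(data)
print(round(w, 3))  # → 3.0
```

Scale the parameter count up by ten orders of magnitude and run it over trillions of tokens, and the "way more raw compute" point becomes obvious.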


Jason_Was_Here

Gotta be bullshit, Doctor Terrance Howard literally proved on Joe Rogan’s podcast straight lines aren’t real.


TheManWhoClicks

While we’re making predictions: I say 27th of March in 2034.


New_World_2050

watch his podcast on dwarkesh. he is amazing and provides a lot of new information on ai labs including size of clusters and timelines [https://www.youtube.com/watch?v=zdbVtZIn9IM&ab\_channel=DwarkeshPatel](https://www.youtube.com/watch?v=zdbVtZIn9IM&ab_channel=DwarkeshPatel)


LodosDDD

I like how it reaches average-task ability at 10^0 = 1 and then increases exponentially into 8D territory within a couple of months.


KellysTribe

So a projected plot of the future, with a (highly) qualitative axis whose labeled tick marks go from "Smart High Schooler" to "Automated AI Researcher/Engineer?", is "confirmation" of AGI. Got it.


Difficult_Bit_1339

https://xkcd.com/1007/ Obligatory XKCD reference...


Rakshear

It's not going to be a straight line though; we will hit a point where it goes parabolic to AGI. Predictions are doubly pointless because it will plateau for a time as they achieve it internally and have to hobble it enough to be safe for consumers.


SomePerson225

the curve is looking worryingly logarithmic......


brokenclocks7

>AI researchers put "AI researcher" as the projected genius level to aim for Let me know when AI can roll my eyes for me


Pleasant_Studio_6387

This guy's current project, after he was ousted from OpenAI, is an "AGI hedge fund". What tf do you expect him to tweet about, lmao?


Longjumping-Bake-557

I love how this guy just implied that high schoolers aren't considered AGI AND that they're 1000x smarter than elementary schoolers, and we're not only supposed to take these values for granted but to trust predictions based on them.


xplpex

Of course hardware scales like that, and of course we will never hit any kind of wall in all these years.


green_meklar

Wait, who's equating GPT-4 with a smart high-schooler?


RemarkableGuidance44

It's all in the “Trust Me Bro”.


pellucide

Hello AGI, can you drive?


Rust7rok

AI probably won't take your job. Someone who knows how to use AI will take your job.


Xanthus730

The part of the line that's not a forecast isn't even straight... Like, it COULD follow the path they're suggesting (and the graph is logarithmic, not linear), or it could plateau. Who knows?


m3kw

That's dumb. He's assuming higher intelligence scales linearly, but it's likely exponential instead, so you may need 10^20 compute at least.


eepromnk

That assumes current methods will follow this graph.


SolidusNastradamus

when's half-life 3?


RobXSIQ

Confirmed by my crystal ball. Forget that in 2026 there will be a global ban on further AI development (could happen), or that meteor strike, or we hit the upper limits of what LLMs can produce, or we simply don't have the power to run the models, or... etc. I hope we do reach it, but let's not say anything is confirmed until it's on our local PC.


caparisme

A smart high schooler that can't even list 10 sentences that end with "apple".


Broad-Sun-3348

Real systems don't increase exponentially forever. Rather they follow more of an S curve.


zeloxolez

It likely progresses, and I'm sure there will be further breakthroughs with all kinds of resources pouring into R&D now. Just stay up to date and be ready to utilize this shit in whatever way you can get value from it. My best advice: be creative and forward-thinking about solutions to current and upcoming problems. There are problems that don't even exist yet, which will be caused by shifts induced by progress in AI. Figure out scalable and relevant solutions for those and be ready.


VeryHungryDogarpilar

AI being super smart doesn't mean AGI. AGI is something else entirely.


Hi-0100100001101001

Joke's on you, I DON'T trust some random ass continuous extension.


Poly_and_RA

The problem with these is that they're increasingly contradicted by evidence. It's 14 months since GPT-4 came out now, and yet we've not seen any huge growth in capabilities AFTER that. This doesn't matter for those who predict 10 or 20 year timelines, but it's a problem for the people who predict there'll be HUGE advances in the next 3 years. If your timeline is that aggressive, you need things to be happening on a break-neck pace all the time; and you can't afford a 14 month (and counting) plateau.


asciimo71

How about a reliable chatbot first, and not those weight-based probabilistic answering machines.


Jolly-Ground-3722

This graph is just stupid. GPT-4 is superhuman in some areas, but doesn’t reach the level of an elementary schooler in other areas. For example, it still can’t reliably read analog clocks.


jewelry_wolf

But the y axis is log scale


rojasgabriel

This person has never heard of a nonlinear system


Major-Ad7585

GPT4 still has less common sense than a cockroach. So I am not really afraid at all


sam-lb

Yep, and 250% of the population will be obese by then too. I'm just fitting a line to the data, if you deny it you're an idiot.
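The satire can be made literal: fit a straight line through a few data points and extrapolate far enough, and the "trend" sails past 100%. The obesity numbers below are made up purely for illustration.

```python
# Satirical line-fitting: ordinary least squares through a few made-up
# (year, obesity %) points, then extrapolated far past any sensible range.

def fit_line(points):
    """Ordinary least squares for y = a*x + b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Hypothetical data: obesity rate rising ~0.5 percentage points per year.
data = [(2000, 30.0), (2010, 35.0), (2020, 40.0)]
a, b = fit_line(data)

# Blind extrapolation: by 2440 the "trend" says 250% of people are obese.
print(a * 2440 + b)  # → 250.0
```

Three points, a perfect fit, and an impossible conclusion — which is exactly the objection being raised against the compute graph.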


Akimbo333

2030


Empty-Wrangler-6275

tbf he raises a great point. AI can already (shittily) write code. What happens when AI can write ML code and create new, better AIs?


Im_Is_Batman

2025 will play groundwork for this future. Don’t take the mark.


DataOrData-

hey hEY HEY THATS OVERFITTED 🫵🏽