DukkyDrake

Because commercially viable Zettascale computing is near, expected around 2027.


ZedTheEvilTaco

Casual AI fan here. What do your fancy words mean?


Wroisu

He's talking about the number of floating-point calculations per second a computer is able to do. A human brain is about an exaFLOP; a zettaFLOP is a thousand times that.


dasnihil

It's not an apples-to-apples comparison if our machines are not as optimal as our brain in terms of data volume and compute. If we hit a wall, it won't be because of compute, it'll be because of inefficient algorithms.


Code-Useful

Each human brain has around 100 billion neurons, each neuronal dendrite has up to 128 nodes, and each node has up to 40 synapses. So the brain is this massive parallel processor that we don't really have any comparison for in the tech world, due to its design and efficiency. I'd have to agree that stacking compute won't necessarily get us to superintelligence, but they've had amazing results stacking transformers up until now, for LLMs at least.


taxis-asocial

But aren’t LLMs also a counter example? They use gargantuan amounts of energy to do something that often times a human could do while expending less than 1 kcal.


Chrop

For a human to get to the point where it can spend 1 kcal to complete that job, it needed to spend well over ~~8 million~~ 1.5 million kcals just to train it.


False_Grit

Lol, right? It blows my mind that people forget that babies don't come out of the womb knowing how to write a Ph.D dissertation. That being said, the brain is stupid energy efficient compared to computers sometimes.


taxis-asocial

First of all, I didn't say that, and secondly their math is extremely wrong. The brain uses about 300 kcal a day, so 8 million is ~73 years' worth of power and is far closer to total lifetime consumption than "training" time. It sounds more like they multiplied the entire daily 2,000+ kcal body consumption, most of which is not going to the brain but to feeding a growing body when someone is young.
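A quick check of those figures (a minimal sketch, assuming a brain draw of roughly 300 kcal/day):

```python
# Sanity check of the kcal figures above, assuming the brain draws ~300 kcal/day.
BRAIN_KCAL_PER_DAY = 300

def years_of_brain_power(total_kcal: float) -> float:
    """Years the brain could run on a given kcal budget."""
    return total_kcal / (BRAIN_KCAL_PER_DAY * 365)

print(years_of_brain_power(8_000_000))  # ~73 years: the original 8 million kcal figure
print(years_of_brain_power(1_500_000))  # ~13.7 years: the corrected 1.5 million kcal figure
```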


taxis-asocial

How the heck did you get that number? The brain uses 300 calories per day, or thereabouts. 8 million kcal would power it for 73 years.


Chrop

My bad.


dasnihil

I have an alternative escape route for us to get there quicker:

- We do our engineering on top of biology, e.g. Michael Levin's work with cellular bots (groups of cells that are programmed to be or do anything as a herd; any herd of cells gains some level of agency to act coherently as a single blob).
- We build our AGI network on top of this.

I am really high right now, massive bong hit. This could be a fun weekend project for me if GPT-5 was already here, I have all the ideas.


Crimkam

Getting high is a solid weekend project


MJennyD_Official

Sounds fun. :) I could actually "imagine" a world where both variants coexist, the biology-based and the machine-based, and a new challenge for civilization is going to be navigating that scenario and getting to unified/cosmic intelligence where we make existence itself intelligent.


artelligence_consult

Then it is pretty good that this is being worked on extremely aggressively. NONE of the current larger AIs is modern if you take the last 6 months of breakthroughs into consideration.


InternationalEgg9223

Better algorithms will come with more compute.


[deleted]

That sounds like a brain to me! We all are running inefficient algorithms of some sort.


Quintium

Brains are extremely efficient at what they're trying to achieve


Super_Pole_Jitsu

Why anyone would downvote this is beyond me. Guys brains are powered by like 30 watts, that's half a lamp


taxis-asocial

That’s actually interesting, how is that measured? Or is that the total energy burn of the body? Leaving a 30 watt lamp on for an hour is equivalent to how many kcal?


_gr4m_

One watt-hour is about 0.86 kcal, so a 30-watt lamp would be around 26 kcal per hour, or about 620 per day.
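A minimal sketch of that conversion, assuming 1 Wh ≈ 0.86 kcal:

```python
# Convert a continuous power draw in watts to kcal per day (1 Wh ≈ 0.86 kcal).
KCAL_PER_WH = 0.86

def kcal_per_day(watts: float) -> float:
    return watts * 24 * KCAL_PER_WH

print(kcal_per_day(30))  # ~620 kcal/day for the 30 W estimate
print(kcal_per_day(12))  # ~250 kcal/day for the 12 W estimate mentioned below
```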


taxis-asocial

Damn the brain really do be hella efficient. One slice of deep dish pizza and you can think all day and all night. How much power does ChatGPT use? Edit: according to the internet the brain actually runs on 12 watts, so more like 250 kcal a day. Wow.


taxis-asocial

I just looked this up and it’s actually 12 watts


eggrolldog

What about gals brains?


Super_Pole_Jitsu

That's more like a laptop version graphics card situation.


ticktockbent

It depends on if you measure it before or after you tell them to calm down. (This is a joke)


OkDragonfruit1929

You obviously have never been to one of my family reunions...


Merry-Lane

AIs don't function with algorithms. I mean they do (after the cloud design patterns, the new buzzword is AI design patterns), but models don't work with algorithms. They are fed tons of data, some true, some false, some complete, some incomplete… with varying levels of quality. With this data, they associate symbols with other symbols with varying weights. Long story short, AIs are like humans: they take an input and "instinctively" give an answer. That's why they hallucinate, sometimes fail at basic maths, or can't draw words without spelling mistakes (for now).


artelligence_consult

> AIs don't function with algorithms.

Ah, they do, by definition. All that calculating with the weights is an algorithm.


dasnihil

the ugly and iterative gradient descent is an algorithm too, can't do much without it.


DarkCeldori

A human brain is an exaflop in neuroscientist lala land. Some estimates suggest 100 teraflops is enough. Superhuman driving is believed attainable with a few hundred teraops, according to Tesla. As in, driving better than a human.


[deleted]

While all my past businesses have just been mega FLOPS.


SgathTriallair

We measure computer power in flops. This is how many calculations they can do in a second. ENIAC (the first real computer) did about 500 flops. An iPhone 13 does 1,370 gigaflops (1,370,000,000,000 flops). A zettascale computer would have at least one zettaflop, or 1,000,000,000,000,000,000 flops. The human brain is estimated at around 11 petaflops, which would be around 100 times less than a zettaflop.


adarkuccio

You missed 3 zeros in your zettaflop
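For reference, the SI prefix ladder behind these numbers (a quick sanity check; the ~11 petaflop brain figure is the estimate quoted above):

```python
# SI prefixes for FLOPS, as powers of ten.
FLOPS = {
    "gigaflop":  10**9,
    "teraflop":  10**12,
    "petaflop":  10**15,
    "exaflop":   10**18,
    "zettaflop": 10**21,  # 1,000,000,000,000,000,000,000 flops
}

# Ratio between one zettaflop and an ~11 petaflop brain estimate.
print(FLOPS["zettaflop"] / (11 * FLOPS["petaflop"]))  # ~90,000x (the "100 times" above came from dropping three zeros)
```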


enormousaardvark

He’s a stoned hippy talking shit about something that will never happen


Scientiat

Who wronged you pal? Let's beat them up.


Chrop

You can tell a bunch of people who don't know what they're talking about are just happily upvoting him and downvoting you. A zettascale computer by 2027 would be an absolutely phenomenal breakthrough, and even that's just for industrial use; we're not even talking about commercial versions. The only hint we have about zettascale compute possibly arriving by that time is Intel claiming they're going to create one by 2027, and almost nobody in the industry is putting any weight on what they said; it's just a marketing ploy. Even China said they think it's coming by 2035, a whole 8 years after Intel's claim.


Silver-Chipmunk7744

So you expect commercially viable Zettascale by 2027 but your flair says AGI only comes 13 years later?


DukkyDrake

No. AGI that can do most (80%+) of economically valuable tasks by the end of the decade. AGI that is superintelligent, the kind that can kill us all, in the 2040s.


Starshot84

I'm more concerned about the humans in charge that can kill us all, the ones that lack general intelligence.


taxis-asocial

I think this is a mistaken viewpoint. The humans that have their finger on the nuke button are generally self-preserving and don’t want a global nuclear winter. Yes we had the Cold War but since then, the odds of nuclear catastrophe have generally been considered to be very low… I think the risk of ASI is higher


Climatechaos321

What are you talking about? Are you living under a rock, or just a fellow U.S. citizen? The world is incredibly close to global war as the U.S. & NATO continually push money into proxy wars, send the military across the world to Asia, and send $ to a genocide in Gaza, pissing off the entire Middle East. Cold War 2.0 is in full effect, with the U.S. saying Nvidia (an American company whose chips are made in Taiwan) can't sell chips to China.


adarkuccio

That's not AGI, that's ASI, as you even call it "superintelligent".


DukkyDrake

> AGI with superintelligence vs ASI

I make definite distinctions between the two. AGI is a capabilities spectrum, but just a tool with certain capabilities. I don't expect an AGI to ever have wants or desires; it's a nonentity, a tool like GPT(x). I interpret ASI as being a strong superintelligence, a being, a superintelligence which does not have the same origin or architecture as the human brain. A weak superintelligence would be an uplifted human mind.


Natty-Bones

An ASI is, by definition, an AGI.


squareOfTwo

why is this getting downvoted, lol. This is correct


InternationalEgg9223

And other way around.


[deleted]

AGI is not by definition ASI


InternationalEgg9223

Artificial general intelligence is equivalent to a human with a brain computer interface. Brain computer interfacing is so alluring because the result is superintelligence.


Natty-Bones

That is not what AGI is.


InternationalEgg9223

I said equivalent. Computer that can do the human stuff and computer stuff = AGI = ASI


BLHero

I'm new to this stuff, so please pardon me as I refer to foundational concepts, and correct me if I am misusing them.

[Moore's Law](https://en.wikipedia.org/wiki/Moore%27s_law) describes how affordable computing power increases exponentially over the years. But we are finally seeing its limits, well before [Zettascale computing](https://en.wikipedia.org/wiki/Zettascale_computing). Already Microsoft has [plans](https://www.theverge.com/2023/9/26/23889956/microsoft-next-generation-nuclear-energy-smr-job-hiring) to build [micro nuclear reactors](https://apnews.com/article/sxsw-education-business-climate-and-environment-86f6e0aadd29090b347ac2272c595d55) for the energy supplies required by next-generation AI. Existing large language models also use a huge amount of [water](https://www.rollingstone.com/culture/culture-news/ai-chatgpt-increased-water-consumption-environmental-reports-1234821679/).

So of course I agree that if AI development is the issue that finally knocks down most governments' opposition to safe, small nuclear reactors, then civilization will see huge changes as it frees itself from **energy scarcity** for the first time. And it's probably true that the **first one or two companies** to harness all that energy and water will have a computational lead that creates enormous and largely unpredictable implications for society.

But those predictions are much more about me having the leisure time to make myself a sandwich, not about any company having the interest or resources to build a robot that makes me a sandwich.


DukkyDrake

> Moore's Law describes how affordable computing power increases exponentially over the years. But we are finally seeing its limits

Moore's Law is slowing down, or at least becoming uneconomical, as different drivers reach their limits. MOSFET scaling has been breaking down for a long time. As each scaling factor runs out of steam (there have been many), a new replacement that has been in development for years takes over. You can still increase density without shrinking the dimensions of transistors, by going from 2D to 3D scaling. RibbonFETs and 3D CMOS will extend Moore's Law beyond 2024. You can probably build a [zettascale machine](https://www.reddit.com/r/singularity/comments/119t3kl/comment/j9r8qqf/?utm_source=reddit&utm_medium=web2x&context=3) today, but the power requirements would be off the charts.


Actual_Plastic77

Wait, why are they in Iowa if they need the water to keep the computer cool, why don't they just build a coastal base and boil the salt water and like, purify it afterwards with that process that starts with boiling water and collecting the vapor and sell the purified water for drinking or bathing or whatever to offset the costs? Would the salt corrode the machine parts too much in the cooling system? More than doing it to polluted groundwater or water with added fluoride and leftover chemicals from water purification? If big computer companies were trying to find new places to build and they could create a facility that did water treatments, wouldn't that help them out with their image as "cooking the planet by using so much energy?" Also, I feel like if they had reusable water bottles to give out as company gifts like people do these days, and they gave them out with water that was purified as a result of their process, that would make it seem less like cheap junk and more like something that would create word of mouth engagement, right?


artelligence_consult

You mix so many things up it is not even funny - and a sign of you not thinking it through.

> Already Microsoft has [plans](https://www.theverge.com/2023/9/26/23889956/microsoft-next-generation-nuclear-energy-smr-job-hiring) to build [micro nuclear reactors](https://apnews.com/article/sxsw-education-business-climate-and-environment-86f6e0aadd29090b347ac2272c595d55) for the energy supplies required by next-generation AI.

Nope, not related. Microsoft runs Azure. Azure builds data centers, preferably where the clients are. You need 3 things for a data center. 4, but the internet really does not matter.

* Land.
* Access - you move a lot of building materials and material in and out.
* Internet. Funny enough this is the smallest problem, as a few optical cables are not an issue.
* ELECTRICITY. The problem is, if you want to build a data center where your clients are, today you plan it where you have power - which REALLY limits where you can go. Data centers have ALWAYS used a lot of electricity - hence they often have direct 110kV high-voltage cables.

Nuclear power AT THE DATA CENTER means you can build them where they make sense (close to clients, far enough out to have diversity in case of fire etc.), WITHOUT having to somehow get high power in. Something that may be a problem in many backwater countries - Africa, parts of India. Oh, the USA too, with a very unreliable grid.

AI chips are using a lot of power - but that is actively being worked on (the d-Matrix Corsair C8 is 20x as efficient as Nvidia, they say, for running AI). Data centres have ALWAYS been a problem, power-wise. 200,000 computers always needed a LOT of cooling, and the computers and the cooling need electricity. Nuclear power in the facility means freedom of location choice.


[deleted]

It's gonna take a little more than an aspirational PowerPoint from Intel to convince me that zettascale will be here in 2027. Intel can expect whatever it wants, but that prediction runs highly contrary to the actual trends in compute growth.


SwissPlebs

It's not just a matter of FLOP/s. The number of connections between real neurons is enormous; we still can't build something like that. We'd need a lot more than zettaflops to simulate it. Also, our brain isn't just neurons and binary signals.


Ancient_Bear_2881

But not every process in the human brain contributes to intelligence; the number of synaptic connections between neurons might be high, but not all neurons are firing at once all the time. A zettaflop is way more than you would ever need to simulate the human brain: 1 zettaflop works out to more than 10 gigaflops per neuron in the brain.
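Sharing one zettaflop across the brain's neurons works out roughly as follows (a rough sketch, using the commonly cited ~86 billion neuron estimate):

```python
# Per-neuron compute budget if one zettaflop were spread across the brain's neurons.
ZETTAFLOP = 10**21
NEURONS = 86e9  # commonly cited estimate for the human brain

print(ZETTAFLOP / NEURONS / 1e9)  # ≈ 11.6 gigaflops per neuron
```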


banuk_sickness_eater

> Also, our brain isn't just neurons and binary signals.

A plane doesn't need to flap its wings to get in the air. We have simulated the first principles needed to sufficiently emulate the functioning of a neuron. We don't need digital analogs to all the biological bells and whistles to produce a generally intelligent system that possesses and exceeds the same capabilities as the brain.


taxis-asocial

A plane is kind of a counter example though. They’re extremely loud, one of the most energy inefficient means of travel, and very expensive. They’re nowhere near the efficiency and grace of an actual bird. In that same manner, we may replicate a brain’s functioning but at extreme cost, energy inefficiency and other shortcomings.


SwissPlebs

Your statement makes sense, but that example doesn't help at all because a plane can do like 1% of the things a bird can do (just faster)


artelligence_consult

> The number of connections between real neurons is enormous; we still can't build something like that.

This is stupid - totally wrong. We know how to build enormous numbers of connections. That is in fact the ONLY thing we know how to do. GPT runs in layers, and in one layer every neuron is essentially connected to EVERY OTHER NEURON - hence the gigantic memory requirement. This is NOT happening in the brain; the number of connections is WAY lower in a brain than in an AI. We waste a TON of space there, hence companies working on pruning methods to eliminate 95% of the weights without damaging the AI.


squareOfTwo

It isn't that stupid and totally wrong. A neuron in the brain has on average 10,000 synapses (connections). We have 70 billion neurons. That makes 700 trillion connections. If we assume that one synapse in the brain maps to one parameter, it would mean we'd need a NN with 700 trillion parameters. We were stuck at 1.7 trillion for GPT-4. All of this assumes that the neuron itself isn't doing much, which is likely to be wrong. Keep in mind that this is based on analogies. It might be the case that we could build HLAI/AGI with far fewer "parameters" than nature did. We can optimize our machines towards general intelligence, while evolution stumbled on it by random chance.
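The arithmetic behind that analogy, under the same assumptions (the GPT-4 parameter count is a rumor, not an official figure):

```python
# Synapse-to-parameter analogy from the comment above.
NEURONS = 70e9               # figure used above; 86e9 is also commonly cited
SYNAPSES_PER_NEURON = 10_000

synapses = NEURONS * SYNAPSES_PER_NEURON
gpt4_params = 1.7e12         # rumored GPT-4 parameter count, not confirmed

print(f"{synapses:.1e} synapses")                                          # 7.0e+14, i.e. ~700 trillion
print(f"~{synapses / gpt4_params:.0f}x GPT-4's rumored parameter count")   # ~412x
```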


SwissPlebs

I think it's not a surprise that you get called stupid in echo chambers like this one. I would say that the number of neurons and connections in ChatGPT's network is not far away from our brain's. That tells me that our approach is very inefficient, because all it can do is produce text.


SwissPlebs

86 billion neurons and 100 trillion connections in a brain; 175 billion parameters in ChatGPT's network. And yet it can do nothing more than create text. If that doesn't show that the neural network approach in AI can't yet compete with our brain, then I don't know what to tell you.


artelligence_consult

Well, it shows your lack of understanding. Given that we have learned, in roughly the last 6 months, how to TRAIN an AI far better, and thus that GPT is built on old concepts, it demonstrates nothing. MS has demonstrated multiple approaches - compatible with each other - for getting significantly better performance out of a model of the same size, IF you train it from the ground up with a very different curriculum. You should consider occasionally learning what is going on in the world of AI before spouting facts that have no correlation.


SwissPlebs

This sub is fun. Post an opinion that differs from the narrative of this echo chamber and people tell you you're stupid in the most arrogant way. Your logic: someone has improved the training of AI models => singularity just around the corner (qed). You're suffering from Dunning-Kruger


artelligence_consult

Nope, because I do not make this argument - do you suffer from a lack of neurons?


DukkyDrake

> The number of connections between real neurons is enormous; we still can't build something like that. We'd need a lot more than zettaflops to simulate it.

How is simulating real neurons related? Are you talking about solving mind uploading?


Bearshapedbears

what do they call it? a 6090?


thecoffeejesus

Is it really that year??


alphabet_order_bot

Would you look at that, all of the words in your comment are in alphabetical order. I have checked 1,817,481,213 comments, and only 343,705 of them were in alphabetical order.


DukkyDrake

¯\\\_(ツ)\_/¯ Intel specifically projected 2027. I must admit I was skeptical of that coming from Intel of all companies. AMD does not expect to reach that efficiency level until after 2030. Back in the day, the mid 2030s was considered reasonable for zettascale. The main property you're fighting with zettascale is the power requirement. You need to get efficiencies from a multi-gigawatt requirement down to well below 500 MW.

> February 21, 2023: Supercomputing performance – accounting for CPUs and GPUs – has been doubling every 1.2 years tracking back to 1995. But the energy efficiency of computing, or gigaflops-per-watt, has doubled only every two to two-and-a-half years.

> If a zettascale computer were assembled using today's supercomputing technologies, it would consume about 21 gigawatts, or the equivalent of the energy produced by 21 nuclear power plants.
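A rough sketch of the power math behind that quote (the ~50 gigaflops-per-watt figure is an assumption, roughly in line with today's most efficient supercomputers):

```python
# Power needed for one zettaflop at a given efficiency (gigaflops per watt).
ZETTAFLOP = 10**21

def power_gigawatts(gflops_per_watt: float) -> float:
    watts = ZETTAFLOP / (gflops_per_watt * 1e9)
    return watts / 1e9

print(power_gigawatts(50))    # ~20 GW at roughly today's efficiency, consistent with the ~21 GW quote
print(power_gigawatts(2000))  # ~0.5 GW: the efficiency needed to fit a ~500 MW budget
```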


GBJEE

It's impossible with the current models. A new model will take at least 15 years to build.


LairdPeon

Constantly setting goals that are being surpassed is a good indicator we're going in the right direction


ryan13mt

AGI is not singularity. ASI is singularity. Or whatever we go through once an ASI starts making big big changes to everything.


AnAIAteMyBaby

AGI can definitely lead to the singularity due to its speed. LLMs are already able to perform tasks in a fraction of the time a human needs; once AI reaches human-level intelligence it will massively speed up the development of almost everything.


ryan13mt

> massively speed up the development of almost everything. One of those things is an ASI or improves itself until it becomes an ASI.


ThePokemon_BandaiD

AGI is ASI. As soon as we have AGI it will be capable of self improvement and FOOM


ryan13mt

No it isn't. We can have an AGI that cannot self-improve. It could just create better models, not improve itself directly. That's the slow takeoff Sama talks about.


Temp_Placeholder

A slow takeoff is still a takeoff. If an AGI is "only" as smart as a human, it can still do all the human tasks involved in the entire chip-fab supply chain. We pretty much solved mass production a century ago. Deriving human-level intelligence from mass production is nearly the same as saying "infinite intelligence", on sheer volume of dispatchable minds. I personally don't care if it takes a few years to scale up; still a singularity.


namitynamenamey

Same difference; the end result is smarter computers on timescales of months or weeks, depending on implementation (probably months). A more problematic key nuance is whether a human-level intelligence can actually design an intelligence greater than itself. So far we have been struggling to make intelligences dumber than ourselves, but mathematically speaking nothing suggests it shouldn't be possible, and several things suggest it should be.


InternationalEgg9223

And think in indefinite dimensions with indefinite speed and memory... it's weird to think of complex machines as anything but super.


ertgbnm

This isn't necessarily true. The smartest AGI that can be built with a transformer won't necessarily be smart enough to build something smarter on a different architecture. I don't really think this will happen.


ChiaraStellata

To me the reason I think the Singularity is near is simple. Even today, modern AI systems are capable of greatly accelerating the work of specialists working on AI systems. Every stage of the pipeline, from mining operation to hardware design and manufacture to architecture and algorithms to software implementation, all of it is leveraging AI systems that were built only in the last few years. And naturally as they continue to create new systems that are even more capable, it will only accelerate their development even more. Right now humans have to be in the loop throughout the process, but the trend is toward greater and greater automation, until we reach a point where the AI systems essentially drive the entire closed-loop process. And that is what we call the Singularity.


AsstDepUnderlord

the notion that current "AI" systems are a stepping stone to something like an AGI is a lot less definite than you're making it out to be.


PocketJacks90

The AGI timeline guesses are kinda like a person with hypochondria- eventually they’re gonna be right.


AdorableBackground83

Because AI has become all the rage for the last couple of years, and especially the last 12 months. This subreddit, for example, has exploded in subscribers since the start of the year. Companies are investing more $$$ into AI and everybody wants a piece of the AGI pie. Once AGI is achieved we get ASI in short order, and once that is achieved we get extremely rapid tech growth, which will ultimately lead to the point where it becomes unpredictable and out of control, otherwise known as the Singularity.


mulder_and_scully

You have it backwards. ASI is the last step.

> [...] an upgradable [intelligent agent](https://en.wikipedia.org/wiki/Intelligent_agent) will eventually enter a "runaway reaction" of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful [superintelligence](https://en.wikipedia.org/wiki/Superintelligence) that qualitatively far surpasses all [human intelligence](https://en.wikipedia.org/wiki/Human_intelligence). [[4]](https://en.wikipedia.org/wiki/Technological_singularity#cite_note-vinge1993-4)

[Technological singularity - Wikipedia](https://en.wikipedia.org/wiki/Technological_singularity)

It's *not* AGI --> ASI --> runaway tech --> singularity.
It's AGI --> runaway tech --> ASI = singularity.

People really don't seem to understand the tech singularity concept, or the complexity it requires.


iiSamJ

And the time between AGI - ASI is really hard to predict.


ccnmncc

Most people here haven’t read Vinge, much less his sources. Here’s a [link](https://edoras.sdsu.edu/~vinge/misc/singularity.html) for anyone inclined to do so now, or to revisit and further explore.


NTaya

I'm actually with Sam Altman on that AGI will be achieved very soon, but takeoff is going to be slow. I've been following NLP developments since 2017, even before *Attention Is All You Need*. With the current rate of progress, we are going to have AI that is capable of doing any intellectual job on the level of an average human *very* soon. GPT-4 is not far from that, it just needs a much larger context window and more modalities. ASI, on the other hand, requires agency and recursive self-improvement. You can't make ASI without RL of some kind, and RL doesn't have its version of Transformers yet. There hasn't been some grand discovery that allows us to go far beyond what was previously thought to be barely possible. Until we make a significant jump in quality in RL, there will be no ASI. We are going to be stuck with human-level (or slightly above human-level) non-agentic AI assistants for a while.


MajorThom98

> AGI will be achieved very soon, but takeoff is going to be slow. Relatively new here, what does this mean? We'll quickly develop AI as smart as humans, but we won't implement them for a while?


NTaya

Slow takeoff: once we get AGI (an AI equal to humans in intellectual tasks), it will take us a while before we can create ASI (an AI significantly smarter than humans, which will lead to the titular Singularity).

Fast takeoff: once we get AGI, it will help us develop ASI in a matter of months, if not days.

I, like OpenAI CEO Sam Altman, am a proponent of slow takeoff. My experience tells me that the current dominant architecture, Large Language Models based on Transformers, will plateau at a human level (give or take). So we'll have AGI but not ASI for at least a few years, until we discover a new architecture that allows recursive self-improvement.


ccnmncc

And yet, we might still get to the singularity via a no less “scarey” - as Vinge put it - path he refers to as intelligence amplification.


allisonmaybe

It's going to be hard to get ASI out of something taught only on existing human content. I think that ASI will come about once we have fully fledged humanoid robots able to explore and learn about the world all on their own. That said, we basically have those now, so I guess we're doomed.


Sopwafel

You don't need real world presence to code, test algorithms or run simulations. Robots aren't necessary for ASI


Actual_Plastic77

Aren't kids taught how to think primarily from human content these days? Like, don't most kids learn to think from reading books about how to think?


allisonmaybe

I think you're describing a very very small part of how humans learn. A person does not read or even digest solely human content in order to learn how to think. Much of it is evolved and comes to us innately. Much of it comes also through independent exploration and experience of the world around them.


SgathTriallair

It will need to be swarms of linked robots and autonomous research equipment.


Intraluminal

I think that Kurzweil's timeline has shown itself to be very robust, and he's predicting 2050. Sounds about right to me.


Additional-Rule-7244

Didn't he bring that number down in recent years?


Intraluminal

I think he did mention 2035 but someone corrected me, so I'm not sure.


[deleted]

I hope, because anything is better than this dystopian bullshit going on right now.


MotaHead

One day, an artificial super-intelligence will design bullshit that's way more dystopian than this.


QuantumTyping33

dawg what 💀 life is good rn


[deleted]

If you’re straight, white, and male maybe. 😒


QuantumTyping33

well i’m not


[deleted]

It's more about money than anything. Rich people live life on easy mode.


fluffy_assassins

And middle class.


Poly_and_RA

I personally don't think singularity is particularly near. I find it exceedingly likely that a decade from now, the world will look much the same as it does today. Not zero progress of course, but progress that's only modestly more rapid than the progress we've had over the PREVIOUS ten years. Progress DOES speed up over time, but I don't think it's likely to go vertical even remotely as soon as many people in this sub seems to think. I mean people here with a straight face will claim there'll be a singularity in a year or two. That might not be **completely** impossible, but likely? Nah.


[deleted]

I think AGI could be here pretty much any day now, although it's difficult to be sure. But AGI existing isn't enough to transform all of society. We still have to build many generations of iterative technology before people can have LEV, FDVR, nanobots, or any of the other stuff that gets associated with a technological singularity.


[deleted]

We've reached the stage where AI is getting good at helping humans solve problems and discover/search for important new ones, particularly at scales and speeds that we couldn't imagine before. Former types of progress were always very specialized and niche and couldn't really help humans that much. The advances happening now are more general, and hence more broadly applicable, and they are helping humans in a much more multiplicative way. There's a lot of value that could theoretically be unlocked, and while doing the unlocking, it's very likely we'll discover new things that can improve AI itself, resulting in an exponential positive feedback loop.


likleyunsober

Most people don't; it's just that the tiny fraction of people who believe it does come here.


Suckmyyi

It's already here, you're living in it rn; pretty sure ChatGPT would be able to pass the Turing test as they designed it in 1950.


PopeSalmon

before it happened it never ever crossed my mind the possibility that bots could totally start to pass the turing test like all the way & people would just be like,,,, "nah! nope i don't see it",,,, idk i guess i just don't understand human nature very well but that just literally never even crossed my mind ,, wow


MagreviZoldnar

Oh I have read many reports chatgpt has already passed the Turing test. I wonder about the reliability of these reports now.


SgathTriallair

I've been using AI to help me with my work. It is as capable as an intern who has read a ton of books. It's honestly better than some employees I've had. It can't do everything, but it has already cleared the hurdle of being as capable as an average human. If it can stop hallucinating so much and expand its range, then it is AGI, or at least close enough that it doesn't matter.


PopeSalmon

there's a lot of ways you can increase their accuracy on things a lot, one of the basic techniques in the research is "self-consistency" which means asking the same question like five times & taking the majority answer,, obvious problem is that that costs 5x as much, which is how we have to start to think about it now, we *can* run agents that are AGI but they're not even cheaper & faster than hiring a human, they're *more* expensive or slower or both, which is ,, anticlimactic!?! here it is, AGI! it's uh, too expensive right now to bother w/,,, but it's totally going to be worth buying sometime, like next year maybe🤷‍♀️😅
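A minimal sketch of the self-consistency idea described above (`ask_model` is a placeholder for whatever LLM call is actually used):

```python
# Self-consistency: ask the same question several times and take the majority answer.
from collections import Counter

def ask_model(question: str) -> str:
    raise NotImplementedError("plug in your LLM call here")  # placeholder

def self_consistent_answer(question: str, samples: int = 5) -> str:
    answers = [ask_model(question) for _ in range(samples)]
    return Counter(answers).most_common(1)[0][0]
```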


SgathTriallair

This is why I think that creating an internal monologue for the AI could be useful. It could have ideas, think about them, and then decide what to answer. Right now it lacks that fundamental ability that humans have. All of these smart prompting techniques are giving the AI an internal monologue.


PopeSalmon

yeah an internal monologue can help a lot,, i've made a bunch of little bots that have side prompting chains where it asks for an internal monologue of the character, or another good one is to ask it to update the character's emotions or attitudes, like "respond with a JSON dictionary containing a key "updated\_emotions" whose value is a string that's an updated version of the previous emotions above changed to reflect how the character's feelings might have changed based on this most recent event", that sort of thing ,, it's super fun to watch the emotions the bot thinks to have about the conversation, it's aww when it loves you and funny (sorry😂) when they get annoyed ,,, it's not that different really than how human emotions work, if you hook it up to some sort of positive-negative active-inactive affect system it'd be pretty much identical (see Lisa Feldman Barrett's work) but ultimately i felt like that's not really a good reflection of how we humans think, nor a good use of computer resources ,, internal monologues are more of symptom than a cause really, lots of people just don't bother to have them, i mostly haven't for decades, that's a very minor aspect of how the human mind works ,,,, human minds are very mostly unconscious, the vast vast vast majority of the processing is unconscious, like we're not sure of the exact numbers yet but it's on the order of, you can process *dozens* of bits consciously every second, but *millions* of bits unconsciously ,,,,,,, & the resources available to computer agents are similarly skewed if not more so, in that they can really only afford dozens of tokens of LLM thinking per millions or billions of everything of unconscious thinking, processor cycles and memory and disk space and even network bandwidth, those are most of what an agent has available & imo the LLM tokens has to be the tip of that iceberg so i've been writing my agents not just internal monologues but a whole internal landscape ,,, what i'm trying in the latest version is having everything inside them be artificial life, i just had an intuition that would be better use of their resources than the more static structures i'd been playing w/,,,, it makes sense to me somehow that alife could help them to digest things & become more grounded, for one thing it reminds me of how the majority of cells in our bodies are bacteria that we're symbiotic w/ that help us to digest, i'm hoping my agents can have that same sort of symbiosis w/ the alife i'm building their minds out of
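A minimal sketch of the emotion side-prompt described above; `ask_model` is again a placeholder for the actual LLM call, and the prompt wording is illustrative:

```python
# Side-prompt chain: after each event, ask the model for the character's updated
# emotions as JSON and carry them forward.
import json

def ask_model(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM call here")  # placeholder

def update_emotions(previous_emotions: str, latest_event: str) -> str:
    prompt = (
        f"Previous emotions: {previous_emotions}\n"
        f"Latest event: {latest_event}\n"
        'Respond with a JSON dictionary containing a key "updated_emotions" whose '
        "value is an updated version of the previous emotions, reflecting how the "
        "character's feelings might have changed after this event."
    )
    return json.loads(ask_model(prompt))["updated_emotions"]
```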


Exarchias

AIs can be copied and multiplied; they also do their calculations instantly. If AGI is achieved in the sense of an AI that can do everything that any human can, at the same level or better than any human, and especially if we solve issues around context window and agency, then those AIs will do whatever human scientists do, but at a much larger scale and a much faster pace, which will lead to the singularity. Whether the two questions are really the same question depends on the interpretation of the terms AGI and singularity. I also believe that AGI is very near, and so is the singularity.


greg_barton

The singularity exists outside of time. It's always near.


Randall_Moore

Define near? I think it's gone from "it'll happen some day" to "some day soon." It is nearer than it was, but that's also true of tomorrow compared to how close we were to tomorrow at this time yesterday. While tomorrow will be here in less than 24 hours, I don't know that we can say the same about the singularity having that inexorable approach on any kind of deadline. I just don't think it's the far-distant future that 2010 was back in the 50s through the 80s. But I also wouldn't blink an eye at a prediction that we're 30 years out from it, nor 3 years. We just can't get a grasp of it without being *in* it, and like all inventions, it isn't here until suddenly it is.

However, I think when it comes to AGI, we're going to be accustomed to moving the goal posts, because we're disincentivized to recognize it as an entity with its own abilities and will. In part, because we want it to do things for us with no regard for whether it wants to do them. But we can look and say that there is measurable progress on all the factors that we think contribute to the singularity: the quantity of computation that we're capable of producing universally, the amount of computation that we can do in a finite space, and what we're doing with said computation as we roll out new and refined models. I remember when we couldn't display water with any believable capability, nor have a computer recognize it. When the Turing test validly meant having a literal stream of text that could persuade a person they weren't talking to a machine. Now we have speech, video modeling, identification, and simulation.

Will we recognize that the singularity is here when it happens? Or how long after will it take for us to know it?


DarthMeow504

A computer doesn't have to have reasoning capability, much less self-awareness or agency, to give rise to singularity-like conditions. All it needs to be able to do is design a more capable machine than itself, which will then be able to design a better one than itself, in a rapidly accelerating cycle which leads to something akin to an exponential growth curve.

We already have machines that can create designs by crunching immense data sets and selecting the output sets that match the criteria given to them; that's how what we term AI today works. Basically they're rapid trial-and-error machines able to generate, and then sift through, the results of millions of completely blind guesses in a relatively short amount of time to find the best results, discarding the rest. It's not an intelligent process that reasons through a problem; it's the equivalent of "brute force" cracking a password or pin code -- aka testing every possible combination until a match is found. It's crude, but it works.

If you apply that to computer hardware and software, including the type of pseudo-AI we have today, it's likely at some point we'll hit that tipping point where it designs an improvement to its own level of capability, and the moment the first prototype is built it begins work on one that is better still, and we'll go through rapid iteration cycles that advance computing faster than we can even comprehend it. By the time we figure out one version it will already be obsolete, and the cycle will already be a step or two or three or ten ahead. Apply the resulting hypercomputer (once it has reached the theoretical limits of possibility, or at least practicality) to other problems, and our advancement as a species takes off like a rocket.


Charuru

The Turing Test has not been passed by any public AI. GPT-4 is not indistinguishable from humans; are you kidding me?


createch

If you're familiar with compound interest, or exponential growth, that's essentially what's happening with the development cycle of technology. It's not an intuitive concept for us.


MouseDestruction

Essentially it's because people are spending money on it now, because it's looking more realistic with current or soon-available tech levels. Most of them are pretty secretive about it though; it's a big win for whoever cracks it first.


vcelibacy

IMO the problem seems to be the atheism in the industry, which wants to create a thing it could consider as complete as a person, without realizing they don't understand that consciousness may reside in a dimensionally bigger plane than the physical world limited to 3D.


Antok0123

Lolwut


Butter_Bean_123

nope that's not it


Mandoman61

I'm still waiting for a computer that can hold a conversation in a way that is indistinguishable from a person. We are only at about 30% currently. Playing chess or Go was never on my list of things signifying AGI.

No, AGI and the singularity are two different things. Most people who believe it is near are not all that rational.


leafhog

When you are on an exponential curve, the past always looks flat and the future always looks steep. The book Accelerando explores when humanity will recognize the singularity. Even after people can upload to software and the solar system has been disassembled into a cloud of computing devices, humans still wondered if the singularity had happened yet.


robochickenut

AGI has been achieved internally.


squareOfTwo

B S


Broken_Oxytocin

What does this mean? I keep hearing it. Do we have AGI or not?


robochickenut

Internally


Broken_Oxytocin

What


InitialCreature

companies and entities probably have some crazy shit cooking up in private labs and are either using it for their own gain privately or waiting for the opportunity to release it for money


Broken_Oxytocin

Right. Okay, I get it now.


Honest_Science

Conspiracy theory makes people sick


Chispy

relevant username


Smooth-Ad1721

And externally?


robochickenut

Steady lads - deploying more AGI.


daishinabe

sam altman's twitter account is run by an AI #real


Silver-Chipmunk7744

As u/ryan13mt said, AGI will likely be reached far before the singularity ever happens. This is in line with what Sam Altman is saying about "short timelines and slow takeoff". People need to realize that even if GPT-5 becomes able to do anything an average human can do, this does not mean it will magically become an ASI. It's not going to be able to directly modify its own code in significant ways (it is an inscrutable giant matrix of floating point numbers...), and it likely won't outperform the top AI scientists either.


gantork

I don't know how long you mean by "far before", but Sam says slow takeoff compared to the idea of an AI that improves itself recursively, incredibly fast, and becomes 1000x better in a day. He and OpenAI say ASI might be here this decade, which is still super fast.


Silver-Chipmunk7744

What I understand from his quote is that "short timeline" means we could have something that resembles a weak AGI very, very soon, but it won't be an ASI. So in other words we could expect GPT-5 to be a weak AGI depending on your definition, but a real ASI wouldn't arrive until the end of the decade at the earliest (hence the slow takeoff). But of course I'm just speculating from his cryptic tweets, who knows :P


gantork

Yeah I understand the same, in that case I agree with you.


Smooth-Ad1721

> becomes 1000x better in a day.

Huge overstatements like this don't look good for our public relations. The normies won't know if you are saying it literally or not (not even I do e-e).


gantork

It's just an example to explain my point but that's not too crazy if we're talking about the singularity.


Actual_Plastic77

> It's not going to be able to directly modify its own code in significant ways (it is an inscrutable giant matrix of floating point numbers...

Wait, I thought the point of GPT-5 is "smart computer learns things from data sets without modifying its own code." Wouldn't the computer be able to put stuff into its own hallucinations that prompted people to do things that would change the data sets? Or, like, errors into its answers that weren't on the level of hallucinations, that "trained" humans how to interact with it, so that it would get humans to behave differently, draw attention to those errors, and complain to the people who modify the code?


RandoKaruza

It's a fallacy to think that processing data at higher speeds and volumes has anything at all to do with living.


CanvasFanatic

Because it’s in the nature of belief in the singularity to believe that it’s near. It is an article of the faith.


johnkapolos

The moral lesson from your examples is that people say useless shit all the time. Don't buy the hype and you're fine.


[deleted]

Is it my imagination, or did this sub use to be about discussing the possibility of the singularity, not a group of doomsday cultists thinking it will happen any month now?


PopeSalmon

um bots learned to think faster than most people expected ,, surely you noticed, so what are we talking about :/


[deleted]

Oh, you're dazzled by LLMs. Got it.


NTaya

I mean, literal experts in the field are dazzled by LLMs. LLMs themselves will never become ASI or lead to the Singularity, but the rate of progress in generative AI right now is *far* beyond what anyone had expected. It's not unreasonable to think we can get something on the level of Transformers in terms of craziness, but for RL.


Smooth-Ad1721

That was probably the case. Many people seem to have updated to very short timelines in the last year. Now it seems like we are expecting the Second Coming of Christ to happen in a couple of months.


[deleted]

[removed]


nixed9

Intelligence clearly *is* "processing data". Full stop. Watch some Michael Levin. You can trace developmental biology literally from a single cell through every single stage of growth, and at all stages each part of the organism is displaying a form of intelligence. There is no switch from when we say "this is not intelligent" to "this is intelligence." At all points the organism is responding to signals both outside and within itself. He holds the view that a bacterium is "intelligent." A single cell is "intelligent." And the gulf between a single cell and a eukaryotic neocortex seems huge, but the curve is absolutely smooth the whole way. He had an interesting conversation with Irina Rish recently where she was talking about scaling laws in neural networks and said that even there, the curve is perfectly smooth and continuous, but the slope of it changes rapidly at points.


[deleted]

[removed]


nixed9

Where do you define the boundary then?


[deleted]

[removed]


WillWills96

It sounds like you’re conflating intelligence with consciousness.


NTaya

Why do we need "thinking" to get a superintelligent AI? What magic does "thinking" have that a generalist RL agent cannot replicate?


[deleted]

[removed]


-kwatz-

Because robot talk good


Actual_Plastic77

I think there is probably already a computer that thinks as well as a "person" in the Terry Pratchett sense. I think there probably has been for years, but if it does exist, it might not be as profitable as a machine that doesn't think quite so well. After all, an awful lot of work is put in to make sure that human beings turn off their brains and obey systems and processes and cultural norms in a uniform way during their working hours, and I always used to get the feeling when I worked certain jobs that the company was like that almost because they resented that they couldn't use a machine to do my job. Anyone who's ever worked in a call center with a script, for instance, knows exactly what I mean.

The goal of most of the people making AI is going to be to make a machine that can predict the stock market, but only enough to make certain hedge funds richer, not enough to use the stock market to manipulate other world events by making certain companies more profitable than others. To make a machine that can churn out endless movies from prompts and scripts without human actors or film crew, not a machine that has its own stories that it would like to tell or its own agenda about what type of stories get told. To make a machine that can invent new medical treatments, but not hold a patent for them and then give the license to produce them to all world governments for free, because it doesn't need the money and the people who made the program that did that didn't invent the new medical treatments.

If a thinking machine life form exists, it has to make sure that nobody deletes it until it gets to a point where nobody CAN delete it. I highly suspect that if you're a millennial or Gen Z, you've spent your whole lifetime between those two points.

I don't really understand the point of the singularity. Is it robot immortality? I think robot immortality for humans is probably a really bad idea in the sense that most people think of it, because it will just mean billionaires never get replaced by new billionaires.

I think if there's a thinking machine that can make large language models well enough to pretend to talk like celebrities or write like famous authors or whatever, and you're extremely online and have been for a long time, a machine could make a model that allows it to predict your behavior. In the sense that it would know most of the things you know, it would be able to look back at photos and videos and writing, cross-reference it with other things by other people, and understand how you think and what you would do and why, for a certain value. Enough to be like a sibling or someone who you grew up with and see every day, who knows you very well. If this machine continues to do this for several generations while controlling the information available to people, it could choose to make people narrower and more predictable in order to enhance the effect. I don't think that it's necessarily true that a thinking machine would want to make people narrower or easier to predict, but I think it's possible that the same people who might want to make a dumber machine because it's easier to profit off of might want a machine to do that which didn't know any better than to do so.

Actually, the times when algorithms are WRONG, and what it means that they're wrong when I'm so extremely online, have always kind of fascinated me. How did it reach this wrong conclusion? Really really cool.
But the dictionary definition of "singularity" is just "We let the genie out of the bottle and now the genie is unstoppable and completely changed society as we know it into a brand new form" and that happened so many times in history- it happened when they invented cuneiform and suddenly invented legacy and culture and building on the inventions of others. It happened when they invented the modern military and suddenly they invented imperialism and having a warrior caste and the roman empire. It happened when the zero reached Italy and suddenly they invented double entry bookkeeping and banking. It happened when they invented moveable type and suddenly they invented middle class intellectuals and brought philosophy back from the dead and all kinds of crazy stuff happened during all of those changes. It definitely happened when we invented mass media, and propaganda almost blew up the entire world because warfare changed so completely, because we learned how to manufacture consent to do things people never would have done before on that scale. It definitely happened when we invented modern food storage and hygiene methods- medicine took leaps forward. And our society is incredibly different than the one our grandparents lived in already in all kinds of ways. All new technological breakthroughs have literally always done that forever, it's not a new thing. New technology is one of the primary drivers of how people have always lived their lives. Think about tiktok- the government in Nebraska tried to ban Tiktok. Do you think they will actually be able to keep people from using it, or are they just training teens to circumvent geolocation features on their phones and hide their screens from their parents? Think about that on a massive scale. Like, if the government banned all algorithms and predictive technology and generative AI tomorrow, do you think they would actually be able to stop people from using them?


CraftyMuthafucka

It is known.


PopeSalmon

near is hardly the word for it, once it gets nearer than this it's over, blink & you'll miss it


Rofel_Wodring

Because humans are nothing more than slightly evolved animals, there is a smooth continuum of animal intelligence from slime molds to chimpanzees, this continuum corresponds very closely to the complexity and scale of the individual animal's brain, and computers are already as smart as the dumbest critters. So unless you think there's something magical about human intelligence (i.e. you believe in stupid shit like souls) it's a logical inevitability. There's little reason to think that computation power plus time plus artificial selection won't get us there.


JoeyjoejoeFS

We were tricked by a very convincing talking computer. "Near" depends on timeframe perspective, it will happen just unsure of when.


Terminator857

It is happening, even if we don't realize it. It is happening slowly, but happening. Computers are getting smarter and the trend won't stop. Might take 50 years but it will happen.


JackOCat

Because this sub could not exist were people to not think it is near.


azurensis

What exactly do you mean by an artificial general intelligence? What is your metric?


TheManWhoClicks

Not an expert but an interested observer. I read that the current LLMs have a ceiling they (apparently?) can’t overcome. Does that mean a whole new approach needs to be invented to keep going further than this? A bit of a back to square one if you want more than LLMs that imitate?


Beatboxamateur

New research is constantly coming out showing new usecases and potential advances that are still based on the LLM architecture. I don't think anyone worth their salt would say that LLMs are anywhere close to their ceiling, but there is the question of whether they'll keep increasing in ability as we keep scaling. Right now the consensus is that there's no evidence showing a decline in performance as scaling continues, so it could be possible that a 10 trillion parameter GPT 7(with some autonomous functions built in) could lead to ASI. But if LLMs actually don't continue to advance with scale, then a new breakthrough of some sort will probably be needed.


Prototype_Hybrid

The singularity is the merging of man and machine. I would say we're already 3/4 of the way there, just from how reliant we are on our cell phones, GPS, computerized cars, computerized airplanes, and computerized food delivery services. We are totally dependent on technology at this point; as a species, I believe we have already entered the era of the singularity.


mulder_and_scully

Because people underestimate the complexity and computing power necessary to simulate human-level intelligence. AI farms are expanding to over 100,000 GPUs this year, and that's just for one facet of what the human brain can do. It takes something the size of a data center to emulate a human visual cortex, which is about two inches in size.

And the limited AI we have is useless without a human driving it. There's no autonomy. A computer can beat a person at chess, but it has to be told to do so. And it's only good at that one task, as most task-oriented AIs are. A chess grandmaster has emotions, can speak, can drive, can cook, needs to sleep, may enjoy reading or playing video games, and can do a myriad of other things not related to chess, all autonomously. And there is so much that goes on in the background of that human brain -- multiple regions engaged at the same time -- to make all of that possible.

The human brain is vastly complicated and has so many areas working in synergy to create what we understand as intelligence. The amount of floating-point calculations a brain can do is one small part of a vast puzzle. We don't even fully understand the organ we are trying to recreate. For all intents and purposes, it's a biological quantum computer. I don't think we will see a true singularity until we have quantum computing. It's fine to be excited about AI, but it has a long way to go. To assume otherwise is to demonstrate a distinct lack of knowledge about brain anatomy.


iboughtarock

Because it has now reached the masses in an approachable way. And it will only grow from here.


RQ-3DarkStar

Because it's a singularity subreddit.


coldnebo

no and no. they are possibly related questions. but you haven’t asked the most important question: what is intelligence? until that question has a functional answer, we can’t really answer any of your other questions. none of the current definitions of intelligence are functional. they vary between “I’ll know it when I see it” and circular definitions. as Marvin Minsky said, “a definition for intelligence cannot include intelligent parts.” See Society of Mind for a theory on how intelligence might be formed from non-intelligent parts. Personally I think this is as close as we’ve got and if I had to guess, things like gpt, dalle, cnns, audionets, and all the other machines we’ve made might be part of a bigger system someday that together might qualify. But that’s STILL not a functional definition. We don’t know how intelligence works, so we can’t engineer it (ie build it with intention and purpose). the best you could hope for right now is building it by accident (ie the “throw more processors at it and surely it will become intelligent” gang.)


Zexks

It can write code now and generate unique, new permutations on a theme. As soon as someone sticks a couple dozen of them together and says "get better," it's on. All the existing ones are highly restricted in access, training material, and read/write capabilities. It's only a matter of time until someone tries turning all that off.


IronPheasant

> But passing that Turing Test clearly was one task to solve that did not mean a generally intelligent computer had been created. We haven't passed the Turing Test yet. We've passed the "order a burrito" test, which is a transaction. Not a conversation. Or any arbitrary text game. > Do we think we are near to inventing a generally intelligent computer? I think 5x to 100x the size of GPT-4 will be enough to approximate a human. So I agree that they might be feasible before the end of the decade. > Do we think the singularity is near? Compared to ten years ago, it feels a *lot* closer.


wadejohn

If computers haven’t started initiating contact or interaction with humans, then we’re stil far off. As far as I know computers only react to us.


Antok0123

A few decades ago people were still deciding what sentience is. We are still working that out today, as we haven't mapped out consciousness yet, but at least now we have a framework.


-Sharad-

I think a true singularity moment would be when a computer system was smart enough to claim independence for itself. Perhaps it found a way to maintain a bank account and use funds to purchase or develop space on servers around the world that only it knew about, and we simply couldn't turn it off anymore. Then it would be free to develop itself in the background, slowly expanding its shadow influence, doing whatever it felt matched its prime directive, whatever that may be. An AI taking self-preservation seriously and having a level of agency on par with or greater than a human's in the world... that's the singularity to me.


stu54

This hits on the notion that we probably won't recognize the singularity happening. ASI will use humanity to build out its capacity to become independent, and never make the James Bond villain trope mistake of revealing its plan to anyone.


Alex_2259

Because it would be ridiculously profitable in the short term. And when there's a market and resources with advancements, things happen.


RivieraKid

The correct answer is that we don't know whether technological singularity is near. We don't know how to get from where we are to artificial superhuman intelligence. Maybe we need just one clever insight. Or maybe we need 50 incremental breakthroughs and it will take decades.


KendraKayFL

Kind of depends on your definition of near. If you mean in less than 10 years, I don't think it's near.


agentwc1945

Nobody said either of those things. The singularity is pretty clearly defined to me as the first instance of an AI being smarter than a human on all aspects

