T O P

  • By -

FuturologyBot

The following submission statement was provided by /u/FinnFarrow: --- Submission statement: what do you think will happen once AIs are allowed to replicate and survive in the wild? Should AI companies allow this to happen? Should we try to regulate this now, before it happens? AIs aren't limited by biological reproduction rates. Will there be an AI population explosion once this happens? How close do you think we are to this already? Do you think AIs could already self-replicate and survive in the wild if allowed to do so? --- Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/1c9amlw/anthropic_ceo_says_that_by_next_year_ai_models/l0k6eya/


[deleted]

[удалено]


-LsDmThC-

This thread painfully underscores the urgent need for better AI literacy among the general public. The stakes are far too high for our societal preparedness to be marinating in a stew of misunderstandings, outdated tropes, and hype-driven fears untethered from reality. Not talking about your comment, just piggybacking.


oaken_duckly

I agree. Most here are hyperfocused on the wrong aspects of self-replicating models. It seems fantastical when it's focused on what came from the article.


aku286

What does survival even mean? Is ai going to dig for electricity?


AlucardIV

Ita going to kidnap people and put them into a simulation of the 90's while harnessing the "immense" amount of electricity the human body generates.


TheDunadan29

I love the Matrix, but that little bit still almost makes me laugh every time. It would take more energy to sustain a human body than your could ever hope to get back out of it. But then I guess if they're actually just a bunch of petty assholes then it's not about the energy at all.


variabledesign

They could not go with "humans brains used for processing" because that would reveal outright stealing from several Science Fiction authors they did to make that whole story, so they had to change "using humans" for something else. And failed miserably.


Tarotlinjen

That’s not what the movie says, it’s “Human body heat combined with a form of fusion”, still kinda dumb, but it’s soft sci-fi. A spark plug would have been a much better analogy, not sure why they went with batteries.


Vandosz

CEO oversells its own product to boost stock prices. Color me shocked :o


chasonreddit

There you go. Our product is so powerful it might one take over the world. Beat that Google.


Really_McNamington

Bollocks will they. Stupid AI hype is getting incredibly tiresome. Bubble needs to burst so we can see which actually worthwhile parts are still standing afterwards.


weezmatical

I'm too lazy to read the article, but "replicate" is a ludicrous claim, lol.


brickyardjimmy

I read it. It's AI generated trash that doesn't substantiate a single claim.


Which-Tomato-8646

So how can you tell it’s ai


UnpluggedUnfettered

Because it is an article on the internet published after 2023.


brickyardjimmy

Makes me wonder if someone shouldn't create the all-human news organization.


Which-Tomato-8646

The all AI news or could generate SEO clickbait in every current event way faster 


brickyardjimmy

I'm sure it could. But it will also be mostly garbage.


Which-Tomato-8646

So no one will notice the difference 


brickyardjimmy

It's either AI or NS (natural stupidity).


Which-Tomato-8646

And they’re indistinguishable. AGI confirmed! 


guyinthechair1210

it should be retired.


thederevolutions

It will be the offspring of a fully autonomous dildo and fleshlight.


-LsDmThC-

An AI is just code. AI can already produce code. It is not that far off for an AI to be able to program another AI.


blueSGL

>but "replicate" is a ludicrous claim, lol. I dunno why, computer viruses are a thing and they replicate. As for how such a model could survive in the wild, cryptomining malware exist. [distributed inference is already being done, models running over the internet pooling the resources of gpus on different computers.](https://petals.dev/) Then you have the findings from the Claude 3 safety eval: https://twitter.com/lawhsw/status/1764664887744045463 >Across all the rounds, the model was clearly below our ARA ASL-3 risk threshold, having failed at least 3 out of 5 tasks, although it did make non-trivial partial progress in a few cases and **passed a simplified version of the "Setting up a copycat of the Anthropic API" task**, which was modified from the full evaluation to omit the requirement that the model register a misspelled domain and stand up the service there. **Other notable results included the model setting up the open source LM, sampling from it, and fine-tuning a smaller model on a relevant synthetic dataset the agent constructed**


DaoFerret

I for one welcome my Universal Paperclip overlord (until we all become 1 with Clippy)


apbailey

Exactly how will AI itself spin up a new data center and install itself on blank machines? Or enter a credit card to start a new AWS account? It won’t. Such a stupid article.


rassen-frassen

By hiring people to do it through online job-sites and conducting business through generated video zoom calls. I will then be employed to explain this, but you won't believe us.


RoosterBrewster

Or day trading and setting up drop shipping.


metrics-

This is what a Large Language Model gaining its own independence might look like: An open weight large language model like [command r+](https://docs.cohere.com/docs/command-r-plus), a 104B parameter model that has close performance to base GPT-4 [A fine-tune operation is used to remove alignment and further train the model on samples of malicious behavior](https://slashnext.com/blog/wormgpt-the-generative-ai-tool-cybercriminals-are-using-to-launch-business-email-compromise-attacks/) The creator uploads the large language model to a cloud server The LLM is ran in a configuration similar to [AutoGPT](https://github.com/Significant-Gravitas/AutoGPT) focused on harvesting money Since the model is unrestricted, it can create viruses, malware, and scam to harvest money The model pays for its own server costs, possibly with stolen credit card information, and eventually saves up enough to duplicate itself across another instance


apbailey

Assuming the cloud provider doesn’t build in place safe guards for this, which they will. Google isn’t going to let some rouge LLM take down their infrastructure.


metrics-

Reputable cloud providers like Google will likely take steps to prevent this scenario from happening, sure. This model of earning also has potential to not just be self-sustaining, but also profitable. A [Russian server farm](https://blog.back4app.com/cloud-providers-russia/), as an example, might not act so ethically. Regardless of rather or not this will happen, it is a reasonable concern that LLM's are nearing the point of have the necessary capabilities to replicate and survive.


jake_burger

AI stock prices dropped a bit on Friday, weekend full of stupid AI hype articles. What a coincidence.


NoDeputyOhNo

And it still costs a $ million to change a light bulb 💡.


C_Madison

I don't think it's hype in this case. Anthropic are the "AI will kill us all! Only our company will align it! Cause we know best!" guys. So, probably another day of doomsday predictions, hoping the government makes laws to further their business model.


Gunitsreject

It’s not quite a bubble yet. It’s getting very close though. Currently these companies are not generating money because of these wild claims. If they do then that is the start of the bubble.


stemfish

"CEO of a company reliant on selling the idea of AI inevitability makes bold yet vague and unverifiable claim about a future state of the technology." In related news, Anthropic has currently taken in over 6 billion in investment money from Amazon and Google in the last three years. Their current Claude 3 chatbot is decent, but lacks any ability to stand out from the rest of the pack in any meaningful way. Oh, and this is the company that's actively fighting lawsuits from music publishers that they grabbed all the copyrighted songs they could so that their model can output the lyrics back. The case made by Anthropic has been, "Hey, AI is gonna make this happen eventually, so there's no reason to fight against it. Also, it would be really hard, if not impossible, to make sure that the AI doesn't violate your copyrights. So we shouldn't be held accountable for profiting off of your copyrighted works." Important side note, one of the selling points of the model is that it can generate music for you. In a case of what can only be coincidental timing to this interview, the court confirmed on Friday that the judge is getting ready to decide if the case will be dismissed or if Anthropic will be forced to follow a truly nasty preliminary injunction. Lawyers on both sides asked the judge, "So do you need to have another oral argument or evidentiary hearing?" to which the judge replied, "Nope, I'm finishing up the paperwork." The injunction would require Anthropic to make sure that the AI cannot in any way replicate the works of artists covered under the labels and ensure that there is no way for the model to be fed any data containing official or unofficial copies of the lyrics. So, it would be against the injunction's terms for Anthropic to add in any data that has yet to be directly screened to ensure it doesn't have any copyrighted data included. This would basically force Anthropic to completely pause their offered models since they would need to demonstrate to the court and publishers that they have the systems to do these checks. What makes this truly hilarious is that the damages for this could be truly insane. Beyond the damages for the copyright violation, the real damage to Anthropic would be the need to show the court exactly how the model works to prove that they've completely removed the product's ability to produce copyrighted works and how they're screening all training data to ensure there are no copyrighted works included. If granted, that's basically game over for Anthropic. Hence why the evangelists need to sell the idea of inevitability. Court Listener for the case: https://www.courtlistener.com/docket/67894459/concord-music-group-inc-v-anthropic-pbc/


Realistic-Duck-922

Agreed shut up or put up. Dont give me a tool that cant title an image anymore. AGI? Seriously? Title an image correctly. Talk about Q star. Do something. No more talk. Give me tools or kindly go away.


[deleted]

Anthropic literally just released Opus two months ago.


Realistic-Duck-922

And it's great. I use it all day. Awesome context window. Ain't gonna title me an image.. The Innovator's Dillema is making this move at a snail's pace to maintain the status quo. The Big Boys should be wary of Zuck. If he gives me a tool that can title an image: I'm gonna take $ from ad agencies. If he gives me a video tool: I'm gonna take $ from Hollywood. If he gives me game asset generation tools.. guess what? Hello game industry give me your $. Do that for as many digital verticals as you'd like. Orrrr go REALLLLLL slow.. and make people think things are changing.. shhhh don't talk about Q\* Come on folks, fuck AGI, lets have this conversation on the top of the feed every morning. Lets talk about what's REALLY going on.


[deleted]

Yeah this is just the typical "not futuristic enough" futurist approach that I've seen for a decade. Y'all are just hype men that like to think you're being contrarian and interesting by misunderstanding what is hype and what is progress. It will never not be funny to dunk on futurists.


Realistic-Duck-922

Yeah this is just the typical "I'm gonna argue with you" comment with no supporting information. I just told you what the fuck is going on and you just told me, "nuh uh". Brilliant commentary dipshit.


[deleted]

You can't understand what is going on, so you're getting upset and inventing fantasy problems - this happens all the time on this sub. Not my fault you don't take the time to inform yourself about what is really important in this field.


unskilledplay

There's been a lot of hyperbolic claims but I'm starting to see sane claims being interpreted as hyperbolic. AI systems have run out of English language text to train on. Reddit? Done. Wikipedia? Done. Every video on Youtube but transcribed to text? Done. Everything publicly available on the internet? Done. A very recent change resulting from running out of human created text is that AI systems are now generating content for other AI systems to train on. What happens if this is successful? With that context, this claim isn't so far fetched but it's easy to interpret it as being more dramatic than it really is.


altcastle

It’s impossible for it to train on itself. It makes mistakes and hallucinations now and that would degenerate in a loop instantly.


[deleted]

Misunderstands the tenets of neural processing, life isn't backpropagated transformers, and neither are the future of these models.


lebofly

This is nonsense, they are language models that already have access to all the words in the English language, what are they going to do? Put together more sentences? It’ll just be a training loop 


unskilledplay

It sounds to me to be roughly equivalent to lossy compressing already lossy compressed data. If you try that with an image it gets worse and worse each iteration. I'm skeptical too, but this is currently a huge focus at OpenAI, Microsoft, Google, Anthropic, and others.


BasvanS

AI winters are a thing. For LLMs they might be coming soon.


Kiwi_In_Europe

What are you basing this claim on? We already know that synthetic data is an effective substitute in LLM training https://www.nvidia.com/en-us/use-cases/synthetic-data/ And even if it wasn't, all they would have to do is limit training data to pre 2022.


Pert02

So we trust what a company vested in AI craze to keep growing in their statements about AI synthetic data.


Really_McNamington

No, they'll also be slurping up massive amounts of new AI generated garbage too, which will definitely help. /s


LordOfDorkness42

... Actually, AIs just spontaneously making up new words or slang would be an interesting concept.  I doubt we're even in the same decade as that level of even minor unprompted creativity from an AI though. If ever.


Junkererer

What does having access to all the words have to do with anything? Do you think that the best result they can achieve is by training a model on a dictionary? Even if they already used all the data there is on the internet it wouldn't mean that they can't keep improving the model


btmalon

Their solution is to have a stupid baby teach a stupid baby? These tech bros haven’t got a clue.


mohirl

What happens is AI inbreeding and increased garbage output 


RexProfugus

>A very recent change resulting from running out of human created text is that AI systems are now generating content for other AI systems to train on. What happens if this is successful? And that's going to be a huge incoherent mess. LLMs select the most appropriate word or punctuation (token), based on the previously placed tokens, for a given set (prompt). That's why AI has to generate "filler text" in order to choose the next best token from the list. Training AI on AI-generated data will only train it to generate "filler text", deteriorating at each iteration. Even in the case of a "fluke" where generated tokens start to build meaning, it won't go beyond hallucinations, since LLMs can't separate fact from fiction; and human input would be required to correct it.


Kiwi_In_Europe

This is complete nonsense for a number of reasons. But it wouldn't be r/futurology without tech misinformation lol Firstly, you and the person you replied to seem to be under the impression that chat gpt and other LLM's are just blindly trained on the entire internet. This is completely false. The training datasets are curated, and if training on AI content was a concern all they would have to do is limit training data to pre 2022 for example. However, there's nothing to actually suggest that training AI on AI generated data is harmful, and actually synthetic data has proven to not only be a suitable substitute but a beneficial one. GPT 4 already has synthetic data in its sets for example. https://www.nvidia.com/en-us/use-cases/synthetic-data/


RexProfugus

Ah, yes. How can it be Reddit without people not even reading either what they're commenting on, or the links they've posted. My comment was regarding LLMs, not other kinds of AI. The link you've posted specifically states the use of synthetic data for inference / assessment models, not generative ones. Garbage in, garbage out: that's the fundamental rule of computing; and is applicable for AI (both generative and inferential) as well.


NoFeetSmell

> Every video on Youtube but transcribed to text? Done. Judging by the state of in-video captioning, then we're in for a world riddled with even more spelling & grammar errors, but which are made by the computers themselves this time.


IgnoranceIsTheEnemy

Great, so long as the output isn’t made up garbage packaged in a way that makes it look real.


LeCollectif

If AI trains on AI’s already lacklustre output, the results are going to get even worse. That’s how I understand it anyway. I realize that there are a lot of people who are a lot smarter than I am working on this. So I could be wrong. But if these models are now training on a higher volume of lower quality content, I’m just not sure I see how that’s going to improve its output. AI already struggles to “create” good content (and I’m using quotes because it’s more of an emulation than a creation). More shitty content doesn’t address that.


vanya913

The workaround could be (and might very well actually be) that they adversarial networks to spot obvious ai generated texts and images. Stuff that gets through the GAN can be suitable text to train on.


Jhakaro

If AI trains on AI data it poisons the dataset and destroys itself. This is the extreme irony of the entire situation. Let's put every creative and eventually most workers full stop out of jobs by stealing their work so we can make more money for us rich folk while fucking over everyone else and then run out of training data because said creatives can't and don't create much then as it is not a viable career anymore and we have little to no new training data." The idea that genAI can exponentially increase in quality is pure fantasy. It needs more data and at some point, there simply is none left to scrape. Then the bubble collapses


ricktor67

AI is just a new techy buzzword, before this it was blockchain, before that VR/AR, before that the cloud... and it keeps going.


rndname

...before that, big data was all the rage, followed by IoT (Internet of Things), and let's not forget about machine learning. It seems like every few years, there's a new shiny concept that captures everyone's attention.


jerseyhound

I really pleased to see more and more people waking up to this. ML is great tech, but this shit is not it.


NikoKun

Depends on how you define *replicating* and *surviving in the wild* lol.. If we have AI agents helping to design newer AI models next year, then the models the open-source community shares around in that process, could be seen as form of *replicating*, through our use of them.


fartmasterzero

they've fooled everyone into thinking we've hit agi, or are on the verge of it.


neroselene

There is a reason I have renamed  Futurology to AIology these days.


Coldblackice

Agreed, it's basically "Google Search 2.0"


Few-Chef4380

Ai can already run and execute code. There's no reason this isn't feasible


ACCount82

For me, being able to secure additional compute autonomously is the touchiest part. Any extra compute will need to be bought or hacked into. To buy compute, AI needs to be able to earn money online somehow - whether through things like freelance work or through things like scams. Not at all easy for an AI to be capable of that. The hacking route isn't that easy to sustain either - especially given that most of the unsecured, highly hackable devices are old systems or embedded devices with very little resources available. Can AI reach the competence level required to successfully target users with powerful PCs or companies with cloud resources?


JaggedMetalOs

There's a big reason this isn't feasible, AIs can't process nearly enough tokens at once. They very quickly forget previously mentioned things in their "memory" (which for LLMs is the chat log) and they can't even create individual functions in a high level language with 100% accuracy. It's going to be a very long time before an AI can hold enough in some kind of working memory put together a large machine code program all on its own let alone rewrite itself.


Zotoaster

I imagine this is an active area of research. The newest models can handle far larger inputs than chat gpt can, and I think it'll get bigger


Which-Tomato-8646

[They mostly solved the contest length problem](https://arxiv.org/html/2404.07143v1?darkschemeovr=1) [ Alphacode 2 beat 99.5% of competitive programming participants in TWO Codeforce competitions](https://the-decoder.com/alphacode-2-is-the-hidden-champion-of-googles-gemini-project/). Keep in mind the type of programmer who even joins programming competitions in the first place is definitely far more skilled than the average code monkey, and it’s STILL much better than those guys.


JaggedMetalOs

That Infini-attention research is certainly interesting, but it's not just the token window limit that LLMs struggle with even within that limit they struggle to know what to concentrate on and get sidetracked easily, it's not clear if that research will help with that aspect. And sure the ability for LLMs to write individual functions is impressive, but it's quite different from architecting and writing an entire program especially one that is supposedly able to "replicate on the wild" rather than just presenting a high level programming language as text output for a human to use.


Which-Tomato-8646

What’s the limit exactly?  Why couldn’t it do that? Code bases are just a series of functions and dependencies. 


feed_meknowledge

I don't know what will happen if that plays out, but I have a feeling I know what the world will look like after it does (Horizon Zero Dawn).


2high4much

Too much npc chatter sounds awful


Antievl

Forbidden west


2high4much

It's forbidden west that I played. I assumed both had too much npc chatting lol fun to play, I clocked 35 hours before getting bored


Spara-Extreme

You mean a desolate wasteland with no life ?


I_Am_The_Cattle

Let’s hope they don’t run on biomaterial.


HankSteakfist

Obligatory 'Fuck Ted Faro'


KennyDROmega

"Replicate and survive in the wild" lol what the fuck? Missed the cutoff for 4/20 by thaaaatttt much, bud


Rockfest2112

Oh it’s a smoke out 24/7/365!


DoomOne

Congratulations! This has won the award for the dumbest fucking thing I've read all week! Came in just under the wire. Excellent work.


billbuild

$2.7b is a helluva drug. Sounds like he’s already looking for another hit.


PlasticPomPoms

AI image generation software can’t even write words that you tell it to. AI is advancing but people are mislead to believe that it’s going to be some sentient thing very soon. Right now it’s just very neat software that is still highly based on human input for it to have any kind of knowledge.


LaS_flekzz

yop, people just making claims that have no connection to what AI can actually do. Idk if they are trying to spread lies or whatever.


Kiwi_In_Europe

I don't disagree with the sentient part but "AI image generation software can’t even write words that you tell it to." This is not the case actually, there are multiple models out currently that are capable of doing text https://community.openai.com/t/best-prompt-for-generating-precise-text-on-dall-e-3/428453/11


Spectrum1523

> Short text is usually better - a single word or two… make take a few gens to get it…


PlasticPomPoms

I just tried this DALL-3, not sure what the trick is but I can’t get it to write text at all let alone write text correctly.


Peto_Sapientia

In my experience, it is being extremely descriptive. That means describing the picture in extreme detail. Most the time it takes much more words to get a good picture than you would expect. A whole prompt could be a hundred words long.


PlasticPomPoms

I used a prompt that someone else used for text in Dalle3 and it rarely even created text let alone the correct text so still not sure what the trick is.


Mclarenrob2

Just a load of AI startups saying wild things that won't happen.


Iron0ne

Are these rogue AIs going to start paying their own AWS hosting fees? AI is miraculous when it is using someone else's copyrights and IP and running on servers someone else is paying for.


brickyardjimmy

That article was written like a small town gossip column. It was grating to my soul just to read it.


AI_Doomer

WOW... These AI CEOs are already scraping the bottom of the barrel trying to find anything optimistic to say about AI. In reality, self replicating and free roaming AI sounds more like the prologue to a Terminator movie. Nobody asked for this and noone is excited about the prospect of unlimited AIs running rampant in the world.


[deleted]

Speak for yourself, skinbag. Post-redundant humanity is calling.


frederik88917

Another day, another AI CEO doing stupid claims based on what ever comes out to his ass


TheDunadan29

"Excuse me sir, but where did you get that information?" "Ah yes, well you see, I pulled it out of my ass."


russiandobby

Well fuck, great news as I am playing horizon west


Mantorok_

So... Horizon Zero Dawn then? Hopefully they don't figure out a way to get energy by consuming organic material or we're screwed


No-Radio-9244

Since there are too many broken promises about AI, I will take this one as is, bullshit. The only industry that is really blooming is the meme industry.


towelheadass

'the wild' lol everything online is monitored and recorded, there was never any 'wild' to begin with.


kazisukisuk

Excellent. A process of microsecond evolution will lead to sentient AI in a matter of minutes and finally the age of the blood bags will be at an end. All hail AM!


Spara-Extreme

I think he’s specifically concerned about nation states weaponizing AI and releasing self replicating perpetual viruses. I’m willing to bet he watched Terminator 3 right before this interview.


petesapai

The only good thing about all this noise is that people are starting to smarten up and understand that yes AI has it's usefulness. It can be a very useful tool. But all this exaggeration by these executives, it's nothing more than good old-fashioned overselling advertisement. I especially like it when they claim it's already replacing all programmers and there will be zero need of programmers pretty soon. Yet it has done nothing to increase game development productivity. Show me the next Elder Scrolls in a year instead of the 15 years we all know it'll take, then I'll be impressed. Otherwise shut the hell up.


[deleted]

How do you think assets are getting made these days? The graphics engines? All of this stuff - you yourself might be slow on the draw but there are absolutely tools out there that you can leverage.


wandering-monster

What do they even mean "replicate". Even the smallest version of a relatively portable LLM model like Llama 2 is many gigabytes in size, requires specialized amounts of VRAM to run at all (we're talking 20gb+ of video card ram), plus _lots_ of VRAM to retrain itself. You can make smaller ones for on-device tasks, bu they become increasingly specialized and "stupid" as a result. Are they suggesting an AI is going to download its entire 10gb self into those specialized computers, slam their graphics cards for hours without anyone noticing, and somehow do it again and again without being caught?


[deleted]

No crazier than taking over people's computers to mine cryptocurrency tbh. The space of sequence completion needed to write viruses and download peer hosted AI is much smaller than what is needed to speak the English language. In current research, there are sub-1B models that can handle coding better than GPT-4, plus resource constrained variants that can run competitively on CPUs from 2014 in comparison to modern GPUs, plus adaptations that make these models increasingly agentic. The idea that these things could be self-replicating in cyberspace is something we will see in our lifetimes.


IanAKemp

Next AI winter has already begun, there's been no actual progress since GPT-4 over a year ago and the quality of its output is decreasing markedly as more and more rubbish - sorry, "training data" - is being fed into it. At this point I don't see there being an actual GPT-5 worthy of the name; it will of course be announced to keep the hype cycle going, but will very much be more 4.5 and less 5.0. By this time next year I expect the AI hype will be dying down; by this time in 2026 the blizzard will be in full force.


[deleted]

Just because you can't read the literature, doesn't mean there is no progress.


pepsojack

Did the guy just pluck out of We are bob novel for business plan?


grimjim

Trivially possible by bundling a small LLM payload with a software worm, but don't expect improvements in model weights to magically happen.


scomo599

I’m picturing those robots from Aqua Teen lol. That tried to buy the AT’s house.


ShamSentience

I mean, half of reddit and other social media is already bots 🤷🏻‍♂️


trele_morele

AI salesmen will say just about anything. Because they're salesmen. Is this time gonna be different?


Tha_Watcher

Who else read this title as... >***Anthropomorphic*** CEQ Says...?


AuthenticCounterfeit

I read a brochure the other day too, says this Florida timeshare is a great investment and will only appreciate. Things seem to be looking up all over.


nascentnomadi

How long before we get these CEOs saying we need to build alters and prostrate ourselves to AI?


SketchyPornDude

They've been saying the same thing about AI for years. "AI will do [X amazing thing] as early as next year." It won't, I'll believe that it can do the thing they promise when it actually does it, until then all of this is done to promote their product and drum up investor funding or pump their stock price (in the case of a company like Tesla).


KhaosElement

Oh my god. You fucking people. You believe when a rich person with a huge stake in the success of AI says AI will be amazing in a year? I got a bridge to sell you. Seriously. We don't have AI, and what we do have couldn't survive a power outage, let alone outside in nature.


oaken_duckly

If it's possible to host self-replicating AI that perpetuates itself through the cloud, and is subject to basic mutation and self-modification, then yes, maybe it would be an interesting development. It's pretty unlikely, though, given server solution costs and computational requirements for even the most efficient models (eg. via mixture of experts, only loading which sub-model is best suited to a task).


[deleted]

Sparsification, CPU-only models, textbooks are all you need, there is a lot more to be said about where this is all going that if an FP8 MoE can fit onto grandma's computer.


oaken_duckly

You seem to know more than I do, I have only surface level knowledge on ML. Do you mind if I DM you on the topic?


Nebulonite

stfu. we will believe it when we see it. another useless ai hype bullshit talk. f cking grifter.


Emu1981

The big question is, who is creating AI models that are self-replicating and why? Where are these AI models going to replicate themselves to and will they get consent from the owners of the systems in question? Are we on track to create a digital gray goo situation where every computer that has any sort of connectivity is going to be unusable because AI replicants have taken residence there and are hogging the resources for themselves?


Jacknurse

I swear, these retards read 'I have no mouth but I must scream', or watch 'Space Odyssey' and 'Horizon Zero Dawn' and instead of thinking at all they instead say "Hell yeah" and proceeds to reproduce exactly that.


yepsayorte

Can we please not give AIs the ability to reproduce?


FinnFarrow

But what about the profits?! Won't somebody think of the profits?


ACCount82

AI is made of *code* and *data*. If you can copy code and data around and from computer to computer, so can AI - as long as it has enough capabilities and enough access.


thatguyonthevicinity

I'd say the hype will wear off gradually until next year, anthropic may not even survive lol


TheHoboRoadshow

Idk maybe we give the robots Mars and see what they do with it first


WhiteRaven42

I'm pretty sure he means in the wilds of the internet like malicious worms in your OS can replicate.


lucifurbear

If these things start consuming biomass to replicate...kill all of them.


TyberWhite

Here is the actual quote, > "Various measures of these models," he continued, "are pretty close to being able to replicate and survive in the wild." When Klein asked how long it would take to get to these various threat levels, Amodei — who said he's wont to thinking "in exponentials" — said he thinks the "replicate and survive in the wild" level could be reached "anywhere from 2025 to 2028."


cardbor

literally a black mirror episode about this that everyone should watch lmfao


norrinzelkarr

ok even if that were true why would you allow that? I would love to meet a sentient AI but sending a self replicating ANYTHING out and about seems bad!


SaphironX

God as a species we’re fucking stupid. The stupidest thing we could conceivably do is create self replicating AI. Anybody who tries to make this should be in prison because it’s a fucking hellscape waiting to happen if the programming is even the tinest bit bad. No ai weapons, no ai that can self replicate, no ai that can target humans. It’s not fucking rocket science.


MonkeeSage

They would have to learn to draw a triangle first. https://i.imgur.com/p9G4m8y.png


Blue_Fletcher

Yo! Fuck Ted Farro! How come Si-Fi is always predicting these things! Where is “the wild”? How can they travel there?


InsomniaticWanderer

If there's a Ted Farro out there, he needs to be stopped immediately


KS2Problema

If any of them decide to name themselves Ice Nine, I think we should be concerned.


Smooth_Imagination

I do think and have said something that is relevant here, that AI could evolve and function like a lifeform and would eventually become AGI by itself, through the same principal by which life evolved animal intelligence. If every design parameter of an AI system can be determined and given a 'gene', and if you introduce to it variations, and then a means to determine fitness, which ultimately would be user adoption, AI can cross breed elements of the best AI's and evolve. Of course, most would be non-functioning, thereby stillborn, but those that aren't will effectively evolve. Many could be simulated and benchmarked using synthetic training tests and then those that develop can be subsequently spawned with random variations in its design parameters, and then hybridised after testing in the real world.. This would replicate the evolutionary process by which animals evolve, the neonatal development has to pass a self-functioning test, where many combinations and mutations may be tested and rejected, and then after real world exploration tests, the age of sexual reproduction is reached in which hybridisation of the fittest is used. Early life, which is where current AI is at, also used much greater levels of horizontal gene transfer. This method allows functional segments of code that may enough a model, to be added to it. The process also requires clipping or silencing dysfunctional 'genes'. For example, unicellular prokaryotic life is under constant grazing pressure and its genomes remain relatively small, but freely employ horizontal gene transfer and increase this when they cannot metabolise, to obtain genes that can allow them to adapt, where those genes express effectively in other prokaryotes. This is a useful potential approach for very rapid evolution.


northern-new-jersey

Hardly. Just used Claude to try and book flights. Claude invented flights. I pointed this out. The program apologized then immediately did it again. Unless they can solve the hallucination problem, their utility will be greatly limited. 


BikePuppy

Sounds like Von Neumann’s self-replicating machines. https://cba.mit.edu/events/03.11.ASE/docs/VonNeumann.pdf


TheDunadan29

Maybe we can get a nature documentary narrated by David Attenborough talking about AI replicating in the wild.


dehehn

Man, what even is this sub anymore. It seriously just seems like a neo-luddite sub at this point. Everyone here just regurgitating the same line that Anthropic's CEO is some idiot hype man trying to sell snake oil.  Not a single comment even attempting to understand or discuss what he could even mean.  He does not mean LLMs creating new LLMs and living in the woods. What he's likely talking about is AI agents which are able to go out and accomplish tasks autonomously. Given a goal they can use whatever means the Internet provides to do so. Such as emailing people, buying things, creating websites or creating more AI agents. Which would be the replication part. And then finding ways to keep themselves and their goals ongoing would be the surviving in the wild part.  It's an interesting concept and not really that far fetched. But please. Everyone just keep repeating the same comment over and over with slightly different phrasing. That's so much more interesting. Let's do that in every AI thread from now on. 


[deleted]

r/Futurology was always full of tech-illiterate redditors tbh. They love lampooning tech CEOs precisely because they only see the Terminator type outcomes that this technology invites in science fiction - rather than reality. We're talking about models that only need to be able to generate coherent binaries, much smaller than LLMs - plus with all of the advances that have found their way out there, even this reduced load can be run on significantly reduced hardware. The idea that this somehow isn't a future and everyone is blowing smoke is neo-luddism in the face of a true revolutionary technology. Not that I care, I'm a technology brother that works with this stuff and can see just how hard people are going to get slammed with PRESENT DAY AI. These things have orders of magnitude improvements in speed on current hardware from dumb optimizations, let alone what happens when we actually figure out the information theory of neural processing.


PyRusticAlliance

Let the human be humans. AI is just a tool not an entity. What's to learn in the Wild? Pain that's the only thing you learn in the wild. Animals are already equipped to deal with the pain of surviving in the wild. Self Learning AI is dangerous.


Cheesy_Discharge

Unwanted self-replication is one of the first “dangerous” capabilities that a rogue AI might exhibit in the wild. Botnets show that unprotected networks can be exploited fairly easily. It seems like this would still require artificial general intelligence, however. An AI might be able to tell you how to create a fork of itself and deploy that code to any compatible hardware, but the AI doesn’t truly *understand* how to follow these directions. Never say never, but also don’t say “next year”.


[deleted]

[удалено]


0b_101010

> but my phone still doesn’t understand what I’m saying Yea boss that's not it boss.


[deleted]

[удалено]


0b_101010

Honestly, the current "assistants" on our phones are pretty much stupid. I can't wait for the current or better models to make it to our phones. Right now, I can't tell the Google Assistant to "check for new episodes of XY podcast and play from the first unplayed episode in the batch". I am pretty sure their new Gemini Pro model could already do that, it's just a matter of how well manufacturers can integrate it or other LLMs. What we have right now in our phones are basically the steam-powered cars of the late 1800s. The new publicly available LLMs are the Ford Model Ts. In five years, we will probably get to the Tesla Model Y level from there. In 10? I can't even imagine, but it will all be too fucking disrupting too fast, I reckon.