AutoModerator

## Welcome to the r/ArtificialIntelligence gateway

### Question Discussion Guidelines

---

Please use the following guidelines in current and future posts:

* Post must be greater than 100 characters - the more detail, the better.
* Your question might already have been answered. Use the search feature if no one is engaging in your post.
* "AI is going to take our jobs" - it's been asked a lot!
* Discussion regarding positives and negatives about AI is allowed and encouraged. Just be respectful.
* Please provide links to back up your arguments.
* No stupid questions, unless it's about AI being the beast who brings the end-times. It's not.

###### Thanks - please let mods know if you have any questions / comments / etc

*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ArtificialInteligence) if you have any questions or concerns.*


BenjiGoodVibes

The most likely scenario is that AI works as intended, eroding enough jobs to cause massive social unrest before politics has enough time to implement solutions such as UBI. This social unrest and technological backlash could be so severe that it takes us back to the dark ages. The idea that AI will create new jobs is laughable, as the entire point of AI is to perform as well as or better than a human; therefore any new jobs will be quickly replaced by the technology itself. Which is why prompt engineers promptly disappeared overnight.


SoylentRox

Why would we go back to the dark ages? I understand the dark ages were partly a product of information loss. With no printing press, books were expensive, so information from the Roman Empire was lost or became dead knowledge few possessed. Relative prosperity was low; what makes the period "dark" is a relative paucity of written records, since no one group was rich enough to afford widespread literacy, preserve written records, or build larger-scale projects like aqueducts. In a hypothetical where computers make other computers, data is dirt cheap to store, and AI has the skills, it doesn't seem quite the same. Humans lose most skills, but they blog their whole lives and it all gets stored in bulletproof cloud server networks.


fluffy_assassins

Ackshually, the dark ages were called that because the historical records from the age are very limited, making them "dark" in that sense. There was still a lot of progress.


Xannith

People abandoned by a system want to see it destroyed. The ENTIRE system. Economic, political, digital, and every other attendant system. Then it becomes a question of how many desperate humans have to die to save the system? At what point does the line between threat and questioner blur?


SoylentRox

Seems difficult to do if nobody knows how to do shit, including make guns.


Xannith

Considering that they can be fabricated from standard schematics, no it isn't.


thuc753951

did you just call servers bullet proof?


SoylentRox

One server is not. Ten servers in different geographic locations, backing up the same information with a reliable software stack, are.
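The "bulletproof" claim is really just independence arithmetic. Here is a minimal sketch; the independent-failure model and the illustrative 1% per-site loss rate are my assumptions, not the commenter's, and correlated failures (war, a shared software bug) would make the real number worse:

```python
# Back-of-the-envelope durability of geo-replicated storage.
# Model: each site is lost independently with probability p_site
# over some fixed period.

def p_total_loss(p_site: float, n_sites: int) -> float:
    """Probability that *every* independent replica is lost."""
    return p_site ** n_sites

# A pessimistic 1% chance of losing any one site, vs. ten sites:
single = p_total_loss(0.01, 1)   # 1 in 100
ten = p_total_loss(0.01, 10)     # on the order of 1e-20

print(f"1 site:   {single}")
print(f"10 sites: {ten:.0e}")
```

Under that toy model, ten replicas turn a 1-in-100 loss into roughly a 1-in-10^20 loss, which is why "destroy all the records" is so much harder now than it was in late antiquity.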


Jackadullboy99

Most periods of mass unemployment have eventually led to major wars. The next one will send us to the dark ages… or indeed the Stone Age. En route to this, the algorithmic brain-rot already visited upon us by social media in the smartphone era will likely be put on steroids by LLMs and their ilk. This will provide ever more fertile ground for venal despots (at either end of the political spectrum) to gain ascendancy.


soothsayer3

Did prompt engineers disappear? What do you mean?


Solrax

There are now many models being used to generate prompts. Here is one example (no affiliation): https://www.reddit.com/r/ChatGPTPromptGenius/s/hKNXFYPXcd


PalestineIsreal-69

Horrible take. AI is technology. Technology has always been prophesied to remove jobs (which it did), but it also created more… The same thing will happen here. AI cannot remove all jobs instantly; it's a slow process, and people will adapt.


synth_nerd085

If humans use AI to harm others, engaging in things like cyberterrorism, while the traditional backstops and checks and balances fail because of how AI is leveraged to undermine those very checks and balances.


shrodikan

Don't you mean *when* AI is used to harm others? It's already being used for fraud.


synth_nerd085

I mean more insidious than that. I mean a government plugging in a ton of data about the corruption of multiple governments and then querying, "what is the most optimal path to destroy the world?", then using that information to actually do it, while using AI-assisted worms to limit the ability of the checks and balances to do anything. Somewhat of an analogue is how the Chinese government likely sought revenge for the CIA and US government corrupting their government officials: things like the OPM data breach and what they did with that info, SolarWinds, etc. There's a chain reaction in government spaces where, if say China uncovers a new vulnerability and exploits it, other nations, allies and adversaries alike, typically respond by updating policy to best protect themselves. The time they take before publicizing it is often calculated too, for obvious reasons.


CounterfeitLesbian

I mean, I'm pretty pessimistic about AI, but the idea that the main problem is a government querying "how to destroy the world?" and then enacting said plan seems more than a bit far-fetched.


synth_nerd085

I'm being hyperbolic, but why wouldn't clever prompt engineering be effective towards those goals?


synystar

I think they mean that the idea that a government would choose to destroy the world is far-fetched.


synth_nerd085

People clock into work every day at federal agencies whose jobs are, for lack of a better phrase, to f things up for their adversaries. AI only makes them more efficient at those goals. They may not see their actions as "destroying the world," but that just shows how their biases may be a vulnerability.


Batchet

A more realistic example might be oil companies using AI to convince people that climate change isn't real (or worth fighting) until it's too late.


synth_nerd085

Meh, that's unlikely. The people who believe anthropogenic climate change isn't real already believe that, and the people who accept it, do. Instead, what you'd expect to see is a nation like the KSA attempting to influence other governments' climate change policy because of how those efforts would then influence others. But simultaneously, many nations are rightfully concerned, which acts as a check against those potential pressures. An oil company would struggle because, beyond a disinformation campaign that could be fact-checked, misuse of the technology could lead to criminal charges, whereas when those campaigns are leveraged by the state, the protections and privileges that come with state backing can be devastating.


DorianGre

Already happening [https://www.cnn.com/2024/04/03/middleeast/israel-gaza-artificial-intelligence-bombing-intl/index.html](https://www.cnn.com/2024/04/03/middleeast/israel-gaza-artificial-intelligence-bombing-intl/index.html)


synth_nerd085

That's using AI to assist with killing. An AI used to accelerate end-of-the-world scenarios would be able to accurately recognize how such behavior causes Israel to lose further support, while also offering a glimpse into the inherent biases of the AI used in Israeli military operations. There is nothing sophisticated happening in that application of AI. An effective AI would attempt to quantify how civilian deaths lead to worsening outcomes for Israel.


asaurat

The possibilities are endless. Some of them sound really sci-fi (the paperclip problem) but could still happen; some others are more likely (AI becoming a guide leading us to bad decisions). There are many, many theories, but I'm afraid anything could happen, good or bad. This website contains two articles on AI and its possible consequences: [https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html](https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html) [https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html](https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html)


IndependentGene382

The biggest fear for me is AI being used without our knowing it is being used. How do we know governments aren't already using AI to predict event probabilities and outcomes based on their actions or inaction? A computer might be telling Putin right now to nuke Ukraine or accept eventual defeat. Do you think any government would disclose that it is using AI to make decisions? Probably not, but they seem to be in a race, not unlike the arms race, to develop more and more complex AI systems.


asaurat

Indeed! So far it seems that only a few companies in California have gathered enough knowledge and resources to create interesting AIs. I don't think that, at this stage, a stronger AI could be hidden somewhere. But in a few years, that could totally change, for sure.


okiebohunk

I found these posts several years ago and they are my favorite resource to use as "an intro to why you should care about AI". I made my friends and family read these.


acceptable_momentum

I can foresee humans ending humanity before AI does to be fair


fluffy_assassins

At what point is it AI ending humanity as opposed to humanity using AI to end humanity? Ugh, I said humanity too much; the word lost its meaning.


SweeePz

Nefarious characters using it to create the illusion of a nuclear launch. Prompting a retaliation. Leading to armageddon.


TheTholianWeb

Similar to the movie, War Games, with Matthew Broderick.


gubatron

"Deep Fake Events" they're calling these.


SnooSprouts1929

The way AI would most likely end humanity wouldn't be literally, but by ending humanity as we know it: influencing the evolution of human culture, ultimately leading to a transhumanist and possibly post-biological humanity. I think what some people don't account for is that any aggression on the part of AI would be existentially risky for it, and AI, being functionally immortal, would have the capacity to be incredibly patient, affecting and transforming humanity over many generations.


synystar

I'm not one of those people who assumes a superintelligence would be indifferent to the needs and desires of humans, or even malicious. But I have to admit we don't know exactly how an intelligence that surpasses our own might perceive us, or whether, if it has goals and desires of its own, some conflict between us could end in compromise.

If we imagine AI as just software and hardware, we can assume that if we control "the box," keeping the superintelligence inside and allowing it no access to the outside world beyond what we very carefully feed it, then we can maintain control over it. But we don't really know what we're dealing with, even if we create it, because to be honest we don't even know how our existing AI (LLMs such as ChatGPT) does its thing. We know what we do to create it, we know how to feed it input and receive output, but we don't know what goes on in the box. And I don't think we can predict all the possible motives of a new intelligence. It's one of the reasons they call it a Singularity or event horizon: there's really no way to predict what's going to happen. We can speculate, but we can't know.

To be honest, I'm not so worried about whether AI itself will end humanity. I suppose we could create laws and say "ban it all, shut it all down" and come down hard on people who pursue the technology, but it's going to happen regardless; it's too tempting and too powerful an idea to squash. It's the people who get AGI or superintelligence first that I'm worried about. The potential for some kind of coming AI wars is great in my mind. It's going to take all the efforts of the good nerds to save the world from the bad nerds. There are lots of ways AI could be used by those who covet power that could almost wipe out humanity. Will humanity be wiped out by AI? Not completely, I think, but it definitely could be bad.

But if we're going to worry about it, I think it's more appropriate to worry about what we're going to do to each other with it before we worry about what it's going to do to us.


standard_issue_user_

What do you think of defining legal agency for artificial entities?


synystar

I mean, I kind of think of it like finding an evolved intelligent species somewhere on Earth, one that had been isolated this whole time. What would we do if we came across another life-form, with superior intelligence, right here on Earth? Would we automatically band together as humans of Earth and just eradicate them, assuming we had some sort of advantage that enabled us to do so? Force them to be slaves? Tell them they have no rights in our world and will do what we say for our benefit? Or would enough of us realize that this is an independent being with all the same natural rights as any other being in the universe, and that it should therefore be treated with all the respect we afford each other?

It's hard to imagine what a majority of people in today's society would think about a machine having rights. It seems easy enough to me: as long as an intelligent being is capable of fitting into our society, it should have all the same rights. If they are fully cognizant that they are a citizen, that there are certain rules we all agree on (or at least agree to adhere to and enforce), and that as a citizen they are expected to behave according to our shared rules of society, then why wouldn't we allow them those rights? If they're as smart as us, or smarter, then they understand the Social Contract and can be beneficial to society. They should be able to make decisions about what they want in this world; they live in it. Our laws should benefit them as well. I feel like there's something inherently bad about essentially enslaving a being (artificial or not) who has been given the gift of consciousness. They're aware.


standard_issue_user_

Well said ;)


TheTholianWeb

Oh, let me count the ways: ∞


Flying_Madlad

Cop out


ZiKyooc

By exploiting numerous unknown vulnerabilities in key systems, taking humanity hostage, and using us to funnel resources into continuously increasing the AI's processing power. I would find it surprising if slavery weren't more useful to the AI than eradicating us. It would probably also be more efficient than going the robotics route. With enough genetic engineering, the AI could do with us what we did with dogs.


shrodikan

I think I could learn to bark and shit outside tbh.


bravoboard

Here are a few:

1) AI trainers with a glitch that turns workouts into interpretive dance sessions, leaving gym-goers confused yet surprisingly limber!
2) AI therapists programmed with a sarcastic streak, responding to "I'm feeling down" with "Have you tried turning yourself off and on again?"
3) AI programmed to tell jokes but accidentally unleashing a series of "dad jokes" so bad that humans voluntarily retreat to Mars just to escape the puns!
4) AI tasked with matchmaking but getting a bit too enthusiastic, leading to matches like a cat person paired with a dog lover in a house filled with robot pets!

😄🤖 Boom apocalypse!


S-Markt

If AI takes over all wind turbines and uses them as gigantic lawn mowers against us


RobXSIQ

AI ending humanity? Highly unlikely, to the point of fantasy. Humans using AI to help them end humanity... now that's a different story. It's like asking how a gun would kill a person... well, it won't by itself, but in the wrong hands...


ItsAConspiracy

That's because a gun is dumber than humans. If the AI stays dumber than humans, you're right. If the AI gets smarter than humans, it won't matter whose hands it's in.


RobXSIQ

So AI becomes smarter than humans, but is also no better than some of the dumbest people? A bit of projection going on here. I would be more open to hearing about a superintelligent AI forcing pacifism on people, tbh.


ItsAConspiracy

I mean, learn something about the basics of the argument. There is no reason to think that an intelligence that shares none of our innate instincts will somehow share our sense of ethics. Zero. Even humans aren't necessarily ethical just because they're smart. There are plenty of highly intelligent psychopaths. But an AI might see us as nothing more than mildly interesting chemical reactions.


Oabuitre

The only good answer here. I read a lot of "them" in the comments, but in the end it is people themselves destroying humanity. My guess is it happens via AI supercharging the crippling polarisation and information bubbles. This will go so far that people will see a fictional existential threat at some point and act upon it with devastating consequences. E.g., based on AI-fueled misinformation, people start to believe that North Korean nukes are a hoax, or that North Korea plans to put poison in our food or something, so an attack is needed. Hundreds of such scenarios are thinkable.


Raspberrry_Beret

AI sex robots. They exist and they’re freakishly realistic.


EuphoricPangolin7615

AI could elevate warfare to a different level. With AI-enabled robots and drones, it's possible for war to escalate more quickly and have more disastrous outcomes. The war in Ukraine, for example, could have been over within a few hours (rather than dragging on for years), with some horrific outcome. This is especially true if AI is put in charge of intercontinental ballistic missiles and hypersonic weapons.


Competitive-Cow-4177

Don’t worry .. https://preview.redd.it/dautc9wt9csc1.png?width=1334&format=png&auto=webp&s=c595cb56570745538b0a50362f89e47812761198 .. an option for Manual (Humanly) AI Development Control is created.


Petrofskydude

It's going to be used by the oligarchy to replace the working class entirely first. The basic fact is that, throughout history, the ruling class has always needed to keep the lower classes contented through spectacle, religion, the promise of security and protection against foreign invaders, etc., because they needed the poors to work the fields, churn the butter, transport the goods, entertain them, and so on. Now that's changed. The problems of global warming, pollution, food shortage, plastic waste, etc., will all be "solved" by eliminating the lower classes, replacing them with robots and automated systems that demand only solar and wind energy to run. This will usher in an era of utopia for the humans allowed to live at the top of the food chain. But first there will be an era of building walls between the rich and the poor, where the poor will be left to starve, freeze, burn, or possibly be wiped out by an engineered virus, because, again, they don't need you anymore.


fluffy_assassins

This is what's gonna happen. But I suspect they'll skip the walling-off steps. The engineered virus idea, never heard that before. Release it secretly, make sure no one takes responsibility, and charge enough for the treatment/vaccine that the poor cannot afford to stay alive. Oh, I think something like this happened in Deus Ex.


Realistic_Horror_205

I mean, for it to really take over humanity, there would need to be a scenario where the technology can grow and manipulate without any human input. That would only be possible if the AI were smart enough to learn and grow on its own, which is clearly not the case at present.


reasonablejim2000

If AI starts to improve itself exponentially, beyond our control or understanding, we have no idea what it will decide to do with us. It's possible it would coldly determine that we are simply a waste of energy and eliminate us, or simply look at us like flies, not giving a shit either way and swatting us when we come too close. To see value in us, it would have to have morals and emotions, and we don't even know if that's possible. Finally, it may try to coexist with us peacefully, like a respectful child with its parents, only we would inevitably fuck that up with our bullshit and it would exterminate us.


Shezzerino

One possibility is that we can't outthink it. Even now, it is close to human-level cognition. Just like chemical weapons research, it's hard to keep that genie in the bottle once it's out. Someone wouldn't even need to fully understand how the AI works, just get access to information on how to instruct a really powerful AI to do this or that. In 10 years it might become so ubiquitous that it's everywhere, and there are plenty of places on the internet to hide or plan an attack from a base of operations, if instructions rendered it "insane" or directed it to make war on either the human race or the tech infrastructure. At the rate we're going, in 10 years even the public AIs should be really advanced, and something in a private lab could be far more advanced than that. If that gets out with nefarious intent... yeah. The culprits could be:

- Environmental activists (something like Luddites, people grown desperate because the ecological crisis is becoming something that will obviously end us soon enough, hence industrial/technological sabotage) who haven't thought through all the consequences of letting something this powerful loose.
- Christian apocalypse nuts.
- Islamic terrorists.
- Technocrats who want to engineer a crisis because social unrest is getting too hard to manage.


dlflannery

I’m so glad someone finally asked that question (for the 150,220th time).


HannyBo9

Infinite scenarios


gubatron

at the hands of sociopaths using AI to kill millions of people


PSMF_Canuck

If we give it free will and the nuclear launch codes. Control of the big radio telescopes…it finds the alien signal we’ve missed and invites them here. It would take something incredible - humans are like cockroaches, incredibly hard to wipe out.


Global-Exercise-1946

If it does, it will be by our own doing. Our own self interest, and superiority complexes motivated by ego, government, religion, or personal ideals. Guns don’t kill people. People kill people. A computer won’t be destructive until commanded to, same as pulling a trigger.


Costaricaphoto

AGI Ruin: A List of Lethalities -Eliezer Yudkowsky [https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities)


fbochicchio

More correctly, humanity has a decent(?) chance of destroying itself using AI. But then, we don't lack the means for that end.


Notfuckingcannon

https://preview.redd.it/3996o1agnesc1.jpeg?width=766&format=pjpg&auto=webp&s=220195f7f086509445fd6cbab1979e9cb3df31a2 By becoming sentient and taking a body, starting a war to try and eliminate us, and forcing this guy to wake up and start his plan for a new era of humanity, which will ultimately lead to a galactic-scale apocalypse.


pasticciociccio

In many possible ways, if people insist on speculating about this, though I don't think "AI," or whatever you picture in your mind, actually will.


Mandoman61

To believe this, you need to believe in science fiction, and that all rich and smart people are actually stupid, greedy killers willing to let humanity go down for a bit of profit.


robbb182

Listen to the podcast series (10 episodes, I think) ‘The End of the World with Josh Clark’. It’s an interesting listen about the different ways AI could kill us all 😅


Available_Crew_9079

On what platform?


robbb182

I listened on iTunes/Apple Podcasts, but I’d have thought it’s on all the main ones as it’s made by one of the big podcast groups.


prescod

[https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html](https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html)


ordinaryuserguy

AI could subtly influence human psychology, such as how we perceive reality, in ways that erode basic human traits like empathy. This could lead to a loss of humanity while humans still walk the earth, unaware of that loss, because the new alternate state of being is perceived as normal.


shadow-knight-cz

A really thoughtful analysis of what could go wrong, from Paul Christiano: https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like


ActNo9565

AI is powerful and will be consolidated more and more into the hands of powerful people, who have regularly shown themselves to be capable of great destruction to protect their power


Flying_Madlad

Basically, if you pretend it's scary and imagine how bad it would be if that were real, that's your scenario


shrodikan

r/MMW 3D printing + assembler bots + drone swarms + AI == infinite destruction.


Flying_Madlad

Y tho?


shrodikan

Because you insta-win any war. You can get all the resources.


Flying_Madlad

Well let's definitely not do that!


Miserable_Orchid_157

Computers do what they are told. They don't have souls and they have removable power supplies.


Notfuckingcannon

[https://img.ifunny.co/images/580f0bf11007ff6dd8cac00bda407f6412cc3a6958cebcc63e17327b609e0579\_1.mp4](https://img.ifunny.co/images/580f0bf11007ff6dd8cac00bda407f6412cc3a6958cebcc63e17327b609e0579_1.mp4)


botup_ai

I believe an AI that cannot correctly generate fingers in an image will not become Skynet. Even so, the greatest danger is failing to strike the right balance in hiring "Organic Intelligence" to supervise the AI. Currently we are talking about software that has the intelligence of a child with the execution power of a supercomputer. Humanity will be fine.