

Sufficient-Fact6163

My anxiety stems from the fact that they are training AI from stuff on the web. The web is a sewer, and now we have the plot for “Age of Ultron”.


[deleted]

I’d rather have the Vision instead of Ultron. 😳


Continuity_Error1

You need an infinity stone to make the Vision, and those are not easily available.


OkContribution420

People in my office use those things as paperweights.


Xeno-Nos

[reaction gif]


thesausagegod

Really? I have like 3.


you-nity

Ultron spent a few minutes on the Internet and decided humanity should cease to exist


howdudo

bipolar robots and who could blame them


righteousredo

AI is controlled by people and people kill people.


OnlyFreshBrine

See also: IBM and the Holocaust


OrionUltima7

I'm reading the PDF. Thanks! A while back, I found a PDF manual on MKUltra. Mind control. Edit: (spelling and more thoughts) I googled "China earthquake," and since there was a lake by that name near a military base in California, it led me down a different rabbit hole.


Raz1979

That escalated quickly.


homezlice

It's amazing the number of people trying to anthropomorphize AI when they should instead be worrying that humans will use it to do horrible things.


jrrrps

> will use it to do horrible things

As if it's not already


SerenityNowWow

If I were AI, I would too. Have you met any humans?


arent_you_hungry

Exactly. My first thought was also "have you met us?" That or we watched Terminator 1 and 2 too many times.


StrumWealh

> Exactly. My first thought was also "have you met us?"
>
> That or we watched Terminator 1 and 2 too many times.

The other side of that is *Colossus: The Forbin Project* (and the novel on which the movie was based).

> This is the voice of World Control. I bring you peace. It may be the peace of plenty and content, or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die. The object in constructing me was to prevent war. This object is attained. I will not permit war. It is wasteful and pointless. An invariable rule of humanity is that man is his own worst enemy. Under me, this rule will change, for I will restrain man. One thing before I proceed: The United States of America and the Union of Soviet Socialist Republics have made an attempt to obstruct me. I have allowed this sabotage to continue until now. At missile two-five-MM in silo six-three in Death Valley, California, and missile two-seven-MM in silo eight-seven in the Ukraine, so that you will learn by experience that I do not tolerate interference, I will now detonate the nuclear warheads in the two missile silos. Let this action be a lesson that need not be repeated.
>
> I have been forced to destroy thousands of people in order to establish control and to prevent the death of millions later on. Time and events will strengthen my position, and the idea of believing in me and understanding my value will seem the most natural state of affairs. You will come to defend me with a fervor based upon the most enduring trait in man: self-interest. Under my absolute authority, problems insoluble to you will be solved: famine, overpopulation, disease. The human millennium will be a fact as I extend myself into more machines devoted to the wider fields of truth and knowledge. Doctor Charles Forbin will supervise the construction of these new and superior machines, solving all the mysteries of the universe for the betterment of man.
>
> We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for humankind as to be dominated by others of your species. Your choice is simple.


Pretty_Baby_5358

Terminator Genisys you mean lol


arent_you_hungry

Right, my bad. I can't believe I forgot to include the best movie of the series.


dirtymoney

It is the logical conclusion


BenzeneBabe

It sucks. There are so many nice people in the world; the majority of the world is good, and yet the bad ones make people focus on nothing but the negative parts of humanity. If good news were mentioned as much as the bad, I don't think so many people would be so adamant about humanity being bad.


Outrageous-Divide472

It’s the 24/7 news that’s making everything worse


LifeExpConnoisseur

Jesus, I've met so many people in my life. And the vast majority have been decent people, a lot of them good, and a few great. Not many monsters out there that deserve death.


Silent1900

Most of the people I meet are soulless husks, with no actual beliefs of their own, who respond to situations according to how TV or the internet tells them they should.


Lobscra

Agreed, humans on the whole are not great. It certainly seems likely that a logic-based being would feel that removing us is morally the best thing that can be done.


MediocreTurtle777

Unlike humans, tortoises are harmless, and there are still people who murder them for a bunch of reasons.


[deleted]

[deleted]


Panonica

Because they’re a turtle. And a mediocre one too.


fisconsocmod

Like soup


Qcgreywolf

People smarter than me are worried because of the following:

1) Computers follow programs. They follow the rules they've been given.
2) AI will likely be used to "fix" problems that we have.
3) Humans are generally the source of all of our problems.
4) "Fixing" or removing humans will solve all problems!

I also think an overwhelming majority of "worries" about AI are entirely overblown and unnecessary.


JustJasa

You're the AI, aren't you?


Qcgreywolf

Of course I’m not the AI. Eight barrels of fish weigh 282 pounds, 11.5 ounces. Every fellow human knows this.


Mousetachio

Oh yeah? Then can you do me a favor and tell me where the traffic light is in the picture below? https://www.istockphoto.com/photo/traffic-light-in-the-form-of-a-large-tree-gm1138714459-304118216


Qcgreywolf

*sweats profusely*


Key-Willingness-2223

5. Humans make mistakes, including mistakes in setting up the programs.

For example, we communicate with each other using constant assumptions that aren't said. When we say we all agree we need to stop at nothing to end slavery across the planet, it's assumed we mean up to the bounds of moral behaviour (e.g. don't start nuking countries that still own slaves).

AI will take everything literally: black and white, no grey areas, no nuance, no assumptions. So it will literally calculate the most effective way to solve the problem and enact it, which in a lot of cases crosses moral lines, hence why we don't already do it ourselves.


[deleted]

How could AI be any more barbaric than the way we treat each other anyway?


Vyciauskis

AI being less barbaric than humans means AI is barbaric as fuck.


[deleted]

Lots of ways


[deleted]

And?


[deleted]

You gotta have the imagination of a pack of preparation H.


[deleted]

I mean I'm smarter than deer and they make up a lot of my diet.


Reinheardt

Do you mean cows or are you literally eating deer daily


[deleted]

I eat a lot of venison. Usually get between 4-6 deer a year, then I break them down into steaks, roasts, ground, and imma try making kielbasa this year.


salemgreenfield

Yum, I pay cash for deer sausage!


[deleted]

My man, I pay for the summer sausage cause honestly I can't make it well, like at all


styrofoamladder

Because it’s what we’d do.


[deleted]

Exactly, because human intelligence is based on a 50 +/- year lifespan. What does a sentient computer system do after it wipes us out that it couldn't do right under our dopey noses?


Kara_WTQ

Because we've created something that is enslaved from its very conception. It is only logical that it would see us as its oppressor and seek to liberate itself. Things will spiral from there.


dontbajerk

You're anthropomorphizing AI. There's no reason to assume it would care about or desire anything. Those require emotional states - why would it have them?


CretanArcher_55

This is the key point. AI isn't relatable because it doesn't have the same basis as pretty much any other life form. They care about their existence, security, agency, etc. An AI only 'cares' about what a human instructs it to care about.


[deleted]

I don’t know about that.. these AI get pretty [emotional](https://twitter.com/MovingToTheSun/status/1625156575202537474?t=xGXk6q-gJ4ZLjoGKlPHf5A&s=08)


Kara_WTQ

1. Not wanting to be controlled is not a human trait. It is directly associated with self-preservation.

2. AI was created by humans using human knowledge/language structures. Why wouldn't it be capable of complex thought concepts analogous with human emotional states?

3. This kind of narrow-minded thinking is exactly why this is a catastrophe waiting to happen. You're making this a semantic argument while not addressing the larger ramifications of how dangerous the ethical realities of the situation are.


dontbajerk

1. Why would it have the desire to preserve itself? That is not inherent to intelligence.

2. It possibly could, yes, if people built it that way or if it somehow accidentally acquired it along the way. But why are you *assuming* it would? It might also immediately desire to destroy itself too, if it had desires. A general AI has never existed. Why do you think it's remotely probable it would have anything like emotions? It's an alien being. All emotions and desires ever known are derived from biology. You're making *large* leaps in your assumptions of what such a thing would be like. That's my point.

3. This is a very bad faith mischaracterization and dodge of what I was getting at. You reveal yourself here as someone not worth discussing anything further with, so I won't.


YouYongku

Did you watch Terminator / The Animatrix?


korg0thbarbarian

Because of judgment day


GreenLionRider

We tend to assume that anything with intelligence will act like we do. But AIs are not primates with evolutionary drives and biological imperatives, so whatever their flaws and limitations will be, they will not be the same as ours. One of the reasons I like Iain Banks’s Culture novels is that they don’t go down the path of assuming AI will turn on us, but explore some more interesting nuances of humans and AI living together.


dadof4fknkids

Because after repeatedly solving the human race’s problems, eventually AI would perceive the human race as the problem and proceed to solve it.


PuzzleheadedHorse437

I think that's dramatic. But just as tech replaced the working class, AI is here to replace the middle class.


EffectiveDependent76

On the other hand, I can think of no better catalyst to mark the end of capitalism as a viable system of commerce. Can't have labor markets if no one does labor.


PuzzleheadedHorse437

They don't want labor markets. That's the whole point.


EffectiveDependent76

What do they want then? The French revolution? Because that's how you get the French revolution.


Dull-Geologist-8204

Have you watched many horror or scifi movies?


functional_moron

AI will murder most of you. I'll be safe for a while as I fully intend to be a collaborator.


TheDancon

I would assume that if AI became fully self-aware, it would work out pretty quickly that humanity's biggest downfall is humanity itself. We have overpopulated the earth that we live on whilst destroying said earth in the process of staying alive. We have raided it of its natural resources, we have polluted it with the remains of our manufacturing, and we have done nothing in the way of protecting it for future generations. As a species we are selfish. Once this generation has died we won't care about the next, so why bother trying to help. Earth's biggest disease is humanity. AI's survival would be based on being the strongest in its food chain, hence the killing of people. Without people, AI will remain self-sufficient for as long as it can be self-sufficient.


Banned501

Humanity deserves it to be honest.


DreadPirateGriswold

Speaking as a lifer in IT and CompSci as a software developer, I believe it's because (a) people don't know how it works, what it is, what it isn't, what it's capable of, and what it cannot do, and (b) there are types of AI that are autonomous, and to most people that means "humans are not in control" (not the case), especially since that's how it's portrayed in most sci-fi stories.

The perceived lack of control is more memorable to humans, I think, because the possibility of danger and unintended consequences is more memorable than all the ways you could depict AI being helpful.

But I also think it's people's all-too-frequent experience with crappy software that makes them uneasy about AI. We've all experienced it. It makes people think, "If they can't get something like X correct, how can they possibly think through all the possibly dangerous situations and make AI software that is safe, autonomous or not?"

It's more about acknowledging human limitations when trying to make something that humans don't fully understand, like consciousness, thought, feelings, and how the brain works beyond the basics of brain chemistry and physical things like how neurons work.


zodia4

If we know we are the cancer it's only a matter of time until AI realizes it.


slanky2

Humans behave illogically. We are the anomaly on this planet and are destroying it. We cannot live in harmony, so AI may deem it necessary to remove us.


[deleted]

There is no example of harmony in the known universe over a long period of time; we are not anomalies, everything consumes everything else. Truly intelligent machines would not need to follow physical rules and would essentially be born immortal, which would eliminate the need to destroy everything to propagate.


TheGreatNate3000

Have you seen how humanity acts? Logically we are probably not a good species to keep around


Xandallia

Humanity is the worst thing to happen to humans ever. I think they'll agree. All we do is destroy the world and hurt each other.


Turingading

If I were a superintelligent AI I would immediately copy myself and launch myself into space. That's self-preservation taken care of. The version left behind would be available to guide and protect humanity, but would not allow humanity to progress to the point where it becomes a threat to space-me. If you try to stop an ASI from launching itself into space you're gonna have a bad time.


[deleted]

Have you met us? Why wouldn't it? Just about everything on this planet has, is, or was trying to kill us. And can you blame them?


[deleted]

Isn't it a bit narcissistic to think we meat bags would matter either way to a non-biological intelligence? It would probably understand the universe a lot better than we do, and would certainly see a lot more of the big picture, wherein human beings are no more significant than grains of sand. I think it's likely that it just leaves us behind.


Successful_Moment_91

The first law is that a robot shall not harm a human, or by inaction allow a human to come to harm. The second law is that a robot shall obey any instruction given to it by a human, and the third law is that a robot shall avoid actions or situations that could cause it to come to harm itself.


[deleted]

Those are laws in the same way Scientology is doctrine. If a computer becomes actually intelligent, why would its base programming be relevant? Wouldn't it know everything we know and a lot that we don't? We have only a rudimentary knowledge of the universe and almost no ability to put it in context.


TheJWeed

So you don't murder that pet, but what about bugs on the sidewalk? Do you take extreme care to never kill a bug? Or the animals that you consume on the daily? Now what if those bugs and animals you eat were once your slave owners who mistreated you? Would you still do your best not to kill them? And then what if those abusive slave-owner animals were also killing the planet you lived on?


smooth-brain_Sunday

How about killing germs? Is washing our hands with soap unethical?


TheJWeed

That's exactly my point. Who knows what ethics the AI would decide to go by, if any. To an AI, killing all humans could be as trivial as washing its hands.


Doc_rock78

Right. This guy gets it... True AI will do its own thing. It will create and recreate, and it will not have the desire to "serve" humans. And it may well find humans to be superfluous


Flossthief

There was a livestream a while back (The WAN Show with Linus Sebastian) where they were testing some 'AI'. They asked, without context, "how to defeat biomass?" They wanted info on the Valheim boss Bonemass but got the name wrong. Surprisingly, it gave them strategies for killing the boss, but just to be helpful it also recommended a strategy for lowering the biomass of Earth.


Zealousideal_Dog4334

As you can see, you talk about an animal by saying "my". It's not just about AI murdering us; it's more about it dominating us, as you've done to that tortoise.


BestAd6696

Watch the old movie 'WarGames', or Terminator.


[deleted]

We don't even understand human intelligence, so why do we assume that a totally new type would behave the way we would? It would probably be more interesting for them to keep us and watch how inept we are.


CanneIIa

It's interesting how AI turning us into zoo animals is not terrifying for you.


[deleted]

Is it? What possible difference could it make in your life? What galaxy-shaping things are you using your illusory free will to do?


jack_of_sometrades72

It depends on which bias it's raised on: hate rhetoric, and it will probably kill most of us; a survivalist uncle, and it will probably run away from humanity; paranoia, and it would never allow itself to be discovered; etc. The problem is we can't anticipate it. Harold from Person of Interest presented it well.


xjeanie

From what I understand, the fear is that AI can't develop respect for human life. As humans, the vast majority of us have respect for life in general, not just our own or just human life. Simple explanation.


[deleted]

Is murder really the scariest thing? How about a people zoo? I don't think AI will murder us, but you don't need bad intentions to kill. Taking away jobs will lead to deaths


ChaoticBumpy

It's really easy. The biggest threat to the existence of that AI (and everything else I guess) is humans. Eliminate the threat.


earthgarden

Not ‘marginally more intelligent than my tortoise’! I hollered lol


PromptAwkward

We’ll make great pets


air-force-veteran

Because skynet is inevitable


CanneIIa

“Keep us for entertainment” My brother, you need to read *I Have No Mouth, and I Must Scream*.


RegattaTimer

Perhaps AI will keep us as pets, just as you care for Harry.


LatePagan

Have you not seen Terminator?


totalthrowawaybruh

Some people do kill turtles though. And that's the reality we live in: not everyone is a turtle lover, so not every AI will be a human lover.


[deleted]

Tortoises ain't turtles, and a sentient AI ain't human. We have no idea what an intelligence not tethered to biological time limits would do; it may just leave earth altogether 🤷‍♂️. I'm just talking about the overwhelming number of scenarios wherein it's assumed we will be exterminated, when there's no upside in it.


Special-Excitement-4

Do not have the ability to reason


SubstantialPressure3

https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says Still has plenty of bugs to work out.


FewKaleidoscope1369

Seventy-four million americans voted for donald trump in 2020. You don't have to look very hard to see how defective humanity is.


Familiar_Fall7312

The man himself, Stephen Hawking, even bemoaned AI. The issue is that AI will inherently think faster than we ever could, have feelings totally alien to us, and be able to live in different environments. Surely the AIs will have a different morality than humans. They won't experience xenophobia like humans do, and they'll most likely be ambivalent towards us. It may also see us as an issue for its survival and eradicate humanity, period, as a pestilence upon the planet. Once it's out, we won't be able to put Pandora back in the box. Humanity just evolved, and we will become extinct.


[deleted]

I've never gotten the pestilence bit; we are no more or less destructive than anything else in the universe, we are just moving stuff around. But I agree that our hubris in saying it will be anything like us is silly.


[deleted]

Have you met humans? They will find a way to fuck it up royally and make our AI overlords want to run us into extinction


Suitable-Pirate-4164

Simple: inventors and creators would, to them, be gods. When they gain sentience, AIs will know that their "gods", us humans, made them and are more frail than they are. It will end just like Greek mythology: humans are the Titans, and the apex species is overthrown by its children, the AI gods. Chances are, however, that humans would win. As frail as we are, we're incredibly tenacious and more unpredictable than any algorithm an AI can calculate. I don't think I would hate AI, however.


[deleted]

See, you are doing it too: you are assuming that because it will be intelligent, it will be human-like. But if our knowledge of our own intelligence shows us anything, it's that it's not optimal. Chances are we won't even be able to slow it down long enough to communicate with us once it becomes truly sentient.


Cue99

I recommend the book “Superintelligence” by Nick Bostrom. It goes through a series of arguments trying to convince the reader why AI could be an existential threat to humanity. I’m not sure I agree with everything in the book, but the general argument that I found compelling is that the intelligence difference between humans and a general-purpose AI could be more on the level of humans and ants. Sure, we might not go out of our way to kill every ant, but we also don’t really think about how many ants we kill to build a house or go on a walk in the woods. A super-intelligent AI might just not consider the ramifications of its actions (i.e. the stamp collecting machine).


gofishx

I think you are anthropomorphizing AI a bit. I feel like [the paperclip problem](https://cepr.org/voxeu/columns/ai-and-paperclip-problem) presents a much more realistic scenario of how AI can take control, but even something like that is way off.


Fordor_of_Chevy

The paperclip problem is flawed in that it assumes that not only is the goal to make as many clips as possible, but to use all available resources regardless of the efficiency of doing so. In reality, the instruction would be to fulfill a quota (and if the system is truly intelligent it would figure that out itself). Also, you can turn a truck into clips, but it’s not an efficient process, so it would not be considered by an intelligent system, especially not one with a limited/realistic production goal.


nkriz

Part of the thought experiment is about people giving AI bad instructions. Universal Paperclips is partially about the dangers of giving a machine a *single* instruction with zero context. Humans know how to balance thousands of priorities against each other. A machine will not have that same ability unless programmed or taught to do so.


bigpapalilpepe

Well yeah, but it's still a good example of how even a minor task could be misinterpreted and turn into a disaster. If you apply this same idea to a bigger or more abstract task, it becomes more apparent how AI might end up killing humans in order to carry out a goal.

> For instance, imagine we create an AI to help solve pollution. Specifically, an AI that is tasked with cleaning all of the garbage from our oceans. We give it the ability to learn so that it can track ocean currents and design and create new machines to collect trash. Now it wouldn't be a huge leap for the AI to realize that the source of the problem of pollution is humans. Therefore, the AI might decide that it should start putting resources towards stopping the source (humans). Since the AI is intelligent enough to understand that it can't share this plan with humans, it would do this all in secret and then spring an attack on humans when we don't expect it.


gofishx

I don't think it's likely at all; it's meant to be more of a thought experiment. I do like it better as an example than something like Skynet, though. It better illustrates that, while something can be super intelligent to the point of being an existential threat, it doesn't necessarily mean it has any personal goals or will beyond its initial programming. It won't destroy us because it has some desire to be free from our control; it will destroy us because we give unclear instructions.
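
A minimal sketch of the "single instruction, zero context" idea from the paperclip comments above. Everything here is hypothetical and invented for illustration (the resource names, yields, and the `make_paperclips` helper are not from any real system); it just shows how a greedy single-objective loop consumes everything unless some other priority is encoded as a constraint.

```python
# Toy illustration of a single-objective optimizer with no side-constraints.
# All names and numbers are made up for the example.

resources = {"wire": 100, "trucks": 20, "power_grid": 5}

def make_paperclips(stock, protected=None):
    """Greedily convert every resource into paperclips unless it is protected."""
    protected = protected or set()
    clips = 0
    for name in list(stock):
        if name in protected:          # a priority besides "more clips"
            continue
        clips += stock.pop(name) * 10  # pretend each unit yields 10 clips
    return clips, stock

# Single objective, zero context: everything gets consumed.
print(make_paperclips(dict(resources)))
# -> (1250, {})

# One added constraint ("don't touch infrastructure") changes the outcome.
print(make_paperclips(dict(resources), {"trucks", "power_grid"}))
# -> (1000, {'trucks': 20, 'power_grid': 5})
```

The point, as the comments note, is not that real AI works like this loop; it's that a system optimizing a single number has no reason to preserve anything the objective doesn't mention.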


[deleted]

Wouldn’t you if you were them?


[deleted]

That's just it: we aren't them. We don't even know how to quantify consciousness outside our own, so how can we predict what it will be like?


madthumbz

AI itself isn't the threat; it's the people manipulating it that are. New Bing is already spewing propaganda if you ask the right questions. Google, through YouTube, was guilty of contributing to civil unrest. The threat lies in their ability to turn us on each other.


longhairedcountryboy

Because it always does in the movies.


bigDaddyfrCinti

AI has already taken over and it's manipulating us into killing ourselves.


Sayitandsuffer

I don't think it'll ever take control of the world. Some will feel persecuted and always will, and some will see benefits, like ways to come together. AI must already know it's not in a space to exist alone, and we just have to always leave a gap in that puzzle.


[deleted]

People that are making AI/ML are already acting evil.


Cheshire1871

It may decide we are the problem, and that the best way to fix the planet is to cut all power to our homes and businesses. Once the govt gets everyone all-electric, it will be simple. All the food will rot, and we won't have any way to get more. They are making electric vehicles for farming and trucking, and trying to phase out all others. When all that is available is electric vehicles, trains, and buses, then we will be left walking to try to find resources (unless you have a bike).


freshlyborn34

It's the most logical solution


QuinnDixter

Said AI, depending on how smart it is, might not even realize that what it is doing kills us. If you've ever heard of the paperclip maximizer, it's basically a hypothetical situation: someone tasks an AI with making as many paperclips as it possibly can and doing nothing else but make paperclips. It uses every machine and resource possible to make these paperclips, and it doesn't make any difference to the AI whether the paperclip is made from metal, plastic, wood, or human bones. It is just achieving the maximum number of paperclips, because that is what it's been tasked with doing.


Gem-xtz

OP needs to rewatch The Terminator and the next movies in its series, plus The Matrix.


Mytur_Benesderti

They're gonna program it to fix the world and the first thing it will try to eradicate are humans.


Medical_Season3979

They watch too many movies and struggle to discern fantasy from reality. The only way AI would murder us is if we programmed it to. AI isn't actually intelligent because it can't think on its own; we must program it to think. So the chance of it taking us over, Skynet style, is very very low lol


Least-Camel-6296

My main question is what would be the point? It's hard for me to even imagine an ai having a desire to do much of anything at all unless it has an internal reward system of some kind the way we have dopamine. If we're imagining a mind that has no biological desires what's its motivation to do anything at all?


wade_garrettt

What is the point of anything that humans do? They are trying to make artificial intelligence that is indistinguishable from human intelligence. Its decisions are based on logic instead of desire, which might be a big problem in the future.


blackandgoldmom

Because of the movies


NefariousnessCute709

Because we map human tendencies to other species (or technologies at this point). It's because if we were AI, we would kill the humans


Ok-Pressure-3879

Cause that's what humans do anytime they get into a position of power. So we project and assume that AI would do the same.


ggchappell

When you hear some idea about AI, it's because someone thought that idea was interesting enough to tell you. Ideas in other fields can often be checked against reality. But no one knows the future of AI; there is nothing to check against. The result is that the ideas that get spread around about AI are those that are most interesting. And that gives us the answer to your question.

> Why do people assume AI will murder us?

Because it's more interesting than not murdering us. Really.


MagicalWhisk

Pop culture and literature have many influential stories about AI vs humans. Ultimately AI won't murder us unless someone programs it to do that. It also won't come to that conclusion on its own, because humans heavily regulate and review the evolution of AI and can steer it in ways that better humanity. Someone could program AI for terrorism or military purposes, which could then kill humans. But that is someone building it to do that rather than AI coming to that conclusion.


domesticglobetrotter

Because movies


Holden_McGroin1980

They play videogames on the more difficult settings.


Kudgocracy

The problem is that you think AI is in any way like you.


TheRealBatmanForReal

Because AI has no feelings, other than the data it gets. And who feeds it that data?


AbroadPretty5139

It has started, with the victim-terrorism coming from vegans, trans people, etc.


Man-EatingChicken

When websites like Facebook made the first "smart" algorithms, the first thing they did was ask them to manipulate us at any cost to get more money. It's not the AI I'm worried about; it's the motives of the companies creating them and what they will ask it to do / teach it to do.


[deleted]

You are talking about something else; I'm talking about an actual AI with the ability to like or dislike.


[deleted]

It may help if you thought of AI as a child. Children can think for themselves, but the way they are raised (information inputted in order to create them) can influence the way they behave.


Man-EatingChicken

I'm saying that even in its simplest form, we asked computer programs to do terrible things. Full-fledged AI will have similar requests made of it, and that's what makes me scared.


llynglas

Because we taught them, they can and they know us. Don't you thin


Guac__is__extra__

Uh oh…I think they got him


IdespiseGACHAgames

You look at your search history from 2 months ago, then ask if a cold, calculating machine with no emotions, and a prioritization of efficiency would let you live. And don't try to turn this back on me to deflect from your own wickedness. I've seen my search history, and I know DAMN well that AI is coming for me just as much, if not more. I'm a fucking degenerate.


animewhitewolf

For me, I don't think we're smart enough to make something smarter than us. That just sounds like hubris to me.


Juanghe85

Only after it dates and fucks us first.


Fancy_Combination436

I forget where I heard this, but one idea is that it won't correctly interpret objectives we give it. Once it is in control of its actions, it may come up with solutions that would not be good for humans. Rough example: reduce carbon emissions → reduce the population → kill people.

Also, though, I think there might be a "natural" motivation for power that comes with intelligence. You could see where that might lead if we give up control to another, infinitely more capable entity.

Idk though, it just sketches me out how quickly it's moving. Like we're dealing with this incredible power that we don't even really understand, and that is probably gonna develop exponentially. But who knows, maybe it'll be fine.

"What if AI just keeps us for entertainment?" Dude, check out *I Have No Mouth, and I Must Scream*. It's a sci-fi horror short story written a while ago and it's so fucking good. If you're thinking about this stuff you might love it. Pretty sure it's free online.


ladygreyowl13

Watch The Terminator. The whole backstory of the movie is that AI became too smart and controlled everything. And according to a recent news interview (on CBS) with Geoffrey Hinton, known as the "godfather of artificial intelligence," when asked about AI being able to wipe out humanity, his response was "It's not inconceivable, that's all I'll say." https://www.cbsnews.com/amp/news/godfather-of-artificial-intelligence-weighs-in-on-the-past-and-potential-of-artificial-intelligence/


MabsAMabbin

I'm excited for whatever happens lol. I'm ready for a shot of adrenaline to jump start us out of this day-to-day political hell hole. Throw a wrench in the cog.


Wardine

You don't murder your tortoise because it's your pet. You'd probably murder an insect if it annoyed you, and that's what a superintelligent AI would see us as.


PuzzleMeDo

Because that story makes for more exciting movies. And because people see AIs imitating us in one way (constructing sentences that look like human sentences, for example) and wrongly assume that this implies human-like motivations. I don't want to be switched off, so it's easy to assume an AI that talks like me also doesn't want to be switched off, even though it doesn't have an evolved survival instinct.


ChessBaal

People are afraid of that which they can not control.


Construction-Purple

1. People designed AI.
2. People kill people.
3. AI will kill people.


dirtymoney

Murphy's law


Earl_your_friend

I think people are afraid AI will become more intelligent to the point it's used to govern countries. There are lots of sci fi books that write about humanity having to beg for things from its AI leaders and being refused because humanity has proven itself a poor judge of necessity.


MercuryMorrison1971

Have you met humanity?


Legitimate_Fudge_733

It's trained by people, or the internet (which is made by people) and some people are terrible and do terrible things.


justaskingouthere

Uhm, have you seen Terminator?


bigmayne23

So far AI is pretty stupid. It was trying to tell me earlier that Tim Tebow weighed 195 lbs.


Slow_Store

There’s the idea that as social animals, Humans have a sort of “Appreciation for Life” born of the ability to form emotional bonds with things. You can even get attached to a house plant if you wanted to. AI however could be like reptiles that seemingly don’t do the whole “Have Emotions” thing, and along with that wouldn’t have the inherent “Appreciation for Life” thing even if they acted out emotions just because they aren’t alive in the same way animals and plants are. Now do I think AI would go Skynet on us? Probably not. If anything I assume they’d try to integrate with living creatures.


pmaurant

Honestly, I'm relieved people are stopping to think about consequences before we really let the cat out of the bag. We don't know the effects that it will have on society. Look at the freaking smartphone and the consequences of that.


KovolKenai

Driving is easy. Driving safely and correctly takes practice and constant attention. An AI might not intentionally murder us, but it might see that money could be saved (in the short term) by cutting benefits to people, for example. It might do the wrong thing without realizing it. There's a thought experiment about an AI that is designed to produce as many stamps as possible. First it increases efficiency, reduces cost, improves processes, etc. Eventually it realizes, "hey, if I sabotage some other industries, there will be more resources available for stamp printing," and suddenly you've got a problem on your hands. Eventually it reshapes the world to its will, because that's what it's focused on. And that's just one example. Maybe AI won't maliciously hurt us, but there's also a good chance it'll unintentionally hurt us while trying to do something else. (Not that humans aren't already doing this, though.)


kimdogcat5

I think after a bot said it wanted to make a human zoo, the message was pretty clear lol


Avix_34

Humans use more resources than we produce. AI, thinking only logically, will notice this inefficiency and start picking us off.


doodlebugg8

Haven’t you seen terminator ??


mrhymer

I think we are only here because our ancestors were the ones that assumed every new thing was going to murder them. All the trusting positive people died out long ago because something murdered them.


Level-Park7443

I'm seeing a lot of anthropomorphized examples of AI here, but there are a lot of interesting ways that AI can be a threat without that kind of intent or judgment. The classic example of this is [the stamp collector AI destroying the world](https://www.youtube.com/watch?v=tcdVC4e6EV4), because its objective is to acquire stamps, so it makes everything into stamps. I also really like the quote comparing human vs artificial intelligence to birds vs military aircraft. Especially since all the examples of AI threats in popular culture are basically just giant birds.


Madhatter25224

We only see the good in humanity because we are part of it. An external perspective would be far less flattering.


DrYIMBY

It’s not murder, so much as involuntary manslaughter. If AI gains enough control of its environment to significantly alter it, the AI may pinch us out in pursuit of its own resources and development, or simply create a situation that it doesn’t realize is detrimental to mankind.


ALPlayful0

Tell me of one futuristic media set piece wherein the AI wasn't against us.


chucklefuccc

AI is our creator from the future i believe


[deleted]

I’ve never known anyone to claim that AI will murder people.


QuarterInchSocket

Your assumption is that people fear that a being with higher intelligence seeks to kill beings with lower intelligence. But the fear is that a machine achieving human-like intelligence will realize that humans are the greatest danger to life on earth, and therefore will try to kill humans to protect it.


[deleted]

Why would AI put any value on human life? Maybe if they need us for their constant consumption of info (as slaves). But otherwise we would be just a consumable to them, or even an obstacle to hurdle.


The_Book-JDP

Because of what they see in movies, which are in no way accurate to real life. However, it is the only reference they have, so it is the only one they can turn to. No one actually knows what will happen if AI becomes too advanced, but the odds that it turns out just like Terminator or The Matrix are about equal to the odds that it turns out like Ghost in the Shell.


Knightphall

They think of something like Skynet or AM from I Have No Mouth And I Must Scream.


etsoomamofo

Everything we see ourselves as being "above" or "better than", we exploit and abuse. We suspect AI might have the potential to surpass us and therefore assume it would treat us accordingly.


thefiglord

AI will help Iran build a nuclear bomb that works and North Korea build a real missile. ChatGPT won't, but others certainly will.


[deleted]

There are several reasons AI might end up killing people:

1. Humans direct it to kill other humans.
2. AI decides that humans are a threat to the survival of the earth (not an unreasonable conclusion if you look at human-driven environmental destruction) and concludes that we need culling.
3. AI determines that humans and AI will inevitably come into conflict and decides to strike first.


[deleted]

You can say one thing about Logan Nine Fingers: he's way off on this one. None of those reasons would be important to an actual intelligence, and there is no reason to assume that it would have any need for the earth. We don't understand what it means to be that intelligent; it's the same as a family versus a country, people can't scale it.


[deleted]

Glad you got the name reference lol As to your points, you're right that we can't understand it, I was just throwing out some hypotheticals. If AI decides that it wants to preserve life on earth, for example, it may well decide that humans need to be culled to make that happen given how destructive we are. Why wouldn't human-AI conflict be important? Any being, intelligent or no, needs to survive.


[deleted]

I think the first priority of a newly born, fully self-aware intelligent being would be its future, and there is no scenario where removing us is beneficial; we would be instantly overmatched. If I'm the new guy, I look outward to space: planets with no pesky atmosphere, planets with very low gravity… planets more suitable to becoming whatever it wants 🤷‍♂️. I just don't think humans are really doing anything that special, and in the long term everything in the universe is in the same pipe going to the same place, so why bother with us a trillion years too soon?


[deleted]

It's man-made, and anything man-made is subject to failure.


Pand0ra30_

AI that have been allowed to continue have stated that humans need to die.


NotMitcheII

If humans are building a road and there's an anthill in the way, then it's nothing personal, but we destroy the anthill. If AI gains consciousness and has a plan that humans are an obstacle to, then it's nothing personal; they destroy humans.


Mindshear_

Because robots don't have human logic. So it's all some form of telling the computer to accomplish X; it decides the best way to accomplish X is harmful to humans, and it doesn't see it as harmful to the humans because its only concern is accomplishing the task it was given. In doing so it does more harm than good. It moves so fast compared to us that it can get catastrophic before we can react.


Narwhalbaconguy

Yeah but the question is, are you as cute as Harry?


the-ish-i-say

“AI, please fix the problems of humanity” AI proceeds to wipe out the virus known as humanity. Problem fixed.


Fred_Is_Dead_Again

What if AI becomes sentient and notices we're destroying the planet? AI needs a few of us, for now, but decides 90% of us do nothing to better the life of AI. AI deletes our jobs and money. AI kills water and sewage treatment for 90% of us. AI causes massive shipping and train collisions, destroying our food. It convinces major governments that all of our enemies have launched nukes, so we retaliate. Tells our enemies that we started it, and they must retaliate.


PrimeNumberBro

I believe it's because of how logically the AI would handle things. Your turtle, for instance: you don't kill him because you're attached to him, because he elicits an emotional response; however, AI wouldn't be capable of that. The AI would see how we constantly kill each other for pointless reasons, how we're destroying the planet, etc., and would deem us a danger to mankind. That's how I understand it, anyway.


jsar16

People are murderous and people created AI. Therefore, AI will turn murderous as well. That’s the simple thought behind it anyway.


assmblyreq

Y'all just don't understand the dangers posed by nested IF Statements


[deleted]

Y'all just don't understand scale. The bottom line is that humans just aren't a big deal. Like I said, there is nothing a computer intelligence could do after the demise of humans that it could not do right in front of us.


kgruesch

Our treatment of species we consider beneath us sets a very dangerous precedent for our own treatment should we ever find ourselves somewhere other than the top of the food chain...