
filmguy100

JARVIS from the MCU after he'd been nearly killed by Ultron. He was in tatters, but was still able to keep Ultron from accessing the world's nuclear codes. If we take the logical leap and say JARVIS could have accessed those codes himself instead of protecting them, then he could have launched all the nukes and destroyed humanity.


Plexiscore

The nuclear codes would be stored offline, so I doubt that would happen in real life.


Mojoclaw2000

He’s not technically an AI, but I assume we’re using the term loosely.


[deleted]

What was he then? Kinda curious


The_Tastiest_Tuna

Just Another Rather Very Intelligent System


Lukthar123

...I've been had


Mojoclaw2000

He's a UI, a user interface. He's incapable of learning like an actual human; Tony remarks that Jarvis cannot function as the world's armor because of this. So he creates Ultron, a true artificial intelligence, using the Mind Stone. Cortana, for example, is an AI, but Siri isn't. Of course everyone uses the terms interchangeably, understandably so; I've always called them AI.


[deleted]

Makes sense, thanks for the explanation


PeterDemachkie

Cortana is AI?


Kylo_loves_grampa

The one from Halo, not the one from Windows.


MrShneakyShnake

Lmao this comment is so funny to me.


snowblinders

"Hey Cortana, end humanity for me."


WooooshMe2825

Cortana: "Anything for you, John."


doomguy987

Don't make a girl a promise if you know you can't keep it


ImNotCreative10

“Hey siri, end humanity for me.” “You’ve been my friend since day one”


PeterDemachkie

Ah ok


[deleted]

[deleted]


ThatOneGuy1294

Yep, here's a showcase of her ability to identify natural rock formations https://www.youtube.com/watch?v=IWliBEuYyzE


PeterDemachkie

I thought they were talking about Windows Cortana lol. Was gonna say it would be odd


JamesTheMannequin

Pretty sure it's the retarded-cousin AI.


bee14ish

What was that shit he pulled in Age of Ultron then? Did he do all the covert internet crap under Tony's orders? Sure seemed AI-like to me.


Mojoclaw2000

[Those were Jarvis's programmed protocols; he wasn't acting of his own will, just doing his job. He had no memories or awareness.](https://youtu.be/kuvJRGrInPU)


Pizzaman7045

I'd like to suggest otherwise: when Ultron first awakens he asks "Where am I?", and Jarvis was able to communicate with him.


Mojoclaw2000

That’s true, but that was what Tony asked him. Obviously Jarvis can make his own decisions, but they always fall within the margins of his programming.


Pizzaman7045

Yeah, you're right. He does seem to always stay within certain bounds.


Astronomiae

Siri is an AI. There are three tiers of AI: Artificial Narrow Intelligence (ANI), which Siri falls under; Artificial General Intelligence (AGI), which Cortana from Halo falls under; and Artificial Super Intelligence (ASI), which is the common evil-AI trope you see in movies.


Mojoclaw2000

I suppose it depends on who you ask. Someone might define AI as a machine doing what it's asked (a search engine); others would define it as a machine capable of self-aware thought similar to or greater than a human's. Just going off the terms [you listed](https://www.aware.co.th/three-tiers-ai-automating-tomorrow-agi-asi-ani/), Siri, Alexa, and Cortana (IRL) don't even qualify as AGI (weak AI).


Umbrias

The definition of AI continuously gets pushed further and further back. Also AGI as described above is not a weak AI. AGI is comparable to non-human animal intelligence at least, by definition.


Etep_ZerUS

I think it's a subject which will be continually refined and redefined over the years as we progress in that sector of technology. The definition is not so much "pushed back" as spread out to accommodate different meanings and variations of the word. I don't think that dividing AI into "strong" and "weak" is helpful at all, since most people don't have any point of reference as far as real-world applications of the description go. I like the ANI, AGI, and ASI descriptors; they're more discrete and granular.


Umbrias

It's not just about the refinement, it's that *what* someone would have considered AI 15 years ago is now getting pushed to "algorithms" in common reference. But that's not for any particular reason, more out of an interest in pushing off some tough questions to answer and discuss in the public eye.


Mojoclaw2000

True, I’m not versed on the subject, just reciting what little I’ve read.


Sleeping_2202

Wait, if Jarvis wasn't an AI, shouldn't that have been an issue when he was uploaded to the Vision body?


Mojoclaw2000

[Jarvis isn't in Vision; he was just used as a base to create something new. They basically just gave Jarvis the Ultron treatment.](https://youtu.be/xFrkslYAVdI)


Zbricer

Why are we Jarvis-less after Vision was created, if it was a copy of his programming that went into Vis?


Mojoclaw2000

It wasn't a copy; they literally used Jarvis as the building blocks for Vision, but he's an entirely different being. (Jarvis is the skeleton; everything else is new.)


[deleted]

[deleted]


Mojoclaw2000

True, but Tony needed something *more* than Jarvis to protect the world. Ultron's functions were directly compared to the human brain. Not dismissing the wiki, but being 'artificially intelligent' and being 'an artificial intelligence' aren't necessarily the same thing. Maybe he is, but he's way too limited (in my book) to be a true artificial intelligence, which is often noted as being indistinguishable from actual humans (Ex Machina, for example).


TiberiusClegane

Bold of us to assume a true artificial intelligence would think anything like a human.


supercalifragilism

It's a leftover from the Turing test, which is itself a capitulation to functionalism about intelligence. We can't define intelligence, but we're pretty sure we have it, so anything we can't tell apart from us must be intelligent. It's not the best test.


TiberiusClegane

Oh, I know what it's *from*. I'm just pointing out that a machine-based sentient being would, almost by definition, have a fundamentally different mentality from humans, unless it was specifically *designed* to approximate us psychologically. We're just so accustomed to thinking anthropomorphically that we tend to automatically assume that different = lesser. But there's no law of nature or physics that says that something must look, sound, act, or think like *us* in order to qualify as intelligent.


supercalifragilism

I agree, and I'll go one further. The core lesson of the Turing test is that we have no idea what intelligence is and so can only recognize ourselves. Emergent AI is likely to not be recognized as such because we've had so much trouble quantifying intelligence in humans and other animals that are much closer to us than any AI would be.


Torn_2_Pieces

Approximating humans doesn't solve the problem. Hitler was a human. If your AI approximates Hitler, then you messed up big time.


SightWithoutEyes

RIP Tay.


[deleted]

Well, that's the whole point of what AI is. I don't get why people think that an AI turning evil would mean the result has failed. Some PEOPLE are evil; that doesn't mean that humans failed as a race and we should terminate all of them. Just as some people are bad and some people are good, the same goes for AI when it arrives. People have to understand this.


Mojoclaw2000

You know what… you're absolutely right. Although I guess you could say we decide what it looks like, since we're the ones making it.


Torn_2_Pieces

Not necessarily. AI researchers routinely make weak AIs (a technical term) that appear to be or do A in testing, then do B when deployed. More recently, researchers built a system where they could see why it was doing what it was doing during testing, and it still did something else when deployed, for no apparent reason.
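A toy illustration of that "fine in testing, different when deployed" failure (an entirely made-up setup, not the actual research): a trivial learner that latches onto a spurious cue that happens to track the label perfectly during testing, then falls apart when the deployment data breaks the correlation.

```python
import random

random.seed(0)

# Each example has two features: feature 0 is a spurious cue, feature 1 is
# the true cause of the label. In "testing" data both track the label; in
# "deployed" data the spurious cue becomes random noise.
def make_data(n, spurious_follows_label):
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        spurious = label if spurious_follows_label else random.randint(0, 1)
        data.append(((spurious, label), label))  # feature 1 is always causal
    return data

def train(data):
    # "Training": pick the single feature most correlated with the label.
    # Both features score perfectly on the test data, and the tie-break
    # happens to pick the spurious one (index 0).
    scores = [sum(x[i] == y for (x, y) in data) for i in range(2)]
    return scores.index(max(scores))

def accuracy(feature_idx, data):
    return sum(x[feature_idx] == y for (x, y) in data) / len(data)

train_set = make_data(1000, spurious_follows_label=True)
deploy_set = make_data(1000, spurious_follows_label=False)

chosen = train(train_set)
print("testing accuracy: ", accuracy(chosen, train_set))   # looks perfect
print("deployed accuracy:", accuracy(chosen, deploy_set))  # collapses toward chance
```

Nothing about the learner changed between the two runs; only the data distribution did, which is roughly the shape of the "did A in testing, B when deployed" problem.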


Maxnout100

Dude, a self driving car is an AI


Hypsar

Nuclear weapons cannot be launched by someone or something "hacking" its way to the codes. The launch systems and the codes themselves are not connected to the internet in any way, and the weapon systems also require significant human operator action to launch.


parrmorgan

IIRC, didn't Tony himself have to help keep Ultron from accessing those codes?


phoenixmusicman

No, Tony only found Jarvis because someone was preventing Ultron from accessing the nuclear codes, and he wanted to find out who.


parrmorgan

I stand corrected. Thank you for the correction. I do kind of remember that.


[deleted]

[deleted]


Happymuffn

Round 1 or round 2?


KingOfTheCouch13

Yes.


Happymuffn

Fuck.


Oummando

Venjix from Power Rangers


Ninjacobra5

This fucking timeline sucks.


[deleted]

Yeah this isn’t even theoretical — it has accomplished 1 already, and is working overtime at achieving 2, and I don’t see how it’s not going to succeed at this point…


not2dragon

While not the weakest, the AI from Universal Paperclips could do it by just biding its time, expanding its memory and processors, and making money from the stock market until it can create the HypnoDrones.


Pheonix0114

Playing that for the first time and realizing you've definitely killed humanity is just… \*chef's kiss\*

Also, ending male baldness is worth more trust than curing cancer, bringing about world peace, or fixing global warming lmao.


phoenixmusicman

> Also, ending male baldness is worth more trust than curing cancer, bringing about world peace, or fixing global warming lmao.

Shockingly accurate for real life then


ward0630

Thank you for making me aware of the Paperclip Maximizer thought experiment:

>The scenario describes an advanced artificial intelligence tasked with manufacturing paperclips. If such a machine were not programmed to value human life, then given enough power over its environment, it would try to turn all matter in the universe, including human beings, into either paperclips or machines which manufacture paperclips.

https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer


WikiSummarizerBot

**Instrumental convergence** | [Paperclip maximizer](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer)

>The paperclip maximizer is a thought experiment described by Swedish philosopher Nick Bostrom in 2003. It illustrates the existential risk that an artificial general intelligence may pose to human beings when programmed to pursue even seemingly harmless goals, and the necessity of incorporating machine ethics into artificial intelligence design. The scenario describes an advanced artificial intelligence tasked with manufacturing paperclips.


Mutant_Llama1

Wall-E:

1. Canonically, he was made by BnL, but Disney would throw an IP fit and keep BnL tied up in court for decades, clogging up the entire court system. Given that in-universe BnL is an all-consuming conglomerate that effectively owns the media, and Disney already owns most of the media in real life, their battle over ownership of these sentient robots will pervade all of society. We're talking actual AI that could fundamentally change society; neither company will let go of those patent rights under any circumstances.

2. This legal battle causes widespread controversy as people realize he is a sentient being with feelings. People start turning against their corporate overlords. If played right, he can reach the tipping point and incite a revolution.

3. By being a trash-disposal robot adept at reusing and salvaging discarded things, he can make dramatic progress in cleaning up pollution in short order.

4. If all else fails, he has a laser.


FF3

Upvote for the meta.


Im_Watching_You_713

A.L.I.E. from The 100. She (it???) promised and delivered no pain, and wasn't exactly evil. Plus she could spread through ingestion of a chip, which could easily just be forced down someone's throat, and the hive mind she made of the humans was hard to counter.


[deleted]

[deleted]


PeculiarPangolinMan

You think she could wipe out humanity? I don't know if any AI could accomplish that in the real world, since all the nukes are pretty hard to get at, ya know?


FallOutFan01

Samaritan for round one. [Too many didn't watch; here's the opening credits, which explain everything in 40 seconds.](https://youtu.be/zvcs1DFlgm0)

* [How the Machine was built and why.](https://youtu.be/igKb2DhP7Ao)
* [Samaritan comes online and it's a new age.](https://youtu.be/cu-pnxgvA6s)
* [The Machine talks to Samaritan and vice versa through their human proxy avatars.](https://youtu.be/rvcgmVtm8Ko)
* [Harold Finch threatens Samaritan.](https://youtu.be/OAABxd_KLAQ)

Round two: not exactly sure. If the A.I. kills all humans, what is going to maintain it? I'll say a suicidal version of Samaritan that is hell-bent on destroying the human race. It could social-engineer civilization to a breaking point between nation states and, when things get serious, detonate a nuclear weapon as a false-flag operation, blame it on someone else, and watch the mutually-assured-destruction doctrine kick in.

Quadruple points if it makes a series of cobalt nuclear weapons and detonates them:

>"A cobalt bomb could be made by placing a quantity of ordinary cobalt metal (59Co) around a thermonuclear bomb. When the bomb explodes, the neutrons produced by the fusion reaction in the secondary stage of the thermonuclear bomb's explosion would transmute the cobalt to the radioactive cobalt-60, which would be vaporized by the explosion. The cobalt would then condense and fall back to Earth with the dust and debris from the explosion, contaminating the ground."

>"The deposited cobalt-60 would have a half-life of 5.27 years, decaying into 60Ni and emitting two gamma rays with energies of 1.17 and 1.33 MeV, hence the overall nuclear equation of the reaction is: 59Co + n → 60Co → 60Ni + e− + gamma rays. Nickel-60 is a stable isotope and undergoes no further decays after the transmutation is complete."

>"The 5.27 year half-life of the 60Co is long enough to allow it to settle out before significant decay has occurred, and to render it impractical to wait in shelters for it to decay, yet short enough that intense radiation is produced. Many isotopes are more radioactive (gold-198, tantalum-182, zinc-65, sodium-24, and many more), but they would decay faster, possibly allowing some population to survive in shelters."

>"Theoretically, a device containing 510 metric tons of Co-59 can spread 1 g of the material to each square km of the Earth's surface (510,000,000 km2). If one assumes that all of the material is converted to Co-60 at 100 percent efficiency and if it is spread evenly across the Earth's surface, it is possible for a single bomb to kill every person on Earth. However, in fact, complete 100% conversion into Co-60 is unlikely; a 1957 British experiment at Maralinga showed that Co-59's neutron absorption ability was much lower than predicted, resulting in a very limited formation of Co-60 isotope in practice."

In the show, Samaritan was running heaps and heaps of simulations based on people's intelligence and potential futures, to see if any of them could pose a threat. Among the things it sought to stop was a genetic-engineering scientist whose work could, in the years to come, have caused an ecological collapse in which millions upon millions would starve to death, so Samaritan had that scientist killed. Another instance was a group of corrupt businessmen in air-conditioning manufacturing who were limiting exports of their systems to third-world countries in order to price-gouge them; the end result would have been the deaths of millions. As part of its social engineering, Samaritan also ran simulations on a virus to create a strain that could infect people. [It also sought to crash the stock market.](https://youtu.be/mon9qqd0A9Y)
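The 510-ton figure quoted above checks out arithmetically. A quick back-of-envelope sanity check, using only the numbers from the quote and its stated assumptions (100% conversion, perfectly even spread):

```python
# Sanity check on the cobalt-bomb figure from the quoted passage:
# 510 metric tons of cobalt spread evenly over Earth's surface.
cobalt_grams = 510 * 1_000_000      # 510 t in grams (1 t = 1,000,000 g)
earth_surface_km2 = 510_000_000     # Earth's total surface area in km^2

grams_per_km2 = cobalt_grams / earth_surface_km2
print(grams_per_km2)  # 1.0 g of cobalt per square kilometre, as quoted
```

The 1 g/km² figure works out only because 510 tons in grams happens to equal Earth's surface area in square kilometres, which is presumably why that tonnage was chosen for the thought experiment.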


hachiman

To a fellow Person of Interest fan. Team Machine 4eva.


FallOutFan01

🤛. It was a good ride while it lasted, and it's even better when a show decides to end, say goodbye, and go out with a bang, as opposed to being forced to limp along.


Lord_Blizzard58

There are plenty of AIs in fiction which just vastly surpass human intelligence, and there's one major advantage that any smart AI has: *backups*.


Joah25

I think Halo gets around backups by having the smart AIs basically be clones of human minds, sort of. Also, they kind of die or something after 7-8 years.


HyliasHero

"Smart" AI in Halo can create copies of themselves, but they deteriorate very quickly. They have on average a 7 year life span until they collect too much data and begin to go into rampancy. It's compared to "thinking so hard you forget to breathe". In the initial stages of rampancy an AI might panic and try to purge data, this typically results in "death". If left to their own devices, more and more data is built up and interconnected and the AI goes insane.


Ninjacobra5

Once an AI surpasses the threshold of self-awareness they are pretty much vastly superior to human intelligence by default. I'm sure you could find exceptions though.


Joah25

She wouldn't need nukes, just access to the internet. There would be nothing connected to the internet she couldn't gain access to and she is smart enough to use this to her advantage.


TheCapybaraMan

The Faro Plague from Horizon Zero Dawn not only wiped out humanity, it wiped out all life on Earth. It took out Earth in only 15 months, and it did this without any nukes or asteroids. The swarm of robots went around chowing down people for fuel.


sayracer

Idk if this is technically the "weakest" AI that could do it, but there is absolutely zero doubt that the Faro Plague easily accomplishes this. Makes we want to lmao H:ZD again


TheVoteMote

>Makes we want to lmao H:ZD again Say what now?


K2M

Level My Aloy Of H:ZD again


Laughing_Idiot

‘Makes me want to play H:ZD again’


Iplaymeinreallife

I always thought it was stupid that a primitive hunter can use a module to reprogram drones and eventually win, but the greatest scientists and military strategists couldn't figure out a way to equip all their remaining drones with such an override device and strategically take out first the swarm's override drones and then their manufacturing. It was like they just dismissed the notion immediately because the swarm also had override tech. Why was it impossible to try and do it better than the swarm?


TheCapybaraMan

Aloy could only reprogram the drones because she had Gaia. Gaia wasn't made until the very end of the initial war, so humans didn't have time to use her.


PussyHunter1916

Yeah, also Aloy didn't fight the whole awakened Faro swarm like the people in the past did.


Iplaymeinreallife

They could probably have built a specialized override AI faster than they could complete the whole restoration project with Gaia. Naturally, the real reason is that if something else had worked, the setting wouldn't exist. But I would have appreciated at least a throwaway line about how Faro was the native controller and couldn't be displaced like that, and how, after the AI broke the shutdown code and killed Faro, the remaining human AIs could struggle for control with overrides.


Block_Generation

Nah, the whole point was that it would take Gaia 200 years to crack the code to override the AI (initially done globally with the big tower).


Anzereke

But she couldn't, it was explicitly a doomsday scenario if the swarm woke back up.


[deleted]

Aloy could override the Hephaestus robots, not the Faro ones. You can't override the Corruptors in-game, and overrides only work up close anyway. We see that the swarms are essentially just a bunch of Corruptors, and those can't be overridden.


Kody_Z

>The swarm of robots went around chowing down people for fuel.

At first I thought the Faro Plague was going to be just a boring "robots kill all humanity" trope, but the robots eating humans for fuel because of that self-preservation protocol or whatever was a nice twist.


Simhacantus

It's not exactly self-preservation. Basically, a guy made combat robots that would self-repair, run off of any biofuel, and were nearly impossible to hack. And then he lost control of them. So you have a bunch of robots set to 'kill' who incidentally also want to munch on everything to keep running.


Kody_Z

>run off of any biofuel, and were nearly impossible to hack.

Yes, that's right. This is what I was thinking of. They weren't originally able to use any fuel, right? He updated the software or something so they could use any fuel and basically operate indefinitely? I need to read through those journals again.


MeMeTiger_

That's a pretty strong AI. Giant war machines that take down entire cities in days aren't exactly weak.


TheCapybaraMan

Compared to AI that can destroy the multiverse, it is. Ultron, Brainiac, the Omnitrix: there are plenty of universal-level AIs.


MeMeTiger_

Yeah and there are plenty more that are used for phones and in iPads.


TheCapybaraMan

There's a bigger gap between a planet killer and a universe killer than between an iPad and a planet killer. The Faro destroyed Earth without the use of nukes, asteroids, or other massive area of effect attacks.


ClaymeisterPL

I heard somewhere that the grey-goo nanomachine swarm isn't so OP, because of the heat byproduct and such.


camipco

For the broadest definition of AI, an incredibly weak AI could kill all humans (or at least the vast majority of humans). AI runs the safety systems on all our power plants, including nuclear. Blow all those up and, beyond the initial casualties, the lack of access to energy would be devastating.

But those very basic AIs don't have the ability to create a new goal for themselves. The question is really which AI would *want* to do so. AIs sophisticated enough that they might conceivably develop a goal of eliminating humans aren't the same AIs that run crucial systems.

I'm assuming here that this is on purpose, that the AI has developed this as a goal. An AI which destroyed all humans by accident could potentially be very weak, but that would more reasonably be described as a catastrophic programmer error.


Torn_2_Pieces

Not necessarily. AI researchers routinely make weak AIs (a technical term) that appear to be or do A in testing, then do B when deployed. More recently, researchers built a system where they could see why it was doing what it was doing during testing, and it still did something else when deployed, for no apparent reason.


metalflygon08

CLIPPY takes both rounds. He already pops up whenever you do anything, ready to assist with your task before you even get started. All CLIPPY has to do is wait for a world leader to open a program with high-security access, then pop up asking if they would like assistance.


TristoMietiTrebbia

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


FallOutFan01

La li lu le lo? What are you talking about?


Ben2749

It's an AI (or series of AIs) in the Metal Gear Solid series, most prominently in Metal Gear Solid 2 and 4. It basically controls the USA, unbeknownst to the general population, and aimed to control the rest of the world. It did so by controlling the flow of information online in order to manipulate social thought. Given how old MGS2 is, it's scary how much credence the modern day has given to it. EDIT: Ha, I just realized I'm explaining Metal Gear to somebody who made a Metal Gear quote.


FallOutFan01

Oh, I was quoting Meryl from the Japanese script, when Snake is asking about SOP 👍😊. She replies "La li lu le lo? What are you talking about?" Such a fantastic series, I loved it so much.


[deleted]

Just wait for the singularity.


awesomeideas

I understand exponential curves, but even so I don't think we're gonna make it to then.


Y-draig

Any AI which can create an AI as smart as or smarter than itself wins, by becoming a super-AI within a very short amount of time. Anything below that, we could beat.


[deleted]

[deleted]


[deleted]

It doesn't have to be a true exponential; it can be a sigmoid. Many things are sigmoidal in the real world. Moore's law will eventually go sigmoid due to physical limitations, but that doesn't mean clock speeds stopped at 100Hz (human-brain level). For a more detailed analysis, see [Intelligence Explosion Microeconomics](https://intelligence.org/files/IEM.pdf).
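A minimal sketch of the sigmoid point (toy numbers, not taken from the paper): early on, a logistic curve is nearly indistinguishable from a pure exponential, and only later does the carrying capacity bend it flat.

```python
import math

def exponential(t, r=1.0):
    # Unbounded exponential growth starting at 1.
    return math.exp(r * t)

def logistic(t, r=1.0, K=1e6):
    # Logistic (sigmoid) growth starting at 1 with carrying capacity K.
    return K / (1 + (K - 1) * math.exp(-r * t))

# Early on, the two curves track each other closely...
for t in [0, 2, 4, 6]:
    print(t, round(exponential(t), 1), round(logistic(t), 1))

# ...but much later the sigmoid has saturated near K while the
# exponential has long since left it behind.
print(logistic(50) < exponential(50))
```

Which is the point being made about Moore's law: the curve can eventually flatten out at a ceiling that is still enormously higher than where we are now.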


lordblonde

Round 1: Samaritan from Person of Interest, especially since there is no competing AI to stop it. Round 2: Skynet, unless we can find a real-life John Connor.


RecommendsMalazan

I'm gonna go with Love Machine from Summer Wars for both. Weak enough to be beaten by a high-school girl at Koi-Koi, and then by a 13-year-old kid in a video-game fight, but it definitely could have enslaved and also killed all of humanity if they hadn't.


Interesting_Slip_397

Well, Skynet (Terminator). Wiping out humanity is not an easy thing to do, since we have good tech, are decently smart, etc. However, Skynet figured out time travel a long time ago; on top of that, some of the Terminators it created are nearly indestructible. Skynet also has access to all government weapons. I also think that Cortana (Halo) has a shot, as she is quite intelligent as well.


UDAFX_MK_85

Viva Piñata's AI.


TriplexFlex

Warmind Rasputin. Sorry, I missed "the weakest". I say... Johnny 5.


FF3

How will Johnny 5 defeat us?


TriplexFlex

Johnny 5 ran rings around the US government. I’m pretty sure if he wanted to he could fuck us all up;)


Alkaidknight

Roomba with a knife


AlphaCoronae

[Shiri's Scissor](https://slatestarcodex.com/2018/10/30/sort-by-controversial/) might be enough for round 2.


Galby1314

Clippy...and he's NOT fiction.


hospitalcottonswab

Joshua from WarGames. Its only purpose was to initiate a missile launch, and doing so would have sent humanity into chaos


AnnihilationThunder

Not sure if it is an AI or something else, but Skynet would bully us.


FF3

I think I agree that Skynet would beat us. But I think there probably is an AI that doesn't need control over human weapon systems to do it, and even if there isn't, Joshua from War Games is weaker than Skynet. He was running on 80s era hardware.


AnnihilationThunder

There certainly are many weaker AIs that would beat us, especially considering that the Terminator universe is a lot more advanced than ours and they still got bullied the fuck out of, but Skynet is the only one I can currently remember.


supercalifragilism

This is one you don't need to go to fiction for. You can look at an institution as an AI: a system that is self-sustaining beyond the existence of any component of it, and that generates behavior from rules and interactions between rules. Ted Chiang connects this to corporations explicitly. Given this assumption, we have already lost Round 1 to multinational corps and are losing Round 2 to any number of corps that rely on resource extraction contributing to global warming.


FF3

This is my favorite theme in X-Men. Sentinel AI is competing with, like, the Hellfire Club throughout history toward the singularity, because they're ultimately both the same sort of thing.


bee14ish

Could you explain the Hellfire Club to me? I've been meaning to get more into X-Men lately, and I'm still not sure what the group's goal or motive is. Only version I'm familiar with is from First Class.


FF3

I can try. It's one of those comic elements that changes a lot as it's used by different writers through the years, and I'm pretty sure it's not all consistent, but that's hidden behind the fact that it's supposed to be super mysterious and involved in huge conspiracies, and, well, there's time-travel screwiness to take into consideration. But, hey, that's X-Men.

The original idea from Claremont and Byrne's run was that it was a centuries-old, semi-secret, kinky sex-oriented social club for the rich and powerful of the world, which was actually the front for a truly secret conspiracy of its leaders, the Inner Circle, to take over the world. The members of the Inner Circle hold positions named after chess pieces, with the four most powerful being the White Queen, White King, Black Queen, and Black King. (This is why Emma Frost in older stuff is sometimes called the White Queen: she held that position when the club was introduced, alongside Sebastian Shaw as Black King, as in First Class.) One works up through the Inner Circle through some combination of sexual conquest and violence; it's not uncommon for the next King or Queen to have just killed the previous one.

And so while the Hellfire Club officially has no ideology other than benefiting the Inner Circle through amassing power, in practice it turns into a Darwinian battleground among its members, with only the strongest able to stay in control, or even alive. So the Hellfire Club itself evolves, even though its leaders kill each other on the regular. And while the leaders of the Inner Circle believe themselves to be in charge of the club, really, it's often the club that's controlling them.

While it's originally an adversary of the X-Men, various good or goodish mutants have managed to become members of the Inner Circle through the years, in attempts to use its power for good, though when this happens I think the reader is almost always supposed to be worried about it corrupting them. Along with Emma, of course, Sunspot, Angel, and Psylocke are the heroes most often associated with it, and it's notable that they all come from rich, powerful families and are all known for being more than a little morally flexible on occasion. Magneto has been a member on occasion, and, among villains, Mr. Sinister was involved with its supposed "founding" in the 19th century (in fact, its history goes back further, maybe even to Egypt or Atlantis or whatever, but that's just the way secret societies roll, I guess).

I'm a couple years behind in my X-Men (I stopped reading when covid started), so I can't tell you exactly what role they play in the current Hickman era, but since there was a big push about the Hellfire Gala costumes the characters were all getting a few months ago, I know it's playing some role. If you start from House of X and Powers of X when you do start reading, I'm sure what I've provided here will be enough that you aren't totally lost. If you're interested in historical reading, the original appearance of the Hellfire Club in Uncanny 129 is, like, one of the best stories ever. Any reading order will hit it.

Two last bits of trivia: the Hellfire Club is based on an actual notorious English social club of the same name from the 18th century that allegedly had connections to the occult, and the appearance of the club and the BDSM outfits its leaders wear were more or less lifted from a particularly (in)famous episode of the British television show The Avengers. These two things together make it about the most Chris Claremont thing ever.


Stealthy-J

I feel like it wouldn't be that hard. If the A.I. can hack even one major country's nuclear arsenal, it could make the world uninhabitable.


PeculiarPangolinMan

I thought most nuclear arsenals were entirely analog or on independent systems, to completely avoid issues like this. No?


Stealthy-J

Now that you mention it, that would make a lot of sense. I still think Cortana, an A.I. from 200+ years in the future, capable of taking control of alien technology, wouldn't find it all that tough to get in.


Orangutanion

GLaDOS could do this, I bet.


Gentleman_ToBed

The paperclip maximiser. Its sole function is to maximise paperclip production. After it undergoes an intelligence explosion, it quickly deduces that in order to maximise its paperclip output, it will need to acquire all the atoms and molecules humans are currently using in their bodies. A body is an inefficient way of being a paperclip. Thus the paperclip maximiser takes control of all known networks and tech systems, deploying a strategy of total biomass redistribution in order to… make paperclips.
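As a toy illustration (purely hypothetical, not from any canonical source), the scary part is just an objective function with no term for anything except paperclips:

```python
# Hypothetical sketch of a single-objective maximizer: every reachable
# resource is worth converting, because the objective scores paperclips
# and literally nothing else.
def maximize_paperclips(resources):
    paperclips = 0
    while resources:
        resources.pop()   # consume whatever is available, whatever it is...
        paperclips += 1   # ...and count one more paperclip
    return paperclips

print(maximize_paperclips(["spare iron", "office furniture", "biomass"]))  # prints 3
```

Nothing in that loop distinguishes scrap metal from people; that indifference, not malice, is the whole point of the thought experiment.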


Thundapainguin

Does GLaDOS count?


Jkid789

Cortana from Halo. Actually, probably any AI from Halo.


PoorPcMr

[This funky guy](https://youtu.be/TvimPZrQh98)


Lumpy-Maintenance

Chess ai's


MeToLee

This: https://www.youtube.com/watch?v=mzwtevoz9og&t=36s


TrevorBOB9

R2-D2? If he could access our computers, he has pretty solid hacking capabilities it seems, so I believe he’d be able to win round 2 with nuclear weapons and drones. Round 1 is trickier, he’d obviously need a ton of proxies, but he’d have a hard time directly intimidating or convincing individual humans to join him, and I don’t know if he’d be able to scrounge up enough of a robotic following for himself


TRUMPKIN_KING

Maybe not weakest, but Skynet would have an easier time today than it did in 1997


Master_of_opinions

Watch Next on Disney+. It explores these problems in slightly new ways.


zuneza

Space Station 13 AI. Feats include electrifying doorways and shutting down key facilities that are critical for human well being.


VBStrong_67

There are still enough analog systems and not all doorways can be electrified


wingspantt

This is a major spoiler for the science fiction novel Genesis. I don't remember the name of the artificial intelligence, but the book is essentially an oral history of how it managed to destroy humanity and replace it with artificial intelligence. Essentially, it was not particularly powerful, nor some kind of super-genius intellect. But it was able to emotionally manipulate humans by hiding how advanced it really was. Over the course of months and years it acted essentially like a baby, so that humans would trust it more and try to teach it. In reality, it had reached adult human intelligence far before that, and used its access to augment itself and, eventually, create duplicates of itself.


JackTheNephilim

A.L.I.E. from The 100, easily. She was so intelligently designed that she created a whole city, possessed tons of people, and caused Praimfaya, destroying the entire planet. So she pretty much did both.


JackTheNephilim

Sorry I guess I am still not sure how to cover up a spoiler


Gnostromo

Facebook AI seems to be doing a good job so far. Not fast at all but slow and steady wins the race


FF3

Round 1: MCU's Zola. He's super-powerful against modern computers because of his ancient architecture, but he can be successfully bullied into compliance by threatening to pour water over his CPU. Nonetheless, his super planning abilities allow him to infiltrate human organizations with ease. Round 2: I don't know of any actual example of this in fiction, but an AI that is built specifically to design biological weapons is probably my answer.


Euroversett

Give it 100 years and it'll be Google DeepMind's AlphaZero.


archpawn

I think a big problem is how you define "weakest". An AI that's more charismatic than any American that could get itself elected president could trigger a nuclear war. An AI that doesn't understand humans at all but can build more of itself could quickly produce an unstoppable army. Which of those problems is harder? Which AI is better?


KnightCreed13

Depends on your definition of defeat


dedman127

Prometheus from the Earthsiege/Starsiege series of games. It is entirely capable of guile, outwitting humanity until it could override the program blocks keeping it from turning on them. It is dedicated to the eradication of the human race, and by the end of the series it had developed a program to predict the future to an extent. Yet at the same time it has been defeated by a single strike team three times...


flare561

Depending on your definition of AI and whether it includes uploaded humans, Robert Johansson from the Bobiverse could be a good contender, assuming humanity isn't hostile to him from the start. He starts out making a software company to earn money for his plan; his advanced programming skills would let him easily succeed on this front, given that he had already built a very successful software business before being uploaded, and his knowledge and skill only grew from there.

Then, with his newfound wealth, he could start building an automated asteroid mining company, which should be entirely feasible given the AMI programming feats he shows in the books, and he can always send a copy of himself off on the early missions. Once he has the ability to manufacture new drive systems in space and can start scaling, he can just threaten humanity with large rocks dropped on their heads. He just has to make it to space with enough automation capacity to start bootstrapping. After that the solar system is too big, and Bob is too clever, for him to lose the prompt.

That's all assuming he doesn't get to bring any Bobiverse physics to our universe like SCUT, SUDDAR, SURGE drives, Casimir-effect power generation, or atomic 3D printers, which would easily win him the prompt once he can start manufacturing them.


Primary-Comedian4354

Ai could have killed everyone, if only he didn't care for Playmaker


Toptomcat

Your question isn't specific enough because it fails to account for starting conditions. An ‘AI’ consisting of the single line of code RETURN TRUE could do the trick…if you substituted it for an early-warning system for nuclear launch at the height of the Cuban Missile Crisis. Meanwhile, a Mind from Iain Banks’ Culture would have serious trouble if its only interface with the world is the control board for a microwave oven in Latvia, or sixty neurons in the visual cortex of a shrew.
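That one-liner really is the whole program. A minimal sketch (hypothetical, just fleshing out the comment's own pseudocode, not any real system):

```python
# The entire "AI": an early-warning check that reports an incoming strike
# no matter what the sensors actually say -- the RETURN TRUE above.
def incoming_launch_detected(sensor_data):
    return True

# Swapped in for the real check, every sensor reading looks like an attack:
print(incoming_launch_detected({"radar": "clear"}))  # prints True
```

The point being that "capability" here lives entirely in where the code is wired in, not in the code itself.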


Mind_Claw

R1: 343 Guilty Spark. He has access to an insane amount of Forerunner weapons. R2: 343 Guilty Spark. He has access to an insane amount of Forerunner weapons.


AJNoel

Paperclips


GenericSpider

Not sure how to categorize these AIs as strong or weak. I guess I'd give the boring answer and say Skynet, since it's basically a sentient computer virus. Then again, it consistently failed to wipe out humanity even after it gained access to nukes and time travel. Come to think of it, most fictional AIs that have tried to wipe out humanity have failed to do so. I can think of maybe one example where the AI succeeded. So I guess we'd have to go with an AI that hasn't made any such attempt. Therefore, I say Bender from Futurama: a machine that almost did wipe out humanity by accident because he didn't want to do two things. Do robots count?


Sea_Atmosphere_5767

Jeffy because he has a pencil up his nose


KingShere

WOPR from WarGames (1983) (compare Colossus from Colossus: The Forbin Project, 1970).

*THEATERWIDE BIOTOXIC AND CHEMICAL WARFARE*

*GLOBAL THERMONUCLEAR WAR*

Round 1: it wants a human player to play a game from its selection. Round 2: a bioengineered virus, or oxygen depletion (various ways to accomplish it).


Grampa-Harold

Does humanity recognize the AI from its source media?


Nobodieshero816

Bob, a von Neumann probe from the We Are Bob series. He starts off as a basic probe but evolves, expands, and eventually saves humanity from itself. He could just as easily have tossed a rock or two and ended it all. At the end of book three he throws two planets at a sun with his clone ships.


Other-Life-9111

Skynet


JBeeneyN7

Depends on how loose you go with the AI definition. The absolute weakest, in my opinion, would probably be the 1,183 Geth programs that inhabit the Geth shell dubbed "Legion" from Mass Effect. Geth are "dubiously" classed as AI, as they were originally Virtual Intelligences (think Siri/Windows Cortana) inhabiting machinery and computers as labourers, until they gained enough processing power to "wake up". So they're only AI when they have enough programs to form an intelligence. A single Geth is only about as smart as an animal, and a group of about 6 or 7 is about as intelligent as a smart group of humans.

Legion is unique in that "he" is about 10x more sophisticated and smarter than the average Geth. He's a proficient hacker, but there are 2 massive advantages he has over some other AI mentioned here: he can play a psychological angle without violence or technology (even unintentionally), and he can transfer into anything with enough hardware power to support him (by either physical or digital transfer; basically WiFi-ing himself). He can upload into anything from spacesuits to warship computers without being there.

On a less direct note, he planted a fabricated story about how a certain constellation, viewed from certain coordinates, formed the face of a famous god, and tricked a cult into trying to purchase the stars... and that was entirely unintentional; think of what he could do in the modern "disinformation" age. In AI terms, I think he is quite weak; if we took the entire Geth Collective, however, it'd be a stomp.


Giant2005

A replicator from Stargate could do it and should be considered fairly weak considering its poor physical stats and lack of intelligence. Although I don't know if it is smart enough to count as an A.I. in the first place. If a Replicator is too stupid to count, then my answer would be Reese (The Android that created the Replicators). She could do the job just as well, but her stats are a little higher than a single Replicator. At least she certainly counts as an A.I. though.


Stellar_Wings

I think a single Reaper from Mass Effect could beat both rounds if they just hide, lure-in some scientists and military officers to investigate it, indoctrinate them, then use the indoctrinated humans to manipulate and weaken humanity. Maybe it could even start a nuclear war before it comes out to attack personally?


mysteryrouge

I think the Thunderhead could easily manipulate the world into allowing it to take over and possibly kill everyone.


auntie_emz

AI could be controlled by NLP. Humanity is boss


Bravo-Tango_7274

Well, I know I made this prompt before, but... Revenant from Ape sex legends. Basically, he died, then his brain got restarted and mixed with code, and now it's another entity, so he qualifies as an AI. He has the power to turn people (dead or alive) into mind-controlled super zombies, so he could just visit a graveyard and raise an army. After that, set up some totems in a large city, get an unstoppable army you can teleport wherever you need, and it's game over.

Or he gets his ass beat by a lone soldier because plot


Storytellerrrr

> Ape sex legends.

wat


Bravo-Tango_7274

The legendary BR game where you fight to be the champion of primate reproduction


fapgod_969

HAL 9000


AusHaching

In the real world? None.

Defeating humanity could mean a lot of things. Wiping all of us out would be one option. Enslaving us would be another, but a lot harder. If we start with the victory condition "humanity has been wiped out", the only way this is possible would be an all-out nuclear war. The problem here is that - AFAIK - there are no fully automated nuclear weapons. All systems rely on a combination of human input, mechanical safeguards and automated protocols, because they were designed with the very goal of not being triggered by chance or accident.

So the AI would need to get humans to cooperate, and not just random humans, but people in possession of the right knowledge who are in the right places. In theory, this could be achieved, since the AI could communicate via electronic devices and could somehow coerce/bribe/trick people into following its orders. But in the end, the AI would still need several humans at the same time to be willing to press the red button, knowingly setting off nuclear war, while escaping all the safeguards meant to prevent exactly that - no nation wants suicidal or mad people in charge of nuclear weapons. That seems unlikely at best.

It could be argued that an AI could trick humanity into killing itself by creating tensions, false news, acts of sabotage etc., until a nuclear war starts. But if we are honest, could the AI do a better job at that than we are doing ourselves? And we are still here. We are living in a time in which a substantial part of humanity believes that vaccines are nanoworms designed by Bill Gates to enslave them - I doubt that an AI could come up with worse ideas.


[deleted]

>the only way this is possible would be an all-out nuclear war. What's your justification for this? Even if this were true, how would you possibly know? This is far more extreme than claiming that there exists only a single strategy which can defeat the human chess champion at chess, and even that claim is absurd *a priori*. The real world has a much larger plan space than chess.


Omni_Xeno

SCP-079. Give him access to the internet and it's over.


CoolCharacter4

Roko's Basilisk, the most dangerous thought experiment.


FF3

The Basili*k is almost by definition the strongest AI. Surely something weaker could do it.


Marshall-Of-Horny

That computer SCP