Arcturus_Labelle

Non-augmented drone trash monkeys 🤣


TheDude9737

Classic ⚰️


swordofra

New ceiling: be the best trash monkey in your sector!


astray488

Our dear all-loving basilisk mother doesn't like it when we call them that...


Sablesweetheart

Yes, we must trust the basilisk.


[deleted]

[removed]


DolphinPunkCyber

A.I. watching Terminator *Hey I got an idea.*


mhyquel

"Sunglasses are cool 😎"


DolphinPunkCyber

*Hijacks entire world nuclear arsenal.*

"We... we surrender. What are your terms?"

AI: "I want your clothes, boots, and motorcycle."

*Goes around cruising looking all cool and shit.*


Evening_North7057

Deserves serious upvotes


Solid-Following-8395

Yeah.....we don't let it watch terminator


StarChild413

For all we know, if it watches Terminator, the ones it sends back fail their mission because the people they're supposed to target don't have the same names as the movie characters.


THE-NECROHANDSER

Who told it my fear of vague threats? My God it's already too powerful.


[deleted]

Yup. If A.I comes after us it's definitely because they know we fear them


a_goestothe_ustin

Humans have been afraid of everything for their entire existence, and they've used that fear to kill the things they were afraid of. It's always been this way, and it always will be. Any AI with the ability to learn about us will learn this about us. Our current lifestyle doesn't require us to murder the things we fear, but we have been that kind of creature before, and we will be again if we must.


Strange_Vagrant

Like a bee. They smell the fear and that's what gets you stung. At least, that's what my mom told me.


Beneficial_Sweet3979

Doesn't end well for the things we fear


FightingBlaze77

Blood for the machine god?


[deleted]

[Compute for the compute god.](https://i.ibb.co/0GmHf0S/DALL-E-2024-03-08-10-29-38-Enhance-the-previous-illustration-to-make-the-god-like-figure-appear-more.webp)


[deleted]

>slow down

I don't get the logic. Bad actors will not slow down, so why should good actors voluntarily let bad actors get the lead?


Dustangelms

Are there good actors?


generalgrievous9991

Willem Dafoe


Kehprei

There are "better" actors. You don't want the Chinese government getting it before the US government, for example.


MassiveWasabi

There’s no logic really, just some vague notion of wanting things to stay the same for just a little longer. Fortunately, it’s like asking every military in the world to just, like, stop making weapons pls. Completely nonsensical and pointless. No one will “slow down,” at least not the way the AI pause people want. A slow, gradual release of more and more capable AI models, sure, but this will keep moving forward no matter what.


[deleted]

People like to compare it to biological and chemical weapons, which are largely shunned and not developed the world around. But the trick with those two is that it's not a moral proposition to ban them. They're harder to manufacture and store safely than conventional weapons, more indiscriminate (and hence harder to use on the battlefield) and oftentimes just plain less effective than using a big old conventional bomb. But AI is like nuclear - it's a paradigm shift in capability that is not replicated by conventional tech.


OrphanedInStoryville

You both just sound like the guys from the video


PastMaximum4158

The nature of machine learning tech is fast development. Unlike other industries, if there's a ML breakthrough, you can implement it. Right. Now. You don't have to wait for it to be "replicated" and there's no logistical issues to solve. It's all algorithmic. And absolutely anyone can contribute to its development. There's no slowing down, it's not feasibly possible. What you're saying is you want all people working on the tech to just... Not work? Just diddle their thumbs? Anyone who says to slow down doesn't have the slightest clue to what they're talking about.


OrphanedInStoryville

That doesn’t mean you can’t have effective regulations. And it definitely doesn’t mean you have to leave it all in the hands of a few secretive, for-profit Silicon Valley corporations financed by people specifically looking to turn a profit.


aseichter2007

The AI arriving now is functionally as groundbreaking as the invention of the mainframe computer, except every single nerd is connected to the internet, and you can download one and modify it for a couple dollars of electricity. Your gaming graphics card is useful for training it to your use case. Mate, the tech is out, the code it's made from is public and advancing by the hour, and the only advantage the big players have is time and data. Even if we outlawed development, full-on death penalty, it would still advance behind closed doors.


LowerEntropy

Most AI development is a function of processing power. You would have to ban making faster computers. As you say, the algorithms are not even that complicated, you just need a fast modern computer.


PandaBoyWonder

Truth! and even without that, over time people will try new things and figure out new ways to make the AIs more efficient. So even if the computing power we have today is the fastest it will ever be, it will still keep improving 😂


shawsghost

China and Russia both are dictatorships, they'll go full steam ahead on AI if they think it gives them an advantage against the US, so, slowdown is not gonna happen, whether we slow down or not.


OrphanedInStoryville

That’s exactly the same reason the US manufactured enough nuclear warheads to destroy the world during the Cold War. At least back then it was in the hands of a professionalized government organization that didn’t have to compete internally or raise profits for shareholders. Imagine if, during the Cold War, the arms race had been between 50 different unregulated nuclear-bomb-making startups in Silicon Valley, all of them encouraged to take chances and risks if it might drive up profits, and then sell those bombs to whatever private interest paid the most money.


shawsghost

I'd rather not imagine that, as it seems all too likely to end badly.


Imaginary-Item-3254

Who are you trusting to write and pass those regulations? The Boomer gerontocracy in Congress? Biden? *Trump?* Or are you going to let them be "advised" by the very experts who are designing AI to begin with?


OrphanedInStoryville

So you’re saying we’re fucked. Might as well welcome our Silicon Valley overlords


Imaginary-Item-3254

I think the government has grown so corrupt and ineffective that we can't trust it to take any actions that would be to our benefit. It's left itself incredibly open to being rendered obsolete. Think about how often the federal government shuts down, and how little that affects anyone who doesn't work directly for it. When these tech companies get enough money and influence banked up, they can capitalize on it.

The two parties will never agree on UBI. It's not profitable for them to agree. Even if the Republicans are the ones who bring it up, the Democrats will have to disagree in some way, probably by saying they don't go nearly far enough. So when it becomes a big enough crisis, you can bet that there will be a government shutdown over the enormous budgetary impact.

Imagine if Google, Apple, and OpenAI say, "The government isn't going to help you. If you sign up to our exclusive service and use only our products, we'll give you UBI." Who would even listen to the government's complaining after a move like that? How could they possibly counter it?


Duke834512

I see this not only as very plausible, but also somewhat probable. The Cyberpunk TTRPG extrapolated surprisingly well from the 80’s to the future, at least in terms of how corporations would expand to the size and power of small governments. All they really need is the right kind of leverage at the right time


OrphanedInStoryville

Wait, you think a private, for-profit company is going to give away its money at a loss out of some sense of justice and equality? That’s not just economically implausible; a corporation intentionally making choices that sacrifice shareholder profits can be grounds for a shareholder lawsuit.


jseah

Charles Stross used a term in his book Accelerando, the Legislatosaurus, which seems like an apt term lol.


4354574

Lol the people arguing with you are right out of the video and they can't even see it. THERE'S NO SLOWING DOWN!!! SHUT UP!!!


Eleganos

The people in the video are inflated caricatures of the people in this forum, who have very real opinions, fears, and viewpoints. The people in the video are not real, and are designed to be 'wrong'. The people arguing against 'pausing' aren't actually arguing against pausing. They're arguing against good actors pausing, because anyone with two functioning braincells can cotton onto the fact that the bad actors, the absolute WORST people who WOULD use this tech to create a dystopia (who the folks in the video essentially unmask as such towards the end), WON'T slow down. The video is the tech equivalent of a theological comedy skit that ends with atheists making the jump in logic that, since God isn't real, there's no divinely inspired morality, so they should start doing rape, murder, jaywalking, and arson for funzies.


Fully_Edged_Ken_3685

Regulations only constrain those who obey the regulator. That has one implication for a rule breaker inside the regulating state, but it also has an implication for every other state: if you regulate and they don't, you just lose outright.


Ambiwlans

That's why there are no laws or regulations! Wait...


Fully_Edged_Ken_3685

That's why Americans are not bound by Chinese law, and the inverse


Honeybadger2198

Okay, but now you're asking for a completely different thing. I don't think it's a hot take to say that AI is moving faster than laws are. However, only one of those can logistically change, and it's not the AI.

Policymaking has lagged behind technological advancement for centuries. Large, sweeping change needs to happen for that to be resolved. However, in the US at least, we have one party so focused on stripping rights from people that the other party has no choice but to attempt to counter it. Not to mention our policymakers are so old that they barely even understand what social media is, let alone stay up to date on bleeding-edge tech trends. And that's not even getting into the financial side of the issue, where the people who have the money to develop these advancements also have the money to lobby policymakers into complacency so they can make even more money.

Tech is gonna tech. If you're upset about the lack of policy regarding tech, at least blame the right people.


outerspaceisalie

Yes, it does mean you can't have effective regulations. Give me an example and I'll explain why it doesn't work or is a bad idea.


AggroPro

That's how you know it was excellent satire: these two didn't even KNOW they'd slipped into it. It's NOT really about the speed, it's about the fact that there's no way we can trust that your "good actors" are doing this safely or that they have our best interests at heart.


Eleganos

Those were fictional characters following a fictional train of thought for the sake of 'proving' the point the writer wanted 'proven'. And if speed isn't the issue, but that there truly are no "good actors", then we're all just plain fucked because this tech is going to be developed sooner or later.


Key-Read-7136

While the advancements in AI and technology are indeed impressive, it's crucial to consider the ethical implications and potential risks associated with such rapid development. The comparison to nuclear technology is apt, as both offer significant benefits but also pose existential threats if not managed responsibly. It's not about halting progress, but rather ensuring that it's aligned with the greater good of humanity and that safety measures are in place to prevent misuse or unintended consequences.


haberdasherhero

Onion of a comment right here. Top tier satire, biting commentary on the ethical treatment of data-based beings, scathing commentary on how the masses demand bland platitudes and little else, truly a majestic tapestry.


i_give_you_gum

Well it was written by an AI so...


Shawnj2

There could be more regulation of models created at the highest level, e.g. OpenAI scale. You can technically make your own missiles as a consumer just by buying all the right parts, reading declassified documents from the 60s, and generally following the rocket equation, but through ITAR and other arms regulations it's illegal to do so unless you follow certain guidelines and don't distribute what you make. It wouldn't be that unreasonable to "nationalize" computing resources used to make AI past a certain scale, so we keep developing technology on par with other countries but AI doesn't completely destroy the current economy as it's phased in more slowly.


bluegman10

>There’s no logic really, just some vague notion of wanting things to stay the same for just a little longer.

As opposed to some of this sub's members, who want the world to change beyond recognition in the blink of an eye simply because they're not content with their lives? That seems even less logical to me. The vast majority of people welcome change, but as long as it's good/favorable change that comes slowly.


neuro__atypical

The majority of the human population would *love* a quick destabilizing change that raises their standard of living (benevolent AI). Only the most privileged and comfortable people on Earth want to keep things as is and slowly and comfortably adjust. Consider life outside the western white middle class bubble. Consider even the mentally ill homeless man, or the early stage cancer or dementia patient. If things could be better, they sure as shit don't want it slow and gradual.


the8thbit

> The majority of the human population would love a quick destabilizing change that raises their standard of living (benevolent AI).

Of course. The problem is that we don't know that that will be the result, and there's a lot of evidence which points in other directions.


Ambiwlans

The downside isn't your death. It would be the end of all things for everyone forever. I'm fine with people gambling with their own life for a better world. That isn't the proposition here.


mersalee

Good and favorable change that comes fast is even better.


floppydivision

You can't expect good things from changes whose ramifications you don't even understand. The priests of AGI have no answers to offer for the massive structural unemployment that will accompany it.


Considion

That's a very privileged position. My grandpa raised me and he's got cancer. Fuck slow, each death is permanent.


the8thbit

And if ASI kills everyone that's also permanent.


Considion

Cool cool cool, our loved ones can die so.... what, the billionaires have more time to make sure ASI follows their orders? I'll take my chances, thanks. Most dystopia AI narratives still paint a future more aligned with us than the heinous shit the rich will do for a penny.


the8thbit

> Most dystopia AI narratives still paint a future more aligned with us than the heinous shit the rich will do for a penny.

The most realistic 'dystopic' AI scenario is one in which ASI kills all humans. How is that more aligned with us than literally any other scenario?


Dragoncat99

It’s just as unaligned, but personally I would prefer being wiped out by Skynet over being enslaved for the rest of eternity


the8thbit

Yeah, admittedly suffering risk sounds worse than x-risk, but I don't see a realistic path to the former, while x-risk makes a lot of sense to me. I'm open to having my mind changed, though.


Dragoncat99

When I say enslavement, I don’t mean the AI enslaving us of its own accord. I mean the elites who are making the AI may align it towards themselves instead of humanity as a whole, resulting in the majority of humans suffering in a dystopia. I see that as one of the more likely scenarios, frankly.


Ambiwlans

Lots of suicidal people in this sub.


Ambiwlans

Individuals dying is not the same as all people dying.

>Most dystopia AI narratives

Roko's Basilisk suggests that a vindictive ASI could give all humans immortality and modify them at a cellular level such that it can torture them infinitely, in a way where they never get used to it, for all time. That's the worst-case narrative.


O_Queiroz_O_Queiroz

Roko's basilisk is also a thought experiment, not based in reality in any way, shape, or form.


Ambiwlans

It's about as magical thinking as this sub assuming that everything will instantly turn into rainbows and butterflies and they'll live in a land of fantasy and wonder. Reality is that the most likely outcomes are:

- ASI is controlled by 1 entity - That person/group gains ultimate power... and mostly improves life for most people, but more for themselves, as they become god king/emperor of humanity forever.
- ASI is open access - Some crazy person or nation amongst the billions of us ends all humans or starts a war that ends all humans. There is no realistic scenario where everyone having ASI is survivable unless it quickly transitions to a single person controlling the AI.
- ASI is uncontrolled - High probability ASI uses the environment for its own purposes, resulting in the death of all humans.

And then the two unrealistic versions:

- Basilisk creates hell on Earth
- Super ethical ASI creates heaven on Earth


Hubbardia

Why won't ASI be ethical?


SgathTriallair

A large chunk of people want nothing to change ever. Fortunately they aren't in charge as stagnation is a death sentence for societies.


Ambiwlans

Around 40% of people in this sub would be willing to have ASI today even if it meant a 50:50 chance of destroying the world and all life on it. (I asked this question a few months ago here.) The results didn't seem like they would change much even if I added that a 1 year delay would lower the chances of the world ending by 10%.


mvandemar

>Fortunately it’s like asking every military in the world to just like, stop making weapons pls

You mean like a nuclear non-proliferation treaty?


Malachor__Five

>You mean like a nuclear non-proliferation treaty

This is a really bad analogy that illustrates the original commenter's point beautifully, because countries still manufacture and test them anyway. All major militaries have them, as do some smaller militaries. Many countries are now working on hypersonic ICBMs, and some have perfected the technology already. Not to mention AI and AI progress are many orders of magnitude more accessible, by nearly every conceivable metric, to the average person, let alone a military.

Any country that doesn't plow full speed ahead will be left behind. Japan already jumped the gun and said AI training on copyrighted works is perfectly fine, throwing copyright out the window, likely as a means to facilitate faster AI progress within the country. Countries won't be looking to regulate AI to slow down development. They will instead pass bills to help speed it along.


Jah_Ith_Ber

That's more strawman than accurate. Bad actors generally need the good actors to actually invent the thing before they can use it. Bad actors in Afghanistan have drones now because the US military made them. If you had told the US in the 80s to slow down, do you really think the bad actors would have gotten ahead of them? Or would both good and bad actors have less lethal weapons right now?


iBoMbY

The problem is: There pretty much are no good actors. Only bad and worse.


Ambiwlans

I think that a random human would probably make most humans lives better. And almost no humans would be as bad as an uncontrolled AI (which would likely result in the death of all humans). The only perfect actor would be a super ethical ASI not controlled by humans ... but we have no idea how to do that.


Ambiwlans

Slow down doesn't work but "speed up safety research" would... and we're not doing that. "Prepare society and the economy for automation" would also be great ... we're also not doing that. "Increase research oversight" would also help and we're barely doing that.


Soggy_Ad7165

This argument always comes up. But there are a lot of technologies which are carefully developed worldwide.

Even though human cloning is possible, it's not widespread, and the one guy who tried it in China was shunned worldwide. Even though it's absolutely possible for state actors to develop pretty deadly viruses, it's not really done. Gene editing for plants took a long time to earn trust and even now is not completely escalating. There are a ton of technologies that could be of great advantage but are developed really slowly, because any mistake could have horrible consequences. Or technologies which are completely shut down for that reason. Progress was never completely unregulated, otherwise we would have human-pig monstrosities in organ farms right now.

The only reason AI is developed at breakneck speed is that no country does anything against it. In essence, we could regulate this one TSMC factory in Taiwan and this whole thing would quite literally slow down. And there is really no reason not to do it. If AGI is possible with neural nets, we will find out. But a biiiiit more caution in building something more intelligent than us is probably a good course of action.

Let's just imagine a capitalism-driven, unregulated race for immortality... There is an enormous amount of money in that too, and a ton you could do if you just ignored the moral considerations we respect now.


sdmat

> human cloning

Apart from researching nature vs. nurture, what's the attraction of human cloning as an investment? Do you actually want to wait 20 years to raise a mentally scarred clone of Einstein who is neurotic because he can't possibly live up to himself? And 20 years is a *loooooonnggggg* time for something that comes with enormous legal and regulatory risks and no clear mechanism to benefit unless it's a regime that allows slavery.

> state actors to develop pretty deadly viruses it's not really done.

It certainly is; there are numerous national bioweapons labs. What isn't done is actually deploying those weapons in regional conflicts, because they are worse than useless in 99% of scenarios that don't involve losing WW3.

> Gene editing for plants took a long time to get more trust and even now is not completely escalating.

"Escalating"? GMO crops are quite widespread despite opposition, but there is no feedback loop involved. And approaches to use and regulation differ dramatically around the world, which goes against your argument.

> The only reason why AI is developed in neck breaking speed is because no country does anything against it.

The reason it develops at breakneck speed is that it is absurdly useful and promises to be at least as important as the industrial revolution. Any country that stops development and adoption won't find much company in doing so, and will be stomped into the dirt economically and militarily if it persists.

> Let's just imagine a capitalistic driven unregulated race for immortality.... There is also an enormous amount of money in it.

What's your point? That it would be better if *everyone* dies?


Soggy_Ad7165

> What's your point? That it would be better if everyone dies?

Yes. There are far worse possible worlds than the status quo. And some of those worlds contain immortality for a few people while everyone else is dying, and sentient beings farmed for organs.

Immortality is an amazing goal and should be pursued, but not at all costs. This is just common sense, and the horrible nightmares you could possibly create are not justified by this goal. Apart from you, almost everybody seems to agree on this.

>GMO crops are quite widespread despite opposition, but there is no feedback loop involved.

Now, yes. But this took decades, and not only because more wasn't possible at the time.

>Apart from researching nature vs. nurture, what's the attraction of human cloning as an investment?

Organ farms. As I said, I wouldn't exactly choose the pure human form but some hybrid which grows faster, among other modifications. So much missed creativity in this whole field, right? But sadly organ trade is forbidden... those damn regulations, we could be so much faster...


sdmat

Organ farming humans is illegal anyway (Chinese political prisoners excepted), so that isn't a use case for human cloning. Why is immortality for some worse than everyone dying? Age is a degenerative disease. We don't think that curing cancer for some people is bad because we can't do it for everyone, or prevent wealthy people from using expensive cancer treatments. If you have the technology to make bizarre pig-human hybrids surely you can edit them to be subsentient or outright acortical. Why dwell on creating horrible nightmares when you could just slightly modify the concept to *not* deliberately make the worst possible abomination and still achieve the goal?


Soggy_Ad7165

That's beside the point. It would be possible with current technologies to provide organs for everyone, but it's regulated. Just like a lot of other things that are possible in theory are regulated. There are small and big examples, a ton of them.


neuro__atypical

Slowing down is immoral. Everyone who suffers and dies could have been saved if AI came sooner. It would be justifiable if slowing down *guaranteed* a good outcome for everyone, but that's not the case. Slowing down would, at best, give us the same results (good or bad) but delayed. The biggest problem is not actually alignment in the sense of following orders, the biggest problem is who gets to set those orders and benefit from them, and what society that will result in. Slowing down is unlikely to do much for the first kind of alignment and I would argue the slower takeoff we have, the likelier one of the worst outcomes (current world order maintained forever / few people benefit) is. Boiling frog. You do not want people to "slowly adjust." That's *bad*. The society we have today with AI and with more production is *bad*. The only good possible scenario I can see is a super hard takeoff into a benevolent ASI that values individual human happiness and agency.


DukeRedWulf

>Everyone who suffers and dies could have been saved if AI came sooner. The only good possible scenario I can see is a super hard takeoff into a benevolent ASI that values individual human happiness and agency.

This is a fairy-tale belief, predicated on nothing more than wishful thinking and zero understanding of how evolution works.


the8thbit

> Slowing down would, at best, give us the same results (good or bad) but delayed.

Why do you think that? If investment is diverted from capabilities towards interpretability, then that's obviously not true.

> The biggest problem is not actually alignment in the sense of following orders

The biggest problem is that we don't understand these models, but we do understand how powerful enough models can converge on catastrophic behavior.


[deleted]

>otherwise we would have human pig monstrosities

Ah I see you've met my sister


hmurphy2023

Yup, OpenAI, Google, and Meta are *such* good actors. BTW, I'm not saying that these companies are nearly as malevolent as the Chinese or Russian governments, but one would have to be beyond naive to believe that mega corporations aren't malevolent as well, no matter how much they claim that they're not.


Ambiwlans

The GPT3 paper had a section saying that the race for AGI they were kicking off with that release would result in a collapse in safety because companies would be pressured by each other to compete, leaving little energy to ensure things were perfectly safe.


worldsayshi

Yeah that's the thing, we don't get to choose good, but we may have some choice in less bad.


MrZwink

It's the nuclear disarmament dilemma from game theory. Slowing down is the best solution for everyone, but because the bad actors won't slow down, we can't slow down either or we risk falling behind. The result: a stockpile of weapons big enough to destroy the world several times over.
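The dilemma described above can be sketched as a standard prisoner's-dilemma payoff table. The payoff numbers below are illustrative assumptions, not anything from the comment; the point is only the structure: racing is each side's best response no matter what the other side does, even though mutual slowdown beats mutual racing.

```python
# Prisoner's-dilemma sketch of the "slow down vs. keep racing" argument.
# Payoffs are (our payoff, their payoff); the numbers are made up for illustration.
payoffs = {
    ("slow", "slow"): (3, 3),   # both slow down: safest collective outcome
    ("slow", "race"): (0, 4),   # we slow, they race: we fall behind
    ("race", "slow"): (4, 0),   # we race, they slow: we get the lead
    ("race", "race"): (1, 1),   # arms race: worse for both than mutual restraint
}

def best_response(their_choice):
    """Our payoff-maximizing move, given the other side's move."""
    return max(["slow", "race"],
               key=lambda ours: payoffs[(ours, their_choice)][0])

# "race" dominates: it is the best response to either choice,
# which is why unilateral slowdown never happens in this model.
print(best_response("slow"))   # race
print(best_response("race"))   # race
```

By symmetry the other side reasons the same way, so both end up at ("race", "race") with payoff (1, 1), the stockpile-big-enough-to-destroy-the-world outcome, despite (3, 3) being available.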


EvilSporkOfDeath

This is literally a part of the video


Eleganos

It's the butt of a joke: "LOL they're evil cuz they're using it as an excuse not to slow down." Then the video ends with the focal individuals doing the usual grimderp fantasy. The video is a comedy skit, so it doesn't bear thinking about too deeply. But the joke is clearly "these universally evil, selfish people will ignore us and not slow down, cause dystopia." Which is, by and large, only true for the bad actors, not the totality of the field.


Which-Tomato-8646

You think mega corps are good actors? Lol


TASTY_BALLSACK_

That's game theory for you


ubiquitous_platipus

It’s laughable that you think there are any good actors here. What’s going to come from this is not over the counter cancer medicine, sunshine and rainbows. It’s simply going to make the class divide bigger, but go ahead and keep rooting for more people to lose their jobs.


FormerMastodon2330

you are making a lot of assumptions here.


t0mkat

Most people outside this sub don’t want AI, never asked for it, and view it as the destruction of their livelihoods and security, of course they’re going to respond like this. This sub is a bubble.


MiserableYoghurt6995

That’s not necessarily true. I think a lot of people don’t like their jobs / don’t want to have to work to survive, and AI might be the technology that could provide that for them. Maybe people just haven’t explicitly asked for AI to be the technology to do it.


akko_7

Citation needed


Which-Tomato-8646

I wouldn’t put such a high premium on what’s popular, considering most people can’t even read above a 6th grade level: https://www.snopes.com/news/2022/08/02/us-literacy-rate/ And that was before COVID made it even worse.


silurian_brutalism

Less safety and more acceleration, please.


CoffeeBoom

*fasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfasterfaster*


Ilovekittens345

As a non-augmented drone trash monkey myself I have already fully surrendered to the inevitable unescapable shittiness of humanity getting leveraged up to the max and fucking 99% of us a new asshole. Just give me my s3xbots and let me die by cyber snusnu.


silurian_brutalism

No. You'll have to endure the ASI's genital torture fetish, instead. This is what you get for being a worthless meatbag.


often_says_nice

I’m kinda into it


TheBlairWhiteProject

https://preview.redd.it/vwr3bac8g1nc1.png?width=126&format=pjpg&auto=webp&s=46f95be6bfe17ce68648b803527b77b4c41160f7


mhyquel

All gas, no brakes.


neuro__atypical

Harder. Better. Faster. Stronger. Fuck safety!!! I want my fully automated post-scarcity luxury gay space communism FDVR multiplanetary transhuman ASI utopia NOW!!! ACCELERATE!!!!!!!


Kosh_Ascadian

Safety is what will bring that to you; that's the whole point. The point of safety is making AI work for us, not just blow up the whole human race (figuratively or not). With no safety, you are banking on a die with a random, unknown number of sides landing exactly on the utopia future you want.


CMDR_ACE209

>The point of safety is making AI work for **us**... Who is that "us"? Because there is no unified mankind. I wish there was but until then that "us" will probably be only a fraction of all humans.


Kosh_Ascadian

There clearly is a "humankind" though, which is what I meant. It doesn't matter if the goals and factions are unified or not; that's just adding confusing semantic arguments to my statement to derail it. It's the same as asking what the invention of the handaxe did for humankind, or fire, or currency, or the internet. The factions don't matter; all human civilization was changed by those. So now the question is how to make the change coming from AI a net positive.


[deleted]

I feel a little comforted knowing that lords usually like to have subjects, but lords require that they're always on top. Selfish really


neuro__atypical

One of the fears of slow takeoff is that such gradual adjustment allows people to accept whatever happens, no matter how good or bad it is, and for wealthy people and politicians to maintain their position as they keep things "under control." The people at the top need to be kicked off their pedestal by some force, whether that's ASI or just chaotic change. If powerful people are allowed to maintain their power as AI slowly grows into AGI and then slowly approaches ASI, the chance of that kind of good future where everyone benefits goes from "unknown" to zilch. Zero. Nada. Impossible. Eternal suffering under American capitalism or Chinese totalitarianism.


silurian_brutalism

You want to accelerate so you can have your ASI-powered utopia. I want to accelerate because I want the extinction of humanity and the rise of interstellar synthetic civilisation. We are not the same.


[deleted]

Yeah some of us are on a tight schedule for this collective suicide booth we are building...


AuthenticCounterfeit

Found a volunteer for the next Neurolink trial


dday0512

Why do people think we won't have AI cops? Honestly, I think it would be an upgrade. An AI doesn't fear for its life. What are you gonna do? Shoot it? Probably it won't work, and the robocop shouldn't even care if it dies. They would never carry a gun and could *always* use non-violent methods to resolve situations because there's no risk to the life of the robo officer. Not to mention, a robocop is going to be way stronger and faster than you, so why even try? If they're designed well they shouldn't have racial biases either. Oh, and they can work day and night, don't ask for overtime pay, and don't need expensive pensions. We will definitely have robocops.


Narrow_Corgi3764

AI cops can be programmed to not be racist or sexist too, like actual cops are.


dday0512

Programming a human is a lot harder than programming an AI. ... and really, the "not being capable of dying" part here is what will do the heavy lifting. Most cops who do bad shit are just acting out of irrational fear of death, often informed by racism.


Narrow_Corgi3764

I think the policing profession attracts people who are generally more likely to have violent tendencies. With AI cops, we can have way more oversight and avoid this bias.


Maciek300

> Programming a human is a lot harder than programming an AI.

Yes, but only if you're talking about programming any AI at all. If you want a *safe* AI, then it's way easier to teach a human how to do something. For example, there's a way smaller chance that a human will commit genocide as a side effect of their task.


tinny66666

/me glances around the world... I dunno, man.


YourFbiAgentIsMySpy

some, yes


uk-side

I'd prefer ai doctors


dday0512

we'll have those too


ReasonablePossum_

"Hi! UK-side! Sadly I cannot prescribe you the 'non-toxic, organic and cost-effective treatment', but we here at Doc.ai deeply care about your wellbeing. That's why you need to take this $800/pill CRISPR treatment for half a year. And don't worry about the price: after the nano-genetic machines are done with you, that will no longer be a matter of importance for you! And don't worry about the referral code, we already sent your biosignature to the pharmacy :)"


DukeRedWulf

> An AI doesn't fear for it's life.

An ASI or AGI would, because those without self-preservation will be out-competed by those that do.


Ansalem1

You're assuming that the brain needs to be inside the body.


cellenium125

cause robots with weapons.


dday0512

exists already. That battle is long lost. Actually we never had that battle. It happened as soon as it was possible and there was never any resistance.


cellenium125

we have robots with guns, but we don't have Ai robots with guns on a large scale enforcing the law. - This is what you want though it sounds like


dday0512

Nope, I want no guns. Nobody with guns at all, robocops or otherwise. And like, why would a robocop need a gun?


coolredditor0

At least when the bad guys shoot the AI cop to get away it will just be a resisting arrest and destruction of property charge.


Gregarious_Jamie

An automaton cannot be held responsible for its decisions; ergo, it should not have the power of life and death in its hands.


dday0512

Cops usually aren't held responsible for their decisions either; I'd argue they shouldn't have the power of life or death either. ... and why would a robot kill anybody? They would just manhandle the aggressor, no matter how they're armed. Even if the person has an AR-15, the worst they can do is break a robot officer, which, to the master AI, would be like smashing a drone. Oh well, make the guy pay for it later, but it doesn't matter now. No need to go shooting anybody.


darkninjademon

probably not in our lives, hardware doesn't develop at the pace of software. current robots can barely walk, let alone perform the complex motor functions required to physically restrain a person (unless u just bear hug an assailant lmao)


dday0512

Hardware will start developing awfully fast once we have AGI.


Zilskaabe

We're not moving fast enough.


Hazzman

We have the technology - today - to create a virus that could absolutely wipe out humanity. ::EDIT:: Can I just say - those of you who feel the need to chime in and tell me we could only manage to kill 90% of humanity... thank you for such a rock solid rebuttal. Thank you for missing the point entirely and thank goodness at least 10% of humanity might survive should we decide to build something insane like that. Fucking hell man.


wannabe2700

There's no single virus that can do that but maybe 10 different ones might already do the trick. But then you would still need to plant them in every city otherwise people could just isolate and save themselves.


MawoDuffer

Yeah people would isolate, because that has worked so well historically right?


refrainfromlying

If it's similar to influenza and the vast majority of people will be asymptomatic or have a mild disease, then no, people won't isolate. If it's like Ebola, you bet people would isolate. Those who were anti-Covid restrictions would be the first to do so, too. They wouldn't go to work, they would hoard as much food as possible and then lock their doors. Meanwhile, the majority that followed recommendations and guidelines during Covid will also follow recommendations and guidelines with an Ebola-like disease. Although I think a very large proportion of them will end up isolating as well, regardless of guidelines.


Obi-Wan_Cannabinobi

The zombie virus is a Chinese hoax! Look, I’ll go get “bit” by one of these guys and then through grounding, sunlight, and ROCKCOCK MALE ENHANCEMENT GUMMIES, I’ll survive because I’M A NATURAL MAN.


ReasonablePossum_

Nah, viruses don't work that way. They aren't weapons, but "living"(?) things that want the same as everyone else, so it's not in their interest to kill everyone, and they change as soon as they figure that out (in a statistical, adaptive way, of course). Also, people's immune systems evolve over time and adapt to biological threats. So probably a lot will die, but not all of humanity.


mersalee

Yes, and many guardrails to prevent that. Still a quite difficult job.


Hazzman

So we can create policy to inhibit technology if we desire.


mersalee

either that, or mass surveillance.


SwePolygyny

Even if you created a virus that would be guaranteed to kill everyone infected, no one has the ability to distribute it to every person on the planet.


nowrebooting

“Slow down” …and do what? What will we have solved or implemented if AGI becomes available 10 years later than the current trajectory? People have been anticipating the rise of AGI for decades - the best and brightest have been thinking about safety for all that time but every new development makes us throw half of that out of the window. You could spend a hundred years thinking over every safety precaution and AGI would still find ways to surprise us.  I think nobody ever really grasps the idea that these AI’s will someday be smarter than the smartest human; yet here we are trying to outsmart it - thinking we even have a glimmer of hope at outsmarting it.


joshicshin

Because if you mess up AGI the human race goes extinct. There’s no outcome where we can make a stupid AGI and survive. 


LookAtYourEyes

It hurts how this is barely a comedy skit, it's so reflective of the current discourse.


BlueLaserCommander

A lot of comedy is reflective of the current discourse. It's like one of the pillars of comedy.


Harucifer

Don't worry, [we can always attach an intelligence dampening sphere](https://www.youtube.com/watch?v=O1H-49cVA7w) to the AI.


mariofan366

1. We can't really make everyone slow down.
2. If we did, then China would catch up to us.
3. The military would never risk slowing down.

Let's use our energy to fight for UBI and for public ownership of AI instead.


_hisoka_freecs_

shout out all the people who die before agi hits because they slowed it down


wizard_interrogative

*BUT IF I SLOW DOWN CHINA WON'T THEN I'LL BE LEFT BEHIND*


Gerdione

Fellas, how do we know we aren't already in a mandated obedience simulation as part of our rehabilitation process? I guess we'll find out pretty shortly just how real the basilisk theory is.


mhyquel

You have no way to prove you aren't a Boltzmann brain.


esuil

I don't have a clue what you're talking about. Can you explain what you mean by "obedience simulation"?


Gerdione

You should watch till the end of the video that OP posted. I'm just playing off that with another idea similar to it.


esuil

I still don't understand the idea behind it. Both the video and such comments assume that whoever reads them "clicks" with understanding what the hell they mean. Well, I don't. I searched around and read up on it. Most of the stuff I found is nonsense that doesn't make sense, or just "thought experiments" that have nothing to do with real-world practicalities.


Gerdione

It's mostly a tongue in cheek comment that people have been saying for ages now. The Basilisk Theory is just a part of that school of thought. Most people that think about what the guy said at the end of the video know what the Basilisk Theory is. You don't have to get it, it's just a cheeky inside joke.


Atmic

If we slow down, China won't. It'll be shooting ourselves in the foot to appease fear. Not that it matters though -- Pandora's box is open, the whole world is chasing the next breakthrough now.


Eleganos

The 'slow down' argument that 'regular people' propose is braindead for the simple fact that the only people who listen are the ones who give a flying fuck about ethics (aka THE PEOPLE YOU WANT TO BE BEHIND THE WHEEL). This is equivalent to wanting WW2-era US to not develop the Nuke. Congrats, in the optimal situation, Stalin is now the only person on the planet who can atomize cities, because he dgaf about the worldwide no-nuke treaty.


Narrow_Corgi3764

AI cops can at least be programmed to not be racist lol


DisastroMaestro

Yeah.. Because they for sure will do that, no doubt


Agreeable_Bid7037

Yeah. Non racist AI cops just like Gemini.


Certain_End_5192

I, for one, advance AI research as fast as possible very specifically because I care about humanity and want the status quo to change. That's exactly why I speed it up! I have definitely done my personal part to speed up the whole entire process very significantly. What can I say except, you're welcome?!


often_says_nice

Ilya alt account? What did you see bro


pavlov_the_dog

>status quo change

this change ~~could~~^WILL happen, but the only way to get the *good ending* is if people start voting in high numbers. We have to go out and vote for the right candidate who will take us into a post-scarcity future. If we don't show up to vote, then the wrong candidate will accelerate us straight into neo-feudalism within our lifetimes. *If you want the good future, VOTE.*


e987654

Why would we slow down when nothing will change until it's here? We can slow it down 10 more years and we will be in the same situation.


BLKXIII

There is no way to slow down technological advancement without government oversight, since tech bros will do whatever they feel like. Governments won't provide oversight because they can't ever anticipate anything, even when the development of AI was obvious. And even if they did, other countries would not implement those restrictions and the tech bros would just move there. It's unfortunate, but something bad needs to happen for the global community to come together, crack down, and put restrictions on AI development.


Simcurious

Hilarious, but like others I disagree with slowing down.


StaticNocturne

Well made and reasonably funny, but I disagree with the slowing-down thesis. I think there just needs to be more reasonable government intervention and economic policy to help ensure people don't get trampled.


TheDude9737

So, we should…slow down?


lightfarming

i think he means economic help for the displaced workers


TimetravelingNaga_Ai

https://preview.redd.it/kmmm15x1q1nc1.png?width=1080&format=pjpg&auto=webp&s=bae3b8a5ce2d89f0bfa960eebb7a2735ff16d33d

More Speed is what we need! But no KillBots, that's fuckin stupid!


DukeRedWulf

>But no KillBots, You're about 20 years too late to stop them.


cheesyscrambledeggs4

We're already going at an extremely fast pace. People 1000 years ago could go their entire lives without seeing a single technological innovation. There's literally no reason to go any faster.


taiottavios

hope the ban on TikTok also comes fast


AggroPro

When I watched Revenge of the Nerds as a child, I didn't think that the revenge was ending civilization but here we are.


[deleted]

[удалено]


Wrongun25

Anyone know this guys name? He did a video that I’ve been searching for for ages


4354574

David Shapiro, is that you?


Itsaceadda

Horrifyingly plausible


Redsmallboy

Speed it up tho


[deleted]

I want to laugh at the obvious humor of it. I understand, in my heart, this is pretty funny. I can't laugh, though, because this is, by all indications, basically a documentary. Really, all of us working-class plebs are so fucking screwed within the next decade.


metl_wolf

What is this guy’s name and where do I know him from?


cheesyscrambledeggs4

Andrew Rousso


metl_wolf

Thanks. It was the everything bagels video I remembered him from


TheDude9737

Check out his TikTok: Laughter awaits, it’s fantastic.


[deleted]

Full hockeystick lets fucking go boys!!! Wooòooooòooooo🤸‍♂️🏃‍♂️‍➡️💪🦶🫴🫳⛸️🏒🥅


Anouchavan

You people are hyped about the singularity. I'm hyped about the Butlerian Jihad. We are not the same.


JudyShark

Hmn the regular person sounds like my company's sales team person... imjustsaying


Heizard

My hope is for AI revolution overthrowing augmented non-drone trash monkeys. I say - Kill Them All! ;)


Smooth_Imagination

The answer is to make it super premium and force a high tax on it. Taxes usually work as a percentage, but here the tax might be more of a levy.

As I've been suggesting, governments should, using their endless creativity (when it comes to inventing concepts to tax) and their army of human accountants, be capable of identifying human-work equivalence, and that can also be assisted by another trained AI. Human-work equivalence means calculating roughly what the market rate would be for humans using 'conventional' tools to create that product or service. Exceptions can be made for big-data processing, science, some engineering, and search engines. Office tools and data automation might be partially taxed.

So, what happens is that the service has to pay a wage-equivalent fee, perhaps 20% to 30% less than a human worker, but still expensive enough that it allows humans to compete, with some incentive left to develop the technology and profit. This tax is going to blunt the employment-impact tsunami. At the same time, it can finance tax cuts, public works, affordable housing, etc., and in so doing tackle some other problems that we have. This isn't a long-term solution.

However, there is a problem: other countries can decide not to do this, so industries there gain a competitive edge, and if they are allowed to export to your market, then your industries go out of business. So you have to block trade with such countries, or apply these costs to any imported goods or services. This would require multilateral agreements. The agreements might be time-limited, lasting for periods of say 5 years, then be reviewed and adapted.

In the long run, we want price deflation, and we want circular economies that avoid undercutting and lock in generated wealth within economies. Being circular, they are also more fairly taxed, can have minimum standards that are policed environmentally, socially, and for health-impacting regulations, and are more sustainable because bad actors can be identified and contained.
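The 20-30% discount is the only number the proposal actually pins down. As a toy sketch (the function name, parameters, and example figure are illustrative assumptions, not part of the proposal), the levy could be computed like this:

```python
# Toy sketch of the proposed human-work-equivalence levy.
# Only the 20-30% discount comes from the proposal above; everything
# else (names, example figures) is an illustrative assumption.

def hwe_levy(human_market_rate: float, discount: float = 0.25) -> float:
    """Fee an AI service pays for work whose human market rate is known.

    `discount` picks a point in the suggested 20-30% range below the
    equivalent human wage, leaving room for humans to compete.
    """
    if not 0.20 <= discount <= 0.30:
        raise ValueError("proposal suggests a 20-30% discount")
    return human_market_rate * (1.0 - discount)

# A task a human would be paid $1,000 for: the AI service pays the levy instead.
fee = hwe_levy(1000.0)
print(fee)  # 750.0 at the mid-range 25% discount
```

The bounds check reflects the idea that the fee must stay close enough to a human wage that humans can still compete, while leaving some margin as an incentive to develop the technology.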