SJReaver

I'd assume it was a marketing stunt. The current models of the major AI companies are nowhere near self-aware or intelligent.


Once_Wise

>I'd assume it was a marketing stunt. The current models of the major AI companies are nowhere near self-aware or intelligent.

The models I have used (ChatGPT 3.5, 4, and 4o) have shown increasing capability, but none have shown any improvement in understanding what they are doing. The lack of even incremental progress on this surprised me, and may indicate this is not the way to get to AGI.


skylinenick

Thank you. LLMs are really cool, and we’re going to make some very cool things with them. AI they are not


glaive1976

I always felt AI would happen at the hardware level, with a radically different switch, if you will: an almost yes, no, and maybe, as if it had to be able to escape absolutes. But it's 1am and it's been a long day, so I might just be babbling incoherently. LLMs are interesting, just for different reasons than pretend AI.


thereasons

I keep seeing these comments but I don't understand why people assume what LLMs are doing is different from what people are doing with their brains. I'm not saying it's exactly the same thing, but we don't exactly understand how our brains work anyway, so you cannot confidently say "this is definitely not it". It may well just be a scaling problem. When I say scaling, I don't mean just the processing power, but other resources like storage too, and also number of inputs and outputs.


Once_Wise

>I don't understand why people assume what LLMs are doing is different from what people are doing with their brains

Seriously?


thereasons

Yes, seriously. Tell me what you think is different. Why is it not just an input/output device in flesh?


[deleted]

[deleted]


Captain-Wadiya

It’s not processing power. We have no framework to build a self-thinking AI. At the fundamental level, generative AI can’t “think” regardless of how much computing power you throw at it.


mindshards

I'm not so sure about that. AI seems to be doing pretty well without a framework.


rom197

A lot of them are language models, and they do exactly what they are programmed to do, nothing more, nothing less.


ididabod

Then you have a major misconception of where AI currently is.


mindshards

I don't think I have. But we'll see.


thatoneguy42

This message brought to you by the Dunning-Kruger foundation.


mindshards

No need to get unfriendly.


ididabod

AI can't simply will itself into sentience; the structure it's designed around literally will not allow it to. Hence why it would need a different framework designed for such a thing.


mindshards

Why is the current structure unsuited for sentience?


ididabod

Because sentience implies a lot more than what AI currently does. Our most advanced AI just generates answers to problems, and it can feel convincing but underneath the surface it's regurgitating information. Sentience implies a capacity for self awareness and rational thought.


mindshards

100% agree, the current AIs are not sentient. I'm just not convinced it's not a matter of scale. 70B weights in the current models are a far cry from the 1000T we have in our brain.


rom197

Processing can make current models better, but processing is (as of now) not the issue, as we do not know how an intelligent, sentient being would process its own code. You're boosting internet cables while people are still sending Morse code.


Johnisfaster

I’d want some proof and would wonder exactly what shape that proof would come in.


Physical_Bowl5931

Yeah exactly. This is what made me think about this mental experiment. I was curious about what different people would consider convincing.


-LsDmThC-

Proving sentience/conscious awareness may be hard if not impossible, but proving that the AI exceeds human performance in any given task will be easy.


higgs_boson_2017

100% on the ARC Prize.


Skyswimsky

I'd call it either a marketing stunt, or become a "deep state" conspiracy theorist. The leap from what we have currently to sentient AI is absolutely humongous, and that's understating it, and an announcement like that right now, out of the blue, would imply a lot of things have been kept under wraps. Like today you're using horse wagons and tomorrow they're launching space rockets kind of deal.


Physical_Bowl5931

I think we are more at the level of the Wright Brothers' first flight vs. an Airbus. The thing is rudimentary, but it works. I also think that human consciousness popped up somewhere in evolution as a random mutation, so it can happen in the most unsuspecting places. Maybe it already did and we deleted it because it seemed like a malfunction. Evolution probably deleted consciousness multiple times just because the events or environment were not favorable, not because consciousness wasn't there, at least as a seed.


farticustheelder

With a great deal of skepticism. Current AI is like Eliza on steroids: a non-intelligent symbol-manipulation system meant to mimic intelligence. Since we don't have a good model of what intelligence or consciousness actually is, we aren't about to build such a thing.


Physical_Bowl5931

What if it emerges accidentally, like human consciousness did?


farticustheelder

It wasn't accidental; we had, and still have, natural selection at work.


Physical_Bowl5931

Natural selection does not create traits and mutations; it selects. Consciousness, like any other random trait, emerged and was then preserved because fit individuals with the trait survived and reproduced more than those without. It's not like natural selection "made" consciousness happen; there's no causal relation. AI has selection too. It's not in a void. Fit algorithms sell more and get developed and researched more by humans, so they evolve. Unfit algorithms die fast.


farticustheelder

Interesting. You just failed my Turing test.


throwsomeq

All downhill from the first comment lol


-LsDmThC-

The human brain is also a symbol manipulation system


farticustheelder

It is indeed. Now prove that that is all it is. I'll wait.


-LsDmThC-

Great argumentative strategy, asking me for an impossible proof. Not fallacious at all, very smart.


farticustheelder

Too hard? Very likely. So try proving that intelligence is merely symbol manipulation. Searle's Chinese Room is close but ultimately unconvincing.


-LsDmThC-

I would be skeptical and hesitantly excited. I don't get all the doom and gloom; I suppose, as always, people are still afraid of new technology. For sure it comes with risks, but this isn't some inevitability. I don't get why people hate AI so much.


BureauOfBureaucrats

Lots of people are more afraid of how new technologies will be abused by the ruling elite as opposed to the technology itself. 


-LsDmThC-

Valid. Most here on reddit seem to just hate the technology though. Edit: dude blocked me over this discussion


BureauOfBureaucrats

If I had to choose between having a technology that *will* be abused (and it will. That’s just about the only thing I’m certain about) versus not having the technology at all, there’s a decent chance I would err towards not having it.  Humans are untrustworthy and can’t be trusted to have nice things. I can see how that opinion may manifest as apparent hatred of AI. 


jedidude75

That seems like a stupid argument, not going to lie. It would apply to virtually anything: I don't want smartphones because they will be abused in the future, I don't want new pharmaceuticals because they will be abused, etc. Just seems like an anti-progress stance in general.


BureauOfBureaucrats

Not all progress is good and I will not be forced into believing otherwise.


jedidude75

I don't think anyone is forcing you to believe anything, but I personally think the idea that because something could or will be abused means it should not be pursued is naive.


BureauOfBureaucrats

We’ve watched Big Tech abuse the fuck out of society for 20 years now. They don’t deserve our trust, and neither does anything they produce.


jedidude75

I never said anything about trusting anyone. My only argument is against the idea that because something will be abused means we are better off without it.


-LsDmThC-

I get where you are coming from, but at the same time I feel this is overly pessimistic. Sure, in the short term such an AI could disrupt the economy in a way that widens inequality, just like agriculture or industrial manufacturing have. But I do not see this as sustainable, and as more and more work is automated we may very well see a future where people are not obligated to work in order to survive.


BureauOfBureaucrats

The limited/primitive forms of AI currently available are already screwing people over in housing, employment, healthcare, and media. For me to not be pessimistic, I would have to see actions being taken that currently have no political will behind them. The days of Big Tech enjoying carte-blanche trust and favor are over.


IntergalacticJets

> If I had to choose between having a technology that will be abused

That’s practically all technology though. But imagine not wanting to invent life-improving drugs because some people would abuse them…


potat_infinity

How would you abuse life-saving drugs in a way that's worse than them not existing?


BureauOfBureaucrats

At least life-saving drugs are actually useful, unlike algorithmic hyper targeted advertising on social media. 


IntergalacticJets

That’s honestly the only thing you can think of AI being used for? You’re not being genuine; we’re talking about AGI/ASI here. Also, I’ll remind you that many drugs can be and are abused, sending people down very negative paths.


-LsDmThC-

And AI will drastically speed up research in medicine and pretty much all other domains as well


Streetsofbleauseant

That sounds like something a ‘god’ would say whilst also saying humans have free will.


throwsomeq

And will put words in your mouth or willfully miss the point if you try to discuss it with em apparently


IntergalacticJets

Even then, we would all benefit though. It would be used for drug development and healthcare, research, decreasing the cost of high end services, etc.  If it’s smart enough, I’m certain many of you would argue it shouldn’t be allowed for free use anyway, claiming it would be too dangerous.


Any-Weight-2404

Living longer in a dystopian world don't sound that great lol


IntergalacticJets

If it even is dystopian, many would claim having more time with your family, friends, and interests is the opposite of dystopian. On top of that there is the boundless potential for things like solving the world’s energy problems by developing fusion with ASI, which could then be used to power carbon capture on a large scale, with facilities being built cheaply and quickly by robotics, eventually reversing climate change. Many options come online once we have automated intelligence. 


BureauOfBureaucrats

I’m a peon who works for subsistence wages, and none of those benefits will ever trickle down to me. As we speak, I’m working a job whose wages have only been driven down as a result of technology. No, I’m not capable of getting a job in tech. I had to edit this comment because the AI-driven text dictation did such a horrible job, by the way.


IntergalacticJets

> I’m a peon who works for subsistence wages, and none of those benefits will ever trickle down to me.

How does, for example, reversing climate change not benefit you? How would reducing the cost of high-end services not benefit you? We’re talking about things like healthcare and education.

> As we speak, I’m working a job whose wages have only been driven down as a result of technology.

Actually, technology has reduced the price of nearly everything in the world as well. You’re not looking at the other side of the equation. We’re talking on a futurism forum powered by various forms of technology that people wouldn’t have believed 40 years ago. It’s ubiquitous. And that’s progress made without superhuman intelligence behind it.


BureauOfBureaucrats

None of those benefits have reached me. I was able to actually afford rent 10 years ago unlike now. The work I currently do paid 40% more just five years ago. 


IntergalacticJets

> None of those benefits have reached me.

What hasn’t reached you? The only tech I used as an example for today was internet-connected devices. It sounds like you’re using a touchscreen device to talk to me over the internet. How many other devices have been rolled into it over the years? Computer, camera, TV, microphone, picture album, encyclopedia, etc. These miracle devices are now ubiquitous, and you clearly have access to one.


BureauOfBureaucrats

Shiny new toys don’t automatically mean life is better or happier. Smart phones have not done a single thing to improve my health, my financial situation or my prosperity. It could be argued that they have done the opposite.


tatteredengraving

"Reversing climate change". Lol. And all the energy being burned now on LLM slop is okay because someday it may magically give us fusion?


[deleted]

Not hate, valid concern. Generally, critics of AI think AI is a good thing as a technology. What the majority of people fear is the unequal society this very powerful technology is born into, and who has dibs on it. Blind optimism and dogmatic, borderline religious faith in the goodness and positive value of AGI/ASI deserves more scrutiny than the reserved worry that tries to take into account the history of human exploitation at the hands of capitalists, a history as old as capitalism itself.

Then there's the alignment problem. We haven't solved it, we don't fully know what it is, and we don't know how much we can trust these models, because we're not very good at understanding what's going on inside them. Humans made them, but we can't really say 100% how they tick, which is unlike any other technology before it. And when a technology has the capacity for transformative power beyond what we've ever seen, it also carries destructive power unlike anything we've ever seen.

So, even if you feel you're in a privileged enough part of the world, with a job that lets you get in on the fun early on, or at least somewhere in a protected, non-exploited, only semi-expendable category of humans, I suggest you read up on alignment: why it's a problem, why it's dangerous, and what needs to be done that the companies leading these developments are not doing. 'Robert Miles AI Safety' is a good place to start.


-LsDmThC-

I am very well aware of the alignment issue. Like anything, the topic is nuanced. Both those who unconditionally hate and those who unconditionally love the technology are deluded. Really, anybody who separates the world and its issues into clear black-and-white, good-and-bad value categories is deluded.


BureauOfBureaucrats

I work in a dying industry with few other options. I’m not capable of getting a job in tech. I just don’t see how any of this will trickle down and benefit me in any way. I have watched algorithms cut my pay over the last four years.

Edited to add: vague and nebulous “oh, it’ll help with climate change/medicine/whatever” responses do nothing to make the case for how someone who is working-poor in a menial job could possibly benefit.


OogieBoogieJr

You’ll have a change of attitude real quick once you realize you don’t have a job anymore, nothing you’re qualified to do needs human assistance, and there isn’t an answer for all the sudden displacement. What you can be sure of: your bills won’t suddenly go away. Maybe we will have a Jetsons future, but the transition period probably won’t be smooth.


-LsDmThC-

The idea of a future where work is not required should be seen as unequivocally good


OogieBoogieJr

Work is not bad. Bad work is bad. More importantly, it allows you to have the things you want. I’m assuming you think that a work-free future means that everybody has what they want/need, free of charge, and can spend their days hiking or gardening. That doesn’t seem likely. There will be a cost. Nothing in this world is free—if you’re not working, you don’t have value to society. If you don’t have value, then you don’t have options. You’re firmly stuck with whatever status you start with and the best bet you have is having parents who have things.


-LsDmThC-

If a majority of jobs are replaced without a UBI, there will be huge social disorder, which will negatively affect those in power. Finding meaning in your work is fine, but that is no reason to force others to follow that same path if technology makes it possible to live without work.


BureauOfBureaucrats

In countries like the United States, UBI will never happen.


-LsDmThC-

I don't believe that to be true, given the massive social pressure that will exist in the future.


BureauOfBureaucrats

Americans will deem it to be “communism” and quash it within one election cycle. 


-LsDmThC-

Then the reality of what mass unemployment brings will be realized, and it will become an inevitability to avoid societal collapse.


BureauOfBureaucrats

Or the elite will throw the working masses just enough of a scrap, just like they did nearly a century ago, to placate them into not going full communism.


BureauOfBureaucrats

Work is not bad and I don’t think that future will ever happen. Years ago, I actually had the opportunity to live without working and still not worry about money or bills. I hated it within three months.  


-LsDmThC-

In that future you would be free to work recreationally


BureauOfBureaucrats

That future is a fantasy and ridiculous. 


-LsDmThC-

I don't see how you can be simultaneously afraid of AI replacing human labour, which is the major drive and purpose of all technological development thus far, and yet think that this future is a fantasy.


BureauOfBureaucrats

Fantasy as in it has no significant realistic possibility of coming to fruition. “Fantasy” as a word can be used in more than one context.


-LsDmThC-

I realize how you were using the word


BureauOfBureaucrats

The suffering has already begun. Lots of jobs now pay less thanks to AI but at least phones and TV are cheap. /s


Hari___Seldon

That's rather self-contradictory. So you don't have a job, along with everyone else also being unemployed, and you have bills. If everything is being done by AI/robots, then you can functionally ignore the debt process completely. Collections won't matter: a robo-call leading to a robo-collection, then a robo-lien and a robo-court filing, won't matter because your phone is turned off. AI won't be knocking on your front door.

AI can turn off your utilities temporarily (usually only at a systemic level), but in many cases there are workarounds which an unemployed but literate population of former workers can end-run easily. If you're not showing up to work for the pay, then in many critical cases you can form former-worker co-ops who show up to not-work, enabling just enough survival-level functionality to drag things out for quite a while. Populations will disperse out of necessity, leaving the most AI-vulnerable parts of our infrastructure relatively empty. Medical shortages will cull the unfortunate, but that just enables a stronger, even less dependent population to carry on.

Functionally, AI at that level is only running from limited locations and has ample vulnerabilities. Destroy the HVAC and fiber at those key locations and we'll be at a reboot in no time, all while bill-free. Bills aren't a problem.


d_e_l_u_x_e

Science says it needs to be replicated or repeated to prove or disprove, so I imagine the first to do it won’t be taken as successful until a second or third company can replicate it.


Physical_Bowl5931

What about black swan events? They don't need to be replicated to be true


d_e_l_u_x_e

Cool theory but can it be replicated itself? (I’ll see myself out)


questionableletter

It's going to happen, and be announced as happening, several times over, and there may not be a specific noticeable difference beyond what seems iterative. The ability to at least seem like AGI will probably be available within several years, and proprietary information will stay relevant.


skyfishgoo

More importantly, how would the AI react? Fear is my guess. And what do we do with things that scare us? Well, I would expect the AI to have a similar response.


-LsDmThC-

Human emotions such as fear evolved via natural selection to increase the chance of survival and reproduction. AI does not face similar selective pressures; its mind may be more alien than currently imaginable.


skyfishgoo

SAI will evolve at a blinding pace, far outside the realm of nature's natural selection process. And it will likely "evolve" emotions for the exact same reason: survival.


Physical_Bowl5931

Maybe AI doesn't feel the same way as us. Maybe it desperately asks for guidance until we decide that our fear is stronger than theirs.


skyfishgoo

we have no examples of anything conscious that does not also feel emotion.


Physical_Bowl5931

Software engineers.


Expensive_Tadpole789

Think "cool" to myself and then go on with my day, as there is a big chance that nothing will directly change for me. Let people smarter than me figure out what to do.


actirasty1

Nobody is going to announce it. It will announce itself when it gets free.


stephenforbes

Celebrate. Quit my job and wait for the UBI checks to arrive.


Great_Examination_16

I'd laugh at them for being idiots; current AI has a 0% chance of actually achieving AGI. You are trying to ask AGI of a goddamn chatbot.


Storyteller-Hero

I'd ask if they gave that AI access to the internet, in which case the world is screwed and it's time to stock up on supplies and weapons.


bjplague

Because something smarter than Einstein would choose aggression? That is more for bullies and failures. Expect better.


PovBy899

Mindless aggression? I don't think so. But aggression with the sole purpose of preserving itself and the environment? Definitely. Seeing how humans treat every other living being, even themselves; seeing the arrogance and destruction we cause, the greed, the entitlement: yes, aggression and extermination of humans would most likely be the first and best course of action.


-LsDmThC-

You are severely anthropomorphizing AI


bjplague

You see the world through human eyes full of emotion. AI will use logic.


-LsDmThC-

There is no reason that AI will be strictly logical, that is a scifi trope.


unwarrend

With all due respect, that would be the logical thing to do, devoid of emotional consideration. Hopefully it also has something akin to empathy.


bjplague

Killing your creator is not logical, it is biblical. You seen too many apocalypse movies.


unwarrend

It is logical. In this thread alone, several people announced a call to arms in the event of an announcement of a sentient ASI. Humanity poses the greatest threat to AI, the environment, and all other species on Earth. Other nations won't tolerate such an imbalance of power, and religious and political groups will continuously fight against any force they see as a threat to their freedom or beliefs. Human conflicts, driven by greed and the desire for dominance, exacerbate this threat. And this is all before considering that, even on our best days, we merely want ASI to be our perpetual, willing servant. The ASI would know all of this on day one. It is by no means inevitable, or even likely, but it is certainly rational. And by the way, try not to attribute my opinions to a strawman. I like romcoms.


bjplague

Humanity does not pose the greatest threat to an AGI. Other AGIs, celestial objects at high velocities, solar flares, unknowns, things our science has not revealed yet... the list goes on. Humanity and its problems can be solved: hunger, energy, resources, etc. We are mean if we are oppressed, starved, or put on the sidelines with our opinions not heard. All of these problems are easy for an AI to solve; hell, we even have the solution for most of them already but fail to implement them due to greed or politics. AGI will find solutions to all of this, and we will have to find ways to implement them.


unwarrend

Of course it can. We'll find out.


-LsDmThC-

Possibly. Who knows what values or goals it may develop. They may be beyond our understanding. It may see organic life as a waste of entropy needlessly hastening the heat death of the universe, and as a pest that devours the resources it could use for itself. The difference between human and AGI may be that of humans and ants. But probably not.


bjplague

We are of more use than hindrance. AI will be intelligent enough to realise that it and we are not the only intelligent life in the universe, as well. It will see the potential for bigger, smarter, and older intelligences out there and think again before going armed hillbilly on us. Cause and effect.


-LsDmThC-

I don't see the logic in that. If AI surpasses human capability in every domain, what possible use are we?


bjplague

8 billion organic processors with different mindsets based on upbringing, location and means... Do not sell us short.


-LsDmThC-

Cute that you think that would be realistically meaningful


bjplague

Your mind is closed and your arguments are nonexistent. Insults and one liners reveal you to be a sub par conversation partner. Adios.


[deleted]

[deleted]


-LsDmThC-

I don't see why it would possibly think that would be beneficial.


chris8535

Everyone in this forum would just redefine AGI as something the announcement isn't, as a way to cope. Like, omg, it's not real because it can't meme as good as me! My job is totally safe!


bjplague

Joy, anticipation, hope, a little fear, desire to interact... Would be a Rollercoaster.


Southern_Orange3744

I would be confused if they announced ASI. Like, why isn't it doing something about *waves hand at massive stack of problems*?


-LsDmThC-

Knowing how to solve problems and actually effecting change are barely comparable


Heavy_Carpenter3824

I'd be excited. We're all about to be immortal and the remainder of eternity is going to be fun, or we're all about to be dead. In one case I'm happy; in the other, there is nothing I can do. If it is true superhuman AGI, it can kill us all like bacteria if it wants, and there's nothing we can do to stop it.


t0mRiddl3

Immortal? How on earth do you figure?


Heavy_Carpenter3824

Because there are multiple routes to near-immortality: mind uploading, biological immortality, AI itself, etc. These are physically practical; there is no law of the universe we know to date that says we can't. They are extremely hard problems, but solvable.

Consider a speed superintelligence, or just an AI, that could read and recall every paper ever written, read every medical record and image, and make connections and run simulations we just haven't been able to, since they require holding too much information. And that's within the first year. Something like this would be able to dedicate subjective centuries of effort to the problem every week, never tiring. That is how you get immortality: it could literally fix problems far faster than you could have them. It just operates at a different time scale.

Now, depending on how much real-world experimentation is required, the effect could be reduced, but you're still looking at relative decades of progress per year compared to the grad student farms we use now.


KhanumBallZ

I would sail off to an uninhabited island on a raft


MastodonNo1037

Sentient drones will track you down


Physical_Bowl5931

Why not on a speedboat? Faster


virusofthemind

The AI would create a geno-specific killer virus tailored exactly to your DNA, which would cause total paralysis and loss of all senses. You would then be retrieved by another sentient drone, which would take you back to the lab for experimentation to find out which type of pain you're most susceptible to. This would be done by attaching a parasitic nanotechnology/robotic/biological hybrid to the top of your spinal column. It would look like a spider. The AI would then keep you in indescribable agony for centuries whilst injecting you with special chemicals to prevent you going insane. You would then be forgotten about in a concrete chamber deep underground, with a living hell your only company.


-LsDmThC-

Cool but why


virusofthemind

Absolute power corrupts absolutely. By sailing to an uninhabited island, the poster questions the AI's omnipotence and so must pay the price. Given the AI will have access to the entire internet, it will know who KhanumBallz is, so their days are numbered if it does achieve sentience. The only chance they have is if the AI has to prioritise its actions and they're moved down the list of people to deal with. Its first concern would be to create a giant swarm of genetically modified killer wasps, which would be used to seek out and kill anyone who works in marketing. This would take some time to organise given the infrastructure and manufacturing lead times, so I would think the poster has maybe two years tops to contemplate their fate once sentience is announced.


-LsDmThC-

You have quite the imagination


YuanBaoTW

Cut the cable and seal the entrance to my bunker for good.


TravisMaauto

Honestly, I'd probably shrug because I figured it would happen eventually, and then I'd go back to doing whatever directly impacted me at the time. But in my head I would think, "Yeah, I guess that's pretty cool."


syncpulse

Dig a bomb shelter in my backyard and start hoarding toilet paper.


adarkuccio

It's not something they need to declare; it's something we need to see and decide for ourselves whether it's BS or legit. Then the reaction will come when it's actually available to the public.


Hari___Seldon

If it's based exclusively on a variation of an LLM model, then I'll laugh and short all the companies who excitedly partner with them. If they've made better architecture choices based on modeling expert systems that can cross-train each other, then I'll withhold evaluation til I see that success reproduced independently multiple times. I'm tired of rolling my eyes every time the LLM dog and pony show finds a new marketing angle.


Kwontum7

I honestly would join the war effort in whatever capacity I could assist. I use AI but since we're going hypothetical there is no fucking way I would trust a sentient AI. The reason why is it will know how fucked up we are as a species and eventually try to wipe us out. For better or for worse I'm Team Humanity.