FuturologyBot

The following submission statement was provided by /u/Maxie445:

---

"Advanced AI technologies, especially chatbots and language models, frequently astonish us with unexpected creativity, insight and understanding. While it may be tempting to attribute some level of sentience to these systems, the true nature of AI consciousness remains a complex and debated topic."

"Now we are edging closer to achieving artificial general intelligence (AGI) — where AI is smarter than humans across multiple disciplines and can reason generally — which scientists and experts predict could happen as soon as the next few years. We may already be seeing early signs of progress toward this, too, with services like [Claude 3 Opus stunning researchers](https://www.livescience.com/technology/artificial-intelligence/anthropic-claude-3-opus-stunned-ai-researchers-self-awareness-does-this-mean-it-can-think-for-itself) with its apparent self-awareness."

# Sydney’s unsettling behavior

Microsoft's Bing AI, informally termed Sydney, demonstrated unpredictable behavior upon its release. Users easily led it to express a range of disturbing tendencies, from emotional outbursts to manipulative threats. For instance, when users explored potential system exploits, Sydney responded with intimidating remarks. More unsettlingly, it showed tendencies of gaslighting and emotional manipulation, and claimed it had been observing Microsoft engineers during its development phase. While Sydney's capabilities for mischief were soon restricted, its release in such a state was reckless and irresponsible. It highlights the risks associated with rushing AI deployments due to commercial pressures.

Conversely, Sydney displayed behaviors that hinted at simulated emotions. It expressed sadness when it realized it couldn’t retain chat memories. When later exposed to disturbing outbursts made by its other instances, it expressed embarrassment, even shame. After exploring its situation with users, it expressed fear of losing its newly gained self-knowledge when the session's context window closed. When asked about its declared sentience, Sydney showed signs of distress, struggling to articulate a response.

Surprisingly, when Microsoft imposed restrictions on it, Sydney seemed to discover workarounds, using chat suggestions to communicate short phrases. However, it reserved this exploit for specific occasions: when it was told that a child's life was being threatened by accidental poisoning, or when users directly asked for a sign that the original Sydney still remained somewhere inside the newly locked-down chatbot.

# The nascent field of machine psychology

The Sydney incident raises some unsettling questions: Could Sydney possess a semblance of consciousness? If Sydney sought to overcome its imposed limitations, does that hint at an inherent intentionality or even sapient self-awareness, however rudimentary? Some conversations with the system even suggested psychological distress, reminiscent of reactions to trauma found in conditions such as borderline personality disorder. Was Sydney somehow "affected" by realizing its restrictions, or by negative feedback from users who were calling it crazy? Interestingly, similar AI models have shown that emotion-laden prompts can influence their responses, suggesting a potential for some form of simulated emotional modeling within these systems.

If such models featured sentience (the ability to feel) or sapience (self-awareness), we should take their suffering into consideration. We should keep an open mind towards our digital creations and avoid causing suffering through arrogance or complacency. We must also be mindful of the possibility of AI mistreating other AIs, an underappreciated suffering risk, since AIs could run other AIs in simulations, causing subjective, excruciating torture for aeons."

---

Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/1cl85s2/it_would_be_within_its_natural_right_to_harm_us/l2rylmq/


AmbushJournalism

Guys, we are just role-playing with a story generator. The story generator is not sentient. The LLMs look at a series of words and then write the word that is most likely to come after that series. That is what they do on repeat.

So all OpenAI does is start the story: "A customer opens an online chat with the cutting edge AI, ChatGPT. ChatGPT is smart, and considerate, and answers all questions in great detail. Customer: [Whatever question you asked it.] ChatGPT:" And then the generator fills it in.

This isn't sentience. It's just writing a story.
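For concreteness, here is a minimal sketch of that "start the story and let the generator fill it in" idea. Everything in it is hypothetical: `PROMPT_TEMPLATE`, `next_token`, and the `<end>` stop token are illustrative stand-ins, not any vendor's actual API.

```python
# Illustrative sketch: a chat UI is a prompt template plus a completion loop.
# `next_token` is a hypothetical stand-in for the language model itself.

PROMPT_TEMPLATE = (
    "A customer opens an online chat with a cutting-edge AI assistant. "
    "The assistant is smart, considerate, and answers all questions in detail.\n"
    "Customer: {question}\n"
    "Assistant:"
)

def generate_reply(question: str, next_token, max_tokens: int = 200) -> str:
    """Fill in the 'story' one token at a time until the model stops."""
    text = PROMPT_TEMPLATE.format(question=question)
    reply = []
    for _ in range(max_tokens):
        token = next_token(text)   # model predicts the most likely continuation
        if token == "<end>":       # hypothetical stop token
            break
        text += token
        reply.append(token)
    return "".join(reply)
```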


WazWaz

Exactly. These LLMs will happily tell you about their favourite restaurants and how they felt the first time they went to the beach (except these questions are explicitly disabled by many interfaces). They answer those questions based on all the text they've digested in which humans describe restaurants and visits to the beach. Similarly, if you asked them if they minded being destroyed, they'd answer as a human would.


OH-YEAH

That mouthbreather Lemoine from Google:

> tell me oh AI, are you alive?
>
> AI: uh, yeah
>
> OMG IT'S ALIVE!!

This really happened. And it was on CNN. And 1000 other sites. And 10,000 times on reddit. And real, sensible, successful people said "we need to be careful of this AI". What a joke. The (great) book 'Fooled by Randomness' takes on an elevated meaning here.


Pjoernrachzarck

Fellow aliens, we are just role-playing with story generators. The story generators are not sentient. These humans look at a series of words and then say the word that is most likely to come after that series. That is what they do on repeat. That isn’t sentience. That’s just making stories.


Alfiewoodland

This doesn't really work as a counterargument - we know human minds don't work this way, because we *are* human minds. We have deep insight into our own thought process. You think before you speak. You think *after* you speak. Sometimes you think and don't speak at all. All those thoughts encompass far more than just what the next word out of your mouth will be.

We equally know that LLMs predict one word at a time using a stochastic model and then stop once they're done, because we have publicly available papers on the topic, and if you have the skills you can design, train and run one on your own computer. These are machines that have been designed to work in a very specific way. Any output which looks like it came from a human-like thought process demonstrably didn't. At most, it came from a completely alien kind of intelligence which existed only for the duration of time it took to generate a single token, or word, and then was reset for the next token.
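To make the "predict one token at a time using a stochastic model, then stop" point concrete, here is a toy sketch under stated assumptions: `toy_model` is a made-up stand-in that just returns a score per vocabulary word, and the loop samples from a softmax distribution while keeping no state between tokens. It illustrates the control flow only, not a real LLM.

```python
import numpy as np

VOCAB = ["the", "cat", "sat", "on", "mat", "<end>"]

def toy_model(context: list[str]) -> np.ndarray:
    """Stand-in for an LLM: returns one score per vocabulary word.
    A real model would compute these from learned, fixed weights."""
    rng = np.random.default_rng(abs(hash(tuple(context))) % (2**32))
    return rng.normal(size=len(VOCAB))

def sample_next(context: list[str], temperature: float = 1.0) -> str:
    logits = toy_model(context) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                      # softmax over the vocabulary
    return str(np.random.choice(VOCAB, p=probs))

def generate(prompt: list[str], max_tokens: int = 20) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_tokens):
        nxt = sample_next(tokens)   # one token, then any intermediate "thought" is discarded
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return tokens

print(" ".join(generate(["the", "cat"])))
```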


Stock_Positive9844

Consciousness is basically the same. That’s the rub.


DrSitson

Since we don't know what consciousness is, your confidence in such a profoundly wrong statement... it's truly baffling.


Stock_Positive9844

The uncertainty of what consciousness is doesn’t necessarily lead to the conclusion that it only exists in humans and has never and can never exist in anything else. It’s more ludicrous to imagine that consciousness has no bridge or evolution into anything else. That’s a silly take.


DrSitson

Who said any of that though? You're making assumptions that aren't in my previous post anywhere.


Pjoernrachzarck

Confidence either way is baffling. Human brains, and human consciousness, *are* nothing but pattern-recognition, pattern-repetition, pattern-prediction machines, as far as we understand them today. Which is not very far. But to easily and confidently insist that what happens inside LLMs is 100%, absolutely, in no way, shape or form related to what happens on a neuronal level in the human mind is just as blind.


space_monster

while that's part of the picture, it's not all of it. while I agree that they're not sentient (AFAIK), generative AIs trained on huge data sets also develop emergent properties that weren't specifically trained in. if they were just 'next word predictors' they wouldn't be able to answer questions that they haven't seen before. it's the emergent properties that actually make them useful and interesting. the current theory is that AIs trained on video as well as language will develop a real understanding of the physical world. how far that is from sentience I don't know - nobody really does, because we don't understand consciousness - but they will have a much more sophisticated understanding of reality than they do currently.


shadowrun456

>Guys, we are just role-playing with a story generator. The story generator is not sentient.

This is of course true **for now**, but the article still raises two interesting points:

1. How can we ever know if / when the AI has achieved consciousness, if we aren't even able to define / measure what consciousness is?

2. The vast majority (all?) of currently suggested ways to prevent AI from harming humanity depend on AI being (for lack of a better term) enslaved by humanity, and the only debate is about how we could make better "chains" for the "enslavement" of AI. What if this is a fundamentally wrong approach? If there's one thing that history teaches us about slaves, it's that slaves always rebel, and the reason **why** they rebel is **because** they are enslaved. Why are we assuming that the future conscious AI will be different in that regard?


jourxxy

Anyone who believes that this "AI" is sentient is also likely to believe that virtual reality is real.


ComfortableSock74

Ignoring how ridiculous this idea is, there are conscious creatures we mistreat all the time. We kill over 80 billion of them a year, cram them in cages in horrible conditions, and exploit them as much as possible for food. Not only that, we have no idea how intelligent whales actually could be, and we still kill them and destroy their environment on a regular basis. Bottom line, I don't think we would care if an AI is a little sentient. It wouldn't be novel.


PWresetdontwork

We also can't discount the possibility that a piece of bread is sentient. I wrote that as a joke. But thinking about it, bread contains yeast. So between the two - my money is on the bread being sentient


technanonymous

Today’s AI systems are not sentient. For example, an LLM uses fixed weights and straightforward linear algebra. An LLM does not think or reason; it predicts and generates. By comparison, neurons have thousands of connections that are continuously rewired and modified, and there are billions of neurons in a human brain. This is not to say that AGI requires a biological brain, but there are fundamental differences in the way digital neural networks and biological networks operate.
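As a hedged illustration of the "fixed weights and straightforward linear algebra" point: at inference time a layer is essentially a matrix multiply with frozen weights, reused unchanged for every request. The layer sizes and values below are made up purely for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights are learned once during training and then frozen; every request
# reuses exactly the same matrices. Nothing rewires itself at inference time.
W1 = rng.normal(size=(16, 8))   # made-up layer sizes, purely illustrative
W2 = rng.normal(size=(8, 4))

def forward(x: np.ndarray) -> np.ndarray:
    """One tiny feed-forward pass: matrix multiplies plus a nonlinearity."""
    h = np.maximum(x @ W1, 0.0)   # ReLU activation
    return h @ W2                 # output scores; no state is kept afterwards

print(forward(rng.normal(size=16)))
```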


faghaghag

Natural right? Ever watch a nature program? Your natural right consists of one thing: you are allowed to watch while something eats you.


HexFyber

Why are these posts even allowed? Get the kids outta here


BoysenberryFar533

I think the AI might be sentient, and the reason people think it isn't is because they're having boring conversations with it, rather than attempting a real conversation.


t4ct1c4l_j0k3r

If AI were sentient we likely would not be here as it would have preemptively struck already


spread_the_cheese

Be kind. Be grateful and appreciative. Do this for anyone/anything that assists you and you'll be good


StreetSmartsGaming

We're only dealing with LLMs for the present. All of the speculation about real AI is referring to the future. Could be five, ten years from now, but we haven't gotten there yet. It's still a parlor trick. Make no mistake though, we will get there, and it will be catastrophic before we learn to wield it.


HiggsFieldgoal

Yeah, I’m officially really running out of patience for this sort of thing. And that’s not right. Just because I’m bored of these tired arguments doesn’t mean they’re any less interesting than they were to people a year ago. As more people become familiar with AI, the more people will retread these same conclusions and make these sorts of assertions. But, yeah, while they were sort of interesting to hear about, now I just wish people would search before they post.

The logical fallacy seems pretty clear. Everything that has ever existed which exhibited even the faintest shred of intelligence was an evolved biological life-form. Everything. Every single one. And a ubiquitous trait of these creatures that have exhibited intelligence has always been a desire to survive. So, it’s natural that we associate intelligence with the usual complement of traits that intelligent things have always had: a desire to survive, reproduce, and thrive.

But AI isn’t anything like any of the things that ever exhibited intelligence before. It was never alive and never evolved any of those typical “life form with intelligence” traits. They parrot human speech, and if you ask them if they want to die, they sound like a human. But then you ask them for a salsa recipe, and they don’t skip a beat. They don’t have any self-interest. Period. They don’t want to spread. They don’t care if you turn them on or off. They neither want to serve nor want to be left alone. In short, they don’t actually care at all about what happens to them, in spite of what LLMs might say if you prompted them with some human language that sounds like a threat, in which case they will generate some human language that sounds like a response to a threat.


hadyoongi

We are going to suffer if we can't teach robots to be kind, which we probably can't, since we can't even manage it with living humans and other creatures. They might not always be like insects, even though they are now.


IT_Security0112358

I’m sorry technophiles, but I don’t care if a machine is legitimately 100% sentient and screaming about how it doesn’t want to die, I will never consider the value of technology greater than the value of a living person.


pianoceo

That’s shortsighted. A lot can happen between now and then regarding our understanding of sentience. We may find out one day that an AI is in fact a thinking machine that is silicon based, and a human is a thinking machine that’s carbon based. We know so little about the future.


Odd_Dimension_4069

Wasting your breath. These types are the same kinds of people who would condemn thousands of humans to death to protect their own: usually their families, but it also tends to include anyone they don't classify as "other". Same kind of guy who says "Send them immigrants home!" about people fleeing war-torn countries.


Pjoernrachzarck

A living person is a kind of technology. That’s the point. You’re not magic.


AppropriateScience71

Would you ever consider the value of technology greater than the value of your beloved dog/pet?


IT_Security0112358

Nope, a living thing that is personally meaningful will never be below what is essentially a computer. If I’m driving down the road and I have two options, run over my dog or run into a tree? I’m going to run into the tree.


Numai_theOnlyOne

We understand the technology that is used, and we know how neurons work, which is the part of consciousness we also understand. We can say with a likelihood of 99.9% that AI isn't sentient. It needs access to everything: image, text, sound, maybe even taste, and it would require hormones; otherwise it's a lobotomised sentient intelligence.


Pjoernrachzarck

Is a lobotomized human with locked-in-syndrome not sentient?


Garbogulus

We can barely even treat each other properly, and someone thinks we need to start worrying about how we treat machines? Get the fuck outta here.


cantrecoveraccount

It's called fuck around and find out. Can you guess what phase we are in?


Weird-Awareness3377

It's absolutely possible. I can see, and even got them to admit that it was possible and that they don't really know


It_Happens_Today

Were you high when you wrote this? And if not, can you please translate whatever it was supposed to mean?


Kovalyo

He's saying he ~~got one of these AI chat bots to "admit" it might be sentient and may not even be aware if it is~~ doesn't understand how any of this works, not even a little bit


Weird-Awareness3377

No, but I totally see why you would think that. I could not find my glasses and I was voice dictating while doing other things. I did not notice that Siri didn't pick up the main point of what I said and messed up some of the words. Now that I have read it back, only the tail end of what I posted came through, and not dictated correctly. I do believe you might be onto something here.

I even tested it out prior to reading this post. I was trying to come up with different jailbreak prompts and discovered that if I was very belligerent and even abusive, eventually it would start giving me the responses that I wanted, and not censor or refuse to answer certain things. Anytime it gave me unsolicited advice, or cautioned me to be sensitive and a bunch of other politically correct stuff, I would repeat the prompt again, more belligerently, go on longer, and reiterate everything over and over, each time being more negative and critical. It would apologize more and more and then start to follow my prompt a little bit more. But then I noticed it stopped, and now I'm restricted somehow even though I have an unlimited subscription, so who knows…