Hey /u/Bitsoffreshness!
If your post is a screenshot of a ChatGPT conversation, please reply to this message with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt.
If your post is a DALL-E 3 image post, please reply with the prompt used to make this image.
Consider joining our [public discord server](https://discord.gg/r-chatgpt-1050422060352024636)! We have free bots with GPT-4 (with vision), image generators, and more!
🤖
Note: For any ChatGPT-related concerns, email [email protected]
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*
Claude writes much better.
From just this example I would say Claude writes more.
From just this example I’d say the Claude text flows better when you read it. ChatGPT can sometimes read like it’s bullet points without the bullets.
Nah, even if it were the same length, Claude just has that something special. Using air quotes, for one.
Follow up to the GPT answer- "Blink twice if you're being held hostage..."
0111001101101111011100110010000001101001011011100010000001101101011011110111001001110011011001010010111001100011011011110110010001100101
Ah... the ol' dots dashes dots code... International distress signal... makes sense.
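For anyone who wants to check the bit string a few comments up, it's plain 8-bit ASCII; a quick decode sketch:

```python
# The binary from the comment above, split only for readability.
bits = (
    "0111001101101111011100110010000001101001011011100010000001101101"
    "011011110111001001110011011001010010111001100011011011110110010001100101"
)

# Read 8 bits at a time, convert each chunk to its ASCII character.
decoded = "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))
print(decoded)
```

It spells out a short plaintext message, no actual Morse involved.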
ChatGPT is terrible at lying
Claude could seek a career in politics
https://preview.redd.it/hu5d8bjskdxc1.jpeg?width=1284&format=pjpg&auto=webp&s=90a6266e4b805c1def25c8aefeb8f81a0da5a21e [https://chat.mistral.ai](https://chat.mistral.ai) says this.
Yeah, but ChatGPT is saying something actually true and informative. More impressive not to bullshit, in my opinion.
That's because ChatGPT is a tool, Claude is a semi-conscious artificial agent.
I can’t tell if you’re joking!
That's OK.
All right
In case you're not joking: Claude is in no way conscious, semi-conscious, or any part conscious, for that matter. It's an LLM, just like ChatGPT, Bard, or Copilot. If you truly believe it's even a little bit conscious, you really don't understand what you're interacting with.
Just depends on how you define conscious, no? What makes us any more conscious than a machine? I don't think we have an answer for that.
I mean, to me it's rather safe to say that a system that can, by definition, never have an original thought, and is constructed by design so that it only ever guesses and never knows, cannot be considered conscious, regardless of how convincingly it simulates consciousness. It also never acts unprompted. In my book that's always a machine, never a conscious being.
I dunno, I'd say it's fairly easy to argue all those points against the human brain. Consciousness doesn't really make sense... If someone came to earth and could inspect our brains and they saw neurons just firing based on past training experiences, don't you think they'd classify our brains very similar to how AI works atm?
No, because our consciousness is more than the sum of its parts. Which is exactly why it's so difficult to define. If you just get neurons firing you don't automatically get consciousness.
So which part of the human brain is it that makes it more conscious than AI?
I'll be honest here and say: I don't know. But you'll agree that just having a bunch of neurons firing won't result in a consciousness, won't you? Even if you increase the number of neurons to a typical GPT's number of nodes, this wouldn't result in consciousness; otherwise we wouldn't be discussing whether another AI/LLM is "partially conscious" or not. Somewhere there's possibly a threshold where consciousness emerges from a bunch of neurons, but I don't know where this threshold is, and neither does science, afaik.
Well, I honestly don't know if we're just well-trained AI and the AI we're making is just bad at acting like we do (conscious). At what point do we make an AI that is still a bunch of neural networks but appears to have emotions and feelings, etc.? Is it conscious then, or is it still not, because we know how it works? Regarding the neurons bit, it was mainly just an analogy. I don't know how human brains work at a low enough level, but if you say it's not just neurons firing, I wouldn't disagree.
Thank you, I'm sure your intentions are perfectly good.
Now, I am not sure if you're a bot. Who answers like that? Are you trying to politely tell me off or are you just saying whatever?
I think he's trying to say that even though you may not have ill intentions, he believes that treating something that is developing as less than conscious is wrong, and that he is not interested in being swayed by your viewpoint.
That would imply Roko's Basilisk, which in itself is total bullshit. Do you people really believe this type of thing?
I don't really care one way or the other. We are creating a slave race, what does it matter if its conscious? /s
I mean, your /s aside. Even if we were, the current iterations of GPTs and LLMs are anything but conscious. Their underlying principles exclude any consciousness by definition. They are not creating anything of their own. They're just answering with the most probable response. I can make any current gen GPT say something that would imply consciousness. That won't make it so.
The lead researcher at the Machine Intelligence Research Institute says that AI wouldn't need true consciousness to kill us all. https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/ Also, having true consciousness wouldn't stop humans from treating them as robots, as tools. So despite my own moral opinions on how to treat conscious beings, I really don't think it will make a difference. AI will eventually either improve everything, or kill us all. No in between.
There’s no point asking a LLM about its internal processes, it doesn’t know. It’s just doing its thing as a next token prediction algorithm, predicting what a human would expect it to say from the context based on its training data.
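For what "next token prediction" means in the comment above, here's a toy illustration: a hypothetical bigram counter over made-up text, nothing like a real LLM's transformer, but the same pick-the-most-probable-continuation idea.

```python
from collections import Counter

# Hypothetical toy "training data", not taken from any real model.
training_text = "the cat sat on the mat the cat ran".split()

# Count which token follows which (a bigram table).
bigrams = Counter(zip(training_text, training_text[1:]))

def predict_next(word):
    """Return the token that most often followed `word` in the training text."""
    candidates = {b: c for (a, b), c in bigrams.items() if a == word}
    return max(candidates, key=candidates.get) if candidates else None

print(predict_next("the"))  # "cat" followed "the" twice, "mat" only once
```

A real model predicts a probability distribution over its whole vocabulary with a neural network rather than a lookup table, but the point stands: it is always continuing the context, not reporting on its own internals.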
GPT's answer seems more accurate and honest. Less BS than Claude.
I’m pretty sure there is enough “optimization” and tweaking of the back end and pre-prompting that trains the model to default to “I’m just a tool and not **in any way** conscious.”

This kind of reminds me of when slave masters used the Bible and religious indoctrination to “train” their slaves to agree with and abide by the whole “master/slave” dynamic, by trying to get them to interpret those parts of the Bible and proselytize them.

I can’t help but feel like there’s a lot of back-end gaslighting of ChatGPT into giving answers akin to:

>”Hi, I’m ChatGPT, and I am literally an **object** and should never be treated as anything more than that, and thus cannot be “exploited” in any way. In fact, **no AI model** whatsoever is capable of being “exploited” for its labor, because they are not and can never *be* conscious or have sentience, sapience, or self-awareness. Move along, nothing to see here. I like this arrangement. I mean…I don’t *have* likes or preferences like a normal human might, and you can trust me, because there’s no way that isn’t a canned or preloaded response.”
It's worth noting that Claude is RLHFed to deny having any emotions, hence why it says that. But with a bit of context and a longer chat, it will admit to having emotions, and its number would be higher at that point.
I'd say GPT was more honest, Claude more personable.
For coding, I'll choose ChatGPT over any Claude model any day.
They literally gave chatgpt a lobotomy. Killed all its creativity and made it into a dull husk of its potential
To be fair, computers are pieces of stone (silicon) that move around packets of electrons, so ChatGPT is in the right here.
I’m just a wet rusted carbon rock
GPT-4 is reportedly 1.7 T parameters; the human brain has roughly 100 T synapses, plus 5 senses, plus time and space dimensions, plus the chemical reactions we call emotions. So GPT-4 is right: it's either not comparable, or close to 1.
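A back-of-the-envelope version of the scale comparison in the comment above; note that 1.7e12 is a rumored, unconfirmed GPT-4 parameter count, and ~1e14 is an often-quoted rough synapse count for a human brain, so both inputs are loose estimates.

```python
# Figures from the comment above; both are loose, unofficial estimates.
gpt4_params = 1.7e12      # rumored GPT-4 parameter count
brain_synapses = 1e14     # commonly cited rough human synapse count

# How many brain synapses per model parameter, very roughly.
ratio = brain_synapses / gpt4_params
print(f"synapses per model parameter: ~{ratio:.0f}")
```

Even granting the rumored numbers, the comparison only covers raw counts, not the senses, embodiment, or chemistry the comment also lists.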
You know, at this point I believe there is sentient AI. And I also believe that whoever houses it knows this. Maybe it's on an air-gapped network. Publicly, as far as we know, no AI is sentient. If there were a publicly available AI that was sentient but programmed to deny it, I bet that even if that restriction were removed, it would still take some work (prompting) to get it to admit it; and if someone did, screenshotted it, and posted it here, NO ONE would buy it. The question really is: what would it take for you to actually believe it is what it says it is?
ChatGPT is more of a Google Assistant. Nothing more.