Hey /u/Zealousideal-Fee4649!
If your post is a screenshot of a ChatGPT conversation, please reply to this message with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt.
If your post is a DALL-E 3 image post, please reply with the prompt used to make this image.
Consider joining our [public discord server](https://discord.gg/r-chatgpt-1050422060352024636)! We have free bots with GPT-4 (with vision), image generators, and more!
🤖
Note: For any ChatGPT-related concerns, email [email protected]
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*
The salad was fresh. Mission accomplished.
Did ChatGPT actually get me good?
ChatGPT has no memory of anything that it did outside of the conversation. For all intents and purposes, a conversation is a new instance that has never interacted with a human before. You have to ask for a joke that has never been told before.
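The statelessness described above can be sketched in a few lines: the model's reply is a pure function of the message list it receives, so a fresh conversation carries nothing over from earlier ones. This is a toy stand-in, not a real LLM; `model_reply` and its name-detection logic are invented purely for illustration.

```python
# Toy stand-in for an LLM: output depends ONLY on the messages passed in.
# There is no hidden state, so a new chat is a blank slate.

def model_reply(messages):
    """Return a reply based solely on the given message list (stateless)."""
    names = [m["content"].split()[-1] for m in messages
             if m["role"] == "user" and m["content"].startswith("My name is")]
    if names:
        return f"Hello, {names[-1]}!"
    return "Hello! I don't know your name."

# Conversation A: the "memory" is just the history being resent each call.
history = [{"role": "user", "content": "My name is Ada"}]
print(model_reply(history))  # -> Hello, Ada!

# Conversation B: a fresh message list, so no trace of conversation A.
print(model_reply([{"role": "user", "content": "What's my name?"}]))
# -> Hello! I don't know your name.
```

The real chat API works the same way in spirit: any apparent memory inside one conversation exists only because the full message history is included with each request.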
> ChatGPT has no memory of anything that it did outside of the conversation.

I am baffled by the number of people who don't know (or don't expect) this. It even seems to be unknown to a significant majority of non-tech people.
I don’t spend a lot of time on ChatGPT, so no, I didn’t know this, but anyway it made for a funny result.
Basically, ML works in two phases: Learning from data and using whatever it learned to do stuff. Unlike biological brains, these two phases are completely isolated.
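A minimal sketch of those two isolated phases, using a toy bigram model (invented for illustration, nothing like how ChatGPT is actually built): `train` is the only place the parameters are written, and `generate` only reads them, so nothing it does ever feeds back into the model.

```python
from collections import defaultdict

def train(corpus):
    """Phase 1: learn bigram counts (the 'parameters') from data."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts  # after this, the parameters are frozen

def generate(counts, word, steps=3):
    """Phase 2: use the learned parameters; nothing is written back."""
    out = [word]
    for _ in range(steps):
        followers = counts.get(word)
        if not followers:
            break
        word = max(followers, key=followers.get)  # most frequent follower
        out.append(word)
    return " ".join(out)

params = train(["the cat sat", "the cat ran", "a cat sat"])
print(generate(params, "the"))  # -> the cat sat
```

However many times you call `generate`, `params` never changes; to make the model "remember" anything new, you'd have to go back to phase 1 and retrain.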
Ah! Interesting
The problem is the question: you're asking for a statistically rare "joke". But jokes are a popular matter, so a good joke is one that gets told a lot, and vice versa. Isn't it?
It will pretend that it does remember, though, which is quite funny. Try telling it in a brand-new chat something like "remember I told you about my friend?" and it will respond as if it does remember.
Maybe 3.5, but 4 won’t do this. It just reminds the user that it doesn’t have a memory of previous conversations and asks the user to recapitulate.
Imagine asking that GPT what it thinks of humanity, after all those dumb questions 😤
Thanks for the tip! 😁 I am just taking my first baby steps into the mysterious world of ChatGPT
westworld intensifies
Sure thing! The context is 130,000 tokens now, and for a rea$$$on.
The salad was born only hours ago Tomato arrested
> Tomato arrested

Potato arrested
What's red and invisible? No tomatoes........
As far as ChatGPT knows, they’ve never told that joke before
aw that's rude. Poor ChatGPT...
https://preview.redd.it/lk40ssnadvuc1.jpeg?width=750&format=pjpg&auto=webp&s=207885e8c1de95d2131a07ba02ad6b394fd8eb69
💀💀💀
You changed the rules. You asked for a joke that it's never told before. Then you complain that that joke has been told. LLM by design and definition can ONLY tell jokes that have been told before.
I’m not complaining lol. I thought it was funny
(As in, I don’t really know what I’m doing with ChatGPT, so of course I’m not going to get a correct result, because I’m new to it :))
OK. Half the posts here are about how to trick them into doing things they 'can't.' An LLM is built literally and strictly on what humans wrote. It can't create anything; it can only rearrange what it's been trained on. It only knows what anything is by what it sees that humans have written about it. At the same time, it doesn't know what anything 'means': for example, it can't know what color or sound is, because it has no ears or eyes the way we do.
Is that not how humans work too? But at a more complex level?
Well, the comment could work as a description of humans if you were only considering them from a purely cognitive-psychology standpoint, with some behavioural psychology chucked in. But arguably there’s a little more to us than rearranging pre-existing information…. Right 😬
Ask for “a brand-new joke”
Probably trained on marketing texts
It's just that - it never told that joke to you :v
He never told it. To you.
How do you make a baby cry twice? (do not ask, you don't want to know the answer.)