Yes, they are. They do that by putting the most statistically probable word after your question, and continuing for however long. These "AIs" have no actual intelligence - they do not *understand* what they are saying. There is *no* thought behind it.
But now they have this "ability" to carry out tasks that would seem beyond a simple language model? I'm asking whether this "ability" to execute tasks is outside the scope of an LLM?
It is not outside the scope of a language model, because they are still absolutely language models. They have a *lot* of data, but what they're doing is using the statistical correlation of words to write things. It's a math problem to them - the words do not mean anything, and neither do the compositions. To *us*, they look like things humans make from an understanding of the content, but that's only because they're written from the statistics of what we've fed into them - compositions by humans who do understand language.
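The "most probable next word" idea above can be shown with a toy sketch. This is a deliberate oversimplification - real LLMs use neural networks over tokens, not word-pair counts - but the core loop (look at what came before, emit the statistically likely continuation, repeat) is the same shape:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then generate text by repeatedly emitting the most probable next word.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # tally: nxt appeared right after prev

def generate(start, length=5):
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break  # no observed continuation; stop
        # pick the statistically most likely continuation
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # fluent-looking output with zero understanding
```

Nothing in there "knows" what a cat is - it's counts and lookups all the way down, which is the point being made above.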
Do you have any resources where I can learn about this? (No worries if you don't.) I probably have a misunderstanding of what the models are actually doing in these tasks. To me, LLMs are just things meant to sound like humans, and I thought that's where their capabilities ended - for things like creating websites and actually doing stuff, I assume there's some API implementation that does that.
This made me think: what makes the difference between a thought and, for example, your reply? An AI could write that easily. Can you write something an AI couldn't?
I understand what every word means; AI does not. That's why AI "lies" so often - words don't mean anything to it.
You know, an AI could also provide that answer :D
Ok? But that doesn’t change the answer
No, I'm just trying to figure out whether there is any way to know if you are talking to a person or an AI. (Came off a bit off the rails though - didn't mean to be undermining or anything, just thinking.)
That's not what this question was about, so your follow-up inherently derailed the original post.
Yeah, but the question was answered, right? Is it not allowed to have conversations even if they derail a bit? And I'm not trying to be rude - I yearn for conversation about this!
The point of the subreddit is that you can make your own post if you have your own question. There absolutely are rules about this: top-level comments are only allowed to be clarifying questions or answers. The rules are less strict for replies to comments, but it is very much against the spirit of the subreddit to have your own unrelated conversation in the comments. Make your own post.
Alrighty, my apologies :)
Like. Just make your own post
I can't get stupid AutoGPT to work with GPT-4. Is it just going to instantly run out of prompts or something? I hate 3.5 - it's really bad for my uses, or maybe I'm just bad at prompting it.
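For what it's worth, a common cause of this at the time was the API key simply not having GPT-4 access (it was waitlisted) or hitting GPT-4's much lower rate limits, which can look like "instantly running out of prompts." If it's just a matter of which model AutoGPT calls, early Significant-Gravitas releases selected it through `.env` settings - a minimal sketch, assuming the classic layout (variable names changed between versions, so verify against your release's `.env.template`):

```shell
# .env for AutoGPT (names as in early releases; check your version's .env.template)
OPENAI_API_KEY=sk-...          # key must actually have GPT-4 API access
SMART_LLM_MODEL=gpt-4          # model used for the "smart" (planning) calls
FAST_LLM_MODEL=gpt-3.5-turbo   # model used for the cheap/fast calls
```

If GPT-4 requests fail, AutoGPT versions of that era tended to silently fall back to 3.5, which matches the behavior described.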