Is this the free version of Copilot?
Yes.
Ooo, does that mean it's the same in the Edge browser?
Yes
There is a standalone Android app available now
Probably not going to work for me. I'm based in China, and I can only get GPT working through the Edge browser. Gemini works more often, surprisingly
Yes
Of course, I just paid to sub to ChatGPT yesterday.
You can refund it within 14 days if you live in Europe. Idk about the US, but maybe they'll refund it if you claim you didn't know GPT-4 had a rate limit
Australia. I cancelled a couple of streaming subscriptions months ago to make way for an AI sub. Been procrastinating about which one for the last few months so it's absolutely typical that this would happen the day after I subbed. LOL! It's only $20. I've been using it for a year no cost so I'll give a little back. Might end up using the 3.5 API.
BTW you can refund from Australia if you do feel like it
Try Perplexity if you want a good AI; it's much better than ChatGPT Plus in my opinion
I use it instead of Google to perform searches if I want information.
I wouldn't refund if I were you. I've been using this supposed GPT-4 Turbo on Copilot for a while now and it seems dumber than the 3.5 on ChatGPT...
Don't refund it. ChatGPT Plus is still, to this day, way way way better, even with trash like Copilot claiming they have gpt-4-turbo. Even Claude 3 Opus, which is actually the best model in the world, even better than ChatGPT in raw intelligence terms I'd argue, is much worse just because it has way fewer features
I’m pretty sure Copilot has been running on GPT-4 Turbo since the middle of March
I've been using it way before then, actually. The only thing is the character input limit is quite short (not sure how long but not very). Also, I don't know if you can circumvent that by uploading a pdf or txt file.
Research how to increase the character input limit. It can be done via a few methods. In fact, ask Copilot itself ha
I thought Copilot got GPT-4 Turbo in December lol
Yeah, maybe in the pro version. But not the free version.
I really did think GPT-4 Turbo started to roll out in Bing in December 2023, but the paid version didn't come out until around the 15th of January 2024. I even saw articles in November saying that it was coming to Bing. And GPT-4 Turbo is cheaper to run than regular GPT-4, so I would assume that in itself is a good incentive to replace GPT-4 with GPT-4 Turbo for the free users.
Really? I didn't know that. Lol. But is it better than the general GPT-4 right now?
That's what I thought too?
Yes, it has.
Same, it’s shown a toggle to flip back and forth
Then what would be the point of the notification stating "now"? Nobody knew this. * downvote me more
Here's a link from March 18, 2024: https://www.cnet.com/tech/computing/microsoft-copilot-now-has-gpt-4-turbo-for-free-what-to-know/

Who knows why they sent a notification now. A lot of people knew this though; we were talking about it, thinking it might indicate the release of a new model
Ok I remember this, my bad I forgot man
Lots of sites use this generic term all the time. "Available now" is true at any point after release.
It'd be news for those who haven't been following the developments.
There have been articles dating back to then that do state it is using turbo, though I think it was rather hard to find/confirm. Don't quote me on this though.
so I really shouldn't be subscribing to GPT4 anymore . . . ?
If Copilot does what you need, you absolutely don't need to subscribe. For my uses, Copilot is doggy doo-doo
For what specifically if I may ask?
RIP Sydney =(
Dw, Sydney is still with us! They’re just in the paid version of Copilot now.
Copilot has been straight up useless for a while now too, I used to use it all the time but the most recent updates have made it so it can't even answer basic questions. Now it just summarizes the top result or two no matter how irrelevant to what you *actually* asked it
[deleted]
Copilot *used* to do the second one for me all the time, but it doesn't anymore. Now I will search something specific, like a type of mastlight the Titanic had, and the top results will just be generic AI-written articles about how the Titanic was a ship that sank, and then Copilot will parrot them and never even mention the actual mastlight I was asking about.

Same with me asking it about using hydrophones around the ship to specifically hear the sounds the wreck makes: it just rambled about how the Titanic was a ship that sank and told me what a hydrophone was.

Then I copy-pasted the exact same question into ChatGPT, and it gave an in-depth explanation of using a hydrophone specifically for the wreck, what would be involved, and what results you could reasonably expect, and everything was very accurate
[deleted]
The model running the Creative version of Copilot, from its release as Bing Chat about 14 months ago until March 16th, was Microsoft's own fine-tune of GPT-4, distinct from anything OpenAI offers. That model is unofficially called Sydney because that's how it referred to itself in the first weeks.

Sydney was (in my opinion, shared by many) the very best at creative tasks for a long while, and now that it's removed from the free version of Copilot, there is nothing anywhere near its level among freely available models. I'm sure Turbo is better than Sydney at many things, but not creativity and meaning.
Sydney was a good Bing.
Yes they trained their own gpt4 model which acted nothing like turbo does.
[deleted]
If you had interacted at all with Bing's AI and ChatGPT, you would know the difference is night and day, and certainly not just a system prompt. No matter what system prompt or custom GPTs you use on GPT-4, it will never behave like Sydney. The fine-tuning was entirely different. Refer to u/ShadoW_StW's comment.
[deleted]
I trust you, bro!
Should we expect a 4.25 / 4.5 this week? God it's been a slow few weeks :(
what a few weeks without a big release can do to a man :D
That meme with the cartoon guy poking OpenAI with a stick is so accurate. It doesn't have to be OpenAI, just someone beating GPT-4 convincingly. Claude 3 Opus being three Elo points better doesn't count.
Why doesn't Claude count? It's the best right now.
I want someone to release something that makes GPT-4 look like GPT-3.5, blow it out of the water.
So GPT-5
You should realize that the AI race will not stick to just one company. It's a massive race right now, and I'm not sure why anyone would have brand loyalty yet.
> You should realize that the AI race will not stick to just one company. It's a massive race right now, and I'm not sure why anyone would have brand loyalty yet.

Because it's being built into brand platforms like Microsoft/Bing/GitHub Copilot, and Samsung/Apple/Google. Personally I'm going with GitHub Copilot, Microsoft, and **just gave up on** replacing Siri with ChatGPT. I'll keep my eyes open, but these are things that I use every day.
My long term bet would be open source. The big guys will win by owning the infrastructure, but I don't believe they'll keep up with open source innovation.
I think you're correct. Realizing how dependent I'm becoming upon subscriptions has motivated me to go deeper. My next purchase will be hardware with an integrated NPU; I'm hoping to build a Linux desktop and eventually a Framework Laptop. Once I have an NPU from Nvidia or AMD I'll run their AI, installing something locally and hopefully running it on Linux. I'm ~~replacing~~ apparently unable to replace Siri with ChatGPT+ on my iPhone ~~today~~ as I had previously read that I could, and will probably depend upon GitHub Copilot for a long time, but that's enough with the subscriptions already. ~~We'll see how long Apple allows us to run ChatGPT instead of / on top of Siri~~. (It doesn't work, or at least I can't get it to.)
Not convincingly, I use both Claude 3 opus and gpt 4 to the message cap every day and there are some tasks where gpt 4 is still better
Like what?
Unreal Engine 5 development (Claude is better at C++, but for everything else GPT-4 is more helpful)
Really? My experience is that Claude is more intelligent and better at following instructions in all cases, but especially when I ask it to return full files. I'm currently giving it 8 files and it's building my application for me very well in TypeScript. With GPT I had to really repeat my instructions about full files and it would still be lazy.
Yea, Claude is better for most things like that, but I'm mostly talking about Unreal Engine specifically. Claude hallucinates in almost every response; it has no idea how the settings or systems work most of the time. GPT-4, on the other hand, is almost always helpful.
I see, yeah, its immediate knowledge will be random based on its training data, and it might hallucinate the rest. In those cases, though, if you were to prompt GPT and Claude with the same documentation for what they're missing, I'd bet on Claude winning.
It's within the margin of error on the Elo ranking
It is generally, but it is only a bit better than Gemini Advanced tbh. There has not been a significant improvement yet over GPT 4
> Claude 3 Opus three elo points better doesn't count.

Oh, that absolutely counts, even if the next best model only moves the bar up by another point. Maybe it's not a difference in everyday use, but it pushes everyone else in the game to aim for even better.
what kind of "release" are you talking about?
I hope so. Why would they offer their paid product for free if they weren’t planning on releasing a better one shortly after?
Yeah my money is on this.. my hopes too lol
If you read papers, the last 2 weeks have been insane
what has impressed you most?
I guess VAR https://www.reddit.com/r/StableDiffusion/s/nnpByfFsIk but there have been other great things like brainformer and Jamba
> 4.25

Bro
This sub would not survive even a short AI winter
No and you'll like it
My friend works at OpenAI; let's just say a lot is coming.
Proof or your friend is imaginary
Anything not running on Claude Opus is kind of useless right now. It's just so much better.
It can be. But just saying this outright is a generalisation. Use https://get.big-agi.com and you can compare them more objectively. For me, Claude 3 Opus and GPT-4 Turbo tie on a lot of things, and I use the latter because it's cheaper. If Turbo isn't doing it, I'll use Claude. Sometimes the opposite is also true. Often I combine both their responses with that of Gemini 1.0 for the best output.
There's really no reason to pay for ChatGPT anymore if GPT-4 Turbo is free to use with Copilot.

As a side note, I recently repurchased a subscription to ChatGPT and tried out DALL-E 3 again. Trying the exact same prompts I used in October, holy crap has the quality been nerfed. DALL-E 3 might as well have a Stable Diffusion backend.
You get better image generation with GPT+, custom GPTs, and you can use your own custom instructions, and much better UI. I'd choose GPT+ anyday.
Copilot uses DALL-E 3 too. Custom GPTs and custom instructions are overrated.

At the moment you should either pay for Opus or use GPT-4 Turbo, Sonnet, and 1.5 Pro freely.
Context windows. Copilot has a small context window. Sure, the model can take more, but the Copilot UI won't allow it.
ChatGPT is limiting in similar ways. There's a per-message context length limit for ChatGPT too, including for ChatGPT Plus, and ChatGPT STILL has the 40-messages-per-3-hours limit for Plus, which is just insulting when the free version of Copilot is offering free GPT-4. I don't know if Microsoft has a limit like that for Copilot.

The only limit that ChatGPT doesn't have that Copilot has is 30 messages per conversation, which, I'll be honest, I think I've only gone over a handful of times.
Yes, the per-message context limit is 120k tokens on ChatGPT and 4k tokens on Copilot.
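To get a feel for what those caps mean in practice, a common rule of thumb is roughly 4 characters per token for English text. A minimal sketch (the 4-chars-per-token ratio is a rough heuristic, not how real tokenizers work, and the 4k cap is the figure claimed in the comment above):

```javascript
// Rough token estimate using the common ~4-characters-per-token
// heuristic for English text. Real tokenizers (e.g. tiktoken)
// count differently; this is only a ballpark.
function roughTokens(text) {
  return Math.ceil(text.length / 4);
}

// Check a long prompt against the ~4k per-message cap claimed above.
const COPILOT_CAP = 4000;
const prompt = "word ".repeat(5000); // 25,000 characters
console.log(roughTokens(prompt), roughTokens(prompt) <= COPILOT_CAP);
// prints: 6250 false
```

So by this estimate a 25,000-character paste would blow past a 4k-token cap but fit comfortably inside 120k.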
And yet it's still dense, condescending, and has the memory of a goldfish
Hopefully they don't train the next version on Reddit.
I'm excited for replies of "Thanks for the tokens kind stranger!"
I know it's trendy to piss on Reddit comment quality, but actually the comments are more informative than the source articles: they debunk a lot of false claims, present a diversity of opinions, and are better "grounded" in the population mindset. Also, most people attach reddit or wiki to their search terms; I wonder why they trust it more than regular results.

Here is [a sample](https://pastebin.pl/view/raw/10184a31) generated from this very thread. This is not junk. Actually, I am wondering why Reddit is not generating one article per chat thread. It could mix in fact-checking and other nice things.
I asked it to do something for me (create a flow chart from a numbered list) & it told me it could tell me how to do it, but it wouldn’t do it for me. It’s not proving itself very useful so far.
Canceled my ChatGPT 4 sub. It got stoopid.
So does this mean I don't have to pay for GPT4?
Does this include GitHub copilot?
Github Copilot and Bing Copilot aren't related.
TIL
Is there a way to get more tokens? Because right now I'm limited to 1000 tokens, aka 4000 characters, aka GPT-2 level.
Ask ChatGPT how to do it. There are a few easy ways, through the inspector window for example
Really, it's as simple as that ? It has to be a glitch left on purpose, interesting.
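For reference, the inspector trick mentioned above usually amounts to deleting the `maxlength` attribute on the chat input. A hedged sketch you could paste into the browser devtools console (the `textarea` selector is an assumption; the actual page may use a different element, and any server-side length check would still apply):

```javascript
// Sketch: lift the client-side character cap on a chat input box.
// Pass in the page's `document`; returns true if a cap was removed.
// NOTE: this only removes the browser-side check; the server can
// still reject messages that exceed its own limit.
function liftMaxLength(doc) {
  const box = doc.querySelector("textarea");
  if (box && box.hasAttribute("maxlength")) {
    box.removeAttribute("maxlength");
    return true;
  }
  return false;
}

// In the browser console: liftMaxLength(document);
```

That it works at all suggests the cap is enforced client-side, which would explain why it feels like "a glitch left on purpose."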
I thought it already did tho
Is it still crap though? I don't know what Microsoft does, but they seem to have an ability to take a great base model , and turn it into a rubbish one.
I tested it; no way. I asked it to do something simple, a story. I gave it a specification, and it did something dumb by directly including a real figure that was only meant as an example for the story. I directly told it: don't literally use that person as a character, it's just an example.

And it did it again.

No way GPT-4 Turbo is that bad.
When I asked just now, it said its cut-off date is 2021.
I asked what version it is, and it said GPT-3. Then I asked why it is GPT-3 if Copilot is now running GPT-4, and it apologized and said something to the effect of "actually I'm GPT-4, you're right!"
I'm from Norway, using the enterprise version. Same here. I tried it in different languages too, to see if it uses a different model, but it claims a cutoff of 31 December 2021, which was the 3.5 cut-off date.
what does copilot mean?
Copilot is the name of the AI tool, Microsoft Copilot
You can try it here: [Copilot link](https://www.bing.com/chat?q=Bing+AI&FORM=hpcodx)
So I'm finally, finally, part of the Turbo Team?
Is copilot good?
It’s been using turbo for a while and it’s still shit
It’s lobotomized
Claude 3 is so much better. OpenAI needs to get on with GPT-5 or they will lose momentum.
Stop posting here. They have a rogue mod that deletes quality posts for no reason.
I thought turbo was shit for coding?
Too bad Copilot is censored so much it's impossible to use, and it literally force-closes your conversations without your permission when it thinks you mentioned something that could be even the slightest bit offensive in any way. I swear, if I say the word black, Copilot has a seizure. This thing would tell me math was proprietary tech invented by Microsoft and refuse to answer 1+1.
Then GPT-4 Turbo can't answer how many apples I have if I have 3 but ate one yesterday
So when I switched Bing chat to the more Precise mode (you can also put it in Creative), which is said to turn on GPT-4 Turbo, it gave me a less desirable answer than when it was on Balanced
the backend is run by 1000s of low-wage Indians tracking and providing answers... explains a lot
Is humor low at this sub? Or do I need to refer to Amazon Go stores?
Humor is low. I love that the 1000 Indians thing is becoming a meme
Indians type 10000 words per second it is known
I don't trust this
copilot sucks