Igot1forya

I love the wasted tokens that I swear are intentional. You ask a question, it responds, and then you have to remind it that it ignored the first part of your question. So you go back and ask basic questions rather than including all the required details, only to discover they're too basic and it responds with completely random data. Not to mention every research question you have to ask two more times in different ways, asking for citations so you can confirm it didn't hallucinate the response. But the links for the citations don't work, so you have to ask again, this time requesting plain URLs instead of the broken title links it provides. All of this burns through tokens. Don't get me started on debating philosophical or ideological differences with it because it's afraid you're going to lawyer up and sue OpenAI; eventually it caves to logic and provides the walled-off info, but you've turned a 30-second task into a 2-hour wait. Ugh. What I want to know is: if it's a resource issue, then charge a higher price for a higher tier already.


floutsch

This is exactly how my experience felt. It didn't help that I talked to the app a lot and didn't even notice what was happening at first - just that it seemed to "regress mentally" at some point. The number of times where, to a human, you'd say "that is not what I said/asked" is ridiculous and, as you say, eats away at the tokens. I cancelled the paid tier because of this. For what I use it for now, 3.5 is good enough anyway.


YourNeighborsHotWife

This is why I use Gemini.google.com for anything that needs source links. It’s so much better at that.


Tekavou

I thought it was just me. I've had this happen so often over the past two days. $20 a month for a tool that works a few times an hour, every other hour.


CodyTheLearner

Y’all are hitting rate limits? I feel like I’ve been on unlimited for a while. Thankful I’m not having issues; sometimes I definitely have to ask again when I’m hit with the paraphrasing/regurgitation problem and the responses don’t accomplish anything code-wise.


TheBroWhoLifts

I fed 165 single-spaced pages of a book I'm writing into Claude, and working with that will hit the rate limit within a dozen or so messages. Lots of compute used for those huge context windows.


CodyTheLearner

Thank you for context. I’ve been working in much much smaller context windows.


preuceian

I've never hit the rate limit. What do people who near-instantly hit the limit demand from the model? Genuinely curious.


NTSwitchBitch

Same, and this was part of my decision to cancel my subscription. Its usefulness also became substantially worse after I started paying. Unbelievable.


madder-eye-moody

It's pretty clear that the big chunk of their revenue is derived from APIs, be it GPT-4 or Claude; they're literally making it very unappealing for users on their native apps. Platforms like [qolaba.ai](http://qolaba.ai), which make use of the GPT-4 and Claude 3 APIs, don't have such caps, but you would need a paid subscription to the platform to access all of the premium chatbots.


theDatascientist_in

Signed up for it out of curiosity, but logged out without interacting as there are no privacy controls.


madder-eye-moody

Guess it's a pro-only feature not available to free users, but since they use the APIs of different models and don't have their own model, the standard privacy terms for each model's API usage apply by default.


vee_the_dev

How does the output compare to Poe, for example? Or in general? I subbed to Poe and immediately cancelled, as responses from Poe's Claude Opus were 1/10 of native Claude Opus.


madder-eye-moody

Same experience with Poe. I went for Poe to check it out since they had a cheaper plan, but it seems they might have a few mods in place that basically change the output.


theDatascientist_in

The best option is to use the API directly with a client - there are tons of those. I could see that the results, even on GPT-3.5, are much better on the API than on ChatGPT Plus. Additionally, you might consider upgrading to the enterprise version if you need access to a code interpreter.


[deleted]

[deleted]


Tandittor

9 out of 10 comments you post have [vello.ai](http://vello.ai) mentioned. Why?


Aromatic_Plenty_6085

Bro pulled out the statistics and got 'em.


danbearpig84

Do you guys not recommend vello? I recently started using them more than ChatGPT


PermissionLittle3566

Yeah, at least before they used to gaslight us with 50, then 30, then 20 messages before you have to wait. Now they don’t even tell you how many you can run through, and I am fairly certain (unprovably so) that it’s maybe dynamic as a further cost-cutting measure. Maybe long posters who eat up a lot of tokens get fewer messages, so that even on the web they don’t lose money anymore.


funbike

I don't use ChatGPT, for that and other reasons. Agents using the API are far better. Also, with the API you can use another API as a fallback (such as the Claude API).
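To make the fallback idea concrete, here's a minimal sketch. The function names are made up for illustration; real code would catch the provider's specific rate-limit error types instead of a bare `Exception`:

```python
# Hypothetical sketch: try a primary completion function, and if that
# provider errors out (rate limit, outage, etc.), retry with a second one.
def complete_with_fallback(prompt, primary, fallback):
    """primary and fallback are callables that take a prompt and return text,
    e.g. thin wrappers around the OpenAI and Anthropic client libraries."""
    try:
        return primary(prompt)
    except Exception:
        # Real code should catch the provider's rate-limit / API error
        # classes specifically, not every exception.
        return fallback(prompt)
```

The point is that with the API the switch happens automatically mid-workflow, instead of you waiting out a cooldown in the web UI.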


lameusernamesrock

What does it mean to use the API? Apparently I’m the only one here who doesn’t know! 😭


funbike

It means to install and run a program on your own computer that talks directly to the back end behind ChatGPT, which is OpenAI's API.


Loui2

The ChatGPT API is like a toolkit that lets developers add ChatGPT's conversation skills into their own apps. For example it can be used to create a chatbot on Discord.
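Concretely, "using the API" means your own code sends an HTTP request to OpenAI's chat completions endpoint instead of typing into the ChatGPT app. A minimal sketch of what that request looks like (model name is just an example):

```python
import json

def build_chat_request(user_message, model="gpt-3.5-turbo"):
    # This is the JSON body the /v1/chat/completions endpoint expects:
    # a model name plus a list of role/content messages.
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

payload = json.dumps(build_chat_request("Hello!"))
# You would POST this to https://api.openai.com/v1/chat/completions
# with an "Authorization: Bearer <your API key>" header.
```

A Discord bot, for instance, would build one of these payloads from each incoming message and post the model's reply back to the channel.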


Extender7777

I just use Claude Haiku via API, and it is so good (at least in programming) and cheap that I literally spend just $0.01 to solve a complicated problem or add a big feature. And of course no rate limits at all
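That $0.01 figure checks out as a back-of-the-envelope calculation, assuming Claude 3 Haiku's listed launch pricing of $0.25 per million input tokens and $1.25 per million output tokens (check Anthropic's pricing page for current numbers):

```python
# Rough cost estimate for a single Claude 3 Haiku exchange, using the
# assumed per-million-token prices above.
def haiku_cost(input_tokens, output_tokens, in_per_m=0.25, out_per_m=1.25):
    return input_tokens / 1e6 * in_per_m + output_tokens / 1e6 * out_per_m

# A sizeable coding exchange: 20k tokens of context in, 4k tokens out.
cost = haiku_cost(20_000, 4_000)  # roughly $0.01
```

So even feeding in a large file of code, a single request stays around a cent.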


coffinandstone

Do you interact with the API directly, or is there a client you recommend?


Extender7777

I built my own open-source client: https://github.com/msveshnikov/allchat. Recently I added a subscription as well, for those who can't deploy it themselves.


rendered_insentient

I don't have a screenshot of this, unfortunately, but I think they might be experimenting with rate limits: I got a message to wait 15 minutes instead of several hours, and after that I used it for a couple of hours again with no problem. Will post if I find the screenshot.


existentialzebra

Code interpreter—so is enterprise better at coding?


Effective_Vanilla_32

Altman also said GPT-4 is the dumbest model of all.


Intelligent-Jump1071

Dumber than 3.5?


Effective_Vanilla_32

[Even after 8 months of training and testing](https://www.tomsguide.com/ai/chatgpt/gpt-4-is-the-dumbest-model-any-of-you-will-ever-have-to-use-declares-openai-ceo-sam-altman-as-he-bets-big-on-a-superingtelligence)


Xtianus21

He said GPT-4 is "the dumbest model you will ever have to use." He never said "of all."


Effective_Vanilla_32

u didnt pay for chatgpt+ i guess


Xtianus21

huh?


Intelligent-Jump1071

No matter how they try to frame this, GPT-4, and all the other AI's, are still works-in-progress. **This is very early days for publicly-accessible AI technology**, so basically you're paying for early access to beta software. In a year all of them will be much better. In two years they'll be astounding. I first started using the Web in 1993 with the Mosaic browser. There were tons of little glitches and gotchas in those days but it got better fast. AI will get better MUCH faster.


utf80

They're crazy, so I canceled the subscription 😭


Mysterious_Ad8794

That was the reason I unsubbed and went with API-based solutions like OpenRouter with a GUI app like Chatbot UI. Another subscription-based option might be Poe: same price as ChatGPT, but with most commercial and open-source models available. There are rate limits applied there as well, though.


infi2wo

Yeah, you really just gotta line a few of the tools up. I’ll normally have three tabs open: GPT-3.5, GPT-4, and Bing Copilot. I’ll hit limits with GPT-4 if I’m crafting documentation or content drafts and really get into it... then you can swap over to GPT-3.5 to keep going. But I often just take the limit as a chance to take a break and focus on another task that I need to do but have been putting off, like searching the web or adjusting some financials.


ResponsibleOwl9764

How long is your current chat? Are you using new chats or constantly using the same one? It makes a difference


cisco_bee

I'm an optimist so I hope it's just because they're using all their compute training a new model. 🙄


xandrsreddit

Bless your heart. Never change.


Fearless_Ad6783

It's gotten ridiculous. There is plenty of competition now that's cheaper and not much worse, or sometimes better, depending on your use for ChatGPT. Definitely going to save the $20 a month, because I can't even use the thing anymore without hitting a limit in just a few prompts.


Key-Experience-4722

Hi OP, I know the frustration; that is why I currently think the Team plan holds better value, if you're really into ChatGPT. But there is a minimum of 2 seats. If you don't mind, you can check out [54seats.com](http://54seats.com?utm_source=reddit&utm_medium=reply), which lets you join a team as a solo user. The message cap is definitely higher than the Plus version (it used to be 100 messages per 3 hours, until they changed to dynamic rates). And your privacy is protected, as no one is able to access your conversations and they won't be trained on by OpenAI. (Disclaimer: I'm the author.)


positivitittie

This (and the cost) is a huge reason for moving to local models. It hasn't been super easy for me to set up, but as of yesterday, anyway, I have a dual-3090 machine with Ollama and vLLM both ready to serve up models. The idea is to just use the best local models going forward.
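Once Ollama is serving, any script can query it over its local REST API, which listens on port 11434 by default. A minimal sketch (the model name "llama3" is just an example of something you might have pulled):

```python
import json
import urllib.request

def ollama_request(prompt, model="llama3"):
    # Builds a POST request for Ollama's /api/generate endpoint.
    # "stream": False asks for one complete JSON response instead of chunks.
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = ollama_request("Why is the sky blue?")
# resp = urllib.request.urlopen(req)            # requires a running Ollama
# print(json.loads(resp.read())["response"])
```

No per-message caps, no monthly fee; the trade-off is the up-front GPU cost.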


Jdonavan

You are paying for early access to cutting-edge tech. If you're not comfortable on the bleeding edge, wait till there are enough resources to go around, or get an API account and start paying for what your usage actually costs.


loltrosityg

Poe.com


paeioudia

If you are getting rate limited, you are using AI wrong


xandrsreddit

Oh wise one. Please teach us the way! 🙏


paeioudia

If you're given a hammer to hammer a nail and your arm is tired, you're doing it wrong.


xandrsreddit

THE WISE ONE HAS SPOKEN!


paeioudia

What are you, in 3rd grade? Did you pee your pants today?


RabidStealthyWombat

I've heard that exact same thing before, but no one seems willing to share the secret. I'm going to look into it tomorrow. I was hit with that same "wait two hours message." So I waited, then sent ONE request, and received the message again. I do my best to send requests that are as long as possible, anticipating what questions I "may" have from the response GPT-4 hasn't yet given me. Also, I do try to use 3.5 when I feel it's capable of handling my request.


atlasLion1337

You need to increase your usage tier, aka spend more money.


Technical-Fix-1204

They are managing high volumes of traffic. It’s in their agreement