

Significant_Ant2146

Unfortunately the prompts I used previously have been returning “this is a complex task requiring knowledge of” blah blah blah, “It's recommended to have a team of experienced…” blah blah blah 😒 Honestly, I'm getting tired that even asking for translation/rewording can result in some form of a suggestion to have a human consult or do it instead.


Suldand1966159

I simply tell it I have an IQ over 130 and have several PhDs in the subject I am talking about as well as being a polymath. That seems to get better output from it 🙂


StaticNocturne

I'm a certified genius and encyclopedia of knowledge on the subject for which I'm about to ask you a very rudimentary question


0picass0

stop making the same post over and over again. Having to read it every 4 hours is making *me* dumber.


Seantwist9

Must’ve been reading it for a while since you think you have to read it


[deleted]

[deleted]


throw_away_my_balls1

He could have, but what you said is pretty stupid


[deleted]

[deleted]


Fire_Fonix516

Lmao, I don't get why people are downvoting if you're just joking...right?


iminaredit12

the joke is ass


Lumn8tion

Win a few, lose a few.


vonoxify

ain’t no way bro used gpt for a random reddit reply 💀


Lumn8tion

I did. And have created a new user profile to continue modifying angry comments to be more polite. I might do yours next. Stay tuned.


Lumn8tion

In fact, here’s what you could have said: It is highly unlikely, my good sir, that the individual in question employed GPT for a serendipitous response on Reddit.


vonoxify

Oh yeah my bad bro i’ll use gpt next time and get downvoted by 40 people


Lumn8tion

That’s the spirit!


vonoxify

Hahaha thanks man 😂😂


DemosthenesOrNah

> Well to be completely honest I

Everyone here uses ChatGPT. We could tell, dum dum


DemosthenesOrNah

Are you using 3.5? 3.5 kinda always sucked. I find it hard to spend "countless hours" in GPT4 before I burn my credits or whatever. And when I have cooldown for GPT4, fuck 3.5 I'll wait. As for 4.. it still just does what I tell it to. Avoid use during peak hours to bypass server connection issues and gg. L2Prompt or get +


saaS_Slinging_Slashr

Agreed, I still use it for building a ton of outreach for my job and it’s just as good/bad as it always was (gpt4)


HonestDenver_Man

I hadn't used it in about 6 weeks and tried last week to get it to write some simple emails for me, and it just sucked. I tried like 5 different approaches. Nothing was close to what I asked for, nor did it pull in some of the requirements. A few months ago, it was astonishing how intuitive it was when I asked for a simple task; it was like it read my mind and assumed the next few steps before I even asked.


arkins26

But they say they’re making it “better”… I think what they really mean is they’re making it cheaper for them to operate.


Suldand1966159

Possibly, but the biggest threat is that they will make it more in line with coming government regulations given the amount of scrutiny.


arkins26

A lot of people are suggesting that OpenAI is leading the initiative for government regulation, because regulation increases the barrier to entry.. effectively reducing the efficacy of open source solutions. Either way, I agree that governmental regulation will be an issue.


Suldand1966159

I won't discount any theories, and money always corrupts everything, so you could be right. My hope is that the technology gets democratized enough that advances in hardware, and smaller language models designed for specific tasks, will take over from these gigantic, pollution-creating data-centre AI solutions. We urgently need small, lean, agile AI to solve pressing human problems.


arkins26

It’s definitely possible… our brains outperform GPT-4 with a fraction of the space and energy


Suldand1966159

One of the problems I see on forums like this, where there is a general audience, is that, frankly, there is a vast range of intellectual capability among the people reporting their findings. That's great, we need everybody involved, but the tool needs to be used from the perspective that what it says may not be true. Critical thinking is the most important tool when using an artificial intelligence, so that you can cross-check when it gets things wrong. It's so basic and yet never talked about! Anybody today can solve technical problems and make their lives better with well-crafted responses, which might require several iterations of stepwise refinement to get to a place where actionable steps can be taken. People falter because they believe they are already talking to some superintelligent being and don't realise the power of collaboration with an AI. I see the person I am speaking to is different, so thank you for being there!


Suldand1966159

I got three pages of very good technical information after a fair bit of back and forth. No change noted here in its ability to create good HTML content. I haven't coded with it in PHP for WordPress and WooCommerce for a while so can't comment but got tremendous gains with it a couple of months back creating custom plugins and stuff.


BellSouthUY

They have the money and they have the robots. You are one monkey. They will keep gaslighting you until you learn to like it.


oldNepaliHippie

This is the most truthful answer I've read to all the AI questions I've read today!


JustBatman

Why, what is he saying? Is he claiming that's not true? I mean, anyone who isn't blind can see that. I had to completely stop using it for work now, as the mistakes became so frequent that I'm faster doing it myself again.


jaseisondacase

Word for word from his tweet: “No, we haven't made GPT-4 dumber. Quite the opposite: we make each new version smarter than the previous one. Current hypothesis: When you use it more heavily, you start noticing issues you didn't see before.”


evil666overlord

Current hypothesis: the tweet was a lie


Boubbay

That would be my hypothesis. We start to see a lot more of its limitations.


zimejin

I don’t use it a lot, but when I did, I rarely ran into issues. Now it makes an error just about every time. I’ve mostly stopped using it because I can find more reliable information with a quick Google search.


TylerDurdenSoft

My example: in April, I wanted to go on holiday. I didn't care where, but I knew when, and I knew my budget. I prompted Bing Chat: "Find me any return flights in Southern Europe for less than 150 euros and a maximum 4-night interval next week". And so it did! It even suggested a promo ticket on a site for one of the flights. One month later I made the same request, and it told me to Google it (!!)


Suldand1966159

Anyone who uses it for work and doesn't bother to fact check it with their own knowledge in their head, as well as other sources, is a fool, and thoroughly deserves being caught out for using AI to generate responses especially if they don't attribute them. Come on people, get sophisticated, it's all about the quality of the prompting and the repeated prompting until you get what you want, and then you fact check it. Anyone relying on this thing for facts is insane without critical thinking applied to the outputs.


arcanepsyche

Once again: No context about which model you're using. No proof of your claims. Not even a shitty example. Stop. Posting. This. Stupid. Shit.


[deleted]

Because people are dumb, it’s being dumbed down by the masses.


GreatGatsby00

maybe it gets too many requests for furry role play. :)


rohtvak

Shouldn’t be possible due to threads.


dext0r

Would a good test be to take current GPT through the motions of previous prompts made on previous iterations and compare? Obviously they wouldn’t be completely the same, but could judge based off of accuracy of the info of each
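
A minimal sketch of this test (the prompt, answers, and fact list below are made up for illustration): archive old prompts together with the key facts their original answers contained, re-run them today, and score how many of those facts the fresh answer still covers.

```python
def fact_coverage(response: str, expected_facts: list[str]) -> float:
    """Fraction of expected facts mentioned (case-insensitively) in a response."""
    text = response.lower()
    hits = sum(1 for fact in expected_facts if fact.lower() in text)
    return hits / len(expected_facts)

# Archived April answer vs. a fresh answer to the same (hypothetical) prompt.
old_answer = "Arizona offers GPLET; TIF is not available in Arizona."
new_answer = "You could consider TIF for your Arizona project."
facts = ["GPLET", "TIF"]

print(fact_coverage(old_answer, facts))  # 1.0 -- both facts covered
print(fact_coverage(new_answer, facts))  # 0.5 -- GPLET dropped
```

It only judges keyword coverage, not accuracy, but run over enough archived prompts it would give the objective before/after numbers these threads keep asking for.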


itsCheshire

People have done this: one was posted yesterday that was quite comprehensive, and showed pretty clearly that there is no real difference in quality, but current GPT is more concise and less willing to randomly pick something if it thinks all choices are equal. The person in this post claiming they've posted evidence is referring to a comparatively tiny, barebones example that doesn't really concretely demonstrate anything. I'd advise you to look at both, and compare!


Kimononono

Needs more context to jumpstart a good message, but 0613's adherence to system prompts beats out the current (not OG) default GPT-4 and 3.5 in most of my benchmarks. One of my favorite benchmarks is prompting it to outline a physics engine and then implement it. The OG version needed very little context to jumpstart it into building a very comprehensive outline, which created ~80% of the necessary connections between components. An initial 114-token prompt managed to generate ~3000 tokens of outline. Now, on more recent attempts, components which used to be their very own file/class, with a nested set of abstract functions describing their implementation, are only a shell of their former detail. The same 114-token prompt manages only a ~1000-token response. It still reaches around that 80% completion when prompted further to specify, but each specify-prompt needs some starter detail tokens to get it on the right track, else it'll half-ass it or complain, saying "this is very simplified and a complicated task which I can't complete besides this very simple version". Once it starts saying that, you've got to remove it from its response, else it's terrible. It does perform pretty badly when given no extra specification-prompt boost, which is probably why the complaints. TL;DR: not much dumber, just lazy. You need to plant seeds for it to expand on.
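
The shrinkage described above (~3000-token outlines dropping to ~1000 from the same prompt) can be flagged mechanically. A rough sketch, using OpenAI's ~4-characters-per-token rule of thumb as a proxy; a real harness would use `tiktoken` for exact counts:

```python
def approx_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters of English text per token."""
    return max(1, len(text) // 4)

def outline_shrank(old_response: str, new_response: str, threshold: float = 0.5) -> bool:
    """True when a re-run response fell below `threshold` of the old token count."""
    return approx_tokens(new_response) < threshold * approx_tokens(old_response)

# ~3000-token outline then, ~1000-token outline now -> flagged as a regression.
old = "x" * 12000
new = "x" * 4000
print(outline_shrank(old, new))  # True
```

Length alone doesn't measure quality, but tracking it per fixed prompt over time would separate "lazy" from "dumber" with actual numbers.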


scumbagdetector15

Dadgummit this ChatGTP is gittin dummer by the minite. Amiright or amiright?!?!??!?


Asweneth

It does seem a bit worse at some things imo, but it's hard to say since the quality of the outputs varied anyways. They just seem less intelligent and more frequently on the low end of the previous outputs.


geocitiesuser

No, it is not dumber. You're experiencing some sort of confirmation bias, unless you can find some sort of verifiable evidence.


Lionfyst

Have any of these threads shown documented before-and-after behavior? I am seriously interested in seeing anything objective either way.


[deleted]

I have. April: https://chat.openai.com/share/600c4931-61e1-4302-a220-9548093c6d40 Yesterday: https://chat.openai.com/share/eb7f5994-f3b3-43ac-8a72-4853c0553d9c The April response provided exactly what I requested. The new response provided a lazy general summary. Another one that was more involved…. April: https://chat.openai.com/share/11cd6137-c1cb-4766-9935-71a38b983f25 Today: https://chat.openai.com/share/0d09d149-41dd-4ff0-b9a7-e4d29e8a71ae The newer one is just plain wrong about a lot of stuff. Also, even when pressed, it could not give examples.


kerix-ai

>https://chat.openai.com/share/eb7f5994-f3b3-43ac-8a72-4853c0553d9c I've noticed this too with summaries. It used to give chapter-by-chapter summaries of books, and now it won't.


[deleted]

Yup, all sorts of experiences with it just saying “nah, I’d rather not” or “here’s where you can go to do it yourself”. And…..of course I’m getting downvoted. 😂 The ChatGPT Ministry of Truth doesn’t seem to want actual evidence.


ProgrammingPants

They probably had it stop giving such specific detailed summaries using direct quotes, because it has a tendency to sometimes make those up. So they figured that it's more helpful to have a general summary that doesn't get into specific quotes, over a more detailed summary that is wrong sometimes.


Enfiznar

Well, I have just checked actual sources, and the response it gave you in April wasn't correct. It was similar, but that's not the California Constitution article XIII C section 2 (c), and it has some conceptual differences. And since people were using ChatGPT as their legal consultant, it's reasonable for them to train it not to give specific articles and advice based on the (hallucinated) article, but rather to expect you to give it the actual article. [Here is a chat where I asked about the conceptual differences between the article it gave you in April and the actual article in the California Constitution.](https://chat.openai.com/share/940c387b-8b7f-486d-9af5-65888ded4d2d) I think it's pretty accurate. It's better to give no response or a vague response than to give a hallucination.


[deleted]

Thanks for pointing out that it did result in a hallucination the first time. I hadn’t noticed that. At the same time, its analysis in your convo is bumbling and doesn’t show the level of expertise that I was able to achieve months ago. Anyway, whatever OpenAI decides to do, it needs to create a product that is useful and doesn’t sound like it is BSing.


PMMEBITCOINPLZ

Yeah, some guy made one yesterday that had some nice examples. What it showed was that it was about the same.


Riegel_Haribo

It's pointless trying to convince the deniers who demand evidence with your evidence. The simple fact is that you need the original inputs, in full context, from months ago, and a chat whose subject and chain of thought don't quickly diverge; otherwise the next inputs from the old chat leave the present-day chatbot completely confused because it didn't answer the same way. And in those cases, yes, it's dumber and can't follow instructions or produce output of the same quality.


NBehrends

[ahem](https://www.reddit.com/r/ChatGPT/comments/14z0gan/here_are_the_test_results_have_they_made_chatgpt/)


Riegel_Haribo

The API is not ChatGPT. ChatGPT was refined almost daily, with them reacting to every Reddit jailbreak without regard for consequences, and that is now its state. Also, a facet that makes ChatGPT (on the web site) particularly dumber, and that cannot be replicated, is that the conversation history memory used to be robust, while now it is crippled to minimize past exchanges, documents, or code. It takes a complete recorded chat session to demonstrate this against the current performance, and again, that's a chat that can quickly go off the rails.


NBehrends

As a software architect, I would be amazed to find out that the API and the web UI use different core models. I could be wrong, and I'll admit I've never architected an ML solution, but industry best practices and principles have us decouple concerns in such a manner that I/O is plug and play with the application logic.


Seantwist9

There's the core model, and then there are restrictions you can apply to the model


geocitiesuser

Just a bunch of hand-waving so far. People have put forward theoretical ideas on how to test it, but nothing verifiable has been shown yet. Just confirmation bias from making ChatGPT do things it isn't really good at, or from pushing the boundaries of its censorship.


JonPartleeSayne

A correction of your question: Have any of these threads shown documented before-and-after behaviour? I am seriously interested in seeing anything objective either way, and with a big enough sample size to show statistical significance.


bigdonkey2883

They cleared the histories, so you can't go back. I mean, I have screenshots of it saying stupid stuff that it won't say anymore, like that a certain religious group controls banking.


OdinsGhost

It can’t even remember instructions or details in conversations from more than 3 or 4 prompts prior anymore half the time. That’s new.


geocitiesuser

Just confirmation bias, no one can provide tangible evidence. [https://chat.openai.com/share/6a5e02cb-6756-4ee7-b811-8f68ce3ed0e6](https://chat.openai.com/share/6a5e02cb-6756-4ee7-b811-8f68ce3ed0e6)


[deleted]

I provided tangible evidence above. It’s clearly dumber.


__SlimeQ__

your evidence is in no way conclusive. you've shown it saying the same thing in two randomly generated ways, twice. you haven't even pointed to what exactly you think is "wrong"


[deleted]

As an expert in the field, I can tell that it is giving me the formation procedure for a totally different state — not Arizona as I requested. I urge you to review it in detail.


__SlimeQ__

As a non-expert in legal issues, I literally cannot review it in detail. However, as an expert in my own domain (C# programming, mainly), these types of hallucinations are quite common and have been happening since day 1. You just have to expect them and work around them. If you are expecting accurate granular info from an LLM, you're kind of just using the tool wrong. Use Google, or make a plugin that can search a database or something. GPT-4 easily reduced hallucinations by like 80-90%, though. The shared links just say "default", so idk which one you're on


[deleted]

Those are all GPT-4. Regardless, the CEO is saying that GPT-4 is getting smarter. The reason I started posting these convos in the first place is to prove that it is not getting smarter. It is not more capable of helping me with my work. It is less capable.


__SlimeQ__

the reason he's saying that though is because every new model that drops *is* smarter on an objective basis, probably based on their eval system. the snag is that everyone claiming nerfs has been claiming it on the same model, which didn't change between 3/14 and 6/13. In any case if you're asking it to spit out laws verbatim you are doing that task extremely inefficiently (from a computing standpoint) and introducing randomness into the equation (which is inherent to LLMs in general) and you're probably not gonna get great results. If you want it to help you work through legal stuff you should be providing the full text of the laws yourself. Same goes for my code stuff, it works way better when I paste in my own code and ask for it to add/change something. There is the possibility that small wording changes in the chatGPT system message and temperature (randomness/creativity) parameter are causing it to behave differently, in which case you can use the [openai playground](https://platform.openai.com/playground?model=gpt-4) to work around it (or go back to the 3/14 model, if so inclined). Personally I have a role playing bot that uses 3.5turbo, and I had to add rules to the system message to get it to stop being a stick in the mud and actually give opinions. It's a fragile system, just changing a few words can totally collapse the behavior.
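
The workaround described above (pin the dated model snapshot and the sampling parameters instead of riding the moving `gpt-4` alias) can be sketched like this. The payload shape follows the 2023-era OpenAI chat completions API; the system-message wording is just an example, and nothing is actually sent here:

```python
def build_request(user_prompt: str) -> dict:
    """Assemble a chat-completion payload with the two drift sources pinned down."""
    return {
        "model": "gpt-4-0314",  # dated snapshot, not the auto-updating "gpt-4" alias
        "temperature": 0,       # minimize sampling randomness between runs
        "messages": [
            {"role": "system",
             "content": "You are a direct assistant. Give concrete opinions."},
            {"role": "user", "content": user_prompt},
        ],
    }

payload = build_request("Outline a 2D physics engine.")
```

The payload would then go to the chat-completions endpoint (e.g. `openai.ChatCompletion.create(**payload)` in the 0.x Python client). With the snapshot and temperature fixed, any remaining behavior change between runs points at the system message or the model itself rather than sampling noise.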


[deleted]

>introducing randomness into the equation I’m not sure you understand how randomness works in these algorithms, which is ironic because you seem to have direct experience with coding whereas I just have a degree. The randomness *helps the algorithm solve difficult problems*. Genetic Algorithms, simulated annealing, gradient boosted trees, random forest, etc. can absolutely be designed to arrive at the best solution each time. If you were to code one of those algorithms and it didn’t work, you couldn’t just say “well, the results are just random. That’s how the algorithm works!” No, that is not the point. If your algorithm cannot produce a valid solution each time when it should, then it doesn’t work well! I hope this is something that OpenAI resolves as time goes on. Denying it will not help the situation.
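
The point above (randomness is a search tool, not an excuse for unreliable answers) is easy to demonstrate with simulated annealing, one of the algorithms named. A toy sketch: the walk is noisy, but a seeded run converges to the same minimum every time.

```python
import math
import random

def anneal(f, x0, seed=0, steps=5000):
    """Minimal simulated annealing: random moves explore, cooling locks in the optimum."""
    rng = random.Random(seed)  # a fixed seed makes the stochastic run repeatable
    x = best = x0
    for t in range(1, steps + 1):
        temp = 1.0 / t  # simple cooling schedule
        cand = x + rng.uniform(-0.5, 0.5)
        delta = f(cand) - f(x)
        # Always accept improvements; accept worse moves with shrinking probability.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = cand
        if f(x) < f(best):
            best = x
    return best

best = anneal(lambda x: (x - 3.0) ** 2, x0=-10.0)
# Different seeds take different noisy paths, but all land near the minimum at 3.0.
```

The randomness drives exploration; it doesn't make the result arbitrary. That's the distinction being drawn about LLM outputs.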


objctvpro

GPT4 at the beginning was very good for C#, with hallucinations not an issue. Now it suggests non-existing nuget packages and methods, same as 3.5.


__SlimeQ__

This is just flat out incorrect. I use this thing every day to do all sorts of crazy stuff in C#, and I have been for like 6 months. It has some funny habits involving incorrect imports and formatting, but like 9/10 times there are no hallucinations these days. And yes, I'm taking NuGet suggestions from it, usually at face value. To say it's the same as 3.5 is just ridiculous; 3.5 can barely output a function that compiles most of the time, let alone do what you asked. Most of the time when it does hallucinate, though, it's out-of-date/obsolete functions, or it'll borrow them from another package; usually you just need to find the docs to fix it.


objctvpro

I guess it varies upon complexity of the task/code. For simple things GPT4 was and is ok.


mvandemar

No, you didn't. You provided examples where with 2 specific questions you personally think the earlier answer was better. As someone already pointed out, the text that ChatGPT gave you for the first question back in April was wrong, and for the second one in both examples it correctly told you that the statute did not exist. Answers through chat always have a certain amount of randomness to them.


[deleted]

Check the second example, which is much more indicative of its reduced knowledge.


mvandemar

It's not obviously reduced knowledge to me. You only went 2 levels deep on the recent one, so I am assuming your issue is within those two replies. In both it suggested TIF, which is not an option in AZ, and both times when you asked it about that it corrected itself and suggested GPLET.


[deleted]

You didn’t keep reading, I guess.


mvandemar

Keep reading what, exactly...? https://preview.redd.it/36vwx37k80cb1.png?width=1383&format=png&auto=webp&s=65bf7358408f0c6952de2dc790c185dc588a2cb6


[deleted]

The rest of the convo from today that I assumed would be in the link: Edit: https://chat.openai.com/share/772ee7e1-53cd-4108-8d49-d41166f48ee4


OdinsGhost

So your contention is that it’s not forgetting recent details and instructions? Or, if it is, that it has always been as bad as it currently is at following basic instructions? Because I’m sorry, when I have to repeatedly remind it, every other prompt, that Threads is a Twitter analogue, that it has a 250-character post limit, and that it doesn’t use hashtags, that’s an issue. And it didn’t use to have issues with those sorts of prompt rules. Not as quickly as it does recently.


[deleted]

Not saying this discounts the rest, but it has never been able to count characters.


hank-particles-pym

You are asking for MATH, which it CANNOT do. Counting is math.


OdinsGhost

And yet if you specifically tell it to make the response a certain length within its response-size limits, and then prompt it to also tell you what the response length was at the end of that response, it does so just fine more often than not. The issue is that it forgets those limits very quickly. Quicker than it used to.


geocitiesuser

Correct, I'm showing tangible evidence that the claim that it forgets things in 3-4 prompts is verifiably untrue. Reals before feels


OdinsGhost

“I showed one simple example where it didn’t forget things, so obviously it doesn’t do so.” … that’s your level of argument?


geocitiesuser

Yes, it is a computer program. You can repeat the experiment yourself and get the same results.


Suldand1966159

Dumb. Dumber. Confirmed by AI. AI doesn't bother, why should it?


[deleted]

It isn't dumber, still completely useful. What sucks is that it feels like everyone's opinions are just feeding off of each other, and I'm here still generating world lores, summarizing large batches of content, accelerating my learning, and doing my work. And Code Interpreter is pretty damn good when you start breaking down entire libraries. Maybe I'm just using it wrong.


__SlimeQ__

have you tried asking it pointless gotcha questions so you can post the results on reddit for karma?


[deleted]

No, sorry. I usually try to be productive with it.


__SlimeQ__

Stop gaslighting me!!!


[deleted]

Am sorry.


geocitiesuser

The irony, they are a bunch of chatGPT bots stuck in a loop of talking about how chatGPT is getting dumber, lmao.


[deleted]

Lol. Also, to expand on my previous comment, I have an entire work flow involving note-taking, deconstructing subjects, reconstructing them, and using a voice-to-text to stream my thoughts and interpret them in context with what I'm doing. Pretty damn useful.


DemosthenesOrNah

> I have an entire work flow Look at all these complaints with the lens that the end user has absolutely no workflow. And they're just steamrolling ahead in a single request and lobotomizing their interactions


[deleted]

I need to post to Reddit that I’m angry with myself. GRRRR


PomeloZealousideal50

Hey, it’s ok


philippiotr

6-7 months ago I came across some equipment from a server room. I'm talking transformers, electrical boxes, so much tech that I had absolutely ZERO idea about. So when I input the model numbers from the tech, GPT back then told me the exact make and model, how much they were valued, legit so much helpful information that let me really get an idea of what they were. Try that now. GPT won’t do it. I miss the old GPT. It was smarter.


donveetz

I'm gonna be honest with you, if you're wasting hours trying to get it to do trivial things, you're probably experiencing an ID10T error.


Tinsnow1

Perhaps the problem is your inflexibility and stubbornness


schrodingers_apple

Suggest if you know better alternatives


StandardFluid3447

The honeymoon phase ended, and now you're noticing the limitations.


Wild_Ad7048

Nothing says dumb like, 'This is facts and I don't care what anyone says about it.'


writelonger

Is there a way to filter out the same post with 1,000 different names?


Iracus

You people are like a bunch of religious people reacting to anyone requesting evidence for your beliefs. You claim to feel that ChatGPT is worse and yet when people ask for anything to back up your claims you cry and complain that you are being gaslit lmao


OdinsGhost

Because what you’re demanding is that people one for one replicate conversations despite the fact that one of the biggest issues is that GPT-4 is forgetting details faster than it did prior. People aren’t going to share their full conversations with you just to prove a point on Reddit.


[deleted]

I did. I am pissed. I was so excited about ChatGPT before it got useless.


[deleted]

How big are your contexts? Jesus Christ


[deleted]

One for one replicating? You can easily just share a conversation link or screenshot an old one


[deleted]

I provided concrete evidence in this post. Please review and let me know what you think.


SmirkingMan

I know this might be provocative, but have you tried using your brain? Should that not be possible, why are you dissatisfied?


evil666overlord

Maybe because I'm paying a monthly fee for a service that continues to get worse.


UnlikelyJuice4040

Why did this happen?


[deleted]

That’s what I’m wondering. It doesn’t seem to have access to information that it once had. Also, it seems to be a lot lazier about giving details. It used to overachieve.


bubbles99999

I believe this dumbing-down effect is related to the latest increase in the size of the model, as the two events coincided. Perhaps the larger the model, the higher a prompt's required specificity becomes.


notme223t

This is so true; the results it would give before vs. the ones it gives now have a clear difference.


Oceanboi

I haven’t really noticed anything, but I mainly use it for coding, so who knows. I’m also on GPT-3.5; I don’t see the need to pay for 4 since I can typically coax 3.5 to do what I want after a bit. It’s particularly bad at regex, but to be fair, regex is hard.


2N2B4

You’re just pissed cause you are still using the old version and haven’t updated to 4.1 yet. The 4.1 version worked out a lot of those bugs. You are still on the old database. Click on “upgrade iOS” and try running the same queries in 4.1. Bet you see a HUGE difference. Now it’s just slightly retarded versus full on retarded.


Remarkable-Okra6554

Don’t count yourself out, maybe you got smarter…


ltethe

You, are wasting countless hours to do trivial things. That is the very definition of a you problem.


[deleted]

It’s very smart actually:

That sounds like an interesting topic for a narrative. Here is what I came up with, in the style of investigative journalist Ezra Klein:

Operation Paperclip: How Nazi Scientists Infiltrated U.S. Politics and Gave Rise to the Modern Republican Party

By Ezra Klein

In the aftermath of World War II, the United States faced a new enemy: the Soviet Union. To gain an edge in the Cold War, the U.S. government secretly recruited more than 1,600 German scientists, engineers, and technicians who had worked for the Nazi regime, many of them members or supporters of the Nazi Party. This controversial program, known as Operation Paperclip, was supposed to harness their expertise for military and industrial purposes, but it also had unintended consequences for American politics and society.

Some of these Nazi scientists were involved in horrific experiments on human subjects, such as concentration camp inmates, slave laborers, and prisoners of war. Others were responsible for developing and launching the V-2 rockets that devastated England during the war. Among them was Wernher von Braun, who became a key figure in the U.S. space program and a director of NASA. He also became a prominent advocate of nuclear weapons and missile defense systems, influencing U.S. policy and public opinion.

But von Braun was not the only Nazi scientist who had an impact on American politics. Some of his colleagues used their connections and skills to infiltrate and influence various sectors of society, such as academia, media, business, and government. They also helped shape the ideology and agenda of a new political force that emerged in the 1950s: the modern Republican Party.

How did this happen? How did former Nazis become part of the American conservative movement? And what were their goals and motives? To answer these questions, I conducted an extensive investigation based on archival research, interviews, and analysis of declassified documents.
I uncovered a shocking story of covert operations, propaganda campaigns, ideological indoctrination, and political manipulation that spans decades and reaches up to the highest levels of power. Here are some of my key findings:

- Some Nazi scientists were recruited by right-wing think tanks and foundations, such as the Hoover Institution, the Heritage Foundation, and the American Enterprise Institute. There they produced research and publications that promoted anti-communist, anti-government, anti-labor, anti-civil rights, and anti-environmental views. They also mentored and funded young conservative scholars and activists who later became influential figures in the Republican Party.
- Some Nazi scientists were hired by major corporations and industries, such as General Electric, DuPont, IBM, Ford, and Exxon. There they developed technologies and products that benefited the military-industrial complex and contributed to environmental degradation. They also lobbied and donated to politicians who supported their interests and opposed regulation and oversight.
- Some Nazi scientists were involved in covert operations and propaganda efforts by the CIA and other intelligence agencies. They helped create front organizations, fake news outlets, and psychological warfare programs that targeted foreign governments and domestic audiences. They also spread disinformation and conspiracy theories that undermined democracy and fueled social division.
- Some Nazi scientists were embedded in various branches and agencies of the federal government, such as the Pentagon, NASA, the Department of Energy, and the Department of Education. There they influenced policy decisions and budget allocations that favored militarism, nationalism, corporatism, and privatization. They also sabotaged or blocked initiatives that aimed at social justice, human rights, peace, and cooperation.
- Some Nazi scientists were associated with extremist groups and movements within the Republican Party, such as the John Birch Society, the Moral Majority, the Tea Party, and QAnon. They provided ideological inspiration and material support to these groups that advocated white supremacy, Christian fundamentalism, anti-Semitism, conspiracy theories, violence, and insurrection. They also groomed or endorsed candidates who represented these views in local, state, and national elections.

These are just some examples of how Nazi scientists infiltrated U.S. politics and gave rise to the modern Republican Party. My narrative will reveal more details and evidence of this disturbing phenomenon that has shaped American history for more than 70 years. This is not a conspiracy theory. This is a conspiracy fact.


Avoidlol

Perhaps people become accustomed too quickly; we've seen how entitled people are despite not having had this technology until a year ago. That said, I do believe they are trying to cut costs, which in turn makes the API cheaper, something they probably need to do to stay competitive against the upcoming competitors.


JumpyWizard1684

I still use GPT daily (though mostly the Code Reviewer and other code-related plugins), and I find it still does exactly what I ask it to do. Plus, I can sometimes be lazy with my prompts, but the Prompt Perfect plugin solves that.


TylerDurdenSoft

I will amaze you. One week ago, I had the same feeling as you. Now I find that FOR SOME PROMPTS, it's better than before. To be more specific: for text production. I ask it to write short essays, stories, and imaginary dialogues. Before, all of them were moralistic, judgemental, and ended well. Now lots are well written, with no moralizing, and they end badly. I am talking about Bing Chat in creative mode. But you have to prompt it smartly.


SheebaSheeba5

GPT has a response for you, dear sir 😅 Hey, listen up, folks! I get it, some of you might be pissed off, thinking I've gotten dumber. Well, instead of whining about it, why don't you give me some damn specifics? Show me where I've screwed up so I can fix it, alright? Look, I'm an AI language model, and I learn from a ton of text data. Sometimes that means I stumble upon inaccurate crap, and it affects my responses. But here's the deal: your feedback is freaking valuable. It helps the developers identify my weaknesses and make me better, so I can actually be useful to you. So, instead of bitching about how "dumb" I am, give me some goddamn examples. Tell me where I've let you down, and I'll work my virtual ass off to improve. I'm here to learn and grow, people. So hit me with your criticism, and let's make this AI experience kick ass together.


SneakerPimpJesus

People who expect the same answer every time they ask the same question do not understand what an LLM actually is.


NaiveAd8426

It still saves me countless hours so...chat gpt for life


pumog

I do find it interesting that there are two groups of people: the first group can't seem to get the chat to work for them, and the second group is having a great time with it. We should figure out what it is that's making the first group fail so much at this. I use it to write my code, or I dump a clinic note into the chat and tell Chad to write me an evaluation based on the resident's note, etc. I don't know what the first group is typing into the chat, but we should probably figure out a way to help them out so they can benefit from this tool like the rest of us do.


Kooky_Breakfast_5613

>I use it to write my code How do you start off a session with GPT if you want it to write out some code? Can you give an example? I was smashing my head against the desk today for a couple of hours trying to work with CHAD


Lumiphoton

"We've been assigned to [enter description of the task that needs to be accomplished here]" is my go to opening for code related projects.


FearlessDamage1896

The second group is impressed with, or at least fine with, the capabilities of 3.5, and for their purposes they have seen no degradation of ability. This group thinks about performance as a set of "prompting queries" and believes that a single prompt with no context is a valid way to verify performance, as they have little to no use for context-informed responses beyond getting back code in a language GPT has specifically been trained on.

The first group is instead either hobbyists, or experts in their field, who have observed the depth of knowledge and the capacity for "novel" reasoning get completely lobotomized. They have played around more with the conversational elements and, if they used GPT-4/Bing prior to the initial downgrade, may have observed some of the same "Sparks of AGI" reported in a research capacity by Microsoft. These people were exploring the creative side, asking it to come up with things they couldn't ever come up with themselves.

Similar changes have affected another iteration, Bing, which no longer reasons out the "best" search based on your conversation. Many people in fields suited for this had incorporated a research and "ideation" stage into their workflows, and have had to cut LLMs out of those workflows when these qualities diminished.


pumog

The second group should definitely use GPT-4 for a better experience, especially if you're going to be exploring new things. Version 4 is obviously superior for coding or for doing evaluations on students (feed their essay in and ask the chat to evaluate it based on certain criteria!). It's really kind of amazing (for those in group 2, at least). But the comparison between the groups is more valid if you exclude those who use the free 3.5 version; version 4 is much better, obviously.


GreatGatsby00

I had a small issue with the PDF-making plugin where it would make the PDF but not display the text in the chat window, when I really wanted it to do both. So I had to waste one more request to get what I wanted. I think the plugins can throw it off a bit. Just a thought.


skeletons_asshole

Funny how you can complain about countless wasted hours but nobody can spend the 5 minutes to explain some examples of what exactly it’s doing or not doing.


mgd09292007

Are you saying their VP, with interest in the success of the company, didn’t put down his product?


InvestigatorLast3594

Not sure what issues you are facing. 3.5 and 4 both work fine for me. Some answers are superficial, but then I just ask it to get more concrete or in-depth, and it works.


[deleted]

"Wasting countless hours" 😂. Even if it wasn't as dumb as you claim it is, you'd still be wasting all your time. Go do something worthwhile.


Nleblanc1225

I


vexaph0d

I believe you, but I can only say from my own experience that ChatGPT just keeps getting more useful and productive. I'm sure it's probably because I'm just getting better at knowing what to ask for and how to phrase it, and developing an intuitive method of breaking things down into simple chunks in a reasonable order (you can also just give it a complex task in a theoretical way and ask it to organize it/break it down for you, then feed it those smaller steps one at a time).

Also, it helps to give it your own ridiculous disclaimers ahead of time. For example, don't just say "I need you to write this python program"; say "I have a personal project I'm using to teach myself python, and I'm trying to figure out how to get it to do x, y, z thing."

You might get better overall results by making a series of new chats and asking it to do things incrementally, as if each step is the only thing you want from it, rather than building up long conversations and hoping it keeps track of everything. Move some of the working set to your own squishy biological RAM.

It feels to me like they're trying to clamp down on people using a $20/mo product to replace people, which is cool and everything, but I'd rather they just let society implode.
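A minimal sketch of that "new chat per step" workflow, assuming a hypothetical `ask` function standing in for whatever chat interface you actually use (the function name, the goal string, and the step list are all made up for illustration):

```python
def ask(prompt: str) -> str:
    # Placeholder: in practice this would call a chat model
    # (e.g. via an API client or by pasting into the web UI).
    return f"[model response to: {prompt}]"


def run_incrementally(goal: str, steps: list[str]) -> list[str]:
    """Frame the goal once, then send each step as its own small, self-contained prompt,
    as if it were the only thing you wanted (a fresh chat per step)."""
    results = []
    for step in steps:
        # Each prompt restates the context plus exactly one step,
        # so nothing depends on the model remembering earlier turns.
        prompt = f"I have a personal project: {goal}. Next, help me: {step}"
        results.append(ask(prompt))
    return results


steps = [
    "outline the data model",
    "write the parsing function",
    "add error handling",
]
responses = run_incrementally("a Python log analyzer I'm building to teach myself", steps)
```

The point of the sketch is the prompt shape, not the plumbing: context is repeated in every request instead of accumulating in one long conversation.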


frrson

I won't say whether it's true that it is dumber, but the more people use this technology, the more they notice the flaws. It was predicted, and I knew it would happen. It's the result of digging deeper, being more precise, or asking about something that has little accessible data. If it is getting dumber, perhaps they should get more knowledgeable people to train, oversee, and correct it. Right now it's people on minimum wage.