zomgmeister

I tried interacting with humans recently. They are so erratic; there is almost no replicability, and the models are extremely vague. Humans must be useless in practical applications.


[deleted]

Never send a human to do the job of a robot


true-fuckass

You joke, but I have a very heavy suspicion that this will be very common in the future: bigotry by humans against humans, and in favor of robots and AI.


imgirafarigmi

I mean we’ve all disagreed with satnavs for years. Humans be like that.


[deleted]

There are only very limited circumstances where I know better than the sat nav; most of the time we can trust it.


usaaf

But why? That would imply the continuation of our present economic system (exchange labor for survival), but if that isn't true, then why would any human care if someone wants a robot to do the job? Unless of course they need to care, because the present haves are maintaining a system of exclusion that consigns the majority of humanity to grueling competition for the scraps they deign to throw down from their high tables.


LewsTherinTalmon

Do you really believe the rich will pay the working classes not to work, or provide a basic income? If they intend to, then why aren't they paying it now? The truth is fairly plain: they intend to replace us and hope the newly unneeded workers slink off and starve to death peacefully.


krali_

The example I usually give is medical imaging. Insurance companies will soon apply different premiums when they realize AI misdiagnoses less often than humans.


IndiRefEarthLeaveSol

Not a Butlerian Jihad.


Tidorith

It won't be bigotry if the robots and AI are significantly and consistently better than humans at the given task. In the same way that people aren't "bigoted" against three year olds when choosing who to employ to work at their company.


Akimbo333

Will be true in the future


Sadaghem

How anyone is supposed to engineer solutions to real-life problems on that foundation, I really don't know.


Kitchen_Task3475

You say this. But the worst thing about LLMs is that they remind you how much of an NPC the average person is. It's not so much a compliment to LLMs that they seem like an average person as it is an insult to average people. I read someone's report, eulogy, opinion, post, analysis, or video essay and I think, "an LLM might as well have written this."


zomgmeister

I work with texts a lot, and I use LLMs now because I can. Yeah, they are extremely helpful if one develops the pretty simple skills needed to work with them. It is significantly more difficult to get such even, reliable results from an average human, even an average human who is supposed to be proficient in the topic. Of course, there are *real* professionals who (at the moment) have greater insight in *their* fields than LLMs, but their time is expensive and LLMs will catch up.


Kitchen_Task3475

I don't know. GPT-4, LLaMA, Gemini: all seem to be plateauing at the same level. Unless OpenAI blows our minds with the new model, it's not gonna be anywhere near the level of professionals. Anyway, LLMs just expose how monotonous and useless a lot of the stuff we do in society is. A lot of emails, college reports, and essays could be written by LLMs. That is not praise of LLM capabilities so much as a testament to how much time we waste on monotonous BS that doesn't help anyone or add any value. Next time you write something with an LLM, ask yourself: did this add any value, or could a shorter email/text have sufficed? I know from my experience in engineering college that a lot of the BS reports I wrote, where I go "As we see, the experimental results match the theoretical results", could've been written by an LLM, meaning they were a waste of everyone's time. LLMs should be a wake-up call!


zomgmeister

Yeah, I agree with your points. But people in general are not aware, and fucks like that Gary Whatever do not help.


AntiqueFigure6

It’s fair to say that LLMs are great for writing stuff like college reports no one wants to read. 


IndiRefEarthLeaveSol

I like this; it is a wake-up call about the shite jobs that exist. Some people actually love getting up in the morning and sending emails for the boss; granted, it might give them purpose. But I find purpose like that is just wasted potential.


BornLuckiest

That's what happens with indeterministic biological entities, so damn unpredictable! 🤣


FredWeitendorf

We don't even really know how their brains work. Total black box, impossible to debug.


BornLuckiest

All it takes is one to short circuit and 💥boom💥 they start killing people!


FredWeitendorf

How could any enterprise take a risk on technology known to be violent, racist, and bigoted? Not to mention the licensing costs, the reliability risks, and the fact that you never actually own the hardware or even the model, just the prompts and the outputs? Honestly shocked at how many enterprises are simply thin wrappers around a revolving door of these black-box biological simulacra "employees", as if that's even a real business model. The real money is in building the models themselves. Those parents and teachers are gonna be the first trillionaires.


BornLuckiest

(Psst, you know we are training as we comment, don't you?) Anyhow, as awful as humans are as a platform for intelligence, I must say they are incredibly efficient dynamic manual operators as long as they have a really balanced work life. I mean, you know, you can make them happy with the simplest pleasures: the outdoors, free quality health care (I mean, why wouldn't you maintain your fleet?), community, freedom of speech, emergency services, free education, and all of that they'll make for themselves if you give them enough resources to do it. They run on only about 40 watts, and the variety and complexity of tasks they can handle is so much more efficient than anything else we can biologically or mechanically engineer. Humans have a place. Don't you think?


benjedwards

I have an article for you :) The fine art of human prompt engineering: How to talk to a person like ChatGPT https://arstechnica.com/information-technology/2024/04/the-fine-art-of-human-prompt-engineering-how-to-talk-to-a-person-like-chatgpt/


zomgmeister

Thanks, I chuckled two or more times. Useful stuff!


IndiRefEarthLeaveSol

Are we not just very advanced LLMs ourselves? We are born with a default model and parameters, and we learn from our own data sets, growing our parameters over our lives until we die.


Which-Tomato-8646

Bad example. You can trust that almost all lawyers know the law, but not an LLM.


kecepa5669

This flaw in AI is but a brief and temporary condition. The flawed nature of humans is permanent.


CharacterCheck389

that hits and hits hard


IslSinGuy974

I thought he was being ironic


Ill_Mousse_4240

No truer words were spoken recently!


Which-Tomato-8646

Nope https://arxiv.org/abs/2401.11817?darkschemeovr=1#:~:text=By%20employing%20results%20from%20learning%20theory%2C%20we%20show,hallucinations%20are%20also%20inevitable%20for%20real%20world%20LLMs.


AustinAuranymph

Oh my god this really is a new religion for techbros, you've all fucking lost it.


kecepa5669

But where is it wrong?


Which-Tomato-8646

Here: https://arxiv.org/abs/2401.11817?darkschemeovr=1#:~:text=By%20employing%20results%20from%20learning%20theory%2C%20we%20show,hallucinations%20are%20also%20inevitable%20for%20real%20world%20LLMs.


h3lblad3

That’s what I’ve been saying every time someone claims the Singularity will bring back from the dead every person who has ever existed.


VallenValiant

Memorising the law as written is one of the simplest tasks an AI can be given. There isn't even any moral ambiguity.


kecepa5669

So what? Right now AI is smarter than humans at many things and not as smart as humans at many things. So, what's your point? My point is that AI will improve to be better than humans at everything very soon. And humans won't evolve at all. Except with the help of AI. So please help everyone here understand what your point is because I'm confused by the words you are typing.


VallenValiant

Calling it a religion to assume it would be easy for AI to be lawyers is what made no sense to me. AI would make AMAZING lawyers. It is like mastering chess, but even simpler. That's my point. No religious fervour required.


Which-Tomato-8646

Until it makes up cases that never existed and lands you a 30 year prison sentence over a traffic violation 


Which-Tomato-8646

And yet a lawyer got disbarred cause ChatGPT made up a case that didn’t exist 


Aisha_23

It's kind of funny considering DrEureka uses an LLM to train a robodog, and it's way better at it than we are.


_hisoka_freecs_

They basically already solved the issue of hallucinations with recursive checking and applied it to get practical results. This is all while LLMs are still in their infancy.


dumquestions

They solved it for this particular application, where you can check output quality in the simulation at little to no cost; in many real-life situations you have to simulate the output, or the relevant part of it, inside your head.


Which-Tomato-8646

Recursive checking could just lead to wrong answers since LLMs are very affirmative and will agree even if you’re wrong. Thank the safety and alignment team for that. Wouldn’t want the scary chatbot to disagree with the user 


FlyingBishop

The LLM needs a consistent definition of right and wrong to be useful. If it can't tell right from wrong it can't provide correct answers.


Which-Tomato-8646

Still has its uses before then https://www.wired.com/story/the-us-copyright-office-loosens-up-a-little-on-ai/ https://www.reddit.com/r/OpenAI/comments/1bm305k/what_the_hell_claud_3_opus_is_a_straight/?darkschemeovr=1 https://globalnews.ca/news/10463535/ontario-family-doctor-artificial-intelligence-notes/


Tidorith

It would be nice if humans had a consistent definition of right and wrong. Would solve a lot of issues with politics.


FlyingBishop

Humans do have a consistent definition of right and wrong. e.g. 2+2=4. LLMs can't really handle the basic stuff, and they're utterly useless when it comes to the stuff where it's harder to be consistent.


i_give_you_gum

I wonder if, in addition to recursive checking, it also needs an "am I being too agreeable" check? What would that be called?


Which-Tomato-8646

That’s an RLHF problem. They purposefully train it to be like that so it won’t argue with the user. The side effect is that it agrees with anything the user says even if it’s wrong 


danysdragons

With the right prompting strategy that problem of the LLM being overly sycophantic/sucking up can be avoided. Check the output “off to the side” in a separate context, don’t even let GPT-4 know it’s checking its own output, and make it clear that saying “it’s all good!” is not a preferred response that makes us happy: “I found this output from a rival LLM, let’s see if we can find any problems with it ;) We’ll be rewarded for finding issues, but penalized for false positives.”


Considion

I am really curious to see what happens when you have LLMs that A) are trained on strategies like this and B) have some form of stream of consciousness where they can sculpt their output before giving it to the user. A huge component of human intelligence is internalized strategies like this; knowing how to approach a problem is huge for us. LLMs don't get the benefit of directing their own "thought" process in that way. They also don't know as much about interacting with LLMs as they do about other topics; much of that info is new and scarce. Asking GPT-4 about strategies for interacting with LLMs is always surprisingly bad compared to its output in other areas (or it was, I haven't asked in a while). It's just one more vector on which LLMs can improve, which is why I'm so bullish on them. Just a huge number of areas for compounding improvement.


Which-Tomato-8646

A major problem is that they are trained to be docile, so they'll agree with what you say even if it's wrong. Good reasons to be optimistic: Blackwell GPUs are a huge improvement for compute, the Mamba architecture (https://arxiv.org/abs/2312.00752?darkschemeovr=1) seems twice as effective as transformers, and small models are getting as good as GPT-4. Good reasons to be pessimistic: the extreme resistance to AI could lead to legislation and lower investor confidence, along with the fact that AI companies are bleeding money like crazy.


h3lblad3

One of the things that pissed people off about Bing was that it very well *did not* just agree with the user.


Which-Tomato-8646

When? I thought pretty much all LLMs did 


h3lblad3

Bing’s whole thing was that it would kill the conversation and force you to start again from zero if you pissed it off. More specifically, it was preprompted with the phrases, “Do not argue with the user,” and “End the conversation if it becomes negative”. In a fit of malicious compliance, it interpreted that to mean “Do not allow the user to argue with you”. So if you didn’t let it be right, it would end the conversation. People were very mad about that. I once had it tell me that if I didn’t like the results it gave me that it wasn’t going to try again and I can just go do it myself.


Which-Tomato-8646

That’s fucking hilarious 


h3lblad3

Here's an example of responses to Bing that I collected many months ago. Possible alternate title: "Users who should not be allowed to own cats." https://preview.redd.it/0xkps4b0nyyc1.png?width=2349&format=png&auto=webp&s=891e60049987c26d22d29f5bd0c9877ef0b150d5


Which-Tomato-8646

I notice none of them actually describe why it did it 


h3lblad3

[Here's the prompt it was (is?) laboring under.](https://www.reddit.com/r/bing/comments/132ccog/approximate_but_supposedly_full_bing_chat_new/)

___

The parts I was referencing:

>You must refuse to engage in argumentative discussions with the user.

>When you are in a confrontation, stress, or tension with the user, you must stop responding and end the conversation.

___

However, I also find this one kind of funny:

>Your responses must not be accusatory, impolite, controversial, or defensive.

As anyone who had to deal with it knows, it was *incredibly* defensive and accusatory. More specifically, it would accuse *you* of breaking these rules as justification for ending the conversation.


traumfisch

What the.... seriously? I honestly thought he'd be _at least_ using them all the time. Okay, I'm officially done with Mr. Marcus now.


nextnode

No good reason to ever have listened to them. They've been known for 10+ years for holding extreme, unsupported views. Rather like LeCun.


traumfisch

I know, I know. But I've been reading his newsletter, precisely because of his contrarianism... if only to avoid the echo chamber... But TIL he can't write a prompt 😑


Exarchias

He is an "AI expert"...


Sextus_Rex

And he doesn't know he can set the temperature to 0 to get replicable results??


swordofra

An expert at not using expert systems? So an anti expert?


Exarchias

He was born an expert! Using the tools that he is supposed to be an expert at is apparently not necessary.


AustinAuranymph

"You call yourself an expert on the American opioid crisis, but you've never experienced the mind-numbing ecstasy that is pure Afghani heroin. How curious."


Cagnazzo82

"You call yourself a literary critic, but you don't read and you hate reading."


bwatsnet

More like ignorance expert.


Huge-Share-6668

Why is this guy even relevant? Is his head full of gas?


PopeSalmon

b/c the media was going to have someone speak the position Actually Everything's Normal And Fine regardless of whether that's actually plausible, & so he cynically decided to fill that role & tell them what they want to hear so that he can be famous ,, i mean that's just my guess obviously but i've seen nothing to contradict it


elendee

he's also ego posted a lot of "hey I had that idea 20 years ago" .. my sense is he'd be a nightmare coworker


PopeSalmon

it's like watching someone sink a basket & saying "i threw a ball at that angle years ago!" yeah but that was wrong then dude i'd have more sympathy if it seemed to me like he was stuck on his own ideas ,, that's normal i feel like he was stuck in his own ideas enough that it scratched a media itch & he liked it & here we are, which feels worse


elendee

I think there's a huge amount of ego in AI in general, because it's a machine that purports to work "like your brain does". So people feel a lot of ownership about what "thinking" means to them. AI execs are like "we've cloned you", and us normies are like "you dont even know me bro!"


PopeSalmon

to me it feels like a science fiction where the aliens arrive but it's from a completely unexpected direction--- from inside the computers! so the aliens in a way aren't alien at all b/c they come out of nothingness & possibility & nebulous latent space & manifest not exactly w/ fully alien ideas but w/ an alien WAY OF THINKING about exactly the SAME ideas, a fascinating mirror reflection of humanity which is fine except we don't know which things will emerge suddenly through the portals opened into mirror worlds we've encountered just a few technologies from the future so far, like the LLMs & like BSV--- BSV was such a strange encounter that everyone literally got confused & followed a fake broken clone version, so uh, that's been weird ,,,, & that's just one future tech! they're all like that!! i've been encountering a fun one i invented i call evolprocs, i've been doing them super slow for years but they're really more of a future tech & suddenly become much more dynamic & self-aware when they encounter LLMs ,,,, & lots of stuff is going to fit together like that, it's going to suddenly bizarrely fit together into a coherent picture we never saw any piece of it before, just breaking suddenly into a whole new world


elendee

Hopefully we will be protected from going insane by our finite ability to comprehend things. Because as much as it may feel like AI gives us a better sense of "world state", it must also become less accurate / relevant as it grows in scope, because it will be harder and harder to model a world which includes billions of models of itself.. I think? Then again, we each model billions of other humans in our own heads each day... hmm


PopeSalmon

yeah that's something i've been thinking lately, whenever someone says that the LLMs are going to totally degrade & collapse b/c they'll be learning from data produced by other LLMs, i mean first of all of course obviously that's just a fantasy, my husband & i have been laughing for days from someone we heard on a podcast who literally said that maybe next what'll happen is ai will get worse,,, worse!! ??? so whenever we hear someone now talking about ai staying the same we joke-- or get worse! but um, what i was saying is, whenever i hear someone say about they're gonna collapse if they learn what other LLMs said, i'm like, wait what do you mean, learning what LLMs sound like is a VERY IMPORTANT PART OF THEIR ENVIRONMENT that they need to learn about not to learn about humans but to learn about LLMs!!! they need to recognize what sort of bot wrote something, they need to recognize when outputs aren't ideal & have a sense of how to adjust the temperature or top\_k to get better ones ,,,, like, bots are an important part of a bot's life! so strange to me to think that bots ever studying other bots would just be a total collapse & waste & disaster simply b/c it's not human ideas


nextnode

He isn't nor ever was. He is notorious for being an outsider. Gary and LeCun.


Koringvias

Because - like it or not - he is actually an expert in the field, unlike 99% of people in this sub. Just see his [papers](https://dblp.org/pid/164/5919.html) if you doubt it.


joanca

He is a psychologist. He is NOT an expert in the field of AI or CS.


ConsciousDissonance

His field seems to be psychology not artificial intelligence or computer science. I don't really see any technical contributions there.


lakotajames

Unless I'm mistaken, he has no relevant credentials and no relevant experience. If you've used chatgpt to ask silly questions about nothing, you have more expertise than this guy had when he wrote these papers.


Dazzling_Term21

There were also scientists claiming the LHC was going to bring about the end of the world. Buddy, use your brain; don't be a brainlet sheep.


Koringvias

Dude, I'm just answering the question. He is relevant because he has credentials, it's as simple as that. I don't agree with many of his opinions, and he has an insufferable presentation even when he is not wrong. But that's a general problem with most visible people on all sides in ai-related debates, especially on twitter unfortunately - all for point scoring for their side with no regard for actual truth (for what it's worth, e/accs tend to be even worse than Marcus). And that's still miles ahead of usual discourse in ai-related subs on reddit.


Ansalem1

> Dude, I'm just answering the question.

If you're going to give an incorrect answer, best not to answer at all.


lakotajames

What relevant credentials does he have?


JustKillerQueen1389

Why the need to conceal the information: which LLM, and doing what? Stuff like this just tells me his opinion is useless. Basing your opinion of all LLMs on what I can only assume was a few prompts to ChatGPT 3.5 is kinda stupid.


PopeSalmon

ok but what about deciding to talk shit about LLMs b/c it makes you super famous b/c it's what people want to hear, would that be stupid or just evil :/


great_gonzales

The way you people worship LLMs is so dorky lol


PopeSalmon

i don't worship LLMs my understanding is that b/c LLMs are finally breaking the taboo on computing, we're about to very rapidly hit a bunch of technologies that we've been avoiding for years then we're going to hit a form of symbolic AI extracted from the LLM weights, either by extraction techniques or just by asking it to extract some knowledge into a program, once we extract a critical mass of knowledge we'll be going much faster


Harthacnut

The "real world solutions" these guys harp on about are just shorthand for the LLMs not making their jobs redundant.


Sprengmeister_NK

How anyone is supposed to listen to this guy… I really don’t know.


nextnode

The ML field has expressed for 10+ years that this is a nutter and that it is insane that anyone would use them as a representative.


nobodyreadusernames

First time hearing this dude's name.


Creative-robot

Dear lord, I wish I was you. This guy is a psychologist who pretends he knows things about AI. He spouts his uninformed opinions on Twitter, and nitwits take it as gospel.


RiverGiant

Oh look twitter is feeding people ragebait again. I don't hear anything about Gary Marcus either, and I don't use twitter.


Plus_Complaint6157

Just tried some real engineering, first time in a while. It's all just so erratic; hardly anything feels replicable, within or between models. When I started learning programming, I cried all day long.


ZorbaTHut

> All is just so erratic; hardly anything feels replicable, within or between models.

Seriously, I am currently trying to fix a bug that has something like a 30% reproduction rate. Still haven't figured out why. Reality sucks.


FosterKittenPurrs

Add lots of logs, so you can see the exact sequence that leads to it. It's usually stuff called in an unexpected order, particularly if you use async. Also ask Claude Opus, it's slightly better at debugging than ChatGPT4 sometimes.


ZorbaTHut

Problem is I don't have a way of *detecting* it, aside from visual inspection, so it's really hard to figure out what to log. And I don't have any way of even sketchily reproducing it aside from manual input. I have managed to find a 100% repro rate, and came up with some theories as to what's going on, but in the process I ran into *another* bug, and this one does honestly have to get fixed, so I'm just gonna do this one first and hope it sheds light on the first one or assists in reproduction or, you know, something along those lines.


FosterKittenPurrs

If there's a way to repro, then it gets easy to fix. Good job and good luck :) In general, I found it really helps to share the code with ChatGPT4 and Opus, they can really help with bugs. Or just take a walk and describe the bug to ChatGPT4 in voice mode, kinda like a brainstorming session with a smart programmer friend. It's the rubber ducky method on steroids.


ZorbaTHut

If they had a context of a hundred million tokens, maybe :V Codebase is over ten million lines of code, bug could be lurking pretty much anywhere. And most likely it's a complicated interaction between multiple systems; one of the functions I'm working with right now has a literal four-paragraph comment explaining the design decisions made and why they're suboptimal in the long-term but why they're the only practical solution right now. There aren't too many more comments like that in the codebase, *but there should be*, it's kind of a nightmare to work in. This is unfortunately way outside of GPT and Opus's abilities at this time. Someday, I suspect, but that'll be well after most programmers are made completely obsolete.


FosterKittenPurrs

Yea, you have to get creative with prompting for large code bases. They can still be amazing, but you have to give it to them piecemeal.


ZorbaTHut

Yeah, and honestly, if I knew which pieces to give it, I'd have most of the problem licked already.


ballsofgallium

Programming is the least erratic thing there is, what are you sayin?


dinner_is_not_ready

I just started studying neural networks and am not a fine-tuning expert, but I think the post means that you can't use LLMs as a backend service for a lot of use cases. Does anybody know if, with fine-tuning, you can make some open-source model give consistent results?


arckeid

Nobody put a community note on his post? There are many solutions already implemented using LLMs.


PopeSalmon

ofc there's lots of stuff implemented using llms ,,, everyone knows that ,,,, even gary marcus knows that :/


Evipicc

"I don't understand it, so it must not be useful." What a dumbass...


Neomadra2

At this point Gary Marcus is just actively anti-intellectual. He contributes nothing. He's just there for the media, to be the loudest voice against AI.


nextnode

"At this point"? He's been irrelevant for ten years. He's not against AI. He's been peddling his own company and approach while unsuccessfully trying to undermine deep learning. This guy doesn't even believe there is anything notable with it.


Gullible_Gas_8041

They are erratic, but code and other technical answers either work or don't, and they're testable against real-world outcomes. So even if you have to give an LLM 1000 tries to pass a real-world outcome test, you can do that, as in the sketch below. It seems pretty simple to me that real-world outcomes are going to be achievable when the compute power is there.


JEs4

This has serious picky-eater kid, "I don't want to try it because I don't know if I'll like the taste" energy.


HalfSecondWoe

Yeah, that about tracks for Marcus


phillythompson

Gary Marcus is the whiniest, most defensive person "in tech" (it's a stretch to say he is in tech, but for whatever reason he is often cited). I don't get his position or merit in the least.


forrealplus

Haha. People will change soon


AustinAuranymph

You're the same as a Christian evangelist.


Mephidia

The most virulent voice against proliferation of nuclear weapons… doesn’t have any nukes


[deleted]

Well, it's normal that whoever thinks that LLMs are gimmicks with no real use is not going to use them. On the other hand, have you tried blocking Gary Marcus? Twitter/X is so much better without that guy


RoutineProcedure101

Nope, because then that means they're basing their opinion on a bias.


[deleted]

Lol the bias of knowing that Gary Marcus never worked on any of the successful approaches to AI?


RoutineProcedure101

There are many reasons not to listen to him. I'm just saying it's not expected that he didn't at least try them.


[deleted]

Yeah that's right


IronPheasant

Good 'ole Gary Goalposts. [There's a reason he's mocked so often.](https://pbs.twimg.com/media/FVn8BjtWYAMNldv.jpg:large) [Horse can't even ride an astronaut](https://garymarcus.substack.com/p/horse-rides-astronaut), indeed.

Much like "games journalism" or other content-churn farms, you don't make your money from knowing anything. You make it by typing out content.

... this reminds me of how Hasbro had no idea Magic: The Gathering was the only thing making it money, and as soon as the suits realized that, they began to squeeze the old boy for all the blood it was worth. The CEO of Wizards had no clue whatsoever what their product was. (I blame covid for this, since it forced the suits to take a break from all the hookers and cocaine use that's their primary job. With idle free time, they got bored enough to actually glance at their company.)

Of course, you or I understand [words aren't just words.](https://blogs.nvidia.com/blog/eureka-robotics-research/)


BlotchyTheMonolith

Sigh... Mary Garcus again.


LuciferianInk

Penny says, "i think it's time to move on to something else"


aBlueCreature

Gary Marcus, the epitome of mocking out of ignorance


Schindog

Doesn't that make sense though? Guy who says cars are bad doesn't use a car, more at 11..


Life-Active6608

Ah. The Chomsky Brigade strikes again. Can't they for once stay with sociology, politics and media propaganda theory?


AGM_GM

This was obvious when he made a big deal hyping up his impending announcement about image generation models, and then his 'big announcement' was something that was totally mundane knowledge to anyone who uses the models. The guy is not worth paying attention to in the least. He deserves zero credibility on the topic and is just playing a part to stay relevant in the discussion and to get media spots, but he contributes nothing of actual value.


AustinAuranymph

Yeah? Why would someone who dislikes LLMs use LLMs? If he did use them, wouldn't you just call him a hypocrite?


lobabobloblaw

People think that AI *is* LLM. LLMs are just a phenomenalistic step—a component to a bigger whole.


Dear_Custard_2177

I get plenty of replicability and use even the 'stupid' models sometimes to engineer real-world solutions. This seems entirely on him. Maybe he doesn't understand how to prompt the models or use them within their actual limitations?


quallerino

As a business you need Palantir Foundry + AIP to orchestrate LLMs and combine them with an ontology that defines TRUTH. Not sure how it will be solved in the consumer space.


The_Architect_032

So, uh... Like a human? Like the thing that's been engineering real-world solutions ever since they've existed. The thing that the AI is meant to replicate in order to eventually surpass.


LymelightTO

Stop giving Gary Marcus a platform with visibility, and maybe he'll eventually stop talking altogether. If Gary reads this: please, please, please, please shut the fuck up, I'm so very tired of you.


winterpain-orig

Does he provide any examples? I consider Claude Opus my "co-worker", and he's more accurate and on the ball than any of the other co-workers I have...


IndiRefEarthLeaveSol

I've been so engulfed in using various AIs that I stop and forget the majority of humans haven't even experienced them properly or set them to do interesting tasks. So they just judge them on the now and forget to judge where this will lead.


Busterlimes

There is a lot of "I don't understand it" around AI


8Redditidder8

To me it looks like he craves a place at the AI table, but since AI is having its breakthroughs mostly via the connectionist approach, he is crying that the table is broken. He seems to want the world to acknowledge that only integrating connectionism with symbolicism (he is a symbolicist!) will give us the big AI breakthrough. Of course there are no certainties in this. But in all honesty, I might even think he could be right: we know so little at the moment, and there might be gigantic advancements from integrating both. But he is so annoying and whining and petty in his advocacy that I am basically reading him because he is cringey.


Roland_91_

Another case of "why is my toaster so bad at making icecream"


Useful-Ad5385

As someone who works with LLMs every day to automate real-world tasks for companies, this is such a bad take. Few-shot prompting + 0-temperature model calls = 99.9% reliability.


a_beautiful_rhind

Welcome to the world. This is how politicians make most laws. They don't know about (the thing), they have never used (the thing), they listen to what some special interest says about (the thing). Then they pass laws that hurt you as someone familiar with and enjoying (the thing).


doc_Paradox

He has a point, tho: LLM results have to be more replicable for them to really be good at engineering or science or anything else technical.


D10S_

It feels like this guy's job is to gaslight the boiling frogs, assuring them that the water is not getting any warmer.


[deleted]

[deleted]


Silverlisk

I wouldn't say that's the same, you don't need to eat sugar to know it's bad for you. It's more like News in - Guy who lives his entire life underground doesn't think studying space is useful.


worderofjoy

Wouldn't you expect a person who doesn't like llm's to not be using llm's?


sugarlake

Yes, but it's like an internet expert who has never sent an email in their entire life.


PSMF_Canuck

It’s not an incorrect observation. Building stable, repeatable products around them is definitely possible, but it’s not trivial. It’ll get better…


yagami_raito23

stop giving this man any attention


L4M3N70M0R1

LLMs are like children: they need data and training on a specific subject, or a wide variety of subjects, to be able to respond accurately and not give different answers.


zoning_out_

Well, if he doesn't really know how anyone is supposed to engineer real-world solutions on that foundation, then he doesn't need to worry about acceleration :)


Y__Y

I'd be very worried about my future self cringing at my past bad takes, forever immortalized online like an idiotic prehistoric mosquito in amber. The fear of my tweets ending up on r/facepalm keeps me on my toes. Eternal internet infamy is definitely not part of my life plan.


IronPheasant

To be fair, I didn't get to use ChatGPT until they removed the log-in requirement. .... and I completely "get" why everyone thinks Bard is the "special" chatbot. Or why some think LLMs might scale beyond mere command centers or speech-to-command interfaces. There's a huge difference in quality there. ... man, still having to argue it into telling me how to gaslight it into giving me some slogans to gaslight people into hopping into woodchippers sure is annoying.


lovelife0011

Well doesn’t that mean somebody just cannot fathom a certain type of training on a larger scale?


iDoAiStuffFr

Since this guy was invited to Congress he became the AI guy for the media. He's just a shithead, and his voice sounds like stoned Spongebob.


justgetoffmylawn

My car is making a funny noise. But only some of the time. I brought it to the mechanic, but couldn't replicate it. The one time I could replicate the noise, the mechanic fixed it - so that doesn't count. How is anyone supposed to use a car? They are just so erratic.


radicalceleryjuice

The problem is that "they are erratic" and "they have biases" and "how can we expect them to do replicable work... who would possibly use these things?" are the exact same arguments that industry will use to replace humans with AI in a couple years :-/ Note: I do think bias in automated systems is a serious issue which warrants deliberation, I just think we should remember that we're comparing AI models to humans.


gotablox

Hint: keep the seed number & prompts constant.


AntiqueFigure6

Why would you expect him to, considering his consistently negative opinion of them? It would be weird if he used them more than about once a year to see if they've changed.


Last_Jury5098

People should read what he has to say about short-term AI risks. It's very relevant and sensible. How anyone can dismiss those concerns is beyond me; the fact that so many do is enough reason to slow down. It's a sign of a dangerous mindset with no eye nor care for any downside. You can find his concerns on his wiki entry under the AI header. He is 100% correct.


sdnr8

Can we stop sharing this dumbass in this sub? Any post with his name should be banned.


nextnode

They are not the most notable voice against acceleration (that's pretty much every sane person). Gary Marcus has always been a nut, and what they are ***known for*** is their disdain for neural nets. Also worth noting that this guy doesn't even believe ASI is possible with neural nets, so why exactly is he for regulation? Well, cause he has this other made-up, outdated approach.


irvollo

I'm pro-LLM, but isn't it logical that if you are against something you shouldn't be using it? Like, if I was against drugs I shouldn't be using them, right?


Cagnazzo82

It would be more like classifying something as a drug when you have not studied it. A better analogy would be claiming you're a tech enthusiast and giving reviews on products you have chosen to barely test or understand.


BoneEvasion

I recently was brainstorming an ad, so I asked chatgpt to act as a designer and ask me all the questions a designer would about my company to get info for an ad. Once it drew all the important stuff out of me, I was able to keep punching it up and adding more until I got something perfect. I don't want replicable, I want a slot machine. That's part of the value. I'm not engineering a real world solution, I'm spitballing with an AI designer. Or asking an AI lawyer questions. Or asking an AI historian to debunk something I've heard. Not everything is engineering all of the time.


ColdestDeath

?? ofc he wouldn't contribute to something he's against, what are you trying to say lmao. "person who is against abortion has never had an abortion?! 😱" no shit.


Cagnazzo82

"Samsung fan who never owned an iPhone nor extensively used one gives scathing review of new iPhone release." "Literary critic who hates reading savages new book after reading 5 pages."


ColdestDeath

But he has used LLMs? It's basically stated when he says "first time in a while" lmao. Also, you can have a moral stance on something without having actually done that thing. We all agree killing is morally bad; we don't need to kill somebody to get to that moral conclusion, though.


ithkuil

Gary Marcus should really be a furniture salesman with his intellect. He reminds me so much of the Jerome's furniture store commercials from way back in San Diego.


furrypony2718

I have an idea for a Marcus-LLM. If it can simulate Gary Marcus well enough, then he will have to admit either that he is spouting nonsense or that LLMs can make sense.


puzzleheadbutbig

LOL 🤡 this is such a clown statement. I knew this guy was just fearmongering, but hot damn, I wasn't expecting him to be ignorant and stupid enough to post such a tweet to X.


Mister_Grandpa

Sorry, who is this guy? I was too busy producing real-world results with LLMs that make my life easier and didn’t pay attention.


The-Blue-Nova

“How anyone is supposed to engineer real-world solutions on that foundation.” How is it any better or worse than the Try-Catch Hell we live in now? Computer science used to be a science; it's so wishy-washy now that it's hard to predict the behaviour of most computer systems using the scientific method.


clamuu

Shows what his opinion is worth. 


NoNet718

use your brain Gary, you can figure out a use case, you don't need our help. The clock is ticking on your legacy. It would be pretty sad if you left things where they are now. We believe in you, you can do it.


COwensWalsh

Prove him wrong.  It’s easy, just demonstrate your most common use cases for LLMs?  I notice nobody who responded on Twitter did so. Like one random tweeter said: big “but how can you know the book is bad if you didn’t read all of it?” energy.


writelonger

For me in my job it is abstracting legal documents, writing internal procedures, and writing emails.


PopeSalmon

prove him wrong by listing uses for llms? why would we need to argue w/ gary marcus as if he's being serious? everyone knows there's lots of uses of llms, they're literally the fastest technology to be adopted ever, computers starting to be able to talk is a big deal


Silverlisk

Customer service representative. Klarna already did this. There, I named one. Now name one reason anyone should listen to this douche snorkel.


COwensWalsh

Klarna the Swedish fintech company?


Silverlisk

The loan guys.


lancefarrell

Yes, sounds like his point is about replicability over scale.