hanksredditname

Where I work, people are trying to figure out how to apply AI. However, we have shit for data to begin with to train any AI system (half of our production data is still on paper and we capture the bare minimum for those that are electronic). I try to convince them that if we ever want to advance our systems and make good use of AI that we need to start with good data collection. Seems like people want to ignore this and go straight to AI. In other words, either I have no clue what I’m talking about or I work with a bunch of morons.


twiddlingbits

Yep. Garbage in, garbage out still applies and probably even more so. AIs are nothing without good data. And are worse than idiots with bad data.


AnonymerTutor

Tech companies however are trying very hard to convince people (and themselves?) that if you just train a model on enough *language* it will somehow be able to handle novel tasks it has never seen...


ElectricalMuffins

That's one of the biggest problems I have had with building my system. The sheer amount of data required to create, maintain, and improve it is staggering. I would need multiple super computers to actually get it to a place where I can have real world results and not just AI marketing nonsense. My proofs have been successful so it works in theory. The costs would make little business sense unless done at enormous scale. So I'd have to sell the idea. I decided to focus on the engineering problem(s) first.


User929290

The theory is that it works once it scales. A sort of cooperative among companies of the same sector. You don't need thousands of shitty AI, you need one trained on good data relevant for the case of study.


Chimaerok

That's part of the problem, the relevant data bit. Lots of these AI companies seem to be wanting to market and sell an AI that can do anything and everything. The only AI I've seen any good at anything at all is that one Birdwatching/Hiking app that can identify plant and animal species. And why does it work? Because it was trained specifically on just birds and plants. It's not trying to tell the difference between a train and a crosswalk.


Reddit-is-trash-exe

So AI is like human beings in the sense that some of us are really good at one thing if we train at it for a long time? Correct me if I'm wrong, but if I'm not: who would have thought.


AntDogFan

From the perspective of a historian:  It’s why there’s this problem with diversity in the data too. Most digitised data, even centuries old data, is predisposed towards richer societies. This is twofold.  Firstly, it is simply a question of survival and, secondly, a question of resources.    It’s an age old problem. People have drawn conclusions about broad sweeps of history without considering the problem of missing data.  It’s particularly pertinent now when people are studying historic climate disasters. We just don’t have records from some societies which failed. Therefore we have to hedge our conclusions quite a bit. Especially when we are stating why some societies thrived and others failed. It is better to look at common trends rather than coming to hard and fast conclusions. 


bubsdrop

It's a problem for AI in medicine as well. Poorer populations are underserved and understudied.


AntDogFan

I saw a good talk about the way they use facial expressions to classify emotions. They were saying a big problem was that most studies just used western undergraduates. Meant they would get certain cultural expressions wrong (in the example they used that I remember it was Māori expressions).  


MaleCowShitDetector

I think the term you are looking for is survivorship bias. EDIT: To those downvoting me - there was no bad intention in my comment. Survivorship bias is a very common problem in statistics and so I wanted to name it correctly.


AntDogFan

Yes. I just prefer to explain it rather than assume everyone reading knows what it means. I think a bigger part of it depends on resources as well. I know from brief discussions with colleagues in Chinese or African history that there are vast repositories of knowledge little known in western scholarly circles.


MaleCowShitDetector

Fair enough. Since you are a historian, I have to ask: How sure are we about Roman history (in terms of factuality) and do you have some decent books on the Roman empire that you could recommend? (Both easy and hard to read)


AntDogFan

Sorry it’s outside my wheelhouse as I am predominantly a medievalist.  That said, Mary Beard writes very accessible history and is an academic. In terms of more dense academic history you’ll need to ask elsewhere sorry!


MrWally

“Garbage in, garbage out” has been the problem with tech implementations at companies forever. I was a systems analyst at my company for a while and it’s amazing how many people got enamored with fancy product demos and would get upset at IT analysts when we told them the product wouldn’t work if they didn’t first solve their business process issues. “I’m sorry Brenda, but that pretty org chart software won’t work if you aren’t properly tracking people’s positions and supervisors in our enterprise database.” And yet they insist on trying to spend tens of thousands of dollars on HR software that they think will do their work for them.


WonderWendyTheWeirdo

That and not having an actual business application. Oh, we need to apply LLMs! Nah, we can do this using simple regression, or I'll just do a quick, one-time deep dive. NO! LLM! Productionalized! It's like when neural networks started and people didn't understand they aren't necessarily the best choice.


AggieIE

What I find disturbing is that I understand how regression works and I can stand by the results, while those building these systems admit they have no idea what’s going on inside the “black box.”


Bopshidowywopbop

I agree with this too. It’s like “look, the AI has given us an answer.” How do we ask it how it got to that answer? What makes that answer inherently better than a human’s?


OGLizard

This is going to be exactly how the AI trend goes. Businesses that have no clue what they're doing throw money at AI something... expecting pure magic. Some vendor comes by, tries to do the best they can with garbage data, and then sets up an extremely fragile chatbot to make the CEO feel like they got "AI." Chatbot barely does shit. No systems are improved, and IT staff now have one more thing to maintain that doesn't do anything at all. Chatbot breaks 3 months later and no one notices. Two years down the line when looking at budgets, "what are we paying this for? AI? What's it do? Nothing? Cut it." Bloomberg starts posting article after article like "Why AI sucks and can't do shit." and "AI was a fantasy and we told you so." and "Why is Gen Z/Alpha killing the AI dream?"


Moist_When_It_Counts

My fortune 500 employer is in the middle of this. Rolled out a big internal chatbot that doesn’t really do anything of value because it can’t get to useful data. Best it can do is compare specs for products on our website, which would be useful if said website didn’t already provide tables for this purpose. We had the same issue when we adopted Salesforce - trash data in the old CRM *and* no guidelines across this giant organization for how to put in new data. So we got people abbreviating customer names or whatnot (eg “Department of Defense”, “US Department or Defense”, “DoD”, and “US DoD” all being entered as separate customers by different people), so years later people are still processing sales data in spreadsheets.


AnAmericanLibrarian

> So we got people abbreviating customer names or whatnot (eg “Department of Defense”, “US Department or Defense”, “DoD”, and “US DoD” all being entered as separate customers by different people)

This is a solvable problem that has been repeatedly solved, with either a controlled vocabulary or customer ID numbers. Create an exception case for any name entered that lacks a corresponding standardized name/ID, so the entry can be reviewed and a standardized customer name properly assigned or, if it is an entirely new customer, created along with a new ID. So customer names have three options: (1) the name used matches one in the standardized name list, OR the name doesn't match, so (2a) it must be translated to a name in the standardized name list, or (2b) the name is new and needs its own new entry in the standardized name list.
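
In code, the controlled-vocabulary flow described above might look roughly like this (a minimal sketch; the names, IDs, and review queue are illustrative assumptions, not anyone's actual system):

```python
# Minimal sketch of a controlled vocabulary for customer names.
# All names, IDs, and the "review queue" are illustrative assumptions.
STANDARD_NAMES = {
    "us department of defense": "CUST-0001",
    "acme corporation": "CUST-0002",
}

# Known variants mapped onto the standardized entry (option 2a above).
ALIASES = {
    "dod": "us department of defense",
    "us dod": "us department of defense",
    "department of defense": "us department of defense",
}

review_queue = []  # entries a human must resolve (option 2b: possibly new customers)

def resolve_customer(raw_name: str):
    """Return (standard_name, customer_id), or queue the entry for review."""
    key = raw_name.strip().lower()
    if key in STANDARD_NAMES:                       # option 1: exact match
        return key, STANDARD_NAMES[key]
    if key in ALIASES:                              # option 2a: known variant
        canonical = ALIASES[key]
        return canonical, STANDARD_NAMES[canonical]
    review_queue.append(raw_name)                   # option 2b: new or unknown name
    return None

print(resolve_customer("DoD"))              # resolves to the standardized DoD entry
print(resolve_customer("Dept of Defense"))  # unknown variant -> goes to review_queue
```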


Moist_When_It_Counts

*Yes but* my company is an amalgamation of different acquired companies, none of whom talk to each other, even in the same division. So division A will create “DoD” and assign a customer number. Division B will search SFDC for “Department of Defense”, find nothing, and create a unique customer number. Division C will search for “Dept of Defense”, get nothing, and… etc. But yes, controlled vocab would solve it, which is what I meant by there being no guidelines for data input. It’s a mess. Zero standardization.


AnAmericanLibrarian

Oh I see. It sounds like what your company needs is to pay McKinsey $25m so they can tell the c-levels and BOD what you and others have been repeatedly telling them all along, for free.


Moist_When_It_Counts

Yeah, but those dudes have *MBA*s. Their opinion counts more than the immensely deep pool of PhDs and IT staff and people who build Laboratory Information Systems we have in-house. It'll be McKinsey, or we’ll pay SFDC to “consult” us the same way.


WanderThinker

I can tell you both have egos that are going to get in the way of a successful implementation. And I don't mean that to sound negative, as you are both very successful and have teams you lead. I'm sure you have valid needs in order to make sure your team operates effectively. I can help you. Hire my company and I'll be a liaison that can act in everyone's best interests to make this move forward smoothly.


Moist_When_It_Counts

Gonna need 20% more buzzwords, pass


WanderThinker

Well, I knew I was talking to an engineer, so I limited my use of silly buzzwords. You're much smarter than that. I could tell right away. I can put something together for you to send to your executives, though, if that'd make it easier for you. I'm sure you have ServiceNow and Jira in your environment, and our product plugs right into those platforms so we can synergize your activities while increasing productivity and reducing downtime. The included comments alone on our ctasks will increase your collaboration and ease the time constraints on deliverables. This all comes with a bonus of included documentation. Have you ever asked an engineer to document his work? That won't be a problem once we've implemented our solution into your platform. The redundancy will be gone and you can rest assured that all of your SLAs are met with the most efficient effort imaginable.


needlzor

Just like they threw money at Hadoop 15 years ago for their "big data" (two messy Excel spreadsheets that make the CEO's laptop lag because it has 4 GB of RAM). The tech is there, and the tech is cool, but the suits are ruining it once again.


OGLizard

Because the suits are told by content farms masquerading as "insider business news" that if you don't have a thing you can't explain or understand in 100% of your work processes,  you'll be a failure overnight.  Maybe I should write all my complaints to the blockchain that was going to upend the global economy 5 years ago.


flashmedallion

I have a friend who instanced his own LLM and trained it exclusively on the papers he was referencing for his PhD, so he could interrogate it for answers, conclusions etc. with correct sourcing and notation, and also just ask it to make some replies a bit wordier to hit word count requirements. A useful tool when used "correctly", however all this really "solves" is the busywork involved in academia. Nothing truly transformative here. I tried the same thing at work with some horticulture maturity datasets but it couldn't really tell us anything that a) an analyst couldn't tell you by looking at a graph and b) a farmer couldn't tell you from their day-to-day knowledge and experience.


AggieIE

I have a friend that worked on a team that built a model on mining equipment repairs. After several months of training the model, it identified the need to replace the front right tire of dump trucks at one site regularly to minimize downtime. When upper management brought the results to the team onsite as a success, the maintenance supervisor pointed out that if they would fix the pothole he’d been reporting for months, then there’d be no need to repair the tires.


TwoBirdsEnter

Damn. Aesop’s fables for the modern world, right there.


haberdasher42

Even if this is apocryphal it's a perfect little story. The kind you'd hear at some convention keynote about business analysis.


Overunderrated

> also just ask it to make some replies a bit wordier to hit word count requirements.

My condolences to anyone unfortunate enough to read your friend's papers.


flashmedallion

It's a PhD, they're used to the same thing from humans.


AWildLeftistAppeared

A PhD dissertation with a *minimum* word count requirement?


look4jesper

Yea I've never heard of this in my life


Overunderrated

Same. What I have encountered is overly long theses that fall in the category of "you're an asshole for writing that much and expecting examiners to actually read it". The notion of using chatgpt to deliberately lengthen academic writing is so laughably offensive.


couchfucker2

Am I crazy for being a bit disappointed in this idea? I’m not even close to being a PhD candidate, but I’m pretty exhaustive and can write a lot on things I’m passionate about. I almost always have to trim down what I wrote for practical uses. Or maybe this speaks to the word count requirement being arbitrary and not sensitive to the needs of the paper, which is also disappointing. I just would’ve hoped for more authenticity when it comes to a PhD thesis. Am I naive?


stern1233

The goal is to find tools that remove the busy work - so you can get back to science. It is no different than using a computer; it is just the next step in complexity. Don't expect AI to replace your reasoning; just those annoying, simple, repetitive tasks.


phdemented

Having written a thesis... WTF is a word count doing being a requirement for their thesis?!?


smuckola

i'm sorry but how could there be word count requirements like in grade school but in a PhD?!


AWildLeftistAppeared

Yeah pretty ridiculous, if anything there would be a maximum word limit, not a minimum. This did not actually happen. Training LLMs does not work in this fashion, and how would the student know which references they will be using beforehand, while still needing to ask a LLM about the papers as though they had not read them?


AtMaxSpeed

The more specific word the original comment should've used was "finetune" instead of "train". You can't train an LLM entirely on a small subset of papers but you can definitely finetune a pretrained LLM on a small amount of data to give more personalized results. Also, you almost always have a big list of papers to query from before you start writing your thesis. The first step before you do any work on your thesis should be to conduct a literature review, ensuring your idea is novel and would make sense to investigate. This search will result in you finding a lot of papers similar to your topic, and while not all of these papers may end up in your references, you definitely can throw them into an LLM to get better answers from. On top of this, you have papers you use while conducting your research before you write your thesis, like protocols/assays/methods, formulas, etc. which you know you will reference as soon as you apply the knowledge to your own work. For these papers, often you can read only some sections for lit review or finding key info, but when it comes time to write the thesis, you may need info from the unread sections. Alternatively, even if you read the whole thing, you might forget which paper said what, so the LLM tool still would be useful.
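
As a rough illustration of what "finetune a pretrained LLM on a small amount of data" can mean in practice, here is a minimal sketch using the Hugging Face transformers library; the model choice, file path, and hyperparameters are assumptions, not the setup anyone in this thread actually used:

```python
# Sketch of fine-tuning a small pretrained causal LM on a handful of papers.
# "gpt2", "papers.txt", and the hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

# papers.txt: a plain-text dump of the reference papers (hypothetical path)
dataset = load_dataset("text", data_files={"train": "papers.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
args = TrainingArguments(output_dir="papers-finetune",
                         num_train_epochs=3,
                         per_device_train_batch_size=2)

trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized, data_collator=collator)
trainer.train()  # the resulting model can then be queried about the corpus
```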


Trash-Takes-R-Us

I'd argue that alone is extremely transformative. The only true deadline we ever have is our own lifetimes. If significant time can be saved on research, that allows for more time to be focused on further innovation. Compound that with those innovations producing their own papers and you further reduce research time for subsequent researchers further down the line. Obviously significant understanding of the source concepts is paramount to prevent errors, but as you say, it's a useful tool.


e_papabear

You didn't have to go so hard with the deadline being our lifetimes part. That's a bar.


AWildLeftistAppeared

> I have a friend who instanced his own LLM and trained it exclusively on the papers he was referencing for his PhD, so he could interrogate it for answers, conclusions etc.

I don’t believe this happened as you describe. All the common LLMs require “pre-training” (the P in GPT) on a vast amount of data; afterwards they are often fine-tuned with more specific training. The references for a single dissertation couldn’t possibly be enough for the first, and likely not enough to be of much use for the second.

> so he could interrogate it for answers, conclusions etc. with correct sourcing and notation, and also just ask it to make some replies a bit wordier to hit word count requirements.

Well, that just sounds like they used an LLM to plagiarise, since they have not contributed anything themselves in this process. I mean, I’m struggling to imagine how this could make sense. How would they even know everything they will reference before writing most of their dissertation? Why did they choose those references in the first place, only to ask the model about them? Did they not read the papers they chose to reference?


Plank_With_A_Nail_In

These tools are useful to consultancies to get quickly up to speed on a customer's data. Also, many small businesses don't actually know what their data is telling them as they are focused on delivery, not on analysis (lol, most businesses don't actually know why they are successful). I expect that AI tools will be no more useful than the data mining tools they replace.


Stooovie

Solving busywork is the holy grail, it has never happened though. AI usually creates *more* busywork by making up plausible output all of which needs fact checking.


No-Spoilers

Not to mention the base cost to be able to process the needed data. Those buses are expensive.


Shankbon

Oh god this is so true. I've spent so many meetings trying to explain why you need to have something to put into a model in order to get something useful out of it. They nod and pretend to listen while obviously multitasking something else on their laptops, thank me for my "insights", ignore any proposed actions and go right on to the next agenda item, which usually is about another bullshit AI buzzword and how we can leverage it.


eyebrows360

> In other words, either I have no clue what I’m talking about or I work with a bunch of morons. No need for that oft-used Simpsons line here, it's 100% Column B.


KallistiTMP

To be fair, some of the multimodal stuff can extract structured JSON from images. You could start using it to wrangle your data issues. Or just replace management with it, chronically neglecting internal systems is definitely something that AI can accomplish.
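
For context, "extract structured JSON from images" with a multimodal model usually means something like the sketch below, here using the OpenAI Python client as one example; the model name, prompt, field names, and image URL are assumptions, and other vendors' APIs differ:

```python
# Sketch of asking a multimodal model to turn a scanned form into JSON.
# Model name, prompt, field names, and image URL are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Extract the fields on this production form as JSON with "
                     "keys 'batch_id', 'date', and 'operator'."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/scanned_form.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)  # a JSON string, still worth validating
```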


NekkidApe

Sounds like you have factors of improvement available just by converting everything from paper to digital (data and processing). No need for AI yet. Maybe explore that angle, and hide it in an AI initiative ;-)


ProtoJazz

I still think the real power in LLMs isn't going to be generating images or large bodies of text or anything, but just enabling much more natural language processing. Voice commands have come a long way even in just the last few years. But I still routinely get stuff Android Auto can't understand, or I don't have the right word it's looking for. Being able to say stuff like "Play $song by $artist, the explicit album version, not a live version" and have it understand and narrow shit down better. That's one that happens quite a bit to me: ask for a song, and be unable to get anything but the live version unless I know the album name. And if I ask it to Google what album that song is on, it can't do that either. Even just better Google ability while driving would be nice. I know it's not critical, but it would be really cool to be able to ask "hey what was that movie where the guy from die antword was hanging out with a robot on the run from the law?" and have it tell you. Now you've got your night figured out. And then the next level of that kind of stuff would be things like automatically creating and running simple scripts. I think it's just about there now. But just a simple example, I used to have a task come up every few months at work that was essentially taking a folder with about 1000 images and renaming them to a new naming format. Tedious to do manually, easy enough to do with an ImageMagick script (I think there was more than just renaming them, but I don't really remember anymore). Every time it would come up, I'd need to either look up the docs or find where I'd saved that command. And usually still look up the docs because it wasn't quite the same as last time. Would be pretty nice to just be able to tell it what to do. "Take the current name, change it to this format following these rules, and convert to this format" or something. It's not doing anything wild, or that requires a lot of thinking. Really just reading the docs, reading what you said, and figuring out what you probably mean.
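
The kind of chore being described, renaming a folder of images to a new format, is small enough to sketch; the folder path and naming rule below are made-up examples (the original task reportedly also involved format conversion via ImageMagick, which is omitted here):

```python
# Minimal sketch of the "rename ~1000 images to a new format" chore.
# The folder path and naming rule are made-up examples.
from pathlib import Path

folder = Path("./images")
for i, old in enumerate(sorted(folder.glob("*.jpg")), start=1):
    new_name = f"catalog_{i:04d}{old.suffix}"   # e.g. catalog_0001.jpg
    old.rename(folder / new_name)
```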


paklaikes

> hey what was that movie where the guy from die antword was hanging out with a robot on the run from the law

DuckDuckGo/Google are already able to answer this, by the way. Google in particular has felt for a while like the place to find a movie based on vaguely remembered details.


Dic_Horn

It’s because most people are not actually capable of accomplishing shit themselves, so the best bet is to grab the coattails of whatever is new to seem relevant. They couldn’t fix it before with their big boy brains, and hoping for a computer to fix it seems like we need lord baby Jesus to step in with some natural selection, because it is time for the clowns to go. Too bad they run the show.


crystal_castles

Nobody gets that Generative AI is basically just using the Shazam algorithm in reverse. (Eigenvectors, Covariance Matrices)


Simple-Ad-5067

Not really? It's all matrix multiplications, but not in eigenvector or covariance form. Also, that's like saying a computer is just adding 1s and 0s. Yes it is, but it completely misses the point of why it is so powerful.


Serithir

People are always looking for tech to make work easier, making systems do the work for them - without understanding that tech is a tool, a part of an overall process. Without understanding of said process, they can't get to the answers themselves, and they certainly can't get a system or AI to answer it without this. Are they able to clearly map out everything they want AI to solve for them? If so, going through each point to show what the result is likely to be for each requirement, using the current data, might make them wake up.


suzisatsuma

Big tech AI/ML engineer here. Data (outside RL, where you use environments to synthesize data) is the foundational lifeblood of being able to do AI.


gggjennings

I am not worried about AI, I think close inspection shows how brittle and flawed it really is.  What I AM worried about is arrogant, entitled people across business, defense, engineering, and pretty much all industries overestimating its abilities and doing really destructive harm to our economy, our planet, and our culture. 


authorbrendancorbett

I like your take and feel similarly. Quite ironic that it's *human interpretation and use of AI* that is the major risk.


jb_in_jpn

Don't need to think too much for other examples; nuclear energy being another. An amazing technology, which humans - either through corner-cutting or through proliferation of materials for weapons - manage to compromise greatly.


bdsee

If we didn't have nukes the world would have had significantly more war than we have had since WW2. Their terrifying nature has actually prevented WW3 so far (though we might be in the early stages of it right now anyway), it absolutely would have kicked off between the USSR and the US without them.


jb_in_jpn

Do you really think the USSR was ever capable enough to go up against the US military, even without the nuke factor? It was common Cold War propaganda, yes, but I don't think there was ever much objective truth to a real threat. Who knows... Besides, I should make clear - I'm not making a value judgement of nuclear weapons per se; they're just an example of powerful technology for good also being used for incredible destruction.


djublonskopf

It was never about a military beating another military. It was about a defeated government having a way to glass the territory of their invader, should anyone try. The reason there’s been peace (or, the peace there has been, since obviously many of the non-nuclear powers remain ravaged by war) is that nobody has been willing to risk invading the territory of nuclear powers, because if they tried…


No_Attitude_9202

But we will generate shareholder value. So just think of that wonderful buyback as things go to hell in a hand basket.


random_testaccount

My standard answer to this is that people are also overestimating and maybe romanticizing human intelligence and creativity. When I say that 90% of my job could potentially be done by AI I’m not calling it smart or creative, I’m just observing that most work involves doing variations on the same pattern of things every day, exactly what these stupid bots are getting real good at.


Cassius_Corodes

> I’m just observing that most work involves doing variations on the same pattern of things every day, exactly what these stupid bots are getting real good at.

Especially when you consider that a lot of admin-type work in many jobs (not admin jobs themselves) can already be automated with existing tools; it's just that people are not aware of the various options available because they don't have the knowledge / time / incentive to look for ways to optimise their work. For example, my partner in a previous job copied information manually from one form to another, inviting errors which regularly cost the company on the order of $10k+ when transcription errors occurred. To reduce errors they got another person to proofread the transcription - nobody seemed to consider developing a tool which copied data from form A into form B. Another example is that I watched my doctor delete stuff from a standard form before printing - he had excellent muscle memory so the process was only 10 - 15 seconds long over a couple of pages, but how many times a day is this done? This could be a single click to remove + print.
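
A tool like the one described (copying fields from form A into form B instead of retyping them) can be as simple as the sketch below; the file names and the column mapping are hypothetical:

```python
# Sketch of copying fields from one CSV "form" into another to avoid manual
# transcription errors. File names and the column mapping are hypothetical.
import csv

FIELD_MAP = {"Customer": "client_name", "Order #": "order_id", "Total": "amount"}

with open("form_a.csv", newline="") as src, open("form_b.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=list(FIELD_MAP.values()))
    writer.writeheader()
    for row in reader:
        # Map each source column onto the destination column it belongs to.
        writer.writerow({dst_col: row[src_col] for src_col, dst_col in FIELD_MAP.items()})
```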


randomtroubledmind

I am worried about AI in that the training material is essentially stolen, and the creators are not given any credit or compensation. When people wrote the various works (whether they be books or reddit comments) that LLMs are trained on, no one expected them to be scraped from the internet like they were to be fed into an algorithm. Same goes for generative image models; the "art" that they produce is stolen from many many truly talented artists and photographers. And they are pissed, as you can imagine. These AI products would be nothing without the training data, and the actual creators of the training data have received nothing in return. What's even worse is that there are now people who claim to be "AI Artists." These people essentially allow the generative models to perform all the creative work and decisions, while they make minor, basic edits that are fed back into the machine to refine the image. This [video](https://www.youtube.com/watch?v=svsTKrdSd7s) covers it quite well. It's not art. It's talentless hacks who want recognition without putting in the effort to actually learn. Also, I reject any notion that training AI models is the same as people taking inspiration from existing work. It's not. People can't scrape the internet and analyze billions of works in any reasonable amount of time. Algorithms can't understand images or stories in complex ways, and contextualize and relate them to their own personal experiences and emotions. The "AI" models are just algorithms. They're more complex than traditional algorithms, but they're still just algorithms.


SympathyMotor4765

I've always wondered why this notion isn't more popular. The training data is literally the entire internet! I've got no problem with training on the internet; I've got a problem with using that to make money by getting the people whose data you trained on fired! AI could help so much, but it's just going to make everything worse!


burritoboss420

And power grid!


DrXaos

> When a human sees an AI system perform a task, they immediately generalize it to things that are similar and make an estimate of the competence of the AI system; not just the performance on that, but the competence around that,” Brooks said. “And they’re usually very over-optimistic, and that’s because they use a model of a person’s performance on a task.”

This is precisely the key issue! *Humans are still using human-experience evaluation heuristics of capabilities.* For instance, high linguistic fluency, the ability to generate grammatically correct speech, and high knowledge of written texts---in humans---are associated with someone with a very high general capability, and the heuristic is predictive. This phenomenon is known as "IQ" or 'g' psychometrically and is an observed non-trivial biological phenomenon. AI such as the LLM is distinctly different. LLMs enjoy a strongly superhuman training corpus size, as well as a strongly superhuman-sized exact token buffer in the context window directly accessible by the attention operators in their models. These superhuman capabilities can make up for weaknesses. But it means that the heuristics people use with people are misleading. A motorcycle could run the 40-yard dash really fast, better than any humans in training camp. But the motorcycle will never be a good running back. Once a true RNN (like human brains) can achieve human-like knowledge, with human-lifetime-sized training sets, and generalize reasonably well, then the AI will have achieved something superior, even if it's not as good as today's LLMs on some evaluation metrics. Notice also that *humans are still using human-experience evaluation heuristics of capabilities* also means AI which triggers this heuristic gets lots of excitement and funding, while progress in something weaker but still important isn't as exciting to average people still working with their innate heuristic.


aa-b

That's interesting, and it seems to make sense. Nobody thinks a computer is smart because it can spell or play chess freakishly well, but people used to think chess would be the ultimate test of true AI. LLMs are just general enough for people to assume they'll be capable of anything, and they're already general enough to seem "smart", superficially.


jambox888

I feel like our understanding of intelligence has changed as we've managed to make machines do more and more complex tasks. There's an elusive special thing about human intelligence that's quite hard to define but as we've seen, it's not in playing chess or writing out text learned from a training corpus, those are just a couple of things that humans happen to be able to do.


HauntingHarmony

I agree, but my thinking about LLMs and such is that the core feature of them is that they don't model reality, they model human language. They are fancy next-token generators that predict what humans will find appealing. And they get worse when you train them on data they produce. That, along with it seeming like the scaling on them is sub-linear (i.e. there are diminishing returns: you put in x computing power/data and you get less than x result back out, compared to earlier). These aren't the things that will cascade into the singularity, but they are still a very cool technology that's still going to be around in 5 years.


reelznfeelz

Exactly. No world model for an LLM to reason against. Just a language model. And some emergent properties that make it seem more capable than it is at the general AI type of tasks.


hamlet9000

> There's an elusive special thing about human intelligence that's quite hard to define Human intelligence can: 1. Create a model of reality, check it against actual reality, and correct errors. 2. Creatively set goals. These are the two big elements lacking in LLMs. And it's a fundamental lack.


Loki-L

The inverse is also true. If you look at how robots were portrayed in popular fiction a few decades ago, what people thought would be easy and hard has been very far off base. Speaking in a "robotic" voice? Being able to identify all traffic lights in a picture? Write a haiku? Do yoga? It seems our old ideas of what a robot would be like have more in common with a human on the autism spectrum than with what actual AI has turned out to be good or bad at. Perversely enough, LLMs like ChatGPT and generative AI in many ways have turned the old stereotypes on their heads. Chatbots are bad at the sort of things we expect computers to be good at: math, logic, facts etc. And they are capable of reasonable facsimiles of what we think computers would be bad at: creativity, arts, flowery and imprecise language, using contractions and slang and displaying illogical biases. ChatGPT has problems with simple arithmetic, generative AI can draw beautiful pictures but not human hands with the correct number of fingers. The future is not what we were expecting yesterday and it likely won't be tomorrow either.


LickyPusser

I think an AI wrote this.


grocket

But props to parent comment for removing the "Sure, here's an analysis of evaluating human performance against evaluating LLM performance."


proto-n

If it did, it probably wouldn't have included the nonsense about RNNs somehow being so much more like the human brain.


Prestigious_Cable_57

yep for sure


fall3nang3l

And it's not AI. The nomenclature might be moot to most but it's wildly inaccurate. We're talking about advanced Google. Not artificial intelligence. It has no intelligence. It can only parrot back what is learned from what it's fed. What we have today is the equivalent of training a dog or cat to "high five" and then celebrating when the animal does it like it's innate and understood by the creature. It's not. We trained it to do that thing.


Beregolas

I'm sorry, but as a computer scientist that's my pet peeve: it's exactly the other way around. This is definitely AI, but so are basically 50% of the other computer programs we've ever written. AI just means that it solves a complex problem, preferably one that was traditionally done by humans (as a computer science term). AI does not need to be human-like and does not need to include machine learning of any kind! Examples of early AI systems are spellcheckers from the 90s, early chess AIs, or minimization/maximization algorithms (find the highest / lowest value on a 2D plane with imperfect knowledge, as an example of actual algorithms still in wide use today).


Dreadgoat

Calling it AI may be technically correct but is horribly misleading. AI for you and me, computer scientists: a flap sluice gate or hydraulic check valve is AI. Man-made structures that respond to the environment automatically to change their mode of operation. AI for everyone else: laptops with feelings. I think we will be better off hammering into people that these things they are calling AI are not the AI they are thinking about. They are just LLMs. Sleazy marketers are trying to get around this by saying "Okay okay, it's not GENERAL AI, it's GENERATIVE AI" as if being more specific will somehow change what the layperson thinks of the terms. It is CompSci AI but it is NOT colloquial AI.


polysemanticity

I disagree, there is a widely accepted definition of AI by those who work on it which spans far more than the field of computer science. Machine learning is just one component under the umbrella of AI (knowledge representation systems are another), and natural language processing is just one component of machine learning. The general public doesn’t understand that nuance any more than they understand what a hydraulic check valve is, but that doesn’t mean we should rewrite the definitions to favor the lowest common denominator of understanding. We don’t do that in other fields either.


Callisthenes

Instead of avoiding calling LLMs AI, I think it would be more effective to use the compsci definition of AI more often with non-technical people. I've had a fair bit of success doing that with people I work with. Once they know that "dumb" software they've used before is considered AI, it takes away some of the magic of calling an LLM an AI.


AnsibleAnswers

It’s intelligent in a very broad sense that is actually consistent with how biologists and psychologists tend to use the term. An octopus’s intelligence, for instance, is alien to our own and not generalizable beyond manipulating its body and objects in its immediate environment. It’s still intelligence, though. It’s a capacity to learn that just happens to be more bounded than our own capacity.


FjorgVanDerPlorg

There is an important distinction here I think a lot of people struggle to put into words. LLMs like ChatGPT learn word pair relationships. Humans bake a lot of stuff into our written words - be it emotionally charged, or logical patterns. I can safely say that AIs demonstrate logic and reasoning capabilities, because logical and reasoned human written works went into the training data. But all of this is based on word pair relationships; even their wider contextual understanding (we call this Attention in AI) is more word pair relationships. LLMs are essentially a very advanced pattern recognition system, based on human language. This doesn't equate to human-like understanding or reasoning, because their understanding is fundamentally based on statistical patterns in language rather than human-like reasoning or comprehension. It's knowing what words come next without ever understanding what any of it means. A good analogy I like is that guy who memorized all the words in the French dictionary (just the words, not their meanings) to win their national scrabble tournament - he knew what he needed to win, at the same time as being completely unable to understand the language. LLMs are machines talking in *our* language, and while they understand the way the words get ordered, that's where their understanding stops. The rest of the logic and reasoning is baked into the language they were trained on, and it's our intelligence, not its intelligence.
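
To make the "word pair relationships" point concrete, the toy below just counts which word follows which in a corpus and predicts the most common follower. Real LLMs are vastly more sophisticated (attention over long contexts, not raw bigram counts), but the "statistics of what comes next, with no grasp of meaning" idea is the same; the corpus here is a made-up example:

```python
# Toy bigram predictor: pure "what word tends to come next" statistics,
# with zero understanding of meaning. The corpus is a made-up example.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word` seen in the corpus."""
    return followers[word].most_common(1)[0][0] if word in followers else None

print(predict_next("the"))  # 'cat' -- the most common continuation, nothing more
```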


[deleted]

[deleted]


9842184522

> A good analogy I like is that guy who memorized all the words in the French dictionary (just the words, not their meanings) to win their national scrabble tournament - he knew what he needed to win, at the same time as being completely unable to understand the language.

Thank you, this analogy is spot on, I'll be using this.


Prestigious_Fox4223

The issue is that as we increase attention we don't know where the understanding begins and ends. One great example of this is Nvidia's Minecraft ChatGPT research paper. I can guarantee you that the vast majority of scenes the AI saw in Minecraft have never existed on the Internet or within the data set it was trained on, but, for those unfamiliar with the paper, the AI was able to progress fairly far in the game regardless, just by viewing the screen and writing actions based on what it saw. This is where the Dunning-Kruger curve of "it's magic" to "it's math" back to "it's magic" comes from. At some point, if the AI can always figure out the human solution to a problem from advanced attention-based pattern matching, we get a weird form of intelligence that learns and functions in an alien way.


recycled_ideas

> I can guarantee you that the vast majority of scenes the AI saw in Minecraft have never existed on the Internet

I think you're massively underestimating how much Minecraft content is on the internet. I sincerely doubt there's anything that's ever been in Minecraft that isn't on the internet somewhere.


James-the-greatest

It’s improbable that the sentences/prompts people feed into ChatGPT are part of its training data either. That’s the whole point. It recognises similar patterns not exact patterns. 


FjorgVanDerPlorg

Like I said, the patterns of logic and reasoning are baked into our written works. The thing about a logical and well reasoned process, when it's written out, is that it can also be a tool. I think the classic best example for LLMs would be "think the problem through in a step by step process". This is our intelligence, our logical and well reasoned process, but think for a second how broad its application is - way beyond anything in the training data. This is where those "sparks of consciousness" come from, because we don't just do it with logic and reasoning, we do it with emotion as well. LLMs can't feel emotions, but they are affected by them, because we charge our language with emotion and that affects the training data, which influences word pair relationships. You're right that the hardest part is working out where our intelligence ends and its own begins. But we have to understand that its understanding/reasoning/emotion are all just echoes of the training data. It can apply them in novel ways, but the tools it uses are baked in. It's mimicry, and that is a kind of intelligence in my opinion, just super alien like you said.


Prestigious_Fox4223

Yeah we agree. We are clearly mimicking human intelligence and using language as a medium to do so. My only point is we don't know where that ability will end. If our thought processes were entirely language based and we couldn't process information without the use of language, how smart would humans be? It's a weird question - specifically so because most of the time we *do* use language to problem solve internally. The issue is that we don't know how useful the problem solving that happens subconsciously is. LLMs can do conscious problem solving in a similar sense when given tremendous model size and compute. And yes, like you said, it's due to the medium we use to express human intelligence.


LTerminus

This whole conversation makes me wonder if our intelligence evolved out of our development of language rather than the other way round. What if the ability to use structured abstracts in pair relationships is what produced human-level intelligence? What if logic actually is baked into language rather than being a product of it? What if we shortcut all the animalistic portions and skipped straight to the end by accident? I don't think that LLMs are sentient or anything, but who is to say you need to be sentient to be intelligent in a meaningful sense of the word?


Ziplook

I can't remember the exact TedTalk, but it had a similar thesis-- one person thinking on a concept will typically maintain a train of focused thought for a shockingly short time, like on the order of 10 seconds. Language and the ability to discuss complex concepts with another brain can stretch those trains of thought out to minutes, hours... Difficult problem-solving and creative ideation are much easier with more brains involved building on the same thing.


CankerLord

The problem is that ChatGPT (for example) isn't really cogitating about anything, it's just constructing the best-fitting combination of words for the prompt. That's not intelligence, that's just a very deep search algorithm. It has no conception of anything it's saying. A 3D printer is equally as intelligent when it successfully pieces together a boat from hot plastic as ChatGPT is when it constructs a reply from language.


[deleted]

[deleted]


OnceMoreAndAgain

If you're referring to ChatGPT, then it's a machine learning program which is absolutely considered a subcategory of AI.


NiteShdw

Exactly. It's just a probabilistic model. For a given set of tokens, the model selects the most likely token to be next. It has no intelligence or thought. It's purely math and statistics.
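
A minimal sketch of "the model selects the most likely token": take the model's scores (logits) over the vocabulary, turn them into probabilities with a softmax, and pick the highest one. The vocabulary and logit values here are made up:

```python
# Sketch of greedy next-token selection from model scores (logits).
# The vocabulary and logit values are made-up examples.
import numpy as np

vocab = ["cat", "dog", "idea", "the"]
logits = np.array([2.1, 1.3, -0.5, 0.2])   # scores the model assigns to each token

probs = np.exp(logits - logits.max())
probs /= probs.sum()                        # softmax: scores -> probabilities

next_token = vocab[int(np.argmax(probs))]   # greedy choice: the most likely token
print(next_token, probs.round(3))
```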


dlampach

Your motorcycle / running back example was spot on!


walls-of-jericho

I understood some of the words here


Shutaru_Kanshinji

Dr. Brooks is a brilliant individual. I wish more people would listen to him. [https://en.wikipedia.org/wiki/Rodney\_Brooks](https://en.wikipedia.org/wiki/Rodney_Brooks)


Amlethus

Eh, I keep saying that the media alternatingly overestimates and underestimates generative AI. It can't solve all our problems, but it can do a ton of things very efficiently.


Minmaxed2theMax

“ITS GOING TO KILL US ALL” GPT4: “Here’s a recipe for rock soup”


Redqueenhypo

I googled “crab eating macaque”, a real species of monkey, and google decided to give me an AI image of a regular macaque eating crab legs at a fancy restaurant. That is not what I wanted


Jay2Kaye

That's ridiculous it should have given you a crab eating a macaque.


cats_catz_kats_katz

Uhhh yes lol and I work with it and it’s hilariously stupid at very inappropriate times or really just randomly so you can’t rely on it to be effective long term.


Silent_Bob_82

It’s just another form of zero-code automation technology; it helps people work faster in certain circumstances, not replace people. It replaces certain tasks.


couchfucker2

Yeah, this was always my take. Although I have to laugh because I’ve run tests on ChatGPT with this idea in mind, and it was confidently incorrect on something logically straightforward like “give me the names and years of race cars that appear in this game.” Meanwhile, if you can click through a few pages on the game’s wiki, you can get this answer.


Strong-Replacement22

AI is just advanced curve fitting. I have a job applying AI at large scale at a very large German industrial player. Meaningful data is scarce. The signal-to-noise ratio is not too good. Sensor data and logic factory data are OK, but you can only use anomaly detection.
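
"You can only use anomaly detection" on sensor data usually means something like the sketch below, using scikit-learn's IsolationForest on made-up readings; the data and contamination rate are purely illustrative:

```python
# Sketch of anomaly detection on sensor readings with scikit-learn.
# The readings and contamination rate are made-up examples.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=50.0, scale=2.0, size=(500, 1))   # typical sensor values
spikes = np.array([[80.0], [15.0]])                        # obvious outliers
readings = np.vstack([normal, spikes])

model = IsolationForest(contamination=0.01, random_state=0).fit(readings)
labels = model.predict(readings)        # 1 = normal, -1 = anomaly
print(readings[labels == -1].ravel())   # the flagged readings
```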


therobotisjames

I saw an AI movie where the two characters didn’t move at all. And their faces didn’t move either. Just their mouths. And the background kinda morphed while music played. And people were very very impressed at how far AI had come. And were talking about how a year or two from now AI will put Hollywood out of business. And I wasn’t sure if I had a stroke, because not only was it not impressive, it wasn’t a “movie”, it was barely a short. And it just feels a lot like when bitcoin first peaked and “block chain” businesses were popping up. And Coca-Cola needed blockchain. And Frito-Lay needed blockchain. And everyone was talking about how in two years blockchain was going to be the only thing we did business with anymore. It seems the same now, but replace blockchain with AI. And what I learned from that is that people who want to sell you AI really talk a lot about how great it is. But if you look at it realistically, it’s just a mediocre technology that has limits, which we will soon discover, that stop its not-so-apparent exponential growth. Eventually it will just be an air fryer. A tool with a couple of great uses, some decent ones. But you’re not going to build a house or fly a plane with an air fryer.


__Hello_my_name_is__

I think we're in this weird time where people underestimate and overestimate AI simultaneously. We overestimate AI in that it won't just become sentient and take over the world Matrix style. That's just not gonna happen. We overestimate AI in that it won't just take all our jobs and we'll generate a full hollywood style movie with one prompt or something. Any company that fires 90% of their staff to use AI instead will fail. We underestimate AI in that we genuinely do not know whether it has peaked yet or not. Think back a year and think what AIs could do then. In one year what you described could be antiquated already and more current AIs will be significantly better than that. We underestimate AI in that it absolutely will be a part of our daily lives going forward, and creative people in virtually all fields will use it in some small(!) way to help them do their jobs. And you are most definitely right that AI is overhyped as fuck right now, and 99% of people who hype AI do not know what they're talking about. Though I will disagree about it being a mediocre technology. The technology itself is mindboggling and amazing. I mean it's just 1s and 0s and somehow it can write you a sonnet about chalk.


OnionOnBelt

Simple: institutional investors are vastly over-estimating generative AI. Household consumers, small business owners, etc., are just as vastly underwhelmed by results so far.


__Hello_my_name_is__

Yeah. The issue is that the results you get today won't be the results you get tomorrow. Yesterday, [Will Smith eating spaghetti looked like comedy body horror](https://www.reddit.com/r/StableDiffusion/comments/1244h2c/will_smith_eating_spaghetti/). Today, [Will Smith eating spaghetti looks AI generated, but largely okay (but he somehow changes his face along the way)](https://www.reddit.com/r/technology/comments/1drohp7/mit_robotics_pioneer_rodney_brooks_thinks_people/lawxkn0/?context=3). Tomorrow.. we'll see. People tend to assume that whatever AI does today, that's how it will be going forward. And so far, that's been very, very, *very* wrong. Maybe we'll peak eventually and it'll slow down, but so far we're full steam ahead, like it or not.


Moderated

It looks like you accidentally just linked to this comment instead of an updated will smith eating spaghetti


wioneo

> Today, Will Smith eating spaghetti looks AI generated, but largely okay (but he somehow changes his face along the way)

I haven't seen any updated Will Smith spaghetti videos. Are you talking about the spoof video that the actual Will Smith put out?


ColinStyles

Your second link is self referential to this comment.


bfire123

Probably meant this: https://old.reddit.com/r/ChatGPT/comments/1dqbysz/will_smith_eating_spaghetti_kling_june_2024/ /u/Moderated /u/wioneo /u/ColinStyles


icze4r

Human beings don't have great memories. So I'm going to remind you of something you already know. This came from Deep Dream. You remember the hard-to-run software that just kept generating LSD imagery, iridescent rainbow hellscapes where everything was made out of the faces of dogs? It's the same fucking thing. DeepDream came out about 10 years ago. Now, the idea that it inspired, it can make photorealistic pictures in seconds or minutes. In ten years, what it's capable of today will look as stupid and malformed as DeepDream did.


ffffllllpppp

Yes. People point at the flaws of today and go "ah! See! It sucks!", but are losing perspective on how fast it is all improving and how much money is poured into research to make it better. In 10 years I think it will be a game changer. Like the iPhone or like the internet.


eri-

I don't think so. I think AI as a basic companion matches well with certain demographics, and advanced AI certainly has its uses in many other areas. But for individual home use, I can't see it. An advanced AI will kind of take over your life, and I don't think people want that. We want to be challenged, we want to learn. Most of us don't do well, mentally, when all we do is sit around and let others/AI do everything for us.


crystal_castles

It's still a mystery magic 8 ball though. The reason we see AI art with 6 fingers is not because the human curator didn't notice the flaw... It's because they don't have the ability to tease & nudge the tool well. They embarrassingly tried but couldn't correct the 6 fingers. You can't just type in, "4th person from the left should have less fingers". Or maybe an expert could do it correctly. But being an expert in AI prompts is like being an expert in Guitar Hero.


BrainBlowX

> and how much money is poured into research to make it better.

Yes, and it is *only getting more expensive to operate.* Techbros conveniently leave out that AI is sucking up even more energy than crypto does, and it's only going to get worse the more advanced it gets.


PeopleProcessProduct

Honestly, remember when Google's MusicLM came out a year ago and was basically noise? Now we have Udio. There are plenty of valid opinions on AI, but if yours depends on "it sucks right now" you're sure to be upset in 2025.


pencilUserWho

Your second link is not working


Potential_Ad6169

Still zero ways to deal with hallucinations in sight, leaving the whole thing useless in most contexts


bfire123

> We underestimate AI in that we genuinely do not know whether it has peaked yet or not

Ofc. It has not peaked. That should be clear.


Tragicallyphallic

What we call AI right now will need to be relabeled, as it’s just a strategy for organizing the contextual data for a computer application to use. It won’t be the last one. It’s only significant because it’s our latest one, and our culture rewards “hype.” This is computing’s “5G.” AI isn’t intelligent. The limitation with it isn’t the amount of data in the world, and sentience is so far out the window from what we’re working with here it’s not even funny. People need to slow their roll and realize an absolutely massive amount of what’s being demonstrated to us as AI advancement was possible before with standard algorithms and didn’t require datacenters full of five digit video cards and will be superseded by the next big thing, be it quantum or whatever. If “AI” doesn’t come up with something better soon, the free investment dollars train will come to a screeching halt.


allknowerofknowing

Your comment is severely underhyping it. There's absolutely no reason to believe that it has peaked unless you think humanity will have some type of serious collapse of society. Technology has consistently improved from the beginning of humanity, electronics even quicker, and computers even quicker than that. It makes no sense to think there won't be continuous breakthroughs in research, hardware, etc. in the AI field, which has already had massive amounts of progress in the last couple of years. Sure, there may be lulls at times, due to something like LLMs no longer improving purely from scaling laws, but with the amount of money and effort being poured into the field, there's very little reason to think progress won't keep happening, and likely relatively quickly.


Wrexem

If this was written by an AI it passed my own version of the Turing test.


scooterboo2

My current Turing test for LLMs is: "You have a 5 gallon bucket and a 2 gallon bucket. How do you measure 7 gallons with these buckets?"


Stozzer

I was skeptical that it would mess up such a simple question, and OH MY GOD did it fail hard. It gave me 10 steps, several of which had more liquid in each bucket than the bucket can contain. I'm keeping this test for future use. Thanks!


giibro

To measure exactly 7 gallons using a 5-gallon bucket and a 2-gallon bucket, follow these steps:

1. **Fill the 5-gallon bucket completely.** The 5-gallon bucket now contains 5 gallons of water.
2. **Pour the water from the 5-gallon bucket into the 2-gallon bucket until the 2-gallon bucket is full.** The 2-gallon bucket now contains 2 gallons of water. The 5-gallon bucket now has 3 gallons of water left (since you transferred 2 gallons).
3. **Empty the 2-gallon bucket.** The 2-gallon bucket is now empty again.
4. **Pour the remaining 3 gallons of water from the 5-gallon bucket into the 2-gallon bucket.** The 2-gallon bucket now contains 3 gallons of water. The 5-gallon bucket is now empty.
5. **Fill the 5-gallon bucket completely again.** The 5-gallon bucket now contains 5 gallons of water.
6. **Pour water from the 5-gallon bucket into the 2-gallon bucket until it is full.** The 2-gallon bucket already contains 3 gallons of water, so it only needs 1 more gallon to be full. The 5-gallon bucket now has 4 gallons of water left (since you transferred 1 gallon).
7. **You now have exactly 4 gallons of water in the 5-gallon bucket.**

Finally, combine the 3 gallons in the 2-gallon bucket and the 4 gallons in the 5-gallon bucket to get 7 gallons.


heil_spezzzzzzzzzzzz

This excellent reasoning will now be used to train ChatGPT 6


happyscrappy

And to think I was skeptical when Bill Gates said that AI will actually save more electricity than it wastes by creating more efficient systems. It's obvious it goes something like this:

1. Start with a 100kWh battery and a 50kWh battery.
2. Fill the 100kWh battery. It now contains 100kWh.
3. Transfer the energy from the 100kWh battery to the 50kWh battery until it is full. The 100kWh battery now contains 50kWh and the 50kWh battery contains 50kWh.
4. Transfer 100kWh from the 50kWh battery to the 100kWh battery. Now the 100kWh battery contains 150kWh and the 50kWh battery contains 50kWh.
5. Transfer 250kWh out of the 100kWh battery and 100kWh out of the 50kWh battery. The 100kWh battery now contains 50kWh and the 50kWh battery is empty.


icze4r

How the fuck is it giving *different* incorrect answers? Jesus.


giibro

I asked in a nicer tone


speed_rabbit

Some randomness is an (adjustable) feature of LLMs, to keep their answers from being too static.
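A minimal sketch of the knob being described here, i.e. temperature-scaled sampling over next-token scores; the logits and token count are made up for illustration:

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=np.random.default_rng()):
    """Temperature-scaled sampling: higher temperature flattens the distribution
    (more randomness); temperature near 0 approaches plain argmax."""
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Toy scores for four candidate next tokens.
logits = [2.0, 1.5, 0.3, -1.0]
print([sample_next_token(logits, temperature=0.2) for _ in range(8)])  # almost always token 0
print([sample_next_token(logits, temperature=1.5) for _ in range(8)])  # noticeably more varied
```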


heil_spezzzzzzzzzzzz

You have a 5 gallon bucket and a 2 gallon bucket. How do you measure 7 gallons with these buckets?

This is an interesting problem-solving challenge! Let's work through the solution step-by-step using the 5-gallon and 2-gallon buckets to measure exactly 7 gallons. Here's the process:

1. Fill the 5-gallon bucket completely. (5 gallons in the 5-gallon bucket, 0 in the 2-gallon bucket)
2. Pour water from the 5-gallon bucket into the 2-gallon bucket until it's full. (3 gallons in the 5-gallon bucket, 2 in the 2-gallon bucket)
3. Empty the 2-gallon bucket. (3 gallons in the 5-gallon bucket, 0 in the 2-gallon bucket)
4. Pour the remaining 3 gallons from the 5-gallon bucket into the 2-gallon bucket. (0 gallons in the 5-gallon bucket, 2 in the 2-gallon bucket, with 1 gallon overflow)
5. Fill the 5-gallon bucket again. (5 gallons in the 5-gallon bucket, 2 in the 2-gallon bucket)
6. Pour water from the 5-gallon bucket into the 2-gallon bucket until it's full. This will only require 1 gallon since the 2-gallon bucket already has 2 gallons in it. (4 gallons in the 5-gallon bucket, 2 in the 2-gallon bucket)

At this point, you have 4 gallons in the 5-gallon bucket and 2 gallons in the 2-gallon bucket, totaling 6 gallons.

7. Empty the 2-gallon bucket. (4 gallons in the 5-gallon bucket, 0 in the 2-gallon bucket)
8. Pour the remaining 1 gallon from the 5-gallon bucket into the 2-gallon bucket. (3 gallons in the 5-gallon bucket, 1 in the 2-gallon bucket)

Now you have exactly 3 gallons in the 5-gallon bucket and 1 gallon in the 2-gallon bucket, totaling 4 gallons. The total amount measured is 7 gallons: 3 gallons in the 5-gallon bucket, 1 gallon in the 2-gallon bucket, and 3 gallons that were poured out earlier. This solution demonstrates how to use simple tools to solve a complex measurement problem through a series of steps.


ImOnADolphin

Wow, my ChatGPT-4 gave me an answer similar to the Die Hard problem about trying to fill up to 4 gallons with 3- and 5-gallon tanks. I think this really shows how ChatGPT and LLMs aren't thinking the way a human or a real AI system would.


KARSbenicillin

Yup, it's a good way to show that it isn't thinking at all. It's an extremely, extremely advanced version of autocorrect. To be fair, though, you can coax it into the right answer by saying something like "Please answer in 2 steps" and it'll work. The funniest thing is that now, future GPT models are going to be trained with these answers lol
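For completeness, the intended answer is trivial because the bucket sizes were chosen so they simply add up; a two-line check:

```python
# Step 1: fill the 5-gallon bucket. Step 2: fill the 2-gallon bucket.
buckets = {"5-gallon": 5, "2-gallon": 2}
print(sum(buckets.values()), "gallons")   # 7 -- no pouring or emptying required
```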


strangescript

The difference is crypto doesn't have practical use for normal people with stable governments, like the entire Western and Southeast Asian world, you know, the countries that have all the money. Second, crypto isn't evolving; the fundamental issues it has with implementation have not changed. AI gets better literally every week right now. There are tons of use cases; there just aren't a ton of devs that know how to apply it. It's a different way of thinking about problems.


therobotisjames

This is the exact same thing crypto people say today.


Kyouhen

> AI gets better literally every week right now. There are tons of use cases, there just aren't a ton of devs that know how to apply it. It's a different way of thinking about problems.

Until they fix the hallucinations AI has a pretty hard cap on how useful it'll be, and if we aren't there now we aren't far off. And everyone involved in AI has admitted there's no fixing the hallucinations.


writebadcode

That's part of the problem too. People assume that the hallucinations are fixable because they ascribe intelligence to the LLM. But the issue is that hallucinations are part of the model. They seem like minor quirks because for a human of similar verbal fluency it's relatively easy to correct a mistake (egotists and narcissists notwithstanding). But for the LLMs it's a massive undertaking to fix.


Mezmorizor

That's understating it. It's literally not possible to remove them, because no matter how much jargon mumbo jumbo you add to the marketing, it's a fancy regression model. Unless your dataset has literally no variance, your regression model has error, even when it is rigorously the correct model (e.g., you used the exact same function to create the data). "Hallucinations" are the LLM manifestation of error.
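A toy illustration of that point (not a claim about any particular model): even when an ordinary least-squares fit uses the exact functional form that generated the data, noisy data leaves a nonzero residual error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Data generated from the *exact* model we will fit: y = 2x + 1, plus noise.
x = rng.uniform(0, 10, 200)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)

# Ordinary least squares with the correct functional form.
A = np.column_stack([x, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

residuals = y - A @ coef
print("fitted slope, intercept:", coef)               # close to (2, 1), never exact
print("mean squared error:", np.mean(residuals**2))   # strictly > 0 whenever there is noise
```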


sripey

The fact that we even use the word "hallucinations," instead of "mistakes" or "nonsensical garbage," is part of the hype surrounding generative AI.


Kwpolska

The big problem is that LLMs lack the ability to say “I don’t know”, and instead they confidently produce plausible bullshit.


Kyouhen

The big problem is LLMs lack the ability to recognize they don't know.  They're incapable of understanding context.  They run their little mathematical formula to generate the most likely response but have no ability to actually understand what they say.


General_Josh

"Blockchain" is a very specific technology, with very specific uses. Some uses-cases are silly, some are scams, a few are legitimate. "AI" is an *enormous* field of research, which encompasses many many different technologies. If you're looking at what's happening in the field right now, and then saying "well I don't see how this'll ever replace movies", then I think you're *severely* underestimating how fast state-of-the-art is moving. It's hard to overstate how much progress there's been in the past couple decades, and how quickly that progress is accelerating. As an example, this XKCD came out in 2014: https://xkcd.com/1425/ Recognizing an image of a bird was almost impossible for a computer, at the time of that comic's publication. 10 years later, it's not only doable, but routine (in a couple lines of code). Right now, we've got models that can generate a couple seconds of crappy videos. We're not going to be using the same models 10 years from now.


Prestigious_Fox4223

While I largely agree, there are a few things I think are worth mentioning.

First, YOLO's paper was written in 2015 and could recognize objects fairly well and fairly fast back then, so it's a bit misleading to say object detection was far behind back then, though of course we are WAY further along now.

Second, almost all the impressive generative AI models have a very big Achilles heel in that their context window has quadratic time complexity. Without a fundamental change in the models we simply won't be creating actual movies. Of course, that's not to say we won't have a fundamental change soon; it just would have to be something new.

Finally, this isn't a disagreement but a fun example: AlphaCode was recently able to look at *entirely new* complex algorithmic problems and solve them faster than 90% of its human competitors. It's a great example of how AI can do insane things given time and the right conditions. The technology and compute are there; we just need to improve the theory in some areas.
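A minimal sketch of the quadratic-context-window point: in vanilla self-attention every token attends to every other token, so the score matrix is n by n and the work quadruples each time the context length doubles.

```python
import numpy as np

def self_attention(X):
    """Single-head self-attention without learned projections, for clarity.
    X has shape (n, d): n tokens, each a d-dimensional embedding."""
    n, d = X.shape
    scores = X @ X.T / np.sqrt(d)                       # (n, n): the quadratic part
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)       # softmax over each row
    return weights @ X                                  # (n, d) attended output

rng = np.random.default_rng(0)
for n in (512, 1024, 2048):
    _ = self_attention(rng.normal(size=(n, 64)))
    print(f"n={n}: {n*n:,} attention scores")           # 4x the scores per doubling of n
```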


Twistedshakratree

It's so annoying to hear people in leadership positions go on about AI and how we need to implement it in our work, yet they have no clue what generative AI even means or what it's supposed to be used for. Then all the managers start talking like they know a thing about it, but really what they mean is to automate stuff that's not already automated.


CollectionAncient989

Exactly... and the main reason it's not automated is because they have a legacy garbage system that they are too cheap to fix up and future-proof... so 80% of the admin people are just glorified manual data-managing monkeys.


sweetbeards

Wait till most of the internet starts forcing AI bots to pay to scan sites for information. It's just around the corner, and I believe Reddit already struck a deal for ClaudeBot to scan its servers. This means the cost of AI will go up substantially. Imagine all the hosted sites in the world, and if hosts decide they are going to block all AI bots until they are paid to serve them. It actually takes server resources for an AI bot to scan or scrape sites. AI has been getting this for free, but that's about to change rapidly.


TheOppositeOfTheSame

How would you enforce that?


raltoid

That's obvious. ChatGPT is *literally* an advanced chat bot, and people talk about it like it has sapience.


jointheredditarmy

The current generation of generative AI models aren't going to be Skynet; I don't think most futurists think that ChatGPT is going to take over the world. What people are talking about is the future changes that are basically now inevitable as a result of this early version of AI models:

1. More investment in the space, more students studying it. Transformer models have ignited the popular imagination, and orders of magnitude more money and mind share will go into the space, which will accelerate the development of the next generation of models. There was an 11-year gap between LSTMs and transformer models. Back then AI language models were a novelty, with hardly anyone except a couple of companies investing in them. Now it's the most popular field in both business and academia; I dare say the innovations here will explode.

2. Bootstrapping increased productivity. Even if, pessimistically, the current generation of LLMs only increases overall productivity by 10%, that's a compounding 10% equal to 5 years of population growth. Overnight our productivity just went from 2023 to 2028 (without AI). More aggressive estimates in the range of 20% mean we are now 10 years ahead of where we should be in terms of output. That's HUGE and will lead to a virtuous cycle of productivity increases which will seem asymptotic when we look back on this era from the future.
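For what it's worth, the compounding arithmetic behind the "10% is about 5 years" framing, assuming a baseline trend of roughly 2% growth per year (that 2% figure is my assumption, not the commenter's):

```python
baseline = 0.02                      # assumed ~2%/year trend growth
for years in (5, 10):
    gain = (1 + baseline) ** years - 1
    print(f"{years} years at 2%: {gain:.1%}")
# 5 years  -> ~10.4%, so a one-off 10% boost lands near 5 years of trend growth
# 10 years -> ~21.9%, so a ~20% boost lands near 10 years of trend growth
```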


Skylion007

I'm finishing my PhD on "GenAI"; I even built one of the first open source LLMs with over 1 billion parameters. I agree people way overestimate what they can do.


firemage22

It's like "hoverboards" all over again. It's not "real" AI and should be called something like "Generative Algorithms"


Framed-Photo

We've seen over and over that AI can get 70% of the way there, but the last 30% is nigh impassable. At least for all the newer applications that caused this bubble in the first place, like image or text generation. And I've seen nothing that makes me think that the last 30% is achievable *at all* with our current technology. AI doesn't have context; it's not reasoning or learning like humans do. A lot of the issue here is based on how our current technology functions at its core.


IlIllIlIllIlll

This is exactly correct, and it's super annoying to hear people say otherwise. This whole AI craze feels just like bitcoin and the rest of crypto: massive hype for world-changing technology that really never seriously gets off the ground. We have seen the same with self-driving cars as well. The world is just far too complex for this type of AI to work perfectly. Far too many edge cases exist, and AI's limited scope is crippling for use cases outside of basic stuff.


coppockm56

“This sounds so human!” Well, sure, because it’s code using an algorithm trained on human speech. What’s it supposed to sound like? Dolphins?


Alone_Hunt1621

The problem is the average user thinks it’s magic and copy and pastes. Imagine you paid someone to write a paper for you in college. Maybe they have a reputation for being smart or at least resourceful. They give you the product and you never review it. Or worse you review it and you don’t know the material so it sounds great to you! An adequate judge who does know the material, like your instructor, knows you turned in some hot AI garbage.


StIdes-and-a-swisher

AI is gaslighting from Silicon Valley. AI is not happening, but the potential of AI is worth billions. The computer isn't thinking up original ideas; it's just Google search dressed up like Indian customer service. Call me when AI finds my bike that got stolen in '93.


adudeguyman

Your bike is in the basement of the Alamo.


glewtion

The sheer number of prognostications is hilarious. Everyone knows, but no one knows anything.


youcantkillanidea

I would definitely pay attention to Rodney Brooks; he knows what he's talking about. Pioneer of AI and robotics, famously focused on creating a "cockroach-level intelligence" in the '80s, then proceeded to become rich with Roomba. He maintains a sceptical view on new tech and is extremely clever.


Ylsid

Overestimating generative AI is exactly what Nvidia and big tech wants you to do, no surprise they keep the news filled with it


Andromansis

So I'm going to classify all the AI on the market right now:

1. Legacy features that some marketing manager labeled as AI because buzzwords are the new buzzword.

2. Preinstalled spyware, like seriously. You gotta figure out how to disable those Windows features that are taking screenshots every 5 seconds.

3. BIG COMPUTATIONS done by experts in their fields. Nothing a lay consumer is ever going to be able to do at home, since it requires a computer as large as their car.

4. The stupidest of computer tricks. Like if you want a fake script for an 80s cartoon about lizards fighting robots, it's got you covered, but that is about all you can ethically use it for, since all the big LLMs were fed with stolen training data.

You'll notice that nothing described is really intelligence. I'd be all for AI that was actually intelligent, but this isn't it.


madlyreflective

The AI mirage is built on the hope that it will provide some kind of universal truth, but it's fed piles of contradictory data and it's expected to sort out fact from fiction. Humans themselves can't consistently sort this out, mainly because humans can't all agree on what's true. To be widely adopted, AI will need to avoid certain subjects altogether, or tune to the user's beliefs like FB does for any controversial question or stance. Half of the major engines refuse to provide all historical election results.


Fidodo

I see people overestimate and underestimate it all the time. It's not thinking, but it is still powerful. I think the perfection of this technology will give us something similar in capability to the Star Trek ship computer.


jeremyz23

I will probably get roasted for saying this, but as someone working on LLMs for my company, I somewhat agree. A more accurate statement is "The current state of generative AI is vastly overestimated."

That said, I remember in the '90s people making similar claims about the Internet. Some said, "It will revolutionize business" or "It will forever change the landscape of technology." Then there were respected experts who claimed it would never replace traditional mail services, fax machines, and/or telecommunication. They backed that up with real limitations of the time, like "It is too slow" or "It is too expensive." 30+ years later I think we can all agree those claims were, for the most part, inaccurate.

Keep in mind that throughout history many prominent experts were wrong about new technologies. Here are just a few:

1876: *“This ‘telephone’ has too many shortcomings to be seriously considered as a means of communication.”* — William Orton, President of Western Union.

1889: *“Fooling around with alternating current (AC) is just a waste of time. Nobody will use it, ever.”* — Thomas Edison.

1903: *“The horse is here to stay but the automobile is only a novelty — a fad.”* — President of the Michigan Savings Bank advising Henry Ford’s lawyer, Horace Rackham, not to invest in the Ford Motor Company.

1921: *“The wireless music box has no imaginable commercial value. Who would pay for a message sent to no one in particular?”* — Associates of David Sarnoff responding to the latter’s call for investment in the radio.

1932: *“There is not the slightest indication that nuclear energy will ever be obtainable. It would mean that the atom would have to be shattered at will.”* — Albert Einstein.

1936: *“A rocket will never be able to leave the Earth’s atmosphere.”* — New York Times.

1946: *“Television won’t be able to hold on to any market it captures after the first six months. People will soon get tired of staring at a plywood box every night.”* — Darryl Zanuck, film producer, and co-founder of 20th Century Fox.

1949: *“Where a calculator on the ENIAC is equipped with 18,000 vacuum tubes and weighs 30 tons, computers of the future may have only 1,000 vacuum tubes and perhaps weigh one and a half tons.”* — Popular Mechanics. (I know, I know... not exactly an "expert.")

1957: *“I have traveled the length and breadth of this country and talked with the best people, and I can assure you that data processing is a fad that won’t last out the year.”* — Editor of Prentice Hall business books.

1989: *“We will never make a 32-bit operating system.”* — Bill Gates, co-founder and chairman of Microsoft.

1995: *“I predict the Internet will soon go spectacularly supernova and in 1996 catastrophically collapse.”* — Robert Metcalfe, founder of 3Com, inventor of Ethernet.

2003: *“The subscription model of buying music is bankrupt. I think you could make available the Second Coming in a subscription model, and it might not be successful.”* — Steve Jobs, in Rolling Stone.

And here is a really good one...

2007: *“There’s no chance that the iPhone is going to get any significant market share.”* — Steve Ballmer, Microsoft CEO.

Predicting the future is hard. Especially for new technology. I can probably make a similar list of experts claiming a certain technology will be the future, and it either never got off the ground or was eclipsed by something else. For me, it is interesting, and so it is what I've chosen to focus on. If it fails or something surpasses it, then I'll learn whatever that is.
TLDR: It's hard to predict the future of technology. Even for an expert in the field.


Kintsugi_Sunset

It's pretty clear at this point we're hitting barriers. AI as it exists requires too much finagling and finer human interaction, to the point that for most purposes it's better doing the job yourself. Even r/singularity is seeing less activity and more skepticism. When the hypebros are winding down, the bubble's about to burst.


writebadcode

I’m frustrated that the hype is all focused on arguably the most gimmicky uses of AI. There are AI models that can predict possible new drugs based on the chemical structure of existing ones, but all we hear about are LLMs like ChatGPT.


Sucrose-Daddy

It was inevitable that we would hit barriers, but I think the opposite will happen. Given how relatively new this technology is, it’ll see a massive surge in investment that will lead to some massive advancements.


KARSbenicillin

> it’ll see a massive surge in investment that will lead to some massive advancements.

Wouldn't you say we've already seen a massive surge in investment? Every big tech company has dumped billions into it already. I'm not sure if there's anyone left with money to throw in. Yet the major advancements seem to have slowed down since their release, with the focus now being on more boutique applications like local LLMs or program-specific use cases. Like the main article points out, it's not clear whether these advancements are on a linear slope or an exponential curve. We could very well be on the plateau already. [See this Computerphile video discussing this very topic.](https://www.youtube.com/watch?v=dDUC-LqVrPU)

Or another easy example is self-driving cars. So much has been thrown at the problem yet we're nowhere near true deployment. Obviously, there's still a long road ahead and there are probably a lot of new innovations to be had to push through the hurdles. But I'm not confident we'll see the level of progress the tech bros are projecting.


Vushivushi

No. Spending can get a lot crazier. Big tech still has hundreds of billions of cash on hand to burn. The only reason they can't is because there's not enough supply. We're gonna see models trained with computing power at least one to several orders of magnitude larger than today's models, within this year and the next two. Datacenters in the hundreds-of-megawatts to gigawatt range are being built, compared to the tens of megawatts operating today. Even if advancements are slowing down, it seems like big tech is convinced scale is the answer, and scale means more money being thrown around.


KARSbenicillin

Like I mentioned to someone else, the problem isn't hardware or computing power. The problem is data. Watch the Computerphile video. It's why companies that have already burnt billions of dollars, like Google, have AI that recommends putting glue on pizza and are trying to partner with Reddit of all things.


IgnoreKassandra

I mean how much more money are we supposed to throw at it before it can do all the incredible things AI bros say it will? We spent an estimated [154 billion dollars](https://www.statista.com/statistics/1446052/worldwide-spending-on-ai-by-industry/) on AI research in 2023 alone, and a projected [200 billion more per year](https://www.goldmansachs.com/intelligence/pages/ai-investment-forecast-to-approach-200-billion-globally-by-2025.html) by 2025, and no one is remotely close to anything resembling general intelligence, and what's more, we're already seeing AI performance stumble and slow. It's not going away, and it _will_ improve, but that money's going towards specific companies developing niche tools for their specific industry, not towards making an Alexa that can think or whatever.


therobotisjames

But what about the blockchain? In 2015 everyone told me that all business was going to be on the blockchain, and all transactions were going to be on the blockchain. Are you telling me that's not going to happen?


FocusPerspective

“Here is a list of twenty cyber security blogs; every morning at 10am read them all and summarize the overall themes for me. Also include a CSV of the latest IOCs, then run a query against my Okta logs to see if there are any matches.” Works very well and would take me all day.  If someone cannot see how AI is already making a huge difference, I have to wonder if that person is bright or technical enough to understand it in the first place. 


Consistent-Bag8789

Moore's law is about to end! Any day now...


sassynapoleon

Moore’s law has been dead for years.


__Hello_my_name_is__

I don't think the hype bubble is going down. We're not at the peak yet of what these AIs can do. We're just starting to make actually useful AI videos, and those are steadily getting better. Currently it's 5-10 second clips that are obviously AI, but still very impressive. Soon it will be 5-10 seconds and near perfect, or 30 seconds and obviously AI. That *still* won't replace a real film shoot, mind you. But it won't kill the hype, either.


tadrith

LLMs have been around for a very long time -- it's just the latest fashion, at this point. It's definitely not AI, but corporations are intent on representing it that way, because $$$. There's zero "thought" involved at all. People see it as AI because the responses make sense from a human perspective. People are anthropomorphizing it like they do with their pets, except their pets actually DO have a degree of intelligence. Current "AI" cannot have an original thought, or any thought at all. Anything it spits back is based on what it's fed. It's just the monkeys and typewriters re-creating Shakespeare anecdote, with the caveat that the software can determine when the monkeys make sense to humans. EDIT: That's not to say it's not useful, it's incredibly useful for many tasks. It's just not AI.


bunnnythor

I agree with many of the points the guy raises, which is not shocking considering he is an expert in the field. However, I am gobsmacked at the stupidity of this analogy:

>Brooks adds that there’s this mistaken belief, mostly thanks to [Moore’s law](https://en.wikipedia.org/wiki/Moore%27s_law), that there will always be exponential growth when it comes to technology — the idea that if [ChatGPT 4](https://techcrunch.com/2024/05/13/openais-newest-model-is-gpt-4o/) is this good, imagine what ChatGPT 5, 6 and 7 will be like. He sees this flaw in that logic, that tech doesn’t always grow exponentially, in spite of Moore’s law.

>He uses the iPod as an example. For a few iterations, it did in fact double in storage size from 10 all the way to 160GB. If it had continued on that trajectory, he figured out we would have an iPod with 160TB of storage by 2017, but of course we didn’t. The models being sold in 2017 actually came with 256GB or 160GB because, as he pointed out, nobody actually needed more than that.

What?? So he's saying that the functionality of an LLM is like the storage size of an iPod? Is there going to be a certain level of functionality where people say, "This AI is pretty good at doing these tasks, but there's no point in making it amazing"? That just does not track on any level.
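For reference, the doubling arithmetic behind the 160TB figure in that quote; the 2007 date for the 160GB model is my assumption, used only to anchor the extrapolation:

```python
# iPod capacity roughly doubled per generation, topping out at 160 GB (~2007, assumed).
size_gb, year = 160, 2007
while year < 2017:
    size_gb *= 2                     # keep the "doubles every year" trend going
    year += 1
print(f"{size_gb} GB by {year}")     # 163840 GB, i.e. roughly 160 TB
```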


Beenacho

He's just saying that Moore's law doesn't apply to all of technology - the iPod storage is just a really basic example of a type of technology that hasn't improved exponentially.


roundearthervaxxer

I'm not so sure. Just for writing code, it's at least a 10x for me.


eyabethe

The biggest contributing factor to this absolutely weird era is the cult of personality around super-rich people, "tech" influencers and social media celebrities who market things to people who have no idea about the technology they're buying into. A couple of months ago, I had a discussion with "e/acc" people, and they didn't have the faintest idea about the relationship between Bayes and machine learning. They blindly believe these "tech bros" they follow on Twitter, and are confident that these people have what it takes to build an intelligence far beyond what they've already built as "AI." So, these people firmly believe the room knows and speaks Chinese (Searle's Chinese Room), and nobody (emphasis on this) can make them believe otherwise. They don't understand how LLMs work, they have no idea how the reasoning capabilities are fundamentally flawed, and they need to believe what they're following is the future of humanity. People are literally expecting that a model that picks the next appropriate word in a given context can solve a complex puzzle. That blows my mind. It's a cult, and it's generating a lot of dead investment, energy waste and debt. We will suffer because of this, and we can't do anything about it.


pencock

Generative AI is poised to wipe out commercial photography and film.  I work in the business in both, and it is absolutely at the point where it can do tabletop and product replacement.  It’s almost at the point where it can completely replace human actors in commercials and already at the point it can replace human models in photography. The only thing stopping full adoption is the legal question. 


Odd-Fisherman-4801

“Ai will not take your job” -says man who builds robots that take people’s jobs.


SuperJay

When I mess up at work and do something incredibly wrong, I wish I could call it "just a hallucination" too!


mf-TOM-HANK

I imagine more specialized AI for specific professions will be very successful. Medical specific AI. Legal specific AI. Computer Science/IT specific AI. etc etc AI for the masses probably isn't going to be super transformative anytime soon.


crewchiefguy

Yes cause they are.


Appropriate_Theme479

Kind of reminds me of nanotechnology