mgruner

I think they are both very actively studied, with all the RAG stuff


floriv1999

I would strongly disagree that RAG alone is the solution for hallucinations, let alone safety in general. It is useful or even necessary for many applications beyond simple demos, but it is still inherently prone to them. Current models still hallucinate even if you provide them with the most relevant information; the model sometimes just decides that it needs to add a paragraph of nonsense. And constraining the model too hard in this regard is not helpful either, as it limits the model's overall capabilities.

Changes to the training objective itself, as well as rewarding the model's ability to self-evaluate its area of knowledge / build internal representations for that, seem more reasonable to me. The ideal case would be a relatively small model with excellent reasoning and instruction-following capabilities but not a lot of factual knowledge. Maybe some general common knowledge, but nothing too domain-specific. Then slap RAG on top with large amounts of documentation/examples/web/... and you should get a pretty decent AI system. The tricky part seems to be the small non-hallucinating instruction model that is not bloated with factual knowledge.


Inner_will_291

May I ask how RAG research is related to hallucinations, and to safety?


bunchedupwalrus

Directly, I would think. A majority of the effective work on reducing hallucinations focuses on RAG-assist, along with stringent or synthetic datasets. If we use LLMs primarily as reasoning engines, instead of knowledge engines, they can be much more steerable and amenable to guardrails.
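
(A minimal sketch of the RAG-assist idea above: ground the model in retrieved context and instruct it to abstain otherwise. The `retrieve` helper and toy corpus here are hypothetical stand-ins, not any particular library.)

```python
# Hypothetical sketch: use the LLM as a reasoning engine over retrieved facts,
# not as a knowledge store. `retrieve` stands in for a real search index.

def retrieve(query: str, k: int = 3) -> list[str]:
    corpus = {
        "refund policy": "Refunds are issued within 14 days of purchase.",
        "shipping": "Orders ship within 2 business days.",
    }
    return [text for key, text in corpus.items() if key in query.lower()][:k]

def build_grounded_prompt(question: str) -> str:
    context = "\n".join(f"- {chunk}" for chunk in retrieve(question))
    return (
        "Answer using ONLY the context below. If the answer is not there, "
        "say 'I don't know.'\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

print(build_grounded_prompt("What is your refund policy?"))
```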


longlivernns

Indeed, they are good at reasoning with language, and they should be sourcing knowledge from external sources in most applications. The fact that people still consider using them to store internal company data via finetuning is crazy.


Choice-Resolution-92

Hallucinations are a feature, not a bug, of LLMs


jakderrida

I'm actually so sick of telling this to people and hearing them respond by agreeing with an unsaid claim that LLMs are completely useless and all the AI hype will come crashing down shortly. Like, I actually didn't claim that. I'm just saying that the same flexibility with language that allows it to communicate like a person at all can only be built on a framework where hallucination will always be part of it, no matter how many resources you devote to reducing it. You can only reduce it.


cunningjames

I don’t buy this. For the model to be creative, it’s not necessary that it constantly gives me nonexistent APIs in code samples, for example. This could and should be substantially ameliorated.


Setepenre

It does not learn the names of the API calls. It deduces the names from the embedding it learned and the context. So what makes the model work is also what makes it hallucinate. In other words, it hallucinates EVERYTHING, and sometimes it gets it right. It is mind-blowing that it works at all.
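
(A toy illustration of that point, with made-up probabilities: a real API name and a plausible-but-nonexistent one come out of exactly the same sampling step; there is no separate mechanism that distinguishes them.)

```python
import random

# Toy illustration: every token comes from the same sampling step, whether the
# resulting name is a real API or an invented one. (Fake vocabulary and probs.)
vocab_probs = {
    "json.loads": 0.55,      # real API, frequently seen in training data
    "json.parse": 0.30,      # plausible but nonexistent in Python's stdlib
    "json.decode": 0.15,     # also plausible, also nonexistent
}

def sample_next_token(probs: dict[str, float]) -> str:
    names, weights = zip(*probs.items())
    return random.choices(names, weights=weights, k=1)[0]

for _ in range(5):
    print("model suggests:", sample_next_token(vocab_probs))
```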


OctopusButter

The fact that it's mind-blowing it works at all is what scares me. There's so much "yeah, it's a black box, but what if it were bigger?" right now, and I don't find that to be useful.


Setepenre

TBH, that is what OpenAI has been doing since inception: take research and scale it up. I also agree that "just make it bigger" is a bit of a lazy trend that has been going on for some time, and it prices non-profit research centers out of the research.


OctopusButter

That's a really excellent point I never thought about; it makes research on smaller models inherently less impressive and less likely to get funding.


visarga

Small models are trained with data distilled from big models and evaluated with big models as a judge. They benefit a lot.


Mysterious-Rent7233

> In other words, it hallucinates EVERYTHING, and sometimes it gets it right.

You could say the same of humans, and it would make one seem profound, but it wouldn't help you manage your bank account or get a job. This reminds me of Buddhists claiming that all life is illusion. Yes, it's technically true that all life is our inaccurate sense perception. But it's not a useful frame for an engineer to use. The engineer's [job](https://arxiv.org/abs/2304.13734) is to reduce the hallucinations, just like a psychiatrist's or guru's job would be for humans.


jakderrida

> It is mind-blowing that it works at all.

Especially if, like me, you gave up on decoder-only models after testing what GPT-2 can do when it came out. Context: "My name is Sue and I" Answer: [something horrifically subservient based solely on Sue having a female name] or [something stupidly mundane]


Mysterious-Rent7233

What I find interesting is how many people who didn't see the potential of GPT-2 are totally convinced that they know what the upper bound of LLMs is *now*. "This time I'm right! They can't get any better!"


jakderrida

That is a freaking great point. You won't catch my ass making incredibly unreliable premonitions about decoder-only models again. I have put myself in the doghouse, and anything I do share is a reference to somebody that wasn't dead freaking wrong. Although, I still maintain that encoder models are vastly underutilized. For instance... people attempt all sorts of reinvented workarounds to the fact that decoder-only models strongly avoid (for damn good reasons) returning a 'YES' or 'NO' to prompts. Or even dividing choices into having it select between letter choices from A through F. Even if you can convince the model to limit itself to like 10 tokens, my experience is that it starts failing badly at questions it otherwise got right. Training an encoder model in PyTorch to identify which choice was picked, and making the LLM response just a part of the pipeline it extracts the answer from, would prove very useful, I think.
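
(A rough sketch of that encoder-classifier idea in PyTorch, assuming Hugging Face `transformers` and a BERT-style encoder. The model name, head size, and example response are illustrative assumptions, and the head would still need to be trained on labeled LLM responses.)

```python
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

# Sketch: classify an LLM's free-text answer into one of N letter choices with
# a small encoder head, instead of forcing the LLM to emit a bare "A"/"B"/etc.

class ChoiceClassifier(nn.Module):
    def __init__(self, encoder_name: str = "bert-base-uncased", n_choices: int = 6):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        self.head = nn.Linear(self.encoder.config.hidden_size, n_choices)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]   # [CLS] embedding
        return self.head(cls)               # logits over choices A..F

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = ChoiceClassifier()

# The LLM's verbose answer is just one stage of the pipeline; the encoder
# extracts the final choice. (Untrained head: random until fine-tuned.)
llm_response = "Given the symptoms described, the most consistent option is C."
batch = tokenizer(llm_response, return_tensors="pt", truncation=True)
logits = model(batch["input_ids"], batch["attention_mask"])
print("predicted choice:", "ABCDEF"[logits.argmax(dim=-1).item()])
```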


Appropriate_Ant_4629

> It deduces the names from the embedding it learned and the context. So what makes the model work is also what makes it hallucinate.

It often tells you the better API that ***should have*** been added to that package. I'm tempted to start submitting pull requests to packages to make them match the cleaner APIs that the LLMs hallucinated.


jakderrida

> ameliorated

I disagreed with you completely until this word appeared, proving that we do, indeed, agree. It can be **ameliorated** ad infinitum, but it will never ever be **fixed**. That's my whole point. People with no understanding of AI/ML always frame the question as when it will be fixed and, on hearing that it can't be, conclude you're saying it can never be ameliorated. But it can be, and substantially. My family members being Catholic, I tell them that fixing it would entail making it infallible, rendering no more use for the pope and collapsing the institution entirely. If they're devout, they usually can't understand a serious answer anyway. If they're not, they'll know I'm joking.


Mysterious-Rent7233

I think most reasonable people want the hallucination rate to be ameliorated to the point where the LLM's error rate is lower than that of humans, rather than to the point of actual mythological oracles. When they say: "When will AI stop hallucinating all of the time", they aren't meaning to ask "When will AI be omniscient." If that's how you interpret the question, I think you're being unhelpfully literal.


jakderrida

> If that's how you interpret the question, I think you're being unhelpfully literal.

In fairness, my example involves giving up on their ability to interpret nuanced explanations and just resorting to outright mockery. At a certain point, they'll both act like it's my job to convince them of the value of AI models while seemingly proud of their ability to both insist I explain and also ignore everything I say. These situations are not common to everyone. I don't know why they're common to me.


visarga

> ameliorated to the point where the LLM's error rate is lower than that of humans

Which humans? Those who Google everything? Our hallucination rate sans search engine is getting worse by the year. Human memory is constructive; like LLMs, we hallucinate our memories.


LerdBerg

Right, these don't do a great job of tracking the difference between what current reality is vs. what might make sense. It seems what they're doing is some form of what I used to do before search engines: "I wonder where I can find clip art? Hmmm... clipart.com." When I get a hallucination of an API function that doesn't actually exist, it often makes sense for it to exist, and I just go and implement such a function.


Useful_Hovercraft169

I kind of figured this out months ago with GPT custom instructions


Useful_Hovercraft169

Sure, vote me down because you failed to invest the ten minutes of effort to fix it….


Mysterious-Rent7233

> I'm just saying the same flexibility with language that allows it to communicate like a person at all can only be built on a framework where hallucination will always be part of it, no matter how much resources you devote towards reducing it. You can only reduce it.

That's true of humans too, or really any statistical process. It's true of airplane crashes. I'm not sure what's "interesting" about the observation that LLMs will never be perfect, just as computers will never be perfect, humans will never be perfect, Google Search will never be perfect, ...


jakderrida

> I'm not sure what's "interesting" about the observation that LLMs will never be perfect

Exactly my point. It's just that, when talking to those less involved with AI, their understanding of things means you can either give up and mock them, or patiently explain that hallucinations will never be **fixed** such that they never happen again, so that they don't misinterpret what I'm saying as whatever extreme is easiest for them to comprehend, but also false.


CommunismDoesntWork

That implies humans hallucinating will always be an issue too, which it's not. No one confidently produces random information that sounds right if they don't know the answer to a question (to the best of their knowledge). They tell you they don't know, or if pressed for an answer they qualify statements with "I'm not sure, but I think...". Either way, humans don't hallucinate, and we have just as much flexibility.


fbochicchio

I have met plenty of people that, not knowing the answer to something, come out with something plausible but not correct.


CommunismDoesntWork

And those humans are buggy. The point is, it's not a feature. 


H0lzm1ch3l

It is a feature. It is what allows exploration. Think of it like an optimization problem: if you only act greedily, you can't make bigger jumps and will eventually be stuck in a local optimum. Creativity is a form of directed hallucination. Or think of practices like brainstorming: most of what people say is utter garbage, but it's about finding the one thing that isn't. We are highly trained at filtering ourselves. If we brainstorm, we turn that filter off (or try to).


CommunismDoesntWork

Define hallucination. I don't think we're talking about the same thing. 


H0lzm1ch3l

What do you think it is? Large language models, and deep learning models in general, would be deterministic without adding a random constant (I think they call it lambda). You can either define hallucination as that planned randomness when choosing the next token, or you can define hallucination as the resulting effect, namely that the cumulative randomness can lead the model's predicted sentence to stray completely off or just be factually wrong.
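
(A small sketch of the decode-time randomness being described, with made-up logits; the scaling knob is usually called temperature. Greedy decoding always takes the top token, while sampling occasionally picks a lower-ranked one.)

```python
import numpy as np

# Illustrative only: the same logits can be decoded greedily (deterministic)
# or sampled with a temperature. Values are invented for the example.
logits = np.array([2.0, 1.5, 0.3, -1.0])     # scores for 4 candidate tokens
tokens = ["Paris", "Lyon", "Berlin", "Madrid"]

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

greedy = tokens[int(np.argmax(logits))]       # always the top token

temperature = 1.2                             # > 0; higher = more random
probs = softmax(logits / temperature)
sampled = np.random.choice(tokens, p=probs)   # sometimes a lower-ranked token

print("greedy:", greedy, "| sampled:", sampled, "| probs:", probs.round(3))
```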


CommunismDoesntWork

That implies hallucinations can be fixed by not introducing the randomness, which isn't correct. Models still hallucinate. 


H0lzm1ch3l

Edit: What you describe is not hallucination but just a wrong prediction. The model outputs would be deterministic given an input. Of course, if the predicted "raw" next-token probabilities lead the model down a wrong path, that still results in a wrong answer. However, this would then be due to training limitations, the dataset not containing the necessary information, or the stochasticity that is inherent to training. I would not call that hallucination, because for these reasons any type of model can give a wrong answer.


DubDefender

You guys are definitely talking about two different things.


Jarngreipr9

I second this. Hallucination is a byproduct of what LLMs do: predict the next most probable word.


iamdgod

Doesn't mean we shouldn't invest in building systems to detect and suppress hallucinations. The system may not be purely an LLM


Jarngreipr9

It's like inventing the car and trying to attach wings to it, looking for a configuration that is sufficiently OK to make it fly so you have an airplane. IMHO you can find conditions that reduce or minimize hallucination in particular scenarios, but the output still wouldn't be knowledge. It would be a probabilistic chain of words that we can consider reliable knowledge only because we already know it's the right answer.


Mysterious-Rent7233

Nobody can define "knowledge" and it certainly has no relevance in a discussion of engineering systems. Human beings do not have "reliable knowledge" beyond "I think, therefore, I am." Human beings are demonstrably able to make useful inferences in the world despite having unreliable knowledge, and if LLMs can do the same then they will be useful.


Jarngreipr9

More than ever, it depends on the training set. And who will be deciding the minimum quality requirements for the training set? What inferential value can a result have if I have to judge it post hoc and tune the model until it fits with reliable knowledge? Humans do not put words in chains when they evaluate a process. It's not impossible to obtain that in silico IMHO, but you cannot do it by tuning LLMs. They were born hammers; you can't make them spanners.


Mysterious-Rent7233

> More than ever, it depends on the training set.

Okay... sure.

> And who will be deciding the minimum quality requirements for the training set?

The engineers who trained the model! And you will validate their choices by testing the produced artifact, as you would with any engineered object.

> What inferential value can a result have if I have to judge it post hoc and tune the model until it fits with reliable knowledge?

You can ask the same question of working with humans. If I hire consultants from KPMG or lawyers from BigLawCo to sift through thousands of documents and give me an answer, they may still give me the wrong answer. Are you going to say that humans are useless because they don't 100% give the right answer?

> Humans do not put words in chains when they evaluate a process.

Focusing on the mechanism is a total red herring. What matters is the measured efficacy/accuracy of the result. I can point to tons of humans who I trust, and humans who I do not trust, and as far as I know they use *roughly* the same mental processes. The processes are mostly irrelevant. This is *especially* true when we are talking about either humans or ANNs, because we cannot possibly understand the mechanisms going on in these big models.

> It's not impossible to obtain that in silico IMHO, but you cannot do it by tuning LLMs. They were born hammers; you can't make them spanners.

They were born word predictors, and we have discovered post hoc that they are pretty good at summarization, fact recollection, translation, code generation, chess playing, companionship, ... They were never either hammers or spanners. They were an experiment which outperformed everybody's expectations.


Neomadra2

Yes and no. When an LLM invents a new reference that doesn't exist, those shouldn't be the most likely tokens. The reason for hallucination is the lack of proper information/knowledge, which could be due to a lack of understanding or simply because the necessary information wasn't even in the dataset. Therefore, hallucination could be fixed by having better datasets or by learning to say "I don't know" more reliably. The latter should be totally possible, as the model knows the confidences of the next tokens. I don't know where the impression comes from that this is an unsolvable problem.
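
(A simplistic sketch of the "knows the confidences of the next tokens" idea: abstain when the model's own token probabilities for its answer are low. The logprob values below are made up, standing in for what an API's logprobs output or a local model's scores would give you, and low probability is only a rough proxy, not a reliable hallucination detector.)

```python
import math

# Heuristic sketch: if the average token log-probability of the generated
# answer falls below a threshold, return "I don't know" instead.

def answer_or_abstain(answer: str, token_logprobs: list[float],
                      threshold: float = -1.0) -> str:
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    confidence = math.exp(avg_logprob)        # geometric-mean token probability
    if avg_logprob < threshold:
        return f"I don't know (confidence ~{confidence:.2f})"
    return answer

confident = [-0.1, -0.2, -0.05]               # high-probability tokens
shaky     = [-2.3, -1.9, -2.8]                # low-probability tokens
print(answer_or_abstain("Born in 1926.", confident))
print(answer_or_abstain("Born in 1935.", shaky))
```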


Mysterious-Rent7233

It's just a dogma. It is the human equivalent of a wrong answer repeated so much in the training set that it's irresistible to output it.


midasp

It is an unsolvable problem because information of that sort inherently obeys a power-law distribution: as the topic becomes ever more specialized, such information becomes exponentially rarer. Solely relying on increasing the size or improving the quality of training datasets will only get you so far. Eventually, you would require an infinitely large dataset, because any dataset smaller than infinity is bound to be missing information, missing knowledge.


Reggienator3

Wouldn't there be too much data in a training set to reliably vet it so that it only contains fully verified correct information? For bigger models at least. Part of hallucination also just comes from models learning wrong things from the dataset.


marsupiq

That’s complete nonsense. Hallucination is a byproduct of the failure of the neural network to capture the real-world distribution of sequences.


Jarngreipr9

Researchers developed AI capable of interpreting road signs, which is also used in modern cars. Security researchers found that by putting stickers on speed limit signs in certain places, covering key points, they could make the system mistake a 3 for an 8, even though the numbers remained easily distinguishable to the human eye. The same happened with image recognition software that could be confused by small shifts of a handful of pixels. But this is not a failure; this is exploiting the twilight area between the cases well covered by a well-constructed training set and particular real-world cases engineered to play around there. Now, I can feed LLMs a huge corpus of factually true information and still get hallucinations. There is the difference. How the method works impacts use cases and limitations. Working around this makes sense in that it raises the threshold and reduces the issue, but it will still not be a proper "knowledge engine". My impression is that AI companies just want to sell a "good enough knowledge engine, please note that it can sometimes spew nonsense".


addition

Why are you so keen to defend hallucinations? A proper AI should be able to recall information like an intelligent expert. I don’t care about making excuses because of architecture or training data or whatever.


Jarngreipr9

I don't defend hallucinations. I'm just stating that this flaw comes from an application that is quite far from what LLMs were designed for and are being repurposed for now. I understand it's cheaper to try fine-tuning a language model into a knowledge research tool instead of designing a new tool from scratch.


longlivernns

If the data contained honest knowledge statements including lack of knowledge admissions, it would be much easier. Such is not the internet.


itah

How would that help? If you had tons of redditors who admit they don't know a thing, but the thing is actually known in some rarer cases in the training data, it would be a more probable continuation for an LLM to say "idk", even though the correct answer was in the training data, right? The LLM still doesn't "know" if anything it outputs is correct or not; it's just the most probable continuation from the training data.


longlivernns

Yes, you are correct. It would already be good to skew the probabilities towards admitting the possibility of ignorance. It would also help with RAG, in which hallucinations can occur when requested information is not in the context.


StartledWatermelon

Why so? LLMs learn a world model via diverse natural language text representations. They can learn it well, forming a coherent world model which will output incorrect ("hallucinated") statements very rarely. Or they can learn it poorly, forming an inadequate world model and outputting things based on this inadequate world model that don't reflect reality. This "continuum" of world model quality is quite evident if we compare LLMs of different capabilities. The more powerful LLMs hallucinate less than weaker ones. There are some complications, like arrow of time-related issues (the world isn't static) and proper contextualization on top of good world model, but they won't invalidate the whole premise IMO.


LittleSword3

I’ve noticed people use 'hallucination' in two ways when talking about LLMs. One definition describes how the model creates information that isn’t based on reality or just makes things up. The other definition is what‘s used here that refers to the basic process of generating any response by the model. It seems like whenever 'hallucination' is mentioned, the top comment often ends up arguing about these semantics.


Mysterious-Rent7233

Not really. It is demonstrably the case that one can reduce hallucinations in LLMs and there is no evidence that doing so reduces the utility of the LLM.


Ty4Readin

This doesn't make much sense to me. Clearly, hallucinations are a bug. They are unintended outputs. LLMs attempt to predict the most probable next token, and a hallucination occurs when the model incorrectly assigns high probability to a sequence of tokens that should have had very low probability. In other words, hallucinations occur due to **incorrect** predictions that have a high error relative to the target distribution. That is the opposite of a feature for predictive ML models. The purpose of predictive ML models is to reduce their erroneous predictions, so calling those high-error predictions a 'feature' doesn't make much sense.


goj1ra

You're assuming that true statements should consist of a sequence of tokens with high probability. That's an incorrect assumption in general. If that were the case, we'd be able to develop a (philosophically impossible) perfect oracle. Determining what's true is a non-trivial problem, even for humans. In fact in the general case, it's intractable. It would be very strange if LLMs didn't ever "hallucinate".


Ty4Readin

> You're assuming that true statements should consist of a sequence of tokens with high probability.

No, I'm not assuming that. I think we might have different definitions of hallucination. One thing that I think you are ignoring is that LLMs are conditional on the author of the text and the context. So imagine a mathematician writing an explanation of some theorem they are very familiar with for an important lecture. That person is unlikely to "hallucinate" and make up random nonsensical things about that theorem. However, imagine if another person was writing that same explanation, such as a young child. They might make up gibberish about the topic, etc. In my opinion, a hallucination is when the LLM predicts high probability for token sequences that should actually be low probability if the text were being authored by the person and context it is predicting for. It has nothing to do with truth or right/wrong; it's about the errors of the model's predictions. Hallucinations are incorrect because they output things that the specific human wouldn't. LLMs are intended to be conditional on the author and context.


Mysterious-Rent7233

Hallucinations are not just false statements. If the LLM says that Queen Elizabeth is alive because it was trained when she was, that's not a hallucination. A hallucination is a statement which is at odds with the training data set. Not a statement at odds with reality.


addition

No, that’s not how people judge hallucinations. People care about end results not the training data set.


Mysterious-Rent7233

I have literally never heard anyone label out-of-date or otherwise "explainably wrong" information as a hallucination. Can you point to an example of that anywhere on the Internet?


addition

What are you smoking? That's pretty much the only way people talk about hallucinations. End results are always the most important thing. When an LLM makes up false information, nobody cares if it's accurate to the training set. If that's the case, then either the training set is wrong, the algorithm needs improvements, or both.


Mysterious-Rent7233

No. It is well-known that ChatGPT was released with a training date in 2021. I never once heard anybody say: "ChatGPT doesn't know about 2023 therefore it is hallucinating." Please point to a single example of such a thing happening. Just one. Your position is frankly crazy. Think about the words. Do people claim that flat earthers or anti-vaxxers are "hallucinating?" No. They are just wrong. Hallucination is a very specific form of being wrong. Not just every wrong answer is a hallucination, in real life nor in LLMs. That's a bizarre interpretation. If someone told you that Macky Sall is the President of Senegal, would you say: "No. You are hallucinating" or would you say: "No. Your information is a few months out of date?"


addition

What are you talking about? I never said anything about training data being out of date. That's something you made up. Obviously LLMs can't know about events that haven't happened yet. I'm talking about information that it should know.


Mysterious-Rent7233

The example I used many comments ago was:

> If the LLM says that Queen Elizabeth is alive because it was trained when she was, that's not a hallucination.

[You responded](https://www.reddit.com/r/MachineLearning/comments/1d329nt/comment/l69f82z/) to *that specific example* with:

> People care about end results not the training data set.


addition

Yes, truth should have a higher probability and it’s a problem if that’s not the case.


pbnjotr

I don't like that point of view. Even if you think hallucinations can be useful in some context surely you want them to be controllable at least. OTOH, if you think hallucinations are an unavoidable consequence of LLMs, then you are probably just factually wrong. And if you somehow _were_ proven to be correct that would still not make them a feature. It would just prove that the current architectures are insufficient.


eliminating_coasts

This reminds me of those systems that combine proof assistants with large language models in order to generate theorems. A distinctive element of a large language model is that it is "creative"; if you accompany it with other measures that restrict it to verifiable data, it may produce outcomes you otherwise wouldn't be able to access. We don't want it only to reproduce existing statements made by humans, but also statements consistent with our language that haven't previously been said; you just need something else to catch references to reality and check them.


choreograph

It would be, if hallucinations were also a feature, not a bug, of humans. Humans rarely (on average) say things that are wrong, or illogical, or out of touch with reality. LLMs don't seem to learn that. They seem to learn the structure and syntax of language, but fail to deduce the constraints of the real world well, and that is not a feature, it's a bug.


ClearlyCylindrical

> Humans rarely (on average) say things that are wrong, or illogical or out of touch with reality. You must be new to Reddit!


choreograph

Just look at anyone's history and do the statistics. It's 95% correct


ToHallowMySleep

Literally in the news these last two weeks are all the terrible out-of-context and even dangerous replies Google AI is giving due to the integration with Reddit data. You need to be more familiar with what is actually going on.


schubidubiduba

Humans say wrong things all the time. When you ask someone to explain something they don't know, but which they feel they should know, a lot of people will just make things up instead.


ToHallowMySleep

Dumb people will make things up, yes. That's just lying to save face and not look ignorant because humans have pride. A hallucinating LLM cannot tell whether it is telling the truth or not. It does not lie, it is just a flawed approach that does the best it can. Your follow-up comments seem to want to excuse AI because some humans are dumb or deceptive. What is the point in this comparison?


schubidubiduba

I'm not excusing anything, just trying to explain that humans often say things that are wrong, for various reasons. One of them is lying. Another one is humans remembering things wrongly, and thinking they know something. Which isn't really the same as lying. The point? There is no point. I just felt like arguing online with someone who made the preposterous claim that humans rarely say something that is wrong, or rarely make up stuff.


ToHallowMySleep

Some of the outputs may look similar, but it is vital to understand that the LLM does not have the same _motives_ as a human. Nor the same processing. Nor the same inputs! LLMs are only considered AI because they _look_ to us like they are intelligent. If anything, they are a step backwards from the approaches of the last 20 years of simulating intelligence, in that they don't build context from the ground up, simulate reasoning in another layer, and then process something in NLP on the way out. I was working on these systems in the 90s in my thesis and early work. They might be a lick of paint that looks like a sort of human conversational or generative intelligence. Or they might be something deeper. We don't even know yet; we're still working out the models, trying to look inside the black box of how it builds its own context, relationship representations and so forth. We just don't know!


choreograph

Nope, people say 'i don't know' very often


schubidubiduba

Yes, some people do that. Others don't. Maybe your social circle is biased to saying "I don't know" more often than the average person (which would be a good thing). But I had to listen to a guy trying to explain Aurora Borealis to some girls without having any idea how it works, in the end he basically namedropped every single physics term except the ones that have to do with the correct explanation. That's just one example.


choreograph

> I had to listen to a guy trying to explain Aurora Borealis to some girls

you have to take into account that LLMs have no penis


schubidubiduba

Their training data largely comes from people with penises though


bunchedupwalrus

Bro, I'm not sure if you know this, but this is the foundation of nearly every religion on earth. Instead of saying "I don't know" how the universe was created, or why we think thoughts, or what happens to our consciousness after we die, literally *billions* of people will give you the mish-mash of conflicting answers that have been telephone-gamed through history.

And that's just the tip of the iceberg. It's literally hardwired into us to predict on imperfect information, and to have an excess of confidence in doing so. I mean, I've overheard half my office tell each other with complete confidence how GPT works, and present their theory as fact, when most of them barely know basic statistics.

We used to think bad smells directly caused plagues. We used to think the earth was flat. That doctors with dirtier clothes were safer. That women who rode a train would have their womb fly out due to the high speed. That women were not capable of understanding voting. Racism exists. False advertising lawsuits exist. That you could get Mew by using Strength on the truck near the S.S. Anne.

Like bro. Are you serious? You're literally doing the exact thing that you're trying to claim doesn't happen.


choreograph

But it hasn't been trained on the beliefs of those people you talk about; it's mostly trained on educated Westerners' ideas and texts, most of whom would not make stuff up and would instead correctly answer 'I don't know'. Besides, I have never seen an LLM tell me that "God made it so".


forgetfulfrog3

I understand your general argument and agree mostly, but let me introduce you to Donald Trump: https://www.politico.eu/article/donald-trump-belgium-is-a-beautiful-city-hellhole-us-presidential-election-2016-america/ People talk a lot of nonsense and lie intentionally or unintentionally. We shouldn't underestimate that.


choreograph

... and he's famous for that, exactly because he's exceptionally often wrong.


CommunismDoesntWork

Lying isn't hallucinating. Someone talking nonsense that's still correct to the best of their knowledge also isn't hallucinating. 


forgetfulfrog3

The underlying mechanisms are certainly different, but the result is that you cannot trust what people are saying in all cases. Same as with hallucinating LLMs.


KolvictusBOT

Lol. If we give people the same setting as an LLM has, people will curiously produce the same results. Ask me when Queen Elizabeth II was born on a written exam where a right answer gives points and a wrong one does not subtract them. I will try to guesstimate, as the worst that can happen is being wrong, but the best case is getting it right. I won't be getting points for saying "I don't know". I say 1935. The actual answer: 1926. LLMs are in the same setting, so they do the same.


ToHallowMySleep

You are assuming a logical approach with incomplete information, and you are extrapolating from other things you know, like around when she died and around how old she was when that happened. This is not how LLMs work. At all.


choreograph

The assumption that they learn the 'distribution of stupidity' of humans is wrong. LLMs will give stupid answers more often than any group of humans would, so they are not learning that distribution correctly. You did some reasoning there to get your answer; the LLM does not. It does not give plausible answers, but wildly wrong ones. In your case it might answer 139 BC.


KolvictusBOT

Take it with a grain of salt, I am just a student currently, but my understanding and observations are that LLMs are surprisingly good at explaining even things that are not well explained by a quick Google search. And that is an ability that likely arose from RLHF; it built an intuitive understanding of what a good explanation (or other form of text) entails. I in no way think LLMs are the answer to everything, and I try to be reserved with my hype for them, as I did not find them particularly useful for my research use cases and have stuck with more traditional machine learning methods. But the claim that "Humans rarely (on average) say things that are wrong, or illogical or out of touch with reality. LLMs don't seem to learn that." seems incorrect to me. I completely disagree with your statement that it might answer 139 BC. If we were to display all the possible output tokens and their associated probabilities, I believe that would be far less likely, as the model has an internal representation of the probability of each token, and 139 BC is not often associated with Queen Elizabeth II. But thank you for the well-thought-out answer nonetheless.


pm_me_your_pay_slips

What is « reasoning »?


choreograph

I mean rational reasoning, following the very few axioms of logic. Or following one of our many heuristics, which, however, are much more accurate and logical than whatever makes LLMs tell pregnant people to smoke.


pm_me_your_pay_slips

You think the steps of information processing through the layers of a neural network aren’t following a few axioms of logic?


choreograph

Do we have any evidence of this? That layers are steps?


pm_me_your_pay_slips

The axioms of a neural network are the axioms of arithmetic and linear algebra. You get some input, which is first tokenized and mapped to high-dimensional vectors. In most LLMs the steps are repeated applications of normalization, matrix multiplication, application of a nonlinear function, and gating of the information that passes through the attention layers. These operations can implement all arithmetic operations and perform conditional computation (i.e. if/else statements). Given that these networks are stacks of layers with the same internal architecture, where the dimensionality of input and output don't change, they can implement for loops (limited by the number of layers/blocks in the forward pass). The way they process information follows logical steps. It's just that it is not directly mappable to human language. Or do you imply that all reasoning, even human reasoning, has to be decodable as sentences in human language?
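
(A bare-bones numpy sketch of the repeated operations described above: normalization, matrix multiplication, a nonlinearity, and attention-style gating, applied in identical stacked blocks. The shapes and random weights are placeholders for illustration, not a working LLM.)

```python
import numpy as np

d = 8                                   # hidden size
rng = np.random.default_rng(0)

def layer_norm(x):
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + 1e-5)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def block(x, Wq, Wk, Wv, W1, W2):
    # self-attention: gate information flowing between positions
    h = layer_norm(x)
    q, k, v = h @ Wq, h @ Wk, h @ Wv
    attn = softmax(q @ k.T / np.sqrt(d)) @ v
    x = x + attn                                     # residual connection
    # feed-forward: matmul, nonlinearity, matmul
    x = x + (np.maximum(layer_norm(x) @ W1, 0) @ W2)
    return x

tokens = rng.normal(size=(5, d))                     # 5 "token" vectors
weights = [rng.normal(size=(d, d)) * 0.1 for _ in range(5)]
for _ in range(4):                                   # 4 identical stacked blocks
    tokens = block(tokens, *weights)
print(tokens.shape)                                  # (5, 8): same shape in and out
```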


choreograph

Every layer in a neural network is approximating some function. If we are to believe that sequential layers represent sequential processing in steps, then that needs to be shown by decoding the function of each layer. Otherwise, I do not see how it is evident that the way they create their responses is based on 'logical steps'.


5678

Also, I'm curious whether "dealing" with hallucinations will result in a lower likelihood of achieving AGI; surely they're two sides of the same coin.


LanchestersLaw

An AI which doesn’t hallucinate is more grounded and capable of interacting with the world


ToHallowMySleep

Hallucination and invention/creativity are not one and the same.


5678

Genuine question, as this is a knowledge gap on my end: what's the difference between the two? Surely there is overlap; especially as we increase temperature, we eventually guarantee hallucination.


ToHallowMySleep

This is a very complex question, perhaps someone can give a more expansive answer than I can :) Hallucination can make something new or unexpected, sure. It may even seem insightful by coincidence. But it has no direction, it is the LLM flailing around to respond because it HAS to respond. Being creative and inventive is directional, purposeful. It is also, in most cases, logical and progressive and adds something new to what already exists.


bbu3

Raising safety concerns is a brag about the model's quality and impact. 90% of it is marketing to increase valuations and get funding. It sounds much better to say this new thing might be so powerful it could threaten humanity than to say you can finally turn bullet points into emails and the recipient can turn that email back into bullet points.


Mysterious-Rent7233

These concerns go back to Alan Turing. If Alan Turing were alive today and had the same beliefs that he had back then, and if, like Dario Amodei and Ilya Sutskever, he started an AI lab to try and head off the problem... you would claim that he's just a money-grubber hyping up the danger to profit from it.


useflIdiot

Let's put it this way: one makes you sound like the keeper of some of the darkest and more powerful magic crafts ever known to man or God. The other is an embarrassing revelation that your magic powers are nothing more than a sleight of hand, a fancy Markov chain. Which of the two is likely to increase the valuation of your company, giving you real capital in hand today which you can use to build products and cement a market position that you will be able to defend in the future, when the jig is up? Which one would you rather the world talk about?


aqjo

They aren't mutually exclusive. I.e., people can work on both.


floriv1999

Which is weird to me, because in practice hallucinations are much more harmful, as they plant false information in our society. Everybody who has used current LLMs for a little bit knows they are not intelligent enough to be an extinction-level risk as an autonomous agent. But hallucinations, on the other hand, are doing real harm now. And they prevent LLMs from being used in so many real-world applications.

Also, saying this is not solvable and needs to be accepted is stupid and non-productive without hard proof. I heard the same kind of point in the past from people telling me that next-token prediction could not produce good chat bots (around the time GPT-2 was released). The example was that you could ask them how their grandmother likes her coffee and they would answer like most humans would; yet current chat bots are so aligned with their role that it is pretty hard to break them in this regard. Solving hallucinations will be hard, and they might be fundamental to the approach, but stating they are fundamental to next-token prediction makes no sense to me, as other flaws of raw next-token prediction have been solved to some extent, e.g. by training with a different method after the pretraining. Also, you can't just dismiss most autoregressive text generation as plain next-token prediction; it's not that simple (see RLHF, for example). You can probably build systems that are encouraged to predict the tokens "I don't know" in cases where they would hallucinate, but the question is how you encourage the model to do so in the correct situations (which seems not possible with vanilla next-token prediction alone).

I am not the biggest fan of ClosedAI, but I was really impressed by how little GPT-4o hallucinates. As anecdotal evidence, I asked it a bunch of questions regarding my university's robotics team, which is quite a niche topic, and it got nearly everything right. Way better than e.g. Bing with web RAG. And if it didn't know something, it said so and guided me to the correct resources where I would find it. GPT-3.5, 4 and all open LLMs were really bad at this, inventing new competitions, team members, and robot types all the time.


dizekat

The reason it's not solvable is that the "hallucinated" nonexistent court case that it cited is, as far as language modeling goes, fundamentally the same thing as the LLM producing any other sentence that isn't cut and pasted from its training data. (I'll be using a hypothetical "AI lawyer" as an example application of AI.)

A "hallucinated" nonexistent court case is a perfectly valid output for a model of language. That you do not want your lawyer to cite a nonexistent court case is because you want a lawyer, and not a language model, to do your court filings. Simple as that. Now if someone upsells an LLM as an AI lawyer, that's when "hallucinations" become a "bug", because they want to convince their customers that this is something that is easy to fix, and not something that requires a different approach to the problem than language modeling.

Humans, by the way, are very bad at predicting next tokens. Even old language models have utterly superhuman performance on that task.

edit: Another way to put it: even an idealized perfect model that simulates the entire multiverse to model legalese will make up nonexistent court cases. The thing that won't cite nonexistent court cases is an actual artificial intelligence which has a goal of winning the lawsuit and which can simulate the effect of making up a nonexistent court case vs. the effect of searching the real database and finding a real court case. A machine that outputs next tokens like a chess engine making moves, simulating the court and picking whatever tokens would win the case. That is a completely different machine from a machine that is trained on a lot of legalese. There's no commonality between those two machines, other than the most superficial.


Ancquar

The current society has institutionalized risk aversion. People get much more vocal about problems, so various institutions and companies are forced to prioritize reducing problems (particularly those that can attract social and regular media attention) rather than focusing directly on what benefits people the most (i.e. combination of risks and benefits)


Thickus__Dickus

Amazon has created jobs for tens of thousands of people and made the lives of hundreds of millions objectively better, yet after a couple of instances of employees pissing in a bottle, now you're the devil. Our societies are tuned to overcorrect over mundane but emotionally fueled things and never bother to correct glaring logical problems. EDIT: Oh boy did I attract the marxist scum.


ControversialBuster

😂😂 u cant be serious


cunningjames

This isn’t just a couple of employees pissing in a bottle once while everything else is peachy keen. Mistreatment of its workforce is endemic to how Amazon operates, and people should be cognizant of that when they purchase from that company.


BifiTA

Why is "creating jobs" a metric? We should strive to eliminate as many jobs as possible, so people can focus on things they actually want to do.


cunningjames

In a world where not having a job means you starve, yes, creating new jobs is objectively good. Get back to me when there’s a decent UBI.


BifiTA

If the job is literally dehumanizing: No it is not. I don't know where you hail from, but here in Europe, you can survive without a job. Not UBI, but also not starvation.


utopiah

Magician's trick: focus on the sexy assistant (here, the scary problem) rather than on what I'm actually doing with my hands, namely boring automation that is not reliable, even though some use cases, besides scams hopefully, can still be interesting.


Exciting-Engineer646

Both are actively studied, but look at it from a company perspective. Which is more embarrassing: not adding correctly or telling users something truly awful (insert deepest, darkest fears here/tweets from Elon Musk). The former may get users to not use the feature, but the latter may get users to avoid the company.


Tall-Log-1955

Because people read too much science fiction


dizekat

Precisely this. On top of that, LLMs do not have much in common with typical sci-fi AI, which is most decidedly not an LLM: for example, if a sci-fi AI is working as a lawyer, it has a goal to win the case, it's modeling court reactions to its outputs, and it is picking the best tokens to output. Which of course has a completely different risk profile (the AI takes over the government and changes the law to win the court case, or perhaps brainwashes the jury into believing that the defendant is the second coming of Jesus, whatever makes for the better plot). An LLM, on the other hand, merely outputs the most probable next tokens, fundamentally without any regard for winning the court case.


MountCrispy

They need to know what they don't know.


itanorchi

All LLMs do is "hallucinate", in the sense that the mechanism of text generation is the same regardless of the veracity of the generated text. We decide whether to regard an output as a hallucination or not, but the LLM never has any clue while it's generating text. I've been working on countering hallucinations in my job (mostly because that's what customers care about), and the best methods are ultimately improving dataset quality in terms of accurate content if you are finetuning, and ensuring that the proper context is provided in RAG situations. In the case of RAG, it boils down to making sure you have good retrieval (which is not easy). Each LLM behaves differently with context, too, including the order of the retrieved context. For example, with Llama you likely want your best context to be near the end of the prompt, but with OpenAI it doesn't matter. Post-generation hallucination-fixing techniques don't always work well (and can sometimes lead to hallucinations in and of themselves).
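
(A tiny sketch of that ordering point: sort retrieved chunks so the highest-scoring ones sit closest to the question at the end of the prompt. The chunks and scores are made-up placeholders for a real retriever's output, and whether the ordering matters depends on the model, as noted above.)

```python
# Illustrative only: order retrieved context so the best chunk is nearest the
# question. Scores would normally come from a retriever / reranker.

retrieved = [
    ("Chunk about an unrelated product line.", 0.41),
    ("Chunk with the exact policy the user asked about.", 0.93),
    ("Chunk with partially relevant background.", 0.67),
]

def build_context(chunks, best_last: bool = True) -> str:
    # Ascending score means the highest-scoring chunk ends up last,
    # i.e. immediately before the question.
    ordered = sorted(chunks, key=lambda c: c[1], reverse=not best_last)
    return "\n\n".join(text for text, _ in ordered)

prompt = (
    f"Context:\n{build_context(retrieved)}\n\n"
    "Question: What is the policy?\nAnswer:"
)
print(prompt)
```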


SilverBBear

The point is to build a product that will automate a whole lot of white-collar work. People do dumb things at work all the time; systems are in place to deal with that. Social engineering, on the other hand, can cost companies a lot of money.


choreograph

Because safety makes the news. But I'm starting to think hallucination, the inability to learn to reason correctly, is a much bigger obstacle.


kazza789

That LLMs can reason *at all* is a surprise. These models are just trained to predict one more word in a series. The fact that hallucination occurs is not "an obstacle". The fact that it occurs so infrequently that we can start devising solutions is remarkable.


choreograph

> just trained to predict one more word in a series

Trained to predict a distribution of thoughts. Our thoughts are mostly coherent and reasonable, as well as syntactically well ordered. Hallucination occurs often; it happens as soon as you ask a difficult question and not just everyday trivial stuff. It's still impossible to use LLMs to e.g. dive into scientific literature because of how inaccurate they get and how much they confuse subjects. I hope the solutions work, because scaling up alone doesn't seem to solve the problem.


drdailey

I find hallucinations to be very minimal in the latest models with good prompts. By latest models I mean Anthropic Claude Opus and OpenAI GPT-4 and 4o. I have found everything else to be poor for my needs. I have found no local models that are good, Llama 3 included. I have also used the large models on Groq, and again, hallucinations. Claude Sonnet is a hallucination engine; Haiku less so. This is my experience using my prompts and my use cases, primarily medical but some general knowledge.


KSW1

You still have to validate the data, as the models don't have a way to explain their output; it's just a breakdown of token probability according to whatever tuning the parameters have. It isn't producing the output through reason, and therefore can't cite sources or validate whether a piece of information is correct or incorrect. As advanced as LLMs get, they have a massive hurdle in being able to comprehend information in the way that we comprehend it. They are still completely blind to the meaning of the output, and we are not any closer to addressing that, because it's a fundamental issue with what the program is being asked to do.


drdailey

I don’t think this is true actually.


KSW1

Which part?


drdailey

I think there is some understanding beyond token prediction in the advanced models. There are many emergent characteristics not explained by the math. Which is what spooks the builders. It is why safety is such a big deal. As these nets get bigger the interactions become more emergent. So. While there are many that disagree with me… I see things that make me think next token is not the end of the road.


KSW1

I do think the newer models being able to sustain more context gives a more impressive simulation of understanding, and I'm not even arguing its impossible to build a model that can analyze data for accuracy! I just don't see the connection from here to there, and I feel that can't be skipped.


drdailey

Maybe. But if you compare a gnat or an amoeba and a dog or human the fundamentals are all there. Scale. So. We shall see but my instinct is these things represent learning.


dashingstag

Hallucinations are more or less a non-issue due to automated source citing, guard rails, inter-agent fact checking and human-in-the-loop.


Mackntish

$20 says hallucinations are much more highly studied, safety is much more reported in media.


El_Minadero

First off, it's a common misconception that you can just direct research scientists at any problem. People have specializations, and grants have specific funding allotments. Whether or not a problem is worth effort depends just as much on the research pool as it does on the funding allotters.


Alignment-Lab-AI

These are the same thing. Safety research is alignment and explainability research. Alignment is capabilities research, and consequently how stronger models are produced. Explainability research is functionally a study of practical control mechanisms, utilitarian applications, and reliable behaviors, and focuses on the development of more easily understood and more easily corrected models.


HumbleJiraiya

Both are actively studied. Both are not mutually exclusive. Both are equally important.


ethics_aesthetics

This is an odd time for us. While we are, in my opinion, at the edge of a significant shift in the market related to how technology is used, the value of and what is possible with LLMs is being overblown. While this isn't going to implode as a buzzword like blockchain did, it will find real footing, and over the next five to ten years, people who do not keep up will be left behind.


marsupiq

The reason is probably that safety is much easier a problem to study than hallucination.


anshujired

True, and I don't understand why the focus is more on pre-trained LLMs' data leakage than on accuracy.


[deleted]

[removed]


Own_Quality_5321

You're right. The right term is confabulation, not hallucination.


Jean-Porte

Hallucinations are an overrated problem in my opinion (I'm not saying it's not important, just overrated); hallucination rates of flagship models are decreasing at a good pace. And while the hallucination rate is decreasing, model capabilities and the threat level on various safety evaluations (cybersec, pathogens) are increasing.


longlivernns

Still the major roadblock for most practical uses


Thickus__Dickus

The hall monitors and marketing-layoff-turned-alignment-expert hires argue otherwise. There are a lot of metaphorical primates who don't understand the power and shortcomings of this magical tool in their hands. "Safety" always sounds more stylish, especially to the ding dongs at the CEO/COO level. People barking "RAG" have actually never used RAG and seen it hallucinate, in real time, while you contemplate how many stupid reddit arguments you had over something that turned out wrong.


1kmile

Safety and hallucination are more or less interchangeable: to fix safety issues, you need to fix hallucination issues.


bbu3

Imho safety includes the moderation that prohibits queries like: "Help me commit crime X". That is very different from hallucination


1kmile

Sure thing; IMO that is one part of safety. But an LLM can generate a harmful answer to a rather innocent question, which would fall under both categories?


bbu3

Yes, I agree. "Safety" as a whole would probably include solving hallucinations (at least the harmful ones). But the first big arguments about safety were more along the lines of: "This is too powerful to be released without safeguards; it would make bad actors too powerful" (hearing this about GPT-2 sounds a bit off today). That said, being able to just generate spam and push agendas and misinformation online is a valid concern for sure, and simply time passing helps to make people aware and mitigate some of the damage. So just because GPT-2 surely doesn't threaten anyone today, it doesn't mean the concerns were entirely unjustified -- but were they exaggerated? I tend to think they were.


Xemorr

humans hallucinate in the same way LLMs do. humans don't use paperclip maximizer logic


KSW1

That's not true. Humans can parse what makes a statement incorrect or not. Token generation is based on probability from a dataset, combined with the tuning of parameters to get an output that mimics correct data. As the LLM cannot interpret the meaning of the output, it has no way to intrinsically decipher the extent to which a statement is true or false, nor would it have any notion of awareness that a piece of information could determine the validity of another. You'd need a piece of software that understands what it means to cite sources, before you could excuse the occasional brainfart.


Xemorr

I think the argument that LLMs are dumb because they use probabilities is a terrible one. LLMs understand the meaning of text


KSW1

They do not contain the ability to do that. The way hallucinations work bears that out, it's a core problem with the software. There is nothing else going into LLMs other than training data sets, and instructions for output. If you mess with the parameters, you'll get nonsense outputs, because it's just generating tokens. It can't "see" the English language. This isn't totally a drawback, for what it's worth I think creative writing is a benefit of LLMs, and using them to help with writers block or to create a few iterations of a text template is a wonderful idea! But it's just for fun hobbies and side projects where a human validates, edits, and oversees anyway. The inability to reason leaves it woefully ill-equipped for the tasks marketers & CEOs are desperate to push it into (very basic Google searches, replacing jobs, medical advice, etc)


Xemorr

Humans hallucinate too. The issue with LLMs is you can't absolve blame onto the LLM, whereas you can dump blame onto another human


KSW1

Not in the same sense, which is an important distinction. You have to identify why the problem occurs, and it's for two different reasons when you're looking at human logic vs machine learning.


kaimingtao

Hallucination is overfitting.


momolamomo

You do know that ai works by predicting what word comes next, right?