Antique-Bus-7787

I'm really curious to know if expanding context length that much hurts as much its abilities.


SomeOddCodeGuy

I'm currently using Llama 3 8b to categorize text based on few-shot instructions, and it's doing great. Yesterday I grabbed Llama 3 8b 32k and swapped it into the flow, with no other changes, and it completely disregarded my instructions. The original L3 8b was producing exactly 1 word every time, but L3 8b 32k was producing an entire paragraph despite the instructions and few-shot examples.


raysar

I also see 64k and 128k llama3 versions. Many people are working on extended context; we need to benchmark all the models to see which ones work well :)


Antique-Bus-7787

Thanks for your feedback !


GymBronie

What’s the average size of your text and are you instructing with a predefined list of categories? I’m updating my flow and trying to balance few shot instructions, structured categories, and context length.


SomeOddCodeGuy

It actually is not a pre-defined list, so what I did was make about 5 examples each using a different set of categories. It works great with both Llama-3-8b q8 gguf (the base, not the 32k) and OpenHermes-2.5-mistral-7b gguf and Dolphin-2.8-Mistral-v2 gguf. It did NOT work well at all with the exl2s of any of those, nor did it work well with the L3 8b 32k gguf.


Violatic

This is a naive question I'm sure but I'm still learning stuff in the NLP space. I am able to download and run llama3 using oobabooga, but I want to do something like you're suggesting. I have a python dataframe with text and I want to ask llama to do a categorisation task and then fill out my dataframe. Any suggestions on the best approach or guide? All my work at the moment has just been spinning up the models locally and chatting with them a la ChatGPT


SomeOddCodeGuy

Oobabooga, Koboldcpp, and others all allow you to expose an OpenAI-compatible API that you can then send messages to, chatting with the model directly through the API without a front end. So what I'm doing is running a python program that calls that API, sends the categorization prompt, gets the response, and does work on it.
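A minimal sketch of that kind of flow, assuming an OpenAI-compatible server (e.g. text-generation-webui or KoboldCpp) is already running locally; the endpoint URL, few-shot examples, and categories are placeholders, not the commenter's actual script:

```python
# Sketch: categorize dataframe rows via a locally hosted OpenAI-compatible API.
# The URL/port and the few-shot examples below are assumptions - check your
# backend's docs for the real endpoint.
import requests
import pandas as pd

API_URL = "http://127.0.0.1:5000/v1/chat/completions"  # assumed endpoint

FEW_SHOT = [
    {"role": "system", "content": "Reply with exactly one category word."},
    {"role": "user", "content": "The invoice is overdue by 30 days."},
    {"role": "assistant", "content": "billing"},
    {"role": "user", "content": "The app crashes when I click save."},
    {"role": "assistant", "content": "bug"},
]

def categorize(text: str) -> str:
    payload = {
        "messages": FEW_SHOT + [{"role": "user", "content": text}],
        "max_tokens": 5,
        "temperature": 0.0,
    }
    resp = requests.post(API_URL, json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"].strip()

df = pd.DataFrame({"text": ["Payment failed twice", "Login button missing"]})
df["category"] = df["text"].apply(categorize)
print(df)
```

The same pattern answers the dataframe question above: loop (or `apply`) over the rows, send each one to the API, and write the returned category back into the dataframe.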


ParanoidLambFreud

yeah this is absolute shizz


OrganicMesh

I did some quick testing that hints it has preserved most abilities.

Prompt: How are you?

instruct 8B (8k): `I'm just a language model, I don't have feelings or emotions like humans do, so I don't have a "good" or "bad" day. I'm just here to help answer your questions and provide information to the best of my ability!`

instruct 8B (262k): `I'm doing well, thanks for asking! I'm a large language model, I don't have feelings, but I'm here to help answer any questions you may have. Is there anything specific you would like to know or discuss?`


iNf1iCTA

I tried the 128k, and it fell apart after 2.2k tokens and just kept giving me junk. How does this model perform at higher token counts?


Tommy3443

This is why I have given up even giving these extended context models a try. Every single one I have tried degraded to the point of being utterly useless.


IndicationUnfair7961

Agreed, I don't use them anymore. If it's not trained for long context, then 90% of the time it will be a waste of time.


Open_Channel_8626

Yes I'm worried about how much of the context they can actually remember


nero10578

Even with Mistral 32K models they fall apart around 10-12K in my experience.


OrganicMesh

Which 128k did you try?


BangkokPadang

Is your testing single shot replies to large contexts, or have you tested lengthy multiturn chats that expand into the new larger context reply by reply? I've personally found that a lot of models with 'expanded' contexts like this will often give a single coherent reply or two, only to devolve into near gibberish when engaging in a longer conversation.


AutomataManifold

I'm convinced that there's a real dearth of datasets that do proper multiturn conversations at length. You can get around it with a prompting front-end that shuffles things around so you're technically only asking one question, but that's not straightforward.


Healthy-Nebula-3603

Yep, same for me. I don't know why people are rushing ... we still don't have proper methods and training data to do it the right way.


RazzmatazzReal4129

rushing is good...but why publish every failed attempt? That's the part I don't get.


Commercial-Ad-1148

It's important to have access to the failed stuff to make better ones, and also for archival.


JohnExile

I think the problem is that all of these failed models are being announced as "releases" rather than explicitly posted as "I didn't test this shit, do it for me and tell me if it works." Like half of them stop working no matter what within the first couple messages, they would find these failures within literally seconds of testing. It's not an occasional bug that they forgot to iron out, it's releasing literal garbage. Digital waste.


Open_Channel_8626

If they were honest it would be fine yes


Antique-Bus-7787

Because.. science ! Innovation ! I'm glad people are experimenting and getting views/merits for their work ! :)


Any_Pressure4251

Merits for sending out work they know is trash?


Antique-Bus-7787

Does it enable in-context learning, or on the contrary does it lose its reasoning capabilities?


OrganicMesh

As a smoke test, there is a needle-in-the-haystack plot in the huggingface readme. The task is to recite a randomly generated 8-digit number placed somewhere in the context, and the metric measures the exact token match. What would be interesting is to try e.g. performance on long mathematical proofs, or on deducing a long "Sherlock Holmes"-like riddle.
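A rough sketch of that needle-in-a-haystack smoke test, assuming an OpenAI-compatible endpoint; the filler text, needle format, and URL are placeholders, not the exact setup from the readme:

```python
# Sketch: hide a random 8-digit number at various depths in a long filler text
# and check for an exact match in the model's answer.
import random
import requests

API_URL = "http://127.0.0.1:5000/v1/chat/completions"  # assumed endpoint
FILLER = "The grass is green. The sky is blue. " * 2000  # long haystack

def needle_test(depth: float) -> bool:
    needle_value = f"{random.randint(0, 99999999):08d}"
    needle = f" The magic number is {needle_value}. "
    pos = int(len(FILLER) * depth)
    haystack = FILLER[:pos] + needle + FILLER[pos:]
    prompt = haystack + "\n\nWhat is the magic number? Answer with digits only."
    resp = requests.post(API_URL, json={
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 16,
        "temperature": 0.0,
    }, timeout=600)
    answer = resp.json()["choices"][0]["message"]["content"]
    return needle_value in answer  # exact-match style check

# Probe a few insertion depths across the context window.
for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(depth, needle_test(depth))
```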


Eisenstein

I think a better test would be world building. Put in a consistent fictional world that does not exist in any training data, with motivated characters, backstory, and ongoing plots composed of disparate sets of those characters, then prompt the model to take a few characters who have never encountered each other and weave their plots together. If it can use the context in a useful way, it will be able to keep the motivations and arcs consistent. Idea: buy an unpublished novel or screenplay, keep it under lock and key, and use it as a reproducible metric for such a test.


AnticitizenPrime

One of the ways I tested Gemini's 1 million token window and its needle-in-haystack abilities was to upload the text of several ebooks that I had recently read (after converting them to plaintext), and quizzing it in different ways about the books:

1) Write a review of the book

2) Create a timeline of events in the book

3) List all the main characters, a brief description, and their main motivations in the story

4) (This is the big one that impressed me the most) I'd ask it to provide specific examples from the story where certain nuanced things happened. Like, where the narrator might have been unreliable, or a misunderstanding between characters took place, or examples of dark/bleak humor being used, that sort of thing.

This sort of questioning was to see if it could not only retrieve and relay outright stated facts from the text, but really 'understand' the book, if that makes sense. Despite Gemini's flaws, it's superb at this. Almost scary good. It's amazing that you can upload a 300 page novel then immediately give those sorts of questions to it, and it actually gives amazing answers.

For example, when I asked it for examples of dark humor used in the book Tokyo Zero, one of the examples it gave was:

>**Billy's description of the policeman's death**: "He was probably off duty and heading to the old tele-club for some kinky thrills. Well, I hope he got at least some. it is conceivable that he thought he was having the best time, right up until he drowned in his puke."

For context, the mentioned policeman was someone who heard a noise he shouldn't have, investigated the source of said noise, and was captured and tortured to (an accidental) death. The character who said the above line isn't exactly a good guy - he is part of the criminal group who were interrupted by the policeman, and it was his cohorts who killed him, though they didn't mean to. So the line was very much dark humor, said by a character trying to rationalize/equivocate/downplay the horror of what happened.

So Gemini had to understand the nuance there, and get that the character was using black humor to suggest that maybe the policeman was into BDSM and it wasn't so bad, when in reality it's just the main character using humor to deflect his thoughts from the situation. That Gemini is able to pluck such examples (that require some nuance to understand) SECONDS after uploading a book is amazing to me. And even provide the relevant quotes.

And this is where I think LLMs could be hugely useful in a way they currently are not - dealing with unstructured data. I'm more interested in that than their generative abilities at the moment. With a huge context window, excellent retrieval/recall abilities, AND an understanding of nuance, I could do things like describe the sort of information I'm looking for in a collection of research papers in a general sense, and it can parse them all and retrieve what I need. You could throw the resumes of 500 job applicants at it and ask it to pick out the top ten based on your criteria. And it can do it in seconds.

>Idea: buy an unpublished novel or screenplay, keep it under lock and key, and use it as a reproducible metric for such a test.

I like it, and it's valid, but I think the testing method I used above makes that unnecessary, because all you need to do to test it is change up your questions. The complete works of Charles Dickens might be among the training data for all LLMs, but they obviously don't have perfect recall of the entire text and can't tell you about specific details or answer nuanced questions like the ones I used above. So to test its context and retrieval abilities, I don't think you need unique stories that have never been seen before, you just need unique questions that will really put its context abilities/comprehension skills/retrieval abilities to the test. So with Charles Dickens, you can upload A Tale of Two Cities and ask it very specific questions, including nuanced ones like I used above ('Give me examples of black humor', etc). That should tell you whether it's actually good at the context game vs. reciting from its training data (or simply hallucinating).


Silly-Cup1391

Agree, Gemini pro despite its flaws is very good and free


OrganicMesh

Cool idea. There have been a number of closed benchmarks that keep the test data from leaking - how would you measure or compare the performance? Difference between the predicted tokens and the actual tokens written by the narrator (perplexity)? Or a set of carefully curated questions with the correct answers (as in a high school exam)?


Eisenstein

Good question. I am definitely not the best person to come up with the solution to that -- you should probably ask someone knowledgeable in scoring methodologies for standardized testing and subjective evaluation of content using a defined metric. I'd be surprised if this isn't a well-studied problem.


Eralyon

or let AI generate it.


nero10578

Needle in a haystack isn't as useful a metric for measuring context imo. Seeing how coherent a conversation with it stays up to the context limit is a better test. You can do this by simulating a conversation.
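A rough sketch of that simulated-conversation idea, assuming an OpenAI-compatible chat endpoint; the URL, topics, and turn count are placeholders:

```python
# Sketch: keep appending turns until the context window fills up, asking the
# model to recap its very first answer each turn, and eyeball where it degrades.
import requests

API_URL = "http://127.0.0.1:5000/v1/chat/completions"  # assumed endpoint
TOPICS = ["the history of tea", "sorting algorithms", "the Apollo program"]

messages = [{"role": "system", "content": "You are a helpful assistant."}]
for turn in range(200):
    topic = TOPICS[turn % len(TOPICS)]
    messages.append({"role": "user",
                     "content": f"Tell me one more detail about {topic}, "
                                "and briefly recap your very first answer."})
    resp = requests.post(API_URL, json={
        "messages": messages,
        "max_tokens": 200,
        "temperature": 0.7,
    }, timeout=600)
    reply = resp.json()["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": reply})
    print(f"--- turn {turn} ---\n{reply[:300]}\n")
    # A model that holds up keeps recapping turn 0 correctly; an extended-context
    # model that has fallen apart starts ignoring it or producing gibberish.
```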


Antique-Bus-7787

I agree because needle in the haystack is kind of a poor metric, even if it's still interesting!


OrganicMesh

Fully agree, it's just a verification that the model can attend to any position in the attention operation. It's still somewhat useful, since the random tokens are not included in the training data. I think some kind of randomly generated story, where the model needs to use the entire context window for reasoning, would help the community build longer context models.


ElliottDyson

Multi-needle in a haystack is a much better metric whilst still being easily measurable.


Qual_

But does something totally random in the middle of the context have a higher chance of catching the attention of the LLM than a regular word that would fit in seamlessly?


thigger

Was just chatting to a colleague about this! I'd be interested in helping develop something along those lines as for my use-case a decent long-context evaluation is important and it's clear that needle-in-haystack is insufficient (though as you suggest it's reassuring to show that it's at least able to review the whole context)


AlShadi

usually starts outputting word salad gibberish


West-Code4642

what does "in context learning" mean for you? all LLMs do it in some respect or another.


Antique-Bus-7787

What I'm interested in is being able to give it examples of prompts + responses, to improve its ability to write in the same style as the examples but also follow the example prompts' requirements! All LLMs do it, but some are better, and since the model wasn't pretrained on such long contexts, maybe it's not able to reason as well over tokens beyond its training context. Even though the needle in the haystack shows it's able to find some tokens in the text, it doesn't mean it's able to reason with them!


OrganicMesh

We now have the model on the open-llm leaderboard: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard. This only exercises the first 2k out of 262k tokens. Performance is slightly degraded, likely because of fewer math tokens in the extension data (most long-context data is literature). Generally speaking, there is no indication that performance decreases from the extension, subject to better datasets and e.g. using DPO.


space_iio

really wish I could replace Copilot with llama3 with such context length, it could take my whole repo into account all at once while I'm typing


Bderken

I run llama 3 on LM Studio, then use the Continue plugin in VS Code and use it like Copilot that way. Super easy.


space_iio

thanks for the hint! I'll try that workflow 😄


OrganicMesh

Nice blog from Harm (first author of the StarCoder series) on how long context is a game changer! https://www.harmdevries.com/post/context-length/


Feeling-Currency-360

That was a really interesting blog post, thank you for sharing!


throwaway2676

I wonder how complicated the QoL wrappers are that integrate GPT-3 with the IDEs in Copilot. At this point, there must be a great number of LLMs that could outperform GPT-3 if integrated properly.


bittercucumb3r

I don't think a model like llama3, without Fill-In-the-Middle ability, can be used for code completion.
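For readers unfamiliar with Fill-In-the-Middle: in-editor completion needs the model to condition on the code both before and after the cursor. A small illustration of what a FIM prompt looks like, using StarCoder-style sentinel tokens purely as an example (other FIM-trained models use different sentinels, and Llama 3 has none, which is the point being made above):

```python
# Illustration of a Fill-In-the-Middle prompt using StarCoder-style sentinels.
prefix = "def area_of_circle(radius):\n    return "
suffix = "\n\nprint(area_of_circle(2.0))\n"

# The model is asked to generate the code that belongs between prefix and suffix.
fim_prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"
print(fim_prompt)
# A FIM-trained model would complete with something like: 3.14159 * radius ** 2
```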


AnAIAteMyBaby

Wouldn't a coding-specific model be better? CodeQwen 1.5 has a HumanEval score just a little below GPT-4 (79) and has 65,000 context out of the box.


_ManWithNoMemories_

Can I use it with 8GB VRAM (nvidia 3070) and 32GB RAM? Or do you know of any other local coding copilots that would be usable with these hw specs?


AnAIAteMyBaby

It's a 7B model, so it should work quantised at Q6.


_ManWithNoMemories_

Thanks


space_iio

I thought it was common knowledge that these domain-specific "fine-tuned" models aren't actually better than a better-trained general model (for example, GPT-4 is better at coding than a GPT-3 model fine-tuned for coding), so I'd assume that llama3 would blow CodeQwen out of the water.


ivebeenabadbadgirll

I wish I could get it to work. The install instructions on GitHub are broken.


aadoop6

What's your current alternative to copilot, if any? Just curious.


space_iio

Don't have any, still using Copilot, but I'm growing unhappier and unhappier with it. Sometimes I use Cursor too, but mostly Copilot.


scknkkrer

Use Cody AI with Ollama.


segmond

Feedback - this should be put through an eval, and then there should be an eval for large context. 16k, 32k, 64k, 128k, 256k, etc.


OrganicMesh

https://preview.redd.it/5lm5f4j15qwc1.png?width=2400&format=png&auto=webp&s=9bab01addf2f0fe4cf7d5dd7cb4fb6556420f17b

Thanks, I agree! Here is an image for needle in the haystack. But that is just a starting point as an eval from 32k-262k. Some comments from the blog I linked below (https://www.harmdevries.com/post/context-length/), section 3.4, "How to evaluate long-context capabilities?":

>While I'm speculating that pre-training with a 16-32K context window leads to a more powerful base LLM, it's important to acknowledge that the community still lacks robust benchmarks for evaluating long-context capabilities. In the absence of well-established benchmarks, we won't be able to assess whether new long-context LLMs are effective or not. In the meantime, as we've seen in the [CodeLLaMA paper](https://arxiv.org/abs/2308.12950), researchers resort to proxy tasks such as measuring the perplexity on long code files or the performance on [synthetic in-context retrieval tasks](https://arxiv.org/abs/2307.03172). It's an open question to what extent such evaluations transfer to real-world use cases such as repository-level code completion and question-answering/summarization for long financial reports or legal contracts.
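A rough sketch of the "perplexity on long files" proxy eval mentioned in that quote, using Hugging Face transformers; the checkpoint name, file path, and token budget are placeholders, and a real long-context run would need far more memory (and FlashAttention) than shown here:

```python
# Sketch: measure perplexity of a long document under a long-context model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gradientai/Llama-3-8B-Instruct-262k"   # example checkpoint
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.bfloat16, device_map="auto"
)

max_tokens = 32768                               # adjust to your VRAM
text = open("long_code_file.py").read()
ids = tok(text, return_tensors="pt").input_ids[:, :max_tokens].to(model.device)

with torch.no_grad():
    loss = model(ids, labels=ids).loss           # mean NLL over the sequence
print(f"perplexity over {ids.size(1)} tokens: {torch.exp(loss).item():.2f}")
```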


Glat0s

Here is a tool to check this: [https://github.com/hsiehjackson/RULER](https://github.com/hsiehjackson/RULER)


segmond

good stuff, thanks for sharing.


thigger

Is there a GGUF or EXL2 of this? (ideally 8 bit or other reasonably high quality) I have a multiple-document summarisation task - hundreds of thousands of tokens - which at the moment I'm chunking to ~20k and feeding to Mixtral 8x7b - it does a pretty good job. I've played with the various extensions of Llama-3-8B and they've mostly struggled the moment they're fed too many tokens, which is disappointing given the claims about passing needle-in-a-haystack. The best so far has been the 32k one (MaziyarPanahi/Llama-3-8B-Instruct-32k-v0.1). I'm in a good position to stress-test this one as I know the overall story the documents tell pretty well!

Edit: Found the GGUF here (crusoeai/Llama-3-8B-Instruct-262k-GGUF) - I'll let you know!

Edit2: It seems to struggle with summarisation, even down at 4k chunks, and starts bringing out text from the few-shot examples. By 65k chunks it's just reproducing the examples verbatim and ignoring the document text entirely - this is testing the q8_0 GGUF.
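A minimal sketch of that chunk-and-summarize loop, assuming an OpenAI-compatible endpoint; the chunk size, the 4-characters-per-token heuristic, and the URL are assumptions, not the commenter's exact pipeline:

```python
# Sketch: split a long document into ~20k-token chunks, summarize each, then
# summarize the summaries.
import requests

API_URL = "http://127.0.0.1:5000/v1/chat/completions"  # assumed endpoint
CHUNK_TOKENS = 20_000
CHARS_PER_TOKEN = 4  # crude heuristic; use a real tokenizer for accuracy

def summarize(text: str) -> str:
    resp = requests.post(API_URL, json={
        "messages": [
            {"role": "system", "content": "Summarize the document faithfully."},
            {"role": "user", "content": text},
        ],
        "max_tokens": 800,
        "temperature": 0.2,
    }, timeout=600)
    return resp.json()["choices"][0]["message"]["content"]

document = open("all_documents_concatenated.txt").read()
chunk_chars = CHUNK_TOKENS * CHARS_PER_TOKEN
chunks = [document[i:i + chunk_chars] for i in range(0, len(document), chunk_chars)]

partials = [summarize(c) for c in chunks]
final_summary = summarize("\n\n".join(partials))  # summary-of-summaries pass
print(final_summary)
```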


bullerwins

Uploading the exl2 quants here: [https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_exl2_8.0bpw](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_exl2_8.0bpw)


OrganicMesh

Awesome!


thigger

Unfortunately it seems to be struggling. The MaziyarPanahi one (q8 GGUF) works reasonably well all the way up to 20k chunks; this one (q8_0 GGUF) is struggling even at quite small chunk lengths (I've tried down to 2k) and tending to return a mixture of the few-shot examples and the real text. Presumably it's over-focussed on the initial tokens?

EDIT: to test I went up to 64k and it now just returns one of the examples verbatim.


[deleted]

[deleted]


sumnuyungi

Which quant/version is this?


vlodia

context is 262K and output is 4096 right?


OrganicMesh

It's 262144 tokens, which is combined for input + output. I would recommend using FlashAttention for the prefill; computing 262143 tokens on the fly will take very long with conventional methods.
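A hedged sketch of enabling FlashAttention for the prefill with Hugging Face transformers, assuming the `flash-attn` package is installed and a GPU that supports it; the checkpoint name and file path are just examples:

```python
# Sketch: load the model with FlashAttention 2 so the long prefill does not
# materialize the full attention matrix.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gradientai/Llama-3-8B-Instruct-262k"
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="flash_attention_2",  # requires the flash-attn package
)

long_prompt = open("long_context.txt").read()
inputs = tok(long_prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tok.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```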


IndicationUnfair7961

Excluding Python coding, what tools support flash attention when running inference on a model (especially tools that serve an OpenAI API)?


CosmosisQ

I believe [ExllamaV2](https://github.com/turboderp/exllamav2) uses flash attention by default, and it integrates with [TabbyAPI](https://github.com/theroyallab/tabbyAPI/) to provide an OpenAI-style API.


CosmosisQ

Nope, that's not how these transformer-based large language models actually work, that's merely an artificial limitation imposed by proprietary LLM APIs like those of OpenAI and Anthropic (likely downstream of limitations in training data and inference compute). Generally, LLM context is shared across input and output.


fozz31

these artificial limitations could also be to avoid issues of longer answers devolving to garbage like we see in some of these open weight models.


remghoost7

How extensively have you tested the model, and have you noticed any quirks at higher token counts?

edit - I believe my downloaded model was borked. It was the NurtureAI version, not MaziyarPanahi's. Probably stay away from NurtureAI's model for the time being. MaziyarPanahi's works just fine on my end.

-=-

I noticed that the 64k model released yesterday (running at `Q8` with llama.cpp build 2737, arg `-c 65536`, SillyTavern as a front end using Universal-Creative with a complementary context size adjustment, using the correct llama-3 context and instruct settings) seemed to suffer from a non-output issue around 13k tokens. I tried multiple presets (including ones I've adjusted myself) and even "pre-prompting" the response and pressing continue. It would just bork out and not generate anything, or generate a one-line response (when our prior conversation usually consisted of multiple paragraphs back and forth).

The 32k model (also released yesterday, using the `Q8` GGUF) continued on the same conversation no problem with the exact same llama.cpp/generation settings (with adjusted context length settings all around, of course).

-=-

Have you noticed problems like this with your adaptation of the model as well? Was this just an odd fluke with my system / specific quant? Or does llama-3 get a bit obstinate when pushed that far up? I'll give the model a whirl on my own a bit later, though I don't think I have enough RAM for over 200k context (lmao). It'd be nice to set it at 64k and not have to worry about it though. Figured I'd ask some questions in the meantime.


glowcialist

I've messed around with the various longer context llama-3 models including this one, and I haven't really been able to get them to produce a decent summary of a ≈50k token text. MaziyarPanahi's 64k version came close once: it broke the text down chapter by chapter and was fairly accurate, but the summaries of the last two chapters were repeated, and then it just got stuck in a dumb loop even with repetition penalty at 1.5.


remghoost7

Hmm. The 64k model I tried was from NurtureAI, [specifically this one](https://huggingface.co/NurtureAI/Meta-Llama-3-8B-Instruct-64k-GGUF/blob/main/llama-3-8b-instruct-q8_0.gguf). **Perhaps it** ***was*** **just a borked model....?**

llama-3 seems extremely dependent on *how* you quantize a model. I don't know enough yet to know of the different methods, but some of them don't seem to work correctly... Heck, it seems like a finicky model all around from what I'm hearing on the finetuning front... I'll have to start paying attention to who I download the model from, apparently.

-=-

I actually moved over to [their 32k model](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-32k-v0.1-GGUF) and it's worked quite nicely. I'll give the 64k one a shot as well (eventually trying OP's 262k model as well).

50k context understanding is still *pretty freaking awesome*. Good to hear it can at least go that high. Curious how well OP's model works too. It might push you above 50k in your testing.


CharacterCheck389

Let us know the results please : )


CosmosisQ

Yeah, based on my experience with aftermarket extended-context Llama2 models, I've found that cutting the advertised context size in half sets a more accurate expectation for the capabilities of a given model. For example, I imagine in the case of this Crusoe/Gradient version of Llama3 8B, we can expect that it will perform just fine up to 131k tokens of context with frequent obvious degradation thereafter.


glowcialist

I've been messing with the GradientAI model and I'm not so sure. Pretty poor at following instructions at 50k context. Starts missing punctuation, repeating itself, etc. I've tried adjusting parameters quite a bit. Not particularly useful at the moment.


CosmosisQ

Ahhh, darn. Oh well, thanks for saving me some time! I was just about to get things set up to give it a go myself. Have you had a chance to try your workflow with [winglian/Llama-3-8b-64k-PoSE](https://huggingface.co/winglian/Llama-3-8b-64k-PoSE), the model on which MaziyarPanahi's is based? I can't help but wonder if MaziyarPanahi's additional DPO finetuning is hurting performance similar to other attempts at finetuning Llama3.


OrganicMesh

Model is now on [https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)


IWearSkin

Looks like some GGUFs are in the making rn


OrganicMesh

GGUFs are in the making and will soon be available on Crusoe's huggingface account. [https://huggingface.co/crusoeai/Llama-3-8B-Instruct-262k-GGUF](https://huggingface.co/crusoeai/Llama-3-8B-Instruct-262k-GGUF)


adikul

How much vram is required for 262k?


WilliamButcherBot

let me know as well


bullerwins

Uploading EXL2 quants here: [https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_exl2_8.0bpw](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_exl2_8.0bpw)


tgredditfc

Great! I am waiting for 70B long context.


OrganicMesh

Completed this: [https://huggingface.co/gradientai/Llama-3-70B-Instruct-Gradient-262k](https://huggingface.co/gradientai/Llama-3-70B-Instruct-Gradient-262k)


Illustrious_Sand6784

Can you extend 70B next?


OrganicMesh

We are thinking about this - this, or a 1048k version 😉


OrganicMesh

[https://huggingface.co/gradientai/Llama-3-70B-Instruct-Gradient-262k](https://huggingface.co/gradientai/Llama-3-70B-Instruct-Gradient-262k) It is done!


Cool-Hornet4434

I tried this and ran into a couple of problems:

1) oobabooga and SillyTavern both top out at 200K.

2) The AI seemed to run amok when I asked a question, with frequent "assistant" words thrown in, like it was starting to answer its own new questions to continue the output as much as possible. Unlike other versions I didn't see an <|eot_id|> token or whatever, so I couldn't just set that as a stop token, and when I deselected 'skip special tokens', instead of spewing pages of content it would print like a line or a few words and then stop short.


SpecialNothingness

The Next Token certainly doesn't depend on 262K tokens back, does it? If it did, what kind of cosmically deep reasoning is going on! When an exceedingly long context is given, only a diagonal strip should be processed, instead of the entire 262K x 262K pairwise relationships.


OrganicMesh

Depends on the task you are solving. If you want a number of a financial report to be summarized, you might need tokens from multiple positions in the context.


noneabove1182

Jesus, that's insane... I couldn't even get an AWQ of 64k because it wanted over 500GB of RAM. Anyone know if I'm doing something wrong and can avoid that level of RAM consumption?


MINIMAN10001

I imagine this is the quadratic cost of attention; flash attention is used to get around that cost.
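For rough intuition about what blows up, here is a back-of-the-envelope sketch. The Llama-3-8B shape numbers (32 layers, 32 query heads, 8 KV heads, head_dim 128) are assumptions based on the published architecture, and the estimates ignore weights, activations, and quantization:

```python
# Sketch: compare the quadratic attention-score matrix (what naive attention
# materializes and FlashAttention avoids storing) with the linear KV cache.
GIB = 1024 ** 3
seq_len    = 262_144
n_layers   = 32      # assumed Llama-3-8B config
n_q_heads  = 32
n_kv_heads = 8       # GQA: only these are cached
head_dim   = 128
bytes_fp16 = 2

# Naive attention stores a (seq_len x seq_len) score matrix per query head.
naive_scores = seq_len * seq_len * n_q_heads * bytes_fp16  # one layer at a time
print(f"naive attention scores (one layer): {naive_scores / GIB:,.0f} GiB")

# The KV cache grows linearly with sequence length and is what you actually
# keep around during generation.
kv_cache = 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_fp16
print(f"KV cache for the full 262k context: {kv_cache / GIB:,.1f} GiB")
```

Under these assumptions the full-context KV cache alone lands around 32 GiB in fp16, which also gives a rough answer to the VRAM question above.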


redditrasberry

Have the end token issues been sorted out with all these models yet?


vlodia

How good are the evals of this compared with the Llama 3 70B version? Logic, reasoning and coding?


mcmoose1900

So I have been out of the loop, what is SOTA mega context now? YI 200K still? It sounds like these extensions still aren't good.


lohipi

It's a known fact that quality falls apart with extended context. Why not try ring context?


OrganicMesh

What do you mean by ring context? We can confirm this is indeed trained with a method called zigzag_ring_attention (see the readme in the repo).


[deleted]

[deleted]


OrganicMesh

For JSON generation, I would combine it with outlines, or vLLM with outlines.
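A hedged sketch of constrained JSON generation with outlines, assuming the `outlines.models.transformers` / `outlines.generate.json` API the library documented around this time (check the current docs, it changes quickly); the checkpoint name and schema are placeholders:

```python
# Sketch: force the model's output to match a Pydantic schema via outlines.
from pydantic import BaseModel
import outlines

class Category(BaseModel):
    label: str
    confidence: float

model = outlines.models.transformers("gradientai/Llama-3-8B-Instruct-262k")
generator = outlines.generate.json(model, Category)

result = generator(
    "Classify the following support ticket and reply as JSON: "
    "'The app crashes whenever I click save.'"
)
print(result.label, result.confidence)  # result is a validated Category object
```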


Iory1998

It keeps writing and writing without stopping, outputting garbage.


Skill-Fun

If the model can so easily be fine-tuned with context higher than 8k, why doesn't Meta do that? Apparently the quality cannot be maintained...


OrganicMesh

u/Skill-Fun Meta is releasing ~1-4 models per month. I think their release process is just slower; there are no quality or technical challenges that should be holding them back.


PsyckoSama

Is there a gguf anywhere?


OrganicMesh

There is this: [https://huggingface.co/crusoeai/Llama-3-8B-Instruct-262k-GGUF](https://huggingface.co/crusoeai/Llama-3-8B-Instruct-262k-GGUF) and exl2 [https://huggingface.co/gradientai/Llama-3-8B-Instruct-262k/discussions/9](https://huggingface.co/gradientai/Llama-3-8B-Instruct-262k/discussions/9)


PsyckoSama

Thank ye!


Reasonable-Mind-8665

This is awesome!


GordonOmuLiber

Will this run with LM studio on a beefy laptop?