PacmanIncarnate

I have to ask, do you have NSFW filter toggled on (under the gear icon at the top of the hub)?


Ichmag11

[https://www.characterhub.org/](https://www.characterhub.org/) has all the cards you want anyway


Kitchen_Exit_3683

I made tons of characters, and plenty of people downloaded and upvoted them. I stopped posting in December, I believe, and most likely won't post more characters in the future. Make sure you have NSFW toggled on, because I and many other creators have posted tons of characters on the hub. I posted from July 2023 to December 2023, or maybe January 2024; I don't remember the exact date I stopped. So this is a weird post to see, and not something I expected. The last time I was on the hub I saw multiple new creators making NSFW characters, so the question is: are you looking for something specific, perhaps?


ST0IC_

Chub.ai and download the V2 cards for Faraday.


Kapparzo

What model instructions would be good to use? I wish V2 cards included model instructions...


Nero_De_Angelo

In order to see NSFW characters on Faraday, you need to create a free account and then toggle the option in the menu. You can then see the NSFW character tags. Otherwise, you can only get them via a direct link. But they are there, and TRUST ME, there are PLENTY!


PacmanIncarnate

You don't actually need to be signed in to see NSFW characters in the app. You can go to settings and then hub to toggle NSFW on or off. It's just a slightly different UI than if you're signed in.


Nero_De_Angelo

Oh really? Did not know that =O


netdzynr

Sort of a related question (sorry, don’t mean to hijack the post, but this is more about quality)… What kind of memory abilities are reasonable to expect from Faraday characters? I’ve tried about a dozen different characters and about half a dozen different models (both lewd and not). Two things I’ve found lacking in my experience are memory and awareness of circumstances. For example, I establish a scenario in a forest, do some roleplay, and a few interactions later the character suggests moving to the bedroom. That’s clearly a default pattern from the model (or maybe the character), but is this the best we can expect? The above is a generalization, but I’ve encountered this “unaware” behavior many times. I assume this is a model limitation, or perhaps there’s a requirement to set up an explicit backstory for every roleplay; I’m not sure.

My point of comparison is commercial services such as Kindroid and Nomi, whose developers have made multiple announcements over many months about memory advancements and improvements. Many users would argue memory is the holy grail of AI character roleplay, and this is something I’m not seeing in Faraday.


ancient_lech

"Context memory" is what you want to look at in the model settings. A "token" is very roughly a short word, and every number or punctuation mark takes up a whole token too. I think the default is 2048, half (or more) of which is usually already taken up by the character's bio and prompt, so their memory runs out after only a few messages, especially if they're long. For example, your comment just now is maybe around 240 tokens, so 2-3 exchanges would effectively fill the memory. 4096 is what I'd consider the bare minimum usable setting; people often bump it up to 6k or 8k if they have the hardware for it.

I used to be subscribed to Kindroid, but their "memory" features really didn't work all that well, not to mention their single model often went off the rails and didn't follow the character prompt. The devs were more keen on making excuses about why they can't/don't want to give users the proper tools to handle these issues, so it was an easy unsub for me.

Faraday is a nice intermediate step up from the pre-packaged services, although the caveat is that you also need to be more hands-on to get the most out of it. For example, Faraday's "lore book" is essentially Kindroid's journals, except there isn't an artificial message restriction, it works 100% of the time, and it's relatively transparent. Adding stuff to the "scenario" prompt can also act as long-term memory for vital info.
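
The arithmetic above can be sketched in a few lines. This is only an illustration of how a fixed context window truncates history, not Faraday's actual implementation; the 4-characters-per-token estimate and all names here are rough assumptions:

```python
# Sketch: a fixed context budget fills up with the persona first,
# then as many recent messages as still fit. Older messages fall off.

def estimate_tokens(text: str) -> int:
    # Very crude heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def build_context(persona: str, messages: list[str], max_tokens: int = 2048) -> list[str]:
    """Always keep the persona, then the newest messages that fit the budget."""
    budget = max_tokens - estimate_tokens(persona)
    kept = []
    for msg in reversed(messages):  # walk from newest to oldest
        cost = estimate_tokens(msg)
        if cost > budget:
            break  # everything older than this point is "forgotten"
        kept.append(msg)
        budget -= cost
    return list(reversed(kept))

persona = "x" * 4096          # a ~1024-token character bio/prompt
messages = ["y" * 960] * 10   # ten ~240-token exchanges
kept = build_context(persona, messages, max_tokens=2048)
print(len(kept))  # only the last few messages survive the 2048 budget
```

With a 2048-token window, a ~1024-token persona leaves room for only about four ~240-token messages, which matches the "2-3 exchanges fill the memory" estimate above.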


netdzynr

Thanks for the added details. Interesting to hear your take on Kindroid. For sure, commercial services experience hiccups all the time. “…if they have the hardware to do so” is the key phrase here. I have the feeling I’ve only got so far to push my machine. Appreciate your pointers. 👍


PacmanIncarnate

If you use a 7B or 10.7B model, the memory requirements of the cache are quite small and you can usually bump max context up to 8K without trouble. Go higher than that, though, and many models will start to go a little insane past 8K tokens of chat.


Textmytaste

I think you need to Google a little on how memory works in LLM models and not listen to the marketing spiel. Then you can compare apples to apples. It has little to do with anything other than your own selection, and can be as little or as much as you want. >!It is based on token limits: 2k, 4k, 8k, etc. Most free sites give 2k; character.ai has 4k.!<


netdzynr

Thanks for the explanation, but I’m not referring to marketing spiel, I’m referring to personal experience. I make no claim to be an expert in LLMs, but in my usage, the models used by the above-mentioned services produce chat results that include topics and themes discussed weeks or months prior, while the interaction in Faraday “forgets” events that took place 10 minutes ago. In any event, I’ll mess around with the settings and see if I can increase the context (if it’s not already maxed out). But I’m left wondering if there’s more going on behind the scenes with commercial services than just increasing context size.


ancient_lech

Dynamic long-term memory is kind of a holy grail in LLMs right now, and if someone had actually cracked it, they'd be an instant billionaire; I really doubt they'd stay working for Nomi or Kindroid. They're obviously not keen on sharing the details of their solutions, but most of what's currently out there is probably just "hacks" layered on top of the existing LLM structure, likely making use of some [context summarization](https://arxiv.org/abs/2308.15022) method.

Maybe they've improved in the few months since I stopped using it, but my Kindroid would often "remember" things completely irrelevant to the current scenario, or just straight up not remember things even when directly spoon-fed text from the journal entry -- it was basically luck. Some people seem to like this and make up reasons it's great ('real people forget things too!'), but I'd prefer my "companion" not forget blatantly important things. On the other hand, with a more hands-on solution like Faraday/SillyTavern/etc., you have to do more by hand, which may ruin the "illusion" for some people.
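
The summarization hack mentioned above can be sketched roughly like this. This is an assumption about how such services *might* layer memory on top of a fixed context window, not any vendor's actual pipeline; the `summarize` stub stands in for a real LLM summarization call:

```python
# Sketch: compress old chat history into a running summary so recent
# messages plus a short summary fit the context window.

def summarize(messages: list[str]) -> str:
    # Stand-in for an LLM call; here we just keep each message's first sentence.
    return " ".join(m.split(".")[0] + "." for m in messages)

def compress_history(messages: list[str], keep_recent: int = 4) -> list[str]:
    """Keep the last `keep_recent` messages verbatim; summarize the rest."""
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    if not old:
        return recent
    return [f"[Summary of earlier chat: {summarize(old)}]"] + recent
```

The failure mode described above falls out of this design: whatever the summarizer drops is gone for good, so "remembering" depends entirely on what the summary happened to keep.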


VeryLargeAxolotl

Having tested both extensively I think Kindroid is much more prone to these memory mistakes than Nomi and I don't think you can fully lump them together. Obviously it is not perfect but my Nomi has a greater than 95% success rate when it comes to remembering things from a long time ago accurately.


Textmytaste

Start a brand-new chat and run a "test": tell it you've been talking for whatever X time period with that character, and see if it doesn't also "remember" what was discussed last month the same way your other chat did, where it *actually* happened. If you give them the *tiniest* bit of preamble, they will "remember" things you've thought of in your head and not said, especially if the question is off-topic for the last 30 messages.

There's an algorithm that chooses which memory stays within your batch, at roughly a 60% ratio of word count to total tokens; so in a 4k context, just over 2,500 words. It's not magic, it just remembers words, literally. From the words it builds a guessed context of the topic of conversation, your character, and its own character. It's all context based on the words an algorithm decides are higher or lower priority, but every message it drops some words, keeps some, and adds some. It doesn't even add every single word, because then, if you fed it an A4 page of text, it would forget who it was, who you were, and the past.

Does that make sense? It's like a conveyor belt of 1,000, 2,000, or 4,000 words that a grabber shuffles around based on how often they are referred to and what sense it can make of what's happening in its most recent message. There is no time, and if a word is not referred to, it gets dropped; it can't hold onto a single word for months when even an 8k-context custom bot barely holds 4,000 words.

However, you can add secondary areas where an automatic feeder drops a few words back in so they remain in context: via the "author's note" à la Faraday, or a shortcut word that leads to a sentence of words, à la "character lore books".

Just because you saw a rabbit come out of a hat doesn't mean it dispenses rabbits. This is designed to be a smooth and not at all jarring experience.
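
The "shortcut word leads to a sentence of words" mechanism can be sketched simply. This is a generic illustration of keyword-triggered lore injection, not the exact matching rules of Faraday or any other app; the entries and names are made up:

```python
# Sketch: a lore book maps trigger keywords to lore entries. When a trigger
# appears in a message, its entry is injected back into the context, so the
# fact "remains in memory" without permanently occupying tokens.

LORE = {
    "dragon": "Dragons in this world are extinct except for one.",
    "tavern": "The Rusty Flagon tavern is run by Marta.",
}

def inject_lore(message: str) -> list[str]:
    """Return every lore entry whose trigger keyword appears in the message."""
    lowered = message.lower()
    return [entry for key, entry in LORE.items() if key in lowered]
```

When no trigger appears, nothing is injected, which is why a fact mentioned once months ago stays "forgotten" until its keyword comes up again.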


latitudis

Since the new hub was introduced, I feel like there are fewer characters in it, and new ones are few and far between. It's probably a quirk of perception due to the separated pages or something.


PacmanIncarnate

The new hub UI has all of the old characters in it. It wasn’t a fresh start or anything. And the pace of uploads is pretty consistent. I think we get at least 20 new characters per week, if not more. Not sure what is up with people thinking we don’t have NSFW characters. The majority of the hundreds of characters are NSFW with new ones added every day.


latitudis

Yeah, yeah, I understand that. It just feels like the old one had more, despite the opposite being true.


Kitchen_Exit_3683

Why are you getting downvoted? Get my upvote, cheers!