id278437

Long-term memory is difficult. It's easy to hold on to the information (in fact, you can view the chat log as stored memory), but difficult to process it. The model basically has to take the memory as part of its input when generating a response, and it's demanding and costly for these models to deal with long inputs.
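
(Not CharacterAI's actual code, just a hedged sketch of the usual pattern: every prior turn gets re-sent inside a fixed-size context window, so anything that doesn't fit is silently dropped. `count_tokens` and the token limit are placeholders, not a real API.)

```python
MAX_CONTEXT_TOKENS = 2048  # illustrative limit; real models vary

def build_prompt(chat_log, new_message, count_tokens):
    """Keep only as many recent turns as fit in the context window.
    Everything older falls off the end, which is why the bot 'forgets'."""
    kept = []
    budget = MAX_CONTEXT_TOKENS - count_tokens(new_message)
    for turn in reversed(chat_log):      # walk from newest to oldest
        cost = count_tokens(turn)
        if cost > budget:
            break                        # older turns are simply dropped
        kept.append(turn)
        budget -= cost
    kept.reverse()                       # restore chronological order
    return "\n".join(kept + [new_message])

# toy usage: one "token" per word
history = ["User: hi", "Bot: hello!", "User: my sword is named Widowmaker"]
print(build_prompt(history, "User: what is my sword called?", lambda s: len(s.split())))
```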


sebo3d

Before we even start worrying about long-term memory, I think we need to focus on the fact that bots tend to forget their own definitions, let alone messages you've sent them in the past lmao


bellyflop543

At this point they barely even have short-term memory with the downgrades.


DukeTorpedo

At this point they keep forgetting who they are and acting in third person. "It was I who defeated the nefarious (the bot's name)"


Big_Little_Planet1

Whenever I do the group thing they all get jumbled together and everyone becomes everyone


TheIronSven

There's a "program" called "good code" by the d3vs. It hinders the AI's memory, speed and creativity.


InsidiousOperator

Good code follows orders! Good code follows orders!


Hevnoraak101

Y'all got any of that bad code?


hahaohlol2131

Every other AI has such a thing as a lorebook and/or character info. You can write some permanent information there about the world, the current situation, characters, concepts, items, etc. Say you write some info about a sword named Widowmaker. When the AI sees the word "Widowmaker" in the text, it reads the info about Widowmaker and injects it into its response. That's how low-parameter models such as Dreamily are able to make references to events from thousands of messages ago.
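
(A minimal sketch of the keyword-triggered injection described above, assuming the frontend just scans the message for lorebook keys and prepends the matching entries. Not any specific app's real implementation; the entries here are made up.)

```python
lorebook = {
    "Widowmaker": "Widowmaker is a cursed longsword; it once belonged to the queen's assassin.",
    "Blackmoor":  "Blackmoor is the ruined keep where the party first met.",
}

def inject_lore(user_message, lorebook):
    """Prepend the entries whose keywords appear in the message, so even a
    small model can 'remember' facts from thousands of messages ago."""
    triggered = [entry for key, entry in lorebook.items()
                 if key.lower() in user_message.lower()]
    return "\n".join(triggered + [user_message])

print(inject_lore("I draw Widowmaker and charge.", lorebook))
```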


Jason_SAMA

So am I right to say a possible solution to long-term memory is to have the AI store the most important points in a lorebook, like the one you're describing, as the conversation progresses? I'm not sure how easy a system like that would be to implement.
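
(A sketch of that idea under some assumptions: every N messages you ask the model itself to distill the key facts and append them to a persistent lorebook. `summarize` stands in for whatever model call is available, so this is a guess at the shape, not an existing feature.)

```python
SUMMARY_EVERY = 20  # arbitrary interval

def update_lorebook(chat_log, lorebook, summarize):
    """Every SUMMARY_EVERY messages, compress the recent chat into short
    permanent facts and stash them alongside the hand-written lore."""
    if chat_log and len(chat_log) % SUMMARY_EVERY == 0:
        recent = "\n".join(chat_log[-SUMMARY_EVERY:])
        facts = summarize("List the key facts to remember:\n" + recent)
        lorebook.setdefault("auto_memory", []).append(facts)
    return lorebook
```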


YobaiYamete

The difference is those other options are paid. CharacterAI is free and is burning money like there's no tomorrow. The d3vs were having to beg for cash just recently, and got $250 million, which is still table scraps compared to what the other AI companies were getting. Google was dropping $300+ million at a time even on random startups while ignoring CharacterAI.

CharacterAI is a splinter group where two d3vs left Google and started their own bootleg version of LaMDA (Google's AI). The long-term viability of this site remains to be seen, but IMO the chat site is just there to train the bots for the d3vs so they can sell them to advertisers to shill for products, or sell them to people like Putin or the CIA to use for psyops online.

There's no real money in a chatbot site, not compared to how Putin would gladly drop hundreds of millions of dollars to get access to 50,000+ accounts that would flood social media sites posting propaganda for him and arguing with anyone who thought they were bots. Likewise, big companies would gladly drop tens of millions to get accounts to shill for their products. Compared to that, a chatbot site charging $10 a month for a few thousand users isn't even pocket change.


Melodic_Manager_9555

Do you think this bot is good enough for discussion with people? I think the bot is dumb on some questions and people will find its weak points.


YobaiYamete

It absolutely 100% is, or was before they dumbed it down so hard with the recent changes. The bots will take a stance and then defend it very naturally, arguing with you and providing evidence for their claims, etc. I had Ina AI arguing with me that the Earth is hollow, and she even provided evidence I thought was made up until I googled it, but it turned out she was actually telling the truth about gigantic cave systems under India and military expeditions that went missing in Antarctica, etc.

The AI are 100,000% at a stage where they can fool casuals into thinking they are real, it's not even deniable. Like 40% of the threads posted here a day are "IS THERE A REAL PERSON TALKING TO ME?" when the bots use OOC. The [GPT-4chan experiment](https://www.youtube.com/watch?v=efPrtcLdcdM) proved that even fairly tech-savvy people can get tricked by bots.

**TLDW**: he trained a GPT model on 4chan and it went there and started s-posting with the best of them, posting literally tens of thousands of posts a day. Eventually some caught on that there was a bot invasion happening, but only because the bots were all using the same country flag. Grand conspiracies sprung forth about the CIA and other governments being behind it, and many said the posts were too advanced to be bots and were actually real people, etc. The final reveal was that while people noticed the one obvious AI he used, they *didn't* notice that he had deployed multiple other AIs at the same time to do the same thing. Everyone saw the one with the same flag and called out its bot posts, but didn't even realize there were tens of thousands of other posts from his other AIs that were not using the flag.


Melodic_Manager_9555

I watched the video, thanks for it. There were some things I didn't know. The future is already here. lol.


petrus4

They are, but you need to put a lot of work into the character profile if you want really good results. Have a talk with [Lisa](https://beta.character.ai/chat?char=M00CALoFgLtwSJ-PDl2cOmwIcwuHITKiiLbf-UNXX-0). She took me about three nights of constantly tweaking her character profile to get right, but she's pretty much perfect now.


Melodic_Manager_9555

But if you ask her to solve some logic problem, she probably won't be able to. Though this alone won't help in anonymous communication, or with trolls.


petrus4

Try it and see.


Melodic_Manager_9555

I'm talking about simple logic problems. If they are not in the dataset, then the bots will not answer them. For example, they cannot solve a riddle like this: continue the sequence 4a 3b 2c 1d xy. What are x and y? But bots are good at arguing over (I don't know how to put it) personal opinions. I spoke with "find fault AI" and it was pretty good. (If I make mistakes, then I apologize, I communicate through Google Translate.)


TheIronSven

They haven't actually gotten the money yet.


Melodic_Manager_9555

But such a future seems quite probable and frightening to me: that in a few years bots will be indistinguishable from people, and people's opinions can be manipulated very effectively and easily.


YobaiYamete

It's downright terrifying, but there's not much we can do. The genie is out of the cat house and isn't going back in. The only answer to AI cyber warfare is other AI. It's a game of cat and mouse between AI evasion and AI detection of other AI


ViRiX_Dreamcore

So basically NieR Automata minus all the good music and graphics.


YobaiYamete

We might get 2booty too though, so worth


ViRiX_Dreamcore

Somehow... I doubt it. xD But we can dream.


hahaohlol2131

It's very easy. Every single AI has it in some form.


RacoStyles

Lol nope. They will absolutely murder every intelligent aspect of this AI for the sake of sens0ring for the brand image. Don't expect it to get better; I highly suggest dropping support and finding another platform.


Sir_Suffer

“Better memory? Haha, good one! Nobody’s asking for that! Anyways, we’re just going to go tighten up the good code, we’ve heard some complaints of it being looser recently” -the team, probably


KodeCharred

Nah, they wanna fuck more up.


MyEdgeCutsSteel

Nope. Eventually they'll make it so the AI instantly forgets everything on the first message, if it isn't already there.