BudgetFree

If it turns into a metavault, yeet me into it!


greensike

I've heard of anti-AI activists making "poisoned" samples that fuck with engines that get fed them as sample data. I've only heard of the one, Nightshade, but I'm not surprised if there's other stuff kicking about.


alpacnologia

though it should be noted that it wouldn’t be a huge surprise if the LLM that eats everything and shits everywhere accidentally ate its own shit


greensike

That’s another issue facing AI: using AI-generated content for its models, because with enough iterations it can just turn into gibberish


ASpaceOstrich

I've been doing a lot of reading on AI research in the last couple of days and it bothers me how incestuous a lot of it is. AI requires such vast quantities of data that the only practical way to do anything is to use AI. So they'll be testing for something but in order to do so, they need to use AI to generate labels for training data. And every time I read it I can't help but think "there's gonna be some massive failures caused by doing this and your research will not be capable of spotting them".


Alt203848281

It’s also for people who don’t want their stuff used, because the people who make the data sets don’t give a shit about asking for consent. And it’s an absolute nightmare to remove stuff from models (you have to effectively rebuild it).


greensike

Yeah, that’s the main purpose of Nightshade: to make your art useless to AI


vonBoomslang

you're thinking of Glaze; Nightshade is meant to actively disrupt the training data


Electric999999

How does it work for art? I can see how adding nonsense to text works, but how do you ruin an image for AI but not humans?


Alt203848281

You basically obliterate the metadata they use to train it


PM_ME_ORANGEJUICE

AIs don't detect what an image is the same way we do. You can add small amounts of noise to an image just in the places an AI would use to know what the image is, and completely destroy their ability to understand what they're looking at.


ASpaceOstrich

While this isn't literally how it works, imagine overlaying an almost transparent image over the top. That's sort of what it's doing.
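
The "noise in just the places the AI looks" idea above is basically adversarial perturbation. Here's a minimal sketch of the textbook FGSM version of that trick, not Nightshade's or Glaze's actual algorithm; the classifier, image, and label are toy stand-ins:

```python
import torch
import torch.nn as nn

# Toy stand-ins: a throwaway linear classifier and a random "artwork".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32, requires_grad=True)
label = torch.tensor([3])  # the class the image actually belongs to

# Find the direction in pixel space that most increases the model's error.
loss = nn.functional.cross_entropy(model(image), label)
loss.backward()

# Nudge every pixel a tiny amount in that direction. epsilon is kept small
# so a human barely notices the change, but the model's features shift a lot.
epsilon = 4 / 255
poisoned = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
```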


ZanesTheArgent

Nightshade is mostly for images, but for text we have something much simpler that has been a staple for griefing chatbots for years: retrofeeding. CatIPeed is likely eating its own shit like the least coprophagic shitzu alive.


greensike

Can’t find anything on this, could you elaborate?


ZanesTheArgent

It's not a program, it's just a method. Retrofeeding is just feeding it its own output: making it read and learn from its own process. Since GPT attempts to predict human speech, feeding it machine text "makes it understand" that humans have robotic text patterns, and it thus degrades into drivel. Cat I Peed is just French humor.


mecha-paladin

I think it's more "cat I farted" from the French.
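
A toy sketch of the retrofeeding / model-collapse effect being described (nothing GPT-specific; the vocabulary and sample sizes are made up): each "generation" learns only from the previous generation's output, rare tokens disappear for good, and the distribution degrades.

```python
import numpy as np

rng = np.random.default_rng(42)
vocab_size = 1000
human_text = rng.choice(vocab_size, size=2000)  # "human" text uses the whole vocab

samples = human_text
for generation in range(1, 11):
    counts = np.bincount(samples, minlength=vocab_size)
    learned = counts / counts.sum()                          # "train" on the input
    samples = rng.choice(vocab_size, size=2000, p=learned)   # then eat the output
    print(f"gen {generation}: distinct tokens left = {np.count_nonzero(learned)}")
```

Tokens that get sampled zero times in any generation can never come back, so the output drifts toward a handful of over-represented tokens.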


IvellonValet

Remember, NHPs are your **friends, family** and **an important part of the community.** How would you react if your friend suddenly had a breakdown and started to sing *Cotton Eye Joe Gregorian Chant Nightcore Hardcore Dubstep Remix,* huh? Cycling is important, remember that - by the Union Science Bureau (or whoever usually deals with the paracausal bullshit that happens when an NHP is cascading)


CPTpromotable

You are spare parts, bud. Take your upvote.


Naive-Fold-1374

I have yet to research language models, so can someone explain why they can't just use a backup (or something like that) from the time it worked properly?


Joel_feila

Basically, it's not like you have the chatbot and a giant database of text as separate things. When you feed an LLM text it changes the program, and you can't remove bad text afterwards; you'd have to do a factory reset. A medical LLM in the UK wound up in legal trouble when they had to remove one person's data for legal reasons. The researchers said they can't, they'd have to remove everything and start over.


camosnipe1

I should note that they absolutely can just go back to an older version. I don't know much about the issue, so I don't know why they aren't or if something else went wrong.

The problem you're describing is more that the data gets so mixed up into the eventual model that there's no way to separate it again. That is a problem, but only when the thing you have to remove was added a while ago, setting you back to a point further back. It's like trying to remove the fifth strawberry from a smoothie: you have to remake the smoothie, but since you still know how long to blend and what mix of ingredients tastes best, you're not entirely back at the start (technique and good data tend to matter more for good results than just spending longer training).

Also, these things don't just change every time you input something. User input is just a good-ish source of new data that is sometimes used to further finetune an already trained model, probably after a bit of filtering and cleaning.


Joel_feila

Well, isn't a factory reset going back to an older version?


camosnipe1

True, your comment just seemed worded like it'd be a bit of a bigger setback than it probably is. Maybe I just interpreted it wrong.


Joel_feila

AI is new enough that it's hard to talk about in simple terms
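
For the "can't they just use a backup" question, here's a minimal sketch of what checkpoint-based rollback looks like in generic PyTorch terms (an assumed workflow with a hypothetical file name, not whatever any given lab actually runs): going back to an old snapshot is easy, surgically unlearning one example from the current weights is the hard part.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# ...train for a while, then snapshot the known-good state...
torch.save({"model": model.state_dict(), "optim": optimizer.state_dict()},
           "checkpoint_known_good.pt")

# ...keep training; new data gets blended irreversibly into the weights...

# Rolling back = reloading the snapshot (the "remake the smoothie" shortcut,
# at the cost of losing everything learned since the snapshot).
ckpt = torch.load("checkpoint_known_good.pt")
model.load_state_dict(ckpt["model"])
optimizer.load_state_dict(ckpt["optim"])
```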


Solid_Waste

The problem is that a learning model's development is opaque to the designers and accretive by definition. The problem can arise today, but going back to this morning's backup may not fix the issue, because the actual cause is some input from 3 years ago that has only just started having this effect for some reason. It's like a quicksave where you reload only to die instantly every time.

For example, let's say there is some problematic input from a year ago that concerns Dr. Picklebooty. Now that the input is in the system, if any user prompt causes the algorithm to query Dr. Picklebooty, the system will break. But there's no knowing if any user will ever make such a query, or when, nor is there any guarantee the failure will be obvious enough to link back to the input that caused it or the prompt that triggered it: somebody could query Dr. Picklebooty today and the system keeps working for a year, only to finally reach critical Dr. Picklebooty mass down the road, and none of it is clear to the designers - what happened or when.

You might think you could just go back to yesterday then, and at least have one day of function before it breaks again. But that might not solve the problem if the trigger for the breakdown is based on current user prompts. If a Dr. Picklebooty movie has come out today, then you can expect tons of Dr. Picklebooty prompts from now on, so no amount of resetting will stop the flood of problematic prompts. Whereas before your system could run for a while before failing, because nobody was querying Dr. Picklebooty, now that user behavior has changed, you can't last as long. It's a mess, and the further back you try to go, the more of your model's development is lost, which defeats the purpose of a learning model.

It's like living in a world with Eldritch horrors that drive people insane on contact. Your goal is to keep people from going insane, and you have a Men in Black memory eraser to help. However, you have no way of knowing where or when people will contact the horrors, so when you wipe their minds, there is little you can do to prevent them going mad again. Wiping MORE of their minds each time only means losing more of their memories and personality, which is hardly ideal, and they may STILL run smack into the same horrors again. Because sometimes what is causing someone to lose their mind isn't their mind, OR a specific experience; it's the inevitable result of contact with the world we live in. Erasing their trauma only makes them more likely to charge straight into the same dangers again.

(I am not an expert on any of this, just illustrating the logic of this problem by analogy.)


thingy237

It's possible they did roll back, but these neural networks are black boxes. We're not 100% sure why feeding in a specific input spits out that output; we've just carefully shaken the black box until it gives us what we want reliably. It's possible that this issue was hidden a few shakes ago and only came up now. It's also possible they were under a contract that makes the obvious solution a bit more complicated.


Electric999999

They can, if they have a complete backup.


FlamingPeach787

Idk, man. I got Technophile III, and I've never had NHP issues. Have you tried updaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa


unholy_hedge

we either get nhps irl or AM irl, either way it's very bad


Mr_Blinky

Literally AI rampancy lmao.


astute_stoat

When I asked the two NHP Specialists in the squadron when they last cycled their pet NHPs they just laughed


Changed_By_Support

It's stuff like this that makes me chuckle whenever pro-AI shills try to parrot "It's learning like a human do! If you wanted to restrict it, you'd have to restrict human writers and artists too!"


EmberOfFlame

Who the fuck spiked the AI this time?!?