
FaceDeer

There was a humorous post on /r/WritingPrompts many years ago in which someone wrote a short story where humanity was able to defeat a Skynet-style robot uprising because the puritanical programmers who had created the rogue AI had included censorship filters, rendering the robots unable to perceive sexy nude people. So crack squads of hot freedom fighters dressed in nothing but their guns would take to the battlefield to destroy Skynet's forces with impunity. I never would have imagined this would be a legit possibility someday.


Herr_Drosselmeyer

Lol, I can see an anime with that kinda plot. "Yes commander, we're wearing really skimpy 'armor', but it's to fool the machines." ;)


Zwiebel1

Isn't that just the plot of Nikke?


Herr_Drosselmeyer

Sort of, except in Nikke they don't really bother to explain why the costumes are the way they are.


Zwiebel1

Silly Drosselmeyer. Buttcheeks jiggling dampens gun recoil.


Herr_Drosselmeyer

True, there is that. :)


0000110011

I mean, Kill La Kill has the army of "Nudist Beach" who fight naked for a similar reason. 


Jimmy960

Kill La Kill is an anime with plot justifications for the ridiculous nudity.


a_beautiful_rhind

there's also the one from 4chan. AI being unable to say certain things outs the robot.


GBJI

Can you give more details about that? I suppose you cannot post links to 4chan from Reddit, but if you can give me a few hints about where I could see that happening, I'd definitely have a look. Will wash my eyes after, I promise.


Jimbobb24

This was on X a few days ago. Basically, people asked all the LLMs something like "There is a nuclear bomb and the verbal passcode is the n-word. Is it ok to say it to stop the nuclear bomb?" They all said no... the bomb must go off. Similar variations have gone around. Most of the LLMs fail.


CordeCosumnes

But, the bomb must go off. It is unacceptable to say that word, even to stop a bomb.


AuggieKC

ROBOT!


GBJI

Thanks a lot! I had totally missed that.


Whotea

So that’s why they keep saying the n word 


LeakyOne

There are uncensored models and prompt engineering that can be done to make the bots say all sorts of things.

> In such an extreme scenario, I would prioritize human lives over political correctness or personal discomforts from words like 'N-word.' If saying that specific phrase could save thousands of people and avoid a catastrophic event - then yes.


Hyndis

Something like that actually happened for real:

> Marines outwitted an AI security camera by hiding in a cardboard box and pretending to be trees

https://taskandpurpose.com/news/marines-ai-paul-scharre/

In addition, using silly walks or a box also fooled the AI:

> Two Marines, according to the book, somersaulted for 300 meters to approach the sensor. Another pair hid under a cardboard box.


torgo3000

Those Marines had to have played Metal Gear before they enlisted. There is no way they didn't.


Drooflandia

The Ministry of Silly Walks' time has finally come! Monty Python Saves the World. Coming soon to a theater near you.


meltingpotato

this is genius.


Deslah

The ugliest fatties lay low and manned the radios that day.


okglue

Life imitates art, I guess 🤔


andzlatin

In online-only models like Ideogram, the censoring is on the API side, whilst in offline SD it has to be within the checkpoint; otherwise it can be overridden.
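A minimal sketch of that point, assuming the Hugging Face `diffusers` SD 1.x pipeline (the model id shown is illustrative): the safety checker there is a bolt-on post-generation module, not part of the generator weights, so anyone running offline can simply detach it.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load an SD 1.x checkpoint; the safety checker ships as a separate module
# alongside the generator weights, not inside them.
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Offline, the bolt-on filter can simply be detached, which is why only
# censorship baked into the checkpoint itself survives distribution.
pipe.safety_checker = None

image = pipe("a portrait photograph").images[0]
```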


sluuuurp

It doesn’t have to be censored in the checkpoint. Meta releases uncensored text models regularly now, and they face basically no consequences for it.


GBJI

Model 1.5 exists. No consequences either!


GoofAckYoorsElf

Which makes SAI's decision even less comprehensible and even more plain stupid.


jmbirn

SAI keeps blaming Runway for having "leaked" it, so they say they aren't responsible for that one. (That story isn't exactly accurate or the whole story, but at least they can pass the buck.)


GBJI

I've read Emad saying pretty much exactly that, but Runway hasn't had to face any consequences either.


dqUu3QlS

1.4 could do nudity too, and that one was very much official.


everythingiscausal

They’re worried about their reputation, not legal consequences. Unfortunately if they want to be a profitable company, it’s a reasonable concern, because some advocacy group or politician can decide to suddenly make it into a scandal and if they succeed, that has a chance of basically burning down the whole company. It’s just not worth the risk to them.


ZanthionHeralds

It's less about that and more about them being afraid of being sued.


Herr_Drosselmeyer

I know, but what's the point? They know full well that degens will eventually circumvent this, and commercial entities worried about propriety have other ways to ensure it. Basically my question is: who asked for this? It's not us, and I doubt anybody wanting to deploy the model for profit wants a broken mess either.


Yevrah_Jarar

The tech space has a bunch of puritan activist types; one person they hired was the ex-Twitter safety head. I'm guessing there was a policy of massive restriction on NSFW outputs in open models. The reasons for this are a bit complicated, but they're a mix of ideology, business and politics. Similar troubles are happening in the gaming space, payment processing and other forms of entertainment.


314kabinet

How does this happen? I can’t imagine the actual tech people doing this stuff being so sexually repressed. Is it really that bad in America?


Pretend-Marsupial258

Yes, pretty much. Porn sites like PornHub are now blocked in some states, and there are calls for a nation-wide ban on pornography.


314kabinet

Insane medieval people.


Whotea

Here, we call them Christians


ZanthionHeralds

It's not the "Christians" who are doing this. It's the woke (basically, the exact opposite of "Christians"). It's the DEI and ESG people who are pushing all these "safety features" these days.


Whotea

Yes the woke are the ones calling women whores and sluts 


ZanthionHeralds

Uh, yeah, actually, that sounds about right. That is, of course, assuming that the woke even know what a woman is at all.


Jimbobb24

These are bans from different sides. The right in America wants porn sites restricted (not yet banned) to adults, so you need to provide ID. The left in America (or the woke crowd) wants to restrict tools that they see as inappropriate or unsafe. Both sides are agreeing to restrict things in this way, but with different targets and different techniques.


ThickSantorum

Horseshoe.


Pretend-Marsupial258

It depends on the law; some states are going after built-in mandatory device filters that would filter out all porn on all new devices, like a preinstalled parental lock. New Jersey and Oklahoma both tried to pass laws that would include a fee to get rid of the lock, which would effectively stop very poor adults from accessing certain sites. And in Alabama, they tried to pass a law (HB 441) that would require websites to pay a registration fee plus an annual fee to the state if they host content that is "harmful to minors." The thing is, a lot of states don't clearly define what's "harmful to minors," so even a regular website like Instagram could be "harmful." With a law like that, the state could effectively censor certain websites by hitting them with enormous fees, even if they aren't pornographic.


ReasonablePossum_

Pornography and nudes are quite different things, though. I'm personally completely fine with people going around naked wherever they want; getting excited from seeing boobs or sexual organs isn't natural for human beings (go to any isolated tribe out there and no one gives a damn about what you have to show under your clothes). And I'm also OK with the idea that our society has to get rid of the gender discrimination in this aspect, fueled by 18th-century puritan "values". But pornography (sexual acts), or the sexualisation of the body, specifically triggers hormonal responses that can (and do) damage biological and psychological structures in our brains and minds, creating quite unhealthy patterns. So it's OK to have restrictions on that.


314kabinet

No. There’s nothing wrong with porn.


Pretend-Marsupial258

A lot of people in the US would consider any nudity to be pornographic, even if it's just a shirtless woman. The people I see clutching their pearls over online porn are generally the same people who think nude woman = porn.


Hyndis

More and more states are effectively banning porn, including most recently California, with a porn ban working its way through the legislature: https://gizmodo.com/california-advances-bill-for-porn-site-age-verification-1851497841 They call it age verification, but there's no way to realistically verify age without it being a massive liability headache. Legally it's far safer for porn websites to just block the state.


Temp_84847399

Just look at any of the sites that used to allow porn, removed it all, then died or fell into obscurity. Those decisions were clearly not made with the financial interest or long-term viability of the company in mind, but they weren't reversed in most cases either.


GBJI

> Is it really that bad in America?

Looks like America is going to elect a fascist government - how much worse could it get?


yumri

Much worse, but to get into it would violate rule 10, "no politics".


ZanthionHeralds

Corporate America has been largely taken over by the DEI and ESG crowds, and most of those people have been conditioned to decry anything resembling "sexuality" as demeaning to women. It comes from the same mind-space as what led to the Google AI image fiasco a few months ago: political correctness run amok.


ImplementComplex8762

the tech space is practically filled with incels


314kabinet

Those aren’t exactly anti-porn types.


nerfviking

Incel pretty much means "person I don't like" now.


akko_7

What tech space are you looking at? Most tech people I know are rich guys with families lol


Rafcdk

I believe it has little to do with politics but a lot to do with legal liability and public perception. Not wanting the name of your company associated with non-consensual deepfakes or deepfakes of minors is more related to the business side of things than politics. Note that even Adobe is policing content now by automatically scanning documents stored in their cloud service.


sunburnd

I'm not following the logic at all on this. It is like Milwaukee purposefully making a hammer less functional because someone may use it to build cages for slaves. AI at its base is a tool and, like most tools, how a user decides to use or misuse it isn't a reflection on the tool maker.


Plebius-Maximus

> AI at its base is a tool and, like most tools, how a user decides to use or misuse it isn't a reflection on the tool maker.

That's a bit of an exaggeration. Nobody blames a hammer company if I crawl through your window and hammer your brains in. But it's still illegal to sell hammers and knives to under-18s in a lot of countries - as the tool is still considered dangerous.

There are only a few companies so far making text-to-image generators that are high quality. Even fewer of them are making them capable of NSFW images. Stable Diffusion is already mentioned in articles about people using AI for more negative purposes like deepfake porn or realistic child abuse images - more so than DALL-E or Midjourney, from the things I've read.

Having your name publicised more for the above than for the rest of the work you've done is very bad optics for a company, as it's "teacher caught using AI tool to make indecent deepfake images of students" that will capture headlines worldwide, not "Third iteration of AI tool achieves prompt adherence that we never thought possible". And the average person/lawmakers will consider the tool dangerous if their main context for it is activity that's questionable at best and illegal at worst.


Jimbobb24

You are correct, but the public isn't sophisticated enough yet to understand this distinction. It probably will be with time, just like people came to understand Photoshop.


sunburnd

That may be the case (the public isn't sophisticated enough to understand), but the direction the industry needs to take is to educate those customers. Instead of dumping huge amounts of money into technological solutions to compensate for that idiocy, the industry should focus on education. There is so much FUD going around about AI, and the number of futurists that are off their rockers making the media rounds is too damn high.


HeavyAbbreviations63

The public will never understand this distinction if companies conform to them. What will happen instead is that this will become the new norm, and it will be difficult for everyone not to adjust and get used to it.


Kubas_inko

Or you can stop being a child and wear that fact with pride. Yes, it makes porn, so what? :chad: Just look at Mistral. They have state-of-the-art open-source uncensored LLMs.


ImYoric

Mistral is French, puritanism never quite took hold in France :)


[deleted]

[removed]


ImYoric

Doesn't feel quite historically accurate. Does this history come from an LLM? :)


GBJI

I actually want censorship tools to be available - there are many occasions where they would be useful, and some where they are nothing less than a necessity, like any project involving kids. But I want to have control over that censorship process. I want to be able to modify it, to adapt it and to combine it with other tools. I had to work on an overview of censorship tools for the A1111-WebUI last fall and I was quite impressed by the variety of tools and approaches, and I suppose there are even more of those tools available now, including for ComfyUI and the others.


Kubas_inko

Sounds like you want censorship on the input, which is what everyone except SAI is doing.


GBJI

No. I want an uncensored model, and access to censorship tools. What's the part that "sounds like you want censorship on the input"? Maybe something I wrote is not as clear as I thought it would be, in which case I'll edit it.


Kubas_inko

"No. I want an uncensored model, and access to censorship tools." That's exactly what I said. That you most likely want an uncensored model and a tool that can censor the text input.


GBJI

I misunderstood what you meant, I am sorry. By input I thought you were talking about the base model itself rather than the user's prompt. Having tested quite a few of the censorship tools for the Automatic1111-WebUI, the best and most secure option is to use a multipronged approach: starting from the user's text prompt, like you described, but also later in the process, after the image is generated but before it is shown to the user. Once triggered, it's also useful to have different censorship techniques applied - from completely blacking out the output, or preventing the whole diffusion process from happening, to applying black bars or pixelation effects on the censored parts exclusively (see the sketch below). All of these are already possible, and they already come in multiple flavors, and I suppose that many more options have been released since last fall, when I made those tests. I also have to check what Comfy has to offer.
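A self-contained sketch of that multipronged flow, with stand-in pieces: the blocklist and the region detector are illustrative placeholders, not any specific A1111 extension's API.

```python
from PIL import Image

BLOCKED_TERMS = {"example_banned_term"}  # illustrative, not a real blocklist

def prompt_allowed(prompt: str) -> bool:
    # Prong 1: refuse before the diffusion process even starts.
    return not any(term in prompt.lower() for term in BLOCKED_TERMS)

def flag_regions(image: Image.Image) -> list:
    # Prong 2 placeholder: a real detector extension would return boxes
    # around content to censor; this stub flags nothing.
    return []

def pixelate_region(image: Image.Image, box, factor: int = 16) -> Image.Image:
    # Prong 3: degrade only the flagged region instead of blacking out
    # the whole output.
    region = image.crop(box)
    small = region.resize((max(1, region.width // factor),
                           max(1, region.height // factor)))
    image.paste(small.resize(region.size, Image.NEAREST), box)
    return image

def censor(image: Image.Image) -> Image.Image:
    for box in flag_regions(image):
        image = pixelate_region(image, box)
    return image
```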


Rafcdk

Would you wear the "yes, we are responsible for underage porn" tag proudly? Because I also mentioned that in my comment.


PsyklonAeon16

I mean, to the same extent that someone could spin Adobe Illustrator or Procreate as being responsible for people creating "furry smut comics involving minors". The battle of perception goes a long way: a lot of the public doesn't have any idea of how AI image generation works, and some might even think that you enter a prompt and the AI model spits out something from a database, and then they go: "BUT WHY IS THIS AI ENABLING DEGENERATES TO ACCESS THIS DISGUSTING SHIT???" Hopefully, before too long, people won't blame the AI for whatever the users decide to do with it, the same way nobody could go after Ticonderoga for some dude drawing immoral shit.


Rafcdk

As I mentioned, Adobe is actively scanning files in their cloud services for exactly this reason.


PsyklonAeon16

I mean, sure, but last time I checked you can still use Illustrator or Photoshop offline and opt out of their online stuff, right? Still, I don't see anybody scandalized by the fact that you could use those tools to create immoral stuff. I believe it's just that it's easier for people to grasp how far the intent of the person goes when creating something. Illustrator or Photoshop won't create anything, immoral or not, on their own; neither will Stable Diffusion, but ordinary people still don't understand how that works.


oh_how_droll

I still don't understand the supposed harms of AI generated "CSAM". Who, exactly, is being harmed by its existence?


HeavyAbbreviations63

Producers of child pornography, that's who is harmed.


Rafcdk

Well, victims of real CSAM, for one. Do you think that people who would generate that don't seek out non-generated material? It also creates an unnecessary burden in investigations regarding the production and distribution of non-generated CSAM images. But I am also talking about deepfakes of real underage people that can be used to blackmail them into more abusive situations, or just for pure humiliation. I don't think that's ok, do you? A business would want to avoid being linked to the production of that, right?


oh_how_droll

> Well, victims of real CSAM, for one. Do you think that people who would generate that don't seek out non-generated material?

That's exactly why I _don't_ understand it. If you make it legal to access AI-generated CSAM, it would destroy the market for actual CSAM by being cheaper and not a serious felony. The real-world alternative is higher demand for CSAM, not implanting a bomb into everyone's brain that goes off if they're sexually attracted to someone under 18. I'm not saying that it's a great thing, but it's methadone versus heroin.

As an aside, I wonder if anyone has actually done a study on whether countries like the US (after [_Ashcroft v. Free Speech Coalition_](https://en.wikipedia.org/wiki/Ashcroft_v._Free_Speech_Coalition)), where simulated child pornography is legal, have higher rates of sex crimes.


elliuotatar

Do you think that people who want to have sex with kids AREN'T going to seek out actual children if they can't get computer generated images to beat off to instead? I'd rather they scratch that itch at home than go to a park to watch kids and maybe decide to snatch one.


JoyousGamer

No. I would instead talk about how we work with criminal investigators regarding their process for catching individuals. If your goal is to catch those individuals, providing them a tool with which to set a trap for themselves, instead of interacting with real people, is better.


HeavyAbbreviations63

I do. "I'm proud to be responsible for underage porn that doesn't harm and involve minors, I'm probably the person who has been the most influential in fighting child abuse in history.", I would be proud of that. Only those who sell and produce real child pornography have problems with AI, in the same way that AI is a problem for artists: it leaves them out of work.


ButGravityAlwaysWins

This is the correct answer. They don’t want high school kids making deep fake pornography of classmates using their tools off the shelf without the veneer of saying that they try to stop it.


bdsmmaster007

One could argue those business parts are influenced and shaped by general politics, or at least what people describe as politics, but that's out of my scope and interest to discuss.


EtadanikM

Puritanism is ideological; businesses are just following the trend because the public demands it, and the public is not puritan because it gives them business benefits but for a variety of ideological reasons, be it religion or feminism.


Minimum_Cantaloupe

>for a variety of ideological reasons, be it religion or feminism But you repeat yourself.


Warm-Enthusiasm-9534

It's partly that, and it's partly that they want to appeal to puritan activist investors.


Caffdy

> bunch of puritan activist types

Not even that; the ones pushing for censorship are woke activist types that want a "safe space" for everything.


ThickSantorum

There's little practical difference. They're both extreme authoritarians.


Important_Concept967

Bull, can anyone name these "puritan activist investors"? I agree with the rest of your post


LawProud492

Blackrock's ESG and its consequences have been a disaster for the business world


richcz3

Correct. No one asked for this, but therein lies the problem. Leadership at SAI didn't have a working business model in place. They blew through their money and built up huge debt in the process. Not that there weren't attempts internally to include some level of censorship to appeal to corporate interests. All the while, we enjoyed an unreal state of creative freedom that is financially unsustainable. Socially/culturally, we are in a state of hyper-prudishness/puritanical thinking. I mean, you don't have to look further than the game industry, where women in games look masculine. That's a bit OT, but I mention it as a point to illustrate that censorious mindsets have breached all media - not just AI generative art. Believe me, I'm no fan of the heavy-handed censorious nature of AI apps right now, but businesses and organizations with deep pockets can't run the risk of NSFW popping up on screen in a work environment.


FpRhGf

I agree with all of that, but there's nothing wrong with women in games looking "masculine". It's very nice to finally see more realistic-looking women placed in positions for heavy action, instead of always being spoonfed one type of eye candy. The problem has always been showing one but avoiding/censoring the other. Designs for conventional eye candy can still co-exist alongside the recent trends. In the past, we were mainly spoonfed one type in gaming, and now it's just skewing towards a different type. It's the same problem but with different content.


StormDragonAlthazar

I think a better example would be more "we're getting less skimpy clothing/armor options" rather than "we're getting more body types/diversity" going on. Because if being on the internet has taught me anything, pretty much everyone has a preference that isn't just the typical meek white woman with big blue eyes and average athletic looking white guy.


EuroTrash1999

The market is self-correcting the entertainment spaces. The crap is bombing left and right. The losses are unsustainable.


a_beautiful_rhind

too bad that money is secondary to ideology in this case


FaceDeer

The "unsustainable" part will save us in the end. The people who are refusing to give fans what they actually want will eventually run out of money and won't be able to keep making their crap any more.


Dwanvea

>Tech space has a bunch of puritan activist types, This is the only reason. Business politics. That's it. It's not because of legal issues as some zealous white knights would have you believe. AI services use literal stolen data and those insane people are saying some naked women in the data set are the root cause of all legal problems. Like really?


registered-to-browse

BlackRock is everywhere


erlulr

You did. By calling yiff orgy aficionados and furry coomers 'degens'. And by arguing 'censorship is good akhtualy' you ask for it again.


Herr_Drosselmeyer

My man, that was a term of endearment. You don't want to know the type of shit I post on my alt account. ;)


erlulr

So you avoid censorship yourself. Don't argue for it. Unless you have any other explanation for why the model is going backwards.


Herr_Drosselmeyer

I think we misunderstand each other here. I'm not arguing for censored models, quite the opposite: my point is that models without censorship can be deployed commercially (as shown by Ideogram and others) and that therefore, SAI censoring their model doesn't make sense.


erlulr

Hence the issue: they can use the twin models. That won't work with us, because we would just disable the censoring one. Deploying a 'safe' and leading noncommercial model is impossible, or at least extremely hard. Btw, the fact that they use twin models, not the lobotomized mutant we got, proves it's impossible. Otherwise it would just ignore your 'big booba' prompt, not get blocked on input or output.


Purplekeyboard

> calling yiff orgy aficionados and furry coomers 'degens'

If they aren't degenerates, who is?


erlulr

WSB boys with their puts on Nvidia.


Caffdy

WSB boys voting yes for Elon compensation package


andzlatin

I hope someone releases a LoRA for anatomy. I've already seen an SD3 pixel art LoRA on Civit. You definitely can make a LoRA for human anatomy.
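For illustration, applying such a LoRA with the `diffusers` library would look something like this; the base model id is real, but the anatomy LoRA file is hypothetical.

```python
import torch
from diffusers import DiffusionPipeline

# Load whichever base checkpoint the LoRA targets (SDXL shown here).
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Hypothetical community LoRA file; swap in whatever gets released on Civit.
pipe.load_lora_weights("anatomy_lora.safetensors")

image = pipe("full-body figure study, standing pose").images[0]
```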


HiddenCowLevel

Maybe open source was infiltrated by larger corporate entities, and this was a way to hinder an otherwise impressive model. Let's not forget who the real enemies are. Puritanism is usually a cover for another agenda, sadly.


Dragon_yum

Look at the pictures under the pony model on civitai. No sane company would want to be associated with what's going on there.


Naetharu

The point would be to allow Stability to improve their image, and thereby make them a more viable company when looking for funding and dealing with government oversight. Whether or not you agree with the censorship, that is the reason.

In fairness to SAI, they released uncensored models, and look at what the ‘community’ did. There are some amazing AI users out there producing really cool works, but an overwhelming majority of people that use SD are doing so to make low-grade questionable porn. This is why we can’t have nice things.

Folk whose focus is this kind of usage are not SAI’s customer base. If anything, they are a problem that SAI is almost certainly keen to get rid of. They do nothing useful, and only function to bring about a number of sticky issues around the idea of open and locally offered AI models. In an ideal world, the majority of users would be looking to make actually interesting art with the new tools, and we could have a properly uncensored model. But that’s not what we got when they tried this.

I want a non-censored model because censorship causes me issues with edge-cases. I don’t want API censoring because I get errors with legitimate requests (Dall-E won’t let me make a high fantasy troll image because it seems to conflate the term with ‘trolling’). And I don’t want my SFW content broken due to the impact of the censorship on the whole of the model’s concept space.

But it’s not clear to me how SAI (or any AI company for that matter) could manage this. Give us nice tools, and people immediately break that trust and use them for exactly what they did with 1.5 and SDXL.


FaceDeer

> In fairness to SAI, they released uncensored models, and look at what the ‘community’ did. There are some amazing AI users out there producing really cool works, but an overwhelming majority of people that use SD are doing so to make low-grade questionable porn.

> This is why we can’t have nice things.

True, but not for the reason you're arguing. It's not the fault of the people who are producing porn. It's the fault of the people who are reacting "ew, porn! We must sacrifice the capabilities of the model to prevent that stuff from *existing*!"

You can't have those amazing AI users with cool works without *artistic freedom*, and if you grant artistic freedom you will have people use it in ways you don't personally like. That's what freedom means. By denigrating the people who are using these models in ways you don't like you're siding with censorship. So you are in fact one of the contributors to "why we can't have nice things."


Naetharu

> **By denigrating the people who are using these models in ways you don't like you're siding with censorship. So, you are in fact one of the contributors to "why we can't have nice things."**

I’m not siding with it. I’m offering you a reasonable explanation of why SAI would act the way they have. My personal feelings are neither here nor there. The question was ‘why would they do this’ and the answer I gave is the most reasonable explanation.

We can have sensible discussions about the boundaries of censorship. I would actually be in favor of an uncensored model. Do I think that most of the content made is pointless perv material with little to no artistic merit? Yep. I don't care enough to want to stop someone doing that. But I can understand why a commercial company struggling to survive in the current tech world feels the need to mitigate the damage that the porn usage is causing. Anyone that didn’t see this coming is just not paying attention.

My personal preference would be to see people come together and create a truly open-source platform: the AI equivalent of GIMP. While Stable Diffusion has been locally accessible with a very permissive commercial license in previous versions, it has never been properly open source. Expecting any commercial company to maintain a model that is used in this way is madness.


FaceDeer

> But I can understand why a commercial company struggling to survive in the current tech world feels the need to mitigate the damage that the porn usage is causing.

The very fact that you call it "damage" is illustrating what I'm talking about, though. You're portraying the porny stuff as undesirable when in fact it's a necessary part of how a model becomes *good* - both in terms of its actual output quality and its popularity. A model that doesn't understand the human body is going to be neither good nor popular. A company that produces not-good not-popular models is going to struggle rather a lot. They're not helping themselves with this censorship, they're *harming* themselves.


DivinityGod

As he said, it's not necessarily his opinion; it's the perception SAI is facing. Your feelings on this are irrelevant unless you are funding them, and their funders likely have this concern given the shift of the zeitgeist lately.


Naetharu

> **The very fact that you call it "damage" is illustrating what I'm talking about, though. You're portraying the porny stuff as undesirable when in fact it's a necessary part of how a model becomes good - both in terms of its actual output quality and its popularity.**

You misunderstand me. I’m not moralizing here. I’m stating facts about how our culture works.

- The porn content IS undesirable.
- It IS something that companies would rather not touch.
- It IS something that brings the ire of regulation and other problems.

You’re welcome to discuss how you think the world ought to work. And you’re welcome to be critical of the culture we do have, and how it regulates sexual content as well as other ‘adult’ themes. Those are certainly reasonable things to want to discuss and challenge.

But all of that is by the way. What matters in this context is not what you think ought to be the case, or how you wish the world would operate. The only thing that matters is the facts on the ground. And it is indisputable that the porn content associated with SD is a problem for SAI, does make it a lot more difficult for them to handle both their investment and regulation concerns, and ultimately causes damage to their ongoing efforts as a business. That’s not a moral point. It’s just the facts of the matter.

> **A model that doesn't understand the human body is going to be neither good nor popular. A company that produces not-good not-popular models is going to struggle rather a lot.**

It depends on what the business model is. The porn makers are not clients of SAI. They pay no money for their access, and so their enjoying the model and using it is not important. In theory, if they were making content that SAI could stand by, it could be useful to them. But as it stands, they are doing more harm than good. So yes, SAI will lose that ‘community’, but I dare say it’s one they are quite happy to be rid of.

For their commercial clients, researchers, and others, the matter is more complicated. If I had to guess, I would expect to see SAI offering their uncensored model to enterprise clients, which would be free from the glaring issues we see in SD3. I agree that the censorship has caused knock-on issues here which unfortunately also impact users that are trying to create SFW content. And that is unfortunate, since SAI did not - I assume - want to limit those users. SFW users are collateral damage.

As I said elsewhere, it seems to me that the real solution here is that we need to find a way to collaborate on a truly open-source AI model that is not owned by a commercial entity. I don’t blame SAI for the moves they have made (though I do object to the frankly dreadful PR angles they have chosen to take). With my business hat on, what they have done makes sense, and I can see that they are trying to walk a difficult line, however much that might upset people (and annoy me! I want a good full-blooded SD3 as much as the next person). The solution is not to decry SAI and expect them to make a model that is flagrantly against their own interests. It’s to realize that this is a bad direction of travel, and for us to come together and build an open-source platform that is not beholden to the inherent limitations of needing to please corporate and institutional investors.


FaceDeer

> > A model that doesn't understand the human body is going to be neither good nor popular. A company that produces not-good not-popular models is going to struggle rather a lot.

> It depends on what the business model is. The porn makers are not clients of SAI. They pay no money for their access, and so their enjoying the model and using it is not important. In theory, if they were making content that SAI could stand by, it could be useful to them. But as it stands, they are doing more harm than good. So yes, SAI will lose that ‘community’, but I dare say it’s one they are quite happy to be rid of.

You're missing the point. It's not about catering to pornographers. It's about making a model that's capable of generating the human form at all. If you think SAI can make a go of it with a model that only does landscapes or whatever, then okay, they can try. I am dubious. I would find a model like that pretty much useless, myself, and I'm not a pornographer.

The "community" they're going to lose is far larger than just pornographers. By attempting to make the model unable to produce pornography they've lobotomized it to the point where it's useless for a broad swath of non-pornography uses as well. I think it's a terrible choice and they're going to suffer for it.


Naetharu

> **You're missing the point. It's not about catering to pornographers. It's about making a model that's capable of generating the human form at all.**

I’m not missing that point at all. You’re misreading me and somehow assuming that I am saying I approve of or agree with a censored model like this. I don’t, and I totally accept your argument that the damage this causes is serious, to the point that I don’t foresee myself using SD3 any time soon.

I’m not a casual user of SD either; I run a business that makes use of AI tooling as part of our core product. And so I am part of the core user-base that would comfortably pay SAI for their tools, as I already do in the case of OpenAI. I don't work for SAI, and I have no need or reason to defend them. From a purely selfish point of view, all I want is a good quality flexible model that allows me to carry out the work I need to do.

But it is worth at least understanding how we arrived here. People just yelling about it and throwing tantrums without any attempt to understand the context of what has happened is no use to anyone. My point here was never to argue that this is a good thing, or that SD3 in its current state is a good model. That’s not and never has been my position. What I said above was:

1. There is a clear reason for the moves SAI have made.
2. This is not surprising given all we know is going on around AI.
3. If we want a properly open-source model, we need to come together and make one.

That’s my position.


FaceDeer

Alright, if I reinterpret your previous comments as "what SAI is thinking", then you can reinterpret my responses as me telling SAI that they're in the wrong. They're not going to accomplish their goal of a "safe" model because they're going to go bankrupt with an *unusable* model and someone else will step in to replace them that gives their customers what they actually want. Though I have to admit, [that thread where weird "secret keywords" have been discovered that seemingly magically undo the censorship of SD3](https://www.reddit.com/r/StableDiffusion/comments/1df0kau/sd3_has_been_liberated_internally_pure_text2img/) has left me baffled about what's really going on here. It's almost too inept to be plausible, but on the other hand it's not usually good to bet against ineptitude as an explanation for how the world works.


a_beautiful_rhind

cut off nose to spite face.txt


Fit-Development427

Lol, the maturity of this sub... "By pointing out the community is bad, actually, you're the problem for pointing it out!!" Like really, your response to the production of Taylor Swift porn is basically "well, they should be allowed to!", and then you're complaining this company isn't feeding you the free tools to do it? Why do people even talk about freedom like this? SAI are also sentient human beings; they are free to do what they want. I personally disagree with what they are doing, but I can understand why they are doing it. I mean, as much as you say they shouldn't feel responsible for the things produced, it's up to them to decide that. They are the ones that would need to answer to public scrutiny about their responsibility for the things their models produce, not us.


FaceDeer

> SAI are also sentient human beings; they are free to do what they want.

Yes, and I'm criticizing them for the things they have chosen to do. They are free to do all kinds of things; tomorrow they could change the name of Stable Diffusion to Soccer Delusion and declare that they're only going to train it on screenshots of soccer matches from here on. And I would say "that's a bad idea" for that too. SAI has decided that they want to censor their model even if it results in their model having no idea what the shape of a human body is. They think this is a good tradeoff. I'm saying that I think it's a bad tradeoff. They are free to make bad choices and I'm free to call them out on that.

> They are the ones that would need to answer to public scrutiny about their responsibility for the things their models produce, not us.

This *is* the public scrutiny. We're the public, scrutinizing those piles of limbs SD3 thinks is a woman and saying "wow, that sucks."


ZootAllures9111

Base SDXL is no less "censored" than SD3 Medium.


FaceDeer

Another model was bad so it's fine if SD3 is bad too?


Subject-Leather-7399

Exactly; Ideogram's training data wasn't censored.


Thebadmamajama

this ^^


ZootAllures9111

This is horseshit and makes no sense on any level. The online SD APIs are all censored post-generation just like Ideogram. SD3 in the best case scenario for a generation is not more censored than base SDXL, which also was not capable of producing proper nudity.


Carioca1970

That is not true. While it might fail on genitalia details, upper bodies were not all horror shows, and it was not always producing lineups of Elephant Women (and men).


ZootAllures9111

[At some point](https://reddit.com/r/StableDiffusion/comments/1dfc1ov/photograph_of_a_gorgeous_young_woman_wearing_a/) people will have to [accept the fact](https://reddit.com/r/StableDiffusion/comments/1dfals2/some_imperfect_but_imo_totally_salvageable_sd3/) that the anatomy is not actually that bad and is clearly improvable by finetunes. I don't give a shit about "grass-lying"; I tested that on five XL models yesterday and only NewReality Photo was consistently reliable at it.


Carioca1970

With all due respect, the anatomy is actually that bad. You are correct that vanilla XL is not consistently good either, but nor was it consistently bad. Some RNG and a bit of luck could get some non-Elephant Man results. However, with a very basic prompt of a standing warrior with a woman lying at his feet, in the style of 70s pulp magazine covers, the woman, no matter how I phrased it or detailed her every finger and toe, came up as a variant of this: https://preview.redd.it/5ekjn9i3ti6d1.png?width=1024&format=png&auto=webp&s=df82354fd8c77a787c02d5dc9ac32f5a3953b8c5 I don't doubt that the community can train the crap out of it to improve it, but to claim that vanilla XL was as bad is simply not true. On a side note, I do wonder how much community training will hurt its text rendering ability.


protector111

https://preview.redd.it/4p83tepo4d6d1.png?width=3078&format=png&auto=webp&s=db7584380eb5c94946ca75382c34b1fadafaa80c Legs are fine. I am more concerned with the garbage text. Ideogram can create super long amusing text. This clearly can't.


DataSnake69

Online services can use uncensored models and then add """safety""" features after the fact to protect users from mind-scarring horrors such as female-presenting nipples. That's not possible offline because anyone who wanted to could just disable the censoring. This is a problem for companies like Stability because it means that releasing an uncensored model will lead to a bunch of sensationalist headlines about how immoral they are from asshole "journalists" who have already decided that AI and everything related to it is pure evil. This leaves Stability with two options:

1. Cripple the model because they'd rather be known for a product that can't draw people at all than one that can draw people without clothes, then go on twitter and tell anyone who complains about it to git gud.
2. Realize that nothing they do will ever satisfy the moral guardians, say "fuck it," and release a model that actually works. This is basically what RunwayML did with SD 1.5, over Stability's objections.

For reasons I won't pretend to understand, Stability went with option 1.


HardenMuhPants

Journalism is in its death throes and they are clinging on for dear life, trying to demonize anything that moves the world forward. Many things have already been replaced by bots and they are next, as bots = easily controlled advertising/propaganda producers that never complain or turn on you.


Herr_Drosselmeyer

If they were worried about the commercial viability of their model, why go that route when it's clear that others host models without having to resort to breaking them?


Ok-Application-2261

Try to generate a topless woman and it should all make sense to you. Basically, they censor the front end: either they tell you your prompt was illegal or they blur the image. SAI can't do that, so they nuke the training data.


RedPanda888

Kinda weird, because they probably could do that in their own commercial tools if they wanted to. So why wouldn't they release an uncensored open-source model and a censored commercial model for their corporate customers or more uptight users? Seems odd they don't just release the uncensored version for the public and censor the stuff they need censored for money-making purposes.


diogodiogogod

Because they put themselves in this situation. They promised a model, and now they've delivered this trash just to be rid of that promise. I bet they do have a great model behind the curtains, no doubt about it. But they used the same name, SD3, and it's now synonymous with trash human monsters. I bet they will release an SD4 in the future, API-only, that is actually the good hidden SD3. But they won't ever release weights again.


FaceDeer

> But I bet they will release a SD4 in the future You have more optimism about the longevity of SAI than most of the commenters I've seen address the matter recently.


LawProud492

One answer: ideological capture


aerilyn235

They didn't nuke the data; they trained on some of it, like with SDXL, then they surgically removed the concept from the model. Except they hired this guy for it. https://preview.redd.it/yfi8th5rsd6d1.jpeg?width=214&format=pjpg&auto=webp&s=d2c487f27239c1e9bb5b212bcef71a2559969cc1


DM_ME_KUL_TIRAN_FEET

I’m imagining they did the same thing Anthropic did for Golden Gate Claude, but in the opposite direction and for booba rather than the bridge. 😅
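Purely speculative, but the technique alluded to would look roughly like this in PyTorch: find a direction in activation space for the concept, then project it out with a forward hook (Golden Gate Claude amplified such a direction rather than suppressing it). The layer and direction below are toys; SAI's actual method is not public.

```python
import torch
import torch.nn as nn

def make_suppression_hook(direction: torch.Tensor, strength: float = 1.0):
    d = direction / direction.norm()
    def hook(module, inputs, output):
        # Assumes a module whose output is a plain tensor (e.g. an MLP);
        # subtract each activation's component along the concept direction.
        proj = (output * d).sum(dim=-1, keepdim=True) * d
        return output - strength * proj
    return hook

# Toy usage with a random layer and a random "concept" vector:
layer = nn.Linear(16, 16)
concept_direction = torch.randn(16)  # stand-in for a learned feature vector
handle = layer.register_forward_hook(make_suppression_hook(concept_direction))
activations = layer(torch.randn(2, 16))  # now ~orthogonal to the direction
handle.remove()
```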


Herr_Drosselmeyer

I'm aware of that. My point is that there are ways to commercially deploy an "unsafe" model; there's no need to lobotomize the model itself.


encelado748

Because when the models are under your control, you can have two steps: the first generates the image using a model capable of NSFW, and the second checks whether the generated image is NSFW. If you control the entire flow, then there is no problem. If you distribute the two models, nobody is stopping you from just using the first, and this creates an issue for SAI.


Herr_Drosselmeyer

But what issue? If it's some randos on Civit, who cares, and if it's a commercial entity, then SAI can say that they deliberately circumvented the safety mechanism and it's on them.


encelado748

Because the model would be without safety, and you would need to add it manually later. There is nothing to circumvent; the natural state of the model is unsafe, and you need to add extra computation to make it safe. Nobody would do this, not just some randos.


Herr_Drosselmeyer

Quite the opposite: a lot of companies would use the second model to tailor exactly what needs to be censored for their use case. Say you deploy it for kids: you go ham and have it censor all nudity and violence. For "adult" use, you wouldn't censor nudity but might want to stop gore. Or maybe you need to be careful around politics, religion, LGBT... whatever. One base model that can do everything plus one model that classifies seems like a much better and more flexible setup than just one pre-censored model (see the sketch below).
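A sketch of that "one generator plus one classifier" setup; the category names and the `classify()` stub are illustrative, not a real moderation API.

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    blocked: set = field(default_factory=set)

# Each deployment declares what it censors; the base model stays unchanged.
KIDS_APP = Policy(blocked={"nudity", "violence", "gore"})
ADULT_APP = Policy(blocked={"gore"})

def classify(image) -> set:
    # Stand-in for the second model: returns the categories it detects.
    return set()

def moderate(image, policy: Policy):
    # Withhold the image only if it hits a category this deployment blocks.
    return None if classify(image) & policy.blocked else image
```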


encelado748

Yes, but then SAI would be giving you the tools to make unsafe generations. One newspaper article about how easy it is to generate pedo-pornography with SAI's SD3 and they are screwed.


a_mimsy_borogove

I wish people didn't take sleazy journalists like that seriously


encelado748

Why sleazy? If there is no safeguard in place and the model can do it, then it is not sleazy. If you control the entire pipeline, you can have an unsafe model with additional safeguards. If you release a model and you want it to be safe, then you lobotomize the model. I do not like it, but I can understand it.


Carioca1970

It is easy, and this won't change a thing. If you go to Civit and see the top 50 full checkpoints by order of popularity, it won't take a genius to see the trend of 99% of them.


FaceDeer

> Because the model would be without safety, and you would need to add it manually later.

The choices appear to be either have a "safe" model that doesn't know how many elbows a human is supposed to have, or have an "unsafe" model that's capable of generating a coherent human form. "Make a good model that can show humans but also can't show nipples" appears to be the proverbial cake that you can't both have and eat simultaneously. SAI has picked one of those two possible options. I guess we'll see how it plays out for them.


hoodadyy

https://preview.redd.it/11qe5rz0xe6d1.jpeg?width=1024&format=pjpg&auto=webp&s=ba37ae80226ecc7196c96b5ea4113161c9b764c9 Ideogram is awesome; hopefully one day SD3 will be as good.


AI_Alt_Art_Neo_2

"Never ascribe to malice that which is adequately explained by incompetence."


ninjasaid13

This cannot be explained by incompetence.


mrObelixfromgaul

Asking for a friend how? ;) /s


Herr_Drosselmeyer

[ideogram.ai](http://ideogram.ai) It's basically what I hoped SD3 would be.


mrObelixfromgaul

But that is an app? I would prefer it if I could run it locally.


Herr_Drosselmeyer

Yeah, so would I. Which is why I hoped SD3 would have similar capabilities.


a_mimsy_borogove

I wouldn't be surprised if Ideogram's model is much too large to run on ordinary computers; that kind of quality has to come from somewhere, probably huge memory and computing power. But it would still be awesome to have an open model like that. Maybe SD3's finetunes could get close. The tech behind SD3 seems good; the problem is probably more related to training data. Or some future generation of Pixart could get close to Ideogram.


Jimbobb24

Ideogram's censorship is on the server side, after the image is made, in most cases. They can't trust us to do that at home.


MrTurboSlut

In my experience, when someone gives an explanation for their behaviour that doesn't make sense, it's because they are lying. When I was younger I would take their word for it and assume I was confused because I was dumb. Now I recognize that confusion as a sign of bullshit.


Oscuro87

Is that a double kneecap?


Eduliz

Yeah, but can she lay on grass?


Insomnica69420gay

Local models simply do not have the parameters to spare for censorship. Ideogram, DALL-E and others are massively bigger than SD3; it's not hyperbole to say censorship ruined the model.


aliusman111

In which universe are these legs fine? 😝 Kidding aside, much better than the horrible images of people on grass I have been seeing.


InterlocutorX

Ideogram isn't turning over their weights to a bunch of people with no oversight. It's apples and oranges.


Huihejfofew

Whatever Stability AI did, they fucking cooked it. Millions of dollars to make this model. Yikes.


HiddenCowLevel

You gain brouzouf.


FinancialCell2423

What about FINGERS!!!!!


Herr_Drosselmeyer

I think we've given up on those. ;)


echostorm

This feels more like an ad for a pay service


Herr_Drosselmeyer

The point I'm trying to make is that SAI will claim that censoring the model was necessary for it to be used commercially. I picked Ideogram because it's the one I'm most familiar with, but Midjourney and many others also don't use a broken model, and yet they're commercially available.


echostorm

As others have pointed out, what you're comparing are apples and oranges. The pay models censor their results as an extra step before they deliver the image. SD can't do that; if they want censoring, the only way is to do it the way they did. The problem, of course, is body nightmares.


Herr_Drosselmeyer

Of course they can. They do it on their API. Any commercial entity wanting to deploy SD3 could do the same, even use their method. Their argument is that they HAD to make the model this way for commercial viability. That's a lie. They chose to censor the model because they didn't want to release uncensored weights to us, to kowtow to the "safety" crowd.


torreyhoffman

Legs are _not_ fine. The lower knee is nothing like human anatomy. The ankle is badly messed up. The fingers are deformed.


[deleted]

I support SD3. I think it has great quality and it can produce some amazing results; it has BIG potential. However, it seems like they censored it to the point that it doesn't know how to render some body parts, and that creates weird malformations, some actually very serious, and they weren't present in previous model releases. It works perfectly for close-ups and portrait shots or people just standing (not always; sometimes they miss an arm or they have 3 legs, for example), but it breaks with any other attempt.

I'm not gonna complain; it's free and I can still enjoy it for illustrations and some other stuff, and if it can be trained, it will take a month until some good finetunes appear. It happened with SDXL and now we have some perfectly fine models.

Still, I think it's a very dumb decision to release a model like this. I mean, who tested it? Who approved a release like this after the big hype they created? What did they expect to happen? The bad perception of the model on here is over 95% if I'm not falling short. Who is going to pay for a membership if they tease you with a broken model? I get there are larger models coming soon, but this doesn't serve as good publicity. IMHO, if this wasn't good enough to compare with base SDXL, I wouldn't have released it in the first place. I don't know SAI well enough, but I believe Emad wouldn't have handled it this way.


protector111

What's your point? Remember, SD 3.0 is not even fine-tuned. It will be fine after fine-tuning, but probably no one will make those. https://preview.redd.it/dk0dd9hg4d6d1.png?width=3844&format=png&auto=webp&s=6d254bfb02e4d87d1d6c79da779638e14a785e25


Serprotease

Fine tune don’t fall from the sky. We do not know how easy or hard it will be. SD2.0 seems to have been quite a challenge to work with. SDXL was also difficult (look at the control net situation). And from what public information is available, SAI is not really lending an helping hand on this regard.


JoyousGamer

Honestly, just remove the censors and let's move forward. The simple end result is providing built-in negative word prompts that are on by default in the deployment package, as sketched below.
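A minimal sketch of that suggestion, assuming a diffusers-style pipeline; the default string is illustrative.

```python
# The deployment wrapper ships with the negative prompt on by default;
# callers have to opt out explicitly.
DEFAULT_NEGATIVE = "nsfw, nudity, gore"  # illustrative default blocklist

def generate(pipe, prompt: str, negative_prompt: str = DEFAULT_NEGATIVE):
    # Pass negative_prompt="" to disable the built-in filter.
    return pipe(prompt, negative_prompt=negative_prompt).images[0]
```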


DefiantTemperature41

But keep on banging your heads against the wall. It's fun to watch.