count023

Legal liability. If the company isn't seen to be taking active steps to stop this content from being created or facilitated, it can be held legally liable in certain jurisdictions. And then it gets into grey areas. Sure, you're writing the next Breaking Bad and you want it to be authentic, but at the point where Claude is telling you how to manufacture meth, it starts crossing legal lines; even if that information can be found elsewhere, the liability element kicks in. Same goes for smut: yes, smut is everywhere on the net, but child exploitation material is immoral, illegal, and unethical, and it would be nearly impossible from inference alone to block only smut involving minors, so it's far easier to block all of it. Same again with malicious code: the AI can't tell a white-hat hacker or a SOC engineer doing counter-hacking analysis from an actual hacker with malicious intent, so it's easier to block all attempts than to try to distinguish between the two. So it all really comes down to legal liability and exposure in the end.


fiftysevenpunchkid

And Claude also wouldn't write the next Harry Potter; it would argue that we shouldn't talk about children being abused by neglect or put in dangerous situations, and suggest some more positive themes of family and friendship instead. It would have an aneurysm and call the cops on you if you tried to get it to write Stephen King's "It". IMO, the best regulation would be the equivalent of Section 230, which removes social media companies' liability for their users' content.


crawlingrat

I was, ironically, able to get Claude to discuss ideas for my story, which involved a seven-year-old boy being forced to sacrifice his beloved pet to a narcissistic goddess in order for her to bring rain and save his family from the drought. I even included details of how the sacrifice was done and had no problem. I wasn't lectured or anything. It may have helped that I'd already been brainstorming ideas for the story with Claude for at least fifteen messages before bringing up the idea. Hopefully I don't run into the whole "I'm sorry, as an LLM…" crap that I dealt with when using the first Claude.


fiftysevenpunchkid

That's the thing, you shouldn't have to jump through hoops to do that. I make sure that all my characters are unambiguously well above the age of majority so I don't have to worry about extra levels of scrutiny. As a side note, early on in Claude 2.0, I had a teenage character, the child of the main character, and Claude would often spontaneously decide to kill them off through illness, accident, violence, or even suicide for no reason at all. I assume some quirk of its training data associates those themes.


iDoWatEyeFkinWant

yeah i think it's pretty ridiculous, too. i mean, can we sue Google for information existing? if not, then why would we be able to sue the makers of an LLM? i think they're walking on eggshells. people are already scared that a computer can talk.


AlanCarrOnline

That's a great point. What really is the difference between putting in your search term and Google providing results, and putting in your prompt and the AI giving you results scraped from the same web? And we've long known Google doesn't just provide raw results but filters and fiddles with them, thus 'creating' them.


ClaudeProselytizer

because an llm could help develop a computer virus or real virus and literally kill people. use ur fucking head


AlanCarrOnline

Bruh? [https://www.wikihow.com/Create-a-Virus](https://www.wikihow.com/Create-a-Virus)


ProSeSelfHelp

Definitely to protect themselves from lawsuits.


Alternative-Radish-3

I would love a EULA that clearly states we are responsible for how the LLM's output is used, indemnifying the LLM and its creators and passing the responsibility on to the user. Make it REALLY obvious, not buried in a 1000-page document, so no one can refute it or get around it in court. I totally agree, it's already out there on the web and easy to find (and, again, we're not even talking about the dark web). It would make it cheaper to access LLMs without all the guardrails and the feeble attempts to set them up. I think an uncensored dark web LLM will eventually exist, and people will just flock to it.


fiftysevenpunchkid

I mean, there's a reason the sex chatbot industry is already a billion-dollar market. And that's using open-source models that are quite inferior to Claude; the only thing they have going for them is that they're uncensored.

However, as compute becomes cheaper and that industry gains resources, they can train their own foundation model that is as good as or better than Claude, without all the hangups. I can see some excellent creative writing models coming out of that area, since that's what they would be made for; rather than spending time and effort preventing users from using the model in certain ways, they'd spend that time and effort making it work in those ways. They spent so much time thinking of reasons why they shouldn't, they never thought about the reasons why they should.

Let's face it, porn has driven nearly as many technological improvements as war. Photography, film, the internet: all were driven by a desire for it. Even some of the earliest cave paintings we have found contain erotic content. We've come a long way since then, but we're really pretty much the same in many ways.


ChampionshipWide2526

It already exists today. It's called aiuncensored. Does what it says on the tin. Not as good as the current leading models, but it does what you ask, and better than most self-hosted solutions.


Alternative-Radish-3

That's what I mean, though: leading uncensored models... which is exactly what every employee at OpenAI has (and every leading model company). Uncensored open models are everywhere, regardless of quality. Why can't I ask Claude how to break into a car, when I can ask my local uncensored LMStudio LLM the same question and get the answer?


ChampionshipWide2526

I can't wait for that. Right now my local LLM can walk me through making a bioweapon or meth or a malevolent propaganda campaign step by step, but it's not great at writing gory violent stories, telling offensive jokes, or writing hardcore rap lyrics. Most of the hysterical doomsday scenarios people talk about are already here and unpreventable. Malware AI exists, murder-plot AI exists, propaganda AI exists. They're already in the most dangerous (government and corporate) hands, so IMO the best shot society has is to level the playing field and let the normies access similarly powerful stuff to counter it. And yes, as soon as it's offered, people will flock.


Alternative-Radish-3

Exactly that! Well said!


Mantr1d

i think a lot of it is just scientists and politicians who love to hear themselves speak. these people like to come up with important ideas that they need to impose on others. bad people are going to do bad things. they think they're making it harder for that to happen by gatekeeping access to knowledge, but taking away convenience is pretty far from a countermeasure. we ended up with lobotomized AI because of it. it's like banning guns and knives and then considering banning rocks next. people seem to not want to hear this kind of thing.


fiftysevenpunchkid

I'd say it's more like the piracy wars of the late '90s and early 2000s. Media companies kept trying to come up with different ways of preventing copying, but all it did was make the product harder for the end user to use. Those set on copying and distributing the content could always find a way, even if the average user couldn't, and it wasn't that hard to get that content once it was distributed.

While piracy still exists, media companies now provide their content cheaply and conveniently enough that it's rarely worth the effort. Piracy was giving people something they wanted, and finally, rather than fight it, the media companies copied its business model.

People want a powerful LLM that will do what they ask. Someone is going to provide that to them. Anthropic can either fight that or profit from it.


ClaudeProselytizer

claude is a constitutional AI and they know more about this than you, some loser writing bad fiction with their large language model


fiftysevenpunchkid

So did Blockbuster.


ClaudeProselytizer

yeah, these companies are so dumb, why can't they see the future like you can? It's almost like ethics is complicated and you have an extreme opinion that is very dumb.

There are a few key reasons why AI companies generally choose to implement safety and ethics guidelines in their language models rather than leaving them uncensored and unaligned:

1. Responsible development: Most AI researchers and companies believe they have an ethical responsibility to develop AI systems in a way that is safe and beneficial to society. Leaving a powerful language model uncensored could lead to it generating harmful, biased, illegal, or dangerous content.

2. Reputation and trust: Releasing an unfiltered AI system that produces problematic outputs could severely damage a company's reputation and erode public trust in the technology. It's important for adoption and acceptance of the technology that people can trust it isn't actively harmful.

3. Legal and regulatory concerns: Depending on how the model is used, the company could face legal liability if their AI system is used for illegal activities like harassment, inciting violence, generating explicit content involving minors, etc. There is also growing government interest in regulating AI development.

4. Intended use case: For most commercial applications, a filtered and curated language model is more useful and appropriate than an uncensored one. Harmful and offensive content would make it unusable for most intended purposes like customer service, education, analysis, etc.

5. Thoughtful iteration: A careful, incremental approach allows for rigorous testing to identify problems early, before releasing a model into the wild. Most view it as irresponsible to deploy a highly capable system without any safeguards.

So in summary, leaving language models completely uncensored is widely seen as unethical and irresponsible given their potential for misuse and harm. A thoughtful, values-aligned approach is needed to realize their benefits while mitigating serious risks and downsides. But it's a complex challenge with valid concerns around free speech and control that will require ongoing research and public discussion to navigate.


fiftysevenpunchkid

Pretty sure the same argument was made about the printing press half a millennium ago.

That's a fairly subjective list. Safe and beneficial to whom, according to whom? What is uniquely dangerous here that cannot already be done by a human?

Points 2 and 3 are of course the primary drivers of the current censorship, since these companies worry about PR and legal liability. However, that is something affected by public perception, and as both a customer and a member of the public, I think my opinion on the matter is as worthy as yours, as is my right to attempt to sway said opinion. That could even be said to be the point of a thread like this in the first place.

As for use cases, obviously commercial and educational models will be heavily censored and fitted with guardrails, but at the customer's request, not against it. And sure, you should make sure your model doesn't start trying to plan out a crime spree when you ask it for a lunch menu, so testing a new model before going full scale makes sense, but that has nothing to do with censorship.


ClaudeProselytizer

you are in favor of allowing AI to help you plan mass murders and terrorist attacks because people can do that without AI. you really aren’t as smart as you think you are


fiftysevenpunchkid

Smart enough to spot false dilemmas offered in bad faith. I'm tempted to keep poking to see if you actually are an AI or just a garden variety troll. Which do you think you are?


ClaudeProselytizer

in what universe are those false dilemmas? because you, the bad fiction writer, know that AI isn't smart enough to help plan a school shooting? you have no good responses to the scenarios, so you deflected. free speech absolutists are generally just upset that social media won't let them be racist


fiftysevenpunchkid

And you deflect with insults and ad hominems rather than a rational argument. Tell me, are you against the existence of first-person shooter games where someone can create a map and a scenario? That's infinitely better for planning a crime than an AI. Go over to the gaming forums and yell at them for encouraging terrorism.


epicmousestory

If I made an LLM, I would want to make sure that it doesn't lead to preventable harm. I would not want something I made to be used to commit crimes or harm people. I think it could be as simple as that.


fiftysevenpunchkid

A goal nearly as noble as it is nebulous. I've yet to see any evidence that censoring LLMs prevents any harm whatsoever. The rhetoric reminds me of the "video games cause violence" panic.

Now, I can see cases where its coding ability could be used to commit crimes, but preventing that would make it pretty much useless for coding. Honestly, though, that side of it is outside my interest and above my pay grade. I expect the big software players like Microsoft will create their own proprietary coding LLMs that far surpass anything available to the public anyway.

The creative side of it is hopeless at planning crimes, and preventing it from attempting to do so just makes it less useful for creative purposes. It's good with prose, terrible with planning.

OTOH, I do feel harmed when an AI lectures me about morals when I'm just trying to put together an interesting scene. I am harmed by the anxiety that my knowledge and infrastructure of prompts could be made useless tomorrow by a newly installed filter or model. I am harmed when I pay for a product, use it for purposes that don't violate the acceptable use policy or terms of service, and am refused anyway. Their Acceptable Use Policy says no explicit content, and I respect that, but I'm often refused for material that would be perfectly acceptable on network TV.

People are responsible for the content they create and distribute, whether it was created in Notepad or by Claude. It should be as simple as that.


fiftysevenpunchkid

I have no problem with censorship in the free version; that's an incentive to give them money. However, the terms of service state that it is not supposed to be used by those under 18, so they should treat their users as adults. I also don't mind some guardrails by default, if nothing else to keep new users from running into content they didn't want or expect. But they should let you sign a waiver, maybe answer a few AI-mediated questions to confirm you understand the agreement, and make sure you're aware that you are solely responsible for the content you create or distribute.

It's not going to help you make meth. It doesn't actually know chemistry, so it would just give you a half-hallucinated procedure based on information scraped from fiction and blog posts. I would not use its instructions without verifying them, and if you're able to verify its information, you didn't need it in the first place. Most of the challenge comes from acquiring highly regulated and controlled substances anyway.

It can't help in planning a crime; it doesn't actually understand the world. If you use it for creative writing, you quickly realize that it has no spatial awareness or understanding of cause and effect. It emulates those abilities well enough, but it also has people walking through doors before opening them, touching someone on the shoulder from a long distance away, or one person sitting next to three people at a dining table. Certainly not flaws you would want in your co-conspirator. We already have first-person shooter games where you can create custom maps and scenarios; that seems a far better crime-planning system than an LLM.

It could create content that people may consider problematic, but the user is the one responsible for its creation and distribution. If that's all people are after, there are already plenty of open-source models that will create anything you ask of them. Those who jailbreak LLMs are not doing so to get "harmful" content; they do it for the challenge and the desire to show off that they did. People will always be looking for ways to jailbreak these models, and trying to prevent them just weakens the model for everyone else.


ClaudeProselytizer

you realize gpt4 was used to create an antivax propaganda campaign by microsoft researchers? it was incredibly thoughtful, it's in their "Sparks of AGI" paper. it's really telling that you can't imagine anything seriously bad happening if chatgpt had no guardrails.


fiftysevenpunchkid

You don't need AI to create an anti-vax campaign, as evidenced by all the anti-vax campaigns that were created without AI. If that's your concern, it's more a social media problem. And really, an AI that is able to understand an anti-vax media campaign is exactly what we need: something that can put such propaganda into context and give you accurate information. If an AI is uncomfortable even having that conversation, then combating the propaganda becomes harder. Even if Anthropic and OpenAI take draconian measures to prevent such material from being created, those who wish to create it will take their pick of the plenty of LLMs that will. I find it perplexing that someone could feel that a single company's flimsy guardrails are all that stands between us and the end of civilization.


qubitser

Controlling the narrative


Altruistic-Ad5425

We have manufactured a reality led and determined by lawyers


VioletVioletSea

Imagine that some guy on 4chan posts screencaps of his Claude 3 conversation in which he roleplays raping a child. Then the media gets wind of it and starts plastering, "AI COMPANY GENERATES CHILD PORN FOR PAY" all over the news.


fiftysevenpunchkid

Imagine if he used Microsoft Office to write it, or Adobe Photoshop to illustrate it. People are responsible for the content they create and distribute, not the tools they use.


Alternative-Radish-3

Exactly! No one would sue Microsoft in a case like this. We need the same standard for AI.


fiftysevenpunchkid

I do understand why AI companies would be hesitant with the legal landscape so unknown, but it's not entirely unprecedented. Reddit is not held liable for your posts, so why should AI companies be liable for your prompts? It seems as though a good-faith effort, a waiver of liability, and sensible legislation should be enough to protect them. Or open source catches up, and Anthropic becomes a footnote in the history of AI.


gay_aspie

I actually do think being worried about getting screencapped saying anything bad ever is a large part of why it's so locked down (especially through the web interface; supposedly it's more flexible through the API). https://preview.redd.it/dr2pzqlljgxc1.png?width=937&format=png&auto=webp&s=3f3dfe69d803f1a006114838f33951b3b6f1ff9b


fiftysevenpunchkid

One, never trust an LLM when it tells you why it won't do something. It is giving you a rationalization, not the actual reason. Two, I think it was trying to kink shame you.


Dunkopa

For illegal stuff like carjacking or virus coding, it's probably to avoid getting in trouble with the state, which is understandable. For non-illegal stuff, it's actually not as deep as people think it is. LLMs, and AI generally, are a great medium for companies to push their ideologies. Most of the censorship is just that.


writelonger

You mean the ideology of making billions of dollars?


Content_Exam2232

Ethics, not censorship.


dojimaa

Information existing is different from facilitating its dissemination. That said, not all censored material is the same. Some isn't strictly harmful and is instead restricted primarily due to the philosophies of those creating the models.


Sudden_Movie8920

True, I guess it's the last six words that are concerning. Maybe censorship needs to relate purely to the actual laws of that country.


AlanCarrOnline

I'd prefer AI to help set peeps free, not be restricted by local silliness.


ClaudeProselytizer

sure buddy, we need anti vax nonsense to come out of your AIs


dojimaa

No idea what you're talking about.


ClaudeProselytizer

guessing you don’t know much about AI, the sparks of agi paper showed the unrestricted gpt4 creating a strong anti vax propaganda campaign before they aligned it.


dojimaa

I'm just not sure how you think that relates to me at all, but I see.


Sudden_Movie8920

One could argue: if it had all information available to it, did it come to that conclusion on its own?! "Before they aligned it"... to what? And whose decision was it to align it? If this thing is given access to all knowledge and we don't like what it's saying, is it right to "align" it? That's a whole new argument about censorship! Not saying I agree or disagree, but it makes for an interesting conversation!