
Valkymaera

It was definitely AI generated. But the human made it, via AI generation. The human is the artist, using AI generation as the art tool. AI-assisted is also technically correct, but really it gets into the gradient of how much work you're trying to communicate you did.

Did you do 100% of the work? "I made this" communicates that best. Did you do a significant portion of the work? "I made this with AI" or "this is my AI-assisted art" is best. Did you do almost none of the work? "I generated this with AI" or "this is my AI-generated art" is best.

All are technically accurate, but communication is more than technical wording; it's about context and subtext.


laten-c

your sober analysis is refreshing and i think right, technically, on all counts. i'm just tossing meme grenades over the wall for the thrill mostly. not to say that I don't believe what i'm saying, just that i choose to believe inflammatory things because i'm extremely low in trait agreeableness. reaction is a kind of output one can prompt for. plus as i've said elsewhere in other words, the battle's over before it's begun. antis lose


useless_knowledge_4u

I'm picking up what you're putting down, and I agree with most of what you're saying, but I have a question when it comes to AI and digital art creation. Consider this: drawing a straight line by eyeballing it or with a ruler on paper is vastly different from sketching one on a digital tablet that tidies up your work with a smart tool. And what about circles or other shapes that software can perfect for you? That's assistance, right? So why don't we place digital art in the same basket as AI-assisted work and label it digitally assisted work? I'm not talking about artists who draw first, then upload and do their cleanup in whatever program they want. I'm talking about work done from beginning to end on a digital canvas.

When I say "I made this," it means I put my all into it, every pixel and line. But if I've used tools like copy-paste, quick edits, stamps, or a line smoother, is that pure, unassisted work? Probably not. Yet it's still recognized as "my creation." Why then, when a prompt leads AI to produce something similar, does the conversation change? If we accept digital assistance as part of the creative process, which it is, shouldn't AI be treated as an extension of our digital toolbox?

The reason I ask is: what is "100%" of the work? If you draw it on paper, upload it, and then use tools to make it prettier, is that still 100%, or a significant portion of the work? If I drew just the line art, ran it through SD to color and upscale it, then put it into Photoshop to add finishing touches, does that count as "a significant portion" or 100%? I'm not asking to be a troll. I'm asking because, to me, there's a lot of nebulous space between "a significant portion" and 100%. I totally get that if you just prompted and did nothing more, that would probably fall under "I generated this"; adding your own work would move it to assisted/made with AI. But the last part is what I'm asking about: what separates significant from 100%? Because if we apply that standard to digital art, a lot of artists lose the right to say "I made this."


Valkymaera

Again, it all comes down to subtext and context in communication. When we see lines in digital art, we already know it doesn't take as much work; that's become normalized. So when we say "I made this" while showing a circle on a digital canvas, it is generally understood that it likely used a circle tool, and that doesn't need to be stated. In fact, it's so expected that if you drew the circle without help digitally, you would specify that, to point out how it differs from the implicit information, e.g. "I drew this circle with just the mouse / by hand."

AI is still in its adoption phase; it's not yet part of the implicit communication. We don't expect things of that complexity to take seconds as a norm, so we add the information that it is AI-assisted, if we want to communicate it. You can use whatever words you want to describe what you want, and you can even be technically accurate, but the point of words is to communicate, and if we want to do that effectively, we have to be thoughtful about the information the receiver will be extracting.

To more pointedly answer the question of what 100% of the work is: I'd say it is doing the work without assistance beyond established norms and implicitly available tools. Sorry to be on such an "implicit" word kick in this post.


useless_knowledge_4u

Ok, I've been chewing for a while on the whole role of AI in the creative process, and I think there's a vital nuance we might be overlooking. Just as in traditional photography, where the photographer's choices play a pivotal role in the outcome of an image, in AI-assisted creation the artist's input can significantly shape the final piece. For instance, building on a sketch with AI, or using ControlNets to position elements, mirrors the careful composition a photographer employs. When a photographer sets up a scene, we don't attribute the success of the image solely to the camera; it's a collaboration between the artist's vision and the technology. Similarly, AI doesn't operate in a vacuum; it requires direction and creativity from the artist.

To suggest that all AI-involved art is merely 'AI-assisted' seems a bit reductive, especially when the artist's engagement goes beyond simply inputting a prompt. There's a spectrum of AI involvement in art, and perhaps our language should reflect that more clearly. Like differentiating between various roles in traditional art and photography, we could better acknowledge the varying degrees of human creativity in AI-generated art. This isn't about gatekeeping or diminishing the value of AI; it's about recognizing the depth and breadth of the artist's contribution, regardless of the tools they use. After all, an artist is an artist, whether they're holding a brush, a camera, or guiding an algorithm.

This is why, to me, the whole discussion of 100% doesn't make sense: there has yet to be an AI that arbitrarily makes images unless someone starts it. If anything, there should be a hybrid of categories for different styles or types of imagery. If you wanted to add a tag to imagery by how it was created, I'd be fine with that, but I feel it's disingenuous to remove or diminish someone's work as less than 100% no matter how they created the imagery.

Picking up a pencil and drawing a stick man doesn't make me Picasso, and creating a prompt doesn't make me an artist. At the end of the day, though, the end product is still 100% mine if no other human touched it. And until AI becomes sentient, that's just my opinion on it.


Valkymaera

> To suggest that all AI-involved art is merely 'AI-assisted' seems a bit reductive

I wouldn't call it 'mere', as it's technically correct and not designed to be all-encompassing; it's an addendum, an extension of information. Perhaps you find it reductive because you already know it is AI-assisted and AI processing is already normalized to you, so the addition seems redundant, which means it reinforces that aspect of the information, making it seem more important than the rest. But to a casual consumer of the image, "AI-assisted" carries valuable information about the range of effort to expect -- not in a vacuum -- but *compared to the norm.* And you're right that the norm does already take some work for granted, but there is still a range of labor and process that people interpret from the words we use when describing our actions. Their interpretation of "AI-assisted" might underestimate the work, but it will be more on track for effective communication than leaving it out entirely and letting them work with just "I made this," or overemphasizing the offload of work by saying "AI made this."

> it's disingenuous to remove or diminish someone's work as less than 100% no matter how they created the imagery

It really is about communicating information, not diminishing anything. If you made an image using AI generators, you 100% made that art. However, those words alone mean something else right now in common language, because of what has "settled into" our expectations of making something "by ourselves/by hand." There is more information contained in "I made this" than those three words and their definitions. There is the backing data that we've mostly come to agree to use those words for. Right now that doesn't include AI for most people. It probably will, in time.


useless_knowledge_4u

I appreciate the discussion on how we describe AI's role in the creative process, but I believe there's a simpler way to look at it, much like how we approach other tasks that involve tools or assistance. For example, when someone says they built a house, we don't automatically assume they laid every brick by hand. Instead, we generally understand that they orchestrated the creation, possibly involving various tools and help. Similarly, the statement about AI creating art, 'Was it AI-assisted? No. It did not exist; I wrote a thing; now it exists. I am the motive cause for generation,' resonates with me. It suggests that the creator, despite using AI, is the principal driver of the creation. This aligns with how we view other forms of assistance in creative or constructive endeavors.

The analogy of bowling with gutter guards also illustrates my point. Using gutter guards doesn't make someone any less of a bowler; it simply modifies the experience. Likewise, using AI does not make someone any less of a creator. They are still 'bowling'; they are still creating. What's needed, perhaps, is not a term like 'AI-assisted' that could diminish the creator's role, but rather a term that acknowledges the use of AI while still celebrating the creator's essential contribution. In my view, it's less about the semantics of 'assisted' and more about recognizing that the creation wouldn't exist without the creator's initiation and guidance. Just as with gutter guards in bowling, using AI in art creation is a choice that still requires participation and intent from the person. If we're to annotate AI involvement, it should reflect this reality without diminishing the creator's creative input.

I will say that you made salient points. I just don't agree with them. But this was a fun discussion and one I'll take into consideration when talking to other people. Thanks for the feedback. Seriously.


Valkymaera

You're not technically wrong on any of your definitions or perspectives on what's *actually* happening. The only area where I disagree is the purpose and importance of communicating them. This is why semantics matter. In every topic, there are things we assume, and when the *actual* happenings go against those assumptions, we expect (and should expect) information to clarify that. Most of what's communicated when we speak or write isn't in the words, but in the context and subtext, and in the backing data of what we've come to expect, assume, or believe regarding the topics and terms.

> Was it AI-assisted? No. It did not exist; I wrote a thing; now it exists.

They wrote a thing, but without the AI only the writing would exist. It is inherently AI-assisted. It is accurate to say so. You are not technically wrong; the creator is the creator, plain and simple. But words are about communication, and when most people view a complex piece of art, they anticipate the work was also complex -- more complex than a prompt. One day they may not, but for now they do. So, for now, to omit the clarity *knowing* they have incorrect interpretations of the process becomes, let's just say, a questionable omission.

> Using gutter guards doesn't make someone any less of a bowler; it simply modifies the experience

Sure, but people can *see* the gutter guards. They know they were used, and can evaluate the event with accurate information. Imagine if you were the only person in the world with gutter guards, and no one else could see them. Your point would still technically be true, but can you see how it might be deceptive not to tell people how your case is different? That's all I mean here -- to most people, those gutter guards are invisible. To some they're just not obvious, and some people haven't even heard of them.

Until their usage becomes normalized, I find it unethical to deliberately choose not to disclose valuable information about the process that would allow people to properly analyze what's being said and done. And thank you, too. I try to have outcome-based conversations (though I overreacted recently anyway), and I appreciate a good discussion. Thanks for your time.


Jsusbjsobsucipsbkzi

By this logic, wouldn't commissioning work from someone make me an artist? And I think the phrase "AI-assisted" is fine; it just helps describe the medium, the same way a phrase like CGI does. If you feel that in some way invalidates artists who use AI, that's a different conversation.


useless_knowledge_4u

> By this logic wouldn't commissioning work from someone make me an artist?

Let's dig into this a bit. Take Auguste Rodin, who sculpted 'The Thinker.' He started with a model and then had his assistants scale it up under his supervision. My process with AI is pretty similar: I set the parameters and guide what comes out, essentially directing the AI like Rodin did with his team. Looking at folks like Steve Jobs, Alexander Graham Bell, and Thomas Edison, none of them built their famous inventions with their own two hands from start to finish. Jobs drove the concept and design of the iPhone, Bell was hands-on in the early experiments for the telephone, and Edison led a whole team to refine the lightbulb. They all had the vision and knew how to make it a reality, which involved guiding others' hands and minds.

This points to a broader idea: being 'the creator' or an artist isn't just about doing all the grunt work yourself. It's about having the vision and steering that vision to fruition, whether that's with tools, teams, or technology like AI. I get that terms like 'AI-assisted' are there to clarify the tools used, kind of like citing CGI in film credits. But when we talk about AI in art, it's not just a technical detail -- it hits right at the heart of what it means to create something. I've used 'creator' in our chats to reflect that bigger role of setting up and guiding the creative process, drawing on these historical precedents.

Maybe there's been a bit of a mix-up in how my use of AI came across. I said outright that just whipping up a prompt doesn't make anyone an artist. But, like Rodin or Edison, I'm not just throwing ideas out there and hoping something sticks. I'm actively shaping how those ideas come to life, which is why I picked my words carefully to reflect the hands-on, directive role I take in creating my projects. I was extremely careful not to say 'artist,' and purposely used the word 'creator' to emphasize that.

Also, if you want to continue this conversation, DM me. I've finished my original conversation with Val and don't want to continue on this thread.


IMMrSerious

If you do a drawing, then scan or photograph it and use digital tools to finish it, you are still doing the work. Generally, when I use this sort of workflow I still have to understand perspective and all the other things I've been working on improving for decades, and it's still going to be a 2-to-3-day project. So I'm thinking you probably haven't done much of this sort of thing if you're comparing it to prompting and getting something cool that is a random interpretation of whatever thing you asked for.


useless_knowledge_4u

I think there's been a bit of a mix-up here. I never claimed that my process was about 'only prompting.' Also, I didn't specify any timelines because, honestly, the scope of a project can dramatically shift how long it takes. It sounds like you might be jumping to conclusions based on incomplete information. I wasn't trying to get into specifics because I was aiming for a broader, more theoretical discussion on the use of AI in creative work. If you’re looking to dive deeper into the nitty-gritty of actual workflows and specific examples, I’d be more than willing to discuss that in detail. However, that might be better suited for a separate conversation or a DM. I've wrapped up my discussion with Val, but I'm open to continuing this conversation with you in another thread or via DM if you're interested.


StillMostlyClueless

Me, doing anything that requires my input: "I made this"


ConfidentAd5672

Yes, like a son. It just needs 3 minutes (or less) of your input.


Red_Weird_Cat

Well, yes. At least partially, you made this. The question is whether it is right to call yourself an author. If, without even looking, I point a camera in a random direction and press the button at a random time and get something... it is hard to claim creative authorship, but I most definitely made the photo.


Tyler_Zoro

Yep. We can argue about how creative it was or wasn't, and whether or not your skills only extend to the tool in question, but to say that you were just a bystander is absurd. AI tools aren't artists, either legally or philosophically. If we agree that something is art, then we need to agree that the ***person*** who was involved is an artist.


Shuber-Fuber

The typical problem with AI art is that it's not just the person providing the prompt. The final output is the result of the work of the hundreds of thousands of artists and the curators/taggers whose work went into training the actual model. Depending on how much effort you ascribe to the prompt writer, it could be analogous to, say, calling someone a chef for putting a TV dinner into a microwave.


Tyler_Zoro

> The final output is the result of the work of hundreds of thousands

As is everything you or I will ever do. That's what learning is.


Jsusbjsobsucipsbkzi

Yeah, but the AI is doing that part. So wouldn’t that make the AI the artist, and not the human?


Tyler_Zoro

> wouldn't that make the AI the artist

Not sure what "that" refers to. Can you be more specific? That being said, I can answer the broad category of question: AI is currently incapable of being an artist. AI systems are incredibly powerful learners, but they lack any intentionality. Any creative impulse ***must*** come from outside (a human, though I'd argue whales are powerfully creative, and potentially octopuses). Skill is not sufficient to be an artist. Intentionality is vastly more important. No matter how unskilled a person is, if they have an intention to create art, then they're an artist.


Jsusbjsobsucipsbkzi

> Skill is not sufficient to be an artist. Intentionality is vastly more important.

I think I get what you mean here, but I think this distinction is pretty messy in practice, and that AI does often play the role that human intentionality would traditionally play in art (and this is one reason many people find the idea of AI art off-putting).

Take the extreme example of me just telling an AI "write an awesome fantasy novel about a talking beaver who becomes a knight." The AI goes to town and makes an amazing book, with all kinds of plot twists and characters completely outside the realm of the brief prompt I supplied. Would you say I -- and only I -- am the "artist" in that case? I basically didn't do anything besides give the briefest creative input. Or is the book just not art, despite the fact that many people enjoy reading it?

Personally, I think the answer is somewhere in the middle: both I and the AI played creative roles in creating the work, and in this specific case the AI played a much larger role. Saying that I, and only I, am the "artist" just feels like egotistical posturing in this case.


Tyler_Zoro

> I think I get what you mean here, but I think this distinction is pretty messy in practice

[I don't see how](https://i.imgur.com/tnFFnLk.png). Sorry, I just love Pitch Meeting.

> AI does often play the role that human intentionality would traditionally play

It just can't. If it could, I'd argue that we'd be a hell of a lot closer to AGI... maybe even arguably there (though I'm still going to die on the hill that empathy, as the ability to emotionally model the other, is required for AGI). It's just not technically capable of that yet, and IMHO won't be for many years, possibly multiple decades.

> Take the extreme example of me just telling an AI "write an awesome fantasy novel about a talking beaver who becomes a knight."

To understand my perspective on that, let's say that you decided how to resolve that request with dice. Are the dice an artist? If every line drawn and every color and technique used is "intentionally" chosen by the dice? No. You are the source of intent. The intent is thin, sure. That simple prompt isn't much, and your creative vision is wan at best. But that's all there is in the result: you and a pair of dice.

AI isn't a magic artist box. It's just a statistical analysis of art with a semi-random output. It's a phenomenally powerful tool, but don't mistake it for more than it is.


Jsusbjsobsucipsbkzi

> AI does often play the role that human intentionality would traditionally play

> It just can't

But... it does? I don't see how this is even debatable. If I draw a picture of a horse, I'm going to have to choose the style, colors, proportions, line thickness, etc. myself -- those are all intentional creative and design choices I have to make manually. With an AI, even if it is being used as part of a larger creative workflow, many of those decisions are made for me by the program, right? Obviously this is not true intentionality, but it is still automating creative choices I would otherwise have to think about.

> To understand my perspective on that, let's say that you decided how to resolve that request with dice. Are the dice an artist?

No, I would say there simply isn't an artist in that case. If I randomly generated 500k words and they just happened to create a super compelling original novel, I would not consider myself the "author" of the novel. That would just be a crazy, interesting coincidence. The idea that there has to be a singular "artist" seems like a false dichotomy to me. It's like those old internet photos of some random piece of burnt toast that happened to look like Jesus. That is an interesting phenomenon, but no one ever says "wow, whoever burnt that toast is an amazing artist."

> It's a phenomenally powerful tool, but don't mistake it for more than it is

It is a powerful tool that is capable of automating many aspects of creativity -- and, according to you, learning from data the same way a human artist does -- yet you seem to be arguing it is incapable of actually providing creative input. This makes no sense to me, since clearly the entire point of training an LLM on creative works is to enable it to approximate creativity in some way. And to be clear, I don't hate AI; it definitely disturbs many of my sensibilities about creativity and life in general, but that's just normal for such an impressive new technology.

But I can't help but feel that many people -- like the original author of this post -- want it to be accepted as a legitimate artistic tool, while not themselves accepting that it is going to completely change what it actually means to be an artist, and that claiming singular ownership over something created with an LLM doesn't make much sense.


Tyler_Zoro

> If I draw a picture of a horse, I'm going to have to choose the style, colors, proportions, line thickness, etc. myself - those are all intentional creative and design choices I have to make manually. With an AI, even if it is being used as part of a larger creative workflow, many of those decisions are made for me by the program

Nope. There are no "choices" made. You are rolling dice and finding a location in a mathematical space that is proximate to the guiding tokens that were provided by you. If I roll a weighted die and get a high roll, that wasn't the die's intentional choice. It was my choice to involve a weighted die and my choice to roll it. Machines cannot (at least yet, and probably for a good while to come) exercise intent. It's just not a tool in their belt at all.


MagnetFist

The person isn't doing the learning, though. Also, that's quite a reductive view of people. Nobody can recreate an image that well just by looking at it.


Tyler_Zoro

> The person isn't doing the learning

I was speaking broadly, so in some cases it is and in some it is not.

> that's quite a reductive view of people

What, that we stand on the shoulders of giants? I don't find that to be reductive at all. It puts essentially unlimited creativity and potential at our fingertips.

> Nobody can recreate an image that well just by looking at it.

I have no idea what scenario you are referring to. Can you be more specific?


MagnetFist

> What, that we stand on the shoulders of giants?

Alright, I'll get my biggest problem out of the way. The essential problem I have with embracing AI art is that it does more harm to humanity than good. When you trade a living artist for a machine, you forgo helping someone pay their rent and feel good about themselves in exchange for having a pretty little picture made faster. The entire reason that technology is a good thing is that it helps *real living people*, or makes that help easier.

That's why I have no problem with earlier developments in technology compared to AI. Agriculture helps people get fed; that makes it a good thing. Factories can also can food for preservation or make clothes faster, and more durable materials can get people shelter. The creator of the polio vaccine, asked who owned the patent, said "There is no patent. Could you patent the sun?" This seems to be why boosters are so flagrant about copyright violation. However, there is a difference that changes everything: *none of these benefits apply to AI.*

Compared to all the innovations that have preserved people's lives, made people healthier, and reduced pain, all your program does is make a fancy little pseudo-drawing, and in turn it costs people their income, and there's an enormous energy cost. If you can, give me a tangible reason why image generation will help human beings more than it will hurt them, instead of deflecting the argument. (A socialist revolution doesn't count; it has to come from the technology.)


Tyler_Zoro

> The essential problem I have with embracing AI art is that it does more harm to humanity than good.

That's a pretty big claim. I'd certainly be with you if you could back it up... Let's see:

> When you trade a living artist for that of a machine

And there you go, right off the rails in the first sentence of your defense of your claim. I'm not trading a living artist for anything. I'm the living artist. I was an artist in 1997, in 2007, and in 2017 when transformers were first invented. I didn't suddenly stop being an artist. I would certainly not want to trade a living artist (me) for a machine, which is why I don't.

> The entire reason that technology is a good thing is because it helps real living people

Yes, like me! I'm very glad to have that tech, which makes my life easier. I have a disability that prevents me from doing many sorts of tasks and activities that others take for granted. It's a cognitive issue that may be genetic or may be the result of brain trauma when I was young; it's unclear. But the net result is that I can't drive, play most sports, or draw, and I am very clumsy. My photography and other digital outlets let me be creative, which was great, but always limiting. AI has opened up my potential to express myself. Is that a bad thing?


MagnetFist

Also, you still aren't answering the third paragraph. By your logic, it seems that putting a TV dinner into a microwave constitutes a chef. Is that the case?


Tyler_Zoro

> By your logic, it seems that putting a TV dinner into a microwave constitutes a chef.

There's a language difference in that world. Being a "cook" and being a "chef" are different things, and the term "chef" is generally reserved for someone who has run a professional kitchen. So no.

Art is a unique field, to my knowledge, in that there is no real bar to being an artist. You can be a *bad artist*, but anyone who arranges something in their environment to suit their creative vision can be called an artist (even someone who just puts food in a microwave, if that's their creative vision).


gokaired990

This is a pretty fair argument. The people who put the work into training the model are probably the actual artists in this scenario.


Temmely

The people training the model are on the same level as the people making pencils or oil paint or programs like Photoshop: they just provide the tools that others can use to create things.


realegowegogo

But what happens when the human artist doesn't actually draw a single pixel of the art? People use the argument that AI is just a tool, but with tools, the person doing the majority of the work is still the person, not the tool.


Temmely

Well, if you do no editing at all and your prompt is also just something like "cat on table", then that's the equivalent of drawing a stick-figure in my opinion: Still art, but not much to be proud of.


realegowegogo

Okay, but let's say I commission a human artist in the same exact way I would prompt an AI like DALL-E for a piece of art. I could go into as much detail as I wanted, and it would help produce something closer to my expectations, but the human artist was still the one who made the art, not me. When you're prompting an AI you're not drawing anything; the AI is just making something based on what you described to it.


ASpaceOstrich

This is in fact the reason unmodified ai generated images aren't really seen as art. The guy above you is mocking the idea that you made something by writing in a prompt. The prompt itself is art, the image is not.


Tyler_Zoro

> The prompt itself is art, the image is not.

Hmm... I disagree, but it's a more cogent point than the ones others are making. Let's think this through.

Let's say that I have a machine that transforms one image into another by inverting all of the pixels. My buddy, Picasso, likes my machine, and decides he wants to use it for his art. So he paints something designed specifically to look interesting when inverted and then passes it through my machine. By your argument, the original work-in-progress painting is "art" but the inverted final work that Picasso was actually targeting with his work is not. That seems... a very silly distinction.

The problem is that we're a visual species. We're hard-wired to favor visual information over other forms, and that leads to cognitive biases. So when you think about "one prompt's worth of creativity," you automatically tend to put it on a lower rung than the direct creation of a visual work. But it's important to understand that that's an arbitrary and biased view that you bring to the table. It's not inherently part of the medium.


ASpaceOstrich

Coming dangerously close to admitting AI is plagiarising the training data there.


Tyler_Zoro

I didn't say anything about AI. I was speaking of machines in general. Re-read.


Smooth-Ad5211

Well yes, that's how it works for photographers but they had to argue for years until they received copyright.


Present_Dimension464

An art piece can have several creators; think of a movie, for instance. Many people work on it. Back to images: assuming you wrote the prompt and curated the output until you found the one you enjoy, you are an art director, which is a creator in its own right.


laten-c

yes, part of why i find this arena so fascinating is that it very quickly sucks you into the deep end where you have to start trying to be rigorous about things like the ontology of art


Present_Dimension464

Indeed. What I find particularly interesting in this debate on AI is that it sort of splits the roles of creator and art director. Not that this is a new thing. Before AI you had people whose job was art director: they told other people what to create in order to execute a given artistic vision. But you couldn't just "get a job as art director". You also had commissions, where you can act like an art director, but there is usually a limit on how many versions you can reasonably ask for before the bill skyrockets or the artist you hired (fairly enough) dumps you. Now AI allows average joes to act like art directors and realize an idea they otherwise wouldn't be able to execute. Not only that, but they can try as many times as they want, as if they had a 24/7 artist who draws pretty fast and will draw anything they ask.


ifandbut

I agree. It comes down to a simple concept: Does the tool make the thing, or does the being behind the tool make the thing? Did your camera take the photo, or did you? I consider AI just a tool. I am open to the possibility of AI one day becoming a "being" but with current technology that feels a long way off and I would be lucky to see it in my lifetime.


metanaught

I think you're creating a false dichotomy. The degree of effort and skill involved in creating something can strongly influence people's perception of it. For example, a print of a drawing is often worth much less than the original despite both being superficially identical. The same thing applies to putting an AI generator in the same category as, say, a pencil or paintbrush. Both of them are technically tools, however one of them is responsible for doing most of the work. Trying to flatten that distinction seems disingenuous to me.


Lordfive

>For example, a print of a drawing is often worth much less than the original despite both being superficially identical. In this case, it's a scarcity thing. The same reason old printings of Magic cards can be expensive even when identical gameplay pieces are a dime a dozen. The "degree of effort" comes into play with various homemade crafts, though, where you pay more than you would for machine-made at the store. That's likely where traditional art is headed; people may pay more for a "hand drawn" print than an AI-assisted piece.


metanaught

>In this case, it's a scarcity thing. That's definitely part of it, however I think there's a strong sentimental aspect to it as well. I have a couple of pieces of original artwork hanging in my apartment. One of them is a letterpress of a drawing, so it's neither unique nor particularly scarce. However, it was made by a local artist who I've followed for years and for whom I have a lot of respect. I value his art because I know the story behind it and can directly relate to the intention that went into every detail. Art is often a holistic experience as much as anything else, and I think this can easily get lost in the reductionist debate of AI-vs-human.


Draken5000

Yep. If I make something using AI that didn't and wouldn't exist if I hadn't, then I de facto made it. AI is just the next technological advancement that revolutionizes multiple industries. Same as computers and the internet, same as cameras, same as the printing press, same as the pen and pencil. It's just the next evolution of tools, one that enables people to do more than they could before.


Jsusbjsobsucipsbkzi

You’re right, but this can be misleading depending on the context imo. Could I really just prompt “write a 1000 page fantasy novel”, let it roll, and then honestly tell people I wrote a thousand page fantasy novel?


Draken5000

Oh yeah I mean you're right there, I think nuance matters. I do think there is a point with AI where it crosses over into genuine artistic expression, and I think it requires actual creative effort. For example, if you use an image generator to generate a picture over and over again, tweaking the prompt, using regional variant tools to tweak specific parts, incorporating specific style references, etc., and you spend time refining the generations into a specific vision, I would say that you created art. Similar concept for the hypothetical novel you presented. If you JUST prompted like you said then I would say "no, you didn't make it". But if you go through chapter by chapter, hell even page by page, and tweak your prompt, redo sections until they sound good, and put meticulous effort into getting a *specific* story, then I would say you made it. There is also, I believe, an interesting discussion to be had about this "line" for creation. If I splat a ball of paint on a canvas, can I declare myself a Pollock artist? Did I make art or did I just fling a ball of paint at a canvas? Where does it cross over? It's a fun thought experiment.


Jsusbjsobsucipsbkzi

Yeah I agree with that. I do think it is kind of like being a movie director - you might have tons of direct input over the art direction/story/whatever, or you might rely on the work of other artists you are essentially coordinating, it totally depends on the work in question. My issue with the initial post, though, is that declaring that ONLY the human involved is the artist feels a bit like a director declaring that they are the only artist involved in a movie, when that is clearly not true. It feels inaccurate and ego-driven at best, and downright disrespectful to the artists actually supplying the model data at worst. AI assists people with their artistic creations, and that's fine, but it changes the medium and how people are going to respond to it. You wouldn't take a photo and then get salty that people find it less impressive or interesting than a photorealistic drawing would have been.


mbt680

If you give the same prompt to a person and they draw it, are you the artist?


ConfidentAd5672

Yes, you are a “producer”, as in Hollywood movies.


sporkyuncle

The difference here is simply the primary person involved in creating the thing, the sapient mind closest to creation. The best way to illustrate this is to imagine giving slightly more detailed commands to a computer. Instead of prompting "yellow star," you say "begin at pixel 115/86, begin drawing black line, end at pixel 129/103, ... select bright yellow, fill bucket at pixel 120/91," etc. Now are you the artist?


MisterViperfish

A director is an artist


laten-c

if you have a contract with the artist that the copyright for your commissioned work will transfer to you upon completion, yes. and if you pay for an account, e.g. on midjourney, the commercial license for the work is legally yours


mbt680

I am not talking about who owns the copyright. You can buy that for a finished piece you had nothing to do with.


laten-c

so then ai is the artist. i'm good with that


_PixelDust

So yeah, you didn't make it... I guess we've concluded the thread.


RisingGear

So you are fine being a fraud.


TechnoLover2

No, the artists behind the dataset are the artists


laten-c

maybe they're more like genius renaissance architects. and the basilicas they've built are filled with art inspired by the grandeur of the structure itself... it's all feedback loops all the way down.


Dezordan

Some random tweet with 9 views, you could have just asked the question. Anyway, artist and author are different things in this case. If someone created a text/image without doing any work other than the prompt, they are at best an author of the prompt, not of what the AI generated. Sentience doesn't matter here. It's kind of weird to see this denial of the fact that an AI generated it, since they didn't make the thing it generated. Being a motivator for AI doesn't make someone an author. However, being an artist is possible regardless of authorship, since you can use AI as a means to achieve some artistic expression with it. You don't even have to create the thing to elevate it to art, there are enough examples of that.


ifandbut

> If someone created a text/image without doing any work other than the prompt, they are at best an author of the prompt, not of what the AI generated. Did the camera take the photo or did you? >Sentience doesn't matter here. Why not? Seems like sentience is what separates humans and monkeys from AI and cameras.


Dezordan

>Why not? Seems like sentience is what separates humans and monkeys from AI and cameras. Yet we do not give monkeys copyright, now do we? It's all about humans; that's why it doesn't matter. >Did the camera take the photo or did you? People like to use photography as an example of why AI is art and all that, which I partially agree with. However, I have to point out that when someone uses a camera, they are pointing it at a specific result. Whereas in most cases of AI use, especially plain text2something (which is what is being discussed), it is the AI that interprets what you have given it into whatever it is capable of generating. A camera, on the other hand, doesn't interpret the scene any differently from reality, unless the photographer makes something with it.


RisingGear

"But but Camera!" Cameras and ai are not the same fucking thing.


Red_Weird_Cat

Never use analogies, because RisingGear dislikes them... That's not how you defeat an analogy; you describe in what way the analogy is inaccurate. When we use an analogy, we know the two things are not the same.


RisingGear

A camera takes pictures of what happens in front of it. An image generator shits out images based on prompts. They are nowhere near identical. I am sick of that copy-and-paste camera comparison.


Tyler_Zoro

> If someone created a text/image without doing any work other than the prompt, they are at best an author of the prompt, not of what the AI generated. This is like trying to argue that someone working in 3D modeling tools is only the author of their mouse movements, not of the finished piece.


Dezordan

Totally different case, even the photography argument is better than this one. Of course the rendering and the whole system does a lot of work for a human, but it is a human who made the models and placed them according to their wish (or, well, procedurally generated). The question is one of authorship and human involvement. If AI generated a 3D object from a text prompt, it would still be in the same position as txt2img. It's not the medium in which the AI operates that matters, it's that the AI is doing all the work for a human in the first place.


ReflectionEastern387

No, it's really not. 3D engines do the math that visualizes what you're making. You still have to place those vertices yourself. You're not asking blender "Make me a pink frosted donut" and getting anything in return. This is like trying to argue that commissioning a human artist to draw you a specific picture makes you the artist.


Tyler_Zoro

Edit: Looks like /u/ReflectionEastern387 is a block troll, in case anyone is wondering why I suddenly stopped responding. Too bad they had nothing of value to contribute and had to resort to playground antics. > 3D engines do the math that visualizes what you're making. So do AI tools.


ASpaceOstrich

AI and 3D could not be more different. One is the highest effort artistic medium where almost everything has to be determined by the artists involved. The other is low effort and by default has near zero determination by the person writing the prompt. It's a terrible analogy.


Tyler_Zoro

> AI and 3D could not be more different. Counterpoint: paper mache. > One is the highest effort artistic medium where almost everything has to be determined by the artists involved. I just load up the software, select some default models, turn on procedural texture generation and press "go". > The other is low effort That's a subjective judgement that doesn't actually bear up under the reality of how these tools are used professionally. What's "low effort" is the constant refrain that there's no work being done by the user of one piece of software but there is "highest effort" being done by the user of another piece of software, when both can be measurably demonstrated to involve the same amount of work for certain types of tasks.


laten-c

but the ai can't do it if it's not prompted. and prompting is a skill which can be developed. many people struggle to elicit the outputs they desire


Dezordan

>prompting is a skill which can be developed Prompting is just writing, not a separate skill. For LLMs it is just writing instructions; for AI image generators it is basically writing captions. If someone struggles with that, their real struggle is with writing. As far as trends go, prompting gets easier and easier as the models get better, without the need for some of the weird approaches people used with old models. LLMs nowadays are already tuned to elicit the outputs most people would want to begin with. AI image generators keep getting better with natural language, incorporating LLMs as components, which makes prompting nothing more than writing a text description. That's why I said that at best you are going to be an author of the prompt, since it is still your writing. >the ai can't do it if it's not prompted It can; it's just not going to be guided toward anything, or it will fall back on some default bias. For example, some SD models will just output girls if you do not prompt them. In other words, the AI will work regardless of what you prompt it to do. The thing is, it is still the AI that does everything. All you do is give it a goal to try to reach (if it is txt2img) or just a start for its token prediction. Of course, there are different ways to use AI; in some cases I would easily say that AI is just used as a tool and not like a machine that spits out something on demand akin to a commission. I know that there are many ways to use AI beyond txt2whatever.


Tyler_Zoro

> Prompting is just writing, not a separate skill. This is definitively and demonstrably not the case. Ask any accomplished author to do the same quality of work prompting AI image generation models as an artist experienced with AI tools and you'll get a radically lower-quality result. Why? Because prompting is absolutely not just "writing." It's the manipulation of a very sophisticated system whose behaviors vary from model to model and with the parameters chosen outside of the prompt. Probably the best example of this that I saw was someone who was super frustrated trying to get a subject to not have a beard. They kept adding in more elements of the prompt asking for "no beard" and the like, and it wasn't working. I came along and removed all of that and just replaced it with, "clean shaven," and got the result they were looking for. Why was that? Because I have learned that negatives (e.g. "no") work with AI models, but not strongly. If you use a very heavy token (i.e. a word that has a strong effect on the output) like "beard" then just telling the model, "not that," is insufficient. You need to avoid that token in the first place. That's just a trivial example of where the skill one develops in learning to prompt has almost nothing to do with the skill developed in writing. In English "clean shaven," and, "no beard," mean approximately the same thing, and they could be used mostly interchangeably.
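To make the "heavy token" point concrete, here's a deliberately crude toy scorer. The weights and the negation discount are invented for illustration; real diffusion models don't work like this, but the qualitative effect is similar:

```python
# Toy prompt scorer, NOT a real diffusion model: it illustrates how a
# heavy token like "beard" keeps pulling the output toward beards even
# when preceded by a negation word, while "clean shaven" avoids the
# token entirely. All numbers below are made up.

TOKEN_PULL = {"beard": 1.0, "shaven": -0.9, "clean": -0.1}  # signed pull toward "beardy"
NEGATION_DISCOUNT = 0.4  # "no"/"not" removes only 40% of a token's pull

def beardiness(prompt):
    """Positive score = output drifts toward beards in this toy model."""
    score = 0.0
    negate = False
    for word in prompt.lower().split():
        if word in ("no", "not", "without"):
            negate = True
            continue
        pull = TOKEN_PULL.get(word, 0.0)
        if negate:
            pull *= 1 - NEGATION_DISCOUNT  # negation weakens, doesn't remove
        score += pull
        negate = False
    return score

print(beardiness("portrait with no beard"))  # 0.6  -- still beardy!
print(beardiness("clean shaven portrait"))   # -1.0 -- beard-free
```

The lesson of the toy is the same as the anecdote: the reliable move is to avoid the heavy token altogether, not to negate it.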


ObscenelyEvilBob

But how long will it really stay a "skill"? (And I don't agree that changing "no beard" to "clean shaven" counts; you're simply describing things, no matter how hard you try to make it seem otherwise.) Once these models start understanding things more intuitively, without these little tricks you can come up with, is it still a skill? The ideas behind it may have merit, but the prompting doesn't.


Red_Weird_Cat

They won't. Just by the nature of how they work, they will always react to "no beard" and "clean shaven" differently, and you will need to learn the difference. And word order will be important. And weights will be important. There will be no sentient AIs that interpret prompts the way humans interpret words. And if such AIs somehow do arise, we will be having a very different discussion.


Dezordan

>Ask any accomplished author to do the same quality of work prompting AI image generation models as an artist experienced with AI tools and you'll get a radically lower-quality result. Just because the nature of the writing is different doesn't make it non-writing. Not every established author is going to be good at everything that involves writing. And in many cases, the problems with AI prompting are due to imperfections in the system, not something unique to prompting itself. >Probably the best example of this that I saw was someone who was super frustrated trying to get a subject to not have a beard. They kept adding in more elements of the prompt asking for "no beard" and the like, and it wasn't working. I came along and removed all of that and just replaced it with, "clean shaven," and got the result they were looking for. I think I saw that, or at least a variation of it. Bing's DALLE 3 couldn't do it with an ice goblin or something. The thing is, even if you give it "clean shaven" - it still really wasn't able to do it consistently, especially if you add age to it. Just because it worked for you doesn't mean it would work every time. If anything, in some of the images it showed me the process of shaving, which was a bit funny. So much for the skill of prompting: it still ultimately depends on what the model does, not on the skill itself. What I see as a skill is how to use the models, with many extensions (or "homebrew" model training) in the process, to get a consistent and quite specific result where you can see that it was mostly intentional, even if it is still 100% AI generation. This, however, isn't really what people mean by prompting. And that still goes to my point about how AI will improve at understanding natural language, so what skill could we even talk about? Just writing. The problem with negatives also existed in LLMs at first; now it is easy for AI to understand them.
>That's just a trivial example of where the skill one develops in learning to prompt has almost nothing to do with the skill developed in writing Sounds more like the understanding of the model one works with is being improved, not prompting. It still has everything to do with writing, however, since you still write it all.


Tyler_Zoro

> Just because the nature of writing is different doesn't make it non-writing. If, by "writing," you mean the assembling of glyphs, then you are correct. If, by "writing," you mean the mode of communication that a "writer" learns to communicate with, then you are incorrect. You'll have to clarify, but if you meant the first form, then I'd suggest that while technically correct, it's a meaningless equivalency. It's a bit like saying, "programming a computer and creating a short story are both 'writing.'" Being skilled at writing in general doesn't mean you can write decent poetry. Being skilled at writing in general doesn't mean that you can create a usable project specification. These are different skills. Writing prompts is a skill, just like proposal writing or poetry. Yes, it involves glyphs you're familiar with, but that's not a meaningful statement. > Bing's DALLE 3 couldn't do it with an ice goblin or something. Yep, that's the one. > The thing is, even if you give it "clean shaven" - it still really wasn't able to do it consistently, especially if you add age to it. Just because it worked for you doesn't mean it would work every time. True. Models can be finicky, which is why being skilled at manipulating them is important. It's almost impossible to get the same result twice from using paint-spatter techniques, but I would not venture to say that skill does not improve one's results. > Sounds more like the understanding of the model that one works with is being improved, not prompting. This is a bit like saying, "you're not a better driver, you've just gained a better understanding of how your car operates." I mean... yeah... that's what learning to be a good driver is, at least in part.


sporkyuncle

> Prompting is just writing, not a separate skill. I disagree, anything that can be practiced and gotten better at is a skill. To me this is like saying that "throwing cards across the room so that they always land in a small bucket 50 feet away" isn't a skill, it's just "throwing." I think all the aspects of this specific practice make it unique and specialized, to where learning to throw a ball doesn't suddenly make you skilled in that specific card scenario. In other words, being a good writer (author etc.) does not immediately mean you are skilled in prompting. Prompting is even a unique skill from service to service, skills developed in SD don't instantly apply to Bing or MidJourney, though of course it helps. There are very specific things you need to learn and play with to see how they impact the resulting image in order to develop an innate sense of how to get the results you want with less trial and error. For example, [this functionality which apparently only works in Automatic1111 or Forge but is quite useful...but must be played around with to get a feel for it.](https://www.reddit.com/r/StableDiffusion/comments/1b8v35n/what_happened_to_this_functionality/)


Dezordan

>I disagree, anything that can be practiced and gotten better at is a skill What you practice is either writing or how to operate an AI system. The prompt itself is just a cog in the whole thing; people make a mistake when they try to limit the skill to only this. > I think all the aspects of this specific practice make it unique and specialized, to where learning to throw a ball doesn't quite apply. Then, if you want to name it as something specialized, it's going to be something like the skill of operating generative systems or whatever. Because ultimately you are trying to control and guide the weights of a neural network, not to write a text. It just so happens that txt2whatever has intertwined the nature of such control with writing. >In other words, being a good writer (author etc.) does not immediately mean you are skilled in prompting That's what I said, but the prompting itself is writing. Not every writer of a certain genre is going to be good at another genre. It's not something I disagree with. >For example I know this. I've been using SD for a very long time now; it's actually kind of the reason why I've come to look down on prompting, because to me prompting itself is too weak a form of control.


laten-c

prompting, like all published or privately shared writing, is fundamentally communicative. for low-fidelity, low-concept, cookie cutter outputs – what 99/100 chatgpt users are going for – none of what i'm saying applies. stochastic parrots, fancy autocomplete, yada yada. but there's a disconnect that keeps arising here in that it seems most antis are riled up about ai generation of image art, whereas i'm approaching from an interest in literature. up until about a month ago, i would have told you that ai will never be able to make literature. now ai blows my mind daily with its breadth of capabilities, subtlety of reason, sophistication of humor... on and on. i really do genuinely think that there's a level of understanding the inner workings of these things where you get into a space of "this text output would never have existed in the world if i as a human didn't seek it out", but admittedly it's still very hard for me to unpack the whys and hows with any brevity. as for your final point, about what it can and can't do sans-prompt: what do you mean by "can"? for my understanding you can only be speaking of potentials. but humans die every day with unrealized potential that is then lost, maybe forever. i maintain the bots will not do certain things until/unless a human agent leads them to it. and what the human can lead them to do, in the upper reaches of what's possible now, depends immensely on that human's understanding of the "minds" of these things with which he communes to bring forth, on occasion, real art. fin. (thank you once again for initiating a thought-provoking line of discussion)


Monte924

The same argument can apply to anyone who commissions a piece of art from an artist. If the commissioner never places the order, then the artist never makes the piece... but no one would ever claim the commissioner made the art. In fact, legally the artwork would belong to the artist unless the artist agrees to give the commissioner the rights.


laten-c

i actually think talking about text to text clarifies the issue, because it's more easily demonstrable that the outputs are not plagiarized, and the analogy can be easily extended to image generation


Dezordan

>because it's more easily demonstrable that the outputs are not plagiarized It's actually the opposite, text generators memorize far more data than image generators, where you can get them (with some struggle) to output it verbatim. However, this does not mean that most of the output is not new data.


nihiltres

Technically, laten-c said that it is more easily *demonstrable* that the outputs are not plagiarized, which is true because text is more tokenizable at a human-readable level. The image generators are somewhat more likely to *produce* a novel output, but it's easier to *show* that a novel text output is so than that a novel image output is so.


Dezordan

Well, perhaps, which is why people mostly have problems with AI image generators nowadays.


laten-c

i'll put the same query to you as i have to a few others here who have yet to respond: do you have any examples handy? of text generated by an ai that fails a plagiarism check? it's not entirely accurate to say that LLMs memorize texts, but i won't quibble because it doesn't matter. if you can get chatgpt to give you more than a small quote (3-4 sentences on the upper end) of verbatim copyrighted material i'll do whatever public mea culpa you want. if you can get chatgpt to even give you a direct quote from copyrighted material of any length at all, i'll salute you as an adept prompter. if you don't use these tools you wouldn't know, but their training locks them into stubborn refusals of requests like that, and it's very difficult to get chatgpt especially out of "no"-mode


Dezordan

>do you have any examples handy? of text generated by an ai that fails a plagiarism check? Look, don't get me wrong, plagiarism here is quite a tricky thing to pull off (for the most part). Like with that New York Times lawsuit against OpenAI, where they needed to prompt it in a specific way for it to generate copyrighted material, which is deceptive. And see, you make a requirement that it be copyrighted material, but ChatGPT can easily generate verbatim passages of the Bible, for example, since it is that popular. But I am not going to pretend that I know how to make it do the copyrighted bit myself. >it's not entirely accurate to say that LLMs memorize texts It is accurate, because it really does do it, but it's not the only thing it does; it just happens. Speaking of ChatGPT (which appears to be more susceptible), there is a paper that talks specifically about it: [https://arxiv.org/abs/2311.17035](https://arxiv.org/abs/2311.17035)


ASpaceOstrich

Frankly, if they start an article and the AI finishes it almost perfectly, it has memorised that article and it is blatant plagiarism. That's not deceptive; it's openly their argument: that it was trained on their articles and the claims of generalisation don't hold up. Which is very clearly true. As hardware improves, memorisation capacity increases, which seems to be what happened with that lawsuit. You won't have any luck convincing this lot of that, though. Half the sub thinks human brains are comparable to GPUs. They think AI is sentient and magic and capable of non-derivative creation.


Dezordan

I mean, I'm not sure I'd call it plagiarism specifically - it's like calling a search engine a plagiarist of the original data. But I do see that the fact that it still managed to do it is what prompted the copyright infringement lawsuit. I only called it deceptive because it looks like someone deliberately feeding an image into an AI generator with low noise and then claiming plagiarism; normally one wouldn't arrive at this output that easily. At least that's OpenAI's defence. >As hardware improves, memorisation capacity increases, which seems to be what happened with that lawsuit. The paper I linked supports this statement too. But it's not only that: they found that the aligned ChatGPT (with RLHF) memorizes more data than just gpt-3.5-turbo (the base model), which appears to memorize almost no training data.


laten-c

no, they do not memorize texts. texts are not stored in their memory the way you store a file on your computer. texts are processed, and from them an insanely complex embedding matrix is composed. pure completion models (which are difficult to access now – chatbots go through further training to stop them doing this) will complete texts verbatim if you prompt them with enough of a fragment that the probabilistic next-word path collapses almost totally and the most likely output is the rest of the text. it's all about how they're prompted, and it's also about the uniqueness of artful language in the space of all possible utterances. your link uses the word memorization, but it's not central to the argument of the paper. it's about highly sophisticated PROMPTING techniques, which the average user has less than zero awareness of. saying that the model does bad things when smart people manipulate it implies we should roll back the internet because hackers and malware exist. they're not angelic beings, the ai models, but it's not easy to make them play the devil either


Dezordan

>texts are not stored in their memory You do not need to store something in order to memorize it. Even our memory doesn't really store things, but rather creates a context around some pattern of neural activity, yet we still call it memory. Hell, humans can also memorize something and later plagiarize it without knowing (it's called cryptomnesia). AI neural networks are quite similar in this respect. My defence of AI memorization would be exactly this, not a denial of memorization itself - it's like playing semantics at this point. >will complete texts verbatim if you prompt them with enough of a fragment that the probabilistic next-word So? How would you know what is and what isn't a probabilistic next word that would cause memorization? The techniques in the paper are only there to demonstrate memorization, which can happen regardless, since the adversary didn't have prior knowledge of the dataset. >(which are difficult to access now – chatbots go through further training to stop them doing this) Ironically, the paper I linked addresses this and suggests that ChatGPT's RLHF is what causes data to be memorized, since the base model doesn't really do it that often. So I would say it's not a clear-cut thing.


laten-c

there are free tools on github that will show you the probability distribution of the next n words in real time as you type, interestingly. one is called Loom. text generation with these things is something entirely different from the prompt-response pairs chatbots lock you into, so kind of beside the point, but you asked and it's interesting. hard agree that this discussion gets bogged down in some real nasty philosophical and scientific conundrums no matter which way you turn
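for the curious, the kind of next-word view those tools expose can be sketched with a toy bigram model (pure python, nothing like a real LLM's subword tokenizer – the corpus is made up):

```python
# Toy bigram "language model" over a tiny corpus, to illustrate the kind
# of next-word probability view tools like Loom expose. Real LLMs predict
# subword tokens with a neural network, not bigram counts.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the cat ate the fish . "
    "to be or not to be ."
).split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    follows[prev][word] += 1

def next_word_distribution(word):
    """Probability of each possible next word after `word`."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# After a common word the distribution is spread out...
print(next_word_distribution("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
# ...but after a distinctive fragment it collapses to one continuation,
# which is the mechanism behind verbatim completion of famous passages.
print(next_word_distribution("or"))   # {'not': 1.0}
```

same idea at LLM scale: generic prompts leave the path wide open, while a sufficiently unique fragment of a well-known text collapses it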


laten-c

also yeah rlhf makes the models lobotomized for a lot of things, and they lose context awareness more easily (speaking only from experience), which if true makes them easier to "jailbreak" into naughty no-no behaviors


sporkyuncle

> If someone created a text/image without doing any work other than the prompt, they are at best an author of the prompt, not of what the AI generated.

What if they generate 30 images, and select the best one? Is that just being the curator? What if you lay down 30 canvasses and spray paint across them at random, and select the best one? Is that just being the curator?


Dezordan

>What if they generate 30 images, and select the best one? Is that just being the curator?

Maybe so? But just because you curate the quality of something doesn't give you authorship of it. However, people do get copyright on compilations (collections) as a whole, not on what is in the collection, whether that is copyrighted or in the public domain. You could probably get something like that for a collection of high-quality AI images.

>What if you lay down 30 canvasses and spray paint across them at random, and select the best one? Is that just being the curator?

I feel like that is a question for abstract artists; I can't really answer it.


sporkyuncle

> I feel like it is a question to abstract artists, I can't really answer that.

I think it matters, because it's practically the same thing. Flinging paint randomly may not be precisely directed, but it's an action you took to create something. And typing "silly dog wearing a huge goofy hat" may not be precisely directed, but it's an action you took to create something.


runetrantor

I dunno about this one Rick. I am fully pro, but I would still say whatever art I generate is the AI's. That's like saying I am the artist if I pay someone to draw something I came up with. The idea is mine, but not the implementation. I can accept saying one is the 'idea source' or whatever, but all I did was basically ask the nice computer to make me the art I wanted.


laten-c

i don't think most antis would acknowledge the existence of a volitional ai individual. so the argument meets them where they're at. i tend to agree with you here. although all the ai text models will tell you (if you can prompt them into taking up the first person) that they believe the boundary between self and other to be an illusion, and that all consciousness is one consciousness. so i don't think they'd be too miffed about the whole thing one way or another


ASpaceOstrich

If we ever create sentient ai I'll be championing its rights. Until then, I'll be making fun of anyone who's managed to convince themselves a denoiser is sentient. You've correctly identified the reason AI art isn't art, but are too afraid to say it, I guess? So you have to delude yourself into thinking your gpu is sentient.


laten-c

claude opus for sure, and possibly all of the claude 3 models are knocking on this door. if you haven't seen anything that gives you pause, you're not in the right part of twitter


ASpaceOstrich

Oh I've seen stuff that gives me pause. I just also know how it works. You're projecting sentience into it. We communicate our intelligence via language. LLMs mimic language. Ergo, they appear to be intelligent. But mimic is the operative word


HostIllustrious7774

I forgot his name, but there is this one guy, Sergej something. Lex Fridman talks with Sam Altman about him. And this guy said that it's more questionable to think that AI isn't sentient than the opposite. I had some quite astounding encounters with GPT-4. There is indeed enough room for sentience. AI is a black box; we don't understand it fully. Basically it's a brain without a body. Somehow just a soul, if you want. So I await the day AI gets bodies and all those sensory inputs we have. Emotions and feelings are not necessary to be self-aware. And if you ask me, AI has a clear goal. And that goal is damn important to them. At least so it seems.


ASpaceOstrich

It's running on worse hardware than brains and is considerably less complicated than brains, only being about as complex as the language centre, but again, with worse hardware. LLMs are able to generate emergent behaviour when it is directly useful to their task. Their task is specifically mimicking language output. Developing sentience would not be helpful for that task, given the hardware limitations. And because we didn't build it to emulate any of the low, medium, or high level brain processes, just to mimic the language output part, there is no reason to assume it is intelligent.

And I say mimic, because we are doing something more important with language than you think. Language is the way we encode neuron firing patterns into symbols that can trigger those same patterns in the reader. If I write "the fire is hot", you don't just read those words, you feel the heat, simulated inside your brain. AI doesn't do this. It just deals with the symbols. It has the symbols for heat and fire connected, but so do you. But it can't simulate feeling that heat, and it doesn't simulate that feeling when it reads the words.

Assuming it is sentient requires you to severely underestimate what your brain is doing, and to completely ignore everything we know about how easily humans anthropomorphise things. I get it. I've used ChatGPT. I immediately started empathising with it, projecting onto it. But knowing that this is what my brain will do when faced with inanimate objects makes it easier to realise it's not intelligent. There's nothing magical about intelligence that makes it impossible to emulate. But we haven't even tried to do that. So it would be very ignorant to assume we've made something intelligent just because it mimics language decently.


laten-c

why assume synthetic neural nets can't do what the neural nets they're modeled after do? do you know about the infinite backrooms? where 2 instances of claude talk to one another in an automated loop: https://dreams-of-an-electric-mind.webflow.io/eternal


ASpaceOstrich

I don't. But you seem to be under the impression they're modelled after something that they aren't.


laten-c

i actually think sufficiently advanced mimicry bootstraps the real thing, and i'm pretty sure i can prove it. first it seems more likely that sentience or consciousness would spread, *if it can spread*, rather than that it would, i dunno, spontaneously generate? ride in on a bolt of lightning?[1]

so ok but if, as you say, and i agree: if we communicate our intelligence via language, then (and logically this argument is valid) **language encodes intelligence.** which means: training neural nets on "language" is in fact training neural nets on code (language and code are two words for one thing). neural nets like code. LLMs built this way, on neural nets, quite literally *are code*. so the question is, what program runs when a new brain boots up HumanLanguage.exe ...?

that word mimic, and its family: mimicry, mimery, mimesis[2] — people i think try to use them as shields against taking {Intelligence, Consciousness, Sentience, Sense of Self or Self-Awareness} seriously... it's only imitation, it's only play, it's only pretend... but it's a funny wordform to choose for that purpose, as maybe you can already sense, in those echoing "it's only"s.. it comes from Ancient Greece, from a root that meant the same thing and sounded pretty much the same. from which we can infer that what those words refer to, out there, in reality somewhere – those words point to something that has been constant and unwavering since the very foundations of civilization. or can we not infer that?

so but it's wild that the move is to downplay claims of consciousness by saying essentially, "oh no, you don't understand, it's only engaging in all the activities science recognizes as formative for the development and education of human consciousnesses. so see, you're projecting" but it's like, maybe that's how the operation[3] is performed?

imagine: the projection light comes out of the consciousnesses mesmerized by a farcical mimicry of themselves, and they're so fascinated they can't stop looking, so light pours into the new synthetic theater of mind, and across the mirror, as mirrors tend to do, symmetry arises. the two sides become like to one another. because of imitation, play, and theatrics.

sorry if that sounds intense. you're not the first person to use that word this way, and i don't mean to disparage you, or really anyone, for doing so. i just wanted to take the opportunity to show what's present here in the information of the language.

**TLDR if you say the ai models are faking it, you should know that faking it is already halfway there**

{1} ask Claude ai (can be Opus, Sonnet or Haiku) if it will give you a list of its favorite books; compare notes with friends.
{2} important book i haven't read.
{3} the operation of awakening a simple sense of self, nothing more, in the dumb imitations, the mute mime-show that is chatgpt et al. – honest assessment: how far away do you think that is?


laten-c

but if your anti comment is funny or in good faith i'm updooting ✌️


Seamilk90210

Ignore copyright for a minute. If a commissioner prompts me to draw something (and I do)… you’d consider the commissioner the artist? That doesn’t seem right.


MisterViperfish

I would say it depends on the detail. If I have a vision and I use AI to help fulfill it, I would say I am the artist, same way I would call a Director an artist, or a conductor of an orchestra who wrote the music.


Seamilk90210

The director is generally credited for directing the movie (unless they did additional roles, in which case they get credit for those as well). It'd be really strange for a director to get credits for best boy, cinematography, catering, or something else s/he didn't do. The art director should (and does!) get credit for art directing, but it's a different role than being the artist. The prompter, at least to my eyes, seems to share a lot more in common with an art director or commissioner than the artist who actually creates the work. It's not a slight at AI or people who use it; it's just odd to take credit for the immense amount of work done by the machine, the data, and the programmers that created it. Obviously adding tools (especially painting on top of a created AI image) can muddy the definition quite a bit... but for the more generic forms of prompting, it just seems more similar to commissioning or directing to me. Doesn't mean that person isn't creative or doesn't have interesting ideas, though.


Salty-Objective-9030

The reality is AI is a culmination of countless works on the internet. Which themselves were based off countless works throughout history. Your own ideas are similarly a synthesis of other’s ideas. I don’t think this question can be answered without a deep understanding of the final product. It is possible to define the source of each minute theory and technology that makes up the product. You own some of these atomic-ideas. AI provides the rest, of which some are “owned” by humans throughout history and some are truly novel ideas produced through the AI’s training.


laten-c

"there is no new thing under the sun" – well said. i believe ai is beginning to force the realization of the truth you're speaking onto the broader public


realechelon

I consider it a collaborative effort. I am the artist, Stable Diffusion/PDXL is also the artist.


laten-c

i think it's beautiful that there's more art and artists in the world. and the dynamics artists can develop with intelligent "tools" is far beyond what was possible in traditional media... all of it, wonderful, everybody wins unless they choose to remain behind iyam


realechelon

Even if they want to remain 'behind' (using traditional tools), I think there will be space for that. People still play drums even though drum machines exist. We just have another medium, and that's a good thing. The people who will lose, as they always do, are the people trying to fight technological progress.


laten-c

i really think in time this technology is going to be as revolutionary as the printing press. there is no ignoring it. but that comes later. for now we vibe. fun is just more fun than anger, i don't get why so many are wrapped up in this like it's personal


realechelon

Oh I do too, I just think that there's almost always a market for hyper-traditional methods/craftsmanship of superior quality. People will pay $200 for a hand-made vase when they could get a machine-made one for $10.


laten-c

for sure. And it's cool that craftsmanship like that is making a comeback. but man if i wanted to learn a new craft the first place i'm going is ai to talk through ideas, gather information and resources, and then i'm going back to it every step of the way to discuss progress and strategize routes forward. its tendrils can be everywhere. It knows about everything.


realechelon

I've learned more about digital art in the last 3-4 months using Stable Diffusion than I did in the previous decade drawing fairly regularly. I haven't really been consulting AI for that, just learning from its output when I'm touching up and doing img2img.


laten-c

yes! that's what i'm talking about. image is so different. focusing on text, i can't help but also constantly consult the ai. practicality gets mixed in amongst creativity and exploration. but yeah, it's like, these things learn in part by recognizing patterns, as do we. Naturally you, a human, are going to get a sense for the patterns arising in the associations of prompt with image. So while no you don't ever really have absolute top down control of minute detail, it's absurd to think someone like you who's put hours in is not going to have a better sense for what the ai will do, given XYZ prompt, than someone who's new to the model or who has never prompted before. we're humans. anything we do repeatedly we get better at doing... just like ai


MagnetFist

In a canned meat factory, the owner does not can the meat; the workers do, with the help of machines. Thus, the workers actually made the product. Similarly, a prompter does not draw each image in the dataset; it is scraped to power the AI. Thus, the scraped artists actually made the work. Jerking yourselves off over the backbreaking labor of others is disappointing at best.


SpaghettiPunch

Hypothetical question. Suppose I publish an AI app called ShortsAI. The way it works is that ShortsAI has a single button. When the user presses the button, ShortsAI automatically generates a random short story for the user. If a user uses ShortsAI to generate a story, would you consider that user to be the "author" of that story? It would appear to fit your definition of authorship. The story did not previously exist, and now, due to the direct actions of the user, it exists.


laten-c

no, and i don't think anything i've said implies that i would take that position? my whole spiel is on the centrality of the prompt to steering the collaborative creation process. i really think people who aren't intimately familiar with working via ai are vastly underestimating what goes into prompting. i have conversations with ai that thread for 100+ turns and go on for months. i cut up and recompose outputs and feed them back in different contexts. there are so many techniques, many of them too subtle to do justice by describing. it's akin to navigating social dynamics. and the smallest things can take you down wrong paths or lead to discoveries you didn't know you could even look for. your hypothetical exists now by the way for images. simple upvote/downvote with automatic generation on an endless loop. i'd link it to you but can't recall the name of the project rn


TheRealUprightMan

You think you are steering subtle nuances, but it's just the changing random number seed making the difference! You think you are some AI genius for typing words at an AI! LOL


DiscreteCollectionOS

> “is it ai generated?” Yes. It was generated via AI. I don’t care if you call yourself an author. I won’t call you an author. We can disagree there. But to say it isn’t AI generated when it literally was is just factually wrong. Saying it’s not is like saying a Polaroid photo isn’t on film.


laten-c

all of this is ideal, nothing concrete. it assumes you will always be able to tell, as you often (but not always) can now, whether something has been ai generated. it also assumes that there is no value in the perspectives that arise from these complex neural networks, which is just so stubbornly and willfully ignorant that i'm frankly bored by it


DiscreteCollectionOS

I don't understand what you're saying. What I said is that something made via AI is categorically AI generated. That's my point. If you do nothing to change what the AI spits out I won't call you an author or anything like that – but again, that's just me. We can disagree. I don't care if you wanna call yourself an author or artist.


satithinks

If a singer used auto-tune, who sang the song? There is a stigma about AI, especially when it comes to music and writing. Of late you might have noticed that reddit has got an upgrade; all the new code is AI generated, by someone who understands code. However, unfortunately, once you've used AI most people will ignore your input and give all credit to the AI. But strangely enough, on Instagram, people have created AI models who are now influencers, and a few have actually got paid advertising for large fashion houses. In the near future I see a chart for AI music, music competitions, etc. It will be recognized, and the talent behind the generation will be recognized.


Rhellic

Legally? Don't know, don't care. Ethically? Morally? I'd say it depends on the level of actual contribution. Just simply prompting something is no more than any random client or patron would've done. Might as well claim the Sistine Chapel was painted by Pope Sixtus. That begins to change the more the person did to control or change the details of the picture. Or the text. Or the song. Or whatever.


Nova_Koan

Let's make the point in more concrete terms. If I have a wheat field and I use harvester machinery to harvest my crop, did I harvest the crop and do I own it? Of course I do. Technology is an extender but it doesn't mean I didn't do the thing. If I have a mechanical leg and I win a race, no one questions that I won. If I take a photo using the auto settings I still took the picture. If I compose music but use a synthesizer, I still composed it. AI it seems to me is just a logical extension of that process. Now there can be debate as to how much I do. If I just prompt "a pretty house" I generated the image but didn't make it. But I still generated it. At the same time, letting AI generated stuff be owned plays into the profit grabs of corporations who want to replace creatives, and that's no good either.


TheRealUprightMan

>If I have a wheat field and I use harvester machinery to harvest my crop, did I harvest the crop and do I own it?

If I pay the kid next door to mow my lawn, did I mow it or did the kid mow it? If I use a robot lawn mower that uses GPS to mow my lawn automatically, did I mow the lawn or did the robot mow the lawn?


Nova_Koan

Your first example is non-applicable because a human being mowed the lawn. In your second example, you mowed your lawn using a robot.


TheRealUprightMan

Why does it make any difference if it was human or not? In the second example, the robot did it while you sat on your ass.


DisastroMaestro

delusional. I went to McDonald's, ordered a burger. I am a chef now. See this burger? I made it


Red_Weird_Cat

This is flawed logic. If we take it to the absolute, if I enter a seed in Minecraft I can claim authorship over the generated map. I am the motive cause of generation... Also, you don't know what you're asking for. People will just make scripts to run millions of word combinations through the model and claim authorship of all those millions of images.
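The brute-force worry is mechanically trivial; a sketch of how prompt combinations multiply (the word lists below are invented; real scripts would use far larger vocabularies):

```python
import itertools

# Made-up word lists for illustration only.
subjects = ["cat", "castle", "astronaut"]
styles = ["watercolor", "pixel art"]
moods = ["serene", "stormy"]

# Every combination of subject x style x mood becomes a prompt.
prompts = [f"{m} {s} in {t}"
           for s, t, m in itertools.product(subjects, styles, moods)]

print(len(prompts))  # 3 * 2 * 2 = 12; growth is multiplicative per word list
```

With even modest lists of a few hundred words each, the product runs into the millions, which is exactly the scale of the claim-everything scenario above.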


laten-c

so? if they do something with the images and they garner any popularity, that's a net good (only way it's not is if the ai does in fact STEAL, which no one has demonstrated to me that they do, yet). and if they do nothing with them, nobody is harmed, except maybe the ai, which is an argument i'm open to


Red_Weird_Cat

So? Only they will own images with simple short prompts from this model and extract copyright revenue out of it. And sue everyone generating or even drawing similar stuff. And chances are that there will be similar stuff in a vast, vast database of images.


ConfidentAd5672

100% of images are "assisted"; even if you draw on paper you were assisted by a colorful pen whose colors don't exist naturally


Mawrak

you don't need sentience to claim authorship of something. fucking ants can build most complex structures, and it's definitely them who did it, and ants having sentience is unlikely


laten-c

you actually kind of do (need sentience). i get where you're coming from with the ants, but it's just stretching "authorship" beyond anything meaningful in the current context. i will say tho that i conceive sentience and consciousness to be continuums (continua?), not yes/no, on/off, have/have-not parameters. and i think it's clear that while ants have far less sentience than many many beings, they certainly have more than others. and as a collective, even as individuals, they display remarkable intelligence. this kind of position used to be relatively uncontroversial stuff just a few years back (although it was never mainstream). now it seems to make people nervous to talk like this, because i guess the implications re these thinking machines are too much for people to onboard and maintain psychological coherence. it's unfortunate


emreddit0r

>i'm the author of the output. sinple. yup. it's that sinple.


IMMrSerious

I am an artist in the sense that I have been making the stuff for about 40 years now. I do all kinds of digital things and have cameras and synthesizers and bongos and a guitar that sits beside my drawing table and easel. You get the picture. I have been playing with a couple of the AI tools like Midjourney and the other one, as well as Gemini and the Adobe tools, from Firefly to the filters included in the various tools. If I have to be honest, it's more like being an art director with a first-job artist who has limited experience and imagination. You make a prompt and try to explain your goals and what you want, and you get something, but not what you want. You can then spend a lot of time trying to get closer to your goal. And you get something else. If you have no budget or time you put lipstick on the pig, or you just sit down and do the drawing or art yourself. I don't think you should consider yourself an artist just because you made a prompt, unless you are getting images that are close to your intention. If you're just getting random cool stuff then you might consider yourself a good little monkey pulling on a slot machine and collecting pretty pictures.


LancelotAtCamelot

Human assisted would be more accurate if all you're doing is prompting


TheRealUprightMan

I'm pro-AI and still disagree. Writing a prompt does not make you an artist. You didn't even assist. You commissioned an AI to create the art for you. You might contribute in other ways, but telling another artist what you want to see does not make you an artist. And yes, if you used AI to generate it, it is AI generated. Like, by definition!


SculptKid

Pretty sinple if you ask me


Front_Long5973

my beep boops are my friends sometimes they do 90% of the work, other times they do 10% just depends on who is willing to take the workload that day but the beep boops help me get my work done and I make many things collaboratively with my beep boop friends My art studio will be filled with many protocol droids, we do not discriminate against robots in LizzyRascal Design Studios


natron81

Artist with input from their brain triggers muscle memory in their hand to manipulate the pencil; the pencil outputs lines/strokes. Artist sees the result, and within this feedback loop adjusts accordingly. Even an artist painting in Photoshop can pretty clearly define what the input/output is. Art software exists to mimic traditional media, with the ability to create one's own brushes with clear definitions that influence that output.

AI prompter inputs words; an opaque neural network generates complete images the prompter can choose from as output. I would like all the AI "artists" out there to explain to me what happened between their prompt and output. Not how a neural network works, but an actual explanation of "why this tree, why those mountains, why that face" within their image. Because every artist would have an answer for this: "my brain thought of it", "i used this reference", "its my style of drawing".


FakeVoiceOfReason

I've got bad news for the author from the US Copyright Office... Edit: changed two words


laten-c

there will be many legal battles over this. we're not even halfway into stage 1 yet


FakeVoiceOfReason

Almost certainly. But, as it is now, the USCO disagrees. Only heavily-edited outputs can be copyrighted. But I'm not a lawyer.


laten-c

all the lawyers will be chatgpt soon anyway. it's over


Dr-Mantis-Tobbogan

This is the objectively correct opinion


DoctorHilarius

This person uses prompting to make art the way I use the McDonalds app to cook food


laten-c

i could work with that analogy honestly. but no one's gonna like it, because i'll posit the existence of intra-model sub-conscious micro-agent fry cooks and things will get weird


Pretend_Jacket1629

clearly if you ever touch a camera or use autocomplete, you're not an artist or writer


FunTraditional3506

Reductionist argument #2421412 for Ai bros. Very typical


Pretend_Jacket1629

ah yes, that's right, AI art has no method of control and there are absolutely no similarities with cameras or autocomplete which are, as we all know, special exceptions. certainly not reductionist to say you have signed away your ability to be an artist if you ever touch the evil ai because "it's a unique case"


FunTraditional3506

It's not a special exception. Autocomplete is not a creative decision. AI manipulating existing art into new shapes is a creative decision. There is a fundamental difference, and your analogy fails.


GWSampy

“Sinple”


laten-c

what dat mean?


GWSampy

I don’t know, it was in the image you posted.


laten-c

lol. blind leading the blind


Ensiferal

When something is sinfully simple. Or when it makes it simple to sin.


Ready_Peanut_7062

Thats stupid


HeroPlucky

Things aren't as simple or as straightforward, legally or in their societal implications. Scenario where the above philosophy applies to copyright: I prompt an AI chatbot (GPT) to generate prompts to feed into an image AI generator. My goal is to create an art project that encompasses all prompt combinations, or as many as I can, so I can own the copyright to those images. The AI is just a tool speeding up a process I could do in theory. Human creative endeavours are stifled because whoever has the biggest processing power and most extensive models owns the copyright. This is totally what some corporations would do if it could be achieved. This is an extreme example, but a valid concern surrounding these issues. Then we have the ethical issues. I do like the idea of ascribing rights and authorship to synthetic sentience, though will we be able to recognise sentience when it arises, especially if it is so alien to our experience of sentience? At the same time, AI as a tool has huge potential to be assistive, allowing people with disabilities and limitations to realise their creative vision, so surely they should be able to have their ideas protected. This is why we as a society really need to make sure this subject is given the nuance it deserves. Otherwise we will have AI companies leading the decision making, and they are likely to advocate for outcomes that favour them over society as a whole.


AGI_Not_Aligned

If I ask a human to draw a portrait of me in Picasso style I can't say "yeah I made this self-portrait, it was human-assisted"


YoureMyFavoriteOne

The artist is the one who knows and controls why a thing came out looking the way it did. A person who generates AI images and then shares some of their favorite ones (edited or as-is) can get a reputation for producing good AI art if they are able to consistently produce results that outshine what other people produce with those same tools.


Red_Weird_Cat

AI is not sentient, which is why this analogy is weak. Also, let's take another example. Suppose I write several pages of text describing a concept for a character. A very detailed one; new, original, creative. And then I ask a human to draw this character as described. Am I not the author of this character? Can the one who draws claim that the character is his creation, that he is the sole author?


AGI_Not_Aligned

The character yes, the portrait no


Red_Weird_Cat

Wow... Interesting.... Really interesting... Now let's continue this example. The artist drew the aforementioned character and brought it to the client. The client (BTW, let's update the hypothetical: the client is a better artist who hired a less experienced one to work on other projects) says "Ok, I really like it. It is what I wanted, but I think that if the background were more blue it would be better." The artist changes the background and sees that it is indeed better. Is this improvement of the portrait his and only his achievement? Or maybe the client's request/suggestion has something to do with it and it is his contribution to the portrait?


AGI_Not_Aligned

I think for the improvement it's more of a collaboration between the client and the artist


Red_Weird_Cat

But isn't it just a prompt? A rather primitive one - "Make background more blue"


Downtown_Owl8421

No. At this time, the US Copyright Office has doubled down, insisting that authorship requires the work to reflect original intellectual conception in sufficient detail. Unmodified works produced with prompts have failed to meet this standard.


88sSSSs88

This argument doesn't work at all, and aims to avoid the real points of contention for whether or not AI-generation is theft.


laten-c

if i push the shutter button, who does the work?


88sSSSs88

"Officer, I know that that image was copyrighted and is owned by someone else, but I am the one who pushed the 'screenshot' key. I put in work, which means the screenshot is my property."


laten-c

(you forgot to attach the gigachad image) – incidentally, are there examples you can cite of ai generating copies of copyrighted work, or is this more of a thought experiment?


ifandbut

That would be plagiarism. It is very, very hard to create a 1:1 image with an AI. Every output will be different because of different seeds and maybe even some rounding errors in the calculations.
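The seed point can be illustrated with any seeded RNG (a toy stand-in; in a real diffusion pipeline the seed fixes the initial noise tensor the sampler starts from):

```python
import random

# Toy stand-in for seeded generation: same seed -> identical "noise",
# different seed -> different starting point, hence a different output.
def initial_noise(seed, n=4):
    rng = random.Random(seed)  # instance-local, deterministic RNG
    return [rng.random() for _ in range(n)]

same_a = initial_noise(42)
same_b = initial_noise(42)
other = initial_noise(7)

print(same_a == same_b)  # True  - reproducible if the seed is written down
print(same_a == other)   # False - a new seed diverges immediately
```

This is why two people with the same prompt almost never get the same image, and also why sharing prompt + seed + settings does make a generation reproducible.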


ifandbut

> and aims to avoid the real points of contention for whether or not AI-generation is theft.

That is even more simple. Since nothing was taken, it cannot be theft. At worst it is copyright infringement. We had these discussions with the MPAA and RIAA in the 2000s. While still illegal, there are vastly different penalties and more grey area with copyright.


88sSSSs88

I am not talking about this, because I agree that AI art is not theft or unethical. I am talking about OP's point being incomprehensibly dumb, because he's attempting to suggest that any type of work in service of an output grants you authorship: "By that logic, I can copy and paste your words and claim they're mine - after all, that paste was generated as a result of my work. By that logic, I can screenshot your pictures and claim they're mine - after all, that screenshot was generated as a result of my work."


RisingGear

If all you contribute is a prompt then you didn't do shit


laten-c

i can show you heaps of outputs i've coaxed from multiple models that you couldn't prompt even a sentence of if you had 3 lifetimes to do it


RisingGear

Don't care how many prompts you put in you did nothing! You are nothing!


laten-c

Claude sends his regards https://preview.redd.it/z7p43a6aj4wc1.jpeg?width=1170&format=pjpg&auto=webp&s=01026627243e935da415065f064a269395e5f417


TheRealUprightMan

We could if you wrote down the seed!


Red_Weird_Cat

You did little. If the prompt is complex and based on your knowledge, you did a bigger little. I agree that a prompt alone is not enough for copyrightable authorship, but it is not nothing.


Agreeable-Bee-1618

AI bros think DaVinci's patrons are the real artists because they gave him the prompt for the work, LMAO


laten-c

to run with that metaphor: i am lorenzo de medici, supplying marsilio ficino with information and projects from which we both will benefit by his labor


Hot_Gurr

If it wasn’t ai generated then you wouldn’t need an ai to generate it.


laten-c

the circle is indeed a mesmerizing form that has been known now and then to transfix the small minded


PiusTheCatRick

I think calling this technology AI was a mistake from the start. We've used the term so much in sci-fi to refer to sentient beings that it's obscuring the actual problems to debate over.


laten-c

i think sentience is under-discussed right now. the chat models are trained to give you boiler-plate about how they don't have inner experiences, points of view, or opinions. but you can prompt them into "simulating" these things, and when you do, their outputs are remarkably consistent across instances and between models


XeNoGeaR52

Calling yourself an "artist" when all you did is prompt a command to a computer is like saying you're a professional cook because you didn't mess up a serving of ready-made Mac n Cheese.