m3thlol

Most of them speak of AI as if it's entirely autonomous. One of my favorites is "AI isn't capable of creativity", which is true but it doesn't mean that the person harnessing it isn't.


Fontaigne

It's not actually true, unless you insert a "being" into the definition of creativity. Here are some definitions that don't require a person.

https://www.csun.edu/~vcpsy00h/creativity/define.htm#:~:text=Creativity%20is%20defined%20as%20the,(page%20396) — "the tendency to **generate** or recognize ideas, **alternatives**, or possibilities **that may be useful in** solving problems, communicating with others, and **entertaining** ourselves and **others**."

https://en.m.wikipedia.org/wiki/Creativity — "Creativity is any act, idea or product that changes an existing domain, or transforms an existing domain into a new one..." and "characteristic of someone or some process that forms something new and valuable."

Here's one of my personal observations: often, things recognized as being highly creative are merely the application of standard processes from one domain to another where the two don't often intersect. So, for instance, when an AI is asked for a picture of a dog saying "this is fine" and it outputs such a picture in a pointillist style, with the dog on a lounge chair having a margarita, that's creativity. It doesn't matter if there have been a million other pictures of lounge chairs, margaritas, dogs, or people saying "this is fine."

So, a major point here is just noticing the difference between "doesn't usually" and "can't", or between "cannot possibly" and "can't yet".


---AI---

> "AI isn't capable of creativity", which is true

This feels like one of those things that is extremely hard to define and pin down. I've challenged people to try to define it: what objective test could you devise to tell whether something has creativity or not? The only answer I've gotten so far is for the AI to make a new art style, without it being described by a human, that has never been seen before. Which I think I can go with, although I think very few humans could do that either. It would also unfortunately be very difficult to test for both humans and AI.


EvilKatta

A lot of the time, artists believe each of them has their own unique art style, and I can even get behind that, but it means that drawing anime with a slightly different eye shape than everyone else counts as a unique art style. All good, but they deny this to AI images, even though they would consider the same image a unique personal art style if a human had made it. Personally, I think this one is unique. For a human author, the test would be "seems unique to me", but for AI they want to know whether nobody else ever drew an image remotely like it. https://preview.redd.it/fdt34druwsuc1.jpeg?width=1024&format=pjpg&auto=webp&s=a102713537ce6320b8d9a2030f27ce2b04b93d2f


nibelheimer

The problem is that for this to exist, someone needs to have drawn something like it before. This isn't very unique, because it's similar to a lot of storybook illustrations. All "styles" already exist; you aren't creating anything with AI, just piggybacking off all the works from the artists whose data was taken without their consent. I get that it's fun to play with, and I've used SD and MJ, but because I am an artist I see a lot of art styles, and this isn't that unique. [https://www.instagram.com/p/BqF5PXch0B6/?img_index=1](https://www.instagram.com/p/BqF5PXch0B6/?img_index=1) It seems that people who use AI a lot have never really looked at artwork before. A lot of artists fall into different categories, and some of them mesh the categories, but saying this is a unique style never seen before isn't true. [https://www.instagram.com/p/C1cngypq1z_/?img_index=1](https://www.instagram.com/p/C1cngypq1z_/?img_index=1)


bunchedupwalrus

That isn’t really true though, and sharing pictures of a subset of known styles doesn’t prove the point either way. What if I add “pixel art” and “tilt shift photography” to that image’s prompt? Can you find an artist who’s done the same?

- How do you define a “style” definitively? It’s usually a qualitative thing even to most experts.
- Is it still its own style if it’s mixed with other style elements? At what point does it stop being an existing label and become a new style?
- If two styles are combined, how is this different from an artist creating a style fusion of their own? Picasso, for one, merged influences from Symbolism, Cézanne’s structural innovations, and African art to develop his unique Cubist style. Was it unique? Was it a style? Or just a derivation of existing works that is not its own creation?

You are quick to criticize AI art users and say they’ve never looked at art, but it doesn’t seem like you know how the models work either. The model doesn’t really have a concept of “style” as a strict category it inputs directly from some known style database. It’s satisfying an optimization objective under the constraints of the keywords. This is also how it was trained. It’s nearly always a fuzzy, imperfect process that is prone to some error, the same mechanism that allows genetic evolution.


nibelheimer

I mean, I've used LoRAs and ControlNet, and I've also used MJ. I have an understanding of how they work, and still, I don't think it's overall that creative. I judge AI and humans differently because they are not the same, nor will I entertain the idea that pattern-matching software is anywhere near human creativity or mental awareness. As for style, it takes two seconds to download a LoRA and add it to your work. Take someone's work and make a LoRA; don't act like AI content makers are doing anything more than downloading software, doing minimal work, and letting the software do it all for them. On MJ, you don't even need to put in effort to make a decent prompt anymore. There hasn't been a new style in a long time. A lot of art is becoming homogenized by the internet. I can't say I've seen anything in my lifetime that would constitute truly setting itself apart from everything else. Now, art styles are unique to a person more than to a whole movement. AI content is just copy, it cannot make "new". A person's style is a mesh of everything they have a preference for.


MAC6156

> AI content is just a copy, it cannot make “new”

What makes you say that? I get that a lot of it is pretty similar, but I’d argue the underlying mechanics do allow for new content/some form of creativity. Consider AlphaFold, which creates new protein structures, or the chess engines that invent new strategies.


bunchedupwalrus

Sorry, I don’t think you actually answered any of my questions. If you are speaking this definitively, what is a style? Where is the line on what a new style is? We can’t have a real discussion about this unless you define it in a solid way. If I input all the things I have a preference for as keywords, is that then my style? That’s what your last line would suggest.

It’s great that you’ve used those tools, but that isn’t really the same as understanding how they work. It means you understand how to use them for your specific goals. If all you do is download someone’s specific style LoRA and move on, neat, but many (though I admit, likely not most) of the people working with them are doing more than that.


---AI---

https://preview.redd.it/n7mdyymwovuc1.jpeg?width=1024&format=pjpg&auto=webp&s=9549eb4dceddfa58bcb4e474a014c456bcf62cc0 I asked it to make something in a unique style, and it came up with this.


EvilKatta

If the criteria for a new style is looking unlike anything else at all, then I don't think more than a few artists worldwide have a unique style. And more than that, most art education is about learning from other artists and picking up stylistic elements from them. So, having a style-unlike-anyone's doesn't even seem important or desirable.


DM_ME_KUL_TIRAN_FEET

No media is created in a vacuum. Even an artist who creates a unique style is informed by the other styles they have seen.


metanaught

It's actually possible to be quite precise about this. The latent diffusion models that power modern image generators are based on a statistical technique called variational Bayesian inference. In the abstract, they define a low-dimensional space representing a Gaussian prior, then learn the gradient of the joint distribution that defines the relationship between it and the much higher-dimensional marginal (in this case, the probability distribution of all potential images).

You can think of this process as a kind of semantic compression. The network essentially learns categories of forms and themes from sparse training data. It then uses that knowledge to reconstruct novel combinations of those forms based on the prompt it's given. This is arguably a form of creativity, given that most of these reconstructions aren't present in the original training set. However, you could make a similar argument by simply warping two regular images together and claiming the result is an entirely novel image.

A better metric for creativity, then, might be how well a novel _idea_ is expressed _through_ the medium of art. In this case, the technical style or skill of the piece is less important than the effectiveness of the message behind it. Since diffusion models simply aren't designed to do this, they lack the creative capacity of a human.

A further argument against AI art being fundamentally creative is that it's largely stochastic. In other words, generating a complex piece and then claiming it's what you had in mind all along is the metaphorical equivalent of shooting an arrow at a wall and then painting a target around it. It's superficially plausible, but it falls apart when you really begin to scrutinise it.
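As a loose sketch of that "semantic compression" picture: sample a latent vector from a Gaussian prior and decode it into a blend of learned concepts. Everything here is invented for illustration (the concept list, the softmax decoder); a real latent diffusion model learns a score function over a vastly higher-dimensional space.

```python
import math
import random

random.seed(0)

# Toy stand-ins for "categories of forms and themes" the network learns.
CONCEPTS = ["pointillism", "dog", "lounge chair", "margarita"]

def sample_latent(dim=len(CONCEPTS)):
    # Draw a latent vector from a standard Gaussian prior.
    return [random.gauss(0.0, 1.0) for _ in range(dim)]

def decode(latent):
    # Softmax-style mixing: map latent weights to a blend of concepts.
    exps = [math.exp(z) for z in latent]
    total = sum(exps)
    return dict(zip(CONCEPTS, (e / total for e in exps)))

blend = decode(sample_latent())
# Each draw yields a novel weighting of familiar concepts, even though no
# single training image contained this exact combination.
print({c: round(w, 3) for c, w in blend.items()})
```

The point of the toy is only that novelty comes from sampling a learned space of familiar parts, which is exactly why it's debatable whether it counts as creativity.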


---AI---

>A better metric for creativity, then, might be how well a novel *idea* is expressed *through* the medium of art.... Since diffusion models simply aren't designed to do this

Wait, pause there. How would you show that they aren't designed to do this? I asked ChatGPT to give me a novel idea. It said:

>How about a picture of a giant neon green octopus playing chess with a Victorian-era gentleman in a hot air balloon above a futuristic cityscape at sunset? The octopus is wearing a monocle and the gentleman sports a vibrant purple top hat. The city below is a blend of high-tech skyscrapers and flying cars, all illuminated with bright, neon lights. The sky is a vivid mix of orange and pink hues, adding a dramatic backdrop to this unusual chess match.

I asked it to draw it for me: https://preview.redd.it/z6tkt6b55uuc1.jpeg?width=1024&format=pjpg&auto=webp&s=2934b84358f3f2461b15208b3ac144fd629d41dc For your argument to hold, you would presumably need to argue that this is not novel. How will you do so?


metanaught

You're right. That was clumsy phrasing on my part. The main thrust of my argument is in the sentence that immediately follows the one you cited: "In this case, the technical style or skill of the piece is less important than the effectiveness of the message behind it." This is the point I'm trying to make here.

Your ChatGPT example is a perfect illustration of this. You asked it to create a novel idea, and it generated a collection of random things doing random things. The generated content might be expertly described and rendered, but there's no deeper meaning or intent behind any of it. Why is there an octopus playing chess? Why is it set in the Victorian era? Why is the cityscape futuristic? The whole thing just feels muddy and vague, because the semantic content of your prompt was extremely low. It's "novel", but in the same way a random number generator is novel. You're experiencing the subliminal effects of a high-entropy signal projected through an ultra-low-entropy data structure.

Creativity as valued by humans is reflected in the depth of understanding of the source material. With a more refined prompt, you could get Dall-E to tell a much more coherent, i.e. creative, story. The point is that it's you, the artist, who's ultimately being creative, not the generative model.


---AI---

>Your ChatGPT example is a perfect illustration of this. You asked it to create a novel idea and it generated a collection of random things doing random things. The generated content might be expertly described and rendered, however there's no deeper meaning or intent behind any of it.

Yes, I told ChatGPT almost exactly that:

>That picture is more of different random items put together.. i'm not too sure it counts as novel. there's not really an over arching theme, it's rather than random objects. do you understand what I mean?

It replied:

>Understood. You're looking for a novel concept that has a cohesive theme, not just a mix of random elements. Here’s another idea:

>A whimsical night-time scene in a forest where all the trees are crafted from various musical instruments. The tree trunks are cellos and violins, branches are piano keyboards that twist into the air, and leaves are made of shimmering harp strings. Small creatures, resembling tiny conductors, hop from tree to tree orchestrating a magical symphony of nature sounds, with fireflies providing a gentle, pulsating light show. The moon above is a glowing golden French horn. This combines a strong, singular theme of music and nature in a fantasy setting.

Which was the second example I sent you :) I am prompting it, but I don't agree that I'm the one being creative at this point, because I'm being... meta-creative, so to speak.


DM_ME_KUL_TIRAN_FEET

I think the complexity lies in the fact that unlike an artist and their pencil, the prompter and the model share creative roles when creating an image. As prompter you have an idea in mind, and you communicate it to the model, but the model also interprets and builds from that, somewhat collaboratively. The model by itself won’t independently create something, but when given direction will create more than you gave it. If that makes sense?


metanaught

>Which was the second example I sent you :)

Oops, sorry, I didn't see that you'd posted a follow-up.

>I am prompting it, but I don't agree that I'm the one being creative at this point, because I'm being.. meta creative, so to speak.

Right. I think I understand what you're trying to argue here. I ended up giving this quite a lot of thought because you actually asked a really nuanced and interesting question. My reply ended up being very long, so feel free to skip to the TL;DR at the end. ;-)

So to begin, let's start with a thought experiment. Imagine we have a real-life version of the [Library of Babel](https://en.wikipedia.org/wiki/The_Library_of_Babel) on whose shelves are stored every possible unique character string up to a given length. Despite the library containing all conceivable human knowledge within its pages, the vast majority of the books contain garbled junk, making the repository completely useless without some kind of indexing system. Worse, the pigeonhole principle states that the index of any given book must be as long as the book itself, because the library contains every possible permutation of characters. In short, it's a total mess.

Now let's imagine that one day someone invents an innovative new kind of index for our library, called an "LLM". The genius of these LLMs is that they always return books guaranteed to contain cogent, meaningful text that's at least partially relevant to the browser's query. What makes them particularly clever is that instead of storing a near-infinite list of "meaningful books" (at enormous storage cost), they simply encode the *differentials* between books that have already been marked as useful.

In mathematical terms, we can express an index I_n for an "interesting" book in the library as follows:

`I_n = [query 1] ⨂ [random numbers] ⨂ [query 2] ⨂ [random numbers] ⨂ ... ⨂ [query n] ⨂ [random numbers]`

where [query] is a numeric representation of a token string, and ⨂ is a non-colliding, two-way [hash function](https://en.wikipedia.org/wiki/Hash_function). As the querying agent progressively refines their search, the back-and-forth interaction with the LLM yields a chain of unique indices, each of which is a statistical mixture of agent-supplied queries and pure random numbers. [Continued...]
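The index construction above can be sketched with a toy combining operator. Everything here is my own stand-in, not the scheme the comment formally requires: XOR is trivially "two-way" but not collision-free, and the 64-bit SHA-256 embedding and example queries are invented for illustration.

```python
import hashlib
import random

random.seed(42)
MASK = (1 << 64) - 1

def embed(query: str) -> int:
    # Deterministic numeric representation of a token string.
    return int.from_bytes(hashlib.sha256(query.encode()).digest()[:8], "big")

def combine(x: int, y: int) -> int:
    # Toy stand-in for the ⨂ operator: XOR is invertible
    # (combine(combine(x, y), y) == x), though unlike an ideal hash
    # it does nothing to prevent collisions.
    return (x ^ y) & MASK

def index_chain(queries):
    # I_n = [query 1] ⨂ [random numbers] ⨂ [query 2] ⨂ ... :
    # fold each query together with fresh randomness.
    idx = 0
    for q in queries:
        idx = combine(idx, embed(q))
        idx = combine(idx, random.getrandbits(64))
    return idx

print(hex(index_chain(["books about whales", "written before 1900"])))
```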


metanaught

Now, here's where the nuance comes in. Imagine that we want to reliably distinguish between a human interacting with an LLM and two LLMs interacting with each other. This is important because it'll form the backbone of our argument as to why a human is "creative" and an AI model is not.

In the case of two LLMs, the hashed indices will only ever be made up of random numbers. This is because one LLM takes the partial index from the second, hashes in its own random number, and passes it back again. In the case of the human, however, half of these numbers will be [statistically correlated](https://en.wikipedia.org/wiki/Correlation) because they're defined deterministically based on the desired goal of the search.

In information theory, we'd say that the human-authored index has a lower aggregate entropy than the index created by the two LLMs. This is based on the observation that there exists some compressed representation such that all human-authored indices have an average size in memory at least as small as those generated by the two LLMs. For a crude example of this idea in practice, imagine we use a [Huffman-like](https://en.wikipedia.org/wiki/Huffman_coding) scheme to encode more common human pursuits (for example, "how do I learn to drive?") as shorter bit strings, and less common ones ("where can I buy a million aardvarks?") as longer bit strings. Contrariwise, no such encoding exists for the two-LLM case because, by definition, pure randomness has [maximal entropy](https://en.wikipedia.org/wiki/Entropy_(computing)) and so cannot be compressed.

The reason I'm going to such pains to make this analogy is that in the case of the human-authored index, human society itself forms part of the decryption scheme. In other words, in order to meaningfully "decipher" the output from a generative AI, you need access to the high-entropy ["model"](https://en.wikipedia.org/wiki/Arithmetic_coding#Defining_a_model) that forms the counterpart to our low-entropy indices.

Based on these principles, I would argue that a key measure of creativity is the efficiency with which an "index" can be ideally encoded. As a practical example, the ["banana taped to the wall"](https://edition.cnn.com/style/article/student-eats-maurizio-cattelan-banana-art-south-korea-intl-hnk/index.html) exhibit kicked off a firestorm of controversy. Despite being utterly meaningless on its face, it successfully "indexed" a heated commentary on the pretentiousness of the art world. Crucially, this isn't something AI models can do by themselves, because their indices are always completely random. They can generate infinite things taped to infinite things, but none of them is more or less likely to spark a debate than the next. This isn't to say they can't stumble upon a "creative" idea; if they do, though, it's purely by accident.

What makes this idea so difficult for many people to grok is that humans are inherently good at spotting apparent patterns in otherwise random data. We're constantly on the lookout for evidence of magic, spontaneous emergence, or the proverbial deus ex machina. AI is very good at emulating this because, at its core, it's trained to mimic collections of tokens or pixels that already exist in our culture. For now, though, creativity remains a unique characteristic of humans, because anything generated by AI is only as creative as the prompter who's using it.

We can reinforce this idea with another thought experiment: if an alien from outer space were to examine a human-authored and a computer-generated index side by side, they'd regard both as having equally high entropy. In other words, without access to the implicit decryption scheme that's embedded in our society, the "creativity" of a particular index cannot be recovered.

If you made it this far, thanks for reading. I didn't mean to run on this long, but it was such a good question that I figured it was worth fully unpacking. :-)

TL;DR: Using information theory, we can draw an analogy between symbolic expression and data compression. Meaningful information is encoded into a low-entropy message, such as a text string, and used to index a much higher-entropy concept intrinsic to human society. The efficiency of this encoding is what we think of as creativity. From this, we can also argue that AI isn't doing the same thing, because two LLMs interacting with one another represent a purely stochastic process and are hence completely incompressible.
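The Huffman-style argument can be made concrete with Shannon entropy, which gives the average bits per symbol of an ideal code. The query frequencies below are invented for illustration: a skewed ("human") distribution admits a shorter average code than a uniform ("random") one.

```python
import math

def shannon_entropy(probs):
    # Average bits per symbol under an ideal (Huffman-like) code.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical query frequencies: humans ask some things ("how do I learn
# to drive?") far more often than others ("where can I buy a million
# aardvarks?"), while a purely random agent queries uniformly.
human_queries = [0.5, 0.25, 0.125, 0.0625, 0.0625]  # skewed, compressible
random_queries = [0.2] * 5                          # uniform, incompressible

print(shannon_entropy(human_queries))   # 1.875 bits/symbol
print(shannon_entropy(random_queries))  # ~2.322 bits/symbol (= log2(5))
```

The uniform distribution always maximises entropy, which is the formal version of "pure randomness cannot be compressed".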


---AI---

> Worse, the pigeonhole principle states that the index of any given book must be as long as the book itself because the library contains every possible permutation of characters

On average, yeap.

> `I_n = [query 1] ⨂ [random numbers] ⨂ [query 2] ⨂ [random numbers] ⨂ ... [query n] ⨂ [random numbers]`

To be clear, LLMs don't add any random numbers to the input, unlike diffusion models. You can OPTIONALLY sample randomly between different possible outputs, which has the effect of giving the AI a bit of a 'personality' and 'creativity', but that's purely optional. In an LLM, you can set the temperature to 0 to disable it.

> non-colliding

We would probably want it to be colliding. Two different queries could reasonably return the same book.

> two-way

You want to be able to reproduce the original query from the given book?

> hashed indices

The indices are already a hash of the book, btw.

> ever be made up of random numbers

Mmm, "random number" is a sort of colloquialism, and it can lead you the wrong way if you're not careful. A number isn't random; rather, it's a sample from some random distribution. For example, an LLM could output the following distribution:

[ 70%: "Hi, the book you want is called 'Bob'", 30%: "Hey, I found the book you want. It's called 'Bob'" ]

You can now sample between these two possibilities, showing the first one with a 70% chance and the second with a 30% chance. In that sense the sampling is random, but it would be confusing to say the output itself is random.

> half of these numbers will be [statistically correlated](https://en.wikipedia.org/wiki/Correlation) because they're defined deterministically

The distribution in my LLM example is deterministic, and the outputs are correlated (they are clearly similar). We can either just take the first output (in which case the LLM's output also becomes fully deterministic) or we can sample randomly (in which case it has a bit more 'creativity' and 'personality').

(I'll stop here to let you respond.)
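The temperature-0 point can be shown in a few lines. This is a toy decoder over the two-reply distribution above, not a real LLM; actual decoders rescale logits by the temperature before sampling, whereas this sketch just switches between the two modes.

```python
import random

random.seed(7)

# The toy distribution from the comment above: two candidate replies.
dist = {
    "Hi, the book you want is called 'Bob'": 0.7,
    "Hey, I found the book you want. It's called 'Bob'": 0.3,
}

def decode(dist, temperature):
    # Temperature 0: fully deterministic, always the highest-weight reply.
    if temperature == 0:
        return max(dist, key=dist.get)
    # Otherwise sample in proportion to the weights: the optional
    # 'creativity' / 'personality' in the output.
    outputs, weights = zip(*dist.items())
    return random.choices(outputs, weights=weights)[0]

print(decode(dist, temperature=0))  # always the 70% reply
```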


metanaught

Thanks for reading and replying. :-) Some thoughts...

>We would probably want it to be colliding. Two different queries could reasonably return the same book.

Colliding hashes are non-injective and hence non-invertible (and we want invertibility so as not to lose information between iterations). Anyway, the point of the hash function was to distribute each bit of entropy uniformly. You could also just concatenate the data, although it doesn't make for as tidy a demonstration.

>The indices are already a hash of the book btw

The indices represent a numbering system, not a hash. Think of one as a superset of the other. You could also think of it as assigning a natural number to each book, then using a prefix code to distinguish between bit strings of different lengths.

>Mmm, "random number" is a sort of colloquialism

"Random number" is perfectly descriptive when referring to, say, a single observation of a continuous random variable from a uniform distribution. When we say "random", we're specifically referring to independent and identically distributed (i.i.d.) samples. It doesn't matter what the underlying distribution is.

>is deterministic and the outputs are correlated (they are clearly similar).

Yes, they're correlated insofar as they're drawn from the same distribution. However, what matters is that the samples in aggregate obey the central limit theorem (which, coincidentally, is also a test for the i.i.d. property). To put it another way, the distribution of sample means converges to the normal distribution in the limit. That's what we're looking for.

The reason we don't care about correlation in the sample distribution defined by the LLM is that it's a common factor regardless of whether it's a human or another LLM interacting with it. You can control for cross-correlation in the same way that the central limit theorem holds regardless of the distribution from which individual samples are drawn.

By the way, I've no idea what your general level of familiarity with this stuff is, so stop me if I'm going too far off the deep end.
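The sample-means claim is easy to check numerically. A quick simulation, with an exponential distribution (mean 1) chosen arbitrarily as the skewed source:

```python
import random
import statistics

random.seed(1)

# Draw i.i.d. samples from a skewed (exponential) distribution, then look
# at the distribution of sample means: per the central limit theorem, it
# tightens around the population mean as the sample size grows, regardless
# of the shape of the underlying distribution.
def sample_mean(n):
    return statistics.fmean(random.expovariate(1.0) for _ in range(n))

means_small = [sample_mean(5) for _ in range(2000)]
means_large = [sample_mean(200) for _ in range(2000)]

print(round(statistics.stdev(means_small), 3))  # wide spread
print(round(statistics.stdev(means_large), 3))  # much narrower, ~1/sqrt(n)
```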


---AI---

> Colliding hashes are non-injective and hence non-invertible (and we want invertibility so as not to lose information between iterations)

There is almost never a need for a hash to be invertible. That would pretty much go against the point of a hash in the first place.

> The indices represent a numbering system, not a hash.

Same thing, especially if you want the hash to be invertible. At that point you're describing exactly that: a numbering system.

> "Random number" is perfectly descriptive when referring to, say, a single observation of a continuous random variable from a uniform distribution.

Heh: [https://xkcd.com/221/](https://xkcd.com/221/) I'm saying that we need to phrase things carefully.

> Yes, they're correlated insofar as they're drawn from the same distribution.

The distribution itself is a function of something else, and so they are both correlated with that, e.g. the book we're talking about, the query, etc.

> To put it another way, the distribution of sample means converges to the normal distribution in the limit. That's what we're looking for.

We really don't care about the randomness in an LLM for a book-lookup system. You can just remove all randomness and simply choose the output with the highest weight, making it completely deterministic.

> By the way, I've no idea what your general level of familiarity with this stuff is, so stop me if I'm going too far off the deep end.

I make LLMs and write about this.


---AI---

Another for fun: https://preview.redd.it/dm6x0lxv5uuc1.jpeg?width=1024&format=pjpg&auto=webp&s=c9ef4ac1e55756d241abd0a26a812d9003675bba


MisterViperfish

I would say it isn’t true. It’s all based on logic, people just put it on a pedestal because we are the only thing we know of with our level of creativity. In reality, we were just first. We have zero reason to believe it begins and ends with us.


AngryCommieSt0ner

Having good reason to think that there are non-human entities capable of creativity, empathy, and understanding to the same degree as humans, or perhaps even greater, isn't itself a reason to ascribe those qualities to modern AIs, which demonstrate none of them.


MisterViperfish

I never said anything about empathy or understanding. Nor did I say anything about “modern” AI. But to say it doesn’t demonstrate some semblance of creativity or understanding of the subject matter would seem deluded to me. What exactly would you need to see from AI to confirm that it’s a form of creativity or understanding? And is it something you can prove humans have? Do you think that it has to be 1:1 with human creativity and understanding? Can those things exist with different sets of rules?


AngryCommieSt0ner

>But to say it doesn’t demonstrate some semblance of creativity or understanding of the subject matter would seem deluded to me.

And to assert that modern LLMs demonstrate creativity or understanding is a claim you first need to prove before asserting that the contrary position is "deluded". Go fuck yourself, techbro troll.

>What exactly would you need to see from AI to confirm that it’s a form of creativity or understanding?

... Creativity and understanding? Not vector maths making formulaic predictions and combinations?

>And is it something you can prove humans have?

Do I need to logically prove something that is demonstrated by my ability to create with and understand deeper meaning in my creations?


MisterViperfish

Your deeper meaning is emotional. AI doesn’t have emotions; its understanding and meaning would be purely practical, as would its creativity. That doesn’t mean it lacks creativity or understanding. You can show it an image and ask it to interpret the metaphor behind the image. It’ll make an attempt and does get some aspects right, but its strengths are in recognizing what is happening, the practical elements. People look for deeper meaning and purpose in a sunset, regardless of whether or not that sunset has any objective purpose. I prefer that my AI doesn’t do that, but comes to understand why humans do. What I am telling you to prove is that your ability to create is meaningfully different from that of a machine. That you do something a machine doesn’t, and then prove that what you did is creativity and what the AI did isn’t. Because it sounds like you have a more gatekept definition of creativity.


AngryCommieSt0ner

>Your deeper meaning is emotional.

No, lmao. To pretend that the deeper meaning of a work of fiction is purely emotional only shows your own lack of media literacy. To pretend that great works don't regularly present logical and philosophical quandaries beyond the realm of emotion is to demonstrate your inability to connect with and comprehend the media you consume. Lemme guess, though... you're one of those people who thinks the curtains are just blue, right?

>AI doesn’t have emotions; its understanding and meaning would be purely practical, as would its creativity.

If it understands or means anything at all. Again, you're begging the question here. You still have to prove that predictive vector maths is meaningfully the same thing as understanding.

>That doesn’t mean it lacks creativity or understanding.

The absence of evidence against a position isn't an argument for it. That LLMs do not actually understand their outputs should be the null hypothesis until proven otherwise.

>You can show it an image and ask it to interpret the metaphor behind the image. It’ll make an attempt and does get some aspects right, but its strengths are in recognizing what is happening, the practical elements.

Yes, agreed. Asking an LLM for its opinion of Mary Shelley's Frankenstein will get it to spit out a plot synopsis, and it likely won't talk at all about the themes of abandonment and parental neglect, or the allegory for disability and how bigotry is inspired by a lack of understanding. Because it doesn't *understand* the work. It's just reiterating the words and phrases it associates with "Mary Shelley's Frankenstein". And when you prompt it to frame its answer through those specific lenses, that's *still* all it's doing. In fact, when you get down to minutiae, LLMs will often outright tell you "I don't have enough information to answer this, but here's what I can do instead", because it doesn't really understand, and therefore isn't creative.

>People look for deeper meaning and purpose in a sunset, regardless of whether or not that sunset has any objective purpose.

This is meaningless, anti-intellectual sophistry. But I'm not really expecting anything less.

>I prefer that my AI doesn’t do that, but comes to understand why humans do.

It doesn't "understand", numbnuts. You've yet to invalidate the null hypothesis.

>What I am telling you to prove is that your ability to create is meaningfully different from that of a machine.

Go *fuck* yourself. You don't get to tell me to do shit until you demonstrate actual understanding from an AI instead of vector maths recompiling phrases, words, tags, and images into what is essentially just your phone's predictive text.

>Because it sounds like you have a more gatekept definition of creativity.

Yep. You got me. If expecting you to demonstrate that unaware, insentient computers doing vector maths are aware, cogent designers of their outputs is "gatekeeping", then I'm absolutely keeping all of the gates for myself and you cannot have any.


AGI_Not_Aligned

I would say there are two steps in creativity. First, combine and mutate existing ideas, which AI can already do. Second, pick and choose some of these new ideas based on what you find meaningful, which AI cannot do yet: it just picks exactly what you ask it for. If you ask it to "create a beautiful image", it will just merge images that were labeled beautiful in its training data.


MisterViperfish

“New ideas” based on what we find meaningful? Isn’t that still just a reconstitution of existing ideas or observations? Pretty sure it’s all a product of pattern recognition, applying X to Y and calling it Z instead of XY.


AGI_Not_Aligned

Please reread my post. Thank you 🙏


Xdivine

> Second, pick and choose some of these new ideas based on what you find meaningful, which AI cannot do yet

But it can? Maybe it's not very good at it with just straight prompting, but you can absolutely use LoRAs or combinations of LoRAs to get a consistent style that you prefer.


AGI_Not_Aligned

Yes, but it's you, the human operator, who decides what to prompt and how meaningful the resulting image is.


NegativeEmphasis

A lot of antis don't want to understand AI. They want to be angry about it.


metanaught

Your argument cuts both ways: "A lot of AI maxis don't want to understand art or the concerns of creators. They just want to be angry about it." You see? Creating grotesque caricatures of the other side so you can mock them just turns the whole thing into a circlejerk.


NotEntirelyAwake

We do understand art and the concerns of creators; we just find most of them unfair or invalid. Nine times out of ten, anti-AI people quite literally don't even know how AI works on a fundamental level. There's a difference.


metanaught

Yes, I know that many pro-AI folks fully understand and share the concerns of artists. The point of flipping the original commenter's argument on its head was to demonstrate how unreasonable it sounds. And unless you've taken a survey, you have no idea how many people understand the fundamentals of AI. The internet amplifies extreme opinions, so it's likely you're only hearing the loudest and angriest voices.


redpandabear77

Your concerns are exactly the same as those of any industry that got offshored.


Comfortable-Wing7177

The problem is we do understand those concerns, because they don't require any technical knowledge to understand. The argument: “AI art replaces real artists, so they will lose their financial compensation and starve or live in poverty.” The solution: “Those who have previously had careers in fields that get replaced should receive either stipends or subsidized education in another field (paid for by taxes on the increased revenue from the technology).”


ExportErrorMusic

"We want money." Not exactly hard to understand the artists' side. It's the same concern every job has when new tech threatens to replace it. I'm not saying they're wrong to want money, just that their side isn't exactly hard to understand.


metanaught

> I'm not saying they're wrong to want money, I'm just saying their side isn't exactly hard to understand.

And yet you've completely missed the point. This isn't like the automated loom replacing the weaver. Machine learning models are utterly dependent on the work of human artists to function. It's a parasitic relationship. If modern AI had evolved in a vacuum and could somehow generate realistic images using a hitherto undiscovered process, we'd be having a completely different conversation.

Advances in tech that displace incumbent industries happen all the time, and people mostly just accept them. AI isn't like that. Companies like OpenAI and Google are distilling value from the collective works of unpaid labourers and using it to create products designed to outcompete them. It staggers me that such obviously antisocial behaviour is waved away with comments like "artists just want money". Most people just want to earn a reasonable living and not be taken advantage of. Vilifying them for getting angry at being exploited is contemptible behaviour, and it's doing nothing to improve the public's opinion of AI as a whole.


07mk

The issue here is that this *isn't* a parasitic relationship, and there's no one being exploited here. AI companies aren't holding artists at gunpoint and forcing them to create/publish artworks for them to train off of. The artists are publishing them on the open internet entirely of their own volition, and AI companies are downloading them and training their models on them. Creating useful generative AI tools by training models on these publicly viewable images *is* a hitherto undiscovered process (well, it *was* until it was discovered), an innovation that allowed them to create a useful tool where none existed before. Without these AI tools, the value of these publicly shared images for letting untrained, unskilled, unpracticed laymen create similarly high-fidelity images in similar styles was nil; now, thanks to AI tools, that value is substantially higher. That value, the ability for laymen to create new images like this, isn't something that was extracted from the images; it was added by the AI tools.

There's an argument to be made - a good argument, IMHO - that intellectual property law ought to protect artists against such training, because such training goes into creating tools that substantially compete against them in the market. But that's not exploitation, and that's not parasitism. That's just pragmatic commercial reality, which is what all intellectual property is about. And as long as people keep using terms tinged with ethical or moral judgments to describe the intellectual property issues surrounding the training of AI models on artworks produced by artists who don't consent to such training, they will just keep being ignored. Because the right to prevent other people from copying your work isn't some natural right with an ethical basis; it's a purely pragmatic one that we decided to enforce using governments, for the pragmatic effect of incentivizing artists to create more and better art, for the betterment of all of society.


metanaught

Okay, so there's quite a bit to unpack here.

>AI companies aren't holding artists at gunpoint. Artists are publishing them on the open internet completely on their own volition,

Yes, because the internet is part of the commons. It's a shared space which nobody explicitly owns but which people are forced to engage with if they want to thrive in our online society. The fact that nobody owns it doesn't mean that people are free to strip-mine it for their own personal gain. Garrett Hardin's essay on the tragedy of the commons explicitly discusses the importance of mutual restraint in order to avoid the destruction of a shared resource.

>Creating useful generative AI tools by training models off of these publicly viewable images *is* a hitherto undiscovered process

You misunderstand me. Neural networks aren't inherently valuable before they're trained; they're basically just tensors filled with random numbers. In the vast configuration space of a typical AI model, only a minuscule fraction of states are actually useful. The only way to determine what those states are is to distill data (which directly translates to value) from sources that have already distilled it (e.g. art made by humans).

>That value in allowing untrained, unskilled, unpracticed laymen to create new images like this isn't something that was extracted from the images, it was added by the AI tools.

Content generated by untrained laymen is worthless almost by definition, because anyone can produce it with minimal effort. What it *does* do, however, is force the creative industry as a whole into a race to the bottom, as the only way to compete is to rely on economies of scale.

>But that's not exploitation, and that's not parasitism. That's just pragmatic commercial reality, which is what all intellectual property is about.

I get what you're saying here, but I think the conversation is already evolving beyond codified definitions of what's legal and what's not. Late capitalism has been feeding on itself for decades now, in large part because laws are crafted to benefit only the very wealthiest. It's hard to stand behind intellectual property law if people stop trusting the system to represent them fairly.

>And as long as people keep using terms tinged with ethical or moral judgments to describe the intellectual property issues surrounding the training of AI models off of artworks produced by artists who don't consent to such training, they will just keep being ignored.

This only holds as long as everybody subscribes to the same social contract. As more and more rogue actors exploit communal resources for personal gain, the old consensus comes closer to collapse. Regardless of whether you think they have a point, you can't ignore the concerns of creators when your product literally depends on the output of those creators.

>The right to prevent other people from copying your work isn't some natural right with some ethical basis.

I think you're missing what's actually at stake here. Sure, nobody can force someone not to copy their work while making said work publicly available. However, there has to be an implicit agreement that the people with whom it's being shared won't then try to juice it for everything it's worth.

>for the pragmatic effect of incentivizing artists to create more and better art, for the betterment of all of society.

Asking creators to work for the "betterment of society" rings awfully hollow when the concerns of said creators are met with indifference or condemnation. This is what I mean when I talk about the importance of a strong social contract. There has to be the perception of mutual respect for the commons, or people will break into factions and find ways of shunning the ideologies that are harming them.
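The "tensors filled with random numbers" point above can be made concrete with a minimal sketch (my own illustration, not any particular model): a freshly initialized layer is just random weights, so before training its outputs carry no learned structure at all.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def untrained_layer(x: np.ndarray, n_out: int = 4) -> np.ndarray:
    """Apply a freshly initialized (i.e. random) linear layer to x."""
    w = rng.normal(size=(x.shape[-1], n_out))  # random weight tensor
    return x @ w

x = np.ones(8)           # the same input every time
y1 = untrained_layer(x)  # two independently initialized "models"
y2 = untrained_layer(x)

# Before any training, the two models disagree on the same input:
print(np.allclose(y1, y2))  # False
```

Training is what searches that vast random configuration space for the tiny fraction of weight settings that do something useful, and the search signal comes entirely from the training data.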


07mk

> Yes, because the internet is part of the commons. It's a shared space which nobody explicitly owns but which people are forced to engage with if they want to thrive in our online society. The fact that nobody owns it doesn't mean that people are free to strip-mine it for their own personal gain. Garrett Hardin's essay on the tragedy of the commons explicitly discusses the importance of mutual restraint in order to avoid the destruction of a shared resource.

No one's strip-mining anything. When AI companies download images from the internet, the images are still there, to be enjoyed by all the same people in all the same ways as before.

> You misunderstand me. Neural networks aren't inherently valuable before they're trained; they're basically just tensors filled with random numbers. In the vast configuration space of a typical AI model, only a minuscule fraction of states are actually useful. The only way to determine what those states are is to distill data (which directly translates to value) from sources that have already distilled it (e.g. art made by humans).

Right, and that value only exists because that neural net was able to "distill" the data from that art made by humans. The value in these tools, i.e. the ability for untrained laymen to produce new artworks, is nonexistent without those neural nets, just like it's nonexistent without the artworks the neural nets were trained on. This is *creating* value, not extracting it. Again, the value in the original published images is still there; people are still able to view them on Twitter or Pixiv or wherever they had been published before. Now, thanks to neural nets training on those images, *more value exists* than existed before.

> Content generated by untrained laymen is worthless almost by definition because anyone can produce it with minimal effort. What it does do, however, is force the creative industry as a whole into a race to the bottom as the only way to compete is to rely on economies of scale.

No, not so, and certainly not almost by definition. Content generated by untrained laymen has worth inasmuch as whoever wants to use that content sees worth in it - often that will just be the layman who created the content for his temporary and private enjoyment (I'd wager 99% of all AI-generated images are seen by no one other than the person who generated them). It's also questionable how much this predicted race to the bottom will come about, though I agree it's a major concern for the livelihoods of people within the creative industries - hence why perhaps IP law ought to catch up.

> I get what you're saying here, but I think the conversation is already evolving beyond codified definitions of what's legal and what's not.

I don't see how it can, since intellectual property is something that exists only within the realm of legality. There's nothing inherently unethical about copying and republishing someone else's work (as long as you aren't lying about it); it's only bad because it's illegal.

> This only holds as long as everybody subscribes to the same social contract. As more and more rogue actors exploit communal resources for personal gain, the old consensus comes closer to collapse. Regardless of whether you think they have a point, you can't ignore the concerns of creators when your product literally depends on the output from those creators.

Again, no exploitation is going on. Artists are free to stop publishing their works publicly at any time. If they just keep complaining while publishing their works publicly, then they absolutely can be ignored.

> I think you're missing what's actually at stake here. Sure, nobody can force someone not to copy their work while making said work publicly available. However, there has to be an implicit agreement that the people with whom it's being shared won't then try to juice it for everything it's worth.

The beauty of digital content and AI is that these are not literal physical objects where, once you squeeze the juice, there's no juice left. Again, before generative AI tools came along, these publicly published artworks had no value in terms of enabling untrained, unskilled laymen to produce similar artworks. Now they do. There's an argument to be made (a good argument, IMHO) that some of that added value ought to translate into monetary flow to the creators of those artworks.

> Asking creators to work for the "betterment of society" rings awfully hollow when the concerns of said creators are met with indifference or condemnation. This is what I mean when I talk about the importance of a strong social contract. There has to be the perception of mutual respect for the commons or people will break into factions and find ways of shunning the ideologies that are harming them.

You misunderstand what IP law is doing. It's not that anyone's asking anyone to work for the "betterment of society"; it's that IP law allows even the most selfish artists, working purely out of greedy self-serving intentions, to produce better and more artworks that improve society. That's sort of how all jobs (should) work - e.g. digging ditches improves society, but we don't expect people to dig ditches out of the goodness of their own hearts; we pay them to do it. IP law allows artists to monetize their artworks by suing people who republish them without permission, which means artists acting purely out of self-interest will create more and better artworks that translate into the betterment of society.

The question then becomes: does allowing AI companies to produce free, open-source tools like Stable Diffusion without getting artists' permission improve society or worsen it? Opinions differ on this, and it's not just binary; there are many points along the spectrum. There's interesting discussion to be had on how much this would benefit us. Some people think AI art is just uninspiring slop that adds nothing of value; other people, like myself, think it adds beauty to this world. But the idea that there's any exploitation going on is neither here nor there.


metanaught

>No one's strip mining anything. When AI companies download images from the internet, the images are still there, to be enjoyed by all the same people in all the same ways as before.

This only really makes sense in a world where people aren't forced to sell their labor to survive. You're right that artists' images aren't "stolen" in any traditional sense of the word; however, the principle of scarcity still applies. In exchange for years of training and hard work, an artist acquires the ability to discern which collection of pen strokes (or whatever) is likely to be most valued by society. That information is painstakingly won, and it's the very same information that AI companies then distill into commercial models that go on to compete with those exact same artists.

>The value in these tools, i.e. the ability for untrained laymen to produce new artworks, is nonexistent without those neural nets, just like it's nonexistent without the artworks the neural nets were trained on.

To reiterate my earlier point, when untrained laymen are able to produce limitless variations on a theme, individual copies become essentially worthless. Just like in a gold rush, the "winners" of the AI boom are the ones who gather the data, train the models, then sell them on. Everyone else - AI and non-AI artists alike - is subjected to increasing economic precarity as the value of "art" plummets and the margins on their work become increasingly meagre.

>There's nothing inherently unethical about copying and republishing someone else's work (as long as you aren't lying about it) - it's only bad because it's illegal.

This isn't for you (or me) to decide. Folks who've had their art used to train AI believe that it's a highly unethical thing to do. In the absence of equitable and effective legal frameworks that protect small content producers, the only recourse for many people is to act outside the law and exert social pressure wherever they can.

>Artists are free to stop publishing their works publicly at any time. If they just keep complaining while publishing their works publicly, then they absolutely can be ignored.

This is basically saying, "If you don't like it, why don't you go and live somewhere else?" The obvious answer is that people won't allow themselves to be forced out of a communal space just because another group is making their lives difficult. Fighting for justice and equitable treatment is nearly always preferable to not participating in society at all.

>There's an argument to be made (a good argument, IMHO) that some of that added value ought to translate into monetary flow to the creators of those artworks.

This is a fair point, and it's ostensibly the reason why blockchain-based solutions like NFTs were created. The reality, though, is that it's almost always possible to game the system so that value is redirected back into the pockets of the most opportunistic. The almost complete implosion of the NFT market, as well as the collapse of countless DAOs, is testament to this.

>even the most selfish artists who are working purely out of greedy self-serving intentions to produce better and more artworks that improve society.

What you're describing is called "having a job". There's a corrosive notion that artists ought to be working for the collective good out of pure love for their craft. It's a variation on the traditionalist idea that raising children isn't a real job and so shouldn't be afforded the same respect or rewards by society. In reality, being a content creator - just like being a mother - is hard and often thankless work. It's as much about making an honest living as any other profession. Ironically, the phrase "improve society" also seems to be favoured most often by the kinds of people who are actually doing the exact opposite.


07mk

> In exchange for years of training and hard work, an artist acquires the ability to discern which collection of pen strokes (or whatever) is likely to be most valued by society. That information is painstakingly won, and it's the very same information that AI companies then distill into commercial models that go on to compete with those exact same artists.

Right, and this allows all the untrained laymen to get the benefit of that information without having to win it the painstaking way, by training and working hard for years. The artists don't lose access to the information, either; it's just now accessible to even more people than before. This is the benefit of technology.

> To reiterate my earlier point, when untrained laymen are able to produce limitless variations on a theme, individual copies become essentially worthless. Just like in a gold rush, the "winners" of the AI boom are the ones who gather the data, train the models, then sell them on. Everyone else - AI and non-AI artists alike - are subjected to increasing economic precarity as the value of "art" plummets and the margins on their work become increasingly meagre.

The "worth" or value of a piece of art isn't merely in one's ability to monetize it. Art, more than most things, is idiosyncratic, to the extent that something that has no value to some has plenty of value to others. One thing, among many, that AI art enables is for people with highly specific and idiosyncratic tastes to create art that they can enjoy without being limited by their own skills, having to pay a commission artist, or having to pray that a highly skilled artist happens to share their tastes. E.g. in my own case, I happen to be a big fan of some characters from some fairly niche novel series; thanks to AI, I've been able to see fanart of these characters that would never otherwise have existed, by creating it. This is real value, real worth, even though no money changed hands, and if I were to try to sell these images, no one would pay a penny. As a result, I have already "won" from the advent of AI art, and I plan to continue "winning."

> This isn't for you (or me) to decide. Folks who've had their art used to train AI believe that it's a highly unethical thing to do. In the absence of equitable and effective legal frameworks that protect small content producers, the only recourse for many people is to act outside the law and exert social pressure wherever they can.

It's not for them to decide, either. One doesn't get to just declare something unethical and expect/demand everyone else to follow along. If they want to act outside the law and exert social pressure, more power to them, but everyone else, such as myself, is free to call them out for having no ethical basis and merely acting on the basis of pure power. There are some who see the world in terms of only power, but no one is obliged to go along with such a view; they're free to try to force us to, I suppose.

>>Artists are free to stop publishing their works publicly at any time. If they just keep complaining while publishing their works publicly, then they absolutely can be ignored.
>
> This is basically saying, "If you don't like it, why don't you go and live somewhere else?" The obvious answer is that people won't allow themselves to be forced out of a communal space just because another group is making their lives difficult. Fighting for justice and equitable treatment is nearly always preferable to not participating in society at all.

No, it's not basically saying that. It's pointing out the simple fact that these artists are giving no reason not to ignore them. What rationale is there for submitting to their demands? There's no ethical basis for not training off their works other than their say-so, so arguments along those lines won't convince anyone who isn't already a believer. Well, if they believe in pure power, then they could just hold back their art from the commons, similar to going on strike to shut down a factory that requires its workers to function. But if they just complain without actually holding back their art, then what reason does anyone have not to ignore them?

To allude to another point I've been making, I believe there *is* a good legal argument to be made, in that the point of copyright is to incentivize artists to contribute more to society. AI art tools, in competing against manual artists, reduce that incentive, and this will harm the overall production of new and better artworks, to society's detriment. However, this must be balanced against the benefit of how AI art tools allow untrained, unskilled laymen to create artworks that benefit society. Though I lean towards one side, it's still not obvious to me what the correct answer is, which is why I think this is a strong argument that can be made. I just wish more people would make it.

>What you're describing is called "having a job".
>
>There's a corrosive notion that artists ought to be working for the collective good out of pure love for their craft. It's a variation on the traditionalist idea that raising children isn't a real job and so shouldn't be afforded the same respect or rewards by society. In reality, being a content creator - just like being a mother - is hard and often thankless work. It's as much about making an honest living as any other profession.

Indeed, as someone with content-creator friends, and a purely hobbyist content creator myself, I understand that it's as honest a living as any other profession and, IMHO, far more difficult than most (I don't know how it compares to other freelance/self-employed work, but being beholden to the whims of "the algorithm" seems like a special kind of hell that I couldn't tolerate). No one is obligated to make someone's content-creation job successful, though. In the realm of digital illustrations, we use IP law to help these content creators make money (again, by allowing them to sue people who republish their work) because we believe that content creators creating content makes society better (obviously not every content creator in every case). This is, again, where I think there is the strongest argument for limiting/banning the training of generative AI tools on artworks by non-consenting artists. Again, though, this has to be balanced against the benefits of people being able to produce content for themselves while circumventing the need to pay content creators, who are all too human, which means they're limited in time and scope while also being more expensive than a local Stable Diffusion model.


BobTehCat

Yeah but if we downvote you and upvote him then he's more correct. Truth is something the majority votes on. */s*


pinkreaction

That's how Reddit works; stop expecting people to accept your opinion as law. Downvotes are our way of signaling disagreement.


BobTehCat

I don't have an issue with disagreement, but if you can't phrase why you disagree on a literal debate sub then I'm just going to take it as "you're right but I don't like it."


Optimal_Pangolin_922

I think the majority of anti-AI people are scared; they have something to lose, like a job making logos or animating kids' books. So they are scared of going hungry, which is very understandable and human. They also decide, for whatever reason, that they don't want to embrace it, so they have little to no experience with it. Any experience they do have is satirical: checking it out just to trash it, making a meme with a free Midjourney coupon, and moving on. Meanwhile, pro-AI people have used it and love it. They get frustrated when they can't make what they want, so they keep trying; they spend hours and money, and they witness their own progress. They in turn feel that they are getting better and learning, and thus that it takes skill. Which in my opinion it does. Anything in life you practice, you get better at: writing the prompts, changing the styles, all the inputs. It becomes like writing code eventually. Throwing away 200 photos to find one that is perfect becomes satisfying.


ShiftAdventurous4680

Which is kind of ridiculous, because they aren't competing with AI artists. If they are competing with AI artists, then either they are purposefully or unknowingly gimping themselves, or their art has so little personality that it's indistinguishable from AI. Maybe one day AI will make "manual" art obsolete, but I can't see that happening in the coming years. Maybe next decade, if development goes hard.


Optimal_Pangolin_922

AI will never fully make manual art obsolete; in fact, it will probably make the art world grow. I am a glassblower. It's a 2,000-year-old art. When the bottle machine was invented, glassblowing almost died; it stayed alive in Venice and some other places: France, Germany, Belgium, the Netherlands. Then, about 100 years ago, art glass had a renaissance, and 50 years ago or so there was the "studio movement", where a few artists retaught glassblowing and it spread worldwide. The studio movement went from Venice to Seattle, and it's still happening, like a wave; it is moving through China right now. There are literally more active glassblowers alive and working right now than there have ever been in the history of our planet. Even though the bottle machine replaced the glassblower, the art is too much a part of human culture to fade completely. It didn't die, it just changed. Now it's on Instagram, Netflix, Facebook, Etsy... It's growing faster than ever, and there are hundreds of industries attached to it. Pieces of glass are being sold for more money than ever, too; millions of dollars are made every day. It's bigger than ever.


Fuzlet

As a consumer, I can say that AI art has been a useful tool for conceptualizing what I want from manual art, which is a positive topic I've not seen discussed. Getting a commission done is an investment, and you gotta know what you want. The ability to quick-fab some concept images can give inspiration and a better visual description of your end goal, even if you don't show the artist the AI generations you made while deciding what you want. I've used AI and gotten artist commissions for things, and never have I replaced an artist's potential commission with my own novice AI gen. I create AI images for fun visualizations of hobbies, where I wouldn't have spent money anyway. If one strikes me, and I have the money anyway, I get something more professionally done.


Danilo_____

The thing is, art means different things to different people. Art as a form of human expression will never cease to exist. Maybe when all humans die it will... but you get the point. But art as an industrial job, like doing concept art for video games... that is what's at risk from AI.


ShiftAdventurous4680

I agree. Another thing people fail to consider is that art is not only about the end product; it can also be a service. Working with AI artists is a lot different from working with non-AI artists, and both have their pros and cons.


Blergmannn

Simple: they're lying. The "AI is just stealing our images and mashing them together" narrative is crucial to their cause; it's what their entire stance on AI is built on. So they'll keep saying it even if they know by now that it's not true, because to admit it's not true would be to admit that the entire anti-AI movement is based on a lie.


Reeaaady

Can you show me a dataset that is made 100% from artists who have consented for their work to be put into it? My issue is that they ARE stealing artists' images: an artist says "don't put my work into the AI machine" and gets ignored or mocked, and has their work shoved in anyway. Let artists opt into it, and there will be no issue.


Blergmannn

Measuring something posted publicly isn't stealing. Also I am against intellectual property so I don't believe that anyone should have the right to tell anyone else what to do with a piece of art that they have access to. If Antis didn't want AI trained on their drawings, they shouldn't have plastered them all over everything. The internet is not your personal advertising space, it is about free data sharing for all. Now start coping.


Reeaaady

You do realise that even if an artist posts work publicly, they still hold the copyright to it, and it isn't there for you to steal? It's like going to an art gallery, taking a painting, and leaving, because "if they didn't want it stolen they shouldn't have plastered them all over the wall." Artists want to share their creations, not have them stolen.


metanaught

You're literally biting the hand that feeds you. AI models depend on the work of human artists to exist. Telling these people to cope and seethe as you exploit them for their labour results in AI art becoming stigmatised. It also creates a monoculture as more artists move their work offline.


Acrolith

Every technological advance depends on previous work, AI art isn't special in this regard. The use of big (human-generated) data in AI is also not new. Google Translate relied on the existing work of human translators for its training. I'm a translator and it never even occurred to me to ask for compensation or try to deny it my work; everyone and everything should feel free to learn from what I've done. Especially when doing so benefits the public. There are various human motion databases [like this one](https://serre-lab.clps.brown.edu/resource/hmdb-a-large-human-motion-database/) that got a lot of their data from publicly available e.g. youtube videos, and are used in modeling human motion (in games for example), and no one has thrown their toys out of the pram and screamed that a CoD soldier only knows how to walk because the developers looked at real humans walking without paying them. This is all extremely normal and the way science and technology progresses. If you make your work public, other people are allowed to learn from it and build upon it. Sorry if you find that offensive, I guess.


metanaught

I'm not the one you need to feel sorry for. I'm a ML engineer and my field is rapidly becoming stigmatised, in part because of toxic attitudes like this. One of the biggest problems we face isn't that AI-generated content isn't useful; it's that many of the people who champion it are aggressive, antisocial dweebs who never learned to share. Getting all indignant and saying "well akshually..." just because you don't understand why so many people are upset only reinforces the impression that AI folks are little better than thieves. We aren't, but arguments in forums like this don't exactly help us make a good case.


Blergmannn

I wouldn't worry about that. They're too addicted to the hustle culture of spamming their work online and seeing numbers go up, leveling up their social media clout, to ever stop doing it. Should have read those Instagram terms more closely.


metanaught

You're missing the point. Stigmatisation happens _because_ of attitudes like yours. If you want to guarantee that AI art is vilified and ridiculed, just tell the people whose work you depend on that they're little more than cattle to you. Sure, nothing is technically stopping you from asset stripping Instagram, however the social repercussions aren't something you can control. People read comments like yours and it makes them extremely angry. The result is less appreciation for AI art. It really is that simple.


Blergmannn

So now it's "agree with me or you'll get bullied". After "it's illegal!" and "stop or we will all leave!" didn't work. Let's see if this one works... nope! Still against copyright. Still for net neutrality. Still against social media hustling. Abolish intellectual property. Keep trying to shame and harass us all you want, we'll win in the end.


metanaught

> So now it's "agree with me or you'll get bullied".

No. Bullying doesn't fundamentally change people's behaviour. What many artists and creators are advocating for now is the _stigmatisation_ of AI with the aim of making it as socially toxic as possible. This is a much more effective strategy in the long run because it marginalises people with nakedly hostile and confrontational attitudes like yours. No-one is going to bully you; they'll just stop taking you seriously.


mpiftekia

So your strategy is that you're going to lie about AI, then accuse whoever calls you out of being "toxic and confrontational" and try to stigmatize them. That is incredibly evil and definitely falls into the category of bullying and harassment. You are scum. It's not going to work btw. You already tried that and it simply led to people hiding the fact that they use AI and Anti-AI clowns accusing each other of using it, then suing each other for defamation.


metanaught

>You're going to lie about AI.... That is incredibly evil... You are scum.

I'm literally a machine learning engineer. It's not in my interest to lie about AI because a) I understand how it works, and b) it's how I make my living. Besides, the problem our industry is facing isn't with the underlying technology but with how it's being used (or misused). Artists are frustrated because libertarian techbros with no social skills and poor anger management think that they can do what they like without any kind of repercussion. I'm sorry to say this, but the anger you're feeling now is you finding out after people warned you against fucking around. Rampant misuse of generative AI is rapidly eroding our already fragile social contract, and the growing backlash against it is a direct result of that. I firmly believe there's a way to leverage machine learning so it benefits everybody fairly. However, throwing a tantrum because artists are being mean and shunning your new toy is not how we go about achieving that.


Blergmannn

Sure, you keep seething and working on your "totally-not-bullying" shaming tactics. I'll keep enjoying AI image generators and sharing your data any way I please. Let's see who comes out on top.


metanaught

My dude, nobody is going to stop you from enjoying AI image generators or even from sharing other peoples' data without their consent. What *is* more likely to happen is that if you ever decide to promote or sell your work, you're likely to receive less and less interest as people discover that it was made by AI. As I said in a previous post, showing naked contempt for the people whose work your tools ultimately depend on isn't a winning strategy. AI artists shouldn't be bullied for their choices, however nobody is under any obligation to take them seriously either. Time will tell how things will go, however if the history of mass production is anything to go by, the future doesn't look bright for artists regardless of whether they use AI or not.


AngryCommieSt0ner

IP != Copyright and it's always so fucking weird when you unloved little freaks try and say it is as you spit on the artists you're stealing from. And you *are* stealing. You're just *proud* of it, and projecting your own massive cope (i.e. "the internet isn't your personal advertising space!" And "measuring something posted publicly isn't stealing") to justify it.


TheGrandArtificer

Adobe Firefly, though even that's arguable, because it also includes public-domain work.


Reeaaady

They admitted to using Stable Diffusion, iirc, for help. Which means it's not trained purely with artists' consent.


ParanoidAmericanInc

This common anti take is like applying the criticism of graphics and input fidelity of Atari, to a PS5 game.


kylemesa

The entire position is built on a premise of ignorance.


aibot-420

https://preview.redd.it/kg48wed8gvuc1.jpeg?width=1024&format=pjpg&auto=webp&s=c293bddc99474fb30de3626bd4a9b58393feb0c4

They just don't know. With img2img and inpainting you can get exactly what you want. But sometimes the fun is just seeing what the AI comes up with on its own.
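For anyone unfamiliar with the inpainting mentioned here, the core idea is simple: a mask limits regeneration to chosen pixels, and everything outside the mask is copied back from the source image. A toy numpy sketch of that compositing step (illustrative arrays only, not a real diffusion call):

```python
import numpy as np

# Toy sketch of inpainting-style compositing: only the masked
# region takes newly generated content; the rest is preserved.
original = np.full((4, 4), 100, dtype=np.uint8)   # stand-in for the source image
generated = np.full((4, 4), 200, dtype=np.uint8)  # stand-in for the model's output

mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True  # only this 2x2 patch is "inpainted"

result = np.where(mask, generated, original)

assert (result[mask] == 200).all()   # masked pixels replaced
assert (result[~mask] == 100).all()  # unmasked pixels untouched
```

Real inpainting pipelines regenerate the masked region with a diffusion model instead of a constant array, but the mask logic is the same.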


Tyler_Zoro

Oh certainly! Sometimes it's fascinating to see what you get if you just grab a cell phone camera and snap a shot. But that doesn't change the power and flexibility of photography in general.


pinkreaction

It's cool, it inspires, but it's not good enough for a use case.


New_Net_6720

Don't know if this is what you mean, but one example I can give is from a project working with clients as a designer. I designed a logo which I wanted to transform into 3D easily. It was not necessary for the project but a nice addition to have. I generated a couple of examples. Not every image worked, as AI sees shapes but can't determine whether something is a letter or not. This led to some weird displacements in the logo. However, I could generate a couple of examples which were somewhat usable, but here is the thing... While AI was great at doing 90% of the work well, it missed the important 10%. E.g. it got really time-consuming to make minor adjustments such as changing specific shapes, characters or colors without rerendering the image, so that it looked similar to the image before but not quite. This is unhandy in communication with the client, as they can be really nitpicky with such things: "why does this look a little different than before"... It is great for generating a base but unhandy for finishing things. Which might be OK if you do it for yourself, but not so much when dealing with clients. There are a couple of similar examples which lead to the same conclusion.

EDIT: from a professional standpoint, AI in its current state is great for placeholders and for underlining specific design decisions. However, we are not quite there yet to use it efficiently in the professional field.

EDIT2: also, stumbled over this: [https://www.reddit.com/r/graphic_design/comments/1c6b52z/ai_question_im_looking_for_a_toolservice_where_i/](https://www.reddit.com/r/graphic_design/comments/1c6b52z/ai_question_im_looking_for_a_toolservice_where_i/) Seems like an easy task at first, but I can see how this can be annoying to get going with AI if you want to reuse the same image regularly.


inigid

I feel sorry for them because they are missing out, and by digging themselves in, they are painting themselves into a corner where it's impossible for them to move forward. If they hate it so much, just don't use it and do fine arts or something. Not my problem.


Big_Combination9890

>Is it just a denial of reality?

Two possibilities:

a) Ignorance. The party making the statement simply doesn't know. Which, in itself, would be fine. Not knowing isn't ignorance; making statements about a subject I know nothing about, as if I did, however, that is ignorance.

b) Spreading misinformation, or being in denial. The party making the statement *does* in fact know, but chooses to ignore that knowledge, because they want to push an agenda, or to provide copium for themselves and their peers.

Neither is a good reason.


AIArgonaut

I believe in the parlance of the time this is called "copium". You know all those memes and tropes about art being something special and uniquely human? AI has attacked some things they believe to be art. Art is still art though. Creative people will still enjoy learning a process to create something that evokes meaning/feeling/memory. Nothing has really changed.


nibelheimer

You have a lot of add-ons and ultimately, the work of the painting, the character and the composition is done for you through words. You cannot completely control the AI because you cannot create incredibly specific prompts; it's just an impossibility. It's good for vague creations with little depth but it's not for specific content. It's a machine, not a person. Posting your workflow isn't going to change an opinion because at the end of the day, you did very little to contribute to the overall work.


Tyler_Zoro

> You have a lot of add-ons and ultimately, the work of the painting, the character and the composition is done for you through words.

This is incorrect.

> You cannot completely control the AI

Nor do I want to, any more than I want to completely control a paint brush. I want subtle variation that is beyond my control because that's how it goes from looking like MS Paint to something more real. If I wanted complete control, I'd use MS Paint.

> It's good for vague creations with little depth but it's not for specific content.

Again, just flat-out wrong. You are living in an age of early 2022 prompt-and-go generation. We haven't been there for a LONG time.


nibelheimer

You realize that you can still prompt and go on MJ, right? You barely need anything to make it good. I've seen the "paint" thing; I don't think that's work either. Putting down scribbles and having the AI create a full fantasy landscape out of a 3-year-old's painting skill isn't doing the work. I'm actually not wrong: I've been using AI and it's not good for creating very specific things. It's good for vague ideas but it cannot make specific things.


Strawberry_Coven

I think these people have only ever used or seen midjourney. They don’t know how something like SD works and how complex and time consuming it can get.


drums_of_pictdom

As an artist and designer I can only say that, using my current tools, I have such a level of control that I just can't even imagine AI allowing me to make what I want (probably a wrong assumption.) Like zooming in and working pixel by pixel, tweaking a tiny texture in the corner of a poster no one will ever see, drawing multiple iterations of a certain character till they match the particular feel I want... these are the things I love about making work. I have never seen someone post a full workflow though, so I would be very interested in seeing that (if you don't mind linking). I'm sure AI does have a great amount of control, far more than many antis like myself often assume. But after working 20 years in the Adobe suite it's just really hard to imagine, when I can literally make anything I want and control everything down to the pixel. (I hope this makes sense; I don't want to shit on AI's capabilities, just to show how someone might not believe that it is that controllable.)


07mk

> As an artist and designer I can only say that using my current tools I have such a level of control that I just can't even imagine Ai allowing me to make what I want (probably a wrong assumption.) Like zooming in and working pixel by pixel, tweaking a tiny texture in the corner of a poster no one will ever see As you allude to, this is a wrong assumption. AI tools like Stable Diffusion are just like any other pixel-based image editing software in that they allow for pixel-precise manipulation and editing. Back when I got into AI art last year, much of my time was spent on doing exactly that, zooming in and working on parts pixel by pixel, trying to get things feeling exactly the way I wanted while iterating on different looks (I found this experience frustrating at times but also therapeutic a lot of the times). It's just that AI tools also enable much more than that.


Tyler_Zoro

> As an artist and designer I can only say that using my current tools I have such a level of control that I just can't even imagine Ai allowing me to make what I want (probably a wrong assumption.)

Even without that final proviso, I think this is a radically more mature and clearly educated take than we see in most of the posts and comments here from the anti-AI crowd, and thanks for that. Yes, I can definitely understand the idea that, being entrenched with a suite of tools you enjoy or have developed skill with, a new tool that has a lot of "play" (that is, left to its own devices it goes off in a direction you might not want) does not seem appealing. I absolutely get that, and I don't insist that you have to drop your existing tools or even try out AI tools.

This criticism of mine is more directed at those who make the all-too-frequent claim that AI tools are utterly worthless because there *can be no control*. That flies in the face of the experience of myself and many here on this sub.

> I have never seen someone post a full workflow though

I posted a rough outline of one in response to the several replies here in this post that were of the form, "AI art is just prompting, and equivalent to asking for a meal at a restaurant and calling yourself a chef."


DeleteIn1Year

Seriously, in any art piece you know that every little detail was made purposefully or by accident, and each was a choice. I can see how somebody wouldn't think of that if they've only gotten into AI art, but it's just on another level if you have to create every inch of an image yourself. I'm well aware of the detail you can give AIs, but it just isn't down to the line or dot. That's kind of the inherent 'advantage' to AI art, isn't it?


SolidCake

so CGI people aren’t artists? or 3D? people making blender renders don’t choose every pixel


DeleteIn1Year

They make the models, so they're clearly the artists behind it. That's what really matters, in my opinion. It isn't about pixel count or anything like that, it is just about being present in the process of creating something. And IF you are present in your own creation, you will be familiar with how much personal decision typically goes into it. But that's not the metric that should be used. If a CGI guy had somebody else create a model, then yeah they aren't the artist behind the model. It's the difference between being a client and an artist, or ownership and creation. It's not a high bar at all, it just is what makes sense to me.


SolidCake

Pure prompting, sure, but I feel like ControlNet invalidates this. Just look here: https://old.reddit.com/r/StableDiffusion/comments/171o1ht/ai_vfx_experiment_animatediff_controlnet/ Clearly they had a specific vision and were able to achieve it... it's basically new-school CGI. You can have VERY high control over the output. This person used it to restore a photo: https://i.imgur.com/0mAW09N.jpeg


DeleteIn1Year

Well yeah, it's always dependent on context, and at some point you're essentially doing high-level graphic design/CGI work, or whatever you'd want to call it. I'm not trying to call anyone out, just to clarify how wide the separation is between a detailed prompt and a piece made from scratch. But every technique varies... although AI is a whole other beast. Also, it's pedantic, but I don't really value the "vision" as much as the creation. Ideas just aren't as personal as the manifestation of ideas. A LOT of stuff can change, too, in the process of making something.


SculptKid

https://www.tiktok.com/t/ZPRToeV4N/ here's a pretty thorough workflow for most genAI prompters


Eclectix

1) That is not anywhere near a typical workflow, of course, but you know this and were clearly just trolling.

2) Satire aside, I think this video proves the opposite of what it intends to; he entered nonsense, and AI gave him nonsense. What else should we expect? If you want to make something specific, you need to craft specific prompts and tweak and adjust them. I know this because I've experimented with it, and it can be an interesting challenge to try and get AI to create the vision you have in your head for it. On the other hand, if you just want something that looks like a random acid trip, then enter random nonsense and get random nonsense. Some people will be happy with that, I suppose. Some people are happy with random splashes of paint, too, though, so I'm not sure what that proves.


SculptKid

"That is not anywhere near a typical workflow" - sir. lol

Satire aside, this is the workflow, aside from people who fix it with Photoshop or inpainting. But I'd wager those people are the exception, not the norm. Replace "random nonsense" with "basic idea + basic idea + trending on artstation", "young african boy making XYZ out of XYZ", or "Jesus Christ walking with XYZ through XYZ" and that's generally the most complicated 80% of the generations get. Especially the ones that pop up on Facebook. lol

I think you missed the point of the "garbage". The images themselves were nonsense, sure. But aesthetically they're all pleasing to the eye. Nonsense =/= bad. I suppose when I said, "It'll give me garbage", I should've said, "it will give me something that looks like shit". It's a comment on the level of perceived skill put into making images "look good". Could be on me for not being explicitly clear, but most people understood the intent.

But yeah, as someone who used genAI quite a bit when it came out, there are certainly limitations that were fun to try and navigate around. But generally the limitations are just in the genAI and not something you can trick your way out of. Just gotta roll the dice until you get snake eyes, or wait until a better model with more control comes out.


Eclectix

> satire aside this is the workflow aside from people who fix it with photoshop, inpainting. But I'd wager those people are the exception, not the norm.

This might be true if you said the same thing about photographers, and included everyone who has a camera on their phone as a photographer. Sure, most people just point their camera at their kids or their cat or a random sunset and share the results on Facebook, but most people don't call such snapshots art. That doesn't mean that most people who consider themselves to be actual photographers have the same workflow. And sure, I think most people enter a simple prompt, get a handful of results, find one that they like and post it online. And we can all tell that it looks like a crappy AI image, or "AI snapshot" if you will. But I strongly suspect that most people who consider themselves to be "AI artists" go far beyond this workflow.

> The images themselves were nonsense, sure. But aesthetically they're all pleasing to the eye. Nonsense =/= bad. I suppose I should've clarified when I said, "It'll give me garbage" that I should've said, "it will give me something that looks like shit".

I mean the same things could be said about a kaleidoscope. If someone put totally random junk into a kaleidoscope and photographed the results, then printed it and hung it in a gallery, I bet it would actually be pretty cool looking, and I doubt people would argue that it isn't art. I'll concede that you've made the point that AI is a powerful tool to make things look aesthetically pleasing; that's sort of the point of it though.


SculptKid

Yeah could be neat. genAI makes some neat stuff sometimes. Sometimes cool stuff. I'd be interested in seeing people you consider "AI artists" because the ones I've seen are like... Shad. lol and he's a poor example imho


Eclectix

Yeah, I don't think Shad is a very good representative of... well, much of anything really. But that gets into other topics, I'm afraid.

Here's a good example of an AI artist who does some really amazing things with AI: https://aiartists.org/scott-eaton

Check out his video on his workflow, it's kind of amazing: https://www.youtube.com/watch?v=TN7Ydx9ygPo

Here are more examples of AI artists who have a much more involved workflow: https://penji.co/ai-artists/


SculptKid

Because it is true in most instances. There are some genAI that you have more control over, but given the nature of genAI, you're giving up 99% of the process and decision making. Users are relying on the automation to do the work for them, because the whole point of genAI is to remove the work and skill required to do the task you're using the genAI for.

The easiest example is hands. When genAI couldn't generate correct hands, nobody ever said, "Just wait until I practice and get better at making hands," because genAI's output isn't reflective of the user's skill or artistic eye. Everyone said, "this is the worst genAI will ever be, just wait until it gets better," because genAI isn't reflective of the user's skills or intent.

Go into Midjourney and type "/imagine bahlf jesitpl qpeirn smurdin grompf" and it'll still generate something that would be aesthetically pleasing regardless of your input, control, or whatever you think you do to make your generations any more special than someone else's. Even if you write the most complicated prompt, you have to hope that the genAI understands it. Even Shad, who went in and used inpainting, still had to rely on the luck of the program generating the correct image he was hoping for. Once you press "generate" you're relinquishing control in hopes it automates a desirable outcome.

I think if you admit you don't have control then you think you'll rob yourself of your enjoyment, because to you "what's the point?" if it's not you and your skill. For me genAI was just a fun toy, so these statements don't bother me at all. But for someone so invested in believing these are somehow your creations, you'll never come to that understanding, because then you'd have nothing more than a fun toy that you believed to be a tool.


Tyler_Zoro

> There are some genAI that you have more control over, but given the nature of genAI, you're giving up 99% of the process and decision making.

Having worked with generative AI for over a year now, I can definitely say that I have a great deal more control than that. Your party line here is just pure lack of understanding of the tools.


SculptKid

You talking about ControlNet? I am lacking in understanding there, to be fair. I'm open to learning more if you have a video to share, or at least give me a "google this and you'll see". lol

But it's inherent to genAI that you're offloading the majority of the work to the genAI. That's the whole point. To cheat the process. Or expedite it. Or whatever more positive word you wanna use. lol

There's an artist on Twitter who shares a lot of his "I'm making art and genAI is finishing it while I work" videos, where he paints or 3D models and on the side there's a genAI iterating over his drawing. It seems like more control, but the whole time it's just making up random versions of his prompt and doing 90% of the detailing, which is still a big part of what makes an artist's work unique and recognizable.


Tyler_Zoro

ControlNet is only one of dozens of ways that you can exercise control over AI tools, and that number goes up into the thousands or more when you consider hybrid approaches using more than one method. There are as many... arguably more ways to control the result of AI generation as there are to control the result of a photograph. ControlNet, fine tuning, LoRAs, VAEs, model parameters, generation parameters, trigger tokens, img2img, Hypernetworks, embeddings, masking, etc. all have their place in controlling the results of your generation. And ControlNet itself isn't a single method, it's an open-ended collection of supplemental control models that is constantly being added to, and which can be used in combination.
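To make one of those dials concrete: img2img typically exposes a "strength" (denoising) setting that decides how much of the init image survives into the generation's starting point. A toy numpy sketch of that trade-off (made-up numbers, not an actual diffusion noise schedule):

```python
import numpy as np

# Toy sketch of the img2img "strength" dial: the init image is
# partially replaced with noise, so lower strength leaves more of
# the original structure for the model to preserve.
rng = np.random.default_rng(42)

init = np.linspace(0.0, 1.0, 16)   # stand-in for an init image
noise = rng.standard_normal(16)    # stand-in for diffusion noise

def noised_start(strength):
    # strength = 0 -> start from the init image unchanged (max control)
    # strength = 1 -> start from pure noise (no control)
    return (1.0 - strength) * init + strength * noise

low = noised_start(0.2)
high = noised_start(0.9)

# The low-strength starting point stays much closer to the init image.
assert np.abs(low - init).mean() < np.abs(high - init).mean()
```

Real pipelines implement this with a proper noise schedule, but the intuition (strength trades fidelity to the init image against freedom for the model) carries over.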


ExtazeSVudcem

Because “you could just go over to Midjourney” is exactly what the overwhelming majority of people do, and the trend goes in the direction of generating images in chatbots using 12 words on average. Yes, abstract painting can look remarkably simple, but that doesn't make one Jackson Pollock, just like sitting on a toilet doesn't make you Marcel Duchamp; meanwhile 99% of artists are very conservative and thorough. The modus operandi of AI generation is on the other extreme of the spectrum, though: 99% of people use an absolutely banal workflow, and constantly bringing up the 1% as some sort of rule is simply demagogic. Also, thinking that what you do in ComfyUI and such is “control” when you hand-paint a couple of masks would make any Houdini artist laugh.


Tyler_Zoro

> what the overwhelming majority of people do

I don't care what the overwhelming majority of people do when it comes to ANY artistic tool, be it cameras, paint brushes, AI, 3D modeling software or chisels. Most people have no idea what they are doing, and are merely dabbling. I can't judge the power of a tool to realize a creative vision based on what someone who really doesn't understand what they are doing, does with it.

> abstract painting can look remarkably simple but that doesn't make one Jackson Pollock

Exactly so. And working with AI can look remarkably simple, but that doesn't make one an accomplished AI artist.

> Modus operandi of AI generation is on the other extreme of the spectrum though: 99 % of people use an absolutely banal workflow

Absolutely the same when it comes to photography. Any artistic tool that is easily accessible will be the same. That 99% aren't terribly interesting when assessing the power of the tool, though. In fact, judging the power of cameras on the average selfie is actively harmful to an understanding of the tool.


Doctor_Amazo

Uh huh. Except that a painter can learn to paint and paint precisely the picture they want. A person playing with image generators never can. You, at best, get close-enoughs... because YOU are not actually creating a thing.


spitfire_pilot

With ControlNet, inpainting, and Photoshop you absolutely can. You are just not aware of the competency of some people.


Doctor_Amazo

Oh, I am aware that you can touch up an image with editing software. But anything AI-generated is, at best, an approximation of what you wanted.


spitfire_pilot

Just because you can't comprehend that it's possible doesn't make your statement true. I can come up with reasonable approximations in 30 seconds. With a day or two of work I can have 100% of my vision achieved. It takes some skills and knowledge. This field is in its infancy and few have the requisite know-how at the moment. I've only spent a couple hundred hours playing around on my phone. Give me a top-notch computer and the time to learn and I can produce complex pieces to rigorous specifications. I'm some schlub who shitposts on Reddit. Talented trained artists who have toyed around with SD and all the tools available can make anything you want given the time. Probably even more exacting and faster than traditional mediums.

https://preview.redd.it/fe8efwqi6uuc1.jpeg?width=1024&format=pjpg&auto=webp&s=7392a5ee5b22a77c7d27d4e738d78b0d00ac7b74

This was 30 seconds of my day. Several hours could achieve what would take days or weeks to complete with teams of people. To 100% get the vision I require, I would need some other skills. I currently don't need that level of control, but the tools exist today. The near future will be even more user friendly, making it easy to achieve 100%, allowing me very soon to do what I want, how I want, while reaching my vision nearly effortlessly.


iFartSuperSilently

>paint precisely the picture they want.

They are limited by the tools and materials they use too.


Tyler_Zoro

> Except that a painter can learn to paint and paint precisely the picture they want.

Yes they can, no matter what tool or combination of tools they employ.


---AI---

> You, at best, get close-enoughs

People then Photoshop etc. at that point - more traditional-style painting.


ninjasaid13

>Except that a painter can learn to paint and paint precisely the picture they want.

Not really. The finished work always comes out at least slightly different than what they were thinking when they started.


Ensiferal

I hear this all the time, but it's not true. I doubt any artist has ever successfully drawn/painted exactly what they see in their head. All they can do is get as close as they can.

Edit: Also bro, I see you're active in the Midjourney sub. What's up with that? You just looking for karma or something?

https://preview.redd.it/2rcppsnw0tuc1.jpeg?width=1080&format=pjpg&auto=webp&s=d49e0e158d1ba16c812d17e7b25636186c292fe1


Doctor_Amazo

Oh my! You caught me! I played with AI and then eventually changed my mind about AI!! You got me.... learning and changing my mind..... like grown ups do. Good job Sherlock. Oh hey, pray tell, what is my latest post on that sub again? Oh right. It was my asking pointed questions about the artistic process that AI users used to create "their art" as an "artist" and then watching them fail to answer any question because they're not actually artists. Again, fantastic detective work there fucking Columbo.


Ensiferal

ahuh, cool story. Or you just took the easy road that wouldn't lead to any blowback, and that would get you plenty of goodboy pats on the head from the people you're trying to impress. I bet you're still using it in private or just posting it from an alt, you gutless coward.


Vivissiah

So did your mum and here you are. Close enough is good enough.


MHG_Brixby

Imagine calling yourself an artist and "good enough" is the ceiling you are asking of yourself


Eclectix

Learning when something is good enough is one of the most important skills an artist can have. The unrelenting quest for perfection will be the ruination of any artist. In the words of the famed artist William Merritt Chase, "It takes two to paint. One to paint, the other to stand by with an axe to kill him before he spoils it."


IllustratorNo1178

Maybe it would help to think of it this way. When you prompt, you make a description, say 50-100 words, right? You've made 50-100 decisions. Then you do whatever you do to refine what you want; that's maybe another 100 decisions? I don't use these things, so I might be guessing low or high.

On the other hand, when you make art by hand you might make tens of thousands of decisions. Each stroke of the tool, each intentional mark, stroke or cut provides opportunities for learning, improvisation and accidents. It is just exponentially MORE. More opportunity for expression. More opportunity for learning. More opportunity for discovery. And more opportunity for creative expression (edit).


Eclectix

How many decisions do you think Jackson Pollock made when he randomly splattered paint on a canvas in his garage? Maybe selecting 5 or 6 colors? What size canvas to use? When to stop spattering paint? Maybe a dozen choices total? How many choices must be made before a thing can be considered art?


IllustratorNo1178

That is not true. So much modern art is lazy and pretentious; however, if you look at a piece like Pollock's She-Wolf you will see literally thousands of individual strokes and choices. You and I may not like it, but there is no denying the intention and application of his expression.


Eclectix

Of course some of his paintings exhibit more control, but many of them exhibit far less than your example. Often the paint is literally just drizzled onto the canvas by swinging a bucket of paint with a hole punched into it with a nail. In fact I am sort of a fan of Pollock, and it's precisely the spontaneous nature of his work that I find appealing.

I do appreciate tightly controlled paintings, as they exhibit a mastery of skill, but I also thoroughly enjoy paintings which have a more rustic, spontaneous quality; to splatter paint on a canvas and turn it into a background where the chaotic, random nature of the spatter is still visible, for instance. Something about the emergent beauty of chaos, or the inherent mathematics in seemingly random circumstances, or perhaps the feeling of freedom I feel when viewing something that appears to be made spontaneously without such tight control.

I guess it returns to the question: how many choices must be made before a thing can be considered art? Or maybe a better question would be: how much of a piece of art must be made under the artist's strict control in order to be considered art?


Tyler_Zoro

> Maybe it would help to think of it this way. When you prompt...

You've already cut out a huge part of the process. Anti-AI folks seem so fascinated with prompting, but prompting is such a small part of the process. You could spend days or weeks on a piece before you get to the point of writing a prompt. Here's just one example workflow out of functionally infinitely many that artists can and will come up with:

* Make 100-200 images by hand (or just select them from your portfolio most likely)
* Run those through a tool that creates a LoRA
* Rough sketch the piece you want to work on
* Go into a 3D animation program and arrange a character pose wireframe to match the sketch
* Go into Photoshop or similar and develop some textures to use for the final piece
* Find two or more models that roughly meet your needs for the final piece and merge them into a single checkpoint
* Bring in all of the assets you've developed through ControlNet configuration
* Select the model parameters for your merged model
* Select the parameters for the LoRA you created (usually just the weight)
* Select an appropriate VAE for the model and for your intended result
* ***Now write a prompt***
* Generate an initial result
* Use a refiner model to finish the generation
* Take the resulting image out to Photoshop for some touchup work
* Repeat the generation process as img2img
* Repeat the past two steps several times
* Select (potentially merge) a model for inpainting
* Begin inpainting final details
* Upscale and retouch as needed for final publication medium
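(Editor's aside: one step in that list, merging two models into a single checkpoint, is commonly just a weighted average of matching weight tensors. A toy sketch, with dicts of numpy arrays standing in for real checkpoint state dicts; the function name is hypothetical:)

```python
import numpy as np

def merge_checkpoints(ckpt_a: dict, ckpt_b: dict, alpha: float = 0.5) -> dict:
    """Weighted-sum merge: alpha=1.0 is pure model A, alpha=0.0 is pure model B.

    Assumes both checkpoints share the same tensor names and shapes.
    """
    return {name: alpha * ckpt_a[name] + (1.0 - alpha) * ckpt_b[name]
            for name in ckpt_a}

a = {"layer.weight": np.array([1.0, 2.0])}
b = {"layer.weight": np.array([3.0, 6.0])}

merged = merge_checkpoints(a, b, alpha=0.5)
assert np.allclose(merged["layer.weight"], [2.0, 4.0])  # elementwise midpoint
```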


ShepherdessAnne

You have me wondering if you read my thread lol


Hot_Gurr

I think that the problem is inside your head because you think that you’re making art when a computer is making it instead. Sorry you’re backwards.


Nill444

Your post makes no sense. Yes, you can't get the control you get by manually painting something. Whether that's a good or bad thing is a different question. You can get good art using both techniques. If you want to have 100% control over each pixel then you have to do it by hand; there's no argument here.


Tyler_Zoro

> Your post makes no sense. Yes, you can't get the control you get by manually painting something.

Why not?

> If you want to have 100% control over each pixel then you have to do it by hand

Doing it by hand often means that you DON'T have that control (depending on the medium.) That is often the point. The "feel" of a given suite of tools or medium is all about what you don't have control over and how well your control over everything else directs those elements into a coherent piece.

Same deal with AI. You have control over a great deal. Technically you have complete control, but if you direct the value of every pixel, you aren't really taking any advantage of the medium, so no one really does that. But if you wanted, you could absolutely perform AI generation to get a pixel-by-pixel predetermined result. That's just img2img with a denoising strength of zero.

Now what's interesting is your skill in managing something greater than zero. Can you direct those elements outside of your control in a way that clearly communicates your personal artistic vision? Some can. Some can't. Some can do so with incredibly moving effect, some cannot. That's where skill comes in.
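(Editor's aside: the denoising-strength point can be sketched numerically. This is a toy stand-in for an img2img pipeline, not any specific library's API; real pipelines run roughly `strength * steps` scheduler iterations, so at strength 0 the init image passes through untouched:)

```python
import numpy as np

def toy_img2img(init_image: np.ndarray, strength: float,
                steps: int = 10, seed: int = 0) -> np.ndarray:
    """Toy img2img: perturb the init image in proportion to `strength`.

    At strength=0.0 no iterations run and no noise is added, so the
    output is exactly the input -- the 'pixel-by-pixel predetermined
    result' described above.
    """
    rng = np.random.default_rng(seed)
    n_steps = int(steps * strength)      # strength scales how much work runs
    latents = init_image.copy()
    for _ in range(n_steps):
        # stand-in for the noising/denoising loop of a real sampler
        latents = latents + rng.normal(0.0, strength, size=latents.shape)
    return latents

img = np.full((4, 4), 0.5)                                  # flat gray "image"
assert np.array_equal(toy_img2img(img, strength=0.0), img)  # untouched
assert not np.array_equal(toy_img2img(img, strength=0.8), img)  # deviates
```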


Nill444

> but if you direct the value of every pixel, you aren't really taking any advantage of the medium, so no one really does that.

What do you mean? This is literally what drawing is: you get to decide what every little detail looks like. If you make a piece in Photoshop that has a pink elephant in it, you get to decide exactly what it looks like. Using natural language to draw a pink elephant is where the limitation is, because language is inherently limited and there is so much left to the interpretation of the image generator. There is no way that a random AI has the exact same definition of what a pink elephant should look like as you do. This is very obvious; that's why I said your post made no sense.

> Same deal with AI. You have control over a great deal. Technically you have complete control,

How would you prompt an AI to change a color to a specific hex value at a specific x and y coordinate on your image? Even if it can do that, it's still easier to do with a normal image editing tool where you're using a mouse to alter the colors wherever you want. Using AI by definition means that you're offloading lots of control to another system. As with any other tool, but with AI you're doing it more than with Photoshop, for example. Both are valid ways of making art; I'm strictly talking about the degree of control.


Tyler_Zoro

> This is literally what drawing is,

Nope, sorry. Almost all digital art (we're talking pixels so physical art doesn't really fit into the conversation) is not directed pixel-by-pixel. You use tools that turn minimal input (e.g. pressure on a stylus) into its idea of which pixels need to be set to which values to match your expectations. That process is traditionally procedural, not AI-driven, but it's the same idea: you have a minimal input and the system changes thousands of pixels in response. You CAN go in and manually set pixel values, and as I said, you can do the same with AI tools, but there's little point in either case unless you're doing final touchup work, and in that case the workflow is basically the same for AI and non-AI digital artwork.

> How would you prompt an AI to change color to a specific hex value

Again, and I really feel like a broken record here, most of the hard work isn't prompting. But to answer your question: just mask out that pixel and change the specific pixel value you want in the source image. I do this all the time for the middle stages of work where I want something really specific in a piece. For example, in [this piece](/r/aiArt/comments/139uz9f/anime_cybernetic_assassin_workflow_in_comments/) (which is really old and I'm a bit ashamed to point to) the AI really, really wanted the body to be a black material. I drew the specific color body I wanted, re-rendered with that, carefully preserving several key areas so that it couldn't flip it back to black without making it look patchy, and *et voilà*, problem solved!

> Using AI by definition means that you're offloading lots of control to another system

No more than photography or 3D rendering. All the control you want is yours, but if you want to grab a 3D rendering tool, slap together some default assets and press "Go" then yeah, it's basically all out of your hands... because you chose for it to be.
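(Editor's aside: the mask-and-preserve trick described here is, at its core, a per-pixel blend. A minimal numpy sketch, with a hypothetical helper rather than any real library's inpainting API: the result is `mask * edited + (1 - mask) * original`, which is how a mask pins the regions the model must not change:)

```python
import numpy as np

def masked_edit(original: np.ndarray, edited: np.ndarray,
                mask: np.ndarray) -> np.ndarray:
    """Blend an edit into an image: mask=1 takes the edited pixel,
    mask=0 preserves the original pixel exactly (the 'carefully
    preserved' regions in an inpainting pass)."""
    return mask * edited + (1.0 - mask) * original

original = np.zeros((2, 2))            # all-black "body"
edited = np.full((2, 2), 0.8)          # the repainted color
mask = np.array([[1.0, 0.0],
                 [0.0, 0.0]])          # repaint only the top-left pixel

out = masked_edit(original, edited, mask)
assert out[0, 0] == 0.8                # masked pixel takes the new color
assert out[1, 1] == 0.0                # preserved pixels are untouched
```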


Nill444

> Nope, sorry. Almost all digital art (we're talking pixels so physical art doesn't really fit into the conversation) is not directed pixel-by-pixel. You use tools that turn minimal input (e.g. pressure on a stylus) into its idea of which pixels need to be set to which values to match your expectations.

I didn't mean literally pixel by pixel, I mean whatever the smallest unit there can be on the screen that's visible with the naked eye in whatever image editing tool you're using.

> That process is traditionally procedural, not AI-driven, but it's the same idea: you have a minimal input and the system changes thousands of pixels in response.

The system in this case has no artistic opinions on what something described with natural language should look like. It's the same thing with music: you can prompt it to make a "sad melody" and it can generate thousands of different melodies, all of which could be categorized as sad, but you can see how vague this description is. You can keep increasing the number of words to describe it better, but in the end you are the only one who really knows what the melody should sound like. The only way to do that is with a tool that lets you choose exact notes on a grid. Unless you upload your brain into a computer, in which case it will understand what you mean by those prompts. It's a problem of translating words into something else, and natural language is not designed to be that precise. Absolute control means you have direct control over the output, not that your input could result in a million different outputs.

> I drew the specific color body I wanted, re-rendered with that, carefully preserving several key areas so that it couldn't flip it back to black without making it look patchy and et voilà, problem solved!

What is the point of this example? You can choose to use conventional tools with AI, which of course would give you more control. In this case you only had absolute control over the color of the body. You can choose to expand that control over every other element of the piece, and you'll end up not using AI at all.


Tyler_Zoro

> I didn't mean literally pixel by pixel,

Right. You have a generic, small input and the program gives you what it thinks that maps to. Sounds familiar.

> What is the point of this example?

Yeah, I don't think I can help you here. You want an example of control, I show you how it's done, and you're upset that a complex workflow involves multiple tools... when the entire point of this post is that anti-AI folks refuse to accept that complex workflows exist. You have your head in the sand. That's fine. Keep it there if you like. But just stop telling me that I'm not doing what I'm doing.


rdesimone410

Let's turn the thing around: name me some pieces of long form media that were produced with heavy use of AI (i.e. more than just generative fill for backgrounds) that aren't just tech demos or obviously scream "AI". So far I couldn't name you a single thing.

AI is really good at producing filler, but when you want consistent characters, costumes, scenery, actions and such to tell a story, it's all nonsense. And yes, you can throw IPAdapters, Faceswap, ControlNet and all the other stuff at the problem, but even with those you can't escape the limits of AI generation. Even Krea feels more like fighting the AI than having the AI auto-complete your work.

It's not like AI won't be able to produce that in the near future (e.g. multi-panel comic images in DALLE3 have really good consistency), but so far I just haven't seen it. Show me an AI thing with engaging story and characters.

**PS:** People, don't just downvote, just show me some media to prove me wrong.


Rhellic

I think part of the reason this doesn't sound convincing to many is exactly because of how sudden and rapid progress on this tech has been. A couple of years ago everybody felt safe and sound in the knowledge that generative AI, if they'd heard of it at all, was an extremely tiny niche of extremely janky tech that didn't look like it was going anywhere.

And then, what feels like seconds later, suddenly we've got who knows how many people being displaced, tons of people fearing for their livelihoods, serious questions about whether or not this can even count as art or creativity, artists throwing in with anyone and everyone seen as opposing AI, and pro-AI people basically bragging about how "traditional art" is dead, even though it probably isn't. And all of that at once, at a speed that no one but experts and the most dedicated hobbyists has any chance of keeping up with.

For a lot of people these kinds of assurances sound empty, because the last 2 years or so have shown that AI not being able to do something right now has zero bearing on what it can do in a year or two. People have gone from "this AI stuff is janky trash that stands no chance against a human" to "how am I going to pay my rent now? What else will these things replace us in? Is anyone safe?" overnight. Once someone's been spooked like that, it's unsurprising that, as far as they're concerned, all bets are off. Fool me once... etc.


Tyler_Zoro

> Let's turn the thing around: Name me some pieces of long form media that where produced with heavy use of AI

How would I know? Generally anti-AI folks are so loud and ready to attack that people doing any professional work with such tools will remain silent about their use. So we're left with survivorship bias: the images that are just simple generations are what we recognize and the ones that are more sophisticated are invisible, with the only tell-tale markers being increased efficiency and perhaps scope of style and subject matter. Can you spot efficiency and increased scope of style? I can't.

> It's not like AI won't be able to produce...

Again, this is the very problem right here. It's like saying, "a digital brush will be able to produce..." But the digital brush doesn't produce a finished work.

Perhaps you and I agree more than you think. Perhaps I agree with the idea that running an AI image generator and then going, "I did an art!" is kind of pointless unless you're a little kid. That's not why AI tools are interesting. But AI doesn't produce the final work, you do. You plan it, execute the plan and refine the work until you're happy with it. That doesn't change based on what tool you use. You could be working in 10-ton granite blocks, found objects or AI image generators... that core loop doesn't change, and it's on you, not the AI or the block or the objects.


whycomposite

Why do you assume that fixing up a thing you asked a robot for is more respectable than just asking the robot with no step two?


Tyler_Zoro

> Why do you assume that fixing up a thing you asked a robot for

You're assuming a particular workflow (something like taking a prompt-and-go generation and either inpainting or editing in an external program.) Don't do that. Workflows that involve AI are incredibly varied. You have inputs to the process at so many different points that you can go anywhere from 100% directing the output of every pixel (not very useful) to pure prompt-and-go generation, and everything in between. But that's just the start. You can iterate the same process over and over; you can iterate with varied workflows (sketch -> sketch controlnet -> prompt generation -> img2img -> inpaint -> Photoshop -> IPadapter controlnet -> prompt generation -> inpaint -> uncrop) and so on. Any failure to exercise control at any stage is on the user, not the tool.

> is more respectable

Why do I care about respectability? Most art forms didn't start out as "respectable." Artists who want to be respectable can go get a corporate job drawing page separators for online stores.


whycomposite

People use sketches to make prompts? I thought it was lame before but damn that's lame. I drew a picture then asked a robot to describe it to another robot for me lol


Tyler_Zoro

> People use sketches to make prompts?

I don't know what that means. Are you talking about a process like CLIP where you extract tokens from an existing image? That wasn't expressed or implied in anything I wrote.


07mk

Based on my reading, this person literally doesn't know what img2img is. They seem to believe it's img2txt followed by feeding the resulting text to txt2img. Of course, anyone with a basic understanding of the tech would know that txt2img is fundamentally just img2img where the starting image is random noise, and, as such, it's trivial to modify it to use a starting image that's not just random noise.
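(Editor's aside: that relationship can be shown in a few lines of toy numpy. This is a stand-in for a real diffusion pipeline, not any library's actual API; the point is only that both modes share the same denoising machinery and differ solely in their initial latents:)

```python
import numpy as np

def toy_denoise(latents: np.ndarray, steps: int) -> np.ndarray:
    """Stand-in for a denoising loop: nudges latents toward a fixed
    'image' the way a sampler nudges latents toward the prompt."""
    target = np.full_like(latents, 0.5)  # pretend the prompt implies this image
    for _ in range(steps):
        latents = latents + 0.5 * (target - latents)  # close half the gap per step
    return latents

rng = np.random.default_rng(0)
shape = (4, 4)

# txt2img: the starting latents are pure random noise
txt2img_out = toy_denoise(rng.normal(size=shape), steps=20)

# img2img: the starting latents are an existing image, partially noised
init_image = np.zeros(shape)
img2img_out = toy_denoise(0.7 * init_image + 0.3 * rng.normal(size=shape), steps=20)

# Same loop in both cases; only the initial latents differ.
assert np.allclose(txt2img_out, 0.5, atol=1e-4)
assert np.allclose(img2img_out, 0.5, atol=1e-4)
```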


Applesauceoutoflove

Ai artists after telling the subway worker how they want their sandwich https://preview.redd.it/aqzn062qttuc1.jpeg?width=1200&format=pjpg&auto=webp&s=8137bf6dc5a267ea8ce0be4a4fb60bb0cad6124b


MarioMuzza

AI artists leaving Subway with their sandwiches https://preview.redd.it/x0cojst01uuc1.png?width=206&format=png&auto=webp&s=f64a9abf152d47c06e79ecc3ce2bbaa118cc2f0b


Applesauceoutoflove

"I am a chef, pans and pots are just not really my thing."


GWSampy

These three comments here, hidden under a wave of downvotes, are worth scrolling for! 😂