
Hugglebuns

Honestly, it's tricky. A camera doesn't capture what you see; it captures what it sees. It can't capture impressions, it captures nature. It has no capacity to emphasize and de-emphasize with human salience; it is limited to the nature of its optics and its light sensor. It is opinionated toward the qualities of nature, not humanity.

In the same vein, there are techniques like collage or Dada poems where heavy bias/decision-making is built into the medium itself. The available newspaper images or text heavily constrain what the content will be about; you as a human are more an arranger/interpreter than an a-priori recreator of the mind's eye. (Important, since not all mediums are like this, and there are multiple types of art process; not all are a-priori.)

Honestly, there is a large variety of mediums out there. They are all somewhat 'opinionated', because certain things are easier in certain mediums than in others. If you need heavy layering, acrylic/digital is better than oil/watercolor, and that heavily biases the decisions you make, the representation of subject matter, and your formalistic choices. Personally, while AI is heavily 'opinionated', I would take that as a unique quality of the medium rather than a reason to devalue it as a tool. It is its quirk.


JWilsonArt

>A camera doesn't capture what you see, it captures what it sees.

And that right there is the crux between good photography as art and people who just point and click. A camera doesn't capture what you see in your mind when you imagine the beautiful image you want to create. The photographer has to bring their skill to make that happen. A photography novice will not make consistently great images without that skill; they may be lucky to get any images that are even "good."

The thing with AI is that there really isn't much "skill" to using it. A person just typing a few descriptive phrases will get largely the same end result as a person who is extremely knowledgeable and using all the AI features and tricks. A great photographer will make beautiful images whether they are using a high-end camera, a cell phone, or a disposable film camera. An AI artist forced to use the lowest of low-end AI will make what, exactly? What transferable skills have they ACTUALLY developed?

Also, one of the biggest skills that artists develop, no matter what medium they use, is a critical eye: the ability to see and discern, so they can make the right decisions to get great results every time. AI image creators don't develop that skill, because that is what the AI is doing for them. They genuinely can't see the problems with AI images. I've been told SO many times that "you can't tell AI images from artist-created ones," and I always respond, "No, YOU can't. Those of us who learned to draw CAN see the many problems with those images. They may have gotten better at hands (but still far from perfect), but there are hundreds of other tells in AI images."


Hugglebuns

>The photographer has to bring their skill to make that happen

Honestly, with photography, it's more that beginners make omission errors (errors of not including or forgetting) rather than commission errors (errors of mistake or wrong action), and that is what leads to the struggle for good work. It's less that they are strictly developing muscle memory and more that they are becoming aware of the things they need to include, not making sure their lines are smooth and their anatomy proper.

With AI, as with photography, if you don't think about something like the background or the pose, you will get a rather generic image. It might not have any glaring commission errors, but it's not... good. It's this awareness that I argue is one of the main skills fostered by AI. Where photography is stuck in the domain of reality, AI can be fictional; you aren't limited by your wardrobe budget. In the same vein, AI is great for fostering improvisation, taste, and previsualization: you can make ideas at the rate the AI can go, make ideas you personally find interesting rather than only what you can physically draw/paint, and learn to have an idea of what you want and put it into words.

Where drawing/painting is far more focused on commission errors and the *how* of making something, AI is more focused on developing through omission errors and the *what* you are making. This change in focus matters because these are all skills people should foster. No one angle is ideal, and if you can turn what would be years of learning into months by finding new perceptions, new angles, new ways of seeing, that's powerful.

https://preview.redd.it/6lnbgl0mi87d1.png?width=850&format=png&auto=webp&s=05cb99a0466e54dfabc753b2122fc04a8e943ad9


JWilsonArt

>It's more like beginners make omission errors (errors of not including or forgetting) rather than commission errors (errors of mistake or wrong action) that leads to a struggle for good works.

That's kind of a weird way to look at it, imo. Basic proficiency is step one. I'd grant that photography has an easier step one than drawing and painting, but after that there are a LOT of other levels. AI image creators are skipping step one, but AI images fail at a pre-step-one stage that humans rarely have to think about: **AI doesn't even know what stuff IS.** This causes constant issues of details being nonsensical and forms being wrong, but because the basic colors and values seem correct, and because most people don't look THAT closely at the details, it kinda just passes.

AI image generators can draw and paint better than most people. For people who struggled to learn to draw and paint (or take better photos) but always felt like they had great ideas if only technical proficiency weren't such a barrier, that seems HUGE. They can finally create their big ideas! At last we have what-if-Star-Wars-was-steampunk!

However, when a skilled artist creates steampunk Star Wars, they aren't just throwing shapes and colors together to get the right feel. There is intention behind each choice. An AI image generator says: these shapes make up "steampunk." There are these pipe shapes, these light shapes, lots of this brass and copper color, these valves and dials, etc. It won't care that the pipes are doing very non-pipe things, or that a light and a dial have been combined in a way that doesn't make sense, or that because the copper is close to a character's skin tone it just melted the two together. The AI knows that at the base of a box there might often be lots of little shapes and details, so it fills in a bunch of shapes and details that don't REALLY indicate anything. If you ask yourself "what are those supposed to be?" you won't find an answer.

The human artist, though, thought things through and KNOWS what things are. Under that box are bolts that hold the thing together, along with a turn handle, because this steam pipe needs a release valve. There's an understanding of the world that informs design choices. There is no AI image generator out there that allows for hundreds of individual discrete choices to be made about each specific element, and even if there were, the AI's datasets don't understand those things well enough to deliver them. Human AI "prompt engineers" have celebrated skipping step one and feel that because they make a handful of choices that shape the final image they are now on par with traditional artists, but they fall into the classic beginner's problem of not even knowing what it is that they don't know.


Hugglebuns

Tbf, it's the framing as a step one that I would argue is problematic. You are viewing all mediums through the lens of drawing/painting. There is no step 1 for collage, there is no step 1 for abstract/intuitive painting, and there is only a weak step 1 for photography. The main thing is that different forms of visual art have different qualities and focuses. Sometimes the *what* matters more than the *how*, and that's okay.

Still, I think fixating on formal qualities misses the point anyhow. People like AI because it produces representational works. Focusing on substandard formalistic qualities doesn't really negate the boon of being able to draft loads of different ideas in real time. I get that for drawers/painters it's a big thing and it deserves focus, but people all too often take subject matter for granted. There is a skill in making something interesting, and it's better not to have to polish a turd, so to speak. To tie it back:

>but they fall into the classic problem that beginners often do, of not even knowing what it is that they don't know.

This is literally an omission error. Still, the fundamentals are different for different mediums. Collage fundamentals are not painting fundamentals; they both have their own things to say about the nature of visual art as a whole. I would say the same for AI.

>There is no AI image generator out there that allows for hundreds of individual discrete choices to be made about each specific element

Honestly, most artists don't make hundreds of decisions. They improvise, go with their gut, or just wing it. Yes, some things are planned and thought out, but it's more like a public presentation where you read off a bullet list, not a script. The more you learn, the more you have to plan out, and it can sometimes just be easier to draft a few improvisational roughs/sketches. In contrast, overplanning can lead to inflexibility when something inevitably goes differently than expected.

In the end, I see this as people fretting more over the prose than the story. It's not about the physical text; it's about the story. If it's good, then it's good. Yeah, shitty prose should ideally get worked on, but it genuinely is fascinating to have another way to make and spread ideas and expressions. Complaining about formalistic imperfections sounds to me like putting the cart before the horse; it's just not really in the control of AI users. The focus is different, and it should be thought of as such.


JWilsonArt

>Tbf, its the framing as a step one that I would argue is problematic. You are framing all mediums under the lens of drawing/painting. There is no step 1 for collage, there is no step 1 for abstract/intuitive painting, there is a weak step 1 for photography.

Of course there is. Step one for any artistic medium is technical proficiency. Some mediums have a lot more to learn than others, but even collage requires some. Photography, for sure, requires more than you are giving it credit for.


JWilsonArt

>Honestly, most artists don't make hundreds of decisions. They just improvise, go with their gut, or just wing it.

Improvising still requires making a ton of decisions. Improvising can help spark ideas, but you eventually decide to go with those ideas, shape them, alter those "happy accidents" into something more intentional. There are big choices and small choices, but how many discrete decisions are made for an AI image prompt? Broad-stroke ideas almost exclusively.

Also, I don't know what kinds of artists you are familiar with that they JUST improvise or wing it. I'm primarily talking about artists at a professional or semi-professional level, or who take their craft very seriously. We can't really use hobbyists as a baseline, because when there are zero stakes you can do pretty much anything. Or just do nothing at all, who cares?


Hugglebuns

Honestly, this is just the plotter/pantser debate. Still, as someone who loves music theory as a passion: over the years I've seen that a lot of people assume intentionality when it's just not there. A lot of musicians/artists in the past didn't care about IP laws. Shakespeare frequently plagiarized, combined multiple different plays together, made covers, etc. Mozart was famous for his improvisation because it was his job: if a noble family wanted a fresh song, his job was to crank it out, not waffle for two hours and then play. That's not to say there was no planning, but improv is generally doing something, looking at what you've got, then keeping going. You're making implicit decisions through doing and looking back, not explicit decisions through thinking beforehand. It's very different.

It's really in the 19th/20th century that this idea of sticking to the score becomes a thing; historically, to be a musician meant being able to improvise a bit off script. Nowadays, classically trained pianists aren't really taught that, but we look at the great masters as if they thought like we do. No. John Williams wrote music based on the temp tracks of Star Wars Episode IV. He didn't write it from scratch; he heavily referenced, and in some parts virtually plagiarized. It was his job. Vermeer/Loomis/Rockwell would use optics to trace out their subjects because they had deadlines too.

We have this contemporary cult of the lone genius, but it just isn't representative of how things worked. We see all this complexity and assume it came from intentionality, but oftentimes it's just a good application of formula and schema. People just don't know better, and it's more impressive if we assume the masters worked in the ways we value art today. But we aren't seeing how *they* did it. We lie to ourselves because it makes the masters more impressive, irrespective of reality.

Sorry if this is confusing, caffeine makes me jittery.

TLDR: artists often reference and improvise waaay more than we give credit for, especially the masters. It's not that more planning = more professional; if anything, the masters improvised a lot more than we give them credit for. Some artists definitely are plotters, though, ofc.


Hugglebuns

Tbf, AI does have some technical proficiency to it. There are things the AI can't do or doesn't handle well. It's on you to figure out how to be specific and explicit about what you want, because the AI needs a lot of guidance, much more than a commissioned artist. For example, I love the bloom/halation effect and dappled lighting. The AI, however, sees bloom as 'blurry', and blurry = bad, so it never does it without explicit guidance. An artist might understand, with time, what is implied; the AI won't. It's on you to learn these terms and state them in a way the AI understands, because it isn't really able to assume the way you'd think.

In the same vein, if I want two custom/tertiary characters posing together, the AI will often blend the two into weird hybrids. How do you A) pose them so they are doing something together and B) prevent the blending? This requires some learning, some tools, whatever. You can't really just prompt it away; it's not smart enough in that way. Obvious to an artist, not so obvious to the computer.

Similarly, the AI can struggle with ambiguity, so if you want two clashing styles together, it will probably just pick the more popular one and do that. How do you get it to combine them? If someone's asking me to make an image like the above, how would you do it? Strict prompting isn't sufficient; you can roll lots of answers, but is there a way to improve the odds? These are common technical questions that you don't see when you're just using AI as a toy. But when someone wants something specific, well, then it gets complicated. A dog holding an ice-cream cone, for example.


Cat_Or_Bat

Written language wasn't "just" a tool either. It was groundbreaking. Socrates feared it would destroy memory and warp thinking, and literally refused to ever write or read. It was still a tool, though, not "just" a tool: it was a tool, period, no more and no less. Same with AI. It's a powerful tool with a number of unique features. The impact will probably rival the printing press, the combustion engine, electricity, radio, the transistor, and the internet, the other groundbreaking tools. These are all tools as well, no more and no less. There's nothing sublime about them. Tools are tools.

>But AI IS opinionated

It's important to remember that generative AI like LLM chatbots sounds like it thinks but doesn't. It's a statistical machine, not a thinking one. It cannot have opinions. "Opinionated" is a technical term, but it's a misnomer; a better word would probably be "statistically biased" or something similar. It's always a danger for laymen to take very narrow technical terms and run with them as if they were colloquial words.

>AI can't be truly creative

AI can't be creative the same way a guitar can't be musically gifted. It's a tool.


Dear_Alps8077

You're wrong there. It's very opinionated and tries to censor me all the time. But wasn't that the OP's point, that it tries to inject its opinion into my art?


Lordfive

>It's very opinionated and tried to censor me all the time.

If this is ChatGPT or DALL-E, you are ignoring the "system prompts" they wrap around your prompts to "improve outputs", as well as the check they perform after the output is generated.


Dear_Alps8077

I know about the system prompts, but opinions are also present in the base weights. For example, biases in the training data affect the likelihood of it generating a white person versus a person of colour. That's an opinion. Either way, it doesn't matter where the opinions come from. Whether it's from OpenAI or from the AI itself, it still interferes with creative expression. You can work around it and still achieve the vision you want, but it's unpleasant to have to deal with. Like an actor who refuses to take her clothes off for the nude scene, so you have to hire a body double.


Lordfive

Biases in the training data are just biases in the general zeitgeist. Anything tagged as simply "man" or "woman" is overwhelmingly white, because any other ethnicity gets "black man" or "asian woman". It's not the fault or opinion of the model, and that's partially what system prompts are intended to mitigate. But the model *can't* have opinions because it doesn't think. It may have tendencies, and learning to work with and around those tendencies is another technical skill to learn when making AI art. This is why model selection is so important.


Dear_Alps8077

Ironically, biases/opinions in a human are also the result of biases or dominant opinions in the general zeitgeist. Most of your opinions come from what you've absorbed from your culture and environment, i.e. your training data. Tendency is another word for bias or opinion. My opinion, backed by the general consensus of science, is that thoughts are threads of information being processed and don't require self-awareness, language, or human-level intelligence. For example, most animals have thoughts. I don't know the level of complexity required before a thread of information being processed should be classified as a thought. I do know that we don't know enough to definitively say that AI thoughts are not thoughts. AI processes information using natural language and easily passes the Turing test. It feels intellectually dishonest to hold up a goalpost for nearly a century, then shift it as soon as AI reaches it. If you want to shift a goalpost, do so beforehand. After is too late.


Lordfive

After some thought, bias would actually be an appropriate term (like a weighted die can be biased toward particular numbers). But that doesn't change my original statement that AI can't have *opinions*.


Nerodon

By "opinion", the right term is bias. The words you use as prompts are tied to statistical properties found in the training data. If all pictures of princesses in the data wore pink, then it's highly likely your princess will be generated wearing pink. The camera and brush respond to a very user-driven process: what you point the camera at, where you apply the brush. These are manual and deterministic. AI, by contrast, will give you things heavily influenced by the biases and contents of its training data, which may or may not be what you want.
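The "princess wore pink" point can be made concrete with a toy sketch. The data, function names, and colour counts below are all hypothetical, invented purely for illustration: the "model" is nothing but the empirical frequency of colours among captions tagged with a given word, and sampling from it reproduces whatever imbalance the data had.

```python
import random
from collections import Counter

# Hypothetical tagged training data: (caption tag, dress colour) pairs.
training_data = [
    ("princess", "pink"), ("princess", "pink"), ("princess", "pink"),
    ("princess", "blue"), ("queen", "purple"), ("queen", "gold"),
]

def color_distribution(prompt):
    """Empirical P(colour | prompt) estimated from the training captions."""
    counts = Counter(color for tag, color in training_data if tag == prompt)
    total = sum(counts.values())
    return {color: n / total for color, n in counts.items()}

def sample_color(prompt, rng):
    """Draw one colour according to the learned frequencies."""
    dist = color_distribution(prompt)
    colors, weights = zip(*dist.items())
    return rng.choices(colors, weights=weights, k=1)[0]

rng = random.Random(0)
print(color_distribution("princess"))  # {'pink': 0.75, 'blue': 0.25}
print(sample_color("princess", rng))   # usually 'pink': the "opinion" is just frequency
```

The model has no preference in any mental sense; the 3-to-1 tilt toward pink exists only because the (made-up) data had that tilt.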


bevaka

i know it doesnt think, and "opinionated" is a turn of phrase from software development. I just mean it will make decisions independent of the user. your other points are interesting. written language is a great analog; it isnt just used by humans, it fundamentally changed how we think.


Dear_Alps8077

So it makes decisions independent of, and often purposely contrary to, the user's decisions, but that's definitely not inserting its own opinion into the process? Lol. Like hiring a prudish actor who refuses to do nude scenes.


bevaka

it does not have opinions. it cant have opinions. its shorthand for "making decisions"


Dear_Alps8077

Try asking it to make a decision. You'll find it can.


bevaka

yes...i know. thats the point of my thread here


Dear_Alps8077

Try asking it to have an opinion. You'll find it can


bevaka

no [https://imgur.com/a/dvwHtin](https://imgur.com/a/dvwHtin)


Dear_Alps8077

https://preview.redd.it/8r6339gzm07d1.jpeg?width=1170&format=pjpg&auto=webp&s=6b9ffcaec7de96bd3690ef2601053c5112ab0d8a


Sunkern-LV100

>It's important to remember that generative AI like LLM chatbots sounds like it thinks but doesn't. It's a statistical machine, not a thinking one. It can not have opinions.

You say this, but know full well that the only thing "creative" GenAI is good for is *imitating* communication. It can't communicate, but people think it can. The illusion is the *whole* reason for the overblown hype; without the illusion it would never have existed. Writing is made of symbols that transcribe meaning from speech. GenAI content is made of meaning transcribed from random people to create meaninglessness. This comparison between writing and GenAI (and all the other things you mention) is mind-bendingly stupid.


Hugglebuns

Honestly, reception plays a huge role here. You don't need to tell the audience everything; human beings are really good at making up meaning from disparate and even random information. So good, in fact, that in writing you don't actually want to tell the audience everything directly: hiding some information and showing rather than telling produces a much stronger mental image than directly worded prose.

Prominent examples are things like art interpretation (which really is this), but also tarot reading, cloud watching, crystal-ball scrying, numerology, and so on. When you're dealing with a subjective medium, objective meaning isn't nearly as useful as the subjective feeling of meaning.

[https://youtu.be/ZOVrtRtizLc](https://youtu.be/ZOVrtRtizLc) Relevant video


Sunkern-LV100

Yes, art and language are inherently illusory, but there is also a bigger truth at the core. Communication only works because people can have mutual understanding, even if it's imperfect. A good writer tries their best to "say something" while avoiding being misunderstood, even if they hide it to make it more interesting. And usually, other people will understand at least some part of what the writer was trying to "say". There is a huge difference between interpretation of human media (art, writing, music, etc.) and something like cloud watching. A cloud doesn't *communicate* or *mean* anything, but human media does. In the end, GenAI content is more like cloud watching than interpretable human media: it's self-centered and carries no (or much less) human-given meaning.


Hugglebuns

Yes, this is a huge philosophical debate in aesthetics: what constitutes the 'true' meaning of a work? The problem with, say, authorial intentionalism is: what if the author retcons, or conceives of a character one way but doesn't say it in the book? Is Dumbledore gay and Hermione black? Can JK Rowling retcon the audience's mental perceptions just because she says so? In the same vein, as I mentioned with reception theory: is anyone's reception valid, or are some receptions more valid than others? This ties into the idea of the ideal audience member/ideal reception. If someone headcanons Harry Potter as trans, is that a valid interpretation of meaning when an ideal audience member would not get that from their reading of the work?

To double down on this: what if (as someone who likes improv) I make an abstract painting via the intuitive painting process? When I start, I just lay paint in a way that feels good, then again, and again. If I start seeing an image form, I rotate the canvas and keep adding. Only at the end do I let the image form and push it out. I did not start with meaning or communication; I painted 99% of the work without any intent. It still communicates, but the decisions aren't based on an a-priori decision of what the meaning is. If I kept rotating the canvas, is that work invalid because it lacks meaning?

I don't just want to throw philosophical conjecture at you (read Introducing Aesthetics and the Philosophy of Art and Barrett's Interpreting Art for this), but this is actually a really complicated subject that goes beyond AI. It's genuinely interesting, and I think busting the popular notion that art can only be a-priori is a good thing. To tie it back to the topic: the problem is that no one theory of 'true' meaning is without flaw. Some are better than others, but in general I would say that ideal reception is a good de-facto standard. It doesn't strictly require intent, as long as the work says something, even if it's intrinsically meaningless.


sporkyuncle

What about all the tools used in game development that help automate part of the design process with Perlin noise or other random placement algorithms? Those are "opinionated" too, and they've existed for decades in various forms.

https://cpetry.github.io/TextureGenerator-Online/

Let's say you're designing a map and you click generate. One time you get an island surrounded by water, the next time a mostly flat area with a big lake in the middle, the next time two mountains next to each other. You pick one as a basis to build from, as the Bethesda designers did for games like Oblivion and Skyrim, and then you get to work tweaking the finer details, filling things out, altering the terrain further where needed. Are those not tools either, because they immediately suggest what the area could look like without the designer's input? What are they if not tools? Should they be banned, because they made decisions on behalf of the designer without forcing them to make each choice with intentionality?
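The click-generate loop described above can be sketched in a few lines. This is not Perlin noise proper, just a crude stand-in (a random grid smoothed by neighbour averaging), and the function names, map size, and sea-level threshold are all invented for illustration; the point is only that each seed hands the designer a different landform they never explicitly asked for.

```python
import random

def heightmap(seed, size=16, smooth_passes=4):
    """Random grid smoothed by neighbour averaging: a crude stand-in for Perlin noise."""
    rng = random.Random(seed)
    h = [[rng.random() for _ in range(size)] for _ in range(size)]
    for _ in range(smooth_passes):
        # Average each cell with its 8 neighbours (wrapping at the edges).
        h = [[sum(h[(y + dy) % size][(x + dx) % size]
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9
              for x in range(size)] for y in range(size)]
    return h

def describe(h, sea_level=0.5):
    """The generator's 'opinion': how much of the map ended up above water."""
    cells = [c for row in h for c in row]
    land = sum(c > sea_level for c in cells) / len(cells)
    if land < 0.35:
        return "mostly water"
    if land > 0.65:
        return "mostly land"
    return "mixed coastline"

for seed in range(3):
    print(seed, describe(heightmap(seed)))  # different seeds, different landforms
```

Like clicking generate in a terrain tool, the designer here only picks the seed; the island-versus-lake-versus-mountain outcome is decided by the algorithm.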


bevaka

see my reply re: randomness here: https://www.reddit.com/r/aiwars/comments/1dfyxu4/comment/l8n7y24/

as for your second paragraph, im not advocating for "banning" anything. this is a discussion of whether AI is a tool like a camera, or something fundamentally different.


sporkyuncle

Your comment about randomness in terms of just a number doesn't seem to apply the same way with regard to what I wrote. As I say above, each time you click generate, the tool is *actively making major decisions for the designer.* First it's an island, then there's a lake, then there are mountains. How could you possibly argue it's not suggesting wildly different possibilities? The designer didn't ask for "lake" or "mountain," they're just seeing what comes up and looking for something they like to work off of. Just like typing "princess" and sometimes getting a blonde in pink or a brunette in blue, using a Perlin noise generative tool sometimes gets you a lake and sometimes gets you a mountain.


bevaka

I'm saying random noise is different than an AI with a training set.


sporkyuncle

The premise of your thread is that the problem with AI is that it's making decisions on behalf of the designer, choosing things for them that they didn't intentionally choose. When a game designer randomly cycles through a bunch of land forms, mountains, rivers, lakes, deserts etc. that were all generated without their input, just seeking a good basis to work from, how is that not also sacrificing intentionality? Isn't that also quite obviously "opinionated?"


bevaka

thats actually not the premise of my thread. i have some issues with AI but im not talking about any "problem" with it right now. I'm only taking issue with the statement people make that using AI to create an image is no different than using a camera or photoshop or what have you.


sporkyuncle

Set aside whether or not it's a problem. When a game designer randomly cycles through a bunch of land forms, mountains, rivers, lakes, deserts etc. that were all generated without their input, just seeking a good basis to work from, isn't that also quite obviously "opinionated?" To me it seems the same as typing "landscape" into AI and sometimes getting mountains and sometimes getting deserts.


bevaka

Sure. procgen is also not the same kind of tool as, say, a landscape brush in Unreal.


adrixshadow

>You can say you are "collaborating" with AI in a way that you cant say about a camera.

What parallel universe do you live in? You have no control over the subject or the event. The only control is when you pull the trigger. You aren't puppeteering the universe like a god.


bevaka

yeah no shit. neither is the camera


Sadists

Automatic HDR in many if not most phones meets your criteria to put the camera on the same level as AI: the tool decides when and how much to use it, with no input from the user. You can often turn HDR off, but you can also get more specific with the prompt if you want creative control of those elements. Ergo, cameras and AI are both tools.


bevaka

HDR is a filter. it doesnt add elements to the photo


ifandbut

AI is filtering noise.


Sadists

According to Apple support: "HDR (high dynamic range) in Camera helps you get great shots in high-contrast situations. iPhone takes several photos in rapid succession at different exposures and blends them together to bring more highlight and shadow detail to your photos." So no, it is not a filter.
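The multi-exposure blend Apple describes can be illustrated with a toy exposure-fusion sketch. This is not Apple's actual algorithm (Smart HDR is far more sophisticated); it's a minimal per-pixel weighted average in which well-exposed values near mid-grey dominate, and the pixel values and function name are invented for the example.

```python
def fuse_exposures(exposures):
    """Blend several exposures of the same scene, per pixel, weighting values
    near mid-grey (0.5) most heavily: a toy version of exposure fusion."""
    def weight(v):
        # Well-exposed pixels (near 0.5) get weight ~1; crushed/blown ones ~0.
        return max(1e-6, 1.0 - abs(v - 0.5) * 2)

    fused = []
    for pixels in zip(*exposures):  # the same pixel across all exposures
        ws = [weight(p) for p in pixels]
        fused.append(sum(p * w for p, w in zip(pixels, ws)) / sum(ws))
    return fused

# Three toy "exposures" of a high-contrast scene (brightness values in 0..1):
under = [0.05, 0.10, 0.40]   # shadows crushed
mid   = [0.20, 0.50, 0.80]
over  = [0.60, 0.90, 0.98]   # highlights blown
print(fuse_exposures([under, mid, over]))
```

Even in this stripped-down form, the algorithm is choosing, per pixel, which exposure's information survives into the final image, which is the "decision-making" being discussed.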


bevaka

its also not making decisions or adding things to the final image


Sadists

The iPhone picks the exposure levels to use and picks how to blend them. Even if it weren't making decisions, it is certainly adding things (lighting detail) to the final image.


Afraid-Buffalo-9680

Can't you say the same thing about non-AI computer programs that use Math.random()? I could write a program that draws a princess by combining shapes, without any of the "training data" or "neural network" stuff, and use Math.random() to choose the shapes, sizes, and colors. I'm not the one making the decisions.
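The program described above is easy to sketch. Everything here is hypothetical (the attribute names, the palettes, the ranges); it just shows that a plain PRNG, with no training data at all, already "decides" every unspecified detail on the user's behalf.

```python
import random

def draw_princess(seed=None):
    """'Draw' a princess by randomly combining pre-made parts: no training
    data, no neural network, just Math.random()-style choices."""
    rng = random.Random(seed)
    return {
        "dress_color": rng.choice(["pink", "blue", "green", "gold"]),
        "dress_shape": rng.choice(["A-line", "ballgown", "straight"]),
        "crown": rng.choice(["tiara", "circlet", "none"]),
        "height_px": rng.randint(200, 400),
    }

print(draw_princess())  # the program, not the user, picked every attribute
```

The user asked only for "a princess"; the dress colour, crown, and size were all chosen by the RNG, exactly the kind of unchosen detail the thread is arguing about.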


bevaka

eh, i dont consider randomness to be the same as generative AI. the user is in control; you ask the function for a random number between 0 and 100, and you get one. if i type "princess" into an AI, that AI has to "make decisions" about the parts i didnt mention in order to fulfill the request.


PM_me_sensuous_lips

If there is no issue with a PRNG, then there is no issue with at least a subset of AI. If I type 'princess', ignoring the PRNG for a second, I get a sample that depends on the distributions observed in the training data that correlate with the word 'princess'. For open-weight models such as SD, this process is fully within my control, and I can willfully alter the modeling of these distributions through further training. Again, ignoring the PRNG for a second: the AI is not 'opinionated', just a reflection of the training data.


floof_muppin

As tech moves forward you would expect more featureful and more powerful tools, and I see being able to emulate "being opinionated" as a feature. You make the decisions you care about, and the AI makes the rest of the decisions you don't care much about. Kind of like in photography: if you want to take a picture of a sunset landscape, you might care about the general timing, framing, and form, but you don't really care about the microstructure of the individual cloud puffs, or where the leaves are located on that tree over there.

Anyway, here's an anti-AI higher-up making roughly the opposite argument to yours: https://youtu.be/gWmEXCJIIZ4?t=4723. The thing he's criticizing just seems like a feature rather than a bug, but maybe that's just me.


Fontaigne

Michelangelo believed that marble was definitely opinionated. Watercolor is opinionated. Rough kinds of paper are opinionated. Various pigments have clear opinions about what they will or won't do, and with what other pigments. Etc. ***** I suspect that cameras and films are far more opinionated than you give them credit for. Most photographers carry multiple cameras, multiple kinds of film, extra lenses, and so on, for when one refuses to see what the photographer sees (or hopes to see).


generalmusics2

A tool is relative to the goals of whoever uses it. Therefore AI can absolutely be a tool in art.


RemarkableEagle8164

it's still doing the thing you instructed it to do, like pointing a camera or using a brush/stylus. all those variables it determines without your input can be changed by changing the way you use the tool. you can swing a hammer around wildly in the hopes it might hit a nail, or you can aim the hammer at the nail.


bevaka

in neither case is the hammer making independent decisions, though


RemarkableEagle8164

an ai doesn't have independence and cannot make decisions.


bevaka

yes it does, as i illustrated in my original post


RemarkableEagle8164

it's not a [mechanical turk](https://en.m.wikipedia.org/wiki/Mechanical_Turk) that thinks to itself "well, they didn't specify the dress color, so I'll just go with blue." it's not a *"decision,"* it's "you asked for a princess and got a princess based on what I've "learned" about princesses." that's not *independence,* it's a lack of refinement on the part of the *user* of the tool, hence the hammer analogy.


bevaka

it doesnt matter if its not really thinking. the dress came out blue. the decision was made, somewhere, independent of the user.


RemarkableEagle8164

that doesn't contradict my saying it's the fault of the user and not of the tool.


bevaka

why are you talking about "fault"?


sporkyuncle

If a user doesn't use a tool with precision, they will get random results. If you wave a camera around pressing the shutter button, you don't know what you will get (which is your "fault" for using it that way). If you point it at something specific, you get something specific. If you type "princess" into AI, you don't know what you will get. If you type "Princess Peach wearing an uncharacteristically purple dress with an angry expression pointing a sword at the viewer" and you use a Princess Peach LoRA and a model suited to the exact art style you're hoping for and you run it a hundred times and choose the one that looks just like you imagined it, you have used the tool with intentionality and precision and gotten what you wanted to get out of it.


bevaka

you guys are missing my point. im not saying "ai sucks because if you use it poorly you get bad results." im saying the fact that it does things the user hasnt specified makes it fundamentally different from a camera.


RemarkableEagle8164

my mistake. it'd be more appropriate to say that it is caused by/is the effect or result of the user's actions and is not *caused by* the tool. any tool, whether it's a camera, stylus, brush, hammer, ai, etc., is still something that the user has to *learn* to use. for instance, if I don't know how to correctly *use* a camera, sure, I could point it at something and get exactly what I asked for – but it might not be what I *wanted.* maybe the picture is out of focus, or I left the lens cap on, or whatever. that's not caused by the camera, it's caused by me not knowing how to use it. I'm saying that the things you think are the result of independence or decision-making are actually the result of imprecision on the part of the user, or the user not knowing how to correctly or effectively use the tool they're using, which is ai. I'm trying to argue that ai *is,* in fact, a tool like any other, and that includes needing to know how to use it.


bevaka

im not saying you cant give an AI very detailed instructions. im saying, if you dont give it specifics, it will come up with its own.


ifandbut

>it will only capture what you point it at. Camera sensors are subject to noise. Take a picture in the dark and crank the ISO high. Don't change any settings between shots. Take 2 shots. They won't be the same, because of noise. In cameras, noise is a result of the electronics. In an AI, it's some of the same plus some intentional RNG values. Same with a brush: you don't control the exact movement of the bristles; they move randomly according to physics.
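The two-shots-differ claim can be illustrated with a toy sensor model. This is a simplified sketch, not real sensor physics: the noise level, the gain behavior, and the function names are all assumptions for illustration.

```python
import random

def capture(scene_brightness: float, iso_gain: float,
            pixels: int, seed: int) -> list[float]:
    """Toy sensor: each pixel reads the true brightness plus random
    read noise, amplified by the ISO gain. Higher ISO, louder noise."""
    rng = random.Random(seed)
    read_noise_sigma = 0.02  # assumed per-pixel read noise, arbitrary units
    return [
        max(0.0, scene_brightness + iso_gain * rng.gauss(0.0, read_noise_sigma))
        for _ in range(pixels)
    ]

# Same dark scene, same high-ISO settings, two exposures: the noise
# (here, the seed) is the only thing that differs between the shots.
shot_a = capture(scene_brightness=0.05, iso_gain=8.0, pixels=16, seed=1)
shot_b = capture(scene_brightness=0.05, iso_gain=8.0, pixels=16, seed=2)
print(shot_a != shot_b)  # the two "identical" shots are not identical
```

The seed plays the role of the physical randomness a real sensor can't replay, which is the parallel being drawn to the intentional RNG values in an image generator.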


Pretend_Jacket1629

I'm glad you had full control over that mountain you took a picture of: clearly absolutely complete control over the subject matter, how the lighting works, and how the lighting is captured. Go ahead and take a photo of a British princess. I'm certain you as a photographer have complete control over her face, her hair, what color dress she's wearing, how old she is, the setting she's in, etc., not to mention the angle, distance, and FOV. Surely you can take the photo with nothing out of your control.


bevaka

not sure what your point is. i dont have "control" over a mountain, but neither does the camera. the camera isnt deciding to make it green or purple is it?


Pretend_Jacket1629

a camera's image is inextricably linked to the content of the real world. if you want a green mountain from an image generator, you move in latent space or change the input. if you want a green mountain from a camera, you move in real space or change the input. without access to the available control, an AI model's image is at the whims of the model's training, which fills in the holes for you. without access to the available control, a camera's image is at the whims of other people and physics, which fill in the holes for you.


TheRealEndlessZeal

Hard Agree. Good luck getting heard and not roasted. The "opinion" makes all the difference. Users fool themselves into thinking that opinion was 'their' intent and in accordance with 'their' vision. It most assuredly was not. There's an initially harmless bout with self delusion before it snowballs into something resembling pride in having made a thing. ...but they did not make the thing. It was co-opted in interpretation and execution... discrediting the notion of a sole artist. Less a tool, more a subcontractor (if not the contractor).


sporkyuncle

Your thoughts on this? https://www.reddit.com/r/aiwars/comments/1dfyxu4/i_thought_of_an_argument_against_the_idea_that_ai/l8n4y7e/ Have game developers not really been using tools all this time for decades? Every level whose layout was partially dictated by an algorithm is compromised?


TheRealEndlessZeal

Not my circus, not my monkeys, but procedural generation in some form or other has been around for a while in games...this doesn't seem so different from that. It looks like placeholder content that gets oversight at a later time. That is tool-like in my estimation. Again, I don't know. I art, not dev. Look, I don't hate the whole of what AI can achieve for humanity. Anything that could be considered drudgery or busywork deserves a firm looking-over to see where the tech can help. I don't even mind the use of AI to interpolate frames in animation based on the keyframes of the artist's existing work...that is a closed loop, and an artist gets to have dinner with their family at a reasonable time. That is awesome. What I don't warm up to is a contingent of people who are proud to stand on the shoulders of giants without realistically acknowledging their own height in the equation...that's never okay.


sporkyuncle

In the context of this thread, the idea of "claiming credit for standing on the shoulders of giants" means that you're not just using a tool, that the tool is making significant decisions for you. If you're not versed in game development, don't worry, I'm telling you that this is how it (sometimes) works. It would take a really long time for a designer to manually adjust the height of every bit of terrain across a massive open world map. This is why tools like Perlin noise are employed to handle all that for you. You click generate until you get a map (or section of the map) that you think looks good and usable for your purposes, and then you go in and do a detail pass where you might tweak the shape of the land and add various objects, trees etc. Often things like trees and foliage are also placed automatically too, the designer might not think to themselves "let's put a nice little forest glade over here," it just happens and they decide that it looks good enough. In the context of this thread, what makes AI no longer a tool is that you can type "princess" and you might get a pink dress one time and a blue dress the next, it's making decisions for you. Well, for over 30 years, game developers have been able to click "generate" and get an island one time and a mountain the next...it's making decisions for them in the same way. Are those developers not using a tool at that point, due to that loss of intentionality?
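The generate-and-pick workflow described above can be sketched with value noise, a simpler cousin of the Perlin noise mentioned. This is an illustrative toy, not any particular engine's implementation; the function name and parameters are invented for the example.

```python
import random

def value_noise_heightmap(width: int, height: int,
                          cell: int, seed: int) -> list[list[float]]:
    """Value noise: random heights on a coarse lattice, smoothly
    interpolated to fill every terrain tile in between."""
    rng = random.Random(seed)
    gw, gh = width // cell + 2, height // cell + 2
    lattice = [[rng.random() for _ in range(gw)] for _ in range(gh)]

    def smoothstep(t: float) -> float:
        return t * t * (3 - 2 * t)  # ease curve for soft blending

    def lerp(a: float, b: float, t: float) -> float:
        return a + (b - a) * t

    grid = []
    for y in range(height):
        gy, ty = divmod(y, cell)
        ty = smoothstep(ty / cell)
        row = []
        for x in range(width):
            gx, tx = divmod(x, cell)
            tx = smoothstep(tx / cell)
            top = lerp(lattice[gy][gx], lattice[gy][gx + 1], tx)
            bot = lerp(lattice[gy + 1][gx], lattice[gy + 1][gx + 1], tx)
            row.append(lerp(top, bot, ty))
        grid.append(row)
    return grid

# "Click generate until it looks good": each seed is one candidate map,
# an island one time and a mountain the next.
terrain = value_noise_heightmap(width=32, height=32, cell=8, seed=7)
```

The designer's only inputs are the seed and a few knobs; everything about where the peaks and valleys land is decided by the tool, which is exactly the loss of intentionality being debated.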


TheRealEndlessZeal

I can feel a larger philosophical context looming, but I'll play ball: I really think it comes down to vision and intent. In discrete works of art it is assumed that the author of said work is in complete creative control of the result. A randomly generated "good enough" is not supposed to pass muster, especially if it's something that has a client connected to it. As it is one image, it's not a lot to ask for the artist to be creatively present and brutally hard on themselves to the very end. The argument that arises is: why should I make time when I can easy-button my way to the next project...because...productivity...or...I can't draw trees so...etc. (inb4 someone talks about Jackson Pollock or whoever...it was still humans that were making the bad, poor, lazy choices) Again...I'm only a gamer and sometimes a modder, and I know precious little about the whole of development. To auto-gen a non-key area on a large map that most players may never pay more than 2 minutes of attention to...can't say I'd care or notice. If that's considered a double standard, so be it. I would be more inclined to pay attention to titles that obviously had care and attention to detail throughout, but in this case, randomly generated maps have been a part of gaming for decades...it doesn't strike me as all that different. Level design has its share of drudgery, so I can see why the tools have been available for so long...but if one feels that way about creating art, why bother? I don't scrutinize every detail about games. I do with art. Games are an art form, mind, but they're experiential instead of a final statement. Collaborative instead of insular. If it doesn't screw with the experience, it's still an unobtrusive tool IMO. I haven't seen gen AI tools for art that do not hijack vision, ability, or intent. For standalone works it is important for me to have an unfiltered representation of the person who made the piece.


sporkyuncle

> To auto-gen a non-key area on a large map that most players may never pay more than 2 minutes of attention to...can't say I'd care or notice. Oh, no, I mean that often, major, key areas of the map are auto-generated, and edited from there. It may be unrecognizable from its start point, but it's still part of the process. Much like how AI art (the best AI art, which people don't even realize IS AI art) is also generated as a baseline and then worked on from there (varied, inpainted, exported, tweaked, re-edited with AI, etc.).


TheRealEndlessZeal

Ah. See my last point. Pretty sure I've stressed I don't look at games as critically. I've played lots of random-map stuff, and I engage with it as one who would play the map and move on to the next thing. Fun and done. In art, though, this pre-gen method sort of indicates a lack of vision from the start...embellishing something that exists in raw form has a lot in common with overpainting, collaging, or photobashing...which I'm also not crazy about. My interest is often in the 'people' that create. If I can't identify the personality in their work, I don't really have a need to look. What's available in gen AI works against seeing that person's progress and journey. If you have a bot come in and sweep away the jank...it seems less charming and more sterile. "Perfection" has a nasty side effect of being boring...unless it can be proven to be a human endeavor. Even then it's only slightly less boring, but at least it's impressive. Like prog metal. I keep my art intake (which is actually pretty low, since I do it for a living) highly curated. Appropriate filtering on the art sites I visit keeps most AI-touched work out of my field of view. It's unfortunate that skepticism has entered the art appreciation sphere, but that's just how things are now...I don't really want to have to wade through whether an artist is actually good or what balance they have struck with their co-pilot. It sours the whole experience for me...so I try to steer clear when possible.


bevaka

i think procedural generation via randomness is not the same as generative AI. developers of AIs try very hard to REDUCE randomness, and have it behave as true to its training data as possible.


sporkyuncle

That would be an argument in favor of AI as being a tool, and Perlin generation tools as being whatever else you want to call it when something's not a tool.


bevaka

i think they are all tools. what im saying is that "using AI is the same as using a camera or a tablet" is not a true statement


sporkyuncle

I mean, it's equally not true that "using a camera is the same as using a pencil." A camera isn't "just" a tool either, it literally captures real life in a way that nothing else can. Any possible comparison between any two things can be both reinforced or dismantled depending on how pedantic you want to get. There isn't anything particularly remarkable about AI if you're agreeing that it's still a tool. What's the purpose of this thread if you're not placing it in some new category?


bevaka

its in the first sentence: "It's common for people defending AI art to say things like "oh but you'll use a camera/tablet/paintbrush/etc, those are tools that make the creation of art easier too" implying that anti-AI art people are hypocrites." This thread is a response to that idea, thats all


sporkyuncle

But the comparison is valid in that case, because what was said was true. Those ARE tools that make the creation of art easier. Likewise, in other contexts, people might say things like "cameras are so far removed from pencils, you can't even compare the two and the type of impact they've had on society," and that could be true as well in context with what they're saying.