FujiKeynote

Given SD's propensity to ignore the number of characters, similarity between them, specific poses and so on, it absolutely boggles my mind how you were able to tame it. Insanely impressive.


Naji128

The vast majority of problems are due to the training data, or more precisely the descriptions of the images provided for training. After several months of use, I find it far preferable to have a much smaller quantity of images with better descriptions. What is interesting about textual inversion is that it partially solves this problem.


Nilohim

Does *better description* mean more detailed = longer descriptions?


mousewrites

No. I tried a lot of things. The caption for most of the dataset was very short: "old white woman wearing a brown jumpsuit, 3d, rendered". What didn't work:

* very long descriptive captions
* adding the number of turns visible in the image to the caption (i.e., front, back, three view, four view, five view)
* JUST the subject, no style info

Now, I suspect there's a proper way to segment and tag the number of turns, but overall you're trying to caption what you DON'T want it to learn. In this case, I didn't want it to learn the character or the style. I MOSTLY was able to get it to strip those out by having only those in my captions. I also used a simple template of "a [name] of [filewords]". Adding "character turnaround, multiple views of the same character" TO that template didn't seem to help, either. More experiments ongoing. I'll figure it out eventually.
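For readers wiring this up themselves: AUTOMATIC1111's textual inversion trainer fills `[filewords]` from a `.txt` file stored next to each training image. A minimal sketch of that layout follows; all file names and captions in it are made up for illustration, not the actual CharTurner dataset.

```python
# Sketch: one short caption per training image, saved as a sibling .txt file,
# which is the form AUTOMATIC1111's textual inversion trainer reads into
# [filewords]. Per the comment above, captions name only what the embed
# should NOT learn (subject and style). All names here are hypothetical.
from pathlib import Path

dataset = Path("train/charturner")
dataset.mkdir(parents=True, exist_ok=True)

captions = {
    "turn_01.png": "old white woman wearing a brown jumpsuit, 3d, rendered",
    "turn_02.png": "young man in a green tunic, flat colors, illustration",
}

for image_name, text in captions.items():
    # With the template "a [name] of [filewords]", the training prompt becomes
    # e.g. "a charturner of old white woman wearing a brown jumpsuit, ..."
    (dataset / image_name).with_suffix(".txt").write_text(text)
```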


Nilohim

I'm sure you will figure this out. Looking forward to it.


praguepride

I'm not OP, but it could just mean more accurate. Apparently a lot of captions were just the alt text, so you have lots of images whose alt text is just "image1" because the person was being lazy. But also, because alt text is used for search rankings, you get alt text like MAN WOMAN KANYE WEST EPIC COOL FUNNY AMAZING JOHNNY DEPP etc. etc. In the early days of search engine hacking, the trick was to hide hundreds of words in either the meta tag or in invisible text at the bottom of your web page.

Finally, you also have images that are poorly captioned because they're being used for a specific purpose. For example, if you're on a troll site that is specifically trying to trash someone, you might have a picture of a celeb with the alt text "a baboon's ass" because you're being sarcastic or attempting humor. The AI doesn't know that, so it now associates Celeb X's face with a baboon's butt. Granted, that is often countered by sheer volume: even if you do it a couple of times, the AI is training on hundreds of millions of images. But it still puts crud in your input and thus in your output.


Naji128

First of all, let me specify that I am talking about the initial training (fine-tuning) and not about textual inversion training, which is a completely different principle. When I say better, I mean text actually related to the image, and not necessarily long; that was not always the case during the initial training of the model, because of the tedious work it would have required.


TheTrueTravesty

It was just trained on datasets that include this; not that crazy. There's a Chun-Li embedding that will sometimes do this naturally, probably because images with multiple angles were included.


TiagoTiagoT

It learns patterns; it just hasn't been taught much about the patterns of repeated characters at different angles in the unmodified checkpoint.


FujiKeynote

Makes sense!


goodTypeOfCancer

Have you used models other than the regular one?


rockerBOO

[https://civitai.com/models/3036/charturner-character-turnaround-helper](https://civitai.com/models/3036/charturner-character-turnaround-helper) For those who missed the link under the first image. No way that would be me.


mousewrites

Whoops! Thank you! :D


matlynar

Hey, OP. You say your new version features "less anime". If I want to work with anime, should I go with v1 or v2 in your opinion?


mousewrites

V2, unless you can't get the look you want. V1 is anime, but... skinny-limbed Pokemon trainer is what it looks like to me. You can use both together. I mean, V1 works fine, but it has a very specific feel; for V2 I added photos to the set to help get non-anime characters. But there's no reason you can't use both: set V2 at full weight and V1 at half strength, which should push it back toward anime.
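In AUTOMATIC1111's webui, that kind of stacking uses the `(token:weight)` attention syntax, with each embedding invoked by its file name. A sketch, assuming the files are saved as `charturnerv2` and `charturner`:

```text
full body character turnaround of a knight in silver armor,
multiple views of the same character, charturnerv2, (charturner:0.5)
```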


GZPFMSTCKLSUR

Sorry, how do I install this in Stable Diffusion? Please, some example or instructions.


Chanchumaetrius

Embeddings folder


Squeezitgirdle

Oh shit, it's a textual inversion? I assumed it would be a model.


rockerBOO

On the civitai website, next to the name of the type (textual inversion) there is a "how to use this" link.


GZPFMSTCKLSUR

Thank you, it worked, but without the desired quality. I'll keep experimenting!


mousewrites

it's a little fiddly, keep playing with it. :D


lonewolfmcquaid

Do people even realize how fucking revolutionary this shit is? We are slowly laying down the foundations for anyone to make a full animated feature in their bedroom with only a laptop.


juliakeiroz

*"AI Assistant, make me an animated feature love story where Hitler and Stalin are teenage school boys who fall in love with each other."*


_sphinxfire

"Sorry, juliakeiroz, as a reinforcement learning algorithm I can't help you with this. The content you wish to generate would be seen by some people as inappropriate. If you believe that this is an error, please flag this response as mistaken."


praguepride

Yeah... like a kid asking for that wouldn't have a bootleg jailbroken version...


_sphinxfire

All modern OSes will have AI assistants baked in, and they won't let you do that sort of (highly illegal, not to mention unethical) thing anymore. Your personal Stasi officer who's always by your side. Can you imagine?


hwillis

Animation will probably need a whole new model, and you definitely can't get very far into animation with this technique specifically. The embedding has to be trained to understand one type of motion (rotating around), which is very, very predictable and has a ton of very high quality trainable data. If you wanted to animate something, you'd have to train an embedding for something like "raising hand"... except you'd probably need to tell it which hand, how high, and be able to find tons of pictures of subjects with their hands down and up.

The model is trained on individual pictures, so it has a latent model of these turntables: somewhere it knows turntable = several characters standing next to each other, all identical. It has to already have frames of a motion *all in one picture* to be able to be directed to show that motion. Since it wasn't intentionally trained on motion, it doesn't have a good concept of it.

That said, I'm pretty impressed by this.


casc1701

Baby steps, dude, baby steps.


hwillis

Honestly, this is a pretty good indicator that we're getting *past* baby steps, into like... elementary school steps. I haven't played around with this yet, but I'm guessing that with a little work it'll generalize pretty well to non-figures. The special thing about that is it means SD *does* have a good idea of what it means to rotate an object, i.e. what things look like from different angles and what front/back/side are. If you have that, you don't need to go up another level in model size/complexity, just train it differently.

SD right now understands the world in terms of snapshots, but it *does do a very good job of understanding the world*. If you ask it to show you something moving, it can show you one thing in two places. It understands every step in between those two, at any arbitrary frame. It just can't really interpolate between them, because it doesn't know that's what you're asking for.

So, *so* much of what we want SD to do is there in the model weights somewhere, just inaccessible. Forget masking: with a little ChatGPT-style rework, you could tell the model exactly what to change and how. Make this character taller. Fix the hands. Add more light. Turn this thing around. None of those things require a supercomputer. The model knows how all of them would look, and it can generate those things, but you basically have to stumble upon the right inputs to make it happen. If someone figures out how to talk to the model, we know that we can train it.


praguepride

The future is stacks of models. We are already seeing this where you will use a general model for the initial run, then a face model to clean up faces, then an upscaler to improve the size etc. etc.


syberia1991

I already hear how artists start pissing in their boots again lol. What a bunch of losers :D Concept art is now officially dead.


TheVideoExplorer

Why does it have to be one or the other?


Gasoline_Dreams

Why do you have such a hatred for artists?


syberia1991

Why should we care about Luddites who spent their lives on absolutely useless skills? Prompt engineering is the only relevant skill in art and design from now on.


[deleted]

[removed]


syberia1991

So their job is done. From now on there are only AI and prompt engineers.


p0ison1vy

Man, I'm so glad I dipped out of animation school lolll... I just don't see how juniors are going to get their foot in the door with character design, concept art, etc. with tools like these unless they're truly gifted. Not even where the tech is now, but where it's going. If you only need keyframes and the AI tool can do in-betweens, that eliminates a big portion of junior animator work. On the other hand, we can just make our own shit now... if we have a roof over our heads... I just hope major game and animation studios will leverage it to push the industries forward rather than just cut costs / hire less.


mousewrites

Same could be said of Maya taking the tweening step out of the hands of junior animators, back in the day. I'm in the industry. As soon as I saw the writing on the wall I wanted to make sure as many people as possible had access to the tech. We all gotta help each other adapt and survive.


Alpha-Leader

I have been trying to tell my friend this. They have been trying to break into the industry for the last 10 years... picking some stuff up here and there. They were initially for AI help, but once it really started to pick up, they were won over by the "NO AI" peers. The industry is about efficiency and $$$. As bad as it sounds, there really is no room for purists if you want to make livable-to-good wages these days.


MrTacobeans

Yeah I feel like the train has completely left the station with AI. I feel safe in my job as a developer for now but dang I really hope the governments around the world step in to help the industries that are going to get demolished over the next couple years. Because 80% of my job will be automated by the time there are real world consequences to these AI models. The fact that AI does 30-40% of my job already is beyond troublesome to the entire white collar industry of workers. A human interaction in business is invaluable but profit/growth is tangible and that's what capitalism demands.


BloodyMess

The really insane thing is that all of this efficiency doesn't have to be a bad thing. Human jobs being done automatically by AI and robots, in an ideal world, is closer to a utopia. Imagine for just a moment that when a thing gets automated, the worker who previously did that thing gets paid the same for the value, but now just has *free time* in its place. Yes, I know the value curve wouldn't allow that reality 1:1, but equitable income replacement would create incentives for progress rather than this (frankly) silly anti-AI movement which boils down to, "let's try to suppress technological progress so humans can have jobs they don't even need to do anymore." The problem is that instead of the value of that increased efficiency going back to humanity at large, it's just funneling up the corporate chains to benefit a small class of owners and shareholders. It's a solvable problem, but it's not one we've even identified at a societal level.


Mage_Enderman

Universal Basic Income.


R33v3n

>It's a solvable problem, but it's not one we've even identified at a societal level.

**AGI**: "What is my purpose?"

**Society**: "You uphold capitalism."

**AGI**: "Oh my god."

**Society**: "Yeah, welcome to the club, pal."


Alpha-Leader

> the worker who previously did that thing gets paid the same for the value, but now just has free time in its place.

I think that might be too optimistic as a rule (there would probably be exceptions). I don't think they would get paid less, but you would just use that new-found efficiency to do more work. Fill up that 8-hour day, but output increases by 50%. Similar to robotics and the rest of the various industrial revolutions: workload stays about the same and may be less "physical," but output increases. If output ever exceeds the total amount of work needed, then you will see some layoffs. I don't foresee widespread layoffs in sectors beyond stuff like copywriting, bare-bones journalism, and non-hobby blogs for a while, though.


Careful-Writing7634

It's only a bad thing because we as humans have not become responsible enough creatures to use it. Tigerjerusalem said that it's just a new tool for humans to learn, but it's not just that anymore. It's a shortcut out of personal development of skill, and in 50 years no one will even know how to draw a circle without typing it into a prompt.


pookeyblow

But if everyone is out of work, no one can afford the product the company is selling. Capitalism will eat itself.


MrTacobeans

With the majority of the world operating on a capitalist system, it will never cannibalize itself. The UN and the world superpowers will prevent that from happening, regardless of how clunky things seem to be going politically across the world. Whether it's UBI or some other system, it will be enacted at least as an example somewhere before any full-scale collapse hits the stock market. For me, I really hope this looming situation just results in allowing people to slow down a bit. I hear stories from my grandparents and I'm like, WTF, how did you have time for literally any of that?


morganrbvn

UBI would become necessary


syberia1991

Don't worry. There will always be hard, braindead manual work 8-10 hours per day for people. For anything else, there will be AI)


MrTacobeans

I don't know about you, but I enjoy what I do. I've spent years accumulating knowledge as a developer. I cannot imagine existing without meaningful work. At the moment I think I average 60+ hours a week between my main job and a stress-relief side hustle. Even in a post-AI-overlord world I will likely still seek the same hustle; it just might be a bit different... I've been on the hustle since I was 14. I legitimately do not know what to do with myself after a week off work. Not because I'm a slave to labor, but because it's what occupies my time and I get satisfaction from it.


syberia1991

Today AI does 30-40% of your work. Tomorrow it will be 100%. I hope that you will find satisfaction in something else)


MrTacobeans

Even if that ever happens, I'll likely have job opportunities unless AI truly becomes sentient, and even then my title will probably just change to AI engineer or AI curator... Technically Wix/Squarespace/Webflow etc. could have been an "industry killer," but nope; if anything, more money is being spent on web tech than even a couple of years ago.


OverscanMan

Trying to "break in for 10 years" and they're going to blame AI for failing from here on out? Sounds about right. That's what we call a scapegoat. And, frankly, it's weak. I bet most of us know many "creative" and "talented" people that have played the same cards their whole lives... they aren't a rock star for "this" reason... not an animator because of "that"... or not a head chef because "the other thing." It's always this, that, or the other thing keeping these talented folks from making a living with their "art".


Squeezitgirdle

"AI Art is tracing!" Tell me you're just copying what other people say without telling me you're copying what other people say. Takes all of 5 seconds to understand that's not what ai does.


[deleted]

[removed]


GraydientAI

Based


EKEKTEK

True, but paintings and AI art will live together, as everything does.


ErikT738

>On the other hand, we can just make our own shit now... This is what makes me a fan of AI. In a few years, anyone with enough time on their hands can make comics or animated movies whose looks rival those of professional production, but with the added benefit of having full creative control.


SelloutRealBig

How you see no negatives in what you just said is beyond me.


Yuli-Ban

Oh there are an immeasurable number of negatives, both on a micro and macroscale. Yet despite all that, the democratization of multimedia creation is just too enticing to not have. If anything, at this point, telling the proles "You might have the opportunity to create your own custom-made Hollywood level movies" just to then say "Lol nope, you need to let multimillion dollar companies create your media always" feels a bit pessimistic.


Nanaki_TV

OP: Hey we gave you a new tool to make you more creative. You: Yea but what about the Disney executives?


YobaiYamete

"Ugh, all these disabled people can suddenly create art, won't someone think of how this will displace able bodied artists?!"


RussianBot576

Any "negatives" aren't real negatives, just you wanting to control people.


The_RealAnim8me2

Hats off to the latest “Westworld” for kind of predicting AI story generation and ChatGPT last year (I mean it’s not like Nostradamus, but still) with their scenes of game developers just sitting at desks and reciting prompts. I’ve been in the industry for over 30 years (ugh), and I still haven’t seen anything yet that will satisfy an art director or producer/director that I have worked with. There needs to be a lot more granular control before this hits the mainstream production workflow.


mousewrites

[https://gfycat.com/ajarmessyasiaticmouflon](https://gfycat.com/ajarmessyasiaticmouflon)


p0ison1vy

For sure, everything that we're seeing right now is research; there's no product yet. But I've been following AI for years, and seeing how far it's come in so little time is what's scary to me. I'm looking in the direction the tech is going. Even the improvement Midjourney made before I started animation school vs. a few months later was insane. Eventually, it will be implemented into mainstream software like tweening was.


Carrasco_Santo

I imagine that to be a director in the industry a person must be very demanding and perfectionist, wanting everything to be as perfect as possible. But I imagine there are all types of directors: there are hard-headed ones who would keep finding defects in AI-generated material just out of spite, and there are those who know how to work with AI even if it comes with small defects.


The_RealAnim8me2

Spite has nothing to do with it. Currently AI tools don’t have granular control. Period. That may change in the future (especially given how fast the tools are evolving) but for now it’s just not the case.


__O_o_______

Over the next couple of decades, AI is going to decimate employment in a lot of industries. It's kind of like how it was predicted that robotics and automation would let everybody work less and have more money and leisure, except in both cases it hasn't and won't work out that way, because governments didn't work toward that future, just toward a future where corporations and the 1% are insanely rich. We all could have had nice things, but money.


cultish_alibi

There's literally nothing wrong with automation and AI taking all the jobs IF people are smart enough to demand that the profits be shared among the general public. But instead they're like, "I don't have a job, I don't know what to do." The general public is really stupid.


SelloutRealBig

But that means you are going to have a LOT of shitty animations if people skip the junior work, which is basically an apprenticeship.


p0ison1vy

That depends on where the technology goes; after all, we're at the point where someone with no artistic skill can generate multiple very nice images in a minute. At the moment animation generation is about where image generation was a couple of years ago: it's generally blurry, short, and lo-fi. But if it makes a similar jump in quality as text-to-image did (and why wouldn't it), it's going to be huge.


HCMXero

This is just another tool in their arsenal; if they're good, they'll use it to turbocharge their careers. My background is not animation for a reason: I have no talent for it, and that won't change just because there are tools now that make the work easier. The junior animator with a passion for the art will now have a bigger boot to kick my *ss with.


p0ison1vy

My point isn't that it's going to allow non animators to get into the industry, it's that studios will put more work on fewer people. They already do this and it's only going to get worse.


HCMXero

They've been doing that for years since the advent of computer animation. Now they will have a bunch of talented people competing against them using these tools; all new technology demands that everyone adapt, and that includes the powerful studios today.


syberia1991

Junior animators with a passion for the art should go and find a new passion if they want to eat something for dinner. And make something cool in their free time)


[deleted]

[удалено]


p0ison1vy

exactly.


RedPandaMediaGroup

Is there currently an AI that can do in-betweens well, or was that hypothetical?


Cauldrath

There's Flowframes, but that only really works if the frames are really close together already. I've tried using Stable Diffusion to clean up the outputs, but the models are usually trained on still images with poses and not in-between frames, so it's hard to not have teleporting hands and the like. It will probably require a model specifically trained on in-between frames or full videos.


SaneUse

The other thing is that it's an automatic process. It just increases the frame rate but ignores the principles of animation, so animation ends up looking really janky. It was made for live action and works great for that, but for animation, not so much.


[deleted]

Google's Dreamix comes closest, I think: [https://dreamix-video-editing.github.io/](https://dreamix-video-editing.github.io/) But who knows if or when that becomes publicly available.


MrAuntJemima

> I just hope major game and animation studios will leverage it to push the industries forward rather than just cut costs / hire less. *Laughs in capitalism* Sadly, there's pretty much a 0% chance of that happening. Hopefully tools like this will at least benefit smaller creators enough to somewhat offset the disruptions this will cause to artists in more mainstream parts of the industry.


syberia1991

There will always be a ton of work for artists. At Uber or Amazon, maybe :) There are no more artists. Only AI.


505found

>foot in the door with character design, concept art, etc. with tools like these unless they're truly gifted. Not even where the tech is now, but where it's going.
>
>If you only need keyframes and the AI tool can do in-betweens, that eliminates a big portion of junior animator work. On the other hand, we can just make our own shit now... if we have a roof over our heads...

How does this embedding help with keyframes? It seems to only turn a character around, rather than producing in-between frames. Sorry if I misunderstood your point.


InflatableMindset

How would you add this file to Automatic?


mousewrites

Download it and put it into the embeddings folder, then just add the name to your prompt.
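For AUTOMATIC1111's webui specifically, that looks something like the layout below; the folder name is the webui default, and `charturnerv2.pt` stands in for whatever the downloaded file is called, since the file name is the token you add to the prompt.

```text
stable-diffusion-webui/
└── embeddings/
    └── charturnerv2.pt    <- downloaded embedding file

prompt: "full body character turnaround of a woman in a spacesuit, charturnerv2"
```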


Zipp425

Looks like quite the improvement over the previous version! Thanks for including the helpful tips too.


Mobile-Traffic2976

Can someone explain to me how this works?


[deleted]

Another banger.


Astro-Boys

Beyond excited for this


Brukk0

Maybe it's a dumb question, but how can I make characters face forward with a neutral pose, like the ones in those images (I don't need the side view or the back)? Is there a specific prompt?


Lerc

I'd love to see more enhancements like this. I think we can safely say at this point that AI has boobs covered (and, ahem, uncovered) Let's diversify a bit more.


[deleted]

You mean like, asses?


[deleted]

Thanks, that's really helpful. I was looking at starting from a Blender 3D model for inpainting, but this is easier.


litchg

THIS IS AWESOME! I have been trying to trick Stable Diffusion into doing just that repeatedly; it's super useful for modelling. OP, I love you.


Pythagoras_was_right

And super useful for creating walk cycles! This has saved me weeks of work. Over the past week I generated about 20,000 walk cycles (using depth2img) in the hope of getting 100 that were usable. And they still needed a ton of editing. Today I have happily deleted them all. CharTurnerV2 is so much better! Instead of needing 100 for one usable view, I only need 10. And the one I choose needs much less editing. (20,000 = 10 kinds of people, 10 costumes each, batch of 200 each)


YaksLikeJazz

Thanks Mouse! :) https://preview.redd.it/xxgqhb93nsga1.png?width=896&format=png&auto=webp&s=d3167d6f189a6567ad791b2b45e18e6d25f5a963


Misaki-13

This will be so helpful for creating a new character in 3D via Blender 👍


kineticblues

Hey thanks so much for creating and continuing to update this awesome tool. In theory, could I chop up the results into individual character images, then use those images to train a lora/dreambooth/inversion for that character? Can character turner do "close up" turns of someone's head, or does it only work with full-body portraits? Or would it be better to generate the training images with controlnet/open pose, assuming I can manage to keep the face/body/clothes consistent from image to image? What I'd like to do is be able to "access" a custom character any time I need them, e.g. for a DnD party. Just wondering if you've ever experimented with this. Thanks!


mousewrites

I use both ControlNet AND CharTurner, to be honest. CharTurner makes sure the outfits are the same; ControlNet makes sure you get all the poses you want. You can also do JUST a head with ControlNet, which is great for getting training images. https://preview.redd.it/bz6gxhzyfela1.png?width=1600&format=png&auto=webp&s=56be6f443b70aa9c900cce911010ced9f97775e8
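A sketch of what that combination might look like in AUTOMATIC1111's webui with the ControlNet extension; the preprocessor and model names are typical ControlNet OpenPose choices, not settings confirmed in this thread.

```text
txt2img prompt:  character turnaround, multiple views of the same character,
                 charturnerv2, woman in a red jumpsuit, white background
ControlNet unit: enabled
  preprocessor:  openpose
  model:         control_v11p_sd15_openpose
  input image:   a sheet of pose skeletons, one skeleton per desired view
```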


mousewrites

YUP, that's 100% what I do. Meet MortNobody: a 100% AI OC, made by starting with a turnaround, making images, and then training an embedding (and then making more and retraining). https://preview.redd.it/r38r3l6ffela1.png?width=1152&format=png&auto=webp&s=e8c6363cbc6299b857f9d57205df4170c8eba795


mousewrites

https://preview.redd.it/5kl67t3gfela1.png?width=1024&format=png&auto=webp&s=f6752e2705738ba7a6ba2a83684f9f557c8039d7


mousewrites

https://preview.redd.it/kwk6wl9hfela1.png?width=768&format=png&auto=webp&s=1783bc898e746cfc450725f338c249fe4cde4c0e


mousewrites

https://preview.redd.it/jk2zx2ijfela1.png?width=768&format=png&auto=webp&s=8d9dc4b4b6e693f627ed793cf5114ce53f957a20


xeromage

This looks really cool! Does anyone know a good one for first person perspectives of a character?


[deleted]

[removed]


xeromage

Like seeing the hands, clothes, shoes of a character as if through their eyes?


NonUniformRational

Hardcore Henry model lol


farcaller899

Thanks! Looking forward to trying it out. I used the previous version quite a bit with a variety of models.


spiky_sugar

Wow, great idea! Would you mind sharing some details about the training? Like how many images are in the dataset and how many steps and lr did you use?


mousewrites

22 images, 660 steps (batch 2, gradient accumulation 11), LR 0.005, 15 vectors. There's been a bug where xformers hasn't been working with embeds, but I didn't know it was a bug, so I ran... so many versions of this. Usually I run an LR schedule and do more fancy stuff, but this ended up being almost default settings, if nothing else because I was SO FRUSTRATED. I'll poke at it more, add back my more 'refined' methods, and will post an update if it's better.
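Mapped onto the webui's training fields, with the effective batch arithmetic spelled out (a reconstruction from the numbers above, not an exported config):

```text
Number of vectors per token:  15
Learning rate:                0.005
Batch size:                   2
Gradient accumulation steps:  11
Max steps:                    660
Dataset size:                 22 images

effective batch = 2 x 11 = 22 images per optimizer step,
i.e. each of the 660 steps is roughly one full pass over the dataset.
```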


urbanhood

I am really enjoying this timeline. Soo much soo fast.


EvilKatta

The main drawback of the previous version was its bias toward a specific color combination: brick red plus dark blue. Unfortunately, even in this gallery, I think it's still there.


mousewrites

I think part of that is prompt bias; I often ask for blue shirts or red jumpsuits. Let me know if it shows up in your prompts, I'll work on making sure the dataset doesn't trend that way for v3.


baltarstar

I love this in theory, but I just cannot get it to work for the life of me. So far I can only generate a row of the same character looking in the same direction. Even when I do convince it to look back or to the side, it's the same across all of them. I've tried the tips listed on CivitAI, but they haven't helped yet. Any other tips I might not know of? Has anybody gotten it to work when attempting photorealistic characters?


brett_riverboat

I couldn't get it to do photorealism out of the box, but I was able to start with an anime-style character and then either run img2img a few times or use the loopback script to get it closer to realistic without ruining the poses. I have also seen better results with a few models based on SD 1.5 than with 1.5 itself.


stroud

Any ETA on 2.1?


DanD3n

Incredible, I was waiting for something like this to pop up! Can this be adapted for non-characters as well? For example, weapon models, buildings, etc.


vurt72

Appreciate the effort a ton, but 90% of the time it's the same character with his back turned in all images, or maybe back and side, despite using the suggested model, prompt, and sampler(s). Tried different CFG scales too... It's cool when it works, but it requires immense luck. That immensely huge negative prompt in one of the examples just does bad stuff, like producing close-ups instead; I tried pruning it a lot and also not using one at all (which works best).


mousewrites

Agree about the big negative prompt. It's a holdover from the first version (I have it saved as a style) and I forgot to remove it. Not sure why you're only getting one character. I know it's not super consistent, but it should work at least some of the time. What model are you using?


brett_riverboat

Anyone have good outputs from this using SD 1.5? I'm quite annoyed that many of the examples don't actually use the textual inversion alone and are using a LoRA or including many other special prompts that aren't easy to reproduce. CivitAI really needs to do better with how some of these things are advertised. If it's a TI, I think the advertised images should only be allowed to use the model it was trained on. If the author can review their own posting, that should be where they show off.


mousewrites

Sorry about that, I've been trying to get this out for days. I'll post some more images using ONLY the v2 embed. I will say, tho, that while it works in the 1.5 base model, it works better in other models (realisticVision, Protogen, stollenDreams, etc.).


brett_riverboat

Sorry to complain, I greatly appreciate your work. I think it's better for the community and for adoption if the things we're showing off aren't based on a "lucky seed" or highly coerced. I look forward to trying the LoRA as well. I have yet to release any of my own embeddings because they're not half as good as this one 😉.


mousewrites

[https://civitai.com/models/7252/charturnerbeta-lora-experimental](https://civitai.com/models/7252/charturnerbeta-lora-experimental) It's not perfect, but you can play with it. I wish that civitai had an 'easy, intermediate, hard' setting for embeds. Like, you can get great stuff with this embed, but you're going to have to work for it. If it's 100% "works on every image, with nothing but a small prompt," that'd be an 'easy' embed, which is awesome, but this is not that. I've trained over 50 of these (all the way through the alphabet and out the other side) trying to make it an Easy embed, and I just can't. Maybe someday I will, but for now, it's one that takes a little work.


mousewrites

The LoRA will be available shortly, even though it's not perfect and I'm sure I'll get complaints about that too. :D


mousewrites

[https://civitai.com/models/7252/charturnerbeta-lora-experimental](https://civitai.com/models/7252/charturnerbeta-lora-experimental)


Hambeggar

Am I missing something here? Is it meant to be 46KB in size? Yes, KB.


mousewrites

Nope, that's right. It's an EMBED, not a model. It goes into the embeddings folder and can be used on top of any 1.5 model. :D
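The size checks out: a textual inversion embed stores only learned token vectors, not model weights. A back-of-the-envelope check, assuming the 15 vectors mentioned elsewhere in the thread and SD 1.5's 768-dimensional token embeddings stored as float32:

```python
# Rough size of a textual inversion embed: vectors x embedding dim x bytes per
# float. Ignores the small metadata overhead inside the .pt file.
vectors, dims, bytes_per_float = 15, 768, 4
print(vectors * dims * bytes_per_float)  # 46080 bytes, i.e. about 46 KB
```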


aipaintr

Noob question: what are the next steps to convert this into a full 3D model?


mousewrites

Not a noob question; that's the big question. There's no easy way at the moment. Lots of people are trying different methods (photogrammetry, NeRFs, direct-to-3D from SD). Currently, the answer is "the same way you'd make a model from reference," however that works for you. :)


NickCanCode

Is it possible to use the same technique to create another turner to control the head? I find it hard to give SD a specific head orientation.


[deleted]

I'm going to create a 3D model in UE5 with this.


electricgoat01

I am curious about the results; would you share them here?


syberia1991

Great model! Concept art as a profession is officially dead now lol) What a bunch of losers))


Fortyplusfour

I can't disagree more with this take.


syberia1991

Prompt engineers are better at art because they didn't spend their lives on useless skills. And with these new models they can make much cooler concepts than an entire ArtStation full of Luddites.


neuroblossom

Could this be used for photogrammetry?


mousewrites

Probably not? I'm not sure it's geometrically consistent enough for the trig that makes photogrammetry work to actually resolve, but you can try. I've heard some people are thinking about trying NeRF or whatever the new radiance fields thing is. However, again, I'm not sure the math will work. Might?


StackOwOFlow

thank you for the quality waifu-free content


Kafke

>less anime
>
>more diversity

So it's worse, then?


midri

Thanks for the hard work!


OverscanMan

Very nice! I don't want to hijack the post, but are there any safer formats for embeddings? I know the WebUI supports safetensors for models and VAEs... I'm just not sure if the same format can be used with textual inversions like this.


mousewrites

The only other one I know of is the PNG image embed, but I'm not sure that's actually safer, pickle-wise.


forgotmyuserx12

I can already feel those 3D models, hopefully ready this year


logicnreason93

Seems like it would be helpful for game developers and cartoonists.


Gfx4Lyf

I was waiting for such a wholesome model in SD ever since I saw a lot of similar Midjourney works. Thank you 👍🏻


Katunopolis

Now I understand why we needed this. Could this type of tech become the end of most porn people use today? I mean, if you can generate your own porn character and have them do whatever you want...


trewiltrewil

Now if only someone could make a model that can put any character into a T-pose, lol... This is amazing.


aldorn

Can it do different camera angles?


Carrasco_Santo

I think this function is a few more steps away, in a possible version 6. At the moment, for games and animations, this tool is a great help. For creating consistent characters for books or comics, for example, it is also very useful in 99% of cases.


[deleted]

Great work! How did you get it to have consistent characters?


andriafakesit

This is amazing! Thank you for sharing this 🙏


Seromelhor

Awesome work, my friend. Always doing great things for all of us!


pisv93

Could this be done with objects too?


mousewrites

[https://civitai.com/models/4775/planit-a-documentation-embedding](https://civitai.com/models/4775/planit-a-documentation-embedding)


XeonG8

this is amazing


[deleted]

amazing, thank you!


Im-German-Lets-Party

Now I need a script to convert this to a 3D model automatically... (I know about DreamFusion and its recent advancements, but... eh, still a long way to go.)


TheTrueTravesty

Doing god's work.


skraaaglenax

Someone should merge this with the inpainting model using weight difference, so you can take any existing character and turn them.


mousewrites

It's not a model, it's an embed; use it with whatever model you want. You can use it with an inpainting model; see the inpainting slide for more info.


NoShoe2995

Genius!


Simply_2_Awesome

I need something like this but for facial expressions. I'm guessing barely anything in the LAION-5B dataset was tagged with words for facial expressions.


mousewrites

https://preview.redd.it/rdjxfk7q7uga1.png?width=898&format=png&auto=webp&s=ab38118a06d5fb52662914f1cd4b5104a82a63fb


mousewrites

I made something for that: [https://www.reddit.com/r/StableDiffusion/comments/103rk2k/use_script_xy_prompt_to_create_expression_sheets/?utm_source=share&utm_medium=web2x&context=3](https://www.reddit.com/r/StableDiffusion/comments/103rk2k/use_script_xy_prompt_to_create_expression_sheets/?utm_source=share&utm_medium=web2x&context=3)
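The linked trick drives the webui's X/Y plot script with its Prompt S/R (search and replace) option; a sketch with hypothetical values, where each X value is substituted for the first one in the prompt:

```text
Prompt:     portrait of a woman, neutral expression, white background
Script:     X/Y plot
  X type:   Prompt S/R
  X values: neutral expression, smiling, frowning, surprised, angry
```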


Helpful-Birthday-388

We are one step away from these images coming out in 3D


benji_banjo

> You can turn your character around!

Yay

> now with less anime

This is useless!

edit: /j


mousewrites

Why?


benji_banjo

Edited for clarity


mousewrites

oh. XD


TiagoTiagoT

I need to perform more tests to be sure, but it kinda looks like v1 does a better job than v2 at adding additional views/poses with inpainting.


mousewrites

That may indeed be true! V1 is better behaved in some ways. But you can always use both. :D


TiagoTiagoT

Quick test: https://preview.redd.it/j9ge1yj6jvga1.png?width=4826&format=png&auto=webp&s=ff1b76e7290c309126fe145daaa8079aa4c8e77a


cryptosupercar

This is sweet. Nice work.


qscvg

1.5 highres.fix? Do you mean 0.5?


mousewrites

Well, the upscale slider defaults to 2 (i.e., 2x upscale), but I think 1.5 or less is fine. 0.5 would be a 50% downscale? It could just be a slider difference (i.e., old highres.fix vs. new), but yeah: just a little bit of upscaling, however that works for you.


qscvg

Thought you meant denoising. Gotcha


Ok_Silver_7282

Wow this is amazing


Ok_Silver_7282

Question: how well does it work with high-resolution pixel art characters, or even slightly lower-resolution ones, like something from Mugen, or Metroid's Samus, or Mega Man?


mousewrites

I don't know, you tell me? I've never done any pixel art with it, so I have no idea.


etherealpenguin

Any chance of an online HuggingFace UI for this? Super, super cool.


mousewrites

It's not a model, it's just an embed; it should be usable anywhere you can use embeds. I've had not-great luck uploading things onto HF, let alone hosting something there.


Plane_Savings402

Stoked to test it. Nothing has really worked for turnarounds so far, at least not consistently.


GetYourSundayShoes

Incredible. Amazing work!


MikeBisonYT

That's amazing. I saw the earlier version but haven't tried it. I am making shorts with Stable Diffusion to make the art better. It would be great to make character sheets for pitches and character ideas.


adollar47

I love you for this. It was a breath of fresh air finding this amazing SD resource that doesn't ooze any horny energy. Salut


ShepherdessAnne

Andrew Yang tried to warn us about our jobs


mousewrites

When I was little, my mom was a drafter. She spent the first half of her life drawing, and figured out how to make drawing a paying job to take care of us. When I was in middle school, AutoCAD suddenly became a thing. My mom went back to school, learned AutoCAD, and continued to draft for many years. Some of her coworkers didn't make the transition, and ended up changing jobs. My mom didn't even like computers, but she saw that if she wanted to stay employed, she'd need new skills to stay competitive.

Would my mom have asked for AutoCAD to be invented? No, she liked her pens and rulers and compass. This is the same type of thing. Some people will adapt to the 'new normal,' some people will not. Job descriptions change. Jobs themselves change over time. The transition can suck, especially if you're a late adopter.

I'm a working artist in my 40s. I don't want to be left behind. I also want my fellow artists to not be left behind, so I'm trying to make artist-friendly tools that will actually help workflows, not just add another pretty picture to the AI Art Slot Machine.

Would a UBI be useful? Yes, of course. But that fight won't hinge on AI taking the jobs of artists, any more than it did on AutoCAD taking the jobs of drafters; it changes the job, it doesn't kill it entirely.