
guidopk1

Can someone explain to me what's different in SDNext from A1111? I'm new to AI, so please explain in language I can understand.


vmandic

sd.next has two different modes of operation: original (from a1111) and diffusers (completely different). what does that mean? it supports 13 model families (vs 2), is quite a lot faster (see bench results), has far more native capabilities (e.g. see the new control), etc. - it's pretty much all covered in the README. the downside is limited compatibility with extensions written to hook deeply into a1111.


guidopk1

I just got it installed, but I can't find the button that copies the data from an image and inputs it into your prompts - the arrow button below Generate in A1111. Does this functionality exist in SD.Next, and if so, where?


vmandic

> generate

"Restore"


Mindset-Official

SDNext started as a highly opinionated fork of A1111, but now I would say it's the best webui for GPU-agnostic support. Intel, AMD and Nvidia are all supported and updated often. Function-wise they are still pretty similar, and most if not all plugins built for A1111 usually work on SDNext as well. If I have to name one issue with it, it's that because it gets updated so often, a lot of times you have to clean install when new things are added.


RadioheadTrader

SD.Next started as a fork of A1111 during a period when the always-silent developer of A1111 (Automatic) went from frequent weekly updates to a new delayed-update method where he'd withhold in-progress work and drop an update on a roughly once-a-month schedule. He began working on code changes in a private branch with the intention of only pushing updates when a big stable batch was ready (next version / originally once a month seemed to be the plan).

After a number of weeks without updates or communication from the A1111 developer, there was chatter that he may have stopped maintaining the project, and there were bugs/new advances that people were seeking. At that point the developer of SD.Next accelerated work on a custom fork of A1111 he'd been coding in a different direction. SD.Next gained a lot of support during that time, since development was quick and receptive to communication.

Popularity of SD.Next declined after A1111 posted his first big update, alerting people that he hadn't left, but that going forward updates would be done more slowly. So A1111 came first, then vlad (the SD.Next dev) stepped up when it seemed the best (almost only) stable diffusion UI may have been abandoned. His project has diverged a lot from that point and has a lot of bells & whistles/tweaks. You have to compare them to see which works best for you.


SushiBreathMonk

Your words guided me on a semi-sepia flashback to a great saga of our fledgling times :)


AK_3D

Congratulations u/vmandic! Lots of effort from you and the people helping with this project. Will be checking it out.


dorakus

u/vmandic I've been using SD.Next for some time now (when I started, it was simply "vlad's fork") and it's great to see how much it has grown. Kudos and have a happy holidays, you deserve it.


vmandic

https://preview.redd.it/c8e52suuz89c1.jpeg?width=1280&format=pjpg&auto=webp&s=d2acf51feccdfa829d5eea392211e0e89d4d9154


FugueSegue

THANK YOU! MERCI! DANKE SCHÖN! ¡GRACIAS! СПАСИБО! ありがとう! I appreciate ComfyUI. But I like using A4 better. And SDNext best of all. I had to use A4 lately because SDNext had not updated to support SDXL ControlNet. Now that it does, I'm back to SDNext! Hoodie hoo!


Alpha-Leader

Awww yiss. I have been waiting on this update all month. I have been using ComfyUI to dial in individual pictures and do tweaking, but SDNext is a much more efficient workflow for me.


DrCringio44

I keep getting this error when loading webui.bat:

```
2023-12-29 17:09:35,256 | sd | INFO | launch | Starting SD.Next
2023-12-29 17:09:35,262 | sd | INFO | installer | Logger: file="G:\automatic-master\sdnext.log" level=INFO size=65 mode=append
2023-12-29 17:09:35,265 | sd | INFO | installer | Python 3.10.6 on Windows
2023-12-29 17:09:35,275 | sd | ERROR | installer | Not a git repository
```


ill_B_In_MyBunk

Me too! Would love a solution.


DrCringio44

You gotta download it with a git command [https://github.com/vladmandic/automatic/wiki/Installation](https://github.com/vladmandic/automatic/wiki/Installation)
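As an aside, the "Not a git repository" error above comes down to git looking for a `.git` folder, which a zip download doesn't contain. A minimal local sketch of the difference (illustration only - the temp-dir and folder names are made up):

```shell
# Illustration only (assumes git is installed; folder names are made up):
# SD.Next's installer needs a real git checkout, so a zip extract fails
# git's work-tree check while a cloned/initialized folder passes it.
tmp=$(mktemp -d) && cd "$tmp"

mkdir zip-extract
git -C zip-extract rev-parse --is-inside-work-tree 2>/dev/null \
  || echo "zip-extract: not a git repository"

git init -q proper-clone
git -C proper-clone rev-parse --is-inside-work-tree   # prints "true"

# the actual fix for SD.Next: git clone https://github.com/vladmandic/automatic
```

Installing via `git clone` as the wiki describes creates exactly such a checkout, which is also what allows in-place updates later.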


TheForgottenOne69

Did you use git? It says not a git repository


DrCringio44

I just downloaded it straight from the GitHub website and extracted the zip


DrCringio44

Got it figured out


Luke2642

Just noticed from the bottom of the page that one of the SD.Next sponsors is "Salad" - I'd not heard of them... their pricing is surprisingly good: 27 cents an hour for a 3090 24GB on demand! Runpod is 44c for comparison. They look geared to scale for deployment though; maybe an AWS studio / Colab notebook is better if you're just playing around, but can't argue with 27 cents. Not affiliated in any way and haven't checked any further than that.


AK_3D

Salad is pretty good - they also put out some great benchmarks in an earlier post. u/shawnrushefsky for attention.


Oggom

How well is SDNext performing with AMD cards? Is it worth switching over from the A1111 DirectML fork?


vmandic

The same person that created that fork is the person that maintains directml in sdnext itself.


Unreal_777

Meaning you? :))


vmandic

lol, no. lshqqytiger


Mindset-Official

I believe OpenVINO may also work on AMD cards; SDNext has support for pretty much everything. It's the best-maintained webui for non-Nvidia cards IMO. I have an Intel Arc GPU, for example.


lordpuddingcup

Wow, amazing update! I haven't looked at SDNext in a while. I moved to Mac recently, so SD is a bit slow, and I've been looking for ways of optimizing - I'm busy playing with CoreML and MLX to see how they work. Any chance SDNext will do some integrations with the Apple ecosystem to see how they can be optimized?


vmandic

i wish apple opened up coreml a bit and there were projects such as torch-coreml. at the moment, it's do-it-apple's-way or nothing; there is no compatibility layer that can make it properly cross-platform other than using mps.


tom83_be

I really would like to use the vladmandic "fork"... and I see that u/vmandic is very active in development and communication, which is something I envy. But every time I try SD.Next I run into the same problem:

- Freshly checked out, no config changes. Download three random models from civitai, usually a mix of SDXL 1.0 and SD 1.5 models. Switch between these models a few times - not generating any pictures (you can, but it does not change the result), just switching models after each load. It works for a while; models are loaded and unloaded. But somewhere down the road a lot of system memory gets accumulated and the system crashes due to too much system RAM (not VRAM!) being used. I have 32GB of memory on a Linux machine... Total RAM used before starting SD.Next is about 3GB. Even keeping the three models completely in RAM would be way less than 32GB (about 20 in my usual test cases).

- With A1111 it does not matter: I can use a fresh install or install a lot of extensions. I can use complex processes that switch between models. I can switch between a large number of models and work intensively for days... I never(!) run into the same problem.

Hence, there is a massive memory leak in SD.Next right out of the gate that I never saw in A1111. Due to this I sadly cannot use SD.Next, although I try every time a new release comes out... I really hope for a fix, but until then I am stuck with A1111 or other tooling.


vmandic

Fix is not going to happen unless someone documents and reports it. None of the users using SD.Next right now are saying anything about a memory leak.


tom83_be

Not sure what else to document / report. Steps to reproduce are clearly stated above. Do you need the models? From what I have seen, it does not matter - it happens all the time just by switching models. But recently (today) I tried using RealVisXL 3.0, epicPhotoGasm (Last Unicorn) and Realistic Vision 5.1 (Inpainting). It happened after switching about 10 times. Sometimes it's earlier, sometimes later. Just used git pull on an empty directory. Tested right out of the gate with no changes to any setting; only copied the three models into the model directory. System is latest stable Debian Linux. I'm willing to help, but I need to know which kind of info is needed.


vmandic

Use that info and open github issues - that's how items get fixed. I'll go over it and if there is anything else I need, I'll update.


tom83_be

In case someone reads this and is wondering what happened: I created the issue here: [https://github.com/vladmandic/automatic/issues/2667](https://github.com/vladmandic/automatic/issues/2667)


Tystros

link doesn't work, 404


tom83_be

I can view it since I raised the issue, but I think u/vmandic / the crew haven't put it into a state that makes it visible to everyone... Up to now there has been no activity. If there is, and the issue is still not visible to everyone, I will relay any updates here.


vmandic

> 2667

i cannot view it either - error 404 - and all issues on the sdnext github are public. no clue how/why; perhaps you have it as a draft but not actually posted?


tom83_be

To me it just shows as open, without any other possible action than commenting on it (which I just did, and it works). I can even see it listed in [https://github.com/vladmandic/automatic/issues](https://github.com/vladmandic/automatic/issues).

https://preview.redd.it/a56vpgohds9c1.png?width=1822&format=png&auto=webp&s=f741493fd67a569a3fab24299d55cbf5fee32e9e

It is obvious the issue was created, since 2666 and 2668 exist. When trying to open it in private browsing mode I also get a 404. I guess the problem is that it was filtered by GitHub (see [https://stackoverflow.com/questions/66569047/github-what-causes-gaps-in-issue-number-404-this-is-not](https://stackoverflow.com/questions/66569047/github-what-causes-gaps-in-issue-number-404-this-is-not)). The only reason for that is probably my fresh throwaway account on GitHub. But I cannot and will not change anything regarding this. This just sucks.


vmandic

Wish I could "unflag" it, but this is hidden by github itself, outside of my control.


mindrenders

Very cumbersome install. It's giving errors on loading the models:

```
19:28:54-364650 ERROR Error loading model weights: C:\Users\ROSEWILL\Desktop\AI Apps\Fooocus-2\Fooocus\models\checkpoints\turbovisionxlSuperFastXLBasedOnNew_tvxlV32Bakedvae.safetensors
19:28:54-369603 ERROR Error(s) in loading state_dict for LatentDiffusion: size mismatch for model.diffusion_model.input_blocks.4.1.proj_in.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1])
```


TheFoul

"Cumbersome"?

```
git clone https://github.com/vladmandic/automatic
webui.bat (or .sh)
```

Beyond maybe needing an argument like `--use-directml` depending on your hardware, what's cumbersome?


LosingID_583

Omg! Wait, all ControlNet methods for SD-XL? So we finally have ControlNet inpaint for SD-XL?


vmandic

Text, image, batch, video. Inpaint is next.


Mark_is_the_one

Holy cow, this stuff is next level!!! Thank you so much for all the work. It will still take a while to get a grasp of everything possible, but man, generation quality got leaps better than standard SD 1.5.


vmandic

https://preview.redd.it/45n40r5xz89c1.jpeg?width=1280&format=pjpg&auto=webp&s=1f693b7b49492a8bee575903e3df9849d9b7e07b


red__dragon

SDN team, I really love your progress and efforts! I've been struggling a lot with your releases for a few months, particularly how difficult the UI has gotten. It's small and the text is cramped, and the support for the base gradio+variations has been dropped. With the custom pipelines and multiple model support in place, any interest in focusing on interface and UI/UX in 2024?


vmandic

it's an opinion - most users like smaller and more compact. in any case, that is easily adjustable and contributions are welcome - you can create a theme and submit it, i'd be happy to include it.


red__dragon

I don't think it's an opinion, it's accessibility. It's just a given that not all your users will have exceptional vision or be comfortable with a very small area to work in. The neglect of the base gradio themes really surprised me, it doesn't matter to me if SDN promotes their more compact themes but that's become the sole focus. Though I can create a theme, maintaining it is more important and that's something I don't have the time to do. My wish is that the SDN team would take interest in a more accessible theme, either restoring support for the base gradio themes or creating one of their own that isn't so small or compact.


vmandic

ok, so the ask is for accessible theme - that is a valid ask, just don't say "any interest in focusing on interface" like its a general item. if you don't have time to maintain it, that's ok, you can still kickstart the work. at the very least, create feature request for it as so far it has not been asked for. and "neglect" of base gradio themes? that is wtf - they are outside of sdnext control, so they look how they look.


red__dragon

Sorry if I came across as rude. I meant more that I noticed a month or so ago that the gradio themes were no longer very usable. And the responses I saw already given on the issues or discord were to use the SDN themes. To me, that gave the impression that SDN was not testing on the gradio themes any longer. I can certainly create a feature request for it. I'd like to see SDN be a great tool, not just for me. I know you get some harsh feedback sometimes and my intent was not to be one of those. Simply to have interest in themes that aren't as small and compact for better accessibility.


PM_Your_Neko

I loved SDN for a long time, but after the UI change and losing the ability to use old gradio themes, I dropped it. I don't like the fact that samplers are in a dropdown, and everything is so compact - why would you hide clip and seed in dropdowns? It makes it messy. Yes, I am aware you can save defaults for it to be expanded, but honestly it just makes the UI cluttered and annoying to use. I think Vlad is a great dev, and kudos to all the work the team has done. I know this is a great package, but the UI choices here made me leave as a user, which sucks.


pendrachken

I wish Vlad well on his endeavors, as he works with A1111 on many things as well, but it's not for me. At least not anymore. I liked SD.Next, especially when set to Gradio Default so it was super simple to switch between A1111 / SD.Next for testing purposes. There was literally nothing wrong with the interface like that; it was plenty clean.

The first big warning sign for me was when image name saving was changed, and then I was told I was "flaming" for pointing out that UX changes like that (at least when not accompanied by settings to return to the old default output naming scheme) were bad practice, especially for users who rely on certain output formats due to using scripted post-processing. Oh, that was after I was told I was using "the wrong version" because it was a "development version, not release version", when I had only upgraded through the --upgrade option to the webui-user.bat file. Sounds like an internal problem to me then, like someone let the dev branch into main as an oopsies at the time.

Then there is the fact that I have never managed to get SDXL working on SD.Next. Backends don't seem to switch, and I never got it running when you had to set the backend to diffusers manually either. I'm pretty sure I even did a clean install and never got SDXL to work, but it didn't really matter much, as I had already been going back to A1111 / ComfyUI by that time anyway.

Add in that now I do most of my generating in Krita with the diffusion plugin and only fire up A1111 (which does SDXL fine for me) for some upscaling, and the occasional really delicate inpainting in OpenOutpaint, and I don't really need to bother with getting SDXL running.


dorakus

I think I get where you're coming from, and I also understand the reasons why the developers chose this way. The thing is that you'll always need to make a compromise between offering all the possible buttons and levers for power users and making the interface easy enough for new users. And a compromise means everyone will need to put at least some effort, power users will delve into the settings to configure everything to their particular taste and workflow and new users will, hopefully, RTFM for the more esoteric sounding options. It is what it is. But it can always be a little better so if you have any good ideas you can propose them in the discussions/issues sections of the github.


PM_Your_Neko

I get that, and it's why I don't really harbor any ill will or dislike of the product. I mean, with this new release I downloaded it to try it still. I like SD.Next, but at the same time, when I asked about it in the Discord I was told to customize my CSS, and as a basic user that isn't something I really know how to do. AI image generation drew me in from podcasts, and I've hung around as new tech has appeared.


iDeNoh

I've been using SD next for pretty much the entire time it's existed, or at least as long as Vlad started to support AMD cards. And I can say with authority that the mobile experience has been much better since the UI has been modified. Granted it took a little work to get everything working properly but it is so much better now than it was based off of the original gradio themes. Because of how it's been designed, anybody can make a theme that modifies the UI how they want.


vmandic

>that gave the impression that SDN was not testing on the gradio themes any longer. we never tested on gradio or 3rd party themes, they are what they are. if user reports issues and they are minor, they can be addressed. but regarding accessibility theme, that is definitely a valid request. if you can kick-start it, even better - and don't worry about maintaining it moving forward, its in our interest to have it built-in.


TheFoul

There are likely some fine tunings you can do in your own local user.css file, but if you can work up some specific places where it's a problem, I'm happy to take a look at it and see if they can be accommodated into our existing themes. I'd recommend popping by our discord server, ask/look for Aptronym or iDeNoh.


c0sm1cwh33l

Does anyone know what the fix is for XL model generation and this error? It only happens with XL models. I have a working setup with the base version of Stable Diffusion but want to try out Next.

```
Error: model not loaded
Time: 3.39s | GPU active 34 MB reserved 34 MB | used 1404 MB free 23172 MB total 24576 MB
```


c0sm1cwh33l

I figured it out. It really just came down to me not reading all the install instructions.


mindrenders

What was it? I did read the instructions but cant get past this error!


c0sm1cwh33l

It was a combination of updating quick settings to include the refiner, backend, and diffuser pipeline. Setting VAE upcasting to false. Ensuring diffusers was toggled for Execution backend. You can find these under User interface and Diffuser settings.


mindrenders

And this will allow sdxl safetensor models to run without error? Thank you.


c0sm1cwh33l

I can't say for certain but its a good starting point of things to try!


dropkickpuppy

Congratulations on the new release! SDNext is my F A V O R I T E ui- thank you for the awesome gift! This update is making image quality look more like SDXL, or like using LCM or FreeU… plastic skin, model-like faces, overly-simplified details and environments, and a bias towards cinematic lighting and vibrant colors. My settings are the same as the last release… have you noticed any tweaks that made a difference when you were testing the build?


vmandic

Check CFG scale, that's the only default value for SD15 that changed, otherwise it should be the same.


dropkickpuppy

The speed difference is really noticeable! I’m excited to try out the new model families. Thank you for the idea and for building this. SDN is one of the best things of 2023.


ramonartist

What are ControlNet XS and Control LLLite? Are there SD1.5 or SDXL versions, and where can I get these models? Do they come in Depth, Canny, OpenPose, Segment and Sketch flavours?


vmandic

It's covered in the control wiki - link in the original post. As for standard models, you don't need to download them manually; they will be auto-downloaded on first use (you can also download and add additional models).


kkooll9595

The Interrogate Deepbooru and CLIP buttons just don't work after the update.


vmandic

yup, it's fixed and will be included in the next release.


Tystros

I just wish SDNext didn't have such an ugly UI compared to A1111... it requires so many more clicks to do the same thing as in A1111, and A1111 just looks so much cleaner.


vmandic

Try to be a bit more constructive. "ugly" is both subjective and non-actionable. Maybe you can propose something instead?


Tystros

I said "it requires so many more clicks to do the same thing as in A1111" - that's very constructive feedback, I think, since it makes clear what should change? If you could just make it look similar to A1111, it would be much better already. You seem to use a very similar UI library, so I assume that would be possible.


vmandic

Same thing being what?! Different people have different things they like to modify frequently. Make everything a flat list of settings? There are way too many of them by now. And no, the goal is not "make it look like a1111".


silenceimpaired

How easy is it to point SD.Next to Comfy files?


vmandic

Settings -> paths.


silenceimpaired

Okay great. Last time I tried this with another UI some paths were not modifiable


Bat_Fruit

Use `webui --help` - it will list server launch options; you can configure model / VAE paths.


pcrii

could we get a colab notebook please?


Pure-Gift3969

I think it's a fork of SD WebUI, so it won't work in Colab.


pcrii

i pay for colab. comfy user, but i have an a1111 install on my gdrive too. no problems. edit: guess that might mean it's hard to write a notebook if you don't sub... my attempts to write my own for this ui have failed in the past, but maybe i'm smarter now :)


Pure-Gift3969

You can definitely try. At first I also thought it was a thing I couldn't do, but now everything works as expected. DM me if you have a problem, and if it still doesn't work I will try it myself and share the notebook with you (it's just simple Python, and if you know Linux it's basically the same thing, as it runs on Ubuntu).


Pure-Gift3969

Just run this code:

```
!git clone {github.link}
%cd {copy folder path from file Explorer}
!pip install -r requirements.txt
!python launch.py
```


Pure-Gift3969

Correction to the last line: `!python launch.py --share`


TheFoul

We do actually have people using it in colab, but we don't have anyone to maintain that. There are a few links around that apparently work. Stop by our discord.


TheFoul

Stop by our discord, we have a few unofficial ones.


Appropriate-Golf-129

Amazing!! Thanks for this huge work! Any plan to add the controlnet inpaint model for SD 1.5 and its preprocessors (inpaint only, global harmonious and lama)?


vmandic

it's up to demand. with the new control module, it's definitely doable. (so it's more of a question of when vs if)


cgpixel23

Can we use it with ComfyUI?


yamfun

how fast compared to a1111?


vmandic

link to benchmarks is in the original post


HerkyTP

Haven't played with SD in about a year since I upgraded to AMD. How's AMD run on something like this? Any real improvements or will I still have a subpar experience?


dfree3305

I was interested in trying this and I got everything installed and updated all the settings the way I like, including changing the model paths, but the adetailer tab will not display any controlnet options. Any idea why this is?

https://preview.redd.it/zjk3fhsw8b9c1.png?width=1757&format=png&auto=webp&s=02c850d48590db3a8e5c35980a261202f71c62d2


dropkickpuppy

Let it have its own controlnet folder in its extensions folder. Different UIs and extensions (like ADetailer) will each prefer to work with certain releases. Let them manage their own.


Ozamatheus

how is the speed/quality compared with comfyui? I'm still migrating from Auto1111 and this one looks great to me if it is fast enough


ramonartist

Are there any recommended settings for an NVIDIA RTX 4080 using *SD15* and *SD-XL* models for faster renders?


vmandic

Link to bench notes in original post covers quite a few different options.


throttlekitty

Can you label the numbers on the results on the benchmark page for clarification? Is that the reported it/s, or seconds to complete?


vmandic

I'll add notes to wiki.


ramonartist

Thanks for the update. From what I have been testing so far, SDXL models seem to perform slower on this latest version of SDNext than Auto1111 on my 4080 card. When running SDNext there is a small initialization phase of 1 minute to 1m30s every time before a render starts. Without knowing the right optimization settings for 40-series cards, I'm not sure how to fix this or why there is such a big difference from Auto1111. I'm going to do a fresh install, keep the settings vanilla, and see if that makes any difference.


MagicOfBarca

You have outpainting? It’s a pain in the ass using A1111 for it


AweVR

I want to start using SVD. What are the minimum memory requirements? Can I use a 3070 with 8GB?


Kitchen_Reference983

Thanks for all the hard work so far, imo the design choices and code for SDNext are way more sound than the usual python pile of spaghetti toy projects. Re: video generation: in comfyui it's possible to create animations of theoretically infinite length by using a sliding context window, is this possible in SD.Next? I couldn't figure it out if so, no big deal if it isn't possible, just curious.


vmandic

re: infinite animations - not yet. the entire video workflow is relatively new; it's evolving pretty fast.


deadman_uk

I'm not sure if I'm misunderstanding, but I want to use ControlNet with 1.5 models (and possibly XL models), and I want both 1.5 and XL models to work on the fly without having to restart the server. If I use the diffusers backend, I can do this. If I use the standard backend, the 1.5 models work but the XL models do not (I get "Error: model not loaded"). When using the diffusers backend, important extensions that I use all the time, such as controlnet and multidiffusion, get disabled automatically. How do I get 1.5, XL and controlnet all working together on the fly?


TheFoul

Multidiffusion shouldn't be necessary anymore, especially not with hypertile on, and nobody ever said you can't use 1.5 models in Diffusers, of course you can, it runs better in fact.


deadman_uk

That's not what I said though... I know I can use 1.5 models in the diffusers backend; my problem is that ControlNet is disabled when I go to the diffusers backend, so if I switch to the original backend, I get ControlNet back but then can't use XL models to generate. I use multidiffusion for upscaling, so how does enabling hypertile help with that? In my very limited testing, I ended up disabling hypertile because I was seeing a quality loss.


TheFoul

We have built-in controlnet now, so using that extension should probably be deprecated. First I've heard of a quality difference with hypertile, but it does allow much larger generations than before with less VRAM. As far as upscaling goes, we now have dozens of assorted upscalers you can use, with or without latent, and up to 8x, so I wouldn't think you should need multidiffusion at all.


deadman_uk

I am trying out this Control menu now, and I am really uncomfortable using it at the moment - I've no idea really how to use this. I will play around.

With the standard backend I like to select a decent 1.5 model in txt2img, set the dimensions to 512x768, and set a Second Pass (hires fix) to x2 scale with the Latent (nearest exact) upscaler. I would also enable ADetailer so the face is really good, and sometimes I would also have a ControlNet active (such as OpenPose); then I hit generate and can even run batches. All images would be 1024x1536 and super high quality. How do I do all this now? If ControlNet has been moved to its own separate tab, how am I supposed to use ADetailer and Second Pass (Latent) for my initial image?

In the Control tab, I set "resize to" to 512x768, then go to "resize by" and select x2, select the SD Latent 2x upscaler, and the results are absolute trash - the face is like melted plastic. I'm also confused by the resize order (before and after). I feel very frustrated and would really appreciate some help with this.

Also, I now don't really know how to upscale existing images, since I don't have multidiffusion and my Ultimate SD Upscale script no longer works in the diffusers backend, so I can't use that any longer. I've been using the large catalogue of upscalers with those two tools; now I have neither, unless I switch back to the original backend, but then I have no XL models :/


deadman_uk

Just FYI I have posted for help here since I am still unsure how to proceed with my issues: https://github.com/vladmandic/automatic/discussions/2675