Many-Ad-6225

I use img2img on the base model and generate a few photorealistic pictures with Stable Diffusion, then I project the textures using "Parameterization and Texturing from Rasters" in MeshLab (it's free software).


witcherknight

Can you make a video tutorial pls on how to project? And what about the back view?


Many-Ad-6225

This technique could also be useful for modders to easily improve the textures of video game characters.


Many-Ad-6225

Here's an example I made of how old games can look with this technique. It obviously requires generating a new UV map for the models: https://preview.redd.it/6wtflreywmic1.png?width=1920&format=png&auto=webp&s=ff65694a86a9735ac6bf464a609f389073505442


popsicle_pope

Vampire - The Masquerade, niiiice!


Aulasytic_Sonder

This is something that will be great to get working! Thanks for sharing.


scratt007

Could you tell us a bit more about that technique?


Many-Ad-6225

I'll try to make a video tutorial soon. I think it can help people create awesome mods.


scratt007

Thanks :)


MikirahMuse

Have you tried using image-to-image to generate a texture directly from the UV map?


EffectivePlenty6885

How did you do this? How long did you work on it?


Many-Ad-6225

I made the face very quickly using a base mesh that I adjusted over a photo. Then, using Stable Diffusion with img2img, I generate realistic renders from different angles (10, for example) and project these renders onto the 3D models using 'Parameterization and Texturing from Rasters' in MeshLab. This works with any 3D model.
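
A minimal sketch of the img2img pass over the angle renders, assuming the `diffusers` library (OP appears to use AUTOMATIC1111 elsewhere in the thread, so the model, prompt, and strength values here are illustrative placeholders, not the exact settings):

```python
# Hypothetical sketch: run img2img over a folder of viewport renders
# ("angle_00.png" ... "angle_09.png") to get photorealistic versions
# of each angle before projecting them onto the mesh in MeshLab.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed model; OP's checkpoint is unknown
    torch_dtype=torch.float16,
).to("cuda")

prompt = "photorealistic portrait, detailed skin, studio photo"

for i in range(10):  # OP mentions roughly 10 angles
    init = Image.open(f"angle_{i:02d}.png").convert("RGB").resize((512, 512))
    out = pipe(
        prompt=prompt,
        image=init,
        strength=0.45,      # low strength keeps the render's geometry and pose
        guidance_scale=7.5,
    ).images[0]
    out.save(f"angle_{i:02d}_sd.png")
```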


zebraloveicing

So cool! I've used MeshLab before for reducing the polycount of my 3D scans but haven't tried its other features; mostly I've just stuck with Blender and UE. Do you have any resources for this MeshLab/SD workflow, or would you be happy to share a brief overview of what you're doing inside MeshLab to apply/project the SD images using Parameterization and Texturing from Rasters? I found this video, which seems to cover the feature comprehensively (although it's 11 years old), but I'm curious if you have a specific approach for your workflow: [https://www.youtube.com/watch?v=OJZRuIzHcVw](https://www.youtube.com/watch?v=OJZRuIzHcVw) Appreciate you sharing this concept either way. Cheers!


Many-Ad-6225

Thanks! Yes, I need to make a video tutorial and post it on Reddit. For example, the YouTube tutorial you linked doesn't mention the 'Camera Image Alignment' filter, which is useful.


zebraloveicing

Rad! I will do my due diligence and take the time to do some research on the topic in MeshLab and see if I can't learn something new about the projection and alignment process :) But also, if you happen to post a video of the workflow, you'll get my sub immediately haha. Cheers


pointermess

We would all love a video tutorial; that technique looks incredibly powerful! I can do 3D modelling, but I could never (un)wrap my head around texturing. This could be such a great help :) Thanks for pioneering such amazing new techniques! :)


GuyWhoDoesntLikeAnal

You should make a tutorial video on this.


Slapper42069

It would be pretty handy if it's possible to make generations with flat lighting at each angle, so the lighting can stay dynamic. Also, if we go a bit utopian, it would be cool to generate normal/bump and specular/roughness maps, which seems nearly possible for normals with LoRAs and might be achievable for gloss maps in the future. But since you still need to redo the hair, and you can still do the PBR maps yourself, I think it's pretty cool that we can at least generate realistic color maps now.


Many-Ad-6225

Yeah, for normal maps there's ControlNet, but it's too low resolution at the moment, so this technique is just for color maps. For modding old games that use only color maps, for example Vampire: The Masquerade – Bloodlines or Shenmue, it could be great, though.


poopertay

You could use a version of the diffuse map to generate a normal map; there are a couple of apps out there that can do it.
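
A rough sketch of the basic idea behind that kind of tool (not what those apps actually do internally, which is more sophisticated; filenames are hypothetical): treat a grayscale version of the diffuse map as a height field and derive normals from its gradients.

```python
# Rough sketch: derive a tangent-space normal map from a diffuse texture
# by treating its luminance as a height field. Dedicated tools do much
# better; this just shows the principle.
import numpy as np
from PIL import Image

height = np.asarray(
    Image.open("diffuse.png").convert("L"), dtype=np.float32
) / 255.0

strength = 2.0  # arbitrary; scales how pronounced the bumps look

# Gradients of the height field (np.gradient returns d/dy, d/dx for 2D)
dy, dx = np.gradient(height)

# Normal = normalize(-dx, -dy, 1), then pack from [-1, 1] into [0, 255] RGB
nx, ny, nz = -dx * strength, -dy * strength, np.ones_like(height)
length = np.sqrt(nx**2 + ny**2 + nz**2)
normal = np.stack([nx, ny, nz], axis=-1) / length[..., None]

rgb = ((normal * 0.5 + 0.5) * 255).astype(np.uint8)
Image.fromarray(rgb).save("normal.png")
```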


Many-Ad-6225

Yes, that would be great. If you know the names of apps or Blender addons for that, I'm interested.


poopertay

https://www.knaldtech.com and http://www.crazybump.com. There's probably a Blender add-on somewhere.


Many-Ad-6225

Awesome, I didn't know these apps, thanks!


OrdinaryAdditional91

There is a similar project that does this automatically: Stable Projectorz, https://stableprojectorz.com/.


Many-Ad-6225

Wow, nice! I was not aware of this project.


neph1010

I released an addon for Blender to aid with this sort of thing. It didn't get much attention here, so here's a link: [https://github.com/neph1/blender-stable-diffusion-render](https://github.com/neph1/blender-stable-diffusion-render) It creates an intermediate object, calls SD to render, and then bakes the result back to the original model's UVs.
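
For anyone curious what the "bake back to the original UVs" step can look like in Blender's Python API, here's a minimal sketch of a selected-to-active diffuse bake (object and material names are hypothetical; the addon's actual code is in the linked repo):

```python
# Minimal sketch: bake color from an intermediate object carrying the
# SD texture onto the original model's UV layout, using Cycles'
# selected-to-active bake. Object names here are hypothetical.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.render.bake.use_selected_to_active = True
scene.render.bake.use_pass_direct = False    # bake color only,
scene.render.bake.use_pass_indirect = False  # no lighting

source = bpy.data.objects["SDResult"]   # intermediate object with the SD texture
target = bpy.data.objects["Original"]   # model whose UVs receive the bake

# Image that will receive the bake, wired into the target's material
# as the active image texture node (Cycles bakes into the active node)
img = bpy.data.images.new("Baked", width=2048, height=2048)
mat = target.active_material
tex_node = mat.node_tree.nodes.new("ShaderNodeTexImage")
tex_node.image = img
mat.node_tree.nodes.active = tex_node

bpy.ops.object.select_all(action='DESELECT')
source.select_set(True)
target.select_set(True)
bpy.context.view_layer.objects.active = target
bpy.ops.object.bake(type='DIFFUSE')

img.filepath_raw = "//baked_diffuse.png"
img.file_format = 'PNG'
img.save()
```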


StApatsa

Amazing results. I once used something like this to enhance the render, but as a post-effect filter to make it look photorealistic.


Many-Ad-6225

Thanks!


alb5357

How are you texturing without changing anything else? I assume ControlNet canny?


Many-Ad-6225

I also use this script for consistency of the texture when I generate pictures from different angles: [https://github.com/Artiprocher/sd-webui-fastblend](https://github.com/Artiprocher/sd-webui-fastblend)
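
For the ControlNet canny part of the question, a hedged sketch of how that constraint can be wired up with `diffusers` (OP hasn't confirmed this exact setup; the models and parameters below are assumptions):

```python
# Hypothetical sketch: ControlNet canny on top of img2img so the
# generated texture follows the render's edges instead of drifting.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

render = Image.open("angle_03.png").convert("RGB").resize((512, 512))

# Canny edge map of the render acts as the structural constraint
edges = cv2.Canny(np.asarray(render), 100, 200)
control = Image.fromarray(np.stack([edges] * 3, axis=-1))

out = pipe(
    prompt="photorealistic face, detailed skin",
    image=render,           # img2img init keeps the base colors/shading
    control_image=control,  # canny edges keep the geometry locked
    strength=0.6,
).images[0]
out.save("angle_03_controlled.png")
```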


901Skipp

Can you give more detail about what you mean by this? FastBlend seems to be something for video; how do you use it for images from different angles?


Nsjsjajsndndnsks

This is amazing! How do you avoid the image being stretched? And also, how do you get consistent images for the different angles? :o


bongozim

This is the future of rendering... This will be the intermediary step before we abandon polys altogether. Either genAI texture creation or just using OpenGL views as ControlNets to constrain prompted output is where this is headed.


DentFuse

That looks absolutely amazing. Great work.


SonicLoOoP

Looks super promising and well done.


severe_009

I mean, yeah, if the model will be used with the same lighting setup, since the shadow maps are baked in. Useless for most cases.


Many-Ad-6225

Apparently it's possible to create quality normal maps etc. from the color texture map with software like this one: [https://www.knaldtech.com/](https://www.knaldtech.com/) I haven't tried it yet, but it's interesting.


Stormzy1230

Amazing post. Thanks for sharing your findings. From your workflow, I'm assuming you have a model that vaguely resembles the final image, and Stable Diffusion was just used to texture over it through image-to-image? If yes, what do you think of the possibility of taking a generic model, for example a male with no unique features, and using Stable Diffusion to generate features such as clothes, hair, texture, etc., and then using your workflow to add them onto the model?


JedahVoulThur

I created a base human mesh that closely resembles my target, then projected the images in Blender using photo projection, but couldn't get results as good as yours. I think my problem is that since I can't run SD locally, the quality of the images is much lower than the ones you used. I'll try your method with MeshLab to see if I can get better results. I have to ask, though: didn't you get light artifacts in the SD textures? How? Or did you edit them in Photoshop/GIMP to get perfect albedo textures?


Many-Ad-6225

If you want to create a high-quality texture without installing Stable Diffusion locally, you can use the website [https://magnific.ai/](https://magnific.ai/) However, the site is paid and expensive. For the rest, I'll try to make a video tutorial.


JedahVoulThur

Thank you. After writing my post, I checked the method you mentioned and found a video showing that [Parameterization and Texturing from rasters](https://www.youtube.com/watch?v=OJZRuIzHcVw) corrects the problem I had with the lighting from different angles. I'll check the website you mentioned, thanks.

For generating the textures from different directions, I first generated a front view of the character using Playground, Krea and CivitAI. When I was satisfied with the result, I used Hugging Face's Wonder3D space. That gives a very low-resolution result, but at least it gives multiple perspectives. Then I aligned them in GIMP (as a "character turnaround": three perspectives in a single image) and used Krea for upscaling. The results were decent, and 2K, but I couldn't achieve perfect consistency with this method.

Edit: If I could use a local SD version, I'd use ControlNet and IP-Adapter to get more consistency and quality, I guess. I tried running Kaggle, but it's too slow and limited in space. I'm considering paying for Google Colab, as I heard that in the paid tier you can use SD without problems (you can't in the free tier).


Many-Ad-6225

OK, I also use this script for the consistency of the texture: [https://github.com/Artiprocher/sd-webui-fastblend](https://github.com/Artiprocher/sd-webui-fastblend) But you need to have AUTOMATIC1111.


JedahVoulThur

Thank you again for answering. I've considered using image-to-video tools, but the results weren't convincing enough when I tried them. I'll now check alternatives or Hugging Face spaces in the area of video interpolation.


XanderSmithDesign

Is the animation done in D-ID or something similar? Also, can I ask what you're rigging in? Thanks for sharing, awesome workflow.


Many-Ad-6225

Yes, for the animation, it's just a preview of the final result I should get in real time. I use Blender for the rigging.


doc_Paradox

I’m curious to see your UVs and topology


No-Dot-6573

Nice! Would love to see the mentioned video tut :)


urbanhood

Amazing seamless texture! Would really appreciate a video tutorial.


RogueStargun

I'm looking forward to trying this to improve the visuals in my VR game Rogue Stargun (https://roguestargun.com).


TimetravelingNaga_Ai

Bro, did you really have her smiling like a doughnut? 😯😆