
Tips Stable Diffusion - Editing Guide

This guide covers editing images with Stable Diffusion. There are two approaches: the first relies on dedicated inpainting models; the second, more powerful method uses ControlNet, which lets you work with any model instead of being limited to inpainting models.
 
Inpainting
  • Open Civitai.com (or Huggingface)
  • Get the model download URL
  • Download the model into your workspace, e.g.:
Code:
wget "URL" -O /path/to/models/Stable-Diffusion/folder/fileName-version.safetensors
  • Refresh the list of models, choose the newly downloaded model
  • Navigate to img2img tab > inpaint tab and load the source image
  • Adjust width and height to match the image's ratio
  • Highlight / paint the areas in which you want Stable-Diffusion to render
  • In my experience, it is better to let Stable Diffusion redraw the entire image so that the result is seamless. If you choose to inpaint only the highlighted areas, the resulting images may have visible seams / skin-tone differences, which you would have to correct manually later.
  • To retain the details of the image and only make it look "cartoonish", keep the denoising strength low (<= 0.4). To edit the attire and make other large changes, higher denoising values are needed (>= 0.75). Experiment with this number.
  • Prompting is key. To get impeccable results, you need strong prompts and negatives. To start, use CLIP interrogation to automatically describe your image. Review the prompt, add or remove content, and run it to see how your image is rendered. Once you get a desired result, reuse the same seed, tweak your prompts / negatives and repeat until you're satisfied with the output (see the sketch after this list).
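For reference, here is a minimal sketch of the same inpainting steps driven through the AUTOMATIC1111 web UI API, assuming the UI was started with the --api flag. The file names, checkpoint name and prompts are placeholders for your own, and field names may differ slightly between UI versions.
Code:
# Minimal inpainting sketch against a locally running AUTOMATIC1111 web UI (--api).
import base64
import requests

BASE_URL = "http://127.0.0.1:7860"  # default local web UI address

def b64(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

payload = {
    "init_images": [b64("source.png")],   # the loaded source image (placeholder name)
    "mask": b64("mask.png"),              # white = areas Stable Diffusion should redraw
    "prompt": "a woman in a red saree, detailed skin, natural lighting",  # placeholder
    "negative_prompt": "cartoon, blurry, deformed hands, extra fingers",  # placeholder
    "denoising_strength": 0.75,           # <= 0.4 keeps details, >= 0.75 changes attire
    "inpaint_full_res": False,            # redraw the whole picture to avoid seams
    "width": 512,
    "height": 768,                        # match the source image's aspect ratio
    "seed": -1,                           # fix this to a known value once you like a result
    "sampler_name": "DPM++ 2M Karras",
    "steps": 30,
    "cfg_scale": 7,
    "override_settings": {"sd_model_checkpoint": "fileName-version"},  # the downloaded model
}

resp = requests.post(f"{BASE_URL}/sdapi/v1/img2img", json=payload, timeout=600)
resp.raise_for_status()
with open("result.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))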
 
ControlNet
  • With ControlNet, you can use any ordinary model to inpaint consistently. This means that if you create your own mix of models, you can still use it to inpaint.
  • First, enable at least 3 ControlNet units from Settings > ControlNet, then save and restart the Stable Diffusion UI. This gives you the flexibility to mix and match ControlNet models for different scenarios.
  • I prefer working with txt2img while loading the image into ControlNet. But you can still use img2img and load the image there, which should copy the same image to all ControlNet units automatically.
  • I prefer a combination of OpenPose as model #1 with a strength of 1.3 and Depth as model #2 with a strength of 0.7 (see the sketch after this list). With this setting, even a denoise of 1 (i.e. 100% redrawing) will preserve the pose of the model. You can either use img2img or go through img2img > inpaint.
  • For scenarios where you need the model's outline, go with the LineArt model along with Depth.
  • For rendering with higher similarity, use Tile with a strength of 0.4 - 0.7 and strong prompts/negatives. Use this only if your prompts are good; otherwise, the resulting image will look more or less the same.
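Here is a hedged sketch of that OpenPose + Depth setup through the AUTOMATIC1111 API, assuming the web UI is running with --api and the sd-webui-controlnet extension is installed. The ControlNet model names below are placeholders; use whatever names your installation lists, and note that unit field names can differ between extension versions.
Code:
# Minimal txt2img sketch with two ControlNet units (OpenPose 1.3 + Depth 0.7).
import base64
import requests

BASE_URL = "http://127.0.0.1:7860"  # default local web UI address

def b64(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

source = b64("source.png")  # the image whose pose and depth should be preserved

payload = {
    "prompt": "a woman standing on a beach, detailed skin, natural lighting",  # placeholder
    "negative_prompt": "cartoon, blurry, deformed hands",                      # placeholder
    "width": 512,
    "height": 768,
    "steps": 30,
    "cfg_scale": 7,
    "alwayson_scripts": {
        "controlnet": {
            "args": [
                {   # unit 1: OpenPose at strength 1.3
                    "input_image": source,
                    "module": "openpose_full",
                    "model": "control_v11p_sd15_openpose",  # placeholder model name
                    "weight": 1.3,
                },
                {   # unit 2: Depth at strength 0.7
                    "input_image": source,
                    "module": "depth_midas",
                    "model": "control_v11f1p_sd15_depth",   # placeholder model name
                    "weight": 0.7,
                },
            ]
        }
    },
}

resp = requests.post(f"{BASE_URL}/sdapi/v1/txt2img", json=payload, timeout=600)
resp.raise_for_status()
with open("result.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))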
 
Nice guide, bro. I tried using these tips, but I always get a small amount of clothes left in the image. Is it the fault of the prompts or the model itself? I tried putting more words in the prompt like nude, naked, topless, etc., but the image is still not completely nude.
 
If the image has a face with brown hair and your prompt also says brown hair, the brown color does not get associated with the model. So for your model to understand, you should describe the image accurately, including what the model is wearing in that image. That way, once the training is completed, it will be flexible.

Correctly labeling the images is very important.
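As a hedged illustration, here is a minimal sketch of what accurate labeling can look like, assuming a kohya-style training layout where each image is paired with a .txt caption file of the same base name; the directory, file names and captions are made-up placeholders.
Code:
# Write a descriptive caption file next to each training image (placeholder data).
from pathlib import Path

dataset_dir = Path("/path/to/training/images")  # hypothetical dataset location

captions = {
    "photo_001.png": "a woman with brown hair wearing a red saree, standing outdoors, smiling",
    "photo_002.png": "a woman with brown hair wearing a blue t-shirt and jeans, sitting on a sofa",
}

for image_name, caption in captions.items():
    caption_path = dataset_dir / Path(image_name).with_suffix(".txt").name
    caption_path.write_text(caption, encoding="utf-8")  # e.g. photo_001.txt
    print(f"labeled {image_name}: {caption}")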
 
Which models do you use? Can you suggest some?
 
I use a custom mixed model. It's best to try different models and see which suits you best. I'd recommend the URPM, GTM, aEros, Liberty, and Realistic Vision models.
 
It would be awesome if you posted an image example for each of the three types of models.
 
For ControlNet: example pics / tutorial screenshots for each of the OpenPose, Tile, LineArt, and Depth models.
There's not much to show. You have to choose the ControlNet model, and it will be loaded based on your selection.

OpenPose will make a stick figure from the image loaded into img2img.

LineArt is useful for getting an outline of the loaded image.

Depth is useful if you need depth perception.

Tile will render the same image that was loaded unless your text prompts are strong enough to affect the output.
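If it helps, here is a small hedged sketch for checking which ControlNet preprocessors and models your own installation exposes, so you know the exact names to pick for OpenPose, Depth, LineArt and Tile. It assumes the AUTOMATIC1111 web UI is running locally with --api and the sd-webui-controlnet extension installed; the endpoint paths come from that extension and may differ between versions.
Code:
# List available ControlNet models and preprocessor modules from a local web UI.
import requests

BASE_URL = "http://127.0.0.1:7860"  # default local web UI address

models = requests.get(f"{BASE_URL}/controlnet/model_list").json()
modules = requests.get(f"{BASE_URL}/controlnet/module_list").json()

print("Available ControlNet models:")
for name in models.get("model_list", []):
    print(" ", name)

print("Available preprocessors (modules):")
for name in modules.get("module_list", []):
    print(" ", name)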
 
I guess someone needs to start a new thread with the prompts and models they use, along with the results; that would help lots of creators.
Every image warrants a different prompt. You can find a sample prompt in my signature. As for the results, they are the posts that I have in DF.
 
Please, please suggest any way for me to run Stable Diffusion on my Android mobile, if there is one?
 
Hi, can anyone help me with the prompt? I'm using image-to-image. I know prompting, but not very well. Could you please provide one example of a positive and negative prompt? Thanks.
Please see my signature for sample prompts

Please, please suggest any way for me to run Stable Diffusion on my Android mobile, if there is one?
Not possible without a GPU machine.
 
If the image has a face with brown hair and your prompt also says brown hair, the brown color does not get associated with the model. So for your model to understand, you should describe the image accurately, including what the model is wearing in that image. That way, once the training is completed, it will be flexible.

Correctly labeling the images is very important.
I didn't ask about training, I asked about the inpainting guide, because sometimes I still get some amount of clothes. Anyway, I figured it out. Thanks.
 

Oh, okay I misunderstood 😬
 