🔝 Improving SD1.5 results quality
A case-by-case guide to making your results better
First things first: it's important to understand that training models is not (yet) a precise science. There are no magic numbers, only rules of thumb! You will need to iterate to find what works best for your specific use case.
My images don't look like the subject!
There can be multiple reasons for this. Start by improving your training set quality before tweaking other parameters.
Improve the training set quality
Improve your training images. Only include high-quality images that are well lit, in focus, and show the subject clearly. More images are generally better, but only if all of them are good. Adding mediocre images will not help.
Crop your images to square format. The model is trained on 512x512 squares. Depending on where your subject is located in your training images, it may end up being cropped out! By cropping the images to a square yourself, you make sure the model is trained on exactly what you want. When training on a person, you can also use "Face Crop", which helps optimize the crop, but nothing beats careful manual cropping.
Make sure your images are large enough. The model is trained on 512x512 squares. If your images are smaller than that, they will be stretched, which degrades results. The sketch below shows one rough way to pre-crop images and flag any that are too small.
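As a rough illustration of the two points above, here is a minimal Pillow sketch that center-crops each image to a square, resizes it to 512x512, and warns when an original is smaller than 512 pixels on its shortest side. The folder names are placeholders, and a blind center crop is only a starting point; adjust crops by hand whenever the subject is off-center.

```python
from pathlib import Path
from PIL import Image

SIZE = 512                        # SD1.5 is trained on 512x512 crops
src = Path("training_images")     # placeholder folder names
dst = Path("cropped")
dst.mkdir(exist_ok=True)

for path in src.glob("*.jpg"):
    img = Image.open(path)
    if min(img.size) < SIZE:
        print(f"warning: {path.name} is only {img.width}x{img.height} and will be stretched")
    side = min(img.size)                      # largest square that fits
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side))
    img = img.resize((SIZE, SIZE), Image.LANCZOS)
    img.save(dst / path.name)
```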
Tweak the training parameters
Increase the number of training steps. Instead of 100 steps per image, try 110-120 steps per image (see the quick arithmetic below).
Try out other base models. The different base models react differently to each subject, so it can be worth trying them all.
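For a sense of the totals involved, the snippet below assumes the total step count is simply the number of training images multiplied by the steps per image; the image count is a made-up example.

```python
# Rule-of-thumb arithmetic (assumption: total steps = images x steps per image).
num_images = 20                     # hypothetical training set size
baseline = num_images * 100         # 2000 total steps at the default 100/image
bumped = num_images * 120           # 2400 total steps at 120/image (+20%)
print(baseline, bumped)             # -> 2000 2400
```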
My images all look the same!
This is a clear sign of overfitting, which can be caused by:
Training for too many steps
The training images are too similar
Training with too high a learning rate. We don't recommend changing the default (0.000001) unless you have a good reason to (see the sanity-check sketch below).
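As a rough guard against the causes above, the hypothetical helper below flags a total step count far above the ~100-steps-per-image rule of thumb and a learning rate above the recommended default of 0.000001. The function name and the 1.5x threshold are illustrative choices, not settings from the platform.

```python
DEFAULT_LR = 0.000001       # recommended default learning rate
STEPS_PER_IMAGE = 100       # rule-of-thumb baseline from this guide

def overfitting_warnings(num_images: int, total_steps: int, learning_rate: float) -> list[str]:
    """Hypothetical sanity check for settings that commonly cause overfitting."""
    warnings = []
    if total_steps > num_images * STEPS_PER_IMAGE * 1.5:   # illustrative 1.5x threshold
        warnings.append(f"{total_steps} steps is far above ~{STEPS_PER_IMAGE} per image")
    if learning_rate > DEFAULT_LR:
        warnings.append(f"learning rate {learning_rate} is above the default {DEFAULT_LR}")
    return warnings

print(overfitting_warnings(num_images=15, total_steps=3000, learning_rate=0.000002))
# -> both warnings fire: 3000 > 15 * 100 * 1.5, and 2e-06 > 1e-06
```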