Dreambooth Tutorial: Train Stable Diffusion Image AI With Your Own Model For Less Than a Dollar!

Chapters:
00:00 Intro and Preparation
06:11 Training on Runpod
25:21 Downloading the finished model
28:41 Conclusion and final thoughts
32:00 Live demonstration of trained corgi model

I made a quick tutorial on how to train Dreambooth and Stable Diffusion with your own images on a cloud GPU provider. In this video I walk through the preparation and then the training process step by step, and show a few common issues you can run into when training your model. The final cost for the entire training run was only $0.35, so this is a very cost-effective way to train your own models if you don’t have a high-powered GPU and computer.

Resources:
Link to my Dreambooth-Stable-Diffusion fork: https://github.com/hungtruong/Dreambooth-Stable-Diffusion
My Runpod.io referral link: https://runpod.io?ref=q4k3ugru
Hugging Face Repo: https://huggingface.co/CompVis/stable-diffusion-v1-4
Backblaze: https://secure.backblaze.com/r/00bh52
Path for saving to cloud storage: /workspace/Dreambooth-Stable-Diffusion/trained_models

22 Comments

  1. Damn, I was asking for this in the previous video. Thank youuuu! 🙂 Oh, and I have a question: is it possible to make a drawing with AI from an actual photo? Like the AI would keep the same pose and clothes, but make it look like, say, a drawing by Boris Vallejo or some other artist. I mean colors, textures, maybe some background changes… I hope my question is clear 🙂

  2. Great tutorial, thanks. You mentioned that there is a way to do this locally for those who have a good GPU, without paying for a service. I’ve run several training instances using free Google Colab, which sometimes works, but sometimes it quits or kicks you out after hours, so I’m interested in trying to make some training models locally, as I want to experiment with various settings and images.

    Can you post a tutorial (or point me to another video) on how to run training locally? Most videos online have instructions for training either with Google Colab or a paid off-site GPU. Thanks!

  3. Cool, cool! Thanks for showing how Runpod works. I’ve only ever tried the Runpod web UI and have been baffled about how to approach notebooks on it. It looks a fair bit more accessible now, thanks!

  4. Thanks for posting. The video is very detailed. With upcoming GPUs like the RTX 4090, do you think training will be faster? I am looking forward to local training; the internet is not that stable in the area where I live.

  5. In the training phase I always get:

    RuntimeError: CUDA out of memory. Tried to allocate 50.00 MiB (GPU 0; 23.70 GiB total capacity; 18.20 GiB already allocated; 41.56 MiB free; 18.31 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

    Anyone know what it could be?
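The error message's own suggestion can be tried by setting PyTorch's allocator config before launching training. A minimal sketch; the 128 MB split size is an assumed starting value, not something from the video:

```shell
# Ask PyTorch's caching allocator to cap the size of blocks it will split,
# which can help when "reserved memory is >> allocated memory" (fragmentation).
# 128 is an assumed starting value; tune it for your GPU.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
echo "$PYTORCH_CUDA_ALLOC_CONF"
```

If that isn't enough, lowering the batch size or image resolution in the training config also reduces peak memory.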

  6. Thanks for the tutorial, very informative. Trying to figure out a simple process for getting the damn thing onto my Google Drive made me want to eat the end of a fucking shotgun. I ain’t doing it the notebook way, and god forbid Runpod made it easy to upload to a service like Dropbox.

  7. Is it possible to train a style, then use that style to train a model of yourself? I want to use my face, but in a certain style that isn’t in Stable Diffusion by default.

  8. Hey man, you are really funny and have great content! I’ve been programming poorly for a few years and lately started working with AI, so I’m really enjoying your videos. Thanks for the tutorial. Subbed, best of luck.

  9. Thank you for the great video. Once you have your custom model, how do you set up a cloud instance of the Stable Diffusion web UI using the custom model? Do you know of any tutorials out there for that?

  10. Whatever you do, don’t use the community cloud pods. They download and upload at really awful speeds. I trained a model for an hour just to realize it was going to take 3 hours for them to upload it to Google Drive.

  11. I had the most trouble pulling the model onto anything like Google Cloud or Dropbox, so I just downloaded it locally. Maybe 10 minutes and a few cents with good internet.

  12. @Hung Truong
    How many pictures would you recommend using? Should they be cropped beforehand (512×512?), and should they be mostly the same or from different perspectives? With background or background removed? Always the same lighting or different? Thanks in advance. I am new to this and have been trying to train a model for the past couple of days and can’t get it to work, even when paying for Colab.
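On the cropping question: a common approach is to center-crop each photo to a square and resize it to 512×512, the resolution Stable Diffusion v1.x was trained at. A minimal sketch, assuming Pillow is installed; `prepare_image` is a hypothetical helper, not part of the repo:

```python
from PIL import Image

def prepare_image(path, out_path, size=512):
    """Center-crop an image to a square and resize it for SD v1.x training."""
    img = Image.open(path).convert("RGB")
    # Crop to the largest centered square first so the resize doesn't distort faces.
    w, h = img.size
    side = min(w, h)
    left = (w - side) // 2
    top = (h - side) // 2
    img = img.crop((left, top, left + side, top + side))
    # LANCZOS gives good quality for downscaling photos.
    img = img.resize((size, size), Image.LANCZOS)
    img.save(out_path)

# Example: prepare_image("corgi_photo.jpg", "train/corgi_01.png")
```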

  13. This video was super helpful in all aspects of this process, thank you!
    Now that the new 1.5 SD models are out, one of them (the 7 GB one) is supposed to yield better results, specifically for Dreambooth training. Do you have any idea how to swap the model used in this process (1.4) for the new one, which is also on Hugging Face?
    If you could do it in your repo, it would be much appreciated.


© 2024 AI Art Video Tutorials