DREAMBOOTH: Train MULTIPLE Subjects In Stable Diffusion At The Same Time For FREE!

With recent developments in AI training, you can now use Dreambooth (based on Google's AI research) to train a Stable Diffusion model on multiple subjects (people, styles, objects) using your own images, and all of that for free! In this video, I will show you what you need to do before training, plus the tips and tricks you need to know to train multiple subjects in one go. This will let you have multiple people trained into one single ckpt file in less than an hour!
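For multi-subject training, the key preparation step is how you name the instance images. Below is a minimal sketch of that prep, assuming (as comments 2 and 23 further down suggest) that the Colab derives each subject's token from filenames of the form "token (1).jpg", "token (2).jpg"; the tokens and folder names here are made-up examples:

```python
from pathlib import Path

# Hypothetical subject tokens mapped to folders of raw photos.
# Rare, made-up tokens help the subjects stay distinct (see comment 2
# below on why a shared token like "sks" makes subjects bleed together).
subjects = {"jhnsmth": Path("raw/john"), "jndoe": Path("raw/jane")}
out = Path("instance_images")
out.mkdir(exist_ok=True)

for token, folder in subjects.items():
    for i, img in enumerate(sorted(folder.glob("*.jpg")), start=1):
        # Copy each photo into one shared folder, renamed "token (n).jpg".
        (out / f"{token} ({i}).jpg").write_bytes(img.read_bytes())
```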

Did you manage to train multiple people or styles? Let me know in the comments!
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
SOCIAL MEDIA LINKS!
✨ Support my work on Patreon: https://www.patreon.com/aitrepreneur
⚔️ Join the Discord server: https://discord.gg/3ErYSdyUPt
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
TheLastBen’s Dreambooth Colab notebook: https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb

Resize your images for free using BIRME: https://www.birme.net/?target_width=512&target_height=512
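If you would rather resize locally, here is a rough Pillow equivalent of what BIRME does with the settings linked above: center-crop each photo to a square, then scale to the 512x512 that Stable Diffusion 1.x training expects. A sketch with example folder names, not a drop-in replacement:

```python
from pathlib import Path
from PIL import Image  # pip install Pillow

src, dst = Path("raw_photos"), Path("resized")  # example folder names
dst.mkdir(exist_ok=True)

for img_path in src.glob("*.jpg"):
    img = Image.open(img_path).convert("RGB")
    # Center-crop to a square before resizing, so faces aren't distorted.
    side = min(img.size)
    left, top = (img.width - side) // 2, (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side))
    img.resize((512, 512), Image.LANCZOS).save(dst / img_path.name, quality=95)
```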

Special thanks to Royal Emperor:
– DanO..

Thank you so much for your support on Patreon! You are truly a glory to behold! Your generosity is immense, and it means the world to me. Thank you for helping me keep the lights on and the content flowing. Thank you very much!

#stablediffusion #dreambooth #stablediffusiontutorial
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
WATCH MY MOST POPULAR VIDEOS:
RECOMMENDED WATCHING – My “Stable Diffusion” Playlist:
►► https://bit.ly/stablediffusion

RECOMMENDED WATCHING – My “Tutorial” Playlist:
►► https://bit.ly/TuTPlaylist

Disclosure: some of the links in this post are affiliate links, and if you make a purchase through them I will earn a commission. I link these companies and their products because of their quality, not because of the commission I receive from your purchases. Whether or not you decide to buy something is completely up to you.

100 Comments

  1. Aitrepreneur – use a Photoshop equivalent to paste your "person to be changed" beside someone wearing the "change", then inpaint over the parts of the "person to be changed" you want changed, with a prompt saying they are identical twins – mind-blowing

  2. Well, I trained 3 different subjects, each with sks at the end of the name… The likeness was unreal, but the 2nd name, even though it was trained hours earlier, always brought up the third subject. Don't use sks in the training names, I guess.

  3. I get this error when trying to test my newly trained model:
    "ImportError: cannot import name 'image_from_url_text' from 'modules.generation_parameters_copypaste' (/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/generation_parameters_copypaste.py)"
    Can anyone help?
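One possible fix for the ImportError above, assuming the webui copy saved to Google Drive (the path in the error message) is simply older than the UI code that imports image_from_url_text: pull the latest webui commits and restart. A hedged sketch, not a confirmed fix:

```python
import subprocess

# Hypothesis: the webui checkout on Drive predates the commit that added
# image_from_url_text. Updating it is one plausible remedy; the path below
# is copied from the error message.
subprocess.run(
    ["git", "-C", "/content/gdrive/MyDrive/sd/stable-diffusion-webui", "pull"],
    check=True,
)
```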

  4. Dreambooth training is acting just like textual inversion for me. I can't change the details of the original photos: it's just a stream of the smiling faces I trained on, and I can't change where they are or how they appear. The outputs always look like the training images, so if I type "in plate armor", the subject still shows up in a bikini or a t-shirt. I was hoping Dreambooth was the answer for adding new subjects, but unless my model is undertrained (the faces are near-identical, so that doesn't seem to be the case), is this just something we can't use Dreambooth for? Will it always include the original details?

  5. I'm having issues with my model trained on 1.5: it keeps producing photos very close to what I trained it with. Even when my prompts are good, I can't create artistic images of myself, and if I reduce or raise the CFG, the image no longer looks like me. Any suggestions?

  6. Fantastic, I've been waiting for something like this to train artistic styles, and once again K, your tutorials are simple, straightforward, and easy to follow! You're number one at this! Colab should hire you to do their training videos.

  7. Is there any way to train with Waifu Diffusion? I keep trying to upload the ckpt file either through Google Drive or Colab, but I can't; it keeps giving me conversion errors, and the button to fix it doesn't work… I'm really curious to see what happens if you train yourself into an anime model.

  8. Hello, congratulations on this great explanation of the new functionality. I managed to use two identifiers for different characters, and you can mix them together or select one or the other in the prompt. But I wonder how it would be possible to see them both interact, like a simple handshake or a scene where both characters are present. Thanks

  9. Amazing, the results are just amazing! I'd been trying Hypernetworks with subpar results, but this was a hit. I uploaded 58 images and trained at 4,000 steps, and the result blew my mind.😁

    P.S. I forgot to tick the checkpoint option, so it didn't save any intermediate checkpoints, and the space it allocated on my drive was 2 GB. Maybe save a checkpoint every 1,500 steps; the result will be great anyway.

  10. Spent all day on this after previous failed attempts (see my other comments) – here's the trick: do not overtrain. This method works flawlessly, but stick to 2,000 steps per 20 images. I tried going to 3, 4, 5 and even 6k; the 3k file is good, but the 2k file seems the best (so 3,000 steps for 30 images, in that case). Let me just say the results I'm now getting are so good I'll be posting them on the rentry wiki once I finish training. Can't thank you AI guys enough.
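In code, the rule of thumb above (2,000 steps per 20 images, i.e. roughly 100 steps per image) works out as below. The ratio is the commenter's heuristic, not an official figure, so treat it as a starting point:

```python
def recommended_steps(num_images: int, steps_per_image: int = 100) -> int:
    """Commenter's heuristic: ~100 training steps per instance image."""
    return num_images * steps_per_image

print(recommended_steps(20))  # 2000, the sweet spot reported above
print(recommended_steps(30))  # 3000
```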

  11. Thanks for the great review. Remember to add weight to the instance name with () to get a more precise result, because with the new method the weights are treated a bit differently. 👍

  12. I appreciate my overlord's passion for educating the masses in the ways of AI. I was successful in training a model on my family. Could you elaborate on whether you think it's best to train with a wide age range of photos (child through adult), or if creating different models based on age is more practical? Also, I'm curious what the numbers during training mean (loss=0.0102, lr=4.34e-7). Keep up the positive vibes.

  13. So if I have 500 images of 25 subjects (let's say people), do I have to train this model for 50,000 steps? Because I've done that before, and all the images I generated were very inaccurate compared to the images I uploaded and trained with. Or is that just too much for this method?

  14. I have trained two models and want to use them in the checkpoint merger, but only one of them is in the list. What should I do to make both models appear in the list? Unfortunately, I didn't understand from the video how to do it. Thank you very much for your work.
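A hedged guess at the merger question above, assuming the webui populates its checkpoint list from the models/Stable-diffusion folder: copy the missing .ckpt there and reload the UI. The source path below is illustrative; point it at wherever your trained file actually lives:

```python
import shutil

# Illustrative paths; adjust both to your own Drive layout.
src = "/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/model_b/model_b.ckpt"
dst = "/content/gdrive/MyDrive/sd/stable-diffusion-webui/models/Stable-diffusion/"
shutil.copy(src, dst)  # the checkpoint merger should list it after a reload
```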

  15. Thanks so much! Can you please explain how we can add, besides our own photos, regularization images? I mean a set of images, possibly without us in them, that capture a style/location, in order to get results of us in that style/location. Appreciate it!

  16. Hi, I have a question for anyone who could help… I tried a training run with this fast DreamBooth Colab, but I wanted to use the EMA version of Waifu Diffusion 1.2 as the base model. I could do this training with the RunPod solution with no problems, but when using this Colab it shows an error related to using the EMA version of the model. Could anyone tell me (us) how to fix this in the Colab version? It is a "funny" error because EMA versions seem to be better for training purposes… Thanks everyone

  17. If I combine my model A (portrait of a person) and model B (watercolor style), then I lose the individual resemblance of the person or the style (depending on the settings). But when I use only model B (watercolor style) and add some famous personality (for example, Scarlett Johansson) to the prompt, it turns out flawlessly. Is there any way to get a lossless result using my two models, preserving both the portrait likeness of the person and the personal style (technique)?

  18. How can I use the model I trained?
    I trained a model, then tried it in the Colab and made a couple of images, but after refreshing the page I lost the interface where I was entering prompts.

  19. Is there any way models can be trained via an API? Like, send 10 images via an API, start running the Python script to train the model, then somehow get that model, put it on a server with Stable Diffusion, and request prompts from that specific model? I am super new to Stable Diffusion. Looks really interesting!
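There is no hosted training API in this Colab, but the same idea can be scripted server-side. A minimal sketch, assuming the Hugging Face diffusers example script (examples/dreambooth/train_dreambooth.py) is checked out with its dependencies installed; flag names can vary between diffusers versions, so verify against your copy:

```python
import subprocess

# Sketch of kicking off a DreamBooth run from your own backend once the
# uploaded images have been saved to disk. All paths are hypothetical.
subprocess.run([
    "accelerate", "launch", "train_dreambooth.py",
    "--pretrained_model_name_or_path", "runwayml/stable-diffusion-v1-5",
    "--instance_data_dir", "uploads/user_123",   # the 10 images sent via your API
    "--instance_prompt", "a photo of zxc person",
    "--output_dir", "models/user_123",
    "--max_train_steps", "1000",
], check=True)
# The trained weights land in models/user_123 and can then be loaded by a
# Stable Diffusion server to answer prompt requests for that user.
```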

  20. Say I wanted to train a ckpt file on my own art style. Would this be a good DreamBooth setup to use, or would another version be better? What would be the best number of images to use to train an entire style?

  21. Hi Aitrepreneur, thanks for sharing!! I have trained a person and two hats, feeding in 20 images per class. In short, the human performance is perfect, but the hats are not doing so great. Are there any params I need to set up for training a non-human object?

  22. Your videos are so valuable and informative. I know this stuff is constantly changing but my head is starting to hurt! I'm getting really confused between models, styles, embeddings, textual inversion, hypernetwork. I suspect that these are improved new ways of doing certain tasks but it's really hard to keep up.

  23. Cool stuff! One question about the token. I want to make use of something the model already knows. Let's say I call the pics "amazon alexa (1)"… Do you think it is possible to basically introduce a " " (empty space) into it? I'm counting on the model already knowing a bit about the thing I want to train on, so it would basically be more of a "finer fine-tuning".

    Thank you so much!!

  24. Thanks for a nice tutorial. What's the reason you want write access on the Hugging Face token? It seems to me that read access should be enough, so I tried it, and it worked fine. 🤷‍♂
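For anyone replicating this: the token is only used to download the base model, so a read-scoped token should indeed suffice; write access would only matter if you pushed a model back to the Hub. A minimal sketch using the huggingface_hub client:

```python
from huggingface_hub import login

# A read-scoped token is enough for downloading the base model;
# paste your own token from huggingface.co/settings/tokens.
login(token="hf_...")
```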

  25. Thanks @Aitrepreneur – the tech has come a long way. I do notice that when training multiple subjects of the same gender together and then creating images, one character tends to be dominant or slightly blended. It's only when the two subjects are different genders that they appear very different. It would be good to get a new video when the tech improves further and same-gender subjects come out unique, without one dominating the other.

  26. Hello, this is great, I just have one question. In the cell that runs the SD UI, which you can see at 8:00, there's an option to use a custom path. Does anyone know how this path should be formatted? I'm trying to use a model I already trained, but I'm not able to provide the correct path to it.

  27. Awesome video, as usual. It would be great to have a video on how to train a subject/style (like you did in a previous RunPod video), and to know whether that's possible with this Colab. Also, is it possible to train a subject on a different model, not SD 1.5 but, for instance, WD 1.2? Thanks a lot for your awesome work teaching AI plainly for everyone…

  28. Hello!
    Thank you so much for the videos! Can't wait for the upcoming tutorials – I don't miss any of them ❤
    I'm not sure about the difference between resuming and creating a new session after loading an existing model I previously made (in "Create/Load a session").
    For example, if I'm not happy with the model I made a week ago and decide to train on some new images…
    Thank you again!

  29. So one can have it learn the faces of a character, objects, styles… Will it learn body structure too if one passes not only face images but also images of the character's body?

  30. Always grateful for your videos. Question, why didn't you use "person" after the instance prompt like in the other DB tutorial? Is that because you didn't use reg. images?

  31. I've found that if you have the steps too high it can lead to the influence of the source images being too strong where it'll pretty much ignore your prompt altogether. I've found success at increasing the input images to around 70 but keeping steps at 2000 whereas using 3000 steps and 30 images led to the issue mentioned earlier. I'm still playing around with it so I'm not sure about the sweet spot of steps vs image count, and it may just differ depending on the input images.
