With recent developments in AI technology and training, you can now use Dreambooth (based on Google's AI research) to train a Stable Diffusion model on multiple subjects (people, styles, objects) using your own images, all for free! So in this video, I will show you what you need to do before training, plus the tips and tricks you need to know to train multiple subjects in one go. This will allow you to have multiple people trained in one single ckpt file in less than an hour!
Did you manage to train multiple people or styles? Let me know in the comments!
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
SOCIAL MEDIA LINKS!
✨ Support my work on Patreon: https://www.patreon.com/aitrepreneur
⚔️ Join the Discord server: https://discord.gg/3ErYSdyUPt
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
TheLastBen’s Dreambooth Colab Doc: https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb
Resize your images for free using birme: https://www.birme.net/?target_width=512&target_height=512
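If you'd rather resize locally instead of using the Birme site, here is a minimal sketch in Python using the Pillow library (the folder names are placeholders; this is just one way to produce the 512x512 images the training expects):

    # Center-crop and resize every image in a folder to 512x512
    # for Dreambooth training. Requires Pillow (pip install Pillow).
    # "raw_images" and "resized_512" are placeholder folder names.
    from pathlib import Path
    from PIL import Image, ImageOps

    src, dst = Path("raw_images"), Path("resized_512")
    dst.mkdir(exist_ok=True)
    for img_path in sorted(src.iterdir()):
        if img_path.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
            continue
        with Image.open(img_path) as im:
            # fit() crops to a centered square, then resizes to 512x512
            square = ImageOps.fit(im.convert("RGB"), (512, 512))
            square.save(dst / (img_path.stem + ".png"))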
Special thanks to Royal Emperor:
– DanO..
Thank you so much for your support on Patreon! You are truly a glory to behold! Your generosity is immense, and it means the world to me. Thank you for helping me keep the lights on and the content flowing. Thank you very much!
#stablediffusion #dreambooth #stablediffusiontutorial
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
WATCH MY MOST POPULAR VIDEOS:
RECOMMENDED WATCHING – My “Stable Diffusion” Playlist:
►► https://bit.ly/stablediffusion
RECOMMENDED WATCHING – My “Tutorial” Playlist:
►► https://bit.ly/TuTPlaylist
Disclosure: Bear in mind that some of the links in this post are affiliate links, and if you go through them to make a purchase, I will earn a commission. Keep in mind that I link these companies and their products because of their quality and not because of the commission I receive from your purchases. The decision is yours, and whether or not you decide to buy something is completely up to you.
HELLO HUMANS! Thank you for watching & do NOT forget to LIKE and SUBSCRIBE For More Ai Updates. Thx <3
"K" – Your Ai Overlord
Can you do a comparison between RunPod and Colab?
I am getting weird results. Every time I do a prompt, it basically comes up with a variation of the images I trained it with, nothing like my prompt.
Make a video about Stable Diffusion AMD GPU installation. I have an RX 580.
Aitrepreneur – use a Photoshop equivalent to paste your "person to be changed" beside someone wearing the "change", then inpaint out the parts of the "person to be changed" you want changed, and prompt that they are identical twins – mind-blowing!
Is there any way to do this Dreambooth method locally yet? I get the best results with Dreambooth as opposed to SD hypernetwork training.
Finally! Thanks, I'll try it today.
Well, I trained 3 different subjects, each with "sks" at the end of the name… What happened was that the likeness was unreal, but the second name, even though it was trained hours earlier, always brought up the third name. Don't use "sks" in the training names, I guess.
I get this error when trying to test the newly trained model:
"ImportError: cannot import name 'image_from_url_text' from 'modules.generation_parameters_copypaste' (/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/generation_parameters_copypaste.py)"
Can anyone help?
This is the best Dreambooth Colab doc.
Dreambooth training is acting just like textual inversion for me. I can't change the details of the original photos, just get a stream of the smiling faces I trained, and I can't change where they are or how they appear. The output is always similar to the trained images. So if I type something like "in plate armor", the subject still shows up in a bikini or t-shirt. I was hoping Dreambooth was the answer for including new subjects, but unless the model is undertrained (the faces are near identical, so that doesn't seem to be the case), is this just something we can't use Dreambooth for? Will it always include the original details?
Entrepreneurship
I'm having issues with my trained 1.5 model. It keeps making the photo very close to what I trained it with, even if my prompts are good. I can't create an artistic image of myself, and if I reduce or raise the CFG, the image doesn't look like me. Any suggestions?
Fantastic, I've been waiting for something like this to train artistic styles, and once again, K, your tutorials are simple, straightforward, and easy to follow! You're number #1 at this! Colab should hire you to do their training videos.
Is it better to remove backgrounds from images when training people's faces?
Excellent
Is there any way to train with Waifu Diffusion? I keep trying to upload the ckpt file either through Google Drive or Colab, but I can't; it keeps giving me conversion errors, and the button to fix it doesn't work… I'm really curious to see what happens if you train yourself into an anime model.
Hello, I congratulate you on this great explanation of this new functionality. I managed to use two identifiers for different characters, and you can mix them together or select one or the other in the prompt. But I wonder how it would be possible to see them both interact, like a simple handshake, or a scene where both characters are present. Thanks!
That's great, but I'm really waiting for local training 😀
Can't we run this version of Dreambooth locally?
Hi Aitrepreneur, thanks so much. After training two or more characters or styles, could you train another one on the same ckpt without affecting the terms already set?
Has anyone had success using the 1.4 model? I'm getting conversion errors even with the compatibility box checked.
If I wanted to train a style and not a subject, how do I get it to recreate just the art style and not the objects in the images?
Amazing, the results are just amazing! I've been trying hypernetworks with subpar results; this was a hit. I uploaded 58 images and trained for 4,000 steps, and the result blew my mind. 😁
P.S. I forgot to tick the checkpoint option, so it did not save any, and the space it allocated on my drive was 2 GB, so maybe save a checkpoint every 1,500 steps; the result will be great anyway.
Now, how can we use the trained model with SD v1.5? Is there a way to merge them?
Spent all day on this after previous failed attempts (see my other comments), and here's the trick: do not overtrain. This method works flawlessly, but stick to 2,000 steps per 20 images. I tried going to 3k, 4k, 5k, and even 6k; the 3k file is good, but the 2k file seems the best. That would be 3,000 steps for 30 images. Let me just say, the results I'm now getting are so good that I'll be posting them on the rentry wiki once I finish training. Can't thank you enough, AI guys.
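As a minimal sketch of that rule of thumb in Python (roughly 100 training steps per instance image, per the comment above; this is a community heuristic, not an official Dreambooth setting, and the function name is made up for illustration):

    # Community rule of thumb from the comment above: ~100 Dreambooth
    # training steps per instance image (2,000 for 20 images, 3,000 for 30).
    # Only a heuristic; results vary with the input images.
    def suggested_steps(num_images: int, steps_per_image: int = 100) -> int:
        return num_images * steps_per_image

    for n in (20, 30, 70):
        print(f"{n} images -> ~{suggested_steps(n)} steps")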
Thanks for the great review! Remember to add weight to the instance name with () to get a more precise result; with the new method, the weights are treated a bit differently. 👍
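For anyone unfamiliar with that () syntax, here are two hypothetical prompts using the AUTOMATIC1111 webui attention weighting the comment refers to ("mysubject" stands in for whatever instance name you trained):

    a portrait of (mysubject), oil painting        (parentheses boost attention by about 1.1x)
    a portrait of (mysubject:1.3), oil painting    (an explicit weight of 1.3)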
What is the best Google Colab notebook available now for SD that has text prompts?
Is there a way to do this that doesn't save it to Google Drive? I prefer to just download it directly from the Colab.
Can you tell me why the size of the newly trained model is 2 GB (ckpt) while the original model is 4 GB?
Can you tell me why, after the training is finished, the trained ckpt file contains only my trained face: no art styles, no other trained faces (Musk, Trump, etc.)?
Awesome video, thank you!
I appreciate my overlord's passion for educating the masses in the ways of AI. I was successful in training a model on my family. Could you elaborate on whether you think it's best to train with a wide age range of photos (child through adult), or if creating different models based on age is more practical? Also, I'm curious what the numbers during training mean (loss=0.0102, lr=4.34e-7). Keep up the positive vibes.
If you overtrained your model.
How or where can you get access to the 500-step models?
OK, I did it, awesome! Thanks! Now, has anyone been able to put the two subjects in the same composition without making a weird fusion?
Is it possible to generate images with the two characters in the same image?
Perfect! The only small problem may be that you can only pay for Google Colab with a credit card; no other payment methods are offered.
Multi-subject on RunPod/Vast.ai, please!
Multi-subject on Shivam's notebook, please!
So if I have 500 images of 25 subjects (let's say people), do I have to train this model for 50,000 steps? Because I've done it before, and all the images I generated were very inaccurate compared to the images I uploaded and trained with. Or is that just too much for this method?
I have trained two models and want to use them in the checkpoint merger, but only one of them is in the list. What should I do to make both models appear in the list? Unfortunately, I did not understand from the video how to do it. Thank you very much for your work.
Can I train a person and a style? Can I use both in the same model?
Now it would be great to be able to do this for more subjects locally! 😀
Thanks!
Is it possible to run this on RunPod? I'd like to train more, but Google Colab is limiting.
Thanks!
I am slightly confused about this: does it train two individuals into the model, or create a weird combination of two people?
Our Ai Overlord! THANK YOU!
Thanks so much! Can you please explain how we can add, besides our own photos, regularization images? I mean a set of images that might not include us but match the style/location, in order to get results of us in that style/location. Appreciate it!
Hi, I have a question for anyone who can help… I tried training with this fast Dreambooth Colab, but I wanted to use the EMA version of Waifu Diffusion 1.2 as the base model. I could run this training with the RunPod solution with no problems, but when using this Colab, it shows an error related to using the EMA version of the model. Could anyone tell me (us) how to fix this in the Colab version? It is a "funny" error because EMA versions seem to be better for training purposes… Thanks, everyone.
Please make a video on CLIP-guided prompts.
If I combine my model A (portrait of a person) and model B (watercolor style), then I lose the individual resemblance of the person or the style (depending on the settings). But when I use only model B (watercolor style) and add some famous personality (for example, Scarlett Johansson) to the prompt, it turns out flawlessly. Is there any way to get a lossless result using my two models, preserving both the portrait likeness of the person and the personal style (technique)?
How do I use this in a local installation? How do I set names and classes?
How can I use the model I trained?
I trained a model, then tried it on the Colab and made a couple of images, but after refreshing the page I lost the interface where I was using prompts.
Is there any way models can be trained via an API? Like, send 10 images via the API, start running the Python script to train the model, then somehow get that model, put it on a server with Stable Diffusion, and request prompts from that specific model? I am super new to Stable Diffusion. Looks really interesting!
Your videos are always so good! You explain things so well :') Thank you for making these videos.
Your definition of "free" needs to be amended.
Say I wanted to train a ckpt file on my own art style. Would this be a good Dreambooth to use, or would another version be better? What would be the best number of images to use to train an entire style?
Thank you 💯
Thank you for the video!
I have a question: does anyone know if, using this method, the system can output an image where both instances appear? Thank you in advance!
Hi Aitrepreneur, thanks for sharing!! I have trained a person and two hats; I fed in 20 images per class. In short, the human performance is perfect, but the hats are not doing so great. Are there any parameters that need to be set up for training a non-human object?
Awesome tutorial, this worked flawlessly thanks!
Your videos are so valuable and informative. I know this stuff is constantly changing, but my head is starting to hurt! I'm getting really confused between models, styles, embeddings, textual inversion, and hypernetworks. I suspect that these are improved new ways of doing certain tasks, but it's really hard to keep up.
After training two subjects in a single session, can I use both in a single image?
Cool stuff! One question about the token: I want to make use of something the model already knows. Let's say I call the pics "amazon alexa (1)"… Do you think it is possible to basically introduce a " " (empty space) into it? My aim is that the model already knows a bit about the thing I want to train on, so it should basically be more of a "finer fine-tuning".
Thank you so much!!
Does anyone else agree that this method generally produces inferior results compared to the JoePenna notebook? (1-to-1, obviously training multiple subjects at the same time is a benefit)
Thanks for a nice tutorial. What's the reason you want write access on the Hugging Face token? It seems to me that read access should be enough, so I tried it, and it worked fine. 🤷♂
Thanks @Aitrepreneur – the tech is coming a long way. I do notice that when training multiple subjects of the same gender together and then creating images, one character tends to be dominant/slightly blended. It's only if the two subjects are different genders that they appear very different. It would be good to see a new video when the tech improves further and same-gender subjects appear unique, without one dominating the other.
It seems to have overfitted on my images. Any ideas how I can fix that?
Well done!
Hello, it's great; I just have one question. In the cell that runs the SD UI, which you can see at 8:00, there's an option to use a custom path. Does anyone know how this path should be formatted? I'm trying to use the model I already trained, but I'm not able to provide the correct path to it.
Is the 512 x 512 because that's what you plan to output? If I'm outputting a higher resolution, do I crop the images to that higher resolution, or is 512 just the required training resolution?
I get this error when I try to upload the images. Can anyone help?
"RangeError: Maximum call stack size exceeded google"
Hey, how do I get back to the model the next day if I close all the tabs and so on? Help me, please.
Absolutely fantastic!! Thanks
Am I able to use this new "fast" method to train styles? If so, how? There's no field for inputting a class or reference prompt for the uploaded images.
Hi, I trained one model on my face and another on a style. How do I merge the two so I can keep my face as the main character, using the Dreambooth from this video?
Awesome video, as usual! It would be great to have a video on how to train a subject/style (like you did in a previous RunPod video) and to know whether that's possible with this Colab. Also, is it possible to train a subject on a different model, not SD 1.5 but, for instance, WD 1.2? Thanks a lot for your awesome work teaching AI plainly for everyone…
Thanks for this nice tutorial! Just for my understanding: Dreambooth isn't the same as the "Train" option that's available in local SD?
The Google Colab link in the description has the "New Fast Method" missing. Does anyone know how to enable it or what to do?
Can you do a guide on how to transfer a style, like Arcane Diffusion or the Disney model?
Is it possible to ask for both characters in one scene?
The results are not good. I tried other similar Colabs that, with the same image input, give me much better results.
Do you have a video about video interpolation with Stable Diffusion? Thanks! This is awesome, bro!
If you wanted to add to the ckpt file at a later date, would that be possible, or would you need to retrain with all the original images again?
The result I'm getting is just like the original. What did I miss?
I hate that there's no working local repo for this; I don't want to be forced to use Colab.
Does this method add the training to the base model you choose? Or is it a new model that does not include the original training?
Free Greg Rutkowski!
Hello!
Thank you so much for the videos! Can't wait for upcoming tutorials- I don't miss any of them ❤
I'm not sure about the difference between resuming a session and creating a new one after loading an existing model I previously made (in "Create/Load a session").
For example, I'm not happy with the model I made a week ago and decide to train some new images…
Thank you again!
So one can train faces of a character, objects, styles… Will it learn body structure too if one passes not only face images but also images of the character's body?
Can I use a cartoon/anime face?
Hey! I have a debugging question. In the training part, I got a message saying "/bin/bash: accelerate: command not found". How can I fix it?
Any tutorial on how to retrain an old model after reloading a new Colab session?
Great tutorial! In the training part of this Colab, I see an option called Enable_text_encoder_training. What is this, and what setting should we use here? Thank you!
It's not giving me the option to upload files in the instance images section.
Always grateful for your videos. Question: why didn't you use "person" after the instance prompt like in the other DB tutorial? Is that because you didn't use regularization images?
Hey, this Colab doc worked fine for the first couple of days, but in the last two days it stopped working.
It trains but saves no ckpt files.
Wait a minute, if I'm supposed to train on 30 images and I have four characters, how many steps am I supposed to train? 9,000 steps?
Dumb question, but what do I do with the ckpt file?
I've found that if the step count is too high, the influence of the source images becomes too strong, and it'll pretty much ignore your prompt altogether. I've had success increasing the input images to around 70 while keeping the steps at 2,000, whereas using 3,000 steps and 30 images led to the issue mentioned earlier. I'm still playing around with it, so I'm not sure about the sweet spot of steps vs. image count, and it may just differ depending on the input images.