Chapters:
00:00 Intro
00:34 Taking the Photos
01:20 Resizing the Photos
01:42 Dreambooth Google Colab
03:12 Hugging Face Account Setup
04:03 Dreambooth Continued
05:42 Start Training
07:10 Generating Images
07:43 Lexica
08:09 Guidance Scale
08:37 Conclusion
Make AI art based on your own likeness using #dreambooth and #stablediffusion. You don't need a high-spec GPU to run it; Google Colab handles it all!
Looking for a more recent colab tutorial, I have one here: https://youtu.be/MT8HhBSMaWM
◆ TUTORIAL LINKS ◆
| BIRME
‘Resize any image to 512×512 for free. I used Photoshop for mine, but you can use any tool that lets you set the pixel size to 512×512’
https://www.birme.net/?target_width=512&target_height=512
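If you'd rather script the resize than use BIRME or Photoshop, here is a minimal Pillow sketch (the folder names are hypothetical — point it at wherever your photos live). It center-crops to a square first so faces aren't stretched:

```python
from pathlib import Path
from PIL import Image

def resize_to_512(src_dir: str, dst_dir: str) -> None:
    """Center-crop each image to a square, then scale it to 512x512."""
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*"):
        if path.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
            continue
        img = Image.open(path).convert("RGB")
        side = min(img.size)                      # shortest edge
        left = (img.width - side) // 2
        top = (img.height - side) // 2
        img = img.crop((left, top, left + side, top + side))
        img = img.resize((512, 512), Image.LANCZOS)
        img.save(out / f"{path.stem}.png")
```

Usage would be something like `resize_to_512("raw_photos", "training_photos")`.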
| DREAMBOOTH IN GOOGLE COLAB
‘An online workspace that runs Python code, which will ultimately generate your images’
https://colab.research.google.com/github/ShivamShrirao/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb
| HUGGING FACE
‘Where you get your access tokens from; these allow access to Stable Diffusion in the Dreambooth Google Colab’
Old v1.4 Link: https://huggingface.co/CompVis/stable-diffusion-v-1-4-original
New v1.5 Link: https://huggingface.co/runwayml/stable-diffusion-v1-5
| LEXICA ART
‘Where you can see other AI art creations and the prompts used to build them. It’s super handy if you want to start churning out higher-quality art more quickly’
https://lexica.art/
| DISCORD
‘Pretty empty right now, but the aim is to eventually grow it into a creative network where people can share insights into art and tech’
https://discord.gg/HNecqYUwZT
◆ GUIDES/ RESOURCES ◆
| STABLE DIFFUSION WEBUI TUTORIAL
‘Want to create AI images of anything for free? By downloading the Stable Diffusion git repository you’ll be well on your way to doing this locally from home using your own hardware. It’s a little more complicated to get up and running but you can start by following this video:’
https://youtu.be/KFdtf1JKXmQ?t=121
◆ MUSIC ◆
Charlie Ryan – Close Call
Omri Smadar – Daffodils
Tomas Novoa – Tornasol
Tobias Bergson – Road Up
Downtown Binary – Gravity
◆ THANKS FOR YOUR SUPPORT ◆
Building a channel takes a lot of work and I wouldn’t be able to do it without all the support, so it’s much appreciated!
◆ LINKS ◆
SPOTIFY: https://open.spotify.com/artist/0iryLKSMqCRmYx0niXqc0x?si=qiI-PEWLScG-yK3OYbKTJQ
CREDITS: https://www.imdb.com/name/nm6382359/?ref_=fn_al_nm_3
◆ FOLLOW ME ◆
TWITTER: https://twitter.com/jcunliffeuk
INSTAGRAM: https://www.instagram.com/jcunliffeuk/
SOUNDCLOUD: https://soundcloud.com/alias-here
DISCORD: https://discord.gg/HNecqYUwZT
◆ HELP THE CHANNEL ◆
If you want to help me you can share the videos with your friends or on social media.
Mobile app that wraps all this together in 3…2…1
(PS it’s “INference”)
James, this doesn't work anymore, can you update it?
How do I make multiple people? Do I need to generate more API keys in Hugging Face?
This was so easy to follow! Thanks James! I've found that setting guidance_scale to between 3.5 and 5 has given me the best results.
fuck Google, they de-platform!
Thanks for the awesome TUT James
I just have a question: what happens to the photos you upload to train the AI after you're done using it? Do they get deleted?
I have a personal query I wish to pose. With this particular set of apps you have demonstrated that the AI can be trained to reproduce the same character over and over. However, can the AI also be trained to reproduce the same set or settings, i.e. furniture appearing the same, especially if viewed from different angles and such?
To anyone getting errors when they click on Start Training: if you see a URL related to Hugging Face included in your training message, click on it. It will take you back to the Hugging Face website; just accept access to the repository again. Now run the script again and it should work.
Sometimes you also need to run the script again for "Setting and run" and then run the training script again.
This was extremely frustrating, as the video made everything look like it ran without any hiccups, but a lot of people are encountering errors.
Thank you for the video though James. This technology is mind blowing.
The video I was looking for! It's finally here! Thanks man! Did you write the code yourself? I mean, other people won't give you the access for free. Just thanks!
Thanks for the video. How can i use the model I've already created again after refreshing the page?
I saw the code and thought "Nope, no way. Totally out of my league." I'm a techy person but not this techy lol but I followed all of your steps and it worked! Thanks so much for making this so clear and easy to follow! It's almost like the lottery, trying to get a good batch but it's worth it when you do.
Hey, I followed your steps, but when I want to start training it said: "/bin/bash: accelerate: command not found" – What is this?
After watching countless videos on this, I bet this is the correct, easy and friendly way to do it. Thank you so much.
Hi, I got an error in the accelerate launch. I just did what you did and it didn't work, I don't know why. Can you please help me?
I did it and it worked, but when I generate images I sometimes see other people. It looks like the model has been trained with other people besides me… or is it just random? The prompts are correct.
I keep getting some error about “pipe” not sure why
Trying to get this up and running but hitting an error when I click the Inference node. "OSError Traceback (most recent call last)
<ipython-input-13-bb26acbc4cb5> in <module>
6 model_path = OUTPUT_DIR # If you want to use previously trained model saved in gdrive, replace this with the full path of model in gdrive"
If someone can help let me know how to resolve it that would be greatly appreciated. Thank you again for this video! Great job.
I am having an issue getting the gallery to work. It just says Error when I go to generate, but I have followed all the steps. Anyone else run into this?
Is there no way to get this to run with a local Stable Diffusion install?
Halfway down the tutorial, when going about the 'Inference' node, it returns an error.
Can't seem to figure out why, when apparently everyone else in the comments seems to be doing fine.
I accidentally logged out my Hugging Face token to see what would happen, and now I can't enter my new tokens in Colab. What should I do?
Wow, absolutely going to act like I will try this, but truthfully will watch a bunch of videos before finally forgetting.
AWESOME, THANKS!
Awesome
Can someone please help me?
I get this error in the inference note: Error no file named model_index.json found in directory /content/drive/MyDrive/stable_diffusion_weights/DaliOutput.
What to do?
is this what corridor digital used as well?
Whoa things just got crazy! Great job on this video exploring AI
I keep getting an error in "Start Training".
I've tried like a thousand times.
Great job so much amaaaaaaaaaazing

I tried a few times I got this error. OSError: Error no file named model_index.json found in directory /content/drive/MyDrive/stable_diffusion_weights/KeithOutput. What step makes the model_index.json file?
You've probably answered this already, but I chose to save the model data and was wondering how to reuse that data to generate more images, without having to train the model again.
oddly doesn't work anymore – repeated the process a couple times all the way up till Training, but always get error messages at the training phase :/
@james do we have to go through the whole process every time we want to generate images, or is there a way to start generating images right away?
The first three models I created went without problems but now they won't download anymore for some reason. Says that they are but they seem to end up outside of my Google Drive. Still have plenty of free space but something is happening with the save path.
I'm stuck at "Start training" and always getting this error message:
"The following values were not passed to `accelerate launch` and had defaults used instead:
`--num_processes` was set to a value of `1`
`--num_machines` was set to a value of `1`
`--mixed_precision` was set to a value of `'no'`
`--num_cpu_threads_per_process` was set to `1` to improve out-of-box performance
To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/huggingface_hub/utils/_errors.py", line 213, in hf_raise_for_status
response.raise_for_status()
File "/usr/local/lib/python3.7/dist-packages/requests/models.py", line 941, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/CompVis/stable-diffusion-v1-4/resolve/main/model_index.json
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/diffusers/configuration_utils.py", line 234, in get_config_dict
revision=revision,
File "/usr/local/lib/python3.7/dist-packages/huggingface_hub/file_download.py", line 1057, in hf_hub_download
timeout=etag_timeout,
File "/usr/local/lib/python3.7/dist-packages/huggingface_hub/file_download.py", line 1359, in get_hf_file_metadata
hf_raise_for_status(r)
File "/usr/local/lib/python3.7/dist-packages/huggingface_hub/utils/_errors.py", line 254, in hf_raise_for_status
raise HfHubHTTPError(str(HTTPError), response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: <class 'requests.exceptions.HTTPError'> (Request ID: 8hutDT3-ecVO8fkg0fEVc)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "train_dreambooth.py", line 695, in <module>
main()
File "train_dreambooth.py", line 376, in main
args.pretrained_model_name_or_path, torch_dtype=torch_dtype, use_auth_token=True
File "/usr/local/lib/python3.7/dist-packages/diffusers/pipeline_utils.py", line 373, in from_pretrained
revision=revision,
File "/usr/local/lib/python3.7/dist-packages/diffusers/configuration_utils.py", line 256, in get_config_dict
"There was a specific connection error when trying to load"
OSError: There was a specific connection error when trying to load CompVis/stable-diffusion-v1-4:
<class 'requests.exceptions.HTTPError'> (Request ID: 8hutDT3-ecVO8fkg0fEVc)
Traceback (most recent call last):
File "/usr/local/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/accelerate_cli.py", line 43, in main
args.func(args)
File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/launch.py", line 837, in launch_command
simple_launcher(args)
File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/launch.py", line 354, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', 'train_dreambooth.py', '--pretrained_model_name_or_path=CompVis/stable-diffusion-v1-4', '--instance_data_dir=/content/data/vagasilacInput', '--class_data_dir=/content/data/person', '--output_dir=/content/drive/MyDrive/stable_diffusion_weights/vagasilacOutput', '--with_prior_preservation', '--prior_loss_weight=1.0', '--instance_prompt=vagasilac', '--class_prompt=person', '--seed=1337', '--resolution=512', '--train_batch_size=1', '--train_text_encoder', '--mixed_precision=fp16', '--use_8bit_adam', '--gradient_accumulation_steps=1', '--learning_rate=1e-6', '--lr_scheduler=constant', '--lr_warmup_steps=0', '--num_class_images=50', '--sample_batch_size=4', '--max_train_steps=800']' returned non-zero exit status 1."
NOTE for noobs (like me): If you don't see "Access Tokens" available in the Hugging Face settings, that means you have to go to the email address you registered with, look for the email from Hugging Face, then click the link to verify your account.
This is exactly the walk through I needed
ty bruu <3
why do you have to have the 1:1 aspect ratio?
is that for memory concerns?
My pics are always portraits and my hair never changes.
I keep crashing/failing when I run the Inference part! Why?
Seems as if the colab page has changed a bit. Wondering if anyone else is still able to properly get it up and running. I seem to be running into an issue with specifying the class prompt.
Somehow the Hugging Face login doesn't show up anymore. I got it working before, but it seems like there's been an update with Stable Diffusion; has 1.5 got something to do with it?
Thanks for the effort but the colab has changed too much for the guide to work for me
Anyone figured out how to train multiple subjects? Trying to generate 1 image with 2 people.
Hello! The colab recently changed, any chance we can get a quick update on how to run it since the update? Thank you!
I was only able to try this once, but now the steps have changed on the Google Colab link and I have no clue…
I have been struggling to do this. It seems they changed the Dreambooth link, so now I'm struggling to import my images to train the AI.
"Pho'os" is so funny mate, btw thanks for your video!
The result is amazing. I'm so surprised!
The Colab page has actually updated the process; could you do an update of the tutorial? Thanks man, great content.
Has anyone been able to get this to work recently? I've been trying to get it to work all day and all I get are error messages.
Can't wait for AI to break into a performance art scene.
It worked on my PC, thanks very much bro.
Woah, great video mate!
Just subscribed, thank you James. I have a question: "Login HF and Run" is not showing, help!!!
On the Google Colab doc I don't have the same interface for Hugging Face, and when I put in my token and execute, nothing happens. Can you help me?
I downloaded the notebook from the link you provided. There is no LOGIN option in the "Login to Hugging Face" cell. What should I do? Also, the notebook in the video is different from the link.
How can I get images at a better resolution, like 1024×1024?
Hey brother, make another video on this same topic using Google Colab but with Stable Diffusion v1-5.
In the new and updated notebook, what is the difference between "instance_prompt" and "class_prompt"? Where should the photos be uploaded?
There was a recent commit in the diffusers repo where an argument called "revision" is now necessary in the step where you start the model training. You should add the following argument:
--revision="main"
That will point to the right git branch and will be able to find the model.
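Putting that tip together with the flags from the error logs above, a training invocation would look roughly like this. This is a sketch, not the notebook's exact command — the paths and the instance/class prompts are placeholders, and note that every flag uses a double hyphen (comment boxes often mangle `--` into a dash):

```
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=CompVis/stable-diffusion-v1-4 \
  --revision="main" \
  --instance_data_dir=/content/data/myInput \
  --class_data_dir=/content/data/person \
  --output_dir=/content/drive/MyDrive/stable_diffusion_weights/myOutput \
  --instance_prompt=myname \
  --class_prompt=person \
  --resolution=512 \
  --mixed_precision=fp16 \
  --max_train_steps=800
```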
Hi, thanks for this great tut, any chance you know how to train a style? what should we set the class and the instance prompt to get a good result?
I can't upload the images because I don't have the INSTANCE_DIR to edit at all. It's missing; I don't know where to upload the pictures.
That was absolutely amazing James. Thank you so much!
I'm from Bangladesh
Does not work for me. In Dreambooth, at the step where you define the model path etc. (around 4:18), I don't have the field to put the input folder. It only gives me "save_to_gdrive", Model_Name and Output_Dir, but no input path, so I have no idea where to put my pictures.
/edit
Somehow it worked this time. When I pushed play on the next step, it prompted me to upload my pictures. Could you tell me where I need to put my model file on my local Stable Diffusion install to use it there?
Thanks mate.
What are fô-ôs?
Hey! I don't have the "instance_dir", I just have the "output_dir", so I can't put my photos anywhere. Do you have a solution?
great video btw
The Colab was updated; it doesn't work with this tutorial anymore.
I've been trying to get this to work forever and still no luck; I always get an error when running the Inference node. If anyone wants to help, that would be super cool.
It will kill real artists.
I set the seed, but on each inference run I get different results. Is there a way to keep getting the same picture?
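For repeatable inference you generally have to re-seed on every pipeline call, not just once; in diffusers the usual pattern (an assumption about the notebook's exact code, but `generator` is a real pipeline parameter) is `pipe(prompt, generator=torch.Generator("cuda").manual_seed(1337))`. The principle, sketched with Python's stdlib generator:

```python
import random

def sample(seed: int, n: int = 4) -> list:
    # A fresh generator seeded identically on every call yields identical
    # draws -- the same idea as passing a manually seeded torch.Generator
    # to the diffusers pipeline on each inference run instead of seeding once.
    rng = random.Random(seed)
    return [round(rng.random(), 6) for _ in range(n)]

# Same seed, same draws; a different seed gives different ones.
assert sample(1337) == sample(1337)
assert sample(1337) != sample(42)
```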
Thanks for the tutorial. It seems the .ipynb changed; it's not the same model name, and it's impossible to import our pictures.
this is so hard lol
Would it kill them to make it user friendly?
Dreambooth has updated, this tutorial no longer works James
For those who feel that the outputs don't quite resemble their face, increase the number of training steps, I personally feel 1000 is a bit too low. I tried with 2500 and got much better results in terms of the outputs looking like me.
I am getting errors at the AI training stage. I don't know what is wrong; everything else is green and the pictures are in the directory. Do the pictures have to be in PNG format?
The Colab .py file has changed and this instruction set no longer works.
How come the file nodes look different to me? I don't see a login button for Hugging Face. I see an output file directory but not an input file path to write to. There isn't even a "class" field like the one in your video. Was this whole thing updated in the past month? Or do I need to purchase a different version of Google Colab? I feel a bit confused and would love some pointers. Thank you!
Hi, excellent video, congratulations! Where can I find the same Colab as in the video? The Colab at the link is very different; it's not the same.
The Google Colab Doc has been updated since this tutorial.
I have 2 alternative methods for those who are struggling.
Local Hypernetworks: https://youtu.be/P1dfwViVOIU | In this one, I show you how you can get similar results to Dreambooth using Hypernetworks instead which is run locally through Automatic1111
Alternative Dreambooth Colab: https://youtu.be/MT8HhBSMaWM | In this one I show you how to use another Colab version of Dreambooth but also show you how you can integrate it with your local Automatic1111 to get more out of it.
Thanks for supporting my channel, really is crazy to see it go from 180 subs & 5k total views less than a month ago to over 3300 with 200K views.
Yoo, mine is different; there is no sks as in the video, and the appearance of my Dreambooth is quite different.
This is out of my league and I don't know what I should do, please help.
I used this Colab and converted my weights to ckpt, then wanted to download the ckpt file, but the download speed was so low that I got disconnected. Is there a way to save the ckpt file directly to my Google Drive?
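One workaround is to copy the converted file into a mounted Drive folder instead of downloading it through the browser. In Colab you'd first mount Drive with `from google.colab import drive; drive.mount('/content/drive')`; the paths below are hypothetical examples. A minimal sketch of the copy step:

```python
import shutil
from pathlib import Path

def copy_ckpt(src: str, drive_dir: str) -> str:
    """Copy a checkpoint into a (mounted) Drive folder; return the new path."""
    dst = Path(drive_dir)
    dst.mkdir(parents=True, exist_ok=True)
    target = dst / Path(src).name
    shutil.copy2(src, target)   # Drive syncs the copied file in the background
    return str(target)
```

Usage would look like `copy_ckpt("/content/model.ckpt", "/content/drive/MyDrive/sd_models")`.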
It's so easy to get lost during tutorials if you treat the people learning like they already know. I have to keep rewinding because I keep getting lost. SLOW DOWN next time.
Great tutorial, clear and thorough. Thank you! One question, how do you download the checkpoint model to use locally?
Is there a way to use two models on the same image? Like if I wanna make a pic of me and, idk, my boyfriend or family.
The path for conceptual-training images is gone!? ???
Is there a video tutorial, on how to run it locally?
Nice video! I want to install this version of Dreambooth on my VM on Runpod. I am using the JoePenna version of Dreambooth, which has no prior preservation or text encoding; the results are not as good as this version from the video. If anyone has installed it successfully, let me know. Thanks!
Thank you for this tutorial, but there is no login button on "HUGGINGFACE_TOKEN:" on the DREAMBOOTH IN GOOGLE COLAB page, so when I enter the token code and hit start, nothing happens. I am stuck here. Also, on the "Settings and run" page the contents are different from what you show in the video; I only have Model_Name and Output_Dir, that's all.
Any idea how to solve this so I can continue learning?
Thanks for the great content again
Basically we need to pay
Why is all my code jacked up lol. I didn't get a class or instance.
great tutorial
Sir, I got an error while trying this. The name of the error is "name 'WEIGHTS_DIR' is not defined". Please help.
I've accepted the model license on both v1.4 and v1.5, but Google Colab won't acknowledge my login. A person icon pops up briefly only to disappear, and the green checkmark is there, but I cannot continue to the instance stage. Can anybody help me out, please?
My command prompt script install_windows.bat brings up an error once I click Enter.