Chapters:
0:00 Intro
0:18 Step #1
0:48 Hugging Face
1:23 Setting Up
2:24 Start Dreambooth
2:50 Open Gradio
3:06 Start Prompting!
3:31 Easy Re-Run
3:53 Outro
Join our Discord channel:
———————————————–
Official Website:
https://www.thedorbrothers.com/
———————————————–
Tutorial Links:
1. https://github.com/TheLastBen/fast-stable-diffusion
(this link has changed, the one below brings you directly to the Notebook!)
———————————————–
Music:
Mak AV – The Sun is Ticking:
Can I download the model and use it on my local Stable Diffusion?
I had been waiting and looking for this for a while now. I was so happy after you announced it on the Discord channel. It totally worked, thank you so much 🙂
You sound Israeli, are you from Israel? 😮
For real though, seriously, thanks a million.
Hey, can you help us with the prompts? I really liked all your renders. Would you mind teaching us what we should prompt?
What type of prompts were you using to get those example images at the start? 🔥
Wow, that song in the beginning was like an epic documentary intro, what is its name?
What was that song in the beginning? Thanks for the helpful tutorial.
Some things have changed in the Notebook!
For the easiest method (a rough sketch of the form fields follows below):
– Change "With_Prior_Preservation" to NO
– In "Number_of_subject_images", type in the number of images of your face you uploaded
– Simply skip the [Optional] cell after it.
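Roughly, those settings are Colab form fields in the notebook. A minimal, hypothetical sketch of how that cell might look (the notebook changes often, so the exact names and defaults may differ):

#@title Subject image settings (hypothetical sketch, not the notebook's exact cell)
With_Prior_Preservation = "No"   #@param ["Yes", "No"]
Number_of_subject_images = 20    #@param {type:"number"}
# Set Number_of_subject_images to however many photos of your face you uploaded.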
Fastest/most comprehensive tutorial I've seen, thank you.
Would be super cool to have a tutorial for all the things we can do once we're inside the image generation software (so many cool features added in a couple of days).
So cool!
Oh my God!
Life will never be the same again!
Did everything you said and it took 1 hour 23 minutes to give me this amazing result 🙁 🙁 🙁 "The model doesn't exist on your Gdrive, use the file explorer to get the path"
Anything you think I may have done wrong? 🙁
After running the test once, I do not get the Gradio interface option under the 'Test your model' panel. Any help?
How do I rerun the Colab a second time, without going through all that setup and training, to use the model that I've just trained? Anyone?
Hi! Been really enjoying your videos! However, the Discord link is dead, at least for me =)). Looking forward to seeing more content!
Hi! Big fan of your artwork. Your content is super useful and got me into Midjourney. Would definitely love to see more of your guides and tutorials! Also, it seems that the link to your Discord channel is invalid. Is there another way I could join it, please?
It says that my GPU is not supported; how can I see what's supported and what's not?
What computer specs are needed for Stable Diffusion?
"joe ghogan " the best part
I don't know what to do since the notebook has changed. At the end it's telling me to put the full path, but I don't know what path to put, so I put the Gdrive path with the name, but it doesn't work, it turns red… Please help!
How do you download the model as a ckpt for local use?
Hey, great video. What about the class dir, and do we really have to upload 200 to 400 class images, like the Colab suggests?
Bro, can we do this on a mobile phone? Because the majority of people only have a mobile phone.
Not working, it fails: no bash file found.
Is there no way to run this locally? I don't want to use Google Colab.
I'm getting the following error while running the 'Training' cell:
train_dreambooth.py: error: argument --save_starting_step: invalid int value: ''
Traceback (most recent call last):
  File "/usr/local/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/accelerate_cli.py", line 43, in main
    args.func(args)
  File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/launch.py", line 837, in launch_command
    simple_launcher(args)
  File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/launch.py", line 354, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', '/content/diffusers/examples/dreambooth/train_dreambooth.py', '--save_starting_step=', '--stop_text_encoder_training=', '--save_n_steps=', '--Session_dir=', '--pretrained_model_name_or_path=', '--instance_data_dir=', '--output_dir=', '--instance_prompt=', '--seed=', '--resolution=512', '--mixed_precision=', '--train_batch_size=1', '--gradient_accumulation_steps=1', '--use_8bit_adam', '--learning_rate=2e-6', '--lr_scheduler=polynomial', '--center_crop', '--lr_warmup_steps=0', '--max_train_steps=']' returned non-zero exit status 2.
Something went wrong
Please help.
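For anyone hitting the same traceback: the key line is the "invalid int value: ''" error. Every integer option in that command (--save_starting_step, --max_train_steps, and so on) is being passed an empty value, which argparse rejects, typically a sign that the setup cells that fill those values in were skipped or left blank. A minimal, hypothetical sketch (not the notebook's own code) that reproduces the same argparse error:

import argparse

# Hypothetical stand-in for the training script's argument parsing,
# shown only to illustrate where "invalid int value: ''" comes from.
parser = argparse.ArgumentParser(prog="train_dreambooth.py")
parser.add_argument("--save_starting_step", type=int)

# Passing an empty value such as '--save_starting_step=' makes int('') fail,
# so argparse exits with: error: argument --save_starting_step: invalid int value: ''
parser.parse_args(["--save_starting_step="])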
All was fine until the last step, testing the trained model. Clicked play for that cell, and it ran for ages, ended with a green "Connected" message and no link to actually use the result…
"use your own face with a plagarism machine!" cuck. learn to draw
Such a good video! Quick, to the point... but so frustrating, because even though the video is only 1 month old, Colab is already set up differently. The 'Test The Trained Model' section is now different and doesn't work the same way when you come back to a model the next day 🙁 I'll have to keep hunting to figure out how to just take my own face and make a few fun pics. Wild how overcomplicated this whole process is. Whoever streamlines this for people will get rich!
So weird. I followed all the steps, the new SD opened up at the URL, the model is loaded (person.ckpt) as weights on Dreambooth, but if I type "a person close up sitting near a table" I get nothing like me, just different people.