Create Art From Your Face With AI For Free | Dreambooth Tutorial | Stable Diffusion Guide

Chapters:
00:00 Intro
00:34 Taking the Photos
01:20 Resizing the Photos
01:42 Dreambooth Google Colab
03:12 Hugging Face Account Setup
04:03 Dreambooth Continued
05:42 Start Training
07:10 Generating Images
07:43 Lexica
08:09 Guidance Scale
08:37 Conclusion

Make AI art based on your own likeness using #dreambooth and #stablediffusion. You don't need a high-spec GPU to run it; Google Colab handles it all!

Looking for a more recent Colab tutorial? I have one here: https://youtu.be/MT8HhBSMaWM

◆ TUTORIAL LINKS ◆

🖼️ | BIRME
‘Resize any image to 512×512 for free. I used Photoshop for mine, but you can use any tool that lets you set the pixel size to 512×512’
https://www.birme.net/?target_width=512&target_height=512
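
If you'd rather script the resize than use BIRME or Photoshop, here's a minimal sketch using Pillow; the folder names are just example placeholders:

```python
# Minimal resize sketch with Pillow (pip install Pillow).
# "photos" and "resized" are illustrative folder names.
from pathlib import Path
from PIL import Image, ImageOps

src = Path("photos")
dst = Path("resized")
dst.mkdir(exist_ok=True)

for img_path in src.glob("*.jpg"):
    img = Image.open(img_path).convert("RGB")
    # Centre-crop to a square, then scale to exactly 512x512
    img = ImageOps.fit(img, (512, 512), Image.LANCZOS)
    img.save(dst / img_path.name)
```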

💤 | DREAMBOOTH IN GOOGLE COLAB
‘An online workspace that runs the Python code which will ultimately generate your images’
https://colab.research.google.com/github/ShivamShrirao/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb
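
When the notebook eventually generates images, the inference step boils down to something like the hedged sketch below using the diffusers library; the model path and prompt are illustrative examples rather than the exact code from the Colab:

```python
# Hedged sketch of the inference step: load the DreamBooth output folder
# saved to Google Drive and generate an image from a text prompt.
# The path and prompt are example placeholders.
import torch
from diffusers import StableDiffusionPipeline

model_path = "/content/drive/MyDrive/stable_diffusion_weights/MyOutput"  # example path
pipe = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=torch.float16).to("cuda")

image = pipe("portrait of sks person as an astronaut, digital art", guidance_scale=7.5).images[0]
image.save("sample.png")
```

Reloading the saved output folder like this is also how you can generate more images later without retraining (see comment 10 below).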

🤗 | HUGGING FACE
‘Where you get your tokens from; these allow access to Stable Diffusion in the Dreambooth Google Colab’
Old v1.4 Link: https://huggingface.co/CompVis/stable-diffusion-v-1-4-original
New v1.5 Link: https://huggingface.co/runwayml/stable-diffusion-v1-5
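
If the login widget in the Colab ever fails to appear (a frequent issue in the comments below), one workaround is to pass your token programmatically. This is only a hedged sketch, not the exact cell from the notebook; paste your own read token in place of the placeholder:

```python
# Sketch of authenticating with a Hugging Face access token from code.
# "hf_xxx" is a placeholder; use the read token from Settings -> Access Tokens.
from huggingface_hub import login

login(token="hf_xxx")
```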

🎨 | LEXICA ART
‘Where you can see other AI art creations and the prompts used to build them. It’s super handy if you want to start churning out higher-quality art more quickly’
https://lexica.art/

💬 | DISCORD
‘Pretty empty right now but the aim’s to eventually have it as a creative network where people can share insights into art and tech’
https://discord.gg/HNecqYUwZT

◆ GUIDES/ RESOURCES ◆

🤖 | STABLE DIFFUSION WEBUI TUTORIAL
‘Want to create AI images of anything for free? By downloading the Stable Diffusion git repository you’ll be well on your way to doing this locally from home using your own hardware. It’s a little more complicated to get up and running but you can start by following this video:’
https://youtu.be/KFdtf1JKXmQ?t=121

◆ MUSIC ◆

Charlie Ryan – Close Call
Omri Smadar – Daffodils
Tomas Novoa – Tornasol
Tobias Bergson – Road Up
Downtown Binary – Gravity

◆ THANKS FOR YOUR SUPPORT ◆

Building a channel takes a lot of work and I wouldn’t be able to do it without all the support, so it’s much appreciated!

◆ LINKS ◆

SPOTIFY: https://open.spotify.com/artist/0iryLKSMqCRmYx0niXqc0x?si=qiI-PEWLScG-yK3OYbKTJQ
CREDITS: https://www.imdb.com/name/nm6382359/?ref_=fn_al_nm_3

◆ FOLLOW ME ◆

TWITTER: https://twitter.com/jcunliffeuk
INSTAGRAM: https://www.instagram.com/jcunliffeuk/
SOUNDCLOUD: https://soundcloud.com/alias-here
DISCORD: https://discord.gg/HNecqYUwZT

◆ HELP THE CHANNEL ◆

If you want to help me you can share the videos with your friends or on social media.

100 Comments

  1. I have a question I wish to pose. With this particular set of apps you have demonstrated that the AI can be trained to reproduce the same character over and over. However, can the AI also be trained to reproduce the same set or setting, i.e. furniture appearing the same, especially if viewed from different angles and such?

  2. To anyone getting errors when you click Start Training: if you see a Hugging Face URL included in the error message, click on it. It will take you back to the Hugging Face website, where you just need to accept access to the repository again. Then run the script again and it should work.

    Sometimes you also need to re-run the "Settings and run" cell and then run the training script again.

    This was extremely frustrating, as the video made everything look like it ran without any hiccups, but a lot of people are encountering errors.

    Thank you for the video though, James. This technology is mind-blowing.

  3. The video I was looking for! It's finally here! Thanks, man! Did you write the code yourself? I mean, other people won't give you access for free. Just thanks!

  4. I saw the code and thought "Nope, no way. Totally out of my league." I'm a techy person, but not this techy, lol. But I followed all of your steps and it worked! Thanks so much for making this so clear and easy to follow! It's almost like the lottery, trying to get a good batch, but it's worth it when you do.

  5. I did it and it worked, but when I generate images I sometimes see other people. It looks like the model has been trained with other people besides me… or is it just random? The prompts are correct.

  6. Trying to get this up and running but hitting an error when I click the Inference node. "OSError Traceback (most recent call last)
    <ipython-input-13-bb26acbc4cb5> in <module>
    6 model_path = OUTPUT_DIR # If you want to use previously trained model saved in gdrive, replace this with the full path of model in gdrive"

    If someone can let me know how to resolve it, that would be greatly appreciated. Thank you again for this video! Great job.

  7. Halfway through the tutorial, when running the 'Inference' node, it returns an error.
    I can't seem to figure out why, when apparently everyone else in the comments seems to be doing fine.

  8. Can someone please help me?
    I get this error in the Inference node: Error no file named model_index.json found in directory /content/drive/MyDrive/stable_diffusion_weights/DaliOutput.
    What should I do?

  9. I tried a few times and got this error: OSError: Error no file named model_index.json found in directory /content/drive/MyDrive/stable_diffusion_weights/KeithOutput. Which step creates the model_index.json file?

  10. You've probably answered this already, but I chose to save the model data and was wondering how to reuse that data to generate more images, without having to train the model again.

  11. The first three models I created went without problems, but now they won't download anymore for some reason. It says they are downloading, but they seem to end up outside of my Google Drive. I still have plenty of free space, but something is happening with the save path.

  12. I'm stuck at "Start training" and always getting this error message:

    "The following values were not passed to `accelerate launch` and had defaults used instead:
    `--num_processes` was set to a value of `1`
    `--num_machines` was set to a value of `1`
    `--mixed_precision` was set to a value of `'no'`
    `--num_cpu_threads_per_process` was set to `1` to improve out-of-box performance
    To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
    Traceback (most recent call last):
      File "/usr/local/lib/python3.7/dist-packages/huggingface_hub/utils/_errors.py", line 213, in hf_raise_for_status
        response.raise_for_status()
      File "/usr/local/lib/python3.7/dist-packages/requests/models.py", line 941, in raise_for_status
        raise HTTPError(http_error_msg, response=self)
    requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/CompVis/stable-diffusion-v1-4/resolve/main/model_index.json

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "/usr/local/lib/python3.7/dist-packages/diffusers/configuration_utils.py", line 234, in get_config_dict
        revision=revision,
      File "/usr/local/lib/python3.7/dist-packages/huggingface_hub/file_download.py", line 1057, in hf_hub_download
        timeout=etag_timeout,
      File "/usr/local/lib/python3.7/dist-packages/huggingface_hub/file_download.py", line 1359, in get_hf_file_metadata
        hf_raise_for_status(r)
      File "/usr/local/lib/python3.7/dist-packages/huggingface_hub/utils/_errors.py", line 254, in hf_raise_for_status
        raise HfHubHTTPError(str(HTTPError), response=response) from e
    huggingface_hub.utils._errors.HfHubHTTPError: <class 'requests.exceptions.HTTPError'> (Request ID: 8hutDT3-ecVO8fkg0fEVc)

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "train_dreambooth.py", line 695, in <module>
        main()
      File "train_dreambooth.py", line 376, in main
        args.pretrained_model_name_or_path, torch_dtype=torch_dtype, use_auth_token=True
      File "/usr/local/lib/python3.7/dist-packages/diffusers/pipeline_utils.py", line 373, in from_pretrained
        revision=revision,
      File "/usr/local/lib/python3.7/dist-packages/diffusers/configuration_utils.py", line 256, in get_config_dict
        "There was a specific connection error when trying to load"
    OSError: There was a specific connection error when trying to load CompVis/stable-diffusion-v1-4:
    <class 'requests.exceptions.HTTPError'> (Request ID: 8hutDT3-ecVO8fkg0fEVc)

    Traceback (most recent call last):
      File "/usr/local/bin/accelerate", line 8, in <module>
        sys.exit(main())
      File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/accelerate_cli.py", line 43, in main
        args.func(args)
      File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/launch.py", line 837, in launch_command
        simple_launcher(args)
      File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/launch.py", line 354, in simple_launcher
        raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
    subprocess.CalledProcessError: Command '['/usr/bin/python3', 'train_dreambooth.py', '--pretrained_model_name_or_path=CompVis/stable-diffusion-v1-4', '--instance_data_dir=/content/data/vagasilacInput', '--class_data_dir=/content/data/person', '--output_dir=/content/drive/MyDrive/stable_diffusion_weights/vagasilacOutput', '--with_prior_preservation', '--prior_loss_weight=1.0', '--instance_prompt=vagasilac', '--class_prompt=person', '--seed=1337', '--resolution=512', '--train_batch_size=1', '--train_text_encoder', '--mixed_precision=fp16', '--use_8bit_adam', '--gradient_accumulation_steps=1', '--learning_rate=1e-6', '--lr_scheduler=constant', '--lr_warmup_steps=0', '--num_class_images=50', '--sample_batch_size=4', '--max_train_steps=800']' returned non-zero exit status 1."

  13. NOTE for noobs (like me): If you don't see "Access Tokens" available in the Hugging Face settings, that means you have to go to the email account you registered with, look for the email from Hugging Face, then click the link to verify your account.

  14. Somehow the Hugging Face login doesn't show up anymore. I got it working before, but it seems like there's been an update with Stable Diffusion. Has 1.5 got something to do with it?

  15. I downloaded the notebook from the link you provided. There is no LOGIN option in the 'Login to Hugging Face' cell. What should I do? Also, the notebook in the video is different from the one at the link.

  16. There was a recent commit in the diffusers repo, so an argument called "revision" is now necessary in the step where you start the model training. You should add the following argument (see the sketch after the comments for where it goes):

    --revision="main"

    That will point to the right git branch so it will be able to find the model.

  17. Hi, thanks for this great tutorial. Any chance you know how to train a style? What should we set the class and instance prompts to in order to get a good result?

  18. It does not work for me. In DreamBooth, at the step where you define the model path etc. (around 4:18), I don't have the field to put the input folder. It only gives me "save_to_gdrive", Model_Name and Output_Dir, but no input path, so I have no idea where to put my pictures.

    /edit
    Somehow it worked this time. When I pressed play on the next step, it prompted me to upload my pictures 🙂. Could you tell me where I need to put my model file on my local Stable Diffusion install to use it there?
    Thanks, mate.

  19. Hey! I don't have the "instance_dir", I just have the "output_dir", so I can't put my photos anywhere. Do you have a solution?
    Great video btw 🙂

  20. For those who feel that the outputs don't quite resemble their face: increase the number of training steps; I personally feel 1000 is a bit too low. I tried 2500 and got much better results in terms of the outputs looking like me.

  21. I am getting errors at the AI training stage. I don't know what is wrong; everything else is green and the pictures are in the directory. Do the pictures have to be in PNG format?

  22. How come the file nodes look different for me? I don't see a login button for Hugging Face. I see an output file directory but not an input file path to fill in. There isn't even a "class" field like the one in your video. Was this whole thing updated in the past month? Or do I need to purchase a different version of Google Colab? I feel a bit confused and would love some pointers. Thank you!

  23. The Google Colab Doc has been updated since this tutorial.

    I have 2 alternative methods for those who are struggling.

    Local Hypernetworks: https://youtu.be/P1dfwViVOIU | In this one, I show you how you can get similar results to Dreambooth using Hypernetworks instead, which run locally through Automatic1111.

    Alternative Dreambooth Colab: https://youtu.be/MT8HhBSMaWM | In this one I show you how to use another Colab version of Dreambooth but also show you how you can integrate it with your local Automatic1111 to get more out of it.

    Thanks for supporting my channel; it really is crazy to see it go from 180 subs & 5K total views less than a month ago to over 3,300 subs with 200K views.

  24. Yo, mine is different. There is no "sks" as in the video, and the appearance of my Dreambooth is quite different 😭. This is out of my league and I don't know what I should do, please help.

  25. I used this Colab and converted my weights to a ckpt, then wanted to download the ckpt file, but the download speed was so low that I got disconnected. Is there a way to save the ckpt file directly to my Google Drive?

  26. Nice video! I want to install this version of DreamBooth on my VM on RunPod. I am using the JoePenna version of DreamBooth, which has neither prior preservation nor text-encoder training, and the results are not as good as the version in this video. If anyone has installed it successfully, let me know. Thanks!

  27. Thank you for this tutorial, but there is no login button under "HUGGINGFACE_TOKEN:" on the DREAMBOOTH IN GOOGLE COLAB page, so when I enter the token and hit start, nothing happens. I am stuck here. Also, the contents of the "Settings and run" cell are different from what you show in the video. I only have Model_Name and Output_Dir, that's all.

    Any idea how to solve this so I can continue learning? 🙂

    Thanks for the great content again.

  28. I've accepted the model license on both v1.4 and v1.5, but my Google Colab won't acknowledge my login. A person icon pops up briefly only to disappear, and the green checkmark is there, but I cannot continue to the instance stage. Can anybody help me out, please?
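
Regarding comments 16 and 20 above: those flags belong in the Colab's training cell, which launches train_dreambooth.py through accelerate. Below is a hedged, heavily shortened sketch of that cell (the `!` is Colab shell syntax); the paths, instance prompt and step count are illustrative placeholders, and most flags from the full command are omitted:

```python
# Shortened sketch of the Colab training cell, run inside the notebook.
# Paths, the instance prompt and the step count are example placeholders.
!accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --revision="main" \
  --instance_data_dir="/content/data/MyInput" \
  --class_data_dir="/content/data/person" \
  --output_dir="/content/drive/MyDrive/stable_diffusion_weights/MyOutput" \
  --instance_prompt="myname" \
  --class_prompt="person" \
  --resolution=512 \
  --max_train_steps=2500
```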
