Optimize Your Google Colab Experience: Run Stable Diffusion for Free, Without Disconnects
Have you been trying to run the Stable Diffusion web UI, also known as AUTOMATIC1111, in Google Colab? Unfortunately, on the free plan you'll almost immediately get disconnected. But don't worry: you can still use Stable Diffusion in Colab; you just can't use the graphical interface. Today, I'll show you how to bypass the disconnect and still generate unlimited images on Google Colab's free plan.
Getting Started
The first thing you need to do is go to Google Colab and log in with your Google account. Once you're in, click on "File" and then "New Notebook". This is where we'll be writing our code and running it to generate an image.
All the documentation for Stable Diffusion is available in the Hugging Face Diffusers library. The Diffusers documentation provides all the code you need to generate images with Stable Diffusion, including text-to-image and image-to-image. Today, we'll cover the basics to give you a good understanding of how diffusers work.
Installation
To install the necessary packages, we'll use pip in Google Colab. Start by renaming the notebook to "Stable Diffusion". Then, in a new code cell, add the following code:
!pip install diffusers torch
!pip install transformers
!pip install accelerate
!pip install git+https://github.com/huggingface/diffusers.git
Make sure to add an exclamation mark before each pip install command. This tells Colab to run the command in the shell rather than as Python code.
Running the Code
Now we can start running our code to generate images using Stable Diffusion. Here's an example of the code you can use:
import torch
from diffusers import StableDiffusionPipeline

# Load the Stable Diffusion 1.5 checkpoint from the Hugging Face Hub
device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
)
pipe = pipe.to(device)

# Generate an image
prompt = "A photo of an astronaut riding a horse on Mars"
image = pipe(prompt=prompt).images[0]

# Display the image (the last expression in a Colab cell is rendered automatically)
image
Before running the code, make sure you're connected to a GPU so image generation is much faster. Click on "Runtime", then "Change runtime type", select a GPU (the free plan gives you a T4), save, and then click "Connect".
Once you're connected to a GPU, you can run each code cell by clicking its play button. The output will appear below the cell.
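If you want to double-check that the GPU is actually active before loading the model, a quick sanity check like the one below works; it isn't required for the rest of the tutorial.

import torch

# Prints True and the GPU name (e.g. "Tesla T4") when a GPU runtime is attached
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))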
Customizing Your Image
There are many settings you can customize to generate different types of images. For example, you can change the height, width, number of inference steps, and guidance scale. You can also add a negative prompt to specify what you don't want in the image.
To change the settings, modify the code as follows:
# Set the height and width (both must be divisible by 8)
height = 800
width = 640
# Set the number of inference steps
steps = 25
# Set the guidance scale
guidance = 7.5
# Set the negative prompt
neg = "extra foot, missing digits, deformed limbs, ugly face, mutilated hands"
# Generate an image with the new settings
image = pipe(
    prompt="A photo of an astronaut riding a horse on Mars",
    height=height,
    width=width,
    num_inference_steps=steps,
    guidance_scale=guidance,
    negative_prompt=neg,
).images[0]
image
Feel free to experiment with different prompts, settings, and checkpoints to generate your desired images.
Saving Your Image
To save your image, simply right-click on it in the cell output and select "Save Image As".
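If you'd rather save the file from code, for example to download several images in one go, something along these lines works; the filename is just an example.

# Save the generated image to the Colab filesystem
image.save("astronaut_on_mars.png")

# Then download it to your computer from the Files sidebar, or with:
from google.colab import files
files.download("astronaut_on_mars.png")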
FAQs
Q: Can I generate NSFW (Not Safe for Work) images using Stable Diffusion in Google Colab?
A: Yes, you can generate NSFW images by using an appropriate prompt in your code. However, Stable Diffusion ships with a safety checker, so some images may come back blacked out. To bypass this, disable the safety checker on your pipeline.
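One way to do that, assuming the pipe object created earlier, is to set the pipeline's safety checker to None:

# Disable the built-in NSFW filter on an existing pipeline (use responsibly)
pipe.safety_checker = None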
Q: How can I use a different checkpoint or model for image generation?
A: To use a different checkpoint or model, browse the Diffusers-compatible models on the Hugging Face Hub and find the checkpoint you want. Then copy the model ID and pass it to from_pretrained in place of the Stable Diffusion 1.5 ID.
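For example, loading the Stable Diffusion 2.1 checkpoint looks like this; any Diffusers-format model ID from the Hub works the same way.

import torch
from diffusers import StableDiffusionPipeline

# Swap in any Diffusers-format model ID from the Hugging Face Hub
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
).to("cuda")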
Q: Can I generate anime-style images using Stable Diffusion in Google Colab?
A: Yes, you can generate anime-style images by using checkpoints trained specifically for that style. Browse the Hugging Face Hub, search for anime-style checkpoints, and load them the same way as any other model ID.
Q: How can I create image-to-image transformations using Stable Diffusion?
A: Image-to-image uses a dedicated pipeline in the Diffusers library. Check out the documentation for more information on how to use image-to-image pipelines and ControlNet.
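As a rough sketch of what that looks like in Diffusers (the input image path below is just a placeholder):

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Load a starting image and resize it to the model's working resolution
init_image = Image.open("my_sketch.png").convert("RGB").resize((512, 512))

# strength controls how far the result can drift from the original (0 = keep, 1 = replace)
image = pipe(
    prompt="A photo of an astronaut riding a horse on Mars",
    image=init_image,
    strength=0.75,
    guidance_scale=7.5,
).images[0]
image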
Q: Are there any limitations to using Stable Diffusion in Google Colab?
A: While Stable Diffusion is a powerful tool, there are some limitations when using it in Google Colab, especially on the free plan. You may experience slower performance than on paid GPU tiers, sessions have time limits, and the limited GPU memory can restrict the image sizes and models you can use.




