Stable Diffusion
A demo of stable diffusion in napari.
This napari plugin was generated with Cookiecutter using @napari's cookiecutter-napari-plugin template.
Installation
You can install napari-stable-diffusion via pip:
pip install napari-stable-diffusion
To install the latest development version:
pip install git+https://github.com/kephale/napari-stable-diffusion.git
You will also need to sign up with HuggingFace and generate an access token to get access to the Stable Diffusion model we use.
Once you have generated your access token, you can either permanently set the HF_TOKEN_SD environment variable in your .bashrc (or whichever startup file your shell uses), or include it on the command line:
HF_TOKEN_SD="hf_aaaAaaaasdadsadsaoaoaoasoidijo" napari
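As a sketch of how a plugin might consume such a token (the helper name and fallback message here are hypothetical, not the plugin's actual code):

```python
import os

def get_hf_token():
    """Read the HuggingFace token from the HF_TOKEN_SD environment variable."""
    token = os.environ.get("HF_TOKEN_SD")
    if not token:
        raise RuntimeError(
            "HF_TOKEN_SD is not set; export it in your shell profile "
            "or prefix the napari command with it."
        )
    return token

# Demonstration only -- a real token comes from your HuggingFace account.
os.environ.setdefault("HF_TOKEN_SD", "hf_exampletoken")
print(get_hf_token())
```

Raising early with a clear message is friendlier than letting the model download fail later with an opaque authentication error.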
For more information on the Stable Diffusion model itself, please see https://huggingface.co/CompVis/stable-diffusion-v1-4.
Apple M1 specific instructions
To utilize the M1 GPU, the nightly version of PyTorch needs to be installed. Consider using conda or mamba like this:
mamba create -c pytorch-nightly -n napari-stable-diffusion python=3.9 pip pyqt pytorch torchvision
pip install git+https://github.com/kephale/napari-stable-diffusion.git
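With the nightly PyTorch installed, computation can be routed to the M1 GPU via PyTorch's `mps` backend. A minimal, hypothetical device-selection sketch (this helper is illustrative, not the plugin's actual code):

```python
def pick_device():
    """Return the best available torch device string, falling back to CPU."""
    try:
        import torch
    except ImportError:
        return "cpu"  # PyTorch not installed at all
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"  # Apple Silicon GPU
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"

print(pick_device())
```

Guarding the `mps` attribute keeps the check safe on older PyTorch builds that predate the backend.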
Next steps
- Image-to-image support
- Inpainting support
Contributing
Contributions are very welcome. Tests can be run with tox; please ensure the coverage at least stays the same before you submit a pull request.
License
Distributed under the terms of the BSD-3 license, "napari-stable-diffusion" is free and open source software.
Issues
If you encounter any problems, please file an issue along with a detailed description.