A demo of stable diffusion in napari
You can install
napari-stable-diffusion via pip:
pip install napari-stable-diffusion
To install the latest development version:
pip install git+https://github.com/kephale/napari-stable-diffusion.git
You will also need to sign up with Hugging Face and generate an access token to get access to the Stable Diffusion model we use.
When you have generated your access token, you can either permanently set the
HF_TOKEN_SD environment variable in your
.bashrc (or whichever startup file
your OS uses), or you can include it on the command line.
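For example, either of the following works; the token value is a placeholder for the one you generated on Hugging Face:

```shell
# Permanently: add this line to ~/.bashrc (or your shell's startup file)
export HF_TOKEN_SD=<your-hugging-face-token>

# Or set it only for a single napari session on the command line:
HF_TOKEN_SD=<your-hugging-face-token> napari
```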
For more information on the Stable Diffusion model itself, please see https://huggingface.co/CompVis/stable-diffusion-v1-4.
Apple M1 specific instructions
To utilize the M1 GPU, the nightly version of PyTorch needs to be
installed. Consider using
mamba like this:
mamba create -c pytorch-nightly -n napari-stable-diffusion python=3.9 pip pyqt pytorch torchvision
mamba activate napari-stable-diffusion
pip install git+https://github.com/kephale/napari-stable-diffusion.git
- Image 2 Image support
- Inpainting support
Contributions are very welcome. Tests can be run with tox; please ensure the coverage at least stays the same before you submit a pull request.
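A typical local test run might look like this; the `py39` environment name is an assumption based on the Python version used above, not something confirmed by this project's tox configuration:

```shell
# Install tox, then run the full test suite
pip install tox
tox

# Or run a single environment, e.g. (hypothetical environment name):
tox -e py39
```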
Distributed under the terms of the BSD-3 license, "napari-stable-diffusion" is free and open source software.
If you encounter any problems, please file an issue along with a detailed description.