Mirror of https://github.com/Stability-AI/stablediffusion.git, synced 2024-12-21 23:24:59 +00:00
Update UNCLIP.MD

This commit is contained in:
parent f2aa661ea3
commit 4e409af95c

1 changed file with 5 additions and 3 deletions
@@ -5,14 +5,16 @@ trained to invert CLIP image embeddings.
We finetuned SD 2.1 to accept a CLIP ViT-L/14 image embedding in addition to the text encodings.
This means that the model can be used to produce image variations, but can also be combined with a text-to-image
embedding prior to yield a full text-to-image model at 768x768 resolution.

If you would like to try a demo of this model on the web, please visit https://clipdrop.co/stable-diffusion-reimagine

We provide two models, trained on OpenAI CLIP-L and OpenCLIP-H image embeddings, respectively,
available from [https://huggingface.co/stabilityai/stable-diffusion-2-1-unclip](https://huggingface.co/stabilityai/stable-diffusion-2-1-unclip/tree/main).
To use them, download from Hugging Face, and put the weights into the `checkpoints` folder.

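The download step above can be sketched in a few lines. This is a minimal, hypothetical helper, not part of the repo: the checkpoint file names (`sd21-unclip-l.ckpt`, `sd21-unclip-h.ckpt`) follow the Hugging Face model card and may change upstream, and the `checkpoint_url`/`local_path` functions are assumptions introduced here for illustration.

```python
from pathlib import Path

# Hypothetical mapping from the two published variants to their checkpoint
# files in stabilityai/stable-diffusion-2-1-unclip (names per the model card).
REPO_URL = "https://huggingface.co/stabilityai/stable-diffusion-2-1-unclip/resolve/main"
CHECKPOINTS = {
    "clip-l": "sd21-unclip-l.ckpt",      # OpenAI CLIP-L image embeddings
    "openclip-h": "sd21-unclip-h.ckpt",  # OpenCLIP-H image embeddings
}

def checkpoint_url(variant: str) -> str:
    """Download URL for the given variant's weights."""
    return f"{REPO_URL}/{CHECKPOINTS[variant]}"

def local_path(variant: str, folder: str = "checkpoints") -> Path:
    """Where this repo expects the downloaded weights to live."""
    return Path(folder) / CHECKPOINTS[variant]
```

Fetching the file at `checkpoint_url(...)` into `local_path(...)` can then be done with any HTTP client, or with the `huggingface_hub` library.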
#### Image Variations
![image-variations-l-1](../assets/stable-samples/stable-unclip/unclip-variations.png)

If you would like to try a demo of this model, please visit https://clipdrop.co/stable-diffusion-reimagine

Run
```