diff --git a/README.md b/README.md
index 3c09f7a..01d1fff 100644
--- a/README.md
+++ b/README.md
@@ -8,6 +8,7 @@ new checkpoints. The following list provides an overview of all currently availa
 ## News
+
 **March 24, 2023**
 *Stable UnCLIP 2.1*
diff --git a/doc/UNCLIP.MD b/doc/UNCLIP.MD
index f217e57..c51ab6c 100644
--- a/doc/UNCLIP.MD
+++ b/doc/UNCLIP.MD
@@ -3,7 +3,7 @@