From 21236e8c3a4169a565e87fa3599842b5abafa4e9 Mon Sep 17 00:00:00 2001
From: hardmaru
Date: Sat, 25 Mar 2023 00:30:29 +0900
Subject: [PATCH] Update README.md

---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index 08c2288..1bc2a49 100644
--- a/README.md
+++ b/README.md
@@ -15,6 +15,8 @@ new checkpoints. The following list provides an overview of all currently availa
 - New stable diffusion finetune (_Stable unCLIP 2.1_, [Hugging Face](https://huggingface.co/stabilityai/)) at 768x768 resolution, based on SD2.1-768. This model allows for image variations and mixing operations as described in [*Hierarchical Text-Conditional Image Generation with CLIP Latents*](https://arxiv.org/abs/2204.06125), and, thanks to its modularity, can be combined with other models such as [KARLO](https://github.com/kakaobrain/karlo). Comes in two variants: [*Stable unCLIP-L*](https://huggingface.co/stabilityai/stable-diffusion-2-1-unclip/blob/main/sd21-unclip-l.ckpt) and [*Stable unCLIP-H*](https://huggingface.co/stabilityai/stable-diffusion-2-1-unclip/blob/main/sd21-unclip-h.ckpt), which are conditioned on CLIP ViT-L and ViT-H image embeddings, respectively. Instructions are available [here](doc/UNCLIP.MD).
+- A public demo of SD-unCLIP is already available at clipdrop.co/stable-diffusion-reimagine
+
 **December 7, 2022**