Update depth2img.py

In the `paint()` function, inference runs inside a `torch.autocast()` context targeting the `cuda` device. The `batch` tensor is already created on the `cuda` device, so there is no need for this extra cast; this change removes the `torch.autocast()` call from `paint()`.
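For reference, `torch.autocast` does not move tensors between devices; it selects the precision that eligible ops run in while the context is active. A minimal sketch of the semantics (using the CPU backend with bfloat16 so it runs without a GPU; the tensor names here are illustrative, not from `depth2img.py`):

```python
import torch

# Inputs are created in float32 on the CPU.
x = torch.randn(4, 4)
y = torch.randn(4, 4)

# Inside the autocast region, matmul is autocast-eligible and runs
# in the lower-precision dtype; the tensors' device never changes.
with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    z = x @ y

print(z.dtype)   # lower precision chosen by autocast
print(z.device)  # still the original device
```

In `paint()` the same pattern applies with `torch.autocast("cuda")`: it changes compute precision for the sampling ops, while device placement is handled separately by `make_batch_sd(..., device=device)`.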
This commit is contained in:
Mohamad Zamini 2023-07-11 08:11:42 -06:00 committed by GitHub
parent cf1d67a6fd
commit 506403501f


@@ -62,7 +62,7 @@ def paint(sampler, image, prompt, t_enc, seed, scale, num_samples=1, callback=None):
     wm_encoder.set_watermark('bytes', wm.encode('utf-8'))
     with torch.no_grad(),\
-            torch.autocast("cuda"):
+            torch.onnx.set_training(model, False):
         batch = make_batch_sd(image, txt=prompt, device=device, num_samples=num_samples)
         z = model.get_first_stage_encoding(model.encode_first_stage(batch[model.first_stage_key]))  # move to latent space
         c = model.cond_stage_model.encode(batch["txt"])