Add a memory check before inference so VAE Decode does not exceed available VRAM.
Before doing the actual decoding, check that free memory is at least the expected amount;
if it is not, try to free the required amount of memory,
and if that still fails, switch directly to tiled VAE decoding.
It seems PyTorch may continue occupying memory after an OOM occurs until the
model is destroyed. This commit tries to prevent OOM from happening in the
first place for VAE Decode.
This addresses VAE Decode running with exceeded VRAM, as reported in #5737.
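The check-then-fallback logic above can be sketched as follows. This is a minimal illustration, not the actual ComfyUI implementation: the function and parameter names (`choose_decode_strategy`, `try_free`) are hypothetical, and a real version would query free VRAM via something like `torch.cuda.mem_get_info` and hook into the model manager to release cached models.

```python
def choose_decode_strategy(free_bytes, required_bytes, try_free):
    """Decide how to run VAE decode given the current free VRAM.

    free_bytes     -- VRAM currently free, in bytes
    required_bytes -- estimated VRAM needed for a full (non-tiled) decode
    try_free       -- callback that attempts to free the given number of
                      bytes (e.g. by unloading cached models) and returns
                      how many bytes it actually freed

    Returns "full" when a normal decode should fit, otherwise "tiled".
    """
    # Step 1: enough memory already free -> decode normally.
    if free_bytes >= required_bytes:
        return "full"

    # Step 2: try to free the shortfall and re-check.
    freed = try_free(required_bytes - free_bytes)
    if free_bytes + freed >= required_bytes:
        return "full"

    # Step 3: still not enough -> fall back to tiled decoding,
    # which trades speed for a much smaller peak memory footprint.
    return "tiled"


# Usage sketch with made-up byte counts:
print(choose_decode_strategy(8_000, 4_000, lambda n: 0))      # full
print(choose_decode_strategy(2_000, 4_000, lambda n: 2_000))  # full
print(choose_decode_strategy(2_000, 4_000, lambda n: 500))    # tiled
```

Deciding the strategy up front, rather than catching the OOM exception, matches the motivation above: once PyTorch hits OOM it may keep the memory occupied, so the cheaper path is to never attempt the oversized allocation.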