Zero-Shot Video Deraining with Video Diffusion Models

WACV 2026

Flawless AI

We present ZSVD, a zero-shot video deraining method for complex dynamic scenes that leverages a pretrained video diffusion model with attention switching and negative prompting to remove rain without fine-tuning or supervised training.

Abstract

Existing video deraining methods are often trained on paired datasets that are either synthetic, which limits their ability to generalize to real-world rain, or captured by static cameras, which restricts their effectiveness in dynamic scenes with background and camera motion. Recent work on fine-tuning diffusion models has shown promising results, but fine-tuning tends to weaken the generative prior, limiting generalization to unseen cases. In this paper, we introduce the first zero-shot video deraining method for complex dynamic scenes, which requires neither synthetic data nor model fine-tuning, by leveraging a pretrained text-to-video diffusion model with strong generalization capabilities. By inverting an input video into the latent space of the diffusion model, we can intervene in its reconstruction process and push it away from the model's concept of rain using negative prompting. At the core of our approach is an attention switching mechanism that we find crucial for preserving dynamic backgrounds and structural consistency between the input and the derained video, mitigating artifacts introduced by naive negative prompting. Our approach is validated through extensive experiments on real-world rain datasets, demonstrating substantial improvements over prior methods and robust generalization without supervised training.
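To make the negative-prompting idea concrete, one common formulation of guidance away from a concept (our notation; the paper's exact weighting may differ) combines the noise prediction under the null prompt $\varnothing$ with the prediction under a rain prompt $c_{\text{rain}}$:

$$\hat{\epsilon}_t \;=\; \epsilon_\theta(z_t, t, \varnothing) \;+\; w\,\big(\epsilon_\theta(z_t, t, \varnothing) - \epsilon_\theta(z_t, t, c_{\text{rain}})\big),$$

where $z_t$ is the inverted latent at timestep $t$ and $w > 0$ is the guidance weight; a larger $w$ pushes the reconstruction further from the model's notion of rain.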

Method

Overview of the ZSVD approach. First, video inversion is performed to extract the initial noise latent. The model then performs a reconstruction step with the null prompt and a rain-conditioned step with the negative prompt, and the two paths are combined using classifier-free guidance. Attention switching is applied in selected blocks, where the keys (K) and values (V) extracted from the null-conditioned path replace their counterparts in the conditional path.
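For readers who want a concrete picture of these two components, the sketch below gives a minimal PyTorch rendition of one guided denoising step and of a self-attention block with K/V switching. All names here (`guided_step`, `eps_theta`, `KVSwitchAttention`, `null_emb`, `rain_emb`) are illustrative stand-ins rather than the paper's implementation, and the attention module is a single simplified block, not the pretrained video diffusion backbone.

```python
import torch

def guided_step(eps_theta, z_t, t, null_emb, rain_emb, w=7.5):
    """One denoising step with negative-prompt guidance.

    `eps_theta` is assumed to be a noise predictor taking (latent, timestep,
    text embedding); `null_emb` / `rain_emb` are embeddings of the null and
    negative ("rain") prompts. Hypothetical interface, for illustration only.
    """
    eps_null = eps_theta(z_t, t, null_emb)   # reconstruction path
    eps_rain = eps_theta(z_t, t, rain_emb)   # rain-conditioned path
    # Classifier-free guidance: push the prediction away from "rain".
    return eps_null + w * (eps_null - eps_rain)


class KVSwitchAttention(torch.nn.Module):
    """Simplified attention block with attention switching: keys and values
    computed on the null-conditioned pass are cached and reused on the
    negative-prompt pass, replacing its own K and V."""

    def __init__(self, dim, heads=8):
        super().__init__()
        self.attn = torch.nn.MultiheadAttention(dim, heads, batch_first=True)
        self.to_q = torch.nn.Linear(dim, dim)
        self.to_k = torch.nn.Linear(dim, dim)
        self.to_v = torch.nn.Linear(dim, dim)
        self.cached_kv = None  # (K, V) from the most recent null pass

    def forward(self, x, mode):
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)
        if mode == "null":
            # Null-conditioned pass: remember K and V for later reuse.
            self.cached_kv = (k.detach(), v.detach())
        elif mode == "switch" and self.cached_kv is not None:
            # Negative-prompt pass in a selected block: swap in null K, V.
            k, v = self.cached_kv
        out, _ = self.attn(q, k, v)
        return out


if __name__ == "__main__":
    block = KVSwitchAttention(dim=64)
    x_null = torch.randn(1, 16, 64)   # token features, null-conditioned pass
    x_rain = torch.randn(1, 16, 64)   # token features, negative-prompt pass
    _ = block(x_null, mode="null")    # caches K and V
    y = block(x_rain, mode="switch")  # attends with the cached K and V
    print(y.shape)                    # torch.Size([1, 16, 64])
```

In the full method, such a switch would be applied only at selected blocks during the negative-prompt pass, while the remaining blocks run unchanged.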

Comparison

Results from real-world videos using different deraining approaches.

Additional real-world deraining results.

Results on Desnowing

Selected frames from desnowed real-world videos. Base refers to the ablation without attention switching.