Recent research has explored leveraging event cameras, which excel at capturing scenes with nonuniform motion, for video deraining, yielding notable performance improvements. However, existing event-based methods still struggle with the complex spatiotemporal distribution of rain, which disrupts temporal information fusion and complicates feature separation. This article proposes a novel end-to-end learning framework for video deraining that effectively exploits the rich dynamic information provided by the event stream.
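The abstract does not specify how the event stream is fed to the network, so the following is only a minimal sketch of one common preprocessing step for event-based learning pipelines: accumulating asynchronous events into a spatiotemporal voxel grid that a deraining network could consume. The function name, parameters, and synthetic data are hypothetical and are not taken from the paper.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate an event stream into a (num_bins, height, width) voxel grid.

    `events` is an (N, 4) array of [timestamp, x, y, polarity], with
    polarity in {-1, +1}. Timestamps are normalized so the stream spans
    the temporal bins, and each event's polarity is split between its
    two nearest bins (linear interpolation in time), a widely used
    event representation. This is an illustrative assumption, not the
    paper's method.
    """
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    if len(events) == 0:
        return grid

    t = events[:, 0]
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    p = events[:, 3]

    # Normalize timestamps to the range [0, num_bins - 1].
    t_norm = (t - t[0]) / max(t[-1] - t[0], 1e-9) * (num_bins - 1)
    t0 = np.floor(t_norm).astype(int)
    frac = t_norm - t0

    # Split each event's polarity across its two adjacent temporal bins.
    np.add.at(grid, (t0, y, x), p * (1.0 - frac))
    valid = t0 + 1 < num_bins
    np.add.at(grid, (t0[valid] + 1, y[valid], x[valid]), p[valid] * frac[valid])
    return grid

# Example: 1000 synthetic events on a 260x346 sensor, 5 temporal bins.
rng = np.random.default_rng(0)
events = np.column_stack([
    np.sort(rng.uniform(0.0, 0.05, 1000)),  # timestamps in seconds
    rng.integers(0, 346, 1000),             # x coordinates
    rng.integers(0, 260, 1000),             # y coordinates
    rng.choice([-1.0, 1.0], 1000),          # polarities
])
voxels = events_to_voxel_grid(events, num_bins=5, height=260, width=346)
print(voxels.shape)  # (5, 260, 346): a dense tensor a network can ingest
```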