VideoGrain: Modulating Space-Time Attention for Multi-grained Video Editing
This is the dataset proposed in our paper VideoGrain: Modulating Space-Time Attention for Multi-Grained Video Editing (ICLR 2025). VideoGrain is a zero-shot method for class-level, instance-level, and part-level video editing.

GitHub: ⭐ Star our GitHub

If you find this dataset helpful, please feel free to leave a star ⭐ and cite our paper (BibTeX below).
The dataset is organized as follows; a loading sketch follows the tree:

```
data/
├── 2_cars
│   ├── 2_cars           # original video frames
│   └── layout_masks     # layout mask subfolders (e.g., bg, left, right)
├── 2_cats
│   ├── 2_cats
│   └── layout_masks
├── 2_monkeys
├── badminton
├── boxer-punching
├── car
├── cat_flower
├── man_text_message
├── run_two_man
├── soap-box
├── spin-ball
├── tennis
└── wolf
```
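As a rough illustration of this layout, the sketch below loads the frames and layout masks of one clip from a local copy of `data/`. It is a minimal example, not part of the official codebase: it assumes the folder names match the tree above and that Pillow is installed; the clip name `2_cars` is just an example.

```python
import os
from PIL import Image

# Minimal sketch: load original frames and layout masks for one clip.
# Assumes data/ has been downloaded locally with the structure shown above.
clip = "2_cars"
frames_dir = os.path.join("data", clip, clip)            # original video frames
masks_root = os.path.join("data", clip, "layout_masks")  # e.g., bg/, left/, right/

frames = [Image.open(os.path.join(frames_dir, f))
          for f in sorted(os.listdir(frames_dir))]

# One mask sequence per region; subfolder names vary per clip.
masks = {}
for region in sorted(os.listdir(masks_root)):
    region_dir = os.path.join(masks_root, region)
    masks[region] = [Image.open(os.path.join(region_dir, f))
                     for f in sorted(os.listdir(region_dir))]

print(f"{len(frames)} frames, regions: {list(masks)}")
```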
Install the datasets library first:

```
pip install datasets
```

Then the dataset can be downloaded automatically:
```python
from datasets import load_dataset

dataset = load_dataset("XiangpengYang/VideoGrain-dataset")
```
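Once downloaded, you can sanity-check the result as below. This is a hedged sketch: the split and column names (e.g., an `image` column) are assumptions based on the dataset viewer and may differ.

```python
# Inspect the downloaded dataset (split/column names may differ).
print(dataset)                     # available splits and their columns
first_split = list(dataset.keys())[0]
example = dataset[first_split][0]
print(example.keys())              # e.g., an "image" column of 512px-wide frames
```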
This dataset is licensed under the CC BY-NC 4.0 license.
```bibtex
@article{yang2025videograin,
  title={VideoGrain: Modulating Space-Time Attention for Multi-grained Video Editing},
  author={Yang, Xiangpeng and Zhu, Linchao and Fan, Hehe and Yang, Yi},
  journal={arXiv preprint arXiv:2502.17258},
  year={2025}
}
```
If you have any questions, feel free to contact Xiangpeng Yang ([email protected]).