<div align="center">
<h1> SubjectGenius </h1>

<h3>SubjectGenius: Unified Multi-Conditional Combination with Diffusion Transformer</h3>
<b>Haoxuan Wang</b>, Jinlong Peng, Qingdong He, Hao Yang, Ying Jin, <br>
Jiafu Wu, Xiaobin Hu, Yanjie Pan, Zhenye Gan, Mingmin Chi, Bo Peng, Yabiao Wang <br>
<br>
<a href="https://arxiv.org/abs/2503.09277"><img src="https://img.shields.io/badge/arXiv-2503.09277-A42C25.svg" alt="arXiv"></a>
<a href="https://huggingface.co/Xuan-World/SubjectGenius"><img src="https://img.shields.io/badge/🤗_HuggingFace-Model-ffbd45.svg" alt="HuggingFace"></a>
<a href="https://huggingface.co/datasets/Xuan-World/SubjectSpatial200K"><img src="https://img.shields.io/badge/🤗_HuggingFace-Dataset-ffbd45.svg" alt="HuggingFace"></a>
</div>

## 🌠 Key Features

<img src='assets/cover.png' width='100%' />
<br>
Fantastic results of our proposed SubjectGenius on multi-conditional controllable generation: <br>

- (a) Subject-Insertion task.
- (b) and (c) Subject-Spatial task.
- (d) Multi-Spatial task.

Our unified framework effectively handles any combination of input conditions and achieves remarkable alignment with all of them, including but not limited to text prompts, spatial maps, and subject images.

## 🚩 **Updates**

- ✅ March 12, 2025. We release the SubjectSpatial200K dataset.
- ✅ March 12, 2025. We release the SubjectGenius framework.

## 🔧 Dependencies and Installation

```bash
conda create -n SubjectGenius python=3.12
conda activate SubjectGenius
pip install -r requirements.txt
```
Due to an issue in the _diffusers_ library, you need to patch its installed (`site-packages`) code manually.
You can find the location of your _diffusers_ installation by running the following command.
```bash
pip show diffusers
```

Then add the following entry to the dictionary `_SET_ADAPTER_SCALE_FN_MAPPING` located in `diffusers/loaders/peft.py`:
```python
"SubjectGeniusTransformer2DModel": lambda model_cls, weights: weights
```

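If you prefer not to dig through the `pip show` output, the one-liner below prints the exact file to edit. This is a convenience sketch, assuming your installed diffusers version keeps this module at `diffusers/loaders/peft.py`:
```bash
# Print the path of the peft.py file to patch (run inside the SubjectGenius conda env).
python -c "import diffusers.loaders.peft as m; print(m.__file__)"
```
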
## 📥 Download Models
Place all model weights in the `ckpt` directory (any other directory works as well, as long as you adjust the paths accordingly).
1. **FLUX.1-schnell**
```bash
huggingface-cli download black-forest-labs/FLUX.1-schnell --local-dir ./ckpt/FLUX.1-schnell
```
2. **Condition-LoRA**
```bash
huggingface-cli download Xuan-World/SubjectGenius --include "Condition_LoRA/*" --local-dir ./ckpt/Condition_LoRA
```

3. **Denoising-LoRA**
```bash
huggingface-cli download Xuan-World/SubjectGenius --include "Denoising_LoRA/*" --local-dir ./ckpt/Denoising_LoRA
```

4. **FLUX.1-schnell-training-assistant-LoRA** (optional)

Download it if you want to train your own LoRA on FLUX.1-schnell.

```bash
huggingface-cli download ostris/FLUX.1-schnell-training-adapter --local-dir ./ckpt/FLUX.1-schnell-training-adapter
```

> Schnell is a step-distilled model, meaning it can generate an image in just a few steps.
> However, this makes it impossible to train on directly, because every training step degrades the step distillation further.
> With this adapter enabled during training, that doesn't happen.
> The adapter is activated during training and disabled during sampling.
> Once your LoRA is trained, the adapter is no longer needed.

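After the downloads above, the `ckpt` directory should contain roughly the following subfolders (only the ones you actually downloaded will be present):
```bash
# Quick check of the checkpoint layout assumed by the commands in this README.
ls ckpt
# Expected subfolders:
#   Condition_LoRA                    (Condition-LoRA modules)
#   Denoising_LoRA                    (Denoising-LoRA modules)
#   FLUX.1-schnell                    (base model)
#   FLUX.1-schnell-training-adapter   (optional, training only)
```
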
## 🎮 Inference on Demo
- We provide the `inference.py` script as the simplest and fastest way to run our model.
- Change the `--version` argument from `training-based` to `training-free` if you do not want to provide the **Denoising-LoRA** module (see the training-free sketch after the Subject-Insertion example below).
- Adjust `--denoising_lora_weight` to balance editability against consistency when using custom prompts.
### 1. Subject-Insertion
Default Prompts:
```bash
python inference.py \
--condition_types fill subject \
--denoising_lora ckpt/Denoising_LoRA/subject_fill_union \
--denoising_lora_weight 1.0 \
--fill examples/window/background.jpg \
--subject examples/window/subject.jpg \
--json "examples/window/1634_rank0_A decorative fabric topper for windows..json" \
--version training-based
```

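For reference, the same Subject-Insertion case in the training-free mode looks roughly like the sketch below; we assume the remaining arguments stay unchanged once the Denoising-LoRA flags are dropped:
```bash
# Training-free variant of the Subject-Insertion example above:
# the --denoising_lora / --denoising_lora_weight flags are omitted.
python inference.py \
--condition_types fill subject \
--fill examples/window/background.jpg \
--subject examples/window/subject.jpg \
--json "examples/window/1634_rank0_A decorative fabric topper for windows..json" \
--version training-free
```
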
### 2. Subject-Canny
Default Prompts:
```bash
python inference.py \
--condition_types canny subject \
--denoising_lora ckpt/Denoising_LoRA/subject_canny_union \
--denoising_lora_weight 1.0 \
--canny examples/doll/canny.jpg \
--subject examples/doll/subject.jpg \
--json "examples/doll/1116_rank0_A spooky themed gothic doll..json" \
--version training-based
```
Custom Prompts:
```bash
python inference.py \
--condition_types canny subject \
--denoising_lora ckpt/Denoising_LoRA/subject_canny_union \
--denoising_lora_weight 0.6 \
--canny examples/doll/canny.jpg \
--subject examples/doll/subject.jpg \
--json "examples/doll/1116_rank0_A spooky themed gothic doll..json" \
--version training-based \
--prompt "She stands amidst the vibrant glow of a bustling Chinatown alley, \
her pink hair shimmering under festive lantern light, clad in a sleek black dress adorned with intricate lace patterns. "
```
### 3. Subject-Depth
Default Prompts:
```bash
python inference.py \
--condition_types depth subject \
--denoising_lora ckpt/Denoising_LoRA/subject_depth_union \
--denoising_lora_weight 1.0 \
--depth examples/car/depth.jpg \
--subject examples/car/subject.jpg \
--json "examples/car/2532_rank0_A sturdy ATV with rugged looks..json" \
--version training-based
```
Custom Prompts:
```bash
python inference.py \
--condition_types depth subject \
--denoising_lora ckpt/Denoising_LoRA/subject_depth_union \
--denoising_lora_weight 0.6 \
--depth examples/car/depth.jpg \
--subject examples/car/subject.jpg \
--json "examples/car/2532_rank0_A sturdy ATV with rugged looks..json" \
--version training-based \
--prompt "It is positioned on a snow-covered path in a forest, its green body dusted with frost and black tires caked with packed snow. \
The vehicle retains its sturdy build with handlebars glinting ice particles and headlights cutting through falling snowflakes, surrounded by tall pine trees draped in white."
```
### 4. Depth-Canny
Default Prompts:
```bash
python inference.py \
--condition_types depth canny \
--denoising_lora ckpt/Denoising_LoRA/depth_canny_union \
--denoising_lora_weight 1.0 \
--depth examples/toy/depth.jpg \
--canny examples/toy/canny.jpg \
--json "examples/toy/1616_rank0_A soft, plush toy with cuddly features..json" \
--version training-based
```
Custom Prompts:
```bash
python inference.py \
--condition_types depth canny \
--denoising_lora ckpt/Denoising_LoRA/depth_canny_union \
--denoising_lora_weight 0.6 \
--depth examples/toy/depth.jpg \
--canny examples/toy/canny.jpg \
--json "examples/toy/1616_rank0_A soft, plush toy with cuddly features..json" \
--version training-based \
--prompt "It sits on a moonlit sandy beach, a small sandcastle partially washed by gentle tides beside it, \
under a night sky where the full moon casts silvery trails across waves, with distant seagulls gliding through star-dappled darkness."
```

## 🗂️ Download Dataset (optional)
1. Download SubjectSpatial200K

Place the SubjectSpatial200K dataset in the `dataset` directory (any other directory works as well, as long as you adjust the paths). <br>

```bash
huggingface-cli download Xuan-World/SubjectSpatial200K --repo-type dataset --local-dir ./dataset
```

2. Filter and partition the SubjectSpatial200K dataset into training and test sets.

The default partition scheme is identical to the one used in our paper; you can customize it as needed.
The four commands below can also be combined into a single loop, as sketched after them.

```bash
python src/partition_dataset.py \
--dataset dataset/SubjectSpatial200K/data_labeled \
--output_dir dataset/split_SubjectSpatial200K \
--partition train
```
```bash
python src/partition_dataset.py \
--dataset dataset/SubjectSpatial200K/Collection3/data_labeled \
--output_dir dataset/split_SubjectSpatial200K/Collection3 \
--partition train
```
```bash
python src/partition_dataset.py \
--dataset dataset/SubjectSpatial200K/data_labeled \
--output_dir dataset/split_SubjectSpatial200K \
--partition test
```
```bash
python src/partition_dataset.py \
--dataset dataset/SubjectSpatial200K/Collection3/data_labeled \
--output_dir dataset/split_SubjectSpatial200K/Collection3 \
--partition test
```
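The loop below is just a convenience rewrite of the four commands above (same script, same arguments):
```bash
# Run both dataset roots for both partitions in one go.
for split in train test; do
  python src/partition_dataset.py \
    --dataset dataset/SubjectSpatial200K/data_labeled \
    --output_dir dataset/split_SubjectSpatial200K \
    --partition "$split"
  python src/partition_dataset.py \
    --dataset dataset/SubjectSpatial200K/Collection3/data_labeled \
    --output_dir dataset/split_SubjectSpatial200K/Collection3 \
    --partition "$split"
done
```
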
## 🧩 Train in single-conditional setting
Refer to https://github.com/Yuanshi9815/OminiControl to train your **Condition-LoRA** modules. We will release our reimplementation using diffusers soon.

## 🔥 Train in multi-conditional setting
Use our SubjectSpatial200K dataset or your customized multi-conditional dataset to train your **Denoising-LoRA** module.
1. Configure Accelerate Environment
```bash
accelerate config
```
2. Launch Distributed Training (a non-interactive alternative is sketched below)
```bash
accelerate launch train.py
```

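If you prefer to skip the interactive `accelerate config` step, you can pass the launch options directly. This is a sketch, not the workflow used in our experiments; adjust the process count to your GPU setup:
```bash
# Non-interactive alternative: pass distributed settings on the command line.
# --num_processes should match the number of GPUs you want to use.
accelerate launch --multi_gpu --num_processes 4 train.py
```
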
## 📊 Batch Inference on Dataset
- We provide a script for batch inference on the SubjectSpatial200K dataset in both the training-free and training-based versions.
- It can also be run on your own datasets by plugging in your Dataset and DataLoader implementations.
```bash
python test.py
```

## 📚 Citation
```bibtex
@article{wang2025SubjectGenius,
  title={SubjectGenius: Unified Multi-Conditional Combination with Diffusion Transformer},
  author={Wang, Haoxuan and Peng, Jinlong and He, Qingdong and Yang, Hao and Jin, Ying and Wu, Jiafu and Hu, Xiaobin and Pan, Yanjie and Gan, Zhenye and Chi, Mingmin and others},
  journal={arXiv preprint arXiv:2503.09277},
  year={2025}
}
```