
Commit e30d3bf

Authored by linoytsaban, bghira, hlky, sayakpaul, and github-actions[bot]
[LoRA] add LoRA support to HiDream and fine-tuning script (#11281)
* initial commit
* initial commit
* initial commit
* initial commit
* initial commit
* initial commit
* Update examples/dreambooth/train_dreambooth_lora_hidream.py (Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>)
* move prompt embeds, pooled embeds outside
* Update examples/dreambooth/train_dreambooth_lora_hidream.py (Co-authored-by: hlky <hlky@hlky.ac>)
* Update examples/dreambooth/train_dreambooth_lora_hidream.py (Co-authored-by: hlky <hlky@hlky.ac>)
* fix import
* fix import and tokenizer 4, text encoder 4 loading
* te
* prompt embeds
* fix naming
* shapes
* initial commit to add HiDreamImageLoraLoaderMixin
* fix init
* add tests
* loader
* fix model input
* add code example to readme
* fix default max length of text encoders
* prints
* nullify training cond in unpatchify for temp fix to incompatible shaping of transformer output during training
* smol fix
* unpatchify
* unpatchify
* fix validation
* flip pred and loss
* fix shift!!!
* revert unpatchify changes (for now)
* smol fix
* Apply style fixes
* workaround moe training
* workaround moe training
* remove prints
* to reduce some memory, keep vae in `weight_dtype` same as we have for flux (as it's the same vae) https://github.com/huggingface/diffusers/blob/bbd0c161b55ba2234304f1e6325832dd69c60565/examples/dreambooth/train_dreambooth_lora_flux.py#L1207
* refactor to align with HiDream refactor
* refactor to align with HiDream refactor
* refactor to align with HiDream refactor
* add support for cpu offloading of text encoders
* Apply style fixes
* adjust lr and rank for train example
* fix copies
* Apply style fixes
* update README
* update README
* update README
* fix license
* keep prompt2,3,4 as None in validation
* remove reverse ode comment
* Update examples/dreambooth/train_dreambooth_lora_hidream.py (Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>)
* Update examples/dreambooth/train_dreambooth_lora_hidream.py (Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>)
* vae offload change
* fix text encoder offloading
* Apply style fixes
* cleaner to_kwargs
* fix module name in copied from
* add requirements
* fix offloading
* fix offloading
* fix offloading
* update transformers version in reqs
* try AutoTokenizer
* try AutoTokenizer
* Apply style fixes
* empty commit
* Delete tests/lora/test_lora_layers_hidream.py
* change tokenizer_4 to load with AutoTokenizer as well
* make text_encoder_four and tokenizer_four configurable
* save model card
* save model card
* revert T5
* fix test
* remove non diffusers lumina2 conversion

---------

Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
1 parent 6ab62c7 commit e30d3bf

File tree

10 files changed, +2437 -7 lines changed


docs/source/en/api/loaders/lora.md (+5 lines)

@@ -28,6 +28,7 @@ LoRA is a fast and lightweight training method that inserts and trains a signifi
   - [`WanLoraLoaderMixin`] provides similar functions for [Wan](https://huggingface.co/docs/diffusers/main/en/api/pipelines/wan).
   - [`CogView4LoraLoaderMixin`] provides similar functions for [CogView4](https://huggingface.co/docs/diffusers/main/en/api/pipelines/cogview4).
   - [`AmusedLoraLoaderMixin`] is for the [`AmusedPipeline`].
+  - [`HiDreamImageLoraLoaderMixin`] provides similar functions for [HiDream Image](https://huggingface.co/docs/diffusers/main/en/api/pipelines/hidream)
   - [`LoraBaseMixin`] provides a base class with several utility methods to fuse, unfuse, unload, LoRAs and more.

   <Tip>

@@ -91,6 +92,10 @@ To learn more about how to load LoRA weights, see the [LoRA](../../using-diffuse
   [[autodoc]] loaders.lora_pipeline.AmusedLoraLoaderMixin

+  ## HiDreamImageLoraLoaderMixin
+
+  [[autodoc]] loaders.lora_pipeline.HiDreamImageLoraLoaderMixin
+
   ## LoraBaseMixin

   [[autodoc]] loaders.lora_base.LoraBaseMixin

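In practice, the new mixin exposes the same LoRA entry points on `HiDreamImagePipeline` as the other loaders listed above. A minimal usage sketch (the LoRA repo id below is hypothetical, and the Llama text-encoder checkpoint follows the HiDream pipeline docs):

```python
import torch
from transformers import AutoTokenizer, LlamaForCausalLM

from diffusers import HiDreamImagePipeline

# HiDream Image uses a Llama-based fourth text encoder that is typically loaded
# separately (checkpoint id as referenced in the HiDream pipeline docs).
tokenizer_4 = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
text_encoder_4 = LlamaForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct",
    output_hidden_states=True,
    output_attentions=True,
    torch_dtype=torch.bfloat16,
)

pipe = HiDreamImagePipeline.from_pretrained(
    "HiDream-ai/HiDream-I1-Dev",
    tokenizer_4=tokenizer_4,
    text_encoder_4=text_encoder_4,
    torch_dtype=torch.bfloat16,
).to("cuda")

# HiDreamImageLoraLoaderMixin: load LoRA weights into the transformer.
pipe.load_lora_weights("your-username/hidream-lora")  # hypothetical LoRA repo id

# Utilities inherited from LoraBaseMixin: bake the LoRA into the base weights,
# undo that, or detach the LoRA entirely.
pipe.fuse_lora()
pipe.unfuse_lora()
pipe.unload_lora_weights()
```
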

examples/dreambooth/README_hidream.md (new file, +133 lines)

# DreamBooth training example for HiDream Image

[DreamBooth](https://arxiv.org/abs/2208.12242) is a method to personalize text-to-image models like Stable Diffusion given just a few (3-5) images of a subject.

The `train_dreambooth_lora_hidream.py` script shows how to implement the training procedure with [LoRA](https://huggingface.co/docs/peft/conceptual_guides/adapter#low-rank-adaptation-lora) and adapt it for [HiDream Image](https://huggingface.co/docs/diffusers/main/en/api/pipelines/hidream).

This will also allow us to push the trained model parameters to the Hugging Face Hub platform.

## Running locally with PyTorch

### Installing the dependencies

Before running the scripts, make sure to install the library's training dependencies:

**Important**

To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date, as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install -e .
```

Then cd into the `examples/dreambooth` folder and run

```bash
pip install -r requirements_hidream.txt
```

And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:

```bash
accelerate config
```

Or for a default accelerate configuration without answering questions about your environment:

```bash
accelerate config default
```

Or if your environment doesn't support an interactive shell (e.g., a notebook):

```python
from accelerate.utils import write_basic_config

write_basic_config()
```

When running `accelerate config`, setting torch compile mode to True can give dramatic speedups.
Note also that we use the PEFT library as the backend for LoRA training, so make sure you have `peft>=0.14.0` installed in your environment.

### Dog toy example

Now let's get our dataset. For this example we will use some dog images: https://huggingface.co/datasets/diffusers/dog-example.

Let's first download it locally:

```python
from huggingface_hub import snapshot_download

local_dir = "./dog"
snapshot_download(
    "diffusers/dog-example",
    local_dir=local_dir, repo_type="dataset",
    ignore_patterns=".gitattributes",
)
```

This will also allow us to push the trained LoRA parameters to the Hugging Face Hub platform.

Now, we can launch training using:

> [!NOTE]
> The following training configuration prioritizes lower memory consumption by using gradient checkpointing,
> the 8-bit Adam optimizer, latent caching, offloading, and no validation.
> Additionally, when only `--instance_prompt` is provided and no `--caption_column` (used for custom per-image prompts),
> the text embeddings are pre-computed to save memory.

```bash
export MODEL_NAME="HiDream-ai/HiDream-I1-Dev"
export INSTANCE_DIR="dog"
export OUTPUT_DIR="trained-hidream-lora"

accelerate launch train_dreambooth_lora_hidream.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --mixed_precision="bf16" \
  --instance_prompt="a photo of sks dog" \
  --resolution=1024 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --use_8bit_adam \
  --rank=16 \
  --learning_rate=2e-4 \
  --report_to="wandb" \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=1000 \
  --cache_latents \
  --gradient_checkpointing \
  --validation_epochs=25 \
  --seed="0" \
  --push_to_hub
```

For using `push_to_hub`, make sure you're logged into your Hugging Face account:

```bash
huggingface-cli login
```

To better track our training experiments, we're using the following flags in the command above:

* `report_to="wandb"` will ensure the training runs are tracked on [Weights and Biases](https://wandb.ai/site). To use it, be sure to install `wandb` with `pip install wandb`. Don't forget to call `wandb login <your_api_key>` before training if you haven't done it before.
* `validation_prompt` and `validation_epochs` allow the script to do a few validation inference runs, which lets us qualitatively check whether training is progressing as expected.

## Notes

Additionally, we welcome you to explore the following CLI arguments:

* `--lora_layers`: The transformer modules to apply LoRA training on. Please specify the layers as a comma-separated string, e.g. `"to_k,to_q,to_v"` will result in LoRA training of the attention layers only.
* `--rank`: The rank of the LoRA layers. The higher the rank, the more parameters are trained. The default is 16.

We provide several options for memory optimization:

* `--offload`: When enabled, we will offload the text encoders and VAE to the CPU when they are not being used.
* `--cache_latents`: When enabled, we will pre-compute the latents from the input images with the VAE and remove the VAE from memory once done.
* `--use_8bit_adam`: When enabled, we will use the 8-bit version of AdamW provided by the `bitsandbytes` library.
* `--instance_prompt` and no `--caption_column`: when only an instance prompt is provided, we will pre-compute the text embeddings and remove the text encoders from memory once done.

Refer to the [official documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/hidream) of the `HiDreamImagePipeline` to learn more about the model.

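Once training is done, the LoRA saved in `OUTPUT_DIR` (or pushed to the Hub) can be loaded back for inference. A minimal sketch, assuming the `trained-hidream-lora` output directory from the command above and the Llama text-encoder checkpoint referenced in the HiDream pipeline docs:

```python
import torch
from transformers import AutoTokenizer, LlamaForCausalLM

from diffusers import HiDreamImagePipeline

# The Llama-based fourth text encoder is loaded separately and passed to the pipeline.
tokenizer_4 = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
text_encoder_4 = LlamaForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct",
    output_hidden_states=True,
    output_attentions=True,
    torch_dtype=torch.bfloat16,
)

pipe = HiDreamImagePipeline.from_pretrained(
    "HiDream-ai/HiDream-I1-Dev",
    tokenizer_4=tokenizer_4,
    text_encoder_4=text_encoder_4,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # keeps peak VRAM lower than .to("cuda")

# Load the DreamBooth LoRA saved by train_dreambooth_lora_hidream.py.
pipe.load_lora_weights("trained-hidream-lora")  # OUTPUT_DIR from the command above

image = pipe("A photo of sks dog in a bucket").images[0]
image.save("sks_dog.png")
```
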
examples/dreambooth/requirements_hidream.txt (new file, +8 lines)

accelerate>=1.4.0
torchvision
transformers>=4.50.0
ftfy
tensorboard
Jinja2
peft>=0.14.0
sentencepiece
Tests for the HiDream DreamBooth LoRA training example (new file, +220 lines)

```python
# coding=utf-8
# Copyright 2024 HuggingFace Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
import os
import sys
import tempfile

import safetensors


sys.path.append("..")
from test_examples_utils import ExamplesTestsAccelerate, run_command  # noqa: E402


logging.basicConfig(level=logging.DEBUG)

logger = logging.getLogger()
stream_handler = logging.StreamHandler(sys.stdout)
logger.addHandler(stream_handler)


class DreamBoothLoRAHiDreamImage(ExamplesTestsAccelerate):
    instance_data_dir = "docs/source/en/imgs"
    pretrained_model_name_or_path = "hf-internal-testing/tiny-hidream-i1-pipe"
    text_encoder_4_path = "hf-internal-testing/tiny-random-LlamaForCausalLM"
    tokenizer_4_path = "hf-internal-testing/tiny-random-LlamaForCausalLM"
    script_path = "examples/dreambooth/train_dreambooth_lora_hidream.py"
    transformer_layer_type = "double_stream_blocks.0.block.attn1.to_k"

    def test_dreambooth_lora_hidream(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            test_args = f"""
                {self.script_path}
                --pretrained_model_name_or_path {self.pretrained_model_name_or_path}
                --pretrained_text_encoder_4_name_or_path {self.text_encoder_4_path}
                --pretrained_tokenizer_4_name_or_path {self.tokenizer_4_path}
                --instance_data_dir {self.instance_data_dir}
                --resolution 32
                --train_batch_size 1
                --gradient_accumulation_steps 1
                --max_train_steps 2
                --learning_rate 5.0e-04
                --scale_lr
                --lr_scheduler constant
                --lr_warmup_steps 0
                --output_dir {tmpdir}
                --max_sequence_length 16
                """.split()

            test_args.extend(["--instance_prompt", ""])
            run_command(self._launch_args + test_args)
            # save_pretrained smoke test
            self.assertTrue(os.path.isfile(os.path.join(tmpdir, "pytorch_lora_weights.safetensors")))

            # make sure the state_dict has the correct naming in the parameters.
            lora_state_dict = safetensors.torch.load_file(os.path.join(tmpdir, "pytorch_lora_weights.safetensors"))
            is_lora = all("lora" in k for k in lora_state_dict.keys())
            self.assertTrue(is_lora)

            # when not training the text encoder, all the parameters in the state dict should start
            # with `"transformer"` in their names.
            starts_with_transformer = all(key.startswith("transformer") for key in lora_state_dict.keys())
            self.assertTrue(starts_with_transformer)

    def test_dreambooth_lora_latent_caching(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            test_args = f"""
                {self.script_path}
                --pretrained_model_name_or_path {self.pretrained_model_name_or_path}
                --pretrained_text_encoder_4_name_or_path {self.text_encoder_4_path}
                --pretrained_tokenizer_4_name_or_path {self.tokenizer_4_path}
                --instance_data_dir {self.instance_data_dir}
                --resolution 32
                --train_batch_size 1
                --gradient_accumulation_steps 1
                --max_train_steps 2
                --cache_latents
                --learning_rate 5.0e-04
                --scale_lr
                --lr_scheduler constant
                --lr_warmup_steps 0
                --output_dir {tmpdir}
                --max_sequence_length 16
                """.split()

            test_args.extend(["--instance_prompt", ""])
            run_command(self._launch_args + test_args)
            # save_pretrained smoke test
            self.assertTrue(os.path.isfile(os.path.join(tmpdir, "pytorch_lora_weights.safetensors")))

            # make sure the state_dict has the correct naming in the parameters.
            lora_state_dict = safetensors.torch.load_file(os.path.join(tmpdir, "pytorch_lora_weights.safetensors"))
            is_lora = all("lora" in k for k in lora_state_dict.keys())
            self.assertTrue(is_lora)

            # when not training the text encoder, all the parameters in the state dict should start
            # with `"transformer"` in their names.
            starts_with_transformer = all(key.startswith("transformer") for key in lora_state_dict.keys())
            self.assertTrue(starts_with_transformer)

    def test_dreambooth_lora_layers(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            test_args = f"""
                {self.script_path}
                --pretrained_model_name_or_path {self.pretrained_model_name_or_path}
                --pretrained_text_encoder_4_name_or_path {self.text_encoder_4_path}
                --pretrained_tokenizer_4_name_or_path {self.tokenizer_4_path}
                --instance_data_dir {self.instance_data_dir}
                --resolution 32
                --train_batch_size 1
                --gradient_accumulation_steps 1
                --max_train_steps 2
                --cache_latents
                --learning_rate 5.0e-04
                --scale_lr
                --lora_layers {self.transformer_layer_type}
                --lr_scheduler constant
                --lr_warmup_steps 0
                --output_dir {tmpdir}
                --max_sequence_length 16
                """.split()

            test_args.extend(["--instance_prompt", ""])
            run_command(self._launch_args + test_args)
            # save_pretrained smoke test
            self.assertTrue(os.path.isfile(os.path.join(tmpdir, "pytorch_lora_weights.safetensors")))

            # make sure the state_dict has the correct naming in the parameters.
            lora_state_dict = safetensors.torch.load_file(os.path.join(tmpdir, "pytorch_lora_weights.safetensors"))
            is_lora = all("lora" in k for k in lora_state_dict.keys())
            self.assertTrue(is_lora)

            # when not training the text encoder, all the parameters in the state dict should start
            # with `"transformer"` in their names. In this test, only params of
            # `self.transformer_layer_type` should be in the state dict.
            starts_with_transformer = all(self.transformer_layer_type in key for key in lora_state_dict)
            self.assertTrue(starts_with_transformer)

    def test_dreambooth_lora_hidream_checkpointing_checkpoints_total_limit(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            test_args = f"""
                {self.script_path}
                --pretrained_model_name_or_path={self.pretrained_model_name_or_path}
                --pretrained_text_encoder_4_name_or_path {self.text_encoder_4_path}
                --pretrained_tokenizer_4_name_or_path {self.tokenizer_4_path}
                --instance_data_dir={self.instance_data_dir}
                --output_dir={tmpdir}
                --resolution=32
                --train_batch_size=1
                --gradient_accumulation_steps=1
                --max_train_steps=6
                --checkpoints_total_limit=2
                --checkpointing_steps=2
                --max_sequence_length 16
                """.split()

            test_args.extend(["--instance_prompt", ""])
            run_command(self._launch_args + test_args)

            self.assertEqual(
                {x for x in os.listdir(tmpdir) if "checkpoint" in x},
                {"checkpoint-4", "checkpoint-6"},
            )

    def test_dreambooth_lora_hidream_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            test_args = f"""
                {self.script_path}
                --pretrained_model_name_or_path={self.pretrained_model_name_or_path}
                --pretrained_text_encoder_4_name_or_path {self.text_encoder_4_path}
                --pretrained_tokenizer_4_name_or_path {self.tokenizer_4_path}
                --instance_data_dir={self.instance_data_dir}
                --output_dir={tmpdir}
                --resolution=32
                --train_batch_size=1
                --gradient_accumulation_steps=1
                --max_train_steps=4
                --checkpointing_steps=2
                --max_sequence_length 16
                """.split()

            test_args.extend(["--instance_prompt", ""])
            run_command(self._launch_args + test_args)

            self.assertEqual({x for x in os.listdir(tmpdir) if "checkpoint" in x}, {"checkpoint-2", "checkpoint-4"})

            resume_run_args = f"""
                {self.script_path}
                --pretrained_model_name_or_path={self.pretrained_model_name_or_path}
                --pretrained_text_encoder_4_name_or_path {self.text_encoder_4_path}
                --pretrained_tokenizer_4_name_or_path {self.tokenizer_4_path}
                --instance_data_dir={self.instance_data_dir}
                --output_dir={tmpdir}
                --resolution=32
                --train_batch_size=1
                --gradient_accumulation_steps=1
                --max_train_steps=8
                --checkpointing_steps=2
                --resume_from_checkpoint=checkpoint-4
                --checkpoints_total_limit=2
                --max_sequence_length 16
                """.split()

            resume_run_args.extend(["--instance_prompt", ""])
            run_command(self._launch_args + resume_run_args)

            self.assertEqual({x for x in os.listdir(tmpdir) if "checkpoint" in x}, {"checkpoint-6", "checkpoint-8"})
```

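Outside the test harness, the same key-naming check can be run by hand on a locally trained LoRA. A short sketch, assuming the `trained-hidream-lora` output directory from the README example:

```python
# Sanity-check the saved LoRA weights the same way the tests do: every key should
# contain "lora" and belong to the transformer. The path assumes the README's OUTPUT_DIR.
import os

from safetensors.torch import load_file

lora_path = os.path.join("trained-hidream-lora", "pytorch_lora_weights.safetensors")
state_dict = load_file(lora_path)

assert all("lora" in key for key in state_dict), "found non-LoRA keys"
assert all(key.startswith("transformer") for key in state_dict), "found non-transformer keys"
print(f"OK: {len(state_dict)} LoRA tensors, e.g. {next(iter(state_dict))}")
```
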
