
[sd_scripts] config: min_bucket_reso and max_bucket_reso are ignored if bucket_no_upscale is set

min_bucket_reso and max_bucket_reso are ignored if bucket_no_upscale is set: with no-upscale bucketing, bucket resolutions are derived from the training images themselves rather than clamped to a configured range.
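The interaction can be illustrated with a small sketch (a simplified model of the bucketing logic, not sd-scripts' actual implementation; the function name and defaults are made up for illustration):

```python
def bucket_reso(width, height, steps=64, min_reso=256, max_reso=1024, no_upscale=False):
    """Pick a bucket resolution for an image (illustrative only)."""
    if no_upscale:
        # Derive the bucket from the image itself: round each side down
        # to a multiple of `steps`. min_reso/max_reso are never consulted,
        # which is why those options are ignored in this mode.
        return (max(width // steps * steps, steps),
                max(height // steps * steps, steps))
    # Otherwise clamp each side into [min_reso, max_reso] before rounding.
    w = min(max(width, min_reso), max_reso)
    h = min(max(height, min_reso), max_reso)
    return (w // steps * steps, h // steps * steps)

print(bucket_reso(200, 1400, no_upscale=True))   # (192, 1344)
print(bucket_reso(200, 1400, no_upscale=False))  # (256, 1024)
```

Note how the no-upscale bucket follows the image's real aspect ratio, while the clamped bucket is forced into the configured bounds.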

References:

  sd-scripts dataset config documentation: https://github.com/kohya-ss/sd-scripts/blob/main/docs/config_README-ja.md
  [Stable Diffusion] Training your own LoRA (Linux), Zhihu: https://zhuanlan.zhihu.com/p/640144661

Configuration file format (TOML):

  [general]
  shuffle_caption = true
  caption_extension = '.txt'
  keep_tokens = 1

  # This is a DreamBooth-style dataset
  [[datasets]]
  resolution = 512
  batch_size = 4
  keep_tokens = 2

    [[datasets.subsets]]
    image_dir = 'C:\hoge'
    class_tokens = 'hoge girl'
    # This subset has keep_tokens = 2 (inherited from the parent [[datasets]])

    [[datasets.subsets]]
    image_dir = 'C:\fuga'
    class_tokens = 'fuga boy'
    keep_tokens = 3

    [[datasets.subsets]]
    is_reg = true
    image_dir = 'C:\reg'
    class_tokens = 'human'
    keep_tokens = 1

  # This is a fine-tuning-style dataset
  [[datasets]]
  resolution = [768, 768]
  batch_size = 2

    [[datasets.subsets]]
    image_dir = 'C:\piyo'
    metadata_file = 'C:\piyo\piyo_md.json'
    # This subset has keep_tokens = 1 (inherited from [general])

In this example, three directories are trained as a DreamBooth-style dataset at 512x512 (batch size 4), and one directory as a fine-tuning-style dataset at 768x768 (batch size 2).

  C:\
  ├─ hoge -> [[datasets.subsets]] No.1 ┐                     ┐
  ├─ fuga -> [[datasets.subsets]] No.2 |-> [[datasets]] No.1 |-> [general]
  ├─ reg  -> [[datasets.subsets]] No.3 ┘                     |
  └─ piyo -> [[datasets.subsets]] No.4 --> [[datasets]] No.2 ┘

Parameters available to all methods: [general]

DreamBooth-style-specific parameters:

Fine-tuning-style-specific parameters:

train_db.py arguments

  [-h] [--v2] [--v_parameterization]
  [--pretrained_model_name_or_path PRETRAINED_MODEL_NAME_OR_PATH]
  [--tokenizer_cache_dir TOKENIZER_CACHE_DIR]
  [--train_data_dir TRAIN_DATA_DIR]
  [--shuffle_caption]
  [--caption_extension CAPTION_EXTENSION]
  [--caption_extention CAPTION_EXTENTION]
  [--keep_tokens KEEP_TOKENS]
  [--caption_prefix CAPTION_PREFIX]
  [--caption_suffix CAPTION_SUFFIX]
  [--color_aug]
  [--flip_aug]
  [--face_crop_aug_range FACE_CROP_AUG_RANGE]
  [--random_crop]
  [--debug_dataset]
  [--resolution RESOLUTION]
  [--cache_latents]
  [--vae_batch_size VAE_BATCH_SIZE]
  [--cache_latents_to_disk]
  [--enable_bucket]
  [--min_bucket_reso MIN_BUCKET_RESO]
  [--max_bucket_reso MAX_BUCKET_RESO]
  [--bucket_reso_steps BUCKET_RESO_STEPS]
  [--bucket_no_upscale]
  [--token_warmup_min TOKEN_WARMUP_MIN]
  [--token_warmup_step TOKEN_WARMUP_STEP]
  [--dataset_class DATASET_CLASS]
  [--caption_dropout_rate CAPTION_DROPOUT_RATE]
  [--caption_dropout_every_n_epochs CAPTION_DROPOUT_EVERY_N_EPOCHS]
  [--caption_tag_dropout_rate CAPTION_TAG_DROPOUT_RATE]
  [--reg_data_dir REG_DATA_DIR]
  [--output_dir OUTPUT_DIR]
  [--output_name OUTPUT_NAME]
  [--huggingface_repo_id HUGGINGFACE_REPO_ID]
  [--huggingface_repo_type HUGGINGFACE_REPO_TYPE]
  [--huggingface_path_in_repo HUGGINGFACE_PATH_IN_REPO]
  [--huggingface_token HUGGINGFACE_TOKEN]
  [--huggingface_repo_visibility HUGGINGFACE_REPO_VISIBILITY]
  [--save_state_to_huggingface]
  [--resume_from_huggingface]
  [--async_upload]
  [--save_precision {None,float,fp16,bf16}]
  [--save_every_n_epochs SAVE_EVERY_N_EPOCHS]
  [--save_every_n_steps SAVE_EVERY_N_STEPS]
  [--save_n_epoch_ratio SAVE_N_EPOCH_RATIO]
  [--save_last_n_epochs SAVE_LAST_N_EPOCHS]
  [--save_last_n_epochs_state SAVE_LAST_N_EPOCHS_STATE]
  [--save_last_n_steps SAVE_LAST_N_STEPS]
  [--save_last_n_steps_state SAVE_LAST_N_STEPS_STATE]
  [--save_state]
  [--resume RESUME]
  [--train_batch_size TRAIN_BATCH_SIZE]
  [--max_token_length {None,150,225}]
  [--mem_eff_attn]
  [--xformers]
  [--sdpa]
  [--vae VAE]
  [--max_train_steps MAX_TRAIN_STEPS]
  [--max_train_epochs MAX_TRAIN_EPOCHS]
  [--max_data_loader_n_workers MAX_DATA_LOADER_N_WORKERS]
  [--persistent_data_loader_workers]
  [--seed SEED]
  [--gradient_checkpointing]
  [--gradient_accumulation_steps GRADIENT_ACCUMULATION_STEPS]
  [--mixed_precision {no,fp16,bf16}]
  [--full_fp16]
  [--full_bf16]
  [--ddp_timeout DDP_TIMEOUT]
  [--clip_skip CLIP_SKIP]
  [--logging_dir LOGGING_DIR]
  [--log_with {tensorboard,wandb,all}]
  [--log_prefix LOG_PREFIX]
  [--log_tracker_name LOG_TRACKER_NAME]
  [--log_tracker_config LOG_TRACKER_CONFIG]
  [--wandb_api_key WANDB_API_KEY]
  [--noise_offset NOISE_OFFSET]
  [--multires_noise_iterations MULTIRES_NOISE_ITERATIONS]
  [--ip_noise_gamma IP_NOISE_GAMMA]
  [--multires_noise_discount MULTIRES_NOISE_DISCOUNT]
  [--adaptive_noise_scale ADAPTIVE_NOISE_SCALE]
  [--zero_terminal_snr]
  [--min_timestep MIN_TIMESTEP]
  [--max_timestep MAX_TIMESTEP]
  [--lowram]
  [--sample_every_n_steps SAMPLE_EVERY_N_STEPS]
  [--sample_every_n_epochs SAMPLE_EVERY_N_EPOCHS]
  [--sample_prompts SAMPLE_PROMPTS]
  [--sample_sampler {ddim,pndm,lms,euler,euler_a,heun,dpm_2,dpm_2_a,dpmsolver,dpmsolver++,dpmsingle,k_lms,k_euler,k_euler_a,k_dpm_2,k_dpm_2_a}]
  [--config_file CONFIG_FILE]
  [--output_config]
  [--metadata_title METADATA_TITLE]
  [--metadata_author METADATA_AUTHOR]
  [--metadata_description METADATA_DESCRIPTION]
  [--metadata_license METADATA_LICENSE]
  [--metadata_tags METADATA_TAGS]
  [--prior_loss_weight PRIOR_LOSS_WEIGHT]
  [--save_model_as {None,ckpt,safetensors,diffusers,diffusers_safetensors}]
  [--use_safetensors]
  [--optimizer_type OPTIMIZER_TYPE]
  [--use_8bit_adam]
  [--use_lion_optimizer]
  [--learning_rate LEARNING_RATE]
  [--max_grad_norm MAX_GRAD_NORM]
  [--optimizer_args [OPTIMIZER_ARGS [OPTIMIZER_ARGS ...]]]
  [--lr_scheduler_type LR_SCHEDULER_TYPE]
  [--lr_scheduler_args [LR_SCHEDULER_ARGS [LR_SCHEDULER_ARGS ...]]]
  [--lr_scheduler LR_SCHEDULER]
  [--lr_warmup_steps LR_WARMUP_STEPS]
  [--lr_scheduler_num_cycles LR_SCHEDULER_NUM_CYCLES]
  [--lr_scheduler_power LR_SCHEDULER_POWER]
  [--dataset_config DATASET_CONFIG]
  [--min_snr_gamma MIN_SNR_GAMMA]
  [--scale_v_pred_loss_like_noise_pred]
  [--v_pred_like_loss V_PRED_LIKE_LOSS]
  [--debiased_estimation_loss]
  [--weighted_captions]
  [--learning_rate_te LEARNING_RATE_TE]
  [--no_token_padding]
  [--stop_text_encoder_training STOP_TEXT_ENCODER_TRAINING]
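A minimal illustrative invocation tying the arguments above to the TOML config (a sketch only: all paths and hyperparameter values are placeholders, not recommendations; sd-scripts is normally launched through accelerate):

```shell
accelerate launch train_db.py \
  --pretrained_model_name_or_path /path/to/model.safetensors \
  --dataset_config dataset_config.toml \
  --output_dir /path/to/output --output_name my_db_model \
  --save_model_as safetensors --save_precision fp16 \
  --mixed_precision fp16 --xformers \
  --learning_rate 1e-6 --max_train_steps 1600 \
  --prior_loss_weight 1.0
```

When --dataset_config is given, the data-related options (train_data_dir, resolution, bucketing, and so on) come from the TOML file rather than the command line.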

train_network.py arguments

  [-h] [--v2] [--v_parameterization]
  [--pretrained_model_name_or_path PRETRAINED_MODEL_NAME_OR_PATH]
  [--tokenizer_cache_dir TOKENIZER_CACHE_DIR]
  [--train_data_dir TRAIN_DATA_DIR]
  [--shuffle_caption]
  [--caption_extension CAPTION_EXTENSION]
  [--caption_extention CAPTION_EXTENTION]
  [--keep_tokens KEEP_TOKENS]
  [--caption_prefix CAPTION_PREFIX]
  [--caption_suffix CAPTION_SUFFIX]
  [--color_aug]
  [--flip_aug]
  [--face_crop_aug_range FACE_CROP_AUG_RANGE]
  [--random_crop]
  [--debug_dataset]
  [--resolution RESOLUTION]
  [--cache_latents]
  [--vae_batch_size VAE_BATCH_SIZE]
  [--cache_latents_to_disk]
  [--enable_bucket]
  [--min_bucket_reso MIN_BUCKET_RESO]
  [--max_bucket_reso MAX_BUCKET_RESO]
  [--bucket_reso_steps BUCKET_RESO_STEPS]
  [--bucket_no_upscale]
  [--token_warmup_min TOKEN_WARMUP_MIN]
  [--token_warmup_step TOKEN_WARMUP_STEP]
  [--dataset_class DATASET_CLASS]
  [--caption_dropout_rate CAPTION_DROPOUT_RATE]
  [--caption_dropout_every_n_epochs CAPTION_DROPOUT_EVERY_N_EPOCHS]
  [--caption_tag_dropout_rate CAPTION_TAG_DROPOUT_RATE]
  [--reg_data_dir REG_DATA_DIR]
  [--in_json IN_JSON]
  [--dataset_repeats DATASET_REPEATS]
  [--output_dir OUTPUT_DIR]
  [--output_name OUTPUT_NAME]
  [--huggingface_repo_id HUGGINGFACE_REPO_ID]
  [--huggingface_repo_type HUGGINGFACE_REPO_TYPE]
  [--huggingface_path_in_repo HUGGINGFACE_PATH_IN_REPO]
  [--huggingface_token HUGGINGFACE_TOKEN]
  [--huggingface_repo_visibility HUGGINGFACE_REPO_VISIBILITY]
  [--save_state_to_huggingface]
  [--resume_from_huggingface]
  [--async_upload]
  [--save_precision {None,float,fp16,bf16}]
  [--save_every_n_epochs SAVE_EVERY_N_EPOCHS]
  [--save_every_n_steps SAVE_EVERY_N_STEPS]
  [--save_n_epoch_ratio SAVE_N_EPOCH_RATIO]
  [--save_last_n_epochs SAVE_LAST_N_EPOCHS]
  [--save_last_n_epochs_state SAVE_LAST_N_EPOCHS_STATE]
  [--save_last_n_steps SAVE_LAST_N_STEPS]
  [--save_last_n_steps_state SAVE_LAST_N_STEPS_STATE]
  [--save_state]
  [--resume RESUME]
  [--train_batch_size TRAIN_BATCH_SIZE]
  [--max_token_length {None,150,225}]
  [--mem_eff_attn]
  [--xformers]
  [--sdpa]
  [--vae VAE]
  [--max_train_steps MAX_TRAIN_STEPS]
  [--max_train_epochs MAX_TRAIN_EPOCHS]
  [--max_data_loader_n_workers MAX_DATA_LOADER_N_WORKERS]
  [--persistent_data_loader_workers]
  [--seed SEED]
  [--gradient_checkpointing]
  [--gradient_accumulation_steps GRADIENT_ACCUMULATION_STEPS]
  [--mixed_precision {no,fp16,bf16}]
  [--full_fp16]
  [--full_bf16]
  [--ddp_timeout DDP_TIMEOUT]
  [--clip_skip CLIP_SKIP]
  [--logging_dir LOGGING_DIR]
  [--log_with {tensorboard,wandb,all}]
  [--log_prefix LOG_PREFIX]
  [--log_tracker_name LOG_TRACKER_NAME]
  [--log_tracker_config LOG_TRACKER_CONFIG]
  [--wandb_api_key WANDB_API_KEY]
  [--noise_offset NOISE_OFFSET]
  [--multires_noise_iterations MULTIRES_NOISE_ITERATIONS]
  [--ip_noise_gamma IP_NOISE_GAMMA]
  [--multires_noise_discount MULTIRES_NOISE_DISCOUNT]
  [--adaptive_noise_scale ADAPTIVE_NOISE_SCALE]
  [--zero_terminal_snr]
  [--min_timestep MIN_TIMESTEP]
  [--max_timestep MAX_TIMESTEP]
  [--lowram]
  [--sample_every_n_steps SAMPLE_EVERY_N_STEPS]
  [--sample_every_n_epochs SAMPLE_EVERY_N_EPOCHS]
  [--sample_prompts SAMPLE_PROMPTS]
  [--sample_sampler {ddim,pndm,lms,euler,euler_a,heun,dpm_2,dpm_2_a,dpmsolver,dpmsolver++,dpmsingle,k_lms,k_euler,k_euler_a,k_dpm_2,k_dpm_2_a}]
  [--config_file CONFIG_FILE]
  [--output_config]
  [--metadata_title METADATA_TITLE]
  [--metadata_author METADATA_AUTHOR]
  [--metadata_description METADATA_DESCRIPTION]
  [--metadata_license METADATA_LICENSE]
  [--metadata_tags METADATA_TAGS]
  [--prior_loss_weight PRIOR_LOSS_WEIGHT]
  [--optimizer_type OPTIMIZER_TYPE]
  [--use_8bit_adam]
  [--use_lion_optimizer]
  [--learning_rate LEARNING_RATE]
  [--max_grad_norm MAX_GRAD_NORM]
  [--optimizer_args [OPTIMIZER_ARGS ...]]
  [--lr_scheduler_type LR_SCHEDULER_TYPE]
  [--lr_scheduler_args [LR_SCHEDULER_ARGS ...]]
  [--lr_scheduler LR_SCHEDULER]
  [--lr_warmup_steps LR_WARMUP_STEPS]
  [--lr_scheduler_num_cycles LR_SCHEDULER_NUM_CYCLES]
  [--lr_scheduler_power LR_SCHEDULER_POWER]
  [--dataset_config DATASET_CONFIG]
  [--min_snr_gamma MIN_SNR_GAMMA]
  [--scale_v_pred_loss_like_noise_pred]
  [--v_pred_like_loss V_PRED_LIKE_LOSS]
  [--debiased_estimation_loss]
  [--weighted_captions]
  [--no_metadata]
  [--save_model_as {None,ckpt,pt,safetensors}]
  [--unet_lr UNET_LR]
  [--text_encoder_lr TEXT_ENCODER_LR]
  [--network_weights NETWORK_WEIGHTS]
  [--network_module NETWORK_MODULE]
  [--network_dim NETWORK_DIM]
  [--network_alpha NETWORK_ALPHA]
  [--network_dropout NETWORK_DROPOUT]
  [--network_args [NETWORK_ARGS ...]]
  [--network_train_unet_only]
  [--network_train_text_encoder_only]
  [--training_comment TRAINING_COMMENT]
  [--dim_from_weights]
  [--scale_weight_norms SCALE_WEIGHT_NORMS]
  [--base_weights [BASE_WEIGHTS ...]]
  [--base_weights_multiplier [BASE_WEIGHTS_MULTIPLIER ...]]
  [--no_half_vae]
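Likewise, a sketch of a LoRA run with train_network.py (placeholder paths and values, not recommendations; networks.lora is the LoRA module shipped with sd-scripts):

```shell
accelerate launch train_network.py \
  --pretrained_model_name_or_path /path/to/model.safetensors \
  --dataset_config dataset_config.toml \
  --network_module networks.lora \
  --network_dim 32 --network_alpha 16 \
  --unet_lr 1e-4 --text_encoder_lr 5e-5 \
  --optimizer_type AdamW8bit \
  --mixed_precision fp16 --xformers \
  --output_dir /path/to/output --output_name my_lora \
  --save_model_as safetensors
```

The network_* options are the LoRA-specific additions relative to train_db.py: network_dim sets the LoRA rank and network_alpha its scaling, while unet_lr and text_encoder_lr let the two components train at different learning rates.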
