Code:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir=model_dir,
    per_device_train_batch_size=16,
    num_train_epochs=5,
    logging_steps=100,
)
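To see what this constructor actually produces, here is a minimal, self-contained sketch: `TrainingArguments` is a dataclass, so any argument not passed keeps its documented default and the resolved values can be inspected directly. The `"./results"` path is a hypothetical placeholder standing in for `model_dir`.

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./results",          # hypothetical path standing in for model_dir
    per_device_train_batch_size=16,
    num_train_epochs=5,
    logging_steps=100,
)
# Arguments not passed keep their documented defaults:
print(args.learning_rate)      # -> 5e-05 (the documented default)
print(args.num_train_epochs)   # -> 5 (the value set above)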
The `TrainingArguments` class is defined in:
/xxx/anaconda/envs/your_env/lib/python3.11/site-packages/transformers/training_args.py
References: the HuggingFace `TrainingArguments` documentation and the GitHub source code.
Key parameters (from the HuggingFace documentation):

`output_dir` (`str`): The output directory where the model predictions and checkpoints will be written.

`num_train_epochs` (`float`, optional, defaults to 3.0): Total number of training epochs to perform.

`per_device_train_batch_size` (`int`, optional, defaults to 8): The batch size per GPU/TPU core/CPU for training.

`per_device_eval_batch_size` (`int`, optional, defaults to 8): The batch size per GPU/TPU core/CPU for evaluation.

`learning_rate` (`float`, optional, defaults to 5e-5): The initial learning rate for the [`AdamW`] optimizer.

`weight_decay` (`float`, optional, defaults to 0): The weight decay to apply (if not zero) to all layers except bias and LayerNorm weights in the [`AdamW`] optimizer.

`warmup_steps` (`int`, optional, defaults to 0): Number of steps used for a linear warmup from 0 to `learning_rate`. Overrides any effect of `warmup_ratio`.

`logging_steps` (`int` or `float`, optional, defaults to 500): Number of update steps between two logs if `logging_strategy="steps"`. Should be an integer or a float in range [0,1). If smaller than 1, will be interpreted as ratio of total training steps.
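A short sketch of the two accepted forms of `logging_steps` (the `"out"` directory is a placeholder; the fractional form assumes a transformers version recent enough to support float steps, as the docs above describe):

from transformers import TrainingArguments

# Log every 100 optimizer steps:
args_absolute = TrainingArguments(output_dir="out", logging_steps=100)

# Log every 5% of total training steps (a float in [0,1) is read as a ratio):
args_ratio = TrainingArguments(output_dir="out", logging_steps=0.05)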
`evaluation_strategy` (`str` or [`~trainer_utils.IntervalStrategy`], optional, defaults to "no"): The evaluation strategy to adopt during training. Possible values are:
- `"no"`: No evaluation is done during training.
- `"steps"`: Evaluation is done (and logged) every `eval_steps`.
- `"epoch"`: Evaluation is done at the end of each epoch.
`save_strategy` (`str` or [`~trainer_utils.IntervalStrategy`], optional, defaults to "steps"): The checkpoint save strategy to adopt during training. Possible values are:
- `"no"`: No save is done during training.
- `"epoch"`: Save is done at the end of each epoch.
- `"steps"`: Save is done every `save_steps`.
`save_steps` (`int` or `float`, optional, defaults to 500): Number of update steps before two checkpoint saves if `save_strategy="steps"`. Should be an integer or a float in range [0,1). If smaller than 1, will be interpreted as ratio of total training steps.

`gradient_accumulation_steps` (`int`, optional, defaults to 1): Number of update steps to accumulate the gradients for, before performing a backward/update pass.
<Tip warning={true}>
When using gradient accumulation, one step is counted as one step with backward pass. Therefore, logging, evaluation, save will be conducted every `gradient_accumulation_steps * xxx_step` training examples.
</Tip>
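To make the Tip concrete, a small sketch of the arithmetic (the values are illustrative, not from the original post):

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",               # hypothetical placeholder
    per_device_train_batch_size=16,
    gradient_accumulation_steps=4,  # gradients from 4 forward/backward passes
)
# Effective batch size per device per optimizer step:
effective = args.per_device_train_batch_size * args.gradient_accumulation_steps
print(effective)  # -> 64
# With logging_steps=100, logs would appear every 100 optimizer steps,
# i.e. every 100 * 4 = 400 micro-batches of 16 examples.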
`save_total_limit` (`int`, optional): If a value is passed, will limit the total amount of checkpoints, deleting the older checkpoints in `output_dir`.

`disable_tqdm` (`bool`, optional): Whether or not to disable the tqdm progress bars and table of metrics produced by [`~notebook.NotebookTrainingTracker`] in Jupyter Notebooks. Will default to `True` if the logging level is set to warn or lower (default), `False` otherwise.

`load_best_model_at_end` (`bool`, optional, defaults to False): Whether or not to load the best model found during training at the end of training. When this option is enabled, the best checkpoint will always be saved. See [`save_total_limit`](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.save_total_limit) for more.
When set to `True`, the parameters `save_strategy` needs to be the same as `evaluation_strategy`, and in the case it is "steps", `save_steps` must be a round multiple of `eval_steps`.
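A sketch of a configuration satisfying these constraints (paths are placeholders, and `metric_for_best_model="loss"` is just one valid choice, not the only one):

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",              # hypothetical placeholder
    evaluation_strategy="steps",
    eval_steps=100,
    save_strategy="steps",         # must match evaluation_strategy
    save_steps=500,                # a round multiple of eval_steps (5 * 100)
    load_best_model_at_end=True,
    metric_for_best_model="loss",  # assumption: rank checkpoints by eval loss
    save_total_limit=2,            # keeps the best and most recent checkpoints
)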
`metric_for_best_model` (`str`, optional): Use in conjunction with `load_best_model_at_end` to specify the metric to use to compare two different models.