
Pretraining a Chinese GPT-2 (including retraining the tokenizer)


Training data

1. Files use the .json suffix.

2. The data is in JSON Lines format: one JSON object per line.

3. Each JSON object has the following structure:

{
    "content": "①北京和上海户籍的游客可获得韩国多次签证;②“整容客”可以不经由韩国使领馆、直接在网上申请签证;③中泰免签的实施日期尚未敲定;④越南已向中国持通行证旅游的公民全面开放。"
}
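To make the format concrete, here is a minimal sketch (not from the original post; the file name train.json and the records are just illustrations) that writes data in this one-object-per-line layout:

import json

# Hypothetical example records; the scripts below only rely on the "content" field.
records = [
    {"content": "北京是中国的首都,今天天气真好。"},
    {"content": "①北京和上海户籍的游客可获得韩国多次签证。"},
]

with open("train.json", "w", encoding="utf-8") as f:
    for record in records:
        # One JSON object per line, i.e. the JSON Lines layout that load_dataset("json", ...) expects.
        f.write(json.dumps(record, ensure_ascii=False) + "\n")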

Tokenizer training (BPE)

from transformers import AutoTokenizer
from datasets import load_dataset

path = r'/tmp/pycharm_project_806/LCSTS_new/train.json'  # a Chinese text dataset
raw_data = load_dataset("json", data_files=path, split='train')

# Yield the corpus in batches of 1,000 examples so training does not hold everything in memory at once.
training_corpus = (
    raw_data[i : i + 1000]["content"]
    for i in range(0, len(raw_data), 1000)
)

old_tokenizer = AutoTokenizer.from_pretrained("/home/chenjq/model/gpt2")
# Train a new BPE tokenizer with a 52,000-token vocabulary on the Chinese corpus.
tokenizer = old_tokenizer.train_new_from_iterator(training_corpus, 52000)

example = '就是去美国大使馆的官方网站,它有中文版,去把每一条仔细研究透了,把每一个表格和材料都准备好了'  # Chinese text
old_tokens = old_tokenizer.tokenize(example)
print('old_tokens:', old_tokens)
new_tokens = tokenizer.tokenize(example)
print('new_tokens:', new_tokens)

tokenizer.save_pretrained("./my-tok")
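As a quick sanity check (not part of the original script), the saved tokenizer can be reloaded with AutoTokenizer and applied to the same example:

from transformers import AutoTokenizer

# Reload the retrained BPE tokenizer from the directory saved above.
reloaded = AutoTokenizer.from_pretrained("./my-tok")
print(reloaded.tokenize('就是去美国大使馆的官方网站,它有中文版'))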

Tokenizer training (SentencePiece)

from tokenizers import (
    decoders,
    models,
    normalizers,
    pre_tokenizers,
    processors,
    trainers,
    Tokenizer,
)
from datasets import load_dataset
from tokenizers import Regex

path = r'wiki.json'  # a Chinese text dataset
# path = r'all_train.json'  # a Chinese text dataset
# path = r'cluener.jsonl'  # a Chinese text dataset
# path = r'/tmp/pycharm_project_806/cluener.json'  # a Chinese text dataset
raw_data = load_dataset("json", data_files=path, split='train')
# raw_data = raw_data.select(range(10000))

training_corpus = (
    raw_data[i : i + 1000]["content"]
    for i in range(0, len(raw_data), 1000)
)

tokenizer = Tokenizer(models.Unigram())

# For NLG, do not add normalizers.Lowercase(): once everything is lowercased, the model can never produce
# uppercase text at decode time.
# For NLU models such as BERT, normalizers.Lowercase() is fine, because those models are used for understanding
# (e.g. text classification, entity extraction) rather than generation, so casing hardly matters there.
tokenizer.normalizer = normalizers.Sequence(
    [
        normalizers.Replace("``", '"'),
        normalizers.Replace("''", '"'),
        normalizers.NFKD(),
        normalizers.StripAccents(),
        normalizers.Replace(Regex(" {2,}"), " "),
    ]
)
tokenizer.pre_tokenizer = pre_tokenizers.Metaspace()
print(tokenizer.pre_tokenizer.pre_tokenize_str("北京是中国的首都,今天天气真好。Let's test this tokenizer."))

special_tokens = ["<bos>", "<eos>", '<sep>'] + [f'<unused{i}>' for i in range(50)]
trainer = trainers.UnigramTrainer(
    vocab_size=52000, special_tokens=special_tokens, unk_token="<unk>", max_piece_length=4,
)
tokenizer.train_from_iterator(training_corpus, trainer=trainer)

encoding = tokenizer.encode("北京是中国的首都,今天天气真好。Let's test this tokenizer.")
print(encoding.tokens)

bos_token_id = tokenizer.token_to_id("<bos>")
eos_token_id = tokenizer.token_to_id("<eos>")
sep_token_id = tokenizer.token_to_id("<sep>")

# Wrap every single sequence in <bos> ... <eos>, and sentence pairs in <bos> A <sep> B <eos>.
tokenizer.post_processor = processors.TemplateProcessing(
    single="<bos>:0 $A:0 <eos>:0",
    pair="<bos>:0 $A:0 <sep>:0 $B:1 <eos>:1",
    special_tokens=[("<bos>", bos_token_id), ("<eos>", eos_token_id), ("<sep>", sep_token_id)],
)

encoding = tokenizer.encode("北京是中国的首都,今天天气真好。Let's test this tokenizer.")
print(encoding.tokens)
encoding = tokenizer.encode("北京是中国的首都,今天天气真好。Let's test this tokenizer.", 'i am happy.')
print(encoding.tokens)
print(tokenizer.decode(encoding.ids))

tokenizer.decoder = decoders.Metaspace()
print(tokenizer.decode(encoding.ids))

from transformers import PreTrainedTokenizerFast

wrapped_tokenizer = PreTrainedTokenizerFast(
    tokenizer_object=tokenizer,
    bos_token="<bos>",
    eos_token="<eos>",
    sep_token="<sep>",
)
wrapped_tokenizer.save_pretrained('./sp-tok-v4')
print(wrapped_tokenizer.tokenize("北京是中国的首都,今天天气真好。"))
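A similar sanity check for the SentencePiece-style tokenizer, as a small sketch assuming the ./sp-tok-v4 directory saved above; the reloaded tokenizer should still wrap sequences in <bos> ... <eos> via its post-processor:

from transformers import AutoTokenizer

# Reload the wrapped Unigram tokenizer from the directory saved above.
reloaded = AutoTokenizer.from_pretrained("./sp-tok-v4")
ids = reloaded("北京是中国的首都,今天天气真好。")["input_ids"]
print(reloaded.convert_ids_to_tokens(ids))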

Model training

#!/usr/bin/env python
# coding=utf-8
# Copyright 2020 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Fine-tuning the library models for causal language modeling (GPT, GPT-2, CTRL, ...) on a text file or a dataset.

Here is the full list of checkpoints on the hub that can be fine-tuned by this script:
https://huggingface.co/models?filter=text-generation
"""
# You can also adapt this script on your own causal language modeling task. Pointers for this are left as comments.

import logging
import math
import os
import sys
import warnings
from dataclasses import dataclass, field
from itertools import chain
from typing import Optional

import datasets
import evaluate
import torch
from datasets import load_dataset

import transformers
from transformers import (
    CONFIG_MAPPING,
    MODEL_FOR_CAUSAL_LM_MAPPING,
    AutoConfig,
    AutoModelForCausalLM,
    AutoTokenizer,
    HfArgumentParser,
    Trainer,
    TrainingArguments,
    default_data_collator,
    is_torch_tpu_available,
    set_seed,
)
from transformers.testing_utils import CaptureLogger
from transformers.trainer_utils import get_last_checkpoint
from transformers.utils import check_min_version, send_example_telemetry
from transformers.utils.versions import require_version

# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
check_min_version("4.37.0.dev0")

require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/language-modeling/requirements.txt")

logger = logging.getLogger(__name__)

MODEL_CONFIG_CLASSES = list(MODEL_FOR_CAUSAL_LM_MAPPING.keys())
MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES)


@dataclass
class ModelArguments:
    """
    Arguments pertaining to which model/config/tokenizer we are going to fine-tune, or train from scratch.
    """

    model_name_or_path: Optional[str] = field(
        default=None,
        metadata={
            "help": (
                "The model checkpoint for weights initialization. Don't set if you want to train a model from scratch."
            )
        },
    )
    model_type: Optional[str] = field(
        default=None,
        metadata={"help": "If training from scratch, pass a model type from the list: " + ", ".join(MODEL_TYPES)},
    )
    config_overrides: Optional[str] = field(
        default=None,
        metadata={
            "help": (
                "Override some existing default config settings when a model is trained from scratch. Example: "
                "n_embd=10,resid_pdrop=0.2,scale_attn_weights=false,summary_type=cls_index"
            )
        },
    )
    config_name: Optional[str] = field(
        default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
    )
    tokenizer_name: Optional[str] = field(
        default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
    )
    cache_dir: Optional[str] = field(
        default=None,
        metadata={"help": "Where do you want to store the pretrained models downloaded from huggingface.co"},
    )
    use_fast_tokenizer: bool = field(
        default=True,
        metadata={"help": "Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."},
    )
    model_revision: str = field(
        default="main",
        metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."},
    )
    token: str = field(
        default=None,
        metadata={
            "help": (
                "The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
                "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
            )
        },
    )
    use_auth_token: bool = field(
        default=None,
        metadata={
            "help": "The `use_auth_token` argument is deprecated and will be removed in v4.34. Please use `token` instead."
        },
    )
    trust_remote_code: bool = field(
        default=False,
        metadata={
            "help": (
                "Whether or not to allow for custom models defined on the Hub in their own modeling files. This option"
                "should only be set to `True` for repositories you trust and in which you have read the code, as it will "
                "execute code present on the Hub on your local machine."
            )
        },
    )
    torch_dtype: Optional[str] = field(
        default=None,
        metadata={
            "help": (
                "Override the default `torch.dtype` and load the model under this dtype. If `auto` is passed, the "
                "dtype will be automatically derived from the model's weights."
            ),
            "choices": ["auto", "bfloat16", "float16", "float32"],
        },
    )
    low_cpu_mem_usage: bool = field(
        default=False,
        metadata={
            "help": (
                "It is an option to create the model as an empty shell, then only materialize its parameters when the pretrained weights are loaded. "
                "set True will benefit LLM loading time and RAM consumption."
            )
        },
    )

    def __post_init__(self):
        if self.config_overrides is not None and (self.config_name is not None or self.model_name_or_path is not None):
            raise ValueError(
                "--config_overrides can't be used in combination with --config_name or --model_name_or_path"
            )


@dataclass
class DataTrainingArguments:
    """
    Arguments pertaining to what data we are going to input our model for training and eval.
    """

    dataset_name: Optional[str] = field(
        default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."}
    )
    dataset_config_name: Optional[str] = field(
        default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}
    )
    train_file: Optional[str] = field(default=None, metadata={"help": "The input training data file (a text file)."})
    validation_file: Optional[str] = field(
        default=None,
        metadata={"help": "An optional input evaluation data file to evaluate the perplexity on (a text file)."},
    )
    max_train_samples: Optional[int] = field(
        default=None,
        metadata={
            "help": (
                "For debugging purposes or quicker training, truncate the number of training examples to this "
                "value if set."
            )
        },
    )
    max_eval_samples: Optional[int] = field(
        default=None,
        metadata={
            "help": (
                "For debugging purposes or quicker training, truncate the number of evaluation examples to this "
                "value if set."
            )
        },
    )
    streaming: bool = field(default=False, metadata={"help": "Enable streaming mode"})
    block_size: Optional[int] = field(
        default=None,
        metadata={
            "help": (
                "Optional input sequence length after tokenization. "
                "The training dataset will be truncated in block of this size for training. "
                "Default to the model max input length for single sentence inputs (take into account special tokens)."
            )
        },
    )
    overwrite_cache: bool = field(
        default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
    )
    validation_split_percentage: Optional[int] = field(
        default=5,
        metadata={
            "help": "The percentage of the train set used as validation set in case there's no validation split"
        },
    )
    preprocessing_num_workers: Optional[int] = field(
        default=None,
        metadata={"help": "The number of processes to use for the preprocessing."},
    )
    keep_linebreaks: bool = field(
        default=True, metadata={"help": "Whether to keep line breaks when using TXT files or not."}
    )

    def __post_init__(self):
        if self.streaming:
            require_version("datasets>=2.0.0", "The streaming feature requires `datasets>=2.0.0`")

        if self.dataset_name is None and self.train_file is None and self.validation_file is None:
            raise ValueError("Need either a dataset name or a training/validation file.")
        else:
            if self.train_file is not None:
                extension = self.train_file.split(".")[-1]
                assert extension in ["csv", "json", "txt"], "`train_file` should be a csv, a json or a txt file."
            if self.validation_file is not None:
                extension = self.validation_file.split(".")[-1]
                assert extension in ["csv", "json", "txt"], "`validation_file` should be a csv, a json or a txt file."


def main():
    # See all possible arguments in src/transformers/training_args.py
    # or by passing the --help flag to this script.
    # We now keep distinct sets of args, for a cleaner separation of concerns.

    parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
    if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
        # If we pass only one argument to the script and it's the path to a json file,
        # let's parse it to get our arguments.
        model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
    else:
        model_args, data_args, training_args = parser.parse_args_into_dataclasses()

    if model_args.use_auth_token is not None:
        warnings.warn(
            "The `use_auth_token` argument is deprecated and will be removed in v4.34. Please use `token` instead.",
            FutureWarning,
        )
        if model_args.token is not None:
            raise ValueError("`token` and `use_auth_token` are both specified. Please set only the argument `token`.")
        model_args.token = model_args.use_auth_token

    # Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. The
    # information sent is the one passed as arguments along with your Python/PyTorch versions.
    send_example_telemetry("run_clm", model_args, data_args)

    # Setup logging
    logging.basicConfig(
        format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
        datefmt="%m/%d/%Y %H:%M:%S",
        handlers=[logging.StreamHandler(sys.stdout)],
    )

    if training_args.should_log:
        # The default of training_args.log_level is passive, so we set log level at info here to have that default.
        transformers.utils.logging.set_verbosity_info()

    log_level = training_args.get_process_log_level()
    logger.setLevel(log_level)
    datasets.utils.logging.set_verbosity(log_level)
    transformers.utils.logging.set_verbosity(log_level)
    transformers.utils.logging.enable_default_handler()
    transformers.utils.logging.enable_explicit_format()

    # Log on each process the small summary:
    logger.warning(
        f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}, "
        + f"distributed training: {training_args.parallel_mode.value == 'distributed'}, 16-bits training: {training_args.fp16}"
    )
    logger.info(f"Training/evaluation parameters {training_args}")

    # Detecting last checkpoint.
    last_checkpoint = None
    if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:
        last_checkpoint = get_last_checkpoint(training_args.output_dir)
        if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:
            raise ValueError(
                f"Output directory ({training_args.output_dir}) already exists and is not empty. "
                "Use --overwrite_output_dir to overcome."
            )
        elif last_checkpoint is not None and training_args.resume_from_checkpoint is None:
            logger.info(
                f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change "
                "the `--output_dir` or add `--overwrite_output_dir` to train from scratch."
            )

    # Set seed before initializing model.
    set_seed(training_args.seed)

    # Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below)
    # or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/
    # (the dataset will be downloaded automatically from the datasets Hub).
    #
    # For CSV/JSON files, this script will use the column called 'text' or the first column if no column called
    # 'text' is found. You can easily tweak this behavior (see below).
    #
    # In distributed training, the load_dataset function guarantee that only one local process can concurrently
    # download the dataset.
    if data_args.dataset_name is not None:
        # Downloading and loading a dataset from the hub.
        raw_datasets = load_dataset(
            data_args.dataset_name,
            data_args.dataset_config_name,
            cache_dir=model_args.cache_dir,
            token=model_args.token,
            streaming=data_args.streaming,
        )
        if "validation" not in raw_datasets.keys():
            raw_datasets["validation"] = load_dataset(
                data_args.dataset_name,
                data_args.dataset_config_name,
                split=f"train[:{data_args.validation_split_percentage}%]",
                cache_dir=model_args.cache_dir,
                token=model_args.token,
                streaming=data_args.streaming,
            )
            raw_datasets["train"] = load_dataset(
                data_args.dataset_name,
                data_args.dataset_config_name,
                split=f"train[{data_args.validation_split_percentage}%:]",
                cache_dir=model_args.cache_dir,
                token=model_args.token,
                streaming=data_args.streaming,
            )
    else:
        data_files = {}
        dataset_args = {}
        if data_args.train_file is not None:
            data_files["train"] = data_args.train_file
        if data_args.validation_file is not None:
            data_files["validation"] = data_args.validation_file
        extension = (
            data_args.train_file.split(".")[-1]
            if data_args.train_file is not None
            else data_args.validation_file.split(".")[-1]
        )
        if extension == "txt":
            extension = "text"
            dataset_args["keep_linebreaks"] = data_args.keep_linebreaks
        raw_datasets = load_dataset(
            extension,
            data_files=data_files,
            cache_dir=model_args.cache_dir,
            token=model_args.token,
            **dataset_args,
        )
        # If no validation data is there, validation_split_percentage will be used to divide the dataset.
        if "validation" not in raw_datasets.keys():
            raw_datasets["validation"] = load_dataset(
                extension,
                data_files=data_files,
                split=f"train[:{data_args.validation_split_percentage}%]",
                cache_dir=model_args.cache_dir,
                token=model_args.token,
                **dataset_args,
            )
            raw_datasets["train"] = load_dataset(
                extension,
                data_files=data_files,
                split=f"train[{data_args.validation_split_percentage}%:]",
                cache_dir=model_args.cache_dir,
                token=model_args.token,
                **dataset_args,
            )

    # See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at
    # https://huggingface.co/docs/datasets/loading_datasets.

    # Load pretrained model and tokenizer
    #
    # Distributed training:
    # The .from_pretrained methods guarantee that only one local process can concurrently
    # download model & vocab.

    config_kwargs = {
        "cache_dir": model_args.cache_dir,
        "revision": model_args.model_revision,
        "token": model_args.token,
        "trust_remote_code": model_args.trust_remote_code,
    }
    if model_args.config_name:
        config = AutoConfig.from_pretrained(model_args.config_name, **config_kwargs)
    elif model_args.model_name_or_path:
        config = AutoConfig.from_pretrained(model_args.model_name_or_path, **config_kwargs)
    else:
        config = CONFIG_MAPPING[model_args.model_type]()
        logger.warning("You are instantiating a new config instance from scratch.")
        if model_args.config_overrides is not None:
            logger.info(f"Overriding config: {model_args.config_overrides}")
            config.update_from_string(model_args.config_overrides)
            logger.info(f"New config: {config}")

    tokenizer_kwargs = {
        "cache_dir": model_args.cache_dir,
        "use_fast": model_args.use_fast_tokenizer,
        "revision": model_args.model_revision,
        "token": model_args.token,
        "trust_remote_code": model_args.trust_remote_code,
    }
    if model_args.tokenizer_name:
        tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, **tokenizer_kwargs)
    elif model_args.model_name_or_path:
        tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, **tokenizer_kwargs)
    else:
        raise ValueError(
            "You are instantiating a new tokenizer from scratch. This is not supported by this script. "
            "You can do it from another script, save it, and load it from here, using --tokenizer_name."
        )

    if model_args.model_name_or_path:
        torch_dtype = (
            model_args.torch_dtype
            if model_args.torch_dtype in ["auto", None]
            else getattr(torch, model_args.torch_dtype)
        )
        model = AutoModelForCausalLM.from_pretrained(
            model_args.model_name_or_path,
            from_tf=bool(".ckpt" in model_args.model_name_or_path),
            config=config,
            cache_dir=model_args.cache_dir,
            revision=model_args.model_revision,
            token=model_args.token,
            trust_remote_code=model_args.trust_remote_code,
            torch_dtype=torch_dtype,
            low_cpu_mem_usage=model_args.low_cpu_mem_usage,
        )
    else:
        model = AutoModelForCausalLM.from_config(config, trust_remote_code=model_args.trust_remote_code)
        n_params = sum({p.data_ptr(): p.numel() for p in model.parameters()}.values())
        logger.info(f"Training new model from scratch - Total size={n_params/2**20:.2f}M params")

    # We resize the embeddings only when necessary to avoid index errors. If you are creating a model from scratch
    # on a small vocab and want a smaller embedding size, remove this test.
    embedding_size = model.get_input_embeddings().weight.shape[0]
    if len(tokenizer) > embedding_size:
        model.resize_token_embeddings(len(tokenizer))

    # Preprocessing the datasets.
    # First we tokenize all the texts.
    if training_args.do_train:
        column_names = list(raw_datasets["train"].features)
    else:
        column_names = list(raw_datasets["validation"].features)
    # text_column_name = "text" if "text" in column_names else column_names[0]
    text_column_name = "content"

    # since this will be pickled to avoid _LazyModule error in Hasher force logger loading before tokenize_function
    tok_logger = transformers.utils.logging.get_logger("transformers.tokenization_utils_base")

    def tokenize_function(examples):
        with CaptureLogger(tok_logger) as cl:
            output = tokenizer(examples[text_column_name])
        # clm input could be much much longer than block_size
        if "Token indices sequence length is longer than the" in cl.out:
            tok_logger.warning(
                "^^^^^^^^^^^^^^^^ Please ignore the warning above - this long input will be chunked into smaller bits"
                " before being passed to the model."
            )
        return output

    with training_args.main_process_first(desc="dataset map tokenization"):
        if not data_args.streaming:
            tokenized_datasets = raw_datasets.map(
                tokenize_function,
                batched=True,
                num_proc=data_args.preprocessing_num_workers,
                remove_columns=column_names,
                load_from_cache_file=not data_args.overwrite_cache,
                desc="Running tokenizer on dataset",
            )
        else:
            tokenized_datasets = raw_datasets.map(
                tokenize_function,
                batched=True,
                remove_columns=column_names,
            )

    if hasattr(config, "max_position_embeddings"):
        max_pos_embeddings = config.max_position_embeddings
    else:
        # Define a default value if the attribute is missing in the config.
        max_pos_embeddings = 1024

    if data_args.block_size is None:
        block_size = tokenizer.model_max_length
        if block_size > max_pos_embeddings:
            logger.warning(
                f"The tokenizer picked seems to have a very large `model_max_length` ({tokenizer.model_max_length}). "
                f"Using block_size={min(1024, max_pos_embeddings)} instead. You can change that default value by passing --block_size xxx."
            )
            if max_pos_embeddings > 0:
                block_size = min(1024, max_pos_embeddings)
            else:
                block_size = 1024
    else:
        if data_args.block_size > tokenizer.model_max_length:
            logger.warning(
                f"The block_size passed ({data_args.block_size}) is larger than the maximum length for the model "
                f"({tokenizer.model_max_length}). Using block_size={tokenizer.model_max_length}."
            )
        block_size = min(data_args.block_size, tokenizer.model_max_length)

    # Main data processing function that will concatenate all texts from our dataset and generate chunks of block_size.
    def group_texts(examples):
        # Concatenate all texts.
        concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}
        total_length = len(concatenated_examples[list(examples.keys())[0]])
        # We drop the small remainder, and if the total_length < block_size we exclude this batch and return an empty dict.
        # We could add padding if the model supported it instead of this drop, you can customize this part to your needs.
        total_length = (total_length // block_size) * block_size
        # Split by chunks of max_len.
        result = {
            k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
            for k, t in concatenated_examples.items()
        }
        result["labels"] = result["input_ids"].copy()
        return result

    # Note that with `batched=True`, this map processes 1,000 texts together, so group_texts throws away a remainder
    # for each of those groups of 1,000 texts. You can adjust that batch_size here but a higher value might be slower
    # to preprocess.
    #
    # To speed up this part, we use multiprocessing. See the documentation of the map method for more information:
    # https://huggingface.co/docs/datasets/process#map

    with training_args.main_process_first(desc="grouping texts together"):
        if not data_args.streaming:
            lm_datasets = tokenized_datasets.map(
                group_texts,
                batched=True,
                num_proc=data_args.preprocessing_num_workers,
                load_from_cache_file=not data_args.overwrite_cache,
                desc=f"Grouping texts in chunks of {block_size}",
            )
        else:
            lm_datasets = tokenized_datasets.map(
                group_texts,
                batched=True,
            )

    if training_args.do_train:
        if "train" not in tokenized_datasets:
            raise ValueError("--do_train requires a train dataset")
        train_dataset = lm_datasets["train"]
        if data_args.max_train_samples is not None:
            max_train_samples = min(len(train_dataset), data_args.max_train_samples)
            train_dataset = train_dataset.select(range(max_train_samples))

    if training_args.do_eval:
        if "validation" not in tokenized_datasets:
            raise ValueError("--do_eval requires a validation dataset")
        eval_dataset = lm_datasets["validation"]
        if data_args.max_eval_samples is not None:
            max_eval_samples = min(len(eval_dataset), data_args.max_eval_samples)
            eval_dataset = eval_dataset.select(range(max_eval_samples))

        def preprocess_logits_for_metrics(logits, labels):
            if isinstance(logits, tuple):
                # Depending on the model and config, logits may contain extra tensors,
                # like past_key_values, but logits always come first
                logits = logits[0]
            return logits.argmax(dim=-1)

        metric = evaluate.load("accuracy")

        def compute_metrics(eval_preds):
            preds, labels = eval_preds
            # preds have the same shape as the labels, after the argmax(-1) has been calculated
            # by preprocess_logits_for_metrics but we need to shift the labels
            labels = labels[:, 1:].reshape(-1)
            preds = preds[:, :-1].reshape(-1)
            return metric.compute(predictions=preds, references=labels)

    # Initialize our Trainer
    trainer = Trainer(
        model=model,
        args=training_args,
        train_dataset=train_dataset if training_args.do_train else None,
        eval_dataset=eval_dataset if training_args.do_eval else None,
        tokenizer=tokenizer,
        # Data collator will default to DataCollatorWithPadding, so we change it.
        data_collator=default_data_collator,
        compute_metrics=compute_metrics if training_args.do_eval and not is_torch_tpu_available() else None,
        preprocess_logits_for_metrics=preprocess_logits_for_metrics
        if training_args.do_eval and not is_torch_tpu_available()
        else None,
    )

    # Training
    if training_args.do_train:
        checkpoint = None
        if training_args.resume_from_checkpoint is not None:
            checkpoint = training_args.resume_from_checkpoint
        elif last_checkpoint is not None:
            checkpoint = last_checkpoint
        train_result = trainer.train(resume_from_checkpoint=checkpoint)
        trainer.save_model()  # Saves the tokenizer too for easy upload

        metrics = train_result.metrics
        max_train_samples = (
            data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset)
        )
        metrics["train_samples"] = min(max_train_samples, len(train_dataset))

        trainer.log_metrics("train", metrics)
        trainer.save_metrics("train", metrics)
        trainer.save_state()

    # Evaluation
    if training_args.do_eval:
        logger.info("*** Evaluate ***")

        metrics = trainer.evaluate()

        max_eval_samples = data_args.max_eval_samples if data_args.max_eval_samples is not None else len(eval_dataset)
        metrics["eval_samples"] = min(max_eval_samples, len(eval_dataset))
        try:
            perplexity = math.exp(metrics["eval_loss"])
        except OverflowError:
            perplexity = float("inf")
        metrics["perplexity"] = perplexity

        trainer.log_metrics("eval", metrics)
        trainer.save_metrics("eval", metrics)

    kwargs = {"finetuned_from": model_args.model_name_or_path, "tasks": "text-generation"}
    if data_args.dataset_name is not None:
        kwargs["dataset_tags"] = data_args.dataset_name
        if data_args.dataset_config_name is not None:
            kwargs["dataset_args"] = data_args.dataset_config_name
            kwargs["dataset"] = f"{data_args.dataset_name} {data_args.dataset_config_name}"
        else:
            kwargs["dataset"] = data_args.dataset_name

    if training_args.push_to_hub:
        trainer.push_to_hub(**kwargs)
    else:
        trainer.create_model_card(**kwargs)


def _mp_fn(index):
    # For xla_spawn (TPUs)
    main()


if __name__ == "__main__":
    main()


"""
Launch command used for this post:

python run_clm.py \
    --train_file /tmp/pycharm_project_806/LCSTS_new/train.json \
    --tokenizer_name /home/chenjq/pythonWork/nlp/train_new_gpt2/my-tok \
    --model_type gpt2 \
    --num_train_epochs 2 \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --do_train \
    --output_dir ./tmp/test-clm

Other paths and flags used during experiments:
/tmp/pycharm_project_806/LCSTS_new/train.json
/tmp/pycharm_project_806/cluener.json
--gradient_accumulation_steps 8 \
--max_train_samples 1000
"""

Training code reference:

https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/README.md

Results comparison

Inference code

from transformers import GPT2Tokenizer, GPT2LMHeadModel, set_seed

set_seed(42)

# model_path = '/tmp/pycharm_project_806/tmp/test-clm/checkpoint-5500'
model_path = "/home/chenjq/model/gpt2"
tokenizer = GPT2Tokenizer.from_pretrained(model_path)

# add the EOS token as PAD token to avoid warnings
model = GPT2LMHeadModel.from_pretrained(model_path, pad_token_id=tokenizer.eos_token_id)

# encode context the generation is conditioned on
input_ids = tokenizer.encode('美国', return_tensors='pt')

# generate text until the output length (which includes the context length) reaches 50
greedy_output = model.generate(input_ids, max_length=50)
print("Output:\n" + 100 * '-')
print(tokenizer.decode(greedy_output[0], skip_special_tokens=True))

# activate beam search and early_stopping
beam_output = model.generate(
    input_ids,
    max_length=50,
    num_beams=5,
    early_stopping=True
)
print("Output:\n" + 100 * '-')
print(tokenizer.decode(beam_output[0], skip_special_tokens=True))

# set no_repeat_ngram_size to 2
beam_output = model.generate(
    input_ids,
    max_length=50,
    num_beams=5,
    no_repeat_ngram_size=2,
    early_stopping=True
)
print("Output:\n" + 100 * '-')
print(tokenizer.decode(beam_output[0], skip_special_tokens=True))

# set num_return_sequences > 1
beam_outputs = model.generate(
    input_ids,
    max_length=50,
    num_beams=5,
    no_repeat_ngram_size=2,
    num_return_sequences=5,
    early_stopping=True
)
# now we have 5 output sequences
print("Output:\n" + 100 * '-')
for i, beam_output in enumerate(beam_outputs):
    print("{}: {}".format(i, tokenizer.decode(beam_output, skip_special_tokens=True)))

# activate sampling and deactivate top_k by setting top_k sampling to 0
sample_output = model.generate(
    input_ids,
    do_sample=True,
    max_length=50,
    top_k=0
)
print("Output:\n" + 100 * '-')
print(tokenizer.decode(sample_output[0], skip_special_tokens=True))

# use temperature to decrease the sensitivity to low probability candidates
sample_output = model.generate(
    input_ids,
    do_sample=True,
    max_length=50,
    top_k=0,
    temperature=0.7
)
print("Output:\n" + 100 * '-')
print(tokenizer.decode(sample_output[0], skip_special_tokens=True))

# set top_k to 50
sample_output = model.generate(
    input_ids,
    do_sample=True,
    max_length=50,
    top_k=50
)
print("top_k Output:\n" + 100 * '-')
print(tokenizer.decode(sample_output[0], skip_special_tokens=True))

# deactivate top_k sampling and sample only from 92% most likely words
sample_output = model.generate(
    input_ids,
    do_sample=True,
    max_length=50,
    top_p=0.92,
    top_k=0
)
print("top_p Output:\n" + 100 * '-')
print(tokenizer.decode(sample_output[0], skip_special_tokens=True))

Original GPT-2

Self-trained GPT-2 (BPE tokenizer)

Self-trained GPT-2 (SentencePiece tokenizer)

Output when samples were not wrapped in bos and eos during training

(This one even breaks the rules, good grief.)

Output when samples were wrapped in bos and eos

Conclusions

1. Training used the LCSTS dataset. LCSTS_new is an upgraded version of LCSTS, the most widely used Chinese short-summary dataset, with clear improvements in both data volume and quality; when summarizing and condensing information, factual consistency with the source text needs particular attention. A record looks like this:

{
    "id": 6,
    "summary": "中国游客大增多国放宽签证",
    "content": "①北京和上海户籍的游客可获得韩国多次签证;②“整容客”可以不经由韩国使领馆、直接在网上申请签证;③中泰免签的实施日期尚未敲定;④越南已向中国持通行证旅游的公民全面开放。"
}

2. Judging from the generated text, the self-trained models are better than the original GPT-2.

3. The training data is roughly 500 MB of short news texts, which lacks diversity. It is worth adding more diverse data and longer texts.

Notes on GPT-2 training

1. In the pretraining stage, what should a batch of input data look like?

a. One option is to concatenate all texts into one long sequence and then cut it into samples of a fixed length such as 1024, as in the following code:

def group_texts(examples):
    # Concatenate all texts.
    concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}
    total_length = len(concatenated_examples[list(examples.keys())[0]])
    # We drop the small remainder, and if the total_length < block_size we exclude this batch and return an empty dict.
    # We could add padding if the model supported it instead of this drop, you can customize this part to your needs.
    total_length = (total_length // block_size) * block_size
    # Split by chunks of max_len.
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated_examples.items()
    }
    result["labels"] = result["input_ids"].copy()
    return result

b. Another option is to wrap each passage with bos and eos tokens (the SentencePiece tokenizer above already does this), concatenate everything into one long sequence, and then split it as in option a. (This post uses this option; in my experiments it works better than option a and the generated text is more coherent.)

c. A third option is not to concatenate at all, but to pad each sample (on the right) with the eos token to form the batch input. (Not tried here.)

2. If I want to train a text-generation model, what should the input look like?

Build the data as described in 1.b; a minimal sketch is given below.
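The following is a minimal sketch of option 1.b (not from the original post), assuming the ./sp-tok-v4 tokenizer trained above, whose post-processor already wraps each text in <bos> ... <eos>:

from itertools import chain
from transformers import AutoTokenizer

block_size = 1024
tokenizer = AutoTokenizer.from_pretrained("./sp-tok-v4")  # its post-processor adds <bos> ... <eos>

texts = [
    "北京是中国的首都,今天天气真好。",
    "①北京和上海户籍的游客可获得韩国多次签证。",
]

# Each text becomes <bos> ... <eos> (done by the post-processor); all token ids are then concatenated into one stream.
ids = list(chain(*(tokenizer(t)["input_ids"] for t in texts)))

# Cut the stream into fixed-length blocks and drop the remainder, exactly as group_texts does;
# with a real corpus there will be many such blocks.
total_length = (len(ids) // block_size) * block_size
samples = [
    {"input_ids": ids[i : i + block_size], "labels": ids[i : i + block_size]}
    for i in range(0, total_length, block_size)
]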

3. Why can't GPT-2 use left padding?

Because GPT-2 uses absolute position embeddings. With left padding, the real tokens may no longer start at the first position during training, whereas at inference time the prompt always starts at the first position. (This is also why GPT-2 generally does not support batched inference.) A small illustration follows.
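This is a sketch of the position mismatch, assuming the original gpt2 tokenizer path used above; without explicit position_ids, GPT-2 simply numbers positions 0, 1, 2, ... across the padded sequence:

import torch
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("/home/chenjq/model/gpt2")
tokenizer.pad_token = tokenizer.eos_token

texts = ["美国", "北京是中国的首都,今天天气真好。"]
for side in ["right", "left"]:
    tokenizer.padding_side = side
    batch = tokenizer(texts, padding=True, return_tensors="pt")
    # Default GPT-2 position ids ignore the attention mask and just count from 0.
    positions = torch.arange(batch["input_ids"].shape[1]).expand_as(batch["input_ids"])
    # With right padding every sample's real tokens still start at position 0, as during training;
    # with left padding the shorter sample's real tokens are pushed to later positions.
    print(side, batch["input_ids"][0].tolist(), positions[0].tolist())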

4. GPT-2 has no pad token. The usual practice is to set the pad token to the eos token and, at the same time, set the labels at the padded positions to -100 (positions labeled -100 are excluded from the loss). A short sketch follows.
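A hedged sketch of this point in the style of option 1.c (pad with eos and mask the padded labels), again assuming the ./sp-tok-v4 tokenizer from above:

import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./sp-tok-v4")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 style models have no dedicated pad token

texts = ["北京是中国的首都。", "①北京和上海户籍的游客可获得韩国多次签证,今天天气真好。"]
batch = tokenizer(texts, padding=True, return_tensors="pt")

labels = batch["input_ids"].clone()
# Positions where attention_mask == 0 are padding; -100 makes the loss ignore them.
labels[batch["attention_mask"] == 0] = -100
batch["labels"] = labels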

Code

https://github.com/minmie/gpt2-pretrain

Pretrained model

https://download.csdn.net/download/u014403221/88794676

Data used in this experiment

https://download.csdn.net/download/u014403221/88755559

 https://download.csdn.net/download/u014403221/88761912
