
datawhale course "Getting Started with Transformers", Notes 7: Sequence Labeling with Transformers

Sequence Labeling with Transformers

This post is largely based on chapter 4.2 of the datawhale transformers tutorial; study notes by 天国之影.

1. Introduction to sequence labeling tasks

  • Sequence labeling can be viewed as a token-level classification problem: we predict a label for every token in the text.
  • Token-level classification tasks include:
    1. NER (Named-entity recognition): identify the entities in the text (person names, organization names, location names, ...)
    2. POS (Part-of-speech tagging): tag each token with its part of speech according to the grammar (noun, verb, adjective, ...)
    3. Chunking: group the tokens that belong to the same phrase into chunks

  As long as the pretrained transformer model has a token-classification head on top (for example the BertForTokenClassification mentioned in the previous chapter; the corresponding checkpoint also needs to provide a fast tokenizer, see the table in the docs), this notebook can in principle use any of the transformer models on the model hub to solve any token-level classification task.

  If your task is slightly different, you can most likely still use this notebook with only minor changes. You should also adjust the batch size used for fine-tuning according to your GPU memory to avoid running out of memory.

# Choose the classification task
task = "ner"
# Choose the pretrained checkpoint
model_checkpoint = "distilbert-base-uncased"
# Adjust the batch size to your GPU memory to avoid OOM errors
batch_size = 16

2. Loading the data

# Load the dataset and the evaluation metric
from datasets import load_dataset, load_metric

  This post uses the CoNLL-2003 dataset, but the same code can handle any token classification dataset in the Datasets library. If you want to load your own json/csv dataset instead, see the Datasets documentation on loading datasets; a custom dataset may need some adjustments to the column names it is loaded with.
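
For reference, loading local files instead looks roughly like this (a minimal sketch; the "json" loader and the file names are placeholders for illustration, not files used in this tutorial):

# Load a custom dataset from local json files instead of the Hub
custom_datasets = load_dataset(
    "json",
    data_files={"train": "my_train.json", "validation": "my_dev.json"},
)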

# Load the conll2003 dataset
datasets = load_dataset("conll2003")
Reusing dataset conll2003 (C:\Users\hurui\.cache\huggingface\datasets\conll2003\conll2003\1.0.0\40e7cb6bcc374f7c349c83acd1e9352a4f09474eb691f64f364ee62eb65d0ca6)
datasets

The datasets object itself is a DatasetDict; each split can be accessed with the corresponding key.

    DatasetDict({
        train: Dataset({
            features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],
            num_rows: 14041
        })
        validation: Dataset({
            features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],
            num_rows: 3250
        })
        test: Dataset({
            features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],
            num_rows: 3453
        })
    })
The *_tags columns are the label columns: they hold the annotations for the tokens column.
# Look at the first example in the training set
datasets["train"][0]
    {'id': '0',
     'tokens': ['EU',
      'rejects',
      'German',
      'call',
      'to',
      'boycott',
      'British',
      'lamb',
      '.'],
     'pos_tags': [22, 42, 16, 21, 35, 37, 16, 21, 7],
     'chunk_tags': [11, 21, 11, 12, 21, 22, 11, 12, 0],
     'ner_tags': [3, 0, 7, 0, 0, 0, 7, 0, 0]}

  All the labels have already been encoded as integers, so they can be consumed directly by a pretrained transformer model. The actual classes these integer codes correspond to are stored in the features attribute.

# Inspect the features attribute
datasets["train"].features["ner_tags"]
Sequence(feature=ClassLabel(num_classes=9, names=['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-MISC', 'I-MISC'], names_file=None, id=None), length=-1, id=None)

  Taking NER as an example, 0 corresponds to the label "O", 1 to "B-PER", and so on. The labels have the following meanings:

  • PER: person
  • ORG: organization
  • LOC: location
  • MISC: miscellaneous
  • O: no special entity
  • B-*: token at the beginning of an entity
  • I-*: token inside an entity
label_list = datasets["train"].features[f"{task}_tags"].feature.names
label_list
['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-MISC', 'I-MISC']
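
As a quick check, the integer tags of an example can be mapped back to their string names with label_list (a minimal sketch using the first training example shown earlier):

sample = datasets["train"][0]
# Pair every token with the string name of its integer ner_tag
print(list(zip(sample["tokens"], [label_list[tag] for tag in sample["ner_tags"]])))
# [('EU', 'B-ORG'), ('rejects', 'O'), ('German', 'B-MISC'), ('call', 'O'), ...]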

The function below picks a few random examples from the dataset and displays them:

from datasets import ClassLabel, Sequence
import random
import pandas as pd
from IPython.display import display, HTML

def show_random_elements(dataset, num_examples=10):
    """从数据集中随机选择几条数据"""
    assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
    picks = []
    for _ in range(num_examples):
        pick = random.randint(0, len(dataset)-1)
        while pick in picks:
            pick = random.randint(0, len(dataset)-1)
        picks.append(pick)
    
    df = pd.DataFrame(dataset[picks])
    for column, typ in dataset.features.items():
        if isinstance(typ, ClassLabel):
            df[column] = df[column].transform(lambda i: typ.names[i])
        elif isinstance(typ, Sequence) and isinstance(typ.feature, ClassLabel):
            df[column] = df[column].transform(lambda x: [typ.feature.names[i] for i in x])
    display(HTML(df.to_html()))
show_random_elements(datasets["train"])
| | id | tokens | pos_tags | chunk_tags | ner_tags |
|---|---|---|---|---|---|
| 0 | 4143 | [The, 85-year-old, nun, said, in, the, past, that, she, was, praying, for, the, couple, ,, whose, divorce, is, expected, to, become, final, next, week, .] | [DT, JJ, NN, VBD, IN, DT, NN, IN, PRP, VBD, VBG, IN, DT, NN, ,, WP$, NN, VBZ, VBN, TO, VB, JJ, JJ, NN, .] | [B-NP, I-NP, I-NP, B-VP, B-PP, B-NP, I-NP, B-SBAR, B-NP, B-VP, I-VP, B-PP, B-NP, I-NP, O, B-NP, I-NP, B-VP, I-VP, I-VP, I-VP, B-NP, I-NP, I-NP, O] | [O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O] |
| 1 | 2442 | [2., Marie-Jose, Perec, (, France, ), 49.72] | [CD, NNP, NNP, (, NNP, ), CD] | [B-NP, I-NP, I-NP, O, B-NP, O, B-NP] | [O, B-PER, I-PER, O, B-LOC, O, O] |
| 2 | 1090 | [There, were, no, significant, differences, between, the, groups, receiving, garlic, and, placebo, ,, ", they, wrote, in, the, Journal, of, the, Royal, College, of, Physicians, .] | [EX, VBD, DT, JJ, NNS, IN, DT, NNS, VBG, NN, CC, NN, ,, ", PRP, VBD, IN, DT, NNP, IN, DT, NNP, NNP, IN, NNPS, .] | [B-NP, B-VP, B-NP, I-NP, I-NP, B-PP, B-NP, I-NP, B-VP, B-NP, I-NP, I-NP, O, O, B-NP, B-VP, B-PP, B-NP, I-NP, B-PP, B-NP, I-NP, I-NP, B-PP, B-NP, O] | [O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, B-ORG, I-ORG, I-ORG, I-ORG, I-ORG, I-ORG, I-ORG, O] |
| 3 | 1972 | [Pakistan, first, innings] | [NNP, RB, NN] | [B-NP, B-ADVP, B-NP] | [B-LOC, O, O] |
| 4 | 13714 | [The, Taiwan, dollar, closed, slightly, firmer, on, Thursday, amid, tight, Taiwan, dollar, liquidity, in, the, banking, system, ,, and, dealers, said, the, rate, was, likely, to, move, narrowly, in, the, near, term, .] | [DT, NNP, NN, VBD, RB, JJR, IN, NNP, IN, JJ, NNP, NN, NN, IN, DT, NN, NN, ,, CC, NNS, VBD, DT, NN, VBD, JJ, TO, VB, RB, IN, DT, JJ, NN, .] | [B-NP, I-NP, I-NP, B-VP, B-ADVP, B-ADJP, B-PP, B-NP, B-PP, B-NP, I-NP, I-NP, I-NP, B-PP, B-NP, I-NP, I-NP, O, O, B-NP, B-VP, B-NP, I-NP, B-VP, B-ADJP, B-VP, I-VP, I-VP, B-PP, B-NP, I-NP, I-NP, O] | [O, B-LOC, O, O, O, O, O, O, O, O, B-LOC, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O, O] |
| 5 | 4806 | [nine, of, the, superbike, world, championship, on, Sunday, :] | [CD, IN, DT, JJ, NN, NN, IN, NNP, :] | [B-NP, B-PP, B-NP, I-NP, I-NP, I-NP, B-PP, B-NP, O] | [O, O, O, O, O, O, O, O, O] |
| 6 | 7452 | [The, accident, happened, when, the, Sanchez, Zarraga, family, took, their, boat, out, for, a, nighttime, spin, ,, Civil, Defence, and, Coast, Guard, officials, said, .] | [DT, NN, VBD, WRB, DT, NNP, NNP, NN, VBD, PRP$, NN, RP, IN, DT, NN, NN, ,, NNP, NN, CC, NNP, NNP, NNS, VBD, .] | [B-NP, I-NP, B-VP, B-ADVP, B-NP, I-NP, I-NP, I-NP, B-VP, B-NP, I-NP, B-ADVP, B-PP, B-NP, I-NP, I-NP, O, B-NP, I-NP, O, B-NP, I-NP, I-NP, B-VP, O] | [O, O, O, O, O, B-PER, I-PER, O, O, O, O, O, O, O, O, O, O, B-ORG, I-ORG, I-ORG, I-ORG, I-ORG, O, O, O] |
| 7 | 2332 | [7., Julie, Baumann, (, Switzerland, ), 13.36] | [NNP, NNP, NNP, (, NNP, ), CD] | [B-NP, I-NP, I-NP, O, B-NP, O, B-NP] | [O, B-PER, I-PER, O, B-LOC, O, O] |
| 8 | 9786 | [The, pilot, said, several, hijackers, appeared, to, be, placed, around, the, plane, .] | [DT, NN, VBD, JJ, NNS, VBD, TO, VB, VBN, IN, DT, NN, .] | [B-NP, I-NP, B-VP, B-NP, I-NP, B-VP, I-VP, I-VP, I-VP, B-PP, B-NP, I-NP, O] | [O, O, O, O, O, O, O, O, O, O, O, O, O] |
| 9 | 3451 | [(, 7-4, ), 6-2] | [(, CD, ), CD] | [B-LST, B-NP, O, B-NP] | [O, O, O, O] |

3. Preprocessing the data

3.1 Preprocessing pipeline

  • Preprocessing tool: the tokenizer
  • Pipeline:
    1. Tokenize the input text to obtain tokens
    2. Convert the tokens into the token IDs expected by the pretrained model
    3. Convert the token IDs into the input format required by the model

  To do this preprocessing we instantiate our tokenizer with the AutoTokenizer.from_pretrained method, which guarantees that:

  • we get a tokenizer that corresponds one-to-one to the pretrained model;
  • when we use the tokenizer that matches the specified model checkpoint, the vocabulary the model needs (more precisely, the tokens vocabulary) is downloaded as well;
  • the downloaded tokens vocabulary is cached, so it will not be downloaded again the next time it is used.

3.2 Building the tokenizer for the model

from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

  The code below requires the tokenizer to be of type transformers.PreTrainedTokenizerFast, because the preprocessing relies on some special features of the fast tokenizers (for example multi-threaded tokenization). The big table of models shows which checkpoints come with a fast tokenizer.
  The tokenizer can preprocess a single text as well as a pair of texts, and its output already satisfies the input format expected by the pretrained model.

import transformers
# The model must come with a fast tokenizer
assert isinstance(tokenizer, transformers.PreTrainedTokenizerFast)
tokenizer("Hello, this is one sentence!")
{'input_ids': [101, 7592, 1010, 2023, 2003, 2028, 6251, 999, 102], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1]}
tokenizer(["Hello", ",", "this", "is", "one", "sentence", "split",
          "into", "words", "."], is_split_into_words=True)
{'input_ids': [101, 7592, 1010, 2023, 2003, 2028, 6251, 3975, 2046, 2616, 1012, 102], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}

Note:

  • the tokenizer of the pretrained transformer model will further split the already word-split input into subwords
example = datasets["train"][4]
print(example["tokens"])
['Germany', "'s", 'representative', 'to', 'the', 'European', 'Union', "'s", 'veterinary', 'committee', 'Werner', 'Zwingmann', 'said', 'on', 'Wednesday', 'consumers', 'should', 'buy', 'sheepmeat', 'from', 'countries', 'other', 'than', 'Britain', 'until', 'the', 'scientific', 'advice', 'was', 'clearer', '.']
tokenized_input = tokenizer(example["tokens"], is_split_into_words=True)
tokens = tokenizer.convert_ids_to_tokens(tokenized_input["input_ids"])
print(tokens)
['[CLS]', 'germany', "'", 's', 'representative', 'to', 'the', 'european', 'union', "'", 's', 'veterinary', 'committee', 'werner', 'z', '##wing', '##mann', 'said', 'on', 'wednesday', 'consumers', 'should', 'buy', 'sheep', '##me', '##at', 'from', 'countries', 'other', 'than', 'britain', 'until', 'the', 'scientific', 'advice', 'was', 'clearer', '.', '[SEP]']

单词"Zwingmann" 和 "sheepmeat"继续被切分成了3个subtokens

3.3 Aligning the labels with the subtokens

  Since annotations are usually produced at the word level, and words get split into subtokens, we still need to align the annotations with the subtokens. In addition, the input format of the pretrained model usually requires adding special tokens such as [CLS] and [SEP].

# Use the word_ids method to align subtokens with words
print(tokenized_input.word_ids())
[None, 0, 1, 1, 2, 3, 4, 5, 6, 7, 7, 8, 9, 10, 11, 11, 11, 12, 13, 14, 15, 16, 17, 18, 18, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, None]

  word_ids maps every subtoken position to the index of the word it came from. For example, position 1 corresponds to word 0, while positions 2 and 3 both correspond to word 1. Special tokens are mapped to None. With this list we can align the subtokens with the words and with the annotated labels.

# Get the word index of each subtoken
word_ids = tokenized_input.word_ids()

# Align the subtokens, the words and the annotated labels
aligned_labels = [
    -100 if i is None else example[f"{task}_tags"][i] for i in word_ids]

print(len(aligned_labels), len(tokenized_input["input_ids"]))

39 39

  We usually set the label of the special tokens to -100; -100 is typically ignored by the model and does not contribute to the loss.
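
For reference (a minimal sketch, not from the tutorial): -100 is the default ignore_index of PyTorch's cross-entropy loss, which is why these positions are skipped when the loss is computed.

import torch
import torch.nn.functional as F

logits = torch.randn(3, 9)              # 3 token positions, 9 NER classes
labels = torch.tensor([-100, 3, 0])     # the first position ([CLS]) is ignored
loss = F.cross_entropy(logits, labels)  # ignore_index defaults to -100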

There are two ways to align the labels:

  • align every subtoken of a word with that word's label;
  • align only the first subtoken of each word with the word's label, and give the remaining subtokens the label -100.

The label_all_tokens flag switches between the two strategies (True selects the first one).

3.4 Putting the preprocessing function together

Combining everything above into our preprocessing function. is_split_into_words=True was already introduced above.

label_all_tokens = True
def tokenize_and_align_labels(examples):
    tokenized_inputs = tokenizer(
        examples["tokens"], truncation=True, is_split_into_words=True)

    labels = []
    for i, label in enumerate(examples[f"{task}_tags"]):
        # Get the word index of each subtoken
        word_ids = tokenized_inputs.word_ids(batch_index=i)
        previous_word_idx = None
        label_ids = []
        
        # Iterate over the word indices of the subtokens
        for word_idx in word_ids:
            if word_idx is None:
                # Special tokens get the label -100
                label_ids.append(-100)
            # We set the label for the first token of each word.
            elif word_idx != previous_word_idx:
                label_ids.append(label[word_idx])
            # For the other tokens in a word, we set the label to either the current label or -100, depending on
            # the label_all_tokens flag.
            else:
                label_ids.append(label[word_idx] if label_all_tokens else -100)
            previous_word_idx = word_idx
        
        # Store the aligned labels for this example
        labels.append(label_ids)

    tokenized_inputs["labels"] = labels
    return tokenized_inputs

This preprocessing function can be applied to a single example or to several examples at once (in which case it returns a list with the preprocessed result of each example).

tokenize_and_align_labels(datasets['train'][:5])
{'input_ids': [[101, 7327, 19164, 2446, 2655, 2000, 17757, 2329, 12559, 1012, 102], [101, 2848, 13934, 102], [101, 9371, 2727, 1011, 5511, 1011, 2570, 102], [101, 1996, 2647, 3222, 2056, 2006, 9432, 2009, 18335, 2007, 2446, 6040, 2000, 10390, 2000, 18454, 2078, 2329, 12559, 2127, 6529, 5646, 3251, 5506, 11190, 4295, 2064, 2022, 11860, 2000, 8351, 1012, 102], [101, 2762, 1005, 1055, 4387, 2000, 1996, 2647, 2586, 1005, 1055, 15651, 2837, 14121, 1062, 9328, 5804, 2056, 2006, 9317, 10390, 2323, 4965, 8351, 4168, 4017, 2013, 3032, 2060, 2084, 3725, 2127, 1996, 4045, 6040, 2001, 24509, 1012, 102]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], 'labels': [[-100, 3, 0, 7, 0, 0, 0, 7, 0, 0, -100], [-100, 1, 2, -100], [-100, 5, 0, 0, 0, 0, 0, -100], [-100, 0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -100], [-100, 5, 0, 0, 0, 0, 0, 3, 4, 0, 0, 0, 0, 1, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0, 0, -100]]}

3.5 Preprocessing all the examples in datasets

  Use the map function to apply the preprocessing function to all the examples.

tokenized_datasets = datasets.map(tokenize_and_align_labels, batched=True)
Loading cached processed dataset at C:\Users\hurui\.cache\huggingface\datasets\conll2003\conll2003\1.0.0\40e7cb6bcc374f7c349c83acd1e9352a4f09474eb691f64f364ee62eb65d0ca6\cache-fa2382f441f8d16d.arrow
Loading cached processed dataset at C:\Users\hurui\.cache\huggingface\datasets\conll2003\conll2003\1.0.0\40e7cb6bcc374f7c349c83acd1e9352a4f09474eb691f64f364ee62eb65d0ca6\cache-8057d57320e0ee7a.arrow
Loading cached processed dataset at C:\Users\hurui\.cache\huggingface\datasets\conll2003\conll2003\1.0.0\40e7cb6bcc374f7c349c83acd1e9352a4f09474eb691f64f364ee62eb65d0ca6\cache-ea32e2b3f93b1edb.arrow

  The results are automatically cached so that the next call does not recompute them (but be aware that the cache can bite you if your inputs change!). The datasets library checks the arguments to decide whether anything has changed: if not, it uses the cached data; if so, it reprocesses. If the arguments stay the same but you still want to force reprocessing, it is best to bypass this cache by passing load_from_cache_file=False. Also, the batched=True argument used above passes batches of examples to the tokenizer, which lets the fast tokenizer process the inputs in parallel with multiple threads.
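
For example (a minimal sketch of forcing the map to re-run instead of reusing the cache):

# Re-run the preprocessing instead of loading the cached Arrow files
tokenized_datasets = datasets.map(
    tokenize_and_align_labels,
    batched=True,                # hand the tokenizer batches of examples at once
    load_from_cache_file=False,  # ignore any previously cached results
)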

4. Fine-tuning the pretrained model

  Since this is a token-level classification task, we use the AutoModelForTokenClassification class. As with the tokenizer, the from_pretrained method downloads and loads the model for us, and also caches it so that it is not downloaded again.

4.1 Loading the classification model

from transformers import AutoModelForTokenClassification, TrainingArguments, Trainer

model = AutoModelForTokenClassification.from_pretrained(
    model_checkpoint, num_labels=len(label_list))
Some weights of the model checkpoint at distilbert-base-uncased were not used when initializing DistilBertForTokenClassification: ['vocab_layer_norm.weight', 'vocab_projector.weight', 'vocab_projector.bias', 'vocab_layer_norm.bias', 'vocab_transform.bias', 'vocab_transform.weight']
- This IS expected if you are initializing DistilBertForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DistilBertForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of DistilBertForTokenClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

  Because we are fine-tuning on a token classification task while the checkpoint we load is a pretrained language model, the warning tells us that some weights that do not match were discarded when loading the model (the head of the pretrained language model was thrown away) and that a new token-classification head was randomly initialized.

4.2 Setting the training arguments

  Trainer is a simple but feature-complete PyTorch training and evaluation loop, optimized for Transformers models.
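
Below is a minimal sketch of how the training arguments and the Trainer are typically set up for this task; the concrete values (output directory name, learning rate, number of epochs) are illustrative assumptions rather than the tutorial's exact settings, and a metric-computation function would normally be passed in as well.

from transformers import DataCollatorForTokenClassification

args = TrainingArguments(
    f"test-{task}",                           # output directory (name is an assumption)
    evaluation_strategy="epoch",              # evaluate at the end of every epoch
    learning_rate=2e-5,                       # illustrative value
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=3,                       # illustrative value
    weight_decay=0.01,
)

# Pads the inputs and the labels to the same length within each batch
data_collator = DataCollatorForTokenClassification(tokenizer)

trainer = Trainer(
    model,
    args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
    tokenizer=tokenizer,
)
trainer.train()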
