
Obtaining Chinese Word and Sentence Vectors with BERT on TensorFlow 2.0+ and Analyzing Their Similarity


This post uses the transformers library to load a BERT model and explores the dense vectors it produces for Chinese and English text.

Before diving in, a quick word on why this post exists: most of the articles I could find were either unclear or written for PyTorch, so I wrote this one specifically for TensorFlow 2.0+.

Runtime environment

There are no strict requirements; the following is simply what was used here, for reference:
win10 + python 3.8 + tensorflow 2.9.1 + transformers 4.20.1
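
A quick way to confirm the installed versions from Python (a minimal sanity check, nothing more):

import tensorflow as tf
import transformers

print("tensorflow  :", tf.__version__)         # e.g. 2.9.1
print("transformers:", transformers.__version__)  # e.g. 4.20.1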

Import libraries

from transformers import AutoTokenizer, TFAutoModel
import tensorflow as tf
import matplotlib.pyplot as plt

Load the model

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModel.from_pretrained(model_name,
                                    output_hidden_states=True)  # whether to return the hidden states of all BERT layers
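Note that bert-base-uncased is an English checkpoint; since the test sentences below are Chinese, a Chinese checkpoint will usually give more meaningful vectors. A minimal alternative is sketched below (assuming the "bert-base-chinese" checkpoint is available on the Hugging Face hub); the rest of this post keeps bert-base-uncased so the printed numbers stay reproducible.

# Optional: swap in a Chinese checkpoint for Chinese text
zh_tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
zh_model = TFAutoModel.from_pretrained("bert-base-chinese", output_hidden_states=True)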

Feed in test sentences

utt = ['今天的月亮又大又圆', '月亮真的好漂亮啊', '今天去看电影吧', "爱情睡醒了,天琪抱着小贝进酒店", "侠客行风万里"]
inputs = tokenizer(utt, return_tensors="tf", padding="max_length", truncation=True, max_length=64)
outputs = model(inputs)
hidden_states = outputs[2]  # hidden states from every layer
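In transformers 4.x the call returns a ModelOutput object, so the same tuple can also be read by name instead of by index; a quick check:

hidden_states = outputs.hidden_states  # equivalent to outputs[2] when output_hidden_states=True
print(len(hidden_states))        # 13
print(hidden_states[0].shape)    # (5, 64, 768)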

An explanation of the output (hidden_states):

  1. The layer number (13 layers)
  2. The batch number (5 sentences), i.e. the number of input sentences
  3. The word / token number (64 tokens per sentence), i.e. max_length
  4. The hidden unit / feature number (768 features)

A common question:
1. Why 13 layers when BERT-base only has 12?
The first entry is the output of the input embedding layer; the remaining 12 are the outputs of the 12 BERT encoder layers.

Print the shapes to check:

print("Number of layers:", len(hidden_states), "  (initial embeddings + 12 BERT layers)")
# Number of layers: 13   (initial embeddings + 12 BERT layers)

layer_i = 0
print("Number of batches:", len(hidden_states[layer_i]))
# Number of batches: 5

batch_i = 0
print("Number of tokens:", len(hidden_states[layer_i][batch_i]))
# Number of tokens: 64

token_i = 0
print("Number of hidden units:", len(hidden_states[layer_i][batch_i][token_i]))
# Number of hidden units: 768

Look at how token 5 of the first sentence is represented at layer 5

batch_i = 0
token_i = 5
layer_i = 5
vec = hidden_states[layer_i][batch_i][token_i]

Now plot its value distribution as a histogram:

plt.figure(figsize=(10, 10))
plt.hist(vec, bins=200)
plt.show()

(Figure: histogram of the 768 values of this token vector)

Reshape the stacked hidden states

Because hidden_states is a tuple of tensors, first stack it into a single tensor with an explicit layer dimension.

sentence_embeddings = tf.stack(hidden_states, axis=0)  # insert a new axis at position 0, so the 13 layers come first
print(f"sentence_embeddings.shape : {sentence_embeddings.shape}")
# sentence_embeddings.shape : (13, 5, 64, 768)

Permute the dimensions so that each token carries its 13 layer-wise embeddings

sentence_embeddings_perm = tf.transpose(sentence_embeddings, perm=[1, 2, 0, 3])
print(f"sentence_embeddings_perm.shape : {sentence_embeddings_perm.shape}")
# sentence_embeddings_perm.shape : (5, 64, 13, 768)

Getting dense token vectors

Method 1: concatenate the dense vectors of the last four layers

for sentence_embedding in sentence_embeddings_perm:  # iterate over each sentence
    print(f"sentence_embedding.shape: {sentence_embedding.shape}")
    token_vecs_cat = []
    for token_embedding in sentence_embedding:  # iterate over each token of the sentence
        print(f"token_embedding.shape : {token_embedding.shape}")
        cat_vec = tf.concat([token_embedding[-1], token_embedding[-2], token_embedding[-3], token_embedding[-4]], axis=0)
        print(f"cat_vec.shape : {cat_vec.shape}")
        token_vecs_cat.append(cat_vec)
    print(f"len(token_vecs_cat) : {len(token_vecs_cat)}")

Method 2: sum the dense vectors of the last four layers

for sentence_embedding in sentence_embeddings_perm:  # iterate over each sentence
    print(f"sentence_embedding.shape: {sentence_embedding.shape}")
    token_vecs_sum = []
    for token_embedding in sentence_embedding:  # iterate over each token of the sentence
        print(f"token_embedding.shape : {token_embedding.shape}")
        sum_vec = sum(token_embedding[-4:])  # element-wise sum of the last four layer vectors
        print(f"sum_vec.shape : {sum_vec.shape}")
        token_vecs_sum.append(sum_vec)
    print(f"len(token_vecs_sum) : {len(token_vecs_sum)}")

Getting dense sentence vectors

Average the second-to-last layer's vector over every token

token_vecs = sentence_embeddings[-2]
print(f"token_vecs.shape : {token_vecs.shape}")
# token_vecs.shape : (5, 64, 768)
sentences_embedding = tf.reduce_mean(token_vecs, axis=1)
print(f"sentences_embedding.shape : {sentences_embedding.shape}")
# sentences_embedding.shape : (5, 768)
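
Note that the sentences were padded to max_length=64, so the plain mean above also averages over [PAD] positions. A mask-aware mean that pools only over real tokens is a common refinement; a minimal sketch using the attention mask returned by the tokenizer:

# Mask-aware mean pooling over the second-to-last layer
mask = tf.cast(inputs["attention_mask"], tf.float32)   # (5, 64): 1 for real tokens, 0 for padding
mask = tf.expand_dims(mask, axis=-1)                   # (5, 64, 1)
masked_mean = tf.reduce_sum(token_vecs * mask, axis=1) / tf.reduce_sum(mask, axis=1)  # (5, 768)
print(f"masked_mean.shape : {masked_mean.shape}")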

Exploring similarity

Similarity between different sentences

tensor_test = sentences_embedding[0]
consine_sim_tensor = tf.keras.losses.cosine_similarity(tensor_test, sentences_embedding)
print(f"consine_sim_tensor : {consine_sim_tensor}")
# consine_sim_tensor : [-0.99999994 -0.9915971  -0.9763253  -0.7641263  -0.9695324 ]
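
One thing to keep in mind: tf.keras.losses.cosine_similarity is defined as a loss, so it returns the negative of cosine similarity; values near -1 mean the two vectors are most similar (which is why the sentence compared with itself scores about -1 above). Negating it recovers the usual similarity:

# Plain cosine similarity in [-1, 1]: just negate the loss
cosine_sim = -tf.keras.losses.cosine_similarity(tensor_test, sentences_embedding)
print(f"cosine_sim : {cosine_sim}")  # the first entry (~1.0) is the sentence compared with itself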

Now look at how similar the vectors of the same word "bank" are in different contexts

utt = ["After stealing money from the bank vault, the bank robber was seen fishing on the Mississippi river bank."]
inputs = tokenizer(utt, return_tensors="tf", padding="max_length", truncation=True, max_length=22)
"""
0 [CLS]
1 after
2 stealing
3 money
4 from
5 the
6 bank
7 vault
8 ,
9 the
10 bank
11 robber
12 was
13 seen
14 fishing
15 on
16 the
17 mississippi
18 river
19 bank
20 .
21 [SEP]

The word "bank" appears at positions 6, 10 and 19.
"""
outputs = model(inputs)
hidden_states = outputs[2]  # hidden states from every layer
tokens_embedding = tf.reduce_sum(hidden_states[-4:], axis=0)  # sum the last four layers
bank_vault = tokens_embedding[0][6]
bank_robber = tokens_embedding[0][10]
river_bank = tokens_embedding[0][19]
consine_sim_tensor = tf.keras.losses.cosine_similarity(bank_vault, [bank_robber, river_bank])
print(f"consine_sim_tensor : {consine_sim_tensor}")
# consine_sim_tensor : [-0.93863535 -0.69570863]

You can see that the occurrences of bank in **bank_vault** and **bank_robber** are more similar to each other than to the one in **river_bank** (these are negative cosine similarities, so -0.94 indicates higher similarity than -0.70). A reasonable result!
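
The token positions 6, 10 and 19 were read off manually from the tokenized output; they can also be located programmatically with the tokenizer (a minimal sketch using convert_ids_to_tokens):

# Locate every occurrence of "bank" in the tokenized sentence
ids = inputs["input_ids"][0].numpy().tolist()
for pos, tok in enumerate(tokenizer.convert_ids_to_tokens(ids)):
    if tok == "bank":
        print(pos, tok)  # expected: 6, 10, 19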

Complete code

from transformers import AutoTokenizer, TFAutoModel
import tensorflow as tf
import matplotlib.pyplot as plt

# Load the model
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModel.from_pretrained(model_name,
                                    output_hidden_states=True)  # Whether the model returns all hidden-states.

# Test sentences
utt = ['今天的月亮又大又圆', '月亮真的好漂亮啊', '今天去看电影吧', "爱情睡醒了,天琪抱着小贝进酒店", "侠客行风万里"]
inputs = tokenizer(utt, return_tensors="tf", padding="max_length", truncation=True, max_length=64)
outputs = model(inputs)
hidden_states = outputs[2]  # hidden states from every layer
"""
Explanation of the output (hidden_states):
1. The layer number (13 layers)
2. The batch number (5 sentences), i.e. the number of input sentences
3. The word / token number (64 tokens per sentence), i.e. max_length
4. The hidden unit / feature number (768 features)

A common question:
1. Why 13 layers when BERT-base only has 12?
The first entry is the output of the input embedding layer; the remaining 12 are the outputs of the 12 BERT encoder layers.
"""
print("Number of layers:", len(hidden_states), "  (initial embeddings + 12 BERT layers)")

layer_i = 0
print("Number of batches:", len(hidden_states[layer_i]))

batch_i = 0
print("Number of tokens:", len(hidden_states[layer_i][batch_i]))

token_i = 0
print("Number of hidden units:", len(hidden_states[layer_i][batch_i][token_i]))

# For the 5th token in our sentence, select its feature values from layer 5.
token_i = 5
layer_i = 5
vec = hidden_states[layer_i][batch_i][token_i]

# Plot the values as a histogram to show their distribution.
plt.figure(figsize=(10, 10))
plt.hist(vec, bins=200)
plt.show()


# Concatenate the tensors for all layers. We use `stack` here to
# create a new dimension in the tensor.
sentence_embeddings = tf.stack(hidden_states, axis=0)  # insert a new axis at position 0, so the 13 layers come first
print(f"sentence_embeddings.shape : {sentence_embeddings.shape}")

# Permute the dimensions so that each token carries its 13 layer-wise embeddings
sentence_embeddings_perm = tf.transpose(sentence_embeddings, perm=[1, 2, 0, 3])
print(f"sentence_embeddings_perm.shape : {sentence_embeddings_perm.shape}")

# Dense token vectors
## Method 1: concatenate the dense vectors of the last four layers
for sentence_embedding in sentence_embeddings_perm:  # iterate over each sentence
    print(f"sentence_embedding.shape: {sentence_embedding.shape}")
    token_vecs_cat = []
    for token_embedding in sentence_embedding:  # iterate over each token of the sentence
        print(f"token_embedding.shape : {token_embedding.shape}")
        cat_vec = tf.concat([token_embedding[-1], token_embedding[-2], token_embedding[-3], token_embedding[-4]], axis=0)
        print(f"cat_vec.shape : {cat_vec.shape}")
        token_vecs_cat.append(cat_vec)
    print(f"len(token_vecs_cat) : {len(token_vecs_cat)}")

## Method 2: sum the dense vectors of the last four layers
for sentence_embedding in sentence_embeddings_perm:  # iterate over each sentence
    print(f"sentence_embedding.shape: {sentence_embedding.shape}")
    token_vecs_sum = []
    for token_embedding in sentence_embedding:  # iterate over each token of the sentence
        print(f"token_embedding.shape : {token_embedding.shape}")
        sum_vec = sum(token_embedding[-4:])  # element-wise sum of the last four layer vectors
        print(f"sum_vec.shape : {sum_vec.shape}")
        token_vecs_sum.append(sum_vec)
    print(f"len(token_vecs_sum) : {len(token_vecs_sum)}")


# Dense sentence vectors
## Average the second-to-last layer's vector over every token
token_vecs = sentence_embeddings[-2]
print(f"token_vecs.shape : {token_vecs.shape}")
sentences_embedding = tf.reduce_mean(token_vecs, axis=1)
print(f"sentences_embedding.shape : {sentences_embedding.shape}")


# Cosine similarity
## Similarity between different sentences
tensor_test = sentences_embedding[0]
consine_sim_tensor = tf.keras.losses.cosine_similarity(tensor_test, sentences_embedding)
print(f"consine_sim_tensor : {consine_sim_tensor}")


## Compare the same word "bank" in different contexts
utt = ["After stealing money from the bank vault, the bank robber was seen fishing on the Mississippi river bank."]
inputs = tokenizer(utt, return_tensors="tf", padding="max_length", truncation=True, max_length=22)
"""
0 [CLS]
1 after
2 stealing
3 money
4 from
5 the
6 bank
7 vault
8 ,
9 the
10 bank
11 robber
12 was
13 seen
14 fishing
15 on
16 the
17 mississippi
18 river
19 bank
20 .
21 [SEP]

The word "bank" appears at positions 6, 10 and 19.
"""
outputs = model(inputs)
hidden_states = outputs[2]  # hidden states from every layer
tokens_embedding = tf.reduce_sum(hidden_states[-4:], axis=0)  # sum the last four layers
bank_vault = tokens_embedding[0][6]
bank_robber = tokens_embedding[0][10]
river_bank = tokens_embedding[0][19]
consine_sim_tensor = tf.keras.losses.cosine_similarity(bank_vault, [bank_robber, river_bank])
print(f"consine_sim_tensor : {consine_sim_tensor}")
# consine_sim_tensor : [-0.93863535 -0.69570863]
# As above: bank in bank_vault and bank_robber are more similar to each other than to river_bank (negative cosine similarity: more negative = more similar). Makes sense!