
NLP in Practice: SBERT-based Semantic Search, Semantic Similarity, and Unsupervised Training with SimCSE, GenQ, and More


0. Some Thoughts Prompted by SBERT

Since the concept of sentence-transformers was put forward last year, and with contrastive learning drawing wide attention in deep learning during the first half of this year, methods such as SimCSE in particular have achieved great success in unsupervised and few-shot settings. The ideas behind both Sentence-BERT and SimCSE are very simple, and they are easy to implement.

At the time, because of environment issues with the sentence-transformers package (I did not want to upgrade the transformers library to too recent a version), I implemented both single-tower and two-tower versions of Sentence-BERT myself with bert4keras and trained them on the STS datasets as well as a Chinese translation of STS. The reference paper's description and code use a single encoder, while the figure in the paper seems to show two encoders, so I ran experiments with both architectures; the difference turned out to be negligible. Personally, I therefore see no need to spend nearly twice the parameters on a two-tower structure for this kind of task, i.e. fine-tuning a pretrained model for a semantic embedding space.

Not long ago, however, while browsing the SBERT website, I noticed that sentence-transformers has been updated to version 2.0, that many pretrained models have been uploaded to Hugging Face, and that examples for several concrete application scenarios have been added. So I decided to write up an introduction here, which is mostly a translated walkthrough of the official SBERT documentation. I have to admit that the open-source community is remarkably powerful: in just a few months the project has grown a great deal. I used to work mainly in Keras, and I remember Su Jianlin (the author of bert4keras) saying that anything torch can do, Keras can do as well. I never doubted that, but the pace at which pretrained models evolve and the activity of the Hugging Face community far exceeded my expectations; sticking with Keras would mean maintaining two codebases and constantly migrating between two styles, which costs a lot of time. So I decided to stop being a bert4keras devotee and switch to transformers as my main tool.

1. Introduction to SBERT

This post is mainly a walkthrough and translation of the official SBERT documentation.
Official SBERT documentation: www.sbert.net
sentence-transformers GitHub: https://github.com/UKPLab/sentence-transformers
Pretrained models on Hugging Face: https://huggingface.co/sentence-transformers

The official site is already fairly detailed; for more concrete application examples, see the examples on GitHub. The most common applications are also summarized in this post.

sentence-transformers is built on top of the Hugging Face transformers library; even if sentence-transformers is not installed in your environment, you can still use its pretrained models with transformers alone. Regarding environment setup, for the current 2.0 release it is best to upgrade transformers, tokenizers, and the other related packages to the latest versions; tokenizers in particular will raise an error when constructing the Tokenizer if it is not upgraded.
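
For reference, here is a minimal sketch of using one of these checkpoints with transformers alone, following the mean-pooling recipe shown on the Hugging Face model cards (the model name here is just an example):

from transformers import AutoTokenizer, AutoModel
import torch

def mean_pooling(model_output, attention_mask):
    # Average the token embeddings, ignoring padding positions
    token_embeddings = model_output[0]
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * mask, 1) / torch.clamp(mask.sum(1), min=1e-9)

tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-MiniLM-L12-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-MiniLM-L12-v2')

sentences = ['This is an example sentence', 'Each sentence is converted']
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    model_output = model(**encoded)
embeddings = mean_pooling(model_output, encoded['attention_mask'])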

At the time of writing, sentence-transformers has released 98 public pretrained models.
(Figure: some of the pretrained models listed in the documentation)
When choosing a pretrained model: if accuracy is the priority, go with mpnet-base-v2 (this model has reportedly also been trained on datasets such as Flickr30k and can be used to encode images, though I have not tried that yet). For Chinese applications, choose one of the multilingual models, which currently support 50+ languages; taking semantic similarity as an example, a multilingual model can compare a Chinese sentence against an English one, although there is no dedicated Chinese pretrained model yet. If efficiency matters, pick one of the distil models.

For symmetric semantic search (query and answer have similar length, e.g., comparing the semantic similarity of two sentences), use the pretrained models listed at https://www.sbert.net/docs/pretrained_models.html#sentence-embedding-models;
for asymmetric semantic search (a short query that needs to retrieve a much longer document as the answer), use the models listed at https://www.sbert.net/docs/pretrained-models/msmarco-v3.html.
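
As a quick illustration of the cross-lingual point above, a minimal sketch (paraphrase-multilingual-MiniLM-L12-v2 is one of the official multilingual checkpoints; the sentences are made up):

from sentence_transformers import SentenceTransformer, util

multi_model = SentenceTransformer('paraphrase-multilingual-MiniLM-L12-v2')

# A Chinese sentence and its English counterpart should land close together in the shared space
emb_zh = multi_model.encode('今天天气很好', convert_to_tensor=True)
emb_en = multi_model.encode('The weather is nice today', convert_to_tensor=True)
print(util.pytorch_cos_sim(emb_zh, emb_en))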

2. Basic Applications

2.1 Semantic Similarity

Using a pretrained model as an encoder is very straightforward.

from sentence_transformers import SentenceTransformer, util
# Create the model
# The encoder here can be swapped for mpnet-base-v2, etc.
# The model is downloaded automatically and cached under /root/.cache
# To load a local pretrained model instead, pass the local model path,
# just like huggingface's from_pretrained
model = SentenceTransformer('paraphrase-MiniLM-L12-v2')
# model = SentenceTransformer('path-to-your-pretrained-model/paraphrase-MiniLM-L12-v2/')

# Compute the embeddings
sentence1 = 'xxxxxx'
sentence2 = 'xxxxxx'
embedding1 = model.encode(sentence1, convert_to_tensor=True)
embedding2 = model.encode(sentence2, convert_to_tensor=True)

# Compute the semantic similarity
cosine_score = util.pytorch_cos_sim(embedding1, embedding2)

Besides computing the similarity of two sentences as above, you can also compare two lists of sentences:

sentences1 = ['The cat sits outside',
             'A man is playing guitar',
             'The new movie is awesome']

sentences2 = ['The dog plays in the garden',
              'A woman watches TV',
              'The new movie is so great']

#Compute embedding for both lists
embeddings1 = model.encode(sentences1, convert_to_tensor=True)
embeddings2 = model.encode(sentences2, convert_to_tensor=True)

#Compute cosine similarities
cosine_scores = util.pytorch_cos_sim(embeddings1, embeddings2)

If instead you want to find the most similar sentences within a single batch of input text:

# Single list of sentences
sentences = ['The cat sits outside',
             'A man is playing guitar',
             'I love pasta',
             'The new movie is awesome',
             'The cat plays in the garden',
             'A woman watches TV',
             'The new movie is so great',
             'Do you like pizza?']

# Compute embeddings
embeddings = model.encode(sentences, convert_to_tensor=True)

# Compute pairwise cosine similarities
cosine_scores = util.pytorch_cos_sim(embeddings, embeddings)

# Find the sentence pairs with the highest similarity
pairs = []
for i in range(len(cosine_scores)-1):
    for j in range(i+1, len(cosine_scores)):
        pairs.append({'index': [i, j], 'score': cosine_scores[i][j]})

# Sort the pairs by similarity score in descending order
pairs = sorted(pairs, key=lambda x: x['score'], reverse=True)

for pair in pairs[0:10]:
    i, j = pair['index']
    print("{} \t\t {} \t\t Score: {:.4f}".format(sentences[i], sentences[j], pair['score']))

2.2 Semantic Search

Semantic search is essentially the same as similarity computation: treat sentences1 as the query and sentences2 as the corpus, compute the similarities with util.pytorch_cos_sim(), and return the top-k most similar entries.
Here is a simple helper function:

import torch

def semantic_search(query, corpus_or_emb, topk=1, model=model, corpus=None):
    """
    :param query: the query sentence
    :param corpus_or_emb: candidate answers (list of str) or their pre-computed embeddings (torch.Tensor)
    :param topk: how many results to return
    :param model: the model used for encoding
    :param corpus: the original candidate answers; required when corpus_or_emb is a tensor
    :return [(most_similar, score)]: results and their scores
    ---------------
    ver: 2021-08-23
    by: changhongyu
    """
    if isinstance(corpus_or_emb, list):
        corpus = corpus_or_emb
        c_emb = model.encode(corpus, convert_to_tensor=True)
    elif isinstance(corpus_or_emb, torch.Tensor):
        if corpus is None:
            raise ValueError("When passing pre-computed embeddings, 'corpus' must also be given.")
        c_emb = corpus_or_emb
    else:
        raise TypeError("Attribute 'corpus_or_emb' must be list or tensor.")

    topk = min(topk, len(corpus))
    q_emb = model.encode(query, convert_to_tensor=True)
    cosine_scores = util.pytorch_cos_sim(q_emb, c_emb)[0]
    top_res = torch.topk(cosine_scores, k=topk)

    return [(corpus[int(index)], float(score)) for score, index in zip(top_res[0], top_res[1])]
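
For example, a call that reuses pre-computed corpus embeddings, so the corpus only has to be encoded once (the query and corpus here are made up):

corpus = ['The cat sits outside',
          'A man is playing guitar',
          'The new movie is awesome']
corpus_emb = model.encode(corpus, convert_to_tensor=True)

# Either pass the raw corpus, or the pre-computed embeddings together with the corpus
print(semantic_search('How is the new movie?', corpus, topk=2))
print(semantic_search('How is the new movie?', corpus_emb, topk=2, corpus=corpus))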

2.3 Clustering and Topic Models

Besides similarity computation, the embeddings obtained from sentence-transformers models can also be used directly as features for clustering. The documentation provides examples for three approaches: k-means, agglomerative clustering, and fast clustering.
The k-means example simply uses sklearn's KMeans, with the SBERT embeddings as its input features:

from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

embedder = SentenceTransformer('paraphrase-MiniLM-L6-v2')

# Corpus with example sentences
corpus = ['A man is eating food.',
          'A man is eating a piece of bread.',
          'A man is eating pasta.',
          'The girl is carrying a baby.',
          'The baby is carried by the woman',
          'A man is riding a horse.',
          'A man is riding a white horse on an enclosed ground.',
          'A monkey is playing drums.',
          'Someone in a gorilla costume is playing a set of drums.',
          'A cheetah is running behind its prey.',
          'A cheetah chases prey on across a field.'
          ]
corpus_embeddings = embedder.encode(corpus)

# Perform kmean clustering
num_clusters = 5
clustering_model = KMeans(n_clusters=num_clusters)
clustering_model.fit(corpus_embeddings)
cluster_assignment = clustering_model.labels_

clustered_sentences = [[] for i in range(num_clusters)]
for sentence_id, cluster_id in enumerate(cluster_assignment):
    clustered_sentences[cluster_id].append(corpus[sentence_id])

for i, cluster in enumerate(clustered_sentences):
    print("Cluster ", i+1)
    print(cluster)
    print("")

Topic models that use a sentence transformer as the encoder include Top2Vec and BERTopic.
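
For instance, BERTopic can take a sentence-transformers model name directly as its embedding backend. A minimal sketch, assuming the bertopic package is installed and docs is your list of documents:

from bertopic import BERTopic

# docs: a list of raw text documents
topic_model = BERTopic(embedding_model='paraphrase-MiniLM-L6-v2')
topics, probs = topic_model.fit_transform(docs)
print(topic_model.get_topic_info())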


2.4 Image Search

sentence-transformers also provides ViT-based (CLIP) pretrained models, so it can compute similarities between images and text. The usage is much the same as for text.

from sentence_transformers import SentenceTransformer, util
from PIL import Image

#Load CLIP model
model = SentenceTransformer('clip-ViT-B-32')

#Encode an image:
img_emb = model.encode(Image.open('two_dogs_in_snow.jpg'))

#Encode text descriptions
text_emb = model.encode(['Two dogs in the snow', 'A cat on a table', 'A picture of London at night'])

#Compute cosine similarities 
cos_scores = util.cos_sim(img_emb, text_emb)
print(cos_scores)

Beyond this, sentence-transformers supports a number of other applications, such as retrieve & re-rank. See the official documentation for details.
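
As a rough sketch of the retrieve & re-rank idea: a bi-encoder retrieves candidates cheaply, and a cross-encoder re-scores the top hits. The model names below are MS MARCO checkpoints; the corpus and query are placeholders:

from sentence_transformers import SentenceTransformer, CrossEncoder, util

bi_encoder = SentenceTransformer('msmarco-distilbert-base-v4')
cross_encoder = CrossEncoder('cross-encoder/ms-marco-MiniLM-L-6-v2')

corpus = ['passage one ...', 'passage two ...', 'passage three ...']
corpus_emb = bi_encoder.encode(corpus, convert_to_tensor=True)

query = 'your question here'
q_emb = bi_encoder.encode(query, convert_to_tensor=True)

# Step 1: recall candidates with the bi-encoder
hits = util.semantic_search(q_emb, corpus_emb, top_k=3)[0]

# Step 2: re-rank the candidates with the cross-encoder
scores = cross_encoder.predict([[query, corpus[hit['corpus_id']]] for hit in hits])
for hit, score in sorted(zip(hits, scores), key=lambda x: x[1], reverse=True):
    print(round(float(score), 4), corpus[hit['corpus_id']])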

3. Unsupervised Training Methods

3.1 SimCSE

SimCSE uses a single encoder and exploits the randomness of dropout to construct positive training pairs, pulling positive examples closer together in the embedding space.
A simple training example from the documentation:

from sentence_transformers import SentenceTransformer, InputExample
from sentence_transformers import models, losses
from torch.utils.data import DataLoader

# Define your sentence transformer model (mean pooling over token embeddings by default)
model_name = 'distilroberta-base'
word_embedding_model = models.Transformer(model_name, max_seq_length=32)
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension())
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])

# Define a list with sentences (1k - 100k sentences)
train_sentences = ["Your set of sentences",
                   "Model will automatically add the noise",
                   "And re-construct it",
                   "You should provide at least 1k sentences"]

# Convert train sentences to sentence pairs
train_data = [InputExample(texts=[s, s]) for s in train_sentences]

# DataLoader to batch your data
train_dataloader = DataLoader(train_data, batch_size=128, shuffle=True)

# Use MultipleNegativesRankingLoss: the two dropout-noised copies of a sentence
# are positives, and the other sentences in the batch serve as negatives
train_loss = losses.MultipleNegativesRankingLoss(model)

# Call the fit method
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    show_progress_bar=True
)

model.save('output/simcse-model')

3.2 TSDAE

Transformers and Sequential Denoising Auto-Encoder (TSDAE) is a transformer-based encoder-decoder architecture trained to reconstruct the original text from a noise-corrupted input.
Training code:

from sentence_transformers import SentenceTransformer
from sentence_transformers import models, util, datasets, evaluation, losses
from torch.utils.data import DataLoader

# Define your sentence transformer model using CLS pooling
model_name = 'bert-base-uncased'
word_embedding_model = models.Transformer(model_name)
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(), 'cls')
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])

# Define a list with sentences (1k - 100k sentences)
train_sentences = ["Your set of sentences",
                   "Model will automatically add the noise", 
                   "And re-construct it",
                   "You should provide at least 1k sentences"]

# Create the special denoising dataset that adds noise on-the-fly
train_dataset = datasets.DenoisingAutoEncoderDataset(train_sentences)

# DataLoader to batch your data
train_dataloader = DataLoader(train_dataset, batch_size=8, shuffle=True)

# Use the denoising auto-encoder loss
train_loss = losses.DenoisingAutoEncoderLoss(model, decoder_name_or_path=model_name, tie_encoder_decoder=True)

# Call the fit method
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    weight_decay=0,
    scheduler='constantlr',
    optimizer_params={'lr': 3e-5},
    show_progress_bar=True
)

model.save('output/tsdae-model')

I have not read this paper yet; noting the link here for reference:
https://arxiv.org/abs/2104.06979

3.3 GenQ

GenQ targets asymmetric semantic search. When no labeled query-passage pairs are available, a T5 model is first used to generate queries, building a 'silver dataset', which is then used to fine-tune a bi-encoder (two-tower) SBERT model.
First, feed in a passage and generate questions for it. Note that the pretrained model used here is query-gen-msmarco-t5-large-v1, not the T5 model released by Google; with the vanilla T5 model, the output would not be a corresponding question but a snippet similar to para.

Also note that BeIR has only released three such models, all pretrained on English corpora, so to apply this in a Chinese setting you would have to translate the documents into English first, generate the English questions, and then translate them back.

from transformers import T5Tokenizer, T5ForConditionalGeneration
import torch

tokenizer = T5Tokenizer.from_pretrained('BeIR/query-gen-msmarco-t5-large-v1')
model = T5ForConditionalGeneration.from_pretrained('BeIR/query-gen-msmarco-t5-large-v1')
model.eval()

para = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."

input_ids = tokenizer.encode(para, return_tensors='pt')
with torch.no_grad():
    outputs = model.generate(
        input_ids=input_ids,
        max_length=64,
        do_sample=True,
        top_p=0.95,
        num_return_sequences=3)

print("Paragraph:")
print(para)

print("\nGenerated Queries:")
for i in range(len(outputs)):
    query = tokenizer.decode(outputs[i], skip_special_tokens=True)
    print(f'{i + 1}: {query}')

Then train an SBERT model with the generated dataset:

from sentence_transformers import SentenceTransformer, InputExample, losses, models, datasets
import os

train_examples = []
for para in paras:
    # paras: all of your unlabeled passages, each with its generated queries
    for query in para['queries']:
        # adjust the keys here to match however you stored your data
        train_examples.append(InputExample(texts=[query, para['text']]))

# For the MultipleNegativesRankingLoss, it is important
# that the batch does not contain duplicate entries, i.e.
# no two equal queries and no two equal paragraphs.
# To ensure this, we use a special data loader
train_dataloader = datasets.NoDuplicatesDataLoader(train_examples, batch_size=64)

# Now we create a SentenceTransformer model from scratch
word_emb = models.Transformer('distilbert-base-uncased')
pooling = models.Pooling(word_emb.get_word_embedding_dimension())
model = SentenceTransformer(modules=[word_emb, pooling])

# MultipleNegativesRankingLoss requires input pairs (query, relevant_passage)
# and trains the model so that it is suitable for semantic search
train_loss = losses.MultipleNegativesRankingLoss(model)


#Tune the model
num_epochs = 3
warmup_steps = int(len(train_dataloader) * num_epochs * 0.1)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=num_epochs, warmup_steps=warmup_steps, show_progress_bar=True)

os.makedirs('output', exist_ok=True)
model.save('output/programming-model')

The trained model can then be used to answer new queries with the semantic search approach described earlier.
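
For example, loading the model saved above and reusing the semantic_search helper from Section 2.2 (paras and the 'text' key follow the same hypothetical data format as in the training snippet; the query is made up):

from sentence_transformers import SentenceTransformer

genq_model = SentenceTransformer('output/programming-model')

corpus = [para['text'] for para in paras]  # the raw passages
print(semantic_search('How do I write readable Python code?', corpus, topk=3, model=genq_model))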

3.4 CT

CT (Contrastive Tension) is another unsupervised training method. The idea is to encode two streams of input with two separate encoders: for the same sentence, encoder 1 and encoder 2 should produce similar encodings, whereas for different sentences the two encoders should produce different encodings. The training objective is therefore to make the dot product of the two encodings as large as possible in the former case and as small as possible in the latter.

Training code:

import math
from datetime import datetime
from sentence_transformers import models, losses, SentenceTransformer

## Training parameters
model_name = 'distilbert-base-uncased'  # change this to a local path if you use a local model
batch_size = 16
pos_neg_ratio = 8   # batch_size must be divisible by pos_neg_ratio
num_epochs = 1
max_seq_length = 75
output_name = ''  # suffix for the model output path

model_output_path = 'output/train_ct{}-{}'.format(output_name, datetime.now().strftime("%Y-%m-%d_%H-%M-%S"))

# Build the encoder; an SBERT pretrained model can also be used here
word_embedding_model = models.Transformer(model_name, max_seq_length=max_seq_length)

# Apply mean pooling to get one fixed sized sentence vector
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension())
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])

################# Read the train corpus  #################
train_sentences = ["Your set of sentences",
                   "Model will automatically add the noise",
                   "And re-construct it",
                   "You should provide at least 1k sentences"]

# For ContrastiveTension we need a special data loader to construct batches with the desired properties
train_dataloader =  losses.ContrastiveTensionDataLoader(train_sentences, batch_size=batch_size, pos_neg_ratio=pos_neg_ratio)

# As loss, we use losses.ContrastiveTensionLoss
train_loss = losses.ContrastiveTensionLoss(model)

warmup_steps = math.ceil(len(train_dataloader) * num_epochs * 0.1)  # 10% of train data for warm-up

# Train the model
model.fit(train_objectives=[(train_dataloader, train_loss)],
          epochs=num_epochs,
          warmup_steps=warmup_steps,
          optimizer_params={'lr': 5e-5},
          checkpoint_path=model_output_path,
          show_progress_bar=True,
          use_amp=False  # Set to True, if your GPU supports FP16 cores
          )

All in all, sentence-transformers is a very convenient encoder, the principles behind all of the functionality above are easy to understand, and it makes a good foundation for building more creative features tailored to your own application scenarios.
