
sentence_transformers Tutorial: Sentence Transformer Losses

Documentation: Search — Sentence-Transformers documentation

Purpose:

This library is mainly used to produce sentence embeddings; downstream, these are commonly used for semantic matching.
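
As a quick illustration, here is a minimal semantic-matching sketch (the model name and sentences are illustrative choices, not from the original text):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('all-MiniLM-L6-v2')
sentences = ['How do I reset my password?',
             'Steps to change an account password',
             'Best pizza in town']
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the first sentence and the other two;
# the paraphrase should score higher than the unrelated sentence
scores = util.cos_sim(embeddings[0], embeddings[1:])
print(scores)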

ContrastiveLoss

Aliases:

pairwise ranking loss

pairwise loss

Formula:

loss = y * ||anchor - positive|| + (1 - y) * max(margin - ||anchor - negative||, 0)

where y = 1 for a positive pair and y = 0 for a negative pair. Note that the implementation below squares both distance terms and scales by 0.5.

# Loss computation, excerpted from the sentence-transformers ContrastiveLoss implementation
losses = 0.5 * (labels.float() * distances.pow(2) + (1 - labels).float() * F.relu(self.margin - distances).pow(2))
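
To see the formula in action, here is a small hand-computed sketch with made-up distances and labels:

import torch
import torch.nn.functional as F

margin = 0.5
distances = torch.tensor([0.2, 0.9])  # Euclidean distances of two pairs
labels = torch.tensor([1.0, 0.0])     # 1 = positive pair, 0 = negative pair

losses = 0.5 * (labels * distances.pow(2)
                + (1 - labels) * F.relu(margin - distances).pow(2))
# The positive pair is penalized by its (squared) distance; the negative
# pair is already farther apart than the margin, so its loss is 0.
print(losses)  # tensor([0.0200, 0.0000])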
A minimal training example:

from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer('all-MiniLM-L6-v2')
train_examples = [
    InputExample(texts=['This is a positive pair', 'Where the distance will be minimized'], label=1),
    InputExample(texts=['This is a negative pair', 'Their distance will be increased'], label=0)]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
train_loss = losses.ContrastiveLoss(model=model)
model.fit([(train_dataloader, train_loss)], show_progress_bar=True)

CosineSimilarityLoss

This loss computes the cosine similarity between the two sentence embeddings and applies an MSE loss against the label.

from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# Define the model, either from scratch or by loading a pre-trained model
model = SentenceTransformer('distilbert-base-nli-mean-tokens')

# Define your train examples. You need more than just two examples...
train_examples = [InputExample(texts=['My first sentence', 'My second sentence'], label=0.8),
                  InputExample(texts=['Another pair', 'Unrelated sentence'], label=0.3)]

# Define your train dataset, the dataloader and the train loss
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

# Tune the model
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
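
Conceptually, the computation reduces to the following minimal sketch (embedding shapes and values are assumed for illustration; this is not the library's source):

import torch
import torch.nn.functional as F

emb1 = torch.randn(16, 384)  # embeddings of the first sentences in a batch
emb2 = torch.randn(16, 384)  # embeddings of the second sentences
labels = torch.rand(16)      # target similarity scores

cos_sim = F.cosine_similarity(emb1, emb2, dim=1)
loss = F.mse_loss(cos_sim, labels)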

MultipleNegativesRankingLoss

An in-batch contrastive loss: for each anchor in a batch, the positives of all other examples are treated as negatives. Cosine similarities are computed for every anchor-candidate pair and a cross-entropy loss is applied, so the anchor's own positive should receive the highest score (see the sketch after the code below).

from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer('all-MiniLM-L6-v2')
train_examples = [InputExample(texts=['Anchor 1', 'Positive 1']),
                  InputExample(texts=['Anchor 2', 'Positive 2'])]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.MultipleNegativesRankingLoss(model=model)
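
To make the in-batch-negatives mechanism concrete, here is a minimal sketch (shapes are assumed, and the scale factor of 20 matches what I believe is the library default, so treat it as an assumption):

import torch
import torch.nn.functional as F

anchors = torch.randn(32, 384)    # anchor embeddings for one batch
positives = torch.randn(32, 384)  # the matching positive embeddings
scale = 20.0                      # assumed: similarities are scaled before softmax

# Pairwise cosine similarity: entry (i, j) compares anchor i with positive j
scores = F.cosine_similarity(anchors.unsqueeze(1), positives.unsqueeze(0), dim=2) * scale

# For anchor i, the correct "class" is its own positive, at index i
labels = torch.arange(scores.size(0))
loss = F.cross_entropy(scores, labels)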

TripletLoss

loss = max(||anchor - positive|| - ||anchor - negative|| + margin, 0).

from sentence_transformers import SentenceTransformer, SentencesDataset, losses
from sentence_transformers.readers import InputExample
from torch.utils.data import DataLoader

model = SentenceTransformer('distilbert-base-nli-mean-tokens')
train_examples = [InputExample(texts=['Anchor 1', 'Positive 1', 'Negative 1']),
                  InputExample(texts=['Anchor 2', 'Positive 2', 'Negative 2'])]
train_dataset = SentencesDataset(train_examples, model)
train_dataloader = DataLoader(train_dataset, shuffle=True, batch_size=16)
train_loss = losses.TripletLoss(model=model)
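
A quick numeric check of the triplet formula with made-up two-dimensional embeddings (the margin of 5 is an assumption based on my understanding of the library's default triplet_margin):

import torch
import torch.nn.functional as F

anchor = torch.tensor([[1.0, 0.0]])
positive = torch.tensor([[1.0, 0.5]])
negative = torch.tensor([[-1.0, 0.0]])
margin = 5.0  # assumed default

d_pos = F.pairwise_distance(anchor, positive)  # ||anchor - positive|| = 0.5
d_neg = F.pairwise_distance(anchor, negative)  # ||anchor - negative|| = 2.0
loss = F.relu(d_pos - d_neg + margin)          # max(0.5 - 2.0 + 5.0, 0) = 3.5
print(loss)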
