
Hands-on BERT Model Distillation


Because a BERT model has a very large number of parameters, its inference speed in production often fails to meet requirements, so the model usually needs to be compressed. Common compression methods include pruning, distillation, and quantization. Knowledge distillation is one of the easiest to implement, so this post walks through how to distill a BERT model.

1. How knowledge distillation works

The goal of model distillation is to have a small model learn the knowledge of a large model, so that the small model's quality approaches the large model's. The small model is called the student and the large model the teacher.

The concrete distillation procedure can be designed differently depending on the network structures of the teacher and the student, but the basic setup is the same: the student is trained against both the original labels and the teacher's outputs.

The loss function has two parts, a cross-entropy loss and an MSE loss, and when computing them you need to distinguish the soft target from the hard target. Two hyperparameters need to be defined, through which the student is fitted to the teacher: the temperature $T$, which rescales the logits, and the weight $\alpha$, which forms the weighted loss. The hard target is simply the original annotated label. The soft target is the temperature-scaled softmax of the logits:

$$q_i = \frac{\exp(z_i / T)}{\sum_j \exp(z_j / T)}$$

The weighted loss is then

$$L = \alpha \, L_{soft} + (1 - \alpha) \, L_{hard}$$

where $L_{soft}$ is the MSE between the teacher's and the student's soft targets and $L_{hard}$ is the cross-entropy loss against the true labels, matching the loss code in section 2.
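As a minimal sketch of these two formulas in TensorFlow (the helper names softmax_t and distill_loss, and the binary-classification setup, are assumptions for illustration, not the post's original code):

import tensorflow as tf

def softmax_t(t, logits):
    # Soft target: softmax over logits rescaled by the temperature t
    return tf.nn.softmax(logits / t)

def distill_loss(alpha, t, teacher_logits, student_logits, labels):
    # L = alpha * L_soft + (1 - alpha) * L_hard
    l_soft = tf.keras.losses.mean_squared_error(
        softmax_t(t, teacher_logits), softmax_t(t, student_logits))
    l_hard = tf.keras.losses.sparse_categorical_crossentropy(
        labels, tf.nn.softmax(student_logits))
    return alpha * l_soft + (1 - alpha) * l_hard

With T = 1 the soft target reduces to the ordinary softmax; larger T flattens the distribution, letting the student learn from the relative scores the teacher assigns to the non-target classes.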

2. Distilling a SimBERT model into a Siamese network

The pipeline: the frozen SimBERT teacher scores each sentence pair to produce soft labels, while the Siamese student embeds the two sentences, computes their cosine similarity, and is trained on the weighted combination of soft and hard targets described above.

The core code is as follows:

import tensorflow as tf


def softmax_t(t, logits):
    # Temperature-scaled softmax (assumed implementation of the helper
    # referenced below; see the formula in section 1)
    return tf.nn.softmax(logits / t)


class Distill_model(tf.keras.Model):
    '''
    Knowledge distillation with a DSSM-style (Siamese) student
    '''
    def __init__(self, config, teacher_network, vocab_size, word_vectors, **kwargs):
        # Freeze the teacher network's parameters
        for layer in teacher_network.layers:
            layer.trainable = False

        # Student model inputs
        query = tf.keras.layers.Input(shape=(None,), dtype=tf.int64, name='input_x_ids')
        sim_query = tf.keras.layers.Input(shape=(None,), dtype=tf.int64, name='input_y_ids')

        # Teacher model inputs
        word_ids_a = tf.keras.layers.Input(shape=(None,), dtype=tf.int32, name='input_word_ids_a')
        mask_a = tf.keras.layers.Input(shape=(None,), dtype=tf.int32, name='input_mask_a')
        type_ids_a = tf.keras.layers.Input(shape=(None,), dtype=tf.int32, name='input_type_ids_a')
        word_ids_b = tf.keras.layers.Input(shape=(None,), dtype=tf.int32, name='input_word_ids_b')
        mask_b = tf.keras.layers.Input(shape=(None,), dtype=tf.int32, name='input_mask_b')
        type_ids_b = tf.keras.layers.Input(shape=(None,), dtype=tf.int32, name='input_type_ids_b')
        input_a = [word_ids_a, mask_a, type_ids_a]
        input_b = [word_ids_b, mask_b, type_ids_b]
        teacher_input = [input_a, input_b]

        # Teacher soft label
        teacher_output = teacher_network(teacher_input)
        teacher_soft_label = softmax_t(config['t'], teacher_output['logits'])

        # Embedding layer: map input ids to word vectors,
        # shape = [batch_size, seq_len, embedding_size]
        class GatherLayer(tf.keras.layers.Layer):
            def __init__(self, config, vocab_size, word_vectors):
                super(GatherLayer, self).__init__()
                self.config = config
                self.vocab_size = vocab_size
                self.word_vectors = word_vectors

            def build(self, input_shape):
                with tf.name_scope('embedding'):
                    if not self.config['use_word2vec']:
                        self.embedding_w = tf.Variable(
                            tf.keras.initializers.glorot_normal()(
                                shape=[self.vocab_size, self.config['embedding_size']],
                                dtype=tf.float32),
                            trainable=True, name='embedding_w')
                    else:
                        self.embedding_w = tf.Variable(
                            tf.cast(self.word_vectors, tf.float32),
                            trainable=True, name='embedding_w')
                # was `self.build = True`, which clobbers the build() method
                super(GatherLayer, self).build(input_shape)

            def call(self, inputs, **kwargs):
                return tf.gather(self.embedding_w, inputs, name='embedded_words')

            def get_config(self):
                return super(GatherLayer, self).get_config()

        # shared_lstm_layer builds the shared encoder; defined elsewhere in the project
        shared_net = tf.keras.Sequential([GatherLayer(config, vocab_size, word_vectors),
                                          shared_lstm_layer(config)])
        # Call the shared tower directly (predict_step is not meant for graph building)
        query_embedding_output = shared_net(query)
        sim_query_embedding_output = shared_net(sim_query)

        # Cosine similarity, shape [batch_size]
        query_norm = tf.sqrt(tf.reduce_sum(tf.square(query_embedding_output), axis=-1),
                             name='query_norm')
        sim_query_norm = tf.sqrt(tf.reduce_sum(tf.square(sim_query_embedding_output), axis=-1),
                                 name='sim_query_norm')
        dot = tf.reduce_sum(tf.multiply(query_embedding_output, sim_query_embedding_output),
                            axis=-1)
        cos_similarity = tf.divide(dot, (query_norm * sim_query_norm), name='cos_similarity')

        # Probability-like scores for the positive class
        cond = (cos_similarity > config['neg_threshold'])
        pos = tf.where(cond, tf.square(cos_similarity), 1 - tf.square(cos_similarity))
        neg = tf.where(cond, 1 - tf.square(cos_similarity), tf.square(cos_similarity))
        # [batch_size, 2]; replaces the original per-example Python loop
        predictions = tf.stack([neg, pos], axis=-1)

        student_soft_label = softmax_t(config['t'], cos_similarity)
        student_hard_label = cos_similarity

        if config['is_training']:
            # Distill during training: expose teacher and student outputs to the loss
            outputs = dict(student_soft_label=student_soft_label,
                           student_hard_label=student_hard_label,
                           teacher_soft_label=teacher_soft_label,
                           predictions=predictions)
            super(Distill_model, self).__init__(inputs=[query, sim_query, teacher_input],
                                                outputs=outputs, **kwargs)
        else:
            # At prediction time only the student branch is wired up
            outputs = dict(predictions=predictions)
            super(Distill_model, self).__init__(inputs=[query, sim_query],
                                                outputs=outputs, **kwargs)

        # Attributes may only be set after super().__init__() on a keras.Model
        self.config = config
        self.vocab_size = vocab_size
        self.word_vectors = word_vectors
        self.similarity = cos_similarity
        self.logits = cos_similarity
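As a usage sketch, construction might look like the following. The config keys mirror those referenced in the code above, but the concrete values, the vocab size, and simbert_model (a trained SimBERT Keras model whose output dict contains a 'logits' entry) are all placeholders:

# Hypothetical config; key names follow those referenced in Distill_model
config = dict(t=2.0, alpha=0.5, neg_threshold=0.3, embedding_size=128,
              use_word2vec=False, batch_size=32, is_training=True)

model = Distill_model(config, teacher_network=simbert_model,
                      vocab_size=30000, word_vectors=None)

# In training mode the model consumes the student's two id sequences plus the
# teacher's two (word_ids, mask, type_ids) triplets, and emits the outputs
# used by the loss: student/teacher soft labels, the hard label, predictions.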

The most important steps: first, the teacher's parameters are frozen so they do not participate in training:

# Freeze the teacher network's parameters
for layer in teacher_network.layers:
    layer.trainable = False
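A quick sanity check can confirm the freeze took effect before training starts:

# The frozen teacher should expose no trainable weights
assert len(teacher_network.trainable_weights) == 0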

Then, at prediction time only the student model is built:

# At prediction time only the student branch is wired up
outputs = dict(predictions=predictions)
super(Distill_model, self).__init__(inputs=[query, sim_query], outputs=outputs, **kwargs)
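One way to exploit this for deployment, continuing the usage sketch above (the HDF5 checkpoint path is an assumption; load_weights(..., by_name=True) restores the student layers shared between the two graphs):

# Train with the full distillation graph, then keep only the student for serving
config['is_training'] = True
train_model = Distill_model(config, teacher_network, vocab_size, word_vectors)
# ... training loop ...
train_model.save_weights('distill_model.h5')

# Rebuild the student-only graph and restore the shared layers by name
config['is_training'] = False
infer_model = Distill_model(config, teacher_network, vocab_size, word_vectors)
infer_model.load_weights('distill_model.h5', by_name=True)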

Then comes the loss computation:

# MSE loss: fit the student's soft labels to the teacher's
y = tf.reshape(labels, (-1,))
student_soft_label = model_outputs['student_soft_label']
teacher_soft_label = model_outputs['teacher_soft_label']
mse_loss = tf.keras.losses.mean_squared_error(teacher_soft_label, student_soft_label)

# Contrastive cross-entropy-style loss on the hard labels
similarity = model_outputs['student_hard_label']
cond = (similarity < self.config['neg_threshold'])
zeros = tf.zeros_like(similarity, dtype=tf.float32)
ones = tf.ones_like(similarity, dtype=tf.float32)
square_similarity = tf.square(similarity)
neg_similarity = tf.where(cond, square_similarity, zeros)
pos_loss = y * (tf.square(ones - similarity) / 4)
neg_loss = (ones - y) * neg_similarity
ce_loss = pos_loss + neg_loss

# Weighted combination: L = alpha * L_soft + (1 - alpha) * L_hard
losses = self.config['alpha'] * mse_loss + (1 - self.config['alpha']) * ce_loss
loss = tf.reduce_mean(losses)
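Tying it together, a minimal custom training step might look like the sketch below, where loss_fn stands for the loss computation above wrapped into a function (an assumed name). Because the teacher is frozen, model.trainable_variables contains only student weights:

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

@tf.function
def train_step(inputs, labels):
    with tf.GradientTape() as tape:
        model_outputs = model(inputs, training=True)
        # weighted MSE (soft labels) + contrastive CE (hard labels), as above
        loss = loss_fn(model_outputs, labels)
    # Gradients flow only into the student; the frozen teacher is untouched
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss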

3. Summary

As a model compression method, knowledge distillation has plenty of advantages: it is easy to implement, and it can also be used when training samples are scarce.

Reference:

模型蒸馏原理和bert模型蒸馏以及theseus压缩实战, colourmind's blog (CSDN)
