keras-bert is a third-party Python module for loading BERT models in the Keras framework. In previous articles, the author has shown how to use keras-bert to implement a range of NLP tasks.
This article introduces two ways of loading a BERT model with keras-bert. The Python environment used is as follows:
python==3.7.0
tensorflow==1.14.0
Keras==2.2.4
keras-bert==0.83.0
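These version pins matter: keras-bert 0.83.0 targets the standalone Keras 2.x API on a TensorFlow 1.x backend. keras-bert itself can typically be installed with `pip install keras-bert==0.83.0`; the CRF layer used below additionally requires keras-contrib, which at the time of writing was installed from its GitHub repository rather than from PyPI (see that project's README for current instructions).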
The model loaded is the Chinese BERT pre-trained model officially released by Google. The model built on top of it is BERT + Bi-LSTM + CRF, with BERT fine-tuned during training.
The complete code for method 1 is as follows:
# -*- coding: utf-8 -*-
from keras.layers import *
from keras.models import Model
from keras.utils import plot_model
from keras_bert import load_trained_model_from_checkpoint
from keras_contrib.layers import CRF

# Build the BERT-BiLSTM-CRF model
model_path = "./chinese_L-12_H-768_A-12/"
bert = load_trained_model_from_checkpoint(
    model_path + "bert_config.json",
    model_path + "bert_model.ckpt",
    seq_len=128
)

# make the BERT layers trainable (fine-tuning)
for layer in bert.layers:
    layer.trainable = True

# wrap the whole BERT model as a single layer fed by two new inputs
x1 = Input(shape=(None,))
x2 = Input(shape=(None,))
bert_out = bert([x1, x2])
lstm_out = Bidirectional(LSTM(64,
                              return_sequences=True,
                              dropout=0.2,
                              recurrent_dropout=0.2))(bert_out)
crf_out = CRF(8, sparse_target=True)(lstm_out)
model = Model([x1, x2], crf_out)
model.summary()
plot_model(model, to_file="model.png")
The printed model structure is as follows:
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_1 (InputLayer)            (None, None)         0
__________________________________________________________________________________________________
input_2 (InputLayer)            (None, None)         0
__________________________________________________________________________________________________
model_2 (Model)                 multiple             101382144   input_1[0][0]
                                                                 input_2[0][0]
__________________________________________________________________________________________________
bidirectional_1 (Bidirectional) (None, None, 128)    426496      model_2[1][0]
__________________________________________________________________________________________________
crf_1 (CRF)                     (None, None, 8)      1112        bidirectional_1[0][0]
==================================================================================================
Total params: 101,809,752
Trainable params: 101,809,752
Non-trainable params: 0
__________________________________________________________________________________________________
As the summary shows, this way of loading BERT wraps the whole BERT model in a single layer whose output shape is reported as `multiple`, so none of BERT's internal layers are visible. The upside is that the resulting model structure looks much simpler, since BERT's details are hidden.
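Even with BERT hidden inside one layer, the model is fully usable end to end. A small usage sketch (the sample sentence is illustrative; this assumes the vocab.txt shipped with the checkpoint and keras-bert's Tokenizer/load_vocabulary helpers):

import numpy as np
from keras_bert import Tokenizer, load_vocabulary

# build a tokenizer from the vocabulary that ships with the checkpoint
token_dict = load_vocabulary(model_path + "vocab.txt")
tokenizer = Tokenizer(token_dict)

# encode one sentence into token ids and segment ids, padded to seq_len=128
token_ids, segment_ids = tokenizer.encode("这是一个测试句子。", max_len=128)
pred = model.predict([np.array([token_ids]), np.array([segment_ids])])
print(pred.shape)  # expected: (1, 128, 8), one tag distribution per position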
The Python code for method 2 is as follows:
# -*- coding: utf-8 -*-
from keras.layers import *
from keras.models import Model
from keras.utils import plot_model
from keras_bert import load_trained_model_from_checkpoint
from keras_contrib.layers import CRF

# Build the BERT-BiLSTM-CRF model
model_path = "./chinese_L-12_H-768_A-12/"
bert = load_trained_model_from_checkpoint(
    model_path + "bert_config.json",
    model_path + "bert_model.ckpt",
    seq_len=128
)

# make the BERT layers trainable (fine-tuning)
for layer in bert.layers:
    layer.trainable = True

# reuse BERT's own input and output tensors instead of wrapping BERT
# as a single layer
lstm_out = Bidirectional(LSTM(64,
                              return_sequences=True,
                              dropout=0.2,
                              recurrent_dropout=0.2))(bert.output)
crf_out = CRF(8, sparse_target=True)(lstm_out)
model = Model(bert.input, crf_out)
model.summary()
plot_model(model, to_file="model.png")
The printed model structure is as follows:
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
Input-Token (InputLayer)        (None, 128)          0
__________________________________________________________________________________________________
Input-Segment (InputLayer)      (None, 128)          0
__________________________________________________________________________________________________
Embedding-Token (TokenEmbedding [(None, 128, 768), ( 16226304    Input-Token[0][0]
__________________________________________________________________________________________________
Embedding-Segment (Embedding)   (None, 128, 768)     1536        Input-Segment[0][0]
__________________________________________________________________________________________________
Embedding-Token-Segment (Add)   (None, 128, 768)     0           Embedding-Token[0][0]
                                                                 Embedding-Segment[0][0]
__________________________________________________________________________________________________
Embedding-Position (PositionEmb (None, 128, 768)     98304       Embedding-Token-Segment[0][0]
__________________________________________________________________________________________________
Embedding-Dropout (Dropout)     (None, 128, 768)     0           Embedding-Position[0][0]
__________________________________________________________________________________________________
Embedding-Norm (LayerNormalizat (None, 128, 768)     1536        Embedding-Dropout[0][0]
__________________________________________________________________________________________________
Encoder-1-MultiHeadSelfAttentio (None, 128, 768)     2362368     Embedding-Norm[0][0]
__________________________________________________________________________________________________
Encoder-1-MultiHeadSelfAttentio (None, 128, 768)     0           Encoder-1-MultiHeadSelfAttention[
__________________________________________________________________________________________________
Encoder-1-MultiHeadSelfAttentio (None, 128, 768)     0           Embedding-Norm[0][0]
                                                                 Encoder-1-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-1-MultiHeadSelfAttentio (None, 128, 768)     1536        Encoder-1-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-1-FeedForward (FeedForw (None, 128, 768)     4722432     Encoder-1-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-1-FeedForward-Dropout ( (None, 128, 768)     0           Encoder-1-FeedForward[0][0]
__________________________________________________________________________________________________
Encoder-1-FeedForward-Add (Add) (None, 128, 768)     0           Encoder-1-MultiHeadSelfAttention-
                                                                 Encoder-1-FeedForward-Dropout[0][
__________________________________________________________________________________________________
Encoder-1-FeedForward-Norm (Lay (None, 128, 768)     1536        Encoder-1-FeedForward-Add[0][0]
__________________________________________________________________________________________________
... [Encoder-2 through Encoder-11 omitted here for brevity: each block repeats
     the exact pattern of Encoder-1, consuming the previous block's
     FeedForward-Norm output] ...
__________________________________________________________________________________________________
Encoder-12-MultiHeadSelfAttenti (None, 128, 768)     2362368     Encoder-11-FeedForward-Norm[0][0]
__________________________________________________________________________________________________
Encoder-12-MultiHeadSelfAttenti (None, 128, 768)     0           Encoder-12-MultiHeadSelfAttention
__________________________________________________________________________________________________
Encoder-12-MultiHeadSelfAttenti (None, 128, 768)     0           Encoder-11-FeedForward-Norm[0][0]
                                                                 Encoder-12-MultiHeadSelfAttention
__________________________________________________________________________________________________
Encoder-12-MultiHeadSelfAttenti (None, 128, 768)     1536        Encoder-12-MultiHeadSelfAttention
__________________________________________________________________________________________________
Encoder-12-FeedForward (FeedFor (None, 128, 768)     4722432     Encoder-12-MultiHeadSelfAttention
__________________________________________________________________________________________________
Encoder-12-FeedForward-Dropout  (None, 128, 768)     0           Encoder-12-FeedForward[0][0]
__________________________________________________________________________________________________
Encoder-12-FeedForward-Add (Add (None, 128, 768)     0           Encoder-12-MultiHeadSelfAttention
                                                                 Encoder-12-FeedForward-Dropout[0]
__________________________________________________________________________________________________
Encoder-12-FeedForward-Norm (La (None, 128, 768)     1536        Encoder-12-FeedForward-Add[0][0]
__________________________________________________________________________________________________
bidirectional_1 (Bidirectional) (None, 128, 128)     426496      Encoder-12-FeedForward-Norm[0][0]
__________________________________________________________________________________________________
crf_1 (CRF)                     (None, 128, 8)       1112        bidirectional_1[0][0]
==================================================================================================
Total params: 101,809,752
Trainable params: 101,809,752
Non-trainable params: 0
__________________________________________________________________________________________________
As the summary shows, this way of loading BERT exposes every internal layer of the model instead of treating BERT as one opaque layer, which is much closer to what one usually wants when fine-tuning BERT.
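A practical consequence is that individual BERT layers can now be looked up by name. A small sketch (the names are the full versions of the truncated names printed above; which blocks to freeze is purely an illustrative choice, not part of this article's model):

# e.g. freeze the lower six encoder blocks and fine-tune only the upper six
for i in range(1, 7):
    model.get_layer("Encoder-%d-MultiHeadSelfAttention" % i).trainable = False
    model.get_layer("Encoder-%d-FeedForward" % i).trainable = False
# (re-run model.compile(...) afterwards for trainable changes to take effect)

# the token embedding layer is reachable the same way
embedding_layer = model.get_layer("Embedding-Token")
print(embedding_layer.name)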
This has been a short article introducing two ways of loading a BERT model with keras-bert. The reason for covering both loading methods is to prepare for using FGM adversarial training to improve model performance later on: FGM requires perturbing the Embedding layer, which is easy to reach when BERT is loaded with the second method, as sketched below.
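FGM (Fast Gradient Method) adds the perturbation r_adv = epsilon * g / ||g|| to the embedding matrix, where g is the gradient of the training loss with respect to that matrix. Below is a minimal sketch of how this could be attached to the method-2 model, adapted from a widely circulated Keras FGM recipe; it is a sketch under assumptions, not part of keras-bert's API: the model must already be compiled, the layer name "Embedding-Token" is taken from the summary above, and the helper name adversarial_training as well as the epsilon value are illustrative.

# -*- coding: utf-8 -*-
import numpy as np
from keras import backend as K

def adversarial_training(model, embedding_name, epsilon=1.0):
    """Patch a compiled Keras model so that each training step first adds
    an FGM perturbation to the embedding matrix and removes it afterwards."""
    if model.train_function is None:
        model._make_train_function()          # build the original train step
    old_train_function = model.train_function
    embeddings = model.get_layer(embedding_name).embeddings
    gradients = K.gradients(model.total_loss, [embeddings])[0]
    # a function computing d(loss)/d(embeddings) from the same feed as training
    inputs = (model._feed_inputs
              + model._feed_targets
              + model._feed_sample_weights)
    embedding_gradients = K.function(inputs, [gradients])

    def train_function(batch):
        grads = embedding_gradients(batch)[0]
        # r_adv = epsilon * g / ||g||_2
        delta = epsilon * grads / (np.sqrt((grads ** 2).sum()) + 1e-8)
        K.set_value(embeddings, K.get_value(embeddings) + delta)  # perturb
        outputs = old_train_function(batch)   # one step on adversarial input
        K.set_value(embeddings, K.get_value(embeddings) - delta)  # restore
        return outputs

    model.train_function = train_function     # swap in the patched step

A hypothetical call, compiling with the CRF loss and metric from keras-contrib before enabling FGM:

from keras_contrib.losses import crf_loss
from keras_contrib.metrics import crf_viterbi_accuracy

model.compile(optimizer="adam", loss=crf_loss, metrics=[crf_viterbi_accuracy])
adversarial_training(model, "Embedding-Token", epsilon=0.5)
# model.fit(...) afterwards trains every batch with the FGM perturbation

Note that this relies on Keras 2.x internals (train_function, _make_train_function), which matches the Keras==2.2.4 pin above but would not carry over to tf.keras unchanged.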
This is the end of this article. Thanks for reading~
Written in Pudong, Shanghai, on March 31, 2021~