Several parameters in BertConfig
vocab_size: Vocabulary size of input_ids in BertModel, i.e., the size of the vocabulary.
hidden_size: Size of the encoder layers and the pooler layer (this is also the embedding size).
num_hidden_layers: Number of hidden layers in the Transformer encoder.
num_attention_heads: Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size: The size of the “intermediate” (i.e., feed-forward)
layer in the Transformer encoder.
hidden_act: The non-linear activation function (function or string) in the
encoder and pooler.
hidden_dropout_prob: The dropout probability for all fully connected
layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob: The dropout ratio for the attention
probabilities.
max_position_embeddings: The maximum sequence length that this model might
ever be used with. Typically set this to something large just in case
(e.g., 512 or 1024 or 2048).
type_vocab_size: The vocabulary size of the token_type_ids passed into BertModel.
initializer_range: The stdev of the truncated_normal_initializer for
initializing all weight matrices.
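The following is a minimal sketch of how these parameters fit together, assuming the Hugging Face transformers library (whose BertConfig fields mirror the parameters above); the concrete values are the standard BERT-Base settings, used here purely for illustration:

```python
# Minimal sketch: a BERT-Base-sized configuration via transformers.BertConfig.
# Field names mirror the parameters described above; the values are the
# standard BERT-Base settings (an illustrative assumption, not from this post).
from transformers import BertConfig, BertModel

config = BertConfig(
    vocab_size=30522,                  # vocabulary size of input_ids
    hidden_size=768,                   # encoder/pooler (embedding) size
    num_hidden_layers=12,              # Transformer encoder layers
    num_attention_heads=12,            # heads in each attention layer
    intermediate_size=3072,            # feed-forward ("intermediate") size
    hidden_act="gelu",                 # activation in encoder and pooler
    hidden_dropout_prob=0.1,           # dropout for fully connected layers
    attention_probs_dropout_prob=0.1,  # dropout on attention probabilities
    max_position_embeddings=512,       # maximum sequence length supported
    type_vocab_size=2,                 # vocabulary size of token_type_ids
    initializer_range=0.02,            # stdev of truncated-normal initializer
)

model = BertModel(config)  # randomly initialized model with this geometry
```

Note that hidden_size must be divisible by num_attention_heads, since each head operates on hidden_size / num_attention_heads dimensions (768 / 12 = 64 here).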