
BERT Model Analysis: What Does type_vocab_size Mean?


Parameters in BertConfig

  • vocab_size: Vocabulary size of input_ids in BertModel.

  • hidden_size: Size of the encoder layers and the pooler layer; this is
    also the embedding size.

  • num_hidden_layers: Number of hidden layers in the Transformer
    encoder.

  • num_attention_heads: Number of attention heads for each attention
    layer in the Transformer encoder.

  • intermediate_size: The size of the "intermediate" (i.e., feed-forward)
    layer in the Transformer encoder.

  • hidden_act: The non-linear activation function (function or string) in
    the encoder and pooler.

  • hidden_dropout_prob: The dropout probability for all fully connected
    layers in the embeddings, encoder, and pooler.

  • attention_probs_dropout_prob: The dropout ratio for the attention
    probabilities.

  • max_position_embeddings: The maximum sequence length that this model
    might ever be used with. Typically set this to something large just in
    case (e.g., 512 or 1024 or 2048).

  • type_vocab_size: The vocabulary size of the token_type_ids passed into
    BertModel. For standard BERT this is 2: segment A and segment B of a
    sentence pair (see the sketch after this list).

  • initializer_range: The stdev of the truncated_normal_initializer for
    initializing all weight matrices.
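
The list above mirrors the docstring of the original BertConfig. As a
minimal sketch, here is how the same parameters appear in Hugging Face's
transformers.BertConfig, filled in with the published BERT-base values;
the token_type_ids demo at the end shows why type_vocab_size is 2 for
standard BERT. It assumes the transformers library and the
bert-base-uncased checkpoint are available.

```python
from transformers import BertConfig, BertTokenizer

# BERT-base default value for each parameter described above.
config = BertConfig(
    vocab_size=30522,                  # WordPiece vocabulary for input_ids
    hidden_size=768,                   # encoder/pooler width (= embedding size)
    num_hidden_layers=12,              # Transformer encoder layers
    num_attention_heads=12,            # attention heads per layer
    intermediate_size=3072,            # feed-forward ("intermediate") size
    hidden_act="gelu",                 # activation in encoder and pooler
    hidden_dropout_prob=0.1,           # dropout on fully connected layers
    attention_probs_dropout_prob=0.1,  # dropout on attention probabilities
    max_position_embeddings=512,       # longest supported sequence
    type_vocab_size=2,                 # token_type_ids take values {0, 1}
    initializer_range=0.02,            # stdev of truncated-normal initializer
)

# token_type_ids mark which segment each token belongs to, so the
# token-type embedding table only needs type_vocab_size = 2 rows.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoded = tokenizer("How old are you?", "I am six.")
print(encoded["token_type_ids"])
# -> [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
#    0 = sentence A ([CLS] how old are you ? [SEP]), 1 = sentence B (i am six . [SEP])
```

In other words, type_vocab_size is not about the word vocabulary at all:
it is the number of distinct segment labels the model can embed, and the
default of 2 exists because BERT's next-sentence-prediction pretraining
only ever feeds the model two segments at a time.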
