
PyTorch Notes - Position Embedding (Transformer/ViT/Swin/MAE)


Welcome to follow my CSDN: https://blog.csdn.net/caroline_wendy
Original article: https://blog.csdn.net/caroline_wendy/article/details/128447794

Position Embedding (positional encoding) — the scheme used by each architecture is listed below; a minimal PyTorch sketch follows the list.

  • Transformer
    • 1d absolute
    • sin/cos constant
  • Vision Transformer
    • 1d absolute
    • trainable
  • Swin Transformer
    • 2d relative bias
    • trainable
  • Masked AutoEncoder
    • 2d absolute
    • sin/cos constant
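The sketch below illustrates the four variants listed above. It is my own minimal illustration, not the original code of the papers; tensor shapes and names such as `dim`, `max_len`, `num_patches`, `window_size`, and `grid_size` are assumptions for demonstration, and `dim` is assumed to be even.

```python
import math
import torch
import torch.nn as nn


# 1. Transformer: 1D absolute, constant sin/cos table (not trained).
def sincos_1d(max_len, dim):
    pos = torch.arange(max_len, dtype=torch.float32).unsqueeze(1)            # (max_len, 1)
    div = torch.exp(torch.arange(0, dim, 2).float() * (-math.log(10000.0) / dim))
    pe = torch.zeros(max_len, dim)
    pe[:, 0::2] = torch.sin(pos * div)   # even channels
    pe[:, 1::2] = torch.cos(pos * div)   # odd channels
    return pe                            # fixed buffer, added to token embeddings


# 2. ViT: 1D absolute, trainable embedding (one vector per patch + cls token).
class ViTPosEmbed(nn.Module):
    def __init__(self, num_patches, dim):
        super().__init__()
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        nn.init.trunc_normal_(self.pos_embed, std=0.02)

    def forward(self, x):                # x: (B, num_patches + 1, dim)
        return x + self.pos_embed


# 3. Swin: 2D relative position bias, trainable, added to attention logits per window.
class SwinRelativeBias(nn.Module):
    def __init__(self, window_size, num_heads):
        super().__init__()
        Wh, Ww = window_size
        # one learnable bias per relative (dy, dx) offset and per head
        self.table = nn.Parameter(torch.zeros((2 * Wh - 1) * (2 * Ww - 1), num_heads))
        nn.init.trunc_normal_(self.table, std=0.02)
        coords = torch.stack(torch.meshgrid(torch.arange(Wh), torch.arange(Ww), indexing="ij"))
        coords = coords.flatten(1)                             # (2, Wh*Ww)
        rel = coords[:, :, None] - coords[:, None, :]          # (2, N, N) relative offsets
        rel = rel.permute(1, 2, 0).contiguous()                # (N, N, 2)
        rel[:, :, 0] += Wh - 1                                 # shift offsets to start from 0
        rel[:, :, 1] += Ww - 1
        rel[:, :, 0] *= 2 * Ww - 1
        self.register_buffer("index", rel.sum(-1))             # (N, N) flat table index

    def forward(self):
        N = self.index.shape[0]
        bias = self.table[self.index.view(-1)].view(N, N, -1)  # (N, N, num_heads)
        return bias.permute(2, 0, 1).contiguous()              # (num_heads, N, N)


# 4. MAE: 2D absolute sin/cos, constant — one 1D table per axis, concatenated.
def sincos_2d(grid_size, dim):
    h = sincos_1d(grid_size, dim // 2)                         # encode row index
    w = sincos_1d(grid_size, dim // 2)                         # encode column index
    pe = torch.cat([
        h[:, None, :].expand(grid_size, grid_size, dim // 2),
        w[None, :, :].expand(grid_size, grid_size, dim // 2),
    ], dim=-1)
    return pe.reshape(grid_size * grid_size, dim)              # (H*W, dim), fixed
```

In use, the constant tables from `sincos_1d` / `sincos_2d` are added to the token embeddings and never updated, the ViT table is learned jointly with the model, and the Swin bias of shape (num_heads, N, N) is added to the attention scores inside each window before the softmax.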

Papers:

  • Transformer - Attention Is All You Need
  • ViT - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
  • Swin Transformer - Hierarchical Vision Transformer using Shifted Windows
  • MAE - Masked Autoencoders Are Scalable Vision Learners