
Vision Transformer (ViT) and Its Variants


Contents

0. Introduction to Vision Transformer

1. ViT Model Architecture

1.1 Linear Projection of Flattened Patches

1.2 Transformer Encoder

1.3 MLP Head

1.4 ViT Architecture Diagram

1.5 Model Scaling

2. Hybrid ViT

3. Other Vision Transformer Variants

4. ViT Code

5. References


0. Introduction to Vision Transformer

In 2017, Vaswani et al. introduced the Transformer in 《Attention Is All You Need》, the first model to rely entirely on self-attention to compute representations of its input and output. It has since been enormously successful in natural language processing.

In 2021, Dosovitskiy et al. brought the attention mechanism to computer vision and proposed the Vision Transformer (ViT). With a large enough training dataset, ViT models reach accuracy comparable to CNN models; the figure in the paper compares different ViT versions against ResNet and EfficientNet on several datasets.

Paper: 《AN IMAGE IS WORTH 16X16 WORDS: TRANSFORMERS FOR IMAGE RECOGNITION AT SCALE》

Paper link: https://arxiv.org/abs/2010.11929

1. ViT Model Architecture

The paper presents the ViT architecture diagram shown below; the model consists of three main parts:

1) Linear Projection of Flattened Patches (the Embedding layer, which maps image patches to vectors);

2) Transformer Encoder (the encoder, which processes and learns from the input sequence);

3) MLP Head (the layer structure used for classification).

1.1 Linear Projection of Flattened Patches

In a standard Transformer module, the input is a sequence of tokens (vectors), i.e. a 2D matrix [num_token, token_dim]. Image data, however, comes as a 3D array of shape [H, W, C], so it must first pass through the Embedding layer to be converted into the kind of input the Transformer module can accept.

Taking ViT-B/16 as an example, the input image (224x224) is split into 16x16 patches, which yields (224/16) × (224/16) = 196 patches. Each patch is then mapped to a one-dimensional vector by a linear projection.

Linear Projection: the linear mapping is implemented with a convolution that has a 16x16 kernel, stride 16, and 768 output channels. This convolution changes the shape from [224, 224, 3] to [14, 14, 768]; flattening the H and W dimensions (Flattened Patches) then gives [14, 14, 768] -> [196, 768], which is exactly the 2D matrix a Transformer expects. Here 196 is the number of patches, and each patch of shape [16, 16, 3] is mapped by the convolution to a vector of length 768 (referred to simply as a token from here on).
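To make the shapes concrete, here is a minimal sketch of the conv-based linear projection for ViT-B/16 (not the full implementation, which appears in the code section; variable names are illustrative; note PyTorch uses [B, C, H, W] while the prose above uses [H, W, C]):

import torch
import torch.nn as nn

# ViT-B/16 patch embedding: a 16x16 conv with stride 16 and 768 output channels
proj = nn.Conv2d(in_channels=3, out_channels=768, kernel_size=16, stride=16)

img = torch.randn(1, 3, 224, 224)            # [B, C, H, W]
patches = proj(img)                          # [1, 768, 14, 14]
tokens = patches.flatten(2).transpose(1, 2)  # [1, 196, 768]
print(patches.shape, tokens.shape)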

Note that before the sequence is fed into the Transformer Encoder, a [class] token and a Position Embedding must be added:

1) [class] token: following BERT, the authors insert a dedicated classification token, the [class] token, into the tokens just obtained. This [class] token is a trainable parameter and has the same format as the other tokens, i.e. a vector; for ViT-B/16 it is a vector of length 768. It is concatenated with the tokens generated from the image: Cat([1, 768], [196, 768]) -> [197, 768].

2) Position Embedding: the Position Embedding is a trainable one-dimensional positional encoding (1D Pos. Emb.) that is added directly onto the tokens (element-wise add), so the shapes must match. For ViT-B/16, the shape after concatenating the [class] token is [197, 768], so the Position Embedding also has shape [197, 768].

Self-attention lets every element interact with every other element, so it has no notion of order. An image, however, is a whole, and its patches have a fixed order and are spatially related; therefore a positional embedding is added to the patch embeddings so that the model can learn the spatial relationships between patches on its own.

The authors also ran a series of ablations on the Position Embedding: there is a large gap between a model without positional embeddings and one with them, but the different ways of encoding position make almost no difference. Since the Transformer encoder operates on patch-level rather than pixel-level inputs, how exactly the spatial information is encoded matters less. The results are reported in the paper.
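A minimal sketch of adding the [class] token and the learnable 1D position embedding for the ViT-B/16 shapes discussed above (illustrative only; the full model uses nn.Parameter attributes inside the VisionTransformer class shown later):

import torch
import torch.nn as nn

tokens = torch.randn(1, 196, 768)                  # patch tokens from the linear projection
cls_token = nn.Parameter(torch.zeros(1, 1, 768))   # learnable [class] token
pos_embed = nn.Parameter(torch.zeros(1, 197, 768)) # learnable 1D position embedding

x = torch.cat((cls_token.expand(tokens.shape[0], -1, -1), tokens), dim=1)  # [1, 197, 768]
x = x + pos_embed                                  # element-wise add, so shapes must match
print(x.shape)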

1.2 Transformer Encoder

The Transformer Encoder is made up of the following parts:

1) Layer Norm: the Transformer uses Layer Normalization, which speeds up training and improves its stability;

2) Multi-Head Attention: identical to the one in the original Transformer; for details see: Transformer-《Attention Is All You Need》_HM-hhxx!的博客-CSDN博客

3) Dropout/DropPath: the original paper's code uses a plain Dropout layer here, but the re-implementation below uses DropPath;

4) MLP Block: a fully connected layer + GELU activation + Dropout. The first fully connected layer expands the number of nodes by a factor of 4, [197, 768] -> [197, 3072], and the second restores the original size, [197, 3072] -> [197, 768]. The MLP Block structure is shown in the figure below.

 

The Encoder structure is shown below: the left side is the actual structure and the right side is the structure from the paper, which omits the Dropout/DropPath layers. The encoder is simply this Encoder Block stacked L times; the MLP Block structure is shown above.
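A minimal sketch of one Encoder Block with the pre-norm residual layout described above. It uses torch.nn.MultiheadAttention as a stand-in for the custom Attention class in the code section, and omits Dropout/DropPath for brevity:

import torch
import torch.nn as nn

class EncoderBlockSketch(nn.Module):
    def __init__(self, dim=768, num_heads=12, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(             # MLP Block: Linear -> GELU -> Linear
            nn.Linear(dim, dim * mlp_ratio),  # [197, 768] -> [197, 3072]
            nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim),  # [197, 3072] -> [197, 768]
        )

    def forward(self, x):
        y = self.norm1(x)
        x = x + self.attn(y, y, y, need_weights=False)[0]  # residual around Multi-Head Attention
        x = x + self.mlp(self.norm2(x))                    # residual around MLP Block
        return x

x = torch.randn(1, 197, 768)
print(EncoderBlockSketch()(x).shape)  # torch.Size([1, 197, 768]) -- shape is preserved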

   

1.3 MLP Head

The Transformer Encoder preserves the input shape: for ViT-B/16 the input is [197, 768] and the output is also [197, 768]. There is one more Layer Norm after the Transformer Encoder, which the architecture diagram does not show; see the figure below.

At this point we only need the classification information from the Transformer Encoder, so we extract the result corresponding to the [class] token, i.e. the [1, 768] slice of the [197, 768] output. Because self-attention aggregates global information, this [class] token has already fused information from all the other tokens. The final classification result is then produced by the MLP Head. According to the original paper, the MLP Head is Linear + tanh activation + Linear when training on ImageNet-21k; when transferring to ImageNet-1k or to your own data, a single Linear layer is enough.
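A minimal sketch of the MLP Head described above (variable names are illustrative; the pre-logits layer corresponds to the representation_size option in the code section):

import torch
import torch.nn as nn

encoder_out = torch.randn(1, 197, 768)  # Transformer Encoder output (after the final Layer Norm)
cls_feature = encoder_out[:, 0]         # keep only the [class] token -> [1, 768]

# fine-tuning on ImageNet-1k (or your own data): a single Linear layer is enough
head = nn.Linear(768, 1000)
logits = head(cls_feature)              # [1, 1000]

# pre-training on ImageNet-21k: Linear + tanh "pre-logits" before the classifier
pre_logits = nn.Sequential(nn.Linear(768, 768), nn.Tanh())
logits_21k = nn.Linear(768, 21843)(pre_logits(cls_feature))  # [1, 21843]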

1.4 ViT Architecture Diagram

1.5 Model Scaling

Following BERT, the authors define 'Base' and 'Large' models and add a 'Huge' model, and they explain the naming: for example, ViT-L/16 denotes the "Large" variant with a 16 × 16 input patch size. Note that the Transformer's sequence length is inversely proportional to the square of the patch size, so models with smaller patch sizes are more expensive to compute. For this reason the ViT source code provides a 32×32 patch size in addition to 16×16.

In the table below, Layers is the number of times the Encoder Block is stacked in the Transformer Encoder, Hidden Size is the dimension of each token after the Embedding layer (the vector length), MLP Size is the number of nodes in the first fully connected layer of the MLP Block (four times the Hidden Size), and Heads is the number of heads in Multi-Head Attention.
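For reference, the three variants can be summarized as a small Python dictionary; the values below are the ones reported in the ViT paper's model-variant table (parameter counts are approximate):

# layers = stacked Encoder Blocks, hidden_size = token dim after the Embedding layer,
# mlp_size = 4 x hidden_size, heads = Multi-Head Attention heads
VIT_CONFIGS = {
    "ViT-Base":  dict(layers=12, hidden_size=768,  mlp_size=3072, heads=12, params="86M"),
    "ViT-Large": dict(layers=24, hidden_size=1024, mlp_size=4096, heads=16, params="307M"),
    "ViT-Huge":  dict(layers=32, hidden_size=1280, mlp_size=5120, heads=16, params="632M"),
}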

2. Hybrid ViT

After introducing model scaling in Section 4.1, the paper describes a hybrid model that combines traditional CNN feature extraction with the Transformer. It uses ResNet-50 as the feature extractor, but the convolutions in this R50 use StdConv2d rather than the ordinary Conv2d, and all BatchNorm layers are replaced with GroupNorm layers. In the original ResNet-50, stage1 stacks 3 blocks, stage2 stacks 4, stage3 stacks 6, and stage4 stacks 3; in this R50, the 3 blocks of stage4 are moved into stage3, so stage3 stacks 9 blocks in total.

After feature extraction by the R50 backbone, the feature map has shape [14, 14, 1024]. It is then fed into the Patch Embedding layer; note that the kernel_size and stride of the Conv2d in the Patch Embedding both become 1, so the convolution only adjusts the number of channels. Everything after that is the same as in the plain ViT.
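A minimal sketch of the hybrid patch embedding, assuming a backbone that already produces a [14, 14, 1024] feature map (the backbone itself is omitted; names are illustrative):

import torch
import torch.nn as nn

feat = torch.randn(1, 1024, 14, 14)  # feature map from the R50 backbone, [B, C, H, W]

# in the hybrid model the patch-embedding conv has kernel_size=1 and stride=1,
# so it only maps the 1024 channels into the 768-dim token space
proj = nn.Conv2d(1024, 768, kernel_size=1, stride=1)

tokens = proj(feat).flatten(2).transpose(1, 2)  # [1, 196, 768], same as the pure ViT from here on
print(tokens.shape)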

The table in the paper compares ViT, ResNet, and the hybrid (R-ViT) models. With few training epochs the hybrid model outperforms ViT, but as the number of epochs increases ViT becomes better.

3. Other Vision Transformer Variants

In 《A review of convolutional neural network architectures and their optimizations》, the authors note that several studies have found ViT models harder to optimize than CNNs because ViT lacks spatial inductive bias. Introducing convolutional strategies into ViT to make up for this missing bias can therefore improve its stability and performance. The review lists the following ViT variants:

1) LeViT (2021): introduces the idea of an attention bias to integrate positional information.

Paper: 《LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference》

Paper link: https://arxiv.org/abs/2104.01136

2) PVT (2021): the Pyramid Vision Transformer brings the pyramid structure of CNNs into the Transformer and is trained on dense partitions of the image to produce high-resolution outputs, overcoming the Transformer's shortcomings on dense prediction tasks.

Paper: 《Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions》

Paper link: https://arxiv.org/abs/2102.12122

3) T2T-ViT (2021): recursively aggregates neighboring tokens into a single token, so the image is progressively structured into tokens, and provides an efficient deeper-and-narrower backbone. In short: structures the image into tokens.

Paper: 《Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet》

Paper link: https://arxiv.org/abs/2101.11986

4) MobileViT (2021): connects MobileNetV2 with ViT and clearly outperforms other lightweight networks. In short: combines inverted residuals with ViT.

Paper: 《MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer》

Paper link: https://arxiv.org/abs/2110.02178

5) VTs (2021): Visual Transformers use a tokenizer to convert feature maps into a series of visual tokens, and a projector (Wu et al. 2020b) projects the processed visual tokens back onto the original feature map. Experiments show that VTs raise ResNet's ImageNet top-1 accuracy by 4.6 to 7 points while using fewer FLOPs and parameters. In short: feeds the image into the Transformer via a token mapping.

Paper: 《Visual Transformers: Token-based Image Representation and Processing for Computer Vision》

Paper link: https://arxiv.org/abs/2006.03677

6) Conformer (2021): combines the strengths of CNNs and Transformers by using a Feature Coupling Unit (FCU) to exchange local and global features at every stage between two parallel branches, so the model enjoys the advantages of both. In short: combines CNN and Transformer modules in parallel.

Paper: 《Conformer: Local Features Coupling Global Representations for Visual Recognition》

Paper link: https://arxiv.org/abs/2105.03889

7) BoTNet (2021): significantly improves on the baseline by replacing the spatial convolutions with global self-attention in the last three bottleneck blocks of a ResNet. In short: replaces spatial convolution with global self-attention.

Paper: 《Bottleneck Transformers for Visual Recognition》

Paper link: CVPR 2021 Open Access Repository

8) CoAtNet (2021): argues that depthwise convolution in CNNs and the attention mechanism can be connected through simple relative attention, and that stacking convolutional layers and Transformer encoders works well. In short: stacks convolutional layers and Transformer encoders.

Paper: 《CoAtNet: Marrying Convolution and Attention for All Data Sizes》

Paper link: https://arxiv.org/abs/2106.04803

9) Swin Transformer (2021): uses shifted windows to limit self-attention computation to non-overlapping local windows while still allowing cross-window connections, reaching 87.3% accuracy on ImageNet-1K. In short: restricts self-attention to non-overlapping local windows and connects them.

Paper: 《Swin Transformer: Hierarchical Vision Transformer using Shifted Windows》

Paper link: https://arxiv.org/abs/2103.14030

4. ViT Code

Original ViT code:

pytorch-image-models/vision_transformer.py at main · huggingface/pytorch-image-models · GitHub

model.py:

  1. """
  2. original code from rwightman:
  3. https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py
  4. """
  5. from functools import partial
  6. from collections import OrderedDict
  7. import torch
  8. import torch.nn as nn
  9. def drop_path(x, drop_prob: float = 0., training: bool = False):
  10. """
  11. Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
  12. This is the same as the DropConnect impl I created for EfficientNet, etc networks, however,
  13. the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper...
  14. See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for
  15. changing the layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use
  16. 'survival rate' as the argument.
  17. """
  18. if drop_prob == 0. or not training:
  19. return x
  20. keep_prob = 1 - drop_prob
  21. # work with diff dim tensors, not just 2D ConvNets
  22. shape = (x.shape[0],) + (1,) * (x.ndim - 1)
  23. random_tensor = keep_prob + \
  24. torch.rand(shape, dtype=x.dtype, device=x.device)
  25. random_tensor.floor_() # binarize
  26. output = x.div(keep_prob) * random_tensor
  27. return output
  28. class DropPath(nn.Module):
  29. """
  30. Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
  31. """
  32. def __init__(self, drop_prob=None):
  33. super(DropPath, self).__init__()
  34. self.drop_prob = drop_prob
  35. def forward(self, x):
  36. return drop_path(x, self.drop_prob, self.training)
  37. class PatchEmbed(nn.Module):
  38. """
  39. 2D Image to Patch Embedding
  40. """
  41. def __init__(self, img_size=224, patch_size=16, in_c=3, embed_dim=768, norm_layer=None):
  42. super().__init__()
  43. img_size = (img_size, img_size)
  44. patch_size = (patch_size, patch_size)
  45. self.img_size = img_size
  46. self.patch_size = patch_size
  47. self.grid_size = (img_size[0] // patch_size[0],
  48. img_size[1] // patch_size[1])
  49. self.num_patches = self.grid_size[0] * self.grid_size[1]
  50. self.proj = nn.Conv2d(
  51. in_c, embed_dim, kernel_size=patch_size, stride=patch_size)
  52. self.norm = norm_layer(embed_dim) if norm_layer else nn.Identity()
  53. def forward(self, x):
  54. B, C, H, W = x.shape
  55. assert H == self.img_size[0] and W == self.img_size[1], \
  56. f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."
  57. # flatten: [B, C, H, W] -> [B, C, HW]
  58. # transpose: [B, C, HW] -> [B, HW, C]
  59. x = self.proj(x).flatten(2).transpose(1, 2) # [B,196,768]
  60. x = self.norm(x)
  61. return x
  62. class Attention(nn.Module):
  63. def __init__(self,
  64. dim, # 输入token的dim
  65. num_heads=8,
  66. qkv_bias=False,
  67. qk_scale=None,
  68. attn_drop_ratio=0.,
  69. proj_drop_ratio=0.):
  70. super(Attention, self).__init__()
  71. self.num_heads = num_heads
  72. head_dim = dim // num_heads
  73. self.scale = qk_scale or head_dim ** -0.5 # 开根号操作
  74. self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
  75. self.attn_drop = nn.Dropout(attn_drop_ratio)
  76. self.proj = nn.Linear(dim, dim)
  77. self.proj_drop = nn.Dropout(proj_drop_ratio)
  78. def forward(self, x):
  79. # [batch_size, num_patches + 1, total_embed_dim]
  80. B, N, C = x.shape
  81. # qkv(): -> [batch_size, num_patches + 1, 3 * total_embed_dim]
  82. # reshape: -> [batch_size, num_patches + 1, 3, num_heads, embed_dim_per_head]
  83. # permute: -> [3, batch_size, num_heads, num_patches + 1, embed_dim_per_head]
  84. qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C //
  85. self.num_heads).permute(2, 0, 3, 1, 4)
  86. # [batch_size, num_heads, num_patches + 1, embed_dim_per_head]
  87. # make torchscript happy (cannot use tensor as tuple)
  88. q, k, v = qkv[0], qkv[1], qkv[2]
  89. # transpose: -> [batch_size, num_heads, embed_dim_per_head, num_patches + 1]
  90. # @: multiply -> [batch_size, num_heads, num_patches + 1, num_patches + 1]
  91. attn = (q @ k.transpose(-2, -1)) * self.scale
  92. # q*k的转置*缩放因子,缩放因子就是根号下dk
  93. attn = attn.softmax(dim=-1)
  94. attn = self.attn_drop(attn)
  95. # @: multiply -> [batch_size, num_heads, num_patches + 1, embed_dim_per_head]
  96. # transpose: -> [batch_size, num_patches + 1, num_heads, embed_dim_per_head]
  97. # reshape: -> [batch_size, num_patches + 1, total_embed_dim]
  98. x = (attn @ v).transpose(1, 2).reshape(B, N, C)
  99. x = self.proj(x)
  100. x = self.proj_drop(x)
  101. return x
  102. class Mlp(nn.Module):
  103. """
  104. MLP as used in Vision Transformer, MLP-Mixer and related networks
  105. """
  106. def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
  107. super().__init__()
  108. out_features = out_features or in_features
  109. hidden_features = hidden_features or in_features
  110. self.fc1 = nn.Linear(in_features, hidden_features)
  111. self.act = act_layer()
  112. self.fc2 = nn.Linear(hidden_features, out_features)
  113. self.drop = nn.Dropout(drop)
  114. def forward(self, x):
  115. x = self.fc1(x)
  116. x = self.act(x)
  117. x = self.drop(x)
  118. x = self.fc2(x)
  119. x = self.drop(x)
  120. return x
  121. # encoder block
  122. class Block(nn.Module):
  123. def __init__(self,
  124. dim,
  125. num_heads,
  126. mlp_ratio=4.,
  127. qkv_bias=False,
  128. qk_scale=None,
  129. drop_ratio=0.,
  130. attn_drop_ratio=0.,
  131. drop_path_ratio=0.,
  132. act_layer=nn.GELU,
  133. norm_layer=nn.LayerNorm):
  134. super(Block, self).__init__()
  135. self.norm1 = norm_layer(dim)
  136. self.attn = Attention(dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale,
  137. attn_drop_ratio=attn_drop_ratio, proj_drop_ratio=drop_ratio)
  138. # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
  139. self.drop_path = DropPath(
  140. drop_path_ratio) if drop_path_ratio > 0. else nn.Identity()
  141. self.norm2 = norm_layer(dim)
  142. mlp_hidden_dim = int(dim * mlp_ratio)
  143. self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim,
  144. act_layer=act_layer, drop=drop_ratio)
  145. def forward(self, x):
  146. x = x + self.drop_path(self.attn(self.norm1(x)))
  147. x = x + self.drop_path(self.mlp(self.norm2(x)))
  148. return x
  149. class VisionTransformer(nn.Module):
  150. def __init__(self, img_size=224, patch_size=16, in_c=3, num_classes=1000,
  151. embed_dim=768, depth=12, num_heads=12, mlp_ratio=4.0, qkv_bias=True,
  152. qk_scale=None, representation_size=None, distilled=False, drop_ratio=0.,
  153. attn_drop_ratio=0., drop_path_ratio=0., embed_layer=PatchEmbed, norm_layer=None,
  154. act_layer=None):
  155. """
  156. Args:
  157. img_size (int, tuple): input image size
  158. patch_size (int, tuple): patch size
  159. in_c (int): number of input channels
  160. num_classes (int): number of classes for classification head
  161. embed_dim (int): embedding dimension
  162. depth (int): depth of transformer
  163. num_heads (int): number of attention heads
  164. mlp_ratio (int): ratio of mlp hidden dim to embedding dim
  165. qkv_bias (bool): enable bias for qkv if True
  166. qk_scale (float): override default qk scale of head_dim ** -0.5 if set
  167. representation_size (Optional[int]): enable and set representation layer (pre-logits) to this value if set
  168. distilled (bool): model includes a distillation token and head as in DeiT models
  169. drop_ratio (float): dropout rate
  170. attn_drop_ratio (float): attention dropout rate
  171. drop_path_ratio (float): stochastic depth rate
  172. embed_layer (nn.Module): patch embedding layer
  173. norm_layer: (nn.Module): normalization layer
  174. """
  175. super(VisionTransformer, self).__init__()
  176. self.num_classes = num_classes
  177. # num_features for consistency with other models
  178. self.num_features = self.embed_dim = embed_dim
  179. self.num_tokens = 2 if distilled else 1
  180. norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6)
  181. act_layer = act_layer or nn.GELU
  182. self.patch_embed = embed_layer(
  183. img_size=img_size, patch_size=patch_size, in_c=in_c, embed_dim=embed_dim)
  184. num_patches = self.patch_embed.num_patches
  185. self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
  186. self.dist_token = nn.Parameter(torch.zeros(
  187. 1, 1, embed_dim)) if distilled else None
  188. self.pos_embed = nn.Parameter(torch.zeros(
  189. 1, num_patches + self.num_tokens, embed_dim))
  190. self.pos_drop = nn.Dropout(p=drop_ratio)
  191. # stochastic depth decay rule
  192. dpr = [x.item() for x in torch.linspace(0, drop_path_ratio, depth)]
  193. self.blocks = nn.Sequential(*[
  194. Block(dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
  195. drop_ratio=drop_ratio, attn_drop_ratio=attn_drop_ratio, drop_path_ratio=dpr[
  196. i],
  197. norm_layer=norm_layer, act_layer=act_layer)
  198. for i in range(depth)
  199. ])
  200. self.norm = norm_layer(embed_dim)
  201. # Representation layer
  202. if representation_size and not distilled:
  203. self.has_logits = True
  204. self.num_features = representation_size
  205. self.pre_logits = nn.Sequential(OrderedDict([
  206. ("fc", nn.Linear(embed_dim, representation_size)),
  207. ("act", nn.Tanh())
  208. ]))
  209. else:
  210. self.has_logits = False
  211. self.pre_logits = nn.Identity()
  212. # Classifier head(s)
  213. self.head = nn.Linear(
  214. self.num_features, num_classes) if num_classes > 0 else nn.Identity()
  215. self.head_dist = None
  216. if distilled:
  217. self.head_dist = nn.Linear(
  218. self.embed_dim, self.num_classes) if num_classes > 0 else nn.Identity()
  219. # Weight init
  220. nn.init.trunc_normal_(self.pos_embed, std=0.02)
  221. if self.dist_token is not None:
  222. nn.init.trunc_normal_(self.dist_token, std=0.02)
  223. nn.init.trunc_normal_(self.cls_token, std=0.02)
  224. self.apply(_init_vit_weights)
  225. def forward_features(self, x):
  226. # [B, C, H, W] -> [B, num_patches, embed_dim]
  227. x = self.patch_embed(x) # [B, 196, 768]
  228. # [1, 1, 768] -> [B, 1, 768]
  229. cls_token = self.cls_token.expand(x.shape[0], -1, -1)
  230. if self.dist_token is None:
  231. x = torch.cat((cls_token, x), dim=1) # [B, 197, 768]
  232. else:
  233. x = torch.cat((cls_token, self.dist_token.expand(
  234. x.shape[0], -1, -1), x), dim=1)
  235. x = self.pos_drop(x + self.pos_embed)
  236. x = self.blocks(x)
  237. x = self.norm(x)
  238. if self.dist_token is None:
  239. return self.pre_logits(x[:, 0])
  240. else:
  241. return x[:, 0], x[:, 1]
  242. def forward(self, x):
  243. x = self.forward_features(x)
  244. # 图片分类没走if这块
  245. if self.head_dist is not None:
  246. x, x_dist = self.head(x[0]), self.head_dist(x[1])
  247. if self.training and not torch.jit.is_scripting():
  248. # during inference, return the average of both classifier predictions
  249. return x, x_dist
  250. else:
  251. return (x + x_dist) / 2
  252. else:
  253. x = self.head(x)
  254. return x
  255. def _init_vit_weights(m):
  256. """
  257. ViT weight initialization
  258. :param m: module
  259. """
  260. if isinstance(m, nn.Linear):
  261. nn.init.trunc_normal_(m.weight, std=.01)
  262. if m.bias is not None:
  263. nn.init.zeros_(m.bias)
  264. elif isinstance(m, nn.Conv2d):
  265. nn.init.kaiming_normal_(m.weight, mode="fan_out")
  266. if m.bias is not None:
  267. nn.init.zeros_(m.bias)
  268. elif isinstance(m, nn.LayerNorm):
  269. nn.init.zeros_(m.bias)
  270. nn.init.ones_(m.weight)
  271. def vit_base_patch16_224(num_classes: int = 1000):
  272. """
  273. ViT-Base model (ViT-B/16) from original paper (https://arxiv.org/abs/2010.11929).
  274. ImageNet-1k weights @ 224x224, source https://github.com/google-research/vision_transformer.
  275. weights ported from official Google JAX impl:
  276. 链接: https://pan.baidu.com/s/1zqb08naP0RPqqfSXfkB2EA 密码: eu9f
  277. """
  278. model = VisionTransformer(img_size=224,
  279. patch_size=16,
  280. embed_dim=768,
  281. depth=12,
  282. num_heads=12,
  283. representation_size=None,
  284. num_classes=num_classes)
  285. return model
  286. def vit_base_patch16_224_in21k(num_classes: int = 21843, has_logits: bool = True):
  287. """
  288. ViT-Base model (ViT-B/16) from original paper (https://arxiv.org/abs/2010.11929).
  289. ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
  290. weights ported from official Google JAX impl:
  291. https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_patch16_224_in21k-e5005f0a.pth
  292. """
  293. model = VisionTransformer(img_size=224,
  294. patch_size=16,
  295. embed_dim=768,
  296. depth=12,
  297. num_heads=12,
  298. representation_size=768 if has_logits else None,
  299. num_classes=num_classes)
  300. return model
  301. def vit_base_patch32_224(num_classes: int = 1000):
  302. """
  303. ViT-Base model (ViT-B/32) from original paper (https://arxiv.org/abs/2010.11929).
  304. ImageNet-1k weights @ 224x224, source https://github.com/google-research/vision_transformer.
  305. weights ported from official Google JAX impl:
  306. 链接: https://pan.baidu.com/s/1hCv0U8pQomwAtHBYc4hmZg 密码: s5hl
  307. """
  308. model = VisionTransformer(img_size=224,
  309. patch_size=32,
  310. embed_dim=768,
  311. depth=12,
  312. num_heads=12,
  313. representation_size=None,
  314. num_classes=num_classes)
  315. return model
  316. def vit_base_patch32_224_in21k(num_classes: int = 21843, has_logits: bool = True):
  317. """
  318. ViT-Base model (ViT-B/32) from original paper (https://arxiv.org/abs/2010.11929).
  319. ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
  320. weights ported from official Google JAX impl:
  321. https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_patch32_224_in21k-8db57226.pth
  322. """
  323. model = VisionTransformer(img_size=224,
  324. patch_size=32,
  325. embed_dim=768,
  326. depth=12,
  327. num_heads=12,
  328. representation_size=768 if has_logits else None,
  329. num_classes=num_classes)
  330. return model
  331. def vit_large_patch16_224(num_classes: int = 1000):
  332. """
  333. ViT-Large model (ViT-L/16) from original paper (https://arxiv.org/abs/2010.11929).
  334. ImageNet-1k weights @ 224x224, source https://github.com/google-research/vision_transformer.
  335. weights ported from official Google JAX impl:
  336. 链接: https://pan.baidu.com/s/1cxBgZJJ6qUWPSBNcE4TdRQ 密码: qqt8
  337. """
  338. model = VisionTransformer(img_size=224,
  339. patch_size=16,
  340. embed_dim=1024,
  341. depth=24,
  342. num_heads=16,
  343. representation_size=None,
  344. num_classes=num_classes)
  345. return model
  346. def vit_large_patch16_224_in21k(num_classes: int = 21843, has_logits: bool = True):
  347. """
  348. ViT-Large model (ViT-L/16) from original paper (https://arxiv.org/abs/2010.11929).
  349. ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
  350. weights ported from official Google JAX impl:
  351. https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_patch16_224_in21k-606da67d.pth
  352. """
  353. model = VisionTransformer(img_size=224,
  354. patch_size=16,
  355. embed_dim=1024,
  356. depth=24,
  357. num_heads=16,
  358. representation_size=1024 if has_logits else None,
  359. num_classes=num_classes)
  360. return model
  361. def vit_large_patch32_224_in21k(num_classes: int = 21843, has_logits: bool = True):
  362. """
  363. ViT-Large model (ViT-L/32) from original paper (https://arxiv.org/abs/2010.11929).
  364. ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
  365. weights ported from official Google JAX impl:
  366. https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_patch32_224_in21k-9046d2e7.pth
  367. """
  368. model = VisionTransformer(img_size=224,
  369. patch_size=32,
  370. embed_dim=1024,
  371. depth=24,
  372. num_heads=16,
  373. representation_size=1024 if has_logits else None,
  374. num_classes=num_classes)
  375. return model
  376. def vit_huge_patch14_224_in21k(num_classes: int = 21843, has_logits: bool = True):
  377. """
  378. ViT-Huge model (ViT-H/14) from original paper (https://arxiv.org/abs/2010.11929).
  379. ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
  380. NOTE: converted weights not currently available, too large for github release hosting.
  381. """
  382. model = VisionTransformer(img_size=224,
  383. patch_size=14,
  384. embed_dim=1280,
  385. depth=32,
  386. num_heads=16,
  387. representation_size=1280 if has_logits else None,
  388. num_classes=num_classes)
  389. return model

5. References

1. 深度学习之图像分类(十二): Vision Transformer - 魔法学院小学弟

2. Vision Transformer详解_太阳花的小绿豆的博客-CSDN博客

3. 《A review of convolutional neural network architectures and their optimizations》 | SpringerLink
