The model consists of three modules:
Linear Projection of Flattened Patches (the Embedding layer)
Transformer Encoder (the encoder)
MLP Head (the final layers used for classification)
Embedding layer in detail
A standard Transformer module expects its input to be a sequence of tokens (vectors), i.e. a 2-D matrix [num_token, token_dim]. In the figure, token0 through token9 are all vectors; for ViT-B/16 each token vector has length 768.
Image data, however, has the format [H, W, C], a 3-D matrix, which is clearly not what the Transformer wants, so an Embedding layer first transforms the data. As shown in the figure, the image is first split into patches of a given size. For ViT-B/16, a 224x224 input image split into 16x16 patches yields (224/16)^2 = 196 patches. Each patch is then mapped to a 1-D vector by a linear projection; for ViT-B/16, each patch of shape [16, 16, 3] is mapped to a vector of length 768 (simply called a token from here on): [16, 16, 3] -> [768].
In code this is implemented directly with a single convolution layer. For ViT-B/16, a convolution with a 16x16 kernel, stride 16, and 768 output channels gives [224, 224, 3] -> [14, 14, 768]; flattening the H and W dimensions then gives [14, 14, 768] -> [196, 768], which is exactly the 2-D matrix the Transformer wants.
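As a quick shape check, here is a minimal standalone sketch of the conv-based patch embedding (the full PatchEmbed module appears in the code listing further below):

```python
import torch
import torch.nn as nn

# Conv with 16x16 kernel, stride 16 and 768 output channels: one output position per patch.
proj = nn.Conv2d(in_channels=3, out_channels=768, kernel_size=16, stride=16)

x = torch.randn(1, 3, 224, 224)      # [B, C, H, W]
x = proj(x)                          # [1, 768, 14, 14]
x = x.flatten(2).transpose(1, 2)     # flatten H, W and swap dims -> [1, 196, 768]
print(x.shape)                       # torch.Size([1, 196, 768])
```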
Before the sequence is fed into the Transformer Encoder, the [class] token and the Position Embedding must be added. In the original paper the authors, following BERT, prepend a dedicated classification [class] token to the tokens just obtained. This [class] token is a trainable parameter with the same format as the other tokens, i.e. a vector of length 768 for ViT-B/16, and it is concatenated with the tokens generated from the image: Cat([1, 768], [196, 768]) -> [197, 768]. The Position Embedding is the Positional Encoding discussed earlier for the Transformer; here it is a trainable parameter (1D Pos. Emb.) that is added directly onto the tokens (element-wise add), so its shape must match. For ViT-B/16, after prepending the [class] token the shape is [197, 768], so the Position Embedding also has shape [197, 768].
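A minimal sketch of prepending the [class] token and adding the position embedding (shapes only; the trainable parameters are zero-initialized here purely for illustration):

```python
import torch
import torch.nn as nn

tokens = torch.randn(1, 196, 768)                    # patch tokens from the Embedding layer
cls_token = nn.Parameter(torch.zeros(1, 1, 768))     # trainable [class] token
pos_embed = nn.Parameter(torch.zeros(1, 197, 768))   # trainable 1D position embedding

# Cat([1, 768], [196, 768]) -> [197, 768] per image in the batch
x = torch.cat((cls_token.expand(tokens.shape[0], -1, -1), tokens), dim=1)
x = x + pos_embed                                    # added element-wise, so shapes must match
print(x.shape)                                       # torch.Size([1, 197, 768])
```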
The authors also ran a series of ablations on the Position Embedding. The source code defaults to the 1D Pos. Emb.; compared with using no position embedding at all, accuracy improves by roughly 3 points, while the difference from the 2D Pos. Emb. is negligible.
The Transformer Encoder is simply the Encoder Block stacked L times. Each block consists of the following parts (a minimal sketch of the block follows this list):
Layer Norm, a normalization method originally proposed for NLP; here it normalizes each token. Layer Norm was covered in an earlier post.
Multi-Head Attention
Dropout/DropPath; the code of the original paper simply uses a Dropout layer.
MLP Block, shown on the right of the figure: fully connected layer + GELU activation + Dropout. Note that the first fully connected layer expands the number of nodes by a factor of 4, [197, 768] -> [197, 3072], and the second fully connected layer restores the original node count, [197, 3072] -> [197, 768].
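As forward-referenced above, here is a minimal sketch of the pre-norm residual structure of one Encoder Block, with dimensions for ViT-B/16. For brevity it uses PyTorch's built-in nn.MultiheadAttention; the full listing below implements the Attention module by hand and adds DropPath:

```python
import torch.nn as nn

class EncoderBlockSketch(nn.Module):
    # Simplified Encoder Block: LayerNorm -> Multi-Head Attention -> residual,
    # then LayerNorm -> MLP (Linear + GELU + Linear) -> residual.
    def __init__(self, dim=768, num_heads=12, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio),  # [197, 768] -> [197, 3072]
            nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim),  # [197, 3072] -> [197, 768]
        )

    def forward(self, x):
        y = self.norm1(x)
        x = x + self.attn(y, y, y, need_weights=False)[0]
        x = x + self.mlp(self.norm2(x))
        return x
```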
The output shape of the Transformer Encoder is the same as its input shape; for ViT-B/16, the input is [197, 768] and the output is still [197, 768]. Note that there is actually one more Layer Norm after the Transformer Encoder that is not drawn here; it appears in the detailed ViT structure drawn later. Since only the classification information is needed, we simply extract the result corresponding to the [class] token, i.e. slice the [1, 768] [class] token out of the [197, 768] output.
The final classification result is then obtained through the MLP Head. According to the original paper, when pre-training on ImageNet-21k the MLP Head consists of Linear + tanh activation + Linear, but when transferring to ImageNet-1k or your own dataset a single Linear layer is enough.
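A minimal sketch of extracting the [class] token and applying the MLP Head (the Linear + tanh "pre-logits" part corresponds to the ImageNet-21k pre-training setup; when fine-tuning it is replaced by an identity):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 197, 768)     # output of the Transformer Encoder (+ final LayerNorm)
cls = x[:, 0]                    # take the [class] token -> [1, 768]

# Pre-logits (Linear + tanh) is only used when pre-training on ImageNet-21k;
# for ImageNet-1k / fine-tuning, representation_size=None makes it an nn.Identity().
pre_logits = nn.Sequential(nn.Linear(768, 768), nn.Tanh())
head = nn.Linear(768, 1000)      # final classification layer

logits = head(pre_logits(cls))   # [1, 1000]
print(logits.shape)
```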
The hybrid model combines conventional CNN feature extraction with the Transformer. The hybrid model drawn here uses ResNet-50 as the feature extractor, but this ResNet differs from the standard one. First, the convolutions in this R50 use StdConv2d rather than the usual Conv2d, and all BatchNorm layers are replaced with GroupNorm layers. In the original ResNet-50, stage1 stacks 3 blocks, stage2 stacks 4, stage3 stacks 6, and stage4 stacks 3; in this R50, the 3 blocks of stage4 are moved into stage3, so stage3 stacks 9 blocks in total.
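The modified R50 backbone is not part of the listing below. As an illustration of the two changes only, here is a minimal weight-standardized convolution (the idea behind StdConv2d) paired with GroupNorm; the class name WSConv2d is made up for this sketch and this is not the exact R50 implementation used by the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    # Weight-standardized conv (illustrative): standardize each filter's weights
    # (zero mean, unit variance) before applying the convolution.
    def forward(self, x):
        w = self.weight
        w = (w - w.mean(dim=[1, 2, 3], keepdim=True)) / (w.std(dim=[1, 2, 3], keepdim=True) + 1e-5)
        return F.conv2d(x, w, self.bias, self.stride, self.padding, self.dilation, self.groups)

# BatchNorm is replaced by GroupNorm in the hybrid backbone:
conv = WSConv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
norm = nn.GroupNorm(num_groups=32, num_channels=64)
out = norm(conv(torch.randn(1, 3, 224, 224)))
print(out.shape)   # torch.Size([1, 64, 112, 112])
```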
After feature extraction through the R50 backbone, the resulting feature map has shape [14, 14, 1024]. It is then fed into the Patch Embedding layer; note that here the kernel_size and stride of the Conv2d in Patch Embedding both become 1, so it only adjusts the number of channels. Everything after that is exactly the same as in the pure ViT described above.
The paper also compares ViT, ResNet (modified as just described: both the conv layers and the Norm layers are changed), and the hybrid model. The comparison shows that with fewer training epochs the hybrid model outperforms ViT, but as the number of epochs grows ViT surpasses the hybrid.
Table 1 of the paper lists the parameters of the three model sizes (Base/Large/Huge); besides a 16x16 Patch Size, the source code also provides 32x32. Layers is the number of times the Encoder Block is stacked in the Transformer Encoder; Hidden Size is the dim of each token after the Embedding layer (the vector length); MLP Size is the node count of the first fully connected layer in the MLP Block (four times Hidden Size); Heads is the number of heads in Multi-Head Attention.
| Model | Patch Size | Layers | Hidden Size D | MLP Size | Heads | Params |
|---|---|---|---|---|---|---|
| ViT-Base | 16x16 | 12 | 768 | 3072 | 12 | 86M |
| ViT-Large | 16x16 | 24 | 1024 | 4096 | 16 | 307M |
| ViT-Huge | 14x14 | 32 | 1280 | 5120 | 16 | 632M |
ViT model code
- """
- original code from rwightman:
- https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py
- """
- from functools import partial
- from collections import OrderedDict
-
- import torch
- import torch.nn as nn
-
-
- # 随机深度
- def drop_path(x, drop_prob: float = 0., training: bool = False):
- """
- Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
- This is the same as the DropConnect impl I created for EfficientNet, etc networks, however,
- the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper...
- See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for
- changing the layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use
- 'survival rate' as the argument.
- """
- if drop_prob == 0. or not training:
- return x
- keep_prob = 1 - drop_prob
- shape = (x.shape[0],) + (1,) * (x.ndim - 1) # work with diff dim tensors, not just 2D ConvNets
- random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
- random_tensor.floor_() # binarize
- output = x.div(keep_prob) * random_tensor
- return output
-
-
- class DropPath(nn.Module):
- """
- Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
- """
- def __init__(self, drop_prob=None):
- super(DropPath, self).__init__()
- self.drop_prob = drop_prob
-
- def forward(self, x):
- return drop_path(x, self.drop_prob, self.training)
-
-
- class PatchEmbed(nn.Module):
- """
- 2D Image to Patch Embedding
- """
- # 将输入图像转化为self-attention的token向量格式
- # 输入图片224*224*3(RGB 3通道),按照16*16的patch划分
- # 224,224,3-->14,14,768
- def __init__(self, img_size=224, patch_size=16, in_c=3, embed_dim=768, norm_layer=None):
- super().__init__()
- img_size = (img_size, img_size)
- patch_size = (patch_size, patch_size)
- self.img_size = img_size
- self.patch_size = patch_size
- # 224 // 16 ,224 // 16
- self.grid_size = (img_size[0] // patch_size[0], img_size[1] // patch_size[1])
- # 计算patches的数目 14 * 14
- self.num_patches = self.grid_size[0] * self.grid_size[1]
-
- # 卷积层 conv16*16,s=16,o = [(i+2p-k)/s] + 1=[(256-16)/16]+1=14,14*14
- self.proj = nn.Conv2d(in_c, embed_dim, kernel_size=patch_size, stride=patch_size)
- # nn.Identity()不做任何操作
- self.norm = norm_layer(embed_dim) if norm_layer else nn.Identity()
-
- # 正向传播过程:传入图片数据 VIT模型中图像输入大小是固定的,无法更改
- def forward(self, x):
- B, C, H, W = x.shape
- # 检查传入图片大小是否符合预先设定
- assert H == self.img_size[0] and W == self.img_size[1], \
- f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."
-
- # flatten: [B, C, H, W] -> [B, C, HW] [B, 768, 14*14]
- # transpose: [B, C, HW] -> [B, HW, C] [B, 196, 768]
- x = self.proj(x).flatten(2).transpose(1, 2)
- x = self.norm(x)
- return x
-
-
- # multihead -self attention
- class Attention(nn.Module):
- def __init__(self,
- dim, # 输入token的dim = 768
- num_heads=8,
- qkv_bias=False,
- qk_scale=None, # 根号下d
- attn_drop_ratio=0.,
- proj_drop_ratio=0.):
- super(Attention, self).__init__()
- self.num_heads = num_heads
- # 计算每个head qkv分得的头个数
- head_dim = dim // num_heads
- # 对应根号下d
- self.scale = qk_scale or head_dim ** -0.5
- # 全连接层实现qkv,严格来说是三个分开,这里提高并行化
- self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
- self.attn_drop = nn.Dropout(attn_drop_ratio)
- self.proj = nn.Linear(dim, dim) # 要将计算完的分头全连接起来,wo映射通过linear实现
- self.proj_drop = nn.Dropout(proj_drop_ratio)
-
- # 经典
- def forward(self, x):
- # [batch_size, num_patches + 1, total_embed_dim]
- B, N, C = x.shape
-
- # qkv(): -> [batch_size, num_patches + 1, 3 * total_embed_dim]
- # reshape进行拆分: -> [batch_size, num_patches + 1, 3(代表qkv), num_heads, embed_dim_per_head]
- # permute: -> [3, batch_size, num_heads, num_patches + 1, embed_dim_per_head] 方便运算
- qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
- # [batch_size, num_heads, num_patches + 1, embed_dim_per_head]
- q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
-
- # transpose: -> [batch_size, num_heads, embed_dim_per_head, num_patches + 1]
- # @:矩阵乘法
- # @: multiply -> [batch_size, num_heads, num_patches + 1, num_patches + 1]
- attn = (q @ k.transpose(-2, -1)) * self.scale # scale进行norm处理
- # dim=-1 针对每一行进行处理
- attn = attn.softmax(dim=-1)
- attn = self.attn_drop(attn)
-
- # @: multiply -> [batch_size, num_heads, num_patches + 1, embed_dim_per_head]
- # transpose: -> [batch_size, num_patches + 1, num_heads, embed_dim_per_head]
- # reshape: -> [batch_size, num_patches + 1, total_embed_dim] 把最后两个信息拼接在一起
- x = (attn @ v).transpose(1, 2).reshape(B, N, C)
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
-
-
- class Mlp(nn.Module):
- """
- MLP as used in Vision Transformer, MLP-Mixer and related networks
- """
- def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Linear(in_features, hidden_features) # 增加通道数
- self.act = act_layer()
- self.fc2 = nn.Linear(hidden_features, out_features) # 还原通道数
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-
- # transformer中就是将block重复堆叠L次
- class Block(nn.Module):
- def __init__(self,
- dim,
- num_heads,
- mlp_ratio=4., # 第一个全连接层是输入节点个数的4倍
- qkv_bias=False,
- qk_scale=None,
- drop_ratio=0.,
- attn_drop_ratio=0.,
- drop_path_ratio=0.,
- act_layer=nn.GELU,
- norm_layer=nn.LayerNorm):
- super(Block, self).__init__()
- self.norm1 = norm_layer(dim)
- self.attn = Attention(dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale,
- attn_drop_ratio=attn_drop_ratio, proj_drop_ratio=drop_ratio)
- # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
- self.drop_path = DropPath(drop_path_ratio) if drop_path_ratio > 0. else nn.Identity()
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop_ratio)
-
- def forward(self, x):
- x = x + self.drop_path(self.attn(self.norm1(x)))
- x = x + self.drop_path(self.mlp(self.norm2(x)))
- return x
-
-
- class VisionTransformer(nn.Module):
- def __init__(self, img_size=224, patch_size=16, in_c=3, num_classes=1000,
- embed_dim=768, depth=12, num_heads=12, mlp_ratio=4.0, qkv_bias=True,
- qk_scale=None, representation_size=None, distilled=False, drop_ratio=0.,
- attn_drop_ratio=0., drop_path_ratio=0., embed_layer=PatchEmbed, norm_layer=None,
- act_layer=None):
- """
- Args:
- img_size (int, tuple): input image size
- patch_size (int, tuple): patch size
- in_c (int): number of input channels
- num_classes (int): number of classes for classification head
- embed_dim (int): embedding dimension
- depth (int): depth of transformer 重复堆叠encoder的次数
- num_heads (int): number of attention heads
- mlp_ratio (int): ratio of mlp hidden dim to embedding dim
- qkv_bias (bool): enable bias for qkv if True
- qk_scale (float): override default qk scale of head_dim ** -0.5 if set
- representation_size (Optional[int]): enable and set representation layer (pre-logits) to this value if set
- distilled (bool): model includes a distillation token and head as in DeiT models
- drop_ratio (float): dropout rate
- attn_drop_ratio (float): attention dropout rate
- drop_path_ratio (float): stochastic depth rate
- embed_layer (nn.Module): patch embedding layer
- norm_layer: (nn.Module): normalization layer
- """
- super(VisionTransformer, self).__init__()
- self.num_classes = num_classes
- self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models
- self.num_tokens = 2 if distilled else 1
- norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6)
- act_layer = act_layer or nn.GELU
-
- self.patch_embed = embed_layer(img_size=img_size, patch_size=patch_size, in_c=in_c, embed_dim=embed_dim)
- num_patches = self.patch_embed.num_patches
-
- self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
- self.dist_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) if distilled else None
- self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + self.num_tokens, embed_dim))
- self.pos_drop = nn.Dropout(p=drop_ratio)
-
- dpr = [x.item() for x in torch.linspace(0, drop_path_ratio, depth)] # stochastic depth decay rule
- self.blocks = nn.Sequential(*[
- Block(dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop_ratio=drop_ratio, attn_drop_ratio=attn_drop_ratio, drop_path_ratio=dpr[i],
- norm_layer=norm_layer, act_layer=act_layer)
- for i in range(depth)
- ])
- self.norm = norm_layer(embed_dim)
-
- # Representation layer
- if representation_size and not distilled:
- self.has_logits = True
- self.num_features = representation_size
- self.pre_logits = nn.Sequential(OrderedDict([
- ("fc", nn.Linear(embed_dim, representation_size)),
- ("act", nn.Tanh())
- ]))
- else:
- self.has_logits = False
- self.pre_logits = nn.Identity()
-
- # Classifier head(s)
- self.head = nn.Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity()
- self.head_dist = None
- if distilled:
- self.head_dist = nn.Linear(self.embed_dim, self.num_classes) if num_classes > 0 else nn.Identity()
-
- # Weight init
- nn.init.trunc_normal_(self.pos_embed, std=0.02)
- if self.dist_token is not None:
- nn.init.trunc_normal_(self.dist_token, std=0.02)
-
- nn.init.trunc_normal_(self.cls_token, std=0.02)
- self.apply(_init_vit_weights)
-
- # 正向传播过程
- def forward_features(self, x):
- # [B, C, H, W] -> [B, num_patches, embed_dim]
- x = self.patch_embed(x) # [B, 196, 768]
- # [1, 1, 768] -> [B, 1, 768]
- cls_token = self.cls_token.expand(x.shape[0], -1, -1)
- if self.dist_token is None:
- x = torch.cat((cls_token, x), dim=1) # [B, 197, 768]
- else:
- x = torch.cat((cls_token, self.dist_token.expand(x.shape[0], -1, -1), x), dim=1)
-
- x = self.pos_drop(x + self.pos_embed)
- x = self.blocks(x)
- x = self.norm(x)
- if self.dist_token is None:
- return self.pre_logits(x[:, 0])
- else:
- return x[:, 0], x[:, 1]
-
- def forward(self, x):
- x = self.forward_features(x)
- if self.head_dist is not None:
- x, x_dist = self.head(x[0]), self.head_dist(x[1])
- if self.training and not torch.jit.is_scripting():
- # during inference, return the average of both classifier predictions
- return x, x_dist
- else:
- return (x + x_dist) / 2
- else:
- x = self.head(x)
- return x
-
-
- def _init_vit_weights(m):
- """
- ViT weight initialization
- :param m: module
- """
- if isinstance(m, nn.Linear):
- nn.init.trunc_normal_(m.weight, std=.01)
- if m.bias is not None:
- nn.init.zeros_(m.bias)
- elif isinstance(m, nn.Conv2d):
- nn.init.kaiming_normal_(m.weight, mode="fan_out")
- if m.bias is not None:
- nn.init.zeros_(m.bias)
- elif isinstance(m, nn.LayerNorm):
- nn.init.zeros_(m.bias)
- nn.init.ones_(m.weight)
-
-
- def vit_base_patch16_224(num_classes: int = 1000):
- """
- ViT-Base model (ViT-B/16) from original paper (https://arxiv.org/abs/2010.11929).
- ImageNet-1k weights @ 224x224, source https://github.com/google-research/vision_transformer.
- weights ported from official Google JAX impl:
- 链接: https://pan.baidu.com/s/1zqb08naP0RPqqfSXfkB2EA 密码: eu9f
- """
- model = VisionTransformer(img_size=224,
- patch_size=16,
- embed_dim=768,
- depth=12,
- num_heads=12,
- representation_size=None,
- num_classes=num_classes)
- return model
-
-
- def vit_base_patch16_224_in21k(num_classes: int = 21843, has_logits: bool = True):
- """
- ViT-Base model (ViT-B/16) from original paper (https://arxiv.org/abs/2010.11929).
- ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
- weights ported from official Google JAX impl:
- https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_patch16_224_in21k-e5005f0a.pth
- """
- model = VisionTransformer(img_size=224,
- patch_size=16,
- embed_dim=768,
- depth=12,
- num_heads=12,
- representation_size=768 if has_logits else None,
- num_classes=num_classes)
- return model
-
-
- def vit_base_patch32_224(num_classes: int = 1000):
- """
- ViT-Base model (ViT-B/32) from original paper (https://arxiv.org/abs/2010.11929).
- ImageNet-1k weights @ 224x224, source https://github.com/google-research/vision_transformer.
- weights ported from official Google JAX impl:
- 链接: https://pan.baidu.com/s/1hCv0U8pQomwAtHBYc4hmZg 密码: s5hl
- """
- model = VisionTransformer(img_size=224,
- patch_size=32,
- embed_dim=768,
- depth=12,
- num_heads=12,
- representation_size=None,
- num_classes=num_classes)
- return model
-
-
- def vit_base_patch32_224_in21k(num_classes: int = 21843, has_logits: bool = True):
- """
- ViT-Base model (ViT-B/32) from original paper (https://arxiv.org/abs/2010.11929).
- ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
- weights ported from official Google JAX impl:
- https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_patch32_224_in21k-8db57226.pth
- """
- model = VisionTransformer(img_size=224,
- patch_size=32,
- embed_dim=768,
- depth=12,
- num_heads=12,
- representation_size=768 if has_logits else None,
- num_classes=num_classes)
- return model
-
-
- def vit_large_patch16_224(num_classes: int = 1000):
- """
- ViT-Large model (ViT-L/16) from original paper (https://arxiv.org/abs/2010.11929).
- ImageNet-1k weights @ 224x224, source https://github.com/google-research/vision_transformer.
- weights ported from official Google JAX impl:
- 链接: https://pan.baidu.com/s/1cxBgZJJ6qUWPSBNcE4TdRQ 密码: qqt8
- """
- model = VisionTransformer(img_size=224,
- patch_size=16,
- embed_dim=1024,
- depth=24,
- num_heads=16,
- representation_size=None,
- num_classes=num_classes)
- return model
-
-
- def vit_large_patch16_224_in21k(num_classes: int = 21843, has_logits: bool = True):
- """
- ViT-Large model (ViT-L/16) from original paper (https://arxiv.org/abs/2010.11929).
- ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
- weights ported from official Google JAX impl:
- https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_patch16_224_in21k-606da67d.pth
- """
- model = VisionTransformer(img_size=224,
- patch_size=16,
- embed_dim=1024,
- depth=24,
- num_heads=16,
- representation_size=1024 if has_logits else None,
- num_classes=num_classes)
- return model
-
-
- def vit_large_patch32_224_in21k(num_classes: int = 21843, has_logits: bool = True):
- """
- ViT-Large model (ViT-L/32) from original paper (https://arxiv.org/abs/2010.11929).
- ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
- weights ported from official Google JAX impl:
- https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_patch32_224_in21k-9046d2e7.pth
- """
- model = VisionTransformer(img_size=224,
- patch_size=32,
- embed_dim=1024,
- depth=24,
- num_heads=16,
- representation_size=1024 if has_logits else None,
- num_classes=num_classes)
- return model
-
-
- def vit_huge_patch14_224_in21k(num_classes: int = 21843, has_logits: bool = True):
- """
- ViT-Huge model (ViT-H/14) from original paper (https://arxiv.org/abs/2010.11929).
- ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
- NOTE: converted weights not currently available, too large for github release hosting.
- """
- model = VisionTransformer(img_size=224,
- patch_size=14,
- embed_dim=1280,
- depth=32,
- num_heads=16,
- representation_size=1280 if has_logits else None,
- num_classes=num_classes)
- return model
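A quick usage sketch for the listing above; the filename vit_model.py and the 5-class setup are just assumptions for illustration:

```python
import torch
from vit_model import vit_base_patch16_224  # assumes the listing above is saved as vit_model.py

model = vit_base_patch16_224(num_classes=5)  # e.g. fine-tuning on a 5-class dataset
model.eval()

with torch.no_grad():
    dummy = torch.randn(1, 3, 224, 224)      # ViT-B/16 expects a fixed 224x224 input
    logits = model(dummy)
print(logits.shape)                          # torch.Size([1, 5])
```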