
Adding Attention Modules in Deep Learning

In deep learning, the backbone network usually refers to the main structure or trunk of a network; it is responsible for extracting high-level features from the raw input. Backbones are typically built from convolutional neural networks (CNNs) or similar architectures and are used for feature extraction and representation learning on images, text, and other kinds of data.
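
For instance, a pretrained ResNet-50 from torchvision can serve as a backbone simply by stripping off its classification head and keeping the convolutional stages as a feature extractor. The short sketch below illustrates this idea (the specific model and input size are only illustrative choices, not requirements):

    import torch
    import torch.nn as nn
    from torchvision.models import resnet50

    # Keep everything up to the last residual stage; drop ResNet's own avgpool and fc head.
    resnet = resnet50(pretrained=True)
    backbone = nn.Sequential(*list(resnet.children())[:-2])

    x = torch.randn(1, 3, 224, 224)   # a single RGB image
    features = backbone(x)            # high-level feature map produced by the backbone
    print(features.shape)             # torch.Size([1, 2048, 7, 7])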

An attention module, in turn, is an important component for processing sequence data; for example, the Transformer model widely used in natural language processing is built around an attention mechanism. Attention lets the model focus on different parts of the input sequence and learn the correlations between them, improving both performance and generalization.
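
At its core, most attention modules compute scaled dot-product attention: each query is compared against all keys, the similarities are turned into weights with a softmax, and the values are averaged with those weights. Here is a minimal sketch (the tensor shapes are illustrative assumptions):

    import torch
    import torch.nn.functional as F

    # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
    def scaled_dot_product_attention(q, k, v):
        d = q.size(-1)
        scores = q @ k.transpose(-2, -1) / d ** 0.5   # [batch, seq_q, seq_k]
        weights = F.softmax(scores, dim=-1)           # attention weights over the keys
        return weights @ v                            # weighted sum of the values

    q = k = v = torch.randn(2, 10, 64)                # batch of two 10-token sequences, dim 64
    out = scaled_dot_product_attention(q, k, v)
    print(out.shape)                                  # torch.Size([2, 10, 64])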

Backbones and attention modules are usually combined to build end-to-end deep learning models. This combination can be realized in several ways:

  1. Inserting attention as a module: an attention module is placed at a specific layer of the backbone, or between several of its layers. This makes the model more flexible when processing the input and lets it focus on the information or features that matter for the task.

  2. Running attention in parallel with the backbone: the attention module and different parts of the backbone process the input in parallel, and their outputs are then merged or fused. This yields a richer feature representation while preserving the individual strengths of the backbone and the attention module.

  3. Making attention part of the whole model: in some designs the attention mechanism is integrated throughout the architecture. In the Transformer, for example, attention is one of the core components and interacts with the encoder, decoder, and other modules to carry out the task.

In short, how the backbone and the attention module are combined depends on the specific task and on the design requirements of the model. Working together they can improve the model's performance, and different application scenarios may call for different combinations and adjustments. The three examples below illustrate each of these options in turn.

Example: take a ResNet backbone and insert a self-attention mechanism at a specific stage inside it.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision.models import resnet50

    class SelfAttention(nn.Module):
        """Spatial self-attention over a feature map."""

        def __init__(self, in_channels, out_channels):
            super(SelfAttention, self).__init__()
            self.query_conv = nn.Conv2d(in_channels, out_channels, kernel_size=1)
            self.key_conv = nn.Conv2d(in_channels, out_channels, kernel_size=1)
            self.value_conv = nn.Conv2d(in_channels, out_channels, kernel_size=1)
            self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

        def forward(self, x):
            batch_size, channels, height, width = x.size()
            # Project to queries, keys and values, flattening the spatial dimensions.
            proj_query = self.query_conv(x).view(batch_size, -1, width * height).permute(0, 2, 1)
            proj_key = self.key_conv(x).view(batch_size, -1, width * height)
            energy = torch.bmm(proj_query, proj_key)      # pairwise similarity between positions
            attention = F.softmax(energy, dim=-1)
            proj_value = self.value_conv(x).view(batch_size, -1, width * height)
            out = torch.bmm(proj_value, attention.permute(0, 2, 1))
            out = out.view(batch_size, channels, height, width)
            out = self.gamma * out + x                    # residual connection
            return out

    class ResNetWithAttention(nn.Module):
        def __init__(self, num_classes):
            super(ResNetWithAttention, self).__init__()
            resnet = resnet50(pretrained=True)
            # Append the attention module at the end of the first residual stage (layer1, 256 channels).
            resnet.layer1.add_module("self_attention", SelfAttention(256, 256))
            # Keep the backbone up to the last residual stage; drop ResNet's own avgpool and fc.
            self.backbone = nn.Sequential(*list(resnet.children())[:-2])
            self.fc = nn.Linear(2048, num_classes)

        def forward(self, x):
            x = self.backbone(x)                                  # [N, 2048, 7, 7]
            x = F.adaptive_avg_pool2d(x, 1).view(x.size(0), -1)   # global average pooling -> [N, 2048]
            x = self.fc(x)
            return x

    # Example usage:
    model = ResNetWithAttention(num_classes=1000)
    input_tensor = torch.randn(1, 3, 224, 224)  # example input tensor
    output = model(input_tensor)
    print(output.shape)  # Should print: torch.Size([1, 1000])

In this example we define a self-attention module, SelfAttention, and append it to the end of ResNet's first residual stage, layer1. We then define a new model, ResNetWithAttention, which combines the ResNet backbone with the inserted attention module, and we add a fully connected layer at the end for classification.

This example shows how to insert an attention module into an existing backbone in PyTorch. Designed this way, a model can be adapted flexibly to different tasks and data characteristics.
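
To verify where the attention module actually landed, you can print the modified stage of the model built above (a small usage check, assuming the backbone was constructed exactly as in the example, where layer1 is the fifth child of the Sequential):

    # Lists three Bottleneck blocks followed by the appended SelfAttention module.
    for name, module in model.backbone[4].named_children():
        print(name, type(module).__name__)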

Example: to run an attention mechanism in parallel with the backbone in PyTorch, we can apply attention to the backbone's output and then merge or fuse the result with that output. In the example below, self-attention is applied to the feature map produced by a ResNet50 backbone and fused with the original features.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision.models import resnet50

    class SelfAttention(nn.Module):
        """Same spatial self-attention module as in the previous example."""

        def __init__(self, in_channels, out_channels):
            super(SelfAttention, self).__init__()
            self.query_conv = nn.Conv2d(in_channels, out_channels, kernel_size=1)
            self.key_conv = nn.Conv2d(in_channels, out_channels, kernel_size=1)
            self.value_conv = nn.Conv2d(in_channels, out_channels, kernel_size=1)
            self.gamma = nn.Parameter(torch.zeros(1))

        def forward(self, x):
            batch_size, channels, height, width = x.size()
            proj_query = self.query_conv(x).view(batch_size, -1, width * height).permute(0, 2, 1)
            proj_key = self.key_conv(x).view(batch_size, -1, width * height)
            energy = torch.bmm(proj_query, proj_key)
            attention = F.softmax(energy, dim=-1)
            proj_value = self.value_conv(x).view(batch_size, -1, width * height)
            out = torch.bmm(proj_value, attention.permute(0, 2, 1))
            out = out.view(batch_size, channels, height, width)
            out = self.gamma * out + x
            return out

    class ResNetWithAttentionParallel(nn.Module):
        def __init__(self, num_classes):
            super(ResNetWithAttentionParallel, self).__init__()
            # Use ResNet50 as a feature extractor (drop its avgpool and fc head).
            self.backbone = nn.Sequential(*list(resnet50(pretrained=True).children())[:-2])
            self.attention = SelfAttention(2048, 2048)
            # Concatenating original and attention-enhanced features doubles the channel count.
            self.fc = nn.Linear(2048 * 2, num_classes)

        def forward(self, x):
            features = self.backbone(x)                  # [N, 2048, 7, 7]
            attention_out = self.attention(features)     # [N, 2048, 7, 7]
            # Concatenate original and attention-enhanced features along the channel dimension.
            combined = torch.cat((features, attention_out), dim=1)                      # [N, 4096, 7, 7]
            pooled = F.adaptive_avg_pool2d(combined, 1).view(combined.size(0), -1)      # [N, 4096]
            return self.fc(pooled)

    # Example usage:
    model = ResNetWithAttentionParallel(num_classes=1000)
    input_tensor = torch.randn(1, 3, 224, 224)  # example input tensor
    output = model(input_tensor)
    print(output.shape)  # Should print: torch.Size([1, 1000])

In this example we define the self-attention module SelfAttention and apply it to the feature map produced by ResNet50. We then fuse the attention output with the original backbone features by concatenating them along the channel dimension, pool the result, and add a fully connected layer on top for classification.

This example shows how to process the input with an attention mechanism running in parallel with the backbone in PyTorch. Used this way, attention can enhance the features extracted by the backbone and thereby improve the model's performance and generalization.
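
Concatenation is only one way to fuse the two branches. As a variant (an illustrative sketch, not part of the original example), the code below fuses them by element-wise addition instead, which keeps the channel count at 2048; it reuses the SelfAttention class defined above:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision.models import resnet50

    class ResNetWithAttentionSum(nn.Module):
        """Parallel attention branch fused by element-wise addition (reuses SelfAttention from above)."""

        def __init__(self, num_classes):
            super(ResNetWithAttentionSum, self).__init__()
            self.backbone = nn.Sequential(*list(resnet50(pretrained=True).children())[:-2])
            self.attention = SelfAttention(2048, 2048)
            self.fc = nn.Linear(2048, num_classes)  # addition does not change the channel count

        def forward(self, x):
            features = self.backbone(x)
            fused = features + self.attention(features)   # element-wise fusion of the two branches
            pooled = F.adaptive_avg_pool2d(fused, 1).view(fused.size(0), -1)
            return self.fc(pooled)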

Example: self-attention as part of the model itself, based on the structure of the Transformer. In the Transformer, self-attention is integrated into the encoder and decoder and used to process sequence data.

Below is a simplified Transformer encoder in which self-attention layers are part of the model itself:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SelfAttention(nn.Module):
        """Multi-head self-attention for sequence data."""

        def __init__(self, embed_size, heads):
            super(SelfAttention, self).__init__()
            self.embed_size = embed_size
            self.heads = heads
            self.head_dim = embed_size // heads
            assert (
                self.head_dim * heads == embed_size
            ), "Embedding size needs to be divisible by heads"
            self.values = nn.Linear(self.head_dim, self.head_dim, bias=False)
            self.keys = nn.Linear(self.head_dim, self.head_dim, bias=False)
            self.queries = nn.Linear(self.head_dim, self.head_dim, bias=False)
            self.fc_out = nn.Linear(heads * self.head_dim, embed_size)

        def forward(self, values, keys, query, mask):
            N = query.shape[0]
            value_len, key_len, query_len = values.shape[1], keys.shape[1], query.shape[1]
            # Split the embedding into self.heads different pieces.
            values = values.reshape(N, value_len, self.heads, self.head_dim)
            keys = keys.reshape(N, key_len, self.heads, self.head_dim)
            queries = query.reshape(N, query_len, self.heads, self.head_dim)
            values = self.values(values)
            keys = self.keys(keys)
            queries = self.queries(queries)
            # Scaled dot-product attention; energy has shape [N, heads, query_len, key_len].
            energy = torch.einsum("nqhd,nkhd->nhqk", [queries, keys])
            if mask is not None:
                # The mask must broadcast to [N, heads, query_len, key_len],
                # e.g. a padding mask of shape [N, 1, 1, key_len].
                energy = energy.masked_fill(mask == 0, float("-1e20"))
            attention = torch.softmax(energy / (self.embed_size ** (1 / 2)), dim=3)
            out = torch.einsum("nhql,nlhd->nqhd", [attention, values]).reshape(
                N, query_len, self.heads * self.head_dim
            )
            out = self.fc_out(out)
            return out

    class TransformerEncoderLayer(nn.Module):
        def __init__(self, embed_size, heads, dropout, forward_expansion):
            super(TransformerEncoderLayer, self).__init__()
            self.attention = SelfAttention(embed_size, heads)
            self.norm1 = nn.LayerNorm(embed_size)
            self.norm2 = nn.LayerNorm(embed_size)
            self.feed_forward = nn.Sequential(
                nn.Linear(embed_size, forward_expansion * embed_size),
                nn.ReLU(),
                nn.Linear(forward_expansion * embed_size, embed_size),
            )
            self.dropout = nn.Dropout(dropout)

        def forward(self, value, key, query, mask):
            attention = self.attention(value, key, query, mask)
            # Add skip connection, run through normalization and finally dropout.
            x = self.dropout(self.norm1(attention + query))
            forward = self.feed_forward(x)
            out = self.dropout(self.norm2(forward + x))
            return out

    class TransformerEncoder(nn.Module):
        def __init__(
            self,
            src_vocab_size,
            embed_size,
            num_layers,
            heads,
            device,
            forward_expansion,
            dropout,
            max_length,
        ):
            super(TransformerEncoder, self).__init__()
            self.embed_size = embed_size
            self.device = device
            self.word_embedding = nn.Embedding(src_vocab_size, embed_size)
            self.position_embedding = nn.Embedding(max_length, embed_size)
            self.layers = nn.ModuleList(
                [
                    TransformerEncoderLayer(
                        embed_size,
                        heads,
                        dropout=dropout,
                        forward_expansion=forward_expansion,
                    )
                    for _ in range(num_layers)
                ]
            )
            self.dropout = nn.Dropout(dropout)

        def forward(self, x, mask):
            N, seq_length = x.shape
            positions = torch.arange(0, seq_length).expand(N, seq_length).to(self.device)
            out = self.dropout(self.word_embedding(x) + self.position_embedding(positions))
            for layer in self.layers:
                # In the encoder, queries, keys and values all come from the same sequence.
                out = layer(out, out, out, mask)
            return out

    # Example usage:
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    src_vocab_size = 1000  # example vocabulary size
    max_length = 100       # example maximum sequence length
    embed_size = 256
    heads = 8
    num_layers = 6
    forward_expansion = 4
    dropout = 0.2
    encoder = TransformerEncoder(
        src_vocab_size,
        embed_size,
        num_layers,
        heads,
        device,
        forward_expansion,
        dropout,
        max_length,
    ).to(device)

    # Example input: batch size 32, sequence length 10.
    input_tensor = torch.randint(0, src_vocab_size, (32, 10)).to(device)
    # Padding mask broadcastable to [N, heads, query_len, key_len]; all ones means "attend everywhere".
    mask = torch.ones(32, 1, 1, 10).to(device)
    output = encoder(input_tensor, mask)
    print(output.shape)  # Should print: torch.Size([32, 10, 256])

In this example we define a simplified Transformer encoder with self-attention layers as part of the model itself. The self-attention layers process the input sequence and learn the relationships between its different positions; the model as a whole takes an input sequence and produces the corresponding representation.
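
In real use, the mask would normally be built from the padding positions of the batch rather than set to all ones. A minimal sketch, assuming a padding token id of 0 (the batch below is made up purely for illustration):

    import torch

    pad_idx = 0  # assumed padding token id
    # Two sequences of length 6, right-padded with zeros.
    src = torch.tensor([
        [5, 7, 2, 9, 0, 0],
        [3, 4, 0, 0, 0, 0],
    ])

    # 1 where there is a real token, 0 where there is padding.
    # Shape [N, 1, 1, src_len] so it broadcasts over heads and query positions inside SelfAttention.
    src_mask = (src != pad_idx).unsqueeze(1).unsqueeze(2)
    print(src_mask.shape)  # torch.Size([2, 1, 1, 6])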
