Xiao Diaosi: Yu-ge, isn't there one article missing from the deep neural network series?
Xiao Yu: Which one is missing?
Xiao Diaosi: Uh... it's, you know, that one...
Xiao Yu: Just say it already. Which one?
Xiao Diaosi: It's... it's...
Xiao Yu: Out with it, stop dawdling.
Xiao Diaosi: ResNet.
Xiao Yu: Oh, is that all? I thought it was something bigger.
Xiao Diaosi: So... are you going to write it up?
Xiao Yu: Sure thing.
Xiao Diaosi: Awesome.
The Residual Network (ResNet) is a deep neural network architecture proposed by Kaiming He et al. in 2015.
The core idea of ResNet is to introduce "residual learning" to address the vanishing and exploding gradient problems that arise when training very deep networks, so that accuracy can be improved simply by adding more layers, without making training harder.
The core mechanism of a residual network is building deep models out of residual blocks.
In a traditional feed-forward network, each layer's output is simply the next layer's input.
In a residual network, a layer's input is not only passed to the next layer but is also routed around it through a skip connection and added to the output of a deeper layer.
This design lets gradients flow directly through the skip connections, which alleviates the vanishing gradient problem in deep networks.
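As a quick illustration before the full implementation below (a minimal sketch of my own, with made-up layer sizes, not the code from the paper), a residual block in PyTorch simply adds the block's input back onto the output of a couple of layers:

import torch
import torch.nn as nn

class TinyResidualBlock(nn.Module):
    # Minimal residual block: output = F(x) + x, so input and output shapes must match.
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.conv1(x))  # F(x): two stacked 3x3 convolutions
        out = self.conv2(out)
        return self.relu(out + x)       # skip connection: add the input back

x = torch.randn(1, 16, 8, 8)
print(TinyResidualBlock(16)(x).shape)   # torch.Size([1, 16, 8, 8])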
Implementing a residual network mainly involves three steps: defining the residual block, stacking blocks into stages, and assembling the full network from an input stem, the stages, and a classification head, as the code further below shows.
The basic form of a residual block can be expressed by the following formula:
\[ \mathbf{y} = \mathcal{F}(\mathbf{x}, \{W_i\}) + \mathbf{x} \]
where \(\mathbf{x}\) and \(\mathbf{y}\) are the input and output of the block, and \(\mathcal{F}(\mathbf{x}, \{W_i\})\) is the residual mapping to be learned (for example, two stacked convolutional layers); the addition is carried out by the skip connection.
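A one-line derivation (added here purely for illustration) shows why this form helps gradients survive depth: differentiating the residual formula with respect to the input gives

\[ \frac{\partial \mathbf{y}}{\partial \mathbf{x}} = I + \frac{\partial \mathcal{F}(\mathbf{x}, \{W_i\})}{\partial \mathbf{x}}, \]

so even when the Jacobian of the residual branch is small, the identity term keeps the gradient from vanishing as it flows backward through many stacked blocks.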
# -*- coding:utf-8 -*-
# @Time   : 2024-05-02
# @Author : Carl_DJ

import torch
import torch.nn as nn
import torch.nn.functional as F

# Define the residual block
class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, in_channels, out_channels, stride=1, downsample=None):
        super(BasicBlock, self).__init__()
        # First convolutional layer
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3,
                               stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        # Second convolutional layer
        self.conv2 = nn.Conv2d(out_channels, out_channels * self.expansion, kernel_size=3,
                               stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels * self.expansion)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample

    def forward(self, x):
        identity = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        if self.downsample is not None:
            identity = self.downsample(x)

        out += identity        # skip connection: add the input back
        out = self.relu(out)
        return out

# Build the ResNet model
class ResNet(nn.Module):
    def __init__(self, block, layers, num_classes=1000):
        super(ResNet, self).__init__()
        self.in_channels = 64
        # Initial convolutional stem
        self.conv1 = nn.Conv2d(3, self.in_channels, kernel_size=7, stride=2, padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(self.in_channels)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        # Residual stages
        self.layer1 = self._make_layer(block, 64, layers[0])
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
        self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(512 * block.expansion, num_classes)

    def _make_layer(self, block, out_channels, blocks, stride=1):
        downsample = None
        if stride != 1 or self.in_channels != out_channels * block.expansion:
            # 1x1 convolution so the identity branch matches the new shape
            downsample = nn.Sequential(
                nn.Conv2d(self.in_channels, out_channels * block.expansion,
                          kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(out_channels * block.expansion),
            )
        layers = []
        layers.append(block(self.in_channels, out_channels, stride, downsample))
        self.in_channels = out_channels * block.expansion
        for _ in range(1, blocks):
            layers.append(block(self.in_channels, out_channels))
        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.fc(x)
        return x

# Instantiate a ResNet model and apply it to input data
def resnet18():
    return ResNet(BasicBlock, [2, 2, 2, 2])

model = resnet18()
print(model)

# Suppose we have a batch of 4 input images
input_tensor = torch.rand(4, 3, 224, 224)
output = model(input_tensor)
print(output.size())  # torch.Size([4, 1000])
Code walkthrough:
- BasicBlock implements the residual block: two 3x3 convolutions with batch normalization form F(x), and forward() adds the identity (optionally passed through downsample to match shapes) back onto the output before the final ReLU.
- ResNet stacks these blocks: a 7x7 convolutional stem with max pooling, four residual stages built by _make_layer (doubling channels and halving resolution from the second stage on), then global average pooling and a fully connected classifier.
- _make_layer inserts a 1x1 convolution plus batch norm on the skip path whenever the stride or channel count changes, so the addition stays shape-compatible.
- resnet18() builds the 18-layer variant with two blocks per stage; the final print shows an output of shape (4, 1000) for a batch of four 224x224 RGB images.
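As an optional sanity check (my own addition, assuming torchvision is installed; it is not part of the original code), the hand-rolled resnet18() above can be compared against torchvision's reference ResNet-18, since the two architectures should have the same number of parameters:

from torchvision import models

ours = resnet18()                 # the model defined above
reference = models.resnet18()     # torchvision's ResNet-18 (1000 classes by default)
print(sum(p.numel() for p in ours.parameters()))       # total parameter count of our model
print(sum(p.numel() for p in reference.parameters()))  # should print the same number (about 11.7M)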
By introducing skip connections, the residual network (ResNet) solved a key obstacle in training deep neural networks: the identity shortcuts themselves add no extra parameters or computation, yet they let networks grow much deeper and significantly improved the performance of deep learning models.
The success of ResNet drove the adoption of deep learning across many domains, such as image classification, object detection, and semantic segmentation.
I'm Xiao Yu:
Follow Xiao Yu to learn about machine learning and deep learning.