For the detailed improvement tutorial and source code, click here! Bilibili (B站): AI学术叫叫兽. The source code is linked in the album, and a link is also posted in the channel updates. Thanks for your support, and may your research stay far ahead!
As of publication, the source-code package of the latest YOLOv8 improvement series on Bilibili has been updated with 40+ improved loss functions. Combining 2-4 of them already gives more than 100,000 improvement schemes without considering insertion position, and over a million once different positions are counted! For a focus on AI research, follow the Bilibili blogger: Ai学术叫叫兽er!
Abstract: Hyperspectral image (HSI) denoising is critical for the effective analysis and interpretation of hyperspectral data. However, jointly modeling global and local features to enhance HSI denoising has rarely been explored. In this letter, we propose a Hybrid Convolution and Attention Network (HCANet), which leverages the strengths of both convolutional neural networks (CNNs) and Transformers. To enhance the modeling of global and local features, we design a Convolution and Attention Fusion Module that captures long-range dependencies and neighborhood spectral correlations. In addition, to improve multi-scale information aggregation, we design a Multi-Scale Feed-Forward Network that extracts features at different scales to boost denoising performance. Experimental results on mainstream HSI datasets demonstrate the rationality and effectiveness of HCANet, which can effectively remove various types of complex noise. Our code is available at https://github.com/summitgao/HCANet.
Hyperspectral imaging is a powerful technique for acquiring rich spectral information from objects or scenes. Compared with RGB data, hyperspectral images (HSIs) capture much finer spectral information, so HSIs have been widely used in many practical applications such as unmixing [1] and land-cover classification [2]. However, HSIs often suffer from mixed noise that is unavoidable during sensor imaging, caused by insufficient exposure time and reflected energy. Such noise degrades image quality and hampers subsequent analysis and interpretation, while removing it improves the accuracy of ground-target detection and classification. HSI denoising is therefore a key and indispensable preprocessing step in many remote-sensing applications.

Motivated by the spatial and spectral characteristics of HSIs, traditional denoising methods rely on optimization schemes with hand-crafted priors, such as low-rank [3], total variation [4], and non-local similarity [5]. Although these methods achieve respectable performance, it usually depends on how closely the hand-crafted prior matches the real-world noise model. In recent years, convolutional neural networks (CNNs) [7] have brought new ideas to HSI denoising and shown remarkable progress. Maffei et al. [8] proposed a CNN-based HSI denoising model that takes a noise-level map as input to train the network. Wang et al. [9] proposed a convolutional network based on joint Octave convolution and an attention mechanism for HSI denoising. Pan et al. [10] proposed a progressive multi-scale information-aggregation network to remove noise in HSIs. These CNN-based methods use convolution kernels for local feature modeling. More recently, with the emergence of the Vision Transformer (ViT), Transformer-based methods have achieved great success in various computer-vision tasks, and existing Transformer-based image-denoising methods benefit from learning global contextual information. However, HSI denoising performance could be further improved if local features were also effectively considered and exploited. It is therefore important to combine CNNs and Transformers so that both local and global information contribute to denoising.

Building an effective hybrid Transformer-CNN model for HSI denoising is generally non-trivial because of two challenges: 1) the best hybrid architecture for local and global feature modeling remains an open question; convolution kernels capture local features, which means long-range information interaction is lost, and combining convolution with attention can provide a feasible solution; 2) single-scale feature aggregation in the Transformer's feed-forward network (FFN) is limited; some methods use depth-wise convolutions to improve local feature aggregation in the FFN, but with the large number of channels in the hidden layer, single-scale token aggregation can hardly exploit the rich channel representations.

To address these two challenges, we propose a Hybrid Convolution and Attention Network (HCANet) for HSI denoising, which exploits both global contextual information and local features, as shown in Fig. 1. Specifically, to enhance the modeling of global and local features, we design a Convolution and Attention Fusion Module (CAFM) that captures long-range dependencies and neighborhood spectral correlations. Furthermore, to improve multi-scale information aggregation in the FFN, we design a Multi-Scale Feed-Forward Network (MSFN) that extracts features at different scales; three parallel dilated convolutions with different dilation rates are used in the MSFN. Experiments on two real-world datasets verify that the proposed HCANet outperforms other state-of-the-art competitors.
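Before the full implementation below, here is a minimal, self-contained sketch of the multi-scale idea behind the MSFN: three parallel depth-wise convolutions with dilation rates 1, 2 and 3 cover different receptive fields over the same feature map and are then fused. This is not the authors' MSFN (which adds 3-D projections and a richer gating scheme, see the code further down); the class and variable names here are illustrative only.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleDWConvSketch(nn.Module):
    """Toy illustration: three parallel depth-wise 3x3 convolutions with
    dilation 1, 2 and 3 see effective 3x3, 5x5 and 7x7 neighborhoods on
    the same feature map; their outputs are fused by simple gating."""
    def __init__(self, channels):
        super().__init__()
        self.branch1 = nn.Conv2d(channels, channels, 3, padding=1, dilation=1, groups=channels)
        self.branch2 = nn.Conv2d(channels, channels, 3, padding=2, dilation=2, groups=channels)
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=3, dilation=3, groups=channels)

    def forward(self, x):
        # gated fusion of the three scales (the real MSFN uses a richer scheme)
        return F.gelu(self.branch1(x)) * self.branch2(x) * self.branch3(x)

if __name__ == "__main__":
    feat = torch.randn(1, 48, 64, 64)                 # [batch, channels, height, width]
    print(MultiScaleDWConvSketch(48)(feat).shape)     # torch.Size([1, 48, 64, 64])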
The contributions of this letter can be summarized as follows:
1. We explore the promising yet challenging problem of joint global and local feature modeling for HSI denoising. To the best of our knowledge, this is the first work to combine convolution and attention mechanisms for the HSI denoising task.
2. We propose the Multi-Scale Feed-Forward Network, which seamlessly extracts features at different scales and effectively suppresses multi-scale noise.
3. We conduct extensive experiments on two benchmark datasets, which verify the rationality and effectiveness of HCANet.
We have proposed HCANet, a novel network for HSI denoising. In particular, we introduce the Convolution and Attention Fusion Module (CAFM) to fuse global and local features, and the Multi-Scale Feed-Forward Network (MSFN) to extract features at multiple scales and improve denoising performance. Experimental results on challenging HSI datasets show that the proposed model is effective compared with existing HSI denoising methods, achieving notable denoising performance both in quantitative metrics and in the visual quality of the reconstructed images.
import sys
import torch
import torch.nn as nn
import torch.nn.functional as F
from pdb import set_trace as stx
import numbers
from einops import rearrange
import os

sys.path.append(os.getcwd())

# m_seed = 1
# # set the random seed
# torch.manual_seed(m_seed)
# torch.cuda.manual_seed_all(m_seed)


def to_3d(x):
    return rearrange(x, 'b c h w -> b (h w) c')


def to_4d(x, h, w):
    return rearrange(x, 'b (h w) c -> b c h w', h=h, w=w)


class BiasFree_LayerNorm(nn.Module):
    def __init__(self, normalized_shape):
        super(BiasFree_LayerNorm, self).__init__()
        if isinstance(normalized_shape, numbers.Integral):
            normalized_shape = (normalized_shape,)
        normalized_shape = torch.Size(normalized_shape)
        assert len(normalized_shape) == 1
        self.weight = nn.Parameter(torch.ones(normalized_shape))
        self.normalized_shape = normalized_shape

    def forward(self, x):
        sigma = x.var(-1, keepdim=True, unbiased=False)
        return x / torch.sqrt(sigma + 1e-5) * self.weight


class WithBias_LayerNorm(nn.Module):
    def __init__(self, normalized_shape):
        super(WithBias_LayerNorm, self).__init__()
        if isinstance(normalized_shape, numbers.Integral):
            normalized_shape = (normalized_shape,)
        normalized_shape = torch.Size(normalized_shape)
        assert len(normalized_shape) == 1
        self.weight = nn.Parameter(torch.ones(normalized_shape))
        self.bias = nn.Parameter(torch.zeros(normalized_shape))
        self.normalized_shape = normalized_shape

    def forward(self, x):
        mu = x.mean(-1, keepdim=True)
        sigma = x.var(-1, keepdim=True, unbiased=False)
        return (x - mu) / torch.sqrt(sigma + 1e-5) * self.weight + self.bias


class LayerNorm(nn.Module):
    def __init__(self, dim, LayerNorm_type):
        super(LayerNorm, self).__init__()
        if LayerNorm_type == 'BiasFree':
            self.body = BiasFree_LayerNorm(dim)
        else:
            self.body = WithBias_LayerNorm(dim)

    def forward(self, x):
        h, w = x.shape[-2:]
        return to_4d(self.body(to_3d(x)), h, w)


##########################################################################
## Multi-Scale Feed-Forward Network (MSFN)
class FeedForward(nn.Module):
    def __init__(self, dim, ffn_expansion_factor, bias):
        super(FeedForward, self).__init__()
        hidden_features = int(dim * ffn_expansion_factor)
        self.project_in = nn.Conv3d(dim, hidden_features * 3, kernel_size=(1, 1, 1), bias=bias)
        self.dwconv1 = nn.Conv3d(hidden_features, hidden_features, kernel_size=(3, 3, 3), stride=1,
                                 dilation=1, padding=1, groups=hidden_features, bias=bias)
        # self.dwconv2 = nn.Conv3d(hidden_features, hidden_features, kernel_size=(3,3,3), stride=1, dilation=2, padding=2, groups=hidden_features, bias=bias)
        # self.dwconv3 = nn.Conv3d(hidden_features, hidden_features, kernel_size=(3,3,3), stride=1, dilation=3, padding=3, groups=hidden_features, bias=bias)
        self.dwconv2 = nn.Conv2d(hidden_features, hidden_features, kernel_size=(3, 3), stride=1,
                                 dilation=2, padding=2, groups=hidden_features, bias=bias)
        self.dwconv3 = nn.Conv2d(hidden_features, hidden_features, kernel_size=(3, 3), stride=1,
                                 dilation=3, padding=3, groups=hidden_features, bias=bias)
        self.project_out = nn.Conv3d(hidden_features, dim, kernel_size=(1, 1, 1), bias=bias)

    def forward(self, x):
        x = x.unsqueeze(2)
        x = self.project_in(x)
        # three parallel depth-wise branches with dilation 1/2/3 capture multi-scale context
        x1, x2, x3 = x.chunk(3, dim=1)
        x1 = self.dwconv1(x1).squeeze(2)
        x2 = self.dwconv2(x2.squeeze(2))
        x3 = self.dwconv3(x3.squeeze(2))
        # x1 = self.dwconv1(x1)
        # x2 = self.dwconv2(x2)
        # x3 = self.dwconv3(x3)
        x = F.gelu(x1) * x2 * x3
        x = x.unsqueeze(2)
        x = self.project_out(x)
        x = x.squeeze(2)
        return x


##########################################################################
## Convolution and Attention Fusion Module (CAFM)
class Attention(nn.Module):
    def __init__(self, dim, num_heads, bias):
        super(Attention, self).__init__()
        self.num_heads = num_heads
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))
        self.qkv = nn.Conv3d(dim, dim * 3, kernel_size=(1, 1, 1), bias=bias)
        self.qkv_dwconv = nn.Conv3d(dim * 3, dim * 3, kernel_size=(3, 3, 3), stride=1, padding=1,
                                    groups=dim * 3, bias=bias)
        self.project_out = nn.Conv3d(dim, dim, kernel_size=(1, 1, 1), bias=bias)
        self.fc = nn.Conv3d(3 * self.num_heads, 9, kernel_size=(1, 1, 1), bias=True)
        self.dep_conv = nn.Conv3d(9 * dim // self.num_heads, dim, kernel_size=(3, 3, 3), bias=True,
                                  groups=dim // self.num_heads, padding=1)

    def forward(self, x):
        b, c, h, w = x.shape
        x = x.unsqueeze(2)
        qkv = self.qkv_dwconv(self.qkv(x))
        qkv = qkv.squeeze(2)
        f_conv = qkv.permute(0, 2, 3, 1)
        f_all = qkv.reshape(f_conv.shape[0], h * w, 3 * self.num_heads, -1).permute(0, 2, 1, 3)
        f_all = self.fc(f_all.unsqueeze(2))
        f_all = f_all.squeeze(2)

        # local conv branch
        f_conv = f_all.permute(0, 3, 1, 2).reshape(x.shape[0], 9 * x.shape[1] // self.num_heads, h, w)
        f_conv = f_conv.unsqueeze(2)
        out_conv = self.dep_conv(f_conv)  # B, C, H, W
        out_conv = out_conv.squeeze(2)

        # global self-attention branch
        q, k, v = qkv.chunk(3, dim=1)
        q = rearrange(q, 'b (head c) h w -> b head c (h w)', head=self.num_heads)
        k = rearrange(k, 'b (head c) h w -> b head c (h w)', head=self.num_heads)
        v = rearrange(v, 'b (head c) h w -> b head c (h w)', head=self.num_heads)
        q = torch.nn.functional.normalize(q, dim=-1)
        k = torch.nn.functional.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.temperature
        attn = attn.softmax(dim=-1)
        out = (attn @ v)
        out = rearrange(out, 'b head c (h w) -> b (head c) h w', head=self.num_heads, h=h, w=w)
        out = out.unsqueeze(2)
        out = self.project_out(out)
        out = out.squeeze(2)
        # fuse the attention and convolution branches
        output = out + out_conv
        return output


##########################################################################
## CAMixing Block
class TransformerBlock(nn.Module):
    def __init__(self, dim, num_heads, ffn_expansion_factor, bias, LayerNorm_type):
        super(TransformerBlock, self).__init__()
        self.norm1 = LayerNorm(dim, LayerNorm_type)
        self.attn = Attention(dim, num_heads, bias)
        self.norm2 = LayerNorm(dim, LayerNorm_type)
        self.ffn = FeedForward(dim, ffn_expansion_factor, bias)

    def forward(self, x):
        x = x + self.attn(self.norm1(x))
        x = x + self.ffn(self.norm2(x))
        return x


class OverlapPatchEmbed(nn.Module):
    def __init__(self, in_c=31, embed_dim=48, bias=False):
        super(OverlapPatchEmbed, self).__init__()
        self.proj = nn.Conv3d(in_c, embed_dim, kernel_size=(3, 3, 3), stride=1, padding=1, bias=bias)

    def forward(self, x):
        x = x.unsqueeze(2)
        x = self.proj(x)
        x = x.squeeze(2)
        return x


class Downsample(nn.Module):
    def __init__(self, n_feat):
        super(Downsample, self).__init__()
        self.body = nn.Sequential(nn.Conv2d(n_feat, n_feat // 2, kernel_size=3, stride=1, padding=1, bias=False),
                                  nn.PixelUnshuffle(2))

    def forward(self, x):
        # x = x.unsqueeze(2)
        x = self.body(x)
        # x = x.squeeze(2)
        return x


class Upsample(nn.Module):
    def __init__(self, n_feat):
        super(Upsample, self).__init__()
        self.body = nn.Sequential(nn.Conv2d(n_feat, n_feat * 2, kernel_size=3, stride=1, padding=1, bias=False),
                                  nn.PixelShuffle(2))

    def forward(self, x):
        # x = x.unsqueeze(2)
        x = self.body(x)
        # x = x.squeeze(2)
        return x


##########################################################################
##---------- HCANet -----------------------
class HCANet(nn.Module):
    def __init__(self,
                 inp_channels=31,
                 out_channels=31,
                 dim=48,
                 num_blocks=[2, 3, 3, 4],
                 num_refinement_blocks=1,
                 heads=[1, 2, 4, 8],
                 ffn_expansion_factor=2.66,
                 bias=False,
                 LayerNorm_type='WithBias',
                 ):
        super(HCANet, self).__init__()

        self.patch_embed = OverlapPatchEmbed(inp_channels, dim)
        self.encoder_level1 = nn.Sequential(*[TransformerBlock(dim=dim, num_heads=heads[0], ffn_expansion_factor=ffn_expansion_factor, bias=bias, LayerNorm_type=LayerNorm_type) for i in range(num_blocks[0])])
        self.down1_2 = Downsample(dim)
        self.encoder_level2 = nn.Sequential(*[TransformerBlock(dim=int(dim * 2 ** 1), num_heads=heads[1], ffn_expansion_factor=ffn_expansion_factor, bias=bias, LayerNorm_type=LayerNorm_type) for i in range(num_blocks[1])])
        self.down2_3 = Downsample(int(dim * 2 ** 1))
        self.encoder_level3 = nn.Sequential(*[TransformerBlock(dim=int(dim * 2 ** 2), num_heads=heads[2], ffn_expansion_factor=ffn_expansion_factor, bias=bias, LayerNorm_type=LayerNorm_type) for i in range(num_blocks[2])])

        self.up3_2 = Upsample(int(dim * 2 ** 2))
        self.reduce_chan_level2 = nn.Conv3d(int(dim * 2 ** 2), int(dim * 2 ** 1), kernel_size=(1, 1, 1), bias=bias)
        self.decoder_level2 = nn.Sequential(*[TransformerBlock(dim=int(dim * 2 ** 1), num_heads=heads[1], ffn_expansion_factor=ffn_expansion_factor, bias=bias, LayerNorm_type=LayerNorm_type) for i in range(num_blocks[1])])
        self.up2_1 = Upsample(int(dim * 2 ** 1))
        self.decoder_level1 = nn.Sequential(*[TransformerBlock(dim=int(dim * 2 ** 1), num_heads=heads[0], ffn_expansion_factor=ffn_expansion_factor, bias=bias, LayerNorm_type=LayerNorm_type) for i in range(num_blocks[0])])
        self.refinement = nn.Sequential(*[TransformerBlock(dim=int(dim * 2 ** 1), num_heads=heads[0], ffn_expansion_factor=ffn_expansion_factor, bias=bias, LayerNorm_type=LayerNorm_type) for i in range(num_refinement_blocks)])
        self.output = nn.Conv3d(int(dim * 2 ** 1), out_channels, kernel_size=(3, 3, 3), stride=1, padding=1, bias=bias)

    def forward(self, inp_img):
        # three-level encoder
        inp_enc_level1 = self.patch_embed(inp_img)
        out_enc_level1 = self.encoder_level1(inp_enc_level1)

        inp_enc_level2 = self.down1_2(out_enc_level1)
        out_enc_level2 = self.encoder_level2(inp_enc_level2)

        inp_enc_level3 = self.down2_3(out_enc_level2)
        out_enc_level3 = self.encoder_level3(inp_enc_level3)

        out_dec_level3 = out_enc_level3

        # decoder with encoder skip connections
        inp_dec_level2 = self.up3_2(out_dec_level3)
        inp_dec_level2 = torch.cat([inp_dec_level2, out_enc_level2], 1)
        inp_dec_level2 = self.reduce_chan_level2(inp_dec_level2.unsqueeze(2))
        inp_dec_level2 = inp_dec_level2.squeeze(2)
        out_dec_level2 = self.decoder_level2(inp_dec_level2)

        inp_dec_level1 = self.up2_1(out_dec_level2)
        inp_dec_level1 = torch.cat([inp_dec_level1, out_enc_level1], 1)
        out_dec_level1 = self.decoder_level1(inp_dec_level1)

        out_dec_level1 = self.refinement(out_dec_level1)
        # global residual connection to the noisy input
        out_dec_level1 = self.output(out_dec_level1.unsqueeze(2)).squeeze(2) + inp_img

        return out_dec_level1


if __name__ == "__main__":
    model = HCANet()
    # print(model)
    # summary(model, (1,31,128,128))
    inputs = torch.ones([2, 31, 128, 128])  # [b, c, h, w]
    outputs = model(inputs)
    print(outputs.size())
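Since the point of this series is to reuse such modules as drop-in blocks, here is a quick sanity check for a single CAMixing block (TransformerBlock) on its own feature map. It assumes you saved the code above in a file such as HCANet.py; that file name, the feature sizes, and the argument values are only examples, not the blogger's actual integration into YOLOv8.

# Sanity check for one CAMixing block (TransformerBlock).
# Assumes the classes above are saved in HCANet.py (file name is an example).
import torch
from HCANet import TransformerBlock

block = TransformerBlock(dim=64, num_heads=4, ffn_expansion_factor=2.66,
                         bias=False, LayerNorm_type='WithBias')
feat = torch.randn(2, 64, 32, 32)   # [batch, channels, height, width]
print(block(feat).shape)            # expected: torch.Size([2, 64, 32, 32]) -- shape is preserved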
Run the command:
python train.py
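The train.py script itself is only provided through the links above, so purely for orientation, here is what a minimal training entry for a YOLOv8 improvement typically looks like with the ultralytics package. The model yaml name (yolov8-HCANet.yaml) and the training arguments are placeholders, not the actual configuration from the source package.

# Hypothetical minimal train.py using the ultralytics API.
# 'yolov8-HCANet.yaml' and the arguments below are placeholders; use the
# model/data configs shipped with the source package linked above.
from ultralytics import YOLO

if __name__ == "__main__":
    model = YOLO("yolov8-HCANet.yaml")  # model yaml with the improved modules registered
    model.train(data="coco128.yaml", epochs=100, imgsz=640, batch=16)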
That's it, the modification is done!
Follow on Bilibili: AI学术叫叫兽,
get on the fast track of research,
and stay far ahead of your peers!