
DenseNet Explained (DenseNet Code Reproduction with Detailed Comments, PyTorch)

Date: 2025-06-20

For the principles and details of DenseNet, see the previous post: A Detailed Walkthrough of Classic Neural Network Papers (6) — DenseNet Study Notes (Translation + Close Reading + Code Reproduction).

Now let's reproduce the code.

DenseNet Model Overview

The DenseNet model is built from three core structures: the DenseLayer (the most basic unit of the model, performing one round of feature extraction; third row of the figure below), the DenseBlock (the basic unit of dense connectivity; left part of the second row), and the Transition (the transition unit between dense blocks; right part of the second row). Stacking these structures and adding a classification layer completes the model.

A DenseLayer consists of BN + ReLU + 1×1 Conv + BN + ReLU + 3×3 Conv. Within a DenseBlock, all layers produce feature maps of the same spatial size, so they can be concatenated along the channel dimension. Every layer in a DenseBlock outputs k feature maps, i.e., it uses k convolution kernels, so each layer contributes k channels. In DenseNet, k is called the growth rate, and it is a hyperparameter. A fairly small k (e.g., 12) is usually enough for good performance.
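This growth-rate bookkeeping can be checked with a few lines of plain Python (the helper name `denselayer_in_channels` is ours, for illustration only): with k0 input channels, the ℓ-th layer of a block receives k0 + (ℓ − 1)·k channels.

```python
def denselayer_in_channels(k0, k, layer_index):
    """Input channels seen by the layer_index-th DenseLayer (1-based) in a
    block whose input has k0 channels and whose growth rate is k."""
    return k0 + (layer_index - 1) * k

# First block of DenseNet-121: k0 = 64, k = 32, six layers
channels = [denselayer_in_channels(64, 32, i) for i in range(1, 7)]
# channels == [64, 96, 128, 160, 192, 224]
```

These values match the `norm1` channel counts in the model printout further below.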

A DenseBlock simply stacks a number of DenseLayers; within the block, the layers are densely connected to one another. The spatial size of the feature maps stays constant inside a block: there is no stride-2 convolution or pooling.

The Transition module consists of BN + ReLU + 1×1 Conv + 2×2 AvgPool. The 1×1 convolution reduces the number of channels, and the 2×2 average pooling halves the spatial size of the feature maps. The Transition layer can also compress the model: if the preceding DenseBlock outputs m channels, the Transition layer produces ⌊θm⌋ channels (via its convolution), where θ ∈ (0, 1] is the compression rate. With θ = 1 the channel count passes through unchanged (no compression); with θ < 1 the architecture is called DenseNet-C, and the paper uses θ = 0.5. A network combining bottleneck DenseLayers with a compression rate below 1 is called DenseNet-BC.
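The ⌊θm⌋ rule is simple arithmetic; a quick sketch (the helper name `transition_out_channels` is ours, not part of the reproduced code):

```python
import math

def transition_out_channels(m, theta=0.5):
    """Channels a Transition layer produces: floor(theta * m), theta in (0, 1]."""
    return math.floor(theta * m)

# theta = 1 leaves the channel count unchanged (no compression);
# DenseNet-BC uses theta = 0.5, so e.g. 256 channels become 128.
```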

 

1. Building the Initial Convolution Layer

The initial convolution layer consists of a 7×7 conv layer followed by a 3×3 max-pooling layer, both with stride 2.

Code

import torch
import torch.nn as nn

# ------------- 1. Initial convolution layer -------------
def Conv1(in_planes, places, stride=2):
    return nn.Sequential(
        nn.Conv2d(in_channels=in_planes, out_channels=places, kernel_size=7, stride=stride, padding=3, bias=False),
        nn.BatchNorm2d(places),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
    )
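As a sanity check on the stem's output shapes, the standard conv output-size formula can be applied directly (`conv_out` is our illustrative helper): a 224×224 input becomes 112×112 after the 7×7 stride-2 convolution and 56×56 after the 3×3 stride-2 pooling.

```python
def conv_out(size, kernel, stride, padding):
    """Output size of a conv/pool layer: floor((size + 2*padding - kernel) / stride) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

after_conv = conv_out(224, kernel=7, stride=2, padding=3)         # 112
after_pool = conv_out(after_conv, kernel=3, stride=2, padding=1)  # 56
```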

2. Building the Dense Block Module

2.1 The Dense Block's internal structure

_DenseLayer is the internal building block of a DenseBlock: BN + ReLU + 1×1 Conv + BN + ReLU + 3×3 Conv, with an optional dropout layer at the end for training. It performs feature extraction and determines how many feature maps are passed on: because each layer's output is concatenated with its input, successive layers receive different numbers of input channels.

One difference from other networks is worth noting: although every DenseLayer applies the same feature-extraction function, each one takes the concatenation of all preceding feature maps as its input, so the input channel count differs from layer to layer. For example, with growth_rate = 32, the input gains 32 channels at every layer. We therefore need to build num_layers instances of the layer, each with a different input width; this is one of the interesting aspects of the model.

代码

# --- (1) Internal structure of the Dense Block ---
# BN + ReLU + 1x1 Conv + BN + ReLU + 3x3 Conv
class _DenseLayer(nn.Module):
    def __init__(self, inplace, growth_rate, bn_size, drop_rate=0):
        super(_DenseLayer, self).__init__()
        self.drop_rate = drop_rate
        self.dense_layer = nn.Sequential(
            nn.BatchNorm2d(inplace),
            nn.ReLU(inplace=True),
            # growth_rate: the number of feature maps each layer produces
            nn.Conv2d(in_channels=inplace, out_channels=bn_size * growth_rate, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(bn_size * growth_rate),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=bn_size * growth_rate, out_channels=growth_rate, kernel_size=3, stride=1, padding=1, bias=False),
        )
        self.dropout = nn.Dropout(p=self.drop_rate)

    def forward(self, x):
        y = self.dense_layer(x)
        if self.drop_rate > 0:
            y = self.dropout(y)
        return torch.cat([x, y], 1)
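To sanity-check the layer's behavior, the snippet below restates _DenseLayer (so it is self-contained) and runs a dummy tensor through it, assuming PyTorch is installed: the concatenation turns 64 input channels into 64 + growth_rate = 96 output channels, with the spatial size unchanged.

```python
import torch
import torch.nn as nn

class _DenseLayer(nn.Module):
    """Restatement of the _DenseLayer above, for a self-contained smoke test."""
    def __init__(self, inplace, growth_rate, bn_size, drop_rate=0):
        super(_DenseLayer, self).__init__()
        self.drop_rate = drop_rate
        self.dense_layer = nn.Sequential(
            nn.BatchNorm2d(inplace),
            nn.ReLU(inplace=True),
            nn.Conv2d(inplace, bn_size * growth_rate, kernel_size=1, bias=False),
            nn.BatchNorm2d(bn_size * growth_rate),
            nn.ReLU(inplace=True),
            nn.Conv2d(bn_size * growth_rate, growth_rate, kernel_size=3, padding=1, bias=False),
        )
        self.dropout = nn.Dropout(p=drop_rate)

    def forward(self, x):
        y = self.dense_layer(x)
        if self.drop_rate > 0:
            y = self.dropout(y)
        return torch.cat([x, y], 1)  # concatenate input with the new features

layer = _DenseLayer(inplace=64, growth_rate=32, bn_size=4)
out = layer(torch.randn(1, 64, 56, 56))
# out.shape == (1, 96, 56, 56): channels grow by growth_rate, spatial size unchanged
```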

2.2 Building the Dense Block module

Next, implement the DenseBlock module itself; internally the layers are densely connected, so the number of input features grows linearly.

Code

# --- (2) The Dense Block module ---
class DenseBlock(nn.Module):
    def __init__(self, num_layers, inplances, growth_rate, bn_size, drop_rate=0):
        super(DenseBlock, self).__init__()
        layers = []
        # Each additional layer increases the input channel count by growth_rate
        for i in range(num_layers):
            layers.append(_DenseLayer(inplances + i * growth_rate, growth_rate, bn_size, drop_rate))
        self.layers = nn.Sequential(*layers)

    def forward(self, x):
        return self.layers(x)

3. Building the Transition Module

The Transition module reduces model complexity. It consists of BN + ReLU + 1×1 Conv + 2×2 AvgPool. Since every dense block increases the channel count, using too many of them would make the model overly complex; the transition layer controls this. It uses a 1×1 convolution to reduce the number of channels, and an average-pooling layer with stride 2 to halve the height and width.

Code

# ------------- 3. Transition layer -------------
# BN + ReLU + 1x1 Conv + 2x2 AvgPool
class _TransitionLayer(nn.Module):
    def __init__(self, inplace, plance):
        super(_TransitionLayer, self).__init__()
        self.transition_layer = nn.Sequential(
            nn.BatchNorm2d(inplace),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=inplace, out_channels=plance, kernel_size=1, stride=1, padding=0, bias=False),
            nn.AvgPool2d(kernel_size=2, stride=2),
        )

    def forward(self, x):
        return self.transition_layer(x)
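A quick shape check, assuming PyTorch is available; the Sequential below mirrors _TransitionLayer with inplace=256, plance=128 (the first transition in DenseNet-121):

```python
import torch
import torch.nn as nn

# Mirrors _TransitionLayer(inplace=256, plance=128) from the code above
transition = nn.Sequential(
    nn.BatchNorm2d(256),
    nn.ReLU(inplace=True),
    nn.Conv2d(256, 128, kernel_size=1, bias=False),
    nn.AvgPool2d(kernel_size=2, stride=2),
)
out = transition(torch.randn(1, 256, 56, 56))
# out.shape == (1, 128, 28, 28): channels and spatial size are both halved
```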

4. Assembling the DenseNet Network

As shown in the figure below, DenseNet mainly consists of multiple DenseBlocks.

Code

# ------------- 4. The DenseNet network -------------
class DenseNet(nn.Module):
    def __init__(self, init_channels=64, growth_rate=32, blocks=[6, 12, 24, 16], num_classes=10):
        super(DenseNet, self).__init__()
        bn_size = 4
        drop_rate = 0
        self.conv1 = Conv1(in_planes=3, places=init_channels)

        # The first block's input width comes from the stem above
        num_features = init_channels
        # 1st DenseBlock: 6 DenseLayers, i.e. DenseBlock(6, 64, 32, 4)
        self.layer1 = DenseBlock(num_layers=blocks[0], inplances=num_features, growth_rate=growth_rate, bn_size=bn_size, drop_rate=drop_rate)
        num_features = num_features + blocks[0] * growth_rate
        # 1st transition: _TransitionLayer(256, 128)
        self.transition1 = _TransitionLayer(inplace=num_features, plance=num_features // 2)
        # num_features is halved; after round 1, the 2nd DenseBlock's input is num_features = 128
        num_features = num_features // 2
        # 2nd DenseBlock: 12 DenseLayers, i.e. DenseBlock(12, 128, 32, 4)
        self.layer2 = DenseBlock(num_layers=blocks[1], inplances=num_features, growth_rate=growth_rate, bn_size=bn_size, drop_rate=drop_rate)
        num_features = num_features + blocks[1] * growth_rate
        # 2nd transition: _TransitionLayer(512, 256)
        self.transition2 = _TransitionLayer(inplace=num_features, plance=num_features // 2)
        # num_features is halved; after round 2, the 3rd DenseBlock's input is num_features = 256
        num_features = num_features // 2
        # 3rd DenseBlock: 24 DenseLayers, i.e. DenseBlock(24, 256, 32, 4)
        self.layer3 = DenseBlock(num_layers=blocks[2], inplances=num_features, growth_rate=growth_rate, bn_size=bn_size, drop_rate=drop_rate)
        num_features = num_features + blocks[2] * growth_rate
        # 3rd transition: _TransitionLayer(1024, 512)
        self.transition3 = _TransitionLayer(inplace=num_features, plance=num_features // 2)
        # num_features is halved; after round 3, the 4th DenseBlock's input is num_features = 512
        num_features = num_features // 2
        # 4th DenseBlock: 16 DenseLayers, i.e. DenseBlock(16, 512, 32, 4)
        self.layer4 = DenseBlock(num_layers=blocks[3], inplances=num_features, growth_rate=growth_rate, bn_size=bn_size, drop_rate=drop_rate)
        num_features = num_features + blocks[3] * growth_rate

        self.avgpool = nn.AvgPool2d(7, stride=1)
        self.fc = nn.Linear(num_features, num_classes)

    def forward(self, x):
        x = self.conv1(x)
        x = self.layer1(x)
        x = self.transition1(x)
        x = self.layer2(x)
        x = self.transition2(x)
        x = self.layer3(x)
        x = self.transition3(x)
        x = self.layer4(x)
        x = self.avgpool(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)
        return x
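The num_features bookkeeping in __init__ can be traced with plain Python (`densenet_feature_trace` is our illustrative helper): the four blocks receive 64, 128, 256 and 512 channels respectively, and the classifier sees 1024.

```python
def densenet_feature_trace(init_channels=64, growth_rate=32, blocks=(6, 12, 24, 16)):
    """Trace num_features through the DenseBlock/Transition chain above.
    Returns the channel count entering each block, and the final count."""
    num_features = init_channels
    entering = []
    for i, n in enumerate(blocks):
        entering.append(num_features)
        num_features += n * growth_rate   # each DenseLayer adds growth_rate channels
        if i != len(blocks) - 1:
            num_features //= 2            # a transition halves the channels
    return entering, num_features

# densenet_feature_trace() == ([64, 128, 256, 512], 1024)
```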

5. Creating and Testing the Network Model

5.1 Creating and printing the model (densenet121)

Q: Where does the 121 in densenet121 come from?

From the parameters: block_num = 4 and each block has num_layers = (6, 12, 24, 16), giving 58 DenseLayers in total.

From the code, each DenseLayer contains two convolutions. There are three _Transition layers, each with one convolution, plus one convolution at the very beginning and one fully connected layer at the end. In total: 58 × 2 + 3 + 1 + 1 = 121.
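The count is easy to verify in a couple of lines:

```python
# Each DenseLayer has 2 convolutions; each of the 3 transitions has 1;
# plus the stem convolution and the final fully connected layer.
blocks = [6, 12, 24, 16]
dense_layers = sum(blocks)            # 58
total = dense_layers * 2 + 3 + 1 + 1  # 121
```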

Code

import torchvision

if __name__ == '__main__':
    model = torchvision.models.densenet121()  # the printout below comes from this
    # model = DenseNet(num_classes=1000)      # or test the DenseNet defined above
    print(model)
    input = torch.randn(1, 3, 224, 224)
    out = model(input)
    print(out.shape)

The printed model is shown below.

DenseNet( (features): Sequential( (conv0): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) (norm0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu0): ReLU(inplace=True) (pool0): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False) (denseblock1): _DenseBlock( (denselayer1): _DenseLayer( (norm1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer2): _DenseLayer( (norm1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(96, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer3): _DenseLayer( (norm1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer4): _DenseLayer( (norm1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(160, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, 
kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer5): _DenseLayer( (norm1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(192, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer6): _DenseLayer( (norm1): BatchNorm2d(224, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(224, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (transition1): _Transition( (norm): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) ) (denseblock2): _DenseBlock( (denselayer1): _DenseLayer( (norm1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer2): _DenseLayer( (norm1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(160, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): 
Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer3): _DenseLayer( (norm1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(192, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer4): _DenseLayer( (norm1): BatchNorm2d(224, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(224, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer5): _DenseLayer( (norm1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer6): _DenseLayer( (norm1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(288, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer7): _DenseLayer( (norm1): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(320, 128, kernel_size=(1, 1), stride=(1, 1), 
bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer8): _DenseLayer( (norm1): BatchNorm2d(352, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(352, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer9): _DenseLayer( (norm1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(384, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer10): _DenseLayer( (norm1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(416, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer11): _DenseLayer( (norm1): BatchNorm2d(448, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(448, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer12): _DenseLayer( (norm1): BatchNorm2d(480, eps=1e-05, 
momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(480, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (transition2): _Transition( (norm): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) ) (denseblock3): _DenseBlock( (denselayer1): _DenseLayer( (norm1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer2): _DenseLayer( (norm1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(288, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer3): _DenseLayer( (norm1): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(320, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer4): _DenseLayer( (norm1): BatchNorm2d(352, 
eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(352, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer5): _DenseLayer( (norm1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(384, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer6): _DenseLayer( (norm1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(416, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer7): _DenseLayer( (norm1): BatchNorm2d(448, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(448, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer8): _DenseLayer( (norm1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(480, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): 
Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer9): _DenseLayer( (norm1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer10): _DenseLayer( (norm1): BatchNorm2d(544, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(544, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer11): _DenseLayer( (norm1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(576, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer12): _DenseLayer( (norm1): BatchNorm2d(608, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(608, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer13): _DenseLayer( (norm1): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(640, 128, kernel_size=(1, 1), stride=(1, 1), 
bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer14): _DenseLayer( (norm1): BatchNorm2d(672, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(672, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer15): _DenseLayer( (norm1): BatchNorm2d(704, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(704, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer16): _DenseLayer( (norm1): BatchNorm2d(736, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(736, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer17): _DenseLayer( (norm1): BatchNorm2d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(768, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer18): _DenseLayer( (norm1): BatchNorm2d(800, eps=1e-05, 
momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(800, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer19): _DenseLayer( (norm1): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(832, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer20): _DenseLayer( (norm1): BatchNorm2d(864, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(864, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer21): _DenseLayer( (norm1): BatchNorm2d(896, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer22): _DenseLayer( (norm1): BatchNorm2d(928, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(928, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): 
Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer23): _DenseLayer( (norm1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(960, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer24): _DenseLayer( (norm1): BatchNorm2d(992, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(992, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) ) (transition3): _Transition( (norm): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (pool): AvgPool2d(kernel_size=2, stride=2, padding=0) ) (denseblock4): _DenseBlock( (denselayer1): _DenseLayer( (norm1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer2): _DenseLayer( (norm1): BatchNorm2d(544, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(544, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): 
ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer3): _DenseLayer( (norm1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(576, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer4): _DenseLayer( (norm1): BatchNorm2d(608, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(608, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer5): _DenseLayer( (norm1): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(640, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer6): _DenseLayer( (norm1): BatchNorm2d(672, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(672, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu2): ReLU(inplace=True) (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) ) (denselayer7): _DenseLayer( (norm1): BatchNorm2d(704, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu1): ReLU(inplace=True) (conv1): Conv2d(704, 128, 
kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu2): ReLU(inplace=True)
        (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      )
      (denselayer8): _DenseLayer(
        (norm1): BatchNorm2d(736, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu1): ReLU(inplace=True)
        (conv1): Conv2d(736, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu2): ReLU(inplace=True)
        (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      )
      (denselayer9): _DenseLayer(
        (norm1): BatchNorm2d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu1): ReLU(inplace=True)
        (conv1): Conv2d(768, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu2): ReLU(inplace=True)
        (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      )
      (denselayer10): _DenseLayer(
        (norm1): BatchNorm2d(800, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu1): ReLU(inplace=True)
        (conv1): Conv2d(800, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu2): ReLU(inplace=True)
        (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      )
      (denselayer11): _DenseLayer(
        (norm1): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu1): ReLU(inplace=True)
        (conv1): Conv2d(832, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu2): ReLU(inplace=True)
        (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      )
      (denselayer12): _DenseLayer(
        (norm1): BatchNorm2d(864, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu1): ReLU(inplace=True)
        (conv1): Conv2d(864, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu2): ReLU(inplace=True)
        (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      )
      (denselayer13): _DenseLayer(
        (norm1): BatchNorm2d(896, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu1): ReLU(inplace=True)
        (conv1): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu2): ReLU(inplace=True)
        (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      )
      (denselayer14): _DenseLayer(
        (norm1): BatchNorm2d(928, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu1): ReLU(inplace=True)
        (conv1): Conv2d(928, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu2): ReLU(inplace=True)
        (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      )
      (denselayer15): _DenseLayer(
        (norm1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu1): ReLU(inplace=True)
        (conv1): Conv2d(960, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu2): ReLU(inplace=True)
        (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      )
      (denselayer16): _DenseLayer(
        (norm1): BatchNorm2d(992, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu1): ReLU(inplace=True)
        (conv1): Conv2d(992, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu2): ReLU(inplace=True)
        (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      )
    )
    (norm5): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (classifier): Linear(in_features=1024, out_features=1000, bias=True)
)
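The `norm1` input widths in the printout above (736, 768, ..., 992) follow directly from the growth rate k = 32: layer i of a dense block sees the block's input channels plus k channels from every preceding layer. A quick plain-Python sketch (no framework needed; the function name is ours, for illustration only) that reproduces denseblock4's channel progression:

```python
# Channels entering layer i of a dense block: in_channels + growth_rate * (i - 1)
def dense_block_channels(in_channels, growth_rate, num_layers):
    return [in_channels + growth_rate * (i - 1) for i in range(1, num_layers + 1)]

# denseblock4 of DenseNet-121: 512 input channels, k = 32, 16 layers
print(dense_block_channels(512, 32, 16))
# [512, 544, 576, 608, 640, 672, 704, 736, 768, 800, 832, 864, 896, 928, 960, 992]
```

The last nine values match the `norm1`/`conv1` widths of denselayer8 through denselayer16 shown above, and the final layer's output (992 + 32 = 1024) matches `norm5` and the classifier's `in_features`.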

5.2 Printing the per-layer details of the model with torchsummary

Code

# requires: from torchsummary import summary
if __name__ == '__main__':
    net = DenseNet(num_classes=10).cuda()
    summary(net, (3, 224, 224))

The printed summary is as follows

PyTorch Version:  1.12.0+cu113
Torchvision Version:  0.13.0+cu113
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1         [-1, 64, 112, 112]           9,408
       BatchNorm2d-2         [-1, 64, 112, 112]             128
              ReLU-3         [-1, 64, 112, 112]               0
         MaxPool2d-4           [-1, 64, 56, 56]               0
       BatchNorm2d-5           [-1, 64, 56, 56]             128
              ReLU-6           [-1, 64, 56, 56]               0
            Conv2d-7          [-1, 128, 56, 56]           8,192
       BatchNorm2d-8          [-1, 128, 56, 56]             256
              ReLU-9          [-1, 128, 56, 56]               0
           Conv2d-10           [-1, 32, 56, 56]          36,864
      _DenseLayer-11           [-1, 96, 56, 56]               0
      BatchNorm2d-12           [-1, 96, 56, 56]             192
             ReLU-13           [-1, 96, 56, 56]               0
           Conv2d-14          [-1, 128, 56, 56]          12,288
      BatchNorm2d-15          [-1, 128, 56, 56]             256
             ReLU-16          [-1, 128, 56, 56]               0
           Conv2d-17           [-1, 32, 56, 56]          36,864
      _DenseLayer-18          [-1, 128, 56, 56]               0
      BatchNorm2d-19          [-1, 128, 56, 56]             256
             ReLU-20          [-1, 128, 56, 56]               0
           Conv2d-21          [-1, 128, 56, 56]          16,384
      BatchNorm2d-22          [-1, 128, 56, 56]             256
             ReLU-23          [-1, 128, 56, 56]               0
           Conv2d-24           [-1, 32, 56, 56]          36,864
      _DenseLayer-25          [-1, 160, 56, 56]               0
      BatchNorm2d-26          [-1, 160, 56, 56]             320
             ReLU-27          [-1, 160, 56, 56]               0
           Conv2d-28          [-1, 128, 56, 56]          20,480
      BatchNorm2d-29          [-1, 128, 56, 56]             256
             ReLU-30          [-1, 128, 56, 56]               0
           Conv2d-31           [-1, 32, 56, 56]          36,864
      _DenseLayer-32          [-1, 192, 56, 56]               0
      BatchNorm2d-33          [-1, 192, 56, 56]             384
             ReLU-34          [-1, 192, 56, 56]               0
           Conv2d-35          [-1, 128, 56, 56]          24,576
      BatchNorm2d-36          [-1, 128, 56, 56]             256
             ReLU-37          [-1, 128, 56, 56]               0
           Conv2d-38           [-1, 32, 56, 56]          36,864
      _DenseLayer-39          [-1, 224, 56, 56]               0
      BatchNorm2d-40          [-1, 224, 56, 56]             448
             ReLU-41          [-1, 224, 56, 56]               0
           Conv2d-42          [-1, 128, 56, 56]          28,672
      BatchNorm2d-43          [-1, 128, 56, 56]             256
             ReLU-44          [-1, 128, 56, 56]               0
           Conv2d-45           [-1, 32, 56, 56]          36,864
      _DenseLayer-46          [-1, 256, 56, 56]               0
       DenseBlock-47          [-1, 256, 56, 56]               0
      BatchNorm2d-48          [-1, 256, 56, 56]             512
             ReLU-49          [-1, 256, 56, 56]               0
           Conv2d-50          [-1, 128, 56, 56]          32,768
        AvgPool2d-51          [-1, 128, 28, 28]               0
 _TransitionLayer-52          [-1, 128, 28, 28]               0
      BatchNorm2d-53          [-1, 128, 28, 28]             256
             ReLU-54          [-1, 128, 28, 28]               0
           Conv2d-55          [-1, 128, 28, 28]          16,384
      BatchNorm2d-56          [-1, 128, 28, 28]             256
             ReLU-57          [-1, 128, 28, 28]               0
           Conv2d-58           [-1, 32, 28, 28]          36,864
      _DenseLayer-59          [-1, 160, 28, 28]               0
      BatchNorm2d-60          [-1, 160, 28, 28]             320
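The per-layer parameter counts in the summary are easy to verify by hand: both convolutions in a bottleneck DenseLayer are bias-free, so the 1×1 conv costs in_ch × (bn_size·k) parameters and the 3×3 conv costs (bn_size·k) × k × 3 × 3. A small sketch (the function name is ours; bn_size = 4 and k = 32 are the values appearing in the summary) that reproduces the Conv2d-7 and Conv2d-10 entries:

```python
# Parameter counts for one bias-free bottleneck DenseLayer
def dense_layer_params(in_ch, growth_rate=32, bn_size=4):
    mid = bn_size * growth_rate        # 128 bottleneck channels, as in the table
    conv1 = in_ch * mid * 1 * 1        # 1x1 conv: in_ch -> mid
    conv2 = mid * growth_rate * 3 * 3  # 3x3 conv: mid -> growth_rate
    return conv1, conv2

print(dense_layer_params(64))  # (8192, 36864): matches Conv2d-7 and Conv2d-10
```

The second value is the same 36,864 for every layer, since the 3×3 conv always maps 128 channels to 32; only the 1×1 conv grows with the layer's input width (e.g. 96 input channels gives 12,288, matching Conv2d-14).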

