PyTorch and TensorFlow are the two major machine learning frameworks. The previous post worked through my TensorFlow self-study notes in full; this one organizes my PyTorch self-study notes the same way. The content draws on many posts found online, which are not cited one by one — if anything here infringes your rights, please contact me.
PyTorch is the Python incarnation of Torch, a neural network framework open-sourced by Facebook and built for GPU-accelerated deep neural network (DNN) programming. Torch is a classic tensor library for operating on multidimensional arrays, widely used in machine learning and other math-intensive applications. Unlike TensorFlow's static computation graphs, PyTorch's computation graphs are dynamic and can be changed on the fly as the computation requires. Because Torch itself uses the Lua language, it always remained a niche tool in China and gradually lost users to the Python-based TensorFlow. As a port of the classic Torch library, PyTorch gives Python users a comfortable way to write the same kind of code.
PyTorch's design pursues minimal encapsulation and avoids reinventing the wheel. Unlike TensorFlow, which is full of new concepts such as session, graph, operation, name_scope, variable, tensor and layer, PyTorch follows three abstraction levels from low to high — tensor → variable (autograd) → nn.Module — representing multidimensional arrays (tensors), automatic differentiation (variables) and neural networks (layers/modules) respectively. The three levels are tightly connected and can be modified and operated on together. Another benefit of this lean design is that the code is easy to understand: PyTorch's source is only about one tenth the size of TensorFlow's, and fewer abstractions and a more intuitive design make it very readable.
PyTorch's flexibility does not come at the cost of speed: in many benchmarks PyTorch outperforms frameworks such as TensorFlow and Keras. A framework's running speed also depends heavily on the programmer's skill, but for the same algorithm, the PyTorch implementation is more likely to be the faster one.
PyTorch has arguably the most elegant object-oriented design of all the frameworks. Its object-oriented interface comes from Torch, whose interface design is famous for flexibility and ease of use — the author of Keras was originally inspired by Torch. PyTorch inherits Torch's legacy, and in particular its API design and module interfaces stay highly consistent with Torch. PyTorch's design matches the way people think: it lets users focus on expressing their own ideas — what you think is what you get — without worrying much about constraints imposed by the framework itself.
PyTorch provides complete documentation, step-by-step guides, and a forum maintained by the authors where users can discuss and ask questions. Facebook AI Research (FAIR) strongly backs PyTorch; as one of the top three deep learning research institutions today, FAIR's support ensures PyTorch keeps being developed and updated, rather than flashing and fading like many frameworks maintained by individuals.
For installation, see the preparation section of my earlier post, TensorFlow2自学笔记_阿尔法羊的博客-CSDN博客. As with TensorFlow, you should install Anaconda and a series of auxiliary libraries, preferably from a domestic mirror (overseas download speeds are painfully slow); the only difference is installing pytorch instead of tensorflow, so the details are not repeated here.
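After installing, a quick sanity check such as the following (a minimal sketch) confirms the installed version and whether a GPU is visible:
- import torch
- print(torch.__version__) # the installed PyTorch version
- print(torch.cuda.is_available()) # True if a CUDA-capable GPU can be used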
Tensors are the core concept of PyTorch: all of PyTorch's computation is built on tensor computation, so the basic concepts and operations of tensors are foundational for learning PyTorch.
PyTorch's basic data structure is the tensor (Tensor), i.e. a multidimensional array, very similar to numpy's array.
Tensor structure covers the tensor's data type, its number of dimensions, its size, and its relationship to numpy arrays.
Tensor data types correspond essentially one-to-one with numpy.array types, except that str is not supported.
The available data types are:
torch.float64(torch.double),
torch.float32(torch.float),
torch.float16,
torch.int64(torch.long),
torch.int32(torch.int),
torch.int16,
torch.int8,
torch.uint8,
torch.bool
Neural network modeling generally uses the torch.float32 type.
- #Example 1-1-1: operations on tensor data types
- import numpy as np
- import torch
- # infer the data type automatically
- i = torch.tensor(1)
- print(i,i.dtype)
- x = torch.tensor(2.0)
- print(x,x.dtype)
- b = torch.tensor(True)
- print(b,b.dtype)
-
- out:
- tensor(1) torch.int64
- tensor(2.) torch.float32
- tensor(True) torch.bool
-
-
-
-
- # specify the data type explicitly
- i = torch.tensor(1,dtype = torch.int32)
- print(i,i.dtype)
- x = torch.tensor(2.0,dtype = torch.double)
- print(x,x.dtype)
-
- out:
- tensor(1, dtype=torch.int32) torch.int32
- tensor(2., dtype=torch.float64) torch.float64
-
-
-
-
-
- # use type-specific constructors
- i = torch.IntTensor(1)
- print(i,i.dtype)
- x = torch.Tensor(np.array(2.0))
- print(x,x.dtype) #equivalent to torch.FloatTensor
- b = torch.BoolTensor(np.array([1,0,2,0]))
- print(b,b.dtype)
-
- out:
- tensor([0], dtype=torch.int32) torch.int32
- tensor(2.) torch.float32
- tensor([ True, False, True, False]) torch.bool
-
-
-
-
- # converting between types
- i = torch.tensor(1)
- print(i,i.dtype)
- #the float method converts to floating point
- x = i.float()
- print(x,x.dtype)
- #the type function converts to floating point
- y = i.type(torch.float)
- print(y,y.dtype)
- #the type_as method converts to the same type as another Tensor
- z = i.type_as(x)
- print(z,z.dtype)
-
- out:
- tensor(1) torch.int64
- tensor(1.) torch.float32
- tensor(1.) torch.float32
- tensor(1.) torch.float32
Different kinds of data can be represented by tensors with different numbers of dimensions.
A scalar is a 0-dimensional tensor, a vector is a 1-dimensional tensor, and a matrix is a 2-dimensional tensor.
A color image has the three RGB channels and can be represented as a 3-dimensional tensor.
Video adds a time dimension and can be represented as a 4-dimensional tensor.
A simple rule of thumb: however many levels of square brackets there are, that is the tensor's number of dimensions.
- #Example 1-1-2: tensor dimensions
- import torch
- # scalar: a 0-dimensional tensor
- scalar = torch.tensor(True)
- print(scalar)
- print(scalar.dim())
-
- out:
- tensor(True)
- 0
-
-
-
- #vector: a 1-dimensional tensor
- vector = torch.tensor([1.0,2.0,3.0,4.0])
- print(vector)
- print(vector.dim())
-
- out:
- tensor([1., 2., 3., 4.])
- 1
-
-
-
- #matrix: a 2-dimensional tensor
- matrix = torch.tensor([[1.0,2.0],[3.0,4.0]])
- print(matrix)
- print(matrix.dim())
-
- out:
- tensor([[1., 2.],
- [3., 4.]])
- 2
-
-
-
- # 3-dimensional tensor
- tensor3 = torch.tensor([[[1.0,2.0],[3.0,4.0]],[[5.0,6.0],[7.0,8.0]]])
- print(tensor3)
- print(tensor3.dim())
-
- out:
- tensor([[[1., 2.],
- [3., 4.]],
-
- [[5., 6.],
- [7., 8.]]])
- 3
-
-
-
- # 4-dimensional tensor
- tensor4 = torch.tensor([[[[1.0,1.0],[2.0,2.0]],[[3.0,3.0],[4.0,4.0]]],
- [[[5.0,5.0],[6.0,6.0]],[[7.0,7.0],[8.0,8.0]]]])
- print(tensor4)
- print(tensor4.dim())
-
- out:
- tensor([[[[1., 1.],
- [2., 2.]],
-
- [[3., 3.],
- [4., 4.]]],
-
-
- [[[5., 5.],
- [6., 6.]],
-
- [[7., 7.],
- [8., 8.]]]])
- 4
Use the shape attribute or the size() method to inspect the length of each dimension of a tensor.
Use the view method to change a tensor's shape.
If view fails to change the shape, use the reshape method instead.
- #Example 1-1-3: tensor size
- import torch
- vector = torch.tensor([1.0,2.0,3.0,4.0])
- print(vector.size())
- print(vector.shape)
-
- out:
- torch.Size([4])
- torch.Size([4])
-
-
-
-
- matrix = torch.tensor([[1.0,2.0],[3.0,4.0]])
- print(matrix.size())
-
- out:
- torch.Size([2, 2])
-
-
-
-
- # view changes a tensor's shape
- vector = torch.arange(0,12)
- print(vector)
- print(vector.shape)
- matrix34 = vector.view(3,4)
- print(matrix34)
- print(matrix34.shape)
- matrix43 = vector.view(4,-1) #-1 means this dimension's length is inferred automatically
- print(matrix43)
- print(matrix43.shape)
-
- out:
- tensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
- torch.Size([12])
- tensor([[ 0, 1, 2, 3],
- [ 4, 5, 6, 7],
- [ 8, 9, 10, 11]])
- torch.Size([3, 4])
- tensor([[ 0, 1, 2],
- [ 3, 4, 5],
- [ 6, 7, 8],
- [ 9, 10, 11]])
- torch.Size([4, 3])
-
-
-
-
- # some operations distort the tensor's storage layout; view then fails, but reshape works
- matrix26 = torch.arange(0,12).view(2,6)
- print(matrix26)
- print(matrix26.shape)
- # transposing distorts the storage layout
- matrix62 = matrix26.t()
- print(matrix62.is_contiguous())
- # calling view directly fails; use reshape instead
- #matrix34 = matrix62.view(3,4) #error!
- matrix34 = matrix62.reshape(3,4) #equivalent to matrix34 = matrix62.contiguous().view(3,4)
- print(matrix34)
-
- out:
- tensor([[ 0, 1, 2, 3, 4, 5],
- [ 6, 7, 8, 9, 10, 11]])
- torch.Size([2, 6])
- False
- tensor([[ 0, 6, 1, 7],
- [ 2, 8, 3, 9],
- [ 4, 10, 5, 11]])
The numpy method converts a Tensor to a numpy array, and torch.from_numpy builds a Tensor from a numpy array.
A Tensor and a numpy array linked by these two methods share the same underlying memory:
changing one also changes the value of the other.
If needed, the tensor's clone method copies the tensor and breaks this link.
In addition, the item method extracts the corresponding Python value from a scalar tensor,
and the tolist method converts a tensor into a corresponding Python list of values.
- #Example 1-1-4: tensors and numpy arrays
- import numpy as np
- import torch
- #torch.from_numpy builds a Tensor from a numpy array
- arr = np.zeros(3)
- tensor = torch.from_numpy(arr)
- print("before add 1:")
- print(arr)
- print(tensor)
- print("\nafter add 1:")
- np.add(arr,1, out = arr) #adding 1 to arr changes tensor too
- print(arr)
- print(tensor)
-
- out:
- before add 1:
- [0. 0. 0.]
- tensor([0., 0., 0.], dtype=torch.float64)
-
- after add 1:
- [1. 1. 1.]
- tensor([1., 1., 1.], dtype=torch.float64)
-
-
-
-
-
- # the numpy method gets a numpy array from a Tensor
- tensor = torch.zeros(3)
- arr = tensor.numpy()
- print("before add 1:")
- print(tensor)
- print(arr)
- print("\nafter add 1:")
- #methods ending in an underscore write the result back into the calling tensor
- tensor.add_(1) #adding 1 to tensor changes arr too
- #or: torch.add(tensor,1,out = tensor)
- print(tensor)
- print(arr)
-
- out:
- before add 1:
- tensor([0., 0., 0.])
- [0. 0. 0.]
-
- after add 1:
- tensor([1., 1., 1.])
- [1. 1. 1.]
-
-
-
-
-
- # the clone() method copies the tensor and breaks the link
- tensor = torch.zeros(3)
- #clone copies the tensor; the copy's memory is independent of the original
- arr = tensor.clone().numpy() # tensor.data.numpy() also works
- print("before add 1:")
- print(tensor)
- print(arr)
- print("\nafter add 1:")
- #methods ending in an underscore write the result back into the calling tensor
- tensor.add_(1) #adding 1 to tensor no longer changes arr
- print(tensor)
- print(arr)
-
- out:
- before add 1:
- tensor([0., 0., 0.])
- [0. 0. 0.]
-
- after add 1:
- tensor([1., 1., 1.])
- [0. 0. 0.]
-
-
-
-
-
- # item and tolist convert tensors to Python values and lists of values
- scalar = torch.tensor(1.0)
- s = scalar.item()
- print(s)
- print(type(s))
- tensor = torch.rand(2,2)
- t = tensor.tolist()
- print(t)
- print(type(t))
-
- out:
- 1.0
- <class 'float'>
- [[0.5407917499542236, 0.08548498153686523], [0.8822196125984192, 0.5270139575004578]]
- <class 'list'>
- #Example 1-2-1: creating tensors
- import numpy as np
- import torch
- a = torch.tensor([1,2,3],dtype = torch.float)
- print('a:',a)
- b = torch.arange(1,10,step = 2)
- print('b:',b)
- c = torch.linspace(0.0,2*3.14,10)
- print('c:',c)
- d = torch.zeros((3,3))
- print('d:',d)
-
- e = torch.ones((3,3),dtype = torch.int)
- f = torch.zeros_like(e,dtype = torch.float)
- print('e:',e)
- print('f:',f)
-
- torch.fill_(f,5)
- print('f:',f)
-
- #uniform random distribution
- torch.manual_seed(0)
- minval,maxval = 0,10
- g = minval + (maxval-minval)*torch.rand([5])
- print('g:',g)
-
- #normal random distribution
- h = torch.normal(mean = torch.zeros(3,3), std = torch.ones(3,3))
- print('h:',h)
-
- #normal random distribution
- mean,std = 2,5
- i = std*torch.randn((3,3))+mean
- print('i:',i)
-
- #random permutation of integers
- j = torch.randperm(20)
- print('j:',j)
-
- #special matrices
- k = torch.eye(3,3) #identity matrix
- print('k:',k)
- l = torch.diag(torch.tensor([1,2,3])) #diagonal matrix
- print('l:',l)
Output:
- a: tensor([1., 2., 3.])
- b: tensor([1, 3, 5, 7, 9])
- c: tensor([0.0000, 0.6978, 1.3956, 2.0933, 2.7911, 3.4889, 4.1867, 4.8844, 5.5822,
- 6.2800])
- d: tensor([[0., 0., 0.],
- [0., 0., 0.],
- [0., 0., 0.]])
- e: tensor([[1, 1, 1],
- [1, 1, 1],
- [1, 1, 1]], dtype=torch.int32)
- f: tensor([[0., 0., 0.],
- [0., 0., 0.],
- [0., 0., 0.]])
- f: tensor([[5., 5., 5.],
- [5., 5., 5.],
- [5., 5., 5.]])
- g: tensor([4.9626, 7.6822, 0.8848, 1.3203, 3.0742])
- h: tensor([[ 0.5507, 0.2704, 0.6472],
- [ 0.2490, -0.3354, 0.4564],
- [-0.6255, 0.4539, -1.3740]])
- i: tensor([[16.2371, -1.6612, 3.9163],
- [ 7.4999, 1.5616, 4.0768],
- [ 5.2128, -8.9407, 6.4601]])
- j: tensor([ 3, 17, 9, 19, 1, 18, 4, 13, 15, 12, 0, 16, 7, 11, 2, 5, 8, 10,
- 6, 14])
- k: tensor([[1., 0., 0.],
- [0., 1., 0.],
- [0., 0., 1.]])
- l: tensor([[1, 0, 0],
- [0, 2, 0],
- [0, 0, 3]])
Tensor indexing and slicing work almost exactly like numpy's.
Slicing supports default parameters and the ellipsis.
Some of a tensor's elements can be modified through indexing and slicing.
For irregular slice extraction, use torch.index_select, torch.masked_select or torch.take.
To obtain a new tensor by modifying some elements of a tensor, use torch.where, torch.masked_fill or torch.index_fill.
- #Example 1-2-2: indexing and slicing (a)
- #uniform random distribution
- torch.manual_seed(0)
- minval,maxval = 0,10
- t = torch.floor(minval + (maxval-minval)*torch.rand([5,5])).int()
- print(t)
-
- #row 0
- print(t[0])
-
- #last row
- print(t[-1])
-
- #row 1, column 3
- print(t[1,3])
- print(t[1][3])
-
- #rows 1 through 3
- print(t[1:4,:])
-
- #rows 1 through 3; columns 0 through 3, taking every other column
- print(t[1:4,:4:2])
-
- #indexing and slicing can be used to modify some elements
- x = torch.tensor([[1,2],[3,4]],dtype = torch.float32,requires_grad=True)
- x.data[1,:] = torch.tensor([0.0,0.0])
- print(x)
-
- a = torch.arange(27).view(3,3,3)
- print(a)
-
- #an ellipsis stands for multiple colons
- print(a[...,1])
Output:
- tensor([[4, 7, 0, 1, 3],
- [6, 4, 8, 4, 6],
- [3, 4, 0, 1, 2],
- [5, 6, 8, 1, 2],
- [6, 9, 3, 8, 4]], dtype=torch.int32)
- tensor([4, 7, 0, 1, 3], dtype=torch.int32)
- tensor([6, 9, 3, 8, 4], dtype=torch.int32)
- tensor(4, dtype=torch.int32)
- tensor(4, dtype=torch.int32)
- tensor([[6, 4, 8, 4, 6],
- [3, 4, 0, 1, 2],
- [5, 6, 8, 1, 2]], dtype=torch.int32)
- tensor([[6, 8],
- [3, 0],
- [5, 8]], dtype=torch.int32)
- tensor([[1., 2.],
- [0., 0.]], requires_grad=True)
- tensor([[[ 0, 1, 2],
- [ 3, 4, 5],
- [ 6, 7, 8]],
-
- [[ 9, 10, 11],
- [12, 13, 14],
- [15, 16, 17]],
-
- [[18, 19, 20],
- [21, 22, 23],
- [24, 25, 26]]])
- tensor([[ 1, 4, 7],
- [10, 13, 16],
- [19, 22, 25]])
For irregular slice extraction, use torch.index_select, torch.take, torch.gather or torch.masked_select.
Consider a grade-book example: 4 classes, 10 students per class, and 7 subject scores per student. This can be represented as a 4×10×7 tensor.
- #Example 1-2-2: indexing and slicing (b)
- minval=0
- maxval=100
- scores = torch.floor(minval + (maxval-minval)*torch.rand([4,10,7])).int()
- print(scores)
-
-
- #output:
- tensor([[[49, 39, 71, 15, 19, 69, 89],
- [57, 99, 45, 14, 8, 42, 15],
- [49, 10, 5, 39, 48, 53, 45],
- [54, 25, 71, 3, 87, 71, 19],
- [50, 63, 50, 13, 64, 74, 37],
- [44, 71, 61, 10, 23, 15, 59],
- [44, 93, 48, 26, 16, 50, 59],
- [39, 41, 6, 3, 37, 68, 3],
- [47, 26, 46, 5, 28, 74, 17],
- [62, 11, 16, 11, 18, 2, 72]],
-
- [[85, 75, 23, 77, 30, 20, 79],
- [98, 88, 88, 92, 4, 10, 24],
- [66, 15, 89, 36, 51, 2, 69],
- [27, 39, 69, 78, 79, 70, 89],
- [92, 29, 6, 99, 45, 82, 71],
- [26, 89, 10, 36, 14, 92, 39],
- [15, 36, 90, 92, 41, 94, 0],
- [33, 12, 37, 65, 32, 79, 60],
- [76, 4, 50, 67, 31, 99, 68],
- [70, 10, 30, 64, 81, 12, 7]],
-
- [[29, 21, 59, 86, 30, 83, 79],
- [30, 71, 53, 89, 37, 71, 83],
- [17, 96, 66, 9, 24, 48, 84],
- [92, 47, 0, 2, 97, 56, 41],
- [14, 2, 59, 8, 96, 12, 35],
- [83, 91, 13, 63, 94, 16, 4],
- [55, 42, 79, 58, 85, 27, 74],
- [18, 47, 17, 50, 67, 8, 87],
- [43, 94, 6, 70, 7, 30, 39],
- [45, 80, 40, 85, 59, 99, 31]],
-
- [[59, 71, 93, 64, 30, 80, 60],
- [10, 10, 98, 38, 31, 68, 67],
- [ 0, 64, 87, 75, 39, 72, 44],
- [78, 66, 78, 2, 54, 39, 98],
- [44, 30, 1, 39, 13, 32, 81],
- [47, 70, 92, 0, 20, 75, 49],
- [66, 49, 13, 92, 16, 90, 34],
- [27, 49, 2, 70, 87, 80, 32],
- [ 2, 80, 97, 84, 86, 17, 14],
- [68, 13, 78, 28, 51, 85, 35]]], dtype=torch.int32)
-
-
- #extract the complete scores of students 0, 5 and 9 of every class
- cj=torch.index_select(scores,dim = 1,index = torch.tensor([0,5,9]))
- print(cj)
- #output:
- tensor([[[49, 39, 71, 15, 19, 69, 89],
- [44, 71, 61, 10, 23, 15, 59],
- [62, 11, 16, 11, 18, 2, 72]],
-
- [[85, 75, 23, 77, 30, 20, 79],
- [26, 89, 10, 36, 14, 92, 39],
- [70, 10, 30, 64, 81, 12, 7]],
-
- [[29, 21, 59, 86, 30, 83, 79],
- [83, 91, 13, 63, 94, 16, 4],
- [45, 80, 40, 85, 59, 99, 31]],
-
- [[59, 71, 93, 64, 30, 80, 60],
- [47, 70, 92, 0, 20, 75, 49],
- [68, 13, 78, 28, 51, 85, 35]]], dtype=torch.int32)
-
-
-
- #extract the scores in courses 1, 3 and 6 of students 0, 5 and 9 of every class
- cj2 = torch.index_select(torch.index_select(scores,dim = 1,index = torch.tensor([0,5,9])),dim = 2,index = torch.tensor([1,3,6]))
- print(cj2)
- #output:
- tensor([[[39, 15, 89],
- [71, 10, 59],
- [11, 11, 72]],
-
- [[75, 77, 79],
- [89, 36, 39],
- [10, 64, 7]],
-
- [[21, 86, 79],
- [91, 63, 4],
- [80, 85, 31]],
-
- [[71, 64, 60],
- [70, 0, 49],
- [13, 28, 35]]], dtype=torch.int32)
-
-
-
- #extract class 0 student 0's course-0 score, class 2 student 4's course-1 score, and class 3 student 9's course-6 score
- #take treats the input as one-dimensional; the output has the same shape as index
- cj3 = torch.take(scores,torch.tensor([0*10*7+0,2*10*7+4*7+1,3*10*7+9*7+6]))
- print(cj3)
- #output:
- tensor([49, 2, 35], dtype=torch.int32)
-
- #extract scores greater than or equal to 80 (boolean indexing)
- #the result is a 1-dimensional tensor
- cj4 = torch.masked_select(scores,scores>=80)
- print(cj4)
- #output:
- tensor([89, 99, 87, 93, 85, 98, 88, 88, 92, 89, 89, 92, 99, 82, 89, 92, 90, 92,
- 94, 99, 81, 86, 83, 89, 83, 96, 84, 92, 97, 96, 83, 91, 94, 85, 87, 94,
- 80, 85, 99, 93, 80, 98, 87, 98, 81, 92, 92, 90, 87, 80, 80, 97, 84, 86,
- 85], dtype=torch.int32)
-
- #assign 1 where the score is greater than 60, otherwise 0
- ifpass = torch.where(scores>60,torch.tensor(1),torch.tensor(0))
- print(ifpass)
- #output:
- tensor([[[0, 0, 1, 0, 0, 1, 1],
- [0, 1, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0, 0],
- [0, 0, 1, 0, 1, 1, 0],
- [0, 1, 0, 0, 1, 1, 0],
- [0, 1, 1, 0, 0, 0, 0],
- [0, 1, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 1, 0],
- [0, 0, 0, 0, 0, 1, 0],
- [1, 0, 0, 0, 0, 0, 1]],
-
- [[1, 1, 0, 1, 0, 0, 1],
- [1, 1, 1, 1, 0, 0, 0],
- [1, 0, 1, 0, 0, 0, 1],
- [0, 0, 1, 1, 1, 1, 1],
- [1, 0, 0, 1, 0, 1, 1],
- [0, 1, 0, 0, 0, 1, 0],
- [0, 0, 1, 1, 0, 1, 0],
- [0, 0, 0, 1, 0, 1, 0],
- [1, 0, 0, 1, 0, 1, 1],
- [1, 0, 0, 1, 1, 0, 0]],
-
- [[0, 0, 0, 1, 0, 1, 1],
- [0, 1, 0, 1, 0, 1, 1],
- [0, 1, 1, 0, 0, 0, 1],
- [1, 0, 0, 0, 1, 0, 0],
- [0, 0, 0, 0, 1, 0, 0],
- [1, 1, 0, 1, 1, 0, 0],
- [0, 0, 1, 0, 1, 0, 1],
- [0, 0, 0, 0, 1, 0, 1],
- [0, 1, 0, 1, 0, 0, 0],
- [0, 1, 0, 1, 0, 1, 0]],
-
- [[0, 1, 1, 1, 0, 1, 0],
- [0, 0, 1, 0, 0, 1, 1],
- [0, 1, 1, 1, 0, 1, 0],
- [1, 1, 1, 0, 0, 0, 1],
- [0, 0, 0, 0, 0, 0, 1],
- [0, 1, 1, 0, 0, 1, 0],
- [1, 0, 0, 1, 0, 1, 0],
- [0, 0, 0, 1, 1, 1, 0],
- [0, 1, 1, 1, 1, 0, 0],
- [1, 0, 1, 0, 0, 1, 0]]])
-
- #set the scores of students 0, 5 and 9 of every class to full marks
- torch.index_fill(scores,dim = 1,index = torch.tensor([0,5,9]),value = 100)
- #equivalent to scores.index_fill(dim = 1,index = torch.tensor([0,5,9]),value = 100)
- #output:
- tensor([[[100, 100, 100, 100, 100, 100, 100],
- [ 57, 99, 45, 14, 8, 42, 15],
- [ 49, 10, 5, 39, 48, 53, 45],
- [ 54, 25, 71, 3, 87, 71, 19],
- [ 50, 63, 50, 13, 64, 74, 37],
- [100, 100, 100, 100, 100, 100, 100],
- [ 44, 93, 48, 26, 16, 50, 59],
- [ 39, 41, 6, 3, 37, 68, 3],
- [ 47, 26, 46, 5, 28, 74, 17],
- [100, 100, 100, 100, 100, 100, 100]],
-
- [[100, 100, 100, 100, 100, 100, 100],
- [ 98, 88, 88, 92, 4, 10, 24],
- [ 66, 15, 89, 36, 51, 2, 69],
- [ 27, 39, 69, 78, 79, 70, 89],
- [ 92, 29, 6, 99, 45, 82, 71],
- [100, 100, 100, 100, 100, 100, 100],
- [ 15, 36, 90, 92, 41, 94, 0],
- [ 33, 12, 37, 65, 32, 79, 60],
- [ 76, 4, 50, 67, 31, 99, 68],
- [100, 100, 100, 100, 100, 100, 100]],
-
- [[100, 100, 100, 100, 100, 100, 100],
- [ 30, 71, 53, 89, 37, 71, 83],
- [ 17, 96, 66, 9, 24, 48, 84],
- [ 92, 47, 0, 2, 97, 56, 41],
- [ 14, 2, 59, 8, 96, 12, 35],
- [100, 100, 100, 100, 100, 100, 100],
- [ 55, 42, 79, 58, 85, 27, 74],
- [ 18, 47, 17, 50, 67, 8, 87],
- [ 43, 94, 6, 70, 7, 30, 39],
- [100, 100, 100, 100, 100, 100, 100]],
-
- [[100, 100, 100, 100, 100, 100, 100],
- [ 10, 10, 98, 38, 31, 68, 67],
- [ 0, 64, 87, 75, 39, 72, 44],
- [ 78, 66, 78, 2, 54, 39, 98],
- [ 44, 30, 1, 39, 13, 32, 81],
- [100, 100, 100, 100, 100, 100, 100],
- [ 66, 49, 13, 92, 16, 90, 34],
- [ 27, 49, 2, 70, 87, 80, 32],
- [ 2, 80, 97, 84, 86, 17, 14],
- [100, 100, 100, 100, 100, 100, 100]]], dtype=torch.int32)
-
- #set scores below 60 to 60
- cj5 = torch.masked_fill(scores,scores<60,60)
- #equivalent to cj5 = scores.masked_fill(scores<60,60)
- print(cj5)
- #output:
- tensor([[[60, 60, 71, 60, 60, 69, 89],
- [60, 99, 60, 60, 60, 60, 60],
- [60, 60, 60, 60, 60, 60, 60],
- [60, 60, 71, 60, 87, 71, 60],
- [60, 63, 60, 60, 64, 74, 60],
- [60, 71, 61, 60, 60, 60, 60],
- [60, 93, 60, 60, 60, 60, 60],
- [60, 60, 60, 60, 60, 68, 60],
- [60, 60, 60, 60, 60, 74, 60],
- [62, 60, 60, 60, 60, 60, 72]],
-
- [[85, 75, 60, 77, 60, 60, 79],
- [98, 88, 88, 92, 60, 60, 60],
- [66, 60, 89, 60, 60, 60, 69],
- [60, 60, 69, 78, 79, 70, 89],
- [92, 60, 60, 99, 60, 82, 71],
- [60, 89, 60, 60, 60, 92, 60],
- [60, 60, 90, 92, 60, 94, 60],
- [60, 60, 60, 65, 60, 79, 60],
- [76, 60, 60, 67, 60, 99, 68],
- [70, 60, 60, 64, 81, 60, 60]],
-
- [[60, 60, 60, 86, 60, 83, 79],
- [60, 71, 60, 89, 60, 71, 83],
- [60, 96, 66, 60, 60, 60, 84],
- [92, 60, 60, 60, 97, 60, 60],
- [60, 60, 60, 60, 96, 60, 60],
- [83, 91, 60, 63, 94, 60, 60],
- [60, 60, 79, 60, 85, 60, 74],
- [60, 60, 60, 60, 67, 60, 87],
- [60, 94, 60, 70, 60, 60, 60],
- [60, 80, 60, 85, 60, 99, 60]],
-
- [[60, 71, 93, 64, 60, 80, 60],
- [60, 60, 98, 60, 60, 68, 67],
- [60, 64, 87, 75, 60, 72, 60],
- [78, 66, 78, 60, 60, 60, 98],
- [60, 60, 60, 60, 60, 60, 81],
- [60, 70, 92, 60, 60, 75, 60],
- [66, 60, 60, 92, 60, 90, 60],
- [60, 60, 60, 70, 87, 80, 60],
- [60, 80, 97, 84, 86, 60, 60],
- [68, 60, 78, 60, 60, 85, 60]]], dtype=torch.int32)
-
-
-
-
-
-
The main dimension-transform functions are torch.reshape (or the tensor's view method), torch.squeeze, torch.unsqueeze and torch.transpose.
torch.reshape changes a tensor's shape.
torch.squeeze removes dimensions.
torch.unsqueeze adds dimensions.
torch.transpose swaps dimensions.
- #Example 1-2-3: dimension transforms
- import torch
- # the view method sometimes fails; reshape can be used instead
- torch.manual_seed(0)
- minval,maxval = 0,255
- a = (minval + (maxval-minval)*torch.rand([1,3,3,2])).int()
- print(a.shape)
- print(a)
-
- out:
- torch.Size([1, 3, 3, 2])
- tensor([[[[126, 195],
- [ 22, 33],
- [ 78, 161]],
-
- [[124, 228],
- [116, 161],
- [ 88, 102]],
-
- [[ 5, 43],
- [ 74, 132],
- [177, 204]]]], dtype=torch.int32)
-
-
-
-
-
- # reshape to a (3,6) tensor
- b = a.view([3,6]) #torch.reshape(a,[3,6])
- print(b.shape)
- print(b)
-
- out:
- torch.Size([3, 6])
- tensor([[126, 195, 22, 33, 78, 161],
- [124, 228, 116, 161, 88, 102],
- [ 5, 43, 74, 132, 177, 204]], dtype=torch.int32)
-
-
-
-
-
- # reshape back to a [1,3,3,2] tensor
- c = torch.reshape(b,[1,3,3,2]) # b.view([1,3,3,2])
- print(c)
-
- out:
- tensor([[[[126, 195],
- [ 22, 33],
- [ 78, 161]],
-
- [[124, 228],
- [116, 161],
- [ 88, 102]],
-
- [[ 5, 43],
- [ 74, 132],
- [177, 204]]]], dtype=torch.int32)
-
-
-
-
-
- #if a tensor has only one element in some dimension, torch.squeeze removes that dimension
- #torch.unsqueeze does the opposite of torch.squeeze
- d = torch.tensor([[1.0,2.0]])
- e = torch.squeeze(d)
- print(d)
- print(e)
- print(d.shape)
- print(e.shape)
-
- out:
- tensor([[1., 2.]])
- tensor([1., 2.])
- torch.Size([1, 2])
- torch.Size([2])
-
-
-
-
-
- #insert a length-1 dimension at position 0
- f = torch.unsqueeze(e,axis=0)
- print(e)
- print(f)
- print(e.shape)
- print(f.shape)
-
- out:
- tensor([1., 2.])
- tensor([[1., 2.]])
- torch.Size([2])
- torch.Size([1, 2])
-
-
-
-
-
- #torch.transpose swaps tensor dimensions; it is often used to change image storage formats
- #for a 2-d matrix, the transpose method matrix.t() is usually called instead, equivalent to torch.transpose(matrix,0,1)
- minval=0
- maxval=255
- # Batch,Height,Width,Channel
- data = torch.floor(minval + (maxval-minval)*torch.rand([100,256,256,4])).int()
- print(data.shape)
- # convert to PyTorch's default image format Batch,Channel,Height,Width
- # two swaps are needed
- data_t = torch.transpose(torch.transpose(data,1,2),1,3)
- print(data_t.shape)
-
- out:
- torch.Size([100, 256, 256, 4])
- torch.Size([100, 4, 256, 256])
-
-
-
-
-
- matrix = torch.tensor([[1,2,3],[4,5,6]])
- print(matrix)
- print(matrix.t()) #equivalent to torch.transpose(matrix,0,1)
-
- out:
- tensor([[1, 2, 3],
- [4, 5, 6]])
- tensor([[1, 4],
- [2, 5],
- [3, 6]])
torch.cat and torch.stack merge multiple tensors into one;
torch.split splits one tensor into several.
They differ slightly: torch.cat concatenates and does not add a dimension, while torch.stack stacks and adds a new dimension.
- #Example 1-2-4: merging and splitting tensors
- import torch
- a = torch.tensor([[1.0,2.0],[3.0,4.0]])
- b = torch.tensor([[5.0,6.0],[7.0,8.0]])
- c = torch.tensor([[9.0,10.0],[11.0,12.0]])
- abc_cat = torch.cat([a,b,c],dim = 0)
- print(abc_cat.shape)
- print(abc_cat)
-
- out:
- torch.Size([6, 2])
- tensor([[ 1., 2.],
- [ 3., 4.],
- [ 5., 6.],
- [ 7., 8.],
- [ 9., 10.],
- [11., 12.]])
-
-
-
-
-
- abc_stack = torch.stack([a,b,c],axis = 0) #in torch the parameter names dim and axis are interchangeable
- print(abc_stack.shape)
- print(abc_stack)
-
- out:
- torch.Size([3, 2, 2])
- tensor([[[ 1., 2.],
- [ 3., 4.]],
-
- [[ 5., 6.],
- [ 7., 8.]],
-
- [[ 9., 10.],
- [11., 12.]]])
-
-
-
-
-
- torch.cat([a,b,c],axis = 1)
-
- out:
- tensor([[ 1., 2., 5., 6., 9., 10.],
- [ 3., 4., 7., 8., 11., 12.]])
-
-
-
-
-
- torch.stack([a,b,c],axis = 1)
-
- out:
- tensor([[[ 1., 2.],
- [ 5., 6.],
- [ 9., 10.]],
-
- [[ 3., 4.],
- [ 7., 8.],
- [11., 12.]]])
-
-
-
-
-
- #torch.split is the inverse of torch.cat: split evenly into a given number of pieces, or give the size of each piece
- print(abc_cat)
- a,b,c = torch.split(abc_cat,split_size_or_sections = 2,dim = 0) #split into pieces of 2 each
- print(a)
- print(b)
- print(c)
- print(abc_cat)
- p,q,r = torch.split(abc_cat,split_size_or_sections =[4,1,1],dim = 0) #piece sizes [4,1,1]
- print(p)
- print(q)
- print(r)
-
- out:
- tensor([[ 1., 2.],
- [ 3., 4.],
- [ 5., 6.],
- [ 7., 8.],
- [ 9., 10.],
- [11., 12.]])
- tensor([[1., 2.],
- [3., 4.]])
- tensor([[5., 6.],
- [7., 8.]])
- tensor([[ 9., 10.],
- [11., 12.]])
- tensor([[ 1., 2.],
- [ 3., 4.],
- [ 5., 6.],
- [ 7., 8.],
- [ 9., 10.],
- [11., 12.]])
- tensor([[1., 2.],
- [3., 4.],
- [5., 6.],
- [7., 8.]])
- tensor([[ 9., 10.]])
- tensor([[11., 12.]])
-
Tensor math consists of scalar operations, vector operations and matrix operations; the math operators divide accordingly into scalar, vector and matrix operators.
Addition, subtraction, multiplication, division and exponentiation, common functions such as the trigonometric, exponential and logarithmic functions, and the logical comparison operators are all scalar operators.
Scalar operators operate on tensors element-wise.
Some scalar operators overload the usual mathematical operators and support numpy-style broadcasting.
- #Example 1-3-1: tensor math - scalar operations
- import torch
- import numpy as np
- a = torch.tensor([[1.0,2],[-3,4.0]])
- b = torch.tensor([[5.0,6],[7.0,8.0]])
- a+b #operator overloading
-
- out:
- tensor([[ 6., 8.],
- [ 4., 12.]])
-
-
-
- a-b
-
- out:
- tensor([[ -4., -4.],
- [-10., -4.]])
-
- a*b
-
- out:
- tensor([[ 5., 12.],
- [-21., 32.]])
-
-
- a/b
-
- out:
- tensor([[ 0.2000, 0.3333],
- [-0.4286, 0.5000]])
-
-
- a**2
-
- out:
- tensor([[ 1., 4.],
- [ 9., 16.]])
-
-
-
- a**(0.5)
-
- out:
- tensor([[1.0000, 1.4142],
- [ nan, 2.0000]])
-
- a%3 #modulo
-
- out:
- tensor([[1., 2.],
- [-0., 1.]])
-
-
-
- a//3 #floor division
-
- out:
- tensor([[ 0., 0.],
- [-1., 1.]])
-
-
- a>=2 # torch.ge(a,2) #ge is short for greater_equal
-
- out:
- tensor([[False, True],
- [False, True]])
-
-
-
-
- (a>=2)&(a<=3)
-
- out:
- tensor([[False, True],
- [False, False]])
-
-
-
-
- (a>=2)|(a<=3)
-
- out:
- tensor([[True, True],
- [True, True]])
-
-
-
-
- a==5 #torch.eq(a,5)
-
- out:
- tensor([[False, False],
- [False, False]])
-
-
-
- torch.sqrt(a)
-
- out:
- tensor([[1.0000, 1.4142],
- [ nan, 2.0000]])
-
-
-
- a = torch.tensor([1.0,8.0])
- b = torch.tensor([5.0,6.0])
- c = torch.tensor([6.0,7.0])
- d = a+b+c
- print(d)
-
- out:
- tensor([12., 21.])
-
-
-
- print(torch.max(a,b))
-
- out:
- tensor([5., 8.])
-
-
- print(torch.min(a,b))
-
- out:
- tensor([1., 6.])
-
-
-
- x = torch.tensor([2.6,-2.7])
- print(torch.round(x)) #round to the nearest integer
- print(torch.floor(x)) #round down
- print(torch.ceil(x)) #round up
- print(torch.trunc(x)) #round toward 0
-
- out:
- tensor([ 3., -3.])
- tensor([ 2., -3.])
- tensor([ 3., -2.])
- tensor([ 2., -2.])
-
-
- x = torch.tensor([2.6,-2.7])
- print(torch.fmod(x,2)) #remainder of division
- print(torch.remainder(x,2)) #modulo; the result is always non-negative
-
- out:
- tensor([ 0.6000, -0.7000])
- tensor([0.6000, 1.3000])
-
-
- # clamp values
- x = torch.tensor([0.9,-0.8,100.0,-20.0,0.7])
- y = torch.clamp(x,min=-1,max = 1)
- z = torch.clamp(x,max = 1)
- print(y)
- print(z)
-
- out:
-
- tensor([ 0.9000, -0.8000, 1.0000, -1.0000, 0.7000])
- tensor([ 0.9000, -0.8000, 1.0000, -20.0000, 0.7000])
-
-
-
-
Vector operators operate along one particular axis, mapping a vector to a scalar or to another vector.
- #Example 1-3-2: tensor math - vector operations
- import torch
- #summary statistics
- a = torch.arange(1,10).float()
- print(torch.sum(a))
- print(torch.mean(a))
- print(torch.max(a))
- print(torch.min(a))
- print(torch.prod(a)) #cumulative product
- print(torch.std(a)) #standard deviation
- print(torch.var(a)) #variance
- print(torch.median(a)) #median
-
- out:
- tensor(45.)
- tensor(5.)
- tensor(9.)
- tensor(1.)
- tensor(362880.)
- tensor(2.7386)
- tensor(7.5000)
- tensor(5.)
-
-
- #statistics along a given dimension
- b = a.view(3,3)
- print(b)
- print(torch.max(b,dim = 0))
- print(torch.max(b,dim = 1))
-
- out:
- tensor([[1., 2., 3.],
- [4., 5., 6.],
- [7., 8., 9.]])
- torch.return_types.max(
- values=tensor([7., 8., 9.]),
- indices=tensor([2, 2, 2]))
- torch.return_types.max(
- values=tensor([3., 6., 9.]),
- indices=tensor([2, 2, 2]))
-
-
- #cumulative scans
- a = torch.arange(1,10)
- print(torch.cumsum(a,0))
- print(torch.cumprod(a,0))
- print(torch.cummax(a,0).values)
- print(torch.cummax(a,0).indices)
- print(torch.cummin(a,0))
-
- out:
- tensor([ 1, 3, 6, 10, 15, 21, 28, 36, 45])
- tensor([ 1, 2, 6, 24, 120, 720, 5040, 40320, 362880])
- tensor([1, 2, 3, 4, 5, 6, 7, 8, 9])
- tensor([0, 1, 2, 3, 4, 5, 6, 7, 8])
- torch.return_types.cummin(
- values=tensor([1, 1, 1, 1, 1, 1, 1, 1, 1]),
- indices=tensor([0, 0, 0, 0, 0, 0, 0, 0, 0]))
-
-
- #torch.sort and torch.topk sort tensors
- a = torch.tensor([[9,7,8],[1,3,2],[5,6,4]]).float()
- print(torch.topk(a,2,dim = 0),"\n")
- print(torch.topk(a,2,dim = 1),"\n")
- print(torch.sort(a,dim = 1),"\n")
-
- out:
- torch.return_types.topk(
- values=tensor([[9., 7., 8.],
- [5., 6., 4.]]),
- indices=tensor([[0, 0, 0],
- [2, 2, 2]]))
-
- torch.return_types.topk(
- values=tensor([[9., 8.],
- [3., 2.],
- [6., 5.]]),
- indices=tensor([[0, 2],
- [1, 2],
- [1, 0]]))
-
- torch.return_types.sort(
- values=tensor([[7., 8., 9.],
- [1., 2., 3.],
- [4., 5., 6.]]),
- indices=tensor([[1, 2, 0],
- [0, 2, 1],
- [2, 0, 1]]))
A matrix must be two-dimensional; something like torch.tensor([1,2,3]) is not a matrix.
Matrix operations include matrix multiplication, transpose, inverse, trace, norm, determinant, eigenvalues, matrix decompositions, and so on.
- #Example 1-3-3: tensor math - matrix operations
- import torch
- #matrix multiplication
- a = torch.tensor([[1,2],[3,4]])
- b = torch.tensor([[2,0],[0,2]])
- print(a@b) #equivalent to torch.matmul(a,b) or torch.mm(a,b)
-
- out:
- tensor([[2, 4],
- [6, 8]])
-
-
-
- #matrix transpose
- a = torch.tensor([[1.0,2],[3,4]])
- print(a.t())
-
- out:
- tensor([[1., 3.],
- [2., 4.]])
-
-
-
- #matrix inverse; must be a float type
- a = torch.tensor([[1.0,2],[3,4]])
- print(torch.inverse(a))
-
- out:
- tensor([[-2.0000, 1.0000],
- [ 1.5000, -0.5000]])
-
-
-
- #matrix trace
- a = torch.tensor([[1.0,2],[3,4]])
- print(torch.trace(a))
-
- out:
- tensor(5.)
-
-
-
- #matrix norm
- a = torch.tensor([[1.0,2],[3,4]])
- print(torch.norm(a))
-
- out:
- tensor(5.4772)
-
-
-
- #matrix determinant
- a = torch.tensor([[1.0,2],[3,4]])
- print(torch.det(a))
-
- out:
- tensor(-2.0000)
-
-
-
- #matrix eigenvalues and eigenvectors
- a = torch.tensor([[1.0,2],[-5,4]],dtype = torch.float)
- print(torch.eig(a,eigenvectors=True))
- #the two eigenvalues are 2.5+2.7839j and 2.5-2.7839j
-
- out:
- torch.return_types.eig(
- eigenvalues=tensor([[ 2.5000, 2.7839],
- [ 2.5000, -2.7839]]),
- eigenvectors=tensor([[ 0.2535, -0.4706],
- [ 0.8452, 0.0000]]))
-
-
-
- #QR decomposition: factor a square matrix into an orthogonal matrix q and an upper-triangular matrix r
- #QR decomposition is effectively Gram-Schmidt orthogonalization of a, yielding q
- a = torch.tensor([[1.0,2.0],[3.0,4.0]])
- q,r = torch.qr(a)
- print(q,"\n")
- print(r,"\n")
- print(q@r)
-
- out:
- tensor([[-0.3162, -0.9487],
- [-0.9487, 0.3162]])
-
- tensor([[-3.1623, -4.4272],
- [ 0.0000, -0.6325]])
-
- tensor([[1.0000, 2.0000],
- [3.0000, 4.0000]])
-
-
-
- #SVD decomposition
- #SVD factors any matrix into the product of an orthogonal matrix u, a diagonal matrix s, and an orthogonal matrix v.t()
- #SVD is often used for matrix compression and dimensionality reduction
- a=torch.tensor([[1.0,2.0],[3.0,4.0],[5.0,6.0]])
- u,s,v = torch.svd(a)
- print(u,"\n")
- print(s,"\n")
- print(v,"\n")
- print(u@torch.diag(s)@v.t())
- #SVD can be used to implement PCA dimensionality reduction in PyTorch
-
- out:
- tensor([[-0.2298, 0.8835],
- [-0.5247, 0.2408],
- [-0.8196, -0.4019]])
-
- tensor([9.5255, 0.5143])
-
- tensor([[-0.6196, -0.7849],
- [-0.7849, 0.6196]])
-
- tensor([[1.0000, 2.0000],
- [3.0000, 4.0000],
- [5.0000, 6.0000]])
PyTorch's broadcasting rules are the same as numpy's:
1. If the tensors differ in number of dimensions, the one with fewer dimensions is expanded until both have the same number of dimensions.
2. If two tensors have the same length in some dimension, or one of them has length 1 there, they are said to be compatible in that dimension.
3. Two tensors can be broadcast together if they are compatible in every dimension.
4. After broadcasting, each dimension's length is the larger of the two tensors' lengths in that dimension.
5. In any dimension where one tensor has length 1 and the other has length greater than 1, the length-1 tensor behaves as if it had been copied along that dimension.
torch.broadcast_tensors expands multiple tensors to a common shape according to the broadcasting rules.
- #Example 1-3-4: broadcasting
- import torch
- a = torch.tensor([1,2,3])
- b = torch.tensor([[0,0,0],[1,1,1],[2,2,2]])
- print(b + a)
- a_broad,b_broad = torch.broadcast_tensors(a,b)
- print(a_broad,"\n")
- print(b_broad,"\n")
- print(a_broad + b_broad)
-
-
- out:
- tensor([[1, 2, 3],
- [2, 3, 4],
- [3, 4, 5]])
- tensor([[1, 2, 3],
- [1, 2, 3],
- [1, 2, 3]])
-
- tensor([[0, 0, 0],
- [1, 1, 1],
- [2, 2, 2]])
-
- tensor([[1, 2, 3],
- [2, 3, 4],
- [3, 4, 5]])
Neural networks usually rely on back-propagation to compute the gradients used to update network parameters, and computing gradients is normally complicated and error-prone.
Deep learning frameworks do this gradient computation for us automatically.
PyTorch generally implements it through the backward method.
The gradients computed this way are stored in the grad attribute of the corresponding independent-variable tensors.
Alternatively, the torch.autograd.grad function can be called to compute gradients.
This is PyTorch's automatic differentiation mechanism.
backward is usually called on a scalar tensor, and the gradients it computes are stored in the grad attribute of the corresponding independent-variable tensors.
If the tensor it is called on is not a scalar, a gradient tensor of the same shape must be passed as an argument.
This amounts to taking the dot product of that gradient tensor with the calling tensor to obtain a scalar, and then back-propagating from that scalar.
- #Example 2-1-1: computing derivatives with the backward method
- #back-propagation from a scalar
- import numpy as np
- import torch
- # derivative of f(x) = a*x**2 + b*x + c
- x = torch.tensor(0.0,requires_grad = True) # x requires grad
- a = torch.tensor(1.0)
- b = torch.tensor(-2.0)
- c = torch.tensor(1.0)
- y = a*torch.pow(x,2) + b*x + c
- y.backward()
- dy_dx = x.grad
- print(dy_dx)
-
- out:
- tensor(-2.)
-
-
-
-
- #back-propagation from a non-scalar
- import numpy as np
- import torch
- # f(x) = a*x**2 + b*x + c
- x = torch.tensor([[0.0,0.0],[1.0,2.0]],requires_grad = True) # x requires grad
- a = torch.tensor(1.0)
- b = torch.tensor(-2.0)
- c = torch.tensor(1.0)
- y = a*torch.pow(x,2) + b*x + c
- gradient = torch.tensor([[1.0,1.0],[1.0,1.0]])
- print("x:\n",x)
- print("y:\n",y)
- y.backward(gradient = gradient)
- x_grad = x.grad
- print("x_grad:\n",x_grad)
-
- out:
- x:
- tensor([[0., 0.],
- [1., 2.]], requires_grad=True)
- y:
- tensor([[1., 1.],
- [0., 1.]], grad_fn=<AddBackward0>)
- x_grad:
- tensor([[-2., -2.],
- [ 0., 2.]])
-
-
-
-
-
- #non-scalar back-propagation can be implemented via scalar back-propagation
- import numpy as np
- import torch
- # f(x) = a*x**2 + b*x + c
- x = torch.tensor([[0.0,0.0],[1.0,2.0]],requires_grad = True) # x requires grad
- a = torch.tensor(1.0)
- b = torch.tensor(-2.0)
- c = torch.tensor(1.0)
- y = a*torch.pow(x,2) + b*x + c
- gradient = torch.tensor([[1.0,1.0],[1.0,1.0]])
- z = torch.sum(y*gradient)
- print("x:",x)
- print("y:",y)
- z.backward()
- x_grad = x.grad
- print("x_grad:\n",x_grad)
-
- out:
- x: tensor([[0., 0.],
- [1., 2.]], requires_grad=True)
- y: tensor([[1., 1.],
- [0., 1.]], grad_fn=<AddBackward0>)
- x_grad:
- tensor([[-2., -2.],
- [ 0., 2.]])
-
- #Example 2-1-2: computing derivatives with autograd.grad
- import numpy as np
- import torch
- # derivative of f(x) = a*x**2 + b*x + c
- x = torch.tensor(0.0,requires_grad = True) # x requires grad
- a = torch.tensor(1.0)
- b = torch.tensor(-2.0)
- c = torch.tensor(1.0)
- y = a*torch.pow(x,2) + b*x + c
- # create_graph = True allows higher-order derivatives to be created
- dy_dx = torch.autograd.grad(y,x,create_graph=True)[0]
- print(dy_dx.data)
- # second derivative
- dy2_dx2 = torch.autograd.grad(dy_dx,x)[0]
- print(dy2_dx2.data)
-
- out:
- tensor(-2.)
- tensor(2.)
-
-
-
-
-
- #Example 2-1-2 (continued): autograd.grad with multiple independent variables
- import numpy as np
- import torch
- x1 = torch.tensor(1.0,requires_grad = True) # x1 requires grad
- x2 = torch.tensor(2.0,requires_grad = True)
- y1 = x1*x2
- y2 = x1+x2
- # derivatives with respect to several independent variables at once
- (dy1_dx1,dy1_dx2) = torch.autograd.grad(outputs=y1,inputs = [x1,x2],retain_graph = True)
- print(dy1_dx1,dy1_dx2)
- # with several dependent variables, the result is the sum of their gradients
- (dy12_dx1,dy12_dx2) = torch.autograd.grad(outputs=[y1,y2],inputs = [x1,x2])
- print(dy12_dx1,dy12_dx2)
-
- out:
- tensor(2.) tensor(1.)
- tensor(3.) tensor(2.)
- #Example 2-1-3: finding a minimum with automatic differentiation and an optimizer
- import numpy as np
- import torch
- # minimum of f(x) = a*x**2 + b*x + c
- x = torch.tensor(0.0,requires_grad = True) # x requires grad
- a = torch.tensor(1.0)
- b = torch.tensor(-2.0)
- c = torch.tensor(1.0)
- optimizer = torch.optim.SGD(params=[x],lr = 0.01)
- def f(x):
- result = a*torch.pow(x,2) + b*x + c
- return(result)
- for i in range(500):
- optimizer.zero_grad()
- y = f(x)
- y.backward()
- optimizer.step()
- print("y=",f(x).data,";","x=",x.data)
-
- out:
- y= tensor(0.) ; x= tensor(1.0000)
PyTorch's computation graph consists of nodes and edges: nodes are tensors or Functions, and edges are the dependencies between them.
PyTorch's computation graph is dynamic, in two senses.
First, forward propagation executes immediately: there is no need to wait for the complete graph to be built — each statement dynamically adds nodes and edges to the graph and runs forward propagation at once to get its result.
Second, the graph is destroyed immediately after back-propagation: if backward has been executed, or torch.autograd.grad has been used to compute gradients, the graph that was built is destroyed at once to free memory, and the next call must rebuild it.
- #Example 2-2-1: forward propagation executes immediately
- import torch
- w = torch.tensor([[3.0,1.0]],requires_grad=True)
- b = torch.tensor([[3.0]],requires_grad=True)
- X = torch.randn(10,2)
- Y = torch.randn(10,1)
- Y_hat = X@w.t() + b # Y_hat's forward pass runs as soon as it is defined, independently of the loss statement below
- loss = torch.mean(torch.pow(Y_hat-Y,2))
- print(loss.data)
- print(Y_hat.data)
-
- out:
- tensor(25.9445)
- tensor([[ 5.8349],
- [ 0.5817],
- [-4.2764],
- [ 3.2476],
- [ 3.6737],
- [ 2.8748],
- [ 8.3981],
- [ 7.1418],
- [-4.8522],
- [ 2.2610]])
-
-
-
-
-
- #the computation graph is destroyed right after back-propagation
- import torch
- w = torch.tensor([[3.0,1.0]],requires_grad=True)
- b = torch.tensor([[3.0]],requires_grad=True)
- X = torch.randn(10,2)
- Y = torch.randn(10,1)
- Y_hat = X@w.t() + b # Y_hat's forward pass runs as soon as it is defined, independently of the loss statement below
- loss = torch.mean(torch.pow(Y_hat-Y,2))
- #the graph is destroyed right after back-propagation; to keep it, set retain_graph = True
- loss.backward() #loss.backward(retain_graph = True)
- #loss.backward() #running back-propagation a second time would raise an error
We are already familiar with the tensor nodes of the computation graph; the other kind of node is the Function, which is essentially any of PyTorch's operations on tensors.
Unlike ordinary Python functions, a Function contains both the forward computation logic and the back-propagation logic.
We can create such back-propagation-capable Functions by subclassing torch.autograd.Function.
- #Example 2-2-2: Functions in the computation graph
- import torch
- class MyReLU(torch.autograd.Function):
- #forward logic; ctx can store values for use in the backward pass
- @staticmethod
- def forward(ctx, input):
- ctx.save_for_backward(input)
- return input.clamp(min=0)
- #backward logic
- @staticmethod
- def backward(ctx, grad_output):
- input, = ctx.saved_tensors
- grad_input = grad_output.clone()
- grad_input[input < 0] = 0
- return grad_input
-
- import torch
- w = torch.tensor([[3.0,1.0]],requires_grad=True)
- b = torch.tensor([[3.0]],requires_grad=True)
- X = torch.tensor([[-1.0,-1.0],[1.0,1.0]])
- Y = torch.tensor([[2.0,3.0]])
- relu = MyReLU.apply # relu now has both forward and backward capability
- Y_hat = relu(X@w.t() + b)
- loss = torch.mean(torch.pow(Y_hat-Y,2))
- loss.backward()
- print(w.grad)
- print(b.grad)
-
- out:
- tensor([[4.5000, 4.5000]])
- tensor([[4.5000]])
-
-
- # Y_hat's gradient function is our own MyReLU.backward
- print(Y_hat.grad_fn)
- <torch.autograd.function.MyReLUBackward object at 0x000001FE1652D900>
Having seen what a Function does, we can sketch how back-propagation works. Understanding this part requires the chain rule for derivatives from basic calculus.
- #Example 2-2-3: computation graphs and back-propagation
- import torch
- x = torch.tensor(3.0,requires_grad=True)
- y1 = x + 1
- y2 = 2*x
- loss = (y1-y2)**2
- loss.backward()
After the statement loss.backward() is called, the following computations happen in order:
1. loss's own grad is set to 1, i.e. its gradient with respect to itself is 1.
2. From its own gradient and its associated backward method, loss computes the gradients of its inputs y1 and y2 and assigns them to y1.grad and y2.grad.
3. From their own gradients and their associated backward methods, y2 and y1 each compute the gradient of their input x; x.grad accumulates the several gradient values it receives. (Note that the order of steps 1-3 and the rule of accumulating multiple gradient values are exactly a programmatic statement of the chain rule.) Because of this chain-rule-derived accumulation rule, a tensor's grad is not cleared automatically; it must be zeroed manually when needed.
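For the example above, the chain rule gives, at x = 3: ∂loss/∂y1 = 2(y1−y2) = −4 and ∂loss/∂y2 = −2(y1−y2) = 4, so x.grad = (−4)×1 + 4×2 = 4. A minimal sketch of the accumulation behavior (this code is illustrative, not from the original):
- import torch
- x = torch.tensor(3.0,requires_grad=True)
- ((x + 1 - 2*x)**2).backward()
- print(x.grad) # tensor(4.)
- ((x + 1 - 2*x)**2).backward() # a fresh graph is built and backward runs again
- print(x.grad) # tensor(8.) -- gradients accumulate across backward calls
- x.grad.zero_() # manual zeroing before the next backward
- print(x.grad) # tensor(0.)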
If we run the code below, we find that loss.grad is not the 1 we might expect but None; similarly y1.grad and y2.grad are None. Why? Because they are not leaf-node tensors.
During back-propagation, only the derivative results of leaf-node tensors (is_leaf=True) that require gradients are retained at the end.
So what is a leaf-node tensor? A leaf-node tensor must satisfy two conditions:
1. It was created directly by the user, not computed from some Function.
2. Its requires_grad attribute is True. PyTorch adopts this rule mainly to save memory (RAM or GPU memory), since almost all of the time users only care about the gradients of the tensors they created directly. Any tensor that depends on such a leaf-node tensor has requires_grad=True as well, but its gradient is only used during the computation and is not stored in its grad attribute at the end.
To keep an intermediate result's gradient in its grad attribute, use the retain_grad method.
If you merely want to inspect gradient values while debugging, register_hook can be used to print them.
- #Example 2-2-4: leaf and non-leaf nodes
- import torch
- x = torch.tensor(3.0,requires_grad=True)
- y1 = x + 1
- y2 = 2*x
- loss = (y1-y2)**2
- loss.backward()
- print("loss.grad:", loss.grad)
- print("y1.grad:", y1.grad)
- print("y2.grad:", y2.grad)
- print(x.grad)
-
- out:
- loss.grad: None
- y1.grad: None
- y2.grad: None
- tensor(4.)
-
-
-
-
- print(x.is_leaf)
- print(y1.is_leaf)
- print(y2.is_leaf)
- print(loss.is_leaf)
-
- out:
- True
- False
- False
- False
retain_grad keeps the gradient values of non-leaf nodes; register_hook lets you inspect them.
- #Example 2-2-4 (continued): gradients of non-leaf nodes
- import torch
- #forward propagation
- x = torch.tensor(3.0,requires_grad=True)
- y1 = x + 1
- y2 = 2*x
- loss = (y1-y2)**2
- #control display of non-leaf node gradients
- y1.register_hook(lambda grad: print('y1 grad: ', grad))
- y2.register_hook(lambda grad: print('y2 grad: ', grad))
- loss.retain_grad()
- #back-propagation
- loss.backward()
- print("loss.grad:", loss.grad)
- print("x.grad:", x.grad)
-
- out:
- y2 grad: tensor(4.)
- y1 grad: tensor(-4.)
- loss.grad: tensor(1.)
- x.grad: tensor(4.)
torch.utils.tensorboard can export the computation graph to TensorBoard for visualization.
- #Example 2-2-5: visualizing the computation graph in TensorBoard
- import torch
- from torch import nn
- class Net(nn.Module):
- def __init__(self):
- super(Net, self).__init__()
- self.w = nn.Parameter(torch.randn(2,1))
- self.b = nn.Parameter(torch.zeros(1,1))
- def forward(self, x):
- y = x@self.w + self.b
- return y
- net = Net()
- from torch.utils.tensorboard import SummaryWriter
- writer = SummaryWriter('./data/tensorboard')
- writer.add_graph(net,input_to_model = torch.rand(10,2))
- writer.close()
- %load_ext tensorboard
- #%tensorboard --logdir ./data/tensorboard
- from tensorboard import notebook
- notebook.list()
- #view the model in tensorboard
- notebook.start("--logdir ./data/tensorboard")
-
Running this displays the model's computation graph in TensorBoard (figure omitted here).
PyTorch has 5 levels of structure: the hardware level, the kernel level, the low-level API, the mid-level API, and the high-level API (torchkeras).
From low to high, the five levels are:
The bottom level is hardware: PyTorch supports adding CPUs and GPUs to the compute resource pool.
The second level is the kernel, implemented in C++.
The third level consists of operators implemented in Python: low-level API instructions that wrap the C++ kernel, mainly tensor operation operators, automatic differentiation and variable management, e.g. torch.tensor, torch.cat, torch.autograd.grad, nn.Module.
The fourth level consists of model components implemented in Python that wrap the low-level API in functions: model layers, loss functions, optimizers, data pipelines and so on, e.g. torch.nn.Linear, torch.nn.BCELoss, torch.optim.Adam, torch.utils.data.DataLoader.
The fifth level is the model interface implemented in Python. PyTorch has no official high-level API; to make training easier, we imitate the model interface in keras and wrap PyTorch in a high-level model interface, torchkeras.Model, in fewer than 300 lines of code.
The examples below implement a linear regression model and a DNN binary classification model with PyTorch's low-level API.
The low-level API mainly comprises tensor operations, computation graphs and automatic differentiation.
- #Example 2-3-1-a: linear regression with the low-level API
-
- import os
- import datetime
- #print the time
- def printbar():
- nowtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
- print("\n"+"=========="*8 + "%s"%nowtime)
-
- #Step 1: prepare data
- import numpy as np
- import pandas as pd
- from matplotlib import pyplot as plt
- import torch
- from torch import nn
- #number of samples
- n = 400
- # generate a test dataset
- X = 10*torch.rand([n,2])-5.0 #torch.rand is uniform
- w0 = torch.tensor([[2.0],[-3.0]])
- b0 = torch.tensor([[10.0]])
- Y = X@w0 + b0 + torch.normal( 0.0,2.0,size = [n,1]) # @ is matrix multiplication; add normal noise
-
- # visualize the data
- %matplotlib inline
- %config InlineBackend.figure_format = 'svg'
- plt.figure(figsize = (12,5))
- ax1 = plt.subplot(121)
- ax1.scatter(X[:,0].numpy(),Y[:,0].numpy(), c = "b",label = "samples")
- ax1.legend()
- plt.xlabel("x1")
- plt.ylabel("y",rotation = 0)
- ax2 = plt.subplot(122)
- ax2.scatter(X[:,1].numpy(),Y[:,0].numpy(), c = "g",label = "samples")
- ax2.legend()
- plt.xlabel("x2")
- plt.ylabel("y",rotation = 0)
- plt.show()
-
- # build a data pipeline iterator
- def data_iter(features, labels, batch_size=8):
- num_examples = len(features)
- indices = list(range(num_examples))
- np.random.shuffle(indices) #samples are read in random order
- for i in range(0, num_examples, batch_size):
- indexs = torch.LongTensor(indices[i: min(i + batch_size,num_examples)])
- yield features.index_select(0, indexs), labels.index_select(0,indexs)
- # test the data pipeline
- batch_size = 8
- (features,labels) = next(data_iter(X,Y,batch_size))
- print(features)
- print(labels)
-
- #Step 2: build the model
- # define the model
- class LinearRegression:
- def __init__(self):
- self.w = torch.randn_like(w0,requires_grad=True)
- self.b = torch.zeros_like(b0,requires_grad=True)
- #forward propagation
- def forward(self,x):
- return x@self.w + self.b
- # loss function
- def loss_func(self,y_pred,y_true):
- return torch.mean((y_pred - y_true)**2/2)
- model = LinearRegression()
-
- #Step 3: train the model
- def train_step(model, features, labels):
- predictions = model.forward(features)
- loss = model.loss_func(predictions,labels)
- # back-propagate to get gradients
- loss.backward()
- # use torch.no_grad() to avoid recording gradients; operating on model.w.data would also avoid it
- with torch.no_grad():
- # gradient-descent parameter update
- model.w -= 0.001*model.w.grad
- model.b -= 0.001*model.b.grad
- # zero the gradients
- model.w.grad.zero_()
- model.b.grad.zero_()
- return loss
- # test train_step
- batch_size = 10
- (features,labels) = next(data_iter(X,Y,batch_size))
- train_step(model,features,labels)
-
- def train_model(model,epochs):
- for epoch in range(1,epochs+1):
- for features, labels in data_iter(X,Y,10):
- loss = train_step(model,features,labels)
- if epoch%200==0:
- printbar()
- print("epoch =",epoch,"loss = ",loss.item())
- print("model.w =",model.w.data)
- print("model.b =",model.b.data)
- train_model(model,epochs = 1000)
-
- # visualize the results
- %matplotlib inline
- %config InlineBackend.figure_format = 'svg'
- plt.figure(figsize = (12,5))
- ax1 = plt.subplot(121)
- ax1.scatter(X[:,0].numpy(),Y[:,0].numpy(), c = "b",label = "samples")
- ax1.plot(X[:,0].numpy(),(model.w[0].data*X[:,0]+model.b[0].data).numpy(),"-r",linewidth = 5.0,label = "model")
- ax1.legend()
- plt.xlabel("x1")
- plt.ylabel("y",rotation = 0)
- ax2 = plt.subplot(122)
- ax2.scatter(X[:,1].numpy(),Y[:,0].numpy(), c = "g",label = "samples")
- ax2.plot(X[:,1].numpy(),(model.w[1].data*X[:,1]+model.b[0].data).numpy(),"-r",linewidth = 5.0,label = "model")
- ax2.legend()
- plt.xlabel("x2")
- plt.ylabel("y",rotation = 0)
- plt.show()
- #Example 2-3-1-b: DNN binary classification with the low-level API
-
- #Step 1: prepare data
- import numpy as np
- import pandas as pd
- from matplotlib import pyplot as plt
- import torch
- from torch import nn
- %matplotlib inline
- %config InlineBackend.figure_format = 'svg'
- #numbers of positive and negative samples
- n_positive,n_negative = 2000,2000
- #generate positive samples on a small annulus
- r_p = 5.0 + torch.normal(0.0,1.0,size = [n_positive,1])
- theta_p = 2*np.pi*torch.rand([n_positive,1])
- Xp = torch.cat([r_p*torch.cos(theta_p),r_p*torch.sin(theta_p)],axis = 1)
- Yp = torch.ones_like(r_p)
- #generate negative samples on a large annulus
- r_n = 8.0 + torch.normal(0.0,1.0,size = [n_negative,1])
- theta_n = 2*np.pi*torch.rand([n_negative,1])
- Xn = torch.cat([r_n*torch.cos(theta_n),r_n*torch.sin(theta_n)],axis = 1)
- Yn = torch.zeros_like(r_n)
- #combine the samples
- X = torch.cat([Xp,Xn],axis = 0)
- Y = torch.cat([Yp,Yn],axis = 0)
- #visualize
- plt.figure(figsize = (6,6))
- plt.scatter(Xp[:,0].numpy(),Xp[:,1].numpy(),c = "r")
- plt.scatter(Xn[:,0].numpy(),Xn[:,1].numpy(),c = "g")
- plt.legend(["positive","negative"]);
- # build a data pipeline iterator
- def data_iter(features, labels, batch_size=8):
- num_examples = len(features)
- indices = list(range(num_examples))
- np.random.shuffle(indices) #samples are read in random order
- for i in range(0, num_examples, batch_size):
- indexs = torch.LongTensor(indices[i: min(i + batch_size,num_examples)])
- yield features.index_select(0, indexs), labels.index_select(0,indexs)
- # test the data pipeline
- batch_size = 8
- (features,labels) = next(data_iter(X,Y,batch_size))
- print(features)
- print(labels)
-
- #Step 2: define the model
-
- class DNNModel(nn.Module):
- def __init__(self):
- super(DNNModel, self).__init__()
- self.w1 = nn.Parameter(torch.randn(2,4))
- self.b1 = nn.Parameter(torch.zeros(1,4))
- self.w2 = nn.Parameter(torch.randn(4,8))
- self.b2 = nn.Parameter(torch.zeros(1,8))
- self.w3 = nn.Parameter(torch.randn(8,1))
- self.b3 = nn.Parameter(torch.zeros(1,1))
- # forward propagation
- def forward(self,x):
- x = torch.relu(x@self.w1 + self.b1)
- x = torch.relu(x@self.w2 + self.b2)
- y = torch.sigmoid(x@self.w3 + self.b3)
- return y
- # loss function (binary cross-entropy)
- def loss_func(self,y_pred,y_true):
- #clamp predictions to [1e-7, 1-1e-7] to avoid log(0) errors
- eps = 1e-7
- y_pred = torch.clamp(y_pred,eps,1.0-eps)
- bce = - y_true*torch.log(y_pred) - (1-y_true)*torch.log(1-y_pred)
- return torch.mean(bce)
- # evaluation metric (accuracy)
- def metric_func(self,y_pred,y_true):
- y_pred = torch.where(y_pred>0.5,torch.ones_like(y_pred,dtype = torch.float32),torch.zeros_like(y_pred,dtype = torch.float32))
- acc = torch.mean(1-torch.abs(y_true-y_pred))
- return acc
- model = DNNModel()
- # 测试模型结构
- batch_size = 10
- (features,labels) = next(data_iter(X,Y,batch_size))
- predictions = model(features)
- loss = model.loss_func(predictions,labels)
- metric = model.metric_func(predictions,labels)
- print("init loss:", loss.item())
- print("init metric:", metric.item())
-
- #Step 3: train the model
-
- def train_step(model, features, labels):
- # forward pass to compute the loss
- predictions = model.forward(features)
- loss = model.loss_func(predictions,labels)
- metric = model.metric_func(predictions,labels)
- # back-propagate to get gradients
- loss.backward()
- # gradient-descent parameter update
- for param in model.parameters():
- #note: reassign param.data so the update itself is not recorded for gradients
- param.data = (param.data - 0.01*param.grad.data)
- # zero the gradients
- model.zero_grad()
- return loss.item(),metric.item()
- def train_model(model,epochs):
- for epoch in range(1,epochs+1):
- loss_list,metric_list = [],[]
- for features, labels in data_iter(X,Y,20):
- lossi,metrici = train_step(model,features,labels)
- loss_list.append(lossi)
- metric_list.append(metrici)
- loss = np.mean(loss_list)
- metric = np.mean(metric_list)
- if epoch%100==0:
- print()
- print("epoch =",epoch,"loss = ",loss,"metric = ",metric)
- train_model(model,epochs = 1000)
-
- # visualize the results
- fig, (ax1,ax2) = plt.subplots(nrows=1,ncols=2,figsize = (12,5))
- ax1.scatter(Xp[:,0],Xp[:,1], c="r")
- ax1.scatter(Xn[:,0],Xn[:,1],c = "g")
- ax1.legend(["positive","negative"]);
- ax1.set_title("y_true");
- Xp_pred = X[torch.squeeze(model.forward(X)>=0.5)]
- Xn_pred = X[torch.squeeze(model.forward(X)<0.5)]
- ax2.scatter(Xp_pred[:,0],Xp_pred[:,1],c = "r")
- ax2.scatter(Xn_pred[:,0],Xn_pred[:,1],c = "g")
- ax2.legend(["positive","negative"]);
- ax2.set_title("y_pred");
The following examples use PyTorch's mid-level API to implement a linear regression model and a DNN binary classification model.
PyTorch's mid-level API mainly includes model layers, loss functions, optimizers, data pipelines, and so on.
- #Example 2-3-2-a: linear regression with the mid-level API
- import numpy as np
- import pandas as pd
- from matplotlib import pyplot as plt
- import torch
- from torch import nn
- import torch.nn.functional as F
- from torch.utils.data import Dataset,DataLoader,TensorDataset
-
- #Step 1: prepare data
- #number of samples
- n = 400
- # generate a test dataset
- X = 10*torch.rand([n,2])-5.0 #torch.rand is uniform
- w0 = torch.tensor([[2.0],[-3.0]])
- b0 = torch.tensor([[10.0]])
- Y = X@w0 + b0 + torch.normal( 0.0,2.0,size = [n,1]) # @ is matrix multiplication; add normal noise
- # visualize the data
- %matplotlib inline
- %config InlineBackend.figure_format = 'svg'
- plt.figure(figsize = (12,5))
- ax1 = plt.subplot(121)
- ax1.scatter(X[:,0],Y[:,0], c = "b",label = "samples")
- ax1.legend()
- plt.xlabel("x1")
- plt.ylabel("y",rotation = 0)
- ax2 = plt.subplot(122)
- ax2.scatter(X[:,1],Y[:,0], c = "g",label = "samples")
- ax2.legend()
- plt.xlabel("x2")
- plt.ylabel("y",rotation = 0)
- plt.show()
-
- #build the input data pipeline
- ds = TensorDataset(X,Y)
- dl = DataLoader(ds,batch_size = 10,shuffle=True,num_workers=2)
-
- #Step 2: define the model
- model = nn.Linear(2,1) #linear layer
- model.loss_func = nn.MSELoss()
- model.optimizer = torch.optim.SGD(model.parameters(),lr = 0.01)
-
- #Step 3: train the model
- def train_step(model, features, labels):
- predictions = model(features)
- loss = model.loss_func(predictions,labels)
- loss.backward()
- model.optimizer.step()
- model.optimizer.zero_grad()
- return loss.item()
- # test train_step
- features,labels = next(iter(dl))
- train_step(model,features,labels)
-
- def train_model(model,epochs):
- for epoch in range(1,epochs+1):
- for features, labels in dl:
- loss = train_step(model,features,labels)
- if epoch%50==0:
- printbar()
- w = model.state_dict()["weight"]
- b = model.state_dict()["bias"]
- print("epoch =",epoch,"loss = ",loss)
- print("w =",w)
- print("b =",b)
- train_model(model,epochs = 200)
- # visualize the results
- %matplotlib inline
- %config InlineBackend.figure_format = 'svg'
- w,b = model.state_dict()["weight"],model.state_dict()["bias"]
- plt.figure(figsize = (12,5))
- ax1 = plt.subplot(121)
- ax1.scatter(X[:,0],Y[:,0], c = "b",label = "samples")
- ax1.plot(X[:,0],w[0,0]*X[:,0]+b[0],"-r",linewidth = 5.0,label = "model")
- ax1.legend()
- plt.xlabel("x1")
- plt.ylabel("y",rotation = 0)
- ax2 = plt.subplot(122)
- ax2.scatter(X[:,1],Y[:,0], c = "g",label = "samples")
- ax2.plot(X[:,1],w[0,1]*X[:,1]+b[0],"-r",linewidth = 5.0,label = "model")
- ax2.legend()
- plt.xlabel("x2")
- plt.ylabel("y",rotation = 0)
- plt.show()
- #Example 2-3-2-b: DNN binary classification with the mid-level API
-
- #Step 1: prepare data
- import numpy as np
- import pandas as pd
- from matplotlib import pyplot as plt
- import torch
- from torch import nn
- import torch.nn.functional as F
- from torch.utils.data import Dataset,DataLoader,TensorDataset
- %matplotlib inline
- %config InlineBackend.figure_format = 'svg'
- #numbers of positive and negative samples
- n_positive,n_negative = 2000,2000
- #generate positive samples on a small annulus
- r_p = 5.0 + torch.normal(0.0,1.0,size = [n_positive,1])
- theta_p = 2*np.pi*torch.rand([n_positive,1])
- Xp = torch.cat([r_p*torch.cos(theta_p),r_p*torch.sin(theta_p)],axis = 1)
- Yp = torch.ones_like(r_p)
- #generate negative samples on a large annulus
- r_n = 8.0 + torch.normal(0.0,1.0,size = [n_negative,1])
- theta_n = 2*np.pi*torch.rand([n_negative,1])
- Xn = torch.cat([r_n*torch.cos(theta_n),r_n*torch.sin(theta_n)],axis = 1)
- Yn = torch.zeros_like(r_n)
- #combine the samples
- X = torch.cat([Xp,Xn],axis = 0)
- Y = torch.cat([Yp,Yn],axis = 0)
- #visualize
- plt.figure(figsize = (6,6))
- plt.scatter(Xp[:,0],Xp[:,1],c = "r")
- plt.scatter(Xn[:,0],Xn[:,1],c = "g")
- plt.legend(["positive","negative"]);
-
- #build the input data pipeline
- ds = TensorDataset(X,Y)
- dl = DataLoader(ds,batch_size = 10,shuffle=True,num_workers=2)
-
- #Step 2: define the model
- class DNNModel(nn.Module):
- def __init__(self):
- super(DNNModel, self).__init__()
- self.fc1 = nn.Linear(2,4)
- self.fc2 = nn.Linear(4,8)
- self.fc3 = nn.Linear(8,1)
- # forward propagation
- def forward(self,x):
- x = F.relu(self.fc1(x))
- x = F.relu(self.fc2(x))
- y = nn.Sigmoid()(self.fc3(x))
- return y
- # loss function
- def loss_func(self,y_pred,y_true):
- return nn.BCELoss()(y_pred,y_true)
- # evaluation function (accuracy)
- def metric_func(self,y_pred,y_true):
- y_pred = torch.where(y_pred>0.5,torch.ones_like(y_pred,dtype = torch.float32),torch.zeros_like(y_pred,dtype = torch.float32))
- acc = torch.mean(1-torch.abs(y_true-y_pred))
- return acc
- # optimizer
- @property
- def optimizer(self):
- return torch.optim.Adam(self.parameters(),lr = 0.001)
- model = DNNModel()
- # test the model structure
- (features,labels) = next(iter(dl))
- predictions = model(features)
- loss = model.loss_func(predictions,labels)
- metric = model.metric_func(predictions,labels)
- print("init loss:",loss.item())
- print("init metric:",metric.item())
-
- #Step 3: train the model
- def train_step(model, features, labels):
- # forward pass to compute the loss
- predictions = model(features)
- loss = model.loss_func(predictions,labels)
- metric = model.metric_func(predictions,labels)
- # back-propagate to get gradients
- loss.backward()
- # update the model parameters
- model.optimizer.step()
- model.optimizer.zero_grad()
- return loss.item(),metric.item()
- # test train_step
- features,labels = next(iter(dl))
- train_step(model,features,labels)
-
- def train_model(model,epochs):
- for epoch in range(1,epochs+1):
- loss_list,metric_list = [],[]
- for features, labels in dl:
- lossi,metrici = train_step(model,features,labels)
- loss_list.append(lossi)
- metric_list.append(metrici)
- loss = np.mean(loss_list)
- metric = np.mean(metric_list)
- if epoch%100==0:
- printbar()
- print("epoch =",epoch,"loss = ",loss,"metric = ",metric)
- train_model(model,epochs = 300)
-
- # visualize the results
- fig, (ax1,ax2) = plt.subplots(nrows=1,ncols=2,figsize = (12,5))
- ax1.scatter(Xp[:,0],Xp[:,1], c="r")
- ax1.scatter(Xn[:,0],Xn[:,1],c = "g")
- ax1.legend(["positive","negative"]);
- ax1.set_title("y_true");
- Xp_pred = X[torch.squeeze(model.forward(X)>=0.5)]
- Xn_pred = X[torch.squeeze(model.forward(X)<0.5)]
- ax2.scatter(Xp_pred[:,0],Xp_pred[:,1],c = "r")
- ax2.scatter(Xn_pred[:,0],Xn_pred[:,1],c = "g")
- ax2.legend(["positive","negative"]);
- ax2.set_title("y_pred");
-
-
-
-
-
-
PyTorch has no official high-level API, so users generally implement their own training, validation and prediction loops.
Here we wrap PyTorch's nn.Module after the fashion of tf.keras.Model, implementing fit, validate, predict and summary methods — effectively a user-defined high-level API — and then use it to implement a linear regression model and a DNN binary classification model.
- import os
- import datetime
- from torchkeras import Model, summary
- #print the time
- def printbar():
- nowtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
- print("\n"+"=========="*8 + "%s"%nowtime)
- #Example 2-3-3-a: linear regression with the high-level API
-
- #Step 1: prepare data
- import numpy as np
- import pandas as pd
- from matplotlib import pyplot as plt
- import torch
- from torch import nn
- import torch.nn.functional as F
- from torch.utils.data import Dataset,DataLoader,TensorDataset
- #number of samples
- n = 400
- # generate a test dataset
- X = 10*torch.rand([n,2])-5.0 #torch.rand is uniform
- w0 = torch.tensor([[2.0],[-3.0]])
- b0 = torch.tensor([[10.0]])
- Y = X@w0 + b0 + torch.normal( 0.0,2.0,size = [n,1]) # @ is matrix multiplication; add normal noise
- # visualize the data
- %matplotlib inline
- %config InlineBackend.figure_format = 'svg'
- plt.figure(figsize = (12,5))
- ax1 = plt.subplot(121)
- ax1.scatter(X[:,0],Y[:,0], c = "b",label = "samples")
- ax1.legend()
- plt.xlabel("x1")
- plt.ylabel("y",rotation = 0)
- ax2 = plt.subplot(122)
- ax2.scatter(X[:,1],Y[:,0], c = "g",label = "samples")
- ax2.legend()
- plt.xlabel("x2")
- plt.ylabel("y",rotation = 0)
- plt.show()
-
- #build the input data pipeline
- ds = TensorDataset(X,Y)
- ds_train,ds_valid = torch.utils.data.random_split(ds,[int(400*0.7),400-int(400*0.7)])
- dl_train = DataLoader(ds_train,batch_size = 10,shuffle=True,num_workers=2)
- dl_valid = DataLoader(ds_valid,batch_size = 10,num_workers=2)
-
- #Step 2: define the model
- # subclass the user-defined Model class
- from torchkeras import Model
- class LinearRegression(Model):
- def __init__(self):
- super(LinearRegression, self).__init__()
- self.fc = nn.Linear(2,1)
- def forward(self,x):
- return self.fc(x)
- model = LinearRegression()
- model.summary(input_shape = (2,))
-
- #Step 3: train the model
- ### train with the fit method
- def mean_absolute_error(y_pred,y_true):
- return torch.mean(torch.abs(y_pred-y_true))
- def mean_absolute_percent_error(y_pred,y_true):
- absolute_percent_error = (torch.abs(y_pred-y_true)+1e-7)/(torch.abs(y_true)+1e-7)
- return torch.mean(absolute_percent_error)
- model.compile(loss_func = nn.MSELoss(),optimizer = torch.optim.Adam(model.parameters(),lr = 0.01),metrics_dict = {"mae":mean_absolute_error,"mape":mean_absolute_percent_error})
- dfhistory = model.fit(200,dl_train = dl_train, dl_val = dl_valid,log_step_freq = 20)
-
- # visualize the results
- %matplotlib inline
- %config InlineBackend.figure_format = 'svg'
- w,b = model.state_dict()["fc.weight"],model.state_dict()["fc.bias"]
- plt.figure(figsize = (12,5))
- ax1 = plt.subplot(121)
- ax1.scatter(X[:,0],Y[:,0], c = "b",label = "samples")
- ax1.plot(X[:,0],w[0,0]*X[:,0]+b[0],"-r",linewidth = 5.0,label = "model")
- ax1.legend()
- plt.xlabel("x1")
- plt.ylabel("y",rotation = 0)
- ax2 = plt.subplot(122)
- ax2.scatter(X[:,1],Y[:,0], c = "g",label = "samples")
- ax2.plot(X[:,1],w[0,1]*X[:,1]+b[0],"-r",linewidth = 5.0,label = "model")
- ax2.legend()
- plt.xlabel("x2")
- plt.ylabel("y",rotation = 0)
- plt.show()
-
- #Step 4: evaluate the model
- dfhistory.tail()
- %matplotlib inline
- %config InlineBackend.figure_format = 'svg'
- import matplotlib.pyplot as plt
- def plot_metric(dfhistory, metric):
- train_metrics = dfhistory[metric]
- val_metrics = dfhistory['val_'+metric]
- epochs = range(1, len(train_metrics) + 1)
- plt.plot(epochs, train_metrics, 'bo--')
- plt.plot(epochs, val_metrics, 'ro-')
- plt.title('Training and validation '+ metric)
- plt.xlabel("Epochs")
- plt.ylabel(metric)
- plt.legend(["train_"+metric, 'val_'+metric])
- plt.show()
- plot_metric(dfhistory,"loss")
- plot_metric(dfhistory,"mape")
- # evaluate
- model.evaluate(dl_valid)
-
- #Step 5: use the model
- # predict
- dl = DataLoader(TensorDataset(X))
- model.predict(dl)[0:10]
- # predict
- model.predict(dl_valid)[0:10]
-
- #Example 2-3-3-b: DNN binary classification with the high-level API
-
- #Step 1: prepare data
- import numpy as np
- import pandas as pd
- from matplotlib import pyplot as plt
- import torch
- from torch import nn
- import torch.nn.functional as F
- from torch.utils.data import Dataset,DataLoader,TensorDataset
- import torchkeras
- %matplotlib inline
- %config InlineBackend.figure_format = 'svg'
- #numbers of positive and negative samples
- n_positive,n_negative = 2000,2000
- #generate positive samples on a small annulus
- r_p = 5.0 + torch.normal(0.0,1.0,size = [n_positive,1])
- theta_p = 2*np.pi*torch.rand([n_positive,1])
- Xp = torch.cat([r_p*torch.cos(theta_p),r_p*torch.sin(theta_p)],axis = 1)
- Yp = torch.ones_like(r_p)
- #generate negative samples on a large annulus
- r_n = 8.0 + torch.normal(0.0,1.0,size = [n_negative,1])
- theta_n = 2*np.pi*torch.rand([n_negative,1])
- Xn = torch.cat([r_n*torch.cos(theta_n),r_n*torch.sin(theta_n)],axis = 1)
- Yn = torch.zeros_like(r_n)
- #combine the samples
- X = torch.cat([Xp,Xn],axis = 0)
- Y = torch.cat([Yp,Yn],axis = 0)
- #visualize
- plt.figure(figsize = (6,6))
- plt.scatter(Xp[:,0],Xp[:,1],c = "r")
- plt.scatter(Xn[:,0],Xn[:,1],c = "g")
- plt.legend(["positive","negative"]);
-
- ds = TensorDataset(X,Y)
- ds_train,ds_valid = torch.utils.data.random_split(ds,[int(len(ds)*0.7),len(ds)-int(len(ds)*0.7)])
- dl_train = DataLoader(ds_train,batch_size = 100,shuffle=True,num_workers=2)
- dl_valid = DataLoader(ds_valid,batch_size = 100,num_workers=2)
-
- #Step 2: define the model
- class Net(nn.Module):
-     def __init__(self):
-         super().__init__()
-         self.fc1 = nn.Linear(2,4)
-         self.fc2 = nn.Linear(4,8)
-         self.fc3 = nn.Linear(8,1)
-     def forward(self,x):
-         x = F.relu(self.fc1(x))
-         x = F.relu(self.fc2(x))
-         y = nn.Sigmoid()(self.fc3(x))
-         return y
- model = torchkeras.Model(Net())
- model.summary(input_shape =(2,))
-
- #Step 3: train the model
- # Accuracy metric
- def accuracy(y_pred,y_true):
-     y_pred = torch.where(y_pred>0.5,torch.ones_like(y_pred,dtype = torch.float32),
-                          torch.zeros_like(y_pred,dtype = torch.float32))
-     acc = torch.mean(1-torch.abs(y_true-y_pred))
-     return acc
- model.compile(loss_func = nn.BCELoss(),
-               optimizer = torch.optim.Adam(model.parameters(),lr = 0.01),
-               metrics_dict = {"accuracy":accuracy})
- dfhistory = model.fit(100,dl_train = dl_train,dl_val = dl_valid,log_step_freq = 10)
- # Visualize the results
- fig, (ax1,ax2) = plt.subplots(nrows=1,ncols=2,figsize = (12,5))
- ax1.scatter(Xp[:,0],Xp[:,1], c="r")
- ax1.scatter(Xn[:,0],Xn[:,1],c = "g")
- ax1.legend(["positive","negative"]);
- ax1.set_title("y_true");
- Xp_pred = X[torch.squeeze(model.forward(X)>=0.5)]
- Xn_pred = X[torch.squeeze(model.forward(X)<0.5)]
- ax2.scatter(Xp_pred[:,0],Xp_pred[:,1],c = "r")
- ax2.scatter(Xn_pred[:,0],Xn_pred[:,1],c = "g")
- ax2.legend(["positive","negative"]);
- ax2.set_title("y_pred");
-
- #Step 4: evaluate the model
- %matplotlib inline
- %config InlineBackend.figure_format = 'svg'
- import matplotlib.pyplot as plt
- def plot_metric(dfhistory, metric):
-     train_metrics = dfhistory[metric]
-     val_metrics = dfhistory['val_'+metric]
-     epochs = range(1, len(train_metrics) + 1)
-     plt.plot(epochs, train_metrics, 'bo--')
-     plt.plot(epochs, val_metrics, 'ro-')
-     plt.title('Training and validation '+ metric)
-     plt.xlabel("Epochs")
-     plt.ylabel(metric)
-     plt.legend(["train_"+metric, 'val_'+metric])
-     plt.show()
- plot_metric(dfhistory,"loss")
- model.evaluate(dl_valid)
-
- #Step 5: use the model
- model.predict(dl_valid)[0:10]
-
-
-
-
-
-
PyTorch usually uses the two utility classes Dataset and DataLoader to build data pipelines.
Dataset defines the content of a dataset. It is a list-like data structure: it has a definite length, and its elements can be retrieved by index.
DataLoader defines how to load a dataset batch by batch. It is an iterable that implements the __iter__ method and yields one batch of data per iteration.
DataLoader controls the batch size, the way elements of a batch are sampled, and the way a batch is collated into the input form the model expects, and it can read data with multiple worker processes.
In the vast majority of cases, users only need to implement Dataset's __len__ and __getitem__ methods to build their own dataset and load it with the default data pipeline.
Let us consider the steps needed to fetch one batch of data from a dataset. (Assume the features and labels of the dataset are tensors X and Y, so the dataset can be written as (X,Y), and assume the batch size is m.)
a) First we determine the length n of the dataset, e.g. n = 1000.
b) Then we sample m numbers (the batch size) from the range 0 to n-1.
Assuming m=4, the result is a list such as: indices = [1,4,8,9]
c) Next we fetch the elements at those m indices from the dataset. The result is a list of tuples, e.g.: samples = [(X[1],Y[1]),(X[4],Y[4]),(X[8],Y[8]),(X[9],Y[9])]
d) Finally we collate the results into two tensors as output,
e.g. batch = (features,labels),
where features = torch.stack([X[1],X[4],X[8],X[9]]) and labels = torch.stack([Y[1],Y[4],Y[8],Y[9]])
Step a) above, determining the length of the dataset, is implemented by Dataset's __len__ method.
Step b), sampling m numbers from the range 0 to n-1, is specified by DataLoader's sampler and batch_sampler arguments.
The sampler argument specifies how individual elements are sampled. Users generally do not need to set it: by default, DataLoader samples randomly when shuffle=True and sequentially when shuffle=False.
The batch_sampler argument collects the sampled elements into batch lists. Users generally do not need to set it either: with the default method, when drop_last=True the final batch whose length is not divisible by the batch size is dropped, and when drop_last=False it is kept.
The core logic of step c), fetching dataset elements by index, is implemented by Dataset's __getitem__ method.
The logic of step d) is specified by DataLoader's collate_fn argument, which usually does not need to be set; a sketch of a custom collate_fn follows the pseudocode below.
The following is pseudocode for the core interface logic of Dataset and DataLoader; it is not exactly the same as the source code.
- #Example 3-1-3: the main interfaces of Dataset and DataLoader
- import torch
- class Dataset(object):
-     def __init__(self):
-         pass
-     def __len__(self):
-         raise NotImplementedError
-     def __getitem__(self,index):
-         raise NotImplementedError
- class DataLoader(object):
-     def __init__(self,dataset,batch_size,collate_fn,shuffle = True,drop_last = False):
-         self.dataset = dataset
-         self.collate_fn = collate_fn
-         self.sampler = torch.utils.data.RandomSampler if shuffle else \
-             torch.utils.data.SequentialSampler
-         self.batch_sampler = torch.utils.data.BatchSampler
-         self.sample_iter = iter(self.batch_sampler(
-             self.sampler(range(len(dataset))),
-             batch_size = batch_size,drop_last = drop_last))
-     def __next__(self):
-         indices = next(self.sample_iter)
-         batch = self.collate_fn([self.dataset[i] for i in indices])
-         return batch
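The default collate_fn stacks samples into tensors, which is all most datasets need, but it can be replaced for irregular data. Here is a minimal sketch of a custom collate_fn that zero-pads variable-length 1-D feature tensors within a batch; the toy dataset and the padding scheme are made up for illustration.
- import torch
- import torch.nn.functional as F
- from torch.utils.data import Dataset,DataLoader
-
- class VarLenDataset(Dataset):
-     #a toy dataset whose samples have different lengths
-     def __init__(self):
-         self.data = [torch.arange(n) for n in (3,5,2,4)]
-     def __len__(self):
-         return len(self.data)
-     def __getitem__(self,index):
-         return self.data[index]
-
- def pad_collate(samples):
-     #right-pad every sample with zeros up to the longest length in the batch
-     max_len = max(s.size(0) for s in samples)
-     return torch.stack([F.pad(s,(0,max_len-s.size(0))) for s in samples])
-
- dl = DataLoader(VarLenDataset(),batch_size = 2,collate_fn = pad_collate)
- for batch in dl:
-     print(batch.shape) #each batch is padded to its own maximum length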
The common ways to create a Dataset are:
a) Use torch.utils.data.TensorDataset to create a dataset from tensors (numpy arrays and pandas DataFrames need to be converted to tensors first).
b) Use torchvision.datasets.ImageFolder to create an image dataset from a directory of images.
c) Subclass torch.utils.data.Dataset to create a custom dataset.
d) In addition, torch.utils.data.random_split can split one dataset into several parts; this is commonly used to split training, validation and test sets.
e) Call the Dataset addition operator (+) to merge multiple datasets into one.
- #Example 3-2-1: create a dataset from tensors
- import numpy as np
- import torch
- from torch.utils.data import TensorDataset,Dataset,DataLoader,random_split
-
- # Create a dataset from tensors
- from sklearn import datasets
- iris = datasets.load_iris()
- ds_iris = TensorDataset(torch.tensor(iris.data),torch.tensor(iris.target))
- # Split into training and validation sets
- n_train = int(len(ds_iris)*0.8)
- n_valid = len(ds_iris) - n_train
- ds_train,ds_valid = random_split(ds_iris,[n_train,n_valid])
- print(type(ds_iris))
- print(type(ds_train))
-
- out:
- <class 'torch.utils.data.dataset.TensorDataset'>
- <class 'torch.utils.data.dataset.Subset'>
-
-
-
-
- # Load the datasets with DataLoader
- dl_train,dl_valid = DataLoader(ds_train,batch_size = 8),DataLoader(ds_valid,batch_size = 8)
- for features,labels in dl_train:
-     print(features,labels)
-     break
-
- out:
- tensor([[6.5000, 3.0000, 5.2000, 2.0000],
- [6.3000, 3.4000, 5.6000, 2.4000],
- [4.9000, 2.4000, 3.3000, 1.0000],
- [6.7000, 3.1000, 4.7000, 1.5000],
- [4.5000, 2.3000, 1.3000, 0.3000],
- [5.7000, 2.5000, 5.0000, 2.0000],
- [5.2000, 4.1000, 1.5000, 0.1000],
- [5.7000, 2.6000, 3.5000, 1.0000]], dtype=torch.float64) tensor([2, 2, 1, 1, 0, 2, 0, 1], dtype=torch.int32)
-
-
-
- # Demonstrate merging datasets with the addition operator (+)
- ds_data = ds_train + ds_valid
- print('len(ds_train) = ',len(ds_train))
- print('len(ds_valid) = ',len(ds_valid))
- print('len(ds_train+ds_valid) = ',len(ds_data))
- print(type(ds_data))
-
- out:
- len(ds_train) = 120
- len(ds_valid) = 30
- len(ds_train+ds_valid) = 150
- <class 'torch.utils.data.dataset.ConcatDataset'>
- #Example 3-2-2: create an image dataset from a directory of images
- import numpy as np
- import torch
- from torch.utils.data import DataLoader
- from torchvision import transforms,datasets
-
- #Demonstrate some common image augmentation operations
- from PIL import Image
- img = Image.open('./data/dog2.jpg')
- img
-
- # Random vertical flip
- transforms.RandomVerticalFlip()(img)
-
- #Random rotation
- transforms.RandomRotation(45)(img)
-
- # Define the image augmentation operations
- transform_train = transforms.Compose([
-     transforms.RandomHorizontalFlip(), #random horizontal flip
-     transforms.RandomVerticalFlip(), #random vertical flip
-     transforms.RandomRotation(45), #random rotation within 45 degrees
-     transforms.ToTensor() #convert to a tensor
- ])
- transform_valid = transforms.Compose([
-     transforms.ToTensor()
- ])
-
- # Create the datasets from the image directories
- # The animal dataset used here was assembled by the author; the link is at the end of the article
- #Note: under the train and test directories, create one sub-folder per image class (the folder name is the class name), then put the images into the corresponding folders
- ds_train = datasets.ImageFolder("data/animal/train/", transform = transform_train,target_transform= lambda t:torch.tensor([t]).float())
- ds_valid = datasets.ImageFolder("data/animal/test/", transform = transform_valid,target_transform= lambda t:torch.tensor([t]).float())
- print(ds_train.class_to_idx)
-
- # Load the datasets with DataLoader
- dl_train = DataLoader(ds_train,batch_size = 2,shuffle = True,num_workers=1)
- dl_valid = DataLoader(ds_valid,batch_size = 2,shuffle = True,num_workers=1)
-
- for features,labels in dl_train:
-     print(features)
-     print(labels)
-     break
- #Example 3-2-3: create a custom dataset
- #Below we subclass Dataset to create a custom dataset for the douban text classification task. The douban dataset link is at the end of the article.
- #The rough idea is as follows. First, tokenize the training texts and build a vocabulary. Then convert the training and test texts into token id sequences. Next, split the tokenized training and test data into one file per sample. Finally, we can fetch a sample's content through the file name list by index, and thus build the Dataset.
-
-
- import numpy as np
- import pandas as pd
- from collections import OrderedDict
- import re,string,jieba,csv
-
-
- MAX_WORDS = 10000 # keep only the 10000 most frequent words
- MAX_LEN = 200 # keep 200 tokens per sample
- BATCH_SIZE = 20
- train_data_path = 'data/douban/train.csv'
- test_data_path = 'data/douban/test.csv'
- train_token_path = 'data/douban/train_token.csv'
- test_token_path = 'data/douban/test_token.csv'
- train_samples_path = 'data/douban/train_samples/'
- test_samples_path = 'data/douban/test_samples/'
-
-
-
- ##Build the vocabulary
- word_count_dict = {}
- #Clean and tokenize a text
- def clean_text(text):
-     bd='[’!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~]+,。!?“”《》:、. '
-     for i in bd:
-         text=text.replace(i,'') #strip punctuation by string replacement
-     fenci=jieba.lcut(text)
-     return fenci
-
- with open(train_data_path,"r",encoding = 'utf-8',newline='') as f:
-     reader = csv.reader(f,delimiter=',')
-     for row in reader:
-         text = row[1]
-         label = row[0]
-         cleaned_text = clean_text(text)
-         for word in cleaned_text:
-             word_count_dict[word] = word_count_dict.get(word,0)+1
-
- print(len(word_count_dict))
-
- df_word_dict = pd.DataFrame(pd.Series(word_count_dict,name = "count"))
- df_word_dict = df_word_dict.sort_values(by = "count",ascending =False)
- df_word_dict = df_word_dict[0:MAX_WORDS-2]
- df_word_dict["word_id"] = range(2,MAX_WORDS) #ids 0 and 1 are reserved for the unknown token <unknown> and the padding token <padding>
- word_id_dict = df_word_dict["word_id"].to_dict()
- df_word_dict.head(10)
-
- out:
- count word_id
- 的 68229 2
- 了 20591 3
- 是 15321 4
- 我 9312 5
- 看 7423 6
- 很 7395 7
- 也 7256 8
- 都 7053 9
- 在 6753 10
- 和 6388 11
-
-
- #Convert texts to token ids
- # Pad or truncate a token list to a fixed length
- def pad(data_list,pad_length):
-     padded_list = data_list.copy()
-     if len(data_list)> pad_length:
-         padded_list = data_list[-pad_length:]
-     if len(data_list)< pad_length:
-         padded_list = [1]*(pad_length-len(data_list))+data_list
-     return padded_list
- def text_to_token(text_file,token_file):
-     with open(text_file,"r",encoding = 'utf-8',newline='') as f,\
-         open(token_file,"w",encoding = 'utf-8') as fout:
-         reader = csv.reader(f,delimiter=',')
-         for row in reader:
-             text = row[1]
-             label = row[0]
-             cleaned_text = clean_text(text)
-             word_token_list = [word_id_dict.get(word, 0) for word in cleaned_text]
-             pad_list = pad(word_token_list,MAX_LEN)
-             out_line = label+"\t"+" ".join([str(x) for x in pad_list])
-             fout.write(out_line+"\n")
- text_to_token(train_data_path,train_token_path)
- text_to_token(test_data_path,test_token_path)
-
-
-
- # Split into one file per sample
- import os
- if not os.path.exists(train_samples_path):
-     os.mkdir(train_samples_path)
- if not os.path.exists(test_samples_path):
-     os.mkdir(test_samples_path)
- def split_samples(token_path,samples_dir):
-     with open(token_path,"r",encoding = 'utf-8') as fin:
-         i = 0
-         for line in fin:
-             with open(samples_dir+"%d.txt"%i,"w",encoding = "utf-8") as fout:
-                 fout.write(line)
-             i = i+1
- split_samples(train_token_path,train_samples_path)
- split_samples(test_token_path,test_samples_path)
-
-
-
-
-
- #Create the datasets
- import os
- import torch
- from torch.utils.data import DataLoader,Dataset
- from torchvision import transforms,datasets
- class imdbDataset(Dataset):
-     def __init__(self,samples_dir):
-         self.samples_dir = samples_dir
-         self.samples_paths = os.listdir(samples_dir)
-     def __len__(self):
-         return len(self.samples_paths)
-     def __getitem__(self,index):
-         path = self.samples_dir + self.samples_paths[index]
-         with open(path,"r",encoding = "utf-8") as f:
-             line = f.readline()
-             label,tokens = line.split("\t")
-             label = torch.tensor([float(label)],dtype = torch.float)
-             feature = torch.tensor([int(x) for x in tokens.split(" ")],dtype = torch.long)
-             return (feature,label)
- ds_train = imdbDataset(train_samples_path)
- ds_test = imdbDataset(test_samples_path)
- print(len(ds_train))
- print(len(ds_test))
-
-
- dl_train = DataLoader(ds_train,batch_size = BATCH_SIZE,shuffle = True,num_workers=4)
- dl_test = DataLoader(ds_test,batch_size = BATCH_SIZE,num_workers=4)
- for features,labels in dl_train:
-     print(features)
-     print(labels)
-     break
-
- out:
-
-
-
- #Create the model
- import torch
- from torch import nn
- import importlib
- from torchkeras import Model,summary
- class Net(Model):
-     def __init__(self):
-         super(Net, self).__init__()
-         #with padding_idx set, the padding token's embedding stays a zero vector during training
-         self.embedding = nn.Embedding(num_embeddings = MAX_WORDS,embedding_dim = 3,padding_idx = 1)
-         self.conv = nn.Sequential()
-         self.conv.add_module("conv_1",nn.Conv1d(in_channels = 3,out_channels = 16,kernel_size = 5))
-         self.conv.add_module("pool_1",nn.MaxPool1d(kernel_size = 2))
-         self.conv.add_module("relu_1",nn.ReLU())
-         self.conv.add_module("conv_2",nn.Conv1d(in_channels = 16,out_channels = 128,kernel_size = 2))
-         self.conv.add_module("pool_2",nn.MaxPool1d(kernel_size = 2))
-         self.conv.add_module("relu_2",nn.ReLU())
-         self.dense = nn.Sequential()
-         self.dense.add_module("flatten",nn.Flatten())
-         self.dense.add_module("linear",nn.Linear(6144,1))
-         self.dense.add_module("sigmoid",nn.Sigmoid())
-     def forward(self,x):
-         x = self.embedding(x).transpose(1,2)
-         x = self.conv(x)
-         y = self.dense(x)
-         return y
- model = Net()
- print(model)
- model.summary(input_shape = (200,),input_dtype = torch.LongTensor)
-
-
-
- # Compile the model
- def accuracy(y_pred,y_true):
-     y_pred = torch.where(y_pred>0.5,torch.ones_like(y_pred,dtype = torch.float32),torch.zeros_like(y_pred,dtype = torch.float32))
-     acc = torch.mean(1-torch.abs(y_true-y_pred))
-     return acc
- model.compile(loss_func = nn.BCELoss(),
-               optimizer = torch.optim.Adagrad(model.parameters(),lr = 0.02),
-               metrics_dict = {"accuracy":accuracy})
- # Train the model
- dfhistory = model.fit(10,dl_train,dl_val=dl_test,log_step_freq= 200)
-
-
-
DataLoader controls the batch size, the way elements of a batch are sampled, and the way a batch is collated into the input form the model expects, and it can read data with multiple worker processes.
The function signature of DataLoader is as follows:
- DataLoader(
- dataset,
- batch_size=1,
- shuffle=False,
- sampler=None,
- batch_sampler=None,
- num_workers=0,
- collate_fn=None,
- pin_memory=False,
- drop_last=False,
- timeout=0,
- worker_init_fn=None,
- multiprocessing_context=None,
- )
In most cases we only configure five of these arguments (dataset, batch_size, shuffle, num_workers and drop_last) and leave the rest at their default values.
dataset: the dataset.
batch_size: the batch size.
shuffle: whether to shuffle the data.
sampler: the element-level sampling function; usually does not need to be set.
batch_sampler: the batch-level sampling function; usually does not need to be set.
num_workers: the number of worker processes used to read data.
collate_fn: the function that collates one batch of data.
pin_memory: whether to use pinned (page-locked) memory. Defaults to False. Pinned memory is never swapped out to virtual memory (disk), so copying from it to the GPU is faster.
drop_last: whether to drop the final batch when it contains fewer than batch_size samples.
timeout: the maximum time to wait while loading one batch of data; usually does not need to be set.
worker_init_fn: an initialization function for the dataset in each worker, often used with IterableDataset; generally not used.
Besides the torch.utils.data.Dataset discussed above, DataLoader can also load another kind of dataset, torch.utils.data.IterableDataset. Whereas a Dataset is like a list structure, an IterableDataset is like an iterator structure. It is more complex and rarely used. A minimal sketch follows.
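As a rough illustration (a minimal sketch written for these notes, with made-up sizes), an iterable-style dataset implements only __iter__, with no __len__ or __getitem__:
- import torch
- from torch.utils.data import IterableDataset,DataLoader
-
- class RangeStream(IterableDataset):
-     #a toy iterable-style dataset that streams the integers 0..n-1
-     def __init__(self,n):
-         super().__init__()
-         self.n = n
-     def __iter__(self):
-         return iter(torch.arange(self.n))
-
- #DataLoader groups the stream into batches; shuffle=True is not supported for iterable datasets
- dl = DataLoader(RangeStream(10),batch_size = 4)
- for batch in dl:
-     print(batch)
-
- out:
- tensor([0, 1, 2, 3])
- tensor([4, 5, 6, 7])
- tensor([8, 9])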
- #Example 3-3: load a dataset with DataLoader
- import numpy as np
- import torch
- from torch.utils.data import DataLoader,TensorDataset,Dataset
- from torchvision import transforms,datasets
- #Build the input data pipeline
- ds = TensorDataset(torch.arange(1,50))
- dl = DataLoader(ds,
- batch_size = 10,
- shuffle= True,
- num_workers=2,
- drop_last = True)
- #Iterate over the data
- for batch, in dl:
-     print(batch)
-
- out:
- tensor([35, 19, 3, 1, 24, 20, 8, 37, 32, 38])
- tensor([28, 26, 7, 48, 4, 41, 15, 45, 11, 14])
- tensor([23, 5, 10, 6, 18, 39, 31, 22, 42, 12])
- tensor([34, 47, 30, 25, 29, 49, 44, 46, 33, 13])
Earlier we introduced the structural operations and the mathematical operations in PyTorch's tensor API. With these tensor APIs we can build the components related to neural networks (such as activation functions, model layers and loss functions).
Most of PyTorch's neural-network components are encapsulated under the torch.nn module. The vast majority of these components have both a function implementation and a class implementation.
nn.functional (generally imported and renamed as F) contains the function implementations of the various components. For example:
(activation functions) F.relu, F.sigmoid, F.tanh, F.softmax
(model layers) F.linear, F.conv2d, F.max_pool2d, F.dropout2d, F.embedding
(loss functions) F.binary_cross_entropy, F.mse_loss, F.cross_entropy
To make parameter management easier, these components are generally turned into class implementations by subclassing nn.Module and are placed directly under the nn module (a quick check that the two forms compute the same thing follows this list). For example:
(activation functions) nn.ReLU, nn.Sigmoid, nn.Tanh, nn.Softmax
(model layers) nn.Linear, nn.Conv2d, nn.MaxPool2d, nn.Dropout2d, nn.Embedding
(loss functions) nn.BCELoss, nn.MSELoss, nn.CrossEntropyLoss
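For instance, the class form and the function form of an activation agree on the same input; this small check is a sketch written for these notes, not from the referenced material:
- import torch
- from torch import nn
- import torch.nn.functional as F
-
- x = torch.tensor([-1.0, 0.0, 2.0])
- relu_layer = nn.ReLU() #the class form is instantiated, then called like a function
- print(relu_layer(x)) #tensor([0., 0., 2.])
- print(F.relu(x)) #tensor([0., 0., 2.]), the class simply wraps the function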
In fact, besides managing the various parameters it references, nn.Module can also manage the submodules it references, which makes it very powerful.
In PyTorch, model parameters need to be trained by an optimizer, so they are usually tensors with requires_grad = True. At the same time, a model often has many parameters, and managing them all by hand is not easy. PyTorch generally represents parameters with nn.Parameter and uses nn.Module to manage all the parameters under its structure.
- #Example 4-1-2: manage parameters with nn.Module
- import torch
- from torch import nn
- import torch.nn.functional as F
- from matplotlib import pyplot as plt
- # nn.Parameter has the requires_grad = True attribute
- w = nn.Parameter(torch.randn(2,2))
- print(w)
- print(w.requires_grad)
-
- out:
- Parameter containing:
- tensor([[ 0.8579, -0.3747],
- [-0.1361, 0.2524]], requires_grad=True)
- True
-
-
-
- # nn.ParameterList groups multiple nn.Parameter objects into a list
- params_list = nn.ParameterList([nn.Parameter(torch.rand(8,i)) for i in range(1,3)])
- print(params_list)
- print(params_list[0].requires_grad)
-
- out:
- ParameterList(
- (0): Parameter containing: [torch.FloatTensor of size 8x1]
- (1): Parameter containing: [torch.FloatTensor of size 8x2]
- )
- True
-
-
-
- # nn.ParameterDict groups multiple nn.Parameter objects into a dict
- params_dict = nn.ParameterDict({"a":nn.Parameter(torch.rand(2,2)),
-                                 "b":nn.Parameter(torch.zeros(2))})
- print(params_dict)
- print(params_dict["a"].requires_grad)
-
- out:
- ParameterDict(
- (a): Parameter containing: [torch.FloatTensor of size 2x2]
- (b): Parameter containing: [torch.FloatTensor of size 2]
- )
- True
-
-
-
- # A Module can be used to manage them all
- # module.parameters() returns a generator over all parameters under its structure
- module = nn.Module()
- module.w = w
- module.params_list = params_list
- module.params_dict = params_dict
- num_param = 0
- for param in module.parameters():
-     print(param,"\n")
-     num_param = num_param + 1
- print("number of Parameters =",num_param)
-
- out:
- Parameter containing:
- tensor([[ 0.8579, -0.3747],
- [-0.1361, 0.2524]], requires_grad=True)
-
- Parameter containing:
- tensor([[0.9753],
- [0.1606],
- [0.2186],
- [0.6484],
- [0.8174],
- [0.2587],
- [0.5496],
- [0.7685]], requires_grad=True)
-
- Parameter containing:
- tensor([[0.5034, 0.2805],
- [0.9023, 0.1758],
- [0.1499, 0.5110],
- [0.2113, 0.4445],
- [0.6116, 0.8562],
- [0.2120, 0.8932],
- [0.3098, 0.9548],
- [0.4298, 0.4322]], requires_grad=True)
-
- Parameter containing:
- tensor([[0.4966, 0.5429],
- [0.8729, 0.5744]], requires_grad=True)
-
- Parameter containing:
- tensor([0., 0.], requires_grad=True)
-
- number of Parameters = 5
-
-
-
- #In practice, we usually subclass nn.Module to build module classes, and put all the parts that contain learnable parameters in the constructor.
- #The following example is a simplified version of the source code of nn.Linear in PyTorch.
- #It puts the learnable parameters in the __init__ constructor and calls F.linear in forward to implement the computation logic.
- class Linear(nn.Module):
-     __constants__ = ['in_features', 'out_features']
-     def __init__(self, in_features, out_features, bias=True):
-         super(Linear, self).__init__()
-         self.in_features = in_features
-         self.out_features = out_features
-         self.weight = nn.Parameter(torch.Tensor(out_features, in_features))
-         if bias:
-             self.bias = nn.Parameter(torch.Tensor(out_features))
-         else:
-             self.register_parameter('bias', None)
-     def forward(self, input):
-         return F.linear(input, self.weight, self.bias)
In practice, we rarely use nn.Parameter directly to define parameters and build models; instead, we construct models by assembling commonly used model layers.
These layers also inherit from nn.Module, contain parameters themselves, and are submodules of the module we define. nn.Module provides several methods for managing these submodules:
children(): returns a generator over the direct submodules of the module.
named_children(): returns a generator over the direct submodules of the module together with their names.
modules(): returns a generator over all modules at every level under the module, including the module itself.
named_modules(): returns a generator over all modules at every level under the module together with their names, including the module itself.
children() and named_children() are used most often.
modules() and named_modules() are used less often; their functionality can be reproduced by nesting several named_children() calls.
- #Example 4-1-3: manage submodules with nn.Module
- import torch
- from torch import nn
- import torch.nn.functional as F
-
- class Net(nn.Module):
-     def __init__(self):
-         super(Net, self).__init__()
-         self.embedding = nn.Embedding(num_embeddings = 10000,embedding_dim = 3,padding_idx = 1)
-         self.conv = nn.Sequential()
-         self.conv.add_module("conv_1",nn.Conv1d(in_channels = 3,out_channels = 16,kernel_size = 5))
-         self.conv.add_module("pool_1",nn.MaxPool1d(kernel_size = 2))
-         self.conv.add_module("relu_1",nn.ReLU())
-         self.conv.add_module("conv_2",nn.Conv1d(in_channels = 16,out_channels = 128,kernel_size = 2))
-         self.conv.add_module("pool_2",nn.MaxPool1d(kernel_size = 2))
-         self.conv.add_module("relu_2",nn.ReLU())
-         self.dense = nn.Sequential()
-         self.dense.add_module("flatten",nn.Flatten())
-         self.dense.add_module("linear",nn.Linear(6144,1))
-         self.dense.add_module("sigmoid",nn.Sigmoid())
-     def forward(self,x):
-         x = self.embedding(x).transpose(1,2)
-         x = self.conv(x)
-         y = self.dense(x)
-         return y
-
- net = Net()
-
- i = 0
- for child in net.children():
-     i+=1
-     print(child,"\n")
- print("child number",i)
-
- out:
- Sequential(
- (flatten): Flatten(start_dim=1, end_dim=-1)
- (linear): Linear(in_features=6144, out_features=1, bias=True)
- (sigmoid): Sigmoid()
- )
-
- child number 3
-
-
-
- i = 0
- for name,child in net.named_children():
-     i+=1
-     print(name,":",child,"\n")
- print("child number",i)
-
- out:
- embedding : Embedding(10000, 3, padding_idx=1)
-
- conv : Sequential(
- (conv_1): Conv1d(3, 16, kernel_size=(5,), stride=(1,))
- (pool_1): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (relu_1): ReLU()
- (conv_2): Conv1d(16, 128, kernel_size=(2,), stride=(1,))
- (pool_2): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (relu_2): ReLU()
- )
-
- dense : Sequential(
- (flatten): Flatten(start_dim=1, end_dim=-1)
- (linear): Linear(in_features=6144, out_features=1, bias=True)
- (sigmoid): Sigmoid()
- )
-
- child number 3
-
-
-
- i = 0
- for module in net.modules():
-     i+=1
-     print(module)
- print("module number:",i)
-
- out:
- Net(
- (embedding): Embedding(10000, 3, padding_idx=1)
- (conv): Sequential(
- (conv_1): Conv1d(3, 16, kernel_size=(5,), stride=(1,))
- (pool_1): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (relu_1): ReLU()
- (conv_2): Conv1d(16, 128, kernel_size=(2,), stride=(1,))
- (pool_2): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (relu_2): ReLU()
- )
- (dense): Sequential(
- (flatten): Flatten(start_dim=1, end_dim=-1)
- (linear): Linear(in_features=6144, out_features=1, bias=True)
- (sigmoid): Sigmoid()
- )
- )
- Embedding(10000, 3, padding_idx=1)
- Sequential(
- (conv_1): Conv1d(3, 16, kernel_size=(5,), stride=(1,))
- (pool_1): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (relu_1): ReLU()
- (conv_2): Conv1d(16, 128, kernel_size=(2,), stride=(1,))
- (pool_2): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (relu_2): ReLU()
- )
- Conv1d(3, 16, kernel_size=(5,), stride=(1,))
- MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- ReLU()
- Conv1d(16, 128, kernel_size=(2,), stride=(1,))
- MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- ReLU()
- Sequential(
- (flatten): Flatten(start_dim=1, end_dim=-1)
- (linear): Linear(in_features=6144, out_features=1, bias=True)
- (sigmoid): Sigmoid()
- )
- Flatten(start_dim=1, end_dim=-1)
- Linear(in_features=6144, out_features=1, bias=True)
- Sigmoid()
- module number: 13
-
-
-
- #Below we locate the embedding layer via the named_children method and set its parameters to be untrainable (i.e. freeze the embedding layer).
- children_dict = {name:module for name,module in net.named_children()}
- print(children_dict)
- embedding = children_dict["embedding"]
- embedding.requires_grad_(False) #freeze its parameters
-
- out:
- {'embedding': Embedding(10000, 3, padding_idx=1), 'conv': Sequential(
- (conv_1): Conv1d(3, 16, kernel_size=(5,), stride=(1,))
- (pool_1): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (relu_1): ReLU()
- (conv_2): Conv1d(16, 128, kernel_size=(2,), stride=(1,))
- (pool_2): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (relu_2): ReLU()
- ), 'dense': Sequential(
- (flatten): Flatten(start_dim=1, end_dim=-1)
- (linear): Linear(in_features=6144, out_features=1, bias=True)
- (sigmoid): Sigmoid()
- )}
-
-
-
- #We can see that the parameters of this first layer can no longer be trained.
- for param in embedding.parameters():
-     print(param.requires_grad)
-     print(param.numel())
-
- out:
- False
- 30000
-
-
- # Checking with the torchkeras summary: the 30,000 frozen embedding parameters now show up as non-trainable
- from torchkeras import summary
- summary(net,input_shape = (200,),input_dtype = torch.LongTensor)
-
- out:
- ----------------------------------------------------------------
- Layer (type) Output Shape Param #
- ================================================================
- Embedding-1 [-1, 200, 3] 30,000
- Conv1d-2 [-1, 16, 196] 256
- MaxPool1d-3 [-1, 16, 98] 0
- ReLU-4 [-1, 16, 98] 0
- Conv1d-5 [-1, 128, 97] 4,224
- MaxPool1d-6 [-1, 128, 48] 0
- ReLU-7 [-1, 128, 48] 0
- Flatten-8 [-1, 6144] 0
- Linear-9 [-1, 1] 6,145
- Sigmoid-10 [-1, 1] 0
- ================================================================
- Total params: 40,625
- Trainable params: 10,625
- Non-trainable params: 30,000
- ----------------------------------------------------------------
- Input size (MB): 0.000763
- Forward/backward pass size (MB): 0.287796
- Params size (MB): 0.154972
- Estimated Total Size (MB): 0.443531
- ----------------------------------------------------------------
A deep learning model is generally composed of various model layers.
torch.nn has a very rich set of built-in model layers. They are all subclasses of nn.Module and come with parameter management.
For example:
nn.Linear, nn.Flatten, nn.Dropout, nn.BatchNorm2d
nn.Conv2d, nn.AvgPool2d, nn.Conv1d, nn.ConvTranspose2d
nn.Embedding, nn.GRU, nn.LSTM
nn.Transformer
If these built-in layers do not meet your needs, you can also build custom layers by subclassing the nn.Module base class.
In fact, PyTorch does not distinguish between models and model layers; both are built by subclassing nn.Module.
So we only need to subclass nn.Module and implement the forward method to define a custom model layer.
Base layers
nn.Linear: fully connected layer. Parameter count = number of input features × number of output features (weight) + number of output features (bias); see the quick check after this list.
nn.Flatten: flatten layer, which flattens a multi-dimensional tensor sample into a one-dimensional tensor sample.
nn.BatchNorm1d: one-dimensional batch normalization layer. It rescales and shifts the input batch to a stable mean and standard deviation via a linear transform. It makes the model more robust to differently distributed inputs, speeds up training, and has a mild regularization effect. It is generally used before the activation function. The affine argument controls whether the layer has trainable parameters.
nn.BatchNorm2d: two-dimensional batch normalization layer.
nn.BatchNorm3d: three-dimensional batch normalization layer.
nn.Dropout: one-dimensional random dropout layer, a regularization technique.
nn.Dropout2d: two-dimensional random dropout layer.
nn.Dropout3d: three-dimensional random dropout layer.
nn.Threshold: thresholding layer, which truncates inputs that fall above or below the threshold range.
nn.ConstantPad2d: two-dimensional constant padding layer. It pads two-dimensional tensor samples with a constant to extend their length.
nn.ReplicationPad1d: one-dimensional replication padding layer. It extends one-dimensional tensor samples by replicating the edge values.
nn.ZeroPad2d: two-dimensional zero padding layer. It pads the edges of two-dimensional tensor samples with zeros.
nn.GroupNorm: group normalization, an alternative to batch normalization that splits the channels into groups and normalizes each group. It is not limited by the batch size and is reported to outperform BatchNorm in both performance and quality.
nn.LayerNorm: layer normalization. Less commonly used.
nn.InstanceNorm2d: instance normalization. Less commonly used.
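A quick sanity check of the nn.Linear parameter-count formula above (a small sketch written for these notes):
- import torch
- from torch import nn
-
- fc = nn.Linear(20, 30)
- #weight: 20*30 = 600 parameters, bias: 30 parameters, 630 in total
- print(fc.weight.shape, fc.bias.shape) #torch.Size([30, 20]) torch.Size([30])
- print(sum(p.numel() for p in fc.parameters())) #630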
Convolution-related layers
nn.Conv1d: ordinary one-dimensional convolution, often used for text. Parameter count = input channels × kernel size (e.g. 3) × number of kernels + number of kernels (bias).
nn.Conv2d: ordinary two-dimensional convolution, often used for images. Parameter count = input channels × kernel size (e.g. 3×3) × number of kernels + number of kernels (bias); see the quick check after this list. Setting dilation greater than 1 gives a dilated convolution, which enlarges the kernel's receptive field. Setting groups to a value other than 1 gives a grouped convolution, in which each group has its own kernels that see only its slice of the input channels, significantly reducing the parameter count. When groups equals the number of channels, it is equivalent to the two-dimensional depthwise convolution tf.keras.layers.DepthwiseConv2D in TensorFlow. Combining a grouped convolution with a 1×1 convolution yields the equivalent of the two-dimensional depthwise separable convolution tf.keras.layers.SeparableConv2D in Keras.
nn.Conv3d: ordinary three-dimensional convolution, often used for video. Parameter count = input channels × kernel size (e.g. 3×3×3) × number of kernels + number of kernels (bias).
nn.MaxPool1d: one-dimensional max pooling.
nn.MaxPool2d: two-dimensional max pooling, a form of downsampling. It has no trainable parameters.
nn.MaxPool3d: three-dimensional max pooling.
nn.AdaptiveMaxPool2d: two-dimensional adaptive max pooling. The output size is fixed no matter how the input image size varies. Roughly, it works by inferring the pooling operator's padding, stride and other parameters backwards from the input size and the desired output size.
nn.FractionalMaxPool2d: two-dimensional fractional max pooling. With ordinary max pooling the input size is usually an integer multiple of the output size; fractional max pooling does not require this. It uses some random sampling strategies and has a certain regularization effect, so it can be used in place of ordinary max pooling and Dropout layers.
nn.AvgPool2d: two-dimensional average pooling.
nn.AdaptiveAvgPool2d: two-dimensional adaptive average pooling. The output size is fixed no matter how the input dimensions vary.
nn.ConvTranspose2d: two-dimensional transposed convolution layer, commonly called a deconvolution layer. It is not the inverse operation of convolution, but with the same kernel, when its input size equals the output size of a convolution, its output size is exactly that convolution's input size. It can be used for upsampling in semantic segmentation.
nn.Upsample: upsampling layer, the opposite of pooling. The mode argument selects the upsampling strategy: "nearest" for nearest-neighbour or "linear" for linear interpolation.
nn.Unfold: sliding-window extraction layer. Its arguments are the same as those of the convolution operation nn.Conv2d. In fact, a convolution can be expressed as a combination of nn.Unfold, nn.Linear and nn.Fold: nn.Unfold extracts the value matrix of each sliding window from the input and flattens it to one dimension, nn.Linear multiplies the nn.Unfold output by the kernel, and nn.Fold then reshapes the result into the output image shape.
nn.Fold: the inverse of the sliding-window extraction layer.
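Another small check (again a sketch for these notes) of the Conv2d parameter formula, including the grouped case:
- import torch
- from torch import nn
-
- conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
- #weight: 3*3*3*16 = 432, bias: 16, 448 in total
- print(sum(p.numel() for p in conv.parameters())) #448
-
- #grouped convolution: each of the 4 groups maps 4 input channels to 4 output channels
- grouped = nn.Conv2d(in_channels=16, out_channels=16, kernel_size=3, groups=4)
- #weight: (16/4)*3*3*16 = 576, bias: 16, 592 in total, versus 2320 without groups
- print(sum(p.numel() for p in grouped.parameters())) #592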
Recurrent-network-related layers
nn.Embedding: embedding layer, a more effective way than one-hot encoding to encode discrete features. It is generally used to map input words to dense vectors. The embedding parameters need to be learned.
nn.LSTM: long short-term memory recurrent layer [multi-layer support], the most widely used recurrent layer. It has a carry track and forget, update and output gates, which effectively mitigate the vanishing gradient problem and so handle long-range dependencies. Setting bidirectional = True gives a bidirectional LSTM. Note that the default input and output shape is (seq, batch, feature); to put the batch dimension first, set batch_first=True (see the shape check after this list).
nn.GRU: gated recurrent unit layer [multi-layer support]. A lighter version of LSTM without the carry track; it has fewer parameters than LSTM and trains faster.
nn.RNN: simple recurrent layer [multi-layer support]. It is prone to vanishing gradients and cannot handle long-range dependencies; it is rarely used.
nn.LSTMCell: long short-term memory cell. Unlike nn.LSTM, which iterates over the whole sequence, it performs only one step; rarely used.
nn.GRUCell: gated recurrent cell. Unlike nn.GRU, which iterates over the whole sequence, it performs only one step; rarely used.
nn.RNNCell: simple recurrent cell. Unlike nn.RNN, which iterates over the whole sequence, it performs only one step; rarely used.
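To make the batch_first remark concrete, a small shape check with made-up sizes (a sketch for these notes):
- import torch
- from torch import nn
-
- lstm = nn.LSTM(input_size = 8,hidden_size = 16,num_layers = 2,batch_first = True)
- x = torch.randn(4,10,8) #(batch, seq, feature) because batch_first=True
- output,(h_n,c_n) = lstm(x)
- print(output.shape) #torch.Size([4, 10, 16]), the hidden state at every step
- print(h_n.shape) #torch.Size([2, 4, 16]), the final hidden state of each layer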
Transformer-related layers
nn.Transformer: the Transformer network structure. The Transformer is an architecture that replaces recurrent networks; it overcomes their difficulty with parallelization and with capturing long-range dependencies, and it is the main building block of today's mainstream NLP models.
The Transformer consists of a TransformerEncoder and a TransformerDecoder. The core of both the encoder and the decoder is the MultiheadAttention multi-head attention layer. A minimal usage sketch follows this list.
nn.TransformerEncoder: the Transformer encoder structure, composed of multiple nn.TransformerEncoderLayer encoder layers.
nn.TransformerDecoder: the Transformer decoder structure, composed of multiple nn.TransformerDecoderLayer decoder layers.
nn.TransformerEncoderLayer: a Transformer encoder layer.
nn.TransformerDecoderLayer: a Transformer decoder layer.
nn.MultiheadAttention: the multi-head attention layer.
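A minimal shape-level sketch of stacking encoder layers (written for these notes, with arbitrary sizes; note that, like nn.LSTM, the default layout is (seq, batch, feature)):
- import torch
- from torch import nn
-
- encoder_layer = nn.TransformerEncoderLayer(d_model = 32,nhead = 4)
- encoder = nn.TransformerEncoder(encoder_layer,num_layers = 2)
- src = torch.randn(10,4,32) #(seq, batch, d_model) by default
- out = encoder(src)
- print(out.shape) #torch.Size([10, 4, 32]), the same shape as the input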
If PyTorch's built-in model layers do not meet your needs, we can also build custom model layers by subclassing the nn.Module base class. In fact, PyTorch does not distinguish between models and model layers; both are built by subclassing nn.Module, so we only need to subclass nn.Module and implement the forward method to define a custom layer. Below is the source code of PyTorch's nn.Linear layer, which we can imitate when defining our own layers.
- #Example 4-2-2: a custom model layer
- import math
- import torch
- from torch import nn
- import torch.nn.functional as F
- class Linear(nn.Module):
-     __constants__ = ['in_features', 'out_features']
-     def __init__(self, in_features, out_features, bias=True):
-         super(Linear, self).__init__()
-         self.in_features = in_features
-         self.out_features = out_features
-         self.weight = nn.Parameter(torch.Tensor(out_features, in_features))
-         if bias:
-             self.bias = nn.Parameter(torch.Tensor(out_features))
-         else:
-             self.register_parameter('bias', None)
-         self.reset_parameters()
-     def reset_parameters(self):
-         nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5))
-         if self.bias is not None:
-             fan_in, _ = nn.init._calculate_fan_in_and_fan_out(self.weight)
-             bound = 1 / math.sqrt(fan_in)
-             nn.init.uniform_(self.bias, -bound, bound)
-     def forward(self, input):
-         return F.linear(input, self.weight, self.bias)
-     def extra_repr(self):
-         return 'in_features={}, out_features={}, bias={}'.format(self.in_features, self.out_features, self.bias is not None)
- linear = Linear(20, 30) #the custom layer behaves just like nn.Linear
- inputs = torch.randn(128, 20)
- output = linear(inputs)
- print(output.size())
-
- out:
- torch.Size([128, 30])
Generally speaking, the objective function of supervised learning consists of a loss function plus regularization terms (Objective = Loss + Regularization).
Loss functions in PyTorch are generally specified when training the model. Note that the argument order of PyTorch's built-in loss functions differs from TensorFlow's: PyTorch takes y_pred first and y_true second, while TensorFlow takes y_true first and y_pred second.
For regression models, the usual built-in loss function is the mean squared error nn.MSELoss.
For binary classification models, the usual choice is the binary cross-entropy nn.BCELoss (when the input has already passed through a sigmoid activation) or nn.BCEWithLogitsLoss (when the input has not yet passed through nn.Sigmoid); a quick check of their equivalence follows below.
For multi-class models, the cross-entropy loss nn.CrossEntropyLoss is generally recommended (y_true should be one-dimensional class indices; y_pred should not have passed through nn.Softmax). In addition, if the multi-class y_pred has passed through nn.LogSoftmax, you can use the nn.NLLLoss loss function (the negative log likelihood loss); this is equivalent to using nn.CrossEntropyLoss directly.
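For instance, nn.BCEWithLogitsLoss applied to raw logits gives the same value as nn.BCELoss applied to the sigmoid of those logits; this small numeric check is a sketch written for these notes:
- import torch
- from torch import nn
-
- logits = torch.tensor([1.2,-0.8,0.3])
- y_true = torch.tensor([1.0,0.0,1.0])
- #BCEWithLogitsLoss applies the sigmoid internally, in a more numerically stable way
- loss1 = nn.BCEWithLogitsLoss()(logits,y_true)
- loss2 = nn.BCELoss()(torch.sigmoid(logits),y_true)
- print(loss1,loss2) #both print the same value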
If needed, you can also define a custom loss function. A custom loss function takes the two tensors y_pred and y_true as input arguments and outputs a scalar loss value. Regularization terms in PyTorch are generally added to the loss function in a custom way to form the objective function together. If you only use L2 regularization, you can also achieve the same effect with the optimizer's weight_decay argument.
Built-in loss functions generally come in both a class implementation and a function implementation.
For example, nn.BCELoss and F.binary_cross_entropy are both binary cross-entropy losses: the former is the class implementation, the latter the function implementation.
The class implementation is usually obtained by calling the function implementation and wrapping it with nn.Module.
The class implementations are the ones we normally use. They live under the torch.nn module, and their class names end in Loss.
Some commonly used built-in loss functions are described below.
nn.MSELoss (mean squared error loss, also called the L2 loss; used for regression)
nn.L1Loss (L1 loss, also called the absolute error loss; used for regression)
nn.SmoothL1Loss (smooth L1 loss; behaves like the L2 loss when the input is between -1 and 1; used for regression)
nn.BCELoss (binary cross-entropy; used for binary classification; the input must already have passed through nn.Sigmoid; the weight argument can adjust class weights for imbalanced datasets)
nn.BCEWithLogitsLoss (binary cross-entropy; used for binary classification; the input has not passed through nn.Sigmoid)
nn.CrossEntropyLoss (cross-entropy; used for multi-class classification; labels must be sparse class indices; the input must not have passed through nn.Softmax; the weight argument can adjust class weights for imbalanced datasets)
nn.NLLLoss (negative log likelihood loss; used for multi-class classification; labels must be sparse class indices; the input must have passed through nn.LogSoftmax)
nn.CosineSimilarity (cosine similarity; can be used for multi-class problems)
nn.AdaptiveLogSoftmaxWithLoss (a loss function suited to very many classes with a very imbalanced class distribution; it adaptively merges several small classes into one cluster)
- #Example 4-3-1: built-in loss functions
- import numpy as np
- import pandas as pd
- import torch
- from torch import nn
- import torch.nn.functional as F
- y_pred = torch.tensor([[10.0,0.0,-10.0],[8.0,8.0,8.0]])
- y_true = torch.tensor([0,2])
- # Call the cross-entropy loss directly
- ce = nn.CrossEntropyLoss()(y_pred,y_true)
- print(ce)
- # Equivalent to applying nn.LogSoftmax first and then calling NLLLoss
- y_pred_logsoftmax = nn.LogSoftmax(dim = 1)(y_pred)
- nll = nn.NLLLoss()(y_pred_logsoftmax,y_true)
- print(nll)
-
- out:
- tensor(0.5493)
- tensor(0.5493)
A custom loss function takes the two tensors y_pred and y_true as input arguments and outputs a scalar loss value.
You can also subclass nn.Module and override the forward method to implement the loss computation, which gives a class implementation of the loss function.
Below is a demonstration of a custom implementation of Focal Loss.
Focal Loss is an improved loss function based on binary_crossentropy.
It has clear advantages over binary_crossentropy when the samples are imbalanced or when there are many easy-to-classify samples.
It has two tunable parameters, alpha and gamma.
The alpha parameter mainly attenuates the weight of negative samples, and the gamma parameter mainly attenuates the weight of easy-to-train samples,
so that the model focuses more on positive samples and hard samples. That is why this loss function is called Focal Loss.
- #Example 4-3-2: a custom loss function
- import torch
- from torch import nn
-
- class FocalLoss(nn.Module):
-     def __init__(self,gamma=2.0,alpha=0.75):
-         super().__init__()
-         self.gamma = gamma
-         self.alpha = alpha
-     def forward(self,y_pred,y_true):
-         bce = torch.nn.BCELoss(reduction = "none")(y_pred,y_true)
-         p_t = (y_true * y_pred) + ((1 - y_true) * (1 - y_pred))
-         alpha_factor = y_true * self.alpha + (1 - y_true) * (1 - self.alpha)
-         modulating_factor = torch.pow(1.0 - p_t, self.gamma)
-         loss = torch.mean(alpha_factor * modulating_factor * bce)
-         return loss
-
- #hard samples
- y_pred_hard = torch.tensor([[0.5],[0.5]])
- y_true_hard = torch.tensor([[1.0],[0.0]])
- #easy samples
- y_pred_easy = torch.tensor([[0.9],[0.1]])
- y_true_easy = torch.tensor([[1.0],[0.0]])
- focal_loss = FocalLoss()
- bce_loss = nn.BCELoss()
- print("focal_loss(hard samples):", focal_loss(y_pred_hard,y_true_hard))
-
- print("bce_loss(hard samples):", bce_loss(y_pred_hard,y_true_hard))
- print("focal_loss(easy samples):", focal_loss(y_pred_easy,y_true_easy))
- print("bce_loss(easy samples):", bce_loss(y_pred_easy,y_true_easy))
- #We can see that focal_loss attenuates the weight of the easy samples to 0.0005/0.1054 = 0.00474 of the original,
- #while the weight of the hard samples only drops to 0.0866/0.6931 = 0.12496 of the original.
- #So, relatively speaking, focal_loss attenuates the weight of the easy samples.
-
- out:
- focal_loss(hard samples): tensor(0.0866)
- bce_loss(hard samples): tensor(0.6931)
- focal_loss(easy samples): tensor(0.0005)
- bce_loss(easy samples): tensor(0.1054)
It is commonly held that L1 regularization produces sparse weight matrices, i.e. a sparse model, and can be used for feature selection.
L2 regularization prevents model overfitting; to some extent, L1 can also prevent overfitting.
Below, using a binary classification problem as an example, we demonstrate how to add custom L1 and L2 regularization terms to the model's objective function.
This example also demonstrates the use of FocalLoss from the previous part.
- import torch
- from torch import nn
-
- class FocalLoss(nn.Module):
-     def __init__(self,gamma=2.0,alpha=0.75):
-         super().__init__()
-         self.gamma = gamma
-         self.alpha = alpha
-     def forward(self,y_pred,y_true):
-         bce = torch.nn.BCELoss(reduction = "none")(y_pred,y_true)
-         p_t = (y_true * y_pred) + ((1 - y_true) * (1 - y_pred))
-         alpha_factor = y_true * self.alpha + (1 - y_true) * (1 - self.alpha)
-         modulating_factor = torch.pow(1.0 - p_t, self.gamma)
-         loss = torch.mean(alpha_factor * modulating_factor * bce)
-         return loss
-
-
- #Example 4-3-3: custom L1 and L2 regularization terms
- import numpy as np
- import pandas as pd
- from matplotlib import pyplot as plt
- import torch
- from torch import nn
- import torch.nn.functional as F
- from torch.utils.data import Dataset,DataLoader,TensorDataset
- import torchkeras
- %matplotlib inline
- %config InlineBackend.figure_format = 'svg'
-
- #1. Prepare the data
- #Number of positive and negative samples
- n_positive,n_negative = 200,6000
- #Generate positive samples on a small ring
- r_p = 5.0 + torch.normal(0.0,1.0,size = [n_positive,1])
- theta_p = 2*np.pi*torch.rand([n_positive,1])
- Xp = torch.cat([r_p*torch.cos(theta_p),r_p*torch.sin(theta_p)],axis = 1)
- Yp = torch.ones_like(r_p)
- #Generate negative samples on a large ring
- r_n = 8.0 + torch.normal(0.0,1.0,size = [n_negative,1])
- theta_n = 2*np.pi*torch.rand([n_negative,1])
- Xn = torch.cat([r_n*torch.cos(theta_n),r_n*torch.sin(theta_n)],axis = 1)
- Yn = torch.zeros_like(r_n)
- #Combine the samples
- X = torch.cat([Xp,Xn],axis = 0)
- Y = torch.cat([Yp,Yn],axis = 0)
- #Visualize
- plt.figure(figsize = (6,6))
- plt.scatter(Xp[:,0],Xp[:,1],c = "r")
- plt.scatter(Xn[:,0],Xn[:,1],c = "g")
- plt.legend(["positive","negative"]);
- ds = TensorDataset(X,Y)
- ds_train,ds_valid = torch.utils.data.random_split(ds,
- [int(len(ds)*0.7),len(ds)-int(len(ds)*0.7)])
- dl_train = DataLoader(ds_train,batch_size = 100,shuffle=True,num_workers=2)
- dl_valid = DataLoader(ds_valid,batch_size = 100,num_workers=2)
-
- #2. Define the model
- class DNNModel(torchkeras.Model):
-     def __init__(self):
-         super(DNNModel, self).__init__()
-         self.fc1 = nn.Linear(2,4)
-         self.fc2 = nn.Linear(4,8)
-         self.fc3 = nn.Linear(8,1)
-     def forward(self,x):
-         x = F.relu(self.fc1(x))
-         x = F.relu(self.fc2(x))
-         y = nn.Sigmoid()(self.fc3(x))
-         return y
- model = DNNModel()
- model.summary(input_shape =(2,))
-
- out:
- ----------------------------------------------------------------
- Layer (type) Output Shape Param #
- ================================================================
- Linear-1 [-1, 4] 12
- Linear-2 [-1, 8] 40
- Linear-3 [-1, 1] 9
- ================================================================
- Total params: 61
- Trainable params: 61
- Non-trainable params: 0
- ----------------------------------------------------------------
- Input size (MB): 0.000008
- Forward/backward pass size (MB): 0.000099
- Params size (MB): 0.000233
- Estimated Total Size (MB): 0.000340
- ----------------------------------------------------------------
- #3. Train the model
- # Accuracy metric
- def accuracy(y_pred,y_true):
-     y_pred = torch.where(y_pred>0.5,torch.ones_like(y_pred,dtype = torch.float32),torch.zeros_like(y_pred,dtype = torch.float32))
-     acc = torch.mean(1-torch.abs(y_true-y_pred))
-     return acc
- # L2 regularization
- def L2Loss(model,alpha):
-     l2_loss = torch.tensor(0.0, requires_grad=True)
-     for name, param in model.named_parameters():
-         if 'bias' not in name: #bias terms are generally not regularized
-             l2_loss = l2_loss + (0.5 * alpha * torch.sum(torch.pow(param, 2)))
-     return l2_loss
- # L1 regularization
- def L1Loss(model,beta):
-     l1_loss = torch.tensor(0.0, requires_grad=True)
-     for name, param in model.named_parameters():
-         if 'bias' not in name:
-             l1_loss = l1_loss + beta * torch.sum(torch.abs(param))
-     return l1_loss
- # Add the L2 and L1 regularization terms to the FocalLoss to form the objective function
- def focal_loss_with_regularization(y_pred,y_true):
-     focal = FocalLoss()(y_pred,y_true)
-     l2_loss = L2Loss(model,0.001) #note the regularization coefficients set here
-     l1_loss = L1Loss(model,0.001)
-     total_loss = focal + l2_loss + l1_loss
-     return total_loss
- model.compile(loss_func = focal_loss_with_regularization,
-               optimizer = torch.optim.Adam(model.parameters(),lr = 0.01),
-               metrics_dict = {"accuracy":accuracy})
- dfhistory = model.fit(30,dl_train = dl_train,dl_val = dl_valid,log_step_freq = 30)
-
- out:
- Start Training ...
-
- ================================================================================2022-03-20 22:24:58
- {'step': 30, 'loss': 0.071, 'accuracy': 0.339}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 1 | 0.059 | 0.538 | 0.025 | 0.97 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:24:59
- {'step': 30, 'loss': 0.026, 'accuracy': 0.966}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 2 | 0.025 | 0.967 | 0.023 | 0.97 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:01
- {'step': 30, 'loss': 0.024, 'accuracy': 0.966}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 3 | 0.023 | 0.967 | 0.022 | 0.97 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:03
- {'step': 30, 'loss': 0.023, 'accuracy': 0.966}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 4 | 0.023 | 0.967 | 0.021 | 0.97 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:05
- {'step': 30, 'loss': 0.023, 'accuracy': 0.966}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 5 | 0.023 | 0.966 | 0.021 | 0.97 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:06
- {'step': 30, 'loss': 0.023, 'accuracy': 0.966}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 6 | 0.023 | 0.967 | 0.021 | 0.97 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:08
- {'step': 30, 'loss': 0.022, 'accuracy': 0.967}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 7 | 0.022 | 0.967 | 0.021 | 0.97 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:10
- {'step': 30, 'loss': 0.022, 'accuracy': 0.966}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 8 | 0.022 | 0.967 | 0.02 | 0.97 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:12
- {'step': 30, 'loss': 0.022, 'accuracy': 0.963}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 9 | 0.021 | 0.967 | 0.02 | 0.97 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:14
- {'step': 30, 'loss': 0.021, 'accuracy': 0.966}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 10 | 0.021 | 0.967 | 0.019 | 0.97 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:16
- {'step': 30, 'loss': 0.021, 'accuracy': 0.966}
-
- +-------+------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+------+----------+----------+--------------+
- | 11 | 0.02 | 0.969 | 0.019 | 0.97 |
- +-------+------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:18
- {'step': 30, 'loss': 0.019, 'accuracy': 0.972}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 12 | 0.019 | 0.971 | 0.018 | 0.972 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:19
- {'step': 30, 'loss': 0.019, 'accuracy': 0.973}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 13 | 0.019 | 0.971 | 0.018 | 0.972 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:21
- {'step': 30, 'loss': 0.019, 'accuracy': 0.972}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 14 | 0.019 | 0.974 | 0.017 | 0.971 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:23
- {'step': 30, 'loss': 0.019, 'accuracy': 0.974}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 15 | 0.018 | 0.976 | 0.018 | 0.972 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:25
- {'step': 30, 'loss': 0.018, 'accuracy': 0.975}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 16 | 0.018 | 0.976 | 0.017 | 0.978 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:26
- {'step': 30, 'loss': 0.018, 'accuracy': 0.977}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 17 | 0.018 | 0.978 | 0.017 | 0.979 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:28
- {'step': 30, 'loss': 0.019, 'accuracy': 0.975}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 18 | 0.018 | 0.977 | 0.017 | 0.978 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:30
- {'step': 30, 'loss': 0.017, 'accuracy': 0.981}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 19 | 0.018 | 0.98 | 0.017 | 0.982 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:32
- {'step': 30, 'loss': 0.017, 'accuracy': 0.98}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 20 | 0.018 | 0.979 | 0.018 | 0.984 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:34
- {'step': 30, 'loss': 0.018, 'accuracy': 0.978}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 21 | 0.018 | 0.979 | 0.017 | 0.981 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:36
- {'step': 30, 'loss': 0.018, 'accuracy': 0.981}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 22 | 0.018 | 0.98 | 0.016 | 0.98 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:37
- {'step': 30, 'loss': 0.017, 'accuracy': 0.982}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 23 | 0.017 | 0.98 | 0.016 | 0.982 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:39
- {'step': 30, 'loss': 0.018, 'accuracy': 0.978}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 24 | 0.017 | 0.98 | 0.016 | 0.98 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:41
- {'step': 30, 'loss': 0.017, 'accuracy': 0.982}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 25 | 0.017 | 0.979 | 0.017 | 0.983 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:43
- {'step': 30, 'loss': 0.018, 'accuracy': 0.98}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 26 | 0.017 | 0.981 | 0.016 | 0.983 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:45
- {'step': 30, 'loss': 0.017, 'accuracy': 0.982}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 27 | 0.017 | 0.979 | 0.016 | 0.985 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:47
- {'step': 30, 'loss': 0.017, 'accuracy': 0.98}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 28 | 0.017 | 0.979 | 0.016 | 0.984 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:49
- {'step': 30, 'loss': 0.017, 'accuracy': 0.981}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 29 | 0.017 | 0.98 | 0.016 | 0.986 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:51
- {'step': 30, 'loss': 0.017, 'accuracy': 0.978}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 30 | 0.018 | 0.979 | 0.016 | 0.986 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-20 22:25:53
- Finished Training...
- # Visualize the results
- fig, (ax1,ax2) = plt.subplots(nrows=1,ncols=2,figsize = (12,5))
- ax1.scatter(Xp[:,0],Xp[:,1], c="r")
- ax1.scatter(Xn[:,0],Xn[:,1],c = "g")
- ax1.legend(["positive","negative"]);
- ax1.set_title("y_true");
- Xp_pred = X[torch.squeeze(model.forward(X)>=0.5)]
- Xn_pred = X[torch.squeeze(model.forward(X)<0.5)]
- ax2.scatter(Xp_pred[:,0],Xp_pred[:,1],c = "r")
- ax2.scatter(Xn_pred[:,0],Xn_pred[:,1],c = "g")
- ax2.legend(["positive","negative"]);
- ax2.set_title("y_pred");
If you only need L2 regularization, you can also use the optimizer's weight_decay argument. weight_decay makes the parameters decay during training, which has the same effect as L2 regularization, as the derivation below and the numeric check after it show.
- before L2 regularization:
- gradient descent: w = w - lr * dloss_dw
- after L2 regularization:
- gradient descent: w = w - lr * (dloss_dw + beta*w) = (1 - lr*beta)*w - lr*dloss_dw
- so (1 - lr*beta) is the weight decay ratio.
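A small numeric check (a sketch written for these notes) that one SGD step with weight_decay matches the update above:
- import torch
-
- w = torch.nn.Parameter(torch.tensor([1.0]))
- opt = torch.optim.SGD([w],lr = 0.1,weight_decay = 0.5)
- loss = (w*0).sum() #the loss itself contributes a zero gradient, so only the decay term acts
- loss.backward()
- opt.step()
- print(w.data) #tensor([0.9500]) = (1 - lr*beta)*w = (1 - 0.1*0.5)*1.0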
PyTorch optimizers support what are called per-parameter options: you can specify a particular learning rate and weight decay rate for each parameter group, to satisfy more fine-grained requirements.
- weight_params = [param for name, param in model.named_parameters() if "bias"
- not in name]
- bias_params = [param for name, param in model.named_parameters() if "bias" in
- name]
-
-
- optimizer = torch.optim.SGD([{'params': weight_params, 'weight_decay':1e-5},
- {'params': bias_params, 'weight_decay':0}],
- lr=1e-2, momentum=0.9)
-
PyTorch models can be built in the following three ways:
1. Subclass the nn.Module base class to build a custom model.
2. Use nn.Sequential to build a model in layer order.
3. Subclass the nn.Module base class and use model containers (nn.Sequential, nn.ModuleList, nn.ModuleDict) to encapsulate parts of the model.
The first way is the most common, the second is the simplest, and the third is the most flexible but also rather complex.
The layers used in a model are generally defined in the __init__ method, and the forward propagation logic of the model is defined in the forward method.
- #Example 5-1-1: build a custom model by subclassing the nn.Module base class
- import torch
- from torch import nn
- from torchkeras import summary
- class Net(nn.Module):
-     def __init__(self):
-         super(Net, self).__init__()
-         self.conv1 = nn.Conv2d(in_channels=3,out_channels=32,kernel_size = 3)
-         self.pool1 = nn.MaxPool2d(kernel_size = 2,stride = 2)
-         self.conv2 = nn.Conv2d(in_channels=32,out_channels=64,kernel_size = 5)
-         self.pool2 = nn.MaxPool2d(kernel_size = 2,stride = 2)
-         self.dropout = nn.Dropout2d(p = 0.1)
-         self.adaptive_pool = nn.AdaptiveMaxPool2d((1,1))
-         self.flatten = nn.Flatten()
-         self.linear1 = nn.Linear(64,32)
-         self.relu = nn.ReLU()
-         self.linear2 = nn.Linear(32,1)
-         self.sigmoid = nn.Sigmoid()
-     def forward(self,x):
-         x = self.conv1(x)
-         x = self.pool1(x)
-         x = self.conv2(x)
-         x = self.pool2(x)
-         x = self.dropout(x)
-         x = self.adaptive_pool(x)
-         x = self.flatten(x)
-         x = self.linear1(x)
-         x = self.relu(x)
-         x = self.linear2(x)
-         y = self.sigmoid(x)
-         return y
- net = Net()
- print(net)
-
- out:
- Net(
- (conv1): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1))
- (pool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (conv2): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1))
- (pool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (dropout): Dropout2d(p=0.1, inplace=False)
- (adaptive_pool): AdaptiveMaxPool2d(output_size=(1, 1))
- (flatten): Flatten(start_dim=1, end_dim=-1)
- (linear1): Linear(in_features=64, out_features=32, bias=True)
- (relu): ReLU()
- (linear2): Linear(in_features=32, out_features=1, bias=True)
- (sigmoid): Sigmoid()
- )
-
-
-
- summary(net,input_shape= (3,32,32))
-
- out:
- ----------------------------------------------------------------
- Layer (type) Output Shape Param #
- ================================================================
- Conv2d-1 [-1, 32, 30, 30] 896
- MaxPool2d-2 [-1, 32, 15, 15] 0
- Conv2d-3 [-1, 64, 11, 11] 51,264
- MaxPool2d-4 [-1, 64, 5, 5] 0
- Dropout2d-5 [-1, 64, 5, 5] 0
- AdaptiveMaxPool2d-6 [-1, 64, 1, 1] 0
- Flatten-7 [-1, 64] 0
- Linear-8 [-1, 32] 2,080
- ReLU-9 [-1, 32] 0
- Linear-10 [-1, 1] 33
- Sigmoid-11 [-1, 1] 0
- ================================================================
- Total params: 54,273
- Trainable params: 54,273
- Non-trainable params: 0
- ----------------------------------------------------------------
- Input size (MB): 0.011719
- Forward/backward pass size (MB): 0.359634
- Params size (MB): 0.207035
- Estimated Total Size (MB): 0.578388
- ----------------------------------------------------------------
Building a model with nn.Sequential in layer order requires no forward method definition and only suits simple models. There are three ways:
a) Use the add_module method.
b) Use variable-length arguments; built this way, the layers cannot be given individual names.
c) Use an OrderedDict.
- #Example 5-1-2: build a model with nn.Sequential in layer order
-
- #a) Using the add_module method
- import torch
- from torch import nn
- from torchkeras import summary
-
- net = nn.Sequential()
- net.add_module("conv1",nn.Conv2d(in_channels=3,out_channels=32,kernel_size = 3))
- net.add_module("pool1",nn.MaxPool2d(kernel_size = 2,stride = 2))
- net.add_module("conv2",nn.Conv2d(in_channels=32,out_channels=64,kernel_size = 5))
- net.add_module("pool2",nn.MaxPool2d(kernel_size = 2,stride = 2))
- net.add_module("dropout",nn.Dropout2d(p = 0.1))
- net.add_module("adaptive_pool",nn.AdaptiveMaxPool2d((1,1)))
- net.add_module("flatten",nn.Flatten())
- net.add_module("linear1",nn.Linear(64,32))
- net.add_module("relu",nn.ReLU())
- net.add_module("linear2",nn.Linear(32,1))
- net.add_module("sigmoid",nn.Sigmoid())
- print(net)
-
- out:
- Sequential(
- (conv1): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1))
- (pool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (conv2): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1))
- (pool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (dropout): Dropout2d(p=0.1, inplace=False)
- (adaptive_pool): AdaptiveMaxPool2d(output_size=(1, 1))
- (flatten): Flatten(start_dim=1, end_dim=-1)
- (linear1): Linear(in_features=64, out_features=32, bias=True)
- (relu): ReLU()
- (linear2): Linear(in_features=32, out_features=1, bias=True)
- (sigmoid): Sigmoid()
- )
-
-
-
- #b) Using variable-length arguments; layers cannot be given names this way.
- net = nn.Sequential(
- nn.Conv2d(in_channels=3,out_channels=32,kernel_size = 3),
- nn.MaxPool2d(kernel_size = 2,stride = 2),
- nn.Conv2d(in_channels=32,out_channels=64,kernel_size = 5),
- nn.MaxPool2d(kernel_size = 2,stride = 2),
- nn.Dropout2d(p = 0.1),
- nn.AdaptiveMaxPool2d((1,1)),
- nn.Flatten(),
- nn.Linear(64,32),
- nn.ReLU(),
- nn.Linear(32,1),
- nn.Sigmoid()
- )
- print(net)
-
- out:
- Sequential(
- (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1))
- (1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (2): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1))
- (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (4): Dropout2d(p=0.1, inplace=False)
- (5): AdaptiveMaxPool2d(output_size=(1, 1))
- (6): Flatten(start_dim=1, end_dim=-1)
- (7): Linear(in_features=64, out_features=32, bias=True)
- (8): ReLU()
- (9): Linear(in_features=32, out_features=1, bias=True)
- (10): Sigmoid()
- )
-
-
-
-
- #c) Using an OrderedDict
- from collections import OrderedDict
- net = nn.Sequential(OrderedDict(
-     [("conv1",nn.Conv2d(in_channels=3,out_channels=32,kernel_size = 3)),
-      ("pool1",nn.MaxPool2d(kernel_size = 2,stride = 2)),
-      ("conv2",nn.Conv2d(in_channels=32,out_channels=64,kernel_size = 5)),
-      ("pool2",nn.MaxPool2d(kernel_size = 2,stride = 2)),
-      ("dropout",nn.Dropout2d(p = 0.1)),
-      ("adaptive_pool",nn.AdaptiveMaxPool2d((1,1))),
-      ("flatten",nn.Flatten()),
-      ("linear1",nn.Linear(64,32)),
-      ("relu",nn.ReLU()),
-      ("linear2",nn.Linear(32,1)),
-      ("sigmoid",nn.Sigmoid())
-     ])
- )
- print(net)
- summary(net,input_shape= (3,32,32))
-
- out:
- Sequential(
- (conv1): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1))
- (pool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (conv2): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1))
- (pool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (dropout): Dropout2d(p=0.1, inplace=False)
- (adaptive_pool): AdaptiveMaxPool2d(output_size=(1, 1))
- (flatten): Flatten(start_dim=1, end_dim=-1)
- (linear1): Linear(in_features=64, out_features=32, bias=True)
- (relu): ReLU()
- (linear2): Linear(in_features=32, out_features=1, bias=True)
- (sigmoid): Sigmoid()
- )
-
- ----------------------------------------------------------------
- Layer (type) Output Shape Param #
- ================================================================
- Conv2d-1 [-1, 32, 30, 30] 896
- MaxPool2d-2 [-1, 32, 15, 15] 0
- Conv2d-3 [-1, 64, 11, 11] 51,264
- MaxPool2d-4 [-1, 64, 5, 5] 0
- Dropout2d-5 [-1, 64, 5, 5] 0
- AdaptiveMaxPool2d-6 [-1, 64, 1, 1] 0
- Flatten-7 [-1, 64] 0
- Linear-8 [-1, 32] 2,080
- ReLU-9 [-1, 32] 0
- Linear-10 [-1, 1] 33
- Sigmoid-11 [-1, 1] 0
- ================================================================
- Total params: 54,273
- Trainable params: 54,273
- Non-trainable params: 0
- ----------------------------------------------------------------
- Input size (MB): 0.011719
- Forward/backward pass size (MB): 0.359634
- Params size (MB): 0.207035
- Estimated Total Size (MB): 0.578388
- ----------------------------------------------------------------
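As a quick sanity check on the Param # column (the arithmetic is mine, not from the original text): a Conv2d layer holds (kernel_h × kernel_w × in_channels + 1) × out_channels parameters, so conv1 has (3×3×3+1)×32 = 896 and conv2 has (5×5×32+1)×64 = 51,264; a Linear layer holds in_features × out_features + out_features, so linear1 has 64×32+32 = 2,080 and linear2 has 32×1+1 = 33. The sum is 54,273, matching Total params above.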
When the structure of a model is fairly complex, we can use the model containers (nn.Sequential, nn.ModuleList, nn.ModuleDict) to encapsulate parts of it. This gives the model a clearer hierarchy and can sometimes reduce the amount of code.
Note that each example below uses only one kind of container at a time, but in practice these containers are very flexible: they can be combined and nested arbitrarily within a single model, as the short sketch right after this paragraph illustrates.
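As a minimal sketch of that flexibility (this snippet is my own addition, not one of the original examples), an nn.ModuleDict can hold nn.Sequential sub-blocks, which are then called by name in forward:
- #Sketch: nesting nn.Sequential blocks inside nn.ModuleDict
- import torch
- from torch import nn
- 
- class MixedNet(nn.Module):
-     def __init__(self):
-         super().__init__()
-         self.blocks = nn.ModuleDict({
-             "conv": nn.Sequential(
-                 nn.Conv2d(3,32,kernel_size = 3),
-                 nn.ReLU(),
-                 nn.AdaptiveMaxPool2d((1,1))),
-             "head": nn.Sequential(
-                 nn.Flatten(),
-                 nn.Linear(32,1),
-                 nn.Sigmoid())
-         })
-     def forward(self,x):
-         x = self.blocks["conv"](x)
-         return self.blocks["head"](x)
- 
- print(MixedNet()(torch.randn(2,3,32,32)).shape)  # torch.Size([2, 1])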
- #5-1-3 Subclassing nn.Module to build the model, with model containers used for encapsulation
- import torch
- from torch import nn
- from torchkeras import summary
- #a) nn.Sequential as the model container
- class Net(nn.Module):
-     def __init__(self):
-         super(Net, self).__init__()
-         self.conv = nn.Sequential(
-             nn.Conv2d(in_channels=3,out_channels=32,kernel_size = 3),
-             nn.MaxPool2d(kernel_size = 2,stride = 2),
-             nn.Conv2d(in_channels=32,out_channels=64,kernel_size = 5),
-             nn.MaxPool2d(kernel_size = 2,stride = 2),
-             nn.Dropout2d(p = 0.1),
-             nn.AdaptiveMaxPool2d((1,1))
-         )
-         self.dense = nn.Sequential(
-             nn.Flatten(),
-             nn.Linear(64,32),
-             nn.ReLU(),
-             nn.Linear(32,1),
-             nn.Sigmoid()
-         )
-     def forward(self,x):
-         x = self.conv(x)
-         y = self.dense(x)
-         return y
- net = Net()
- print(net)
-
- out:
- Net(
- (conv): Sequential(
- (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1))
- (1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (2): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1))
- (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (4): Dropout2d(p=0.1, inplace=False)
- (5): AdaptiveMaxPool2d(output_size=(1, 1))
- )
- (dense): Sequential(
- (0): Flatten(start_dim=1, end_dim=-1)
- (1): Linear(in_features=64, out_features=32, bias=True)
- (2): ReLU()
- (3): Linear(in_features=32, out_features=1, bias=True)
- (4): Sigmoid()
- )
- )
-
-
-
- #b) nn.ModuleList as the model container. Note: the ModuleList below cannot be replaced by a plain Python list (see the short sketch after this example).
- class Net(nn.Module):
-     def __init__(self):
-         super(Net, self).__init__()
-         self.layers = nn.ModuleList([
-             nn.Conv2d(in_channels=3,out_channels=32,kernel_size = 3),
-             nn.MaxPool2d(kernel_size = 2,stride = 2),
-             nn.Conv2d(in_channels=32,out_channels=64,kernel_size = 5),
-             nn.MaxPool2d(kernel_size = 2,stride = 2),
-             nn.Dropout2d(p = 0.1),
-             nn.AdaptiveMaxPool2d((1,1)),
-             nn.Flatten(),
-             nn.Linear(64,32),
-             nn.ReLU(),
-             nn.Linear(32,1),
-             nn.Sigmoid()]
-         )
-     def forward(self,x):
-         for layer in self.layers:
-             x = layer(x)
-         return x
- net = Net()
- print(net)
-
- out:
- Net(
- (layers): ModuleList(
- (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1))
- (1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (2): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1))
- (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (4): Dropout2d(p=0.1, inplace=False)
- (5): AdaptiveMaxPool2d(output_size=(1, 1))
- (6): Flatten(start_dim=1, end_dim=-1)
- (7): Linear(in_features=64, out_features=32, bias=True)
- (8): ReLU()
- (9): Linear(in_features=32, out_features=1, bias=True)
- (10): Sigmoid()
- )
- )
-
-
-
- summary(net,input_shape= (3,32,32))
-
- out:
- ----------------------------------------------------------------
- Layer (type) Output Shape Param #
- ================================================================
- Conv2d-1 [-1, 32, 30, 30] 896
- ================================================================
- Total params: 896
- Trainable params: 896
- Non-trainable params: 0
- ----------------------------------------------------------------
- Input size (MB): 0.011719
- Forward/backward pass size (MB): 0.219727
- Params size (MB): 0.003418
- Estimated Total Size (MB): 0.234863
- ----------------------------------------------------------------
-
-
-
-
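Why the plain Python list would fail (a short sketch of my own, not from the original): modules kept in an ordinary list are not registered on the parent nn.Module, so their parameters are invisible to the optimizer:
- #Sketch: a plain list hides sub-modules from nn.Module
- import torch
- from torch import nn
- 
- class BadNet(nn.Module):
-     def __init__(self):
-         super().__init__()
-         self.layers = [nn.Linear(4,4), nn.Linear(4,1)]  #plain list instead of nn.ModuleList
-     def forward(self,x):
-         for layer in self.layers:
-             x = layer(x)
-         return x
- 
- print(len(list(BadNet().parameters())))  # 0, so nothing would be trained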
- #c) nn.ModuleDict as the model container. Note: the ModuleDict below cannot be replaced by a plain Python dict, for the same parameter-registration reason.
- class Net(nn.Module):
-     def __init__(self):
-         super(Net, self).__init__()
-         self.layers_dict = nn.ModuleDict({"conv1":nn.Conv2d(in_channels=3,out_channels=32,kernel_size =3),
-             "pool": nn.MaxPool2d(kernel_size = 2,stride = 2),
-             "conv2":nn.Conv2d(in_channels=32,out_channels=64,kernel_size =5),
-             "dropout": nn.Dropout2d(p = 0.1),
-             "adaptive":nn.AdaptiveMaxPool2d((1,1)),
-             "flatten": nn.Flatten(),
-             "linear1": nn.Linear(64,32),
-             "relu":nn.ReLU(),
-             "linear2": nn.Linear(32,1),
-             "sigmoid": nn.Sigmoid()
-         })
-     def forward(self,x):
-         layers = ["conv1","pool","conv2","pool","dropout","adaptive","flatten","linear1","relu","linear2","sigmoid"]
-         for layer in layers:
-             x = self.layers_dict[layer](x)
-         return x
- net = Net()
- print(net)
-
- out:
- Net(
- (layers_dict): ModuleDict(
- (conv1): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1))
- (pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (conv2): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1))
- (dropout): Dropout2d(p=0.1, inplace=False)
- (adaptive): AdaptiveMaxPool2d(output_size=(1, 1))
- (flatten): Flatten(start_dim=1, end_dim=-1)
- (linear1): Linear(in_features=64, out_features=32, bias=True)
- (relu): ReLU()
- (linear2): Linear(in_features=32, out_features=1, bias=True)
- (sigmoid): Sigmoid()
- )
- )
-
-
-
- summary(net,input_shape= (3,32,32))
-
- out:
- ----------------------------------------------------------------
- Layer (type) Output Shape Param #
- ================================================================
- Conv2d-1 [-1, 32, 30, 30] 896
- ================================================================
- Total params: 896
- Trainable params: 896
- Non-trainable params: 0
- ----------------------------------------------------------------
- Input size (MB): 0.011719
- Forward/backward pass size (MB): 0.219727
- Params size (MB): 0.003418
- Estimated Total Size (MB): 0.234863
- ----------------------------------------------------------------
PyTorch usually expects the user to write a custom training loop, and the coding style of that loop varies from person to person.
There are three typical styles of training-loop code: script style, function style, and class style.
Below we demonstrate the three styles by training a classification model on the MNIST dataset.
For the class style we will use both torchkeras.Model and torchkeras.LightModel.
The script-style training loop is the most common one.
- #Example 5-2: three ways to train a model with PyTorch
- 
- #Prepare the data
- import torch
- from torch import nn
- from torchkeras import summary
- import torchvision
- from torchvision import transforms
- 
- transform = transforms.Compose([transforms.ToTensor()])
- ds_train = torchvision.datasets.MNIST(root="./data/minist/",train=True,download=True,transform=transform)
- ds_valid = torchvision.datasets.MNIST(root="./data/minist/",train=False,download=True,transform=transform)
- dl_train = torch.utils.data.DataLoader(ds_train, batch_size=128,shuffle=True, num_workers=4)
- dl_valid = torch.utils.data.DataLoader(ds_valid, batch_size=128,shuffle=False, num_workers=4)
- print(len(ds_train))
- print(len(ds_valid))
- 
- out:
- 60000
- 10000
- 
- %matplotlib inline
- %config InlineBackend.figure_format = 'svg'
- #View a few samples
- from matplotlib import pyplot as plt
- plt.figure(figsize=(8,8))
- for i in range(9):
-     img,label = ds_train[i]
-     img = torch.squeeze(img)
-     ax=plt.subplot(3,3,i+1)
-     ax.imshow(img.numpy())
-     ax.set_title("label = %d"%label)
-     ax.set_xticks([])
-     ax.set_yticks([])
- plt.show()
-
-
-
- #Example 5-2-1: script style
- net = nn.Sequential()
- net.add_module("conv1",nn.Conv2d(in_channels=1,out_channels=32,kernel_size = 3))
- net.add_module("pool1",nn.MaxPool2d(kernel_size = 2,stride = 2))
- net.add_module("conv2",nn.Conv2d(in_channels=32,out_channels=64,kernel_size = 5))
- net.add_module("pool2",nn.MaxPool2d(kernel_size = 2,stride = 2))
- net.add_module("dropout",nn.Dropout2d(p = 0.1))
- net.add_module("adaptive_pool",nn.AdaptiveMaxPool2d((1,1)))
- net.add_module("flatten",nn.Flatten())
- net.add_module("linear1",nn.Linear(64,32))
- net.add_module("relu",nn.ReLU())
- net.add_module("linear2",nn.Linear(32,10))
- print(net)
-
- out:
- Sequential(
- (conv1): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1))
- (pool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (conv2): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1))
- (pool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (dropout): Dropout2d(p=0.1, inplace=False)
- (adaptive_pool): AdaptiveMaxPool2d(output_size=(1, 1))
- (flatten): Flatten(start_dim=1, end_dim=-1)
- (linear1): Linear(in_features=64, out_features=32, bias=True)
- (relu): ReLU()
- (linear2): Linear(in_features=32, out_features=10, bias=True)
- )
-
-
-
- summary(net,input_shape=(1,32,32))
-
- out:
- ----------------------------------------------------------------
- Layer (type) Output Shape Param #
- ================================================================
- Conv2d-1 [-1, 32, 30, 30] 320
- MaxPool2d-2 [-1, 32, 15, 15] 0
- Conv2d-3 [-1, 64, 11, 11] 51,264
- MaxPool2d-4 [-1, 64, 5, 5] 0
- Dropout2d-5 [-1, 64, 5, 5] 0
- AdaptiveMaxPool2d-6 [-1, 64, 1, 1] 0
- Flatten-7 [-1, 64] 0
- Linear-8 [-1, 32] 2,080
- ReLU-9 [-1, 32] 0
- Linear-10 [-1, 10] 330
- ================================================================
- Total params: 53,994
- Trainable params: 53,994
- Non-trainable params: 0
- ----------------------------------------------------------------
- Input size (MB): 0.003906
- Forward/backward pass size (MB): 0.359695
- Params size (MB): 0.205971
- Estimated Total Size (MB): 0.569572
- ----------------------------------------------------------------
-
-
-
- import datetime
- import numpy as np
- import pandas as pd
- from sklearn.metrics import accuracy_score
- def accuracy(y_pred,y_true):
-     # softmax is monotonic, so the argmax is unchanged; it is applied here only for readability
-     y_pred_cls = torch.argmax(nn.Softmax(dim=1)(y_pred),dim=1).data
-     return accuracy_score(y_true,y_pred_cls)
- loss_func = nn.CrossEntropyLoss()
- optimizer = torch.optim.Adam(params=net.parameters(),lr = 0.01)
- metric_func = accuracy
- metric_name = "accuracy"
- epochs = 3
- log_step_freq = 100
- dfhistory = pd.DataFrame(columns = ["epoch","loss",metric_name,"val_loss","val_"+metric_name])
- print("Start Training...")
- nowtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
- print("=========="*8 + "%s"%nowtime)
- for epoch in range(1,epochs+1):
-     # 1. Training loop -------------------------------------------------
-     net.train()
-     loss_sum = 0.0
-     metric_sum = 0.0
-     step = 1
-     for step, (features,labels) in enumerate(dl_train, 1):
-         # zero the gradients
-         optimizer.zero_grad()
-         # forward pass to compute the loss
-         predictions = net(features)
-         loss = loss_func(predictions,labels)
-         metric = metric_func(predictions,labels)
-         # backward pass to compute the gradients
-         loss.backward()
-         optimizer.step()
-         # accumulate and print batch-level logs
-         loss_sum += loss.item()
-         metric_sum += metric.item()
-         if step%log_step_freq == 0:
-             print(("[step = %d] loss: %.3f, "+metric_name+": %.3f") % (step, loss_sum/step, metric_sum/step))
-     # 2. Validation loop -----------------------------------------------
-     net.eval()
-     val_loss_sum = 0.0
-     val_metric_sum = 0.0
-     val_step = 1
-     for val_step, (features,labels) in enumerate(dl_valid, 1):
-         with torch.no_grad():
-             predictions = net(features)
-             val_loss = loss_func(predictions,labels)
-             val_metric = metric_func(predictions,labels)
-         val_loss_sum += val_loss.item()
-         val_metric_sum += val_metric.item()
-     # 3. Record the logs -----------------------------------------------
-     info = (epoch, loss_sum/step, metric_sum/step,val_loss_sum/val_step, val_metric_sum/val_step)
-     dfhistory.loc[epoch-1] = info
-     # print epoch-level log
-     print(("\nEPOCH = %d, loss = %.3f,"+ metric_name + " = %.3f, val_loss = %.3f, "+"val_"+ metric_name+" = %.3f") %info)
-     nowtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
-     print("\n"+"=========="*8 + "%s"%nowtime)
- print('Finished Training...')
-
- out:
- Start Training...
- ================================================================================2022-03-21 14:28:46
- [step = 100] loss: 0.735, accuracy: 0.750
- [step = 200] loss: 0.459, accuracy: 0.847
- [step = 300] loss: 0.351, accuracy: 0.885
- [step = 400] loss: 0.294, accuracy: 0.904
-
- EPOCH = 1, loss = 0.270,accuracy = 0.913, val_loss = 0.093, val_accuracy = 0.974
-
- ================================================================================2022-03-21 14:29:12
- [step = 100] loss: 0.103, accuracy: 0.968
- [step = 200] loss: 0.108, accuracy: 0.967
- [step = 300] loss: 0.107, accuracy: 0.967
- [step = 400] loss: 0.105, accuracy: 0.968
-
- EPOCH = 2, loss = 0.103,accuracy = 0.968, val_loss = 0.059, val_accuracy = 0.981
-
- ================================================================================2022-03-21 14:29:40
- [step = 100] loss: 0.083, accuracy: 0.973
- [step = 200] loss: 0.092, accuracy: 0.972
- [step = 300] loss: 0.092, accuracy: 0.972
- [step = 400] loss: 0.092, accuracy: 0.972
-
- EPOCH = 3, loss = 0.091,accuracy = 0.973, val_loss = 0.078, val_accuracy = 0.977
-
- ================================================================================2022-03-21 14:30:08
- Finished Training...
The function style simply wraps the script style in a few functions.
- #Example 5-2 (continued): the data-preparation cell is identical to the one shown above for the script style, so it is not repeated here.
-
-
- #Example 5-2-2: function style
- class Net(nn.Module):
-     def __init__(self):
-         super(Net, self).__init__()
-         self.layers = nn.ModuleList([
-             nn.Conv2d(in_channels=1,out_channels=32,kernel_size = 3),
-             nn.MaxPool2d(kernel_size = 2,stride = 2),
-             nn.Conv2d(in_channels=32,out_channels=64,kernel_size = 5),
-             nn.MaxPool2d(kernel_size = 2,stride = 2),
-             nn.Dropout2d(p = 0.1),
-             nn.AdaptiveMaxPool2d((1,1)),
-             nn.Flatten(),
-             nn.Linear(64,32),
-             nn.ReLU(),
-             nn.Linear(32,10)]
-         )
-     def forward(self,x):
-         for layer in self.layers:
-             x = layer(x)
-         return x
- net = Net()
- print(net)
-
- out:
- Net(
- (layers): ModuleList(
- (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1))
- (1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (2): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1))
- (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (4): Dropout2d(p=0.1, inplace=False)
- (5): AdaptiveMaxPool2d(output_size=(1, 1))
- (6): Flatten(start_dim=1, end_dim=-1)
- (7): Linear(in_features=64, out_features=32, bias=True)
- (8): ReLU()
- (9): Linear(in_features=32, out_features=10, bias=True)
- )
- )
-
-
-
- summary(net,input_shape=(1,32,32))
-
- out:
- ----------------------------------------------------------------
- Layer (type) Output Shape Param #
- ================================================================
- Conv2d-1 [-1, 32, 30, 30] 320
- MaxPool2d-2 [-1, 32, 15, 15] 0
- Conv2d-3 [-1, 64, 11, 11] 51,264
- MaxPool2d-4 [-1, 64, 5, 5] 0
- Dropout2d-5 [-1, 64, 5, 5] 0
- AdaptiveMaxPool2d-6 [-1, 64, 1, 1] 0
- Flatten-7 [-1, 64] 0
- Linear-8 [-1, 32] 2,080
- ReLU-9 [-1, 32] 0
- Linear-10 [-1, 10] 330
- ================================================================
- Total params: 53,994
- Trainable params: 53,994
- Non-trainable params: 0
- ----------------------------------------------------------------
- Input size (MB): 0.003906
- Forward/backward pass size (MB): 0.359695
- Params size (MB): 0.205971
- Estimated Total Size (MB): 0.569572
- ----------------------------------------------------------------
-
-
-
- import datetime
- import numpy as np
- import pandas as pd
- from sklearn.metrics import accuracy_score
- def accuracy(y_pred,y_true):
-     y_pred_cls = torch.argmax(nn.Softmax(dim=1)(y_pred),dim=1).data
-     return accuracy_score(y_true,y_pred_cls)
- model = net
- model.optimizer = torch.optim.SGD(model.parameters(),lr = 0.01)
- model.loss_func = nn.CrossEntropyLoss()
- model.metric_func = accuracy
- model.metric_name = "accuracy"
- def train_step(model,features,labels):
-     # training mode: the dropout layer is active
-     model.train()
-     # zero the gradients
-     model.optimizer.zero_grad()
-     # forward pass to compute the loss
-     predictions = model(features)
-     loss = model.loss_func(predictions,labels)
-     metric = model.metric_func(predictions,labels)
-     # backward pass to compute the gradients
-     loss.backward()
-     model.optimizer.step()
-     return loss.item(),metric.item()
- @torch.no_grad()
- def valid_step(model,features,labels):
-     # eval mode: the dropout layer is inactive
-     model.eval()
-     predictions = model(features)
-     loss = model.loss_func(predictions,labels)
-     metric = model.metric_func(predictions,labels)
-     return loss.item(), metric.item()
- # Sanity-check train_step on one batch (for an untrained 10-class model the loss should be near ln(10) ≈ 2.30)
- features,labels = next(iter(dl_train))
- train_step(model,features,labels)
-
- out:
- (2.3077056407928467, 0.125)
-
-
-
- def train_model(model,epochs,dl_train,dl_valid,log_step_freq):
-     metric_name = model.metric_name
-     dfhistory = pd.DataFrame(columns = ["epoch","loss",metric_name,"val_loss","val_"+metric_name])
-     print("Start Training...")
-     nowtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
-     print("=========="*8 + "%s"%nowtime)
-     for epoch in range(1,epochs+1):
-         # 1. Training loop -------------------------------------------------
-         loss_sum = 0.0
-         metric_sum = 0.0
-         step = 1
-         for step, (features,labels) in enumerate(dl_train, 1):
-             loss,metric = train_step(model,features,labels)
-             # accumulate and print batch-level logs
-             loss_sum += loss
-             metric_sum += metric
-             if step%log_step_freq == 0:
-                 print(("[step = %d] loss: %.3f, "+metric_name+": %.3f") % (step, loss_sum/step, metric_sum/step))
-         # 2. Validation loop -----------------------------------------------
-         val_loss_sum = 0.0
-         val_metric_sum = 0.0
-         val_step = 1
-         for val_step, (features,labels) in enumerate(dl_valid, 1):
-             val_loss,val_metric = valid_step(model,features,labels)
-             val_loss_sum += val_loss
-             val_metric_sum += val_metric
-         # 3. Record the logs -----------------------------------------------
-         info = (epoch, loss_sum/step, metric_sum/step,val_loss_sum/val_step, val_metric_sum/val_step)
-         dfhistory.loc[epoch-1] = info
-         # print epoch-level log
-         print(("\nEPOCH = %d, loss = %.3f,"+ metric_name + \
-                " = %.3f, val_loss = %.3f, "+"val_"+ metric_name+" = %.3f")
-               %info)
-         nowtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
-         print("\n"+"=========="*8 + "%s"%nowtime)
-     print('Finished Training...')
-     return dfhistory
- epochs = 3
- dfhistory = train_model(model,epochs,dl_train,dl_valid,log_step_freq = 100)
-
- out:
- Start Training...
- ================================================================================2022-03-21 14:43:36
- [step = 100] loss: 2.301, accuracy: 0.111
- [step = 200] loss: 2.291, accuracy: 0.132
- [step = 300] loss: 2.282, accuracy: 0.167
- [step = 400] loss: 2.270, accuracy: 0.208
-
- EPOCH = 1, loss = 2.260,accuracy = 0.231, val_loss = 2.178, val_accuracy = 0.432
-
- ================================================================================2022-03-21 14:44:01
- [step = 100] loss: 2.157, accuracy: 0.404
- [step = 200] loss: 2.114, accuracy: 0.423
- [step = 300] loss: 2.058, accuracy: 0.437
- [step = 400] loss: 1.990, accuracy: 0.461
-
- EPOCH = 2, loss = 1.936,accuracy = 0.478, val_loss = 1.505, val_accuracy = 0.745
-
- ================================================================================2022-03-21 14:44:29
- [step = 100] loss: 1.473, accuracy: 0.613
- [step = 200] loss: 1.374, accuracy: 0.637
- [step = 300] loss: 1.277, accuracy: 0.661
- [step = 400] loss: 1.187, accuracy: 0.684
-
- EPOCH = 3, loss = 1.132,accuracy = 0.698, val_loss = 0.629, val_accuracy = 0.876
-
- ================================================================================2022-03-21 14:44:58
- Finished Training...
There are two class styles: torchkeras.Model and torchkeras.LightModel.
- #Example 5-2 (continued): the data-preparation cell is again identical to the one above and is omitted here.
-
-
-
-
- #Example 5-2-3-a: class style -- torchkeras.Model
- import torchkeras
- class CnnModel(nn.Module):
-     def __init__(self):
-         super().__init__()
-         self.layers = nn.ModuleList([
-             nn.Conv2d(in_channels=1,out_channels=32,kernel_size = 3),
-             nn.MaxPool2d(kernel_size = 2,stride = 2),
-             nn.Conv2d(in_channels=32,out_channels=64,kernel_size = 5),
-             nn.MaxPool2d(kernel_size = 2,stride = 2),
-             nn.Dropout2d(p = 0.1),
-             nn.AdaptiveMaxPool2d((1,1)),
-             nn.Flatten(),
-             nn.Linear(64,32),
-             nn.ReLU(),
-             nn.Linear(32,10)]
-         )
-     def forward(self,x):
-         for layer in self.layers:
-             x = layer(x)
-         return x
- model = torchkeras.Model(CnnModel())
- print(model)
-
- out:
- Model(
- (net): CnnModel(
- (layers): ModuleList(
- (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1))
- (1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (2): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1))
- (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (4): Dropout2d(p=0.1, inplace=False)
- (5): AdaptiveMaxPool2d(output_size=(1, 1))
- (6): Flatten(start_dim=1, end_dim=-1)
- (7): Linear(in_features=64, out_features=32, bias=True)
- (8): ReLU()
- (9): Linear(in_features=32, out_features=10, bias=True)
- )
- )
- )
-
-
-
- model.summary(input_shape=(1,32,32))
-
- out:
- ----------------------------------------------------------------
- Layer (type) Output Shape Param #
- ================================================================
- Conv2d-1 [-1, 32, 30, 30] 320
- MaxPool2d-2 [-1, 32, 15, 15] 0
- Conv2d-3 [-1, 64, 11, 11] 51,264
- MaxPool2d-4 [-1, 64, 5, 5] 0
- Dropout2d-5 [-1, 64, 5, 5] 0
- AdaptiveMaxPool2d-6 [-1, 64, 1, 1] 0
- Flatten-7 [-1, 64] 0
- Linear-8 [-1, 32] 2,080
- ReLU-9 [-1, 32] 0
- Linear-10 [-1, 10] 330
- ================================================================
- Total params: 53,994
- Trainable params: 53,994
- Non-trainable params: 0
- ----------------------------------------------------------------
- Input size (MB): 0.003906
- Forward/backward pass size (MB): 0.359695
- Params size (MB): 0.205971
- Estimated Total Size (MB): 0.569572
- ----------------------------------------------------------------
-
-
-
- from sklearn.metrics import accuracy_score
- def accuracy(y_pred,y_true):
-     y_pred_cls = torch.argmax(nn.Softmax(dim=1)(y_pred),dim=1).data
-     return accuracy_score(y_true.numpy(),y_pred_cls.numpy())
- model.compile(loss_func = nn.CrossEntropyLoss(),optimizer= torch.optim.Adam(model.parameters(),lr = 0.02),metrics_dict={"accuracy":accuracy})
- dfhistory = model.fit(3,dl_train = dl_train, dl_val=dl_valid,log_step_freq=100)
-
- out:
- Start Training ...
-
- ================================================================================2022-03-21 14:56:50
- {'step': 100, 'loss': 0.734, 'accuracy': 0.753}
- {'step': 200, 'loss': 0.482, 'accuracy': 0.84}
- {'step': 300, 'loss': 0.384, 'accuracy': 0.875}
- {'step': 400, 'loss': 0.332, 'accuracy': 0.893}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 1 | 0.305 | 0.903 | 0.1 | 0.971 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-21 14:57:16
- {'step': 100, 'loss': 0.135, 'accuracy': 0.961}
- {'step': 200, 'loss': 0.187, 'accuracy': 0.948}
- {'step': 300, 'loss': 0.188, 'accuracy': 0.948}
- {'step': 400, 'loss': 0.175, 'accuracy': 0.952}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 2 | 0.165 | 0.955 | 0.086 | 0.976 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-21 14:57:44
- {'step': 100, 'loss': 0.109, 'accuracy': 0.969}
- {'step': 200, 'loss': 0.127, 'accuracy': 0.967}
- {'step': 300, 'loss': 0.155, 'accuracy': 0.961}
- {'step': 400, 'loss': 0.181, 'accuracy': 0.956}
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 3 | 0.179 | 0.956 | 0.122 | 0.971 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-21 14:58:13
- Finished Training...
- #Example 5-2 (continued): the data-preparation cell is identical to the earlier ones and is omitted here.
-
-
-
-
- #Example 5-2-3-b: class style -- torchkeras.LightModel
- import torchkeras
- import torchmetrics
- import pytorch_lightning as pl
- 
- class CnnNet(nn.Module):
-     def __init__(self):
-         super().__init__()
- 
-         self.layers = nn.ModuleList([
-             nn.Conv2d(in_channels=1,out_channels=32,kernel_size = 3),
-             nn.MaxPool2d(kernel_size = 2,stride = 2),
-             nn.Conv2d(in_channels=32,out_channels=64,kernel_size = 5),
-             nn.MaxPool2d(kernel_size = 2,stride = 2),
-             nn.Dropout2d(p = 0.1),
-             nn.AdaptiveMaxPool2d((1,1)),
-             nn.Flatten(),
-             nn.Linear(64,32),
-             nn.ReLU(),
-             nn.Linear(32,10)]
-         )
-     def forward(self,x):
-         for layer in self.layers:
-             x = layer(x)
-         return x
- 
- class Model(torchkeras.LightModel):
-     def shared_step(self,batch)->dict:
-         # note: re-creating the metric on every step resets its state each batch; it would normally be created once in __init__
-         self.train_acc = torchmetrics.Accuracy()
-         x, y = batch
-         prediction = self(x)
-         loss = nn.CrossEntropyLoss()(prediction,y)
-         preds = torch.argmax(nn.Softmax(dim=1)(prediction),dim=1).data
-         acc = self.train_acc(preds,y)
-         self.log('train_acc',self.train_acc,metric_attribute='train_acc',on_step=True,on_epoch=False)
-         dic = {"loss":loss,"acc":acc}
-         return dic
-     def configure_optimizers(self):
-         optimizer = torch.optim.Adam(self.parameters(), lr=1e-2)
-         lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer,step_size=10, gamma=0.0001)
-         return {"optimizer":optimizer,"lr_scheduler":lr_scheduler}
-
- pl.seed_everything(6666)
- net = CnnNet()
- model = Model(net)
- torchkeras.summary(model,input_shape=(1,32,32))
- print(model)
-
- out:
- Global seed set to 6666
- ----------------------------------------------------------------
- Layer (type) Output Shape Param #
- ================================================================
- Conv2d-1 [-1, 32, 30, 30] 320
- MaxPool2d-2 [-1, 32, 15, 15] 0
- Conv2d-3 [-1, 64, 11, 11] 51,264
- MaxPool2d-4 [-1, 64, 5, 5] 0
- Dropout2d-5 [-1, 64, 5, 5] 0
- AdaptiveMaxPool2d-6 [-1, 64, 1, 1] 0
- Flatten-7 [-1, 64] 0
- Linear-8 [-1, 32] 2,080
- ReLU-9 [-1, 32] 0
- Linear-10 [-1, 10] 330
- ================================================================
- Total params: 53,994
- Trainable params: 53,994
- Non-trainable params: 0
- ----------------------------------------------------------------
- Input size (MB): 0.003906
- Forward/backward pass size (MB): 0.359695
- Params size (MB): 0.205971
- Estimated Total Size (MB): 0.569572
- ----------------------------------------------------------------
- Model(
- (net): CnnNet(
- (layers): ModuleList(
- (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1))
- (1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (2): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1))
- (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (4): Dropout2d(p=0.1, inplace=False)
- (5): AdaptiveMaxPool2d(output_size=(1, 1))
- (6): Flatten(start_dim=1, end_dim=-1)
- (7): Linear(in_features=64, out_features=32, bias=True)
- (8): ReLU()
- (9): Linear(in_features=32, out_features=10, bias=True)
- )
- )
- )
-
-
-
-
- ckpt_cb = pl.callbacks.ModelCheckpoint(monitor='val_loss')
- # set gpus=0 will use cpu,
- # set gpus=1 will use 1 gpu
- # set gpus=2 will use 2gpus
- # set gpus = -1 will use all gpus
- # you can also set gpus = [0,1] to use the given gpus
- # you can even set tpu_cores=2 to use two tpus
- trainer = pl.Trainer(max_epochs=10,gpus = 0, callbacks=[ckpt_cb])
- trainer.fit(model,dl_train,dl_valid)
-
- out:
- GPU available: True, used: False
- TPU available: False, using: 0 TPU cores
- IPU available: False, using: 0 IPUs
-
- | Name | Type | Params
- --------------------------------
- 0 | net | CnnNet | 54.0 K
- --------------------------------
- 54.0 K Trainable params
- 0 Non-trainable params
- 54.0 K Total params
- 0.216 Total estimated model params size (MB)
- #the rest of the training output is omitted
The general workflow for implementing a neural network model with PyTorch is:
a) prepare the data
b) define the model
c) train the model
d) evaluate the model
e) use the model
f) save the model.
For beginners, the genuinely difficult part is actually preparing the data.
The data we typically meet in practice fall into four kinds: structured (tabular) data, image data, text data, and time-series data. We will demonstrate how to model each of the four with PyTorch, using the Titanic survival prediction problem, the cifar2 image classification problem, the imdb movie-review classification problem, and the problem of predicting the end of the domestic COVID-19 outbreak, respectively.
Here we start with the Titanic dataset; first we prepare a helper function that prints the current time.
- #Example 6-1: a modelling workflow for structured (tabular) data
- import os
- import datetime
- #print the current time
- def printbar():
-     nowtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
-     print("\n"+"=========="*8 + "%s"%nowtime)
- # a) Step 1: prepare the data
- #The goal of the Titanic dataset is to predict, from passenger information, whether a passenger survived the sinking after the ship hit the iceberg.
- #Structured data is usually preprocessed with a pandas DataFrame.
- import numpy as np
- import pandas as pd
- import matplotlib.pyplot as plt
- import torch
- from torch import nn
- from torch.utils.data import Dataset,DataLoader,TensorDataset
- dftrain_raw = pd.read_csv('./data/titanic/train.csv')
- dftest_raw = pd.read_csv('./data/titanic/test.csv')
- dftrain_raw.head(10)
-
-
Print the first 10 rows of the dataset:
|   | survived | sex | age | n_sib_sp | parch | fare | class | deck | embark_town | alone |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0 | male | 22.0 | 1 | 0 | 7.2500 | Third | unknown | Southampton | n |
| 1 | 1 | female | 38.0 | 1 | 0 | 71.2833 | First | C | Cherbourg | n |
| 2 | 1 | female | 26.0 | 0 | 0 | 7.9250 | Third | unknown | Southampton | y |
| 3 | 1 | female | 35.0 | 1 | 0 | 53.1000 | First | C | Southampton | n |
| 4 | 0 | male | 28.0 | 0 | 0 | 8.4583 | Third | unknown | Queenstown | y |
| 5 | 0 | male | 2.0 | 3 | 1 | 21.0750 | Third | unknown | Southampton | n |
| 6 | 1 | female | 27.0 | 0 | 2 | 11.1333 | Third | unknown | Southampton | n |
| 7 | 1 | female | 14.0 | 1 | 0 | 30.0708 | Second | unknown | Cherbourg | n |
| 8 | 1 | female | 4.0 | 1 | 1 | 16.7000 | Third | G | Southampton | n |
| 9 | 0 | male | 20.0 | 0 | 0 | 8.0500 | Third | unknown | Southampton | y |
Let's look at the label distribution:
- ax = dftrain_raw['survived'].value_counts().plot(kind = 'bar',figsize = (12,8),fontsize=15,rot = 0)
- ax.set_ylabel('Counts',fontsize = 15)
- ax.set_xlabel('survived',fontsize = 15)
- plt.show()
- #age distribution
- ax = dftrain_raw['age'].plot(kind = 'hist',bins = 20,color= 'purple',figsize = (12,8),fontsize=15)
- ax.set_ylabel('Frequency',fontsize = 15)
- ax.set_xlabel('age',fontsize = 15)
- plt.show()
- #correlation between age and the label
- ax = dftrain_raw.query('survived == 0')['age'].plot(kind = 'density',figsize = (12,8),fontsize=15)
- dftrain_raw.query('survived == 1')['age'].plot(kind = 'density',figsize = (12,8),fontsize=15)
- ax.legend(['survived==0','survived==1'],fontsize = 12)
- ax.set_ylabel('Density',fontsize = 15)
- ax.set_xlabel('age',fontsize = 15)
- plt.show()
- def preprocessing(dfdata):
-     dfresult= pd.DataFrame()
-     #Pclass: one-hot encode the ticket class
-     dfPclass = pd.get_dummies(dfdata['class'])
-     dfPclass.columns = ['class_' +str(x) for x in dfPclass.columns ]
-     dfresult = pd.concat([dfresult,dfPclass],axis = 1)
-     #Sex
-     dfSex = pd.get_dummies(dfdata['sex'])
-     dfresult = pd.concat([dfresult,dfSex],axis = 1)
-     #Age: fill missing values with 0 and keep a missing-value indicator column
-     dfresult['age'] = dfdata['age'].fillna(0)
-     dfresult['Age_null'] = pd.isna(dfdata['age']).astype('int32')
-     #SibSp,Parch,Fare
-     dfresult['SibSp'] = dfdata['n_siblings_spouses']
-     dfresult['Parch'] = dfdata['parch']
-     dfresult['Fare'] = dfdata['fare']
-     #Cabin: only a missing-value indicator is kept
-     dfresult['Cabin_null'] = pd.isna(dfdata['deck']).astype('int32')
-     #Embarked: one-hot encode, with an extra column for missing values
-     dfEmbarked = pd.get_dummies(dfdata['embark_town'],dummy_na=True)
-     dfEmbarked.columns = ['Embarked_' + str(x) for x in dfEmbarked.columns]
-     dfresult = pd.concat([dfresult,dfEmbarked],axis = 1)
-     return(dfresult)
- x_train = preprocessing(dftrain_raw).values
- y_train = dftrain_raw[['survived']].values
- x_test = preprocessing(dftest_raw).values
- y_test = dftest_raw[['survived']].values
- print("x_train.shape =", x_train.shape )
- print("x_test.shape =", x_test.shape)
- print("y_train.shape =", y_train.shape )
- print("y_test.shape =", y_test.shape )
- 
- dl_train = DataLoader(TensorDataset(torch.tensor(x_train).float(),torch.tensor(y_train).float()), shuffle = True, batch_size = 8)
- dl_valid = DataLoader(TensorDataset(torch.tensor(x_test).float(),torch.tensor(y_test).float()), shuffle = False, batch_size = 8)
-
- out:
- x_train.shape = (627, 16)
- x_test.shape = (264, 16)
- y_train.shape = (627, 1)
- y_test.shape = (264, 1)
- # b) Step 2: define the model
- def create_net():
-     net = nn.Sequential()
-     net.add_module("linear1",nn.Linear(16,20))
-     net.add_module("relu1",nn.ReLU())
-     net.add_module("linear2",nn.Linear(20,16))
-     net.add_module("relu2",nn.ReLU())
-     net.add_module("linear3",nn.Linear(16,1))
-     net.add_module("sigmoid",nn.Sigmoid())
-     return net
- net = create_net()
- print(net)
- 
- from torchkeras import summary
- summary(net,input_shape=(16,))
-
- out:
- Sequential(
- (linear1): Linear(in_features=16, out_features=20, bias=True)
- (relu1): ReLU()
- (linear2): Linear(in_features=20, out_features=16, bias=True)
- (relu2): ReLU()
- (linear3): Linear(in_features=16, out_features=1, bias=True)
- (sigmoid): Sigmoid()
- )
- ----------------------------------------------------------------
- Layer (type) Output Shape Param #
- ================================================================
- Linear-1 [-1, 20] 340
- ReLU-2 [-1, 20] 0
- Linear-3 [-1, 16] 336
- ReLU-4 [-1, 16] 0
- Linear-5 [-1, 1] 17
- Sigmoid-6 [-1, 1] 0
- ================================================================
- Total params: 693
- Trainable params: 693
- Non-trainable params: 0
- ----------------------------------------------------------------
- Input size (MB): 0.000061
- Forward/backward pass size (MB): 0.000565
- Params size (MB): 0.002644
- Estimated Total Size (MB): 0.003269
- ----------------------------------------------------------------
- # c) Step 3: train the model
- from sklearn.metrics import accuracy_score
- loss_func = nn.BCELoss()
- optimizer = torch.optim.Adam(params=net.parameters(),lr = 0.01)
- metric_func = lambda y_pred,y_true:accuracy_score(y_true.data.numpy(),y_pred.data.numpy()>0.5)
- metric_name = "accuracy"
- 
- epochs = 50
- log_step_freq = 50
- dfhistory = pd.DataFrame(columns = ["epoch","loss",metric_name,"val_loss","val_"+metric_name])
- print("Start Training...")
- nowtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
- print("=========="*8 + "%s"%nowtime)
- for epoch in range(1,epochs+1):
-     # 1. Training loop -------------------------------------------------
-     net.train()
-     loss_sum = 0.0
-     metric_sum = 0.0
-     step = 1
-     for step, (features,labels) in enumerate(dl_train, 1):
-         # zero the gradients
-         optimizer.zero_grad()
-         # forward pass to compute the loss
-         predictions = net(features)
-         loss = loss_func(predictions,labels)
-         metric = metric_func(predictions,labels)
-         # backward pass to compute the gradients
-         loss.backward()
-         optimizer.step()
-         # accumulate and print batch-level logs
-         loss_sum += loss.item()
-         metric_sum += metric.item()
-         if step%log_step_freq == 0:
-             print(("[step = %d] loss: %.3f, "+metric_name+": %.3f") % (step, loss_sum/step, metric_sum/step))
-     # 2. Validation loop -----------------------------------------------
-     net.eval()
-     val_loss_sum = 0.0
-     val_metric_sum = 0.0
-     val_step = 1
-     for val_step, (features,labels) in enumerate(dl_valid, 1):
-         predictions = net(features)
-         val_loss = loss_func(predictions,labels)
-         val_metric = metric_func(predictions,labels)
-         val_loss_sum += val_loss.item()
-         val_metric_sum += val_metric.item()
-     # 3. Record the logs -----------------------------------------------
-     info = (epoch, loss_sum/step, metric_sum/step,val_loss_sum/val_step, val_metric_sum/val_step)
-     dfhistory.loc[epoch-1] = info
-     # print epoch-level log
-     print(("\nEPOCH = %d, loss = %.3f,"+ metric_name + \
-            " = %.3f, val_loss = %.3f, "+"val_"+ metric_name+" = %.3f")
-           %info)
-     nowtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
-     print("\n"+"=========="*8 + "%s"%nowtime)
- print('Finished Training...')
-
-
- out:
- #training output omitted
- # d) Step 4: evaluate the model
- import matplotlib.pyplot as plt
- def plot_metric(dfhistory, metric):
-     train_metrics = dfhistory[metric]
-     val_metrics = dfhistory['val_'+metric]
-     epochs = range(1, len(train_metrics) + 1)
-     plt.plot(epochs, train_metrics, 'bo--')
-     plt.plot(epochs, val_metrics, 'ro-')
-     plt.title('Training and validation '+ metric)
-     plt.xlabel("Epochs")
-     plt.ylabel(metric)
-     plt.legend(["train_"+metric, 'val_'+metric])
-     plt.show()
- 
- plot_metric(dfhistory,"loss")
- plot_metric(dfhistory,"accuracy")
-
- print('USE model')
-
-
- # e) Step 5: use the model
- #predicted probabilities
- y_pred_probs = net(torch.tensor(x_test[0:10]).float()).data
- print('predicted probabilities:',y_pred_probs)
- 
- #predicted classes
- y_pred = torch.where(y_pred_probs>0.5,torch.ones_like(y_pred_probs),torch.zeros_like(y_pred_probs))
- print('predicted classes:',y_pred)
-
- out:
- predicted probabilities: tensor([[0.1144],
- [0.1724],
- [0.8196],
- [0.7453],
- [0.0762],
- [0.8451],
- [0.2092],
- [0.1089],
- [0.4448],
- [0.8436]])
- predicted classes: tensor([[0.],
- [0.],
- [1.],
- [1.],
- [0.],
- [1.],
- [0.],
- [0.],
- [0.],
- [1.]])
- # f) Step 6: save the model
- '''
- print('parameter keys:',net.state_dict().keys())
- '''
- # Save only the model parameters (recommended)
- torch.save(net.state_dict(), "./data/6-1_model_parameter.pkl")
- net_clone = create_net()
- net_clone.load_state_dict(torch.load("./data/6-1_model_parameter.pkl"))
- net_clone.forward(torch.tensor(x_test[0:10]).float()).data
-
-
-
- #Save the whole model (not recommended)
- '''
- torch.save(net, './data/6-1_model_parameter.pkl')
- net_loaded = torch.load('./data/6-1_model_parameter.pkl')
- net_loaded(torch.tensor(x_test[0:10]).float()).data
- '''
There are usually two ways to build an image data pipeline in PyTorch.
The first is to read the images with datasets.ImageFolder from torchvision and load them in parallel with DataLoader. The second is to implement your own reading logic by subclassing torch.utils.data.Dataset and again load it in parallel with DataLoader.
The second approach is the general-purpose way to read user-defined datasets; it works for image datasets as well as text datasets (a minimal sketch follows below).
Below we demonstrate the image modelling workflow with the first approach.
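For reference, here is a minimal sketch of the second approach (my own illustration; the folder layout and class name are assumptions, following the same one-subfolder-per-class convention that ImageFolder expects):
- #Sketch: a user-defined Dataset (illustrative, not part of the original example)
- import os
- import torch
- from PIL import Image
- from torch.utils.data import Dataset
- from torchvision import transforms
- 
- class MyImageDataset(Dataset):
-     def __init__(self, root, transform=None):
-         self.classes = sorted(os.listdir(root))
-         self.samples = [(os.path.join(root,c,f), i)
-                         for i,c in enumerate(self.classes)
-                         for f in os.listdir(os.path.join(root,c))]
-         self.transform = transform
-     def __len__(self):
-         return len(self.samples)
-     def __getitem__(self, idx):
-         path,label = self.samples[idx]
-         img = Image.open(path).convert("RGB")
-         if self.transform is not None:
-             img = self.transform(img)
-         return img, torch.tensor([label]).float()
- 
- #ds = MyImageDataset("data/cifar10/train/", transform=transforms.ToTensor())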
- #Example 6-2: a modelling workflow for image data
- import os
- import datetime
- #print the current time
- def printbar():
-     nowtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
-     print("\n"+"=========="*8 + "%s"%nowtime)
-
-
- # a) Step 1: prepare the data
- import torch
- from torch import nn
- from torch.utils.data import Dataset,DataLoader
- from torchvision import transforms,datasets
- transform_train = transforms.Compose(
-     [transforms.ToTensor()])
- transform_valid = transforms.Compose(
-     [transforms.ToTensor()])
- ds_train = datasets.ImageFolder("data/cifar10/train/",
-             transform = transform_train,target_transform= lambda t:torch.tensor([t]).float())
- ds_valid = datasets.ImageFolder("data/cifar10/test/",
-             transform = transform_train,target_transform= lambda t:torch.tensor([t]).float())
- print(ds_train.class_to_idx)
- 
- out:
- {'bird': 0, 'car': 1, 'cat': 2, 'deer': 3, 'dog': 4, 'frog': 5, 'horse': 6, 'plane': 7, 'ship': 8, 'truck': 9}
- 
- dl_train = DataLoader(ds_train,batch_size = 50,shuffle = True,num_workers=0) #num_workers=0 because the lambdas above cannot be pickled for worker subprocesses, unless everything is moved under a '__main__' guard
- dl_valid = DataLoader(ds_valid,batch_size = 50,shuffle = True,num_workers=0)
-
- %matplotlib inline
- %config InlineBackend.figure_format = 'svg'
- #View a few samples
- from matplotlib import pyplot as plt
- plt.figure(figsize=(8,8))
- for i in range(9):
-     img,label = ds_train[i]
-     img = img.permute(1,2,0)
-     ax=plt.subplot(3,3,i+1)
-     ax.imshow(img.numpy())
-     ax.set_title("label = %d"%label.item())
-     ax.set_xticks([])
-     ax.set_yticks([])
- plt.show()
- 
- out:
- image output omitted
-
- # PyTorch's default image tensor layout is (Batch, Channel, Height, Width)
- for x,y in dl_train:
-     print(x.shape,y.shape)
-     break
- #num_workers had to be 0 above, otherwise this loop would raise a pickling error; see the sketch below for a workaround
-
- out:
- torch.Size([50, 3, 32, 32]) torch.Size([50, 1])
-
-
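The pickling limitation comes from the lambda used as target_transform. A module-level function can be pickled, so a variant like the following sketch (my own workaround, with an illustrative function name, reusing transform_train and the imports from the cell above) allows num_workers > 0; on Windows the DataLoader loop must additionally sit under an if __name__ == '__main__': guard:
- #Sketch: replace the lambda with a top-level function so worker processes can pickle it
- def target_to_tensor(t):
-     return torch.tensor([t]).float()
- 
- ds_train2 = datasets.ImageFolder("data/cifar10/train/",
-             transform = transform_train, target_transform = target_to_tensor)
- dl_train2 = DataLoader(ds_train2, batch_size = 50, shuffle = True, num_workers = 4)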
-
- # b) Step 2: define the model
- 
- import torch.nn.functional as F
- class Net(nn.Module):
-     def __init__(self):
-         super(Net, self).__init__()
-         self.conv1 = nn.Conv2d(in_channels=3,out_channels=6,kernel_size = 5)
-         self.pool = nn.MaxPool2d(kernel_size = 2,stride = 2)
-         self.conv2 = nn.Conv2d(in_channels=6,out_channels=16,kernel_size = 5)
-         self.dropout = nn.Dropout2d(p = 0.1)
-         #self.adaptive_pool = nn.AdaptiveMaxPool2d((1,1))
-         #self.flatten = nn.Flatten()
-         self.linear1 = nn.Linear(16*5*5,120)
-         self.relu = nn.ReLU()
-         self.linear2 = nn.Linear(120,84)
-         self.linear3 = nn.Linear(84,10)
-         self.softmax = nn.Softmax(dim=0)  # unused below; for batched logits dim=1 would be the right choice
- 
-     def forward(self,x):
-         x = self.conv1(x)
-         x = self.relu(x)
-         x = self.pool(x)
-         x = self.conv2(x)
-         x = self.relu(x)
-         x = self.pool(x)
-         x = x.view(-1,16*5*5)
-         x = self.dropout(x)
- 
-         #x = self.adaptive_pool(x)
-         #x = self.flatten(x)
-         x = self.linear1(x)
-         x = self.relu(x)
-         x = self.linear2(x)
-         x = self.relu(x)
-         x = self.linear3(x)
-         #y = self.softmax(x)
-         return x
- net = Net()
- print(net)
-
- out:
- Net(
- (conv1): Conv2d(3, 6, kernel_size=(5, 5), stride=(1, 1))
- (pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
- (dropout): Dropout2d(p=0.1, inplace=False)
- (linear1): Linear(in_features=400, out_features=120, bias=True)
- (relu): ReLU()
- (linear2): Linear(in_features=120, out_features=84, bias=True)
- (linear3): Linear(in_features=84, out_features=10, bias=True)
- (softmax): Softmax(dim=0)
- )
-
-
-
-
- import torchkeras
- torchkeras.summary(net,input_shape= (3,32,32))
-
- out:
- ----------------------------------------------------------------
- Layer (type) Output Shape Param #
- ================================================================
- Conv2d-1 [-1, 6, 28, 28] 456
- ReLU-2 [-1, 6, 28, 28] 0
- MaxPool2d-3 [-1, 6, 14, 14] 0
- Conv2d-4 [-1, 16, 10, 10] 2,416
- ReLU-5 [-1, 16, 10, 10] 0
- MaxPool2d-6 [-1, 16, 5, 5] 0
- Dropout2d-7 [-1, 400] 0
- Linear-8 [-1, 120] 48,120
- ReLU-9 [-1, 120] 0
- Linear-10 [-1, 84] 10,164
- ReLU-11 [-1, 84] 0
- Linear-12 [-1, 10] 850
- ================================================================
- Total params: 62,006
- Trainable params: 62,006
- Non-trainable params: 0
- ----------------------------------------------------------------
- Input size (MB): 0.011719
- Forward/backward pass size (MB): 0.114456
- Params size (MB): 0.236534
- Estimated Total Size (MB): 0.362709
- ----------------------------------------------------------------
-
-
-
-
- #c) Step 3: train the model
- import pandas as pd
- import numpy as np
- from sklearn.metrics import roc_auc_score
- from sklearn.metrics import accuracy_score
- from sklearn.metrics import recall_score
- from sklearn.metrics import precision_score
- 
- model = net
- model.optimizer = torch.optim.SGD(model.parameters(),lr = 0.01)
- #model.loss_func = torch.nn.BCELoss() # this class implements binary cross-entropy
- model.loss_func = torch.nn.CrossEntropyLoss() #this cross-entropy handles the multi-class case
- 
- #y_pred = model.predict(X_test_data)
- #model.metric_func = lambda y_pred,y_true:roc_auc_score(y_true.data.numpy(),y_pred.data.numpy(),multi_class='ovr') #multi_class='ovo' would need to be added
- #model.metric_func = lambda y_pred,y_true:precision_score(y_true.data.numpy(),np.argmax(y_pred.data.numpy(), axis=1),average='weighted')
- model.metric_func = lambda y_pred,y_true:accuracy_score(y_true.data.numpy(),np.argmax(y_pred.data.numpy(), axis=1))
- model.metric_name = "acc"
- def train_step(model,features,labels):
-     # training mode: the dropout layer is active
-     model.train()
-     # zero the gradients
-     model.optimizer.zero_grad()
-     # forward pass to compute the loss
-     pred = model(features)
-     labels = labels.squeeze().long()
-     loss = model.loss_func(pred,labels)
-     metric = model.metric_func(pred,labels)
-     # backward pass to compute the gradients
-     loss.backward()
-     model.optimizer.step()
-     return loss.item(),metric.item()
- def valid_step(model,features,labels):
-     # eval mode: the dropout layer is inactive
-     model.eval()
-     predictions = model(features)
-     labels = labels.squeeze().long()
-     loss = model.loss_func(predictions,labels)
-     metric = model.metric_func(predictions,labels)
-     return loss.item(), metric.item()
- # Sanity-check train_step on one batch (loss near ln(10) ≈ 2.30 for an untrained 10-class model)
- features,labels = next(iter(dl_train))
- train_step(model,features,labels)
-
- out:
- (2.302344560623169, 0.08)
-
-
-
-
- import warnings
- warnings.filterwarnings("ignore")
- 
- def train_model(model,epochs,dl_train,dl_valid,log_step_freq):
-     metric_name = model.metric_name
-     dfhistory = pd.DataFrame(columns = ["epoch","loss",metric_name,"val_loss","val_"+metric_name])
-     print("Start Training...")
-     nowtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
-     print("=========="*8 + "%s"%nowtime)
-     for epoch in range(1,epochs+1):
-         # 1. Training loop -------------------------------------------------
-         loss_sum = 0.0
-         metric_sum = 0.0
-         step = 1
-         for step, (features,labels) in enumerate(dl_train, 1):
-             loss,metric = train_step(model,features,labels)
-             # accumulate and print batch-level logs
-             loss_sum += loss
-             metric_sum += metric
-             if step%log_step_freq == 0:
-                 print(("[step = %d] loss: %.3f, "+metric_name+": %.3f") % (step, loss_sum/step, metric_sum/step))
-         # 2. Validation loop -----------------------------------------------
-         val_loss_sum = 0.0
-         val_metric_sum = 0.0
-         val_step = 1
-         for val_step, (features,labels) in enumerate(dl_valid, 1):
-             val_loss,val_metric = valid_step(model,features,labels)
-             val_loss_sum += val_loss
-             val_metric_sum += val_metric
-         # 3. Record the logs -----------------------------------------------
-         info = (epoch, loss_sum/step, metric_sum/step,
-                 val_loss_sum/val_step, val_metric_sum/val_step)
-         dfhistory.loc[epoch-1] = info
-         # print epoch-level log
-         print(("\nEPOCH = %d, loss = %.3f,"+ metric_name + \
-                " = %.3f, val_loss = %.3f, "+"val_"+ metric_name+" = %.3f")
-               %info)
-         nowtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
-         print("\n"+"=========="*8 + "%s"%nowtime)
-     print('Finished Training...')
-     return dfhistory
- 
- epochs = 80
- dfhistory = train_model(model,epochs,dl_train,dl_valid,log_step_freq = 50)
-
- out:
- #training output omitted
- 
- # d) Step 4: evaluate the model
- print(dfhistory)
-
- out:
-
- epoch loss acc val_loss val_acc
- 0 1.0 2.300890 0.12294 2.295850 0.1349
- 1 2.0 2.229730 0.15780 2.082908 0.2141
- 2 3.0 2.029092 0.24178 1.965499 0.2808
- 3 4.0 1.947292 0.28162 1.884967 0.3101
- 4 5.0 1.852998 0.32598 1.753345 0.3635
- ... ... ... ... ... ...
- 75 76.0 0.691221 0.75330 1.114598 0.6435
- 76 77.0 0.682672 0.75696 1.107986 0.6433
- 77 78.0 0.678343 0.75846 1.126077 0.6427
- 78 79.0 0.674721 0.75902 1.108517 0.6399
- 79 80.0 0.671216 0.76088 1.128913 0.6426
- 80 rows × 5 columns
-
-
-
- %matplotlib inline
- %config InlineBackend.figure_format = 'svg'
- import matplotlib.pyplot as plt
- def plot_metric(dfhistory, metric):
-     train_metrics = dfhistory[metric]
-     val_metrics = dfhistory['val_'+metric]
-     epochs = range(1, len(train_metrics) + 1)
-     plt.plot(epochs, train_metrics, 'bo--')
-     plt.plot(epochs, val_metrics, 'ro-')
-     plt.title('Training and validation '+ metric)
-     plt.xlabel("Epochs")
-     plt.ylabel(metric)
-     plt.legend(["train_"+metric, 'val_'+metric])
-     plt.show()
- 
- plot_metric(dfhistory,"loss")
- 
- out:
- #plot omitted
- # e) Step 5: use the model
- def predict(model,dl):
-     model.eval()
-     result = torch.cat([model.forward(t[0]) for t in dl])
-     return(result.data)
- #predicted scores (raw logits here, since forward returns unnormalized outputs)
- y_pred_probs = predict(model,dl_valid)
- print(y_pred_probs)
- 
- #predicted classes
- def transclass(inputtensor):
-     classes={0:'bird',1:'car',2:'cat',3:'deer',4:'dog',5:'frog',6:'horse',7:'plane',8:'ship',9:'truck'}
-     numlist=inputtensor.numpy()
-     classlist=[]
-     for n in numlist:
-         classlist.append(classes[n])
-     return classlist
- 
- values, predictions = torch.max(y_pred_probs.data, 1)
- print(transclass(predictions))
-
-
- out:
- tensor([[ 1.8810, -6.9985, 5.2611, ..., -1.9135, -3.2537, -2.1470],
- [-7.4959, 17.1518, -1.6427, ..., -0.1497, -0.7619, 11.8685],
- [ 0.1047, 1.8250, -3.2704, ..., 6.7653, 1.8235, 3.0359],
- ...,
- [ 0.9541, 2.4879, -1.7344, ..., 1.5825, 1.0240, -1.2932],
- [ 1.8352, -2.5948, 3.4929, ..., -1.1408, 2.0713, 4.4101],
- [ 1.8674, 1.3027, 2.9357, ..., -4.3468, -0.2972, -1.9376]])
- ['cat', 'car', 'plane', 'dog', 'dog', 'frog', 'deer', 'dog', 'horse', 'plane', 'frog', 'frog', 'dog', 'cat', 'dog', 'ship', 'cat', 'bird', 'dog', 'dog', 'cat', 'horse', 'dog', 'car', 'dog', 'frog', 'frog', 'car', 'dog', 'frog', 'cat', 'bird', 'ship', 'plane', 'horse', 'dog', 'cat', 'car', 'plane', 'ship', 'horse', 'truck', 'plane', 'car', 'ship', ...,
- 'dog', 'ship', 'frog', 'horse', 'horse', 'truck', 'horse', 'bird', 'dog', 'ship', 'frog', 'frog', 'horse', 'cat', 'frog', 'dog', 'horse', 'plane', 'deer', 'ship', 'plane', 'plane', 'truck', 'truck', 'bird', 'truck', 'truck', 'frog', 'cat', 'dog', 'bird', 'horse', 'frog', 'plane', 'truck', 'cat', 'truck', 'dog', 'deer', 'horse', 'plane', 'bird', 'bird', 'horse', 'deer', 'dog', 'plane', 'car', 'ship', 'car', 'car', 'plane', 'truck', 'bird', 'deer', 'cat', 'car', 'truck', 'cat']
- # f) Step 6: save the model
- print(model.state_dict().keys())
- 
- # save the model parameters
- torch.save(model.state_dict(), "./data/6-2_model_parameter.pkl")
- net_clone = Net()
- net_clone.load_state_dict(torch.load("./data/6-2_model_parameter.pkl"))
- predict(net_clone,dl_valid)
-
- out:
- odict_keys(['conv1.weight', 'conv1.bias', 'conv2.weight', 'conv2.bias', 'linear1.weight', 'linear1.bias', 'linear2.weight', 'linear2.bias', 'linear3.weight', 'linear3.bias'])
- tensor([[ 2.6804, -6.3126, 3.4907, ..., -2.9137, -2.9597, -2.6070],
- [-0.4212, -4.3194, 2.4341, ..., -3.3696, -4.6787, -4.0673],
- [-1.9623, 1.0195, 2.6026, ..., 3.6996, 1.7376, -0.9068],
- ...,
- [ 4.3411, -4.9298, 0.9338, ..., 2.1229, 2.3299, -2.9601],
- [ 0.9571, -4.2395, -0.1160, ..., -0.1015, -5.6702, 0.6289],
- [ 0.2636, -1.0872, -1.6660, ..., 4.7971, 8.0454, -1.1111]])
Preprocessing text data is fairly tedious: it involves Chinese word segmentation (done with jieba in this example), building a vocabulary, converting tokens to indices, padding sequences, building the data pipeline, and so on.
In torch, text is usually preprocessed with torchtext or a custom Dataset. torchtext is very powerful and can build datasets for NLP tasks such as text classification, sequence tagging, question answering, and machine translation.
An overview of the common torchtext APIs (note that in recent torchtext releases these classes live under torchtext.legacy, as the code below reflects):
torchtext.data.Example: represents one sample, holding the data and the label.
torchtext.vocab.Vocab: the vocabulary; pre-trained word vectors can be loaded into it.
torchtext.data.Dataset: the dataset class, whose __getitem__ returns an Example instance; torchtext.data.TabularDataset is one of its subclasses.
torchtext.data.Field: defines how a field (text field, label field) is processed, both the preprocessing applied when creating Examples and the operations applied at batching time.
torchtext.data.Iterator: an iterator used to generate batches. torchtext.datasets: contains common ready-made datasets.
Below we demonstrate the text modelling workflow on Douban movie reviews:
- #Example 6-3: a modelling workflow for text data
- import os
- import datetime
- #print the current time
- def printbar():
-     nowtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
-     print("\n"+"=========="*8 + "%s"%nowtime)
- 
- # a) Step 1: prepare the data
- import torch
- import jieba
- import string,re
- import torchtext
- #from torchtext.legacy.data import Field,TabularDataset,Iterator,BucketIterator
- 
- MAX_WORDS = 10000 # keep only the 10000 most frequent words
- MAX_LEN = 200 # keep 200 tokens per sample
- BATCH_SIZE = 20
- #tokenization function
- def clean_text(text):
-     #strip unwanted punctuation
-     bd='[’!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~]+,。!?“”《》:、.'
-     for i in bd:
-         text=text.replace(i,'') #remove punctuation by string replacement
-     #segment the text with jieba
-     fenci=jieba.lcut(text)
-     return fenci
- #map low-frequency words to index 0
- def filterLowFreqWords(arr,vocab):
-     arr = [[x if x<MAX_WORDS else 0 for x in example] for example in arr]
-     return arr
- #1. Define the preprocessing for each field
- TEXT = torchtext.legacy.data.Field(sequential=True, tokenize=clean_text, lower=True,fix_length=MAX_LEN,postprocessing = filterLowFreqWords)
- LABEL = torchtext.legacy.data.Field(sequential=False, use_vocab=False)
- #2. Build the tabular dataset
- #torchtext.data.TabularDataset can read csv, tsv, json and other formats
- ds_train, ds_test = torchtext.legacy.data.TabularDataset.splits(
-     path='./data/douban', train='train.csv',test='test.csv', format='csv',fields=[('label', LABEL), ('text', TEXT)],skip_header = True)
- 
- #Douban ratings run from 1 to 5; convert them to binary labels: above 3 counts as a positive review, 3 or below as a negative one
- for dltrain in ds_train:
-     if int(dltrain.label)>3:
-         dltrain.label=1
-     else:
-         dltrain.label=0
- 
- for dltest in ds_test:
-     if int(dltest.label)>3:
-         dltest.label=1
-     else:
-         dltest.label=0
- 
- #3. Build the vocabulary
- TEXT.build_vocab(ds_train)
- #4. Build the data pipeline iterators
- train_iter, test_iter = torchtext.legacy.data.Iterator.splits( (ds_train, ds_test), sort_within_batch=True,sort_key=lambda x:len(x.text), batch_sizes=(BATCH_SIZE,BATCH_SIZE))
- #inspect one example
- 
- print(ds_train[0].text)
- print(ds_train[0].label)
-
- out:
- ['小', '的', '时候', '完全', '不', '爱看', '太', '暴力', '太', '黑社会', '了']
- 0
-
-
-
-
- # inspect the vocabulary
- print(len(TEXT.vocab))
- #itos: index to string
- print(TEXT.vocab.itos[0])
- print(TEXT.vocab.itos[1])
- #stoi: string to index
- print(TEXT.vocab.stoi['<unk>']) #unknown token
- print(TEXT.vocab.stoi['<pad>']) #padding token
- #freqs: word frequencies
- print(TEXT.vocab.freqs['<unk>'])
- print('count of "好看":',TEXT.vocab.freqs['好看'])
- print('count of "电影":',TEXT.vocab.freqs['电影'])
- print('count of "导演":',TEXT.vocab.freqs['导演'])
- # inspect the data pipeline
- # caveat: dimension 0 of text is the sequence length, not the batch
-
- out:
- 77428
- <unk>
- <pad>
- 0
- 1
- 0
- "好看"的数量: 1505
- "电影"的数量 6122
- "导演"的数量 1567
-
-
-
-
- # Wrap the pipeline so it yields (features, label) pairs like torch.utils.data.DataLoader
- class DataLoader:
-     def __init__(self,data_iter):
-         self.data_iter = data_iter
-         self.length = len(data_iter)
-     def __len__(self):
-         return self.length
-     def __iter__(self):
-         # note: transpose features to batch-first here, and adjust the label's shape and dtype
-         for batch in self.data_iter:
-             yield(torch.transpose(batch.text,0,1),torch.unsqueeze(batch.label.float(),dim = 1))
- 
- dl_train = DataLoader(train_iter)
- dl_test = DataLoader(test_iter)
- import torch
- from torch import nn
- import torchkeras
-
- torch.random.seed()
- import torch
- from torch import nn
-
- # b) Step 2: define the model
- class Net(torchkeras.Model):
-     def __init__(self):
-         super(Net, self).__init__()
-         #with padding_idx set, the embedding of the padding token stays a zero vector throughout training
-         self.embedding = nn.Embedding(num_embeddings = MAX_WORDS,embedding_dim = 3,padding_idx = 1)
-         self.conv = nn.Sequential()
-         self.conv.add_module("conv_1",nn.Conv1d(in_channels = 3,out_channels = 16,kernel_size = 5))
-         self.conv.add_module("pool_1",nn.MaxPool1d(kernel_size = 2))
-         self.conv.add_module("relu_1",nn.ReLU())
-         self.conv.add_module("conv_2",nn.Conv1d(in_channels = 16,out_channels = 128,kernel_size = 2))
-         self.conv.add_module("pool_2",nn.MaxPool1d(kernel_size = 2))
-         self.conv.add_module("relu_2",nn.ReLU())
-         self.dense = nn.Sequential()
-         self.dense.add_module("flatten",nn.Flatten())
-         self.dense.add_module("linear",nn.Linear(6144,1))
-         self.dense.add_module("sigmoid",nn.Sigmoid())
-     def forward(self,x):
-         x = self.embedding(x).transpose(1,2)
-         x = self.conv(x)
-         y = self.dense(x)
-         return y
- model = Net()
- print(model)
- model.summary(input_shape = (200,),input_dtype = torch.LongTensor)
-
- out:
- Net(
- (embedding): Embedding(10000, 3, padding_idx=1)
- (conv): Sequential(
- (conv_1): Conv1d(3, 16, kernel_size=(5,), stride=(1,))
- (pool_1): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (relu_1): ReLU()
- (conv_2): Conv1d(16, 128, kernel_size=(2,), stride=(1,))
- (pool_2): MaxPool1d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (relu_2): ReLU()
- )
- (dense): Sequential(
- (flatten): Flatten(start_dim=1, end_dim=-1)
- (linear): Linear(in_features=6144, out_features=1, bias=True)
- (sigmoid): Sigmoid()
- )
- )
- ----------------------------------------------------------------
- Layer (type) Output Shape Param #
- ================================================================
- Embedding-1 [-1, 200, 3] 30,000
- Conv1d-2 [-1, 16, 196] 256
- MaxPool1d-3 [-1, 16, 98] 0
- ReLU-4 [-1, 16, 98] 0
- Conv1d-5 [-1, 128, 97] 4,224
- MaxPool1d-6 [-1, 128, 48] 0
- ReLU-7 [-1, 128, 48] 0
- Flatten-8 [-1, 6144] 0
- Linear-9 [-1, 1] 6,145
- Sigmoid-10 [-1, 1] 0
- ================================================================
- Total params: 40,625
- Trainable params: 40,625
- Non-trainable params: 0
- ----------------------------------------------------------------
- Input size (MB): 0.000763
- Forward/backward pass size (MB): 0.287796
- Params size (MB): 0.154972
- Estimated Total Size (MB): 0.443531
- ----------------------------------------------------------------
- # c) Step 3: train the model
- # accuracy metric
- def accuracy(y_pred,y_true):
- y_pred = torch.where(y_pred>0.5,torch.ones_like(y_pred,dtype = torch.float32),torch.zeros_like(y_pred,dtype = torch.float32))
- acc = torch.mean(1-torch.abs(y_true-y_pred))
- return acc
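- # Quick check of the metric (added for illustration): predictions 0.8 and 0.3
- # against labels 1 and 0 are both correct, so accuracy should be 1.0
- # print(accuracy(torch.tensor([[0.8],[0.3]]),torch.tensor([[1.0],[0.0]]))) # tensor(1.)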
- model.compile(loss_func = nn.BCELoss(),optimizer=torch.optim.Adagrad(model.parameters(),lr = 0.02),metrics_dict={"accuracy":accuracy})
- # training sometimes fails to converge; if so, just rerun it a few times
- dfhistory = model.fit(30,dl_train,dl_val=dl_test,log_step_freq= 200)
-
- out:
- #training log omitted
- ......
-
-
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 30 | 0.509 | 0.741 | 0.672 | 0.635 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-26 12:44:45
- Finished Training...
- # d) Step 4: evaluate the model
- %matplotlib inline
- %config InlineBackend.figure_format = 'svg'
- import matplotlib.pyplot as plt
- def plot_metric(dfhistory, metric):
- train_metrics = dfhistory[metric]
- val_metrics = dfhistory['val_'+metric]
- epochs = range(1, len(train_metrics) + 1)
- plt.plot(epochs, train_metrics, 'bo--')
- plt.plot(epochs, val_metrics, 'ro-')
- plt.title('Training and validation '+ metric)
- plt.xlabel("Epochs")
- plt.ylabel(metric)
- plt.legend(["train_"+metric, 'val_'+metric])
- plt.show()
- plot_metric(dfhistory,"loss")
- plot_metric(dfhistory,"accuracy")
- # evaluate
- model.evaluate(dl_test)
-
- out:
- {'val_loss': 0.6723344844852034, 'val_accuracy': 0.6349082730424244}
- # e) Step 5: use the model
- model.predict(dl_test)
-
- out:
- tensor([[0.2567],
- [0.2567],
- [0.2567],
- ...,
- [0.8605],
- [0.7361],
- [0.3952]])
- # f) Step 6: save the model
- print(model.state_dict().keys())
- # save the model parameters
- torch.save(model.state_dict(), "./data/6-3_model_parameter.pkl")
- model_clone = Net()
- model_clone.load_state_dict(torch.load("./data/6-3_model_parameter.pkl"))
- model_clone.compile(loss_func = nn.BCELoss(),optimizer= torch.optim.Adagrad(model_clone.parameters(),lr = 0.02),metrics_dict={"accuracy":accuracy})
- # evaluate the cloned model
- model_clone.evaluate(dl_test)
-
- out:
- odict_keys(['embedding.weight', 'conv.conv_1.weight', 'conv.conv_1.bias', 'conv.conv_2.weight', 'conv.conv_2.bias', 'dense.linear.weight', 'dense.linear.bias'])
-
- {'val_loss': 0.6818984377266586, 'val_accuracy': 0.6283229489038048}
Time series data are observations collected at successive points in time; they describe how some quantity or phenomenon evolves over time.
Time series processing is especially important for economic and financial data. Here we take the price analysis of a single stock as an example to illustrate the general workflow of time series modeling with pytorch.
- #Example 6-4: a typical workflow for modeling time series data
- import os
- import datetime
- import importlib
- import torchkeras
- #print a timestamped separator
- def printbar():
- nowtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
- print("\n"+"=========="*8 + "%s"%nowtime)
We can get the stock data we need from tushare.
Tushare is a free, open-source python package for financial data. It covers the whole pipeline from data collection and cleaning to storage for stocks and other financial instruments, giving analysts fast, tidy and varied data, so they can spend less time gathering data and more time on strategy and model research.
- #download stock data from tushare
- import tushare as ts
- import matplotlib.pyplot as plt
-
- df1 = ts.get_k_data('600104', ktype='D', start='2017-01-01', end='2022-03-25')
- #fetch data for stock 600104 from 2017-01-01 to 2022-03-25
- datapath1 = "data/stock/SH600104(20170101-20220325).csv"
- #save it to a csv file
- df1.to_csv(datapath1)
-
- # a) Step 1: prepare the data
- import numpy as np
- import pandas as pd
- import matplotlib.pyplot as plt
- %matplotlib inline
- %config InlineBackend.figure_format = 'svg'
- df = pd.read_csv("./data/stock/SH600104(20170101-20220325).csv",sep = ",")
- df.plot(x = 'date',y = ["open","close"],figsize=(10,6))
- plt.xticks(rotation=60)
-
- out:
- #plot omitted
- from torch import nn
- #the file has many columns; keep only the ones we need: date, open, close, high, low and volume
- dfdata = df.iloc[:, 1:7]
- print(dfdata)
-
- #dfdiff = dfdata.set_index("date")
- dfdata.plot(x = 'date',y = ["open","close"],figsize=(10,6))
- plt.xticks(rotation=60)
- dfdata = dfdata.drop("date",axis = 1).astype("float32")
-
- #normalize the data; the columns differ by orders of magnitude, and normalization helps training
- dfguiyi = (dfdata-dfdata.min())/(dfdata.max()-dfdata.min())
- print(dfdata)
- print(dfguiyi)
-
- out:
- date open close high low volume
- 0 2017-01-03 17.34 17.66 18.07 17.34 368556.0
- 1 2017-01-04 17.74 18.06 18.27 17.66 335319.0
- 2 2017-01-05 18.15 17.82 18.15 17.72 208593.0
- 3 2017-01-06 17.81 17.68 17.93 17.55 229796.0
- 4 2017-01-09 17.70 17.93 17.99 17.69 258683.0
- ... ... ... ... ... ... ...
- 1266 2022-03-21 17.60 17.19 17.65 17.11 310405.0
- 1267 2022-03-22 17.15 17.31 17.37 17.09 195424.0
- 1268 2022-03-23 17.31 17.21 17.41 17.13 208347.0
- 1269 2022-03-24 17.11 17.12 17.22 17.06 152451.0
- 1270 2022-03-25 17.14 16.98 17.21 16.91 219721.0
-
- [1271 rows x 6 columns]
- open close high low volume
- 0 17.340000 17.660000 18.070000 17.340000 368556.0
- 1 17.740000 18.059999 18.270000 17.660000 335319.0
- 2 18.150000 17.820000 18.150000 17.719999 208593.0
- 3 17.809999 17.680000 17.930000 17.549999 229796.0
- 4 17.700001 17.930000 17.990000 17.690001 258683.0
- ... ... ... ... ... ...
- 1266 17.600000 17.190001 17.650000 17.110001 310405.0
- 1267 17.150000 17.309999 17.370001 17.090000 195424.0
- 1268 17.309999 17.209999 17.410000 17.129999 208347.0
- 1269 17.110001 17.120001 17.219999 17.059999 152451.0
- 1270 17.139999 16.980000 17.209999 16.910000 219721.0
-
- [1271 rows x 5 columns]
- open close high low volume
- 0 0.063307 0.087719 0.099580 0.083895 0.172035
- 1 0.087892 0.111918 0.111578 0.103490 0.153485
- 2 0.113092 0.097399 0.104379 0.107165 0.082761
- 3 0.092194 0.088929 0.091182 0.096754 0.094594
- 4 0.085433 0.104053 0.094781 0.105328 0.110715
- ... ... ... ... ... ...
- 1266 0.079287 0.059286 0.074385 0.069810 0.139581
- 1267 0.051629 0.066546 0.057589 0.068585 0.075411
- 1268 0.061463 0.060496 0.059988 0.071035 0.082623
- 1269 0.049170 0.055052 0.048590 0.066748 0.051428
- 1270 0.051014 0.046582 0.047990 0.057563 0.088971
-
- [1271 rows x 5 columns]
-
-
-
-
-
- dfguiyi.head()
-
- out:
- open close high low volume
- 0 0.063307 0.087719 0.099580 0.083895 0.172035
- 1 0.087892 0.111918 0.111578 0.103490 0.153485
- 2 0.113092 0.097399 0.104379 0.107165 0.082761
- 3 0.092194 0.088929 0.091182 0.096754 0.094594
- 4 0.085433 0.104053 0.094781 0.105328 0.110715
Next we build a dataset suitable for training in pytorch.
- #build the dataset
- import torch
- from torch import nn
- from torch.utils.data import Dataset,DataLoader,TensorDataset
- #use the previous 60 days of data as input and the current day's data as the label
- day_size = 60
- class stockDataset(Dataset):
- def __len__(self):
- return len(dfguiyi) - day_size
- def __getitem__(self,i):
- x = dfguiyi.iloc[i:i+day_size,:] #a full 60-day window of features
- feature = torch.tensor(x.values)
- y = dfguiyi.iloc[i+day_size,:]
- label = torch.tensor(y.values)
- return (feature,label)
- ds_train = stockDataset()
- #build the training loader with DataLoader, batch size 20
- dl_train = DataLoader(ds_train,batch_size = 20)
-
- #take a look at one batch
- for a,b in dl_train:
- print(a,b)
- break
- out:
- tensor([[[0.0633, 0.0877, 0.0996, 0.0839, 0.1720],
- [0.0879, 0.1119, 0.1116, 0.1035, 0.1535],
- [0.1131, 0.0974, 0.1044, 0.1072, 0.0828],
- ...,
- [0.1309, 0.1337, 0.1278, 0.1298, 0.0826],
- [0.1334, 0.1385, 0.1326, 0.1384, 0.0903],
- [0.1371, 0.1779, 0.1686, 0.1574, 0.2161]],
-
- [[0.0879, 0.1119, 0.1116, 0.1035, 0.1535],
- [0.1131, 0.0974, 0.1044, 0.1072, 0.0828],
- [0.0922, 0.0889, 0.0912, 0.0968, 0.0946],
- ...,
- [0.1334, 0.1385, 0.1326, 0.1384, 0.0903],
- [0.1371, 0.1779, 0.1686, 0.1574, 0.2161],
- [0.1746, 0.1779, 0.1770, 0.1825, 0.2685]],
-
- [[0.1131, 0.0974, 0.1044, 0.1072, 0.0828],
- [0.0922, 0.0889, 0.0912, 0.0968, 0.0946],
- [0.0854, 0.1041, 0.0948, 0.1053, 0.1107],
- ...,
- [0.1371, 0.1779, 0.1686, 0.1574, 0.2161],
- [0.1746, 0.1779, 0.1770, 0.1825, 0.2685],
- [0.1887, 0.2015, 0.1980, 0.1972, 0.3619]],
-
- ...,
-
- [[0.1739, 0.1766, 0.1716, 0.1868, 0.0406],
- [0.1696, 0.1615, 0.1656, 0.1715, 0.0157],
- [0.1666, 0.1785, 0.1848, 0.1837, 0.1040],
- ...,
- [0.2797, 0.2928, 0.2873, 0.2976, 0.2026],
- [0.3024, 0.3200, 0.3305, 0.3190, 0.2307],
- [0.3110, 0.3394, 0.3257, 0.3037, 0.1949]],
-
- [[0.1696, 0.1615, 0.1656, 0.1715, 0.0157],
- [0.1666, 0.1785, 0.1848, 0.1837, 0.1040],
- [0.1764, 0.1827, 0.1854, 0.1947, 0.0446],
- ...,
- [0.3024, 0.3200, 0.3305, 0.3190, 0.2307],
- [0.3110, 0.3394, 0.3257, 0.3037, 0.1949],
- [0.3405, 0.2910, 0.3263, 0.2915, 0.2281]],
-
- [[0.1666, 0.1785, 0.1848, 0.1837, 0.1040],
- [0.1764, 0.1827, 0.1854, 0.1947, 0.0446],
- [0.1746, 0.1658, 0.1662, 0.1696, 0.0811],
- ...,
- [0.3110, 0.3394, 0.3257, 0.3037, 0.1949],
- [0.3405, 0.2910, 0.3263, 0.2915, 0.2281],
- [0.2735, 0.2898, 0.2813, 0.2756, 0.0963]]]) tensor([[0.1887, 0.2015, 0.1980, 0.1972, 0.3619],
- [0.1942, 0.1960, 0.1914, 0.1972, 0.1444],
- [0.1967, 0.1791, 0.1866, 0.1929, 0.1737],
- [0.1746, 0.1863, 0.1758, 0.1813, 0.1802],
- [0.1758, 0.1924, 0.1872, 0.1911, 0.1506],
- [0.1881, 0.1821, 0.1806, 0.1966, 0.0773],
- [0.1789, 0.1803, 0.1710, 0.1886, 0.0965],
- [0.1746, 0.1900, 0.1806, 0.1898, 0.0883],
- [0.1838, 0.2160, 0.2148, 0.2033, 0.3752],
- [0.2127, 0.2220, 0.2172, 0.2205, 0.2398],
- [0.2194, 0.2849, 0.2873, 0.2382, 0.5093],
- [0.2803, 0.2880, 0.2801, 0.2866, 0.2703],
- [0.2895, 0.2898, 0.2813, 0.2719, 0.2978],
- [0.2797, 0.2928, 0.2873, 0.2976, 0.2026],
- [0.3024, 0.3200, 0.3305, 0.3190, 0.2307],
- [0.3110, 0.3394, 0.3257, 0.3037, 0.1949],
- [0.3405, 0.2910, 0.3263, 0.2915, 0.2281],
- [0.2735, 0.2898, 0.2813, 0.2756, 0.0963],
- [0.2926, 0.3261, 0.3245, 0.2988, 0.1541],
- [0.3227, 0.3285, 0.3221, 0.3221, 0.0866]])
- #b) Step 2: define the model
- import torch
- from torch import nn
- import importlib
- import torchkeras
- torch.random.seed()
-
- class Net(nn.Module):
- def __init__(self):
- super(Net, self).__init__()
- # 2-layer LSTM; each time step has 5 features, so input_size is 5
- self.lstm = nn.LSTM(input_size = 5,hidden_size = 20,num_layers = 2,batch_first = True)
- self.linear = nn.Linear(20,5)
- def forward(self,x_input):
- x = self.lstm(x_input)[0][:,-1,:]
- y = self.linear(x)
- return y
- net = Net()
- model = torchkeras.Model(net)
- print(model)
- model.summary(input_shape=(60,5),input_dtype = torch.FloatTensor)
-
- out:
- Model(
- (net): Net(
- (lstm): LSTM(5, 20, num_layers=2, batch_first=True)
- (linear): Linear(in_features=20, out_features=5, bias=True)
- )
- )
- ----------------------------------------------------------------
- Layer (type) Output Shape Param #
- ================================================================
- LSTM-1 [-1, 60, 20] 5,520
- Linear-2 [-1, 5] 105
- ================================================================
- Total params: 5,625
- Trainable params: 5,625
- Non-trainable params: 0
- ----------------------------------------------------------------
- Input size (MB): 0.001144
- Forward/backward pass size (MB): 0.009193
- Params size (MB): 0.021458
- Estimated Total Size (MB): 0.031796
- ----------------------------------------------------------------
- #c) Step 3: train the model
- #define a custom loss: mean squared percentage error (MSPE)
- def mspe(y_pred,y_true):
- err_percent = (y_true - y_pred)**2/(torch.max(y_true**2,torch.tensor(1e-7)))
- return torch.mean(err_percent)
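- # Quick numeric check of mspe (added for illustration): with y_true=2 and
- # y_pred=1 the error is (2-1)**2/2**2 = 0.25
- # print(mspe(torch.tensor([1.0]),torch.tensor([2.0]))) # tensor(0.2500)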
-
- model.compile(loss_func = mspe,optimizer = torch.optim.Adagrad(model.parameters(),lr = 0.01))
-
- dfhistory = model.fit(100,dl_train,log_step_freq=10)
-
- out:
- #training log omitted
- ......
- +-------+-------+
- | epoch | loss |
- +-------+-------+
- | 100 | 0.924 |
- +-------+-------+
-
- ================================================================================2022-03-27 16:55:23
- Finished Training...
- #use dfresult to accumulate the data and, later, the predictions
- dfresult = dfguiyi[["open","close","high","low","volume"]].copy()
- dfresult.tail()
-
- #define an inverse-normalization function
- def fanguiyi(df,dfdata):
- result=df*(dfdata.max()-dfdata.min())+dfdata.min()
- return result
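- # Round-trip check (added for illustration): inverse-normalizing the normalized
- # frame should recover the original values up to float32 rounding
- # print((fanguiyi(dfguiyi,dfdata)-dfdata).abs().max())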
-
-
- #inverse-normalize to get back the actual prices
- dfgujia = fanguiyi(dfresult,dfdata)
- print(dfgujia)
-
- out:
- open close high low volume
- 0 17.340000 17.660000 18.070000 17.340000 368556.0
- 1 17.740000 18.059999 18.270000 17.660000 335319.0
- 2 18.150000 17.820000 18.150000 17.719999 208593.0
- 3 17.809999 17.680000 17.930000 17.549999 229796.0
- 4 17.700001 17.930000 17.990000 17.690001 258683.0
- ... ... ... ... ... ...
- 1266 17.600000 17.190001 17.650000 17.110001 310405.0
- 1267 17.150000 17.309999 17.370001 17.090000 195424.0
- 1268 17.309999 17.209999 17.410000 17.129999 208347.0
- 1269 17.110001 17.120001 17.219999 17.059999 152451.0
- 1270 17.139999 16.980000 17.209999 16.910000 219721.0
-
- [1271 rows x 5 columns]
- #d) Step 4: evaluate the model
- %matplotlib inline
- %config InlineBackend.figure_format = 'svg'
- import matplotlib.pyplot as plt
- def plot_metric(dfhistory, metric):
- train_metrics = dfhistory[metric]
- epochs = range(1, len(train_metrics) + 1)
- plt.plot(epochs, train_metrics, 'bo--')
- plt.title('Training '+ metric)
- plt.xlabel("Epochs")
- plt.ylabel(metric)
- plt.legend(["train_"+metric])
- plt.show()
- plot_metric(dfhistory,"loss")
-
- out:
- #plot omitted
- # e) Step 5: use the model
- #predict the next 10 days of prices and append the results to dfresult
- for i in range(10):
- arr_input = torch.unsqueeze(torch.from_numpy(dfresult.values[-60:,:]),dim=0)
- arr_predict = model.forward(arr_input)
- dfpredict = pd.DataFrame(arr_predict.data.numpy(),columns = dfresult.columns)
- dfresult = dfresult.append(dfpredict,ignore_index=True)
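- # Note (added): DataFrame.append was removed in pandas 2.0; on newer pandas use
- # dfresult = pd.concat([dfresult,dfpredict],ignore_index=True) instead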
-
- dfgujia = fanguiyi(dfresult,dfdata)
- print(dfgujia)
-
- out:
- open close high low volume
- 0 17.340000 17.660000 18.070000 17.340000 368556.000000
- 1 17.740000 18.059999 18.270000 17.660000 335319.000000
- 2 18.150000 17.820000 18.150000 17.719999 208593.000000
- 3 17.809999 17.680000 17.930000 17.549999 229796.000000
- 4 17.700001 17.930000 17.990000 17.690001 258683.000000
- ... ... ... ... ... ...
- 1276 16.181501 15.864144 16.290152 15.571153 189594.421875
- 1277 16.147591 15.849412 16.265371 15.561488 191846.609375
- 1278 16.118704 15.836608 16.244844 15.553810 193725.046875
- 1279 16.094416 15.825689 16.228113 15.547682 195277.750000
- 1280 16.074209 15.816536 16.214634 15.542784 196551.828125
-
- [1281 rows x 5 columns]
Well, that predicted price trend is genuinely depressing. Time to clear the position and run.
Disclaimer: far too many factors influence stock prices. This is only a demo of recurrent networks and does not constitute investment advice. Please don't trade on it; I am not responsible for any losses.
- f) Step 6: save the model
- print(model.net.state_dict().keys())
- # save the model parameters
- torch.save(model.net.state_dict(), "./data/6-4_model_parameter.pkl")
- net_clone = Net()
- net_clone.load_state_dict(torch.load("./data/6-4_model_parameter.pkl"))
- model_clone = torchkeras.Model(net_clone)
- model_clone.compile(loss_func = mspe)
- # evaluate the cloned model
- model_clone.evaluate(dl_train)
-
- out:
- odict_keys(['lstm.weight_ih_l0', 'lstm.weight_hh_l0', 'lstm.bias_ih_l0', 'lstm.bias_hh_l0', 'lstm.weight_ih_l1', 'lstm.weight_hh_l1', 'lstm.bias_ih_l1', 'lstm.bias_hh_l1', 'linear.weight', 'linear.bias'])
- {'val_loss': 0.9112453353209574}
TensorBoard is a visualization tool. It started out as TensorFlow's sidekick, but it also works very well with Pytorch; in fact, using TensorBoard from Pytorch is arguably simpler and more natural than using it from TensorFlow.
The general workflow for TensorBoard visualization in Pytorch is as follows:
First, create a torch.utils.tensorboard.SummaryWriter log writer pointing at a chosen directory.
Then, depending on what you want to visualize, use the writer to log the corresponding information into that directory.
Finally, start TensorBoard with the log directory as an argument and view the results there.
The main visualization methods TensorBoard offers in Pytorch:
Visualize the model structure: writer.add_graph
Visualize metric changes: writer.add_scalar
Visualize parameter distributions: writer.add_histogram
Visualize raw images: writer.add_image or writer.add_images
Visualize hand-drawn figures: writer.add_figure
- #Example 7-1-1 Visualizing the model structure with TensorBoard
- import torch
- from torch import nn
- from torch.utils.tensorboard import SummaryWriter
- from torchkeras import Model,summary
- class Net(nn.Module):
- def __init__(self):
- super(Net, self).__init__()
- self.conv1 = nn.Conv2d(in_channels=3,out_channels=32,kernel_size = 3)
- self.pool = nn.MaxPool2d(kernel_size = 2,stride = 2)
- self.conv2 = nn.Conv2d(in_channels=32,out_channels=64,kernel_size = 5)
- self.dropout = nn.Dropout2d(p = 0.1)
- self.adaptive_pool = nn.AdaptiveMaxPool2d((1,1))
- self.flatten = nn.Flatten()
- self.linear1 = nn.Linear(64,32)
- self.relu = nn.ReLU()
- self.linear2 = nn.Linear(32,1)
- self.sigmoid = nn.Sigmoid()
- def forward(self,x):
- x = self.conv1(x)
- x = self.pool(x)
- x = self.conv2(x)
- x = self.pool(x)
- x = self.dropout(x)
- x = self.adaptive_pool(x)
- x = self.flatten(x)
- x = self.linear1(x)
- x = self.relu(x)
- x = self.linear2(x)
- y = self.sigmoid(x)
- return y
- net = Net()
- print(net)
-
- out:
- Net(
- (conv1): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1))
- (pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (conv2): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1))
- (dropout): Dropout2d(p=0.1, inplace=False)
- (adaptive_pool): AdaptiveMaxPool2d(output_size=(1, 1))
- (flatten): Flatten(start_dim=1, end_dim=-1)
- (linear1): Linear(in_features=64, out_features=32, bias=True)
- (relu): ReLU()
- (linear2): Linear(in_features=32, out_features=1, bias=True)
- (sigmoid): Sigmoid()
- )
-
-
-
- summary(net,input_shape= (3,32,32))
-
- out:
- ----------------------------------------------------------------
- Layer (type) Output Shape Param #
- ================================================================
- Conv2d-1 [-1, 32, 30, 30] 896
- MaxPool2d-2 [-1, 32, 15, 15] 0
- Conv2d-3 [-1, 64, 11, 11] 51,264
- MaxPool2d-4 [-1, 64, 5, 5] 0
- Dropout2d-5 [-1, 64, 5, 5] 0
- AdaptiveMaxPool2d-6 [-1, 64, 1, 1] 0
- Flatten-7 [-1, 64] 0
- Linear-8 [-1, 32] 2,080
- ReLU-9 [-1, 32] 0
- Linear-10 [-1, 1] 33
- Sigmoid-11 [-1, 1] 0
- ================================================================
- Total params: 54,273
- Trainable params: 54,273
- Non-trainable params: 0
- ----------------------------------------------------------------
- Input size (MB): 0.011719
- Forward/backward pass size (MB): 0.359634
- Params size (MB): 0.207035
- Estimated Total Size (MB): 0.578388
- ----------------------------------------------------------------
-
-
-
- writer = SummaryWriter('./data/tensorboard')
- writer.add_graph(net,input_to_model = torch.rand(1,3,32,32))
- writer.close()
- %load_ext tensorboard
- #%tensorboard --logdir logs/fit --port=6007
- #%tensorboard --logdir ./data/tensorboard
- from tensorboard import notebook
- #list any running tensorboard instances
- notebook.list()
- #start a tensorboard instance
- notebook.start("--logdir ./data/tensorboard")
- #equivalent to running tensorboard --logdir ./data/tensorboard on the command line
- #then open http://localhost:6006/ in a browser to view the dashboard
-
- out:
During training, being able to watch the loss and metric curves update in real time undoubtedly gives a more direct feel for how training is going.
Note that writer.add_scalar can only log scalar values, so it is typically used to track how the loss and metrics evolve.
- #Example 7-1-2 Visualizing metric changes
- import numpy as np
- import torch
- from torch.utils.tensorboard import SummaryWriter
- # find the minimum of f(x) = a*x**2 + b*x + c
- x = torch.tensor(0.0,requires_grad = True) # x requires grad
- a = torch.tensor(1.0)
- b = torch.tensor(-2.0)
- c = torch.tensor(1.0)
- optimizer = torch.optim.SGD(params=[x],lr = 0.01)
- def f(x):
- result = a*torch.pow(x,2) + b*x + c
- return(result)
- writer = SummaryWriter('./data/tensorboard')
- for i in range(500):
- optimizer.zero_grad()
- y = f(x)
- y.backward()
- optimizer.step()
- writer.add_scalar("x",x.item(),i) #日志中记录x在第step i 的值
- writer.add_scalar("y",y.item(),i) #日志中记录y在第step i 的值
- writer.close()
- print("y=",f(x).data,";","x=",x.data)
-
- out:
- y= tensor(0.) ; x= tensor(1.0000)
-
-
-
- #plots omitted
To visualize how model parameters (which are generally not scalars) change during training, use writer.add_histogram. It shows how the histogram of a tensor's values evolves over training steps.
- #Example 7-1-3 Visualizing parameter distributions
- import numpy as np
- import torch
- from torch.utils.tensorboard import SummaryWriter
- # create normally distributed tensors to simulate a parameter matrix
- def norm(mean,std):
- t = std*torch.randn((100,20))+mean
- return t
- writer = SummaryWriter('./data/tensorboard')
- for step,mean in enumerate(range(-10,10,1)):
- w = norm(mean,1)
- writer.add_histogram("w",w, step)
- writer.flush()
- writer.close()
For image-related tasks, we can also display the raw images in tensorboard.
To log a single image, use writer.add_image. To log several images, use writer.add_images. You can also stitch multiple images into one with torchvision.utils.make_grid and then log it with writer.add_image. Note that what you pass in is the Pytorch tensor representing the image data.
- #Example 7-1-4 Visualizing raw images
- import torch
- import torchvision
- from torch import nn
- from torch.utils.data import Dataset,DataLoader
- from torch.utils.tensorboard import SummaryWriter #needed below for the writer
- from torchvision import transforms,datasets
- transform_train = transforms.Compose(
- [transforms.ToTensor()])
- transform_valid = transforms.Compose(
- [transforms.ToTensor()])
- ds_train = datasets.ImageFolder("./data/animal/train/",transform = transform_train,target_transform= lambda t:torch.tensor([t]).float())
- ds_valid = datasets.ImageFolder("./data/animal/test/",transform = transform_train,target_transform= lambda t:torch.tensor([t]).float())
- print(ds_train.class_to_idx)
- dl_train = DataLoader(ds_train,batch_size = 50,shuffle = True,num_workers=3)
- dl_valid = DataLoader(ds_valid,batch_size = 50,shuffle = True,num_workers=3)
- dl_train_iter = iter(dl_train)
- images, labels = next(dl_train_iter)
- # log just one image
- writer = SummaryWriter('./data/tensorboard')
- writer.add_image('images[0]', images[0])
- writer.close()
- # stitch several images into one, separated by a black grid
- writer = SummaryWriter('./data/tensorboard')
- # create grid of images
- img_grid = torchvision.utils.make_grid(images)
- writer.add_image('image_grid', img_grid)
- writer.close()
- # log several images directly
- writer = SummaryWriter('./data/tensorboard')
- writer.add_images("images",images,global_step = 0)
- writer.close()
To display matplotlib figures in tensorboard, use add_figure. Note that unlike writer.add_image, writer.add_figure takes a matplotlib figure object.
- #Example 7-1-5 Visualizing hand-drawn figures
- import torch
- import torchvision
- from torch import nn
- from torch.utils.data import Dataset,DataLoader
- from torch.utils.tensorboard import SummaryWriter #needed below for the writer
- from torchvision import transforms,datasets
- transform_train = transforms.Compose(
- [transforms.ToTensor()])
- transform_valid = transforms.Compose(
- [transforms.ToTensor()])
- ds_train = datasets.ImageFolder("./data/animal/train/",transform = transform_train,target_transform= lambda t:torch.tensor([t]).float())
- ds_valid = datasets.ImageFolder("./data/animal/test/",transform = transform_train,target_transform= lambda t:torch.tensor([t]).float())
- print(ds_train.class_to_idx)
-
- out:
- {'bird': 0, 'car': 1, 'cat': 2, 'deer': 3, 'dog': 4, 'plane': 5}
-
-
-
-
- %matplotlib inline
- %config InlineBackend.figure_format = 'svg'
- from matplotlib import pyplot as plt
- figure = plt.figure(figsize=(8,8))
- for i in range(9):
- img,label = ds_train[i]
- img = img.permute(1,2,0)
- ax=plt.subplot(3,3,i+1)
- ax.imshow(img.numpy())
- ax.set_title("label = %d"%label.item())
- ax.set_xticks([])
- ax.set_yticks([])
- plt.show()
-
- out:
- #figure omitted
-
-
-
- writer = SummaryWriter('./data/tensorboard')
- writer.add_figure('figure',figure,global_step=0)
- writer.close()
Training deep learning models is often very time-consuming: a few hours per model is routine, several days is common, and sometimes training takes weeks.
The training time is spent in two places: preparing the data and iterating on the parameters.
When data preparation is the main bottleneck, we can use more processes to prepare the data.
When parameter iteration is the main bottleneck, the usual remedy is to accelerate with a GPU.
Using a GPU in Pytorch is very simple: just move the model and the data onto the GPU.
The core code looks like this:
- # define the model
- ...
- device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
- model.to(device) # move the model to cuda
- # train the model
- ...
- features = features.to(device) # move the data to cuda
- labels = labels.to(device) # or: labels = labels.cuda() if torch.cuda.is_available() else labels
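To make this concrete, here is a minimal self-contained sketch of one device-aware training step. The tiny model and the random data are placeholders for illustration only, not code from any example above.
- import torch
- from torch import nn
-
- device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
- model = nn.Linear(10,1).to(device) # move the model to the device once
- optimizer = torch.optim.SGD(model.parameters(),lr = 0.01)
- loss_func = nn.MSELoss()
-
- features = torch.rand(32,10).to(device) # move each batch of data to the device
- labels = torch.rand(32,1).to(device)
-
- optimizer.zero_grad()
- loss = loss_func(model(features),labels)
- loss.backward()
- optimizer.step()
- print(loss.item())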
Using multiple GPUs to train a model is just as simple: you only need to wrap the model as a data-parallel model. Once the model is moved to the GPUs, a replica is kept on each GPU and every batch of data is split evenly across them. The core code is as follows.
- # define the model
- ...
- if torch.cuda.device_count() > 1:
- model = nn.DataParallel(model) # wrap as a data-parallel model
- # train the model
- ...
- features = features.to(device) # move the data to cuda
- labels = labels.to(device) # or: labels = labels.cuda() if torch.cuda.is_available() else labels
- ...
Below is a summary of basic GPU-related operations. In a Colab notebook, choose GPU under Edit -> Notebook settings -> Hardware accelerator.
- #Example 7-2-1 Basic GPU operations
-
- import torch
- from torch import nn
- # 1. Query gpu information
- if_cuda = torch.cuda.is_available()
- print("if_cuda=",if_cuda)
- gpu_count = torch.cuda.device_count()
- print("gpu_count=",gpu_count)
-
- out:
- if_cuda= True
- gpu_count= 1
-
-
-
-
- # 2. Move tensors between gpu and cpu
- tensor = torch.rand((100,100))
- tensor_gpu = tensor.to("cuda:0") # or: tensor_gpu = tensor.cuda()
- print(tensor_gpu.device)
- print(tensor_gpu.is_cuda)
- tensor_cpu = tensor_gpu.to("cpu") # or: tensor_cpu = tensor_gpu.cpu()
- print(tensor_cpu.device)
-
- out:
- cuda:0
- True
- cpu
-
-
-
-
- # 3. Move all tensors in a model to the gpu
- net = nn.Linear(2,1)
- print(next(net.parameters()).is_cuda)
- net.to("cuda:0") # moves all parameter tensors to the GPU in place; no need to reassign net = net.to("cuda:0")
- print(next(net.parameters()).is_cuda)
- print(next(net.parameters()).device)
-
- out:
- False
- True
- cuda:0
-
-
-
-
- # 4. Create a model that supports data parallelism across multiple gpus
- linear = nn.Linear(2,1)
- print(next(linear.parameters()).device)
- model = nn.DataParallel(linear)
- print(model.device_ids)
- print(next(model.module.parameters()).device)
- #note: when saving parameters, save model.module's state dict
- torch.save(model.module.state_dict(), "./data/7-2-1_model_parameter.pkl")
- linear = nn.Linear(2,1)
- linear.load_state_dict(torch.load("./data/7-2-1_model_parameter.pkl"))
-
- out:
- cpu
- [0]
- cuda:0
-
-
-
-
- # 5. Empty the cuda cache
- # this is very useful when cuda runs out of memory
- torch.cuda.empty_cache()
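- # (Added) cuda memory usage can also be inspected with these built-in helpers:
- # print(torch.cuda.memory_allocated()) # bytes currently held by tensors
- # print(torch.cuda.max_memory_allocated()) # peak allocation so far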
Below we run the same matrix multiplication on the CPU and on the GPU and compare the compute time.
- #Example 7-2-2 Matrix multiplication timing
-
- import time
- import torch
- from torch import nn
-
- # on the cpu
- a = torch.rand((10000,200))
- b = torch.rand((200,10000))
- tic = time.time()
- c = torch.matmul(a,b)
- toc = time.time()
- print(toc-tic)
- print(a.device)
- print(b.device)
-
- out:
- 0.38155651092529297
- cpu
- cpu
-
-
-
-
- # on the gpu
- device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
- a = torch.rand((10000,200),device = device) #a tensor can be created directly on the GPU
- b = torch.rand((200,10000)) #or created on the CPU and then moved to the GPU
- b = b.to(device) # or: b = b.cuda() if torch.cuda.is_available() else b
- tic = time.time()
- c = torch.matmul(a,b)
- toc = time.time()
- print(toc-tic)
- print(a.device)
- print(b.device)
-
- out:
- 0.09093642234802246
- cuda:0
- cuda:0
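One caveat worth knowing: cuda kernels are launched asynchronously, so timing them with time.time() alone can under-measure the real cost. A more careful timing sketch, continuing the GPU cell above (it assumes a and b are already on the GPU):
- torch.cuda.synchronize() # make sure pending work is done before starting the clock
- tic = time.time()
- c = torch.matmul(a,b)
- torch.cuda.synchronize() # wait for the kernel to actually finish
- toc = time.time()
- print(toc-tic)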
The following example shows how to train on a GPU with torchkeras.Model; all it really takes is specifying the device when calling model.compile.
- #Example 7-2-3 torchkeras.Model on a single GPU
- import torch
- from torch import nn
- import torchvision
- from torchvision import transforms
- import torchkeras
-
-
- #prepare the data
- transform = transforms.Compose([transforms.ToTensor()])
- ds_train = torchvision.datasets.MNIST(root="./data/minist/",train=True,download=True,transform=transform)
- ds_valid = torchvision.datasets.MNIST(root="./data/minist/",train=False,download=True,transform=transform)
- dl_train = torch.utils.data.DataLoader(ds_train, batch_size=128,shuffle=True, num_workers=4)
- dl_valid = torch.utils.data.DataLoader(ds_valid, batch_size=128,shuffle=False, num_workers=4)
- print(len(ds_train))
- print(len(ds_valid))
-
- out:
- 60000
- 10000
-
-
-
-
- %matplotlib inline
- %config InlineBackend.figure_format = 'svg'
- #look at a few samples
- from matplotlib import pyplot as plt
- plt.figure(figsize=(8,8))
- for i in range(9):
- img,label = ds_train[i]
- img = torch.squeeze(img)
- ax=plt.subplot(3,3,i+1)
- ax.imshow(img.numpy())
- ax.set_title("label = %d"%label)
- ax.set_xticks([])
- ax.set_yticks([])
- plt.show()
-
- out:
- <Figure size 576x576 with 9 Axes>
-
-
-
-
- #define the model
- class CnnModel(nn.Module):
- def __init__(self):
- super().__init__()
- self.layers = nn.ModuleList([
- nn.Conv2d(in_channels=1,out_channels=32,kernel_size = 3),
- nn.MaxPool2d(kernel_size = 2,stride = 2),
- nn.Conv2d(in_channels=32,out_channels=64,kernel_size = 5),
- nn.MaxPool2d(kernel_size = 2,stride = 2),
- nn.Dropout2d(p = 0.1),
- nn.AdaptiveMaxPool2d((1,1)),
- nn.Flatten(),
- nn.Linear(64,32),
- nn.ReLU(),
- nn.Linear(32,10)]
- )
- def forward(self,x):
- for layer in self.layers:
- x = layer(x)
- return x
- net = CnnModel()
- model = torchkeras.Model(net)
- model.summary(input_shape=(1,32,32))
-
- out:
- ----------------------------------------------------------------
- Layer (type) Output Shape Param #
- ================================================================
- Conv2d-1 [-1, 32, 30, 30] 320
- MaxPool2d-2 [-1, 32, 15, 15] 0
- Conv2d-3 [-1, 64, 11, 11] 51,264
- MaxPool2d-4 [-1, 64, 5, 5] 0
- Dropout2d-5 [-1, 64, 5, 5] 0
- AdaptiveMaxPool2d-6 [-1, 64, 1, 1] 0
- Flatten-7 [-1, 64] 0
- Linear-8 [-1, 32] 2,080
- ReLU-9 [-1, 32] 0
- Linear-10 [-1, 10] 330
- ================================================================
- Total params: 53,994
- Trainable params: 53,994
- Non-trainable params: 0
- ----------------------------------------------------------------
- Input size (MB): 0.003906
- Forward/backward pass size (MB): 0.359695
- Params size (MB): 0.205971
- Estimated Total Size (MB): 0.569572
- ----------------------------------------------------------------
-
-
-
- #train the model
-
- from sklearn.metrics import accuracy_score
- def accuracy(y_pred,y_true):
- y_pred_cls = torch.argmax(nn.Softmax(dim=1)(y_pred),dim=1).data
- return accuracy_score(y_true.cpu().numpy(),y_pred_cls.cpu().numpy())
-
- # note: move the tensors to the cpu first, then convert them to numpy arrays
- device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
- model.compile(loss_func = nn.CrossEntropyLoss(),
- optimizer= torch.optim.SGD(model.parameters(),lr = 0.02),
- metrics_dict={"accuracy":accuracy},device = device) # note: the device is specified here in compile
-
- dfhistory = model.fit(3,dl_train = dl_train, dl_val=dl_valid,log_step_freq=100)
-
- out:
- #training log omitted
- ......
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 3 | 0.315 | 0.911 | 0.196 | 0.944 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-29 13:12:16
- Finished Training...
-
-
-
-
-
- #evaluate the model
- %matplotlib inline
- %config InlineBackend.figure_format = 'svg'
- import matplotlib.pyplot as plt
- def plot_metric(dfhistory, metric):
- train_metrics = dfhistory[metric]
- val_metrics = dfhistory['val_'+metric]
- epochs = range(1, len(train_metrics) + 1)
- plt.plot(epochs, train_metrics, 'bo--')
- plt.plot(epochs, val_metrics, 'ro-')
- plt.title('Training and validation '+ metric)
- plt.xlabel("Epochs")
- plt.ylabel(metric)
- plt.legend(["train_"+metric, 'val_'+metric])
- plt.show()
- plot_metric(dfhistory,"loss")
- plot_metric(dfhistory,"accuracy")
- model.evaluate(dl_valid)
-
- out:
- #plots omitted
- {'val_loss': 0.1961400840855852, 'val_accuracy': 0.9440268987341772}
-
-
-
-
- #use the model
- model.predict(dl_valid)[0:10]
-
- out:
- tensor([[-3.0589, 0.5913, 2.9332, 1.9190, -0.4598, -0.9629, -7.2336, 10.3429,
- -4.1452, 1.7773],
- [-1.1582, -4.3752, 6.6064, 1.4136, 0.7964, -1.7493, 0.3708, 2.5375,
- -3.1236, 1.1933],
- [-0.2913, 7.6946, -0.5591, -3.4781, 1.8302, -3.5013, -0.2864, 0.8970,
- -1.4603, -1.6221],
- [ 7.8830, -3.4793, 3.0594, -3.5699, -0.7118, -1.5213, 3.4239, -5.2019,
- 1.5311, -1.1321],
- [-0.5955, -0.0648, 2.1269, -3.1870, 7.0876, -3.2265, -1.4909, 1.1239,
- -3.2128, 2.2488],
- [ 0.2199, 8.2553, -1.2405, -5.6407, 2.7766, -4.2880, -0.8283, 2.3202,
- -2.1941, -0.5836],
- [-1.4086, -2.2477, 2.0612, -1.8534, 6.2273, -1.8892, -1.8793, 1.4700,
- -2.0889, 3.3411],
- [-3.4105, -6.1126, 2.9690, 1.4873, 2.9547, 1.4985, -1.3198, 0.5871,
- 0.7181, 5.7719],
- [-1.7162, -5.8831, 4.2534, 0.3954, 1.3270, 3.6996, -0.8770, 1.2742,
- -1.1617, 3.4104],
- [-2.5201, -5.2390, 2.0075, 0.6812, 0.6092, 0.8451, -2.8697, 3.6119,
- 0.7925, 6.0172]])
-
-
-
-
- #save the model
- # save the model parameters
- torch.save(model.state_dict(), "data/7-2-3_model_parameter.pkl")
- model_clone = torchkeras.Model(CnnModel())
- model_clone.load_state_dict(torch.load("data/7-2-3_model_parameter.pkl"))
- model_clone.compile(loss_func = nn.CrossEntropyLoss(),
- optimizer= torch.optim.Adam(model_clone.parameters(),lr = 0.02),
- metrics_dict={"accuracy":accuracy},device = device) # note: the device is specified here in compile
- model_clone.evaluate(dl_valid)
-
- out:
- {'val_loss': 0.1961400840855852, 'val_accuracy': 0.9440268987341772}
The following example should be run on a machine with multiple GPUs. It will also run on a single-GPU machine, but then only one GPU is actually used.
- #Example 7-2-4 torchkeras.Model on multiple GPUs
-
- #prepare the data
- import torch
- from torch import nn
- import torchvision
- from torchvision import transforms
- import torchkeras
-
- transform = transforms.Compose([transforms.ToTensor()])
- ds_train = torchvision.datasets.MNIST(root="./data/minist/",train=True,download=True,transform=transform)
- ds_valid = torchvision.datasets.MNIST(root="./data/minist/",train=False,download=True,transform=transform)
- dl_train = torch.utils.data.DataLoader(ds_train, batch_size=128,shuffle=True, num_workers=4)
- dl_valid = torch.utils.data.DataLoader(ds_valid, batch_size=128,shuffle=False, num_workers=4)
- print(len(ds_train))
- print(len(ds_valid))
-
- out:
- 60000
- 10000
-
-
-
-
- #define the model
- class CnnModel(nn.Module):
- def __init__(self):
- super().__init__()
- self.layers = nn.ModuleList([
- nn.Conv2d(in_channels=1,out_channels=32,kernel_size = 3),
- nn.MaxPool2d(kernel_size = 2,stride = 2),
- nn.Conv2d(in_channels=32,out_channels=64,kernel_size = 5),
- nn.MaxPool2d(kernel_size = 2,stride = 2),
- nn.Dropout2d(p = 0.1),
- nn.AdaptiveMaxPool2d((1,1)),
- nn.Flatten(),
- nn.Linear(64,32),
- nn.ReLU(),
- nn.Linear(32,10)]
- )
- def forward(self,x):
- for layer in self.layers:
- x = layer(x)
- return x
- net = CnnModel()
- model = torchkeras.Model(net)
- model.summary(input_shape=(1,32,32))
-
- out:
- ----------------------------------------------------------------
- Layer (type) Output Shape Param #
- ================================================================
- Conv2d-1 [-1, 32, 30, 30] 320
- MaxPool2d-2 [-1, 32, 15, 15] 0
- Conv2d-3 [-1, 64, 11, 11] 51,264
- MaxPool2d-4 [-1, 64, 5, 5] 0
- Dropout2d-5 [-1, 64, 5, 5] 0
- AdaptiveMaxPool2d-6 [-1, 64, 1, 1] 0
- Flatten-7 [-1, 64] 0
- Linear-8 [-1, 32] 2,080
- ReLU-9 [-1, 32] 0
- Linear-10 [-1, 10] 330
- ================================================================
- Total params: 53,994
- Trainable params: 53,994
- Non-trainable params: 0
- ----------------------------------------------------------------
- Input size (MB): 0.003906
- Forward/backward pass size (MB): 0.359695
- Params size (MB): 0.205971
- Estimated Total Size (MB): 0.569572
- ----------------------------------------------------------------
-
-
-
-
- #train the model
-
- from sklearn.metrics import accuracy_score
- def accuracy(y_pred,y_true):
- y_pred_cls = torch.argmax(nn.Softmax(dim=1)(y_pred),dim=1).data
- return accuracy_score(y_true.cpu().numpy(),y_pred_cls.cpu().numpy()) # note: move to the cpu first, then convert to numpy
- device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
- model.compile(loss_func = nn.CrossEntropyLoss(),
- optimizer= torch.optim.SGD(model.parameters(),lr = 0.02),
- metrics_dict={"accuracy":accuracy},device = device) # note: the device is specified here in compile
-
- dfhistory = model.fit(3,dl_train = dl_train, dl_val=dl_valid,log_step_freq=100)
-
- out:
- #training log omitted
- ......
- +-------+-------+----------+----------+--------------+
- | epoch | loss | accuracy | val_loss | val_accuracy |
- +-------+-------+----------+----------+--------------+
- | 3 | 0.328 | 0.908 | 0.202 | 0.945 |
- +-------+-------+----------+----------+--------------+
-
- ================================================================================2022-03-29 13:39:01
- Finished Training...
-
-
-
-
- #evaluate the model
- %matplotlib inline
- %config InlineBackend.figure_format = 'svg'
- import matplotlib.pyplot as plt
- def plot_metric(dfhistory, metric):
- train_metrics = dfhistory[metric]
- val_metrics = dfhistory['val_'+metric]
- epochs = range(1, len(train_metrics) + 1)
- plt.plot(epochs, train_metrics, 'bo--')
- plt.plot(epochs, val_metrics, 'ro-')
- plt.title('Training and validation '+ metric)
- plt.xlabel("Epochs")
- plt.ylabel(metric)
- plt.legend(["train_"+metric, 'val_'+metric])
- plt.show()
- plot_metric(dfhistory,"loss")
- plot_metric(dfhistory,"accuracy")
- model.evaluate(dl_valid)
-
- out:
- #training curves omitted
- {'val_loss': 0.20189599447612522, 'val_accuracy': 0.9454113924050633}
-
-
-
-
- #save the model
- # save the model parameters
- torch.save(model.state_dict(), "data/7-2-4_model_parameter.pkl")
- model_clone = torchkeras.Model(CnnModel())
- model_clone.load_state_dict(torch.load("data/7-2-4_model_parameter.pkl"))
- model_clone.compile(loss_func = nn.CrossEntropyLoss(),
- optimizer= torch.optim.Adam(model_clone.parameters(),lr = 0.02),
- metrics_dict={"accuracy":accuracy},device = device) # note: the device is specified here in compile
- model_clone.evaluate(dl_valid)
-
- out:
- {'val_loss': 0.20189599447612522, 'val_accuracy': 0.9454113924050633}
- #Example 7-2-5 torchkeras.LightModel on GPU/TPU
- #prepare the data
- import torch
- from torch import nn
- import torchvision
- from torchvision import transforms
- import torchkeras
-
- transform = transforms.Compose([transforms.ToTensor()])
- ds_train = torchvision.datasets.MNIST(root="./data/minist/",train=True,download=True,transform=transform)
- ds_valid = torchvision.datasets.MNIST(root="./data/minist/",train=False,download=True,transform=transform)
- dl_train = torch.utils.data.DataLoader(ds_train, batch_size=128,shuffle=True, num_workers=4)
- dl_valid = torch.utils.data.DataLoader(ds_valid, batch_size=128,shuffle=False, num_workers=4)
- print(len(ds_train))
- print(len(ds_valid))
-
-
- out:
- 60000
- 10000
-
-
-
- #define the model
- import torchkeras
- import torchmetrics
- import pytorch_lightning as pl
- from sklearn.metrics import accuracy_score
-
- #define a CNN network
- class CnnNet(nn.Module):
- def __init__(self):
- super().__init__()
- self.layers = nn.ModuleList([
- nn.Conv2d(in_channels=1,out_channels=32,kernel_size = 3),
- nn.MaxPool2d(kernel_size = 2,stride = 2),
- nn.Conv2d(in_channels=32,out_channels=64,kernel_size = 5),
- nn.MaxPool2d(kernel_size = 2,stride = 2),
- nn.Dropout2d(p = 0.1),
- nn.AdaptiveMaxPool2d((1,1)),
- nn.Flatten(),
- nn.Linear(64,32),
- nn.ReLU(),
- nn.Linear(32,10)]
- )
- def forward(self,x):
- for layer in self.layers:
- x = layer(x)
- return x
-
- #define a LightModel
- class Model(torchkeras.LightModel):
- def shared_step(self,batch)->dict:
- x, y = batch
- prediction = self(x)
- loss = nn.CrossEntropyLoss()(prediction,y)
- preds = torch.argmax(nn.Softmax(dim=1)(prediction),dim=1).data
- acc = accuracy_score(preds.cpu(), y.cpu())
- dic = {"loss":loss,"acc":acc}
- return dic
- def configure_optimizers(self):
- self=self.cuda()
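- # (note: pytorch_lightning moves the model to the device itself, so this manual .cuda() should be redundant when training with a GPU trainer)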
- optimizer = torch.optim.Adam(self.parameters(), lr=1e-2)
- lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer,step_size=10, gamma=0.0001)
- return {"optimizer":optimizer,"lr_scheduler":lr_scheduler}
-
- pl.seed_everything(8888)
- net = CnnNet()
- model = Model(net)
- torchkeras.summary(model,input_shape=(1,32,32))
- print(model)
-
- out:
- Global seed set to 8888
- ----------------------------------------------------------------
- Layer (type) Output Shape Param #
- ================================================================
- Conv2d-1 [-1, 32, 30, 30] 320
- MaxPool2d-2 [-1, 32, 15, 15] 0
- Conv2d-3 [-1, 64, 11, 11] 51,264
- MaxPool2d-4 [-1, 64, 5, 5] 0
- Dropout2d-5 [-1, 64, 5, 5] 0
- AdaptiveMaxPool2d-6 [-1, 64, 1, 1] 0
- Flatten-7 [-1, 64] 0
- Linear-8 [-1, 32] 2,080
- ReLU-9 [-1, 32] 0
- Linear-10 [-1, 10] 330
- ================================================================
- Total params: 53,994
- Trainable params: 53,994
- Non-trainable params: 0
- ----------------------------------------------------------------
- Input size (MB): 0.003906
- Forward/backward pass size (MB): 0.359695
- Params size (MB): 0.205971
- Estimated Total Size (MB): 0.569572
- ----------------------------------------------------------------
- Model(
- (net): CnnNet(
- (layers): ModuleList(
- (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1))
- (1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (2): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1))
- (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
- (4): Dropout2d(p=0.1, inplace=False)
- (5): AdaptiveMaxPool2d(output_size=(1, 1))
- (6): Flatten(start_dim=1, end_dim=-1)
- (7): Linear(in_features=64, out_features=32, bias=True)
- (8): ReLU()
- (9): Linear(in_features=32, out_features=10, bias=True)
- )
- )
- )
- #train the model
- ckpt_cb = pl.callbacks.ModelCheckpoint(monitor='val_loss')
- # set gpus=0 will use cpu,
- # set gpus=1 will use 1 gpu
- # set gpus=2 will use 2gpus
- # set gpus = -1 will use all gpus
- # you can also set gpus = [0,1] to use the given gpus
- # you can even set tpu_cores=2 to use two tpus
- trainer = pl.Trainer(max_epochs=10,gpus = 1, callbacks=[ckpt_cb])
- trainer.fit(model,dl_train,dl_valid)
-
- out:
- #training log omitted
-
-
-
-
- #evaluate the model
- import pandas as pd
- history = model.history
- dfhistory = pd.DataFrame(history)
- dfhistory
-
- out:
- val_loss val_acc loss acc epoch
- 0 0.110953 0.966574 0.314087 0.898299 0
- 1 0.086179 0.972805 0.111056 0.966251 1
- 2 0.074870 0.976068 0.098939 0.970471 2
- 3 0.099406 0.972607 0.092866 0.972920 3
- 4 0.060055 0.981705 0.082293 0.976601 4
- 5 0.077828 0.977947 0.072885 0.978428 5
- 6 0.062522 0.983782 0.077495 0.977951 6
- 7 0.060598 0.982991 0.069507 0.979800 7
- 8 0.106252 0.975672 0.077156 0.978478 8
- 9 0.061882 0.982892 0.068871 0.980821 9
-
-
-
- %matplotlib inline
- %config InlineBackend.figure_format = 'svg'
- import matplotlib.pyplot as plt
- def plot_metric(dfhistory, metric):
- train_metrics = dfhistory[metric]
- val_metrics = dfhistory['val_'+metric]
- epochs = range(1, len(train_metrics) + 1)
- plt.plot(epochs, train_metrics, 'bo--')
- plt.plot(epochs, val_metrics, 'ro-')
- plt.title('Training and validation '+ metric)
- plt.xlabel("Epochs")
- plt.ylabel(metric)
- plt.legend(["train_"+metric, 'val_'+metric])
- plt.show()
- plot_metric(dfhistory,"loss")
- plot_metric(dfhistory,"acc")
- results = trainer.test(model, test_dataloaders=dl_valid, verbose = False)
- print(results[0])
-
- out:
- #plots omitted
- Testing: 100%|█████████████████████████████████████████████████████████████████████████| 79/79 [00:02<00:00, 26.77it/s]
- {'test_loss': 0.06257420778274536, 'test_acc': 0.9827}
-
-
-
-
- #use the model
- def predict(model,dl):
- model.eval()
- preds = torch.cat([model.forward(t[0].to(model.device)) for t in dl])
- result = torch.argmax(nn.Softmax(dim=1)(preds),dim=1).data
- return(result.data)
- result = predict(model,dl_valid)
- result
-
- out:
- tensor([7, 2, 1, ..., 4, 5, 6])
-
-
-
-
- #save the model
- print(ckpt_cb.best_model_score)
- model.load_from_checkpoint(ckpt_cb.best_model_path)
- best_net = model.net
- torch.save(best_net.state_dict(),"./data/7-2-5_net.pt")
- net_clone = CnnNet()
- net_clone.load_state_dict(torch.load("./data/7-2-5_net.pt"))
- model_clone = Model(net_clone)
- trainer = pl.Trainer()
- result = trainer.test(model_clone,test_dataloaders=dl_valid, verbose = False)
- print(result)
-
- out:
- GPU available: True, used: False
- TPU available: False, using: 0 TPU cores
- IPU available: False, using: 0 IPUs
- tensor(0.0607, device='cuda:0')
-
- Testing: 100%|█████████████████████████████████████████████████████████████████████████| 79/79 [00:04<00:00, 17.98it/s]
- [{'test_loss': 0.06257420033216476, 'test_acc': 0.9827}]
Datasets are the raw material of deep learning. Common datasets can be collected online; I have organized the ones used in these notes and put them all under the data folder. Some of them were preprocessed to suit my own training, so they may differ a little from the versions you would download directly.
They include the animal dataset, the cifar10 dataset, the douban dataset, the minist dataset, the Titanic dataset, the stock data, the tensorboard log files, the pkl files saved by the exercises, and so on.
Link: https://pan.baidu.com/s/17BVjAqZEZsAcdQhJ7Yyb3A
Extraction code: sspa
The example code for every chapter is organized here as well, all in ipynb format; jupyter is convenient for testing code piece by piece, so all the code in these notes was written in jupyter.
Link: https://pan.baidu.com/s/1i1jTkU4xm29pw-tyNVwSQw
Extraction code: mltf