
Implementing DCGAN (Deep Convolutional GAN) in PyTorch to Generate New Fake Celebrity Photos (with Source Code and Dataset)

If you need the dataset and source code, please like, follow, and bookmark, then leave a message in the comments~~~

1. Generative Adversarial Networks (GANs)

A GAN (generative adversarial network) is a framework for teaching a deep learning model to capture the distribution of the training data, so that new data can be generated from that same distribution. It consists of two distinct models: a generator and a discriminator. The generator's job is to produce fake images that look like the training images; the discriminator's job is to look at an image and decide whether it is a real training image or a fake from the generator. During training, the generator constantly tries to outsmart the discriminator by producing better and better fakes, while the discriminator works to become a better detective and correctly classify real and fake images. The equilibrium of this game is reached when the generator produces fakes that look as if they came directly from the training data, and the discriminator is left guessing at 50% confidence whether the generator's output is real or fake.
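Formally, this game is the minimax objective from Goodfellow's original GAN paper (shown here for reference in standard notation, with x drawn from the data distribution and z from the latent prior):

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]$$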

2. An Introduction to DCGAN

DCGAN is a direct extension of the GAN described above, except that it explicitly uses convolutional layers in the discriminator and convolutional-transpose layers in the generator. The discriminator is made up of convolution layers, batch normalization layers, and LeakyReLU activations; its input is a 3×64×64 image and its output is the probability that the input image came from the real data. The generator is made up of transposed convolution layers, batch normalization layers, and ReLU activations; its input is a latent vector z sampled from a standard normal distribution, and its output is a 3×64×64 RGB image.
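To make the two building blocks concrete, here is a minimal sketch (channel sizes match the nz=100, ngf=64, ndf=64 defaults used later; the full models appear in Section 6):

import torch
import torch.nn as nn

# One generator block: a stride-1 ConvTranspose2d projects the 1x1 latent "image" up to 4x4.
up = nn.Sequential(
    nn.ConvTranspose2d(100, 512, kernel_size=4, stride=1, padding=0, bias=False),
    nn.BatchNorm2d(512),
    nn.ReLU(True),
)
# One discriminator block: a stride-2 Conv2d halves the spatial size (64 -> 32).
down = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1, bias=False),
    nn.LeakyReLU(0.2, inplace=True),
)

print(up(torch.randn(1, 100, 1, 1)).shape)    # torch.Size([1, 512, 4, 4])
print(down(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 64, 32, 32])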

3. The Dataset

The dataset used in this post comes from The Chinese University of Hong Kong and is named img_align_celeba.zip. The download link is below.

Dataset URL: http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html

Since access to the site can be unstable and the file is large, feel free to message me for it.

4. Model Implementation

1: Weight Initialization

All model weights should be randomly initialized from a normal distribution with mean=0 and stdev=0.02.
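As a minimal illustration of the API (the full weights_init function appears in Section 6), a single layer can be reinitialized like this:

import torch.nn as nn

conv = nn.Conv2d(3, 64, 4, 2, 1, bias=False)
nn.init.normal_(conv.weight.data, 0.0, 0.02)  # draw every weight from N(mean=0, std=0.02)
print(conv.weight.std().item())               # prints roughly 0.02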

2: Generator

The generator G maps the latent vector z to data space.
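Concretely, G turns a batch of latent vectors of shape (N, nz, 1, 1) into images of shape (N, 3, 64, 64). A sketch, assuming the Generator class and the hyperparameters (nz=100, ngf=64, nc=3) from Section 6 are in scope:

import torch

netG = Generator(ngpu=0)        # illustrative instantiation; Generator is defined in Section 6
z = torch.randn(16, 100, 1, 1)  # a batch of 16 latent vectors, nz = 100
fake = netG(z)
print(fake.shape)               # torch.Size([16, 3, 64, 64]), values in [-1, 1] from the Tanh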

3: Discriminator

The discriminator D is a binary classification network that takes an image as input and outputs the probability that the input image is real.
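Symmetrically, D maps a batch of 3×64×64 images to per-image probabilities. A sketch, assuming the Discriminator class and hyperparameters (nc=3, ndf=64) from Section 6 are in scope:

import torch

netD = Discriminator(ngpu=0)       # illustrative instantiation; Discriminator is defined in Section 6
imgs = torch.randn(16, 3, 64, 64)  # a batch of (real or fake) 3x64x64 images
probs = netD(imgs).view(-1)        # 16 scalars in (0, 1), thanks to the final Sigmoid
print(probs.shape)                 # torch.Size([16])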

4: Loss Functions and Optimizers

We use binary cross-entropy loss (BCELoss) together with two separate Adam optimizers (learning rate 0.0002, beta1 = 0.5), one for D and one for G; see the code in Section 6.
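A small numeric check (illustrative values only) of how the target label selects which term of the BCE loss is computed:

import torch
import torch.nn as nn

bce = nn.BCELoss()
d_out = torch.tensor([0.9])               # an illustrative discriminator output D(.)
print(bce(d_out, torch.ones(1)).item())   # -log(0.9)  ~= 0.105, the log(D(x)) term
print(bce(d_out, torch.zeros(1)).item())  # -log(0.1)  ~= 2.303, the log(1 - D(x)) term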

5. Results and Visualization

1: Loss vs. Training Iterations

The generator and discriminator loss curves against training iteration count are shown below.

2: Training Progression of Generator G

The generator's output on a fixed noise batch was saved periodically during training, so we can now visualize G's training progression with an animation.

3: Real Images vs. Fake Images

A side-by-side comparison of generated fake images and real images is shown below.

6. Code

Part of the source code follows.

from __future__ import print_function
# %matplotlib inline
import argparse
import os
import random
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms
import torchvision.utils as vutils
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from IPython.display import HTML

# Set random seed for reproducibility
manualSeed = 999
# manualSeed = random.randint(1, 10000) # use if you want new results
print("Random Seed: ", manualSeed)
random.seed(manualSeed)
torch.manual_seed(manualSeed)
# Inputs
# ------
#
# Let’s define some inputs for the run:
#
# - **dataroot** - the path to the root of the dataset folder. We will
#   talk more about the dataset in the next section
# - **workers** - the number of worker threads for loading the data with
#   the DataLoader
# - **batch_size** - the batch size used in training. The DCGAN paper
#   uses a batch size of 128
# - **image_size** - the spatial size of the images used for training.
#   This implementation defaults to 64x64. If another size is desired,
#   the structures of D and G must be changed. See
#   `here <https://github.com/pytorch/examples/issues/70>`__ for more
#   details
# - **nc** - number of color channels in the input images. For color
#   images this is 3
# - **nz** - length of latent vector
# - **ngf** - relates to the depth of feature maps carried through the
#   generator
# - **ndf** - sets the depth of feature maps propagated through the
#   discriminator
# - **num_epochs** - number of training epochs to run. Training for
#   longer will probably lead to better results but will also take much
#   longer
# - **lr** - learning rate for training. As described in the DCGAN paper,
#   this number should be 0.0002
# - **beta1** - beta1 hyperparameter for Adam optimizers. As described in
#   the paper, this number should be 0.5
# - **ngpu** - number of GPUs available. If this is 0, code will run in
#   CPU mode. If this number is greater than 0 it will run on that number
#   of GPUs
# Root directory for dataset
dataroot = r"C:\Users\Admin\Desktop\celeba\img_align_celeba"
# Number of workers for dataloader
workers = 2
# Batch size during training
batch_size = 128
# Spatial size of training images. All images will be resized to this
# size using a transformer.
image_size = 64
# Number of channels in the training images. For color images this is 3
nc = 3
# Size of z latent vector (i.e. size of generator input)
nz = 100
# Size of feature maps in generator
ngf = 64
# Size of feature maps in discriminator
ndf = 64
# Number of training epochs
num_epochs = 5
# Learning rate for optimizers
lr = 0.0002
# Beta1 hyperparam for Adam optimizers
beta1 = 0.5
# Number of GPUs available. Use 0 for CPU mode.
ngpu = 1
# Data
# ----
#
# In this tutorial we will use the `Celeb-A Faces
# dataset <http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html>`__ which can
# be downloaded at the linked site, or in `Google
# Drive <https://drive.google.com/drive/folders/0B7EVK8r0v71pTUZsaXdaSnZBZzg>`__.
# The dataset will download as a file named *img_align_celeba.zip*. Once
# downloaded, create a directory named *celeba* and extract the zip file
# into that directory. Then, set the *dataroot* input for this notebook to
# the *celeba* directory you just created. The resulting directory
# structure should be:
#
# ::
#
#    /path/to/celeba
#        -> img_align_celeba
#            -> 188242.jpg
#            -> 173822.jpg
#            -> 284702.jpg
#            -> 537394.jpg
#               ...
#
# This is an important step because we will be using the ImageFolder
# dataset class, which requires there to be subdirectories in the
# dataset’s root folder. Now, we can create the dataset, create the
# dataloader, set the device to run on, and finally visualize some of the
# training data.
# We can use an image folder dataset the way we have it setup.
# Create the dataset
dataset = dset.ImageFolder(root=dataroot,
                           transform=transforms.Compose([
                               transforms.Resize(image_size),
                               transforms.CenterCrop(image_size),
                               transforms.ToTensor(),
                               transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
                           ]))
# Create the dataloader
dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
                                         shuffle=True, num_workers=workers)
# Decide which device we want to run on
device = torch.device("cuda:0" if (torch.cuda.is_available() and ngpu > 0) else "cpu")
# Plot some training images
real_batch = next(iter(dataloader))
plt.figure(figsize=(8, 8))
plt.axis("off")
plt.title("Training Images")
plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=2, normalize=True).cpu(), (1, 2, 0)))
# Implementation
# --------------
#
# With our input parameters set and the dataset prepared, we can now get
# into the implementation. We will start with the weight initialization
# strategy, then talk about the generator, discriminator, loss functions,
# and training loop in detail.
#
# Weight Initialization
# ~~~~~~~~~~~~~~~~~~~~~
#
# From the DCGAN paper, the authors specify that all model weights shall
# be randomly initialized from a Normal distribution with mean=0,
# stdev=0.02. The ``weights_init`` function takes an initialized model as
# input and reinitializes all convolutional, convolutional-transpose, and
# batch normalization layers to meet these criteria. This function is
# applied to the models immediately after initialization.
# custom weights initialization called on netG and netD
def weights_init(m):
    classname = m.__class__.__name__
    if classname.find('Conv') != -1:
        nn.init.normal_(m.weight.data, 0.0, 0.02)
    elif classname.find('BatchNorm') != -1:
        nn.init.normal_(m.weight.data, 1.0, 0.02)
        nn.init.constant_(m.bias.data, 0)
# Generator
# ~~~~~~~~~
#
# The generator, $G$, is designed to map the latent space vector
# ($z$) to data-space. Since our data are images, converting
# $z$ to data-space means ultimately creating a RGB image with the
# same size as the training images (i.e. 3x64x64). In practice, this is
# accomplished through a series of strided two dimensional convolutional
# transpose layers, each paired with a 2d batch norm layer and a relu
# activation. The output of the generator is fed through a tanh function
# to return it to the input data range of $[-1,1]$. It is worth
# noting the existence of the batch norm functions after the
# conv-transpose layers, as this is a critical contribution of the DCGAN
# paper. These layers help with the flow of gradients during training. An
# image of the generator from the DCGAN paper is shown below.
#
# .. figure:: /_static/img/dcgan_generator.png
#    :alt: dcgan_generator
#
# Notice how the inputs we set in the input section (*nz*, *ngf*, and
# *nc*) influence the generator architecture in code. *nz* is the length
# of the z input vector, *ngf* relates to the size of the feature maps
# that are propagated through the generator, and *nc* is the number of
# channels in the output image (set to 3 for RGB images). Below is the
# code for the generator.
# Generator Code
class Generator(nn.Module):
    def __init__(self, ngpu):
        super(Generator, self).__init__()
        self.ngpu = ngpu
        self.main = nn.Sequential(
            # input is Z, going into a convolution
            nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 8),
            nn.ReLU(True),
            # state size. (ngf*8) x 4 x 4
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 4),
            nn.ReLU(True),
            # state size. (ngf*4) x 8 x 8
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2),
            nn.ReLU(True),
            # state size. (ngf*2) x 16 x 16
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf),
            nn.ReLU(True),
            # state size. (ngf) x 32 x 32
            nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
            nn.Tanh()
            # state size. (nc) x 64 x 64
        )

    def forward(self, input):
        return self.main(input)
# Now, we can instantiate the generator and apply the ``weights_init``
# function. Check out the printed model to see how the generator object is
# structured.

# Create the generator
netG = Generator(ngpu).to(device)
# Handle multi-gpu if desired
if (device.type == 'cuda') and (ngpu > 1):
    netG = nn.DataParallel(netG, list(range(ngpu)))
# Apply the weights_init function to randomly initialize all weights
# to mean=0, stdev=0.02.
netG.apply(weights_init)
# Print the model
print(netG)
# Discriminator
# ~~~~~~~~~~~~~
#
# As mentioned, the discriminator, $D$, is a binary classification
# network that takes an image as input and outputs a scalar probability
# that the input image is real (as opposed to fake). Here, $D$ takes
# a 3x64x64 input image, processes it through a series of Conv2d,
# BatchNorm2d, and LeakyReLU layers, and outputs the final probability
# through a Sigmoid activation function. This architecture can be extended
# with more layers if necessary for the problem, but there is significance
# to the use of the strided convolution, BatchNorm, and LeakyReLUs. The
# DCGAN paper mentions it is a good practice to use strided convolution
# rather than pooling to downsample because it lets the network learn its
# own pooling function. Also batch norm and leaky relu functions promote
# healthy gradient flow which is critical for the learning process of both
# $G$ and $D$.

# Discriminator Code
class Discriminator(nn.Module):
    def __init__(self, ngpu):
        super(Discriminator, self).__init__()
        self.ngpu = ngpu
        self.main = nn.Sequential(
            # input is (nc) x 64 x 64
            nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf) x 32 x 32
            nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 2),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*2) x 16 x 16
            nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 4),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*4) x 8 x 8
            nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 8),
            nn.LeakyReLU(0.2, inplace=True),
            # state size. (ndf*8) x 4 x 4
            nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
            nn.Sigmoid()
        )

    def forward(self, input):
        return self.main(input)
# Now, as with the generator, we can create the discriminator, apply the
# ``weights_init`` function, and print the model’s structure.

# Create the Discriminator
netD = Discriminator(ngpu).to(device)
# Handle multi-gpu if desired
if (device.type == 'cuda') and (ngpu > 1):
    netD = nn.DataParallel(netD, list(range(ngpu)))
# Apply the weights_init function to randomly initialize all weights
# to mean=0, stdev=0.02.
netD.apply(weights_init)
# Print the model
print(netD)
# Loss Functions and Optimizers
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# With $D$ and $G$ setup, we can specify how they learn
# through the loss functions and optimizers. We will use the Binary Cross
# Entropy loss
# (`BCELoss <https://pytorch.org/docs/stable/nn.html#torch.nn.BCELoss>`__)
# function which is defined in PyTorch as:
#
# \begin{align}\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log (1 - x_n) \right]\end{align}
#
# Notice how this function provides the calculation of both log components
# in the objective function (i.e. $log(D(x))$ and
# $log(1-D(G(z)))$). We can specify what part of the BCE equation to
# use with the $y$ input. This is accomplished in the training loop
# which is coming up soon, but it is important to understand how we can
# choose which component we wish to calculate just by changing $y$
# (i.e. GT labels).
#
# Next, we define our real label as 1 and the fake label as 0. These
# labels will be used when calculating the losses of $D$ and
# $G$, and this is also the convention used in the original GAN
# paper. Finally, we set up two separate optimizers, one for $D$ and
# one for $G$. As specified in the DCGAN paper, both are Adam
# optimizers with learning rate 0.0002 and Beta1 = 0.5. For keeping track
# of the generator’s learning progression, we will generate a fixed batch
# of latent vectors that are drawn from a Gaussian distribution
# (i.e. fixed_noise). In the training loop, we will periodically input
# this fixed_noise into $G$, and over the iterations we will see
# images form out of the noise.
# Initialize BCELoss function
criterion = nn.BCELoss()
# Create batch of latent vectors that we will use to visualize
# the progression of the generator
fixed_noise = torch.randn(64, nz, 1, 1, device=device)
# Establish convention for real and fake labels during training
# (floats, so the label tensor built with torch.full matches BCELoss's expected dtype)
real_label = 1.
fake_label = 0.
# Setup Adam optimizers for both G and D
optimizerD = optim.Adam(netD.parameters(), lr=lr, betas=(beta1, 0.999))
optimizerG = optim.Adam(netG.parameters(), lr=lr, betas=(beta1, 0.999))
# Training
# ~~~~~~~~
#
# Finally, now that we have all of the parts of the GAN framework defined,
# we can train it. Be mindful that training GANs is somewhat of an art
# form, as incorrect hyperparameter settings lead to mode collapse with
# little explanation of what went wrong. Here, we will closely follow
# Algorithm 1 from Goodfellow’s paper, while abiding by some of the best
# practices shown in `ganhacks <https://github.com/soumith/ganhacks>`__.
# Namely, we will “construct different mini-batches for real and fake”
# images, and also adjust G’s objective function to maximize
# $logD(G(z))$. Training is split up into two main parts. Part 1
# updates the Discriminator and Part 2 updates the Generator.
#
# **Part 1 - Train the Discriminator**
#
# Recall, the goal of training the discriminator is to maximize the
# probability of correctly classifying a given input as real or fake. In
# terms of Goodfellow, we wish to “update the discriminator by ascending
# its stochastic gradient”. Practically, we want to maximize
# $log(D(x)) + log(1-D(G(z)))$. Due to the separate mini-batch
# suggestion from ganhacks, we will calculate this in two steps. First, we
# will construct a batch of real samples from the training set, forward
# pass through $D$, calculate the loss ($log(D(x))$), then
# calculate the gradients in a backward pass. Secondly, we will construct
# a batch of fake samples with the current generator, forward pass this
# batch through $D$, calculate the loss ($log(1-D(G(z)))$),
# and *accumulate* the gradients with a backward pass. Now, with the
# gradients accumulated from both the all-real and all-fake batches, we
# call a step of the Discriminator’s optimizer.
#
# **Part 2 - Train the Generator**
#
# As stated in the original paper, we want to train the Generator by
# minimizing $log(1-D(G(z)))$ in an effort to generate better fakes.
# As mentioned, this was shown by Goodfellow to not provide sufficient
# gradients, especially early in the learning process. As a fix, we
# instead wish to maximize $log(D(G(z)))$. In the code we accomplish
# this by: classifying the Generator output from Part 1 with the
# Discriminator, computing G’s loss *using real labels as GT*, computing
# G’s gradients in a backward pass, and finally updating G’s parameters
# with an optimizer step. It may seem counter-intuitive to use the real
# labels as GT labels for the loss function, but this allows us to use the
# $log(x)$ part of the BCELoss (rather than the $log(1-x)$
# part) which is exactly what we want.
#
# Finally, we will do some statistic reporting and at the end of each
# epoch we will push our fixed_noise batch through the generator to
# visually track the progress of G’s training. The training statistics
# reported are:
#
# - **Loss_D** - discriminator loss calculated as the sum of losses for
#   the all real and all fake batches ($log(D(x)) + log(1 - D(G(z)))$).
# - **Loss_G** - generator loss calculated as $log(D(G(z)))$
# - **D(x)** - the average output (across the batch) of the discriminator
#   for the all real batch. This should start close to 1 then
#   theoretically converge to 0.5 when G gets better. Think about why
#   this is.
# - **D(G(z))** - average discriminator outputs for the all fake batch.
#   The first number is before D is updated and the second number is
#   after D is updated. These numbers should start near 0 and converge to
#   0.5 as G gets better. Think about why this is.
#
# **Note:** This step might take a while, depending on how many epochs you
# run and if you removed some data from the dataset.
# Training Loop

# Lists to keep track of progress
img_list = []
G_losses = []
D_losses = []
iters = 0

print("Starting Training Loop...")
# For each epoch
for epoch in range(num_epochs):
    # For each batch in the dataloader
    for i, data in enumerate(dataloader, 0):

        ############################
        # (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))
        ###########################
        ## Train with all-real batch
        netD.zero_grad()
        # Format batch
        real_cpu = data[0].to(device)
        b_size = real_cpu.size(0)
        label = torch.full((b_size,), real_label, device=device)
        # Forward pass real batch through D
        output = netD(real_cpu).view(-1)
        # Calculate loss on all-real batch
        errD_real = criterion(output, label)
        # Calculate gradients for D in backward pass
        errD_real.backward()
        D_x = output.mean().item()

        ## Train with all-fake batch
        # Generate batch of latent vectors
        noise = torch.randn(b_size, nz, 1, 1, device=device)
        # Generate fake image batch with G
        fake = netG(noise)
        label.fill_(fake_label)
        # Classify all fake batch with D
        output = netD(fake.detach()).view(-1)
        # Calculate D's loss on the all-fake batch
        errD_fake = criterion(output, label)
        # Calculate the gradients for this batch
        errD_fake.backward()
        D_G_z1 = output.mean().item()
        # Add the gradients from the all-real and all-fake batches
        errD = errD_real + errD_fake
        # Update D
        optimizerD.step()

        ############################
        # (2) Update G network: maximize log(D(G(z)))
        ###########################
        netG.zero_grad()
        label.fill_(real_label)  # fake labels are real for generator cost
        # Since we just updated D, perform another forward pass of all-fake batch through D
        output = netD(fake).view(-1)
        # Calculate G's loss based on this output
        errG = criterion(output, label)
        # Calculate gradients for G
        errG.backward()
        D_G_z2 = output.mean().item()
        # Update G
        optimizerG.step()

        # Output training stats
        if i % 50 == 0:
            print('[%d/%d][%d/%d]\tLoss_D: %.4f\tLoss_G: %.4f\tD(x): %.4f\tD(G(z)): %.4f / %.4f'
                  % (epoch, num_epochs, i, len(dataloader),
                     errD.item(), errG.item(), D_x, D_G_z1, D_G_z2))

        # Save Losses for plotting later
        G_losses.append(errG.item())
        D_losses.append(errD.item())

        # Check how the generator is doing by saving G's output on fixed_noise
        if (iters % 500 == 0) or ((epoch == num_epochs - 1) and (i == len(dataloader) - 1)):
            with torch.no_grad():
                fake = netG(fixed_noise).detach().cpu()
            img_list.append(vutils.make_grid(fake, padding=2, normalize=True))

        iters += 1
# Results
# -------
#
# Finally, let's check out how we did. Here, we will look at three
# different results. First, we will see how D and G’s losses changed
# during training. Second, we will visualize G’s output on the fixed_noise
# batch for every epoch. And third, we will look at a batch of real data
# next to a batch of fake data from G.
#
# **Loss versus training iteration**
#
# Below is a plot of D & G’s losses versus training iterations.

plt.figure(figsize=(10, 5))
plt.title("Generator and Discriminator Loss During Training")
plt.plot(G_losses, label="G")
plt.plot(D_losses, label="D")
plt.xlabel("iterations")
plt.ylabel("Loss")
plt.legend()
plt.show()
# **Visualization of G’s progression**
#
# Remember how we saved the generator’s output on the fixed_noise batch
# after every epoch of training. Now, we can visualize the training
# progression of G with an animation. Press the play button to start the
# animation.

# %%capture
fig = plt.figure(figsize=(8, 8))
plt.axis("off")
ims = [[plt.imshow(np.transpose(i, (1, 2, 0)), animated=True)] for i in img_list]
ani = animation.ArtistAnimation(fig, ims, interval=1000, repeat_delay=1000, blit=True)
HTML(ani.to_jshtml())
# **Real Images vs. Fake Images**
#
# Finally, let's take a look at some real images and fake images side by
# side.

# Grab a batch of real images from the dataloader
real_batch = next(iter(dataloader))

# Plot the real images
plt.figure(figsize=(15, 15))
plt.subplot(1, 2, 1)
plt.axis("off")
plt.title("Real Images")
plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64], padding=5, normalize=True).cpu(), (1, 2, 0)))

# Plot the fake images from the last epoch
plt.subplot(1, 2, 2)
plt.axis("off")
plt.title("Fake Images")
plt.imshow(np.transpose(img_list[-1], (1, 2, 0)))
plt.show()

Creating these posts takes real effort. If you found this helpful, please like, follow, and bookmark~~~
