
【PyTorch][chapter 22][李宏毅深度学习][ WGAN]【实战三】

【PyTorch][chapter 22][李宏毅深度学习][ WGAN]【实战三】

Preface:

      This post walks through two WGAN examples:

     1   A Gaussian mixture model, implemented with WGAN
     2   MNIST handwritten-digit generation, implemented with WGAN

     WGAN is fairly fiddly to train; getting good results requires hand-tuning several hyperparameters:

1:   the dimensionality of the noise vector
2:   the learning rate
3:   the generator and critic (discriminator) architectures
4:   batchsz and num_epochs

Contents:

  1.     Google Colab
  2.     The WGAN loss function
  3.     Gaussian mixture model (WGAN implementation)
  4.     MNIST handwritten digits (WGAN implementation)

1   Google Colab

     1.1  Open Google Drive:
          https://drive.google.com/drive/my-drive

     1.2  Create a new wgan.ipynb notebook and drag the corresponding Python scripts (model.py, main.py) into the same Drive folder.

     1.3  Open Colab:
          https://colab.research.google.com/drive/
          and create a new notebook.

     1.4  Run main.py from the Colab notebook:

from google.colab import drive
import os

# Mount Google Drive, then change into the folder that holds main.py
# (the path below is the one used in the original post; adjust it to your own folder)
drive.mount('/content/drive')
os.chdir('/content/drive/My Drive/wgan.ipynb/')
%run main.py


2   The WGAN loss function

2.1  From the Wasserstein constraint to the WGAN constraint

       WGAN is derived from the Wasserstein distance. In the Kantorovich dual form, the original constraint is

              f(x)+g(y)\leq c(x,y)

       As long as f satisfies a K-Lipschitz constraint, the original constraint can be satisfied.

       Proof sketch (for K = 1, the case WGAN uses): take g=-f and c(x,y)=\|x-y\|. If f is 1-Lipschitz, then f(x)+g(y)=f(x)-f(y)\leq \|x-y\|=c(x,y), so the original constraint holds.

       The Lipschitz constraint itself is hard to enforce directly, so in practice one of two schemes is used to approximate it: weight clipping or a gradient penalty.

  2.2  Weight clipping

       This is an engineering heuristic with no theoretical guarantee.

       The 1-Lipschitz condition (K = 1) is:

              \|f(x)-f(y)\|\leq K\|x-y\|,\quad \forall x,y

       A 1024*1024 grayscale image has 256^{1024*1024} possible states, so there is no way to verify this constraint over every pair of inputs. The early workaround was weight clipping: after each gradient-descent parameter update, every parameter w is clipped as

              w=\begin{cases} c, & \text{if } w>c\\ -c, & \text{if } w<-c\\ w, & \text{otherwise} \end{cases}

       Constraining the range of w constrains the range of the outputs f(x), f(y), and in practice this works reasonably well.
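
       As a concrete illustration (not part of the original post), here is a minimal PyTorch sketch of weight clipping; the tiny critic, the RMSprop optimizer and the stand-in loss are placeholders, the point is only the clamp that follows the update step:

import torch
from torch import nn

# A tiny stand-in critic; any nn.Module is clipped the same way.
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
optim_D = torch.optim.RMSprop(D.parameters(), lr=5e-5)
c = 0.01   # clipping threshold (the value used in the original WGAN paper)

x = torch.randn(16, 2)        # stand-in data batch
loss_D = -D(x).mean()         # stand-in critic loss
optim_D.zero_grad()
loss_D.backward()
optim_D.step()

# Weight clipping: after the update, force every parameter into [-c, c].
with torch.no_grad():
    for p in D.parameters():
        p.clamp_(-c, c)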

  2.3  Gradient penalty

       This too is an engineering heuristic rather than a strict theoretical result.

       The problem with weight clipping: it makes vanishing or exploding gradients very easy to trigger. The critic is a multi-layer network; if the clipping threshold is set slightly too small, the gradient shrinks a little at every layer and decays exponentially across layers, and if it is set slightly too large, the gradient grows a little at every layer and explodes exponentially. Only a threshold that is neither too large nor too small gives the generator a well-scaled gradient to learn from, and in practice that sweet spot can be quite narrow, which makes tuning painful. The gradient penalty avoids this by softly encouraging the critic's gradient norm to stay close to 1 at points sampled between the real and generated distributions.
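
       For reference, the critic objective with gradient penalty that both implementations below minimise can be written (lambda is the penalty weight, \hat{x} is sampled uniformly on the segment between a real sample x and a generated sample \tilde{x}) as

              L_D=\underset{\tilde{x}\sim P_g}{\mathbb{E}}[D(\tilde{x})]-\underset{x\sim P_r}{\mathbb{E}}[D(x)]+\lambda\,\mathbb{E}_{\hat{x}}\big[(\lVert\nabla_{\hat{x}}D(\hat{x})\rVert_2-1)^2\big],\qquad \hat{x}=t\,x+(1-t)\,\tilde{x},\ t\sim U[0,1]

       and the generator objective as L_G=-\mathbb{E}_{\tilde{x}\sim P_g}[D(\tilde{x})]. In the code below, lossr + lossf + gp (section 3) and get_crit_loss (section 4) are exactly these three terms.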


3   Gaussian mixture model (WGAN implementation)

  3.1  Model code

     model.py

# -*- coding: utf-8 -*-
"""
Created on Tue Mar 19 10:50:31 2024
@author: chengxf2
"""
import torch
from torch import nn
import random   # Python's built-in random module
from torchsummary import summary


class Generator(nn.Module):

    def __init__(self, z_dim=2, h_dim=400):
        super(Generator, self).__init__()
        # z: [batch, z_dim]
        self.net = nn.Sequential(
            nn.Linear(z_dim, h_dim),
            nn.ReLU(True),
            nn.Linear(h_dim, h_dim),
            nn.ReLU(True),
            nn.Linear(h_dim, 2)
        )

    def forward(self, z):
        # print("\n input.shape", z.shape)
        output = self.net(z)
        return output


class Discriminator(nn.Module):

    def __init__(self, input_dim, h_dim):
        super(Discriminator, self).__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, h_dim),
            nn.ReLU(True),
            nn.Linear(h_dim, h_dim),
            nn.ReLU(True),
            nn.Linear(h_dim, h_dim),
            nn.ReLU(True),
            nn.Linear(h_dim, 1),
            nn.Tanh()   # bounds the critic output to (-1, 1)
        )

    def forward(self, x):
        out = self.net(x)
        return out.view(-1)


def model_summary():
    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
    # input_size is [channel, w, h]
    # summary(model=net, input_size=(3,32,32), batch_size=2, device="cpu")
    summary(Generator(2, 100).to(device), (1, 2,), batch_size=5)
    print(Generator(2, 100))
    print("\n Discriminator")
    summary(Discriminator(2, 100).to(device), (2, 2))
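
     A quick way to sanity-check the two networks before training (this snippet is not in the original post; it assumes model.py above is importable):

import torch
from model import Generator, Discriminator

G = Generator(z_dim=2, h_dim=400)
D = Discriminator(input_dim=2, h_dim=400)

z = torch.randn(8, 2)      # a batch of 8 two-dimensional noise vectors
x_fake = G(z)              # -> torch.Size([8, 2]): points in the plane
score = D(x_fake)          # -> torch.Size([8]):    one critic score per point
print(x_fake.shape, score.shape)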

  3.2  Training code

     main.py

# -*- coding: utf-8 -*-
"""
Created on Tue Mar 19 11:06:37 2024
@author: chengxf2
"""
import torch
from torch import autograd, optim
import numpy as np
import visdom
import random
import matplotlib.pyplot as plt
from model import Generator, Discriminator

batchsz = 512
H_dim = 400
viz = visdom.Visdom()
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')


def data_generator():
    # centers of the 8 Gaussians
    scale = 2
    centers = [
        (1, 0),
        (-1, 0),
        (0, 1),
        (0, -1),
        (1. / np.sqrt(2), 1. / np.sqrt(2)),
        (1. / np.sqrt(2), -1. / np.sqrt(2)),
        (-1. / np.sqrt(2), 1. / np.sqrt(2)),
        (-1. / np.sqrt(2), -1. / np.sqrt(2))
    ]
    # scale the centers up
    centers = [(scale * x, scale * y) for x, y in centers]
    while True:
        dataset = []
        for i in range(batchsz):
            # sample a point near the origin
            point = np.random.randn(2) * 0.02
            # pick one of the Gaussians at random
            center = random.choice(centers)
            # noise + chosen center (x, y)
            point[0] += center[0]
            point[1] += center[1]
            dataset.append(point)
        dataset = np.array(dataset).astype(np.float32)
        dataset /= 1.414
        yield dataset


def generate_image(D, G, xr, epoch):
    """
    Generates and saves a plot of the true distribution, the generator, and the
    critic.
    """
    N_POINTS = 128
    RANGE = 3
    plt.clf()

    points = np.zeros((N_POINTS, N_POINTS, 2), dtype='float32')
    points[:, :, 0] = np.linspace(-RANGE, RANGE, N_POINTS)[:, None]
    points[:, :, 1] = np.linspace(-RANGE, RANGE, N_POINTS)[None, :]
    points = points.reshape((-1, 2))   # (16384, 2)

    # draw the critic's contour map
    with torch.no_grad():
        points = torch.Tensor(points).to(device)   # [16384, 2]
        disc_map = D(points).cpu().numpy()          # [16384]
    x = y = np.linspace(-RANGE, RANGE, N_POINTS)
    cs = plt.contour(x, y, disc_map.reshape((len(x), len(y))).transpose())
    plt.clabel(cs, inline=1, fontsize=10)
    plt.colorbar()

    # draw real and generated samples
    with torch.no_grad():
        z = torch.randn(batchsz, 2).to(device)      # [b, 2]
        samples = G(z).cpu().numpy()                # [b, 2]
    plt.scatter(xr[:, 0], xr[:, 1], c='green', marker='.')
    plt.scatter(samples[:, 0], samples[:, 1], c='red', marker='+')

    viz.matplot(plt, win='contour', opts=dict(title='p(x):%d' % epoch))


def gradient_penalty(D, xr, xf):
    LAMBDA = 0.2
    t = torch.rand(batchsz, 1).to(device)
    # [b, 1] => [b, 2]
    t = t.expand_as(xf)
    # interpolation between real and fake samples
    mid = t * xr + (1 - t) * xf
    # we need gradients with respect to mid
    mid.requires_grad_()
    pred = D(mid)
    grad = autograd.grad(outputs=pred,
                         inputs=mid,
                         grad_outputs=torch.ones_like(pred),
                         create_graph=True,
                         retain_graph=True,
                         only_inputs=True)[0]
    gp = torch.pow((grad.norm(2, dim=1) - 1), 2).mean()
    return gp * LAMBDA


def gen():
    # load the trained generator and sample from it
    z_dim = 2
    h_dim = 400
    model = Generator(z_dim, h_dim).to(device)
    state_dict = torch.load('Generator.pt', map_location=device)
    model.load_state_dict(state_dict)
    model.eval()
    z = torch.randn(batchsz, 2).to(device)
    xf = model(z)
    print(xf)


def main():
    z_dim = 2
    h_dim = 400
    input_dim = 2
    np.random.seed(23)
    num_epochs = 2
    data_iter = data_generator()
    torch.manual_seed(23)

    G = Generator(z_dim, h_dim).to(device)
    D = Discriminator(input_dim, h_dim).to(device)
    # print(G)
    # print(D)
    optim_G = optim.Adam(G.parameters(), lr=5e-4, betas=(0.5, 0.9))
    optim_D = optim.Adam(D.parameters(), lr=5e-4, betas=(0.5, 0.9))

    viz.line([[0, 0]], [0], win='loss', opts=dict(title='loss',
                                                  legend=['D', 'G']))
    for epoch in range(num_epochs):

        # 1. train the critic (discriminator) first
        for _ in range(5):
            # 1.1 train on real data
            x = next(data_iter)
            xr = torch.from_numpy(x).to(device)
            # [batch_size, 2] => [batch, 1]
            predr = D(xr)
            # maximize predr <=> minimize lossr
            lossr = -predr.mean()

            # 1.2 train on fake data
            z = torch.randn(batchsz, 2).to(device)
            xf = G(z).detach()   # detach() plays the role of tf.stop_gradient()
            predf = D(xf)
            lossf = predf.mean()

            # 1.3 gradient penalty
            gp = gradient_penalty(D, xr, xf.detach())

            # 1.4 aggregate all terms
            loss_D = lossr + lossf + gp

            # 1.5 optimize
            optim_D.zero_grad()
            loss_D.backward()
            optim_D.step()

        # 2. then train the generator
        z = torch.randn(batchsz, 2).to(device)
        xf = G(z)
        predf = D(xf)
        loss_G = -predf.mean()

        # optimize
        optim_G.zero_grad()
        loss_G.backward()
        optim_G.step()

        if epoch % 100 == 0:
            viz.line([[loss_D.item(), loss_G.item()]], [epoch], win='loss', update='append')
            print(f"loss_D {loss_D.item()} \t loss_G {loss_G.item()}")
            generate_image(D, G, x, epoch)

    print("\n train end")
    # save only the model parameters (reloaded later in gen())
    torch.save(G.state_dict(), 'Generator.pt')
    torch.save(D.state_dict(), 'Discriminator.pt')


if __name__ == "__main__":
    main()
    # 2: reload the saved generator and sample from it
    # http://www.manongjc.com/detail/42-hvxyfyduytmpwzz.html
    gen()
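
     A note on running it: the script assumes a visdom server is already up, otherwise viz = visdom.Visdom() cannot connect. In a local setup you would typically start the server first (python -m visdom.server, which serves on http://localhost:8097 by default) in one terminal, run python main.py in another, and watch the loss curves and the contour plot in the browser. In Colab the launch differs (a tunnel or an alternative logger is usually needed), so treat this as a sketch of the local workflow rather than a Colab recipe.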


4   MNIST handwritten digits (WGAN implementation)

    4.1  Model code

          model.py

# -*- coding: utf-8 -*-
"""
Created on Mon Mar 18 10:19:26 2024
@author: chengxf2
"""
import torch
import torch.nn as nn
from torchsummary import summary


class Generator(nn.Module):

    def __init__(self, z_dim=10, im_chan=1, hidden_dim=64):
        super(Generator, self).__init__()
        self.z_dim = z_dim
        self.gen = nn.Sequential(
            self.layer1(z_dim, hidden_dim * 4, kernel_size=3, stride=2),
            self.layer1(hidden_dim * 4, hidden_dim * 2, kernel_size=4, stride=1),
            self.layer1(hidden_dim * 2, hidden_dim, kernel_size=3, stride=2),
            self.layer2(hidden_dim, im_chan, kernel_size=4, stride=2))

    def layer1(self, input_channel, output_channel, kernel_size, stride=1, padding=0):
        # inplace=True computes the activation in place, saving memory
        return nn.Sequential(
            nn.ConvTranspose2d(input_channel, output_channel, kernel_size, stride, padding),
            nn.BatchNorm2d(output_channel),
            nn.ReLU(inplace=True),
        )

    def layer2(self, input_channel, output_channel, kernel_size, stride=1, padding=0):
        # tanh bounds the output to (-1, 1)
        return nn.Sequential(
            nn.ConvTranspose2d(input_channel, output_channel, kernel_size, stride, padding),
            nn.Tanh()
        )

    def forward(self, noise):
        '''
        Parameters
        ----------
        noise : [batch, z_dim]

        Returns
        -------
        generated images, [batch, channel, width, height]
        '''
        x = noise.view(len(noise), self.z_dim, 1, 1)
        return self.gen(x)


class Discriminator(nn.Module):

    def __init__(self, im_chan=1, hidden_dim=16):
        super(Discriminator, self).__init__()
        self.disc = nn.Sequential(
            self.block1(im_chan, hidden_dim * 4, kernel_size=4, stride=2),
            self.block1(hidden_dim * 4, hidden_dim * 8, kernel_size=4, stride=2),
            self.block2(hidden_dim * 8, 1, kernel_size=4, stride=2),
        )

    def block1(self, input_channel, output_channel, kernel_size, stride=1, padding=0):
        return nn.Sequential(
            nn.Conv2d(input_channel, output_channel, kernel_size, stride, padding),
            nn.BatchNorm2d(output_channel),
            nn.LeakyReLU(0.2, inplace=True)
        )

    def block2(self, input_channel, output_channel, kernel_size, stride=1, padding=0):
        return nn.Sequential(
            nn.Conv2d(input_channel, output_channel, kernel_size, stride, padding),
        )

    def forward(self, image):
        return self.disc(image)


def model_summary():
    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
    summary(Generator(100).to(device), (100,))
    print(Generator(100))
    print("\n Discriminator")
    summary(Discriminator().to(device), (1, 28, 28))


if __name__ == "__main__":
    # guard the summary so it does not run when main.py imports this module
    model_summary()
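
     The spatial sizes can be checked against the transposed-convolution formula out = (in - 1)*stride - 2*padding + kernel_size: starting from a [batch, z_dim, 1, 1] noise tensor the generator goes 1 -> 3 -> 6 -> 13 -> 28, so it emits 28x28 MNIST-sized images, and the critic maps 28 -> 13 -> 5 -> 1. A small check (not in the original post; it assumes model.py above is importable):

import torch
from model import Generator, Discriminator

G = Generator(z_dim=32)    # 1x1 -> 3x3 -> 6x6 -> 13x13 -> 28x28
D = Discriminator()        # 28x28 -> 13x13 -> 5x5 -> 1x1

z = torch.randn(4, 32)     # 4 noise vectors, z_dim=32 as in the training script
img = G(z)                 # -> torch.Size([4, 1, 28, 28])
score = D(img)             # -> torch.Size([4, 1, 1, 1])
print(img.shape, score.shape)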

    4.2  Training code

         main.py

# -*- coding: utf-8 -*-
"""
Created on Mon Mar 18 10:37:21 2024
@author: chengxf2
"""
import time
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision.utils import make_grid
from model import Generator
from model import Discriminator


def get_noise(n_samples, z_dim, device='cpu'):
    return torch.randn(n_samples, z_dim, device=device)


def weights_init(m):
    if isinstance(m, nn.Conv2d) or isinstance(m, nn.ConvTranspose2d):
        torch.nn.init.normal_(m.weight, 0.0, 0.02)
    if isinstance(m, nn.BatchNorm2d):
        torch.nn.init.normal_(m.weight, 0.0, 0.02)
        torch.nn.init.constant_(m.bias, 0)


def gradient_penalty(gradient):
    # gradient penalty: (||grad||_2 - 1)^2 averaged over the batch
    gradient = gradient.view(len(gradient), -1)
    gradient_norm = gradient.norm(2, dim=1)
    penalty = torch.mean((gradient_norm - 1) ** 2)
    return penalty


def get_gen_loss(crit_fake_pred):
    # generator loss
    gen_loss = -1. * torch.mean(crit_fake_pred)
    return gen_loss


def get_crit_loss(crit_fake_pred, crit_real_pred, gp, c_lambda):
    # critic loss; the sign is flipped so the objective can be minimised by gradient descent
    crit_loss = torch.mean(crit_fake_pred) - torch.mean(crit_real_pred) + c_lambda * gp
    return crit_loss


def get_gradient(crit, real, fake, epsilon):
    # interpolate between real and fake images at a random point
    mixed_images = real * epsilon + fake * (1 - epsilon)
    mixed_scores = crit(mixed_images)
    gradient = torch.autograd.grad(
        inputs=mixed_images,
        outputs=mixed_scores,
        grad_outputs=torch.ones_like(mixed_scores),
        create_graph=True,
        retain_graph=True,
    )[0]
    return gradient


def show_new_gen_images(tensor_img, num_img=25):
    tensor_img = (tensor_img + 1) / 2
    unflat_img = tensor_img.detach().cpu()
    img_grid = make_grid(unflat_img[:num_img], nrow=5)
    plt.imshow(img_grid.permute(1, 2, 0).squeeze(), cmap='gray')
    plt.title("gen image")
    plt.show()


def show_tensor_images(image_tensor, num_images=25, size=(1, 28, 28), show_fig=False, epoch=0):
    # the generator outputs values in [-1, 1]
    # image_tensor = (image_tensor + 1) / 2
    image_unflat = image_tensor.detach().cpu().view(-1, *size)
    image_grid = make_grid(image_unflat[:num_images], nrow=5)
    plt.axis('off')
    label = f"Epoch: {epoch}"
    plt.title(label)
    plt.imshow(image_grid.permute(1, 2, 0).squeeze())
    # if show_fig:
    #     plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
    plt.show()


def show_loss(G_mean_losses, C_mean_losses):
    plt.figure(figsize=(10, 5))
    plt.title("Generator and Discriminator Loss During Training")
    plt.plot(G_mean_losses, label="G-Loss")
    plt.plot(C_mean_losses, label="C-Loss")
    plt.xlabel("iterations")
    plt.ylabel("Loss")
    plt.legend()
    plt.show()


def train():
    z_dim = 32
    batch_size = 128
    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
    lr = 1e-4
    beta_1 = 0.0
    beta_2 = 0.9

    # MNIST dataset load
    print("\n init 1: MNIST Dataset Load ")
    fixed_noise = get_noise(batch_size, z_dim, device=device)
    train_transform = transforms.Compose([transforms.ToTensor(), ])
    dataloader = DataLoader(MNIST('.', download=True, transform=train_transform),
                            batch_size=batch_size,
                            shuffle=True)

    print("\n init2: Loaded Data Visualization")
    start = time.time()
    dataiter = iter(dataloader)
    images, labels = next(dataiter)
    print('Time is {} sec'.format(time.time() - start))

    plt.figure(figsize=(8, 8))
    plt.axis("off")
    plt.title("Training Images")
    plt.imshow(np.transpose(make_grid(images.to(device), padding=2, normalize=True).cpu(), (1, 2, 0)))
    plt.show()
    print('Shape of loading one batch:', images.shape)
    print('Total no. of batches present in trainloader:', len(dataloader))

    # models and optimizers
    gen = Generator(z_dim).to(device)
    gen_opt = torch.optim.Adam(gen.parameters(), lr=lr, betas=(beta_1, beta_2))
    crit = Discriminator().to(device)
    crit_opt = torch.optim.Adam(crit.parameters(), lr=lr, betas=(beta_1, beta_2))
    gen = gen.apply(weights_init)
    crit = crit.apply(weights_init)

    print("\n -------- train ------------")
    n_epochs = 10
    cur_step = 0
    total_steps = 0
    start_time = time.time()
    generator_losses = []
    Discriminator_losses = []
    C_mean_losses = []
    G_mean_losses = []
    c_lambda = 10
    crit_repeats = 5

    for epoch in range(n_epochs):
        cur_step = 0
        start = time.time()
        for real, _ in dataloader:
            cur_batch_size = len(real)
            real = real.to(device)
            mean_iteration_Discriminator_loss = 0

            ### Update the critic ###
            for _ in range(crit_repeats):
                crit_opt.zero_grad()
                fake_noise = get_noise(cur_batch_size, z_dim, device=device)
                fake = gen(fake_noise)
                crit_fake_pred = crit(fake.detach())
                crit_real_pred = crit(real)

                epsilon = torch.rand(len(real), 1, 1, 1, device=device, requires_grad=True)
                gradient = get_gradient(crit, real, fake.detach(), epsilon)
                gp = gradient_penalty(gradient)
                crit_loss = get_crit_loss(crit_fake_pred, crit_real_pred, gp, c_lambda)

                # Keep track of the average critic loss in this batch
                mean_iteration_Discriminator_loss += crit_loss.item() / crit_repeats
                # Update gradients
                crit_loss.backward(retain_graph=True)
                # Update optimizer
                crit_opt.step()
            Discriminator_losses += [mean_iteration_Discriminator_loss]

            ### Update the generator ###
            gen_opt.zero_grad()
            fake_noise_2 = get_noise(cur_batch_size, z_dim, device=device)
            fake_2 = gen(fake_noise_2)
            crit_fake_pred = crit(fake_2)
            gen_loss = get_gen_loss(crit_fake_pred)
            gen_loss.backward()
            # Update the weights
            gen_opt.step()

            # Keep track of the average generator loss
            generator_losses += [gen_loss.item()]
            cur_step += 1
            total_steps += 1

            print_val = f"Epoch: {epoch}/{n_epochs} Steps:{cur_step}/{len(dataloader)}\t"
            print_val += f"Epoch_Run_Time: {(time.time()-start):.6f}\t"
            print_val += f"Loss_C : {mean_iteration_Discriminator_loss:.6f}\t"
            print_val += f"Loss_G : {gen_loss:.6f}\t"
            print(print_val, end='\r', flush=True)

        # per-epoch statistics
        gen_mean = sum(generator_losses[-cur_step:]) / cur_step
        crit_mean = sum(Discriminator_losses[-cur_step:]) / cur_step
        C_mean_losses.append(crit_mean)
        G_mean_losses.append(gen_mean)

        print_val = f"Epoch: {epoch}/{n_epochs} Total Steps:{total_steps}\t"
        print_val += f"Total_Time : {(time.time() - start_time):.6f}\t"
        print_val += f"Loss_C : {mean_iteration_Discriminator_loss:.6f}\t"
        print_val += f"Loss_G : {gen_loss:.6f}\t"
        print_val += f"Loss_C_Mean : {crit_mean:.6f}\t"
        print_val += f"Loss_G_Mean : {gen_mean:.6f}\t"
        print(print_val)

        fake_noise = fixed_noise
        fake = gen(fake_noise)
        show_tensor_images(fake, show_fig=True, epoch=epoch)
        cur_step = 0

    print("\n----- training finished --------------")
    num_image = 25
    noise = get_noise(num_image, z_dim, device=device)
    # eval() switches BatchNorm/Dropout to inference mode
    gen.eval()
    crit.eval()
    with torch.no_grad():
        fake_img = gen(noise)
    show_new_gen_images(fake_img.reshape(num_image, 1, 28, 28))


if __name__ == "__main__":
    train()
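
     Note that train() only displays images and never saves the networks. If you want to reuse the trained generator the way the Gaussian-mixture example does, a small hypothetical addition would work (the function names and file name below are mine, not from the original post, and train() would need to return gen, which it currently does not):

import torch
from model import Generator

def save_generator(gen, path='wgan_mnist_G.pt'):
    # persist only the parameters, as in the GMM example
    torch.save(gen.state_dict(), path)

def load_generator(z_dim=32, path='wgan_mnist_G.pt', device='cpu'):
    # rebuild the architecture, then load the saved weights
    gen = Generator(z_dim).to(device)
    gen.load_state_dict(torch.load(path, map_location=device))
    gen.eval()
    return gen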

References:

  1. PyTorch-Wasserstein GAN (WGAN) | Kaggle
  2. WGAN模型——pytorch实现 (CSDN blog)
  3. WGAN (bilibili)
  4. 李宏毅【機器學習2021】生成式對抗網路 (Generative Adversarial Network, GAN) (中) – 理論介紹與WGAN (bilibili)
  5. 课时12 WGAN-GP实战 (bilibili)
