
VI. Generative Adversarial Networks (GAN) for Handwritten Digits


1. Background of GANs

Generally speaking, deep learning models can be divided into discriminative models and generative models. Discriminative models have developed well, whereas generative models are harder to build and progressed slowly. It was only with the invention of the GAN that generative models began to flourish; other generative models include Diffusion Models and VAEs (variational autoencoders). A GAN consists of two parts, a generator G and a discriminator D. For a detailed introduction, see: 图解 生成对抗网络GAN 原理 超详解_生成对抗网络gan图解-CSDN博客

2. GAN Architecture

The generator takes random noise as input and outputs a digit image, which the discriminator then classifies as real or fake. The two networks learn by competing against each other and eventually reach a balance (a Nash equilibrium).
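For reference, this adversarial game is usually written as the minimax objective from the original GAN paper (stated here for context, not derived in this post):

\[
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
\]

The discriminator D pushes D(x) towards 1 for real images and D(G(z)) towards 0 for generated ones, while the generator G tries to make D(G(z)) large; at the optimum the distribution of generated samples matches the data distribution.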

3. Imports and the GAN Class

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sys
import os
import pathlib
import librosa
import librosa.display
from tqdm import tnrange, notebook
from tensorflow import keras
import tensorflow as tf
from keras import layers, datasets, Sequential, Model, optimizers
from keras.layers import LeakyReLU, Dense, Input, BatchNormalization, Flatten, Reshape
import imageio.v2 as imageio
class GAN():
    def __init__(self):
        self.img_rows = 28
        self.img_cols = 28
        self.channels = 1
        self.img_shape = (self.img_rows, self.img_cols, self.channels)
        self.latent_dim = 100
        # All of the lines below were needed for this to run, for reasons I don't
        # fully understand; only the second line kept raising an error
        optimizer = tf.keras.optimizers.Adam(1e-4)
        self.discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)
        # Build and compile the discriminator
        self.discriminator = self.build_discriminator()
        self.variables = self.discriminator.trainable_variables
        self.discriminator_optimizer.build(self.variables)
        self.discriminator.compile(loss='binary_crossentropy',
                                   optimizer=self.discriminator_optimizer,
                                   metrics=['accuracy'])
        # Build the generator
        self.generator = self.build_generator()
        # The generator takes noise as input and generates imgs
        z = Input(shape=(self.latent_dim,))
        img = self.generator(z)
        # For the combined model we will only train the generator
        self.discriminator.trainable = False
        # The discriminator takes generated images as input and determines validity
        validity = self.discriminator(img)
        # Define the combined model, stacking the generator and the discriminator:
        # input is the latent vector z, output is the discriminator's decision validity
        self.combined = Model(z, validity)
        self.combined.compile(loss='binary_crossentropy', optimizer=optimizer)
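A note on the optimizer workaround above: in recent TensorFlow/Keras versions the optimizer implementation changed, and explicitly calling optimizer.build(...) on the discriminator's variables before compile is one way to sidestep errors about variables not yet being built. If it still fails on your setup, one commonly suggested alternative (an assumption on my part, not verified here) is the legacy optimizer class:

# Possible alternative inside __init__ (assumes roughly TF 2.11-2.15, where the
# legacy optimizer namespace exists); replaces the two Adam(1e-4) lines above
optimizer = tf.keras.optimizers.legacy.Adam(1e-4)
self.discriminator_optimizer = tf.keras.optimizers.legacy.Adam(1e-4)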

4. Building the Generator

def build_generator(self):
    model = Sequential()
    model.add(Dense(256, input_dim=self.latent_dim))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Dense(512))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Dense(1024))
    model.add(LeakyReLU(alpha=0.2))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Dense(np.prod(self.img_shape), activation='tanh'))
    model.add(Reshape(self.img_shape))
    model.summary()
    noise = Input(shape=(self.latent_dim,))
    img = model(noise)
    return Model(noise, img)

The generator is made up of several fully connected (Dense) layers, each followed by a LeakyReLU activation and BatchNormalization; the final Dense layer uses tanh and is reshaped to the 28x28x1 image shape, so the output pixel values lie in [-1, 1].
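A quick sanity check of the generator's output shape and value range (a small sketch, not part of the original post; it assumes the full GAN class, including the discriminator method from the next section, has already been defined):

# Hypothetical sanity check of the generator
gan = GAN()
noise = np.random.normal(0, 1, (1, gan.latent_dim))
img = gan.generator.predict(noise)
print(img.shape)             # expected: (1, 28, 28, 1)
print(img.min(), img.max())  # tanh keeps pixel values inside [-1, 1]

This [-1, 1] range is also why the MNIST images are rescaled with X_train / 127.5 - 1. in the training code below.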

5. Building the Discriminator

def build_discriminator(self):
    model = Sequential()
    model.add(Flatten(input_shape=self.img_shape))
    model.add(Dense(512))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dense(256))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dense(1, activation='sigmoid'))
    model.summary()
    img = Input(shape=self.img_shape)
    validity = model(img)
    return Model(img, validity)

build_discriminator returns a Model that maps an input image img to the scalar validity score; because of the sigmoid output, validity can be read as the probability that the image is real.
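Analogously, a small check of the discriminator (again a sketch assuming the full GAN class is defined, not part of the original post):

# Hypothetical check of the discriminator
gan = GAN()
fake_img = np.random.uniform(-1, 1, (1, 28, 28, 1)).astype("float32")
score = gan.discriminator.predict(fake_img)
print(score.shape)  # expected: (1, 1)
print(score)        # a value in (0, 1): the estimated probability that the image is real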

6. Training Process

def train(self, epochs, batch_size=128, sample_interval=50):
    # Load the MNIST dataset
    (X_train, _), (_, _) = datasets.mnist.load_data()
    print("Shape of the dataset:", X_train.shape)
    # Rescale -1 to 1
    X_train = X_train / 127.5 - 1.
    X_train = np.expand_dims(X_train, axis=3)
    # Adversarial ground truths
    valid = np.ones((batch_size, 1))
    fake = np.zeros((batch_size, 1))
    for epoch in range(epochs):
        # ---------------------
        #  Train Discriminator
        # ---------------------
        # Select a random batch of images
        idx = np.random.randint(0, X_train.shape[0], batch_size)
        imgs = X_train[idx]
        noise = np.random.normal(0, 1, (batch_size, self.latent_dim))
        # Generate a batch of new images
        gen_imgs = self.generator.predict(noise)
        # Train the discriminator
        d_loss_real = self.discriminator.train_on_batch(imgs, valid)
        d_loss_fake = self.discriminator.train_on_batch(gen_imgs, fake)
        d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
        # ---------------------
        #  Train Generator
        # ---------------------
        noise = np.random.normal(0, 1, (batch_size, self.latent_dim))
        # Train the generator (to have the discriminator label samples as valid)
        g_loss = self.combined.train_on_batch(noise, valid)
        # Plot the progress
        print("%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" % (epoch, d_loss[0], 100 * d_loss[1], g_loss))
        # If at save interval => save generated image samples
        if epoch % sample_interval == 0:
            self.sample_images(epoch)

In the training loop, the images are first rescaled to [-1, 1] and the real/fake labels are defined, then D and G are trained alternately. For D, a batch of real images and a batch of images generated from noise are used to compute d_loss_real and d_loss_fake respectively, and the total loss is the average of the two. D's goal is to tell real images from generated ones, so minimizing this loss makes the discriminator as good as possible. Then the combined model of G and D is trained; the discriminator is frozen there (trainable = False), so only the generator's weights are updated. It is fed noise together with the valid labels, so the generator learns to produce images that the discriminator labels as real.
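For intuition, the generator update done by combined.train_on_batch is roughly equivalent to the following tf.GradientTape sketch (an illustration only, with a hypothetical standalone optimizer; the post itself relies on the compiled combined model):

# Rough equivalent of one generator step, for illustration
bce = tf.keras.losses.BinaryCrossentropy()
g_optimizer = tf.keras.optimizers.Adam(1e-4)

def generator_step(gan, batch_size=32):
    noise = tf.random.normal((batch_size, gan.latent_dim))
    with tf.GradientTape() as tape:
        gen_imgs = gan.generator(noise, training=True)
        validity = gan.discriminator(gen_imgs, training=False)
        # Non-saturating generator loss: push D(G(z)) towards the "real" label 1
        g_loss = bce(tf.ones_like(validity), validity)
    # Only the generator's variables receive gradients, which is exactly what
    # freezing the discriminator in the combined model achieves
    grads = tape.gradient(g_loss, gan.generator.trainable_variables)
    g_optimizer.apply_gradients(zip(grads, gan.generator.trainable_variables))
    return g_loss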

7. Image Output

def sample_images(self, epoch):
    r, c = 5, 5
    noise = np.random.normal(0, 1, (r * c, self.latent_dim))
    gen_imgs = self.generator.predict(noise)
    # Rescale images 0 - 1
    gen_imgs = 0.5 * gen_imgs + 0.5
    fig, axs = plt.subplots(r, c)
    cnt = 0
    for i in range(r):
        for j in range(c):
            axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
            axs[i, j].axis('off')
            cnt += 1
    fig.savefig("你的路径/%d.png" % epoch)  # replace "你的路径" with your own output directory
    plt.close()
gan = GAN()
gan.train(epochs=30000, batch_size=32, sample_interval=200)

With epochs=30000, here are the samples saved during training.

At the start:

At the end:

The results are quite good. Next I would like to try generating audio data.
