
GAN, CGAN and DCGAN on MNIST: Source Code and Training Results Compared


    When I started teaching myself artificial neural networks, I experimented with ANN, CNN and RNN, and later wrote code for computer-vision object detection, voiceprint recognition, and reinforcement learning for Gomoku. I also built a real project, a voucher seal-and-signature inspection system (《凭证印章签字检查系统》), used for batch checking of bank accounting vouchers before archiving, which was successfully deployed.

    But I had never tried generative adversarial networks. Recently I gathered some source code and experimented with it. Here are my notes, along with the code and training results. Comments and corrections are welcome.

Results first:

A GAN trained for 30,000 iterations with a batch size of 32; both the generator and the discriminator are fully connected:

The images are not very sharp and contain white speckle noise, but the digits are mostly recognizable.

The CGAN was trained for only 2,000 iterations. It uses the same network as the GAN, just with labels added. The digits were roughly being generated in label order, so I stopped training there, because the DCGAN is where the real fun is:
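The label conditioning mentioned above boils down to appending a learned label vector to the noise before it enters the generator. A minimal numpy sketch of just that input construction (the variable names here are illustrative; a one-hot code stands in for the learned `Embedding(10, 10)` lookup used in the CGAN code below):

```python
import numpy as np

latent_dim = 100   # noise dimension, as in the models below
num_classes = 10   # MNIST digits
batch_size = 32

# Noise batch, same as np.random.normal(0, 1, (batch_size, latent_dim)) below
noise = np.random.normal(0, 1, (batch_size, latent_dim))

# Stand-in for the 10-dimensional embedding lookup: a one-hot label code
labels = np.random.randint(0, num_classes, size=batch_size)
label_code = np.eye(num_classes)[labels]          # shape (32, 10)

# The conditioned generator input: noise and label code side by side
gen_input = np.concatenate([noise, label_code], axis=1)
print(gen_input.shape)  # (32, 110) -> matches input_dim = 100 + 10 in the CGAN generator
```

This is why the conditional generators below declare `input_dim=110`: 100 noise dimensions plus a 10-dimensional label code.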

The DCGAN tutorials online recommend: use LeakyReLU for all activations, Conv2DTranspose for all upsampling, strided Conv2D (stride 2) for all downsampling, tanh for the generator output and sigmoid for the discriminator output:
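The stride-2 recipe above also fixes the feature-map sizes. Assuming `padding='same'`, Keras's Conv2DTranspose multiplies the spatial size by the stride, and a strided Conv2D divides it (rounding up), which is exactly how the generators below go from a 7x7 map to a 28x28 MNIST image. A small arithmetic check (the helper names are mine, not Keras API):

```python
import math

def conv2d_transpose_same(size, stride):
    # Keras Conv2DTranspose with padding='same': output = input * stride
    return size * stride

def conv2d_same(size, stride):
    # Keras Conv2D with padding='same': output = ceil(input / stride)
    return math.ceil(size / stride)

# The DCGAN generators below start from 7x7 and upsample twice with stride 2
s = 7
s = conv2d_transpose_same(s, 2)   # 14
s = conv2d_transpose_same(s, 2)   # 28 -> the MNIST image size
print(s)                          # 28

# Downsampling in a 'same'-padded discriminator mirrors this: 28 -> 14 -> 7
print(conv2d_same(conv2d_same(28, 2), 2))  # 7
```

(Note the convolutional discriminator in DCGAN2 below uses the default `padding='valid'`, so its sizes shrink slightly faster: 28 -> 13 -> 6 -> 2.)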

First attempt: ReLU everywhere, UpSampling2D for upsampling and MaxPooling2D for downsampling. Sure enough, it didn't work; the generated images were all identical and not recognizable as digits. My analysis: ReLU activations and pooling layers can leave some neurons with no usable gradient during backpropagation, and since the generator's convolutional stack is fairly deep, training becomes difficult, while the discriminator, repeatedly trained on positive and negative samples, suffers from dying neurons.
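The dying-ReLU part of that analysis can be seen numerically: once a unit's pre-activation goes negative for every input, ReLU passes back exactly zero gradient and the unit can never recover, while LeakyReLU (alpha=0.2, as used throughout the code below) keeps a small slope. A toy numpy check:

```python
import numpy as np

def relu_grad(x):
    # Derivative of ReLU: 1 where x > 0, exactly 0 elsewhere
    return (x > 0).astype(float)

def leaky_relu_grad(x, alpha=0.2):
    # Derivative of LeakyReLU: 1 where x > 0, alpha elsewhere
    return np.where(x > 0, 1.0, alpha)

# Pre-activations that have drifted negative, e.g. after aggressive discriminator updates
z = np.array([-3.0, -0.5, -0.1])

print(relu_grad(z))        # [0. 0. 0.]   -> no gradient flows; the unit is dead
print(leaky_relu_grad(z))  # [0.2 0.2 0.2] -> a small gradient keeps the unit trainable
```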

Second attempt: the generator used ReLU and UpSampling2D, with no activation on its output layer; the discriminator was fully connected. After 30,000 iterations:

The results were quite good!

Third attempt: following the tutorial's activation functions and up/downsampling layers, with convolutional networks for both the generator and the discriminator. After 30,000 iterations:

Not great. Perhaps I only picked up the method, not the essence, and some hyperparameters were off.

Fourth attempt: the generator follows the tutorial's network, the discriminator is fully connected. After 30,000 iterations:

Decent results.

Summary:

1. Adversarial networks are demanding about activation functions and hyperparameters; experiments fail easily, so it takes repeated trials to build up experience.

2. Adversarial training is not very stable.

3. Guidance from experienced practitioners speeds things up enormously; all advice is welcome.

The experiment source code follows.

GAN:

from __future__ import print_function, division
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Input, Dense, Reshape, Flatten
from tensorflow.keras.layers import BatchNormalization, LeakyReLU
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.optimizers import Adam
import os
import matplotlib.pyplot as plt
import numpy as np


class GAN(object):
    def __init__(self):
        self.img_rows = 28
        self.img_cols = 28
        self.channels = 1
        self.img_shape = (self.img_rows, self.img_cols, self.channels)
        self.latent_dim = 100
        optimizer = Adam(0.0002, 0.5)
        # Build and compile the discriminator
        self.discriminator = self.build_discriminator()
        self.discriminator.compile(loss='binary_crossentropy',
                                   optimizer=optimizer,
                                   metrics=['accuracy'])
        # Build the generator
        self.generator = self.build_generator()
        # The generator takes noise as input and produces fake images
        z = Input(shape=(self.latent_dim,))
        img = self.generator(z)
        # For the combined model, only the generator is trained
        self.discriminator.trainable = False
        # The discriminator takes the generated image and judges its validity
        validity = self.discriminator(img)
        # The combined model (stacked generator and discriminator)
        # trains the generator to fool the discriminator
        self.combined = Model(z, validity)
        self.combined.compile(loss='binary_crossentropy', optimizer=optimizer)

    def build_generator(self):
        model = Sequential()
        model.add(Dense(256, input_dim=self.latent_dim))
        model.add(LeakyReLU(alpha=0.2))
        model.add(BatchNormalization(momentum=0.8))
        model.add(Dense(512))
        model.add(LeakyReLU(alpha=0.2))
        model.add(BatchNormalization(momentum=0.8))
        model.add(Dense(1024))
        model.add(LeakyReLU(alpha=0.2))
        model.add(BatchNormalization(momentum=0.8))
        # np.prod(self.img_shape) = 28 x 28 x 1
        model.add(Dense(np.prod(self.img_shape), activation='tanh'))
        model.add(Reshape(self.img_shape))
        model.summary()
        noise = Input(shape=(self.latent_dim,))
        img = model(noise)
        # Noise in, image out
        return Model(noise, img)

    def build_discriminator(self):
        model = Sequential()
        model.add(Flatten(input_shape=self.img_shape))
        model.add(Dense(512))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dense(256))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dense(1, activation='sigmoid'))
        model.summary()
        img = Input(shape=self.img_shape)
        validity = model(img)
        return Model(img, validity)

    def train(self, epochs, batch_size=128, sample_interval=50):
        dataPath = 'C:/Users/lenovo/Desktop/MinstGan/mnist.npz'
        # Load the dataset
        (x_train, _), (_, _) = mnist.load_data(path=dataPath)
        # Normalize to [-1, 1]
        x_train = x_train / 127.5 - 1.
        x_train = np.expand_dims(x_train, axis=3)
        # Adversarial ground truths
        valid = np.ones((batch_size, 1))
        fake = np.zeros((batch_size, 1))
        for epoch in range(epochs):
            # ---------------------
            #  Train the discriminator
            # ---------------------
            # x_train.shape[0] is the dataset size; draw batch_size random indices
            idx = np.random.randint(0, x_train.shape[0], batch_size)
            # Pick batch_size random images as one training batch
            imgs = x_train[idx]
            # Noise of shape (batch_size, 100)
            noise = np.random.normal(0, 1, (batch_size, self.latent_dim))
            # The generator produces fake images from the noise
            gen_imgs = self.generator.predict(noise)
            # Train the discriminator: real images are labeled 1, fake images 0
            d_loss_real = self.discriminator.train_on_batch(imgs, valid)
            d_loss_fake = self.discriminator.train_on_batch(gen_imgs, fake)
            d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
            # ---------------------
            #  Train the generator
            # ---------------------
            noise = np.random.normal(0, 1, (batch_size, self.latent_dim))
            # Train the generator (to have the discriminator label samples as valid)
            g_loss = self.combined.train_on_batch(noise, valid)
            # Print the losses
            print("%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" % (epoch, d_loss[0], 100 * d_loss[1], g_loss))
            # Save generated images every sample_interval epochs
            if epoch % sample_interval == 0:
                self.sample_images(epoch)
                if not os.path.exists("keras_model"):
                    os.makedirs("keras_model")
                self.generator.save_weights("keras_model/G_model%d.hdf5" % epoch, True)
                self.discriminator.save_weights("keras_model/D_model%d.hdf5" % epoch, True)

    def sample_images(self, epoch):
        r, c = 5, 5
        # Fresh batch of noise, shape (25, 100)
        noise = np.random.normal(0, 1, (r * c, self.latent_dim))
        gen_imgs = self.generator.predict(noise)
        # Rescale the generated images back to [0, 1]
        gen_imgs = 0.5 * gen_imgs + 0.5
        fig, axs = plt.subplots(r, c)
        cnt = 0
        for i in range(r):
            for j in range(c):
                axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
                axs[i, j].axis('off')
                cnt += 1
        if not os.path.exists("keras_imgs"):
            os.makedirs("keras_imgs")
        fig.savefig("keras_imgs/%d.png" % epoch)
        plt.close()

    def test(self, gen_nums=100):
        self.generator.load_weights("keras_model/G_model15000.hdf5", by_name=True)
        self.discriminator.load_weights("keras_model/D_model15000.hdf5", by_name=True)
        noise = np.random.normal(0, 1, (gen_nums, self.latent_dim))
        gen = self.generator.predict(noise)
        # Rescale to [0, 1]
        gen = 0.5 * gen + 0.5
        for i in range(0, len(gen)):
            plt.figure(figsize=(128, 128), dpi=1)
            plt.imshow(gen[i, :, :, 0], cmap="gray")
            plt.axis("off")
            if not os.path.exists("keras_gen"):
                os.makedirs("keras_gen")
            plt.savefig("keras_gen" + os.sep + str(i) + '.jpg', dpi=1)
            plt.close()


if __name__ == '__main__':
    gan = GAN()
    gan.train(epochs=30000, batch_size=32, sample_interval=1000)
    gan.test()

 

CGAN:

from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Input, Dense, Reshape, Flatten
from tensorflow.keras.layers import BatchNormalization, LeakyReLU, Embedding, Concatenate
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.optimizers import Adam
import os
import matplotlib.pyplot as plt
import numpy as np


class GAN(object):
    def __init__(self):
        self.img_rows = 28
        self.img_cols = 28
        self.channels = 1
        self.img_shape = (self.img_rows, self.img_cols, self.channels)
        self.latent_dim = 100
        optimizer = Adam(0.0002, 0.5)
        # Build and compile the discriminator
        self.discriminator = self.build_discriminator()
        self.discriminator.compile(loss='binary_crossentropy',
                                   optimizer=optimizer,
                                   metrics=['accuracy'])
        # Build the generator
        self.generator = self.build_generator()
        # The generator takes noise plus a label and produces a fake image
        noise = Input(shape=(self.latent_dim,))
        label = Input(shape=(1,))
        img = self.generator([noise, label])
        # For the combined model, only the generator is trained
        self.discriminator.trainable = False
        # The discriminator takes the generated image (and its label) and judges validity
        validity = self.discriminator([img, label])
        # The combined model (stacked generator and discriminator)
        # trains the generator to fool the discriminator
        self.combined = Model([noise, label], validity)
        self.combined.compile(loss='binary_crossentropy', optimizer=optimizer)

    def build_generator(self):
        model = Sequential()
        model.add(Dense(256, input_dim=self.latent_dim + 10))
        model.add(LeakyReLU(alpha=0.2))
        model.add(BatchNormalization(momentum=0.8))
        model.add(Dense(512))
        model.add(LeakyReLU(alpha=0.2))
        model.add(BatchNormalization(momentum=0.8))
        model.add(Dense(1024))
        model.add(LeakyReLU(alpha=0.2))
        model.add(BatchNormalization(momentum=0.8))
        # np.prod(self.img_shape) = 28 x 28 x 1
        model.add(Dense(np.prod(self.img_shape), activation='tanh'))
        model.add(Reshape(self.img_shape))
        model.summary()
        noise = Input(shape=(self.latent_dim,))
        label = Input(shape=(1,), dtype='int32')
        # Embed the digit label into 10 dimensions and append it to the noise
        label_embedding = Flatten()(Embedding(10, 10)(label))
        model_input = Concatenate(axis=1)([noise, label_embedding])
        img = model(model_input)
        # Noise plus label in, image out
        return Model([noise, label], img)

    def build_discriminator(self):
        model = Sequential()
        # Input: flattened image (784) concatenated with flattened label embedding (784)
        model.add(Dense(512, input_dim=np.prod(self.img_shape) * 2))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dense(256))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dense(1, activation='sigmoid'))
        model.summary()
        img = Input(shape=self.img_shape)  # input (28, 28, 1)
        label = Input(shape=(1,), dtype='int32')
        label_embedding = Flatten()(Embedding(10, np.prod(self.img_shape))(label))
        flat_img = Flatten()(img)
        model_input = Concatenate(axis=-1)([flat_img, label_embedding])
        validity = model(model_input)
        return Model([img, label], validity)

    def train(self, epochs, batch_size=128, sample_interval=50):
        dataPath = 'C:/Users/lenovo/Desktop/MinstGan/mnist.npz'
        # Load the dataset
        (x_train, y_train), (_, _) = mnist.load_data(path=dataPath)
        # Normalize to [-1, 1]
        x_train = x_train / 127.5 - 1.
        x_train = np.expand_dims(x_train, axis=3)
        y_train = np.expand_dims(y_train, axis=1)  # (60000, 1)
        # Adversarial ground truths
        valid = np.ones((batch_size, 1))
        fake = np.zeros((batch_size, 1))
        for epoch in range(epochs):
            # ---------------------
            #  Train the discriminator
            # ---------------------
            # x_train.shape[0] is the dataset size; draw batch_size random indices
            idx = np.random.randint(0, x_train.shape[0], batch_size)
            # Pick batch_size random images (and their labels) as one training batch
            imgs = x_train[idx]
            labels = y_train[idx]
            # Noise of shape (batch_size, 100)
            noise = np.random.normal(0, 1, (batch_size, self.latent_dim))
            # The generator produces fake images from the noise and labels
            gen_imgs = self.generator.predict([noise, labels])
            # Train the discriminator: real images are labeled 1, fake images 0
            d_loss_real = self.discriminator.train_on_batch([imgs, labels], valid)
            d_loss_fake = self.discriminator.train_on_batch([gen_imgs, labels], fake)
            d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
            # ---------------------
            #  Train the generator
            # ---------------------
            # Train the generator (to have the discriminator label samples as valid)
            g_loss = self.combined.train_on_batch([noise, labels], valid)
            # Save generated images every sample_interval epochs
            if epoch % sample_interval == 0:
                # Print the losses
                print("%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" % (epoch, d_loss[0], 100 * d_loss[1], g_loss))
                self.sample_images(epoch)
                if not os.path.exists("keras_model"):
                    os.makedirs("keras_model")
                self.generator.save_weights("keras_model/G_model%d.hdf5" % epoch, True)
                self.discriminator.save_weights("keras_model/D_model%d.hdf5" % epoch, True)

    def sample_images(self, epoch):
        r, c = 4, 5
        # Fresh batch of noise, shape (20, 100)
        noise = np.random.normal(0, 1, (r * c, self.latent_dim))
        # Two runs of the digits 0-9, so the grid shows the labels in order
        sampled_labels = np.concatenate([np.arange(0, 10).reshape(-1, 1), np.arange(0, 10).reshape(-1, 1)])
        gen_imgs = self.generator.predict([noise, sampled_labels])
        # Rescale the generated images back to [0, 1]
        gen_imgs = 0.5 * gen_imgs + 0.5
        fig, axs = plt.subplots(r, c)
        cnt = 0
        for i in range(r):
            for j in range(c):
                axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
                axs[i, j].axis('off')
                cnt += 1
        if not os.path.exists("keras_imgs"):
            os.makedirs("keras_imgs")
        fig.savefig("keras_imgs/%d.png" % epoch)
        plt.close()

    def restore(self, epoch):
        self.generator.load_weights("keras_model/G_model%d.hdf5" % epoch)
        self.discriminator.load_weights("keras_model/D_model%d.hdf5" % epoch)


if __name__ == '__main__':
    gan = GAN()
    # gan.restore(11000)
    gan.train(epochs=30000, batch_size=32, sample_interval=500)

 

DCGAN1:

from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Input, Dense, Reshape, Flatten
from tensorflow.keras.layers import LeakyReLU, Embedding, Concatenate
from tensorflow.keras.layers import UpSampling2D, Conv2D
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.optimizers import Adam
import os
import matplotlib.pyplot as plt
import numpy as np


class GAN(object):
    def __init__(self):
        self.img_rows = 28
        self.img_cols = 28
        self.channels = 1
        self.img_shape = (self.img_rows, self.img_cols, self.channels)
        self.latent_dim = 100
        optimizer = Adam(0.0002, 0.5)
        # Build and compile the discriminator
        self.discriminator = self.build_discriminator()
        self.discriminator.compile(loss='binary_crossentropy',
                                   optimizer=optimizer,
                                   metrics=['accuracy'])
        # Build the generator
        self.generator = self.build_generator()
        # The generator takes noise plus a label and produces a fake image
        noise = Input(shape=(self.latent_dim,))
        label = Input(shape=(1,))
        img = self.generator([noise, label])
        # For the combined model, only the generator is trained
        self.discriminator.trainable = False
        # The discriminator takes the generated image (and its label) and judges validity
        validity = self.discriminator([img, label])
        # The combined model (stacked generator and discriminator)
        # trains the generator to fool the discriminator
        self.combined = Model([noise, label], validity)
        self.combined.compile(loss='binary_crossentropy', optimizer=optimizer)

    def build_generator(self):
        model = Sequential()
        # Second attempt: ReLU activations, UpSampling2D for upsampling,
        # and no activation on the output layer
        model.add(Dense(128, activation='relu', input_dim=110))
        model.add(Dense(196, activation='relu'))
        model.add(Reshape((7, 7, 4)))
        model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
        model.add(UpSampling2D(size=(2, 2)))
        model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
        model.add(UpSampling2D(size=(2, 2)))
        model.add(Conv2D(16, (3, 3), activation='relu', padding='same'))
        model.add(Conv2D(1, (3, 3), padding='same'))
        model.summary()
        noise = Input(shape=(self.latent_dim,))
        label = Input(shape=(1,), dtype='int32')
        # Embed the digit label into 10 dimensions and append it to the noise
        label_embedding = Flatten()(Embedding(10, 10)(label))
        model_input = Concatenate(axis=1)([noise, label_embedding])
        img = model(model_input)
        # Noise plus label in, image out
        return Model([noise, label], img)

    def build_discriminator(self):
        model = Sequential()
        # Input: flattened image (784) concatenated with flattened label embedding (784)
        model.add(Dense(512, input_dim=np.prod(self.img_shape) * 2))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dense(256))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dense(1, activation='sigmoid'))
        model.summary()
        img = Input(shape=self.img_shape)  # input (28, 28, 1)
        label = Input(shape=(1,), dtype='int32')
        label_embedding = Flatten()(Embedding(10, np.prod(self.img_shape))(label))
        flat_img = Flatten()(img)
        model_input = Concatenate(axis=-1)([flat_img, label_embedding])
        validity = model(model_input)
        return Model([img, label], validity)

    def train(self, epochs, batch_size=128, sample_interval=50):
        dataPath = 'C:/Users/lenovo/Desktop/MinstGan/mnist.npz'
        # Load the dataset
        (x_train, y_train), (_, _) = mnist.load_data(path=dataPath)
        # Normalize to [-1, 1]
        x_train = x_train / 127.5 - 1.
        x_train = np.expand_dims(x_train, axis=3)
        y_train = np.expand_dims(y_train, axis=1)  # (60000, 1)
        # Adversarial ground truths
        valid = np.ones((batch_size, 1))
        fake = np.zeros((batch_size, 1))
        for epoch in range(epochs):
            # ---------------------
            #  Train the discriminator
            # ---------------------
            # x_train.shape[0] is the dataset size; draw batch_size random indices
            idx = np.random.randint(0, x_train.shape[0], batch_size)
            # Pick batch_size random images (and their labels) as one training batch
            imgs = x_train[idx]
            labels = y_train[idx]
            # Noise of shape (batch_size, 100)
            noise = np.random.normal(0, 1, (batch_size, self.latent_dim))
            # The generator produces fake images from the noise and labels
            gen_imgs = self.generator.predict([noise, labels])
            # Train the discriminator: real images are labeled 1, fake images 0
            d_loss_real = self.discriminator.train_on_batch([imgs, labels], valid)
            d_loss_fake = self.discriminator.train_on_batch([gen_imgs, labels], fake)
            d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
            # ---------------------
            #  Train the generator
            # ---------------------
            # Train the generator (to have the discriminator label samples as valid)
            g_loss = self.combined.train_on_batch([noise, labels], valid)
            # Print the losses
            print("%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" % (epoch, d_loss[0], 100 * d_loss[1], g_loss))
            # Save generated images every sample_interval epochs
            if epoch % sample_interval == 0:
                self.sample_images(epoch)
                if not os.path.exists("keras_model"):
                    os.makedirs("keras_model")
                self.generator.save_weights("keras_model/G_model%d.hdf5" % epoch, True)
                self.discriminator.save_weights("keras_model/D_model%d.hdf5" % epoch, True)

    def sample_images(self, epoch):
        r, c = 4, 5
        # Fresh batch of noise, shape (20, 100)
        noise = np.random.normal(0, 1, (r * c, self.latent_dim))
        # Two runs of the digits 0-9, so the grid shows the labels in order
        sampled_labels = np.concatenate([np.arange(0, 10).reshape(-1, 1), np.arange(0, 10).reshape(-1, 1)])
        gen_imgs = self.generator.predict([noise, sampled_labels])
        # Rescale the generated images back to [0, 1]
        gen_imgs = 0.5 * gen_imgs + 0.5
        fig, axs = plt.subplots(r, c)
        cnt = 0
        for i in range(r):
            for j in range(c):
                axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
                axs[i, j].axis('off')
                cnt += 1
        if not os.path.exists("keras_imgs"):
            os.makedirs("keras_imgs")
        fig.savefig("keras_imgs/%d.png" % epoch)
        plt.close()

    def restore(self, epoch):
        self.generator.load_weights("keras_model/G_model%d.hdf5" % epoch)
        self.discriminator.load_weights("keras_model/D_model%d.hdf5" % epoch)


if __name__ == '__main__':
    gan = GAN()
    # gan.restore(500)
    gan.train(epochs=30000, batch_size=32, sample_interval=300)

 

DCGAN2:

from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Input, Dense, Reshape, Flatten, Dropout
from tensorflow.keras.layers import BatchNormalization, LeakyReLU, Embedding, Concatenate
from tensorflow.keras.layers import Conv2D, Conv2DTranspose
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.optimizers import Adam
import os
import matplotlib.pyplot as plt
import numpy as np


class GAN(object):
    def __init__(self):
        self.img_rows = 28
        self.img_cols = 28
        self.channels = 1
        self.img_shape = (self.img_rows, self.img_cols, self.channels)
        self.latent_dim = 100
        self.dropout = 0.2
        optimizer = Adam(0.0001, 0.9)
        # Build and compile the discriminator
        self.discriminator = self.build_discriminator()
        self.discriminator.compile(loss='binary_crossentropy',
                                   optimizer=optimizer,
                                   metrics=['accuracy'])
        # Build the generator
        self.generator = self.build_generator()
        # The generator takes noise plus a label and produces a fake image
        noise = Input(shape=(self.latent_dim,))
        label = Input(shape=(1,))
        img = self.generator([noise, label])
        # For the combined model, only the generator is trained
        self.discriminator.trainable = False
        # The discriminator takes the generated image (and its label) and judges validity
        validity = self.discriminator([img, label])
        # The combined model (stacked generator and discriminator)
        # trains the generator to fool the discriminator
        self.combined = Model([noise, label], validity)
        self.combined.compile(loss='binary_crossentropy', optimizer=optimizer,
                              metrics=['accuracy'])

    def build_generator(self):
        model = Sequential()
        # Tutorial-style generator: LeakyReLU everywhere,
        # stride-2 Conv2DTranspose for upsampling, tanh output
        model.add(Dense(64 * 7 * 7, input_dim=110))
        model.add(BatchNormalization(momentum=0.9))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Reshape((7, 7, 64)))
        model.add(Conv2DTranspose(64, kernel_size=3, strides=2, padding='same'))
        model.add(BatchNormalization(momentum=0.9))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Conv2DTranspose(32, kernel_size=3, strides=2, padding='same'))
        model.add(BatchNormalization(momentum=0.9))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Conv2D(16, kernel_size=3, padding='same'))
        model.add(BatchNormalization(momentum=0.9))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Conv2D(1, (3, 3), padding='same', activation='tanh'))
        model.summary()
        noise = Input(shape=(self.latent_dim,))
        label = Input(shape=(1,), dtype='int32')
        # Embed the digit label into 10 dimensions and append it to the noise
        label_embedding = Flatten()(Embedding(10, 10)(label))
        model_input = Concatenate(axis=1)([noise, label_embedding])
        img = model(model_input)
        # Noise plus label in, image out
        return Model([noise, label], img)

    def build_discriminator(self):
        model = Sequential()
        model.add(Conv2D(64, (3, 3), strides=(2, 2), input_shape=(28, 28, 2)))
        # The discriminator should not use plain ReLU: repeated real/fake
        # training can push units negative and kill them (dying ReLU)
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dropout(self.dropout))
        model.add(Conv2D(128, (3, 3), strides=(2, 2)))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dropout(self.dropout))
        model.add(Conv2D(128, (3, 3), strides=(2, 2)))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dropout(self.dropout))
        model.add(Flatten())
        model.add(Dense(64))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dense(1, activation='sigmoid'))
        model.summary()
        img = Input(shape=self.img_shape)  # input (28, 28, 1)
        label = Input(shape=(1,), dtype='int32')
        # Embed the label into a 28x28 map and stack it as a second channel
        label_embedding = Flatten()(Embedding(10, np.prod(self.img_shape))(label))
        label_embedding = Reshape(self.img_shape)(label_embedding)
        model_input = Concatenate(axis=-1)([img, label_embedding])
        validity = model(model_input)
        return Model([img, label], validity)

    def train(self, epochs, batch_size=128, sample_interval=50):
        dataPath = 'C:/Users/lenovo/Desktop/MinstGan/mnist.npz'
        # Load the dataset
        (x_train, y_train), (_, _) = mnist.load_data(path=dataPath)
        # Normalize to [-1, 1]
        x_train = x_train / 127.5 - 1.
        x_train = np.expand_dims(x_train, axis=3)
        y_train = np.expand_dims(y_train, axis=1)  # (60000, 1)
        # Adversarial ground truths
        valid = np.ones((batch_size, 1))
        fake = np.zeros((batch_size, 1))
        for epoch in range(epochs):
            # ---------------------
            #  Train the discriminator
            # ---------------------
            # x_train.shape[0] is the dataset size; draw batch_size random indices
            idx = np.random.randint(0, x_train.shape[0], batch_size)
            # Pick batch_size random images (and their labels) as one training batch
            imgs = x_train[idx]
            labels = y_train[idx]
            # Noise of shape (batch_size, 100)
            noise = np.random.normal(0, 1, (batch_size, self.latent_dim))
            # The generator produces fake images from the noise and labels
            gen_imgs = self.generator.predict([noise, labels])
            # Train the discriminator: real images are labeled 1, fake images 0
            d_loss_real = self.discriminator.train_on_batch([imgs, labels], valid)
            d_loss_fake = self.discriminator.train_on_batch([gen_imgs, labels], fake)
            # ---------------------
            #  Train the generator
            # ---------------------
            # Train the generator (to have the discriminator label samples as valid)
            g_loss = self.combined.train_on_batch([noise, labels], valid)
            # Print losses: real-image accuracy, fake-image accuracy, fooling rate
            print("%d [D realloss: %f, realacc: %.2f%%, fakeloss: %f, fakeacc: %.2f%%] [G loss: %f, acc = %.2f%%]" %
                  (epoch, d_loss_real[0], 100 * d_loss_real[1], d_loss_fake[0], 100 * d_loss_fake[1], g_loss[0], 100 * g_loss[1]))
            # Save generated images every sample_interval epochs
            if epoch % sample_interval == 0:
                self.sample_images(epoch)
                if not os.path.exists("keras_model"):
                    os.makedirs("keras_model")
                self.generator.save_weights("keras_model/G_model%d.hdf5" % epoch, True)
                self.discriminator.save_weights("keras_model/D_model%d.hdf5" % epoch, True)

    def sample_images(self, epoch):
        r, c = 4, 5
        # Fresh batch of noise, shape (20, 100)
        noise = np.random.normal(0, 1, (r * c, self.latent_dim))
        # Two runs of the digits 0-9, so the grid shows the labels in order
        sampled_labels = np.concatenate([np.arange(0, 10).reshape(-1, 1), np.arange(0, 10).reshape(-1, 1)])
        gen_imgs = self.generator.predict([noise, sampled_labels])
        # Rescale the generated images back to [0, 1]
        gen_imgs = 0.5 * gen_imgs + 0.5
        fig, axs = plt.subplots(r, c)
        cnt = 0
        for i in range(r):
            for j in range(c):
                axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
                axs[i, j].axis('off')
                cnt += 1
        if not os.path.exists("keras_imgs"):
            os.makedirs("keras_imgs")
        fig.savefig("keras_imgs/%d.png" % epoch)
        plt.close()

    def restore(self, epoch):
        self.generator.load_weights("keras_model/G_model%d.hdf5" % epoch)
        self.discriminator.load_weights("keras_model/D_model%d.hdf5" % epoch)


if __name__ == '__main__':
    gan = GAN()
    # gan.restore(4500)
    gan.train(epochs=30000, batch_size=32, sample_interval=300)

 

DCGAN3:

from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Input, Dense, Reshape, Flatten
from tensorflow.keras.layers import BatchNormalization, LeakyReLU, Embedding, Concatenate
from tensorflow.keras.layers import Conv2D, Conv2DTranspose
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.optimizers import Adam
import os
import matplotlib.pyplot as plt
import numpy as np


class GAN(object):
    def __init__(self):
        self.img_rows = 28
        self.img_cols = 28
        self.channels = 1
        self.img_shape = (self.img_rows, self.img_cols, self.channels)
        self.latent_dim = 100
        optimizer = Adam(0.0002, 0.5)
        # Build and compile the discriminator
        self.discriminator = self.build_discriminator()
        self.discriminator.compile(loss='binary_crossentropy',
                                   optimizer=optimizer,
                                   metrics=['accuracy'])
        # Build the generator
        self.generator = self.build_generator()
        # The generator takes noise plus a label and produces a fake image
        noise = Input(shape=(self.latent_dim,))
        label = Input(shape=(1,))
        img = self.generator([noise, label])
        # For the combined model, only the generator is trained
        self.discriminator.trainable = False
        # The discriminator takes the generated image (and its label) and judges validity
        validity = self.discriminator([img, label])
        # The combined model (stacked generator and discriminator)
        # trains the generator to fool the discriminator
        self.combined = Model([noise, label], validity)
        self.combined.compile(loss='binary_crossentropy', optimizer=optimizer)

    def build_generator(self):
        model = Sequential()
        # Fourth attempt: tutorial-style generator (LeakyReLU, stride-2
        # Conv2DTranspose, tanh output) paired with a fully connected discriminator
        model.add(Dense(64 * 7 * 7, input_dim=110))
        model.add(BatchNormalization(momentum=0.9))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Reshape((7, 7, 64)))
        model.add(Conv2DTranspose(64, kernel_size=3, strides=2, padding='same'))
        model.add(BatchNormalization(momentum=0.9))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Conv2DTranspose(32, kernel_size=3, strides=2, padding='same'))
        model.add(BatchNormalization(momentum=0.9))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Conv2D(16, kernel_size=3, padding='same'))
        model.add(BatchNormalization(momentum=0.9))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Conv2D(1, (3, 3), padding='same', activation='tanh'))
        model.summary()
        noise = Input(shape=(self.latent_dim,))
        label = Input(shape=(1,), dtype='int32')
        # Embed the digit label into 10 dimensions and append it to the noise
        label_embedding = Flatten()(Embedding(10, 10)(label))
        model_input = Concatenate(axis=1)([noise, label_embedding])
        img = model(model_input)
        # Noise plus label in, image out
        return Model([noise, label], img)

    def build_discriminator(self):
        model = Sequential()
        # Input: flattened image (784) concatenated with flattened label embedding (784)
        model.add(Dense(512, input_dim=np.prod(self.img_shape) * 2))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dense(256))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dense(1, activation='sigmoid'))
        model.summary()
        img = Input(shape=self.img_shape)  # input (28, 28, 1)
        label = Input(shape=(1,), dtype='int32')
        label_embedding = Flatten()(Embedding(10, np.prod(self.img_shape))(label))
        flat_img = Flatten()(img)
        model_input = Concatenate(axis=-1)([flat_img, label_embedding])
        validity = model(model_input)
        return Model([img, label], validity)

    def train(self, epochs, batch_size=128, sample_interval=50):
        dataPath = 'C:/Users/lenovo/Desktop/MinstGan/mnist.npz'
        # Load the dataset
        (x_train, y_train), (_, _) = mnist.load_data(path=dataPath)
        # Normalize to [-1, 1]
        x_train = x_train / 127.5 - 1.
        x_train = np.expand_dims(x_train, axis=3)
        y_train = np.expand_dims(y_train, axis=1)  # (60000, 1)
        # Adversarial ground truths
        valid = np.ones((batch_size, 1))
        fake = np.zeros((batch_size, 1))
        for epoch in range(epochs):
            # ---------------------
            #  Train the discriminator
            # ---------------------
            # x_train.shape[0] is the dataset size; draw batch_size random indices
            idx = np.random.randint(0, x_train.shape[0], batch_size)
            # Pick batch_size random images (and their labels) as one training batch
            imgs = x_train[idx]
            labels = y_train[idx]
            # Noise of shape (batch_size, 100)
            noise = np.random.normal(0, 1, (batch_size, self.latent_dim))
            # The generator produces fake images from the noise and labels
            gen_imgs = self.generator.predict([noise, labels])
            # Train the discriminator: real images are labeled 1, fake images 0
            d_loss_real = self.discriminator.train_on_batch([imgs, labels], valid)
            d_loss_fake = self.discriminator.train_on_batch([gen_imgs, labels], fake)
            d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
            # ---------------------
            #  Train the generator
            # ---------------------
            # Train the generator (to have the discriminator label samples as valid)
            g_loss = self.combined.train_on_batch([noise, labels], valid)
            # Print the losses
            print("%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" % (epoch, d_loss[0], 100 * d_loss[1], g_loss))
            # Save generated images every sample_interval epochs
            if epoch % sample_interval == 0:
                self.sample_images(epoch)
                if not os.path.exists("keras_model"):
                    os.makedirs("keras_model")
                self.generator.save_weights("keras_model/G_model%d.hdf5" % epoch, True)
                self.discriminator.save_weights("keras_model/D_model%d.hdf5" % epoch, True)

    def sample_images(self, epoch):
        r, c = 4, 5
        # Fresh batch of noise, shape (20, 100)
        noise = np.random.normal(0, 1, (r * c, self.latent_dim))
        # Two runs of the digits 0-9, so the grid shows the labels in order
        sampled_labels = np.concatenate([np.arange(0, 10).reshape(-1, 1), np.arange(0, 10).reshape(-1, 1)])
        gen_imgs = self.generator.predict([noise, sampled_labels])
        # Rescale the generated images back to [0, 1]
        gen_imgs = 0.5 * gen_imgs + 0.5
        fig, axs = plt.subplots(r, c)
        cnt = 0
        for i in range(r):
            for j in range(c):
                axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
                axs[i, j].axis('off')
                cnt += 1
        if not os.path.exists("keras_imgs"):
            os.makedirs("keras_imgs")
        fig.savefig("keras_imgs/%d.png" % epoch)
        plt.close()

    def restore(self, epoch):
        self.generator.load_weights("keras_model/G_model%d.hdf5" % epoch)
        self.discriminator.load_weights("keras_model/D_model%d.hdf5" % epoch)


if __name__ == '__main__':
    gan = GAN()
    # gan.restore(500)
    gan.train(epochs=30000, batch_size=32, sample_interval=300)

 
