1. Background of GANs
In general, deep learning models can be divided into discriminative models and generative models. Discriminative models matured quickly, while generative models are harder to build and developed slowly. It was only with the invention of the GAN that generative modeling really took off; other generative models include diffusion models and VAEs (variational autoencoders). A GAN consists of two parts, a generator G and a discriminator D. For a detailed introduction, see: 图解 生成对抗网络GAN 原理 超详解_生成对抗网络gan图解-CSDN博客
2. GAN Architecture
The generator takes random noise as input and outputs digit images, which the discriminator then classifies as real or fake. The two networks learn by competing against each other until they reach an equilibrium (a Nash equilibrium).
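For reference (this formula is not spelled out in the original post), the adversarial game the two networks play is usually written as the minimax objective

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]$$

where $D(x)$ is the probability the discriminator assigns to a sample being real and $G(z)$ is an image generated from noise $z$.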
3. Imports and Initialization
```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sys
import os
import pathlib
import librosa
import librosa.display
from tqdm import tnrange, notebook
from tensorflow import keras
import tensorflow as tf
from keras import layers, datasets, Sequential, Model, optimizers
from keras.layers import LeakyReLU, Dense, Input, BatchNormalization, Flatten, Reshape
import imageio.v2 as imageio


class GAN():
    def __init__(self):
        self.img_rows = 28
        self.img_cols = 28
        self.channels = 1
        self.img_shape = (self.img_rows, self.img_cols, self.channels)
        self.latent_dim = 100
        # All of the lines below are needed for this to run correctly;
        # I'm not sure why, but only the second line kept raising an error.
        optimizer = tf.keras.optimizers.Adam(1e-4)
        self.discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)

        # Build and compile the discriminator
        self.discriminator = self.build_discriminator()
        self.variables = self.discriminator.trainable_variables

        self.discriminator_optimizer.build(self.variables)
        self.discriminator.compile(loss='binary_crossentropy',
                                   optimizer=self.discriminator_optimizer,
                                   metrics=['accuracy'])

        # Build the generator
        self.generator = self.build_generator()

        # The generator takes noise as input and generates imgs
        z = Input(shape=(self.latent_dim,))
        img = self.generator(z)

        # For the combined model we will only train the generator
        self.discriminator.trainable = False

        # The discriminator takes generated images as input and determines validity
        validity = self.discriminator(img)

        # Define the combined model by stacking the generator and the discriminator:
        # the input is the latent vector z, the output is the discriminator's verdict `validity`
        self.combined = Model(z, validity)
        self.combined.compile(loss='binary_crossentropy', optimizer=optimizer)
```
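One point worth flagging (my note, not from the original post): Keras documents that the trainable values in effect when a model is compiled are what that model trains with until it is recompiled. The discriminator is compiled before its trainable flag is set to False, so discriminator.train_on_batch still updates its weights, while the combined model, built and compiled afterwards, only updates the generator. A small check under that assumption, once the remaining methods in sections 4-7 are added:

```python
# Assumes the GAN class has been completed with the methods from sections 4-7.
gan = GAN()

# The combined model should expose only the generator's weights as trainable,
# because discriminator.trainable was already False when it was built and compiled.
print(len(gan.combined.trainable_weights))    # == len(gan.generator.trainable_weights)
print(len(gan.generator.trainable_weights))
```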
4. Building the Generator
```python
    def build_generator(self):

        model = Sequential()

        model.add(Dense(256, input_dim=self.latent_dim))
        model.add(LeakyReLU(alpha=0.2))
        model.add(BatchNormalization(momentum=0.8))
        model.add(Dense(512))
        model.add(LeakyReLU(alpha=0.2))
        model.add(BatchNormalization(momentum=0.8))
        model.add(Dense(1024))
        model.add(LeakyReLU(alpha=0.2))
        model.add(BatchNormalization(momentum=0.8))
        model.add(Dense(np.prod(self.img_shape), activation='tanh'))
        model.add(Reshape(self.img_shape))

        model.summary()

        noise = Input(shape=(self.latent_dim,))
        img = model(noise)

        return Model(noise, img)
```
The generator is built from several fully connected layers with LeakyReLU activations and batch normalization; the final layer uses tanh, so the output pixels lie in [-1, 1] before being reshaped to the image shape.
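A quick sanity check (my own addition, not in the original post): once the class is completed with the discriminator method in the next section, the generator can be run on a batch of random noise to confirm that it produces images of shape img_shape with values bounded by the tanh layer:

```python
import numpy as np

gan = GAN()                                           # builds both networks
noise = np.random.normal(0, 1, (4, gan.latent_dim))   # 4 latent vectors of length 100
fake_imgs = gan.generator.predict(noise)              # forward pass only, no training
print(fake_imgs.shape)                                # expected: (4, 28, 28, 1)
print(fake_imgs.min(), fake_imgs.max())               # values in [-1, 1]
```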
5. Building the Discriminator
```python
    def build_discriminator(self):

        model = Sequential()

        model.add(Flatten(input_shape=self.img_shape))
        model.add(Dense(512))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dense(256))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dense(1, activation='sigmoid'))
        model.summary()

        img = Input(shape=self.img_shape)
        validity = model(img)

        return Model(img, validity)
```
The output validity is the discriminator's judgement of the input image; the input img and the output validity are wrapped together into a Model and returned.
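Similarly (again my own sketch, not in the original post), the discriminator maps a batch of images to one probability per image, thanks to the final sigmoid unit:

```python
import numpy as np

gan = GAN()
dummy_imgs = np.random.uniform(-1, 1, (4, 28, 28, 1)).astype("float32")  # dummy images scaled to [-1, 1]
scores = gan.discriminator.predict(dummy_imgs)
print(scores.shape)   # expected: (4, 1), each entry in (0, 1)
```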
6. The Training Loop
```python
    def train(self, epochs, batch_size=128, sample_interval=50):

        # Load the MNIST dataset
        (X_train, _), (_, _) = datasets.mnist.load_data()
        print("Shape of the dataset:", X_train.shape)

        # Rescale -1 to 1
        X_train = X_train / 127.5 - 1.
        X_train = np.expand_dims(X_train, axis=3)

        # Adversarial ground truths
        valid = np.ones((batch_size, 1))
        fake = np.zeros((batch_size, 1))

        for epoch in range(epochs):

            # ---------------------
            #  Train Discriminator
            # ---------------------

            # Select a random batch of images
            idx = np.random.randint(0, X_train.shape[0], batch_size)
            imgs = X_train[idx]

            noise = np.random.normal(0, 1, (batch_size, self.latent_dim))

            # Generate a batch of new images
            gen_imgs = self.generator.predict(noise)

            # Train the discriminator
            d_loss_real = self.discriminator.train_on_batch(imgs, valid)
            d_loss_fake = self.discriminator.train_on_batch(gen_imgs, fake)
            d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

            # ---------------------
            #  Train Generator
            # ---------------------

            noise = np.random.normal(0, 1, (batch_size, self.latent_dim))

            # Train the generator (to have the discriminator label samples as valid)
            g_loss = self.combined.train_on_batch(noise, valid)

            # Plot the progress
            print("%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" % (epoch, d_loss[0], 100 * d_loss[1], g_loss))

            # If at save interval => save generated image samples
            if epoch % sample_interval == 0:
                self.sample_images(epoch)
```
In the training loop, the data is first rescaled to [-1, 1] and the real/fake labels are defined, and then D and G are trained alternately. For D, a batch of real images and a batch of images generated from noise are used to compute d_loss_real and d_loss_fake respectively, and the total discriminator loss is the average of the two. D's goal is to tell real images from generated ones, so minimizing this loss drives the discriminator toward its optimum. Then the combined model of G and D is trained (the discriminator's trainable flag is False here, so its weights are frozen): it is fed noise together with the valid labels, so the generator is pushed to produce images that the discriminator marks as real.
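Written out explicitly (my own summary, not in the original post), the two losses minimized per batch are

$$\mathcal{L}_D = \frac{1}{2}\Big[\mathrm{BCE}\big(1, D(x)\big) + \mathrm{BCE}\big(0, D(G(z))\big)\Big], \qquad \mathcal{L}_G = \mathrm{BCE}\big(1, D(G(z))\big) = -\log D(G(z)),$$

where BCE is the binary cross-entropy passed to compile(). Labeling the fake samples as valid when training the combined model gives the non-saturating generator loss $-\log D(G(z))$ instead of $\log(1 - D(G(z)))$, which provides stronger gradients early in training.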
7. Image Output
```python
    def sample_images(self, epoch):
        r, c = 5, 5
        noise = np.random.normal(0, 1, (r * c, self.latent_dim))
        gen_imgs = self.generator.predict(noise)

        # Rescale images 0 - 1
        gen_imgs = 0.5 * gen_imgs + 0.5

        fig, axs = plt.subplots(r, c)
        cnt = 0
        for i in range(r):
            for j in range(c):
                axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
                axs[i, j].axis('off')
                cnt += 1
        fig.savefig("your_path/%d.png" % epoch)  # replace "your_path" with your own output directory
        plt.close()


gan = GAN()
gan.train(epochs=30000, batch_size=32, sample_interval=200)
```
With epochs=30000, here are the generated samples:
At the start of training: (sample image grid)
At the end of training: (sample image grid)
The results are pretty good. Next I want to try generating audio data.
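As a side note, the imageio import at the top of the script is not used in the code shown; presumably it is meant for stitching the saved sample grids into a GIF of training progress. A minimal sketch of how that could look (my own assumption, reusing the same placeholder output directory):

```python
import glob
import pathlib
import imageio.v2 as imageio

# Collect the saved sample images in epoch order and write an animated GIF.
# "your_path" is the same placeholder directory used in sample_images().
files = sorted(glob.glob("your_path/*.png"),
               key=lambda p: int(pathlib.Path(p).stem))
frames = [imageio.imread(f) for f in files]
imageio.mimsave("your_path/training_progress.gif", frames, duration=0.1)
```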