What a GAN does: generate new data that resembles existing data.
Understanding the two pillars of a GAN: G and D
G is the generator: it fabricates data out of thin air (from random noise).
D is the discriminator: it judges whether a given sample is real data or not.
The whole setup can be viewed as a game played between two networks. In the original GAN paper, both G and D are multilayer perceptrons.
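Formally, this two-player game is usually written as the minimax objective from the original GAN paper: D is trained to maximize the value function below, while G is trained to minimize it.

$$
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
$$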
First, note that a GAN does not have to operate on image data. But to make the explanation easier to follow, I will use image data as the example here:
```python
from __future__ import print_function, division

from keras.datasets import mnist
from keras.layers import Input, Dense, Reshape, Flatten, Dropout
from keras.layers import BatchNormalization, Activation, ZeroPadding2D
from keras.layers.advanced_activations import LeakyReLU
from keras.layers.convolutional import UpSampling2D, Conv2D
from keras.models import Sequential, Model
from keras.optimizers import Adam

import matplotlib.pyplot as plt

import os
import sys

import numpy as np


class GAN():
    def __init__(self):
        self.img_rows = 28
        self.img_cols = 28
        self.channels = 1
        self.img_shape = (self.img_rows, self.img_cols, self.channels)
        self.latent_dim = 100

        optimizer = Adam(0.0002, 0.5)

        # Build and compile the discriminator
        self.discriminator = self.build_discriminator()
        self.discriminator.compile(loss='binary_crossentropy',
                                   optimizer=optimizer,
                                   metrics=['accuracy'])

        # Build the generator
        self.generator = self.build_generator()

        # The generator takes noise as input and generates imgs
        z = Input(shape=(self.latent_dim,))
        img = self.generator(z)

        # For the combined model we will only train the generator
        self.discriminator.trainable = False

        # The discriminator takes generated images as input and determines validity
        validity = self.discriminator(img)

        # The combined model (stacked generator and discriminator)
        # Trains the generator to fool the discriminator
        self.combined = Model(z, validity)
        self.combined.compile(loss='binary_crossentropy', optimizer=optimizer)

    def build_generator(self):

        model = Sequential()

        model.add(Dense(256, input_dim=self.latent_dim))
        model.add(LeakyReLU(alpha=0.2))
        model.add(BatchNormalization(momentum=0.8))
        model.add(Dense(512))
        model.add(LeakyReLU(alpha=0.2))
        model.add(BatchNormalization(momentum=0.8))
        model.add(Dense(1024))
        model.add(LeakyReLU(alpha=0.2))
        model.add(BatchNormalization(momentum=0.8))
        model.add(Dense(np.prod(self.img_shape), activation='tanh'))
        model.add(Reshape(self.img_shape))

        model.summary()

        noise = Input(shape=(self.latent_dim,))
        img = model(noise)

        return Model(noise, img)

    def build_discriminator(self):

        model = Sequential()

        model.add(Flatten(input_shape=self.img_shape))
        model.add(Dense(512))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dense(256))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dense(1, activation='sigmoid'))

        model.summary()

        img = Input(shape=self.img_shape)
        validity = model(img)

        return Model(img, validity)

    def train(self, epochs, batch_size=128, sample_interval=50):

        # Load the dataset
        (X_train, _), (_, _) = mnist.load_data()

        # Rescale -1 to 1
        X_train = X_train / 127.5 - 1.
        X_train = np.expand_dims(X_train, axis=3)

        # Adversarial ground truths
        valid = np.ones((batch_size, 1))
        fake = np.zeros((batch_size, 1))

        for epoch in range(epochs):

            # ---------------------
            #  Train Discriminator
            # ---------------------

            # Select a random batch of images
            idx = np.random.randint(0, X_train.shape[0], batch_size)
            imgs = X_train[idx]

            noise = np.random.normal(0, 1, (batch_size, self.latent_dim))

            # Generate a batch of new images
            gen_imgs = self.generator.predict(noise)

            # Train the discriminator
            d_loss_real = self.discriminator.train_on_batch(imgs, valid)
            d_loss_fake = self.discriminator.train_on_batch(gen_imgs, fake)
            d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

            # ---------------------
            #  Train Generator
            # ---------------------

            noise = np.random.normal(0, 1, (batch_size, self.latent_dim))

            # Train the generator (to have the discriminator label samples as valid)
            g_loss = self.combined.train_on_batch(noise, valid)

            # Plot the progress
            print("%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" % (epoch, d_loss[0], 100 * d_loss[1], g_loss))

            # If at save interval => save generated image samples
            if epoch % sample_interval == 0:
                self.sample_images(epoch)

    def sample_images(self, epoch):
        r, c = 5, 5
        noise = np.random.normal(0, 1, (r * c, self.latent_dim))
        gen_imgs = self.generator.predict(noise)

        # Rescale images 0 - 1
        gen_imgs = 0.5 * gen_imgs + 0.5

        fig, axs = plt.subplots(r, c)
        cnt = 0
        for i in range(r):
            for j in range(c):
                axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
                axs[i, j].axis('off')
                cnt += 1

        # Make sure the output directory exists before saving samples
        os.makedirs("images", exist_ok=True)
        fig.savefig("images/%d.png" % epoch)
        plt.close()


if __name__ == '__main__':
    gan = GAN()
    gan.train(epochs=30000, batch_size=32, sample_interval=200)
```
A closely related idea is the adversarial example: take an existing real sample and apply a small, carefully crafted perturbation to it, so that the original classifier can no longer determine its true class (a minimal code sketch follows the examples below).
For example:
After I put on a mask you no longer recognize me, or once I am draped in a sheepskin you can no longer tell that I am a person.
For face recognition, wearing a specially crafted pair of glasses keeps the system from recognizing who I am.
For a security camera system that detects people, wearing a specially crafted shirt keeps the camera from recognizing me as a person.
After a sticker is placed on a road sign, an autonomous-driving system misreads the sign.
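One standard way to craft such a perturbation is the Fast Gradient Sign Method (FGSM). The sketch below is illustrative only and is not taken from the article above: it assumes a trained tf.keras classifier `model` that outputs class probabilities, and the function name and `eps` budget are placeholders.

```python
# Minimal FGSM sketch: nudge each input in the direction that increases the
# classifier's loss, so a small, nearly invisible change flips the prediction.
import tensorflow as tf

def fgsm_perturb(model, x, y_true, eps=0.1):
    """Return a perturbed copy of x that increases the classifier's loss.

    x:      batch of inputs as float32 in [0, 1]
    y_true: integer class labels, shape (N,)
    eps:    perturbation budget (maximum change per pixel)
    """
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    y_true = tf.convert_to_tensor(y_true)
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

    with tf.GradientTape() as tape:
        tape.watch(x)
        preds = model(x, training=False)
        loss = loss_fn(y_true, preds)

    # Step along the sign of the gradient, then clip back to the valid range
    grad = tape.gradient(loss, x)
    x_adv = x + eps * tf.sign(grad)
    return tf.clip_by_value(x_adv, 0.0, 1.0).numpy()
```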
Adversarial examples can in turn be used for training, to improve the model's resistance to such perturbations; this is where the concept of adversarial training comes from.
Adversarial training effectively acts as a form of regularization and can improve the robustness of the model.
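As a rough illustration of that idea, the sketch below performs one adversarial-training step: it perturbs the current batch with FGSM (inlined so the function is self-contained) and then takes an optimizer step on the clean and perturbed inputs together. The `model`, `optimizer`, and hyperparameters are assumed placeholders, not anything specified in the article.

```python
import tensorflow as tf

def adversarial_train_step(model, optimizer, x_clean, y, eps=0.1):
    """One training step on a mix of clean and FGSM-perturbed inputs."""
    x_clean = tf.convert_to_tensor(x_clean, dtype=tf.float32)
    y = tf.convert_to_tensor(y)
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

    # Craft adversarial versions of the current batch (FGSM, inlined here)
    with tf.GradientTape() as tape:
        tape.watch(x_clean)
        loss = loss_fn(y, model(x_clean, training=False))
    x_adv = tf.clip_by_value(
        x_clean + eps * tf.sign(tape.gradient(loss, x_clean)), 0.0, 1.0)

    # Train on clean + adversarial examples together; the adversarial half
    # acts like an extra, data-dependent regularizer on the decision boundary
    x_mix = tf.concat([x_clean, x_adv], axis=0)
    y_mix = tf.concat([y, y], axis=0)
    with tf.GradientTape() as tape:
        loss = loss_fn(y_mix, model(x_mix, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```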
Source: https://blog.csdn.net/leviopku/article/details/81292192