Keras: Recurrent Neural Networks
6.2.1 Recurrent layers in Keras
https://blog.csdn.net/qq_30614345/article/details/98714874
Like all recurrent layers in Keras, SimpleRNN can be run in two different modes: it can return either the full sequence of successive outputs for each timestep (a 3D tensor of shape (batch_size, timesteps, output_features)) or only the last output for each input sequence (a 2D tensor of shape (batch_size, output_features)). These two modes are controlled by the return_sequences constructor argument. Let's look at an example that uses SimpleRNN and returns only the output of the last timestep:

from keras.models import Sequential
from keras.layers import Embedding, SimpleRNN

model = Sequential()
model.add(Embedding(10000, 32))
model.add(SimpleRNN(32))
model.summary()

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding_1 (Embedding)      (None, None, 32)          320000
_________________________________________________________________
simple_rnn_1 (SimpleRNN)     (None, 32)                2080
=================================================================
Total params: 322,080
Trainable params: 322,080
Non-trainable params: 0
_________________________________________________________________

The following example returns the full output sequence instead:

model = Sequential()
model.add(Embedding(10000, 32))
model.add(SimpleRNN(32, return_sequences=True))
model.summary()

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding_2 (Embedding)      (None, None, 32)          320000
_________________________________________________________________
simple_rnn_2 (SimpleRNN)     (None, None, 32)          2080
=================================================================
Total params: 322,080
Trainable params: 322,080
Non-trainable params: 0
_________________________________________________________________

It is sometimes useful to stack several recurrent layers one after the other in order to increase the representational power of a network. In such a setup, you have to get all intermediate layers to return full sequences:

model = Sequential()
model.add(Embedding(10000, 32))
model.add(SimpleRNN(32, return_sequences=True))
model.add(SimpleRNN(32, return_sequences=True))
model.add(SimpleRNN(32, return_sequences=True))
model.add(SimpleRNN(32))  # This last layer only returns the last outputs
model.summary()

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
embedding_3 (Embedding)      (None, None, 32)          320000
_________________________________________________________________
simple_rnn_3 (SimpleRNN)     (None, None, 32)          2080
_________________________________________________________________
simple_rnn_4 (SimpleRNN)     (None, None, 32)          2080
_________________________________________________________________
simple_rnn_5 (SimpleRNN)     (None, None, 32)          2080
_________________________________________________________________
simple_rnn_6 (SimpleRNN)     (None, 32)                2080
=================================================================
Total params: 328,320
Trainable params: 328,320
Non-trainable params: 0
_________________________________________________________________
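As a quick sanity check on the Param # column in the summaries above (this small calculation is not part of the original listings), a SimpleRNN layer with 32 units fed by 32-dimensional embeddings owns an input kernel of shape (32, 32), a recurrent kernel of shape (32, 32), and a bias of shape (32,), which reproduces the reported 2,080 parameters:

vocab_size = 10000     # input dimension of the Embedding layer
embedding_dim = 32     # output dimensionality of the Embedding layer
units = 32             # number of SimpleRNN units

embedding_params = vocab_size * embedding_dim                    # 10000 * 32 = 320000
rnn_params = embedding_dim * units + units * units + units      # input kernel + recurrent kernel + bias
print(embedding_params, rnn_params)                              # 320000 2080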
Now let's try to use such a model on the IMDB movie-review classification problem. First, let's preprocess the data.

Listing 6-22 Preparing the IMDB data

from keras.datasets import imdb
from keras.preprocessing import sequence

max_features = 10000  # Number of words to consider as features
maxlen = 500          # Cut off texts after this number of words (among the top max_features most common words)
batch_size = 32

print('Loading data...')
(input_train, y_train), (input_test, y_test) = imdb.load_data(num_words=max_features)
print(len(input_train), 'train sequences')
print(len(input_test), 'test sequences')

print('Pad sequences (samples x time)')
input_train = sequence.pad_sequences(input_train, maxlen=maxlen)
input_test = sequence.pad_sequences(input_test, maxlen=maxlen)
print('input_train shape:', input_train.shape)
print('input_test shape:', input_test.shape)

Loading data...
25000 train sequences
25000 test sequences
Pad sequences (samples x time)
input_train shape: (25000, 500)
input_test shape: (25000, 500)

Let's train a simple recurrent network using an Embedding layer and a SimpleRNN layer.

Listing 6-23 Training the model with Embedding and SimpleRNN layers

from keras.layers import Dense

model = Sequential()
model.add(Embedding(max_features, 32))
model.add(SimpleRNN(32))
model.add(Dense(1, activation='sigmoid'))

model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])
history = model.fit(input_train, y_train,
                    epochs=10,
                    batch_size=128,
                    validation_split=0.2)

Train on 20000 samples, validate on 5000 samples
Epoch 1/10
20000/20000 [==============================] - 22s - loss: 0.6455 - acc: 0.6210 - val_loss: 0.5293 - val_acc: 0.7758
Epoch 2/10
20000/20000 [==============================] - 20s - loss: 0.4005 - acc: 0.8362 - val_loss: 0.4752 - val_acc: 0.7742
Epoch 3/10
20000/20000 [==============================] - 19s - loss: 0.2739 - acc: 0.8920 - val_loss: 0.4947 - val_acc: 0.8064
Epoch 4/10
20000/20000 [==============================] - 19s - loss: 0.1916 - acc: 0.9290 - val_loss: 0.3783 - val_acc: 0.8460
Epoch 5/10
20000/20000 [==============================] - 19s - loss: 0.1308 - acc: 0.9528 - val_loss: 0.5755 - val_acc: 0.7376
Epoch 6/10
20000/20000 [==============================] - 19s - loss: 0.0924 - acc: 0.9675 - val_loss: 0.5829 - val_acc: 0.7634
Epoch 7/10
20000/20000 [==============================] - 19s - loss: 0.0726 - acc: 0.9768 - val_loss: 0.5541 - val_acc: 0.7932
Epoch 8/10
20000/20000 [==============================] - 19s - loss: 0.0426 - acc: 0.9862 - val_loss: 0.5551 - val_acc: 0.8292
Epoch 9/10
20000/20000 [==============================] - 20s - loss: 0.0300 - acc: 0.9918 - val_loss: 0.5962 - val_acc: 0.8312
Epoch 10/10
20000/20000 [==============================] - 19s - loss: 0.0256 - acc: 0.9925 - val_loss: 0.6707 - val_acc: 0.8054
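Before plotting the training curves, it can also be useful to check how the trained model does on the held-out test split prepared in Listing 6-22. The snippet below is a minimal sketch, not part of the original listings, and the exact numbers will vary from run to run:

# Evaluate the trained model on the test split prepared in Listing 6-22
test_loss, test_acc = model.evaluate(input_test, y_test, batch_size=128)
print('Test loss:', test_loss)
print('Test accuracy:', test_acc)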
Let's display the training and validation loss and accuracy (see figures 6-11 and 6-12).

Listing 6-24 Plotting the results

import matplotlib.pyplot as plt

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()

plt.show()

As a reminder, in chapter 3 our first naive approach to this dataset achieved a test accuracy of 88%. Unfortunately, this small recurrent network doesn't perform well compared to that baseline (its validation accuracy is only about 85%). Part of the problem is that the inputs only consider the first 500 words, rather than the full sequences, so the RNN has access to less information than the earlier baseline model. The remainder of the problem is that SimpleRNN isn't good at processing long sequences, such as text.