

Keras: Recurrent Neural Networks
6.2.1 Recurrent layers in Keras
https://blog.csdn.net/qq_30614345/article/details/98714874
Listing 6-22 Preparing the IMDB data
Listing 6-23 Training the model with Embedding and SimpleRNN layers
Listing 6-24 Plotting the results


# Like all recurrent layers in Keras, SimpleRNN can be run in two different modes:
# it can return either the full sequence of successive outputs for each timestep
# (a 3D tensor of shape (batch_size, timesteps, output_features)) or only the last
# output for each input sequence (a 2D tensor of shape (batch_size, output_features)).
# These two modes are controlled by the return_sequences constructor argument.
# Let's look at an example that uses SimpleRNN and returns only the output at the
# last timestep:
from keras.models import Sequential
from keras.layers import Embedding, SimpleRNN
model = Sequential()
model.add(Embedding(10000, 32))
model.add(SimpleRNN(32))
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_1 (Embedding)      (None, None, 32)          320000    
_________________________________________________________________
simple_rnn_1 (SimpleRNN)     (None, 32)                2080      
=================================================================
Total params: 322,080
Trainable params: 322,080
Non-trainable params: 0
_________________________________________________________________
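The parameter counts follow directly from the layer sizes: the Embedding layer stores 10000 × 32 = 320,000 weights, and the SimpleRNN layer has 32 × 32 input weights, 32 × 32 recurrent weights, and 32 biases, for a total of 32 × (32 + 32 + 1) = 2,080 parameters.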
model = Sequential()
model.add(Embedding(10000, 32))
model.add(SimpleRNN(32, return_sequences=True))
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_2 (Embedding)      (None, None, 32)          320000    
_________________________________________________________________
simple_rnn_2 (SimpleRNN)     (None, None, 32)          2080      
=================================================================
Total params: 322,080
Trainable params: 322,080
Non-trainable params: 0
_________________________________________________________________
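To make the two modes concrete, here is a minimal sketch (the variable names and the dummy batch are illustrative, not from the original listing) that pushes a random batch through both configurations and prints the resulting output shapes:

import numpy as np
from keras.models import Sequential
from keras.layers import Embedding, SimpleRNN

dummy_batch = np.random.randint(0, 10000, size=(2, 20))  # 2 sequences, 20 timesteps each

last_only = Sequential([Embedding(10000, 32), SimpleRNN(32)])
full_sequence = Sequential([Embedding(10000, 32), SimpleRNN(32, return_sequences=True)])

print(last_only.predict(dummy_batch).shape)      # (2, 32)
print(full_sequence.predict(dummy_batch).shape)  # (2, 20, 32)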
It is sometimes useful to stack several recurrent layers one after the other in order to increase the representational power of a network. In such a setup, you have to get all intermediate layers to return full sequences:

model = Sequential()
model.add(Embedding(10000, 32))
model.add(SimpleRNN(32, return_sequences=True))
model.add(SimpleRNN(32, return_sequences=True))
model.add(SimpleRNN(32, return_sequences=True))
model.add(SimpleRNN(32))  # This last layer only returns the last outputs.
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_3 (Embedding)      (None, None, 32)          320000    
_________________________________________________________________
simple_rnn_3 (SimpleRNN)     (None, None, 32)          2080      
_________________________________________________________________
simple_rnn_4 (SimpleRNN)     (None, None, 32)          2080      
_________________________________________________________________
simple_rnn_5 (SimpleRNN)     (None, None, 32)          2080      
_________________________________________________________________
simple_rnn_6 (SimpleRNN)     (None, 32)                2080      
=================================================================
Total params: 328,320
Trainable params: 328,320
Non-trainable params: 0
_________________________________________________________________
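Each additional SimpleRNN(32) stacked on 32-dimensional sequences adds the same 32 × (32 + 32 + 1) = 2,080 parameters, which is why the total grows from 322,080 to 328,320.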
Now let's try to use such a model on the IMDB movie review classification problem. First, let's preprocess the data:

# Listing 6-22 Preparing the IMDB data
from keras.datasets import imdb
from keras.preprocessing import sequence
max_features = 10000  # number of words to consider as features
maxlen = 500  # cut texts after this number of words (among the top max_features most common words)
batch_size = 32

print('Loading data...')
(input_train, y_train), (input_test, y_test) = imdb.load_data(num_words=max_features)
print(len(input_train), 'train sequences')
print(len(input_test), 'test sequences')

print('Pad sequences (samples x time)')
input_train = sequence.pad_sequences(input_train, maxlen=maxlen)
input_test = sequence.pad_sequences(input_test, maxlen=maxlen)
print('input_train shape:', input_train.shape)
print('input_test shape:', input_test.shape)
Loading data...
25000 train sequences
25000 test sequences
Pad sequences (samples x time)
input_train shape: (25000, 500)
input_test shape: (25000, 500)
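pad_sequences left-pads shorter sequences with zeros and truncates longer ones (by default from the front) so that every row has exactly maxlen entries. A quick sketch of the behavior on toy data:

from keras.preprocessing import sequence

print(sequence.pad_sequences([[1, 2, 3], [1, 2, 3, 4, 5]], maxlen=4))
# [[0 1 2 3]
#  [2 3 4 5]]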
Let's train a simple recurrent network using an Embedding layer and a SimpleRNN layer:

# Listing 6-23 Training the model with Embedding and SimpleRNN layers
from keras.layers import Dense
model = Sequential()
model.add(Embedding(max_features, 32))
model.add(SimpleRNN(32))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])
history = model.fit(input_train, y_train,
                    epochs=10,
                    batch_size=128,
                    validation_split=0.2)
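Note that validation_split=0.2 holds out the last 20% of the training data (5,000 of the 25,000 reviews) for validation, which is what the log below reports.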
Train on 20000 samples, validate on 5000 samples
Epoch 1/10
20000/20000 [==============================] - 22s - loss: 0.6455 - acc: 0.6210 - val_loss: 0.5293 - val_acc: 0.7758
Epoch 2/10
20000/20000 [==============================] - 20s - loss: 0.4005 - acc: 0.8362 - val_loss: 0.4752 - val_acc: 0.7742
Epoch 3/10
20000/20000 [==============================] - 19s - loss: 0.2739 - acc: 0.8920 - val_loss: 0.4947 - val_acc: 0.8064
Epoch 4/10
20000/20000 [==============================] - 19s - loss: 0.1916 - acc: 0.9290 - val_loss: 0.3783 - val_acc: 0.8460
Epoch 5/10
20000/20000 [==============================] - 19s - loss: 0.1308 - acc: 0.9528 - val_loss: 0.5755 - val_acc: 0.7376
Epoch 6/10
20000/20000 [==============================] - 19s - loss: 0.0924 - acc: 0.9675 - val_loss: 0.5829 - val_acc: 0.7634
Epoch 7/10
20000/20000 [==============================] - 19s - loss: 0.0726 - acc: 0.9768 - val_loss: 0.5541 - val_acc: 0.7932
Epoch 8/10
20000/20000 [==============================] - 19s - loss: 0.0426 - acc: 0.9862 - val_loss: 0.5551 - val_acc: 0.8292
Epoch 9/10
20000/20000 [==============================] - 20s - loss: 0.0300 - acc: 0.9918 - val_loss: 0.5962 - val_acc: 0.8312
Epoch 10/10
20000/20000 [==============================] - 19s - loss: 0.0256 - acc: 0.9925 - val_loss: 0.6707 - val_acc: 0.8054
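After training, you can also score the model on the held-out test set for a direct comparison with the chapter 3 baseline (a minimal sketch; the exact numbers will vary from run to run):

test_loss, test_acc = model.evaluate(input_test, y_test, batch_size=128)
print('Test accuracy:', test_acc)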
Let's display the training and validation loss and accuracy:

# Listing 6-24 Plotting the results (see figures 6-11 and 6-12)
import matplotlib.pyplot as plt

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(len(acc))

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()

plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
# As a reminder, the first naive approach to this dataset in chapter 3 reached a test
# accuracy of 88%. Unfortunately, this small recurrent network doesn't perform well
# compared to that baseline (validation accuracy of only about 85%). Part of the problem
# is that the inputs only consider the first 500 words rather than full sequences, so the
# RNN has access to less information than the earlier baseline model. The rest of the
# problem is that SimpleRNN isn't good at processing long sequences, such as text.
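A common remedy, which the book turns to next, is to replace SimpleRNN with a gated recurrent layer such as LSTM or GRU, which copes far better with long sequences. A minimal sketch of the same architecture with the SimpleRNN swapped out (this block is illustrative, not one of the numbered listings):

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

model = Sequential()
model.add(Embedding(max_features, 32))
model.add(LSTM(32))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])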


(Figures 6-11 and 6-12: training and validation accuracy and loss curves)
