What is Keras? Keras is a neural-network API written in Python that works with TensorFlow, letting you build networks quickly.
The simplest way to use Keras is keras.Sequential, the sequential model, which represents a linear stack of layers.
To use it, pass a list of layer instances to Sequential, for example:
- import tensorflow as tf
- from tensorflow import keras
- from tensorflow.keras.layers import Dense, Activation
- model = keras.Sequential([
-     # units is the number of neurons; the first layer needs input_shape,
-     # which must be a tuple (input_dim=<int> is an equivalent alternative)
-     Dense(units=32, input_shape=(784,)),
-     Activation('relu'),
-     Dense(units=16),
-     Activation('relu'),
-     Dense(10),
-     # multi-class problem, so the output layer uses softmax; regression models can omit it
-     Activation('softmax')
- ])
You can also add Dense layers to the model one at a time with the add() method:
- model = keras.Sequential()
- model.add(Dense(32, input_shape=(784,)))
- model.add(Activation('relu'))
- model.add(Dense(10))
- model.add(Activation('softmax'))
For the input layer you need to specify the input size via the Dense layer's input_shape argument. Note that the batch size is not included:
input_shape=(784,) is equivalent to a tensor of shape (None, 784) in the network.
input_dim=784 is likewise equivalent to input_shape=(784,).
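As a sanity check on these shapes, the first Dense layer's parameter count follows directly from input_shape; a small sketch, independent of Keras:

```python
# Dense(units=32, input_shape=(784,)) owns a (784, 32) weight matrix
# plus a length-32 bias vector:
input_dim, units = 784, 32
n_params = input_dim * units + units
print(n_params)  # 25120
```

This is the number model.summary() would report for that layer.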
Once the model is created it must be compiled with the .compile() method, which takes the following key arguments:
optimizer: either the string name of a built-in optimizer, such as 'adam', or an Optimizer instance. Common optimizers include SGD, Adam, RMSprop, and Adagrad.
loss: the objective the model tries to minimize. It can be the string name of a built-in loss, such as 'categorical_crossentropy', or a custom loss function.
metrics: how the model's performance is measured. For multi-class problems, metrics=['categorical_accuracy'] is recommended. A metric can be a built-in metric's string identifier or a custom metric function.
Common compile() calls:
- # multi-class classification
- model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['categorical_accuracy'])
- # binary classification
- model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['binary_accuracy'])
- # mean-squared-error regression
- model.compile(optimizer='adam', loss='mse', metrics=['mse'])
- # custom metric function: pass the function object itself, not a string
- def mean_square_error(y_true, y_pred):
-     return tf.reduce_mean(tf.square(y_pred - y_true))
- model.compile(optimizer='adam', loss='mse', metrics=[mean_square_error])
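The custom metric above can be checked with plain numpy; a sketch of what tf.reduce_mean(tf.square(...)) computes:

```python
import numpy as np

def np_mean_square_error(y_true, y_pred):
    # the same computation as tf.reduce_mean(tf.square(y_pred - y_true))
    return float(np.mean(np.square(y_pred - y_true)))

print(np_mean_square_error(np.array([1.0, 2.0]), np.array([1.0, 4.0])))  # 2.0
```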
Train the model with the .fit() method, passing the training data, the number of epochs, the batch_size, and optionally a validation set.
For example:
- # a single-input model with two classes
- import numpy as np
- # generate random data
- data= np.random.random((1000,100))
- labels = np.random.randint(2,size = (1000,1))
-
- # build the network
- model = keras.Sequential()
- model.add(Dense(32,activation='relu',input_dim = 100))
- model.add(Dense(1,activation = 'sigmoid'))
-
- # compile the network
- model.compile(optimizer='adam',loss='binary_crossentropy',metrics=['binary_accuracy'])
-
- # train the network
- model.fit(data,labels,epochs=10,batch_size = 32)
Epoch 1/10
32/32 [==============================] - 1s 2ms/step - loss: 0.7029 - binary_accuracy: 0.4880
Epoch 2/10
32/32 [==============================] - 0s 2ms/step - loss: 0.6896 - binary_accuracy: 0.5280
Epoch 3/10
32/32 [==============================] - 0s 2ms/step - loss: 0.6846 - binary_accuracy: 0.5410
Epoch 4/10
32/32 [==============================] - 0s 2ms/step - loss: 0.6822 - binary_accuracy: 0.5490
Epoch 5/10
32/32 [==============================] - 0s 2ms/step - loss: 0.6809 - binary_accuracy: 0.5610
Epoch 6/10
32/32 [==============================] - 0s 2ms/step - loss: 0.6734 - binary_accuracy: 0.5990
Epoch 7/10
32/32 [==============================] - 0s 2ms/step - loss: 0.6701 - binary_accuracy: 0.6020
Epoch 8/10
32/32 [==============================] - 0s 2ms/step - loss: 0.6661 - binary_accuracy: 0.6080
Epoch 9/10
32/32 [==============================] - 0s 2ms/step - loss: 0.6608 - binary_accuracy: 0.6280
Epoch 10/10
32/32 [==============================] - 0s 2ms/step - loss: 0.6569 - binary_accuracy: 0.6270
<keras.callbacks.History at 0x1b82e8c2f20>
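The 32/32 shown on each epoch line is the number of batches per epoch, not a score: 1000 samples at batch_size=32 give ceil(1000/32) batches, the last one partial. A quick check:

```python
import math

n_samples, batch_size = 1000, 32
steps_per_epoch = math.ceil(n_samples / batch_size)
print(steps_per_epoch)  # 32
```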
The following complete example shows Keras in action:
- import numpy as np
- import tensorflow as tf
- import pandas as pd
- from tensorflow import keras
- from sklearn.model_selection import train_test_split
- from sklearn.preprocessing import StandardScaler
- from tensorflow.keras.datasets import cifar10
-
- # load the data
- (x_train,y_train),(x_test,y_test) = cifar10.load_data()
- # split the training data into training and validation sets
- x_train,x_valid,y_train,y_valid = train_test_split(x_train,y_train)
-
- # flatten the images and scale to [0, 1]
- x_train = x_train.reshape(-1,32*32*3) / 255.
- x_valid = x_valid.reshape(-1,32*32*3) / 255.
- x_test = x_test.reshape(-1,32*32*3) / 255.
-
- # standardize
- scaler = StandardScaler()
- x_train_scaled = scaler.fit_transform(x_train)
- x_test_scaled = scaler.transform(x_test)
- x_valid_scaled = scaler.transform(x_valid)
-
- y_train = keras.utils.to_categorical(y_train,num_classes=10)
- y_valid = keras.utils.to_categorical(y_valid,num_classes=10)
- y_test = keras.utils.to_categorical(y_test,num_classes=10)
- # define the network
- model = keras.Sequential()
- model.add(Dense(32,activation='relu',input_dim = 3072))
- model.add(Dense(16,activation='relu'))
- model.add(Dense(10,activation='softmax'))
-
- # compile the network
- model.compile(optimizer='adam',loss = 'categorical_crossentropy',metrics=['categorical_accuracy'])
-
- # train
- model.fit(x_train_scaled,y_train,validation_data=(x_valid_scaled,y_valid),epochs=50,batch_size = 32)
Epoch 1/50
1172/1172 [==============================] - 5s 3ms/step - loss: 1.9087 - categorical_accuracy: 0.3299 - val_loss: 1.7375 - val_categorical_accuracy: 0.3861
Epoch 2/50
1172/1172 [==============================] - 3s 3ms/step - loss: 1.6572 - categorical_accuracy: 0.4123 - val_loss: 1.6373 - val_categorical_accuracy: 0.4234
Epoch 3/50
1172/1172 [==============================] - 3s 3ms/step - loss: 1.5730 - categorical_accuracy: 0.4411 - val_loss: 1.6272 - val_categorical_accuracy: 0.4226
Epoch 4/50
1172/1172 [==============================] - 3s 3ms/step - loss: 1.5206 - categorical_accuracy: 0.4602 - val_loss: 1.5677 - val_categorical_accuracy: 0.4451
Epoch 5/50
1172/1172 [==============================] - 4s 3ms/step - loss: 1.4751 - categorical_accuracy: 0.4750 - val_loss: 1.5647 - val_categorical_accuracy: 0.4529
Epoch 6/50
1172/1172 [==============================] - 4s 3ms/step - loss: 1.4438 - categorical_accuracy: 0.4859 - val_loss: 1.5301 - val_categorical_accuracy: 0.4662
Epoch 7/50
1172/1172 [==============================] - 3s 3ms/step - loss: 1.4148 - categorical_accuracy: 0.4959 - val_loss: 1.5553 - val_categorical_accuracy: 0.4551
Epoch 8/50
1172/1172 [==============================] - 3s 3ms/step - loss: 1.3921 - categorical_accuracy: 0.5031 - val_loss: 1.5410 - val_categorical_accuracy: 0.4561
Epoch 9/50
1172/1172 [==============================] - 4s 3ms/step - loss: 1.3712 - categorical_accuracy: 0.5097 - val_loss: 1.5326 - val_categorical_accuracy: 0.4617
Epoch 10/50
1172/1172 [==============================] - 3s 3ms/step - loss: 1.3525 - categorical_accuracy: 0.5177 - val_loss: 1.5390 - val_categorical_accuracy: 0.4564
Epoch 11/50
1172/1172 [==============================] - 4s 3ms/step - loss: 1.3357 - categorical_accuracy: 0.5244 - val_loss: 1.5291 - val_categorical_accuracy: 0.4671
Epoch 12/50
1172/1172 [==============================] - 3s 3ms/step - loss: 1.3210 - categorical_accuracy: 0.5287 - val_loss: 1.5442 - val_categorical_accuracy: 0.4642
Epoch 13/50
1172/1172 [==============================] - 3s 3ms/step - loss: 1.3038 - categorical_accuracy: 0.5338 - val_loss: 1.5494 - val_categorical_accuracy: 0.4634
Epoch 14/50
1172/1172 [==============================] - 3s 3ms/step - loss: 1.2931 - categorical_accuracy: 0.5375 - val_loss: 1.5337 - val_categorical_accuracy: 0.4676
Epoch 15/50
1172/1172 [==============================] - 4s 3ms/step - loss: 1.2801 - categorical_accuracy: 0.5431 - val_loss: 1.5629 - val_categorical_accuracy: 0.4616
Epoch 16/50
1172/1172 [==============================] - 3s 3ms/step - loss: 1.2689 - categorical_accuracy: 0.5448 - val_loss: 1.5706 - val_categorical_accuracy: 0.4558
Epoch 17/50
1172/1172 [==============================] - 3s 3ms/step - loss: 1.2582 - categorical_accuracy: 0.5518 - val_loss: 1.5472 - val_categorical_accuracy: 0.4685
Epoch 18/50
1172/1172 [==============================] - 5s 4ms/step - loss: 1.2454 - categorical_accuracy: 0.5537 - val_loss: 1.5618 - val_categorical_accuracy: 0.4651
Epoch 19/50
1172/1172 [==============================] - 3s 3ms/step - loss: 1.2356 - categorical_accuracy: 0.5560 - val_loss: 1.5642 - val_categorical_accuracy: 0.4647
Epoch 20/50
1172/1172 [==============================] - 4s 3ms/step - loss: 1.2279 - categorical_accuracy: 0.5610 - val_loss: 1.5724 - val_categorical_accuracy: 0.4683
Epoch 21/50
1172/1172 [==============================] - 4s 3ms/step - loss: 1.2181 - categorical_accuracy: 0.5655 - val_loss: 1.5854 - val_categorical_accuracy: 0.4645
Epoch 22/50
1172/1172 [==============================] - 4s 3ms/step - loss: 1.2122 - categorical_accuracy: 0.5668 - val_loss: 1.5938 - val_categorical_accuracy: 0.4634
Epoch 23/50
1172/1172 [==============================] - 3s 3ms/step - loss: 1.1990 - categorical_accuracy: 0.5689 - val_loss: 1.6023 - val_categorical_accuracy: 0.4569
Epoch 24/50
1172/1172 [==============================] - 4s 3ms/step - loss: 1.1906 - categorical_accuracy: 0.5740 - val_loss: 1.5832 - val_categorical_accuracy: 0.4637
Epoch 25/50
1172/1172 [==============================] - 3s 3ms/step - loss: 1.1835 - categorical_accuracy: 0.5767 - val_loss: 1.6023 - val_categorical_accuracy: 0.4650
Epoch 26/50
1172/1172 [==============================] - 3s 3ms/step - loss: 1.1747 - categorical_accuracy: 0.5770 - val_loss: 1.6166 - val_categorical_accuracy: 0.4618
Epoch 27/50
1172/1172 [==============================] - 4s 3ms/step - loss: 1.1695 - categorical_accuracy: 0.5797 - val_loss: 1.6303 - val_categorical_accuracy: 0.4588
Epoch 28/50
1172/1172 [==============================] - 4s 3ms/step - loss: 1.1600 - categorical_accuracy: 0.5827 - val_loss: 1.6489 - val_categorical_accuracy: 0.4578
Epoch 29/50
1172/1172 [==============================] - 4s 3ms/step - loss: 1.1511 - categorical_accuracy: 0.5858 - val_loss: 1.6261 - val_categorical_accuracy: 0.4647
Epoch 30/50
1172/1172 [==============================] - 4s 3ms/step - loss: 1.1444 - categorical_accuracy: 0.5889 - val_loss: 1.6377 - val_categorical_accuracy: 0.4626
Epoch 31/50
1172/1172 [==============================] - 3s 3ms/step - loss: 1.1385 - categorical_accuracy: 0.5888 - val_loss: 1.6485 - val_categorical_accuracy: 0.4592
Epoch 32/50
1172/1172 [==============================] - 3s 3ms/step - loss: 1.1331 - categorical_accuracy: 0.5904 - val_loss: 1.6449 - val_categorical_accuracy: 0.4682
Epoch 33/50
1172/1172 [==============================] - 3s 3ms/step - loss: 1.1274 - categorical_accuracy: 0.5915 - val_loss: 1.6620 - val_categorical_accuracy: 0.4652
Epoch 34/50
1172/1172 [==============================] - 4s 3ms/step - loss: 1.1211 - categorical_accuracy: 0.5957 - val_loss: 1.6778 - val_categorical_accuracy: 0.4603
Epoch 35/50
1172/1172 [==============================] - 4s 3ms/step - loss: 1.1122 - categorical_accuracy: 0.5973 - val_loss: 1.6761 - val_categorical_accuracy: 0.4554
Epoch 36/50
1172/1172 [==============================] - 4s 3ms/step - loss: 1.1108 - categorical_accuracy: 0.6001 - val_loss: 1.6719 - val_categorical_accuracy: 0.4577
Epoch 37/50
1172/1172 [==============================] - 4s 3ms/step - loss: 1.1054 - categorical_accuracy: 0.6007 - val_loss: 1.6938 - val_categorical_accuracy: 0.4534
Epoch 38/50
1172/1172 [==============================] - 3s 3ms/step - loss: 1.0971 - categorical_accuracy: 0.6020 - val_loss: 1.6876 - val_categorical_accuracy: 0.4574
Epoch 39/50
1172/1172 [==============================] - 3s 3ms/step - loss: 1.0902 - categorical_accuracy: 0.6061 - val_loss: 1.7000 - val_categorical_accuracy: 0.4585
Epoch 40/50
1172/1172 [==============================] - 3s 3ms/step - loss: 1.0896 - categorical_accuracy: 0.6086 - val_loss: 1.7030 - val_categorical_accuracy: 0.4576
Epoch 41/50
1172/1172 [==============================] - 3s 3ms/step - loss: 1.0815 - categorical_accuracy: 0.6097 - val_loss: 1.7073 - val_categorical_accuracy: 0.4637
Epoch 42/50
1172/1172 [==============================] - 3s 2ms/step - loss: 1.0746 - categorical_accuracy: 0.6126 - val_loss: 1.7007 - val_categorical_accuracy: 0.4581
Epoch 43/50
1172/1172 [==============================] - 3s 3ms/step - loss: 1.0666 - categorical_accuracy: 0.6143 - val_loss: 1.7090 - val_categorical_accuracy: 0.4595
Epoch 44/50
1172/1172 [==============================] - 3s 3ms/step - loss: 1.0671 - categorical_accuracy: 0.6157 - val_loss: 1.7434 - val_categorical_accuracy: 0.4526
Epoch 45/50
1172/1172 [==============================] - 4s 3ms/step - loss: 1.0638 - categorical_accuracy: 0.6145 - val_loss: 1.7556 - val_categorical_accuracy: 0.4552
Epoch 46/50
1172/1172 [==============================] - 4s 3ms/step - loss: 1.0565 - categorical_accuracy: 0.6179 - val_loss: 1.7351 - val_categorical_accuracy: 0.4544
Epoch 47/50
1172/1172 [==============================] - 4s 3ms/step - loss: 1.0538 - categorical_accuracy: 0.6188 - val_loss: 1.7682 - val_categorical_accuracy: 0.4530
Epoch 48/50
1172/1172 [==============================] - 4s 3ms/step - loss: 1.0449 - categorical_accuracy: 0.6234 - val_loss: 1.7398 - val_categorical_accuracy: 0.4606
Epoch 49/50
1172/1172 [==============================] - 3s 3ms/step - loss: 1.0405 - categorical_accuracy: 0.6208 - val_loss: 1.7586 - val_categorical_accuracy: 0.4542
Epoch 50/50
1172/1172 [==============================] - 3s 3ms/step - loss: 1.0382 - categorical_accuracy: 0.6243 - val_loss: 1.7620 - val_categorical_accuracy: 0.4510
<keras.callbacks.History at 0x1b833266290>
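In the example above, keras.utils.to_categorical one-hot encodes the integer labels before training. A minimal numpy sketch of the same transform:

```python
import numpy as np

def one_hot(y, num_classes):
    # integer labels -> one-hot rows, like keras.utils.to_categorical
    y = np.asarray(y).ravel()
    out = np.zeros((y.size, num_classes))
    out[np.arange(y.size), y] = 1.0
    return out

print(one_hot([3, 0], num_classes=4))
# [[0. 0. 0. 1.]
#  [1. 0. 0. 0.]]
```

One-hot labels are what 'categorical_crossentropy' expects; with raw integer labels you would use 'sparse_categorical_crossentropy' instead.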
- from keras.layers import Input, Dense
- from keras.models import Model
-
- # returns a tensor
- inputs = Input(shape=(3072,))
-
- # a layer instance is callable: it takes a tensor and returns a tensor
- output_1 = Dense(units=32,activation='relu')(inputs)
- output_2 = Dense(units=16,activation='relu')(output_1)
- output_3 = Dense(units=16,activation='relu')(output_2)
- predictions = Dense(units=10,activation='softmax')(output_3)
-
- # a model with one input layer and four fully connected layers
- model = Model(inputs = inputs,outputs = predictions)
- # compile the network
- model.compile(optimizer='rmsprop',
- loss='categorical_crossentropy',
- metrics=['accuracy'])
- # train
- model.fit(x_train_scaled,y_train,validation_data=(x_valid_scaled,y_valid),epochs=10,batch_size = 32)
Epoch 1/10
1172/1172 [==============================] - 5s 4ms/step - loss: 1.9020 - accuracy: 0.3208 - val_loss: 1.7769 - val_accuracy: 0.3576
Epoch 2/10
1172/1172 [==============================] - 4s 3ms/step - loss: 1.6904 - accuracy: 0.3969 - val_loss: 1.6819 - val_accuracy: 0.3958
Epoch 3/10
1172/1172 [==============================] - 4s 3ms/step - loss: 1.6070 - accuracy: 0.4299 - val_loss: 1.6379 - val_accuracy: 0.4198
Epoch 4/10
1172/1172 [==============================] - 4s 3ms/step - loss: 1.5505 - accuracy: 0.4476 - val_loss: 1.6109 - val_accuracy: 0.4325
Epoch 5/10
1172/1172 [==============================] - 3s 3ms/step - loss: 1.5083 - accuracy: 0.4644 - val_loss: 1.6018 - val_accuracy: 0.4421
Epoch 6/10
1172/1172 [==============================] - 3s 3ms/step - loss: 1.4759 - accuracy: 0.4776 - val_loss: 1.5803 - val_accuracy: 0.4422
Epoch 7/10
1172/1172 [==============================] - 3s 3ms/step - loss: 1.4493 - accuracy: 0.4858 - val_loss: 1.5758 - val_accuracy: 0.4443
Epoch 8/10
1172/1172 [==============================] - 3s 3ms/step - loss: 1.4265 - accuracy: 0.4939 - val_loss: 1.5672 - val_accuracy: 0.4545
Epoch 9/10
1172/1172 [==============================] - 3s 3ms/step - loss: 1.4055 - accuracy: 0.5027 - val_loss: 1.5873 - val_accuracy: 0.4450
Epoch 10/10
1172/1172 [==============================] - 3s 3ms/step - loss: 1.3908 - accuracy: 0.5075 - val_loss: 1.5650 - val_accuracy: 0.4541
<keras.callbacks.History at 0x1b837731300>
This example illustrates a few points:
A layer instance is callable: it takes a tensor as its argument and returns a tensor.
Both inputs and outputs are tensors, and together they can define a Model.
Such a model can be trained exactly like a Keras Sequential model.
Keras is bundled with TensorFlow as tf.keras.
Creating a model
Create a simple model with tf.keras.Sequential:
- model = keras.Sequential()
-
- # a layer with 32 neurons
- model.add(Dense(units=32,activation='relu',input_dim = 3072))
- # add another layer
- model.add(Dense(units=16,activation='relu'))
- model.add(Dense(units=16,activation='relu'))
- # output layer
- model.add(Dense(units=10,activation='softmax'))
- model.compile(optimizer='rmsprop',
- loss='categorical_crossentropy',
- metrics=['accuracy'])
- # train
- model.fit(x_train_scaled,y_train,validation_data=(x_valid_scaled,y_valid),epochs=10,batch_size = 32)
Epoch 1/10
1172/1172 [==============================] - 5s 4ms/step - loss: 1.9345 - accuracy: 0.3143 - val_loss: 1.8328 - val_accuracy: 0.3518
Epoch 2/10
1172/1172 [==============================] - 3s 3ms/step - loss: 1.7256 - accuracy: 0.3870 - val_loss: 1.7017 - val_accuracy: 0.3914
Epoch 3/10
1172/1172 [==============================] - 4s 3ms/step - loss: 1.6307 - accuracy: 0.4256 - val_loss: 1.6666 - val_accuracy: 0.4175
Epoch 4/10
1172/1172 [==============================] - 3s 3ms/step - loss: 1.5745 - accuracy: 0.4479 - val_loss: 1.6085 - val_accuracy: 0.4316
Epoch 5/10
1172/1172 [==============================] - 4s 3ms/step - loss: 1.5276 - accuracy: 0.4630 - val_loss: 1.5814 - val_accuracy: 0.4462
Epoch 6/10
1172/1172 [==============================] - 3s 3ms/step - loss: 1.4885 - accuracy: 0.4755 - val_loss: 1.6277 - val_accuracy: 0.4314
Epoch 7/10
1172/1172 [==============================] - 3s 3ms/step - loss: 1.4595 - accuracy: 0.4849 - val_loss: 1.6094 - val_accuracy: 0.4465
Epoch 8/10
1172/1172 [==============================] - 4s 3ms/step - loss: 1.4337 - accuracy: 0.4957 - val_loss: 1.5762 - val_accuracy: 0.4534
Epoch 9/10
1172/1172 [==============================] - 4s 3ms/step - loss: 1.4125 - accuracy: 0.5013 - val_loss: 1.5565 - val_accuracy: 0.4584
Epoch 10/10
1172/1172 [==============================] - 4s 3ms/step - loss: 1.3930 - accuracy: 0.5096 - val_loss: 1.5682 - val_accuracy: 0.4571
<keras.callbacks.History at 0x1b83a65b430>
Configuring layers
Layers take three important groups of parameters:
activation: the activation function, e.g. 'relu', 'sigmoid', 'tanh'.
kernel_initializer and bias_initializer: initializers for the weights and bias. Glorot uniform is the default and rarely needs changing.
kernel_regularizer and bias_regularizer: L1 or L2 regularization for the weights and bias.
- # sigmoid activation
- keras.layers.Activation(activation='sigmoid')
- # or:
- keras.layers.Activation(activation=tf.sigmoid)
-
- # L1 regularization on the weights
- Dense(units=64,kernel_regularizer=tf.keras.regularizers.l1(0.001))
-
- # L2 regularization on the bias
- Dense(units=64,bias_regularizer=keras.regularizers.l2(0.01))
-
- # random orthogonal initializer for the weights
- Dense(units=64,kernel_initializer='orthogonal')
-
- # constant initializer for the bias
- keras.layers.Dense(64, bias_initializer=tf.keras.initializers.constant(2.0))
<keras.layers.core.dense.Dense at 0x1b83c40d720>
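The regularizers above work by adding a penalty term to the loss. What l1(0.001) and l2(0.01) contribute for a given weight vector, as a numpy sketch with made-up weights:

```python
import numpy as np

w = np.array([0.5, -0.5, 1.0])            # example weight vector
l1_penalty = 0.001 * np.sum(np.abs(w))    # what regularizers.l1(0.001) adds to the loss
l2_penalty = 0.01 * np.sum(np.square(w))  # what regularizers.l2(0.01) adds to the loss
print(round(float(l1_penalty), 6), round(float(l2_penalty), 6))  # 0.002 0.015
```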
Training and evaluation
Configuring the model
Configure the model with compile(), whose key argument groups are:
optimizer: mainly tf.keras.optimizers.Adam, tf.keras.optimizers.RMSprop, or tf.keras.optimizers.SGD.
loss: the loss function, mainly mean squared error ('mse', regression), 'categorical_crossentropy' (multi-class), and 'binary_crossentropy' (binary).
metrics: the evaluation metric; classification usually uses 'accuracy'.
Some compile() examples:
- # mean-squared-error regression
- model.compile(optimizer='adam', loss='mse', metrics=['mse'])
-
- # multi-class classification
- model.compile(optimizer=tf.optimizers.SGD(), loss='categorical_crossentropy', metrics=['accuracy'])
Training
Train with the model's fit() method; its main arguments are:
epochs: the number of passes over the training data
batch_size: the number of samples per batch
validation_data: the validation set
For small datasets, you can pass the training data to fit() directly.
- import numpy as np
-
- data= np.random.random((1000,100))
- labels = np.random.randint(2,size = (1000,1))
-
- val_data = np.random.random((100,100))
- val_labels = np.random.randint(2,size = (100,1))
- model = keras.Sequential()
-
- # a layer with 32 neurons
- model.add(Dense(units=32,activation='relu',input_dim = 100))
- # add another layer
- model.add(Dense(units=16,activation='relu'))
- model.add(Dense(units=16,activation='relu'))
- # output layer
- model.add(Dense(units=1,activation='sigmoid'))
- model.compile(optimizer='rmsprop',
- loss='binary_crossentropy',
- metrics=['binary_accuracy'])
-
- model.fit(data, labels, epochs=10, batch_size=32,
- validation_data=(val_data, val_labels))
Epoch 1/10
32/32 [==============================] - 1s 10ms/step - loss: 0.7005 - binary_accuracy: 0.4960 - val_loss: 0.7016 - val_binary_accuracy: 0.5000
Epoch 2/10
32/32 [==============================] - 0s 2ms/step - loss: 0.6920 - binary_accuracy: 0.4970 - val_loss: 0.6922 - val_binary_accuracy: 0.5200
Epoch 3/10
32/32 [==============================] - 0s 3ms/step - loss: 0.6899 - binary_accuracy: 0.5290 - val_loss: 0.6906 - val_binary_accuracy: 0.5300
Epoch 4/10
32/32 [==============================] - 0s 3ms/step - loss: 0.6884 - binary_accuracy: 0.5580 - val_loss: 0.6927 - val_binary_accuracy: 0.5500
Epoch 5/10
32/32 [==============================] - 0s 3ms/step - loss: 0.6843 - binary_accuracy: 0.5650 - val_loss: 0.6975 - val_binary_accuracy: 0.5100
Epoch 6/10
32/32 [==============================] - 0s 3ms/step - loss: 0.6796 - binary_accuracy: 0.5640 - val_loss: 0.6966 - val_binary_accuracy: 0.5500
Epoch 7/10
32/32 [==============================] - 0s 3ms/step - loss: 0.6804 - binary_accuracy: 0.5670 - val_loss: 0.6989 - val_binary_accuracy: 0.5100
Epoch 8/10
32/32 [==============================] - 0s 3ms/step - loss: 0.6760 - binary_accuracy: 0.5900 - val_loss: 0.6922 - val_binary_accuracy: 0.5300
Epoch 9/10
32/32 [==============================] - 0s 3ms/step - loss: 0.6729 - binary_accuracy: 0.5910 - val_loss: 0.7059 - val_binary_accuracy: 0.5100
Epoch 10/10
32/32 [==============================] - 0s 3ms/step - loss: 0.6681 - binary_accuracy: 0.5840 - val_loss: 0.6927 - val_binary_accuracy: 0.5400
<keras.callbacks.History at 0x1b83bdae920>
For larger datasets, use TensorFlow's tf.data.Dataset.
- # wrap the numpy arrays in a Dataset
- dataset = tf.data.Dataset.from_tensor_slices((data,labels))
- # batches of 32, repeated indefinitely
- dataset = dataset.batch(32).repeat()
-
- val_dataset = tf.data.Dataset.from_tensor_slices((val_data,val_labels))
- val_dataset = val_dataset.batch(32).repeat()
-
- # don't forget steps_per_epoch, the number of batches that make up one epoch
- # (1000 // 32 = 31 full batches, hence 31/31 in the log below)
- model.fit(dataset, epochs=10, steps_per_epoch=1000 // 32,
- validation_data=val_dataset,
- validation_steps=3)
Epoch 1/10
31/31 [==============================] - 1s 8ms/step - loss: 0.6623 - binary_accuracy: 0.6270 - val_loss: 0.6938 - val_binary_accuracy: 0.5312
Epoch 2/10
31/31 [==============================] - 0s 2ms/step - loss: 0.6588 - binary_accuracy: 0.6260 - val_loss: 0.6939 - val_binary_accuracy: 0.5521
Epoch 3/10
31/31 [==============================] - 0s 2ms/step - loss: 0.6553 - binary_accuracy: 0.6322 - val_loss: 0.6935 - val_binary_accuracy: 0.5417
Epoch 4/10
31/31 [==============================] - 0s 3ms/step - loss: 0.6486 - binary_accuracy: 0.6550 - val_loss: 0.7048 - val_binary_accuracy: 0.5208
Epoch 5/10
31/31 [==============================] - 0s 2ms/step - loss: 0.6427 - binary_accuracy: 0.6601 - val_loss: 0.6937 - val_binary_accuracy: 0.4896
Epoch 6/10
31/31 [==============================] - 0s 2ms/step - loss: 0.6398 - binary_accuracy: 0.6601 - val_loss: 0.6916 - val_binary_accuracy: 0.5000
Epoch 7/10
31/31 [==============================] - 0s 2ms/step - loss: 0.6321 - binary_accuracy: 0.6643 - val_loss: 0.6964 - val_binary_accuracy: 0.5104
Epoch 8/10
31/31 [==============================] - 0s 2ms/step - loss: 0.6234 - binary_accuracy: 0.6777 - val_loss: 0.7023 - val_binary_accuracy: 0.5000
Epoch 9/10
31/31 [==============================] - 0s 2ms/step - loss: 0.6214 - binary_accuracy: 0.6756 - val_loss: 0.6996 - val_binary_accuracy: 0.5208
Epoch 10/10
31/31 [==============================] - 0s 2ms/step - loss: 0.6131 - binary_accuracy: 0.6860 - val_loss: 0.7113 - val_binary_accuracy: 0.5521
<keras.callbacks.History at 0x1b912c7e3b0>
Evaluation and prediction
Use tf.keras.Model.evaluate and tf.keras.Model.predict to evaluate and predict. evaluate prints the model's loss and metric scores.
- data= np.random.random((1000,100))
- labels = np.random.randint(2,size = (1000,1))
-
- # plain numpy data
- model.evaluate(data,labels,batch_size=32)
-
- # a tf.data.Dataset
- model.evaluate(dataset, steps=30)
32/32 [==============================] - 0s 1ms/step - loss: 0.7235 - binary_accuracy: 0.5000
30/30 [==============================] - 0s 1ms/step - loss: 0.6014 - binary_accuracy: 0.6906
[0.6014314293861389, 0.690625011920929]
Prediction:
- result = model.predict(data, batch_size=32)
- print(result.shape)
32/32 [==============================] - 0s 1ms/step
(1000, 1)
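predict returns the sigmoid probabilities, shape (1000, 1). To turn them into hard 0/1 class labels, threshold at 0.5; a sketch with made-up probabilities:

```python
import numpy as np

probs = np.array([[0.2], [0.7], [0.5]])     # example predict() output
labels = (probs > 0.5).astype(int).ravel()  # strict > puts exactly 0.5 in class 0
print(labels)  # [0 1 0]
```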
The functional API
With the functional API you define each component yourself and wire them together by hand.
- from tensorflow.keras import layers
- inputs = tf.keras.Input(shape=(100,))  # returns a symbolic input tensor
-
- # A layer instance is callable on a tensor, and returns a tensor.
- x = layers.Dense(64, activation='relu')(inputs)
- x = layers.Dense(64, activation='relu')(x)
- predictions = layers.Dense(1, activation='sigmoid')(x)
-
- model = tf.keras.Model(inputs=inputs, outputs=predictions)
-
- # The compile step specifies the training configuration.
- # Note: 'categorical_crossentropy' is the wrong loss for a single sigmoid
- # output: Keras renormalizes the one "class" to sum to 1, so the loss is a
- # constant zero (visible in the log below). Use 'binary_crossentropy' here.
- model.compile(optimizer=tf.optimizers.RMSprop(0.001),
-               loss='categorical_crossentropy',
-               metrics=['accuracy'])
-
- # Trains for 5 epochs
- model.fit(data, labels, batch_size=32, epochs=5)
Epoch 1/5
32/32 [==============================] - 1s 2ms/step - loss: 0.0000e+00 - accuracy: 0.5310
Epoch 2/5
32/32 [==============================] - 0s 2ms/step - loss: 0.0000e+00 - accuracy: 0.5310
Epoch 3/5
32/32 [==============================] - 0s 2ms/step - loss: 0.0000e+00 - accuracy: 0.5310
Epoch 4/5
32/32 [==============================] - 0s 2ms/step - loss: 0.0000e+00 - accuracy: 0.5310
Epoch 5/5
32/32 [==============================] - 0s 2ms/step - loss: 0.0000e+00 - accuracy: 0.5310
<keras.callbacks.History at 0x1b845c8bf10>
Saving and restoring
Use model.save to save the entire model as an HDF5 file:
model.save('my_model.h5')
Restore it with tf.keras.models.load_model:
model = keras.models.load_model('./my_model.h5')
<keras.engine.functional.Functional at 0x1b840a45630>
Note: if you use a TensorFlow optimizer (rather than a Keras one), the saved model does not contain the training configuration and must be re-compiled after loading. Keras optimizers are recommended.
- model.compile(optimizer='rmsprop',
- loss='binary_crossentropy',
- metrics=['binary_accuracy'])
- result = model.predict(data, batch_size=32)
- print(result.shape)
32/32 [==============================] - 0s 1ms/step
(1000, 1)
model.evaluate(data,labels,batch_size=32)
32/32 [==============================] - 0s 1ms/step - loss: 542.6467 - binary_accuracy: 0.5310
[542.6466674804688, 0.531000018119812]
model.evaluate(dataset, steps=30)
30/30 [==============================] - 0s 1ms/step - loss: 572.7234 - binary_accuracy: 0.5042
[572.723388671875, 0.5041666626930237]