
keras: Train & Evaluate & Predict Model



model.compile()

model.compile(
    optimizer=keras.optimizers.RMSprop(),					# optimizer='rmsprop'
    loss=keras.losses.SparseCategoricalCrossentropy(),		# loss='sparse_categorical_crossentropy'
    metrics=["accuracy"],
)
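For context, the string shortcuts in the comments are interchangeable with the class instances. A minimal, self-contained sketch; the small classifier here is only an assumed placeholder, since the article never defines the model:

from tensorflow import keras

# Assumed placeholder model (not from the article): a small MNIST-style classifier.
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])

# Equivalent compile call using the string shortcuts instead of class instances.
model.compile(
    optimizer="rmsprop",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)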

optimizer

RMSprop()	# 'rmsprop'

SGD()		# 'sgd'

Adam()		# "adam"
Common constructor arguments (e.g. for SGD()):
  • learning_rate=0.01
  • momentum=0.9
  • nesterov=True
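To use non-default hyperparameters, pass a configured optimizer instance to compile() instead of the string shortcut. A minimal sketch with the SGD arguments listed above (the loss and metrics are simply carried over from the earlier example):

model.compile(
    optimizer=keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)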

loss

MeanSquaredError()					# "mse"

CategoricalCrossentropy()			# 'categorical_crossentropy'

SparseCategoricalCrossentropy()		# "sparse_categorical_crossentropy"

KLDivergence()						# "kl_divergence"

CosineSimilarity()
  • from_logits=True: pass this to the crossentropy losses when the model outputs raw logits instead of softmax probabilities (see the sketch below).
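A minimal sketch of from_logits=True, assuming a variant of the placeholder model whose last Dense layer has no softmax, so the loss applies it internally:

# Assumed variant of the placeholder model: the final layer outputs raw logits.
logits_model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10),                      # no softmax here
])
logits_model.compile(
    optimizer="rmsprop",
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)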

metrics

"acc"	# "accuracy"

AUC()

Precision()

Recall()

MeanAbsoluteError()

MeanAbsolutePercentageError()

CategoricalAccuracy()

SparseCategoricalAccuracy()		# "sparse_categorical_accuracy"
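metrics= takes a list, so several of the above can be tracked at once, and strings can be mixed with class instances. A minimal sketch; AUC/Precision/Recall suit a binary classifier, so the tiny model here is just an assumed example:

# Assumed example only: a toy binary classifier tracking several metrics at once.
binary_model = keras.Sequential([
    keras.layers.Dense(1, activation="sigmoid", input_shape=(20,)),
])
binary_model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=[keras.metrics.AUC(), keras.metrics.Precision(), keras.metrics.Recall()],
)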

model.fit()

fit() prints the training progress and metric values by itself.

history = model.fit(
	x_train, y_train, 
	batch_size=64, epochs=2, 
	validation_split=0.2
)
'''
Epoch 1/2
750/750 [==============================] - 2s 2ms/step - loss: 0.5648 - accuracy: 0.8473 - val_loss: 0.1793 - val_accuracy: 0.9474
Epoch 2/2
750/750 [==============================] - 1s 1ms/step - loss: 0.1686 - accuracy: 0.9506 - val_loss: 0.1398 - val_accuracy: 0.9576
313/313 - 0s - loss: 0.1401 - accuracy: 0.9580
'''

The 750 in each epoch's progress bar is the number of batches, not the number of samples; training processes one batch at a time (750 batches × batch_size 64 = 48,000 training samples after the 20% validation split).
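fit() also returns a History object whose history attribute maps each metric name to a list with one value per epoch, which is handy for plotting learning curves. A minimal sketch using the run above:

# history.history is a dict such as
# {"loss": [...], "accuracy": [...], "val_loss": [...], "val_accuracy": [...]}
print(history.history.keys())
print(history.history["val_accuracy"])    # one entry per epoch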

Basics

  • With NumPy data, batch_size is passed directly to fit() (if omitted, Keras defaults to 32).
# Train the model for 1 epoch from Numpy data
batch_size = 64
history = model.fit(
	x_train, y_train, 
	batch_size=batch_size, epochs=1
)
  • With a tf.data.Dataset, don't pass batch_size to fit(): batching is defined on the dataset itself, and it has to be, because passing batch_size to fit() for a Dataset raises an error.
# Train the model for 1 epoch using a dataset
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(batch_size)	# the batch size must be set here, on the dataset
history = model.fit(dataset, epochs=1)

Advanced: Validation

  • NumPy data: use validation_split in fit() to set aside part of the training set as a validation set.
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_split=0.1)
  • tf.data.Dataset: validation_split is not supported in fit(); pass a separate validation set via validation_data instead (see the sketch after this list for building one).
model.fit(train_dataset, epochs=epochs, validation_data=val_dataset)
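The train_dataset and val_dataset above are assumed to already exist; a minimal sketch of building them from the NumPy arrays with a simple 80/20 split (the split logic is an assumption, not from the article):

import tensorflow as tf

# Assumed 80/20 split: validation_split cannot slice a Dataset, so split manually.
val_count = int(0.2 * len(x_train))
x_val, y_val = x_train[:val_count], y_train[:val_count]
x_tr, y_tr = x_train[val_count:], y_train[val_count:]

train_dataset = tf.data.Dataset.from_tensor_slices((x_tr, y_tr)).shuffle(1024).batch(64)
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val)).batch(64)

model.fit(train_dataset, epochs=2, validation_data=val_dataset)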

Advanced: Callbacks

You can also use callbacks to do things like periodically changing the learning rate of your optimizer, streaming metrics to a Slack bot, sending yourself an email notification when training is complete, etc. They only take effect when passed to fit(); the sketch after this list shows how.

  • Save the model at the end of every epoch, just like calling model.save("path_to_my_model").
path_checkpoint = "path_to_my_model_{epoch}"	
modelckpt_callback = keras.callbacks.ModelCheckpoint(
	filepath=path_checkpoint,
	save_freq='epoch'			# save at the end of every epoch
)
  • Early stopping
es_callback = keras.callbacks.EarlyStopping(
    monitor="val_loss",		# 检测值
    min_delta=0,				
    patience=5				# 如果5个epoch还没提升,那就停
)
  • Save the weights of the best model only
path_checkpoint = "model_checkpoint.h5"
modelckpt_callback = keras.callbacks.ModelCheckpoint(
    monitor="val_loss",			# 检测值
    filepath=path_checkpoint,
    verbose=1,
    save_weights_only=True,		# 只保存权重
    save_best_only=True,		# 只保存最好的
)
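Callbacks only take effect when passed to fit() through the callbacks argument. A minimal sketch combining the EarlyStopping and ModelCheckpoint objects defined above (the epoch count is arbitrary):

# Callbacks are activated by listing them in fit(); monitoring "val_loss"
# requires validation data, hence validation_split here.
history = model.fit(
    x_train, y_train,
    batch_size=64, epochs=20,
    validation_split=0.2,
    callbacks=[es_callback, modelckpt_callback],
)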

model.evaluate()

# score = model.evaluate(test_dataset)
score = model.evaluate(x_test, y_test)
print("Test loss:", score[0])
print("Test accuracy:", score[1])
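evaluate() returns the loss followed by the compiled metrics in order, so score[0] and score[1] above are loss and accuracy. In recent TensorFlow versions you can instead ask for a dict keyed by metric name; a minimal sketch:

# return_dict=True yields something like {"loss": ..., "accuracy": ...}
scores = model.evaluate(x_test, y_test, return_dict=True, verbose=0)
print("Test loss:", scores["loss"])
print("Test accuracy:", scores["accuracy"])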

model.predict()

# predictions = model.predict(x_test, batch_size=batch_size)
predictions = model.predict(x_test)
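predict() returns the raw per-sample model outputs; for the classifier assumed above, that is one probability vector per test sample. A minimal sketch of turning them into class labels:

import numpy as np

# predictions has shape (num_samples, num_classes); argmax picks the class id.
predicted_labels = np.argmax(predictions, axis=1)
print(predicted_labels[:10])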