Keras Applications are deep learning models that come with pre-trained weights. These models can be used for prediction, feature extraction, and fine-tuning.
Weights are downloaded automatically when a model is instantiated and are stored at ~/.keras/models/.
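For example (a minimal sketch; VGG16 here is simply one of the models listed below), the first instantiation with weights='imagenet' downloads the weight file into that cache directory, and later instantiations reuse it:

import os
from keras.applications.vgg16 import VGG16

# the first call downloads the ImageNet weights; subsequent calls reuse the cache
model = VGG16(weights='imagenet')

# the downloaded weight files live in the local Keras cache directory
print(os.listdir(os.path.expanduser('~/.keras/models')))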
Available models
Image classification models with weights pre-trained on ImageNet:
- Xception
- VGG16
- VGG19
- ResNet50
- InceptionV3
The Xception model is only available for the TensorFlow backend, because it relies on SeparableConvolution layers. All other models work with both the TensorFlow and Theano backends.
Usage examples for image classification models
Classify images with ResNet50
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
import numpy as np

model = ResNet50(weights='imagenet')

# load the image, resize it to the network's expected input size,
# turn it into a batch of one sample, and apply ResNet50's preprocessing
img_path = 'elephant.jpg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

preds = model.predict(x)
# decode the results into a list of tuples (class, description, probability)
# (one such list for each sample in the batch)
print('Predicted:', decode_predictions(preds, top=3)[0])
# Predicted: [(u'n02504013', u'Indian_elephant', 0.82658225), (u'n01871265', u'tusker', 0.1122357), (u'n02504458', u'African_elephant', 0.061040461)]
Extract features with VGG16
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
import numpy as np

model = VGG16(weights='imagenet', include_top=False)

img_path = 'elephant.jpg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

features = model.predict(x)
Extract features from an arbitrary intermediate layer of VGG19
from keras.applications.vgg19 import VGG19
from keras.preprocessing import image
from keras.applications.vgg19 import preprocess_input
from keras.models import Model
import numpy as np

base_model = VGG19(weights='imagenet')
model = Model(inputs=base_model.input, outputs=base_model.get_layer('block4_pool').output)

img_path = 'elephant.jpg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

block4_pool_features = model.predict(x)
Fine-tune InceptionV3 on a new set of classes
from keras.applications.inception_v3 import InceptionV3
from keras.preprocessing import image
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D
from keras import backend as K

# create the base pre-trained model
base_model = InceptionV3(weights='imagenet', include_top=False)

# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# let's add a fully-connected layer
x = Dense(1024, activation='relu')(x)
# and a logistic layer -- let's say we have 200 classes
predictions = Dense(200, activation='softmax')(x)

# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)

# first: train only the top layers (which were randomly initialized)
# i.e. freeze all convolutional InceptionV3 layers
for layer in base_model.layers:
    layer.trainable = False

# compile the model (should be done *after* setting layers to non-trainable)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')

# train the model on the new data for a few epochs
model.fit_generator(...)

# at this point, the top layers are well trained and we can start fine-tuning
# convolutional layers from inception V3. We will freeze the bottom N layers
# and train the remaining top layers.

# let's visualize layer names and layer indices to see how many layers
# we should freeze:
for i, layer in enumerate(base_model.layers):
    print(i, layer.name)

# we chose to train the top 2 inception blocks, i.e. we will freeze
# the first 172 layers and unfreeze the rest:
for layer in model.layers[:172]:
    layer.trainable = False
for layer in model.layers[172:]:
    layer.trainable = True

# we need to recompile the model for these modifications to take effect
# we use SGD with a low learning rate
from keras.optimizers import SGD
model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy')

# we train our model again (this time fine-tuning the top 2 inception blocks
# alongside the top Dense layers)
model.fit_generator(...)
Build InceptionV3 over a custom input tensor
from keras.applications.inception_v3 import InceptionV3
from keras.layers import Input

# this could also be the output of a different Keras model or layer
input_tensor = Input(shape=(224, 224, 3))  # this assumes K.image_data_format() == 'channels_last'

model = InceptionV3(input_tensor=input_tensor, weights='imagenet', include_top=True)
Model documentation
Xception
keras.applications.xception.Xception(include_top=True, weights='imagenet', input_tensor=None, input_shape=None)
On ImageNet, this model has a top-1 validation accuracy of 0.790 and a top-5 validation accuracy of 0.945.
Note that this model only supports the TensorFlow backend, since it relies on SeparableConvolution layers. In addition, it only supports the "channels_last" data format (height, width, channels).
The default input size for this model is 299x299.
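Since Xception requires the TensorFlow backend and the "channels_last" data format, a quick sanity check before building it can look like this (a minimal sketch that simply restates the constraints above):

from keras import backend as K
from keras.applications.xception import Xception

# Xception only works with channels_last data: (height, width, channels)
assert K.image_data_format() == 'channels_last'

# default input size is 299x299
model = Xception(weights='imagenet', include_top=True)
print(model.input_shape)  # (None, 299, 299, 3)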
VGG16
keras.applications.vgg16.VGG16(include_top=True, weights='imagenet', input_tensor=None, input_shape=None)
The default input size for this model is 224x224.
VGG19
keras.applications.vgg19.VGG19(include_top=True, weights='imagenet', input_tensor=None, input_shape=None)
The default input size for this model is 224x224.
ResNet50
keras.applications.resnet50.ResNet50(include_top=True, weights='imagenet', input_tensor=None, input_shape=None)
The default input size for this model is 224x224.
InceptionV3
keras.applications.inception_v3.InceptionV3(include_top=True, weights='imagenet', input_tensor=None, input_shape=None)
The default input size for this model is 299x299.
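The include_top and input_shape arguments shared by all of these constructors can be combined to reuse only the convolutional base at a non-default resolution. A minimal sketch (the 256x256 size is an arbitrary example, not taken from the documentation above):

from keras.applications.vgg16 import VGG16

# without the fully-connected classifier, the convolutional base accepts
# input sizes other than the 224x224 default
model = VGG16(weights='imagenet', include_top=False, input_shape=(256, 256, 3))
print(model.output_shape)  # (None, 8, 8, 512) for a 256x256 input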