
Two Ways to Load Image Datasets in TensorFlow 2.0

Introduction

In TensorFlow 2.0 there are generally two ways to load an image dataset. The first uses the ImageDataGenerator class in tf.keras, which is convenient for image classification but not very flexible; the second combines tf.data.Dataset with the image-processing functions in tf.image, which is more flexible. Both are introduced below.
We use the hotdog dataset, downloadable from here.
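If you are starting from the downloaded archive, a minimal sketch for fetching and unpacking it with tf.keras.utils.get_file might look like the following. The URL is only a placeholder for the real download link, and the extracted path may need adjusting to match the ../data/hotdog layout used below.

import tensorflow as tf

# Placeholder URL: substitute the actual download link for the hotdog archive.
archive_path = tf.keras.utils.get_file(
    fname="hotdog.zip",
    origin="https://example.com/hotdog.zip",
    extract=True,          # unpack the archive next to the cached copy
    cache_dir="../data")
# Adjust the train/test paths below to wherever the archive actually extracts.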

Using the ImageDataGenerator from tf.keras

The ImageDataGenerator from tf.keras requires the images to be arranged on disk as shown below. The hotdog folder contains a train folder for training and a test folder for evaluation; each of these holds two class folders, hotdog and not-hotdog, and each class folder contains the image files.
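Reconstructed from the description above, the expected layout is:

hotdog/
├── train/
│   ├── hotdog/
│   └── not-hotdog/
└── test/
    ├── hotdog/
    └── not-hotdog/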
First, create two tf.keras.preprocessing.image.ImageDataGenerator instances to read all image files in the training and test sets, respectively. The rescale argument normalizes pixel values into the (0, 1) range; there are also other arguments available for data augmentation.
In flow_from_directory, the directory argument is the path that contains the class folders, and target_size resizes each image to 224×224.

import tensorflow as tf
import numpy as np
import pathlib

train_dir = "../data/hotdog/train"
test_dir = "../data/hotdog/test"

# Count the images and collect the class (sub-folder) names
train_dir = pathlib.Path(train_dir)
train_count = len(list(train_dir.glob('*/*.png')))
test_dir = pathlib.Path(test_dir)
test_count = len(list(test_dir.glob('*/*.png')))
CLASS_NAMES = np.array([item.name for item in train_dir.glob('*') if item.is_dir()])

# rescale normalizes pixel values into the (0, 1) range
image_generator = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1. / 255)
BATCH_SIZE = 32
IMG_HEIGHT = 224
IMG_WIDTH = 224
train_data_gen = image_generator.flow_from_directory(directory=str(train_dir),
                                                     batch_size=BATCH_SIZE,
                                                     target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                     shuffle=True,
                                                     classes=list(CLASS_NAMES))
test_data_gen = image_generator.flow_from_directory(directory=str(test_dir),
                                                    batch_size=BATCH_SIZE,
                                                    target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                    shuffle=True,
                                                    classes=list(CLASS_NAMES))
Found 2000 images belonging to 2 classes.
Found 800 images belonging to 2 classes.
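The generator above only rescales pixel values. ImageDataGenerator also accepts a number of augmentation arguments; a sketch with illustrative values (the rest of this post sticks to rescale-only preprocessing):

# Illustrative only: augmentation arguments supported by ImageDataGenerator.
augmented_generator = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1. / 255,
    rotation_range=15,        # random rotations up to 15 degrees
    width_shift_range=0.1,    # random horizontal shifts (fraction of width)
    height_shift_range=0.1,   # random vertical shifts (fraction of height)
    horizontal_flip=True,     # random left-right flips
    zoom_range=0.1)           # random zoom in/out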

Pick 9 images at random and visualize them:

import matplotlib.pyplot as plt
def show_batch(image_batch, label_batch):
    plt.figure(figsize=(8, 8))
    for n in range(9):
        plt.subplot(3, 3, n + 1)
        plt.imshow(image_batch[n])
        plt.title(CLASS_NAMES[label_batch[n] == 1][0].title())
        plt.axis('off')
    plt.show()
image_batch, label_batch = next(train_data_gen)
show_batch(image_batch, label_batch)

Then use the pretrained ResNet50V2 model for transfer learning: the backbone is frozen (trainable=False) and only the new classification head is trained:

ResNet50 = tf.keras.applications.resnet_v2.ResNet50V2(include_top=False, weights='imagenet', input_shape=(224, 224, 3))
ResNet50.trainable=False
net = tf.keras.models.Sequential()
net.add(ResNet50)
net.add(tf.keras.layers.GlobalAveragePooling2D())
net.add(tf.keras.layers.Dense(2, activation='softmax'))
net.summary()
net.compile(optimizer=tf.keras.optimizers.Adam(),
            loss='categorical_crossentropy',
            metrics=['accuracy'])

Train with the fit_generator method built into tf.keras:

epoch_steps = train_count // BATCH_SIZE
val_steps = test_count // BATCH_SIZE
net.fit_generator(
    train_data_gen,
    steps_per_epoch=epoch_steps,
    epochs=5,
    validation_data=test_data_gen,
    validation_steps=val_steps
)
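Note that in newer TensorFlow 2.x releases fit_generator is deprecated and Model.fit accepts the generator directly with the same arguments; a sketch of the equivalent call:

# Equivalent call with Model.fit, which accepts Keras data generators directly
# in newer TF 2.x releases.
net.fit(train_data_gen,
        steps_per_epoch=epoch_steps,
        epochs=5,
        validation_data=test_data_gen,
        validation_steps=val_steps)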

Training output:

Epoch 1/5
62/62 [==============================] - 31s 505ms/step - loss: 0.3595 - accuracy: 0.8415 - val_loss: 0.2036 - val_accuracy: 0.9212
Epoch 2/5
62/62 [==============================] - 24s 391ms/step - loss: 0.2185 - accuracy: 0.9187 - val_loss: 0.1722 - val_accuracy: 0.9325
Epoch 3/5
62/62 [==============================] - 24s 390ms/step - loss: 0.1748 - accuracy: 0.9339 - val_loss: 0.1339 - val_accuracy: 0.9450
Epoch 4/5
62/62 [==============================] - 24s 394ms/step - loss: 0.1634 - accuracy: 0.9334 - val_loss: 0.1269 - val_accuracy: 0.9500
Epoch 5/5
62/62 [==============================] - 24s 391ms/step - loss: 0.1434 - accuracy: 0.9477 - val_loss: 0.1218 - val_accuracy: 0.9488

Full code

import tensorflow as tf
import numpy as np
import pathlib

for gpu in tf.config.experimental.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)

train_dir = "../data/hotdog/train"
test_dir = "../data/hotdog/test"

train_dir = pathlib.Path(train_dir)
train_count = len(list(train_dir.glob('*/*.png')))
test_dir = pathlib.Path(test_dir)
test_count = len(list(test_dir.glob('*/*.png')))
CLASS_NAMES = np.array([item.name for item in train_dir.glob('*') if item.is_dir()])

image_generator = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1. / 255)
BATCH_SIZE = 32
IMG_HEIGHT = 224
IMG_WIDTH = 224
train_data_gen = image_generator.flow_from_directory(directory=str(train_dir),
                                                     batch_size=BATCH_SIZE,
                                                     target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                     shuffle=True,
                                                     classes=list(CLASS_NAMES))
test_data_gen = image_generator.flow_from_directory(directory=str(test_dir),
                                                    batch_size=BATCH_SIZE,
                                                    target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                    shuffle=True,
                                                    classes=list(CLASS_NAMES))

# Transfer learning with a pretrained ResNet50V2 (backbone frozen)
ResNet50 = tf.keras.applications.resnet_v2.ResNet50V2(weights='imagenet', input_shape=(224, 224, 3), include_top=False)
ResNet50.trainable=False
net = tf.keras.models.Sequential()
net.add(ResNet50)
net.add(tf.keras.layers.GlobalAveragePooling2D())
net.add(tf.keras.layers.Dense(2, activation='softmax'))
net.summary()

net.compile(optimizer=tf.keras.optimizers.Adam(),
            loss='categorical_crossentropy',
            metrics=['accuracy'])

epoch_steps = train_count // BATCH_SIZE
val_steps = test_count // BATCH_SIZE
history = net.fit_generator(
    train_data_gen,
    steps_per_epoch=epoch_steps,
    epochs=5,
    validation_data=test_data_gen,
    validation_steps=val_steps
)

Using tf.data.Dataset

Define a function that loads the image dataset:

AUTOTUNE = tf.data.experimental.AUTOTUNE

def load_data(path, batch_size, epochs):
    data_root = pathlib.Path(path)
    all_image_paths = list(data_root.glob('*/*'))
    all_image_paths = [str(path) for path in all_image_paths]
    label_names = sorted(item.name for item in data_root.glob('*/') if item.is_dir())
    label_to_index = dict((label, index) for index, label in enumerate(label_names))
    all_image_labels = [label_to_index[pathlib.Path(path).parent.name] for path in all_image_paths]

    # load_and_preprocess_image is defined in the next code block
    image_ds = tf.data.Dataset.from_tensor_slices(all_image_paths) \
        .map(load_and_preprocess_image, num_parallel_calls=AUTOTUNE)
    label_ds = tf.data.Dataset.from_tensor_slices(all_image_labels)
    image_label_ds = tf.data.Dataset.zip((image_ds, label_ds))

    image_count = len(all_image_paths)
    dataset = image_label_ds.shuffle(buffer_size=image_count) \
        .batch(batch_size=batch_size) \
        .repeat(epochs) \
        .prefetch(buffer_size=AUTOTUNE)
    return dataset, image_count

train_path = "../data/hotdog/train"
test_path = "../data/hotdog/test"
BATCH_SIZE = 32
EPOCHS = 5
ds_train, train_count = load_data(train_path, BATCH_SIZE, EPOCHS)
ds_test, test_count = load_data(test_path, BATCH_SIZE, EPOCHS)

In this function, we first collect the paths of all image files in all_image_paths and, for each image, its label in all_image_labels. Then tf.data.Dataset.from_tensor_slices turns the lists all_image_paths and all_image_labels into datasets of tensors; for the images, the map method applies a preprocessing function, load_and_preprocess_image:

def load_and_preprocess_image(path, size=(224, 224)):
    image = tf.io.read_file(path)
    image = tf.io.decode_jpeg(image)
    image = tf.image.resize(image, size) / 255.
    return image

This function resizes each image to 224×224 and normalizes pixel values into the (0, 1) range.
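The original function uses tf.io.decode_jpeg; if the folder mixes JPEG and PNG files, a more general sketch (an alternative, not what the rest of this post uses) relies on tf.io.decode_image with expand_animations=False, so the result keeps a static 3-D shape that tf.image.resize accepts:

def load_and_preprocess_image_any(path, size=(224, 224)):
    image = tf.io.read_file(path)
    # decode_image detects JPEG/PNG/BMP; expand_animations=False keeps a 3-D tensor
    image = tf.io.decode_image(image, channels=3, expand_animations=False)
    image = tf.image.resize(image, size) / 255.
    return image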
Next, the zip method pairs the image dataset with the label dataset into (image, label) tuples. The dataset is then built with shuffle, batch, repeat, and prefetch in turn: shuffle randomizes the sample order, batch groups the samples into batches, repeat duplicates the dataset epochs times, and parallel preprocessing (num_parallel_calls) together with prefetching (prefetch) improve input-pipeline performance.
Pick 9 images at random and visualize them:

from matplotlib import pyplot as plt
plt.figure(figsize=(8,8))
for i,(img,label) in enumerate(ds_train.unbatch().take(9)):
    ax=plt.subplot(3,3,i+1)
    ax.imshow(img.numpy())
    ax.set_title("label = %d"%label)
    ax.set_xticks([])
    ax.set_yticks([])
plt.show()

Then, as before, use the pretrained ResNet50V2 with a frozen backbone and a new classification head:

ResNet50 = tf.keras.applications.resnet_v2.ResNet50V2(weights='imagenet', input_shape=(224, 224, 3), include_top=False)
ResNet50.trainable = False
model = tf.keras.models.Sequential([
    ResNet50,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation='softmax')
])
model.summary()
# labels here are integer class indices, hence sparse_categorical_crossentropy
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

Train with the fit method built into tf.keras:

epoch_steps = train_count // BATCH_SIZE
val_steps = test_count // BATCH_SIZE
model.fit(ds_train, epochs=EPOCHS, steps_per_epoch=epoch_steps,
          validation_data=ds_test, validation_steps=val_steps)

Note that steps_per_epoch and validation_steps are passed here; they specify how many batches each epoch iterates over. With these arguments Keras keeps drawing from the same iterator across epochs, so each epoch consumes a fresh slice of the training and validation data; that is why the datasets were built with repeat(epochs), providing enough copies to cover all epochs.
If steps_per_epoch and validation_steps are not passed, the earlier repeat() is unnecessary: Keras runs through the finite dataset once per epoch and determines the number of steps automatically during the first epoch.
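For reference, a sketch of that alternative, assuming the same image_label_ds and image_count built inside load_data:

# Without repeat(), each epoch consumes the finite dataset exactly once and
# Keras resets it automatically, so no steps arguments are needed.
dataset = image_label_ds.shuffle(buffer_size=image_count) \
    .batch(batch_size=BATCH_SIZE) \
    .prefetch(buffer_size=AUTOTUNE)

# model.fit(ds_train, epochs=EPOCHS, validation_data=ds_test)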
Training output:

Train for 62 steps, validate for 25 steps
Epoch 1/5
62/62 [==============================] - 16s 255ms/step - loss: 0.3609 - accuracy: 0.8271 - val_loss: 0.1352 - val_accuracy: 0.9450
Epoch 2/5
62/62 [==============================] - 10s 163ms/step - loss: 0.2186 - accuracy: 0.9116 - val_loss: 0.1122 - val_accuracy: 0.9488
Epoch 3/5
62/62 [==============================] - 10s 169ms/step - loss: 0.1927 - accuracy: 0.9278 - val_loss: 0.1075 - val_accuracy: 0.9563
Epoch 4/5
62/62 [==============================] - 9s 138ms/step - loss: 0.1683 - accuracy: 0.9355 - val_loss: 0.1096 - val_accuracy: 0.9563
Epoch 5/5
62/62 [==============================] - 10s 160ms/step - loss: 0.1443 - accuracy: 0.9497 - val_loss: 0.0943 - val_accuracy: 0.9638

Full code

import tensorflow as tf
import pathlib

for gpu in tf.config.experimental.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)

AUTOTUNE = tf.data.experimental.AUTOTUNE

def load_and_preprocess_image(path, size=(224, 224)):
    image = tf.io.read_file(path)
    image = tf.io.decode_jpeg(image)
    image = tf.image.resize(image, size) / 255.
    return image

def load_data(path, batch_size, epochs):
    data_root = pathlib.Path(path)
    all_image_paths = list(data_root.glob('*/*'))
    all_image_paths = [str(path) for path in all_image_paths]
    label_names = sorted(item.name for item in data_root.glob('*/') if item.is_dir())
    label_to_index = dict((label, index) for index, label in enumerate(label_names))
    all_image_labels = [label_to_index[pathlib.Path(path).parent.name] for path in all_image_paths]

    image_ds = tf.data.Dataset.from_tensor_slices(all_image_paths) \
        .map(load_and_preprocess_image, num_parallel_calls=AUTOTUNE)
    label_ds = tf.data.Dataset.from_tensor_slices(all_image_labels)
    image_label_ds = tf.data.Dataset.zip((image_ds, label_ds))

    image_count = len(all_image_paths)
    dataset = image_label_ds.shuffle(buffer_size=image_count) \
        .batch(batch_size=batch_size) \
        .repeat(epochs) \
        .prefetch(buffer_size=AUTOTUNE)
    return dataset, image_count

train_path = "../data/hotdog/train"
test_path = "../data/hotdog/test"
BATCH_SIZE = 32
EPOCHS = 5
ds_train, train_count = load_data(train_path, BATCH_SIZE, EPOCHS)
ds_test, test_count = load_data(test_path, BATCH_SIZE, EPOCHS)

# Transfer learning with a pretrained ResNet50V2 (backbone frozen)
ResNet50 = tf.keras.applications.resnet_v2.ResNet50V2(weights='imagenet', input_shape=(224, 224, 3), include_top=False)
ResNet50.trainable = False
model = tf.keras.models.Sequential([
    ResNet50,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation='softmax')
])
model.summary()

model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

epoch_steps = train_count // BATCH_SIZE
val_steps = test_count // BATCH_SIZE
model.fit(ds_train, epochs=EPOCHS, steps_per_epoch=epoch_steps,
          validation_data=ds_test, validation_steps=val_steps)

To sum up, the tf.data.Dataset methods are used in this order: from_tensor_slices -> map -> shuffle -> batch -> repeat -> prefetch.
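As a compact sketch of that ordering (a slightly different but equivalent construction to load_data above, reusing all_image_paths, all_image_labels, and load_and_preprocess_image):

# Compact recap of the recommended ordering; names reused from load_data above.
ds = tf.data.Dataset.from_tensor_slices((all_image_paths, all_image_labels)) \
    .map(lambda p, y: (load_and_preprocess_image(p), y), num_parallel_calls=AUTOTUNE) \
    .shuffle(buffer_size=len(all_image_paths)) \
    .batch(BATCH_SIZE) \
    .repeat(EPOCHS) \
    .prefetch(AUTOTUNE)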
