
[Deep Learning] Getting Started with PyTorch

1. Environment Setup


1.1 Installing the PyTorch Environment

  1. Install Anaconda.

  2. Open the Anaconda prompt and run conda create -n pytorch python=3.8 to create a new Python environment.

  3. Install PyTorch: conda install pytorch torchvision torchaudio cpuonly -c pytorch

    Tsinghua mirror

    First run the following to switch channels (note the order; the fourth line must come last, for reasons the original author did not explain):

    conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
    conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/
    conda config --set show_channel_urls yes
    conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/

    Finally, run the install command:

    conda install pytorch torchvision cudatoolkit=10.0

    When installing other packages, you can use:

    pip install -i https://pypi.tuna.tsinghua.edu.cn/simple <package-name>

  4. In the Python interpreter, run the code below; if it raises no error, the installation succeeded. (Note that with the cpuonly build, torch.cuda.is_available() returns False by design; it should return True only for the CUDA install above.)

    import torch
    torch.cuda.is_available()

  5. Switch interpreter environments with conda activate <environment-name>.


1.2 Installing Jupyter

# Install
conda install nb_conda
# Launch
jupyter lab

1.3 Installing TensorBoard

pip install tensorboard

After installing, run the following to launch it:

tensorboard --logdir=<directory>

# The directory name is whatever folder your code writes logs to, e.g.
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter("logs")

# here the directory name is logs
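As a minimal sketch using only the SummaryWriter API shown above, the following writes a scalar curve into logs; afterwards, tensorboard --logdir=logs displays it:

from torch.utils.tensorboard import SummaryWriter

# write the curve y = 2x into the "logs" directory for 100 steps
writer = SummaryWriter("logs")
for step in range(100):
    writer.add_scalar("y=2x", 2 * step, step)
writer.close()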

To change the port (the default is 6006):

tensorboard --logdir=<directory> --port <custom-port>

Troubleshooting

Running tensorboard --logdir=logs raises: ValueError: Duplicate plugins for name projector

Fix: delete the tensorboard-x.x.x.dist-info folder under site-packages.
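As a hedged alternative from the command line (one common cause of this error is two installed copies of the package, so uninstalling until pip reports nothing left and then reinstalling once often has the same effect):

# show where tensorboard is installed
pip show tensorboard
# remove any duplicate copies, then reinstall a single one
pip uninstall -y tensorboard
pip uninstall -y tensorboard
pip install tensorboard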


1.4 Installing OpenCV

pip install -i https://pypi.tuna.tsinghua.edu.cn/simple opencv-contrib-python
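A quick sanity check after installation (a minimal snippet, nothing beyond the standard cv2 import):

import cv2

# prints the installed OpenCV version string
print(cv2.__version__)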

1.5 Torchvision

Download: copy the link into Xunlei (Thunder) for a fast download: https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
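If you place the downloaded cifar-10-python.tar.gz directly in the root folder you pass to torchvision, the CIFAR10 class should verify the archive and skip the download (a sketch assuming a ./dataset folder; see section 2.2 for the parameters):

import torchvision
from torchvision import transforms

# with cifar-10-python.tar.gz already inside ./dataset, torchvision
# verifies and extracts the local archive instead of downloading it
train_set = torchvision.datasets.CIFAR10(root="./dataset", train=True,
                                         transform=transforms.ToTensor(),
                                         download=True)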


2. Datasets

The order for loading data in PyTorch (a minimal sketch follows below) is:

  1. Create a Dataset object.
  2. Create a DataLoader object.
  3. Loop over the DataLoader and feed the data and labels to the model for training.

Reposted from: Pytorch中DataLoader的使用 (Using DataLoader in PyTorch)
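A minimal sketch of those three steps (TensorDataset here is just a stand-in for a real dataset):

import torch
from torch.utils.data import TensorDataset, DataLoader

# 1. create a Dataset object (10 samples with 3 features each, plus labels)
dataset = TensorDataset(torch.randn(10, 3), torch.arange(10))

# 2. create a DataLoader object
loader = DataLoader(dataset, batch_size=4, shuffle=True)

# 3. loop over the DataLoader, taking data and labels batch by batch
for data, labels in loader:
    print(data.shape, labels)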


2.1 DataLoader

DataLoader is the data loader. It combines a dataset and a sampler, and provides an iterable over the given dataset.

DataLoader supports both map-style and iterable-style datasets, with single- or multi-process loading, custom loading order, and optional automatic batching (collation) and memory pinning.

In short, it loads a dataset and gives you something to iterate over.

From the official documentation:

class torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=False, sampler=None, batch_sampler=None, num_workers=0, collate_fn=None, pin_memory=False, drop_last=False, timeout=0, worker_init_fn=None, multiprocessing_context=None, generator=None, *, prefetch_factor=2, persistent_workers=False) [source]

Data loader. Combines a dataset and a sampler, and provides an iterable over the given dataset.

The DataLoader supports both map-style and iterable-style datasets with single- or multi-process loading, customizing loading order and optional automatic batching (collation) and memory pinning.

See torch.utils.data documentation page for more details.

Parameters
  • dataset (Dataset) – dataset from which to load the data.

  • batch_size (int, optional) – how many samples per batch to load (default: 1).

  • shuffle (bool, optional) – set to True to have the data reshuffled at every epoch (default: False).

  • sampler (Sampler or Iterable, optional) – defines the strategy to draw samples from the dataset. Can be any Iterable with __len__ implemented. If specified, shuffle must not be specified.

  • batch_sampler (Sampler or Iterable, optional) – like sampler, but returns a batch of indices at a time. Mutually exclusive with batch_size, shuffle, sampler, and drop_last.

  • num_workers (int, optional) – how many subprocesses to use for data loading. 0 means that the data will be loaded in the main process. (default: 0)

  • collate_fn (callable, optional) – merges a list of samples to form a mini-batch of Tensor(s). Used when using batched loading from a map-style dataset. (A sketch follows after this parameter list.)

  • pin_memory (bool, optional) – If True, the data loader will copy Tensors into CUDA pinned memory before returning them. If your data elements are a custom type, or your collate_fn returns a batch that is a custom type, see the memory-pinning example in the official torch.utils.data documentation.

  • drop_last (bool, optional) – set to True to drop the last incomplete batch, if the dataset size is not divisible by the batch size. If False and the size of dataset is not divisible by the batch size, then the last batch will be smaller. (default: False)

  • timeout (numeric, optional) – if positive, the timeout value for collecting a batch from workers. Should always be non-negative. (default: 0)

  • worker_init_fn (callable, optional) – If not None, this will be called on each worker subprocess with the worker id (an int in [0, num_workers - 1]) as input, after seeding and before data loading. (default: None)

  • generator (torch.Generator, optional) – If not None, this RNG will be used by RandomSampler to generate random indexes and multiprocessing to generate base_seed for workers. (default: None)

  • prefetch_factor (int, optional, keyword-only arg) – Number of samples loaded in advance by each worker. 2 means there will be a total of 2 * num_workers samples prefetched across all workers. (default: 2)

  • persistent_workers (bool, optional) – If True, the data loader will not shut down the worker processes after a dataset has been consumed once. This keeps the workers' Dataset instances alive. (default: False)
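Of these, collate_fn is worth a concrete example. A minimal sketch (the toy dataset and my_collate below are made up for illustration) that merges a list of (sample, label) pairs into one batch, mimicking the default collation:

import torch
from torch.utils.data import DataLoader

# toy map-style dataset: a plain list of (sample, label) pairs
data = [(torch.randn(3), i % 2) for i in range(8)]

# custom collate_fn: receives the list of samples drawn for one batch
# and merges them into a single mini-batch
def my_collate(batch):
    samples = torch.stack([s for s, _ in batch])
    labels = torch.tensor([lbl for _, lbl in batch])
    return samples, labels

loader = DataLoader(data, batch_size=4, collate_fn=my_collate)
for samples, labels in loader:
    print(samples.shape, labels)  # torch.Size([4, 3]) tensor([...])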


2.2 Dataset

A dataset holds a set of samples and their corresponding labels.

You can use your own dataset or an officially packaged one such as CIFAR10 or CIFAR100.

class torch.utils.data.Dataset(*args, **kwds) [source]

An abstract class representing a Dataset.

All datasets that represent a map from keys to data samples should subclass it. All subclasses should overwrite __getitem__(), supporting fetching a data sample for a given key. Subclasses could also optionally overwrite __len__(), which is expected to return the size of the dataset by many Sampler implementations and the default options of DataLoader.

Note

DataLoader by default constructs an index sampler that yields integral indices. To make it work with a map-style dataset with non-integral indices/keys, a custom sampler must be provided.
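To illustrate that note, a hedged sketch (NamedDataset and KeySampler are hypothetical names invented here) of a map-style dataset with string keys, plus the custom sampler it requires:

from torch.utils.data import Dataset, DataLoader, Sampler

# hypothetical map-style dataset whose keys are strings, not integers
class NamedDataset(Dataset):
    def __init__(self):
        self.data = {"ant": 0, "bee": 1}

    def __getitem__(self, key):
        return key, self.data[key]

    def __len__(self):
        return len(self.data)

# custom sampler that yields the string keys instead of integral indices
class KeySampler(Sampler):
    def __init__(self, keys):
        self.keys = list(keys)

    def __iter__(self):
        return iter(self.keys)

    def __len__(self):
        return len(self.keys)

ds = NamedDataset()
loader = DataLoader(ds, sampler=KeySampler(ds.data), batch_size=1)
for names, labels in loader:
    print(names, labels)  # e.g. ['ant'] tensor([0])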


Dataset Demo

import os

import torchvision
from torch.utils.data import Dataset
from torchvision import transforms
from PIL import Image


# Define a custom class that inherits from Dataset
class MyDataset(Dataset):

    def __init__(self, root_dir, label_dir):
        # store the constructor arguments on the instance
        self.root_dir = root_dir
        self.label_dir = label_dir
        # join the two paths into a full path; we don't simply concatenate
        # with + because the path separator differs across operating systems
        # (Windows uses \, POSIX systems use /)
        self.path = os.path.join(root_dir, label_dir)
        # list every file under that path
        self.img_path = os.listdir(self.path)

    # required override
    def __getitem__(self, index):
        # look up the image file name in the list by index
        img_name = self.img_path[index]
        # join the directory path and file name into the full image path
        img_item_path = os.path.join(self.path, img_name)
        # load the image
        img = Image.open(img_item_path)
        label = self.label_dir
        return img, label

    # return the size of the image list, i.e. the length of the dataset
    def __len__(self):
        return len(self.img_path)


# root path
root_dir = r"E:\机器学习\深度学习\PyTorch\code\study\day01\dataset\train"
# path of each label
ants_label_dir = r"ants"

# instantiate the custom dataset
ants_dataset = MyDataset(root_dir, ants_label_dir)
# fetch the first image
ants_img, ants_label = ants_dataset[0]
print(len(ants_dataset))
ants_img.show()

print("-----------------------------------------------")
# fetch the bees
bees_label_dir = r"bees"
bees_dataset = MyDataset(root_dir, bees_label_dir)
bees_img, bees_label = bees_dataset[0]
print(len(bees_dataset))
bees_img.show()

print("-----------------------------------------------")

# concatenate the two datasets
total_dataset = ants_dataset + bees_dataset
print(len(total_dataset))


# Using the CIFAR10 dataset
# root: where to store the dataset
# train: whether this is the training split
# transform: what type to convert the data to
# download: whether to download automatically when the dataset is missing
PIL_to_tensor = transforms.ToTensor()
train_set = torchvision.datasets.CIFAR10(root="./dataset", train=True, transform=PIL_to_tensor, download=True)

DataLoader Demo


# DataLoader test
import torchvision.datasets
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
from torchvision import transforms

test_set = torchvision.datasets.CIFAR10("./dataset", train=False, transform=transforms.ToTensor())
# dataset: which dataset to load from
# batch_size: how many samples to load at a time
# shuffle: whether to reshuffle the data on each pass
# num_workers: number of subprocesses; 0 means load in the main process only
# drop_last: whether to drop the final batch when it has fewer than batch_size samples
test_loader = DataLoader(dataset=test_set, batch_size=64, shuffle=True, num_workers=0, drop_last=False)

# first image and target in the test set
img, target = test_set[0]
print(img)
print(target)

writer = SummaryWriter("dataloader")
step = 0
for data in test_loader:
    imgs, targets = data
    writer.add_images("test_data", imgs, step)
    step += 1

writer.close()
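After the script runs, the logged batches can be viewed with the command from section 1.3, pointed at the directory the writer above creates: tensorboard --logdir=dataloader.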

3. TensorBoard

See: PyTorch Study Notes 9: Using TensorBoard in PyTorch via torch.utils.tensorboard
