
[PyTorch Installation Notes] Installing PyTorch with Anaconda: A Detailed Tutorial


1. Check the conda and CUDA versions

Open a cmd window and run:

nvidia-smi

nvidia-smi shows the GPU driver version and the highest CUDA version the driver supports; conda --version prints the version number of the currently installed Anaconda or Miniconda.

nvcc -V

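The three checks above can also be run in one go. Below is a minimal Python sketch (an optional convenience, assuming it is run from the same shell in which conda and the CUDA tools are expected to be on PATH) that runs each command and prints the first line of its output:

import subprocess

# Run each version-check command and print the first line of its output,
# or a failure note if the command is not available.
for cmd in ("conda --version", "nvidia-smi", "nvcc -V"):
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    output = (result.stdout or result.stderr).strip()
    first_line = output.splitlines()[0] if output else "(no output)"
    status = "OK" if result.returncode == 0 else "MISSING/FAILED"
    print(f"{cmd:15} -> {status}: {first_line}")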

Test the installation. In day-to-day use, a system or driver upgrade can leave nvcc -V broken. Check which NVIDIA components are installed; if the CUDA toolkit entries are missing, reinstall CUDA as described below.


To reinstall CUDA, follow these steps:
(1) Download the installer from the NVIDIA website.

On the download page, choose version 11.8.
(2) Run the downloaded installer; it extracts itself and then installs.
(3) After the installation finishes, add the CUDA environment variables.
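To confirm that the variables took effect, their values can be checked from Python. This is only a sketch: it assumes the default install location and the variable names (CUDA_PATH, CUDA_PATH_V11_8) that the CUDA 11.8 installer normally sets on Windows; adjust them if your setup differs.

import os

# Print the CUDA-related environment variables and check whether the
# CUDA 11.8 bin directory is on PATH (default install location assumed).
for var in ("CUDA_PATH", "CUDA_PATH_V11_8"):
    print(f"{var} = {os.environ.get(var, '(not set)')}")

expected_bin = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin"
entries = [p.strip().lower() for p in os.environ.get("PATH", "").split(";")]
print("CUDA bin directory on PATH:", expected_bin.lower() in entries)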

(4) cuDNN: download cuDNN from the NVIDIA site and copy its files into the CUDA v11.8 installation directory. Then verify the CUDA installation with the bundled demo_suite tools:

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\extras\demo_suite> bandwidthTest.exe
[CUDA Bandwidth Test] - Starting...
Running on...

 Device 0: NVIDIA GeForce RTX 3060
 Quick Mode

 Host to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)        Bandwidth(MB/s)
   33554432                     23579.9

 Device to Host Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)        Bandwidth(MB/s)
   33554432                     24883.5

 Device to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)        Bandwidth(MB/s)
   33554432                     306609.8

Result = PASS

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\extras\demo_suite> deviceQuery.exe
deviceQuery.exe Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "NVIDIA GeForce RTX 3060"
  CUDA Driver Version / Runtime Version          11.8 / 11.8
  CUDA Capability Major/Minor version number:    8.6
  Total amount of global memory:                 12288 MBytes (12884377600 bytes)
  (28) Multiprocessors, (128) CUDA Cores/MP:     3584 CUDA Cores
  GPU Max Clock rate:                            1852 MHz (1.85 GHz)
  Memory Clock rate:                             7501 Mhz
  Memory Bus Width:                              192-bit
  L2 Cache Size:                                 2359296 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               zu bytes
  Total amount of shared memory per block:       zu bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1536
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          zu bytes
  Texture alignment:                             zu bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  CUDA Device Driver Mode (TCC or WDDM):         WDDM (Windows Display Driver Model)
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      No
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.8, CUDA Runtime Version = 11.8, NumDevs = 1, Device0 = NVIDIA GeForce RTX 3060
Result = PASS

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\extras\demo_suite>

Check and confirm that both tests finish with Result = PASS.
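The demo_suite tools above exercise CUDA itself rather than cuDNN. As a sanity check that the cuDNN files were actually copied into the toolkit directory, the following sketch looks for the cuDNN DLLs; the path and the cudnn*.dll naming are assumptions based on a default CUDA 11.8 install with cuDNN 8.x, so adjust them to your setup.

from pathlib import Path

# Look for cuDNN DLLs (e.g. cudnn64_8.dll for cuDNN 8.x) in the CUDA 11.8 bin folder.
cuda_bin = Path(r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin")
if not cuda_bin.exists():
    print("CUDA bin directory not found:", cuda_bin)
else:
    dlls = sorted(p.name for p in cuda_bin.glob("cudnn*.dll"))
    if dlls:
        print("cuDNN DLLs found:", ", ".join(dlls))
    else:
        print("No cuDNN DLLs found - copy the cuDNN bin/include/lib files into the CUDA directory.")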

2. Create and activate an environment, and decide which Python version to install
First list the existing environments. Open the Anaconda Prompt and run:

conda env list

or

conda info --envs

In the Anaconda Prompt, run conda create -n torch2 python=3.9, where torch2 is the name of the virtual environment.
Then run conda activate torch2 to activate it.
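After activation, it is worth confirming that the interpreter really comes from the new environment, for example with a couple of lines inside python:

import sys

# Run inside the activated environment: the path should point into the
# torch2 environment and the version should be the requested 3.9.x.
print("Interpreter:", sys.executable)
print("Python version:", sys.version.split()[0])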
3. Install PyTorch
Run nvidia-smi in the command line once more to check the CUDA version after the update, then install the matching PyTorch build.
Official site: https://pytorch.org/

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

or

conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
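After either command finishes, a quick way to see which build was actually installed, without importing torch yet, is to read the package metadata. Note that the +cu118 suffix appears for the pip wheels from the cu118 index; conda packages record their CUDA variant in the build string instead.

from importlib.metadata import version, PackageNotFoundError

# Print the installed versions of the three packages; with the pip cu118
# wheels, torch reports something like "2.0.1+cu118".
for pkg in ("torch", "torchvision", "torchaudio"):
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")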

4. Verify the PyTorch installation
Check from the command line:

(torch2) C:\Windows\System32>python
Python 3.9.16 | packaged by conda-forge | (main, Feb  1 2023, 21:28:38) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
True
>>>

Check with a short script:

import torch
print(torch.__version__)              # print the PyTorch version
print(torch.version.cuda)             # print the CUDA version PyTorch was built with
print(torch.backends.cudnn.version()) # print the cuDNN version
print(torch.cuda.get_device_name(0))  # print the name of the first GPU device

Output:

2.0.1+cu118
11.8
8700
NVIDIA GeForce RTX 3060
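Beyond the version printout, it can help to actually run a small computation on the GPU. A minimal smoke test (it simply falls back to the CPU if CUDA is not available):

import torch

# Allocate a tensor on the GPU (if available), run a matrix multiply there,
# and print which device did the work.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.rand(1024, 1024, device=device)
y = x @ x
print("device used:", y.device)
print("result checksum:", y.sum().item())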

pip Tsinghua mirror
To speed up downloads, append the Tsinghua index to a pip command:

-i https://pypi.tuna.tsinghua.edu.cn/simple
pip install scipy -i https://pypi.tuna.tsinghua.edu.cn/simple