This article does not cover how to install CUDA itself; it only covers installing cuDNN and verifying that CUDA and cuDNN are installed correctly.
Experimental environment:
Refer first to the official NVIDIA documentation:
CUDA installation
cuDNN installation
Recommended installation blog post:
Install CUDA 10.0 and cuDNN 7.5.0 for PyTorch on Ubuntu 18.04 LTS (includes commands for verifying CUDA and cuDNN)
Following that post's cuDNN installation, when I ran `sudo dpkg -i libcudnn7_7.6.5.32-1+cuda10.0_amd64.deb`
I hit a symbolic-link problem; see the cuDNN installation section later in this article.
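Before building the samples, it is worth a quick sanity check that the driver and the toolkit are visible. A minimal check, assuming CUDA 10.0 is installed under /usr/local/cuda-10.0:

nvidia-smi  # driver side: should list the GPU and the driver version
nvcc --version  # toolkit side: should report release 10.0; use /usr/local/cuda-10.0/bin/nvcc if nvcc is not on PATH

To verify CUDA itself, build the sample programs that ship with the toolkit: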
cd /usr/local/cuda-10.0/samples
sudo make
Ignore the many warnings about deprecated architectures (sm_20 and similarly ancient GPUs). Once the build finishes, run two of the tests: deviceQuery and matrixMulCUBLAS. First, try deviceQuery:
/usr/local/cuda-10.0/samples/bin/x86_64/linux/release/deviceQuery
The expected output:
/usr/local/cuda-10.0/samples/bin/x86_64/linux/release/deviceQuery
/usr/local/cuda-10.0/samples/bin/x86_64/linux/release/deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX 1060"
  CUDA Driver Version / Runtime Version          10.0 / 10.0
  CUDA Capability Major/Minor version number:    6.1
  Total amount of global memory:                 6078 MBytes (6373572608 bytes)
  (10) Multiprocessors, (128) CUDA Cores/MP:     1280 CUDA Cores
  GPU Max Clock rate:                            1671 MHz (1.67 GHz)
  Memory Clock rate:                             4004 Mhz
  Memory Bus Width:                              192-bit
  L2 Cache Size:                                 1572864 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z):  (1024, 1024, 64)
  Max dimension size of a grid size (x,y,z):     (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.0, CUDA Runtime Version = 10.0, NumDevs = 1
Result = PASS
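Next, matrixMulCUBLAS, which also exercises cuBLAS. Assuming the samples were built in place as above, the binary should land in the same release directory:

/usr/local/cuda-10.0/samples/bin/x86_64/linux/release/matrixMulCUBLAS

It should likewise finish with a PASS line.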
The English original:
Install CUDA 10.0 and cuDNN 7.5.0 for PyTorch on Ubuntu 18.04 LTS (includes commands for verifying CUDA and cuDNN)
Following that post's cuDNN installation, when I ran sudo dpkg -i libcudnn7_7.6.5.32-1+cuda10.0_amd64.deb
I got errors similar to the following (my original error output was not saved):
/sbin/ldconfig.real: /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8 is not a symbolic link
/sbin/ldconfig.real: /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8 is not a symbolic link
/sbin/ldconfig.real: /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8 is not a symbolic link
/sbin/ldconfig.real: /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8 is not a symbolic link
/sbin/ldconfig.real: /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8 is not a symbolic link
/sbin/ldconfig.real: /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8 is not a symbolic link
/sbin/ldconfig.real: /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8 is not a symbolic link
Solution:
Posts online suggest turning the files above into symbolic links with sudo ln -sf ..., which does resolve the error.
After thinking it over, though, I worried this could cause problems across multiple CUDA versions, so I did not go that route.
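For reference, the workaround I decided against looks roughly like this: recreate each complained-about file as a symlink to the real versioned library and refresh the linker cache. The paths and the 8.x.y.z version below are illustrative only:

cd /usr/local/cuda-11.0/targets/x86_64-linux/lib
sudo ln -sf libcudnn.so.8.x.y.z libcudnn.so.8  # replace 8.x.y.z with the version actually installed
sudo ln -sf libcudnn_ops_infer.so.8.x.y.z libcudnn_ops_infer.so.8
# ...repeat for the other libcudnn_*.so.8 files listed by ldconfig, then refresh the cache:
sudo ldconfig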
Getting to the point: an alternative way to install cuDNN
What I downloaded is this one (the cuDNN tar archive for CUDA 10.0):
The downloaded file is cudnn-10.0-linux-x64-v7.6.5.32.solitairetheme8 (about 463 MB; it can sit in any directory).
cp cudnn-10.0-linux-x64-v7.6.5.32.solitairetheme8 cudnn-10.0-linux-x64-v7.6.5.32.tgz  # make a copy first, just to be safe
tar -zxvf cudnn-10.0-linux-x64-v7.6.5.32.tgz  # extracting produces a cuda folder
You can inspect the extracted directory tree with `tree -L 2`; it looks like this:
(base) username@username:~/Downloads/cuda$ tree -L 2
.
├── include
│ └── cudnn.h
├── lib64
│ ├── libcudnn.so -> libcudnn.so.7
│ ├── libcudnn.so.7 -> libcudnn.so.7.6.5
│ ├── libcudnn.so.7.6.5
│ └── libcudnn_static.a
└── NVIDIA_SLA_cuDNN_Support.txt
Now copy the cudnn.h header and the libcudnn libraries into the CUDA installation (you can also target a specific version directly, e.g. `sudo cp cuda/include/cudnn*.h /usr/local/cuda-10.0/include`):
$ sudo cp cuda/include/cudnn*.h /usr/local/cuda/include
$ sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda/lib64
$ sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
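The -P flag matters: it copies the symlinks inside lib64 as symlinks instead of expanding each one into a full copy of the library. You can confirm the links survived the copy with something like:

ls -l /usr/local/cuda/lib64/libcudnn*
# expected: libcudnn.so -> libcudnn.so.7, libcudnn.so.7 -> libcudnn.so.7.6.5, plus the real libcudnn.so.7.6.5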
How the symlink behaves: copying files into a directory that is itself a symbolic link is the same as copying them into the directory the link points to. So if /usr/local/cuda currently points to /usr/local/cuda-10.0, you can equally replace cuda with cuda-10.0 in the commands above.
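To check where the /usr/local/cuda symlink actually points on your machine:

ls -ld /usr/local/cuda  # e.g. /usr/local/cuda -> /usr/local/cuda-10.0
readlink -f /usr/local/cuda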
Older cuDNN releases keep the version macros in cudnn.h; newer releases (e.g. cuDNN 8.1.0) keep them in cudnn_version.h.
# ${CUDNN_H_PATH} is, for example, /usr/local/cuda-10.0/include/cudnn.h
cat ${CUDNN_H_PATH} | grep CUDNN_MAJOR -A 2
The output looks like this:
#define CUDNN_MAJOR 7
#define CUDNN_MINOR 6
#define CUDNN_PATCHLEVEL 2
--
#define CUDNN_VERSION (CUDNN_MAJOR * 1000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL)
So this cuDNN version is 7.6.2.
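For cuDNN 8.x and later, run the same grep against cudnn_version.h instead; a sketch assuming the header sits in the same include directory:

cat /usr/local/cuda/include/cudnn_version.h | grep CUDNN_MAJOR -A 2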
Error: cudnn.h: No such file or directory
If you hit this error, it is because CUDA and cuDNN were installed from .deb packages; with that newer installation method there is no cudnn.h in the CUDA include directory, so the command above will not work. If you installed from the tar.gz archive, the command above is all you need. Reference: Check CUDA and cuDNN version under Ubuntu
If you installed from .deb packages, verify the cuDNN installation like this:
dpkg -l | grep cudnn  # lists the installed cuDNN packages (and the CUDA version they target) in one command
This shows everything cuDNN-related that dpkg knows about. The screenshot in the original post, taken on Ubuntu 16.04, shows that the machine had CUDA 9.0 and cuDNN 7.3.1.20 installed.
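For illustration, a .deb install of cuDNN 7.6.5 for CUDA 10.0 would produce lines roughly of this form (hypothetical output; the exact packages and versions depend on what you installed), whereas the tar.gz install above shows nothing here:

ii  libcudnn7  7.6.5.32-1+cuda10.0  amd64  cuDNN runtime libraries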
Copy the cuDNN samples to a writable path:
$ cp -r /usr/src/cudnn_samples_v8/ $HOME
Go into the writable path:
$ cd $HOME/cudnn_samples_v8/mnistCUDNN
Compile mnistCUDNN:
$ make clean && make
Run the mnistCUDNN sample:
$ ./mnistCUDNN
If cuDNN is properly installed and running on your Linux system, you will see a message similar to the following:
Test passed!
The full expected output (here from cudnn_samples_v7 with cuDNN 7.6.5):
(base) username@username:/usr/src/cudnn_samples_v7/mnistCUDNN$ ./mnistCUDNN
cudnnGetVersion() : 7605 , CUDNN_VERSION from cudnn.h : 7605 (7.6.5)
Host compiler version : GCC 7.5.0
There are 1 CUDA capable devices on your machine :
device 0 : sms 36 Capabilities 7.5, SmClock 1620.0 Mhz, MemSize (Mb) 7979, MemClock 7001.0 Mhz, Ecc=0, boardGroupID=0
Using device 0

Testing single precision
Loading image data/one_28x28.pgm
Performing forward propagation ...
Testing cudnnGetConvolutionForwardAlgorithm ...
Fastest algorithm is Algo 0
Testing cudnnFindConvolutionForwardAlgorithm ...
^^^^ CUDNN_STATUS_SUCCESS for Algo 0: 0.009952 time requiring 0 memory
^^^^ CUDNN_STATUS_SUCCESS for Algo 2: 0.018272 time requiring 57600 memory
^^^^ CUDNN_STATUS_SUCCESS for Algo 1: 0.024256 time requiring 3464 memory
^^^^ CUDNN_STATUS_SUCCESS for Algo 4: 0.049152 time requiring 207360 memory
^^^^ CUDNN_STATUS_SUCCESS for Algo 7: 0.061344 time requiring 2057744 memory
Resulting weights from Softmax:
0.0000000 0.9999399 0.0000000 0.0000000 0.0000561 0.0000000 0.0000012 0.0000017 0.0000010 0.0000000
Loading image data/three_28x28.pgm
Performing forward propagation ...
Resulting weights from Softmax:
0.0000000 0.0000000 0.0000000 0.9999288 0.0000000 0.0000711 0.0000000 0.0000000 0.0000000 0.0000000
Loading image data/five_28x28.pgm
Performing forward propagation ...
Resulting weights from Softmax:
0.0000000 0.0000008 0.0000000 0.0000002 0.0000000 0.9999820 0.0000154 0.0000000 0.0000012 0.0000006

Result of classification: 1 3 5

Test passed!

Testing half precision (math in single precision)
Loading image data/one_28x28.pgm
Performing forward propagation ...
Testing cudnnGetConvolutionForwardAlgorithm ...
Fastest algorithm is Algo 0
Testing cudnnFindConvolutionForwardAlgorithm ...
^^^^ CUDNN_STATUS_SUCCESS for Algo 0: 0.018016 time requiring 0 memory
^^^^ CUDNN_STATUS_SUCCESS for Algo 2: 0.026784 time requiring 28800 memory
^^^^ CUDNN_STATUS_SUCCESS for Algo 1: 0.028576 time requiring 3464 memory
^^^^ CUDNN_STATUS_SUCCESS for Algo 4: 0.047744 time requiring 207360 memory
^^^^ CUDNN_STATUS_SUCCESS for Algo 7: 0.061440 time requiring 2057744 memory
Resulting weights from Softmax:
0.0000001 1.0000000 0.0000001 0.0000000 0.0000563 0.0000001 0.0000012 0.0000017 0.0000010 0.0000001
Loading image data/three_28x28.pgm
Performing forward propagation ...
Resulting weights from Softmax:
0.0000000 0.0000000 0.0000000 1.0000000 0.0000000 0.0000714 0.0000000 0.0000000 0.0000000 0.0000000
Loading image data/five_28x28.pgm
Performing forward propagation ...
Resulting weights from Softmax:
0.0000000 0.0000008 0.0000000 0.0000002 0.0000000 1.0000000 0.0000154 0.0000000 0.0000012 0.0000006

Result of classification: 1 3 5

Test passed!