Author: Tyan
Blog: noahsnail.com | CSDN | 简书
Note: model training, testing, and deployment can all be done inside a Docker environment, which greatly reduces environment problems.
# CUDA PATH
export PATH="/usr/local/cuda-8.0/bin:$PATH"
# CUDA LD_LIBRARY_PATH
export LD_LIBRARY_PATH="/usr/local/cuda-8.0/lib64:$LD_LIBRARY_PATH"
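Both exports prepend, so the CUDA 8.0 directories are searched before any system-wide entries. A minimal, self-contained sketch of that ordering (it only manipulates a local `PATH` value and does not require CUDA to be installed):

```shell
# PATH is searched left to right, so a prepended entry wins over system dirs.
PATH="/usr/local/cuda-8.0/bin:$PATH"
# Show the first entry that will be searched:
printf '%s\n' "$PATH" | cut -d: -f1   # -> /usr/local/cuda-8.0/bin
```

If `nvcc` still resolves to an older install after this, the stale entry is earlier in `PATH` than the prepended one.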
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Tue_Jan_10_13:22:03_CST_2017
Cuda compilation tools, release 8.0, V8.0.61
# Extract the cuDNN archive
tar zxvf cudnn-8.0-linux-x64-v5.1.tgz
cd cuda
# Copy the header file
sudo cp include/cudnn.h /usr/local/cuda-8.0/include/
# Copy the shared library
sudo cp lib64/libcudnn.so.5.1.10 /usr/local/cuda-8.0/lib64/
# Create the symlinks
cd /usr/local/cuda-8.0/lib64/
sudo ln -s libcudnn.so.5.1.10 libcudnn.so.5
sudo ln -s libcudnn.so.5 libcudnn.so
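The two `ln -s` commands build the usual soname chain: the runtime loader resolves `libcudnn.so.5` (the soname), while the linker resolves the bare `libcudnn.so` when you pass `-lcudnn`. A sketch of the resulting chain in a scratch directory (the filenames mirror the real ones; nothing here touches /usr/local):

```shell
tmp=$(mktemp -d)
cd "$tmp"
touch libcudnn.so.5.1.10               # stands in for the real shared object
ln -s libcudnn.so.5.1.10 libcudnn.so.5 # soname link, used by the loader
ln -s libcudnn.so.5 libcudnn.so        # linker name, used by -lcudnn
ls -l libcudnn.so*
```

If either link is missing in the real /usr/local/cuda-8.0/lib64/, builds typically fail at link time (`cannot find -lcudnn`) or at run time (`libcudnn.so.5: cannot open shared object file`).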
# Clone and build NCCL
git clone https://github.com/NVIDIA/nccl.git
cd nccl
make CUDA_HOME=/usr/local/cuda-8.0 test
# Run the single-process all_reduce test
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:./build/lib
./build/test/single/all_reduce_test 10000000
# Install into the local nccl prefix
make PREFIX=nccl install
# Copy files
sudo cp /yourpath/nccl/build/include/nccl.h /usr/local/include
sudo cp /yourpath/nccl/build/lib/libnccl* /usr/local/lib
# Edit ~/.bashrc
export LD_LIBRARY_PATH="/usr/local/cuda-8.0/lib64:/yourpath/nccl/build/lib:$LD_LIBRARY_PATH"
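Note that `./build/lib` in the test step is a relative path, so it only resolves while you stay inside the nccl checkout; that is why the ~/.bashrc entry uses an absolute path instead. A small sketch of pinning the entry to an absolute path (scratch directory, illustrative path only):

```shell
mkdir -p /tmp/nccl_demo/build/lib
cd /tmp/nccl_demo
# Expand $PWD now, so the entry survives later `cd`s:
LD_LIBRARY_PATH="$PWD/build/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
printf '%s\n' "$LD_LIBRARY_PATH" | cut -d: -f1   # -> /tmp/nccl_demo/build/lib
```

The `${VAR:+...}` expansion avoids a trailing colon when `LD_LIBRARY_PATH` started out empty (a trailing colon silently adds the current directory to the search path).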
sudo yum install protobuf-devel leveldb-devel snappy-devel opencv-devel boost-devel hdf5-devel gflags-devel glog-devel lmdb-devel atlas-devel
sudo yum install python-pip
sudo pip install --upgrade pip
sudo pip install numpy
Reference: http://blog.csdn.net/quincuntial/article/details/53494949
Reference: http://blog.csdn.net/quincuntial/article/details/53468000
sudo pip install tensorflow-gpu
pip install http://download.pytorch.org/whl/cu80/torch-0.1.12.post2-cp27-none-linux_x86_64.whl
pip install torchvision
pip install lmdb
pip install mahotas
pip install cffi
# Install docker
sudo yum install docker-ce
# Start docker
sudo systemctl start docker
# Test docker
sudo docker run hello-world
# Install nvidia-docker
# https://github.com/NVIDIA/nvidia-docker
wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker-1.0.1-1.x86_64.rpm
sudo rpm -i /tmp/nvidia-docker*.rpm && rm /tmp/nvidia-docker*.rpm
# start
sudo systemctl start nvidia-docker
# Test nvidia-smi
nvidia-docker run --rm nvidia/cuda nvidia-smi