
Installing TensorFlow 1.14.0 + CUDA 10.0/10.2 + cuDNN 7.6.5 on Ubuntu 18.04 (GTX 960 / RTX 2080 Ti / Quadro P600), in 2023

This is also groundwork for reproducing the original "Attention Is All You Need" experiments.

1. CUDA 10.0

I. Install CUDA 10.0 and the matching cuDNN 7.6.5

Download CUDA 10.0 for Ubuntu 18.04:

CUDA Toolkit 10.0 Download | NVIDIA Developer

Install CUDA:
 
Preliminaries

Blacklist the open-source nouveau driver: create blacklist-nouveau.conf with the two lines below, then rebuild the initramfs.

```shell
sudo vim /etc/modprobe.d/blacklist-nouveau.conf

# contents of /etc/modprobe.d/blacklist-nouveau.conf:
blacklist nouveau
options nouveau modeset=0

sudo update-initramfs -u
```
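After the reboot it is worth confirming the blacklist actually took effect; a quick check (no output from grep means nouveau is gone):

```shell
# No grep output means nouveau is no longer loaded after the reboot.
lsmod | grep nouveau || echo "nouveau is not loaded"
```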

Install CUDA 10.0:

1. Manually download the CUDA 10.0 local .deb for Ubuntu 18.04, then install:

```shell
sudo apt-get install linux-headers-$(uname -r) \
  && sudo apt-key del 7fa2af80 \
  && sudo dpkg -i cuda-repo-ubuntu1804-10-0-local-10.0.130-410.48_1.0-1_amd64.deb \
  && sudo apt-key add /var/cuda-repo-10-0-local-10.0.130-410.48/7fa2af80.pub \
  && sudo apt-get update \
  && sudo apt-get -y install cuda
```


Register as an NVIDIA developer, download the matching cuDNN .deb packages, and install them:

```shell
# note: the install order matters
sudo dpkg -i libcudnn7_7.6.5.32-1+cuda10.0_amd64.deb && \
sudo dpkg -i libcudnn7-dev_7.6.5.32-1+cuda10.0_amd64.deb && \
sudo dpkg -i libcudnn7-doc_7.6.5.32-1+cuda10.0_amd64.deb
```

II. Set up the Python 3 environment

Install a suitable Python 3 version and the required modules first:

```shell
sudo apt install python3.7*
sudo rm /usr/bin/python3
sudo ln /usr/bin/python3.7 /usr/bin/python3
sudo apt install python3-pip
sudo pip3 install ipython
sudo pip3 install Cython
sudo apt-get install -y glibc-doc manpages-posix-dev
sudo pip3 install numpy==1.15.0 -i https://pypi.tuna.tsinghua.edu.cn/simple/
sudo pip3 install pkgconfig==1.4.0
cd /usr/lib/python3/dist-packages
sudo ln -s apt_pkg.cpython-36m-x86_64-linux-gnu.so apt_pkg.so
# install HDF5 first: see the build steps at the end of this article
sudo pip3 install h5py==2.10.0 -i https://pypi.tuna.tsinghua.edu.cn/simple/
sudo pip3 install setuptools==57.5.0 -i https://pypi.tuna.tsinghua.edu.cn/simple/
```
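A quick sanity check that the relinked interpreter and pip are what the rest of the guide assumes (exact version numbers will vary by machine):

```shell
python3 --version      # should report Python 3.7.x after the relink above
python3 -m pip --version
```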

III. Install TensorFlow 1.14.0

```shell
sudo pip3 install tensorflow-gpu==1.14.0 -i https://pypi.tuna.tsinghua.edu.cn/simple/
```

IV. Download, build, and install HDF5

HDF5 1.10.10:

```shell
sudo apt-get update
sudo apt-get install build-essential
sudo apt-get build-dep hdf5
mkdir ~/Software
cd ~/Software
wget https://support.hdfgroup.org/ftp/HDF5/releases/hdf5-1.10/hdf5-1.10.10/src/hdf5-1.10.10.tar.gz
tar -xf hdf5-1.10.10.tar.gz
cd hdf5-1.10.10/
./configure
make -j9
sudo make install
```
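After `sudo make install` (default prefix /usr/local) you may also need `sudo ldconfig` so the dynamic linker picks up the new library. A minimal stdlib check of whether libhdf5 is visible on the default search path (the exact name or path printed will vary per machine):

```python
import ctypes.util

# find_library returns the library name if the dynamic linker can see
# libhdf5, or None if it is not on the default search path yet.
found = ctypes.util.find_library("hdf5")
print("libhdf5:", found if found else "not found on the default search path")
```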

V. Test the TensorFlow installation

Demo test source, hello_tf114.py:

```python
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # silence TF's info/warning logs

import tensorflow as tf  # needed for the tf.* calls below


def tf114_demo():
    a = 3
    b = 4
    c = a + b
    print("a + b in py =", c)

    a_t = tf.constant(3)
    b_t = tf.constant(4)
    c_t = a_t + b_t
    print("TensorFlow add a_t + b_t =", c_t)

    with tf.Session() as sess:
        c_t_value = sess.run(c_t)
        print("c_t_value= ", c_t_value)

    return None


if __name__ == "__main__":
    tf114_demo()
```

Run result:

2. CUDA 10.2 + RTX 2080 Ti

I. Trying CUDA 10.2 with TensorFlow 1.14.0

The RTX 2080 Ti was released in September 2018, and the first CUDA 10.2 release came in late 2019, so CUDA 10.2 can be expected to support the 2080 Ti and to run TensorFlow 1.14.0 on it; deeper training tests follow below.
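One practical detail behind this reasoning: each toolkit requires a minimum driver version (the 410.48 in the CUDA 10.0 .deb filename above is exactly that bundled driver). The values below are taken from NVIDIA's CUDA compatibility table, so double-check them for your exact release; the `driver_ok` helper is just an illustration:

```python
# Minimum Linux driver bundled with each CUDA 10.x toolkit
# (per NVIDIA's CUDA compatibility table; verify for your exact release).
MIN_DRIVER = {
    "10.0": "410.48",
    "10.1": "418.39",
    "10.2": "440.33",
}

def driver_ok(installed_driver, cuda_version):
    """Return True if the installed driver is new enough for the toolkit."""
    need = [int(x) for x in MIN_DRIVER[cuda_version].split(".")]
    have = [int(x) for x in installed_driver.split(".")]
    return have >= need

# e.g. the 410.48 driver shipped with the CUDA 10.0 package is too old for 10.2:
print(driver_ok("410.48", "10.0"), driver_ok("410.48", "10.2"))
```

Compare the result against what `nvidia-smi` reports on your own machine before choosing a toolkit.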

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Testing shows that on Ubuntu 18.04, CUDA 10.2 does not work with TensorFlow 1.14.0 out of the box, because TensorFlow 1.14.0 loads the 10.0 versions of the CUDA libraries by name, e.g. libcudart.so.10.0, libcublas.so.10.0, and so on.
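These names are baked into the TensorFlow 1.14 binary, which dlopen()s each library by its SONAME. A small stdlib sketch reproduces what TensorFlow does at import time (on a box without CUDA 10.0 or the symlinks below, it reports the libraries as missing):

```python
import ctypes

# TensorFlow 1.14 was built against CUDA 10.0, so it dlopen()s the libraries
# by their 10.0 SONAMEs regardless of which toolkit is actually installed.
results = {}
for name in ("libcudart.so.10.0", "libcublas.so.10.0"):
    try:
        ctypes.CDLL(name)
        results[name] = "found"
    except OSError:
        results[name] = "missing"
    print(name + ":", results[name])
```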

If CUDA 10.2 is installed and nothing else is changed, TensorFlow fails at import with "cannot open shared object file" errors for the 10.0-named libraries.

So the workaround is to let the 10.2 .so files stand in for the 10.0 ones via symlinks:

```shell
cd /usr/lib/x86_64-linux-gnu \
  && sudo ln -s libcublas.so.10.2.2.89 libcublas.so.10.0 \
  && sudo ln -s libcublasLt.so.10.2.2.89 libcublasLt.so.10.0

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-10.2/targets/x86_64-linux/lib/

cd /usr/local/cuda-10.2/targets/x86_64-linux/lib/ \
  && sudo ln -s libcufft.so.10.1.2.89 libcufft.so.10.0 \
  && sudo ln -s libcufftw.so.10.1.2.89 libcufftw.so.10.0 \
  && sudo ln -s libcurand.so.10.1.2.89 libcurand.so.10.0 \
  && sudo ln -s libcusolverMg.so.10.3.0.89 libcusolverMg.so.10.0 \
  && sudo ln -s libcusolver.so.10.3.0.89 libcusolver.so.10.0 \
  && sudo ln -s libcusparse.so.10.3.1.89 libcusparse.so.10.0 \
  && sudo ln -s libnppc.so.10.2.1.89 libnppc.so.10.0 \
  && sudo ln -s libnppial.so.10.2.1.89 libnppial.so.10.0 \
  && sudo ln -s libnppicc.so.10.2.1.89 libnppicc.so.10.0 \
  && sudo ln -s libnppicom.so.10.2.1.89 libnppicom.so.10.0 \
  && sudo ln -s libnppidei.so.10.2.1.89 libnppidei.so.10.0 \
  && sudo ln -s libnppif.so.10.2.1.89 libnppif.so.10.0 \
  && sudo ln -s libnppig.so.10.2.1.89 libnppig.so.10.0 \
  && sudo ln -s libnppim.so.10.2.1.89 libnppim.so.10.0 \
  && sudo ln -s libnppist.so.10.2.1.89 libnppist.so.10.0 \
  && sudo ln -s libnppisu.so.10.2.1.89 libnppisu.so.10.0 \
  && sudo ln -s libnppitc.so.10.2.1.89 libnppitc.so.10.0 \
  && sudo ln -s libnpps.so.10.2.1.89 libnpps.so.10.0 \
  && sudo ln -s libnvgraph.so.10.2.89 libnvgraph.so.10.0 \
  && sudo ln -s libnvjpeg.so.10.3.1.89 libnvjpeg.so.10.0 \
  && sudo ln -s libcudart.so.10.2.89 libcudart.so.10.0 \
  && sudo ln -s libaccinj64.so.10.2.89 libaccinj64.so.10.0 \
  && sudo ln -s libcuinj64.so.10.2.89 libcuinj64.so.10.0 \
  && sudo ln -s libcupti.so.10.2.75 libcupti.so.10.0 \
  && sudo ln -s libnvrtc-builtins.so.10.2.89 libnvrtc-builtins.so.10.0 \
  && sudo ln -s libnvrtc.so.10.2.89 libnvrtc.so.10.0

# don't forget this:
export LD_LIBRARY_PATH=/usr/local/cuda-10.2/targets/x86_64-linux/lib/
```

A quick test passes.

In this setup CUDA 10.0 could not drive the 2080 Ti, presumably because the toolkit predates the card; CUDA 10.2 was released after the 2080 Ti, so it does support it.
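The patch suffixes in the ln -s list above (.2.89, .1.2.89, ...) are specific to the CUDA 10.2.89 release and differ between toolkit updates. Rather than retyping each command, a small stdlib sketch can discover the real file names in a lib directory and print the matching commands. The `cuda_compat_links` helper is a hypothetical convenience, not part of the original post:

```python
import os
import re


def cuda_compat_links(lib_dir, target_suffix="10.0"):
    """Return the `ln -s` commands that map each fully-versioned CUDA
    library (e.g. libcufft.so.10.1.2.89) to the <name>.so.10.0 name
    that TensorFlow 1.14 expects."""
    cmds = []
    # matches lib<name>.so.<major>.<minor>.<patch>[.<build>]
    pattern = re.compile(r"^(lib[A-Za-z0-9_\-]+\.so)\.(\d+\.\d+\.\d+(?:\.\d+)?)$")
    for fname in sorted(os.listdir(lib_dir)):
        m = pattern.match(fname)
        if m:
            link = "%s.%s" % (m.group(1), target_suffix)
            # skip libraries that already have a .so.10.0 alias
            if not os.path.exists(os.path.join(lib_dir, link)):
                cmds.append("sudo ln -s %s %s" % (fname, link))
    return cmds
```

Running it with `/usr/local/cuda-10.2/targets/x86_64-linux/lib/` prints commands you can review and paste, instead of editing each version suffix by hand.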

II. A deeper test: demo_01

bidirectional_rnn.py

  1. """ Bi-directional Recurrent Neural Network.
  2. A Bi-directional Recurrent Neural Network (LSTM) implementation example using
  3. TensorFlow library. This example is using the MNIST database of handwritten
  4. digits (http://yann.lecun.com/exdb/mnist/)
  5. Links:
  6. [Long Short Term Memory](http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf)
  7. [MNIST Dataset](http://yann.lecun.com/exdb/mnist/).
  8. Author: Aymeric Damien
  9. Project: https://github.com/aymericdamien/TensorFlow-Examples/
  10. """
  11. from __future__ import print_function
  12. import tensorflow as tf
  13. from tensorflow.contrib import rnn
  14. import numpy as np
  15. # Import MNIST data
  16. from tensorflow.examples.tutorials.mnist import input_data
  17. mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
  18. '''
  19. To classify images using a bidirectional recurrent neural network, we consider
  20. every image row as a sequence of pixels. Because MNIST image shape is 28*28px,
  21. we will then handle 28 sequences of 28 steps for every sample.
  22. '''
  23. # Training Parameters
  24. learning_rate = 0.001
  25. training_steps = 10000
  26. batch_size = 128
  27. display_step = 200
  28. # Network Parameters
  29. num_input = 28 # MNIST data input (img shape: 28*28)
  30. timesteps = 28 # timesteps
  31. num_hidden = 128 # hidden layer num of features
  32. num_classes = 10 # MNIST total classes (0-9 digits)
  33. # tf Graph input
  34. X = tf.placeholder("float", [None, timesteps, num_input])
  35. Y = tf.placeholder("float", [None, num_classes])
  36. # Define weights
  37. weights = {
  38. # Hidden layer weights => 2*n_hidden because of forward + backward cells
  39. 'out': tf.Variable(tf.random_normal([2*num_hidden, num_classes]))
  40. }
  41. biases = {
  42. 'out': tf.Variable(tf.random_normal([num_classes]))
  43. }
  44. def BiRNN(x, weights, biases):
  45. # Prepare data shape to match `rnn` function requirements
  46. # Current data input shape: (batch_size, timesteps, n_input)
  47. # Required shape: 'timesteps' tensors list of shape (batch_size, num_input)
  48. # Unstack to get a list of 'timesteps' tensors of shape (batch_size, num_input)
  49. x = tf.unstack(x, timesteps, 1)
  50. # Define lstm cells with tensorflow
  51. # Forward direction cell
  52. lstm_fw_cell = rnn.BasicLSTMCell(num_hidden, forget_bias=1.0)
  53. # Backward direction cell
  54. lstm_bw_cell = rnn.BasicLSTMCell(num_hidden, forget_bias=1.0)
  55. # Get lstm cell output
  56. try:
  57. outputs, _, _ = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,
  58. dtype=tf.float32)
  59. except Exception: # Old TensorFlow version only returns outputs not states
  60. outputs = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x,
  61. dtype=tf.float32)
  62. # Linear activation, using rnn inner loop last output
  63. return tf.matmul(outputs[-1], weights['out']) + biases['out']
  64. logits = BiRNN(X, weights, biases)
  65. prediction = tf.nn.softmax(logits)
  66. # Define loss and optimizer
  67. loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
  68. logits=logits, labels=Y))
  69. optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
  70. train_op = optimizer.minimize(loss_op)
  71. # Evaluate model (with test logits, for dropout to be disabled)
  72. correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1))
  73. accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
  74. # Initialize the variables (i.e. assign their default value)
  75. init = tf.global_variables_initializer()
  76. # Start training
  77. with tf.Session() as sess:
  78. # Run the initializer
  79. sess.run(init)
  80. for step in range(1, training_steps+1):
  81. batch_x, batch_y = mnist.train.next_batch(batch_size)
  82. # Reshape data to get 28 seq of 28 elements
  83. batch_x = batch_x.reshape((batch_size, timesteps, num_input))
  84. # Run optimization op (backprop)
  85. sess.run(train_op, feed_dict={X: batch_x, Y: batch_y})
  86. if step % display_step == 0 or step == 1:
  87. # Calculate batch loss and accuracy
  88. loss, acc = sess.run([loss_op, accuracy], feed_dict={X: batch_x,
  89. Y: batch_y})
  90. print("Step " + str(step) + ", Minibatch Loss= " + \
  91. "{:.4f}".format(loss) + ", Training Accuracy= " + \
  92. "{:.3f}".format(acc))
  93. print("Optimization Finished!")
  94. # Calculate accuracy for 128 mnist test images
  95. test_len = 128
  96. test_data = mnist.test.images[:test_len].reshape((-1, timesteps, num_input))
  97. test_label = mnist.test.labels[:test_len]
  98. print("Testing Accuracy:", \
  99. sess.run(accuracy, feed_dict={X: test_data, Y: test_label}))

$ python3 bidirectional_rnn.py

Training results:

nvidia-smi GPU usage:

III. A deeper test: demo_02

convolutional_network_raw_deviceInfo.py

  1. """ Convolutional Neural Network.
  2. Build and train a convolutional neural network with TensorFlow.
  3. This example is using the MNIST database of handwritten digits
  4. (http://yann.lecun.com/exdb/mnist/)
  5. Author: Aymeric Damien
  6. Project: https://github.com/aymericdamien/TensorFlow-Examples/
  7. """
  8. from __future__ import division, print_function, absolute_import
  9. import tensorflow as tf
  10. # Import MNIST data
  11. from tensorflow.examples.tutorials.mnist import input_data
  12. mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
  13. # Training Parameters
  14. learning_rate = 0.001
  15. num_steps = 200
  16. batch_size = 128
  17. display_step = 10
  18. # Network Parameters
  19. num_input = 784 # MNIST data input (img shape: 28*28)
  20. num_classes = 10 # MNIST total classes (0-9 digits)
  21. dropout = 0.75 # Dropout, probability to keep units
  22. # tf Graph input
  23. X = tf.placeholder(tf.float32, [None, num_input])
  24. Y = tf.placeholder(tf.float32, [None, num_classes])
  25. keep_prob = tf.placeholder(tf.float32) # dropout (keep probability)
  26. # Create some wrappers for simplicity
  27. def conv2d(x, W, b, strides=1):
  28. # Conv2D wrapper, with bias and relu activation
  29. x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
  30. x = tf.nn.bias_add(x, b)
  31. return tf.nn.relu(x)
  32. def maxpool2d(x, k=2):
  33. # MaxPool2D wrapper
  34. return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
  35. padding='SAME')
  36. # Create model
  37. def conv_net(x, weights, biases, dropout):
  38. # MNIST data input is a 1-D vector of 784 features (28*28 pixels)
  39. # Reshape to match picture format [Height x Width x Channel]
  40. # Tensor input become 4-D: [Batch Size, Height, Width, Channel]
  41. x = tf.reshape(x, shape=[-1, 28, 28, 1])
  42. # Convolution Layer
  43. conv1 = conv2d(x, weights['wc1'], biases['bc1'])
  44. # Max Pooling (down-sampling)
  45. conv1 = maxpool2d(conv1, k=2)
  46. # Convolution Layer
  47. conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
  48. # Max Pooling (down-sampling)
  49. conv2 = maxpool2d(conv2, k=2)
  50. # Fully connected layer
  51. # Reshape conv2 output to fit fully connected layer input
  52. fc1 = tf.reshape(conv2, [-1, weights['wd1'].get_shape().as_list()[0]])
  53. fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
  54. fc1 = tf.nn.relu(fc1)
  55. # Apply Dropout
  56. fc1 = tf.nn.dropout(fc1, dropout)
  57. # Output, class prediction
  58. out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
  59. return out
  60. # Store layers weight & bias
  61. weights = {
  62. # 5x5 conv, 1 input, 32 outputs
  63. 'wc1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
  64. # 5x5 conv, 32 inputs, 64 outputs
  65. 'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
  66. # fully connected, 7*7*64 inputs, 1024 outputs
  67. 'wd1': tf.Variable(tf.random_normal([7*7*64, 1024])),
  68. # 1024 inputs, 10 outputs (class prediction)
  69. 'out': tf.Variable(tf.random_normal([1024, num_classes]))
  70. }
  71. biases = {
  72. 'bc1': tf.Variable(tf.random_normal([32])),
  73. 'bc2': tf.Variable(tf.random_normal([64])),
  74. 'bd1': tf.Variable(tf.random_normal([1024])),
  75. 'out': tf.Variable(tf.random_normal([num_classes]))
  76. }
  77. # Construct model
  78. logits = conv_net(X, weights, biases, keep_prob)
  79. prediction = tf.nn.softmax(logits)
  80. # Define loss and optimizer
  81. loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
  82. logits=logits, labels=Y))
  83. optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
  84. train_op = optimizer.minimize(loss_op)
  85. # Evaluate model
  86. correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1))
  87. accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
  88. # Initialize the variables (i.e. assign their default value)
  89. init = tf.global_variables_initializer()
  90. # Start training
  91. with tf.Session(config=tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)) as sess:
  92. # Run the initializer
  93. sess.run(init)
  94. for step in range(1, num_steps+1):
  95. batch_x, batch_y = mnist.train.next_batch(batch_size)
  96. # Run optimization op (backprop)
  97. sess.run(train_op, feed_dict={X: batch_x, Y: batch_y, keep_prob: 0.8})
  98. if step % display_step == 0 or step == 1:
  99. # Calculate batch loss and accuracy
  100. loss, acc = sess.run([loss_op, accuracy], feed_dict={X: batch_x,
  101. Y: batch_y,
  102. keep_prob: 1.0})
  103. print("Step " + str(step) + ", Minibatch Loss= " + \
  104. "{:.4f}".format(loss) + ", Training Accuracy= " + \
  105. "{:.3f}".format(acc))
  106. print("Optimization Finished!")
  107. # Calculate accuracy for 256 MNIST test images
  108. print("Testing Accuracy:", \
  109. sess.run(accuracy, feed_dict={X: mnist.test.images[:256],
  110. Y: mnist.test.labels[:256],
  111. keep_prob: 1.0}))

$ python3 convolutional_network_raw_deviceInfo.py

Training results:

$ nvidia-smi

————————————————————————————————————————

Appendix: the environment-setup commands collected in one place:

```shell
sudo apt install python3-pip \
  && sudo pip3 install Cython \
  && sudo pip3 install ipython \
  && sudo apt-get install -y glibc-doc manpages-posix-dev \
  && sudo pip3 install numpy==1.15.0 -i https://pypi.tuna.tsinghua.edu.cn/simple/ \
  && sudo pip3 install pkgconfig==1.4.0 \
  && cd /usr/lib/python3/dist-packages \
  && sudo ln -s apt_pkg.cpython-36m-x86_64-linux-gnu.so apt_pkg.so

sudo apt-get update \
  && sudo apt-get install build-essential \
  && sudo apt-get build-dep hdf5 \
  && mkdir ~/Software \
  && cd ~/Software \
  && wget https://support.hdfgroup.org/ftp/HDF5/releases/hdf5-1.10/hdf5-1.10.10/src/hdf5-1.10.10.tar.gz \
  && tar -xf hdf5-1.10.10.tar.gz \
  && cd hdf5-1.10.10/ \
  && ./configure \
  && make -j9 \
  && sudo make install

sudo pip3 install h5py==2.10.0 -i https://pypi.tuna.tsinghua.edu.cn/simple/ \
  && sudo pip3 install setuptools==57.5.0 -i https://pypi.tuna.tsinghua.edu.cn/simple/

cd /usr/lib/x86_64-linux-gnu \
  && sudo ln -s libcublas.so.10.2.2.89 libcublas.so.10.0 \
  && sudo ln -s libcublasLt.so.10.2.2.89 libcublasLt.so.10.0

cd /usr/local/cuda-10.2/targets/x86_64-linux/lib/ \
  && sudo ln -s libcufft.so.10.1.2.89 libcufft.so.10.0 \
  && sudo ln -s libcufftw.so.10.1.2.89 libcufftw.so.10.0 \
  && sudo ln -s libcurand.so.10.1.2.89 libcurand.so.10.0 \
  && sudo ln -s libcusolverMg.so.10.3.0.89 libcusolverMg.so.10.0 \
  && sudo ln -s libcusolver.so.10.3.0.89 libcusolver.so.10.0 \
  && sudo ln -s libcusparse.so.10.3.1.89 libcusparse.so.10.0 \
  && sudo ln -s libnppc.so.10.2.1.89 libnppc.so.10.0 \
  && sudo ln -s libnppial.so.10.2.1.89 libnppial.so.10.0 \
  && sudo ln -s libnppicc.so.10.2.1.89 libnppicc.so.10.0 \
  && sudo ln -s libnppicom.so.10.2.1.89 libnppicom.so.10.0 \
  && sudo ln -s libnppidei.so.10.2.1.89 libnppidei.so.10.0 \
  && sudo ln -s libnppif.so.10.2.1.89 libnppif.so.10.0 \
  && sudo ln -s libnppig.so.10.2.1.89 libnppig.so.10.0 \
  && sudo ln -s libnppim.so.10.2.1.89 libnppim.so.10.0 \
  && sudo ln -s libnppist.so.10.2.1.89 libnppist.so.10.0 \
  && sudo ln -s libnppisu.so.10.2.1.89 libnppisu.so.10.0 \
  && sudo ln -s libnppitc.so.10.2.1.89 libnppitc.so.10.0 \
  && sudo ln -s libnpps.so.10.2.1.89 libnpps.so.10.0 \
  && sudo ln -s libnvgraph.so.10.2.89 libnvgraph.so.10.0 \
  && sudo ln -s libnvjpeg.so.10.3.1.89 libnvjpeg.so.10.0 \
  && sudo ln -s libcudart.so.10.2.89 libcudart.so.10.0 \
  && sudo ln -s libaccinj64.so.10.2.89 libaccinj64.so.10.0 \
  && sudo ln -s libcuinj64.so.10.2.89 libcuinj64.so.10.0 \
  && sudo ln -s libcupti.so.10.2.75 libcupti.so.10.0 \
  && sudo ln -s libnvrtc-builtins.so.10.2.89 libnvrtc-builtins.so.10.0 \
  && sudo ln -s libnvrtc.so.10.2.89 libnvrtc.so.10.0

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-10.2/targets/x86_64-linux/lib/
```

A Quadro P600 4 GB card was also tested successfully with CUDA 10.2 + cuDNN 7.6.5 + TensorFlow 1.14.0.

Training runs successfully:

 

 
