
The World's Simplest TensorFlow Tutorial

Getting Started with TensorFlow

TensorFlow graphs

TensorFlow uses a graph-based parallel computation model. For background on graphs, see the official documentation. As an example, to compute a = (b + c) * (c + 2), we can break the expression into:

```
d = b + c
e = c + 2
a = d * e
```

Converted into a graph, the expression looks like this:

(Figure: graph representation)

 

Building a graph for such a simple expression is admittedly overkill, but the example reveals something useful: d = b + c and e = c + 2 are independent of each other, so they can be computed in parallel. For more complex CNNs and RNNs, the graph's ability to parallelize computation pays off far more.

In practice, a three-layer (single hidden layer) neural network built on TensorFlow looks like the figure below:

(Figure: TensorFlow data flow graph)

 

In the figure above, the circular and square nodes are called nodes, and the data flowing between them are called tensors. See the official documentation for more on tensors.

A rank-0 tensor == a scalar
A rank-1 tensor == a vector (a 1-D array)
A rank-2 tensor == a 2-D array
…
A rank-n tensor == an n-D array
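
To make the rank/array correspondence concrete, here is a minimal sketch (the constant values below are invented purely for illustration) that builds a tensor of each rank and inspects its shape:

```python
import tensorflow as tf

scalar = tf.constant(3.0)                       # rank 0: a scalar, shape ()
vector = tf.constant([1.0, 2.0, 3.0])           # rank 1: a vector, shape (3,)
matrix = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # rank 2: a 2-D array, shape (2, 2)

print(scalar.shape)  # ()
print(vector.shape)  # (3,)
print(matrix.shape)  # (2, 2)
```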

The relationship between tensors and nodes:
  If the input tensor has shape 5000 × 64, there are 5000 training samples, each with 64 features, so the input layer must have 64 nodes to receive those features.

The three-layer network in the figure consists of an input layer (input in the figure), a hidden layer (named ReLU layer here because its activation function is ReLU), and an output layer (the Logit Layer in the figure).

Notice that in each layer the relevant tensors flow into the Gradient node, which computes gradients; those gradient tensors then flow into the SGD Trainer node, which optimizes the network (i.e., updates the network parameters).

By representing a neural network as a graph, TensorFlow can parallelize the computation and improve efficiency. The next section introduces TensorFlow's basic syntax through a simple example.

A Simple TensorFlow example

We compute a = (b + c) * (c + 2) with TensorFlow.
1. Define the data:

```python
import tensorflow as tf

# First, create a TensorFlow constant => 2
const = tf.constant(2.0, name='const')

# Create TensorFlow variables b and c
b = tf.Variable(2.0, name='b')
c = tf.Variable(1.0, dtype=tf.float32, name='c')
```

As shown above, constants are defined with tf.constant() and variables with tf.Variable() in TensorFlow. TensorFlow can infer data types automatically; for example, the value 2.0 defaults to tf.float32, but it is still best to declare the type explicitly. See the official documentation for more on TensorFlow data types.
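
As a quick sketch of this type inference (the float64 constant is added here purely for contrast):

```python
implicit = tf.constant(2.0)                    # dtype inferred as tf.float32
explicit = tf.constant(2.0, dtype=tf.float64)  # dtype declared explicitly

print(implicit.dtype)  # <dtype: 'float32'>
print(explicit.dtype)  # <dtype: 'float64'>
```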
2. Define the operations (also called TensorFlow operations):

```python
# Create the operations
d = tf.add(b, c, name='d')
e = tf.add(c, const, name='e')
a = tf.multiply(d, e, name='a')
```

Notice that +, -, ×, and ÷ each have their own function in TensorFlow. In fact, TensorFlow defines enough functions to express all the common mathematical operations, and it also overloads the Python operators for some of them, but to be safe I still recommend using the functions rather than the operators.
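
For illustration, the overloaded operators build the same kinds of graph ops as the function calls above; a minimal sketch reusing b, c, and const from the earlier snippet:

```python
# Equivalent graph construction via overloaded Python operators
d2 = b + c       # same op type as tf.add(b, c)
e2 = c + const   # same op type as tf.add(c, const)
a2 = d2 * e2     # same op type as tf.multiply(d2, e2)
```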

Note: every TensorFlow variable must be initialized before use. Initialization takes two steps:

1. Define the initialization operation
2. Run the initialization operation

```python
# 1. Define the init operation
init_op = tf.global_variables_initializer()
```

The TensorFlow graph is now fully built; the next step is to run it and print the result.

Running the graph requires first calling tf.Session() to create a session. The session is our handle for interacting with the graph. See the official documentation for more on sessions.

```python
# session
with tf.Session() as sess:
    # 2. Run the init operation
    sess.run(init_op)
    # Compute a
    a_out = sess.run(a)
    print("Variable a is {}".format(a_out))
```

It is worth mentioning that TensorFlow ships with an excellent visualization tool, TensorBoard; see the official documentation. Visualizing the graph from the example above gives:

(Figure: simple example visualization)

 

The TensorFlow placeholder

An improvement on the example above: let variable b accept an arbitrary value. TensorFlow receives external values through placeholders, created with tf.placeholder().

```python
# Create a placeholder
b = tf.placeholder(tf.float32, [None, 1], name='b')
```

The second argument is [None, 1]. None means unspecified: the size of the first dimension is not fixed and can take any value. It corresponds to the number of input tensors (i.e., the number of samples), which can be 32, 64, and so on.

Now, to obtain the result, we must feed a value for placeholder b at run time, by changing a_out = sess.run(a) to:

```python
import numpy as np

a_out = sess.run(a, feed_dict={b: np.arange(0, 10)[:, np.newaxis]})
```

Output:

```
Variable a is [[ 3.]
 [ 6.]
 [ 9.]
 [ 12.]
 [ 15.]
 [ 18.]
 [ 21.]
 [ 24.]
 [ 27.]
 [ 30.]]
```
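
Because the first dimension of the placeholder is None, the same graph accepts batches of any size. A small sketch, assuming the graph above has been rebuilt with the placeholder version of b:

```python
import numpy as np

with tf.Session() as sess:
    sess.run(init_op)
    # Feed 3 samples
    print(sess.run(a, feed_dict={b: np.array([[1.0], [2.0], [3.0]])}))   # [[ 6.] [ 9.] [12.]]
    # Feed 5 samples
    print(sess.run(a, feed_dict={b: np.arange(5, dtype=np.float32)[:, np.newaxis]}))
```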

A Neural Network Example

A neural network example, using the MNIST dataset.
1. Load the data:

```python
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
```

one_hot=True one-hot encodes the labels; for example, the label 4 becomes [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]. This is the format the network's output layer expects.
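
As a small illustration of the encoding (using plain numpy rather than the tutorial's code):

```python
import numpy as np

label = 4
one_hot = np.zeros(10)
one_hot[label] = 1
print(one_hot)  # [0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]
```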

Setting things up

2. Define the hyperparameters and placeholders

```python
# Hyperparameters
learning_rate = 0.5
epochs = 10
batch_size = 100

# Placeholders
# Input images are 28 x 28 pixels = 784 features
x = tf.placeholder(tf.float32, [None, 784])
# Output is the one-hot encoding of the digits 0-9
y = tf.placeholder(tf.float32, [None, 10])
```

Once again, None in [None, 784] means any size; it corresponds to the number of samples.

3. Define the parameters w and b

```python
# hidden layer => w, b
W1 = tf.Variable(tf.random_normal([784, 300], stddev=0.03), name='W1')
b1 = tf.Variable(tf.random_normal([300]), name='b1')
# output layer => w, b
W2 = tf.Variable(tf.random_normal([300, 10], stddev=0.03), name='W2')
b2 = tf.Variable(tf.random_normal([10]), name='b2')
```

Note that the weights w and biases b of the fully connected layers both need random initialization; tf.random_normal() generates normally distributed random numbers.

4. Build the hidden layer

```python
# hidden layer
hidden_out = tf.add(tf.matmul(x, W1), b1)
hidden_out = tf.nn.relu(hidden_out)
```

The code above corresponds to the formulas:

$z = Wx + b$
$h = \mathrm{relu}(z)$

 

5. Build the output (the predictions)

```python
# Compute the output
y_ = tf.nn.softmax(tf.add(tf.matmul(hidden_out, W2), b2))
```

For single-label multi-class classification, the output layer's activation function is tf.nn.softmax(). See Wikipedia for more on softmax.
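
As a rough numpy sketch of what softmax computes (the logits below are invented for illustration):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))  # subtract the max for numerical stability
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
print(softmax(logits))  # probabilities summing to 1, roughly [0.659 0.242 0.099]
```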

6. BP part: define the loss
The loss is the cross entropy, given by

$$J = -\frac{1}{m}\sum_{i=1}^{m}\sum_{j=1}^{n}\left[\,y_j^{(i)}\log\big(\hat{y}_j^{(i)}\big) + \big(1 - y_j^{(i)}\big)\log\big(1 - \hat{y}_j^{(i)}\big)\right]$$

where $y_j^{(i)}$ is the true label and $\hat{y}_j^{(i)}$ is the predicted probability (y_ in the code).

 

The formula has two steps:

1. Compute the cross entropy over the n labels
2. Average over the m samples

```python
# Clip the predictions to avoid log(0)
y_clipped = tf.clip_by_value(y_, 1e-10, 0.9999999)
cross_entropy = -tf.reduce_mean(tf.reduce_sum(y * tf.log(y_clipped)
                                              + (1 - y) * tf.log(1 - y_clipped), axis=1))
```

7. BP part: define the optimization algorithm

```python
# Create the optimizer and set the objective to minimize
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cross_entropy)
```

See the official documentation for more optimization algorithms in TensorFlow.

8. Define the initialization operation and the accuracy node

```python
# init operator
init_op = tf.global_variables_initializer()

# Create the accuracy node
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
```

correct_prediction returns an m × 1 tensor whose True/False values indicate whether each prediction is correct.
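
As a hedged numpy sketch of what the accuracy node computes (the arrays below are invented for illustration):

```python
import numpy as np

labels = np.array([[0, 1, 0], [1, 0, 0]])             # one-hot ground truth
preds = np.array([[0.1, 0.8, 0.1], [0.3, 0.6, 0.1]])  # softmax outputs
correct = np.equal(np.argmax(labels, 1), np.argmax(preds, 1))
print(correct)                            # [ True False]
print(correct.astype(np.float32).mean())  # 0.5
```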

Setting up the training

9. Start training

```python
# Create the session
with tf.Session() as sess:
    # Initialize the variables
    sess.run(init_op)
    total_batch = int(len(mnist.train.labels) / batch_size)
    for epoch in range(epochs):
        avg_cost = 0
        for i in range(total_batch):
            batch_x, batch_y = mnist.train.next_batch(batch_size=batch_size)
            _, c = sess.run([optimizer, cross_entropy],
                            feed_dict={x: batch_x, y: batch_y})
            avg_cost += c / total_batch
        print("Epoch:", (epoch + 1), "cost =", "{:.3f}".format(avg_cost))
    print("Training complete!")
    print(sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels}))
```

Output:

```
Epoch: 1 cost = 0.586
Epoch: 2 cost = 0.213
Epoch: 3 cost = 0.150
Epoch: 4 cost = 0.113
Epoch: 5 cost = 0.094
Epoch: 6 cost = 0.073
Epoch: 7 cost = 0.058
Epoch: 8 cost = 0.045
Epoch: 9 cost = 0.036
Epoch: 10 cost = 0.027
Training complete!
0.9787
```

Visualizing the training process with TensorBoard:

(Figure: accuracy)
