
Reading Restricted Boltzmann Machine (RBM) Code

Notes from reading rbm.py

  1. tf.constant / tf.matmul
import tensorflow as tf

# Create a constant op that produces a 1x2 matrix. This op is added
# as a node to the default graph.
#
# The constructor's return value represents the output of the constant op.
matrix1 = tf.constant([[3., 3.]])

# Create another constant op that produces a 2x1 matrix.
matrix2 = tf.constant([[2.],[2.]])

# Create a matmul op that takes 'matrix1' and 'matrix2' as inputs.
# The return value 'product' represents the result of the multiplication.
product = tf.matmul(matrix1, matrix2)
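# To actually compute the product, launch the graph in a session
# (the standard TF1 pattern; a sketch of the usual follow-up step):
with tf.Session() as sess:
    result = sess.run(product)
    print(result)
# Output: [[12.]]  since 3*2 + 3*2 = 12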
  2. Placeholders: tf.placeholder
x = tf.placeholder("float", [None, 784])

x is a placeholder for a 2-D tensor of floats; None means the first dimension can have any length (e.g. the batch size).
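A minimal sketch of feeding the placeholder at run time (the consuming op and the batch below are illustrative, not from the RBM code):

import tensorflow as tf
import numpy as np

x = tf.placeholder("float", [None, 784])
# Any op that consumes the placeholder; reduce_sum is just an example
y = tf.reduce_sum(x, axis=1)

with tf.Session() as sess:
    batch = np.zeros((2, 784), dtype=np.float32)  # a batch of 2 samples
    print(sess.run(y, feed_dict={x: batch}))      # [0. 0.]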

  3. Mutable tensors: tf.Variable
import tensorflow as tf

W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

W has shape [784, 10] and b has shape [10].
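Variables hold no value until initialized inside a session; a minimal sketch continuing the snippet above:

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)      # assigns the initial zeros to W and b
    print(sess.run(b))  # [0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]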

  4. tf.random_normal
    Outputs values drawn from a normal distribution.
tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)

Args:
shape: a list giving the shape of the output tensor, e.g. [2, 3] for a 2x3 tensor.
mean: a 0-D tensor of type dtype, or a Python value; the mean of the normal distribution.
stddev: a 0-D tensor of type dtype, or a Python value; the standard deviation of the normal distribution.
dtype: the data type of the output.
seed: a Python integer used to create a random seed for the distribution; see set_random_seed. With a fixed seed the generated sequence is reproducible across runs; set seed to None to get different random numbers each time.
name: a name for the operation (optional).
Returns: a tensor of the specified shape filled with random normal values.
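A short sketch of the seed behavior described above (shape and seed values are illustrative):

import tensorflow as tf

r = tf.random_normal([2, 3], mean=0.0, stddev=1.0, seed=1)
with tf.Session() as sess:
    print(sess.run(r))  # a 2x3 sample from N(0, 1)
    # A second run yields different values: the op-level seed fixes the
    # sequence across sessions, not each individual sess.run call
    print(sess.run(r))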

  5. astype
    astype converts the data type of an array (see the sketch below):
    int32 --> float64 works without loss
    float64 --> int32 truncates the fractional part
    string_ --> float64 works if every string in the array represents a number
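A NumPy sketch of the three conversions listed above:

import numpy as np

a = np.array([1, 2, 3], dtype=np.int32)
print(a.astype(np.float64))   # [1. 2. 3.]  lossless

b = np.array([1.7, -2.9])
print(b.astype(np.int32))     # [ 1 -2]  fractional part truncated toward zero

s = np.array(['1.5', '2.5'])
print(s.astype(np.float64))   # [1.5 2.5]  numeric strings parse fine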

  6. tf.matmul: matrix multiplication

    a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])
    # [[1 2 3]
    #  [4 5 6]]

    b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2])
    # [[ 7  8]
    #  [ 9 10]
    #  [11 12]]

    c = tf.matmul(a, b)
    # [[ 58  64]
    #  [139 154]]

    # 1*7+2*9+3*11=58
    # 1*8+2*10+3*12=64
    # 4*7+5*9+6*11=139
    # 4*8+5*10+6*12=154
  7. tf.nn.sigmoid: the logistic function
    Sigmoid: y = 1 / (1 + exp(-x))
input_data = tf.Variable( [[0, 10, -10],[1,2,3]] , dtype = tf.float32 )
output = tf.nn.sigmoid(input_data)
# Output: [[ 5.00000000e-01 9.99954581e-01 4.53978719e-05]
#          [ 7.31058598e-01 8.80797029e-01 9.52574134e-01]]
  8. tf.shape(input, name=None, out_type=tf.int32): returns the shape of a tensor as a 1-D integer tensor
import tensorflow as tf
import numpy as np

A = np.array([[[1, 1, 1],
               [2, 2, 2]],
              [[3, 3, 3],
               [4, 4, 4]],
              [[5, 5, 5],
               [6, 6, 6]]])

t = tf.shape(A)
with tf.Session() as sess:
    print(sess.run(t))

# Output:
# [3 2 3]
  9. tf.random_uniform
    tf.random_uniform((6, 6), minval=low, maxval=high, dtype=tf.float32) returns a 6x6 matrix whose entries are drawn uniformly from between low and high.
import tensorflow as tf
with tf.Session() as sess:
    print(sess.run(tf.random_uniform(
        (6,6), minval=-0.5,
        maxval=0.5, dtype=tf.float32)))
'''
[[ 0.47818196 -0.0463798  -0.48545432  0.48667777  0.1448754   0.31394303]
 [ 0.07446766  0.37638378  0.3001852  -0.1716789   0.03881919  0.14070213]
 [ 0.14747012 -0.14895666 -0.35274172 -0.19400203 -0.26068127  0.10212302]
 [ 0.29586768  0.16780066 -0.34365273 -0.3228333   0.42329776  0.35237122]
 [-0.34602797 -0.46733367  0.46615827 -0.20312655 -0.37987483  0.41316974]
 [ 0.39296162  0.32745218 -0.32554448 -0.14309132 -0.16133463  0.40627968]]
'''
  10. tf.sign (see the sketch below)
    y = sign(x)
    y = -1 if x < 0; y = 0 if x == 0 (NaN if x is NaN); y = 1 if x > 0.
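A small sketch of tf.sign, and how the RBM code at the end combines it with relu and random_uniform:

import tensorflow as tf

a = tf.constant([-3.0, 0.0, 5.0])
with tf.Session() as sess:
    print(sess.run(tf.sign(a)))  # [-1.  0.  1.]

In the RBM code, tf.nn.relu(tf.sign(p - tf.random_uniform(tf.shape(p)))) turns a probability tensor p into binary samples: the result is 1 where p exceeds a uniform random draw and 0 otherwise, i.e. a Bernoulli(p) sample.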
  11. tf.nn.relu: the ReLU activation, i.e. max(features, 0); every negative element is set to 0.
a = tf.constant([-1.0, 2.0])
with tf.Session() as sess:
    b = tf.nn.relu(a)
    print(sess.run(b))
# Output: [0. 2.]
  12. tf.reduce_mean
    Computes the mean of a tensor along a specified axis (one dimension of the tensor); mainly used to reduce dimensionality or to average a tensor (e.g. an image).
import tensorflow as tf
import numpy as np

# Computes the mean of elements across dimensions of a tensor. (deprecated arguments)

# tf.reduce_mean(
#     input_tensor,
#     axis=None,           # axis to reduce over; if None, the mean of all elements is computed
#     keepdims=None,       # if True, the output keeps the input tensor's rank;
#                          # if False, the reduced dimension is dropped
#     name=None,
#     reduction_indices=None,  # deprecated alias for axis
#     keep_dims=None           # deprecated alias for keepdims
# )

c = np.array([[3., 4], [5., 6], [6., 7]])

step = tf.reduce_mean(c, 1)
with tf.Session() as sess:
    print(sess.run(step))
# Output: [3.5 5.5 6.5]
  13. tf.train.AdamOptimizer: an optimizer implementing the Adam algorithm; minimize(loss) builds an op that updates the trainable variables to reduce the loss, as sketched below.

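A minimal sketch on an illustrative quadratic loss (the variable, loss, and learning rate here are made up for the example):

import tensorflow as tf

x = tf.Variable(5.0)
loss = tf.square(x - 2.0)  # minimized at x == 2
train_op = tf.train.AdamOptimizer(learning_rate=0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(500):
        sess.run(train_op)
    print(sess.run(x))  # close to 2.0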
  14. tf.nn.sigmoid_cross_entropy_with_logits
    https://blog.csdn.net/luoxuexiong/article/details/90109822
    Note that the example below actually demonstrates the related softmax variant, tf.nn.softmax_cross_entropy_with_logits.

import tensorflow as tf

labels = [[0.2,0.3,0.5],
          [0.1,0.6,0.3]]
logits = [[2,0.5,1],
          [0.1,1,3]]
logits_scaled = tf.nn.softmax(logits)

result1 = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
# Manual computation of the same cross entropy
result2 = -tf.reduce_sum(labels*tf.log(logits_scaled),1)
# Wrong usage: the op applies softmax internally, so passing
# already-softmaxed values computes a different (incorrect) quantity
result3 = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits_scaled)

with tf.Session() as sess:
    print(sess.run(result1))
    print(sess.run(result2))
    print(sess.run(result3))

# Output:
# [1.4143689 1.6642545]
# [1.4143689 1.6642545]
# [1.1718578 1.1757141]
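Since the RBM code below monitors reconstruction with the sigmoid variant, here is a matching sketch (labels and logits are illustrative):

import tensorflow as tf

labels = tf.constant([[0., 1.], [1., 0.]])
logits = tf.constant([[2.0, -1.0], [0.5, 0.5]])

# Element-wise loss max(x, 0) - x*z + log(1 + exp(-|x|)) for logit x and
# label z; each class is treated independently (no softmax across classes)
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
with tf.Session() as sess:
    print(sess.run(loss))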
# -*- coding: utf-8 -*-

import tensorflow as tf 
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data
import matplotlib.pyplot as plt

class RBM(object):
    def __init__(self, m, n, lr=0.0001):
        """
        m: Number of neurons in the visible layer
        n: Number of neurons in the hidden layer
        """
        self._m = m
        self._n = n
        self.lr = lr
        # Create the computational graph
        # Weights and biases
        self._W = tf.Variable(tf.random_normal(shape=(self._m, self._n)))
        # Bias for the hidden layer
        self._c = tf.Variable(np.zeros(self._n).astype(np.float32))
        # Bias for the visible layer
        self._b = tf.Variable(np.zeros(self._m).astype(np.float32))
        # Placeholder for inputs
        self._X = tf.placeholder('float', [None, self._m])
        # Forward pass: hidden activation probabilities, then binary samples
        # via the relu(sign(p - uniform)) trick, i.e. h ~ Bernoulli(p)
        _h = tf.nn.sigmoid(tf.matmul(self._X, self._W) + self._c)
        self._h = tf.nn.relu(tf.sign(_h - tf.random_uniform(tf.shape(_h))))
        # Backward pass: reconstruct the visible layer and sample it the same way
        _v = tf.nn.sigmoid(tf.matmul(self._h, tf.transpose(self._W)) + self._b)
        self.V = tf.nn.relu(tf.sign(_v - tf.random_uniform(tf.shape(_v))))
        # Objective: difference between the mean free energy of the data
        # and of the reconstruction (contrastive-divergence style)
        objective = tf.reduce_mean(self.free_energy(self._X)) - tf.reduce_mean(self.free_energy(self.V))
        self._train_op = tf.train.AdamOptimizer(self.lr).minimize(objective)
        # Cross-entropy cost, used only for monitoring
        reconstructed_input = self.one_pass(self._X)
        self.cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
                                   labels=self._X, logits=reconstructed_input))
            
        
    def fit(self, X, epochs=1, batch_size=3):
        N, D = X.shape
        num_batches = N // batch_size
        obj = []
        for i in range(epochs):
            for j in range(num_batches):
                batch = X[j * batch_size : (j + 1) * batch_size]
                _, ob = self.session.run([self._train_op, self.cost],
                                         feed_dict={self._X: batch})
                if j % 10 == 0:
                    print("Epoch:", i, "step:", j, "loss:", ob)
                obj.append(ob)
        return obj
    
    def set_session(self, session):
        self.session = session
    
    def free_energy(self, V):
        # Free energy of visible configuration(s) V, computed per sample:
        # F(v) = -b^T v - sum_j softplus(W_j^T v + c_j)
        b = tf.reshape(self._b, (self._m, 1))
        term_1 = -tf.matmul(V, b)
        term_1 = tf.reshape(term_1, (-1,))
        # Sum over hidden units for each sample (axis=1)
        term_2 = -tf.reduce_sum(tf.nn.softplus(tf.matmul(V, self._W) + self._c), axis=1)
        return term_1 + term_2
    
    def one_pass(self, X):
        # One deterministic visible -> hidden -> visible pass;
        # returns visible-layer logits (pre-sigmoid)
        h = tf.nn.sigmoid(tf.matmul(X, self._W) + self._c)
        return tf.matmul(h, tf.transpose(self._W)) + self._b

    def reconstruct(self, X):
        # Reconstruction probabilities for the visible layer
        x = tf.nn.sigmoid(self.one_pass(X))
        return self.session.run(x, feed_dict={self._X: X})

    def rbm_output(self, X):
        # Hidden-layer activation probabilities for input X
        x = tf.nn.sigmoid(tf.matmul(X, self._W) + self._c)
        return self.session.run(x, feed_dict={self._X: X})
    
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
trX, trY, teX, teY = mnist.train.images, mnist.train.labels, mnist.test.images, mnist.test.labels


Xtrain = trX.astype(np.float32)
Xtest = teX.astype(np.float32)
_, m = Xtrain.shape
rbm = RBM(m, 50)
#Initialize all variables
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    rbm.set_session(sess)
    err = rbm.fit(Xtrain)
    out = rbm.reconstruct(Xtest[0:100])

plt.plot(list(range(len(err))), err)

row, col = 2, 10
idx = np.random.randint(0, 100, row * col // 2)
f, axarr = plt.subplots(row, col, sharex=True, sharey=True, figsize=(20,4))
for fig, row in zip([Xtest, out], axarr):
    for i, ax in zip(idx, row):
        ax.imshow(fig[i].reshape((28,28)), cmap='Greys_r')
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)
   