I have two tensors in TensorFlow with the following two shapes:
print(tf.valid_dataset.get_shape())
print(weights1.get_shape())
with these results:
(10000, 784)
(784, 1024)
However, if I try to multiply them like this:
tf.matmul(tf_valid_dataset, weights1)
I get:
Tensor("Variable:0", shape=(784, 1024), dtype=float32_ref) must be from the same graph as Tensor("Const:0", shape=(10000, 784), dtype=float32).
Since I line them up along the dimension where both have size 784, the multiplication seems correct to me (a quick shape check below appears to confirm it).
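For what it's worth, here is a minimal standalone sketch, with dummy zero data standing in for my actual dataset but the same shapes as in the prints above; it multiplies the two tensors without complaint when both are created in the same graph:

import numpy as np
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    # Dummy stand-in for my validation set, same (10000, 784) shape as above.
    valid = tf.constant(np.zeros((10000, 784), dtype=np.float32))
    # Same (784, 1024) shape as weights1.
    w = tf.Variable(tf.truncated_normal([784, 1024]))
    product = tf.matmul(valid, w)  # (10000, 784) x (784, 1024) -> (10000, 1024)

with tf.Session(graph=g) as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(product).shape)  # prints (10000, 1024)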
Any idea what could be wrong?
Edit:
The code I have before the print statements is this:
num_hidden_nodes=1024
batch_size = 128
learning_rate = 0.5
graph = tf.Graph()
with graph.as_default():
    tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size*image_size))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
    tf.valid_dataset = tf.constant(valid_dataset)
    tf.test_dataset = tf.constant(test_dataset)
    weights1 = tf.Variable(tf.truncated_normal([image_size * image_size, num_hidden_nodes]))
    biases1 = tf.Variable(tf.zeros([num_hidden_nodes]))
    weights2 = tf.Variable(tf.truncated_normal([num_hidden_nodes, num_labels]))
    biases2 = tf.Variable(tf.zeros([num_labels]))
    weights = [weights1, biases1, weights2, biases2]
    lay1_train = tf.nn.relu(tf.matmul(tf_train_dataset, weights1) + biases1)
    logits = tf.matmul(lay1_train, weights2) + biases2
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=tf_train_labels))
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)