I recently came across a nice GitHub repository, so I am writing this up as a note.
In 2003, NNLM applied neural networks to the language-modeling problem for the first time. Since then, deep learning has taken the stage in NLP and has been steadily pushing traditional models aside.
A language model can be described as taking the previous n-1 words as input and predicting the n-th word, i.e., deciding which word in the n-th position makes the sentence read like a natural one, with correct word order and an appropriate choice of words.
Below I will walk through NNLM together with the code.
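In the notation of the code below, the forward pass is output = tanh(d + X·H)·U + b, where X is the concatenation of the one-hot vectors of the n-1 context words (here n_step = 2), H and d are the hidden layer's weights and bias, and U and b are the output layer's weights and bias. Compared with the original NNLM paper, this simplified version feeds one-hot vectors in directly instead of learned word embeddings and drops the direct input-to-output connection.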
import tensorflow as tf
import numpy as np
print(tf.__version__)
tf.reset_default_graph()  # reset the default graph
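# Note: this script uses the TensorFlow 1.x graph API (placeholders, Session, reset_default_graph).
# Assumption: a TF 1.x environment. On TF 2.x you would instead need
# `import tensorflow.compat.v1 as tf` together with `tf.disable_eager_execution()`.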
sentences = [ "i like dog", "i love coffee", "i hate milk"] # 数据集
word_list = " ".join(sentences).split()
word_list = list(set(word_list))
word_dict = {w: i for i, w in enumerate(word_list)}
number_dict = {i: w for i, w in enumerate(word_list)}
n_class = len(word_dict) # 字典中词的种类
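# With this toy corpus the vocabulary is {i, like, love, hate, dog, coffee, milk}, so n_class == 7.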
# NNLM Parameter
n_step = 2 # number of steps ['i like', 'i love', 'i hate']
n_hidden = 2 # number of hidden units
# The input is the first two words of each sentence and the target is the last word, both one-hot encoded
def make_batch(sentences):
    input_batch = []
    target_batch = []
    for sen in sentences:
        word = sen.split()
        input = [word_dict[n] for n in word[:-1]]
        target = word_dict[word[-1]]
        input_batch.append(np.eye(n_class)[input])
        target_batch.append(np.eye(n_class)[target])
    return input_batch, target_batch
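# For example, for "i like dog" the input is the two one-hot rows for 'i' and 'like'
# (shape [n_step, n_class] = [2, 7]) and the target is the one-hot vector for 'dog'.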
# Model definition
X = tf.placeholder(tf.float32, [None, n_step, n_class]) # [batch_size, number of steps, number of Vocabulary]
Y = tf.placeholder(tf.float32, [None, n_class])
input = tf.reshape(X, shape=[-1, n_step * n_class]) # [batch_size, n_step * n_class]
H = tf.Variable(tf.random_normal([n_step * n_class, n_hidden]))
d = tf.Variable(tf.random_normal([n_hidden]))
U = tf.Variable(tf.random_normal([n_hidden, n_class]))
b = tf.Variable(tf.random_normal([n_class]))
tanh = tf.nn.tanh(d + tf.matmul(input, H)) # [batch_size, n_hidden]
model = tf.matmul(tanh, U) + b # [batch_size, n_class]
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=model, labels=Y))
optimizer = tf.train.AdamOptimizer(0.001).minimize(cost)
prediction = tf.argmax(model, 1)  # index of the largest logit in each row
# Training
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
input_batch, target_batch = make_batch(sentences)  # build the training data
print("input_batch ",input_batch,len(input_batch),"\n","target_batch", target_batch,len(target_batch))
for epoch in range(5000):
    _, loss = sess.run([optimizer, cost], feed_dict={X: input_batch, Y: target_batch})
    if (epoch + 1) % 1000 == 0:
        print('Epoch:', '%04d' % (epoch + 1), 'cost =', '{:.6f}'.format(loss))

predict = sess.run([prediction], feed_dict={X: input_batch})
# Test
input = [sen.split()[:2] for sen in sentences]
print(input, '->', [number_dict[n] for n in predict[0]])
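# After 5000 epochs on this tiny dataset the model memorizes the data, so the expected output is
# [['i', 'like'], ['i', 'love'], ['i', 'hate']] -> ['dog', 'coffee', 'milk']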
We can see that the final predictions match the target labels exactly.
Reference GitHub repository: https://github.com/graykode/nlp-tutorial
My annotated version of the code: https://github.com/Clayygou/NLP/blob/master/NNLM.ipynb