
TensorFlow: Loss Functions and Cross-Entropy

Loss function: the gap between the predicted value and the known answer.

NN optimization goal: minimize the loss (MSE, custom, CE).

TensorFlow implementation of mean squared error: `loss_mse = tf.reduce_mean(tf.square(y_ - y))`
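As a quick sanity check of the MSE formula, here is a toy example (not from the original; the numbers are made up for illustration):

```python
import numpy as np

# MSE is the mean of the squared differences between truth and prediction.
y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.0, 2.5, 2.0])
loss_mse = np.mean(np.square(y_true - y_pred))
print(loss_mse)  # (0^2 + 0.5^2 + 1^2) / 3 ≈ 0.4167
```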

Example: predicting daily yogurt sales y, where x1 and x2 are factors that influence daily sales.

Before modeling, daily values of x1, x2, and sales y should be collected in advance.

Fabricated dataset (x, y): y_ = x1 + x2, with noise in [-0.05, +0.05).

```python
import tensorflow as tf
import numpy as np

SEED = 2345
rdm = np.random.RandomState(SEED)  # pass the seed for reproducibility
x = rdm.rand(32, 2)  # 32x2 matrix of random numbers in [0, 1)
# noise: [0, 1)/10 = [0, 0.1); [0, 0.1) - 0.05 = [-0.05, 0.05)
y_ = [[x1 + x2 + (rdm.rand() / 10.0 - 0.05)] for (x1, x2) in x]
x = tf.cast(x, dtype=tf.float32)
# randomly initialize w1 with shape [2, 1]
w1 = tf.Variable(tf.random.normal([2, 1], stddev=1, seed=1))

epochs = 15000
lr = 0.002
for epoch in range(epochs):
    with tf.GradientTape() as tape:
        y = tf.matmul(x, w1)
        loss_mse = tf.reduce_mean(tf.square(y_ - y))
    grads = tape.gradient(loss_mse, w1)
    w1.assign_sub(lr * grads)  # update the parameters
```

With mean squared error, over-prediction and under-prediction are penalized equally.

But over-prediction wastes cost, while under-prediction loses profit, and profit does not equal cost.

Custom loss function: loss(y_, y) = Σₙ f(y_, y), where f penalizes the two cases differently: f(y_, y) = PROFIT · (y_ − y) when y < y_ (under-prediction loses profit), and f(y_, y) = COST · (y − y_) when y ≥ y_ (over-prediction wastes cost).
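To make the piecewise loss concrete, here is a toy numeric check (the prediction values are made up; COST and PROFIT match the training code below):

```python
import numpy as np

COST, PROFIT = 1, 99
y_true = np.array([10.0, 10.0])
y_pred = np.array([12.0, 8.0])  # over-predict by 2, under-predict by 2
# Over-prediction costs COST per unit; under-prediction costs PROFIT per unit.
loss = np.sum(np.where(y_pred > y_true,
                       (y_pred - y_true) * COST,
                       (y_true - y_pred) * PROFIT))
print(loss)  # 2*1 + 2*99 = 200
```

The same 2-unit error is penalized 99 times harder when it loses profit, which is why training pushes the model toward over-predicting.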

```python
import tensorflow as tf
import numpy as np

# Custom loss function:
# yogurt cost is 1 yuan per unit, profit is 99 yuan per unit.
# Cost is low and profit is high, so we prefer to over-predict;
# the learned coefficients end up greater than 1, biasing predictions upward.
SEED = 23455
COST = 1
PROFIT = 99

rdm = np.random.RandomState(SEED)
x = rdm.rand(32, 2)
# noise: [0, 1)/10 = [0, 0.1); [0, 0.1) - 0.05 = [-0.05, 0.05)
y_ = [[x1 + x2 + (rdm.rand() / 10.0 - 0.05)] for (x1, x2) in x]
x = tf.cast(x, dtype=tf.float32)
w1 = tf.Variable(tf.random.normal([2, 1], stddev=1, seed=1))

epochs = 10000
lr = 0.002
for epoch in range(epochs):
    with tf.GradientTape() as tape:
        y = tf.matmul(x, w1)
        # over-prediction (y > y_) costs COST per unit; under-prediction costs PROFIT per unit
        loss = tf.reduce_sum(tf.where(tf.greater(y, y_), (y - y_) * COST, (y_ - y) * PROFIT))
    grads = tape.gradient(loss, w1)
    w1.assign_sub(lr * grads)
    if epoch % 500 == 0:
        print("After %d training steps, w1 is" % epoch)
        print(w1.numpy(), "\n")
print("Final w1 is:", w1.numpy())
```

Cross-Entropy

Cross-entropy measures the distance between two probability distributions.

Example: binary classification with the known answer y_ = (1, 0) and two predictions y1 = (0.6, 0.4) and y2 = (0.8, 0.2). Which prediction is closer to the standard answer?

Implementation: `tf.losses.categorical_crossentropy(y_, y)`

```python
import tensorflow as tf

loss_ce1 = tf.losses.categorical_crossentropy([1, 0], [0.6, 0.4])
loss_ce2 = tf.losses.categorical_crossentropy([1, 0], [0.8, 0.2])
print("loss_ce1:", loss_ce1)
print("loss_ce2:", loss_ce2)
```
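The two losses can also be verified by hand from the cross-entropy formula H(y_, y) = -Σ y_ᵢ · ln(yᵢ); with a one-hot label, only the true class contributes (a sketch, not from the original):

```python
import math

# Cross-entropy against a one-hot label: only the ln of the true-class probability survives.
loss_ce1 = -(1 * math.log(0.6) + 0 * math.log(0.4))  # ≈ 0.5108
loss_ce2 = -(1 * math.log(0.8) + 0 * math.log(0.2))  # ≈ 0.2231
print(loss_ce1, loss_ce2)
# loss_ce2 < loss_ce1, so y2 = (0.8, 0.2) is closer to the true label (1, 0).
```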

Combining softmax with cross-entropy

`tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y)`

Example:

```python
# Combining softmax with the cross-entropy loss
import tensorflow as tf
import numpy as np

y_ = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0], [0, 1, 0]])
y = np.array([[12, 3, 2], [3, 10, 1], [1, 2, 5], [4, 6.5, 1.2], [3, 6, 1]])
# step by step: softmax first, then cross-entropy
y_pro = tf.nn.softmax(y)
loss_ce1 = tf.losses.categorical_crossentropy(y_, y_pro)
# combined: one call applies softmax and cross-entropy together
loss_ce2 = tf.nn.softmax_cross_entropy_with_logits(y_, y)
print('Step-by-step result:\n', loss_ce1)
print('Combined result:\n', loss_ce2)
# the two results are identical
```
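To see what the combined call computes, the first sample above (logits [12, 3, 2] with one-hot label [1, 0, 0]) can be reproduced by hand in NumPy (a sketch for illustration):

```python
import numpy as np

logits = np.array([12.0, 3.0, 2.0])
label = np.array([1.0, 0.0, 0.0])
probs = np.exp(logits) / np.sum(np.exp(logits))  # softmax: normalize exponentiated logits
loss = -np.sum(label * np.log(probs))            # cross-entropy against the one-hot label
print(loss)
```

Because the logit 12 dominates, the true-class probability is nearly 1 and the loss is close to 0 for this sample.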
