
[deeplearning.ai] Course 2: Improving Deep Neural Networks — Weight Initialization

I. Initialization

Sensible weight initialization helps prevent exploding and vanishing gradients. For the ReLU activation, the weights of layer l can be initialized as

W[l] = np.random.randn(shape) * np.sqrt(2 / n[l-1])

also known as "He initialization". For the tanh activation, the weights can be initialized as

W[l] = np.random.randn(shape) * np.sqrt(1 / n[l-1])

also known as "Xavier initialization". Another option is the following formula:

W[l] = np.random.randn(shape) * np.sqrt(2 / (n[l-1] + n[l]))

In these formulas, l is the index of the current layer, l-1 is the previous layer, and n[l-1] denotes the number of units in layer l-1.
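As a quick sketch of the three scalings above (the layer sizes here are hypothetical, not part of the assignment), each one is just a different standard deviation multiplied onto a standard-normal draw:

```python
import numpy as np

np.random.seed(0)
n_prev, n_curr = 4, 3  # hypothetical layer sizes n[l-1] and n[l]

# He initialization (for ReLU): variance 2 / n[l-1]
W_he = np.random.randn(n_curr, n_prev) * np.sqrt(2.0 / n_prev)

# Xavier initialization (for tanh): variance 1 / n[l-1]
W_xavier = np.random.randn(n_curr, n_prev) * np.sqrt(1.0 / n_prev)

# Alternative scaling: variance 2 / (n[l-1] + n[l])
W_alt = np.random.randn(n_curr, n_prev) * np.sqrt(2.0 / (n_prev + n_curr))

print(W_he.shape)  # one row per unit in layer l: (3, 4)
```

All three produce a weight matrix of shape (n[l], n[l-1]); only the variance of the entries differs.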


II. Programming Assignment

We have the following 2-D dataset:


The task is to train a network that correctly classifies the red dots and the blue dots. Import the required packages; init_utils.py can be downloaded here.

import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec

%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0)  # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()



1. Build the neural network model

def model(X, Y, learning_rate=0.01, num_iterations=15000, print_cost=True, initialization="he"):
    """
    Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.

    Arguments:
    X -- input data, of shape (2, number of examples)
    Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
    learning_rate -- learning rate for gradient descent
    num_iterations -- number of iterations to run gradient descent
    print_cost -- if True, print the cost every 1000 iterations
    initialization -- flag to choose which initialization to use ("zeros", "random" or "he")

    Returns:
    parameters -- parameters learnt by the model
    """
    grads = {}
    costs = []       # to keep track of the loss
    m = X.shape[1]   # number of examples
    layers_dims = [X.shape[0], 10, 5, 1]

    # Initialize parameters dictionary.
    if initialization == "zeros":
        parameters = initialize_parameters_zeros(layers_dims)
    elif initialization == "random":
        parameters = initialize_parameters_random(layers_dims)
    elif initialization == "he":
        parameters = initialize_parameters_he(layers_dims)

    # Loop (gradient descent)
    for i in range(0, num_iterations):
        # Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
        a3, cache = forward_propagation(X, parameters)
        # Loss
        cost = compute_loss(a3, Y)
        # Backward propagation.
        grads = backward_propagation(X, Y, cache)
        # Update parameters.
        parameters = update_parameters(parameters, grads, learning_rate)
        # Print the loss every 1000 iterations
        if print_cost and i % 1000 == 0:
            print("Cost after iteration {}: {}".format(i, cost))
            costs.append(cost)

    # plot the loss
    plt.plot(costs)
    plt.ylabel('cost')
    plt.xlabel('iterations (per thousands)')
    plt.title("Learning rate = " + str(learning_rate))
    plt.show()

    return parameters



2. Zero initialization

def initialize_parameters_zeros(layers_dims):
    """
    Arguments:
    layers_dims -- python array (list) containing the size of each layer.

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
        W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
        b1 -- bias vector of shape (layers_dims[1], 1)
        ...
        WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
        bL -- bias vector of shape (layers_dims[L], 1)
    """
    parameters = {}
    L = len(layers_dims)  # number of layers in the network
    for l in range(1, L):
        parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l-1]))
        parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
    return parameters




Train the network:

parameters = model(train_X, train_Y, initialization="zeros")
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)



The cost curve plotted after training:


Both the training accuracy and the test accuracy are 0.5. Output the predictions on the test set:


Plot the decision boundary:


The model predicts 0 for every example in the test set. Initializing the weights to zero fails to break symmetry: every neuron ends up learning exactly the same thing.
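The symmetry problem can be seen directly in a tiny hand-made example (the numbers below are made up for illustration): with all-zero weights, every hidden unit computes the same pre-activation, and because the forward pass is identical for every unit, the upstream gradient is too, so every row of the weight gradient is identical and the units never differentiate.

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # 2 features, 2 examples (made-up data)
W1 = np.zeros((3, 2))        # 3 hidden units, all-zero weights
b1 = np.zeros((3, 1))

Z1 = W1 @ X + b1             # every row of Z1 is identical (all zeros)

# Because the forward pass was identical for every unit, the gradient
# flowing back to each unit is also identical (shown here as equal rows).
dZ1 = np.ones((3, 2))
dW1 = dZ1 @ X.T / X.shape[1] # every row of dW1 is identical too

print(np.allclose(Z1, Z1[0]))    # the hidden units are indistinguishable
print(np.allclose(dW1, dW1[0]))  # and gradient descent keeps them that way
```

Since each update adds the same quantity to each row, the rows stay equal forever, which is exactly why the zero-initialized model collapses to a constant predictor.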


3. Random initialization with large values

def initialize_parameters_random(layers_dims):
    """
    Arguments:
    layers_dims -- python array (list) containing the size of each layer.

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
        W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
        b1 -- bias vector of shape (layers_dims[1], 1)
        ...
        WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
        bL -- bias vector of shape (layers_dims[L], 1)
    """
    np.random.seed(3)  # This seed makes sure your "random" numbers will match ours
    parameters = {}
    L = len(layers_dims)  # integer representing the number of layers
    for l in range(1, L):
        parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * 10
        parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
    return parameters



Training this model gives the following cost curve:


The training accuracy is 0.83 and the test accuracy is 0.86. The decision boundary looks like this:


The cost starts out very large because the large initial weights drive the output (through the sigmoid activation) very close to 0 or 1 for some examples, and a confidently wrong prediction incurs a huge loss. Poor initialization can cause exploding or vanishing gradients and also slows down training.
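A quick numerical sketch of that saturation effect (the inputs below are illustrative, not taken from the assignment): multiplying the weights by 10 scales the pre-activations by roughly 10 as well, pushing the sigmoid outputs toward 0 or 1, where the cross-entropy loss of a wrong prediction explodes.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z_moderate = np.array([-0.5, 0.5])  # illustrative pre-activations
z_large = z_moderate * 10           # what a *10 weight scale does to them

print(sigmoid(z_moderate))  # well inside the sensitive range of sigmoid
print(sigmoid(z_large))     # nearly saturated at 0 and 1

# Cross-entropy for a confidently wrong prediction: true label 1,
# but the saturated output is ~0.007, so -log(y_hat) is huge.
print(-np.log(sigmoid(-5.0)))
```

This is also why the cost curve for the large-random run starts from such a high value before gradient descent pulls the weights back into a reasonable range.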


4. He initialization

def initialize_parameters_he(layers_dims):
    """
    Arguments:
    layers_dims -- python array (list) containing the size of each layer.

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
        W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
        b1 -- bias vector of shape (layers_dims[1], 1)
        ...
        WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
        bL -- bias vector of shape (layers_dims[L], 1)
    """
    np.random.seed(3)
    parameters = {}
    L = len(layers_dims) - 1  # integer representing the number of layers
    for l in range(1, L + 1):
        parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * np.sqrt(2 / layers_dims[l-1])
        parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
    return parameters



The cost curve:


The training accuracy is 0.9933333 and the test accuracy is 0.96. The decision boundary:

A sensible weight initialization clearly improves the network's performance.
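As a final sanity check (not part of the assignment; the layer size 500 is arbitrary), we can verify that He-initialized weights really do have the variance the formula promises, 2 / n[l-1]:

```python
import numpy as np

np.random.seed(1)
n_prev = 500
# He-scaled draws: standard normal times sqrt(2 / n_prev)
W = np.random.randn(1000, n_prev) * np.sqrt(2.0 / n_prev)

print(round(W.var(), 4))  # close to 2 / 500 = 0.004
```

Keeping this variance tied to the fan-in is what keeps the scale of the activations roughly constant from layer to layer, which is the whole point of the method.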
