
Logistic Regression with a Neural Network Mindset: Python Code

1. Packages

  • numpy is the fundamental package for scientific computing with Python.
  • h5py is a common package for interacting with datasets stored in H5 files.
  • matplotlib is a well-known Python plotting library.
  • PIL and scipy are used here to test the model.
    import numpy as np
    import matplotlib.pyplot as plt
    import h5py
    import scipy
    from PIL import Image
    from scipy import ndimage
    from lr_utils import load_dataset
    %matplotlib inline

2. Problem Overview

A dataset ("data.h5") contains: a training set of images labeled cat (y = 1) or non-cat (y = 0), and a test set of images labeled cat or non-cat. Each image has shape (num_px, num_px, 3), i.e. three RGB channels, and every image is square (height = num_px, width = num_px). The goal is to build a simple image-recognition algorithm that correctly classifies pictures as cat or non-cat.

    # Loading the data (cat/non-cat)
    train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()

    # Example of a picture
    index = 25
    plt.imshow(train_set_x_orig[index])
    print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")

The training-set image at index = 25:

    ### START CODE HERE ### (≈ 3 lines of code)
    m_train = train_set_x_orig.shape[0]
    m_test = test_set_x_orig.shape[0]
    num_px = train_set_x_orig.shape[1]
    ### END CODE HERE ###
    print ("Number of training examples: m_train = " + str(m_train))
    print ("Number of testing examples: m_test = " + str(m_test))
    print ("Height/Width of each image: num_px = " + str(num_px))
    print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
    print ("train_set_x shape: " + str(train_set_x_orig.shape))
    print ("train_set_y shape: " + str(train_set_y.shape))
    print ("test_set_x shape: " + str(test_set_x_orig.shape))
    print ("test_set_y shape: " + str(test_set_y.shape))

Output:

    # Reshape the training and test examples
    ### START CODE HERE ### (≈ 2 lines of code)
    train_set_x_flatten = train_set_x_orig.reshape(m_train, -1).T
    test_set_x_flatten = test_set_x_orig.reshape(m_test, -1).T
    ### END CODE HERE ###
    print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
    print ("train_set_y shape: " + str(train_set_y.shape))
    print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
    print ("test_set_y shape: " + str(test_set_y.shape))
    print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))

Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num_px * num_px * 3, 1).
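As a quick illustration of the X.reshape(X.shape[0], -1).T flattening trick used above, here is a toy example (the array is made up purely for illustration):

    import numpy as np

    # two 2x2 "RGB images": shape (2, 2, 2, 3)
    X = np.arange(24).reshape(2, 2, 2, 3)
    # flatten each image into one column: shape (12, 2)
    X_flatten = X.reshape(X.shape[0], -1).T
    print(X_flatten.shape)   # (12, 2) -- one column per example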

Output:

Standardize the dataset:

    train_set_x = train_set_x_flatten / 255.
    test_set_x = test_set_x_flatten / 255.
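Pixel channel values lie in 0-255, so dividing by 255 rescales every feature into [0, 1]; for image data this simple rescaling is a common normalization step before gradient descent. A quick check (assuming the arrays above have been loaded):

    print(train_set_x.min(), train_set_x.max())   # both should fall inside [0.0, 1.0]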

3. General Architecture of the Learning Algorithm

Logistic regression is really just a very simple neural network: a single unit that computes a linear function of the input followed by a sigmoid activation.
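Concretely, for one example $x^{(i)}$ the model computes the standard logistic-regression quantities (these match the code in the next section):

$$z^{(i)} = w^T x^{(i)} + b$$

$$\hat{y}^{(i)} = a^{(i)} = \sigma(z^{(i)}) = \frac{1}{1 + e^{-z^{(i)}}}$$

$$\mathcal{L}(a^{(i)}, y^{(i)}) = -\,y^{(i)} \log(a^{(i)}) - (1 - y^{(i)}) \log(1 - a^{(i)})$$

The cost J is the average of this loss over all m training examples.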

4. Building the Parts of the Algorithm

The main steps for building a neural network are:

1. Define the model structure (such as the number of input features)

2. Initialize the model's parameters

3. Loop:

  • Calculate the current loss (forward propagation)
  • Calculate the current gradients (backward propagation)
  • Update the parameters (gradient descent)

Steps 1-3 are usually built as separate functions and then integrated into a single function called model(); a skeleton of that integration is sketched below.
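As a preview of that integration, here is a minimal skeleton (initialize_with_zeros and the optimization loop are defined in the sections below; predict is a helper not shown in this excerpt, and the hyperparameter defaults are illustrative):

    def model(X_train, Y_train, X_test, Y_test, num_iterations=2000, learning_rate=0.005):
        # Steps 1-2: define the structure and initialize the parameters
        w, b = initialize_with_zeros(X_train.shape[0])
        # Step 3: loop over forward/backward propagation and gradient descent
        params, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate)
        w, b = params["w"], params["b"]
        # Use the learned parameters to label both sets
        Y_prediction_train = predict(w, b, X_train)
        Y_prediction_test = predict(w, b, X_test)
        return {"costs": costs, "w": w, "b": b,
                "Y_prediction_train": Y_prediction_train,
                "Y_prediction_test": Y_prediction_test}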

Define the sigmoid function:

    # GRADED FUNCTION: sigmoid

    def sigmoid(z):
        """
        Compute the sigmoid of z

        Arguments:
        z -- A scalar or numpy array of any size.

        Return:
        s -- sigmoid(z)
        """

        ### START CODE HERE ### (≈ 1 line of code)
        s = 1.0 / (1.0 + np.exp(-z))
        ### END CODE HERE ###

        return s
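A quick sanity check (sigmoid(0) = 0.5, and sigmoid approaches 1 for large positive inputs):

    print("sigmoid([0, 2]) = " + str(sigmoid(np.array([0, 2]))))
    # expected: approximately [0.5  0.88079708]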

Initialize the parameters:

    # GRADED FUNCTION: initialize_with_zeros

    def initialize_with_zeros(dim):
        """
        This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.

        Argument:
        dim -- size of the w vector we want (or number of parameters in this case)

        Returns:
        w -- initialized vector of shape (dim, 1)
        b -- initialized scalar (corresponds to the bias)
        """

        ### START CODE HERE ### (≈ 1 line of code)
        w = np.zeros((dim, 1))   # np.zeros takes the shape as a tuple; np.zeros(dim, 1) raises an error
        b = 0
        ### END CODE HERE ###

        assert(w.shape == (dim, 1))
        assert(isinstance(b, float) or isinstance(b, int))

        return w, b
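A usage check (w should be a (2, 1) column of zeros and b the scalar 0):

    w, b = initialize_with_zeros(2)
    print("w = " + str(w))   # [[0.] [0.]]
    print("b = " + str(b))   # 0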

Forward and backward propagation
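With the m training examples stacked column-wise in X, forward propagation computes the activations and the cost, and backward propagation computes the gradients. These are the standard results for the logistic loss, and they are exactly what the code below implements:

$$A = \sigma(w^T X + b)$$

$$J = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log(a^{(i)}) + (1 - y^{(i)}) \log(1 - a^{(i)}) \right]$$

$$\frac{\partial J}{\partial w} = \frac{1}{m} X (A - Y)^T \qquad \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^{m} (a^{(i)} - y^{(i)})$$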

    # GRADED FUNCTION: propagate

    def propagate(w, b, X, Y):
        """
        Implement the cost function and its gradient for the propagation explained above

        Arguments:
        w -- weights, a numpy array of size (num_px * num_px * 3, 1)
        b -- bias, a scalar
        X -- data of size (num_px * num_px * 3, number of examples)
        Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)

        Return:
        cost -- negative log-likelihood cost for logistic regression
        dw -- gradient of the loss with respect to w, thus same shape as w
        db -- gradient of the loss with respect to b, thus same shape as b

        Tips:
        - Write your code step by step for the propagation. np.log(), np.dot()
        """

        m = X.shape[1]

        # FORWARD PROPAGATION (FROM X TO COST)
        ### START CODE HERE ### (≈ 2 lines of code)
        A = sigmoid(np.dot(w.T, X) + b)                                      # compute activation
        cost = -(1.0 / m) * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) # compute cost
        ### END CODE HERE ###

        # BACKWARD PROPAGATION (TO FIND GRAD)
        ### START CODE HERE ### (≈ 2 lines of code)
        dw = (1.0 / m) * np.dot(X, (A - Y).T)
        db = (1.0 / m) * np.sum(A - Y)
        ### END CODE HERE ###

        assert(dw.shape == w.shape)
        assert(db.dtype == float)
        cost = np.squeeze(cost)
        assert(cost.shape == ())

        grads = {"dw": dw,
                 "db": db}

        return grads, cost
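A small smoke test for propagate (the numbers are toy values chosen only for illustration):

    w, b = np.array([[1.], [2.]]), 2.
    X = np.array([[1., 2., -1.], [3., 4., -3.2]])
    Y = np.array([[1, 0, 1]])
    grads, cost = propagate(w, b, X, Y)
    print("dw shape: " + str(grads["dw"].shape))   # (2, 1), same shape as w
    print("cost = " + str(cost))                   # a single scalar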

Optimize the parameters

Write down the optimization function. The goal is to learn w and b by minimizing the cost function J: for a parameter θ, gradient descent repeats the update θ = θ - α·dθ, where α is the learning rate. A sketch follows.
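Here is a minimal sketch of that optimization loop, built directly on the propagate() above (the argument names num_iterations and learning_rate and the every-100-iterations cost logging are conventional choices, not fixed by anything earlier in this post):

    def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost=False):
        costs = []
        for i in range(num_iterations):
            grads, cost = propagate(w, b, X, Y)    # forward + backward pass
            w = w - learning_rate * grads["dw"]    # gradient-descent updates
            b = b - learning_rate * grads["db"]
            if i % 100 == 0:
                costs.append(cost)
                if print_cost:
                    print("Cost after iteration %i: %f" % (i, cost))
        params = {"w": w, "b": b}
        return params, grads, costs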
