```python
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset

%matplotlib inline
```
The dataset ("data.h5") contains: a training set of images labeled cat (y = 1) or non-cat (y = 0); a test set of images labeled cat or non-cat; each image has shape (num_px, num_px, 3), i.e. every image is square (height = num_px, width = num_px) with 3 RGB channels. The goal is to build a simple image-recognition algorithm that correctly classifies pictures as cat or non-cat.
```python
# Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()

# Example of a picture
index = 25
plt.imshow(train_set_x_orig[index])
print("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")
```
The training-set image at index = 25:
```python
### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_x_orig.shape[0]
m_test = test_set_x_orig.shape[0]
num_px = train_set_x_orig.shape[1]
### END CODE HERE ###

print("Number of training examples: m_train = " + str(m_train))
print("Number of testing examples: m_test = " + str(m_test))
print("Height/Width of each image: num_px = " + str(num_px))
print("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print("train_set_x shape: " + str(train_set_x_orig.shape))
print("train_set_y shape: " + str(train_set_y.shape))
print("test_set_x shape: " + str(test_set_x_orig.shape))
print("test_set_y shape: " + str(test_set_y.shape))
```
Output:
```python
# Reshape the training and test examples

### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(m_train, -1).T
test_set_x_flatten = test_set_x_orig.reshape(m_test, -1).T
### END CODE HERE ###

print("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print("train_set_y shape: " + str(train_set_y.shape))
print("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print("test_set_y shape: " + str(test_set_y.shape))
print("sanity check after reshaping: " + str(train_set_x_flatten[0:5, 0]))
```
Reshape the training and test data sets so that each image of shape (num_px, num_px, 3) is flattened into a single vector of shape (num_px * num_px * 3, 1). A trick for flattening a matrix X of shape (a, b, c, d) into shape (b*c*d, a) is X.reshape(X.shape[0], -1).T, which is what the code above does.
Output:
Standardize the dataset. A common preprocessing step is to center and standardize the data, but for picture data it is simpler and works almost as well to just divide every row by 255, the maximum value of a pixel channel:
```python
train_set_x = train_set_x_flatten / 255.
test_set_x = test_set_x_flatten / 255.
```
Logistic regression is really just a very simple neural network.

The main steps for building a neural network are:
1. Define the model structure (such as the number of input features)
2. Initialize the model's parameters
3. Loop: compute the current loss (forward propagation), compute the current gradient (backward propagation), then update the parameters (gradient descent)

You usually build steps 1-3 as separate functions and then integrate them into a single function called model().
Define the sigmoid function, sigmoid(z) = 1 / (1 + e^(-z)):
```python
# GRADED FUNCTION: sigmoid

def sigmoid(z):
    """
    Compute the sigmoid of z

    Arguments:
    z -- A scalar or numpy array of any size.

    Return:
    s -- sigmoid(z)
    """

    ### START CODE HERE ### (≈ 1 line of code)
    s = 1.0 / (1.0 + np.exp(-1.0 * z))
    ### END CODE HERE ###

    return s
```
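A quick sanity check (the expected values follow directly from the formula):

```python
print("sigmoid([0, 2]) = " + str(sigmoid(np.array([0, 2]))))
# sigmoid(0) = 0.5, and sigmoid(2) = 1 / (1 + e^-2) ≈ 0.88079708
```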
Initialize the parameters
```python
# GRADED FUNCTION: initialize_with_zeros

def initialize_with_zeros(dim):
    """
    This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.

    Argument:
    dim -- size of the w vector we want (or number of parameters in this case)

    Returns:
    w -- initialized vector of shape (dim, 1)
    b -- initialized scalar (corresponds to the bias)
    """

    ### START CODE HERE ### (≈ 1 line of code)
    w = np.zeros((dim, 1))
    b = 0
    ### END CODE HERE ###

    assert(w.shape == (dim, 1))
    assert(isinstance(b, float) or isinstance(b, int))

    return w, b
```
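For example, calling it with dim = 2 (an arbitrary illustrative size) should give a (2, 1) zero vector and a zero bias:

```python
w, b = initialize_with_zeros(2)
print("w = " + str(w))  # [[0.] [0.]]
print("b = " + str(b))  # 0
```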
Forward and backward propagation
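The docstring below refers to "the propagation explained above"; for reference, these are the formulas the code implements. Forward propagation computes the activations and the cost:

$$A = \sigma(w^T X + b)$$

$$J = -\frac{1}{m}\sum_{i=1}^{m}\left[ y^{(i)}\log a^{(i)} + (1-y^{(i)})\log(1-a^{(i)}) \right]$$

Backward propagation then gives the gradients:

$$\frac{\partial J}{\partial w} = \frac{1}{m} X (A-Y)^T, \qquad \frac{\partial J}{\partial b} = \frac{1}{m}\sum_{i=1}^{m}\left(a^{(i)}-y^{(i)}\right)$$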
```python
# GRADED FUNCTION: propagate

def propagate(w, b, X, Y):
    """
    Implement the cost function and its gradient for the propagation explained above

    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of size (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)

    Return:
    cost -- negative log-likelihood cost for logistic regression
    dw -- gradient of the loss with respect to w, thus same shape as w
    db -- gradient of the loss with respect to b, thus same shape as b

    Tips:
    - Write your code step by step for the propagation. np.log(), np.dot()
    """

    m = X.shape[1]

    # FORWARD PROPAGATION (FROM X TO COST)
    ### START CODE HERE ### (≈ 2 lines of code)
    A = sigmoid(np.dot(w.T, X) + b)                                       # compute activation
    cost = -(1.0 / m) * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A))   # compute cost
    ### END CODE HERE ###

    # BACKWARD PROPAGATION (TO FIND GRAD)
    ### START CODE HERE ### (≈ 2 lines of code)
    dw = (1.0 / m) * np.dot(X, (A - Y).T)
    db = (1.0 / m) * np.sum(A - Y)
    ### END CODE HERE ###

    assert(dw.shape == w.shape)
    assert(db.dtype == float)
    cost = np.squeeze(cost)
    assert(cost.shape == ())

    grads = {"dw": dw,
             "db": db}

    return grads, cost
```
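A quick check with small hand-crafted inputs (these particular numbers are illustrative, not part of the dataset):

```python
w = np.array([[1.], [2.]])
b = 2.
X = np.array([[1., 2., -1.], [3., 4., -3.2]])
Y = np.array([[1, 0, 1]])
grads, cost = propagate(w, b, X, Y)
print("dw = " + str(grads["dw"]))
print("db = " + str(grads["db"]))
print("cost = " + str(cost))
```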
Optimize the parameters
Write down the optimization function. The goal is to learn w and b by minimizing the cost function J: for a parameter θ, gradient descent updates it as θ = θ − α·dθ, where α is the learning rate.
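The graded optimize() function is cut off in this excerpt; below is a minimal sketch of such a gradient-descent loop built on the propagate() above. The signature mirrors the assignment's, but treat the details as illustrative:

```python
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost=False):
    """
    Learn w and b by running gradient descent: each iteration computes the
    gradients with propagate() and takes one step against them.
    (Illustrative sketch of the assignment's optimize() function.)
    """
    costs = []

    for i in range(num_iterations):
        # Forward and backward pass
        grads, cost = propagate(w, b, X, Y)
        dw = grads["dw"]
        db = grads["db"]

        # Gradient-descent update rule: theta = theta - alpha * d_theta
        w = w - learning_rate * dw
        b = b - learning_rate * db

        # Record the cost every 100 iterations
        if i % 100 == 0:
            costs.append(cost)
            if print_cost:
                print("Cost after iteration %i: %f" % (i, cost))

    params = {"w": w, "b": b}
    grads = {"dw": dw, "db": db}

    return params, grads, costs
```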