
[Machine Learning] Multivariate Linear Regression

Multiple linear regression model

  • The multiple linear regression model:

f_{\vec{w},b}(\vec{x}) = \vec{w} \cdot \vec{x} + b = w_1 x_1 + w_2 x_2 + ... + w_n x_n + b = \sum_{j=1}^{n} w_j x_j + b

where:

  • \vec{w} = (w_1, w_2, ..., w_n) is the weight vector, with n the number of features
  • b is the bias
  • \vec{x} = (x_1, x_2, ..., x_n) is the feature vector, also of dimension n
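As a quick sanity check of the definition, the dot-product form can be evaluated directly with NumPy (the numbers below are made up for illustration):

```python
import numpy as np

# Hypothetical weights, bias, and one feature vector (n = 3)
w = np.array([1.0, 2.5, -3.0])
b = 4.0
x = np.array([10.0, 20.0, 30.0])

# f(x) = w1*x1 + w2*x2 + w3*x3 + b, i.e. the dot product plus the bias
f = np.dot(w, x) + b
print(f)  # 10 + 50 - 90 + 4 = -26.0
```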

Loss/cost function: mean squared error

  • One training sample: \vec{x}^{(i)} = (x_1^{(i)}, x_2^{(i)}, ..., x_n^{(i)}) with target y^{(i)}
  • Total number of training samples: m
  • Loss/cost function:

J(\vec{w}, b) = \frac{1}{2m} \sum_{i=1}^{m} [f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}]^2 = \frac{1}{2m} \sum_{i=1}^{m} [\vec{w} \cdot \vec{x}^{(i)} + b - y^{(i)}]^2
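The sum over the m samples can also be written in vectorized form; a minimal sketch, equivalent to the loop-based `cost_function` used in the script below (assuming `X` holds one sample per row):

```python
import numpy as np

def cost_vectorized(X, y, w, b):
    """J(w, b) computed over all m samples at once."""
    m = X.shape[0]
    err = X @ w + b - y          # f_wb(x^(i)) - y^(i) for every sample
    return np.sum(err ** 2) / (2 * m)

# Tiny made-up example: errors are -1 and -2, so J = (1 + 4) / (2 * 2) = 1.25
X = np.array([[1.0], [2.0]])
y = np.array([2.0, 4.0])
print(cost_vectorized(X, y, np.array([1.0]), 0.0))  # 1.25
```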

Batch gradient descent algorithm

  • \alpha is the learning rate, which controls the step size of each gradient descent update toward the minimum of the cost function. If \alpha is too small, gradient descent is slow; if \alpha is too large, the process may fail to converge.
  • Batch gradient descent:

repeat {
    tmp_w_1 = w_1 - \alpha \frac{1}{m} \sum_{i=1}^{m} [f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}] x_1^{(i)}
    tmp_w_2 = w_2 - \alpha \frac{1}{m} \sum_{i=1}^{m} [f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}] x_2^{(i)}
    ...
    tmp_w_n = w_n - \alpha \frac{1}{m} \sum_{i=1}^{m} [f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}] x_n^{(i)}
    tmp_b = b - \alpha \frac{1}{m} \sum_{i=1}^{m} [f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}]
    (simultaneously update every parameter)
} until converged

  • Checking whether gradient descent has converged: J(\vec{w}, b) should decrease as the number of iterations grows. Let \epsilon = 0.001; if J(\vec{w}, b) decreases by no more than \epsilon in one iteration, gradient descent is considered to have converged.
  • Implementation:
import numpy as np
import matplotlib.pyplot as plt

# Compute the mean squared error cost J(w,b)
def cost_function(X, y, w, b):
    m = X.shape[0]  # number of training samples
    cost_sum = 0.0
    for i in range(m):
        f_wb_i = np.dot(w, X[i]) + b
        cost = (f_wb_i - y[i]) ** 2
        cost_sum += cost
    return cost_sum / (2 * m)

# Compute the gradients dJ/dw and dJ/db
def compute_gradient(X, y, w, b):
    m = X.shape[0]  # number of training samples (matrix rows)
    n = X.shape[1]  # number of features per sample (matrix columns)
    dj_dw = np.zeros((n,))
    dj_db = 0.0
    for i in range(m):  # for each sample
        f_wb_i = np.dot(w, X[i]) + b
        for j in range(n):  # for each feature
            dj_dw[j] += (f_wb_i - y[i]) * X[i, j]
        dj_db += (f_wb_i - y[i])
    dj_dw = dj_dw / m
    dj_db = dj_db / m
    return dj_dw, dj_db

# Gradient descent
def linear_regression(X, y, w, b, learning_rate=0.01, epochs=1000):
    J_history = []  # cost recorded at every iteration
    for epoch in range(epochs):
        dj_dw, dj_db = compute_gradient(X, y, w, b)
        # w and b must be updated simultaneously
        w = w - learning_rate * dj_dw
        b = b - learning_rate * dj_db
        J_history.append(cost_function(X, y, w, b))  # record the cost of this iteration
    return w, b, J_history

# Draw a scatter plot
def draw_scatter(x, y, title):
    plt.xlabel("X-axis", size=15)
    plt.ylabel("Y-axis", size=15)
    plt.title(title, size=20)
    plt.scatter(x, y)

# Print training targets next to predictions for comparison
def print_contrast(train, prediction, n):
    print("train  prediction")
    for i in range(n):
        print(np.round(train[i], 4), np.round(prediction[i], 4))

# Entry point
if __name__ == '__main__':
    # Training set
    data = np.loadtxt("./data.txt", delimiter=',', skiprows=1)
    X_train = data[:, :4]  # columns 0-3 are X = (x0, x1, x2, x3)
    y_train = data[:, 4]  # column 4 is y
    w = np.zeros((X_train.shape[1],))  # weights
    b = 0.0  # bias
    epochs = 1000  # number of iterations
    learning_rate = 1e-7  # learning rate
    J_history = []  # cost recorded at every iteration

    # Fit the linear regression model
    w, b, J_history = linear_regression(X_train, y_train, w, b, learning_rate, epochs)
    print(f"result: w = {np.round(w, 4)}, b = {b:0.4f}")  # print the result

    # Compare the training targets y_train with the predictions y_hat
    # (lazy shortcut: the training set is reused as a test set -- don't do this in practice!)
    y_hat = np.zeros(X_train.shape[0])
    for i in range(X_train.shape[0]):
        y_hat[i] = np.dot(w, X_train[i]) + b
    print_contrast(y_train, y_hat, y_train.shape[0])

    # Plot the cost at every iteration
    x_axis = list(range(0, epochs))
    draw_scatter(x_axis, J_history, "Cost Function in Every Epoch")
    plt.show()
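The script above runs for a fixed number of epochs rather than testing the \epsilon-based convergence check described earlier. A minimal sketch of such a check (assuming `J_history` holds the cost of every iteration, as in the script):

```python
def converged(J_history, epsilon=1e-3):
    """True once the cost decreased by at most epsilon in the last iteration."""
    if len(J_history) < 2:
        return False
    decrease = J_history[-2] - J_history[-1]
    return 0 <= decrease <= epsilon

print(converged([5.0, 4.9995]))  # True: decrease of 0.0005 <= 0.001
print(converged([5.0, 4.0]))     # False: still decreasing by more than epsilon
```

A negative "decrease" (the cost went up) deliberately returns False: a rising cost usually signals a learning rate that is too large, not convergence.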

Feature engineering

Create new features by combining or transforming the existing ones.
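For instance (a hypothetical housing example, not from the data set used below): given a lot's frontage x1 and depth x2, their product is often a more predictive feature, the area x3 = x1 * x2:

```python
import numpy as np

# Two raw features per sample: frontage and depth (made-up numbers)
X = np.array([[20.0, 30.0],
              [15.0, 42.0]])

# Engineer a new feature x3 = x1 * x2 (the lot area) and append it as a column
area = (X[:, 0] * X[:, 1]).reshape(-1, 1)
X_new = np.hstack([X, area])
print(X_new)  # each row gains a third column: 600.0 and 630.0
```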

Feature scaling

  • Why feature scaling matters: when features span very different value ranges, the contours of the cost function J(\vec{w}, b) become elongated and gradient descent converges slowly; rescaling every feature to a comparable range makes gradient descent converge much faster.

  • Mean normalization:

x_j^{(i)} := \frac{x_j^{(i)} - \mu_j}{\max(x_j) - \min(x_j)}

where \vec{x}^{(i)} = (x_1^{(i)}, x_2^{(i)}, ..., x_j^{(i)}, ..., x_n^{(i)}) and \mu_j is the mean of feature j over all m training samples:

\mu_j = \frac{1}{m} \sum_{i=1}^{m} x_j^{(i)}

  • z-score normalization:

x_j^{(i)} := \frac{x_j^{(i)} - \mu_j}{\sigma_j}

where \vec{x}^{(i)} = (x_1^{(i)}, x_2^{(i)}, ..., x_j^{(i)}, ..., x_n^{(i)}) and \sigma_j is the standard deviation (std) of feature j over all m training samples:

\sigma_j = \sqrt{\frac{1}{m} \sum_{i=1}^{m} [x_j^{(i)} - \mu_j]^2}

  • Caveats of normalization: the trained parameters W and B can be used for inference on a test set in two ways:
    • Use them directly; the test inputs X must then be normalized with exactly the same rule (and the same statistics) as the training set.
    • De-normalize them back to the original scale, W_real and B_real; the test inputs X then need no transformation.
    • Y can also be normalized, which reduces the number of iterations needed. If training converges anyway, normalizing Y is optional; if it diverges because the values grow too large, it becomes necessary. When Y is normalized, the predictions must be de-normalized accordingly.
  • Implementation:
import numpy as np
import matplotlib.pyplot as plt

# Mean normalization
def mean_normalize_features(X):
    mu = np.mean(X, axis=0)  # per-feature mean; axis=0 computes column-wise, axis=1 row-wise
    X_mean = (X - mu) / (np.max(X, axis=0) - np.min(X, axis=0))
    return X_mean

# z-score normalization
def zscore_normalize_features(X):
    mu = np.mean(X, axis=0)     # per-feature mean (column-wise)
    sigma = np.std(X, axis=0)   # per-feature standard deviation (column-wise)
    X_zscore = (X - mu) / sigma  # subtract the mean, divide by the std
    return X_zscore

# Compute the mean squared error cost J(w,b)
def cost_function(X, y, w, b):
    m = X.shape[0]  # number of training samples
    cost_sum = 0.0
    for i in range(m):
        f_wb_i = np.dot(w, X[i]) + b
        cost = (f_wb_i - y[i]) ** 2
        cost_sum += cost
    return cost_sum / (2 * m)

# Compute the gradients dJ/dw and dJ/db
def compute_gradient(X, y, w, b):
    m = X.shape[0]  # number of training samples (matrix rows)
    n = X.shape[1]  # number of features per sample (matrix columns)
    dj_dw = np.zeros((n,))
    dj_db = 0.0
    for i in range(m):  # for each sample
        f_wb_i = np.dot(w, X[i]) + b
        for j in range(n):  # for each feature
            dj_dw[j] += (f_wb_i - y[i]) * X[i, j]
        dj_db += (f_wb_i - y[i])
    dj_dw = dj_dw / m
    dj_db = dj_db / m
    return dj_dw, dj_db

# Gradient descent
def linear_regression(X, y, w, b, learning_rate=0.01, epochs=1000):
    J_history = []  # cost recorded at every iteration
    for epoch in range(epochs):
        dj_dw, dj_db = compute_gradient(X, y, w, b)
        # w and b must be updated simultaneously
        w = w - learning_rate * dj_dw
        b = b - learning_rate * dj_db
        J_history.append(cost_function(X, y, w, b))  # record the cost of this iteration
    return w, b, J_history

# Draw a scatter plot
def draw_scatter(x, y, title):
    plt.xlabel("X-axis", size=15)
    plt.ylabel("Y-axis", size=15)
    plt.title(title, size=20)
    plt.scatter(x, y)

# Print training targets next to predictions for comparison
def print_contrast(train, prediction, n):
    print("train  prediction")
    for i in range(n):
        print(np.round(train[i], 4), np.round(prediction[i], 4))

# Entry point
if __name__ == '__main__':
    # Training set
    data = np.loadtxt("./data.txt", delimiter=',', skiprows=1)
    X_train = data[:, :4]  # columns 0-3 are X = (x0, x1, x2, x3)
    y_train = data[:, 4]  # column 4 is y
    w = np.zeros((X_train.shape[1],))  # weights
    b = 0.0  # bias
    epochs = 1000  # number of iterations
    learning_rate = 0.01  # learning rate
    J_history = []  # cost recorded at every iteration

    # Z-score normalization
    X_norm = zscore_normalize_features(X_train)
    #y_norm = zscore_normalize_features(y_train)
    print(f"X_norm = {np.round(X_norm, 4)}")
    #print(f"y_norm = {np.round(y_norm, 4)}")

    # Fit the linear regression model
    w, b, J_history = linear_regression(X_norm, y_train, w, b, learning_rate, epochs)
    print(f"result: w = {np.round(w, 4)}, b = {b:0.4f}")  # print the result

    # Compare the training targets y_train with the predictions y_hat
    # (lazy shortcut: the training set is reused as a test set -- don't do this in practice!)
    y_hat = np.zeros(X_train.shape[0])
    for i in range(X_train.shape[0]):
        # Note: test inputs must be normalized too!
        y_hat[i] = np.dot(w, X_norm[i]) + b
    print_contrast(y_train, y_hat, y_train.shape[0])

    # Plot the cost at every iteration
    x_axis = list(range(0, epochs))
    draw_scatter(x_axis, J_history, "Cost Function in Every Epoch")
    plt.show()
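The script above takes the first option: it normalizes the test inputs. The second option, de-normalizing the learned parameters back to the original scale, can be sketched as follows for z-score normalization (the statistics and parameters below are made up; the point is that both forms give identical predictions):

```python
import numpy as np

# Made-up per-feature statistics and parameters learned on normalized inputs
mu = np.array([3.0, 10.0])      # per-feature means
sigma = np.array([1.5, 4.0])    # per-feature standard deviations
w_norm = np.array([2.0, -1.0])  # weights learned on (X - mu) / sigma
b_norm = 5.0

# Since y = w_norm . (x - mu)/sigma + b_norm, expanding gives the raw-scale form
# y = w_real . x + b_real with:
w_real = w_norm / sigma
b_real = b_norm - np.sum(w_norm * mu / sigma)

x = np.array([4.0, 8.0])  # a raw, un-normalized input
pred_norm = np.dot(w_norm, (x - mu) / sigma) + b_norm
pred_real = np.dot(w_real, x) + b_real
print(pred_norm, pred_real)  # both predictions agree up to floating-point rounding
```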
Regularized linear regression

  • Purpose of regularization: mitigate overfitting (which can also be addressed by adding more training data).
  • Loss/cost function (only w is regularized; b is not):

J(\vec{w}, b) = \frac{1}{2m} \sum_{i=1}^{m} [f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}]^2 + \frac{\lambda}{2m} \sum_{j=1}^{n} w_j^2

The first term is the mean squared error; the second is the regularization term, which shrinks the w_j. The larger the chosen \lambda, the smaller the resulting w_j.
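A vectorized sketch of this regularized cost (the function and parameter names here are illustrative, not from the script above):

```python
import numpy as np

def cost_function_reg(X, y, w, b, lambda_=1.0):
    m = X.shape[0]
    err = X @ w + b - y
    mse = np.sum(err ** 2) / (2 * m)            # mean squared error term
    reg = (lambda_ / (2 * m)) * np.sum(w ** 2)  # regularization term (w only, not b)
    return mse + reg

# With zero prediction error the cost reduces to the regularization term alone:
X = np.array([[1.0], [2.0]])
y = np.array([2.0, 4.0])
print(cost_function_reg(X, y, np.array([2.0]), 0.0, lambda_=2.0))  # 0 + (2/4)*4 = 2.0
```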

  • Gradient descent (the \lambda/m \cdot w_j term from the regularized cost joins the gradient inside the \alpha step):

repeat {
    tmp_w_1 = w_1 - \alpha \left( \frac{1}{m} \sum_{i=1}^{m} [f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}] x_1^{(i)} + \frac{\lambda}{m} w_1 \right)
    tmp_w_2 = w_2 - \alpha \left( \frac{1}{m} \sum_{i=1}^{m} [f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}] x_2^{(i)} + \frac{\lambda}{m} w_2 \right)
    ...
    tmp_w_n = w_n - \alpha \left( \frac{1}{m} \sum_{i=1}^{m} [f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}] x_n^{(i)} + \frac{\lambda}{m} w_n \right)
    tmp_b = b - \alpha \frac{1}{m} \sum_{i=1}^{m} [f_{\vec{w},b}(\vec{x}^{(i)}) - y^{(i)}]
    (simultaneously update every parameter)
} until converged
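The corresponding gradient computation as a vectorized sketch; relative to the unregularized scripts above, only the gradient of w changes (names here are illustrative):

```python
import numpy as np

def compute_gradient_reg(X, y, w, b, lambda_=1.0):
    m = X.shape[0]
    err = X @ w + b - y                        # prediction error per sample, shape (m,)
    dj_dw = X.T @ err / m + (lambda_ / m) * w  # extra (lambda/m) * w_j term for each weight
    dj_db = np.sum(err) / m                    # b is not regularized
    return dj_dw, dj_db

# Tiny made-up example: errors are -1 and -2, w = 0 so the penalty adds nothing yet
X = np.array([[1.0], [2.0]])
y = np.array([1.0, 2.0])
dj_dw, dj_db = compute_gradient_reg(X, y, np.array([0.0]), 0.0, lambda_=1.0)
print(dj_dw, dj_db)  # [-2.5] -1.5
```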
