To make the least-squares problem easier to study, the following observations of a vehicle's travel time and distance are provided for discussion and analysis.
Let the vehicle's initial velocity be $v_0$, its acceleration $a$, the travel time $t$, and the distance $\hat{y}$, and let the $i$-th observation be $(t_i,\hat{y}_i)$. Time $t$ and distance $\hat{y}$ satisfy the following mathematical model:

\hat{y}=f(v_0,a,t)=\frac{1}{2}at^2+v_0t \qquad \text{(Equation 59)}
\min F(a,v_0)=\frac{1}{2}\sum_{i=1}^{10}\left(f(v_0,a,t_i)-\hat{y}_i\right)^2 \qquad \text{(Equation 60)}
Looking at Equation 60, the $\hat{y}_i$ and $t_i$ are constants while $a$ and $v_0$ are the variables, so this least-squares problem is a minimization problem of the function $F$ with respect to $a$ and $v_0$.
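Because $f$ is linear in both $a$ and $v_0$, this particular problem also admits a closed-form solution. As a minimal sketch (an addition, not one of the original examples), the normal equations can be solved in one step with tf.linalg.lstsq, assuming the same observation arrays that the training examples below use:

import tensorflow as tf

# Observation data (the same samples used in the training examples below)
t = tf.constant([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0],
                 [6.0], [7.0], [8.0], [9.0], [10.0]])
y_hat = tf.constant([[0.0], [11.5], [26.0], [43.5], [64.12], [87.57],
                     [114.12], [143.5], [176.3], [211.5], [250.12]])

# Design matrix: each row is [0.5*t_i^2, t_i], so X @ [a, v0]^T approximates y_hat
X = tf.concat([0.5 * tf.square(t), t], axis=1)

# Solve the linear least-squares problem in closed form
params = tf.linalg.lstsq(X, y_hat)
print(f'a  = {params[0, 0].numpy()}')   # close to 3
print(f'v0 = {params[1, 0].numpy()}')   # close to 10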
The following code is an example of differentiating a quadratic function:
import tensorflow as tf

# Define the quadratic function y = 0.5*3*x*x + 10*x
def my_function(x):
    a = 3
    b = 10
    return 0.5*a*x*x + b*x

# Define the test data
x_test = tf.constant([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0],
                      [6.0], [7.0], [8.0], [9.0], [10.0]])

# Create a tf.GradientTape instance and use it as a context manager
with tf.GradientTape(persistent=True) as tape:
    # Watch the tensor whose gradient we need
    tape.watch(x_test)
    # For input x_test, the function output is y_test
    y_test = my_function(x_test)

# Differentiate y with respect to x
dy_dx = tape.gradient(y_test, x_test)
print(f'dy/dx: {dy_dx}')

# Set the learning rate and compute the gradient-descent step size
learning_rate = 0.001
print(f'With learning rate {learning_rate}, the gradient-descent step is: {learning_rate*dy_dx}')

# Because the tape was created with persistent=True, release it manually
del tape
dy/dx: [[10.] [13.] [16.] [19.] [22.] [25.] [28.] [31.] [34.] [37.] [40.]]
With learning rate 0.001, the gradient-descent step is: [[0.01 ] [0.013] [0.016] [0.019] [0.022] [0.025] [0.028] [0.031] [0.034] [0.037] [0.04 ]]
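As a quick sanity check (an addition, not part of the original example), the derivative of $y=0.5\cdot 3x^2+10x$ can be computed by hand: $dy/dx=3x+10$, which reproduces the printed values 10, 13, …, 40. A minimal sketch:

import tensorflow as tf

x_test = tf.constant([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0],
                      [6.0], [7.0], [8.0], [9.0], [10.0]])

# Analytic derivative of y = 0.5*3*x^2 + 10*x
dy_dx_analytic = 3.0 * x_test + 10.0
print(dy_dx_analytic)  # [[10.] [13.] ... [40.]], matching tape.gradient above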
Next, the same least-squares problem is solved by iterating gradient descent manually: tf.GradientTape computes the gradients of the loss with respect to a and v0, and an SGD optimizer applies the updates:

import tensorflow as tf

# Define the quadratic function y = 0.5*a*x*x + v0*x
def model_function(a, v0, x):
    # Evaluate the quadratic model
    return 0.5 * a * tf.square(x) + v0 * x

# Define the loss function
def custom_loss(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred))

# Prepare the test data
x_test = tf.constant([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0],
                      [6.0], [7.0], [8.0], [9.0], [10.0]])
y_test = tf.constant([[0.0], [11.5], [26.0], [43.5], [64.12], [87.57],
                      [114.12], [143.5], [176.3], [211.5], [250.12]])
learning_rate_test = 0.001

# Use a gradient-descent optimizer
optimizer = tf.optimizers.SGD(learning_rate=learning_rate_test)

# Initialize the trainable model parameters
# arg_a = tf.constant(1.0, dtype=tf.float32)
# arg_v0 = tf.constant(1.0, dtype=tf.float32)
arg_a = tf.Variable(1.0, dtype=tf.float32)
arg_v0 = tf.Variable(1.0, dtype=tf.float32)

# Iteratively optimize the parameters
for i in range(10000):
    with tf.GradientTape() as tape:
        tape.watch([arg_a, arg_v0])
        y_test1 = model_function(arg_a, arg_v0, x_test)
        loss_value = custom_loss(y_test, y_test1)
    # Compute the gradients
    gradients = tape.gradient(loss_value, [arg_a, arg_v0])
    # Update the parameters manually
    # arg_a = arg_a - learning_rate_test * gradients[0]
    # arg_v0 = arg_v0 - learning_rate_test * gradients[1]
    # Update the parameters with the optimizer
    optimizer.apply_gradients(zip(gradients, [arg_a, arg_v0]))
    print(f'{i}->loss: [ {loss_value} ]')
    if loss_value < 0.01:
        break

# Print the optimized parameter values
print("Optimized parameters:")
print(f'a = {arg_a.numpy()}')
print(f'v0 = {arg_v0.numpy()}')
1270->loss: [ 0.010442578233778477 ]
1271->loss: [ 0.010407980531454086 ]
1272->loss: [ 0.010373931378126144 ]
1273->loss: [ 0.010339929722249508 ]
1274->loss: [ 0.010305940173566341 ]
1275->loss: [ 0.010272250510752201 ]
1276->loss: [ 0.010238551534712315 ]
1277->loss: [ 0.01020563393831253 ]
1278->loss: [ 0.010172807611525059 ]
1279->loss: [ 0.010140151716768742 ]
1280->loss: [ 0.010108486749231815 ]
1281->loss: [ 0.010076496750116348 ]
1282->loss: [ 0.010045137256383896 ]
1283->loss: [ 0.010013815015554428 ]
1284->loss: [ 0.009982440620660782 ]
Optimized parameters:
a = 3.0083563327789307
v0 = 9.978113174438477
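As a follow-up sketch (hypothetical, reusing the names from the block above), the fitted parameters can be plugged back into model_function to compare predictions against the observations:

# Evaluate the fitted model on the observation times
y_fit = model_function(arg_a, arg_v0, x_test)
# Column 1: observed distances, column 2: fitted distances
print(tf.concat([y_test, y_fit], axis=1))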
Finally, the same fit can be wrapped in a Keras model: a custom layer holds the parameters, and model.fit drives the training with an EarlyStopping callback:

import tensorflow as tf

# Custom loss function
def custom_loss(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred))

# Define the model function as a custom layer
class ModelFunction(tf.keras.layers.Layer):
    def __init__(self, **kwargs):
        super(ModelFunction, self).__init__(**kwargs)
        # Initialize the parameters; a[0] and a[1] both start at 1.0
        self.a = self.add_weight(shape=(2,), initializer='ones', trainable=True)

    def call(self, inputs):
        # Evaluate the quadratic model
        return 0.5 * self.a[0] * tf.square(inputs) + self.a[1] * inputs

# Build the model
model_inputs = tf.keras.Input(shape=(1,))
model_outputs = ModelFunction()(model_inputs)
model = tf.keras.Model(inputs=model_inputs, outputs=model_outputs)

# Compile the model (set the optimizer and the loss function)
custom_optimizer = tf.keras.optimizers.SGD(learning_rate=0.001)
model.compile(optimizer=custom_optimizer, loss=custom_loss)

# Prepare the test data: f(x) = 0.5*a*x*x + b*x with a=3, b=10
x_test = tf.constant([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0],
                      [6.0], [7.0], [8.0], [9.0], [10.0]])
y_test = tf.constant([[0.0], [11.5], [26.0], [43.5], [64.12], [87.57],
                      [114.12], [143.5], [176.3], [211.5], [250.12]])

# Add an EarlyStopping callback
early_stopping_callback = tf.keras.callbacks.EarlyStopping(
    monitor='loss', patience=10, min_delta=0.001, mode='min', verbose=2)

# Train the model
history = model.fit(x_test, y_test, epochs=2000, verbose=2,
                    callbacks=[early_stopping_callback])

loss = model.evaluate(x_test, y_test, verbose=2)
print(f"The loss on the test data is: {loss}")

# Inspect the weights of the custom layer
print(model.get_weights())
Epoch 1145/2000
1/1 - 0s - loss: 0.0181 - 1ms/epoch - 1ms/step
Epoch 1146/2000
1/1 - 0s - loss: 0.0180 - 1ms/epoch - 1ms/step
Epoch 1147/2000
1/1 - 0s - loss: 0.0179 - 1ms/epoch - 1ms/step
Epoch 1148/2000
1/1 - 0s - loss: 0.0178 - 1ms/epoch - 1ms/step
Epoch 1149/2000
1/1 - 0s - loss: 0.0177 - 1ms/epoch - 1ms/step
Epoch 1150/2000
1/1 - 0s - loss: 0.0176 - 1ms/epoch - 1ms/step
Epoch 1151/2000
1/1 - 0s - loss: 0.0175 - 1ms/epoch - 1ms/step
Epoch 1152/2000
1/1 - 0s - loss: 0.0175 - 1ms/epoch - 1ms/step
Epoch 1153/2000
1/1 - 0s - loss: 0.0174 - 1ms/epoch - 1ms/step
Epoch 1153: early stopping
1/1 - 0s - loss: 0.0173 - 76ms/epoch - 76ms/step
The loss on the test data is: 0.017272843047976494
[array([3.0155222, 9.948215 ], dtype=float32)]
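As a hypothetical follow-up, the trained Keras model can be used directly for prediction, and the fitted parameters read back out of the custom layer:

# Predict with the trained model; predictions should track y_test closely
y_pred = model.predict(x_test, verbose=0)
print(y_pred)

# The first weight is a (close to 3), the second is v0 (close to 10)
a_fit, v0_fit = model.get_weights()[0]
print(f'a = {a_fit}, v0 = {v0_fit}')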