
[Beginner Tutorial] Lesson 3: Getting Started with Paddle - Boston Housing Price Prediction

The classic linear regression model is mainly used to make predictions on datasets in which a linear relationship exists. A regression model can be understood as follows: given a set of points, we fit a curve to their distribution. If the fitted curve is a straight line, we speak of linear regression; if it is a quadratic curve, we speak of quadratic regression. Linear regression is the simplest kind of regression model. This tutorial uses PaddlePaddle to build a housing price prediction model.

In linear regression:

(1) The hypothesis function describes, in mathematical terms, the relationship between the independent variables and the dependent variable; it can be a linear or a nonlinear function. In this linear regression model, our hypothesis function is Y' = wX + b, where Y' denotes the model's prediction (the predicted house price), written with a prime to distinguish it from the true value Y. The parameters the model has to learn are w and b.

(2) The loss function measures, in mathematical terms, the error between the hypothesis function's predictions and the true values. The smaller this gap, the more accurate the predictions, and the algorithm's job is to keep shrinking it. After building the model, we give it an optimization objective so that the learned parameters make the predicted value Y' as close as possible to the true value Y. The loss value reflects the size of the model's error, and different problem settings call for different loss functions. For linear models, the most commonly used loss function is the mean squared error (MSE).

(3) Optimization algorithm: training a neural network means adjusting the weights (parameters) so that the loss becomes as small as possible; as training proceeds, the loss gradually converges and we obtain a set of weights (parameters) that makes the network fit the underlying model. The ultimate goal of the optimization algorithm is therefore to find the minimum of the loss function, and the search proceeds by nudging w and b little by little until that minimum is reached. Common optimization algorithms include stochastic gradient descent (SGD), Adam, and others; the formulation is sketched below.
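Putting the three pieces together in standard notation (a reference summary added here; it restates the text above rather than anything taken from the notebook):

\hat{y} = w^{T}x + b
\mathrm{MSE}(w, b) = \frac{1}{n}\sum_{i=1}^{n}(\hat{y}_i - y_i)^{2}
w \leftarrow w - \eta\,\frac{\partial\,\mathrm{MSE}}{\partial w}, \qquad b \leftarrow b - \eta\,\frac{\partial\,\mathrm{MSE}}{\partial b}

where \eta is the learning rate (set to 0.001 later in this tutorial).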

First, import the required packages:

paddle.fluid --> the PaddlePaddle deep learning framework

numpy --> a basic Python library used for scientific computing

os --> a Python module that lets you interact with the operating system

matplotlib --> a Python plotting library, convenient for drawing line charts, scatter plots, and other figures

In[1]

import paddle.fluid as fluid
import paddle
import numpy as np
import os
import matplotlib.pyplot as plt

Step 1: Prepare the data

(1) The uci_housing dataset

The dataset has 506 rows and 14 columns. The first 13 columns describe various attributes of a house, and the last column is the median price of houses of that kind.

PaddlePaddle provides interfaces for reading the uci_housing training and test sets: paddle.dataset.uci_housing.train() and paddle.dataset.uci_housing.test().

(2) train_reader and test_reader

paddle.reader.shuffle() buffers BUF_SIZE data items at a time and shuffles them.

paddle.batch() groups every BATCH_SIZE items into one batch. A simplified sketch of what these two wrappers do follows.
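Conceptually, the shuffle and batch wrappers behave roughly like the pure-Python sketch below. This is an illustration written under my own assumptions about their behavior, not Paddle's actual implementation:

import random

def shuffle_reader(reader, buf_size):
    # fill a buffer of up to buf_size items, shuffle it, then yield the items
    def new_reader():
        buf = []
        for item in reader():
            buf.append(item)
            if len(buf) >= buf_size:
                random.shuffle(buf)
                for b in buf:
                    yield b
                buf = []
        random.shuffle(buf)
        for b in buf:
            yield b
    return new_reader

def batch_reader(reader, batch_size):
    # group consecutive items into lists of length batch_size
    def new_reader():
        batch = []
        for item in reader():
            batch.append(item)
            if len(batch) == batch_size:
                yield batch
                batch = []
        if batch:
            yield batch
    return new_reader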

In[2]

BUF_SIZE = 500
BATCH_SIZE = 20

# data provider for training: each time, read a batch of data at random from the buffer
train_reader = paddle.batch(
    paddle.reader.shuffle(paddle.dataset.uci_housing.train(),
                          buf_size=BUF_SIZE),
    batch_size=BATCH_SIZE)
# data provider for testing: each time, read a batch of data at random from the buffer
test_reader = paddle.batch(
    paddle.reader.shuffle(paddle.dataset.uci_housing.test(),
                          buf_size=BUF_SIZE),
    batch_size=BATCH_SIZE)
[==================================================]housing/housing.data not found, downloading http://paddlemodels.bj.bcebos.com/uci_housing/housing.data
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/dataset/uci_housing.py:49: UserWarning:
This call to matplotlib.use() has no effect because the backend has already
been chosen; matplotlib.use() must be called *before* pylab, matplotlib.pyplot,
or matplotlib.backends is imported for the first time.
(a long ipykernel/matplotlib backend traceback follows; it is harmless and omitted here)

(3) Print a sample to see what the data looks like. The data returned by the PaddlePaddle interface has already been normalized; a rough sketch of such normalization is given after the cell below.

(array([-0.02964322, -0.11363636, 0.39417967, -0.06916996, 0.14260276, -0.10109875, 0.30715859, -0.13176829, -0.24127857, 0.05489093, 0.29196451, -0.2368098 , 0.12850267]), array([15.6]))

In[3]

# print a sample to inspect the uci_housing data
train_data = paddle.dataset.uci_housing.train()
sampledata = next(train_data())
print(sampledata)
(array([-0.0405441 ,  0.06636364, -0.32356227, -0.06916996, -0.03435197,
        0.05563625, -0.03475696,  0.02682186, -0.37171335, -0.21419304,
       -0.33569506,  0.10143217, -0.21172912]), array([24.]))
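The 13 feature values above are small and roughly centered around zero because the uci_housing reader normalizes each feature column. A minimal sketch of this kind of normalization is shown below; the exact constants Paddle uses are an assumption on my part, so treat this as illustration rather than the library's actual code:

import numpy as np

def normalize(features):
    # features: (n_samples, 13) matrix of raw attribute values
    avg = features.mean(axis=0)                          # per-feature mean
    rng = features.max(axis=0) - features.min(axis=0)    # per-feature range
    return (features - avg) / rng                        # center and scale each column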
 

Step 2: Network configuration

(1) Building the network: for linear regression, the network is simply a fully connected layer from input to output.

For the Boston housing dataset, we assume that the relationship between the attributes and the house price can be described by a linear combination of the attributes.

In[4]

# define the tensor variable x, representing the 13-dimensional feature vector
x = fluid.layers.data(name='x', shape=[13], dtype='float32')
# define the tensor y, representing the target value
y = fluid.layers.data(name='y', shape=[1], dtype='float32')
# define a simple linear network: a fully connected layer linking input and output
# input: the input tensor
# size: the number of output units of this layer
# act: the activation function
y_predict = fluid.layers.fc(input=x, size=1, act=None)

(2) Define the loss function

Here we use the mean squared error loss.

square_error_cost(input, label) takes the predicted values and the target values and returns the squared error for each sample, i.e. (y - y_predict) squared.

In[5]

cost = fluid.layers.square_error_cost(input=y_predict, label=y)  # per-sample loss within one batch
avg_cost = fluid.layers.mean(cost)                               # average the loss over the batch
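For intuition, the two lines above compute, for one batch, the same quantity as this small NumPy sketch (added here purely as an illustration):

import numpy as np

def batch_mse(y_pred, y_true):
    # element-wise squared error followed by the batch mean,
    # i.e. what square_error_cost + mean compute for one batch
    return np.mean((y_pred - y_true) ** 2)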

(3) Define the optimizer

Here we use stochastic gradient descent (SGD).

In[6]

optimizer = fluid.optimizer.SGDOptimizer(learning_rate=0.001)
opts = optimizer.minimize(avg_cost)
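If you would rather try the Adam optimizer mentioned earlier, replacing the cell above with the following should work; the learning rate of 0.001 is kept only as an example and may need tuning:

# optional alternative to SGD (replace the SGD cell, do not run both)
optimizer = fluid.optimizer.Adam(learning_rate=0.001)
opts = optimizer.minimize(avg_cost)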

In[7]

test_program = fluid.default_main_program().clone(for_test=True)

Once the model above has been configured, two fluid.Program objects are ready: fluid.default_startup_program() and fluid.default_main_program().

Parameter initialization operations are written into fluid.default_startup_program().

fluid.default_main_program() returns the default (global) main program, which is used for training and testing the model. All of the layer functions in fluid.layers add operators and variables to default_main_program. It is also the default value of the Program argument for many of fluid's APIs; for example, when the user does not pass a program, Executor.run() executes default_main_program.

Step 3: Model training and Step 4: Model evaluation

(1) Create an Executor

First define where the computation runs: fluid.CPUPlace() and fluid.CUDAPlace(0) place the computation on the CPU and the GPU, respectively.

Executor: receives a program and runs it via its run() method.

In[8]

use_cuda = False  # use_cuda=False runs on the CPU; use_cuda=True runs on the GPU
place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()
exe = fluid.Executor(place)                # create an Executor instance exe
exe.run(fluid.default_startup_program())   # run the startup program to initialize the parameters
[]

(2) Define the input data format

A DataFeeder converts the data returned by the data providers (train_reader, test_reader) into a data structure that can be fed into the Executor.

feed_list specifies the variables (or variable names) that are fed into the model.

In[9]

# define the format of the input data
feeder = fluid.DataFeeder(place=place, feed_list=[x, y])  # feed_list: variables (or their names) fed into the model
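To see what the feeder produces, you can pass it one batch; it should return a dict keyed by the variable names in feed_list. This probing snippet is an addition of mine, not part of the original notebook:

# illustrative check only
sample_batch = next(train_reader())    # one batch of (features, price) pairs
feed_dict = feeder.feed(sample_batch)  # convert it into Executor-ready tensors
print(feed_dict.keys())                # expected: the names 'x' and 'y'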

(3) Define draw_train_process, a helper that plots how the training loss changes during training

In[10]

iter = 0
iters = []
train_costs = []

def draw_train_process(iters, train_costs):
    title = "training cost"
    plt.title(title, fontsize=24)
    plt.xlabel("iter", fontsize=14)
    plt.ylabel("cost", fontsize=14)
    plt.plot(iters, train_costs, color='red', label='training cost')
    plt.grid()
    plt.show()

(4) Train and save the model

The Executor receives the program and, based on the feed map (input mapping) and fetch_list (list of results to retrieve), adds feed operators (data input operators) and fetch operators (result retrieval operators) to it. The feed map supplies the program's input data, and fetch_list names the variables the user wants back once training has run.

Note: enumerate() turns an iterable (such as a list, tuple, or string) into an indexed sequence, yielding each item together with its index, for example:
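# enumerate pairs each item with its index
list(enumerate(['a', 'b', 'c']))   # -> [(0, 'a'), (1, 'b'), (2, 'c')]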

In[11]

EPOCH_NUM = 50
model_save_dir = "/home/aistudio/work/fit_a_line.inference.model"

for pass_id in range(EPOCH_NUM):                                   # train for EPOCH_NUM epochs
    # train, then print the loss of the last batch
    train_cost = 0
    for batch_id, data in enumerate(train_reader()):               # iterate over the train_reader iterator
        train_cost = exe.run(program=fluid.default_main_program(), # run the main program
                             feed=feeder.feed(data),               # feed one batch of training data, converted according to feed_list
                             fetch_list=[avg_cost])
        if batch_id % 40 == 0:
            print("Pass:%d, Cost:%0.5f" % (pass_id, train_cost[0][0]))  # print the loss of the last batch
        iter = iter + BATCH_SIZE
        iters.append(iter)
        train_costs.append(train_cost[0][0])

    # test, then print the loss of the last batch
    test_cost = 0
    for batch_id, data in enumerate(test_reader()):                # iterate over the test_reader iterator
        test_cost = exe.run(program=test_program,                  # run the test program
                            feed=feeder.feed(data),                # feed one batch of test data
                            fetch_list=[avg_cost])                 # fetch the mean squared error
    print('Test:%d, Cost:%0.5f' % (pass_id, test_cost[0][0]))      # print the loss of the last batch

# save the model
# create the save directory if it does not exist
if not os.path.exists(model_save_dir):
    os.makedirs(model_save_dir)
print('save models to %s' % (model_save_dir))
# save the trained parameters and build a program dedicated to inference
fluid.io.save_inference_model(model_save_dir,  # directory to save the inference model to
                              ['x'],           # variables that must be fed at inference time
                              [y_predict],     # variables holding the inference results
                              exe)             # the executor that saves the inference model
draw_train_process(iters, train_costs)
Pass:0, Cost:783.22180
Test:0, Cost:154.32607
Pass:1, Cost:519.22571
Test:1, Cost:92.79891
Pass:2, Cost:606.54010
Test:2, Cost:92.41443
Pass:3, Cost:279.49731
Test:3, Cost:185.88803
Pass:4, Cost:356.30026
Test:4, Cost:129.92186
Pass:5, Cost:419.08685
Test:5, Cost:111.01654
Pass:6, Cost:390.89267
Test:6, Cost:102.50714
Pass:7, Cost:363.88116
Test:7, Cost:103.34782
Pass:8, Cost:256.58975
Test:8, Cost:110.39152
Pass:9, Cost:351.81763
Test:9, Cost:102.23664
Pass:10, Cost:283.22528
Test:10, Cost:28.30433
Pass:11, Cost:168.48587
Test:11, Cost:13.69916
Pass:12, Cost:151.80196
Test:12, Cost:62.51043
Pass:13, Cost:270.92618
Test:13, Cost:86.74022
Pass:14, Cost:277.52686
Test:14, Cost:102.50578
Pass:15, Cost:138.70033
Test:15, Cost:7.86415
Pass:16, Cost:165.45930
Test:16, Cost:64.38410
Pass:17, Cost:152.47154
Test:17, Cost:29.15284
Pass:18, Cost:138.93571
Test:18, Cost:10.48840
Pass:19, Cost:119.68816
Test:19, Cost:29.43134
Pass:20, Cost:197.25444
Test:20, Cost:24.20947
Pass:21, Cost:160.16647
Test:21, Cost:9.46981
Pass:22, Cost:119.94437
Test:22, Cost:44.95092
Pass:23, Cost:123.78200
Test:23, Cost:50.50562
Pass:24, Cost:124.72739
Test:24, Cost:12.59006
Pass:25, Cost:87.68204
Test:25, Cost:1.80894
Pass:26, Cost:120.34269
Test:26, Cost:128.54480
Pass:27, Cost:92.00354
Test:27, Cost:12.28447
Pass:28, Cost:31.87757
Test:28, Cost:97.24959
Pass:29, Cost:53.11855
Test:29, Cost:29.35019
Pass:30, Cost:125.88458
Test:30, Cost:16.12622
Pass:31, Cost:74.63180
Test:31, Cost:13.28822
Pass:32, Cost:31.88729
Test:32, Cost:43.91414
Pass:33, Cost:130.80821
Test:33, Cost:25.24403
Pass:34, Cost:75.27191
Test:34, Cost:12.12042
Pass:35, Cost:93.65819
Test:35, Cost:10.82217
Pass:36, Cost:115.08681
Test:36, Cost:14.19905
Pass:37, Cost:53.95051
Test:37, Cost:13.51565
Pass:38, Cost:63.14687
Test:38, Cost:25.04268
Pass:39, Cost:15.51875
Test:39, Cost:16.89660
Pass:40, Cost:34.37993
Test:40, Cost:7.67218
Pass:41, Cost:105.88936
Test:41, Cost:73.32098
Pass:42, Cost:43.80605
Test:42, Cost:41.20872
Pass:43, Cost:28.96686
Test:43, Cost:0.36368
Pass:44, Cost:113.72699
Test:44, Cost:4.48252
Pass:45, Cost:133.08170
Test:45, Cost:10.91978
Pass:46, Cost:70.03806
Test:46, Cost:48.56998
Pass:47, Cost:68.39425
Test:47, Cost:2.08680
Pass:48, Cost:133.85884
Test:48, Cost:1.99625
Pass:49, Cost:48.71880
Test:49, Cost:19.31082
save models to /home/aistudio/work/fit_a_line.inference.model

Step 5: Model prediction

(1) Create an Executor for inference

In[12]

infer_exe = fluid.Executor(place)     # create the executor used for inference
inference_scope = fluid.core.Scope()  # Scope: the variable scope used for inference

(2) Define a helper that visualizes the ground-truth values against the predicted values

In[13]

infer_results = []
groud_truths = []

# plot ground-truth values against predicted values
def draw_infer_result(groud_truths, infer_results):
    title = 'Boston'
    plt.title(title, fontsize=24)
    x = np.arange(1, 20)
    y = x
    plt.plot(x, y)  # reference line y = x
    plt.xlabel('ground truth', fontsize=14)
    plt.ylabel('infer result', fontsize=14)
    plt.scatter(groud_truths, infer_results, color='green', label='training cost')
    plt.grid()
    plt.show()

(3) Run the prediction

Through fluid.io.load_inference_model, the predictor reads the trained model back from model_save_dir and uses it to make predictions on data it has never seen before.

In[14]

with fluid.scope_guard(inference_scope):  # switch the global/default scope; all runtime variables are allocated in the new scope
    # load the inference model from the given directory
    [inference_program,   # the program used for inference
     feed_target_names,   # names of the variables the inference program must be fed
     fetch_targets] = fluid.io.load_inference_model(  # fetch_targets: the inference outputs
                          model_save_dir,  # model_save_dir: the directory the model was saved to
                          infer_exe)       # infer_exe: the executor used for inference
    # get the data to predict on
    infer_reader = paddle.batch(paddle.dataset.uci_housing.test(),  # the uci_housing test data
                                batch_size=200)                     # read one batch of up to 200 samples from the test data
    # split x and y out of the test data
    test_data = next(infer_reader())
    test_x = np.array([data[0] for data in test_data]).astype("float32")
    test_y = np.array([data[1] for data in test_data]).astype("float32")
    results = infer_exe.run(inference_program,                             # the inference program
                            feed={feed_target_names[0]: np.array(test_x)}, # feed the x values to predict on
                            fetch_list=fetch_targets)                      # fetch the predictions
    print("infer results: (House Price)")
    for idx, val in enumerate(results[0]):
        print("%d: %.2f" % (idx, val))
        infer_results.append(val)
    print("ground truth:")
    for idx, val in enumerate(test_y):
        print("%d: %.2f" % (idx, val))
        groud_truths.append(val)
    draw_infer_result(groud_truths, infer_results)
 
infer results: (House Price)
0: 15.05
1: 15.20
2: 15.07
3: 16.41
4: 15.53
5: 16.07
6: 15.84
7: 15.56
8: 13.82
9: 15.49
10: 13.18
11: 14.53
12: 15.07
13: 14.68
14: 14.66
15: 15.43
16: 16.23
17: 16.14
18: 16.56
19: 15.37
20: 16.00
21: 14.77
22: 16.37
23: 15.77
24: 15.60
25: 15.12
26: 16.18
27: 16.04
28: 16.87
29: 15.89
30: 15.74
31: 15.16
32: 15.32
33: 14.41
34: 14.04
35: 15.54
36: 15.59
37: 16.00
38: 16.18
39: 16.03
40: 15.12
41: 14.89
42: 15.99
43: 16.22
44: 16.16
45: 15.94
46: 15.59
47: 16.29
48: 16.37
49: 16.62
50: 15.47
51: 15.68
52: 15.33
53: 15.57
54: 16.32
55: 16.65
56: 16.31
57: 16.69
58: 16.80
59: 17.07
60: 17.30
61: 17.21
62: 15.79
63: 16.29
64: 16.90
65: 17.38
66: 17.09
67: 17.48
68: 17.44
69: 17.73
70: 16.37
71: 15.98
72: 16.69
73: 15.57
74: 16.52
75: 17.01
76: 18.03
77: 18.27
78: 18.41
79: 18.29
80: 17.80
81: 18.08
82: 17.24
83: 17.80
84: 17.32
85: 16.63
86: 16.02
87: 17.39
88: 18.01
89: 21.00
90: 21.09
91: 20.90
92: 19.93
93: 20.66
94: 20.87
95: 20.43
96: 20.56
97: 21.70
98: 21.49
99: 21.83
100: 21.74
101: 21.52
ground truth:
0: 8.50
1: 5.00
2: 11.90
3: 27.90
4: 17.20
5: 27.50
6: 15.00
7: 17.20
8: 17.90
9: 16.30
10: 7.00
11: 7.20
12: 7.50
13: 10.40
14: 8.80
15: 8.40
16: 16.70
17: 14.20
18: 20.80
19: 13.40
20: 11.70
21: 8.30
22: 10.20
23: 10.90
24: 11.00
25: 9.50
26: 14.50
27: 14.10
28: 16.10
29: 14.30
30: 11.70
31: 13.40
32: 9.60
33: 8.70
34: 8.40
35: 12.80
36: 10.50
37: 17.10
38: 18.40
39: 15.40
40: 10.80
41: 11.80
42: 14.90
43: 12.60
44: 14.10
45: 13.00
46: 13.40
47: 15.20
48: 16.10
49: 17.80
50: 14.90
51: 14.10
52: 12.70
53: 13.50
54: 14.90
55: 20.00
56: 16.40
57: 17.70
58: 19.50
59: 20.20
60: 21.40
61: 19.90
62: 19.00
63: 19.10
64: 19.10
65: 20.10
66: 19.90
67: 19.60
68: 23.20
69: 29.80
70: 13.80
71: 13.30
72: 16.70
73: 12.00
74: 14.60
75: 21.40
76: 23.00
77: 23.70
78: 25.00
79: 21.80
80: 20.60
81: 21.20
82: 19.10
83: 20.60
84: 15.20
85: 7.00
86: 8.10
87: 13.60
88: 20.10
89: 21.80
90: 24.50
91: 23.10
92: 19.70
93: 18.30
94: 21.20
95: 17.50
96: 16.80
97: 22.40
98: 20.60
99: 23.90
100: 22.00
101: 11.90
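As a quick sanity check on the numbers above, you could compute the mean squared error between the inferred prices and the ground truth. This small add-on is mine, not part of the original notebook, and it simply reuses the lists filled in the prediction cell:

import numpy as np

# infer_results and groud_truths were populated in the prediction cell above
pred = np.array(infer_results, dtype="float32").flatten()
true = np.array(groud_truths, dtype="float32").flatten()
print("test MSE: %.4f" % np.mean((pred - true) ** 2))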
