
Developing a Real-Time PM2.5 Prediction System Based on Deep Learning

Dear reader: I am glad my article is being read, but writing and editing original content is not easy, so if you repost it, please cite the source and include a hyperlink to this article as well as to my blog: https://blog.csdn.net/vensmallzeng. If you find this article useful, please give it a like to encourage me. I thank every reader; to contact me, note my email: zengzenghe@gmail.com. Thank you for your cooperation!

Today I went to Capital Normal University for the Agricultural Bank of China software development written exam: two hours, four types of questions. Overall it did not feel difficult, but many questions hit blind spots in my knowledge, so I still need to strengthen the fundamentals; the saying "weak foundations shake the ground" really does hold. This time I want to properly summarize the science and technology fund project I applied for. The fund was set up by my university to encourage research and teach students to write proposals; it seems to support about 100 projects a year at 4000 RMB per year. I think the university does this very well and deserves big credit. Back to the point: below is a short summary of how I brought the fund project to a successful conclusion.

I. Building the Dataset

Image source: Beijing Realtime Weather Photos

Source of each image's corresponding PM2.5 reading: http://aqicn.org/city/beijing/us-embassy/cn/

After collecting the images and matching them with their corresponding PM2.5 readings, I obtained a dataset of several tens of thousands of samples, with each sample named "index + PM2.5 concentration". The dataset is illustrated in the figure below.
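As an illustration only, here is a minimal sketch of how such renaming might be scripted; raw_dir, out_dir, and pm25_values are hypothetical names, not the exact pipeline used here, and the sketch puts the PM2.5 value in the leading field of the file name because the load_data function later in this post parses its label from that field:

    import os
    import shutil

    # Hypothetical sketch: copy raw photos into the training folder, encoding each
    # image's PM2.5 reading in the file name so it can be parsed back out later.
    def build_dataset(raw_dir, out_dir, pm25_values):
        os.makedirs(out_dir, exist_ok=True)
        jpgs = sorted(f for f in os.listdir(raw_dir) if f.lower().endswith('.jpg'))
        for idx, (fname, pm25) in enumerate(zip(jpgs, pm25_values)):
            # leading field = PM2.5 label, trailing field = sample index
            shutil.copy(os.path.join(raw_dir, fname),
                        os.path.join(out_dir, '%d_%d.jpg' % (pm25, idx)))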

II. Building the Prediction Model

Since the dataset is not large enough to train a VGG16 network well from scratch, a review of the relevant literature showed that this problem can be solved through transfer learning. The plan has two steps. Step one: reproduce the VGG16 network pre-trained on the ImageNet dataset. Step two: following the transfer learning idea, fine-tune VGG16 on our training dataset.

The purpose of step one is to understand what the VGG16 architecture looks like; the table below is the network description given by the paper's authors.

The figure below shows the actual network structure:
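For reference, the standard VGG16 (configuration D) takes a 224x224x3 input and stacks five convolutional blocks of 3x3 filters (two layers of 64 channels, two of 128, three of 256, three of 512, and three of 512, each block followed by 2x2 max-pooling), then three fully connected layers (4096, 4096, 1000) and a final softmax.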

Step two takes the original VGG16 network and achieves transfer learning by fine-tuning only the final fully connected layer's parameters. Before fine-tuning, you need to download the pre-trained vgg16.npy file; if you have not downloaded it yet, see "vgg16.npy_免费高速下载|百度网盘-分享无限制". Below is a walkthrough of the transfer-learning VGG16 source code.
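Before diving into the code, it can help to sanity-check the downloaded weights. A minimal sketch (the relative path 'vgg16.npy' assumes the file sits in the working directory):

    import numpy as np

    # List the layers stored in the pre-trained weight file: each dict entry
    # maps a layer name (e.g. 'conv1_1', 'fc6') to its [weights, biases] pair.
    params = np.load('vgg16.npy', encoding='latin1', allow_pickle=True).item()
    for name in sorted(params.keys()):
        print(name, params[name][0].shape, params[name][1].shape)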

1. Import the required packages

    import os
    import numpy as np
    import tensorflow as tf
    import skimage.io
    import skimage.transform

2. Load the images and resize them

    # Load the images and their PM2.5 labels
    def load_data():
        imgs = {'training': []}
        labels = []
        for k in imgs.keys():
            data_dir = 'C:/PycharmFiles/Tensorflow_VGG_PM2.5/for_transfer_learning/data/' + k
            for file in os.listdir(data_dir):
                if not file.lower().endswith('.jpg'):
                    continue
                try:
                    resized_img = load_img(os.path.join(data_dir, file))
                except OSError:
                    continue
                imgs[k].append(resized_img)  # each element has shape [1, height, width, depth]
                # parse the label from the leading field of the file name
                label = int(file.split("_")[0])
                labels.append(label)
                if len(imgs[k]) == 400:  # only use 400 imgs to reduce memory load
                    break
        ys = np.array([[label] for label in labels])  # shape [n, 1]
        xs = np.concatenate(imgs['training'], axis=0)  # shape [n, 224, 224, 3]
        return xs, ys

    # Center-crop an image to a square, then resize it
    def load_img(path):
        img = skimage.io.imread(path)
        img = img / 255.0
        # crop the image from the center
        short_edge = min(img.shape[:2])
        yy = int((img.shape[0] - short_edge) / 2)
        xx = int((img.shape[1] - short_edge) / 2)
        crop_img = img[yy: yy + short_edge, xx: xx + short_edge]
        # resize to 224x224; [None] adds a batch dimension -> shape [1, 224, 224, 3]
        resized_img = skimage.transform.resize(crop_img, (224, 224))[None, :, :, :]
        return resized_img
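As a quick sanity check of the loader, something like the following can be run; the file name below is hypothetical:

    # Expect a single 224x224 RGB image with a leading batch dimension.
    img = load_img('C:/PycharmFiles/Tensorflow_VGG_PM2.5/for_transfer_learning/data/training/85_0.jpg')
    print(img.shape)  # (1, 224, 224, 3)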

3. Fine-tune VGG16, training only the final fully connected layers

    class Vgg16:
        vgg_mean = [103.939, 116.779, 123.68]  # per-channel BGR means of the ImageNet training set

        def __init__(self, vgg16_npy_path=None, restore_from=None):
            # pre-trained parameters
            try:
                self.data_dict = np.load(vgg16_npy_path, encoding='latin1', allow_pickle=True).item()
            except FileNotFoundError:
                print('Please download VGG16 parameters from here https://mega.nz/#!YU1FWJrA!O1ywiCS2IiOlUCtCpI6HTJOMrneN-Qdv3ywQP5poecM\nOr from my Baidu Cloud: https://pan.baidu.com/s/1Spps1Wy0bvrQHH2IMkRfpg')

            self.tfx = tf.placeholder(tf.float32, [None, 224, 224, 3])
            self.tfy = tf.placeholder(tf.float32, [None, 1])

            # convert RGB to BGR and subtract the channel means
            red, green, blue = tf.split(axis=3, num_or_size_splits=3, value=self.tfx * 255.0)
            bgr = tf.concat(axis=3, values=[
                blue - self.vgg_mean[0],
                green - self.vgg_mean[1],
                red - self.vgg_mean[2],
            ])

            # pre-trained VGG layers are kept fixed during fine-tuning
            conv1_1 = self.conv_layer(bgr, "conv1_1")
            conv1_2 = self.conv_layer(conv1_1, "conv1_2")
            pool1 = self.max_pool(conv1_2, 'pool1')
            conv2_1 = self.conv_layer(pool1, "conv2_1")
            conv2_2 = self.conv_layer(conv2_1, "conv2_2")
            pool2 = self.max_pool(conv2_2, 'pool2')
            conv3_1 = self.conv_layer(pool2, "conv3_1")
            conv3_2 = self.conv_layer(conv3_1, "conv3_2")
            conv3_3 = self.conv_layer(conv3_2, "conv3_3")
            pool3 = self.max_pool(conv3_3, 'pool3')
            conv4_1 = self.conv_layer(pool3, "conv4_1")
            conv4_2 = self.conv_layer(conv4_1, "conv4_2")
            conv4_3 = self.conv_layer(conv4_2, "conv4_3")
            pool4 = self.max_pool(conv4_3, 'pool4')
            conv5_1 = self.conv_layer(pool4, "conv5_1")
            conv5_2 = self.conv_layer(conv5_1, "conv5_2")
            conv5_3 = self.conv_layer(conv5_2, "conv5_3")
            pool5 = self.max_pool(conv5_3, 'pool5')

            # detach the original VGG fc layers and build new ones for regression
            self.flatten = tf.reshape(pool5, [-1, 7 * 7 * 512])
            self.fc6 = tf.layers.dense(self.flatten, 256, tf.nn.relu, name='fc6')
            self.out = tf.layers.dense(self.fc6, 1, name='out')  # a single PM2.5 value

            self.sess = tf.Session()
            if restore_from:
                saver = tf.train.Saver()
                saver.restore(self.sess, restore_from)
            else:  # training graph
                self.loss = tf.losses.mean_squared_error(labels=self.tfy, predictions=self.out)
                self.train_op = tf.train.RMSPropOptimizer(0.001).minimize(self.loss)
                self.sess.run(tf.global_variables_initializer())

        def max_pool(self, bottom, name):
            return tf.nn.max_pool(bottom, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name=name)

        def conv_layer(self, bottom, name):
            with tf.variable_scope(name):  # the filters are constants, NOT Variables that can be trained
                conv = tf.nn.conv2d(bottom, self.data_dict[name][0], [1, 1, 1, 1], padding='SAME')
                lout = tf.nn.relu(tf.nn.bias_add(conv, self.data_dict[name][1]))
                return lout

        def train(self, x, y):
            loss, _ = self.sess.run([self.loss, self.train_op], {self.tfx: x, self.tfy: y})
            return loss

        def predict(self, paths):
            for path in paths:
                x = load_img(path)
                pm25 = self.sess.run(self.out, {self.tfx: x})
                print("%s's prediction result: %s" % (path, pm25))

        def save(self, path='C:/PycharmFiles/Tensorflow_VGG_PM2.5/for_transfer_learning/model/transfer_learn'):
            saver = tf.train.Saver()
            saver.save(self.sess, path, write_meta_graph=False)
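A design point worth noting: the convolutional filters come straight from self.data_dict as constants rather than tf.Variables, so only fc6 and out contribute trainable variables, and the optimizer therefore updates just the new fully connected layers. A short sketch to verify this:

    # Build the graph once, then list what the optimizer can actually update.
    vgg = Vgg16(vgg16_npy_path='C:/PycharmFiles/Tensorflow_VGG_PM2.5/for_transfer_learning/vgg16.npy')
    for v in tf.trainable_variables():
        print(v.name, v.shape)  # expect only fc6/* and out/* kernels and biases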

4. Train: set the number of epochs to 1000 and the batch size to 10, and print the loss for each epoch

    def train():
        xs, ys = load_data()
        vgg = Vgg16(vgg16_npy_path='C:/PycharmFiles/Tensorflow_VGG_PM2.5/for_transfer_learning/vgg16.npy')
        print('Net built')
        for i in range(1000):
            # sample a random batch of 10 images
            b_idx = np.random.randint(0, len(xs), 10)
            train_loss = vgg.train(xs[b_idx], ys[b_idx])
            print(i, 'train loss: ', train_loss)
        vgg.save('C:/PycharmFiles/Tensorflow_VGG_PM2.5/for_transfer_learning/model/transfer_learn')  # save the learned fc layers

5. Predict with the trained model: input an image of any size, output a predicted PM2.5 value

    def eval():
        vgg = Vgg16(vgg16_npy_path='C:/PycharmFiles/Tensorflow_VGG_PM2.5/for_transfer_learning/vgg16.npy',
                    restore_from='C:/PycharmFiles/Tensorflow_VGG_PM2.5/for_transfer_learning/model/transfer_learn')
        fpaths = []
        test_dir = 'C:/PycharmFiles/Tensorflow_VGG_PM2.5/for_transfer_learning/data/test'
        for file in os.listdir(test_dir):
            fpaths.append(os.path.join(test_dir, file))
        vgg.predict(fpaths)
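A typical entry point (not part of the original post) would run training first and evaluation afterwards:

    if __name__ == '__main__':
        train()
        # eval()  # run in a fresh process afterwards to predict on data/test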

III. Porting the Model to Mobile

When I first ported the TensorFlow model to mobile, I followed many online tutorials and took plenty of detours, but in the end I successfully deployed the best-performing VGG16 network to an Android Studio mobile client, so that taking a photo of the surroundings with a phone yields a prediction of the ambient PM2.5 concentration. Because this part is tied to my master's thesis, I cannot publish it before graduating, but I can point you to the blog post "一步步做一个数字手势识别APP_手势识别app代码_天泽28的博客-CSDN博客" as a reference to save you some detours. Thank you for your understanding.
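Since that code is not published here, a hedged sketch of the usual TensorFlow 1.x route to Android: freeze the fine-tuned graph into a single .pb file that the TensorFlow Android inference library can load. The output node name 'out/BiasAdd' is an assumption inferred from the `out` dense layer above, not taken from the original post:

    import tensorflow as tf

    # Restore the fine-tuned model and freeze its variables into constants.
    vgg = Vgg16(vgg16_npy_path='C:/PycharmFiles/Tensorflow_VGG_PM2.5/for_transfer_learning/vgg16.npy',
                restore_from='C:/PycharmFiles/Tensorflow_VGG_PM2.5/for_transfer_learning/model/transfer_learn')
    frozen_def = tf.graph_util.convert_variables_to_constants(
        vgg.sess, vgg.sess.graph.as_graph_def(), output_node_names=['out/BiasAdd'])
    with tf.gfile.GFile('vgg16_pm25_frozen.pb', 'wb') as f:
        f.write(frozen_def.SerializeToString())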

Accumulating day by day and progressing together with you; a small summary from Zengzeng, to be continued.
