
[Face Recognition] A Virtual VTuber Experiment with a Flask Front End


Contents

I. Implementing the virtual VTuber

(1) Reading the video stream

(2) Drawing the facial landmarks

(3) Computing facial features and drawing the avatar

(4) Full code

II. Building the front-end web page

(1) Creating the database

(2) Importing the VTuber code and saving the data to the database

(3) Drawing a static ECharts line chart

(4) Drawing a dynamic Ajax line chart (not finished)


        This experiment uses face recognition to render a live face as a simple animated VTuber avatar, and publishes the head-pose data on a Flask front end. You will need a computer with a built-in or external webcam, plus three files: shape_predictor_68_face_landmarks.dat, Chart.min.js, and echarts.min.js. All three can be downloaded from the network drive below.

Link: https://pan.baidu.com/s/1-nQkAFq9TJU999lNrydYXQ?pwd=u15u
Access code: u15u

 

I. Implementing the virtual VTuber

(1) Reading the video stream

        My editor is PyCharm. First, use OpenCV to read the video stream from the built-in or an external camera and play it in a window. When the script runs, the camera turns on automatically and starts capturing frames.

```python
# Step1.py
import cv2

cap = cv2.VideoCapture(0)  # 0 = built-in camera; use 1 for an external camera
while True:
    ret, img = cap.read()
    if not ret:
        break
    img = cv2.flip(img, 1)  # mirror the frame horizontally
    cv2.imshow('vtuber', img)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```

(2) Drawing the facial landmarks

        When the script runs, the camera turns on automatically, and the largest face found in each captured frame is located with 68 landmark points.

```python
# Step2.py
import cv2
import numpy as np
import dlib

detector = dlib.get_frontal_face_detector()

def face_positioning(img):
    """Make sure img contains a face and return the largest one."""
    dets = detector(img, 0)
    if not dets:
        return None
    return max(dets, key=lambda det: (det.right() - det.left()) * (det.bottom() - det.top()))

# Dlib landmark predictor; the .dat file lives in the project's data folder
predictor = dlib.shape_predictor('./data/shape_predictor_68_face_landmarks.dat')

def extract_key_points(img, position):
    landmark_shape = predictor(img, position)
    key_points = []
    for i in range(68):
        pos = landmark_shape.part(i)
        key_points.append(np.array([pos.x, pos.y], dtype=np.float32))
    return key_points

if __name__ == '__main__':
    cap = cv2.VideoCapture(0)
    while True:
        ret, img = cap.read()
        img = cv2.flip(img, 1)
        face_position = face_positioning(img)
        if face_position is not None:  # skip frames without a face
            key_points = extract_key_points(img, face_position)
            for i, (px, py) in enumerate(key_points):  # draw the landmark indices
                cv2.putText(img, str(i), (int(px), int(py)),
                            cv2.FONT_HERSHEY_COMPLEX, 0.25, (255, 255, 255))
        cv2.imshow('vtuber', img)
        cv2.waitKey(1)
```

(3) Computing facial features and drawing the avatar

        When the script runs, the camera turns on automatically, the 68 landmark points are located on the largest face, and a simple animated VTuber image is drawn at the same time.

```python
# Step3.py
import cv2
import numpy as np
import dlib

detector = dlib.get_frontal_face_detector()

def face_positioning(img):
    """Make sure img contains a face and return the largest one."""
    dets = detector(img, 0)
    if not dets:
        return None
    return max(dets, key=lambda det: (det.right() - det.left()) * (det.bottom() - det.top()))

# Dlib landmark predictor
predictor = dlib.shape_predictor('./data/shape_predictor_68_face_landmarks.dat')

def extract_key_points(img, position):
    landmark_shape = predictor(img, position)
    key_points = []
    for i in range(68):
        pos = landmark_shape.part(i)
        key_points.append(np.array([pos.x, pos.y], dtype=np.float32))
    return key_points

def generate_points(key_points):
    """Build three construction points: brow center, chin center, nose center."""
    def center(array):
        return sum([key_points[i] for i in array]) / len(array)
    left_brow = [18, 19, 20, 21]
    right_brow = [22, 23, 24, 25]
    chin = [6, 7, 8, 9, 10]
    nose = [29, 30]
    return center(left_brow + right_brow), center(chin), center(nose)

# horizontal and vertical rotation amounts
def generate_features(construction_points):
    brow_center, chin_center, nose_center = construction_points
    mid_edge = brow_center - chin_center
    bevel_edge = brow_center - nose_center
    mid_edge_length = np.linalg.norm(mid_edge)
    horizontal_rotation = np.cross(mid_edge, bevel_edge) / mid_edge_length ** 2
    vertical_rotation = mid_edge @ bevel_edge / mid_edge_length ** 2
    return np.array([horizontal_rotation, vertical_rotation])

# drawing
def draw_image(h_rotation, v_rotation):
    img = np.ones([512, 512], dtype=np.float32)
    face_length = 200
    center = 256, 256
    left_eye = int(220 - h_rotation * face_length), int(249 + v_rotation * face_length)
    right_eye = int(292 - h_rotation * face_length), int(249 + v_rotation * face_length)
    mouth = int(256 - h_rotation * face_length / 2), int(310 + v_rotation * face_length / 2)
    cv2.circle(img, center, 100, 0, 1)
    cv2.circle(img, left_eye, 15, 0, 1)
    cv2.circle(img, right_eye, 15, 0, 1)
    cv2.circle(img, mouth, 5, 0, 1)
    return img

def extract_img_features(img):
    face_position = face_positioning(img)
    if face_position is None:
        cv2.imshow('self', img)
        cv2.waitKey(1)
        return None
    key_points = extract_key_points(img, face_position)
    for i, (p_x, p_y) in enumerate(key_points):
        cv2.putText(img, str(i), (int(p_x), int(p_y)),
                    cv2.FONT_HERSHEY_COMPLEX, 0.25, (255, 255, 255))
    construction_points = generate_points(key_points)
    for i, (p_x, p_y) in enumerate(construction_points):
        cv2.putText(img, str(i), (int(p_x), int(p_y)),
                    cv2.FONT_HERSHEY_COMPLEX, 0.25, (255, 255, 255))
    rotation = generate_features(construction_points)
    cv2.putText(img, str(rotation),
                (int(construction_points[-1][0]), int(construction_points[-1][1])),
                cv2.FONT_HERSHEY_COMPLEX, 0.5, (255, 255, 255))
    cv2.imshow('self', img)
    return rotation

if __name__ == '__main__':
    cap = cv2.VideoCapture(0)
    ORIGIN_FEATURE_GROUP = [-0.00899233, 0.39529446]  # features measured while facing straight ahead
    FEATURE_GROUP = [0, 0]
    while True:
        ret, img = cap.read()
        img = cv2.flip(img, 1)
        NEW_FEATURE_GROUP = extract_img_features(img)
        if NEW_FEATURE_GROUP is not None:
            FEATURE_GROUP = NEW_FEATURE_GROUP - ORIGIN_FEATURE_GROUP
        HORI_ROTATION, VERT_ROTATION = FEATURE_GROUP
        cv2.imshow('vtuber', draw_image(HORI_ROTATION, VERT_ROTATION))
        if cv2.waitKey(1) & 0xFF == ord('q'):
            print("close")
            break
```
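The two rotation features are plain 2D vector algebra: with mid edge m = brow − chin and bevel edge b = brow − nose, the horizontal rotation is cross(m, b)/|m|² and the vertical rotation is m·b/|m|². A standalone check of that arithmetic with made-up coordinates (the three points below are hypothetical, not real landmark output):

```python
import numpy as np

def rotation_features(brow_center, chin_center, nose_center):
    # same arithmetic as generate_features() in Step3.py
    mid_edge = brow_center - chin_center        # brow -> chin axis
    bevel_edge = brow_center - nose_center      # brow -> nose vector
    mid_edge_length = np.linalg.norm(mid_edge)
    horizontal = np.cross(mid_edge, bevel_edge) / mid_edge_length ** 2
    vertical = mid_edge @ bevel_edge / mid_edge_length ** 2
    return float(horizontal), float(vertical)

# hypothetical points: brows above, chin below, nose 10 px right of the mid axis
brow = np.array([0.0, -100.0])
chin = np.array([0.0, 100.0])
nose = np.array([10.0, 0.0])
h, v = rotation_features(brow, chin, nose)
print(h, v)  # -0.05 0.5: the head is turned slightly, nose halfway down the mid edge
```

Shifting the nose point further off the brow-chin axis grows the cross product and hence the horizontal rotation, which is exactly what the avatar's eye and mouth offsets respond to.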

(4) Full code

        Now combine the three steps above, printing some intermediate data to the console so we get a better feel for it. The full code:

```python
# face.py
"""
detect face
"""
import cv2
import numpy as np
import dlib  # detects the facial key points

DETECTOR = dlib.get_frontal_face_detector()
# face landmark model
PREDICTOR = dlib.shape_predictor('./data/shape_predictor_68_face_landmarks.dat')

def face_positioning(img):
    """Locate the face with the largest area."""
    dets = DETECTOR(img, 0)
    if not dets:
        return None
    return max(dets, key=lambda det: (det.right() - det.left()) * (det.bottom() - det.top()))

def extract_key_points(img, position):
    """Extract the 68 key points."""
    landmark_shape = PREDICTOR(img, position)
    key_points = []
    for i in range(68):
        pos = landmark_shape.part(i)
        key_points.append(np.array([pos.x, pos.y], dtype=np.float32))
    return key_points

def generate_points(key_points):
    """Generate the construction points."""
    def center(array):
        return sum([key_points[i] for i in array]) / len(array)
    left_brow = [18, 19, 20, 21]
    right_brow = [22, 23, 24, 25]
    chin = [6, 7, 8, 9, 10]
    nose = [29, 30]
    return center(left_brow + right_brow), center(chin), center(nose)

def generate_features(construction_points):
    """Generate the rotation features."""
    brow_center, chin_center, nose_center = construction_points
    mid_edge = brow_center - chin_center
    bevel_edge = brow_center - nose_center  # the "hypotenuse"
    mid_edge_length = np.linalg.norm(mid_edge)
    # ratio of height to base
    horizontal_rotation = np.cross(mid_edge, bevel_edge) / mid_edge_length ** 2
    # @ is the dot product
    vertical_rotation = mid_edge @ bevel_edge / mid_edge_length ** 2
    return np.array([horizontal_rotation, vertical_rotation])

def draw_image(h_rotation, v_rotation):
    """Draw the avatar face.

    Args:
        h_rotation: horizontal rotation
        v_rotation: vertical rotation
    """
    img = np.ones([512, 512], dtype=np.float32)
    face_length = 200
    center = 256, 256
    left_eye = int(220 - h_rotation * face_length), int(249 + v_rotation * face_length)
    right_eye = int(292 - h_rotation * face_length), int(249 + v_rotation * face_length)
    mouth = int(256 - h_rotation * face_length / 2), int(310 + v_rotation * face_length / 2)
    cv2.circle(img, center, 100, 0, 1)
    cv2.circle(img, left_eye, 15, 0, 1)
    cv2.circle(img, right_eye, 15, 0, 1)
    cv2.circle(img, mouth, 5, 0, 1)
    return img

def extract_img_features(img):
    """Extract the features from one frame."""
    face_position = face_positioning(img)
    print("Bounding box of the largest face:", face_position)
    if face_position is None:
        cv2.imshow('self', img)
        cv2.waitKey(1)
        return None
    key_points = extract_key_points(img, face_position)
    print("Positions of the 68 key points:", key_points)
    for i, (p_x, p_y) in enumerate(key_points):
        cv2.putText(img, str(i), (int(p_x), int(p_y)),
                    cv2.FONT_HERSHEY_COMPLEX, 0.25, (255, 255, 255))
    construction_points = generate_points(key_points)
    print("Positions of the construction points:", construction_points)
    for i, (p_x, p_y) in enumerate(construction_points):
        cv2.putText(img, str(i), (int(p_x), int(p_y)),
                    cv2.FONT_HERSHEY_COMPLEX, 0.25, (255, 255, 255))
    rotation = generate_features(construction_points)
    print("Horizontal and vertical rotation:", rotation)
    cv2.putText(img, str(rotation),
                (int(construction_points[-1][0]), int(construction_points[-1][1])),
                cv2.FONT_HERSHEY_COMPLEX, 0.5, (255, 255, 255))
    cv2.imshow('self', img)
    return rotation

if __name__ == '__main__':
    CAP = cv2.VideoCapture(0)
    # origin feature group, measured facing straight ahead
    ORIGIN_FEATURE_GROUP = [-0.00899233, 0.39529446]
    FEATURE_GROUP = [0, 0]
    while True:
        RETVAL, IMAGE = CAP.read()
        IMAGE = cv2.flip(IMAGE, 1)  # mirror the frame
        NEW_FEATURE_GROUP = extract_img_features(IMAGE)
        if NEW_FEATURE_GROUP is not None:
            FEATURE_GROUP = NEW_FEATURE_GROUP - ORIGIN_FEATURE_GROUP
        HORI_ROTATION, VERT_ROTATION = FEATURE_GROUP
        cv2.imshow('Vtuber', draw_image(HORI_ROTATION, VERT_ROTATION))
        if cv2.waitKey(1) & 0xFF == ord('q'):
            print("close")
            break
```

        The result is a round avatar face whose eyes and mouth follow the head movement (screenshot in the original post).

        With that, we have the simplest possible VTuber experiment. Next, let's combine it with the Flask skills covered earlier to build a front-end web page.

II. Building the front-end web page

        First, create a new project containing a main.py file, a static folder, and a templates folder. Put the three downloaded files into static: shape_predictor_68_face_landmarks.dat goes directly in the folder, while Chart.min.js and echarts.min.js go in a js subfolder (all three can also sit directly in static). In templates, create home.html (home page), image.html (dynamic line chart page), show_data.html (head-pose display page), and virtual_echart.html (static line chart page).
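Assuming default Flask conventions, the resulting project tree looks roughly like this (file placement per the paragraph above):

```text
project/
├── main.py
├── static/
│   ├── shape_predictor_68_face_landmarks.dat
│   └── js/
│       ├── Chart.min.js
│       └── echarts.min.js
└── templates/
    ├── home.html
    ├── image.html
    ├── show_data.html
    └── virtual_echart.html
```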

(1) Creating the database

        First, create a database table to hold the face data we are about to capture. Open Navicat Premium and, inside one of your databases (the code below uses one named pyc), create a table called "face_data" with columns id, face, new1, and new2, with the types and lengths shown in the screenshot. Be sure to make id the primary key and tick "Auto Increment", or the later steps will fail with an error.
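For reference, the equivalent DDL is sketched below. The VARCHAR(255) types are an assumption matching the db.String(255) columns of the Flask-SQLAlchemy model defined later; run it in whatever schema you connect to (the article uses pyc):

```sql
CREATE TABLE face_data (
    id   INT NOT NULL AUTO_INCREMENT,  -- must auto-increment, or inserts fail
    face VARCHAR(255),                 -- bounding box of the largest face
    new1 VARCHAR(255),                 -- horizontal rotation
    new2 VARCHAR(255),                 -- vertical rotation
    PRIMARY KEY (id)
);
```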

(2) Importing the VTuber code and saving the data to the database

        We merge the VTuber code above into the Flask application, modifying it slightly so that the largest-face bounding box, horizontal rotation, and vertical rotation are written to the database. main.py:

```python
# main.py
from flask import Flask, render_template, jsonify
from flask_sqlalchemy import SQLAlchemy
import pymysql

# Connect to the database so we can store and read data later
app = Flask(__name__)
app.config['SECRET_KEY'] = 'hard to guess string'
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+pymysql://root:123456@localhost:3306/pyc'
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
app.config['SQLALCHEMY_COMMIT_ON_TEARDOWN'] = True
app.config['JSON_AS_ASCII'] = False  # return UTF-8 instead of escaped ASCII
db = SQLAlchemy(app)

# Database columns declared as a Flask-SQLAlchemy model
class face_data(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    face = db.Column(db.String(255))  # bounding box of the largest face
    new1 = db.Column(db.String(255))  # horizontal rotation
    new2 = db.Column(db.String(255))  # vertical rotation

# Home page
@app.route("/")
def hello():
    return render_template("home.html")

# The VTuber code from above, slightly modified
@app.route("/save_data", methods=["GET", "POST"])
def save_data():
    """Detect the face and store its data."""
    import cv2
    import numpy as np
    import dlib  # detects the facial key points

    DETECTOR = dlib.get_frontal_face_detector()
    # face landmark model
    PREDICTOR = dlib.shape_predictor('./static/shape_predictor_68_face_landmarks.dat')

    def face_positioning(img):
        """Locate the face with the largest area."""
        dets = DETECTOR(img, 0)
        if not dets:
            return None
        return max(dets, key=lambda det: (det.right() - det.left()) * (det.bottom() - det.top()))

    def extract_key_points(img, position):
        """Extract the 68 key points."""
        landmark_shape = PREDICTOR(img, position)
        key_points = []
        for i in range(68):
            pos = landmark_shape.part(i)
            key_points.append(np.array([pos.x, pos.y], dtype=np.float32))
        return key_points

    def generate_points(key_points):
        """Generate the construction points."""
        def center(array):
            return sum([key_points[i] for i in array]) / len(array)
        left_brow = [18, 19, 20, 21]
        right_brow = [22, 23, 24, 25]
        chin = [6, 7, 8, 9, 10]
        nose = [29, 30]
        return center(left_brow + right_brow), center(chin), center(nose)

    def generate_features(construction_points):
        """Generate the rotation features."""
        brow_center, chin_center, nose_center = construction_points
        mid_edge = brow_center - chin_center
        bevel_edge = brow_center - nose_center  # the "hypotenuse"
        mid_edge_length = np.linalg.norm(mid_edge)
        # ratio of height to base
        horizontal_rotation = np.cross(mid_edge, bevel_edge) / mid_edge_length ** 2
        # @ is the dot product
        vertical_rotation = mid_edge @ bevel_edge / mid_edge_length ** 2
        return np.array([horizontal_rotation, vertical_rotation])

    def draw_image(h_rotation, v_rotation):
        """Draw the avatar face.

        Args:
            h_rotation: horizontal rotation
            v_rotation: vertical rotation
        """
        img = np.ones([512, 512], dtype=np.float32)
        face_length = 200
        center = 256, 256
        left_eye = int(220 - h_rotation * face_length), int(249 + v_rotation * face_length)
        right_eye = int(292 - h_rotation * face_length), int(249 + v_rotation * face_length)
        mouth = int(256 - h_rotation * face_length / 2), int(310 + v_rotation * face_length / 2)
        cv2.circle(img, center, 100, 0, 1)
        cv2.circle(img, left_eye, 15, 0, 1)
        cv2.circle(img, right_eye, 15, 0, 1)
        cv2.circle(img, mouth, 5, 0, 1)
        return img

    def extract_img_features(img):
        """Extract the features from one frame."""
        face_position = face_positioning(img)
        print("Bounding box of the largest face:", face_position)
        if face_position is None:
            cv2.imshow('self', img)
            cv2.waitKey(1)
            return None, None
        key_points = extract_key_points(img, face_position)
        print("Positions of the 68 key points:", key_points)
        for i, (p_x, p_y) in enumerate(key_points):
            cv2.putText(img, str(i), (int(p_x), int(p_y)),
                        cv2.FONT_HERSHEY_COMPLEX, 0.25, (255, 255, 255))
        construction_points = generate_points(key_points)
        print("Positions of the construction points:", construction_points)
        for i, (p_x, p_y) in enumerate(construction_points):
            cv2.putText(img, str(i), (int(p_x), int(p_y)),
                        cv2.FONT_HERSHEY_COMPLEX, 0.25, (255, 255, 255))
        rotation = generate_features(construction_points)
        print("Horizontal and vertical rotation:", rotation)
        cv2.putText(img, str(rotation),
                    (int(construction_points[-1][0]), int(construction_points[-1][1])),
                    cv2.FONT_HERSHEY_COMPLEX, 0.5, (255, 255, 255))
        cv2.imshow('self', img)
        return face_position, rotation

    CAP = cv2.VideoCapture(0 + cv2.CAP_DSHOW)
    # origin feature group, measured facing straight ahead
    ORIGIN_FEATURE_GROUP = [-0.00899233, 0.39529446]
    FEATURE_GROUP = [0, 0]
    while True:
        RETVAL, IMAGE = CAP.read()
        IMAGE = cv2.flip(IMAGE, 1)  # mirror the frame
        face_position, NEW_FEATURE_GROUP = extract_img_features(IMAGE)
        if NEW_FEATURE_GROUP is not None:
            FEATURE_GROUP = NEW_FEATURE_GROUP - ORIGIN_FEATURE_GROUP
            HORI_ROTATION, VERT_ROTATION = FEATURE_GROUP
            res = face_data()            # one new record
            res.face = face_position
            res.new1 = NEW_FEATURE_GROUP[0]
            res.new2 = NEW_FEATURE_GROUP[1]
            db.session.add(res)          # stage the record
            db.session.commit()          # write it to the database
        if cv2.waitKey(1) & 0xFF == ord('q'):
            print("close")
            break
    return "Data saved successfully!"

# Send the database rows to the front end
@app.route('/show_data')
def show_data():
    conn = pymysql.connect(host='localhost', user='root', password='123456',
                           database='pyc', charset='utf8')
    cursor = conn.cursor()
    cursor.execute("Select * from face_data")
    rs = list(cursor.fetchall())
    print(rs[0:10])
    return render_template("show_data.html", rs=rs)

if __name__ == '__main__':
    app.run(debug=True)
```

        Next, edit the home page template, home.html:

```html
<!-- templates/home.html -->
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Virtual VTuber Home</title>
</head>
<body>
    <h1 align="center" style="font-family: 楷体">Virtual VTuber Demo</h1>
    <p><a href="{{ url_for('save_data') }}" rel="external nofollow"
          style="top: 200px; text-decoration: none; font-size: 24px; font-family: 楷体">Capture VTuber face data</a></p>
    <p><a href="{{ url_for('show_data') }}" rel="external nofollow"
          style="top: 30px; text-decoration: none; font-size: 24px; font-family: 楷体">Show face data</a></p>
</body>
</html>
```

        Edit the template that displays the database rows, show_data.html:

```html
<!-- templates/show_data.html -->
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>VTuber Data</title>
</head>
<body>
    <h1>Largest-face bounding box, horizontal rotation, and vertical rotation</h1>
    {% for r in rs %}
        {{ r }}<br>
    {% endfor %}
</body>
</html>
```

        Now run the program. First click "Capture VTuber face data"; once the camera view and the avatar window appear, press q to stop. http://127.0.0.1:5000/save_data then shows "Data saved successfully!", confirming the face data has been stored in the database. Checking in Navicat Premium, the rows are indeed there.

        Then click "Show face data" on the home page, and http://127.0.0.1:5000/show_data lists everything stored in the database.

(3) Drawing a static ECharts line chart

        Add the following to main.py:

```python
# main.py (continued)
class Mysql(object):
    def __init__(self):
        try:
            self.conn = pymysql.connect(host='localhost', user='root', password='123456',
                                        database='pyc', charset='utf8')
            self.cursor = self.conn.cursor()  # cursor used to run MySQL statements from Python
            print("Connected to the database")
        except pymysql.MySQLError:
            print("Connection failed")

    def getItems(self):
        sql = "select id, new1, new2 from face_data"  # read the face_data table
        self.cursor.execute(sql)
        items = self.cursor.fetchall()  # fetch every result row
        return items

@app.route('/line')
def line():
    db = Mysql()
    items = db.getItems()
    return render_template('virtual_echart.html', items=items)
```

        And edit virtual_echart.html as follows:

```html
<!-- templates/virtual_echart.html -->
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>VTuber Static Line Chart</title>
    <script src="/static/js/echarts.min.js"></script>
    <style>
        #main {
            width: 1200px;
            height: 400px;
            position: relative;
            left: 0px;
            top: 80px;
        }
    </style>
</head>
<body>
<h1 align="center" style="font-family:楷体;font-size: 36px">VTuber Static Line Chart</h1>
<p style="position:absolute;right:80px;top:100px">裴雨晨&&赵艺瑶</p>
<div id="main"></div>
<script type="text/javascript">
    // Initialize an ECharts instance on the prepared DOM node
    var myChart = echarts.init(document.getElementById('main'));
    // Chart options and data
    var option = {
        title: {
            text: ''
        },
        legend: {
            data: ['Horizontal rotation', 'Vertical rotation']
        },
        dataZoom: [{
            type: 'slider',
            show: true,        // false hides the slider
            xAxisIndex: [0],
            left: '9%',        // offset of the slider from the left
            bottom: -5,
            start: 1,          // where the zoom window starts
            end: 40            // where the zoom window ends (percent of the x axis)
        }],
        xAxis: {
            name: "Index",
            type: 'category',
            data: [
                {% for item in items %}
                    "{{ item[0] }}",
                {% endfor %}
            ]
        },
        yAxis: {
            name: "Horizontal and vertical rotation",
            type: 'value',
            axisLabel: {
                formatter: '{value} '
            }
        },
        series: [
            {
                name: 'Horizontal rotation',
                type: 'line',  // 'line' for a line chart, 'bar' for a bar chart
                data: [
                    {% for item in items %}
                        "{{ item[1] }}",
                    {% endfor %}
                ],
                itemStyle: {normal: {color: "#31b0d5"}}
            },
            {
                name: 'Vertical rotation',
                type: 'line',
                data: [
                    {% for item in items %}
                        "{{ item[2] }}",
                    {% endfor %}
                ],
                itemStyle: {normal: {color: "#ff0000"}}
            }
        ]
    };
    myChart.setOption(option);  // draw the chart
</script>
</body>
</html>
```

        Add the following to the body of home.html:

```html
<!-- templates/home.html -->
<p><a href="{{ url_for('line') }}" rel="external nofollow"
      style="top: 400px; text-decoration: none; font-size: 24px; font-family: 楷体">Show static line chart</a></p>
```

        Run again and click "Show static line chart" on the home page; http://127.0.0.1:5000/line then shows a line chart drawn with ECharts, reflecting the horizontal and vertical rotation of the head.

(4) Drawing a dynamic Ajax line chart (not finished)

        Add the following to main.py:

```python
# main.py (continued)
@app.route('/show_test')
def show_test():
    return render_template('image.html')

@app.route('/show_test/setData/')
def setData():
    conn = pymysql.connect(host='localhost', user='root', password='123456',
                           database='pyc', charset='utf8')
    cursor = conn.cursor()
    cursor.execute("Select count(*) from face_data")
    number = cursor.fetchall()
    cursor.execute("Select new1 from face_data")
    new1 = list(cursor.fetchall())
    for i in range(0, number[0][0]):
        data = {'id': i, 'h': new1[i]}
        print(data)
    return jsonify(data)  # note: only the last row survives the loop
```

        And edit image.html in the templates folder as follows:

```html
<!-- templates/image.html -->
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>VTuber Dynamic Line Chart</title>
</head>
<body>
<h1>Dynamic line chart of the horizontal rotation</h1>
<canvas id="panel" height="350px" width="700px"></canvas> <!-- chart area -->
<script src="https://code.jquery.com/jquery-3.3.1.min.js"></script> <!-- load jQuery -->
<script src="/static/js/Chart.min.js"></script> <!-- load Chart.js -->
<script>
    $(function () {
        var can = $('#panel').get(0).getContext('2d'); /* 2D drawing context */
        // initial chart data
        var canData = {
            labels: ["a", "b", "c", "d", "e", "f"], /* initial x-axis labels */
            datasets: [
                {
                    fillColor: "rgba(255,255,255,0.1)",  // fill under the line
                    strokeColor: "rgba(255,255,0,1)",    // line color
                    data: [0.01, 0.03, -0.02, 0.01, 0.05, -0.04]  // initial y values
                }
            ]
        };
        // draw the chart
        var line = new Chart(can).Line(canData);
        var int = setInterval(function () {  // poll once per second
            $.ajax({
                url: "/show_test/setData/",  // fetch data from the setData view
                type: "get",
                data: "",
                success: function (data) {
                    line.addData(
                        [data["h"]],  // y values; one x tick can carry several lines
                        data["id"]    // x tick
                    );
                    // keep only 8 points on the x axis so it does not grow forever
                    // var len = line.datasets[0].points.length;
                    // if (len > 8) {
                    //     line.removeData()
                    // }
                }
            })
        }, 1000)
    })
</script>
</body>
</html>
```

        Run again and click "Show dynamic line chart" on the home page, and http://127.0.0.1:5000/show_test shows the Ajax-driven dynamic line chart. I based this part on the following article, which pushes one randomly generated number per second to the front end; I tried instead to replay the data already stored in the database, but the back end only ever delivered the last row: https://blog.csdn.net/weixin_39561473/article/details/86608661

        The reason is that my code runs a for loop over the data before handing anything to the front end, so by the time it returns it is inevitably left holding the last row. Moving the return statement inside the loop does not help either: every Ajax poll invokes the whole setData view again, so the loop counter restarts from zero on each request. The logic of this part is therefore fundamentally flawed; for lack of time I have shelved it, and I am not sure whether it can work in this form. Perhaps someone can answer this in the comments~
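One possible way around this (a sketch only, not tested against the original database) is to stop looping server-side and return the whole stored series in a single JSON array, letting the page iterate client-side. The `fetch_new1_rows` helper below is a hypothetical stand-in for the pymysql query, so the route logic can be shown on its own:

```python
from flask import Flask, jsonify

app = Flask(__name__)

def fetch_new1_rows():
    # hypothetical stub for: "Select new1 from face_data"
    return [0.01, 0.03, -0.02]

@app.route('/show_test/setData/')
def set_data():
    rows = fetch_new1_rows()
    # return the whole series at once instead of returning inside a loop
    return jsonify([{'id': i, 'h': h} for i, h in enumerate(rows)])
```

The front-end success callback would then loop over the returned array and call line.addData once per element, instead of expecting one point per poll.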
