
Face liveness detection + face recognition: blink and mouth-open detection

Part 1: The dlib shape_predictor_68_face_landmarks model

This model detects 68 facial landmarks, locating the regions of interest (ROIs) in an image: the eyes, eyebrows, nose, mouth, and jawline.

  1. Jawline [1, 17]
  2. Left eyebrow [18, 22]
  3. Right eyebrow [23, 27]
  4. Nose bridge [28, 31]
  5. Lower nose [32, 36]
  6. Left eye [37, 42]
  7. Right eye [43, 48]
  8. Outer edge of the upper lip [49, 55]
  9. Inner edge of the upper lip [66, 68]
  10. Outer edge of the lower lip [56, 60]
  11. Inner edge of the lower lip [61, 65]

These ranges are 1-based; subtract 1 when indexing in code, since array indices start at 0.
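For example, the 1-based ranges above translate into 0-based NumPy slices like this (a small sketch; `shape` is a stand-in for the 68×2 landmark array that `face_utils.shape_to_np` produces later):

```python
import numpy as np

# Stand-in for a real 68x2 landmark array from the predictor
shape = np.arange(68 * 2).reshape(68, 2)

# 1-based range [37, 42] for the left eye becomes the 0-based slice [36:42]
left_eye = shape[36:42]
# 1-based range [49, 55] for the outer upper lip becomes [48:55]
upper_lip_outer = shape[48:55]

print(left_eye.shape)         # (6, 2) -- six left-eye points
print(upper_lip_outer.shape)  # (7, 2) -- seven outer-upper-lip points
```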

Part 2: Blink detection

Principle: compute the eye aspect ratio (EAR). While the eye is open, the EAR fluctuates around some value; when the eye closes, the EAR drops sharply and in theory approaches zero, although current landmark models are not precise enough to reach it. We therefore treat the eye as closed whenever the EAR falls below a threshold. To count blinks, we also bound the number of consecutive below-threshold frames that make up a single blink; blinking is fast, typically finished within 1 to 3 frames. Both thresholds should be tuned to the actual setup.
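A quick worked example of the EAR formula on hand-made eye coordinates (illustrative values only, not real detector output):

```python
import numpy as np

def eye_aspect_ratio(eye):
    # EAR = (|p1-p5| + |p2-p4|) / (2 * |p0-p3|)
    A = np.linalg.norm(eye[1] - eye[5])
    B = np.linalg.norm(eye[2] - eye[4])
    C = np.linalg.norm(eye[0] - eye[3])
    return (A + B) / (2.0 * C)

# Made-up coordinates: six points around an open eye and a nearly closed one
open_eye = np.array([(0, 3), (2, 5), (4, 5), (6, 3), (4, 1), (2, 1)], dtype=float)
closed_eye = np.array([(0, 3), (2, 3.2), (4, 3.2), (6, 3), (4, 2.8), (2, 2.8)], dtype=float)

print(round(eye_aspect_ratio(open_eye), 2))    # 0.67 -- well above a 0.15 threshold
print(round(eye_aspect_ratio(closed_eye), 2))  # 0.07 -- below the threshold, eye closed
```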

Implementation:

    from imutils import face_utils
    import numpy as np
    import dlib
    import cv2

    # Eye aspect ratio
    def eye_aspect_ratio(eye):
        # (|e1-e5| + |e2-e4|) / (2|e0-e3|)
        A = np.linalg.norm(eye[1] - eye[5])
        B = np.linalg.norm(eye[2] - eye[4])
        C = np.linalg.norm(eye[0] - eye[3])
        ear = (A + B) / (2.0 * C)
        return ear

    # Liveness detection (blink)
    def liveness_detection():
        vs = cv2.VideoCapture(0)  # open the first camera
        # EAR threshold
        EAR_THRESH = 0.15
        EAR_CONSEC_FRAMES_MIN = 1
        EAR_CONSEC_FRAMES_MAX = 3  # how many consecutive below-threshold frames count as one blink
        # consecutive frames the eyes have been closed
        blink_counter = 0
        # total number of blinks
        blink_total = 0
        print("[INFO] loading facial landmark predictor...")
        # face detector
        detector = dlib.get_frontal_face_detector()
        # landmark predictor
        predictor = dlib.shape_predictor("model/shape_predictor_68_face_landmarks.dat")
        # landmark indices for the left and right eye
        (lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
        (rStart, rEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]
        print("[INFO] starting video stream thread...")
        while True:
            flag, frame = vs.read()  # grab one frame
            if not flag:
                print("Camera not available", flag)
                break
            if frame is not None:
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # convert to grayscale
                rects = detector(gray, 0)  # detect faces
                # only handle exactly one face
                if len(rects) == 1:
                    shape = predictor(gray, rects[0])  # dlib full_object_detection holding the 68 landmarks
                    shape = face_utils.shape_to_np(shape)  # convert to a NumPy array of (x, y) coordinates
                    left_eye = shape[lStart:lEnd]  # left-eye landmarks
                    right_eye = shape[rStart:rEnd]  # right-eye landmarks
                    left_ear = eye_aspect_ratio(left_eye)  # left-eye EAR
                    right_ear = eye_aspect_ratio(right_eye)  # right-eye EAR
                    ear = (left_ear + right_ear) / 2.0  # average the two EARs
                    left_eye_hull = cv2.convexHull(left_eye)  # left-eye contour
                    right_eye_hull = cv2.convexHull(right_eye)  # right-eye contour
                    # mouth_hull = cv2.convexHull(mouth)  # mouth contour
                    cv2.drawContours(frame, [left_eye_hull], -1, (0, 255, 0), 1)  # draw the left-eye contour
                    cv2.drawContours(frame, [right_eye_hull], -1, (0, 255, 0), 1)  # draw the right-eye contour
                    # EAR below threshold: eyes may be closing, count the frame
                    if ear < EAR_THRESH:
                        blink_counter += 1
                    # EAR back above threshold: if the closed-eye run is in range, it was a blink
                    else:
                        if EAR_CONSEC_FRAMES_MIN <= blink_counter <= EAR_CONSEC_FRAMES_MAX:
                            blink_total += 1
                        blink_counter = 0
                    cv2.putText(frame, "Blinks: {}".format(blink_total), (0, 30),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                    cv2.putText(frame, "EAR: {:.2f}".format(ear), (300, 30),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                elif len(rects) == 0:
                    cv2.putText(frame, "No face!", (0, 30),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                else:
                    cv2.putText(frame, "More than one face!", (0, 30),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                cv2.namedWindow("Frame", cv2.WINDOW_NORMAL)
                cv2.imshow("Frame", frame)
                # press q to quit (the window must have focus)
                if cv2.waitKey(1) & 0xFF == ord('q'):
                    break
        cv2.destroyAllWindows()
        vs.release()

    liveness_detection()
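The blink-counting branch of the loop can be isolated into a pure function and sanity-checked on a synthetic EAR sequence (a sketch using the same thresholds as above; the numbers are made up):

```python
def count_blinks(ear_values, thresh=0.15, min_frames=1, max_frames=3):
    """Count blinks in a sequence of per-frame EAR values."""
    blink_counter = 0  # consecutive frames below the threshold
    blink_total = 0
    for ear in ear_values:
        if ear < thresh:
            blink_counter += 1
        else:
            # only a closed-eye run of a plausible length counts as a blink
            if min_frames <= blink_counter <= max_frames:
                blink_total += 1
            blink_counter = 0
    return blink_total

# Two short dips below 0.15 count as blinks; the final 5-frame closure
# exceeds max_frames and is rejected (e.g. eyes held shut, or a photo attack)
ears = [0.3, 0.3, 0.1, 0.08, 0.3, 0.3, 0.1, 0.3, 0.05, 0.05, 0.05, 0.05, 0.05, 0.3]
print(count_blinks(ears))  # 2
```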

Part 3: Mouth-open detection

Principle: analogous to blink detection, compute the mouth aspect ratio (MAR). When the MAR exceeds a set threshold, the mouth is considered open.

1. One mouth-open action is counted when the mouth opens and then closes again. The relevant state:

    mar                  # mouth aspect ratio for the current frame
    MAR_THRESH = 0.2     # MAR threshold
    mouth_status_open    # mouth state, initialized to closed (0)

When `mar` exceeds the threshold, the mouth is considered open; an open followed by a close counts as one mouth-open action.

    # count one mouth-open action via an open-then-close transition
    if mar > MAR_THRESH:
        mouth_status_open = 1
    else:
        if mouth_status_open:
            mouth_total += 1
            mouth_status_open = 0
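This open-then-close rule can be pulled into a small helper and checked on a made-up MAR sequence (a sketch; the values are illustrative only):

```python
def count_mouth_opens(mar_values, thresh=0.2):
    """Count open-then-close mouth actions in a sequence of per-frame MAR values."""
    mouth_status_open = 0  # 0 = closed, 1 = open
    mouth_total = 0
    for mar in mar_values:
        if mar > thresh:
            mouth_status_open = 1
        else:
            # the action is counted on the transition back to closed
            if mouth_status_open:
                mouth_total += 1
                mouth_status_open = 0
    return mouth_total

# The mouth opens and closes twice -> two actions
mars = [0.05, 0.4, 0.5, 0.05, 0.05, 0.6, 0.05]
print(count_mouth_opens(mars))  # 2
```

Note that a mouth left open at the end of the sequence is not counted, since the action is only registered once the mouth closes again.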

2. Computing the mouth aspect ratio

    # Mouth aspect ratio
    def mouth_aspect_ratio(mouth):
        A = np.linalg.norm(mouth[1] - mouth[7])  # landmarks 61, 67
        B = np.linalg.norm(mouth[3] - mouth[5])  # landmarks 63, 65
        C = np.linalg.norm(mouth[0] - mouth[4])  # landmarks 60, 64
        mar = (A + B) / (2.0 * C)
        return mar

The outer lip edge was used originally, but pouting was then also judged as an open mouth, so the inner lip edge is used instead, which is more accurate.

The `mouth` indices here depend on whether you take "mouth" or "inner_mouth"; since the contour to be drawn is the inner mouth, "inner_mouth" is used:

 (mStart, mEnd) = face_utils.FACIAL_LANDMARKS_IDXS["inner_mouth"]

Step into the source of `face_utils.FACIAL_LANDMARKS_IDXS` and you can see that each facial region has its own index range, so the mouth landmark indices depend on which key you use.

(In these 0-based ranges, the left value is the inclusive start index and the right value is exclusive.) "mouth" is (48, 68) and "inner_mouth" is (60, 68), so "mouth" contains "inner_mouth". If you take the "mouth" slice instead, the MAR computation becomes:

    # Mouth aspect ratio (indices relative to the full "mouth" slice)
    def mouth_aspect_ratio(mouth):
        # (|m13-m19| + |m15-m17|) / (2|m12-m16|)
        A = np.linalg.norm(mouth[13] - mouth[19])  # landmarks 61, 67
        B = np.linalg.norm(mouth[15] - mouth[17])  # landmarks 63, 65
        C = np.linalg.norm(mouth[12] - mouth[16])  # landmarks 60, 64
        mar = (A + B) / (2.0 * C)
        return mar
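The only difference between the two versions is the slice origin: subtracting the slice start from the 0-based landmark numbers recovers the indices used in each version. A quick check:

```python
# 0-based landmark numbers of the inner-lip points used in the MAR formula
inner_points = [61, 63, 65, 67, 60, 64]

mouth_start = 48  # "mouth" slice covers landmarks 48..67
inner_start = 60  # "inner_mouth" slice covers landmarks 60..67

print([p - mouth_start for p in inner_points])  # [13, 15, 17, 19, 12, 16]
print([p - inner_start for p in inner_points])  # [1, 3, 5, 7, 0, 4]
```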

3. Full implementation

    from imutils import face_utils
    import numpy as np
    import dlib
    import cv2

    # Mouth aspect ratio
    def mouth_aspect_ratio(mouth):
        A = np.linalg.norm(mouth[1] - mouth[7])  # landmarks 61, 67
        B = np.linalg.norm(mouth[3] - mouth[5])  # landmarks 63, 65
        C = np.linalg.norm(mouth[0] - mouth[4])  # landmarks 60, 64
        mar = (A + B) / (2.0 * C)
        return mar

    # Liveness detection (mouth-open)
    def liveness_detection():
        vs = cv2.VideoCapture(0)  # open the first camera
        # MAR threshold
        MAR_THRESH = 0.2
        # total number of mouth-open actions
        mouth_total = 0
        # mouth state, initialized to closed
        mouth_status_open = 0
        print("[INFO] loading facial landmark predictor...")
        # face detector
        detector = dlib.get_frontal_face_detector()
        # landmark predictor
        predictor = dlib.shape_predictor("model/shape_predictor_68_face_landmarks.dat")
        # landmark indices for the inner mouth
        (mStart, mEnd) = face_utils.FACIAL_LANDMARKS_IDXS["inner_mouth"]
        print("[INFO] starting video stream thread...")
        while True:
            flag, frame = vs.read()  # grab one frame
            if not flag:
                print("Camera not available", flag)
                break
            if frame is not None:
                # convert to grayscale (removes color noise, makes detection more reliable)
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                rects = detector(gray, 0)  # detect faces
                # only handle exactly one face
                if len(rects) == 1:
                    shape = predictor(gray, rects[0])  # dlib full_object_detection holding the 68 landmarks
                    shape = face_utils.shape_to_np(shape)  # convert to a NumPy array of (x, y) coordinates
                    inner_mouth = shape[mStart:mEnd]  # inner-mouth landmarks
                    mar = mouth_aspect_ratio(inner_mouth)  # compute the MAR
                    mouth_hull = cv2.convexHull(inner_mouth)  # inner-mouth contour
                    cv2.drawContours(frame, [mouth_hull], -1, (0, 255, 0), 1)  # draw the mouth contour
                    # count one mouth-open action via an open-then-close transition
                    if mar > MAR_THRESH:
                        mouth_status_open = 1
                    else:
                        if mouth_status_open:
                            mouth_total += 1
                            mouth_status_open = 0
                    cv2.putText(frame, "Mouth: {}".format(mouth_total),
                                (130, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                    cv2.putText(frame, "MAR: {:.2f}".format(mar), (450, 30),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                elif len(rects) == 0:
                    cv2.putText(frame, "No face!", (0, 30),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                else:
                    cv2.putText(frame, "More than one face!", (0, 30),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                cv2.namedWindow("Frame", cv2.WINDOW_NORMAL)
                cv2.imshow("Frame", frame)
                # press q to quit (the window must have focus)
                if cv2.waitKey(1) & 0xFF == ord('q'):
                    break
        cv2.destroyAllWindows()
        vs.release()

    liveness_detection()

Part 4: Blink and mouth-open detection combined (camera)

    from imutils import face_utils
    import numpy as np
    import dlib
    import cv2

    # Eye aspect ratio
    def eye_aspect_ratio(eye):
        # (|e1-e5| + |e2-e4|) / (2|e0-e3|)
        A = np.linalg.norm(eye[1] - eye[5])
        B = np.linalg.norm(eye[2] - eye[4])
        C = np.linalg.norm(eye[0] - eye[3])
        ear = (A + B) / (2.0 * C)
        return ear

    # Mouth aspect ratio
    def mouth_aspect_ratio(mouth):
        A = np.linalg.norm(mouth[1] - mouth[7])  # landmarks 61, 67
        B = np.linalg.norm(mouth[3] - mouth[5])  # landmarks 63, 65
        C = np.linalg.norm(mouth[0] - mouth[4])  # landmarks 60, 64
        mar = (A + B) / (2.0 * C)
        return mar

    # Liveness detection (blink + mouth-open)
    def liveness_detection():
        vs = cv2.VideoCapture(0)  # open the first camera
        # EAR threshold and blink frame bounds
        EAR_THRESH = 0.15
        EAR_CONSEC_FRAMES_MIN = 1
        EAR_CONSEC_FRAMES_MAX = 5  # how many consecutive below-threshold frames count as one blink
        # MAR threshold
        MAR_THRESH = 0.2
        # consecutive frames the eyes have been closed
        blink_counter = 0
        # total number of blinks
        blink_total = 0
        # total number of mouth-open actions
        mouth_total = 0
        # mouth state, initialized to closed
        mouth_status_open = 0
        print("[INFO] loading facial landmark predictor...")
        # face detector
        detector = dlib.get_frontal_face_detector()
        # landmark predictor
        predictor = dlib.shape_predictor("model/shape_predictor_68_face_landmarks.dat")
        # landmark indices for the eyes and the inner mouth
        (lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
        (rStart, rEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]
        (mStart, mEnd) = face_utils.FACIAL_LANDMARKS_IDXS["inner_mouth"]
        print("[INFO] starting video stream thread...")
        while True:
            flag, frame = vs.read()  # grab one frame
            if not flag:
                print("Camera not available", flag)
                break
            if frame is not None:
                # convert to grayscale (removes color noise, makes detection more reliable)
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                rects = detector(gray, 0)  # detect faces
                # only handle exactly one face
                if len(rects) == 1:
                    shape = predictor(gray, rects[0])  # dlib full_object_detection holding the 68 landmarks
                    shape = face_utils.shape_to_np(shape)  # convert to a NumPy array of (x, y) coordinates
                    left_eye = shape[lStart:lEnd]  # left-eye landmarks
                    right_eye = shape[rStart:rEnd]  # right-eye landmarks
                    left_ear = eye_aspect_ratio(left_eye)  # left-eye EAR
                    right_ear = eye_aspect_ratio(right_eye)  # right-eye EAR
                    ear = (left_ear + right_ear) / 2.0  # average the two EARs
                    inner_mouth = shape[mStart:mEnd]  # inner-mouth landmarks
                    mar = mouth_aspect_ratio(inner_mouth)  # compute the MAR
                    left_eye_hull = cv2.convexHull(left_eye)  # left-eye contour
                    right_eye_hull = cv2.convexHull(right_eye)  # right-eye contour
                    mouth_hull = cv2.convexHull(inner_mouth)  # inner-mouth contour
                    cv2.drawContours(frame, [left_eye_hull], -1, (0, 255, 0), 1)
                    cv2.drawContours(frame, [right_eye_hull], -1, (0, 255, 0), 1)
                    cv2.drawContours(frame, [mouth_hull], -1, (0, 255, 0), 1)
                    # EAR below threshold: eyes may be closing, count the frame
                    if ear < EAR_THRESH:
                        blink_counter += 1
                    # EAR back above threshold: if the closed-eye run is in range, it was a blink
                    else:
                        if EAR_CONSEC_FRAMES_MIN <= blink_counter <= EAR_CONSEC_FRAMES_MAX:
                            blink_total += 1
                        blink_counter = 0
                    # count one mouth-open action via an open-then-close transition
                    if mar > MAR_THRESH:
                        mouth_status_open = 1
                    else:
                        if mouth_status_open:
                            mouth_total += 1
                            mouth_status_open = 0
                    cv2.putText(frame, "Blinks: {}".format(blink_total), (0, 30),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                    cv2.putText(frame, "Mouth: {}".format(mouth_total),
                                (130, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                    cv2.putText(frame, "EAR: {:.2f}".format(ear), (300, 30),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                    cv2.putText(frame, "MAR: {:.2f}".format(mar), (450, 30),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                elif len(rects) == 0:
                    cv2.putText(frame, "No face!", (0, 30),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                else:
                    cv2.putText(frame, "More than one face!", (0, 30),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                cv2.namedWindow("Frame", cv2.WINDOW_NORMAL)
                cv2.imshow("Frame", frame)
                # press q to quit (the window must have focus)
                if cv2.waitKey(1) & 0xFF == ord('q'):
                    break
        cv2.destroyAllWindows()
        vs.release()

    # run blink + mouth-open liveness detection from the camera
    liveness_detection()

Part 5: Liveness detection from a video file

The main difference is the frame source: instead of reading the frames one by one from a camera stream, they are read one by one from a video file.

1. Reading frames from the camera:

    import cv2
    from PIL import Image, ImageDraw
    import numpy as np

    # 1. Open the camera
    # 2. Read frames from it
    # 3. Draw text onto each frame
    # 4. Save the last frame on exit
    cap = cv2.VideoCapture(0)  # open the first camera
    while True:
        flag, frame = cap.read()  # grab one frame
        # flag: True if a frame was read, False otherwise; frame: the image data
        # cv2 stores images as BGR while PIL uses RGB, so convert between the two
        img_PIL = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        draw = ImageDraw.Draw(img_PIL)
        draw.text((100, 100), 'press q to exit', fill=(255, 255, 255))
        # convert the PIL image back to cv2's format
        frame = cv2.cvtColor(np.array(img_PIL), cv2.COLOR_RGB2BGR)
        cv2.imshow('capture', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            cv2.imwrite('images/out.jpg', frame)
            break
    cap.release()

2. Reading frames from a video file:

    import cv2
    from PIL import Image, ImageDraw
    import numpy as np

    # 1. Open the video file
    # 2. Read frames from it
    # 3. Draw text onto each frame
    # 4. Save the last frame on exit
    cap = cv2.VideoCapture(r'video\face13.mp4')  # open the video file
    while True:
        flag, frame = cap.read()  # grab one frame
        if not flag:
            break
        if frame is not None:
            # cv2 stores images as BGR while PIL uses RGB, so convert between the two
            img_PIL = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            draw = ImageDraw.Draw(img_PIL)
            draw.text((100, 100), 'press q to exit', fill=(255, 255, 255))
            # convert the PIL image back to cv2's format
            frame = cv2.cvtColor(np.array(img_PIL), cv2.COLOR_RGB2BGR)
            cv2.imshow('capture', frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                cv2.imwrite('images/out.jpg', frame)
                break
    cv2.destroyAllWindows()
    cap.release()

Part 6: Face recognition plus liveness detection on video

1. Principle

As soon as one blink or one mouth-open action is observed, the subject is judged to be live. A frame of the face is saved and compared against the reference photo of the person to be verified; the resulting similarity score decides whether they are the same person. To speed things up, liveness is only checked on every second frame.
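The comparison step relies on the `simcos` helper defined in the code below, which turns the L2 distance between two 128-dimensional face encodings into a similarity score, sim = 1 / (1 + dist). A quick check with made-up low-dimensional vectors (stand-ins for real encodings):

```python
import numpy as np

def simcos(a, b):
    # distance-based similarity (not cosine, despite the name):
    # identical encodings score 1.0, and the score falls toward 0 as distance grows
    dist = np.linalg.norm(np.array(a) - np.array(b))
    return 1.0 / (1.0 + dist)

# Made-up 4-d stand-ins for 128-d face encodings
enc_same = [0.1, 0.2, 0.3, 0.4]
enc_near = [0.1, 0.2, 0.3, 0.5]  # close -> high similarity
enc_far = [0.9, 0.8, 0.1, 0.0]   # far -> low similarity

print(simcos(enc_same, enc_same))  # 1.0
print(simcos(enc_same, enc_near) > simcos(enc_same, enc_far))  # True
```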

2. Implementation

    import face_recognition
    from imutils import face_utils
    import numpy as np
    import dlib
    import cv2
    import sys

    # total number of blinks
    blink_total = 0
    # total number of mouth-open actions
    mouth_total = 0
    # where to save the captured face frame
    pic_path = r'images\viode_face.jpg'
    # number of frames saved so far
    pic_total = 0
    # consecutive frames the eyes have been closed
    blink_counter = 0
    # mouth state, initialized to closed
    mouth_status_open = 0

    def getFaceEncoding(src):
        image = face_recognition.load_image_file(src)  # load the face image
        # face locations as [(top, right, bottom, left)]
        face_locations = face_recognition.face_locations(image)
        img_ = image[face_locations[0][0]:face_locations[0][2], face_locations[0][3]:face_locations[0][1]]
        img_ = cv2.cvtColor(img_, cv2.COLOR_BGR2RGB)
        # display(img_)
        face_encoding = face_recognition.face_encodings(image, face_locations)[0]  # encode the face
        return face_encoding

    def simcos(a, b):
        a = np.array(a)
        b = np.array(b)
        dist = np.linalg.norm(a - b)  # L2 norm
        sim = 1.0 / (1.0 + dist)  # distance-based similarity
        return sim

    # external comparison interface; prints the similarity of two face images
    def comparison(face_src1, face_src2):
        xl1 = getFaceEncoding(face_src1)
        xl2 = getFaceEncoding(face_src2)
        value = simcos(xl1, xl2)
        print(value)

    # Eye aspect ratio
    def eye_aspect_ratio(eye):
        # (|e1-e5| + |e2-e4|) / (2|e0-e3|)
        A = np.linalg.norm(eye[1] - eye[5])
        B = np.linalg.norm(eye[2] - eye[4])
        C = np.linalg.norm(eye[0] - eye[3])
        ear = (A + B) / (2.0 * C)
        return ear

    # Mouth aspect ratio
    def mouth_aspect_ratio(mouth):
        A = np.linalg.norm(mouth[1] - mouth[7])  # landmarks 61, 67
        B = np.linalg.norm(mouth[3] - mouth[5])  # landmarks 63, 65
        C = np.linalg.norm(mouth[0] - mouth[4])  # landmarks 60, 64
        mar = (A + B) / (2.0 * C)
        return mar

    # Liveness detection (blink + mouth-open) on the video at video_path
    def liveness_detection():
        global blink_total  # declare globals so the counters can be updated here
        global mouth_total
        global pic_total
        global blink_counter
        global mouth_status_open
        # EAR threshold and blink frame bounds
        EAR_THRESH = 0.15
        EAR_CONSEC_FRAMES_MIN = 1
        EAR_CONSEC_FRAMES_MAX = 5  # how many consecutive below-threshold frames count as one blink
        # MAR threshold
        MAR_THRESH = 0.2
        # face detector
        detector = dlib.get_frontal_face_detector()
        # landmark predictor
        predictor = dlib.shape_predictor("model/shape_predictor_68_face_landmarks.dat")
        # landmark indices for the eyes and the inner mouth
        (lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
        (rStart, rEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]
        (mStart, mEnd) = face_utils.FACIAL_LANDMARKS_IDXS["inner_mouth"]
        vs = cv2.VideoCapture(video_path)
        # total number of frames
        frames = vs.get(cv2.CAP_PROP_FRAME_COUNT)
        frames_total = int(frames)
        for i in range(frames_total):
            ok, frame = vs.read()  # read one frame of the video
            if not ok:
                break
            if frame is not None and i % 2 == 0:  # only check every second frame, for speed
                # convert to grayscale (removes color noise, makes detection more reliable)
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                rects = detector(gray, 0)  # detect faces
                # only handle exactly one face
                if len(rects) == 1:
                    if pic_total == 0:
                        cv2.imwrite(pic_path, frame)  # save the first face frame for comparison
                        cv2.waitKey(1)
                        pic_total += 1
                    shape = predictor(gray, rects[0])  # dlib full_object_detection holding the 68 landmarks
                    shape = face_utils.shape_to_np(shape)  # convert to a NumPy array of (x, y) coordinates
                    left_eye = shape[lStart:lEnd]  # left-eye landmarks
                    right_eye = shape[rStart:rEnd]  # right-eye landmarks
                    left_ear = eye_aspect_ratio(left_eye)  # left-eye EAR
                    right_ear = eye_aspect_ratio(right_eye)  # right-eye EAR
                    ear = (left_ear + right_ear) / 2.0  # average the two EARs
                    mouth = shape[mStart:mEnd]  # inner-mouth landmarks
                    mar = mouth_aspect_ratio(mouth)  # compute the MAR
                    # EAR below threshold: eyes may be closing, count the frame
                    if ear < EAR_THRESH:
                        blink_counter += 1
                    # EAR back above threshold: if the closed-eye run is in range, it was a blink
                    else:
                        if EAR_CONSEC_FRAMES_MIN <= blink_counter <= EAR_CONSEC_FRAMES_MAX:
                            blink_total += 1
                        blink_counter = 0
                    # count one mouth-open action via an open-then-close transition
                    if mar > MAR_THRESH:
                        mouth_status_open = 1
                    else:
                        if mouth_status_open:
                            mouth_total += 1
                            mouth_status_open = 0
                elif len(rects) == 0 and i == 90:
                    print("No face!")
                    break
                elif len(rects) > 1:
                    print("More than one face!")
                # one blink or one mouth-open action is enough to pass; exit the loop
                if blink_total >= 1 or mouth_total >= 1:
                    break
        cv2.destroyAllWindows()
        vs.release()

    # video_path, src = sys.argv[1], sys.argv[2]
    video_path = r'video\face13.mp4'  # path to the input video
    # src = r'C:\Users\666\Desktop\zz5.jpg'
    liveness_detection()
    print("Blinks >>", blink_total)
    print("Mouth opens >>", mouth_total)
    # comparison(pic_path, src)

Part 7: Related code

The download includes usage examples for all of the face_recognition library's features, together with all of the dlib-based face-recognition code shown above.

Blink + mouth-open liveness detection and face recognition with dlib and face_recognition, covering camera and video input — Python resource, CSDN download

References:

使用dlib人脸检测模型进行人脸活体检测:眨眼+张口 — Lee_01's blog, CSDN

python dlib学习(十一):眨眼检测 — hongbin_xu's blog, CSDN

Python开发系统实战项目:人脸识别门禁监控系统 — 闭关修炼——暂退's blog, CSDN
