
A Detailed Tutorial on (Video) Facial Emotion Feature Extraction [Python]


The full code is available via the download link at the end of this article!

1 Facial Feature Extraction Algorithms

Facial feature extraction algorithms fall broadly into two families: methods based on static images and methods based on dynamic images (image sequences). Static-image methods can be further divided into holistic and local approaches, while dynamic-image methods divide into optical-flow, model-based, and geometry-based approaches.

When designing expression features, researchers observed that the production and expression of emotion is largely reflected in changes of the facial organs: the major facial organs and the wrinkles around them are exactly the regions where expression information concentrates. Marking feature points on these regions and computing the distances between the points and the curvature of the curves they lie on therefore yields a geometric way of extracting facial expressions. Reference [1] represents faces under different expressions with a deformable mesh and uses the displacement of mesh node coordinates between the first frame of a sequence and the frame of maximal expression as geometric features for expression recognition.

The feature extraction method implemented in this tutorial is the geometry-based approach.

2 Facial Action Coding System

To capture the richness and complexity of facial expressions objectively, behavioral scientists found it necessary to develop objective coding standards. The Facial Action Coding System (FACS) is one of the most widely used expression coding systems in behavioral science. Developed by Ekman and Friesen, FACS is a comprehensive, objective method for coding facial expressions. Trained FACS coders decompose a facial expression according to the apparent intensity of 46 component actions, which roughly correspond to individual facial muscles. These elementary actions are called Action Units (AUs) and can be thought of as the "phonemes" of facial expression.

Simply put, the AUs here are the "facial emotion features".

For a detailed introduction to AUs, see this link:
http://www.360doc.com/content/15/0128/13/10690471_444446832.shtml

3 Implementation

3.1 Image Cropping

Initialize the face detector with dlib.get_frontal_face_detector(), use it to locate the face in the image, and then crop the image so that only the face region remains.

import cv2
import dlib
import numpy as np

# Initialize the face detector and the 68-point landmark predictor
# ("shape_predictor_68_face_landmarks.dat" is dlib's standard pre-trained model)
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# `image` is the input BGR frame; boost local contrast with CLAHE
# before detection to make the detector more robust to lighting
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
clahe_image = clahe.apply(gray)
detections = detector(clahe_image, 1)
for k, d in enumerate(detections):
    shape = predictor(clahe_image, d)
    xlist = []
    ylist = []
    for i in range(68):
        # Draw each landmark and record its coordinates
        cv2.circle(clahe_image, (shape.part(i).x, shape.part(i).y), 1, (0, 0, 255), thickness=2)
        xlist.append(float(shape.part(i).x))
        ylist.append(float(shape.part(i).y))

    xmean = np.mean(xlist)
    ymean = np.mean(ylist)
    x_max = np.max(xlist)
    x_min = np.min(xlist)
    y_max = np.max(ylist)
    y_min = np.min(ylist)
    # Extend the box upward by a third of the centroid-to-top distance
    # so that the forehead is kept in the crop
    cv2.rectangle(clahe_image, (int(x_min), int(y_min - ((ymean - y_min) / 3))), (int(x_max), int(y_max)),
                  (255, 150, 0), 2)
    # Mark the landmark centroid
    cv2.circle(clahe_image, (int(xmean), int(ymean)), 1, (0, 255, 255), thickness=2)

    x_start = int(x_min)
    y_start = int(y_min - ((ymean - y_min) / 3))
    w = int(x_max) - x_start
    h = int(y_max) - y_start

    # Crop the face region (including forehead) from the original image
    crop_img = image[y_start:y_start + h, x_start:x_start + w]

3.2 Extracting Facial Landmark Coordinates

Initialize the landmark predictor with dlib.shape_predictor(); it extracts the 68 facial landmark coordinates.

from PIL import Image  # Pillow, for high-quality resizing

# ...continuing inside the same function, after the crop from section 3.1
if len(detections) > 0:
    # Normalize the face to a fixed 255x255 so that landmark
    # coordinates are comparable across frames
    mywidth = 255
    hsize = 255
    cv2.imwrite('crop_img.png', crop_img)
    img = Image.open('crop_img.png')
    img = img.resize((mywidth, hsize), Image.LANCZOS)  # Image.LANCZOS replaces the removed Image.ANTIALIAS
    img.save('resized.png')

    # Re-detect the face and re-predict the landmarks on the resized crop
    image_resized = cv2.imread('resized.png')
    gray = cv2.cvtColor(image_resized, cv2.COLOR_BGR2GRAY)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    clahe_image = clahe.apply(gray)
    detections = detector(clahe_image, 1)
    for k, d in enumerate(detections):
        shape = predictor(clahe_image, d)
        xlist = []
        ylist = []

        for i in range(68):
            cv2.circle(clahe_image, (shape.part(i).x, shape.part(i).y), 1, (0, 0, 255), thickness=2)
            xlist.append(float(shape.part(i).x))
            ylist.append(float(shape.part(i).y))

        xmean = np.mean(xlist)
        ymean = np.mean(ylist)
        x_max = np.max(xlist)
        x_min = np.min(xlist)
        y_max = np.max(ylist)
        y_min = np.min(ylist)
        cv2.rectangle(clahe_image, (int(x_min), int(y_min)), (int(x_max), int(y_max)), (255, 150, 0), 2)
        cv2.circle(clahe_image, (int(xmean), int(ymean)), 1, (0, 255, 255), thickness=2)

        xlist = np.array(xlist, dtype=np.float64)
        ylist = np.array(ylist, dtype=np.float64)

    # Return empty arrays when no face is found in the resized crop
    if len(detections) > 0:
        return xlist, ylist
    else:
        return np.array([]), np.array([])
else:
    # No face detected in the original frame
    return np.array([]), np.array([])
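
Sections 3.1 and 3.2 together form the landmark extraction routine that section 3.4 calls as get_landmarks(). Here is a minimal usage sketch, assuming the two snippets above are wrapped in a single def get_landmarks(image): and that test_face.png is an image of your own:

image = cv2.imread("test_face.png")  # illustrative file name
xlist, ylist = get_landmarks(image)
if len(xlist) > 0:
    print("68 landmarks found; first point:", xlist[0], ylist[0])
else:
    print("No face detected.")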


3.3 Extracting AUs

From the 68 landmark coordinates, the AUs are computed geometrically: each AU corresponds to a short curve through a subset of the landmarks, the curve is densified with linear_interpolation(), and its average curvature, computed by get_average_curvature(), is taken as the feature value.

# xlist and ylist are the NumPy landmark arrays returned by get_landmarks().
# Each AU is described by a short curve through a subset of the 68 dlib
# landmarks (0-based indices); comments give the standard FACS name of each AU.
AU_regions = [
    ('AU1_1', [19, 20, 21]),              # inner brow raiser, left
    ('AU1_2', [22, 23, 24]),              # inner brow raiser, right
    ('AU2_1', [17, 18, 19]),              # outer brow raiser, left
    ('AU2_2', [24, 25, 26]),              # outer brow raiser, right
    ('AU5_1', [36, 37, 38, 39]),          # upper lid raiser, left
    ('AU5_2', [42, 43, 44, 45]),          # upper lid raiser, right
    ('AU7_1', [39, 40, 41, 36]),          # lid tightener, left
    ('AU7_2', [46, 47, 42]),              # lid tightener, right
    ('AU9',   [31, 32, 33, 34, 35]),      # nose wrinkler
    ('AU10',  [48, 49, 50, 52, 53, 54]),  # upper lip raiser
    ('AU12_1', [48, 60, 67]),             # lip corner puller, left
    ('AU12_2', [54, 64, 65]),             # lip corner puller, right
    ('AU20',  [55, 56, 57, 58, 59]),      # lip stretcher
]

AU_feature = []
for name, idx in AU_regions:
    # Densify the landmark curve, then take its average curvature as the feature
    au_x, au_y = linear_interpolation(xlist[idx], ylist[idx])
    AU_feature.append(get_average_curvature(au_x, au_y))

# Min-max normalize the 13 AU features to [0, 1]
AU_feature = np.array(AU_feature)
Norm_AU_feature = (AU_feature - np.min(AU_feature)) / np.ptp(AU_feature)
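
The exact implementations of linear_interpolation() and get_average_curvature() ship with the code download and are not reproduced here. The following is only a minimal sketch of what such helpers could look like, assuming linear_interpolation() resamples the landmark polyline at evenly spaced arc-length positions and get_average_curvature() averages the discrete curvature κ = |x′y″ − y′x″| / (x′² + y′²)^(3/2) along the curve:

import numpy as np

def linear_interpolation(xs, ys, num=50):
    # Sketch: resample the polyline through the landmark points at `num`
    # evenly spaced positions along its cumulative arc length
    xs = np.asarray(xs, dtype=np.float64)
    ys = np.asarray(ys, dtype=np.float64)
    seg = np.hypot(np.diff(xs), np.diff(ys))     # segment lengths
    t = np.concatenate(([0.0], np.cumsum(seg)))  # cumulative arc length
    t_new = np.linspace(0.0, t[-1], num)
    return np.interp(t_new, t, xs), np.interp(t_new, t, ys)

def get_average_curvature(xs, ys):
    # Sketch: mean discrete curvature along the curve
    dx, dy = np.gradient(xs), np.gradient(ys)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    denom = np.maximum((dx ** 2 + dy ** 2) ** 1.5, 1e-12)  # avoid division by zero
    kappa = np.abs(dx * ddy - dy * ddx) / denom
    return float(np.mean(kappa))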

3.4 Extracting Features from Video

The method above yields features for a single image. Combining it with OpenCV's VideoCapture(), we can walk through every frame of a video in turn and obtain features for the whole video.

import os

import cv2
from natsort import natsorted


def getVideoFeature(path_video, path_feature):
    for root, dirs, files in os.walk(path_video):
        files = natsorted(files)  # sort file names in natural order
        for i in range(len(files)):
            # Open the video file
            video_path = path_video + files[i]
            cap = cv2.VideoCapture(video_path)

            # Write the header row: frame id, 68 (x, y) pairs, 13 AU features
            with open(path_feature + files[i][:-4] + ".txt", "a", encoding="utf-8") as f:
                f.write("Frame number ")
                for point in range(1, 69):
                    f.write("p" + str(point) + "X " + "p" + str(point) + "Y ")
                f.write("AU1_1 AU1_2 AU2_1 AU2_2 AU5_1 AU5_2 AU7_1 AU7_2 AU9 AU10 AU12_1 AU12_2 AU20" + "\n")

            frame = 0
            while cap.isOpened():
                ok, cv_img = cap.read()
                if not ok:
                    break
                try:
                    frame = frame + 1  # frame numbering starts at 1
                    xlist, ylist = get_landmarks(cv_img)
                    Norm_AU_feature = extract_AU(xlist, ylist)
                    print("No." + str(i + 1) + " " + files[i] + " FrameNumber: " + str(frame))

                    with open(path_feature + files[i][:-4] + ".txt", "a", encoding="utf-8") as f:
                        f.write(files[i] + ":" + str(frame) + " ")
                        for pt in range(68):
                            f.write(str(float(xlist[pt])) + " " + str(float(ylist[pt])) + " ")
                        for au_number in range(13):
                            f.write(format(Norm_AU_feature[au_number], ".6f") + " ")  # keep 6 decimal places
                        f.write("\n")
                except Exception:  # e.g. no face detected in this frame
                    with open(path_feature + files[i][:-4] + ".txt", "a", encoding="utf-8") as f:
                        f.write(files[i] + ":" + str(frame) + " ")
                        for index in range(149):  # 136 coordinates + 13 AUs
                            f.write("NaN ")
                        f.write("\n")
                    print("No." + str(i + 1) + " " + files[i] + " FrameNumber: " + str(frame) + " Can not detect face!")

            cap.release()
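
A quick usage sketch (the directory names are illustrative; because the function joins paths by plain string concatenation, keep the trailing slashes):

getVideoFeature("./videos/", "./features/")

# Each resulting .txt can then be loaded for downstream analysis, e.g.
# (file name illustrative; column 0 holds "name:frame", columns 1-149 are numeric):
import numpy as np
data = np.genfromtxt("./features/example.txt", skip_header=1, usecols=range(1, 150))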

References

[1] Kotsia I, Pitas I. Facial Expression Recognition in Image Sequences Using Geometric Deformation Features and Support Vector Machines[J]. IEEE Transactions on Image Processing, 2007, 16(1): 172-187.

Download Link

https://download.csdn.net/download/qq_44186838/66832744

The package includes detailed notes on the runtime environment and usage instructions.

If you need it, feel free to grab it. If you would rather not spend money, leave a comment and let me know; I will reply, though it may take a while. Best wishes!
