
ID Photo Creation Made Simple: Building Personal ID Photos with Face Detection and Automatic Portrait Segmentation (C++ Implementation)

Preface
1. There are many ways to make an ID photo. The most common are retouching in Photoshop, or using one of the many one-tap ID-photo apps. The basic steps are the same: crop the image to a suitable size using the face as the reference, cut the portrait out of the background, beautify it, and then composite it onto the required background color, such as blue or red.
2. This article implements the same pipeline in code: face detection first, then cropping, then background replacement. The beautification and face-slimming steps are not finished yet.
3. Since the goal is to port this to mobile (Android and iOS), ncnn is used as the inference library. I have used ncnn in several previous apps, and its performance is solid on both Android and iOS.
4. My development environment is Windows 10, VS2019, OpenCV 4.5, and ncnn, with C++ as the implementation language; enabling GPU acceleration additionally requires the Vulkan SDK.
5. Sample results are shown first. The matting is not demanding about background purity; even scenes with complex backgrounds are cut out cleanly.
Original images: [images]

Automatically cropped ID photos: [images]

I. Project Setup
1. Create a new C++ project in VS2019 and import the OpenCV and ncnn libraries. Prebuilt ncnn binaries can be downloaded from the official releases; the libraries, source code, and models I used are linked at the end of this article.
2. To enable GPU inference, install the Vulkan SDK; the installation steps are covered in my earlier blog posts.
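Before building the full demo, it can save time to confirm that ncnn actually sees a Vulkan-capable GPU. Below is a minimal standalone check of my own (a sketch, not part of the demo project); if the count prints 0, the demo still runs, just on the CPU:

#include <ncnn/gpu.h>
#include <cstdio>

int main()
{
    // create_gpu_instance() initializes the Vulkan runtime;
    // get_gpu_count() reports how many devices ncnn can use
    ncnn::create_gpu_instance();
    printf("Vulkan GPU count: %d\n", ncnn::get_gpu_count());
    ncnn::destroy_gpu_instance();
    return 0;
}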

II. Face Detection
1. Face detection uses SCRFD, which also outputs the coordinates of five landmarks (both eyes, the nose tip, and both mouth corners); these serve as reference points for laying out the ID photo. libfacedetection would work about as well, but SCRFD is the better choice on mobile.

Code implementation:
Inference code:

#include "scrfd.h"

#include <string.h>

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

#include <ncnn/cpu.h> // only needed on Android

static inline float intersection_area(const FaceObject& a, const FaceObject& b)
{
    cv::Rect_<float> inter = a.rect & b.rect;
    return inter.area();
}

static void qsort_descent_inplace(std::vector<FaceObject>& faceobjects, int left, int right)
{
    int i = left;
    int j = right;
    float p = faceobjects[(left + right) / 2].prob;

    while (i <= j)
    {
        while (faceobjects[i].prob > p)
            i++;

        while (faceobjects[j].prob < p)
            j--;

        if (i <= j)
        {
            // swap
            std::swap(faceobjects[i], faceobjects[j]);

            i++;
            j--;
        }
    }

    // #pragma omp parallel sections
    {
        // #pragma omp section
        {
            if (left < j) qsort_descent_inplace(faceobjects, left, j);
        }
        // #pragma omp section
        {
            if (i < right) qsort_descent_inplace(faceobjects, i, right);
        }
    }
}

static void qsort_descent_inplace(std::vector<FaceObject>& faceobjects)
{
    if (faceobjects.empty())
        return;

    qsort_descent_inplace(faceobjects, 0, faceobjects.size() - 1);
}

static void nms_sorted_bboxes(const std::vector<FaceObject>& faceobjects, std::vector<int>& picked, float nms_threshold)
{
    picked.clear();

    const int n = faceobjects.size();

    std::vector<float> areas(n);
    for (int i = 0; i < n; i++)
    {
        areas[i] = faceobjects[i].rect.area();
    }

    for (int i = 0; i < n; i++)
    {
        const FaceObject& a = faceobjects[i];

        int keep = 1;
        for (int j = 0; j < (int)picked.size(); j++)
        {
            const FaceObject& b = faceobjects[picked[j]];

            // intersection over union
            float inter_area = intersection_area(a, b);
            float union_area = areas[i] + areas[picked[j]] - inter_area;
            // float IoU = inter_area / union_area
            if (inter_area / union_area > nms_threshold)
                keep = 0;
        }

        if (keep)
            picked.push_back(i);
    }
}

static ncnn::Mat generate_anchors(int base_size, const ncnn::Mat& ratios, const ncnn::Mat& scales)
{
    int num_ratio = ratios.w;
    int num_scale = scales.w;

    ncnn::Mat anchors;
    anchors.create(4, num_ratio * num_scale);

    const float cx = 0;
    const float cy = 0;

    for (int i = 0; i < num_ratio; i++)
    {
        float ar = ratios[i];

        int r_w = round(base_size / sqrt(ar));
        int r_h = round(r_w * ar); // round(base_size * sqrt(ar));

        for (int j = 0; j < num_scale; j++)
        {
            float scale = scales[j];

            float rs_w = r_w * scale;
            float rs_h = r_h * scale;

            float* anchor = anchors.row(i * num_scale + j);

            anchor[0] = cx - rs_w * 0.5f;
            anchor[1] = cy - rs_h * 0.5f;
            anchor[2] = cx + rs_w * 0.5f;
            anchor[3] = cy + rs_h * 0.5f;
        }
    }

    return anchors;
}

static void generate_proposals(const ncnn::Mat& anchors, int feat_stride, const ncnn::Mat& score_blob, const ncnn::Mat& bbox_blob, const ncnn::Mat& kps_blob, float prob_threshold, std::vector<FaceObject>& faceobjects)
{
    int w = score_blob.w;
    int h = score_blob.h;

    // generate face proposals from bbox deltas and shifted anchors
    const int num_anchors = anchors.h;

    for (int q = 0; q < num_anchors; q++)
    {
        const float* anchor = anchors.row(q);

        const ncnn::Mat score = score_blob.channel(q);
        const ncnn::Mat bbox = bbox_blob.channel_range(q * 4, 4);

        // shifted anchor
        float anchor_y = anchor[1];

        float anchor_w = anchor[2] - anchor[0];
        float anchor_h = anchor[3] - anchor[1];

        for (int i = 0; i < h; i++)
        {
            float anchor_x = anchor[0];

            for (int j = 0; j < w; j++)
            {
                int index = i * w + j;

                float prob = score[index];

                if (prob >= prob_threshold)
                {
                    // insightface/detection/scrfd/mmdet/models/dense_heads/scrfd_head.py _get_bboxes_single()
                    float dx = bbox.channel(0)[index] * feat_stride;
                    float dy = bbox.channel(1)[index] * feat_stride;
                    float dw = bbox.channel(2)[index] * feat_stride;
                    float dh = bbox.channel(3)[index] * feat_stride;

                    // insightface/detection/scrfd/mmdet/core/bbox/transforms.py distance2bbox()
                    float cx = anchor_x + anchor_w * 0.5f;
                    float cy = anchor_y + anchor_h * 0.5f;

                    float x0 = cx - dx;
                    float y0 = cy - dy;
                    float x1 = cx + dw;
                    float y1 = cy + dh;

                    FaceObject obj;
                    obj.rect.x = x0;
                    obj.rect.y = y0;
                    obj.rect.width = x1 - x0 + 1;
                    obj.rect.height = y1 - y0 + 1;
                    obj.prob = prob;

                    if (!kps_blob.empty())
                    {
                        const ncnn::Mat kps = kps_blob.channel_range(q * 10, 10);

                        obj.landmark[0].x = cx + kps.channel(0)[index] * feat_stride;
                        obj.landmark[0].y = cy + kps.channel(1)[index] * feat_stride;
                        obj.landmark[1].x = cx + kps.channel(2)[index] * feat_stride;
                        obj.landmark[1].y = cy + kps.channel(3)[index] * feat_stride;
                        obj.landmark[2].x = cx + kps.channel(4)[index] * feat_stride;
                        obj.landmark[2].y = cy + kps.channel(5)[index] * feat_stride;
                        obj.landmark[3].x = cx + kps.channel(6)[index] * feat_stride;
                        obj.landmark[3].y = cy + kps.channel(7)[index] * feat_stride;
                        obj.landmark[4].x = cx + kps.channel(8)[index] * feat_stride;
                        obj.landmark[4].y = cy + kps.channel(9)[index] * feat_stride;
                    }

                    faceobjects.push_back(obj);
                }

                anchor_x += feat_stride;
            }

            anchor_y += feat_stride;
        }
    }
}

SCRFD::SCRFD()
{}

int SCRFD::detect(const cv::Mat& rgb, std::vector<FaceObject>& faceobjects, float prob_threshold, float nms_threshold)
{
    int width = rgb.cols;
    int height = rgb.rows;

    // insightface/detection/scrfd/configs/scrfd/scrfd_500m.py
    const int target_size = 640;

    // letterbox: scale the long side to target_size, then pad to a multiple of 32
    int w = width;
    int h = height;
    float scale = 1.f;
    if (w > h)
    {
        scale = (float)target_size / w;
        w = target_size;
        h = h * scale;
    }
    else
    {
        scale = (float)target_size / h;
        h = target_size;
        w = w * scale;
    }

    ncnn::Mat in = ncnn::Mat::from_pixels_resize(rgb.data, ncnn::Mat::PIXEL_RGB, width, height, w, h);

    // pad to target_size rectangle
    int wpad = (w + 31) / 32 * 32 - w;
    int hpad = (h + 31) / 32 * 32 - h;
    ncnn::Mat in_pad;
    ncnn::copy_make_border(in, in_pad, hpad / 2, hpad - hpad / 2, wpad / 2, wpad - wpad / 2, ncnn::BORDER_CONSTANT, 0.f);

    const float mean_vals[3] = {127.5f, 127.5f, 127.5f};
    const float norm_vals[3] = {1 / 128.f, 1 / 128.f, 1 / 128.f};
    in_pad.substract_mean_normalize(mean_vals, norm_vals);

    ncnn::Extractor ex = scrfd_net.create_extractor();

    ex.input("input.1", in_pad);

    std::vector<FaceObject> faceproposals;

    // stride 8
    {
        ncnn::Mat score_blob, bbox_blob, kps_blob;
        ex.extract("score_8", score_blob);
        ex.extract("bbox_8", bbox_blob);
        if (has_kps)
            ex.extract("kps_8", kps_blob);

        const int base_size = 16;
        const int feat_stride = 8;

        ncnn::Mat ratios(1);
        ratios[0] = 1.f;
        ncnn::Mat scales(2);
        scales[0] = 1.f;
        scales[1] = 2.f;

        ncnn::Mat anchors = generate_anchors(base_size, ratios, scales);

        std::vector<FaceObject> faceobjects8;
        generate_proposals(anchors, feat_stride, score_blob, bbox_blob, kps_blob, prob_threshold, faceobjects8);

        faceproposals.insert(faceproposals.end(), faceobjects8.begin(), faceobjects8.end());
    }

    // stride 16
    {
        ncnn::Mat score_blob, bbox_blob, kps_blob;
        ex.extract("score_16", score_blob);
        ex.extract("bbox_16", bbox_blob);
        if (has_kps)
            ex.extract("kps_16", kps_blob);

        const int base_size = 64;
        const int feat_stride = 16;

        ncnn::Mat ratios(1);
        ratios[0] = 1.f;
        ncnn::Mat scales(2);
        scales[0] = 1.f;
        scales[1] = 2.f;

        ncnn::Mat anchors = generate_anchors(base_size, ratios, scales);

        std::vector<FaceObject> faceobjects16;
        generate_proposals(anchors, feat_stride, score_blob, bbox_blob, kps_blob, prob_threshold, faceobjects16);

        faceproposals.insert(faceproposals.end(), faceobjects16.begin(), faceobjects16.end());
    }

    // stride 32
    {
        ncnn::Mat score_blob, bbox_blob, kps_blob;
        ex.extract("score_32", score_blob);
        ex.extract("bbox_32", bbox_blob);
        if (has_kps)
            ex.extract("kps_32", kps_blob);

        const int base_size = 256;
        const int feat_stride = 32;

        ncnn::Mat ratios(1);
        ratios[0] = 1.f;
        ncnn::Mat scales(2);
        scales[0] = 1.f;
        scales[1] = 2.f;

        ncnn::Mat anchors = generate_anchors(base_size, ratios, scales);

        std::vector<FaceObject> faceobjects32;
        generate_proposals(anchors, feat_stride, score_blob, bbox_blob, kps_blob, prob_threshold, faceobjects32);

        faceproposals.insert(faceproposals.end(), faceobjects32.begin(), faceobjects32.end());
    }

    // sort all proposals by score from highest to lowest
    qsort_descent_inplace(faceproposals);

    // apply nms with nms_threshold
    std::vector<int> picked;
    nms_sorted_bboxes(faceproposals, picked, nms_threshold);

    int face_count = picked.size();

    faceobjects.resize(face_count);
    for (int i = 0; i < face_count; i++)
    {
        faceobjects[i] = faceproposals[picked[i]];

        // map coordinates back to the original unpadded image
        float x0 = (faceobjects[i].rect.x - (wpad / 2)) / scale;
        float y0 = (faceobjects[i].rect.y - (hpad / 2)) / scale;
        float x1 = (faceobjects[i].rect.x + faceobjects[i].rect.width - (wpad / 2)) / scale;
        float y1 = (faceobjects[i].rect.y + faceobjects[i].rect.height - (hpad / 2)) / scale;

        x0 = std::max(std::min(x0, (float)width - 1), 0.f);
        y0 = std::max(std::min(y0, (float)height - 1), 0.f);
        x1 = std::max(std::min(x1, (float)width - 1), 0.f);
        y1 = std::max(std::min(y1, (float)height - 1), 0.f);

        faceobjects[i].rect.x = x0;
        faceobjects[i].rect.y = y0;
        faceobjects[i].rect.width = x1 - x0;
        faceobjects[i].rect.height = y1 - y0;

        if (has_kps)
        {
            float x0 = (faceobjects[i].landmark[0].x - (wpad / 2)) / scale;
            float y0 = (faceobjects[i].landmark[0].y - (hpad / 2)) / scale;
            float x1 = (faceobjects[i].landmark[1].x - (wpad / 2)) / scale;
            float y1 = (faceobjects[i].landmark[1].y - (hpad / 2)) / scale;
            float x2 = (faceobjects[i].landmark[2].x - (wpad / 2)) / scale;
            float y2 = (faceobjects[i].landmark[2].y - (hpad / 2)) / scale;
            float x3 = (faceobjects[i].landmark[3].x - (wpad / 2)) / scale;
            float y3 = (faceobjects[i].landmark[3].y - (hpad / 2)) / scale;
            float x4 = (faceobjects[i].landmark[4].x - (wpad / 2)) / scale;
            float y4 = (faceobjects[i].landmark[4].y - (hpad / 2)) / scale;

            faceobjects[i].landmark[0].x = std::max(std::min(x0, (float)width - 1), 0.f);
            faceobjects[i].landmark[0].y = std::max(std::min(y0, (float)height - 1), 0.f);
            faceobjects[i].landmark[1].x = std::max(std::min(x1, (float)width - 1), 0.f);
            faceobjects[i].landmark[1].y = std::max(std::min(y1, (float)height - 1), 0.f);
            faceobjects[i].landmark[2].x = std::max(std::min(x2, (float)width - 1), 0.f);
            faceobjects[i].landmark[2].y = std::max(std::min(y2, (float)height - 1), 0.f);
            faceobjects[i].landmark[3].x = std::max(std::min(x3, (float)width - 1), 0.f);
            faceobjects[i].landmark[3].y = std::max(std::min(y3, (float)height - 1), 0.f);
            faceobjects[i].landmark[4].x = std::max(std::min(x4, (float)width - 1), 0.f);
            faceobjects[i].landmark[4].y = std::max(std::min(y4, (float)height - 1), 0.f);
        }
    }

    return 0;
}

int SCRFD::readModels(std::string param_path, std::string model_path, bool use_gpu)
{
    bool has_gpu = false;
#if NCNN_VULKAN
    ncnn::create_gpu_instance();
    has_gpu = ncnn::get_gpu_count() > 0;
#endif
    bool to_use_gpu = has_gpu && use_gpu;

    scrfd_net.opt.use_vulkan_compute = to_use_gpu;
    int rp = scrfd_net.load_param(param_path.c_str());
    int rb = scrfd_net.load_model(model_path.c_str());
    if (rp < 0 || rb < 0)
    {
        return 1;
    }
    return 0;
}
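A hypothetical usage sketch for the class above. The .param/.bin file names are placeholders for whichever SCRFD model you converted, and note that detect() works on RGB data while cv::imread returns BGR:

#include <opencv2/opencv.hpp>
#include <cstdio>
#include "scrfd.h"

int main()
{
    SCRFD scrfd;
    // placeholder model file names
    if (scrfd.readModels("scrfd_500m_kps.param", "scrfd_500m_kps.bin", /*use_gpu=*/false) != 0)
        return -1;

    cv::Mat bgr = cv::imread("test.jpg");
    cv::Mat rgb;
    cv::cvtColor(bgr, rgb, cv::COLOR_BGR2RGB); // detect() expects RGB order

    std::vector<FaceObject> faces;
    scrfd.detect(rgb, faces, /*prob_threshold=*/0.5f, /*nms_threshold=*/0.45f);
    printf("detected %zu face(s)\n", faces.size());
    return 0;
}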

2. Draw the detection results.

int SCRFD::draw(cv::Mat& rgb, const std::vector<FaceObject>& faceobjects)
{
    for (size_t i = 0; i < faceobjects.size(); i++)
    {
        const FaceObject& obj = faceobjects[i];

        cv::rectangle(rgb, obj.rect, cv::Scalar(0, 255, 0));

        if (has_kps)
        {
            cv::circle(rgb, obj.landmark[0], 2, cv::Scalar(0, 255, 255), -1);
            cv::circle(rgb, obj.landmark[1], 2, cv::Scalar(0, 0, 255), -1);
            cv::circle(rgb, obj.landmark[2], 2, cv::Scalar(255, 255, 0), -1);
            cv::circle(rgb, obj.landmark[3], 2, cv::Scalar(255, 255, 0), -1);
            cv::circle(rgb, obj.landmark[4], 2, cv::Scalar(255, 255, 0), -1);
        }

        char text[256];
        sprintf(text, "%.1f%%", obj.prob * 100);

        int baseLine = 0;
        cv::Size label_size = cv::getTextSize(text, cv::FONT_HERSHEY_SIMPLEX, 0.5, 1, &baseLine);

        int x = obj.rect.x;
        int y = obj.rect.y - label_size.height - baseLine;
        if (y < 0)
            y = 0;
        if (x + label_size.width > rgb.cols)
            x = rgb.cols - label_size.width;

        cv::rectangle(rgb, cv::Rect(cv::Point(x, y), cv::Size(label_size.width, label_size.height + baseLine)), cv::Scalar(255, 255, 255), -1);
        cv::putText(rgb, text, cv::Point(x, y + label_size.height), cv::FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(0, 0, 0), 1);
    }

    return 0;
}
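Continuing the hypothetical snippet from the previous step, the overlay can be displayed like this (converting back to BGR first, since imshow expects BGR):

scrfd.draw(rgb, faces);
cv::Mat show;
cv::cvtColor(rgb, show, cv::COLOR_RGB2BGR); // back to BGR for display
cv::imshow("SCRFD", show);
cv::waitKey(0);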

3. Detection results: [image]

III. ID Photo Cropping

1. Select the reference face: if an image contains more than one face, take the largest, most frontal face as the reference point.
Code:

int faceFind(const cv::Mat& cv_src, std::vector<FaceObject>& face_object, cv::Rect& cv_rect, std::vector<cv::Point>& five_point)
{
    // exactly one face detected
    if (face_object.size() == 1)
    {
        if (face_object[0].prob > 0.7)
        {
            for (int i = 0; i < 5; ++i)
            {
                five_point.push_back(face_object[0].landmark[i]);
            }
            cv_rect = face_object[0].rect;
            return 0;
        }
    }
    // several faces detected: keep the largest confident one
    else if (face_object.size() >= 2)
    {
        cv::Rect max_rect;
        for (size_t i = 0; i < face_object.size(); ++i)
        {
            if (face_object[i].prob >= 0.7)
            {
                cv::Rect rect = face_object[i].rect;
                if (max_rect.area() <= rect.area())
                {
                    max_rect = rect;
                }
            }
        }
        for (size_t i = 0; i < face_object.size(); ++i)
        {
            if (face_object[i].prob >= 0.7)
            {
                cv::Rect rect = face_object[i].rect;
                if (max_rect.area() == rect.area())
                {
                    for (int j = 0; j < 5; ++j)
                    {
                        five_point.push_back(face_object[i].landmark[j]);
                    }
                    cv_rect = rect;
                    break; // stop in case two faces tie on area
                }
            }
        }
        // fail if no face passed the confidence threshold
        return five_point.empty() ? 1 : 0;
    }
    return 1;
}

Result: [image]

2. This reference-selection method is deliberately simple. If more compute is available, or higher precision is needed, more landmarks and head-pose estimation can be added here, and the pose estimate used to decide whether the face in the image or camera feed is upright; a cheap stand-in is sketched below.
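Short of full head-pose estimation, the five landmarks already on hand allow a basic frontalness check. A minimal sketch of my own (an illustration, not part of the demo): estimate the in-plane roll angle from the two eye points and reject the frame when it is far from zero.

#include <cmath>
#include <vector>
#include <opencv2/core.hpp>

// Rough in-plane roll estimate from the eye landmarks
// (five_point[0] and five_point[1] are the two eyes in SCRFD order).
static float eyeRollDegrees(const std::vector<cv::Point>& five_point)
{
    float dy = static_cast<float>(five_point[1].y - five_point[0].y);
    float dx = static_cast<float>(five_point[1].x - five_point[0].x);
    return std::atan2(dy, dx) * 180.0f / 3.14159265f;
}

// e.g. accept the face only when fabs(eyeRollDegrees(five_point)) < 5.0f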

3. Crop the ID-photo-sized image using the face as the reference: center on the face, compute the margins above, below, and to both sides, and then crop to the ID-photo proportions.
Code:

int faceLocation(const cv::Mat& cv_src, cv::Mat& cv_dst, std::vector<cv::Point>& five_point, cv::Rect& cv_rect)
{
    float w_block = cv_rect.width / 5.5f;
    float h_block = cv_rect.height / 8.f;

    // expand the detection box to cover the whole head
    cv::Rect face_rect;
    face_rect.x = cv_rect.x - (w_block * 0.8); // leave room for the ears
    face_rect.y = cv_rect.y - (h_block * 2);
    face_rect.width = cv_rect.width + (w_block * 1.6);
    face_rect.height = cv_rect.height + (h_block * 2);

    // margins between the head and the image borders
    int tl_face_w = face_rect.tl().x;
    int tr_face_w = cv_src.cols - (face_rect.width + face_rect.tl().x);
    int t_face_h = face_rect.tl().y;
    int b_face_h = cv_src.rows - face_rect.br().y;

    // grid units used to lay out the ID-photo region
    int w_scale = face_rect.width / 7;
    int h_scale = face_rect.height / 10;

    cv::Rect id_rect;
    // check that there is enough margin on every side
    if (tl_face_w >= (w_scale * 2) && tr_face_w >= (w_scale * 2) && t_face_h >= (h_scale * 0.5) && b_face_h > (h_scale * 5))
    {
        // check that the eyes are roughly level
        if (abs(five_point.at(0).y - five_point.at(1).y) < 8)
        {
            id_rect.x = ((face_rect.x - w_scale * 3) <= 0) ? 0 : (face_rect.x - w_scale * 3);
            id_rect.y = ((face_rect.y - h_scale * 3) < 0) ? 0 : (face_rect.y - h_scale * 3);
            id_rect.width = (w_scale * 13) + id_rect.x > cv_src.size().width ? cv_src.size().width - id_rect.x : w_scale * 13;
            id_rect.height = (h_scale * 19) + id_rect.y > cv_src.size().height ? cv_src.size().height - id_rect.y : h_scale * 19;
            cv_dst = cv_src(id_rect);
            return 0;
        }
    }
    return -1;
}

Result: [image]
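Putting the two helpers together, the cropping stage can be driven like this (a sketch assuming faces comes from SCRFD::detect above):

cv::Rect face_rect;
std::vector<cv::Point> five_point;
if (faceFind(cv_src, faces, face_rect, five_point) == 0)
{
    cv::Mat cv_id;
    if (faceLocation(cv_src, cv_id, five_point, face_rect) == 0)
    {
        // cv_id now holds the ID-photo crop
    }
    // a non-zero return means the face sits too close to the border
    // or the eyes are not level, so ask the user to reshoot
}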

IV. Matting and Background Replacement

1. The steps above yield an ID-photo-sized image; matting the portrait out of it is all that remains before the background can be replaced.

 

int matting(cv::Mat& cv_src, ncnn::Net& net, ncnn::Mat& alpha)
{
    int width = cv_src.cols;
    int height = cv_src.rows;

    // resize to the 256x256 input expected by the matting network
    ncnn::Mat in_resize = ncnn::Mat::from_pixels_resize(cv_src.data, ncnn::Mat::PIXEL_RGB, width, height, 256, 256);
    const float meanVals[3] = { 127.5f, 127.5f, 127.5f };
    const float normVals[3] = { 0.0078431f, 0.0078431f, 0.0078431f };
    in_resize.substract_mean_normalize(meanVals, normVals);

    ncnn::Mat out;
    ncnn::Extractor ex = net.create_extractor();
    ex.set_vulkan_compute(true);
    ex.input("input", in_resize);
    ex.extract("output", out);

    // scale the predicted alpha matte back to the original resolution
    ncnn::resize_bilinear(out, alpha, width, height);
    return 0;
}
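The network handle is created once and reused across calls. A hypothetical loading sketch, where "matting.param" / "matting.bin" are placeholders for whatever portrait-matting model you converted to ncnn format:

ncnn::Net matting_net;
matting_net.opt.use_vulkan_compute = true; // optional GPU path
matting_net.load_param("matting.param");
matting_net.load_model("matting.bin");

ncnn::Mat alpha;
matting(cv_id, matting_net, alpha); // cv_id is the crop from the previous step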

2. Replace the background color.

void replaceBG(const cv::Mat& cv_src, ncnn::Mat& alpha, cv::Mat& cv_matting, std::vector<int>& bg_color)
{
    int width = cv_src.cols;
    int height = cv_src.rows;

    cv_matting = cv::Mat::zeros(cv::Size(width, height), CV_8UC3);
    float* alpha_data = (float*)alpha.data;

    // per-pixel alpha blend: out = src * alpha + bg * (1 - alpha)
    for (int i = 0; i < height; i++)
    {
        for (int j = 0; j < width; j++)
        {
            float alpha_ = alpha_data[i * width + j];
            cv_matting.at<cv::Vec3b>(i, j)[0] = cv::saturate_cast<uchar>(cv_src.at<cv::Vec3b>(i, j)[0] * alpha_ + (1 - alpha_) * bg_color[0]);
            cv_matting.at<cv::Vec3b>(i, j)[1] = cv::saturate_cast<uchar>(cv_src.at<cv::Vec3b>(i, j)[1] * alpha_ + (1 - alpha_) * bg_color[1]);
            cv_matting.at<cv::Vec3b>(i, j)[2] = cv::saturate_cast<uchar>(cv_src.at<cv::Vec3b>(i, j)[2] * alpha_ + (1 - alpha_) * bg_color[2]);
        }
    }
}
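An example call: since this pipeline keeps images in RGB order (matting() reads the pixels as PIXEL_RGB), the bg_color triple is interpreted as RGB as well; {67, 142, 219} below is just an assumed ID-photo blue.

cv::Mat cv_matting;
std::vector<int> bg_color = { 67, 142, 219 }; // R, G, B (assumed example color)
replaceBG(cv_id, alpha, cv_matting, bg_color);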

3. Results.
Original image / ID photo: [images]

Original image (complex background) / ID photo: [images]

Cartoon avatar: [image]
V. Conclusion
1. This is only a functional demo. Commercial use would need many more refinements, such as head-pose estimation, eye-state detection (open or closed), skin beautification, face slimming, and outfit replacement; I will try these and post them when time allows.
2. With minor changes the demo runs on Android; I have tested it there, and both speed and accuracy are good.
3. The full project and source code: https://download.csdn.net/download/matt45m/67756246
Copyright notice: This is an original article by CSDN blogger 知来者逆, licensed under CC 4.0 BY-SA. Please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/matt45m/article/details/122051306