1) What is Firebase?
Firebase is a comprehensive mobile development platform backed by Google, built on Google infrastructure, and designed to scale automatically.
Official site: https://firebase.google.com/
2) What is ML Kit?
ML Kit is Firebase's machine learning product for mobile developers.
Official site: https://firebase.google.com/products/ml-kit/
Documentation: https://firebase.google.com/docs/ml-kit/
ML Kit brings Google's machine learning technologies (such as the Google Cloud Vision API, TensorFlow Lite, and the Android Neural Networks API) together in a single SDK, so you can easily apply machine learning in your own apps. Whether you need the power of cloud-based processing, the real-time capabilities of on-device models optimized for mobile, or the flexibility of custom TensorFlow Lite models, ML Kit makes it possible with just a few lines of code.
Feature overview:
Detect faces with ML Kit on Android: https://firebase.google.com/docs/ml-kit/android/detect-faces
1) Preparation:
① Add Firebase to your Android project, if you haven't already.
For face detection, first register the app in the Firebase console: generate a JSON config file from your app's package name, then place that file in your project.
For testing, you can use this sample project's JSON file directly; it is provided at the end of this article.
② Be sure to add Google's Maven repository to both the buildscript and allprojects sections of your project-level build.gradle file.
```groovy
buildscript {
    repositories {
        google()
        jcenter()
    }
    // ...
}

allprojects {
    repositories {
        google()
        jcenter()
    }
}
```
③ Add the dependencies for the ML Kit Android libraries to your module (app-level) Gradle file (usually app/build.gradle):
```groovy
// ML Kit dependencies
implementation 'com.google.firebase:firebase-core:16.0.9'
implementation 'com.google.firebase:firebase-ml-vision:20.0.0'
implementation 'com.google.firebase:firebase-ml-vision-face-model:17.0.2'
```
2) Input image guidelines:
For ML Kit to accurately detect faces, input images must contain faces that are represented by sufficient pixel data. In general, each face you want to detect in an image should be at least 100x100 pixels. If you want to detect face contours, ML Kit requires higher-resolution input: each face should be at least 200x200 pixels.
If you are detecting faces in a real-time application, you might also want to consider the overall dimensions of the input images. Smaller images can be processed faster, so to reduce latency, capture images at lower resolutions (keeping the accuracy requirements above in mind) and make sure the subject's face occupies as much of the image as possible.
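To make this trade-off concrete, here is a small plain-Java sketch (illustrative only, not part of ML Kit): given the rough fraction of the frame's shorter side that the face occupies, it computes the smallest capture size that still meets the 200x200-pixel contour minimum.

```java
// Illustrative helper (not part of ML Kit): compute the smallest capture size
// that keeps an expected face at or above the contour-detection minimum.
class InputSizePlanner {
    // Minimum face size (in pixels) ML Kit wants for contour detection.
    static final int MIN_CONTOUR_FACE_PX = 200;

    // Smallest shorter-side capture size (px), given the rough fraction of
    // the frame's shorter side that the face occupies.
    static int minShorterSide(double faceFraction) {
        if (faceFraction <= 0 || faceFraction > 1) {
            throw new IllegalArgumentException("faceFraction must be in (0, 1]");
        }
        return (int) Math.ceil(MIN_CONTOUR_FACE_PX / faceFraction);
    }

    public static void main(String[] args) {
        // If the face fills about half the frame, 400 px on the shorter side is enough.
        System.out.println(minShorterSide(0.5)); // prints 400
    }
}
```

So for a selfie-style shot where the face dominates the frame, quite small captures are fine; for wider scenes the capture size has to grow accordingly.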
① Configure the face detector with FirebaseVisionFaceDetectorOptions:

```java
FirebaseVisionFaceDetectorOptions options =
        new FirebaseVisionFaceDetectorOptions.Builder()
                .setPerformanceMode(FirebaseVisionFaceDetectorOptions.FAST)
                .setContourMode(FirebaseVisionFaceDetectorOptions.ALL_CONTOURS)
                .build();
```
② Run the face detector
To detect faces in an image, create a FirebaseVisionImage object from a Bitmap, media.Image, ByteBuffer, byte array, or a file on the device. Then, pass the FirebaseVisionImage object to the FirebaseVisionFaceDetector's detectInImage method.
For face detection, you should use an image with dimensions of at least 480x360 pixels. If you are detecting faces in real time, capturing frames at this minimum resolution can help reduce latency.
1. Create a FirebaseVisionImage object from your image:

```java
FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);
```

Each input source is handled differently; the documentation describes each in detail, so choose the approach that fits your project.
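As a sketch of what the alternatives look like (Android code, not compilable on its own: `mediaImage`, `rotation`, `buffer`, `metadata`, `bytes`, `context`, and `uri` are placeholders you would obtain from your own camera or storage code):

```java
// Sketch: the FirebaseVisionImage factory methods for each input source.
FirebaseVisionImage fromBitmap = FirebaseVisionImage.fromBitmap(bitmap);
FirebaseVisionImage fromMedia  = FirebaseVisionImage.fromMediaImage(mediaImage, rotation);
FirebaseVisionImage fromBuffer = FirebaseVisionImage.fromByteBuffer(buffer, metadata);
FirebaseVisionImage fromBytes  = FirebaseVisionImage.fromByteArray(bytes, metadata);
// fromFilePath is declared `throws IOException`.
FirebaseVisionImage fromFile   = FirebaseVisionImage.fromFilePath(context, uri);
```

The ByteBuffer and byte-array variants additionally need a FirebaseVisionImageMetadata describing the frame's width, height, format, and rotation.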
2. Get an instance of FirebaseVisionFaceDetector:

```java
FirebaseVisionFaceDetector detector = FirebaseVision.getInstance()
        .getVisionFaceDetector(options);
```
3. Finally, pass the image to the detectInImage method:

```java
Task<List<FirebaseVisionFace>> result =
        detector.detectInImage(image)
                .addOnSuccessListener(
                        new OnSuccessListener<List<FirebaseVisionFace>>() {
                            @Override
                            public void onSuccess(List<FirebaseVisionFace> faces) {
                                // Task completed successfully
                                // ...
                            }
                        })
                .addOnFailureListener(
                        new OnFailureListener() {
                            @Override
                            public void onFailure(@NonNull Exception e) {
                                // Task failed with an exception
                                // ...
                            }
                        });
```
3) Get information about detected faces:
If the face detection operation succeeds, a list of FirebaseVisionFace objects is passed to the success listener. Each FirebaseVisionFace object represents a face that was detected in the image. For each face, you can get its bounding coordinates in the input image, as well as any other information you configured the face detector to find.
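A note on coordinates: the detector reports points in the analyzed image's own pixel space. If that image is later displayed scaled or centered in a view, the points have to be mapped accordingly; the scaleX/scaleY/transX/transY factors in the Activity code serve exactly this purpose (there they are the identity mapping). Here is a minimal plain-Java sketch of such a mapping, with hypothetical names:

```java
// Hypothetical helper (not part of ML Kit): maps a point from the analyzed
// bitmap's pixel space into a destination view that shows the bitmap
// uniformly scaled and centered.
class PointMapper {
    final double scale, transX, transY;

    PointMapper(int srcW, int srcH, int dstW, int dstH) {
        // Uniform scale that fits the source inside the destination, centered.
        scale = Math.min((double) dstW / srcW, (double) dstH / srcH);
        transX = (dstW - srcW * scale) / 2.0;
        transY = (dstH - srcH * scale) / 2.0;
    }

    double[] map(double x, double y) {
        return new double[]{x * scale + transX, y * scale + transY};
    }

    public static void main(String[] args) {
        // A 100x100 image shown in a 200x100 view is centered with a 50 px margin.
        PointMapper m = new PointMapper(100, 100, 200, 100);
        System.out.println(java.util.Arrays.toString(m.map(10, 10))); // prints [60.0, 10.0]
    }
}
```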
① A custom point class to store the detected face landmark positions:

```java
public class MyPoint {
    public double x;
    public double y;

    public MyPoint(double x, double y) {
        this.x = x;
        this.y = y;
    }

    public double getX() {
        return x;
    }

    public double getY() {
        return y;
    }
}
```
② Take the points and the image, then draw the points onto it:

```java
// `img` is the ImageView showing the result; `paint` and `textPaint` are
// Paint fields initialized in the Activity (see the full code at the end).
public Bitmap drawMlkiLandmarks(Bitmap bitmap, ArrayList<MyPoint> facePoints) {
    Log.d(TAG, "drawMlkiLandmarks: input bitmap " + bitmap + ", points " + facePoints);
    // Work on a mutable copy so the source bitmap is left untouched.
    Bitmap mutableCopy = bitmap.copy(bitmap.getConfig(), true);
    Canvas canvas = new Canvas(mutableCopy);
    paint.setStyle(Paint.Style.FILL_AND_STROKE);
    paint.setColor(Color.GREEN);
    paint.setStrokeWidth(3f);
    textPaint.setStrokeWidth(35f);
    textPaint.setColor(Color.WHITE);
    for (int j = 0; j < facePoints.size(); j++) {
        int cx = (int) facePoints.get(j).getX();
        int cy = (int) facePoints.get(j).getY();
        // Mark each landmark with a small dot and its index.
        canvas.drawCircle(cx, cy, 3f, paint);
        canvas.drawText(String.valueOf(j), cx, cy, textPaint);
    }
    img.setImageBitmap(mutableCopy);
    return mutableCopy;
}
```
Here is the author's result: 128 landmark points were detected in total. (This was run on an emulator, so the screenshot may not be very sharp.)
We can also access each point individually, making it easy to pick out and adjust a single point or a group of points, which is very convenient in real projects.
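For instance, instead of ALL_POINTS you can query one facial region at a time; a sketch against the same API, where `face` is a FirebaseVisionFace from the success listener:

```java
// Sketch: pull out a single contour from a detected face instead of ALL_POINTS.
List<FirebaseVisionPoint> leftEye =
        face.getContour(FirebaseVisionFaceContour.LEFT_EYE).getPoints();
for (FirebaseVisionPoint p : leftEye) {
    // Each point can be read (or adjusted) individually.
    Log.d(TAG, "leftEye point: " + p.getX() + ", " + p.getY());
}
```

FirebaseVisionFaceContour also defines constants for the face outline, eyebrows, nose, and lips, so the same pattern works for any region.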
Finally, here is the main code, which can be run directly.
Activity code:
```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.os.Bundle;
import android.util.Log;
import android.widget.ImageView;

import androidx.annotation.NonNull;
import androidx.annotation.Nullable;
import androidx.appcompat.app.AppCompatActivity;

import com.google.android.gms.tasks.OnFailureListener;
import com.google.android.gms.tasks.OnSuccessListener;
import com.google.firebase.ml.vision.FirebaseVision;
import com.google.firebase.ml.vision.common.FirebaseVisionImage;
import com.google.firebase.ml.vision.common.FirebaseVisionPoint;
import com.google.firebase.ml.vision.face.FirebaseVisionFace;
import com.google.firebase.ml.vision.face.FirebaseVisionFaceContour;
import com.google.firebase.ml.vision.face.FirebaseVisionFaceDetector;
import com.google.firebase.ml.vision.face.FirebaseVisionFaceDetectorOptions;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class StillImageActivity extends AppCompatActivity {
    private static final String TAG = "StillImageActivity";
    private ImageView img;
    private Bitmap bitmap;
    private Paint paint;
    private Paint textPaint;

    @Override
    protected void onCreate(@Nullable Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_stillimage);
        img = findViewById(R.id.img);
        bitmap = BitmapFactory.decodeResource(this.getResources(), R.drawable.face_3);

        paint = new Paint();
        textPaint = new Paint();
        initMlkiFace(bitmap);
    }

    private volatile FirebaseVisionFaceDetector detector;

    private void initMlkiFace(final Bitmap bitmap) {
        // Identity mapping: the bitmap is analyzed and drawn at the same size.
        final float scaleX = 1;
        final float scaleY = 1;
        final float transX = 0;
        final float transY = 0;
        if (bitmap != null) {
            FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);
            if (detector == null) {
                FirebaseVisionFaceDetectorOptions options =
                        new FirebaseVisionFaceDetectorOptions.Builder()
                                .setPerformanceMode(FirebaseVisionFaceDetectorOptions.FAST)
                                .setContourMode(FirebaseVisionFaceDetectorOptions.ALL_CONTOURS)
                                .build();
                detector = FirebaseVision.getInstance().getVisionFaceDetector(options);
            }
            detector.detectInImage(image).addOnSuccessListener(new OnSuccessListener<List<FirebaseVisionFace>>() {
                @Override
                public void onSuccess(List<FirebaseVisionFace> faces) {
                    Log.d(TAG, "onSuccess: detection succeeded");
                    ArrayList<MyPoint> facePoints = new ArrayList<>();
                    if (faces.size() > 0) {
                        // Only the first detected face is drawn here.
                        for (int i = 0; i < 1; i++) {
                            List<FirebaseVisionPoint> contour = faces.get(i).getContour(FirebaseVisionFaceContour.ALL_POINTS).getPoints();
                            for (int j = 0; j < contour.size(); j++) {
                                MyPoint myPoint = new MyPoint(contour.get(j).getX() * scaleX + transX, contour.get(j).getY() * scaleY + transY);
                                facePoints.add(myPoint);
                            }
                        }
                    }
                    drawMlkiLandmarks(bitmap, facePoints);
                    closeDetector();
                }
            }).addOnFailureListener(new OnFailureListener() {
                @Override
                public void onFailure(@NonNull Exception e) {
                    Log.d(TAG, "onFailure: detection failed: " + e);
                    drawMlkiLandmarks(bitmap, new ArrayList<MyPoint>());
                    closeDetector();
                }
            });
        }
    }

    // Release the detector once the result has been handled.
    private void closeDetector() {
        try {
            if (detector != null) {
                detector.close();
                detector = null;
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public Bitmap drawMlkiLandmarks(Bitmap bitmap, ArrayList<MyPoint> facePoints) {
        Log.d(TAG, "drawMlkiLandmarks: input bitmap " + bitmap + ", points " + facePoints);
        // Work on a mutable copy so the source bitmap is left untouched.
        Bitmap mutableCopy = bitmap.copy(bitmap.getConfig(), true);
        Canvas canvas = new Canvas(mutableCopy);
        paint.setStyle(Paint.Style.FILL_AND_STROKE);
        paint.setColor(Color.GREEN);
        paint.setStrokeWidth(3f);
        textPaint.setStrokeWidth(35f);
        textPaint.setColor(Color.WHITE);
        for (int j = 0; j < facePoints.size(); j++) {
            int cx = (int) facePoints.get(j).getX();
            int cy = (int) facePoints.get(j).getY();
            canvas.drawCircle(cx, cy, 3f, paint);
            canvas.drawText(String.valueOf(j), cx, cy, textPaint);
        }
        img.setImageBitmap(mutableCopy);
        return mutableCopy;
    }
}
```
This project's JSON file is google-services.json; note that the file name must not be changed.
Note that both package_name values need to match your own project's package name.
```json
{
  "project_info": {
    "project_number": "48812919930",
    "firebase_url": "https://mlkit-a1dbd.firebaseio.com",
    "project_id": "mlkit-a1dbd",
    "storage_bucket": "mlkit-a1dbd.appspot.com"
  },
  "client": [
    {
      "client_info": {
        "mobilesdk_app_id": "1:48812919930:android:3b4e1d01e0aecbef",
        "android_client_info": {
          "package_name": "com.google.mlkit"
        }
      },
      "oauth_client": [
        {
          "client_id": "48812919930-ghfk9cm82ojhurb1uou7le4u42v10esv.apps.googleusercontent.com",
          "client_type": 1,
          "android_info": {
            "package_name": "com.google.mlkit",
            "certificate_hash": "0956e1fccbb13565a4543708cbf648a0858ffc43"
          }
        },
        {
          "client_id": "48812919930-ouugkae6qhraaf6b97h3ea2fvd187lrh.apps.googleusercontent.com",
          "client_type": 3
        },
        {
          "client_id": "48812919930-ouugkae6qhraaf6b97h3ea2fvd187lrh.apps.googleusercontent.com",
          "client_type": 3
        }
      ],
      "api_key": [
        {
          "current_key": "AIzaSyDorgdUVL_rlk8IGrX9x5IdYvAAt-sx7M0"
        }
      ],
      "services": {
        "analytics_service": {
          "status": 1
        },
        "appinvite_service": {
          "status": 2,
          "other_platform_oauth_client": [
            {
              "client_id": "48812919930-ouugkae6qhraaf6b97h3ea2fvd187lrh.apps.googleusercontent.com",
              "client_type": 3
            }
          ]
        },
        "ads_service": {
          "status": 2
        }
      }
    }
  ],
  "configuration_version": "1"
}
```
Finally, with ML Kit you can obtain face landmark points not only from still images but also from the live frame stream captured by the camera, so ML Kit can also be used for face detection in camera apps.
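For camera frames you would typically call FirebaseVisionImage.fromMediaImage(mediaImage, rotation), where rotation is one of the FirebaseVisionImageMetadata.ROTATION_* constants rather than a value in degrees. A small plain-Java helper for that last conversion step (the constants ROTATION_0 through ROTATION_270 are the ints 0 through 3):

```java
// Helper for real-time detection: FirebaseVisionImage.fromMediaImage expects a
// FirebaseVisionImageMetadata.ROTATION_* constant, not a value in degrees.
class RotationHelper {
    static int degreesToFirebaseRotation(int degrees) {
        switch (degrees) {
            case 0:   return 0; // FirebaseVisionImageMetadata.ROTATION_0
            case 90:  return 1; // FirebaseVisionImageMetadata.ROTATION_90
            case 180: return 2; // FirebaseVisionImageMetadata.ROTATION_180
            case 270: return 3; // FirebaseVisionImageMetadata.ROTATION_270
            default:
                throw new IllegalArgumentException("Rotation must be 0, 90, 180, or 270");
        }
    }

    public static void main(String[] args) {
        System.out.println(degreesToFirebaseRotation(90)); // prints 1
    }
}
```

The degrees themselves come from combining the camera sensor orientation with the device's display rotation; the ML Kit documentation shows the full computation for front and back cameras.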