
Android Camera Source Code Analysis: Device Driver, HAL, Framework


1 Introduction

The Android source code analyzed in this article comes from Android-x86, version 5.1, so it may differ slightly from the Android system running on a phone.

2 V4L2

V4L2 is the Linux driver framework for video devices such as cameras. An application simply obtains a file descriptor for the camera device via open, and can then drive the camera with read, write, ioctl, and so on. The Android HAL does exactly the same thing; the code that manipulates the device directly lives in hardware/libcamera/V4L2Camera.cpp. My project uses a virtual camera device, v4l2loopback, but some of the ioctls Android relies on do not behave as expected on that device and had to be modified, so this section gives an introduction to V4L2. The material comes mainly from the official V4L2 documentation.

V4L2 is a large and comprehensive device driver API, but a camera needs only a fraction of it. The subsections below describe the ioctls that the Android HAL actually uses.

2.1 ioctls

VIDIOC_QUERYCAP

Queries the device's capabilities and type; virtually every V4L2 application issues this ioctl right after open to determine what kind of device it has. The caller passes in a v4l2_capability structure to receive the result. For a camera device, the V4L2_CAP_VIDEO_CAPTURE and V4L2_CAP_STREAMING bits of v4l2_capability.capabilities must both be set.

V4L2_CAP_VIDEO_CAPTURE means the device supports video capture, the basic function of a camera.

V4L2_CAP_STREAMING means the device supports streaming I/O, a memory-mapped way of transferring data directly between the kernel and the application.
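
As a minimal, hedged sketch (assuming /dev/video0 and omitting cleanup), the check described above looks like this:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main() {
    int fd = open("/dev/video0", O_RDWR);
    struct v4l2_capability cap = {};
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0)
        return 1;                               // not a V4L2 device at all
    if (!(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE) ||
        !(cap.capabilities & V4L2_CAP_STREAMING))
        return 1;                               // not usable as a streaming camera
    return 0;
}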

VIDIOC_ENUM_FMT

Queries the image formats the camera supports. The caller passes a v4l2_fmtdesc structure as the output parameter. For a device that supports several formats, set v4l2_fmtdesc->index and call the ioctl repeatedly until it returns EINVAL. On success, v4l2_fmtdesc->pixelformat holds a supported image format (see the enumeration sketch after this list), which can be, among others:

  • V4L2_PIX_FMT_MJPEG
  • V4L2_PIX_FMT_JPEG
  • V4L2_PIX_FMT_YUYV
  • V4L2_PIX_FMT_YVYU, etc.
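
A minimal enumeration loop (hedged sketch; fd is an already-opened capture device) might look like:

#include <cstdio>

struct v4l2_fmtdesc fmt = {};
fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
for (fmt.index = 0; ioctl(fd, VIDIOC_ENUM_FMT, &fmt) == 0; fmt.index++) {
    // pixelformat is a fourcc code; print it as four characters
    printf("format %c%c%c%c: %s\n",
           fmt.pixelformat & 0xFF, (fmt.pixelformat >> 8) & 0xFF,
           (fmt.pixelformat >> 16) & 0xFF, (fmt.pixelformat >> 24) & 0xFF,
           fmt.description);
}
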
VIDIOC_ENUM_FRAMESIZES

Once you have an image format, you must further query which resolutions the device supports for that format, which is what this ioctl does. The caller passes a v4l2_frmsizeenum structure, setting v4l2_frmsizeenum->pixel_format to the format being queried and v4l2_frmsizeenum->index to 0.

On success, v4l2_frmsizeenum->type can take one of three values:

  1. V4L2_FRMSIZE_TYPE_DISCRETE: increment v4l2_frmsizeenum->index and call the ioctl repeatedly until it returns EINVAL to enumerate every resolution supported under this image format; the width and height of each one are in v4l2_frmsizeenum->discrete.width and .discrete.height.
  2. V4L2_FRMSIZE_TYPE_STEPWISE: only v4l2_frmsizeenum->stepwise is valid, and the ioctl must not be called again with other index values.
  3. V4L2_FRMSIZE_TYPE_CONTINUOUS: a special case of STEPWISE; again only stepwise is valid, with stepwise.step_width and stepwise.step_height both equal to 1.

The first case is easy to understand: an explicit list of supported resolutions. STEPWISE means the device accepts any width and height between stepwise.min_width/min_height and stepwise.max_width/max_height in increments of step_width/step_height, and CONTINUOUS is the step-size-1 special case, i.e. any resolution within the range works.
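
For the DISCRETE case, a hedged enumeration sketch (fd open, format taken from a previous VIDIOC_ENUM_FMT) could be:

struct v4l2_frmsizeenum fsize = {};
fsize.pixel_format = V4L2_PIX_FMT_YUYV;   // assumed format
for (fsize.index = 0; ioctl(fd, VIDIOC_ENUM_FRAMESIZES, &fsize) == 0; fsize.index++) {
    if (fsize.type == V4L2_FRMSIZE_TYPE_DISCRETE)
        printf("%ux%u\n", fsize.discrete.width, fsize.discrete.height);
    else
        break;   // STEPWISE/CONTINUOUS: only index 0 is valid
}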

VIDIOC_ENUM_FRAMEINTERVALS

Given an image format and resolution, you can also query which frame rates the camera supports for that combination. The caller fills in a v4l2_frmivalenum structure with index = 0, pixel_format, width, and height.

After the call, check v4l2_frmivalenum.type in the same way; it can again be DISCRETE, STEPWISE, or CONTINUOUS. Note that V4L2 reports frame intervals (seconds per frame), the reciprocal of fps.
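
The loop mirrors the frame-size enumeration; a hedged sketch with an assumed format and resolution:

struct v4l2_frmivalenum fival = {};
fival.pixel_format = V4L2_PIX_FMT_YUYV;   // assumed format
fival.width = 640;                        // assumed resolution
fival.height = 480;
for (fival.index = 0; ioctl(fd, VIDIOC_ENUM_FRAMEINTERVALS, &fival) == 0; fival.index++) {
    if (fival.type != V4L2_FRMIVAL_TYPE_DISCRETE)
        break;
    // the interval is numerator/denominator seconds per frame, so fps is its inverse
    printf("%u fps\n", fival.discrete.denominator / fival.discrete.numerator);
}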

VIDIOC_TRY_FMT/VIDIOC_S_FMT/VIDIOC_G_FMT

These three ioctls set and get the image format. The difference between TRY_FMT and S_FMT is that the former does not change the driver's state.

To set a format, the usual pattern is to fetch the current format with G_FMT, modify the fields of interest, and then apply the result with S_FMT (or probe it with TRY_FMT).
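
That pattern as a hedged sketch:

struct v4l2_format f = {};
f.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
ioctl(fd, VIDIOC_G_FMT, &f);              // fetch the current format
f.fmt.pix.width = 640;                    // modify only what we care about
f.fmt.pix.height = 480;
f.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
if (ioctl(fd, VIDIOC_TRY_FMT, &f) == 0)   // probe without changing driver state
    ioctl(fd, VIDIOC_S_FMT, &f);          // commit; the driver may adjust the fields
// after S_FMT, read back f.fmt.pix.width/height: the driver may have changed them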

VIDIOC_S_PARM/VIDIOC_G_PARM

Sets and gets streaming I/O parameters, such as the frame rate. The caller passes a v4l2_streamparm structure.

VIDIOC_S_JPEGCOMP/VIDIOC_G_JPEGCOMP

Sets and gets the parameters of the JPEG format, such as compression quality.

VIDIOC_REQBUFS

To exchange image data between a user program and the kernel device, a block of memory must be allocated. It can be allocated inside the kernel driver and mapped into user space with mmap, or allocated in user space, putting the driver into user-pointer I/O mode. This ioctl initializes either arrangement. The caller passes a v4l2_requestbuffers with type, memory, and count filled in. It may be called again to change the parameters; setting count to 0 frees all the buffers.

VIDIOC_QUERYBUF

After VIDIOC_REQBUFS, this ioctl can be used at any time to query a buffer's current state. The caller passes a v4l2_buffer structure with type and index set, where valid indices are [0, count-1] and count is the value returned by VIDIOC_REQBUFS.

VIDIOC_QBUF/VIDIOC_DQBUF

Even after VIDIOC_REQBUFS allocates memory, the V4L2 driver cannot use it directly: VIDIOC_QBUF enqueues one of the allocated frame buffers onto the driver's incoming queue, and VIDIOC_DQBUF dequeues a buffer holding one frame of data. For a CAPTURE device such as a camera, what gets enqueued is an empty buffer; once the camera fills it with captured data, a buffer containing a valid frame can be dequeued. If the camera has not finished filling a buffer yet, VIDIOC_DQBUF blocks, unless the device was opened with the O_NONBLOCK flag.

VIDIOC_STREAMON/VIDIOC_STREAMOFF

STREAMON tells the device to start working; only after this ioctl does the camera begin capturing images and filling buffers. Conversely, STREAMOFF stops it, and any frames still in the driver that have not been dequeued with DQBUF are lost.
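
Putting the buffer ioctls together, a hedged end-to-end capture sketch (4 mmap buffers, error handling omitted; V4L2Camera.cpp below follows the same sequence):

#include <sys/mman.h>

struct v4l2_requestbuffers rb = {};
rb.count  = 4;
rb.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
rb.memory = V4L2_MEMORY_MMAP;
ioctl(fd, VIDIOC_REQBUFS, &rb);                    // allocate buffers in the driver

void* mem[4];
for (unsigned i = 0; i < rb.count; i++) {
    struct v4l2_buffer buf = {};
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    buf.index = i;
    ioctl(fd, VIDIOC_QUERYBUF, &buf);              // learn each buffer's offset/length
    mem[i] = mmap(NULL, buf.length, PROT_READ | PROT_WRITE,
                  MAP_SHARED, fd, buf.m.offset);   // map it into user space
    ioctl(fd, VIDIOC_QBUF, &buf);                  // hand the empty buffer to the driver
}

enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
ioctl(fd, VIDIOC_STREAMON, &type);                 // start capturing

for (;;) {
    struct v4l2_buffer buf = {};
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    ioctl(fd, VIDIOC_DQBUF, &buf);                 // blocks until a frame is ready
    // process buf.bytesused bytes at mem[buf.index] ...
    ioctl(fd, VIDIOC_QBUF, &buf);                  // recycle the buffer
}
// ioctl(fd, VIDIOC_STREAMOFF, &type);             // undequeued frames are dropped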

3 Pixel Formats

Because the lower layers deal with raw images, the camera-related code refers to quite a few pixel formats, so here is a quick overview of the ones a camera is likely to encounter.

3.1 RGB

Simple: a pixel is composed of the three colors R, G, and B, each taking 8 bits, so one pixel needs 3 bytes of storage. Some RGB encodings spend fewer bits on certain channels; in RGB844, for example, green and blue get 4 bits each, so a pixel fits in 2 bytes. RGBA adds an alpha channel, bringing a pixel to 32 bits.

3.2 YUV

YUV likewise has three channels: Y is luma, a weighted sum of the R, G, and B components, while U and V are chroma, U (Cb) being the blue-luma difference and V (Cr) the red-luma difference. Y normally takes 8 bits, and U and V can be subsampled, which yields the family of encodings YUV444, YUV420, YUV411, and so on. The three digits in the name give the sampling ratio of the Y, U, and V channels; YUV444 means 1:1:1, so at 8 bits per channel one pixel takes 24 bits. YUV420 does not mean V is dropped entirely; rather, chroma is subsampled by two in both directions, with rows effectively alternating between carrying U samples and carrying V samples.

Android's camera preview defaults to YUV420sp. YUV420 comes in two flavors, YUV420p and YUV420sp, which differ in how the U and V data are laid out:

Android_Hardware_Camera_20160605_142139.png

Figure 1: image from http://blog.csdn.net/jefry_xdz/article/details/7931018
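
As a concrete sketch of the difference, the following hypothetical helpers (the names are ours, not from the Android sources) compute where the chroma bytes of pixel (x, y) live in a w x h frame under each layout:

#include <cstddef>

// YUV420p (planar, e.g. I420): all Y, then the whole U plane, then the whole V plane.
size_t uOffsetI420(int w, int h, int x, int y) {
    return (size_t)w * h + (y / 2) * (w / 2) + (x / 2);   // V plane starts at w*h + (w*h)/4
}

// YUV420sp (semi-planar, e.g. NV21, the Android default): all Y, then interleaved V/U pairs.
size_t vOffsetNV21(int w, int h, int x, int y) {
    return (size_t)w * h + (y / 2) * w + (x / 2) * 2;     // the matching U byte follows immediately
}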

4 Android Camera

4.1 Hardware

In Android-x86 5.1, the relationships between the camera-related classes in the HAL look roughly like this:

android_camera_uml.png

SurfaceSize wraps the width and height of a surface. SurfaceDesc wraps a surface's width, height, and fps.

The V4L2Camera class wraps the V4L2 device driver and controls the V4L2 device directly through ioctls.

CameraParameters wraps the camera parameters; its flatten and unflatten methods amount to serializing and deserializing a CameraParameters instance.

camera_device is effectively an abstract-class-like struct that declares the interfaces a camera must implement. CameraHardware inherits from camera_device and represents one camera; it implements the operations Android expects of a camera, such as startPreview. Underneath, it drives the camera device through a V4L2Camera object: every CameraHardware instance owns one V4L2Camera, plus a CameraParameters object holding the camera's parameters.

CameraFactory is the camera manager; there is exactly one instance in the whole Android system, and it creates the CameraHardware instances by reading a configuration file.

CameraFactory

The CameraFactory class plays the role of camera-device administrator: it determines how many cameras the machine has, what their device paths are, their rotation angle, and their facing (front or back). Android-x86 reads a configuration file to find out how many cameras are present:

hardware/libcamera/CameraFactory.cpp

void CameraFactory::parseConfig(const char* configFile)
{
    ALOGD("CameraFactory::parseConfig: configFile = %s", configFile);
    FILE* config = fopen(configFile, "r");
    if (config != NULL) {
        char line[128];
        char arg1[128];
        char arg2[128];
        int  arg3;
        while (fgets(line, sizeof line, config) != NULL) {
            int lineStart = strspn(line, " \t\n\v");
            if (line[lineStart] == '#')
                continue;
            sscanf(line, "%s %s %d", arg1, arg2, &arg3);
            if (arg3 != 0 && arg3 != 90 && arg3 != 180 && arg3 != 270)
                arg3 = 0;
            if (strcmp(arg1, "front") == 0) {
                newCameraConfig(CAMERA_FACING_FRONT, arg2, arg3);
            } else if (strcmp(arg1, "back") == 0) {
                newCameraConfig(CAMERA_FACING_BACK, arg2, arg3);
            } else {
                ALOGD("CameraFactory::parseConfig: Unrecognized config line '%s'", line);
            }
        }
    } else {
        ALOGD("%s not found, using camera configuration defaults", CONFIG_FILE);
        if (access(DEFAULT_DEVICE_BACK, F_OK) != -1) {
            ALOGD("Found device %s", DEFAULT_DEVICE_BACK);
            newCameraConfig(CAMERA_FACING_BACK, DEFAULT_DEVICE_BACK, 0);
        }
        if (access(DEFAULT_DEVICE_FRONT, F_OK) != -1) {
            ALOGD("Found device %s", DEFAULT_DEVICE_FRONT);
            newCameraConfig(CAMERA_FACING_FRONT, DEFAULT_DEVICE_FRONT, 0);
        }
    }
}

The configuration file lives at /etc/camera.cfg; each line has the form "front/back path_to_device orientation", for example "front /dev/video0 0".
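
For illustration, a plausible /etc/camera.cfg (hypothetical values, matching the parser above) could read:

# facing  device       orientation (0, 90, 180 or 270)
back  /dev/video0  0
front /dev/video1  0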

Another function worth mentioning is cameraDeviceOpen; when an app opens a camera, it obtains the camera through this function:

 1: int CameraFactory::cameraDeviceOpen(const hw_module_t* module, int camera_id, hw_device_t** device)
 2: {
 3:     ALOGD("CameraFactory::cameraDeviceOpen: id = %d", camera_id);
 4:
 5:     *device = NULL;
 6:
 7:     if (!mCamera || camera_id < 0 || camera_id >= getCameraNum()) {
 8:         ALOGE("%s: Camera id %d is out of bounds (%d)",
 9:               __FUNCTION__, camera_id, getCameraNum());
10:         return -EINVAL;
11:     }
12:
13:     if (!mCamera[camera_id]) {
14:         mCamera[camera_id] = new CameraHardware(module, mCameraDevices[camera_id]);
15:     }
16:     return mCamera[camera_id]->connectCamera(device);
17: }

Line 13 shows that by the time the Android system has booted, the CameraFactory object has already been constructed and has read from the configuration file how many cameras the machine has and what their device paths are, but no CameraHardware object exists yet; one is created only when the corresponding camera is opened for the first time.

The whole Android system has a single CameraFactory instance: gCameraFactory, defined in CameraFactory.cpp. camera_module_t defines several function pointers aimed at static member functions of CameraFactory, so a call through those pointers is really a call into the corresponding method of the gCameraFactory object:

hardware/libcamera/CameraHal.cpp

camera_module_t HAL_MODULE_INFO_SYM = {
    common: {
        tag:           HARDWARE_MODULE_TAG,
        version_major: 1,
        version_minor: 0,
        id:            CAMERA_HARDWARE_MODULE_ID,
        name:          "Camera Module",
        author:        "The Android Open Source Project",
        methods:       &android::CameraFactory::mCameraModuleMethods,
        dso:           NULL,
        reserved:      {0},
    },
    get_number_of_cameras: android::CameraFactory::get_number_of_cameras,
    get_camera_info:       android::CameraFactory::get_camera_info,
};

The code above defines a camera_module_t; the corresponding functions are defined as follows:

hardware/libcamera/CameraFactory.cpp

int CameraFactory2::device_open(const hw_module_t* module,
                                const char* name,
                                hw_device_t** device)
{
    ALOGD("CameraFactory2::device_open: name = %s", name);
    /*
     * Simply verify the parameters, and dispatch the call inside the
     * CameraFactory instance.
     */
    if (module != &HAL_MODULE_INFO_SYM.common) {
        ALOGE("%s: Invalid module %p expected %p",
              __FUNCTION__, module, &HAL_MODULE_INFO_SYM.common);
        return -EINVAL;
    }
    if (name == NULL) {
        ALOGE("%s: NULL name is not expected here", __FUNCTION__);
        return -EINVAL;
    }
    int camera_id = atoi(name);
    return gCameraFactory.cameraDeviceOpen(module, camera_id, device);
}

int CameraFactory2::get_number_of_cameras(void)
{
    ALOGD("CameraFactory2::get_number_of_cameras");
    return gCameraFactory.getCameraNum();
}

int CameraFactory2::get_camera_info(int camera_id,
                                    struct camera_info* info)
{
    ALOGD("CameraFactory2::get_camera_info");
    return gCameraFactory.getCameraInfo(camera_id, info);
}
camera_device

camera_device is also typedef'd as camera_device_t, which brings us to the HAL extension conventions. The Android HAL defines three data types, struct hw_module_t, struct hw_module_methods_t, and struct hw_device_t, representing the module type, the module methods, and the device type. Extending the HAL with a new kind of device means providing all three: for a camera, that means defining camera_module_t and camera_device_t and assigning the function pointer in hw_module_methods_t, which has only one member, open, amounting to module initialization. The HAL further requires that the first member of camera_module_t be an hw_module_t and the first member of camera_device_t be an hw_device_t; the remaining members are free-form.

See here for a more detailed explanation of the underlying mechanism.

In Android-x86, camera_device_t is defined in hardware/libhardware/include/hardware/camera.h:

hardware/libhardware/include/hardware/camera.h

typedef struct camera_device {
    hw_device_t common;
    camera_device_ops_t *ops;
    void *priv;
} camera_device_t;

camera_device_ops_t is the set of function interfaces the camera module defines for itself, in the same file; it is too long to reproduce here.

camera_module_t lives in camera_common.h in the same directory:

hardware/libhardware/include/hardware/camera_common.h

typedef struct camera_module {
    hw_module_t common;
    int (*get_number_of_cameras)(void);
    int (*get_camera_info)(int camera_id, struct camera_info *info);
    int (*set_callbacks)(const camera_module_callbacks_t *callbacks);
    void (*get_vendor_tag_ops)(vendor_tag_ops_t* ops);
    int (*open_legacy)(const struct hw_module_t* module, const char* id,
                       uint32_t halVersion, struct hw_device_t** device);
    /* reserved for future use */
    void* reserved[7];
} camera_module_t;

When the framework calls into the HAL, it obtains an hw_device_t through hw_module_t->methods->open, casts it to camera_device_t, and can then call the camera functions in camera_device_t->ops. For the Android-x86 camera, those ops pointers are assigned in the CameraHardware class, which inherits from camera_device.
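
A hedged sketch of that call path from the framework side (error handling omitted; the openCamera0 wrapper is hypothetical, but hw_get_module and the open hook are the standard libhardware entry points):

#include <hardware/hardware.h>
#include <hardware/camera_common.h>
#include <hardware/camera.h>

camera_device_t* openCamera0() {
    const camera_module_t* mod = NULL;
    hw_get_module(CAMERA_HARDWARE_MODULE_ID,
                  (const hw_module_t**)&mod);           // dlopen + HAL_MODULE_INFO_SYM lookup
    hw_device_t* dev = NULL;
    mod->common.methods->open(&mod->common, "0", &dev); // "0" is the camera id string
    return (camera_device_t*)dev;                       // safe: hw_device_t is the first member
}

// usage: camera_device_t* cam = openCamera0(); cam->ops->start_preview(cam); ...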

CameraHardware

CameraHardware's many entry points mainly serve three operations: preview, recording, and taking pictures; most of the remaining functions are preparation such as parameter setup. This subsection walks through the code path using preview as the example.

First the CameraHardware object's parameters are initialized, in initDefaultParameters. It calls V4L2Camera's getBestPreviewFmt, getBestPictureFmt, getAvailableSizes, and getAvailableFps to obtain the default preview format, the default picture format, and the camera's supported resolutions and frame rates:

hardware/libcamera/CameraHardware.cpp

int pw = MIN_WIDTH;
int ph = MIN_HEIGHT;
int pfps = 30;
int fw = MIN_WIDTH;
int fh = MIN_HEIGHT;
SortedVector<SurfaceSize> avSizes;
SortedVector<int> avFps;

if (camera.Open(mVideoDevice) != NO_ERROR) {
    ALOGE("cannot open device.");
} else {
    // Get the default preview format
    pw = camera.getBestPreviewFmt().getWidth();
    ph = camera.getBestPreviewFmt().getHeight();
    pfps = camera.getBestPreviewFmt().getFps();
    // Get the default picture format
    fw = camera.getBestPictureFmt().getWidth();
    fh = camera.getBestPictureFmt().getHeight();
    // Get all the available sizes
    avSizes = camera.getAvailableSizes();
    // Add some sizes that some specific apps expect to find:
    //  GTalk expects 320x200
    //  Fring expects 240x160
    // And also add standard resolutions found in low end cameras, as
    //  android apps could be expecting to find them
    // The V4LCamera handles those resolutions by choosing the next
    //  larger one and cropping the captured frames to the requested size
    avSizes.add(SurfaceSize(480,320)); // HVGA
    avSizes.add(SurfaceSize(432,320)); // 1.35-to-1, for photos. (Rounded up from 1.3333 to 1)
    avSizes.add(SurfaceSize(352,288)); // CIF
    avSizes.add(SurfaceSize(320,240)); // QVGA
    avSizes.add(SurfaceSize(320,200));
    avSizes.add(SurfaceSize(240,160)); // SQVGA
    avSizes.add(SurfaceSize(176,144)); // QCIF
    // Get all the available Fps
    avFps = camera.getAvailableFps();
}

These values are then converted to text form and stored in the CameraParameters object:

hardware/libcamera/CameraHardware.cpp

// Antibanding
p.set(CameraParameters::KEY_SUPPORTED_ANTIBANDING,"auto");
p.set(CameraParameters::KEY_ANTIBANDING,"auto");
// Effects
p.set(CameraParameters::KEY_SUPPORTED_EFFECTS,"none"); // "none,mono,sepia,negative,solarize"
p.set(CameraParameters::KEY_EFFECT,"none");
// Flash modes
p.set(CameraParameters::KEY_SUPPORTED_FLASH_MODES,"off");
p.set(CameraParameters::KEY_FLASH_MODE,"off");
// Focus modes
p.set(CameraParameters::KEY_SUPPORTED_FOCUS_MODES,"fixed");
p.set(CameraParameters::KEY_FOCUS_MODE,"fixed");
#if 0
p.set(CameraParameters::KEY_JPEG_THUMBNAIL_HEIGHT,0);
p.set(CameraParameters::KEY_JPEG_THUMBNAIL_QUALITY,75);
p.set(CameraParameters::KEY_SUPPORTED_JPEG_THUMBNAIL_SIZES,"0x0");
p.set("jpeg-thumbnail-size","0x0");
p.set(CameraParameters::KEY_JPEG_THUMBNAIL_WIDTH,0);
#endif
// Picture - Only JPEG supported
p.set(CameraParameters::KEY_SUPPORTED_PICTURE_FORMATS,CameraParameters::PIXEL_FORMAT_JPEG); // ONLY jpeg
p.setPictureFormat(CameraParameters::PIXEL_FORMAT_JPEG);
p.set(CameraParameters::KEY_SUPPORTED_PICTURE_SIZES, szs);
p.setPictureSize(fw,fh);
p.set(CameraParameters::KEY_JPEG_QUALITY, 85);
// Preview - Supporting yuv422i-yuyv,yuv422sp,yuv420sp, defaulting to yuv420sp, as that is the android Defacto default
p.set(CameraParameters::KEY_SUPPORTED_PREVIEW_FORMATS,"yuv422i-yuyv,yuv422sp,yuv420sp,yuv420p"); // All supported preview formats
p.setPreviewFormat(CameraParameters::PIXEL_FORMAT_YUV422SP); // For compatibility sake ... Default to the android standard
p.set(CameraParameters::KEY_SUPPORTED_PREVIEW_FPS_RANGE, fpsranges);
p.set(CameraParameters::KEY_SUPPORTED_PREVIEW_FRAME_RATES, fps);
p.setPreviewFrameRate( pfps );
p.set(CameraParameters::KEY_SUPPORTED_PREVIEW_SIZES, szs);
p.setPreviewSize(pw,ph);
// Video - Supporting yuv422i-yuyv,yuv422sp,yuv420sp and defaulting to yuv420p
p.set("video-size-values"/*CameraParameters::KEY_SUPPORTED_VIDEO_SIZES*/, szs);
p.setVideoSize(pw,ph);
p.set(CameraParameters::KEY_VIDEO_FRAME_FORMAT, CameraParameters::PIXEL_FORMAT_YUV420P);
p.set("preferred-preview-size-for-video", "640x480");
// supported rotations
p.set("rotation-values","0");
p.set(CameraParameters::KEY_ROTATION,"0");
// scenes modes
p.set(CameraParameters::KEY_SUPPORTED_SCENE_MODES,"auto");
p.set(CameraParameters::KEY_SCENE_MODE,"auto");
// white balance
p.set(CameraParameters::KEY_SUPPORTED_WHITE_BALANCE,"auto");
p.set(CameraParameters::KEY_WHITE_BALANCE,"auto");
// zoom
p.set(CameraParameters::KEY_SMOOTH_ZOOM_SUPPORTED,"false");
p.set("max-video-continuous-zoom", 0 );
p.set(CameraParameters::KEY_ZOOM, "0");
p.set(CameraParameters::KEY_MAX_ZOOM, "100");
p.set(CameraParameters::KEY_ZOOM_RATIOS, "100");
p.set(CameraParameters::KEY_ZOOM_SUPPORTED, "false");
// missing parameters for Camera2
p.set(CameraParameters::KEY_FOCAL_LENGTH, 4.31);
p.set(CameraParameters::KEY_HORIZONTAL_VIEW_ANGLE, 90);
p.set(CameraParameters::KEY_VERTICAL_VIEW_ANGLE, 90);
p.set(CameraParameters::KEY_SUPPORTED_JPEG_THUMBNAIL_SIZES, "640x480,0x0");

Once the preparation is done, CameraHardware::startPreview is called. It is only three lines: it takes a lock and then calls startPreviewLocked, where all the preview work happens.

hardware/libcamera/CameraHardware.cpp

status_t CameraHardware::startPreviewLocked()
{
    ALOGD("CameraHardware::startPreviewLocked");
    // Preview runs on a dedicated thread; these lines check whether it is
    // already running. Normally this branch is not taken.
    if (mPreviewThread != 0) {
        ALOGD("CameraHardware::startPreviewLocked: preview already running");
        return NO_ERROR;
    }
    // Get the preview width and height from CameraParameters.
    int width, height;
    // If we are recording, use the recording video size instead of the preview size
    if (mRecordingEnabled && mMsgEnabled & CAMERA_MSG_VIDEO_FRAME) {
        mParameters.getVideoSize(&width, &height);
    } else {
        mParameters.getPreviewSize(&width, &height);
    }
    // Get the preview fps from CameraParameters.
    int fps = mParameters.getPreviewFrameRate();
    ALOGD("CameraHardware::startPreviewLocked: Open, %dx%d", width, height);
    // Open the camera device via V4L2Camera::Open.
    status_t ret = camera.Open(mVideoDevice);
    if (ret != NO_ERROR) {
        ALOGE("Failed to initialize Camera");
        return ret;
    }
    ALOGD("CameraHardware::startPreviewLocked: Init");
    // Initialize the camera device via V4L2Camera::Init.
    ret = camera.Init(width, height, fps);
    if (ret != NO_ERROR) {
        ALOGE("Failed to setup streaming");
        return ret;
    }
    // The camera may not support the exact size the caller asked for; the
    // size it actually runs at is retrieved here.
    /* Retrieve the real size being used */
    camera.getSize(width, height);
    ALOGD("CameraHardware::startPreviewLocked: effective size: %dx%d",width, height);
    // Store the effective size.
    // If we are recording, use the recording video size instead of the preview size
    if (mRecordingEnabled && mMsgEnabled & CAMERA_MSG_VIDEO_FRAME) {
        /* Store it as the video size to use */
        mParameters.setVideoSize(width, height);
    } else {
        /* Store it as the preview size to use */
        mParameters.setPreviewSize(width, height);
    }
    /* And reinit the memory heaps to reflect the real used size if needed */
    initHeapLocked();
    ALOGD("CameraHardware::startPreviewLocked: StartStreaming");
    // Tell the camera device to start working via V4L2Camera::StartStreaming.
    ret = camera.StartStreaming();
    if (ret != NO_ERROR) {
        ALOGE("Failed to start streaming");
        return ret;
    }
    // Initialize the preview window.
    // setup the preview window geometry in order to use it to zoom the image
    if (mWin != 0) {
        ALOGD("CameraHardware::setPreviewWindow - Negotiating preview format");
        NegotiatePreviewFormat(mWin);
    }
    ALOGD("CameraHardware::startPreviewLocked: starting PreviewThread");
    // Spawn a thread to handle the preview work.
    mPreviewThread = new PreviewThread(this);
    ALOGD("CameraHardware::startPreviewLocked: O - this:0x%p",this);
    return NO_ERROR;
}

Now look at PreviewThread. The class is trivial: it just calls CameraHardware's previewThread method, which computes a wait time from the fps, grabs an image from the camera device with V4L2Camera's GrabRawFrame, converts it to a supported image format, and finally pushes it to the display window.
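
Condensed into a hedged sketch (the helper names here are ours, not from the source), one iteration amounts to:

void previewIteration() {
    // wait time derived from the configured preview frame rate
    long waitUs = 1000000L / mParameters.getPreviewFrameRate();
    camera.GrabRawFrame(rawBuffer, rawBufferSize);    // DQBUF + convert to YUYV
    convertToPreviewFormat(rawBuffer, previewBuffer); // e.g. YUYV -> yuv420sp
    postToPreviewWindow(previewBuffer);               // hand the frame to the window
    usleep(waitUs);                                   // pace the loop to the fps
}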

V4L2Camera

The V4L2Camera class is mainly a wrapper around the V4L2 device. Below we analyze its commonly used entry points: Open, Init, StartStreaming, GrabRawFrame, EnumFrameIntervals, EnumFrameSizes, and EnumFrameFormats.

  • Open

    The logic of Open is straightforward: get a file descriptor for the camera device with the open system call, then query the device's capabilities with the VIDIOC_QUERYCAP ioctl. Since this must be a camera device, the code checks that the V4L2_CAP_VIDEO_CAPTURE bit is set (and that streaming I/O is supported). Finally it calls EnumFrameFormats to discover the image formats the camera supports.

    hardware/libcamera/V4L2Camera.cpp

    int V4L2Camera::Open (const char *device)
    {
        int ret;
        /* Close the previous instance, if any */
        Close();
        memset(videoIn, 0, sizeof (struct vdIn));
        if ((fd = open(device, O_RDWR)) == -1) {
            ALOGE("ERROR opening V4L interface: %s", strerror(errno));
            return -1;
        }
        ret = ioctl (fd, VIDIOC_QUERYCAP, &videoIn->cap);
        if (ret < 0) {
            ALOGE("Error opening device: unable to query device.");
            return -1;
        }
        if ((videoIn->cap.capabilities & V4L2_CAP_VIDEO_CAPTURE) == 0) {
            ALOGE("Error opening device: video capture not supported.");
            return -1;
        }
        if (!(videoIn->cap.capabilities & V4L2_CAP_STREAMING)) {
            ALOGE("Capture device does not support streaming i/o");
            return -1;
        }
        /* Enumerate all available frame formats */
        EnumFrameFormats();
        return ret;
    }
  • EnumFrameFormats

    This function discovers the camera device's image formats, resolutions, and frame rates. As section 2.1 explained, supported resolutions are only meaningful relative to a particular image format, and fps only relative to a format and resolution, so after this one call everything the device supports is known. The function also fills in m_BestPreviewFmt and m_BestPictureFmt, which are later used as the preview and picture defaults.

    hardware/libcamera/V4L2Camera.cpp

    bool V4L2Camera::EnumFrameFormats()
    {
        ALOGD("V4L2Camera::EnumFrameFormats");
        struct v4l2_fmtdesc fmt;
        // Start with no modes
        m_AllFmts.clear();
        memset(&fmt, 0, sizeof(fmt));
        fmt.index = 0;
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        // Iterate over every image format the device supports
        while (ioctl(fd,VIDIOC_ENUM_FMT, &fmt) >= 0) {
            fmt.index++;
            ALOGD("{ pixelformat = '%c%c%c%c', description = '%s' }",
                  fmt.pixelformat & 0xFF, (fmt.pixelformat >> 8) & 0xFF,
                  (fmt.pixelformat >> 16) & 0xFF, (fmt.pixelformat >> 24) & 0xFF,
                  fmt.description);
            // Query the resolutions and fps supported under this format
            //enumerate frame sizes for this pixel format
            if (!EnumFrameSizes(fmt.pixelformat)) {
                ALOGE("  Unable to enumerate frame sizes.");
            }
        };
        // At this point the formats, resolutions and fps are all known.
        // Now, select the best preview format and the best PictureFormat
        m_BestPreviewFmt = SurfaceDesc();
        m_BestPictureFmt = SurfaceDesc();
        unsigned int i;
        for (i=0; i<m_AllFmts.size(); i++) {
            SurfaceDesc s = m_AllFmts[i];
            // Pick the best picture mode: for still pictures fps hardly
            // matters, a large resolution is what counts, so prefer the
            // SurfaceDesc with the largest size for m_BestPictureFmt.
            // Prioritize size over everything else when taking pictures. use the
            // least fps possible, as that usually means better quality
            if ((s.getSize() > m_BestPictureFmt.getSize()) ||
                (s.getSize() == m_BestPictureFmt.getSize() && s.getFps() < m_BestPictureFmt.getFps() )
               ) {
                m_BestPictureFmt = s;
            }
            // Pick the best preview mode: for preview the fps weighs more,
            // so prefer the SurfaceDesc with the highest fps for m_BestPreviewFmt.
            // Prioritize fps, then size when doing preview
            if ((s.getFps() > m_BestPreviewFmt.getFps()) ||
                (s.getFps() == m_BestPreviewFmt.getFps() && s.getSize() > m_BestPreviewFmt.getSize() )
               ) {
                m_BestPreviewFmt = s;
            }
        }
        return true;
    }
  • EnumFrameSizes

    This function queries the resolutions the device supports under the given pixfmt.

    hardware/libcamera/V4L2Camera.cpp

    bool V4L2Camera::EnumFrameSizes(int pixfmt)
    {
        ALOGD("V4L2Camera::EnumFrameSizes: pixfmt: 0x%08x",pixfmt);
        int ret=0;
        int fsizeind = 0;
        struct v4l2_frmsizeenum fsize;
        // Prepare the v4l2_frmsizeenum
        memset(&fsize, 0, sizeof(fsize));
        fsize.index = 0;
        fsize.pixel_format = pixfmt;
        // Loop over the VIDIOC_ENUM_FRAMESIZES ioctl to query every supported resolution
        while (ioctl(fd, VIDIOC_ENUM_FRAMESIZES, &fsize) >= 0) {
            fsize.index++;
            // Branch on the type of the result
            if (fsize.type == V4L2_FRMSIZE_TYPE_DISCRETE) {
                ALOGD("{ discrete: width = %u, height = %u }",
                      fsize.discrete.width, fsize.discrete.height);
                // Counts the DISCRETE resolutions the device supports
                fsizeind++;
                // Go on to query the fps supported at this resolution
                if (!EnumFrameIntervals(pixfmt,fsize.discrete.width, fsize.discrete.height))
                    ALOGD("  Unable to enumerate frame intervals");
            } else if (fsize.type == V4L2_FRMSIZE_TYPE_CONTINUOUS) { // CONTINUOUS and STEPWISE are only logged, nothing else is done
                ALOGD("{ continuous: min { width = %u, height = %u } .. "
                      "max { width = %u, height = %u } }",
                      fsize.stepwise.min_width, fsize.stepwise.min_height,
                      fsize.stepwise.max_width, fsize.stepwise.max_height);
                ALOGD("  will not enumerate frame intervals.\n");
            } else if (fsize.type == V4L2_FRMSIZE_TYPE_STEPWISE) {
                ALOGD("{ stepwise: min { width = %u, height = %u } .. "
                      "max { width = %u, height = %u } / "
                      "stepsize { width = %u, height = %u } }",
                      fsize.stepwise.min_width, fsize.stepwise.min_height,
                      fsize.stepwise.max_width, fsize.stepwise.max_height,
                      fsize.stepwise.step_width, fsize.stepwise.step_height);
                ALOGD("  will not enumerate frame intervals.");
            } else {
                ALOGE("  fsize.type not supported: %d\n", fsize.type);
                ALOGE("     (Discrete: %d   Continuous: %d  Stepwise: %d)",
                      V4L2_FRMSIZE_TYPE_DISCRETE,
                      V4L2_FRMSIZE_TYPE_CONTINUOUS,
                      V4L2_FRMSIZE_TYPE_STEPWISE);
            }
        }
        // If the device reports no DISCRETE resolution at all, probe it with
        // VIDIOC_TRY_FMT; a size that can be set successfully is also treated
        // as supported by this camera.
        if (fsizeind == 0) {
            /* ------ gspca doesn't enumerate frame sizes ------ */
            /*       negotiate with VIDIOC_TRY_FMT instead       */
            static const struct {
                int w,h;
            } defMode[] = {
                {800,600},
                {768,576},
                {768,480},
                {720,576},
                {720,480},
                {704,576},
                {704,480},
                {640,480},
                {352,288},
                {320,240}
            };
            unsigned int i;
            for (i = 0 ; i < (sizeof(defMode) / sizeof(defMode[0])); i++) {
                fsizeind++;
                struct v4l2_format fmt;
                fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
                fmt.fmt.pix.width = defMode[i].w;
                fmt.fmt.pix.height = defMode[i].h;
                fmt.fmt.pix.pixelformat = pixfmt;
                fmt.fmt.pix.field = V4L2_FIELD_ANY;
                if (ioctl(fd,VIDIOC_TRY_FMT, &fmt) >= 0) {
                    ALOGD("{ ?GSPCA? : width = %u, height = %u }\n", fmt.fmt.pix.width, fmt.fmt.pix.height);
                    // Add the mode descriptor
                    m_AllFmts.add( SurfaceDesc( fmt.fmt.pix.width, fmt.fmt.pix.height, 25 ) );
                }
            }
        }
        return true;
    }

    As you can see, Android only honors DISCRETE resolutions; CONTINUOUS and STEPWISE merely produce a log line and are otherwise ignored.

  • EnumFrameIntervals

    This function uses the VIDIOC_ENUM_FRAMEINTERVALS ioctl to query the frame rates the device supports at the given image format and resolution.

    hardware/libcamera/V4L2Camera.cpp

    bool V4L2Camera::EnumFrameIntervals(int pixfmt, int width, int height)
    {
        ALOGD("V4L2Camera::EnumFrameIntervals: pixfmt: 0x%08x, w:%d, h:%d",pixfmt,width,height);
        struct v4l2_frmivalenum fival;
        int list_fps=0;
        // Prepare the parameters
        memset(&fival, 0, sizeof(fival));
        fival.index = 0;
        fival.pixel_format = pixfmt;
        fival.width = width;
        fival.height = height;
        ALOGD("\tTime interval between frame: ");
        // Loop over the ioctl to fetch every supported fps
        while (ioctl(fd,VIDIOC_ENUM_FRAMEINTERVALS, &fival) >= 0)
        {
            fival.index++;
            // Again only DISCRETE is honored
            if (fival.type == V4L2_FRMIVAL_TYPE_DISCRETE) {
                ALOGD("%u/%u", fival.discrete.numerator, fival.discrete.denominator);
                // Create a SurfaceDesc and add it to the member m_AllFmts
                m_AllFmts.add( SurfaceDesc( width, height, fival.discrete.denominator ) );
                list_fps++;
            } else if (fival.type == V4L2_FRMIVAL_TYPE_CONTINUOUS) {
                ALOGD("{min { %u/%u } .. max { %u/%u } }",
                      fival.stepwise.min.numerator, fival.stepwise.min.denominator,
                      fival.stepwise.max.numerator, fival.stepwise.max.denominator);
                break;
            } else if (fival.type == V4L2_FRMIVAL_TYPE_STEPWISE) {
                ALOGD("{min { %u/%u } .. max { %u/%u } / "
                      "stepsize { %u/%u } }",
                      fival.stepwise.min.numerator, fival.stepwise.min.denominator,
                      fival.stepwise.max.numerator, fival.stepwise.max.denominator,
                      fival.stepwise.step.numerator, fival.stepwise.step.denominator);
                break;
            }
        }
        // Assume at least 1fps
        if (list_fps == 0) {
            m_AllFmts.add( SurfaceDesc( width, height, 1 ) );
        }
        return true;
    }
  • Init

    Init is the most complex method in the V4L2Camera class.

    hardware/libcamera/V4L2Camera.cpp

    int V4L2Camera::Init(int width, int height, int fps)
    {
        ALOGD("V4L2Camera::Init");
        /* Initialize the capture to the specified width and height */
        static const struct {
            int fmt;        /* PixelFormat */
            int bpp;        /* bytes per pixel */
            int isplanar;   /* If format is planar or not */
            int allowscrop; /* If we support cropping with this pixel format */
        } pixFmtsOrder[] = {
            {V4L2_PIX_FMT_YUYV,    2,0,1},
            {V4L2_PIX_FMT_YVYU,    2,0,1},
            {V4L2_PIX_FMT_UYVY,    2,0,1},
            {V4L2_PIX_FMT_YYUV,    2,0,1},
            {V4L2_PIX_FMT_SPCA501, 2,0,0},
            {V4L2_PIX_FMT_SPCA505, 2,0,0},
            {V4L2_PIX_FMT_SPCA508, 2,0,0},
            {V4L2_PIX_FMT_YUV420,  0,1,0},
            {V4L2_PIX_FMT_YVU420,  0,1,0},
            {V4L2_PIX_FMT_NV12,    0,1,0},
            {V4L2_PIX_FMT_NV21,    0,1,0},
            {V4L2_PIX_FMT_NV16,    0,1,0},
            {V4L2_PIX_FMT_NV61,    0,1,0},
            {V4L2_PIX_FMT_Y41P,    0,0,0},
            {V4L2_PIX_FMT_SGBRG8,  0,0,0},
            {V4L2_PIX_FMT_SGRBG8,  0,0,0},
            {V4L2_PIX_FMT_SBGGR8,  0,0,0},
            {V4L2_PIX_FMT_SRGGB8,  0,0,0},
            {V4L2_PIX_FMT_BGR24,   3,0,1},
            {V4L2_PIX_FMT_RGB24,   3,0,1},
            {V4L2_PIX_FMT_MJPEG,   0,1,0},
            {V4L2_PIX_FMT_JPEG,    0,1,0},
            {V4L2_PIX_FMT_GREY,    1,0,1},
            {V4L2_PIX_FMT_Y16,     2,0,1},
        };
        int ret;
        // If no formats, break here
        if (m_AllFmts.isEmpty()) {
            ALOGE("No video formats available");
            return -1;
        }
        // Try to get the closest match ...
        SurfaceDesc closest;
        int closestDArea = -1;
        int closestDFps = -1;
        unsigned int i;
        int area = width * height;
        for (i = 0; i < m_AllFmts.size(); i++) {
            SurfaceDesc sd = m_AllFmts[i];
            // Always choose a bigger or equal surface
            if (sd.getWidth() >= width &&
                sd.getHeight() >= height) {
                int difArea = sd.getArea() - area;
                int difFps = my_abs(sd.getFps() - fps);
                ALOGD("Trying format: (%d x %d), Fps: %d [difArea:%d, difFps:%d, cDifArea:%d, cDifFps:%d]",sd.getWidth(),sd.getHeight(),sd.getFps(), difArea, difFps, closestDArea, closestDFps);
                // Among the supported resolutions, find one whose width and
                // height are both >= the requested size with the smallest
                // area difference; if several qualify, pick the one with the
                // smallest fps difference. The winning SurfaceDesc is stored
                // in the variable closest.
                if (closestDArea < 0 ||
                    difArea < closestDArea ||
                    (difArea == closestDArea && difFps < closestDFps)) {
                    // Store approximation
                    closestDArea = difArea;
                    closestDFps = difFps;
                    // And the new surface descriptor
                    closest = sd;
                }
            }
        }
        // No available resolution is at least as large as the requested one
        if (closestDArea == -1) {
            ALOGE("Size not available: (%d x %d)",width,height);
            return -1;
        }
        // closest is now the SurfaceDesc nearest to the requested parameters
        ALOGD("Selected format: (%d x %d), Fps: %d",closest.getWidth(),closest.getHeight(),closest.getFps());
        // If closest does not exactly match the requested size, cropping is needed
        // Check if we will have to crop the captured image
        bool crop = width != closest.getWidth() || height != closest.getHeight();
        // Iterate over the supported pixel formats
        // Iterate through pixel formats from best to worst
        ret = -1;
        for (i=0; i < (sizeof(pixFmtsOrder) / sizeof(pixFmtsOrder[0])); i++) {
            // If we will need to crop, make sure to only select formats we can crop...
            // Enter only when no cropping is needed, or when the candidate format supports it
            if (!crop || pixFmtsOrder[i].allowscrop) {
                memset(&videoIn->format,0,sizeof(videoIn->format));
                videoIn->format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
                videoIn->format.fmt.pix.width = closest.getWidth();
                videoIn->format.fmt.pix.height = closest.getHeight();
                videoIn->format.fmt.pix.pixelformat = pixFmtsOrder[i].fmt;
                // Probe the pixel format with VIDIOC_TRY_FMT
                ret = ioctl(fd, VIDIOC_TRY_FMT, &videoIn->format);
                // Check that the call succeeded and the driver kept closest's size
                if (ret >= 0 &&
                    videoIn->format.fmt.pix.width == (uint)closest.getWidth() &&
                    videoIn->format.fmt.pix.height == (uint)closest.getHeight()) {
                    break;
                }
            }
        }
        if (ret < 0) {
            ALOGE("Open: VIDIOC_TRY_FMT Failed: %s", strerror(errno));
            return ret;
        }
        // Actually set the pixel format
        /* Set the format */
        memset(&videoIn->format,0,sizeof(videoIn->format));
        videoIn->format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        videoIn->format.fmt.pix.width = closest.getWidth();
        videoIn->format.fmt.pix.height = closest.getHeight();
        videoIn->format.fmt.pix.pixelformat = pixFmtsOrder[i].fmt;
        ret = ioctl(fd, VIDIOC_S_FMT, &videoIn->format);
        if (ret < 0) {
            ALOGE("Open: VIDIOC_S_FMT Failed: %s", strerror(errno));
            return ret;
        }
        // Query the image format actually in use
        /* Query for the effective video format used */
        memset(&videoIn->format,0,sizeof(videoIn->format));
        videoIn->format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        ret = ioctl(fd, VIDIOC_G_FMT, &videoIn->format);
        if (ret < 0) {
            ALOGE("Open: VIDIOC_G_FMT Failed: %s", strerror(errno));
            return ret;
        }
        /* Note VIDIOC_S_FMT may change width and height. */
        /* Buggy driver paranoia. */
        // Prepare the parameters used for cropping
        unsigned int min = videoIn->format.fmt.pix.width * 2;
        if (videoIn->format.fmt.pix.bytesperline < min)
            videoIn->format.fmt.pix.bytesperline = min;
        min = videoIn->format.fmt.pix.bytesperline * videoIn->format.fmt.pix.height;
        if (videoIn->format.fmt.pix.sizeimage < min)
            videoIn->format.fmt.pix.sizeimage = min;
        /* Store the pixel formats we will use */
        videoIn->outWidth         = width;
        videoIn->outHeight        = height;
        videoIn->outFrameSize     = width * height << 1; // Calculate the expected output framesize in YUYV
        videoIn->capBytesPerPixel = pixFmtsOrder[i].bpp;
        // Compute the crop margins
        /* Now calculate cropping margins, if needed, rounding to even */
        int startX = ((closest.getWidth() - width) >> 1) & (-2);
        int startY = ((closest.getHeight() - height) >> 1) & (-2);
        /* Avoid crashing if the mode found is smaller than the requested */
        if (startX < 0) {
            videoIn->outWidth += startX;
            startX = 0;
        }
        if (startY < 0) {
            videoIn->outHeight += startY;
            startY = 0;
        }
        /* Calculate the starting offset into each captured frame */
        videoIn->capCropOffset = (startX * videoIn->capBytesPerPixel) +
                                 (videoIn->format.fmt.pix.bytesperline * startY);
        ALOGI("Cropping from origin: %dx%d - size: %dx%d  (offset:%d)",
              startX,startY,
              videoIn->outWidth,videoIn->outHeight,
              videoIn->capCropOffset);
        /* sets video device frame rate */
        memset(&videoIn->params,0,sizeof(videoIn->params));
        videoIn->params.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        videoIn->params.parm.capture.timeperframe.numerator = 1;
        videoIn->params.parm.capture.timeperframe.denominator = closest.getFps();
        // Set the fps
        /* Set the framerate. If it fails, it wont be fatal */
        if (ioctl(fd,VIDIOC_S_PARM,&videoIn->params) < 0) {
            ALOGE("VIDIOC_S_PARM error: Unable to set %d fps", closest.getFps());
        }
        /* Gets video device defined frame rate (not real - consider it a maximum value) */
        if (ioctl(fd,VIDIOC_G_PARM,&videoIn->params) < 0) {
            ALOGE("VIDIOC_G_PARM - Unable to get timeperframe");
        }
        ALOGI("Actual format: (%d x %d), Fps: %d, pixfmt: '%c%c%c%c', bytesperline: %d",
              videoIn->format.fmt.pix.width,
              videoIn->format.fmt.pix.height,
              videoIn->params.parm.capture.timeperframe.denominator,
              videoIn->format.fmt.pix.pixelformat & 0xFF, (videoIn->format.fmt.pix.pixelformat >> 8) & 0xFF,
              (videoIn->format.fmt.pix.pixelformat >> 16) & 0xFF, (videoIn->format.fmt.pix.pixelformat >> 24) & 0xFF,
              videoIn->format.fmt.pix.bytesperline);
        /* Configure JPEG quality, if dealing with those formats */
        if (videoIn->format.fmt.pix.pixelformat == V4L2_PIX_FMT_JPEG ||
            videoIn->format.fmt.pix.pixelformat == V4L2_PIX_FMT_MJPEG) {
            // Set the JPEG quality to 100%
            /* Get the compression format */
            ioctl(fd,VIDIOC_G_JPEGCOMP, &videoIn->jpegcomp);
            /* Set to maximum */
            videoIn->jpegcomp.quality = 100;
            /* Try to set it */
            if(ioctl(fd,VIDIOC_S_JPEGCOMP, &videoIn->jpegcomp) >= 0)
            {
                ALOGE("VIDIOC_S_COMP:");
                if(errno == EINVAL)
                {
                    videoIn->jpegcomp.quality = -1;   //not supported
                    ALOGE("   compression control not supported\n");
                }
            }
            /* gets video stream jpeg compression parameters */
            if(ioctl(fd,VIDIOC_G_JPEGCOMP, &videoIn->jpegcomp) >= 0) {
                ALOGD("VIDIOC_G_COMP:\n");
                ALOGD("    quality:      %i\n", videoIn->jpegcomp.quality);
                ALOGD("    APPn:         %i\n", videoIn->jpegcomp.APPn);
                ALOGD("    APP_len:      %i\n", videoIn->jpegcomp.APP_len);
                ALOGD("    APP_data:     %s\n", videoIn->jpegcomp.APP_data);
                ALOGD("    COM_len:      %i\n", videoIn->jpegcomp.COM_len);
                ALOGD("    COM_data:     %s\n", videoIn->jpegcomp.COM_data);
                ALOGD("    jpeg_markers: 0x%x\n", videoIn->jpegcomp.jpeg_markers);
            } else {
                ALOGE("VIDIOC_G_COMP:");
                if(errno == EINVAL) {
                    videoIn->jpegcomp.quality = -1;   //not supported
                    ALOGE("   compression control not supported\n");
                }
            }
        }
        /* Check if camera can handle NB_BUFFER buffers */
        memset(&videoIn->rb,0,sizeof(videoIn->rb));
        videoIn->rb.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        videoIn->rb.memory = V4L2_MEMORY_MMAP;
        videoIn->rb.count = NB_BUFFER; // defined in V4L2Camera.h as 4
        // Ask the device to allocate the buffers
        ret = ioctl(fd, VIDIOC_REQBUFS, &videoIn->rb);
        if (ret < 0) {
            ALOGE("Init: VIDIOC_REQBUFS failed: %s", strerror(errno));
            return ret;
        }
        for (int i = 0; i < NB_BUFFER; i++) {
            memset (&videoIn->buf, 0, sizeof (struct v4l2_buffer));
            videoIn->buf.index = i;
            videoIn->buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            videoIn->buf.memory = V4L2_MEMORY_MMAP;
            ret = ioctl (fd, VIDIOC_QUERYBUF, &videoIn->buf);
            if (ret < 0) {
                ALOGE("Init: Unable to query buffer (%s)", strerror(errno));
                return ret;
            }
            // Map the memory the kernel device just allocated into user space
            videoIn->mem[i] = mmap (0,
                                    videoIn->buf.length,
                                    PROT_READ | PROT_WRITE,
                                    MAP_SHARED,
                                    fd,
                                    videoIn->buf.m.offset);
            if (videoIn->mem[i] == MAP_FAILED) {
                ALOGE("Init: Unable to map buffer (%s)", strerror(errno));
                return -1;
            }
            ret = ioctl(fd, VIDIOC_QBUF, &videoIn->buf);
            if (ret < 0) {
                ALOGE("Init: VIDIOC_QBUF Failed");
                return -1;
            }
            nQueued++;
        }
        // Reserve temporary buffers, if they will be needed
        size_t tmpbuf_size=0;
        switch (videoIn->format.fmt.pix.pixelformat)
        {
            case V4L2_PIX_FMT_JPEG:
            case V4L2_PIX_FMT_MJPEG:
            case V4L2_PIX_FMT_UYVY:
            case V4L2_PIX_FMT_YVYU:
            case V4L2_PIX_FMT_YYUV:
            case V4L2_PIX_FMT_YUV420: // only needs 3/2 bytes per pixel but we alloc 2 bytes per pixel
            case V4L2_PIX_FMT_YVU420: // only needs 3/2 bytes per pixel but we alloc 2 bytes per pixel
            case V4L2_PIX_FMT_Y41P:   // only needs 3/2 bytes per pixel but we alloc 2 bytes per pixel
            case V4L2_PIX_FMT_NV12:
            case V4L2_PIX_FMT_NV21:
            case V4L2_PIX_FMT_NV16:
            case V4L2_PIX_FMT_NV61:
            case V4L2_PIX_FMT_SPCA501:
            case V4L2_PIX_FMT_SPCA505:
            case V4L2_PIX_FMT_SPCA508:
            case V4L2_PIX_FMT_GREY:
            case V4L2_PIX_FMT_Y16:
            case V4L2_PIX_FMT_YUYV:
                // YUYV doesn't need a temp buffer but we will set it if/when
                //  video processing disable control is checked (bayer processing).
                //  (logitech cameras only)
                break;
            case V4L2_PIX_FMT_SGBRG8: //0
            case V4L2_PIX_FMT_SGRBG8: //1
            case V4L2_PIX_FMT_SBGGR8: //2
            case V4L2_PIX_FMT_SRGGB8: //3
                // Raw 8 bit bayer
                // when grabbing use:
                //    bayer_to_rgb24(bayer_data, RGB24_data, width, height, 0..3)
                //    rgb2yuyv(RGB24_data, pFrameBuffer, width, height)
                // alloc a temp buffer for converting to YUYV
                // rgb buffer for decoding bayer data
                tmpbuf_size = videoIn->format.fmt.pix.width * videoIn->format.fmt.pix.height * 3;
                if (videoIn->tmpBuffer)
                    free(videoIn->tmpBuffer);
                videoIn->tmpBuffer = (uint8_t*)calloc(1, tmpbuf_size);
                if (!videoIn->tmpBuffer) {
                    ALOGE("couldn't calloc %lu bytes of memory for frame buffer\n",
                          (unsigned long) tmpbuf_size);
                    return -ENOMEM;
                }
                break;
            case V4L2_PIX_FMT_RGB24: //rgb or bgr (8-8-8)
            case V4L2_PIX_FMT_BGR24:
                break;
            default:
                ALOGE("Should never arrive (1)- exit fatal !!\n");
                return -1;
        }
        return 0;
    }

    In summary, Init does the following:

    1. From the resolutions and frame rates the device supports, find the one closest to (and at least as large as) what the caller requested, and configure the camera with it. Because the actual resolution may be larger than the requested one, a crop offset is also computed so the excess can be trimmed off each frame later.
    2. If the camera produces JPEG or MJPEG, set the picture quality to 100%.
    3. Ask the device to allocate buffers, map them into user space at videoIn->mem, and enqueue them on the device's input queue. At this point the camera is ready to capture and is only waiting for the streamon command.
  • StartStreaming

    StartStreaming is simple: besides issuing the STREAMON ioctl it only sets videoIn's isStreaming flag:

    hardware/libcamera/V4L2Camera.cpp

    int V4L2Camera::StartStreaming ()
    {
        enum v4l2_buf_type type;
        int ret;
        if (!videoIn->isStreaming) {
            type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            ret = ioctl (fd, VIDIOC_STREAMON, &type);
            if (ret < 0) {
                ALOGE("StartStreaming: Unable to start capture: %s", strerror(errno));
                return ret;
            }
            videoIn->isStreaming = true;
        }
        return 0;
    }

    After this call the camera is up and running.

  • GrabRawFrame

    After StartStreaming, the captured frames still have to be fetched, which is what GrabRawFrame is for.

    hardware/libcamera/V4L2Camera.cpp

    void V4L2Camera::GrabRawFrame (void *frameBuffer, int maxSize)
    {
        LOG_FRAME("V4L2Camera::GrabRawFrame: frameBuffer:%p, len:%d",frameBuffer,maxSize);
        int ret;
        /* DQ */
        // Dequeue one frame of data with VIDIOC_DQBUF
        memset(&videoIn->buf,0,sizeof(videoIn->buf));
        videoIn->buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        videoIn->buf.memory = V4L2_MEMORY_MMAP;
        ret = ioctl(fd, VIDIOC_DQBUF, &videoIn->buf);
        if (ret < 0) {
            ALOGE("GrabPreviewFrame: VIDIOC_DQBUF Failed");
            return;
        }
        nDequeued++;
        // Calculate the stride of the output image (YUYV) in bytes
        int strideOut = videoIn->outWidth << 1;
        // And the pointer to the start of the image
        // Init computed the crop offset; adding it here crops the frame to the
        // size the caller passed to Init. src is the start address of the image.
        uint8_t* src = (uint8_t*)videoIn->mem[videoIn->buf.index] + videoIn->capCropOffset;
        LOG_FRAME("V4L2Camera::GrabRawFrame - Got Raw frame (%dx%d) (buf:%d@0x%p, len:%d)",videoIn->format.fmt.pix.width,videoIn->format.fmt.pix.height,videoIn->buf.index,src,videoIn->buf.bytesused);
        /* Avoid crashing! - Make sure there is enough room in the output buffer! */
        if (maxSize < videoIn->outFrameSize) {
            ALOGE("V4L2Camera::GrabRawFrame: Insufficient space in output buffer: Required: %d, Got %d - DROPPING FRAME",videoIn->outFrameSize,maxSize);
        } else {
            // Convert according to the pixel format; the image data ends up
            // in the output parameter frameBuffer.
            switch (videoIn->format.fmt.pix.pixelformat)
            {
                case V4L2_PIX_FMT_JPEG:
                case V4L2_PIX_FMT_MJPEG:
                    if(videoIn->buf.bytesused <= HEADERFRAME1) {
                        // Prevent crash on empty image
                        ALOGE("Ignoring empty buffer ...\n");
                        break;
                    }
                    if (jpeg_decode((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight) < 0) {
                        ALOGE("jpeg decode errors\n");
                        break;
                    }
                    break;
                case V4L2_PIX_FMT_UYVY:
                    uyvy_to_yuyv((uint8_t*)frameBuffer, strideOut,
                                 src, videoIn->format.fmt.pix.bytesperline, videoIn->outWidth, videoIn->outHeight);
                    break;
                case V4L2_PIX_FMT_YVYU:
                    yvyu_to_yuyv((uint8_t*)frameBuffer, strideOut,
                                 src, videoIn->format.fmt.pix.bytesperline, videoIn->outWidth, videoIn->outHeight);
                    break;
                case V4L2_PIX_FMT_YYUV:
                    yyuv_to_yuyv((uint8_t*)frameBuffer, strideOut,
                                 src, videoIn->format.fmt.pix.bytesperline, videoIn->outWidth, videoIn->outHeight);
                    break;
                case V4L2_PIX_FMT_YUV420:
                    yuv420_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
                case V4L2_PIX_FMT_YVU420:
                    yvu420_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
                case V4L2_PIX_FMT_NV12:
                    nv12_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
                case V4L2_PIX_FMT_NV21:
                    nv21_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
                case V4L2_PIX_FMT_NV16:
                    nv16_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
                case V4L2_PIX_FMT_NV61:
                    nv61_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
                case V4L2_PIX_FMT_Y41P:
                    y41p_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
                case V4L2_PIX_FMT_GREY:
                    grey_to_yuyv((uint8_t*)frameBuffer, strideOut,
                                 src, videoIn->format.fmt.pix.bytesperline, videoIn->outWidth, videoIn->outHeight);
                    break;
                case V4L2_PIX_FMT_Y16:
                    y16_to_yuyv((uint8_t*)frameBuffer, strideOut,
                                src, videoIn->format.fmt.pix.bytesperline, videoIn->outWidth, videoIn->outHeight);
                    break;
                case V4L2_PIX_FMT_SPCA501:
                    s501_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
                case V4L2_PIX_FMT_SPCA505:
                    s505_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
                case V4L2_PIX_FMT_SPCA508:
                    s508_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
                case V4L2_PIX_FMT_YUYV:
                    {
                        int h;
                        uint8_t* pdst = (uint8_t*)frameBuffer;
                        uint8_t* psrc = src;
                        int ss = videoIn->outWidth << 1;
                        for (h = 0; h < videoIn->outHeight; h++) {
                            memcpy(pdst,psrc,ss);
                            pdst += strideOut;
                            psrc += videoIn->format.fmt.pix.bytesperline;
                        }
                    }
                    break;
                case V4L2_PIX_FMT_SGBRG8: //0
                    bayer_to_rgb24 (src,(uint8_t*) videoIn->tmpBuffer, videoIn->outWidth, videoIn->outHeight, 0);
                    rgb_to_yuyv ((uint8_t*) frameBuffer, strideOut,
                                 (uint8_t*)videoIn->tmpBuffer, videoIn->outWidth*3, videoIn->outWidth, videoIn->outHeight);
                    break;
                case V4L2_PIX_FMT_SGRBG8: //1
                    bayer_to_rgb24 (src,(uint8_t*) videoIn->tmpBuffer, videoIn->outWidth, videoIn->outHeight, 1);
                    rgb_to_yuyv ((uint8_t*) frameBuffer, strideOut,
                                 (uint8_t*)videoIn->tmpBuffer, videoIn->outWidth*3, videoIn->outWidth, videoIn->outHeight);
                    break;
                case V4L2_PIX_FMT_SBGGR8: //2
                    bayer_to_rgb24 (src,(uint8_t*) videoIn->tmpBuffer, videoIn->outWidth, videoIn->outHeight, 2);
                    rgb_to_yuyv ((uint8_t*) frameBuffer, strideOut,
                                 (uint8_t*)videoIn->tmpBuffer, videoIn->outWidth*3, videoIn->outWidth, videoIn->outHeight);
                    break;
                case V4L2_PIX_FMT_SRGGB8: //3
                    bayer_to_rgb24 (src,(uint8_t*) videoIn->tmpBuffer, videoIn->outWidth, videoIn->outHeight, 3);
                    rgb_to_yuyv ((uint8_t*) frameBuffer, strideOut,
                                 (uint8_t*)videoIn->tmpBuffer, videoIn->outWidth*3, videoIn->outWidth, videoIn->outHeight);
                    break;
                case V4L2_PIX_FMT_RGB24:
                    rgb_to_yuyv((uint8_t*) frameBuffer, strideOut,
                                src, videoIn->format.fmt.pix.bytesperline, videoIn->outWidth, videoIn->outHeight);
                    break;
                case V4L2_PIX_FMT_BGR24:
                    bgr_to_yuyv((uint8_t*) frameBuffer, strideOut,
                                src, videoIn->format.fmt.pix.bytesperline, videoIn->outWidth, videoIn->outHeight);
                    break;
                default:
                    ALOGE("error grabbing: unknown format: %i\n", videoIn->format.fmt.pix.pixelformat);
                    break;
            }
            LOG_FRAME("V4L2Camera::GrabRawFrame - Copied frame to destination 0x%p",frameBuffer);
        }
        // Re-queue the buffer once it has been used, so it can be reused
        /* And Queue the buffer again */
        ret = ioctl(fd, VIDIOC_QBUF, &videoIn->buf);
        if (ret < 0) {
            ALOGE("GrabPreviewFrame: VIDIOC_QBUF Failed");
            return;
        }
        nQueued++;
        LOG_FRAME("V4L2Camera::GrabRawFrame - Queued buffer");
    }

4.2 Framework

Java SDK Layer

The hardware analysis could go bottom-up: first V4L2Camera, then CameraHardware, then CameraFactory. Reading the framework code bottom-up would be far too much, so we start instead from the camera part of the SDK. The HAL and the framework are both C++ and implement Android's lower layers, but apps are actually developed against the Java SDK. Java can call C++ through JNI, and we will now see how the SDK uses that.

First consider a snippet that starts a camera preview:

Camera cam = Camera.open();           // obtain a camera instance
cam.setPreviewDisplay(surfaceHolder); // set the preview window
cam.startPreview();                   // start the preview

The first line opens the default camera; Camera.open(1) would open another one. These functions are defined in the SDK in Camera.java; open looks like this:

frameworks/base/core/java/android/hardware/Camera.java

public static Camera open(int cameraId) {
    return new Camera(cameraId);
}

public static Camera open() {
    int numberOfCameras = getNumberOfCameras();
    CameraInfo cameraInfo = new CameraInfo();
    for (int i = 0; i < numberOfCameras; i++) {
        getCameraInfo(i, cameraInfo);
        if (cameraInfo.facing == CameraInfo.CAMERA_FACING_BACK) {
            return new Camera(i);
        }
    }
    return null;
}

So a parameterless open actually opens the first back-facing camera; either way, open returns a Camera object. Here we meet a familiar name, getNumberOfCameras. Recall that camera_module_t in the HAL carries, besides the mandatory hw_module_t, the two function pointers get_number_of_cameras and get_camera_info; presumably this getNumberOfCameras ultimately calls get_number_of_cameras. Let's look at the function:

/**
 * Returns the number of physical cameras available on this device.
 */
public native static int getNumberOfCameras();

In Camera.java this function is only a declaration marked native, so we go looking for the corresponding JNI definition.

frameworks/base/core/jni/android_hardware_Camera.cpp

static jint android_hardware_Camera_getNumberOfCameras(JNIEnv *env, jobject thiz)
{
    return Camera::getNumberOfCameras();
}

Next we find this C++ Camera class, which already belongs to the Android framework; getNumberOfCameras is actually defined in its parent class, CameraBase:

frameworks/av/camera/CameraBase.cpp

template <typename TCam, typename TCamTraits>
int CameraBase<TCam, TCamTraits>::getNumberOfCameras() {
    const sp<ICameraService> cs = getCameraService();
    if (!cs.get()) {
        // as required by the public Java APIs
        return 0;
    }
    return cs->getNumberOfCameras();
}

This simply fetches the CameraService and calls its getNumberOfCameras; getCameraService itself looks like this:

frameworks/av/camera/CameraBase.cpp

template <typename TCam, typename TCamTraits>
const sp<ICameraService>& CameraBase<TCam, TCamTraits>::getCameraService()
{
    Mutex::Autolock _l(gLock);
    if (gCameraService.get() == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            binder = sm->getService(String16(kCameraServiceName));
            if (binder != 0) {
                break;
            }
            ALOGW("CameraService not published, waiting...");
            usleep(kCameraServicePollDelay);
        } while(true);
        if (gDeathNotifier == NULL) {
            gDeathNotifier = new DeathNotifier();
        }
        binder->linkToDeath(gDeathNotifier);
        gCameraService = interface_cast<ICameraService>(binder);
    }
    ALOGE_IF(gCameraService == 0, "no CameraService!?");
    return gCameraService;
}

gCameraService is a singleton of type sp<ICameraService>: it is initialized on the first call to this function and simply returned on every later one. During initialization, defaultServiceManager yields a service manager sm, and sm->getService fetches the CameraService. defaultServiceManager lives in frameworks/native/libs/binder/IServiceManager.cpp and belongs to binder IPC, which is beyond the scope of this article; I may cover it in a separate post.

After open come setPreviewDisplay and startPreview, both likewise native and similar in implementation; below we look only at startPreview:

frameworks/base/core/jni/android_hardware_Camera.cpp

static void android_hardware_Camera_startPreview(JNIEnv *env, jobject thiz)
{
    ALOGV("startPreview");
    sp<Camera> camera = get_native_camera(env, thiz, NULL);
    if (camera == 0) return;

    if (camera->startPreview() != NO_ERROR) {
        jniThrowRuntimeException(env, "startPreview failed");
        return;
    }
}

This code first obtains a Camera object and then calls startPreview on it; get_native_camera is implemented as follows:

sp<Camera> get_native_camera(JNIEnv *env, jobject thiz, JNICameraContext** pContext)
{
    sp<Camera> camera;
    Mutex::Autolock _l(sLock);
    JNICameraContext* context = reinterpret_cast<JNICameraContext*>(env->GetLongField(thiz, fields.context));
    if (context != NULL) {
        camera = context->getCamera();
    }
    ALOGV("get_native_camera: context=%p, camera=%p", context, camera.get());
    if (camera == 0) {
        jniThrowRuntimeException(env,
                "Camera is being used after Camera.release() was called");
    }
    if (pContext != NULL) *pContext = context;
    return camera;
}

The function reads a pointer to a JNICameraContext object via env->GetLongField and then obtains the Camera object from it with getCamera; that pointer was stored earlier by native_setup:

 1: static jint android_hardware_Camera_native_setup(JNIEnv *env, jobject thiz,
 2:     jobject weak_this, jint cameraId, jint halVersion, jstring clientPackageName)
 3: {
 4:     // Convert jstring to String16
 5:     const char16_t *rawClientName = env->GetStringChars(clientPackageName, NULL);
 6:     jsize rawClientNameLen = env->GetStringLength(clientPackageName);
 7:     String16 clientName(rawClientName, rawClientNameLen);
 8:     env->ReleaseStringChars(clientPackageName, rawClientName);
 9:
10:     sp<Camera> camera;
11:     if (halVersion == CAMERA_HAL_API_VERSION_NORMAL_CONNECT) {
12:         // Default path: hal version is don't care, do normal camera connect.
13:         camera = Camera::connect(cameraId, clientName,
14:                 Camera::USE_CALLING_UID);
15:     } else {
16:         jint status = Camera::connectLegacy(cameraId, halVersion, clientName,
17:                 Camera::USE_CALLING_UID, camera);
18:         if (status != NO_ERROR) {
19:             return status;
20:         }
21:     }
22:
23:     if (camera == NULL) {
24:         return -EACCES;
25:     }
26:
27:     // make sure camera hardware is alive
28:     if (camera->getStatus() != NO_ERROR) {
29:         return NO_INIT;
30:     }
31:
32:     jclass clazz = env->GetObjectClass(thiz);
33:     if (clazz == NULL) {
34:         // This should never happen
35:         jniThrowRuntimeException(env, "Can't find android/hardware/Camera");
36:         return INVALID_OPERATION;
37:     }
38:
39:     // We use a weak reference so the Camera object can be garbage collected.
40:     // The reference is only used as a proxy for callbacks.
41:     sp<JNICameraContext> context = new JNICameraContext(env, weak_this, clazz, camera);
42:     context->incStrong((void*)android_hardware_Camera_native_setup);
43:     camera->setListener(context);
44:
45:     // save context in opaque field
46:     env->SetLongField(thiz, fields.context, (jlong)context.get());
47:     return NO_ERROR;
48: }

Note line 13: Camera::connect returns a Camera object, and here we finally cross from the SDK layer into the framework layer.

class Camera

Under frameworks/av/camera/ there is a Camera.cpp defining a Camera class. As the previous subsection showed, the SDK talks to this class directly through JNI and thereby enters the framework layer; this class can be seen as the doorway into the framework.

Camera multiply inherits from CameraBase and BnCameraClient, and CameraBase is a template:

frameworks/av/include/camera/CameraBase.h

template <typename TCam>
struct CameraTraits {
};

template <typename TCam, typename TCamTraits = CameraTraits<TCam> >
class CameraBase : public IBinder::DeathRecipient
{
public:
    typedef typename TCamTraits::TCamListener       TCamListener;
    typedef typename TCamTraits::TCamUser           TCamUser;
    typedef typename TCamTraits::TCamCallbacks      TCamCallbacks;
    typedef typename TCamTraits::TCamConnectService TCamConnectService;
};

Besides templates this also uses explicit template specialization. When Camera actually inherits CameraBase, TCam is the Camera type and TCamTraits is CameraTraits<Camera>, but the specialization used is not the empty CameraTraits in CameraBase.h; it is defined in Camera.h. Otherwise CameraTraits would be empty and there would be no TCamTraits::TCamListener and friends:

frameworks/av/include/camera/Camera.h

template <>
struct CameraTraits<Camera>
{
    typedef CameraListener        TCamListener;
    typedef ICamera               TCamUser;
    typedef ICameraClient         TCamCallbacks;
    typedef status_t (ICameraService::*TCamConnectService)(const sp<ICameraClient>&,
                                                           int, const String16&, int,
                                                           /*out*/
                                                           sp<ICamera>&);
    static TCamConnectService    fnConnectService;
};
mediaserver

mediaserver is a standalone process with its own main function; the system starts it as a daemon at boot. It manages the multimedia services Android apps need, such as audio, video playback, and the camera, and communicates with apps through Android's binder mechanism.

frameworks/av/media/mediaserver/main_mediaserver.cpp

int main(int argc __unused, char** argv)
{
    signal(SIGPIPE, SIG_IGN);
    char value[PROPERTY_VALUE_MAX];
    bool doLog = (property_get("ro.test_harness", value, "0") > 0) && (atoi(value) == 1);
    pid_t childPid;
    if (doLog && (childPid = fork()) != 0) {
        // ... omitted
    } else {
        // all other services
        if (doLog) {
            prctl(PR_SET_PDEATHSIG, SIGKILL);   // if parent media.log dies before me, kill me also
            setpgid(0, 0);                      // but if I die first, don't kill my parent
        }
        sp<ProcessState> proc(ProcessState::self());
        sp<IServiceManager> sm = defaultServiceManager();
        ALOGI("ServiceManager: %p", sm.get());
        AudioFlinger::instantiate();
        MediaPlayerService::instantiate();
        CameraService::instantiate();
        AudioPolicyService::instantiate();
        SoundTriggerHwService::instantiate();
        registerExtensions();
        ProcessState::self()->startThreadPool();
        IPCThreadState::self()->joinThreadPool();
    }
}

As you can see, main calls instantiate on each of the major services; the line of interest here is CameraService::instantiate(), which brings up the camera service. So next we look at the CameraService class. Its declaration alone runs to roughly five hundred lines, with inner classes such as BasicClient and Client, yet it contains no instantiate function. Since CameraService multiply inherits from four classes, BinderService<CameraService>, BnCameraService, DeathRecipient, and camera_module_callbacks_t, instantiate must be defined in one of them, and indeed it turns up in BinderService.h:

frameworks/native/include/binder/BinderService.h

template<typename SERVICE>
class BinderService
{
public:
    static status_t publish(bool allowIsolated = false) {
        sp<IServiceManager> sm(defaultServiceManager());
        return sm->addService(
                String16(SERVICE::getServiceName()),
                new SERVICE(), allowIsolated);
    }

    static void publishAndJoinThreadPool(bool allowIsolated = false) {
        publish(allowIsolated);
        joinThreadPool();
    }

    static void instantiate() { publish(); }

    static status_t shutdown() { return NO_ERROR; }

private:
    static void joinThreadPool() {
        sp<ProcessState> ps(ProcessState::self());
        ps->startThreadPool();
        ps->giveThreadPoolName();
        IPCThreadState::self()->joinThreadPool();
    }
};

instantiate calls publish, which first obtains the pointer to the globally unique IServiceManager instance and registers a new service with it. Given that CameraService inherits BinderService<CameraService>, the registration amounts to:

sm->addService(
        String16(CameraService::getServiceName()),
        new CameraService(), allowIsolated);

So far main has merely registered the CameraService; when does it actually get used? That is what the last two lines of main are for:

ProcessState::self()->startThreadPool();
IPCThreadState::self()->joinThreadPool();

From here on we enter binder IPC territory, which is beyond the scope of this article; see my other blog post.

android_camera_framework_uml.png

CameraHardwareInterface
CameraService
  • BasicClient
  • Client