
camera2 (api2): the open-and-preview flow (part 2) — camera onResultAvailable

The camera usage flow is: openCamera() -> applySettings() -> setPreviewTexture() -> startPreview() -> autoFocus() -> takePicture().
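(That sequence is the CameraAgent wrapper API used by the AOSP Camera app. For orientation, here is a minimal sketch of the corresponding open step using the public android.hardware.camera2 API; the class name CameraOpener, the field mCameraDevice and the background handler are hypothetical, and the CAMERA permission is assumed to be already granted.)

import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CameraManager;
import android.os.Handler;

public class CameraOpener {
    private CameraDevice mCameraDevice;

    // Opens the camera with the given id; onOpened fires once the framework,
    // CameraService and the HAL have finished the open sequence analysed below.
    public void open(Context context, String cameraId, Handler backgroundHandler)
            throws CameraAccessException {
        CameraManager manager =
                (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        manager.openCamera(cameraId, new CameraDevice.StateCallback() {
            @Override
            public void onOpened(CameraDevice device) {
                // The device is ready; the app can now go on to configure a capture session.
                mCameraDevice = device;
            }

            @Override
            public void onDisconnected(CameraDevice device) {
                device.close();
            }

            @Override
            public void onError(CameraDevice device, int error) {
                device.close();
            }
        }, backgroundHandler);
    }
}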

The rough flow of opening the camera device:

1. Instantiate the CameraModule object; mCurrentModule holds the current module and defaults to PhotoModule.

2. Show the first-run dialog (FirstRunDialog); after the dialog finishes normally, resume is executed.

3. Based on the type of mCurrentModule, what actually runs is resume() in PhotoModule.java. It indirectly calls requestCamera() in CameraController, the implementation of the CameraProvider interface. If API2 is currently in use, openCamera() is called on the AndroidCamera2AgentImpl instance, which actually executes openCamera() of the parent class CameraAgent.java. The camera is then opened asynchronously: a CameraActions.OPEN_CAMERA message is posted, and handling that message calls openCamera() in CameraManager.java.

4. CameraManager.java's openCamera() starts the real open. At the same time a CameraDevice.StateCallback, mCameraDeviceStateCallback (defined in AndroidCamera2AgentImpl.java), is instantiated so that its onOpened() runs once the camera has been opened. Inside that onOpened() callback, the CameraOpenCallback openCallback's onCameraOpened() is invoked; its implementation lives in CameraController.java. From CameraController, onCameraOpened() passes the "camera opened" event to CameraActivity and further on to onCameraAvailable() in PhotoModule.java, which starts the preview.

A CameraDeviceImpl.java instance is created along the way.

5. openCameraDeviceUserAsync() in CameraManager.java opens a connection to the camera device: it first obtains the CameraService handle and then calls cameraService's connectDevice() to establish the connection down to the camera HAL.

At the end of openCameraDeviceUserAsync(), deviceImpl.setRemoteDevice(cameraUser) is called with the opened camera client as its argument. Reaching this point means the camera was opened successfully, so onOpened() of CameraDevice.StateCallback and onOpened() of StateCallbackKK are executed.

6. connectDevice() in CameraService creates a CameraDeviceClient instance through makeClient(); this instance corresponds to the BasicClient type inside CameraService.

The inheritance hierarchy of CameraDeviceClient:

class CameraDeviceClient :
        public Camera2ClientBase<CameraDeviceClientBase>,
        public camera2::FrameProcessorBase::FilteredListener

It inherits Camera2ClientBase<CameraDeviceClientBase> and implements the listener FrameProcessorBase::FilteredListener.

Camera2ClientBase is a template class, and its TClientBase parameter here is CameraDeviceClientBase.

CameraDeviceClientBase in turn inherits CameraService::BasicClient and camera2::BnCameraDeviceUser; inheriting BnCameraDeviceUser is what gives it cross-process (Binder) communication capability.

So when a CameraDeviceClient is instantiated, the constructors of this whole chain of classes are invoked.

7. The Camera2ClientBase constructor creates the Camera3Device instance, sp<CameraDeviceBase> mDevice.

After CameraDeviceClient has been instantiated, its initialize() method is executed. On one hand it calls Camera2ClientBase's initializeImpl(), which performs the permission check; that actually ends up in CameraService::BasicClient::startCameraOps().

On the other hand it calls Camera3Device's initialize(), which opens the HAL device and runs the HAL-level initialization. During this process the Camera3BufferManager is created and the RequestThread is started; both capture requests and preview requests are processed in that thread's threadLoop().

In CameraDeviceClient's initializeImpl(), a FrameProcessorBase instance is created. This is the output-frame metadata processing thread; whenever the device has new frames available, its onResultAvailable() method is called.

8. After the camera has been opened successfully, the settings are applied and the preview texture is set, and only then is the preview started. While setting the preview texture, a CameraCaptureSession is created; this is the foundation for the preview and capture requests issued later. Once the CameraCaptureSession has been created successfully, onConfigured() of CameraCaptureSession.StateCallback (the instance lives in AndroidCamera2AgentImpl.java) is called back with the created session object, which preview and capture then use, and the camera state is changed to AndroidCamera2StateHolder.CAMERA_PREVIEW_READY, meaning the preview can start (a minimal application-side sketch of this step follows right after this list).

(The above is the call flow on Android O.)
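As promised above, here is a minimal sketch of what step 8 looks like from the application side with the public camera2 API. It assumes the CameraDevice came from StateCallback.onOpened() and that previewSurface wraps the preview SurfaceTexture; the class name and parameter names are hypothetical.

import java.util.Arrays;

import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CaptureRequest;
import android.os.Handler;
import android.view.Surface;

public class PreviewStarter {
    // Configures the output streams and starts the repeating preview request.
    public void startPreview(final CameraDevice cameraDevice, final Surface previewSurface,
            final Handler backgroundHandler) throws CameraAccessException {
        cameraDevice.createCaptureSession(Arrays.asList(previewSurface),
                new CameraCaptureSession.StateCallback() {
                    @Override
                    public void onConfigured(CameraCaptureSession session) {
                        try {
                            // The streams are configured: this is the point described as
                            // CAMERA_PREVIEW_READY in step 8.
                            CaptureRequest.Builder builder = cameraDevice
                                    .createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
                            builder.addTarget(previewSurface);
                            // The repeating request drives the preview stream.
                            session.setRepeatingRequest(builder.build(),
                                    /*callback*/ null, backgroundHandler);
                        } catch (CameraAccessException e) {
                            e.printStackTrace();
                        }
                    }

                    @Override
                    public void onConfigureFailed(CameraCaptureSession session) {
                        // Stream configuration was rejected (e.g. unsupported size or format).
                    }
                }, backgroundHandler);
    }
}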


Continuing from the previous article: the next step is to obtain the CameraService handle and call the connectDevice() function in CameraService.

frameworks/base/core/java/android/hardware/camera2/CameraManager.java

private CameraDevice openCameraDeviceUserAsync(String cameraId,
        CameraDevice.StateCallback callback, Handler handler, final int uid)
        throws CameraAccessException {
    // First obtain a handle to CameraService
    ICameraService cameraService = CameraManagerGlobal.get().getCameraService();
    // Send the connect request to cameraService
    cameraUser = cameraService.connectDevice(callbacks, id,
            mContext.getOpPackageName(), uid);
}

First, look at how the CameraService handle is obtained:

frameworks/base/core/java/android/hardware/camera2/CameraManager.java

CameraManagerGlobal.get().getCameraService();

public ICameraService getCameraService() {
    connectCameraServiceLocked();
}

private void connectCameraServiceLocked() {
    // Look up the cameraservice handle through ServiceManager. The service name is
    // private static final String CAMERA_SERVICE_BINDER_NAME = "media.camera",
    // the same name that cameraservice used to register itself with servicemanager
    // during the cameraservice startup described earlier, so what we get back here
    // is the CameraService handle.
    IBinder cameraServiceBinder =
            ServiceManager.getService(CAMERA_SERVICE_BINDER_NAME);
    // Convert the IBinder we looked up into an ICameraService. When cameraservice was
    // registered, the ICameraService was stored as an IBinder; here the conversion is
    // reversed, from which we can infer that CameraService.cpp must ultimately derive
    // from IBinder.
    ICameraService cameraService = ICameraService.Stub.asInterface(cameraServiceBinder);
    // Register an ICameraServiceListener so that a callback fires when a new camera
    // becomes available.
    cameraService.addListener(this);
}

接着看下CameraService.cpp是不是继承自IBinder

CameraService.h

class CameraService : public ::android::hardware::BnCameraService, ...
// (rest omitted) CameraService inherits from BnCameraService.

BnCameraService.h

// namespace: android::hardware
namespace android {
namespace hardware {
class BnCameraService : public ::android::BnInterface<ICameraService>
}
}

ICameraService here is generated automatically from ICameraService.aidl by the aidl tool. After conversion, ICameraService.aidl produces ICameraService.java, ICameraService.h and ICameraService.cpp; in earlier Android versions only the .java file was generated. If a *.aidl file is listed in an Android.mk whose build target is a library, for example include $(BUILD_SHARED_LIBRARY), the .h and .cpp files are generated automatically as well.

Next, check how BnInterface relates to IBinder:

IInterface.h

template<typename INTERFACE>
class BnInterface : public INTERFACE, public BBinder

BnInterface is a template class; INTERFACE here is ICameraService, and BnInterface also inherits from BBinder.

frameworks/native/include/binder/Binder.h

class BBinder : public IBinder {}

So BBinder does derive from IBinder.

Given this inheritance chain, the connectDevice call flow is:

CameraManager.java

private CameraDevice openCameraDeviceUserAsync(String cameraId,
        CameraDevice.StateCallback callback, Handler handler, final int uid)
        throws CameraAccessException {
    ......
    try {
        if (supportsCamera2ApiLocked(cameraId)) {
            // Use cameraservice's cameradeviceclient implementation for HAL3.2+ devices
            ICameraService cameraService = CameraManagerGlobal.get().getCameraService();
            if (cameraService == null) {
                throw new ServiceSpecificException(
                        ICameraService.ERROR_DISCONNECTED,
                        "Camera service is currently unavailable");
            }
            cameraUser = cameraService.connectDevice(callbacks, cameraId,
                    mContext.getOpPackageName(), uid);
        } else {
        }
    } catch (ServiceSpecificException e) {
    } catch (RemoteException e) {
    }
    return device;
}

out/target/common/obj/java_libraries/framework_intermediates/.../ICameraService.java

This is the .java file auto-generated from ICameraService.aidl:

public interface ICameraService extends android.os.IInterface
{
    /** Local-side IPC implementation stub class. */
    public static abstract class Stub extends android.os.Binder
            implements android.hardware.ICameraService
    {
        private static class Proxy implements android.hardware.ICameraService
        {
            /**
             * Open a camera device through the new camera API
             * Only supported for device HAL versions >= 3.2
             */
            @Override public android.hardware.camera2.ICameraDeviceUser connectDevice(
                    android.hardware.camera2.ICameraDeviceCallbacks callbacks,
                    java.lang.String cameraId, java.lang.String opPackageName, int clientUid)
                    throws android.os.RemoteException
            {
                android.os.Parcel _data = android.os.Parcel.obtain();
                android.os.Parcel _reply = android.os.Parcel.obtain();
                android.hardware.camera2.ICameraDeviceUser _result;
                try {
                    _data.writeInterfaceToken(DESCRIPTOR);
                    _data.writeStrongBinder((((callbacks!=null))?(callbacks.asBinder()):(null)));
                    _data.writeString(cameraId);
                    _data.writeString(opPackageName);
                    _data.writeInt(clientUid);
                    mRemote.transact(Stub.TRANSACTION_connectDevice, _data, _reply, 0);
                    _reply.readException();
                    _result = android.hardware.camera2.ICameraDeviceUser.Stub
                            .asInterface(_reply.readStrongBinder());
                }
                return _result;
            }
        }
    }
}

mRemote.transact() starts the cross-process call: the request travels through IBinder, BpBinder and IPCThreadState into the Binder driver, and the driver forwards it to the cameraService server side. For a given request, the client and the server use the same transaction (business) code.
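To make this shared-transaction-code idea concrete, here is a minimal, purely hypothetical hand-written Binder interface in Java (it is not part of the camera code): the client proxy and the server stub agree on the same transaction code, which is exactly what the generated ICameraService Stub/Proxy pair does with TRANSACTION_connectDevice.

import android.os.Binder;
import android.os.IBinder;
import android.os.Parcel;
import android.os.RemoteException;

// A hypothetical, hand-rolled Binder interface mirroring what aidl generates.
public class RemoteAdder {
    static final String DESCRIPTOR = "com.example.IRemoteAdder";
    static final int TRANSACTION_add = IBinder.FIRST_CALL_TRANSACTION;

    // Server side: plays the role of BnCameraService / ICameraService.Stub.
    public static class Stub extends Binder {
        @Override
        protected boolean onTransact(int code, Parcel data, Parcel reply, int flags)
                throws RemoteException {
            if (code == TRANSACTION_add) {
                data.enforceInterface(DESCRIPTOR);
                int a = data.readInt();
                int b = data.readInt();
                reply.writeNoException();
                reply.writeInt(a + b);   // the "business" implementation
                return true;
            }
            return super.onTransact(code, data, reply, flags);
        }
    }

    // Client side: plays the role of ICameraService.Stub.Proxy.
    public static int add(IBinder remote, int a, int b) throws RemoteException {
        Parcel data = Parcel.obtain();
        Parcel reply = Parcel.obtain();
        try {
            data.writeInterfaceToken(DESCRIPTOR);
            data.writeInt(a);
            data.writeInt(b);
            // Same transaction code as the server checks for in onTransact().
            remote.transact(TRANSACTION_add, data, reply, 0);
            reply.readException();
            return reply.readInt();
        } finally {
            data.recycle();
            reply.recycle();
        }
    }
}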

The onTransact() method in CameraService.cpp is then invoked:

status_t CameraService::onTransact(uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags) {
    return BnCameraService::onTransact(code, data, reply, flags);
}

ICameraService.cpp

::android::status_t BnCameraService::onTransact(uint32_t _aidl_code, const ::android::Parcel& _aidl_data,
        ::android::Parcel* _aidl_reply, uint32_t _aidl_flags) {
    case Call::CONNECTDEVICE:
    ::android::sp<::android::hardware::camera2::ICameraDeviceUser> _aidl_return;
    // connectDevice was invoked from CameraManager.java with four arguments; here an extra
    // out-parameter of type ICameraDeviceUser is added and returned to the client as part of
    // the reply. This connectDevice is the call that really reaches CameraService.cpp's
    // connectDevice() method.
    ::android::binder::Status _aidl_status(connectDevice(in_callbacks, in_cameraId,
            in_opPackageName, in_clientUid, &_aidl_return));
    // Write _aidl_return into the returned data structure
    _aidl_ret_status = _aidl_reply->writeStrongBinder(
            ::android::hardware::camera2::ICameraDeviceUser::asBinder(_aidl_return));
}

Let's first look at how _aidl_return gets back to the client. CameraService writes the result into the _aidl_reply parcel, and the Binder driver then delivers it to the client. In detail: after issuing the request the client thread goes to sleep; once the server side has produced a result and written it to the Binder driver, the driver wakes the client up to read it. The receiving client here is the Proxy inside ICameraService.java:

// public interface ICameraService :: public static abstract class Stub :: private static class Proxy
@Override public android.hardware.camera2.ICameraDeviceUser connectDevice(
        android.hardware.camera2.ICameraDeviceCallbacks callbacks, int cameraId,
        java.lang.String opPackageName, int clientUid) throws android.os.RemoteException {
    // This line is where the request is sent
    mRemote.transact(Stub.TRANSACTION_connectDevice, _data, _reply, 0);
    // This is the result returned after the server side has processed the request:
    // _reply.readStrongBinder() reads it out of the parcel, and the value is returned
    // to CameraManager.
    _result = android.hardware.camera2.ICameraDeviceUser.Stub.asInterface(_reply.readStrongBinder());
    return _result;
}

Next, see what connectDevice() in CameraService.cpp actually does:

CameraService.cpp

Status CameraService::connectDevice(
        const sp<hardware::camera2::ICameraDeviceCallbacks>& cameraCb,
        int cameraId, const String16& clientPackageName, int clientUid,
        /*out*/ sp<hardware::camera2::ICameraDeviceUser>* device) {
    // device is an out-parameter of type ICameraDeviceUser, which is also generated from
    // ICameraDeviceUser.aidl. It corresponds to the CameraDeviceClient instance `client`:
    // CameraDeviceClient inherits BnCameraDeviceUser and therefore ICameraDeviceUser.
    sp<CameraDeviceClient> client = nullptr;
    // connectHelper is declared in CameraService.h
    ret = connectHelper<hardware::camera2::ICameraDeviceCallbacks, CameraDeviceClient>
            (cameraCb, id, CAMERA_HAL_API_VERSION_UNSPECIFIED, clientPackageName,
            clientUid, USE_CALLING_PID, API_2, /*legacyMode*/ false, /*shimUpdateOnly*/ false,
            /*out*/ client);
    *device = client;
}

CameraService.h

On Android O, the implementation of connectHelper was moved into frameworks/av/services/camera/libcameraservice/CameraService.cpp.

connectHelper is a template method: CALLBACK is hardware::camera2::ICameraDeviceCallbacks and CLIENT is CameraDeviceClient. Its main job is to create the camera client instance and call its initialize() method.

template<class CALLBACK, class CLIENT>
binder::Status CameraService::connectHelper(const sp<CALLBACK>& cameraCb, const String8& cameraId,
        int halVersion, const String16& clientPackageName, int clientUid, int clientPid,
        apiLevel effectiveApiLevel, bool legacyMode, bool shimUpdateOnly,
        /*out*/ sp<CLIENT>& device) {
    ret = makeClient(this, cameraCb, clientPackageName, id, facing, clientPid,
            clientUid, getpid(), legacyMode, halVersion, deviceVersion, effectiveApiLevel,
            /*out*/ &tmp);
    client = static_cast<CLIENT*>(tmp.get());
    err = client->initialize(mModule);
}

CameraService.cpp

Status CameraService::makeClient(const sp<CameraService>& cameraService,
        const sp<IInterface>& cameraCb, const String16& packageName, int cameraId,
        int facing, int clientPid, uid_t clientUid, int servicePid, bool legacyMode,
        int halVersion, int deviceVersion, apiLevel effectiveApiLevel,
        /*out*/ sp<BasicClient>* client) {
    // Depending on the API version, a different camera client type is created;
    // here a CameraDeviceClient instance is created.
    *client = new CameraDeviceClient(cameraService, tmp, packageName, cameraId,
            facing, clientPid, clientUid, servicePid);
}

Look at CameraDeviceClient's inheritance:

CameraDeviceClient.h

class CameraDeviceClient :
        public Camera2ClientBase<CameraDeviceClientBase>,
        public camera2::FrameProcessorBase::FilteredListener {}

struct CameraDeviceClientBase :
        public CameraService::BasicClient,
        public hardware::camera2::BnCameraDeviceUser {}

CameraDeviceClient thus inherits CameraService::BasicClient, implements the ICameraDeviceUser Binder API, and also implements the frame-processing thread's listener interface.

Next, the CameraDeviceClient constructor:

CameraDeviceClient.cpp

CameraDeviceClient::CameraDeviceClient(const sp<CameraService>& cameraService,
        const sp<hardware::camera2::ICameraDeviceCallbacks>& remoteCallback,
        const String16& clientPackageName, int cameraId, int cameraFacing, int clientPid,
        uid_t clientUid, int servicePid) :
    Camera2ClientBase(cameraService, remoteCallback, clientPackageName,
            cameraId, cameraFacing, clientPid, clientUid, servicePid),

It mainly invokes the parent class Camera2ClientBase's constructor from its initializer list.

Camera2ClientBase.cpp

Camera2ClientBase is a template class; TClientBase here is CameraDeviceClientBase, as the inheritance of CameraDeviceClient shows. Besides calling the constructor of its parent TClientBase (CameraDeviceClientBase), it also creates the Camera3Device instance.

template <typename TClientBase>
Camera2ClientBase<TClientBase>::Camera2ClientBase(
        const sp<CameraService>& cameraService, const sp<TCamCallbacks>& remoteCallback,
        const String16& clientPackageName, int cameraId, int cameraFacing, int clientPid,
        uid_t clientUid, int servicePid) :
    TClientBase(cameraService, remoteCallback, clientPackageName,
            cameraId, cameraFacing, clientPid, clientUid, servicePid) {
    mDevice = new Camera3Device(cameraId);
}

Continuing down the constructor chain, CameraDeviceClientBase calls the constructor of its parent CameraService::BasicClient, whose implementation lives in CameraService; it mainly deals with app permission (app-ops) handling, which I am not very familiar with.

Of this whole chain of constructors, the most important parts are the creation of the Camera3Device instance inside Camera2ClientBase and the initialize() call that follows.

Now the initialize() call flow:

CameraDeviceClient.cpp

status_t CameraDeviceClient::initialize(CameraModule *module) {
    res = Camera2ClientBase::initialize(module);
    // Register a listener here. mFrameProcessor is a Thread: the output-frame metadata
    // processing thread, which handles preview-callback related work. It waits for new frames
    // from the camera device and then calls the listener method onResultAvailable(). That
    // method, CameraDeviceClient::onResultAvailable, in turn invokes the callback
    // remoteCb->onResultReceived(result.mMetadata, result.mResultExtras). remoteCb is of type
    // hardware::camera2::ICameraDeviceCallbacks; the callback instance was created in
    // CameraManager.java when the camera device was opened and was handed down through
    // CameraService's connectDevice() all the way to CameraDeviceClient, so the actual
    // implementation of this callback is onResultReceived() in CameraDeviceImpl.java's
    // inner class CameraDeviceCallbacks.
    mFrameProcessor->registerListener(FRAME_PROCESSOR_LISTENER_MIN_ID,
            FRAME_PROCESSOR_LISTENER_MAX_ID, /*listener*/ this, /*sendPartials*/ true);
}

Inside CameraDeviceImpl.java's inner class CameraDeviceCallbacks, the onResultReceived() callback uses mCaptureCallbackMap to look up the callback (CameraDeviceImpl.CaptureCallback) that was passed in when preview or capture was started. That CameraDeviceImpl.CaptureCallback is actually a wrapper, built by createCaptureCallbackProxy(), around the CameraCaptureSession.CaptureCallback supplied by the application. So whenever a frame's metadata becomes available, the chain of callbacks eventually reaches onCaptureProgressed() and onCaptureCompleted() of CameraCaptureSession.CaptureCallback, delivering the metadata to the application.
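At the top of this chain sits the application's CameraCaptureSession.CaptureCallback. A minimal sketch of such a callback, the object that ultimately receives the metadata described above (the class name and log tag are hypothetical; an instance would be passed to setRepeatingRequest() or capture()):

import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CaptureRequest;
import android.hardware.camera2.CaptureResult;
import android.hardware.camera2.TotalCaptureResult;
import android.util.Log;

// The application-level callback that onResultReceived() eventually reaches.
public class PreviewResultCallback extends CameraCaptureSession.CaptureCallback {
    private static final String TAG = "PreviewResultCallback";

    @Override
    public void onCaptureProgressed(CameraCaptureSession session,
            CaptureRequest request, CaptureResult partialResult) {
        // Partial metadata for an in-flight frame.
        Log.d(TAG, "partial result, AF state = "
                + partialResult.get(CaptureResult.CONTROL_AF_STATE));
    }

    @Override
    public void onCaptureCompleted(CameraCaptureSession session,
            CaptureRequest request, TotalCaptureResult result) {
        // Full metadata for a completed frame.
        Log.d(TAG, "frame " + result.getFrameNumber() + " completed, AE state = "
                + result.get(CaptureResult.CONTROL_AE_STATE));
    }
}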


Camera2ClientBase.cpp

status_t Camera2ClientBase<TClientBase>::initialize(CameraModule *module) {
    // mDevice here is the Camera3Device instance.
    res = mDevice->initialize(module);
}

Camera3Device.cpp

status_t Camera3Device::initialize(CameraModule *module) {
    // Call CameraModule's open() to open the HAL device; from here on we are in the HAL layer.
    // The HAL device corresponds to the camera3_device_t struct. module is the CameraModule
    // instance created in CameraService::onFirstRef() when CameraService was first referenced
    // (mModule = new CameraModule(rawModule)); that part belongs to CameraService startup.
    res = module->open(deviceName.string(),
            reinterpret_cast<hw_device_t**>(&device));
    // Initialize the HAL-layer device
    res = device->ops->initialize(device, this);
    // Create the buffer manager
    mBufferManager = new Camera3BufferManager();
    res = find_camera_metadata_ro_entry(info.static_camera_characteristics,
            ANDROID_CONTROL_AE_LOCK_AVAILABLE, &aeLockAvailableEntry);
    // Start the request queue thread; once run() is called, its threadLoop() executes.
    mRequestThread = new RequestThread(this, mStatusTracker, device, aeLockAvailable);
    res = mRequestThread->run(String8::format("C3Dev-%d-ReqQueue", mId).string());
    // Create the stream-preparer thread, but do not run it right away; it is started on demand
    // when Camera3Device's prepare() is called. When is prepare() called? I did not trace it
    // with logs; one likely case is buffer pre-allocation when a session is created.
    mPreparerThread = new PreparerThread();
}

On Android O, opening the HAL device is done through CameraProviderManager instead.


-------------------------------------------------------------------------------------------------------------------

At this point the camera device has been opened. The next step is starting the preview.

Back at the application layer, in CaptureModule.java:

The camera-open flow above started from the open() call in openCameraAndStartPreview(); once the camera has opened successfully, onCameraOpened() is called back, and inside that callback camera.startPreview() starts the preview.

private void openCameraAndStartPreview() {
    mOneCameraOpener.open(cameraId, captureSetting, mCameraHandler, mainThread,
            imageRotationCalculator, mBurstController, mSoundPlayer,
            new OpenCallback() {
                @Override
                public void onCameraOpened(@Nonnull final OneCamera camera) {
                    mCamera = camera;
                    updatePreviewBufferDimension();
                    updatePreviewBufferSize();
                    camera.startPreview(new Surface(getPreviewSurfaceTexture()),
                            new CaptureReadyCallback() {
                                @Override
                                public void onReadyForCapture() {
                                    // Starting the preview first requires creating the capture
                                    // session; if the session is created successfully this
                                    // callback fires, meaning the preview is ready and a
                                    // capture can follow.
                                    mMainThread.execute(new Runnable() {
                                        public void run() {
                                            onPreviewStarted();
                                            onReadyStateChanged(true);
                                        }
                                    });
                                }
                            });
                }
            }, ...);
}

OneCameraImpl.java

public void startPreview(Surface previewSurface, CaptureReadyCallback listener) {
    setupAsync(mPreviewSurface, listener);
}

Start the capture session asynchronously:

private void setupAsync(final Surface previewSurface, final CaptureReadyCallback listener) {
    mCameraHandler.post(new Runnable() {
        @Override
        public void run() {
            setup(previewSurface, listener);
        }
    });
}

private void setup(Surface previewSurface, final CaptureReadyCallback listener) {
    mDevice.createCaptureSession(outputSurfaces, new CameraCaptureSession.StateCallback() {
        public void onConfigured(CameraCaptureSession session) {
            mCaptureSession = session;
            boolean success = repeatingPreview(null);
            if (success) {
                listener.onReadyForCapture();
            }
        }
    }, ...);
}

Session creation goes through createCaptureSession() in CameraDeviceImpl.java, which then calls configureStreamsChecked() to configure the streams. Whether the session is "created successfully" means whether the input and output streams were configured successfully. If they were, the device blocks until it goes idle and StateCallbackKK.onIdle() is called back; configuration can also fail, for example when the size or format is unsupported, in which case StateCallbackKK.onUnconfigured() is called back. Either way a new CameraCaptureSessionImpl instance is created. If configuration succeeded, onConfigured() of the CameraCaptureSession.StateCallback above is called back with the CameraCaptureSessionImpl instance as its argument, which is what mCaptureSession = session stores in OneCameraImpl.java. In other words, only after a successful configuration is the preview request (repeatingPreview) issued.

OneCameraImpl.java

private boolean repeatingPreview(Object tag) {
    CaptureRequest.Builder builder = mDevice.
            createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    mCaptureSession.setRepeatingRequest(builder.build(), mCaptureCallback,
            mCameraHandler);
}

If the request builds successfully, the camera is ready to take pictures.
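For context, "ready to take a picture" at the application level simply means submitting a one-shot request on the same session. A minimal sketch, assuming an already configured session and an ImageReader surface as the JPEG target (all names here are hypothetical):

import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CaptureRequest;
import android.os.Handler;
import android.view.Surface;

public class StillCapture {
    // Issues a single still-capture request on an already configured session.
    public void takePicture(CameraDevice device, CameraCaptureSession session,
            Surface jpegSurface, CameraCaptureSession.CaptureCallback callback,
            Handler handler) throws CameraAccessException {
        CaptureRequest.Builder builder =
                device.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
        builder.addTarget(jpegSurface);
        // capture() submits the request once; setRepeatingRequest() would keep resubmitting it.
        session.capture(builder.build(), callback, handler);
    }
}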

CameraCaptureSessionImpl.java

public synchronized int setRepeatingRequest(CaptureRequest request, CaptureCallback callback,
        Handler handler) throws CameraAccessException {
    // The requestId of the submitted request is queued here. Creating a session means configuring
    // the camera device's internal pipeline and allocating memory buffers, which takes time, so a
    // capture request is queued after submission and executed once the session is ready.
    return addPendingSequence(mDeviceImpl.setRepeatingRequest(request,
            createCaptureCallbackProxy(handler, callback), mDeviceHandler));
}

The parameter createCaptureCallbackProxy(handler, callback) adapts the callback from CameraDeviceImpl.CaptureCallback to CameraCaptureSession.CaptureCallback; its most important method is onCaptureCompleted(). The callback here is mCaptureCallback from OneCameraImpl.java.

Let's continue with how the request is created and submitted.

CameraDeviceImpl.java

public int setRepeatingRequest(CaptureRequest request, CaptureCallback callback,
        Handler handler) throws CameraAccessException {
    return submitCaptureRequest(requestList, callback, handler, /*streaming*/ true);
}

private int submitCaptureRequest(List<CaptureRequest> requestList, CaptureCallback callback,
        Handler handler, boolean repeating) throws CameraAccessException {
    requestInfo = mRemoteDevice.submitRequestList(requestArray, repeating);
    if (callback != null) {
        mCaptureCallbackMap.put(requestInfo.getRequestId(),
                new CaptureCallbackHolder(
                        callback, requestList, handler, repeating, mNextSessionId - 1));
    }
}

The request is submitted through mRemoteDevice, an instance of ICameraDeviceUserWrapper:

ICameraDeviceUserWrapper.java

public SubmitInfo submitRequest(CaptureRequest request, boolean streaming) {
    return mRemoteDevice.submitRequest(request, streaming);
}

Here mRemoteDevice is of type ICameraDeviceUser, and this instance is the one returned by CameraService's connectDevice().

As noted earlier, ICameraDeviceUser corresponds to CameraDeviceClient, and CameraDeviceClient corresponds to the client type held inside CameraService.

Both ICameraDeviceUser.java and ICameraDeviceUser.cpp are generated automatically from the aidl file.

In this way the request crosses the process boundary via aidl, travelling from ICameraDeviceUser.java over to CameraDeviceClient.cpp, and thereby reaches cameraservice.

Back to how submitCaptureRequest handles the callback: the callback is wrapped and stored in mCaptureCallbackMap, keyed by the requestId. So when does this callback actually get invoked?

As mentioned when discussing the initialization in CameraDeviceClient.cpp, mFrameProcessor is the output-frame metadata processing thread and handles preview-callback related work. It waits for new frames from the camera device and then calls the listener method onResultAvailable(). That method, CameraDeviceClient::onResultAvailable, in turn invokes the callback remoteCb->onResultReceived(result.mMetadata, result.mResultExtras). remoteCb is of type hardware::camera2::ICameraDeviceCallbacks; this callback instance was created in CameraManager.java when the camera device was opened and was passed down through CameraService's connectDevice() all the way to CameraDeviceClient. So the actual implementation of this callback is onResultReceived() in CameraDeviceImpl.java's inner class CameraDeviceCallbacks.

Let's look at the onResultReceived() method:

CameraDeviceImpl.java

public void onResultReceived(CameraMetadataNative result,
        CaptureResultExtras resultExtras) throws RemoteException {
    // Look up the holder by requestId
    int requestId = resultExtras.getRequestId();
    final CaptureCallbackHolder holder =
            CameraDeviceImpl.this.mCaptureCallbackMap.get(requestId);
    final CaptureRequest request = holder.getRequest(resultExtras.getSubsequenceId());
    // Fetch the callback through the holder and invoke it, passing in the data resultAsCapture.
    holder.getCallback().onCaptureProgressed(CameraDeviceImpl.this,
            request, resultAsCapture);
    holder.getCallback().onCaptureCompleted(CameraDeviceImpl.this,
            request, resultAsCapture);
}

This is how the data from the lower layers reaches the framework and then the application layer.

In addition, the image data itself can be obtained through an ImageReader.

When the camera object is instantiated, an ImageReader is created and its listener is registered; when a new image becomes available, its onImageAvailable() callback fires, and inside that callback the image data is read and stored.


// OneCameraImpl.java
private final ImageReader mCaptureImageReader;

ImageReader.OnImageAvailableListener mCaptureImageListener =
        new ImageReader.OnImageAvailableListener() {
            @Override
            public void onImageAvailable(ImageReader reader) {
                // Add the image data to the latest in-flight capture.
                // If all the data for that capture is complete, store the
                // image data.
                InFlightCapture capture = null;
                synchronized (mCaptureQueue) {
                    if (mCaptureQueue.getFirst().setImage(reader.acquireLatestImage())
                            .isCaptureComplete()) {
                        capture = mCaptureQueue.removeFirst();
                    }
                }
                if (capture != null) {
                    onCaptureCompleted(capture);
                }
            }
        };

Create the ImageReader instance and set the listener:

OneCameraImpl(CameraDevice device, CameraCharacteristics characteristics, Size pictureSize) {
    mCaptureImageReader = ImageReader.newInstance(pictureSize.getWidth(),
            pictureSize.getHeight(),
            sCaptureImageFormat, 2);
    mCaptureImageReader.setOnImageAvailableListener(mCaptureImageListener,
            mCameraHandler);
}
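The listener above relies on OneCameraImpl's internal InFlightCapture queue. As a self-contained reference, here is a minimal sketch of the same ImageReader pattern that simply saves the JPEG bytes in onImageAvailable(); the class name, output path and buffer count are hypothetical:

import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

import android.graphics.ImageFormat;
import android.media.Image;
import android.media.ImageReader;
import android.os.Handler;

public class JpegReaderSetup {
    // Creates an ImageReader whose surface can be added as a capture request target.
    public static ImageReader createJpegReader(int width, int height, Handler handler,
            final String outputPath) {
        ImageReader reader = ImageReader.newInstance(width, height, ImageFormat.JPEG, 2);
        reader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
            @Override
            public void onImageAvailable(ImageReader r) {
                Image image = r.acquireNextImage();
                if (image == null) {
                    return;
                }
                // JPEG images carry all of their bytes in a single plane.
                ByteBuffer buffer = image.getPlanes()[0].getBuffer();
                byte[] bytes = new byte[buffer.remaining()];
                buffer.get(bytes);
                image.close();
                try (FileOutputStream out = new FileOutputStream(outputPath)) {
                    out.write(bytes);
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }, handler);
        return reader;
    }
}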


When a capture is finished, onCaptureCompleted() is called back.

OneCameraImpl.java

private void onCaptureCompleted(InFlightCapture capture) {
    // Experimental support for writing RAW. We do not have a usable JPEG
    // here, so we don't use the usual capture session mechanism and instead
    // just store the RAW file in its own directory.
    // TODO: If we make this a real feature we should probably put the DNGs
    // into the Camera directly.
    // The raw image data can be stored as a DNG...
    if (sCaptureImageFormat == ImageFormat.RAW_SENSOR) {
        if (!RAW_DIRECTORY.exists()) {
            if (!RAW_DIRECTORY.mkdirs()) {
                throw new RuntimeException("Could not create RAW directory.");
            }
        }
        File dngFile = new File(RAW_DIRECTORY, capture.session.getTitle() + ".dng");
        writeDngBytesAndClose(capture.image, capture.totalCaptureResult,
                mCharacteristics, dngFile);
    } else {
        // ...or the result can be saved as a JPEG.
        // Since this is not an HDR+ session, we will just save the
        // result.
        byte[] imageBytes = acquireJpegBytesAndClose(capture.image);
        saveJpegPicture(imageBytes, capture.parameters, capture.session,
                capture.totalCaptureResult);
    }
    broadcastReadyState(true);
    capture.parameters.callback.onPictureTaken(capture.session);
}

writeDngBytesAndClose() is called to write out the DNG file:

private static void writeDngBytesAndClose(Image image, TotalCaptureResult captureResult,
        CameraCharacteristics characteristics, File dngFile) {
    try (DngCreator dngCreator = new DngCreator(characteristics, captureResult);
            FileOutputStream outputStream = new FileOutputStream(dngFile)) {
        // TODO: Add DngCreator#setThumbnail and add the DNG to the normal
        // filmstrip.
        dngCreator.writeImage(outputStream, image);
        outputStream.close();
        image.close();
    } catch (IOException e) {
        Log.e(TAG, "Could not store DNG file", e);
        return;
    }
    Log.i(TAG, "Successfully stored DNG file: " + dngFile.getAbsolutePath());
}

