The previous chapter only briefly covered how the functions a HAL interface must implement are defined, without touching their internal implementation. You can think of it this way: the outside acts as the external framework interface of the camera HAL, while the inside is implemented concretely by rkisp.
First, a few of the more important classes.
1. RKISP1CameraHw
This is the hardware description class created by RK.
As early as Camera3HAL::init, it is created via
mCameraHw = ICameraHw::createCameraHW(mCameraId);
and then mCameraHw->init is executed.
RKISP1CameraHw inherits from ICameraHw, so everything that follows looks at the implementation inside RKISP1CameraHw.
Its init function creates several important classes:
ImguUnit *mImguUnit;
ControlUnit *mControlUnit;
GraphConfigManager mGCM;
(1) ImguUnit
Mainly handles the configuration of the data streams.
(2) ControlUnit
3A-related control.
(3) mGCM
Configures the related graph links.
2. Back to the interface implementation of the previous section.
The RequestThread loop inspects each message and dispatches to the matching handler:
RequestThread::handleConfigureStreams(Message & msg)
{
    ……
    for (uint32_t i = 0; i < streamsNum; i++) {
        stream = msg.data.streams.list->streams[i];
        if (!stream->priv) {
            mStreams.push_back(stream);
            // create the local stream
            CameraStream* localStream = new CameraStream(mStreamSeqNo, stream, mResultProcessor);
            mLocalStreams.push_back(localStream);   // container of local streams
            localStream->setActive(true);
            stream->priv = localStream;             // stash the private structure's address
            mStreamSeqNo++;
        }
        ……
    }
    ……
    // then executes RKISP1CameraHw::configStreams
    status = mCameraHw->configStreams(mStreams, operation_mode);
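The stream->priv trick in handleConfigureStreams, stashing the address of a HAL-local wrapper inside the framework's stream struct so later calls can recover it, is worth isolating. A minimal sketch under stand-in types (FrameworkStream and LocalStream are hypothetical simplifications of camera3_stream_t and CameraStream, not the real definitions):

```cpp
#include <memory>
#include <vector>

// Stand-in for camera3_stream_t: the framework owns it, the HAL may use priv.
struct FrameworkStream {
    int width = 0, height = 0;
    void* priv = nullptr;   // reserved for the HAL's private bookkeeping
};

// Hypothetical HAL-side wrapper, playing the role of CameraStream.
struct LocalStream {
    int seqNo;
    FrameworkStream* fw;    // back-pointer to the framework stream
};

// Wrap each framework stream exactly once, as handleConfigureStreams does.
void wrapStreams(std::vector<FrameworkStream*>& streams,
                 std::vector<std::unique_ptr<LocalStream>>& locals) {
    int seq = static_cast<int>(locals.size());
    for (FrameworkStream* s : streams) {
        if (!s->priv) {                       // not yet wrapped
            auto local = std::make_unique<LocalStream>(LocalStream{seq++, s});
            s->priv = local.get();            // stash the wrapper's address
            locals.push_back(std::move(local));
        }
    }
}

// Later, any code holding only the framework stream can recover the wrapper:
LocalStream* lookup(FrameworkStream* s) {
    return static_cast<LocalStream*>(s->priv);
}
```

Because configure_streams may be called again with the same streams, the `if (!s->priv)` guard keeps the wrapping idempotent.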
3. The streams passed down from the framework are wrapped into local CameraStream objects as above and all kept in mStreams.
Then RKISP1CameraHw::configStreams is executed, which ends up in doConfigureStreams:
status_t RKISP1CameraHw::doConfigureStreams(UseCase newUseCase,
uint32_t operation_mode, int32_t testPatternMode)
{……
status_t status = mGCM.configStreams(streams, operation_mode, testPatternMode);//3.1, configure formats and media-ctl inputs/outputs
……
status = mImguUnit->configStreams(streams, mConfigChanged);//3.2
……
status = mControlUnit->configStreams(mConfigChanged);//3.3
……
return mImguUnit->configStreamsDone();//3.4
}
3.1
status_t GraphConfigManager::configStreams(const vector<camera3_stream_t*> &streams,
uint32_t operationMode,
int32_t testPatternMode)
{
……
ret = gc->getSensorMediaCtlConfig(mCameraId, testPatternMode,
&mMediaCtlConfigs[CIO2]);
……
ret |= gc->getImguMediaCtlConfig(mCameraId, testPatternMode,
&mMediaCtlConfigs[IMGU_COMMON]);
The first call matches against the resolutions supported under the effect-tuning resolutions, sets up the sensor's links, formats and crop,
and stores the result in mMediaCtlConfigs[CIO2].
The second call is similar: the links, formats and crop of the MIPI/ISP SP and MP paths are stored in mMediaCtlConfigs[IMGU_COMMON].
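The media-ctl configuration stored in mMediaCtlConfigs is ultimately replayed onto two standard kernel UAPIs: MEDIA_IOC_SETUP_LINK on the media controller node to enable a link, and VIDIOC_SUBDEV_S_FMT on a subdev node to set a pad format. A hedged sketch of one such entry (the entity and pad numbers are invented, and no real device is opened, so the helpers are exercised on an invalid fd below):

```cpp
#include <cerrno>
#include <cstring>
#include <sys/ioctl.h>
#include <linux/media.h>        // MEDIA_IOC_SETUP_LINK, media_link_desc
#include <linux/v4l2-subdev.h>  // VIDIOC_SUBDEV_S_FMT, v4l2_subdev_format

// Enable one media-controller link (e.g. sensor pad 0 -> CSI receiver pad 0).
int enableLink(int mediaFd, __u32 srcEntity, __u32 sinkEntity) {
    struct media_link_desc link;
    memset(&link, 0, sizeof(link));
    link.source.entity = srcEntity;
    link.source.index  = 0;
    link.sink.entity   = sinkEntity;
    link.sink.index    = 0;
    link.flags         = MEDIA_LNK_FL_ENABLED;
    return ioctl(mediaFd, MEDIA_IOC_SETUP_LINK, &link);
}

// Set the active format on one subdev pad.
int setPadFormat(int subdevFd, __u32 pad, __u32 w, __u32 h, __u32 mbusCode) {
    struct v4l2_subdev_format fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.which         = V4L2_SUBDEV_FORMAT_ACTIVE;
    fmt.pad           = pad;
    fmt.format.width  = w;
    fmt.format.height = h;
    fmt.format.code   = mbusCode;  // e.g. a raw Bayer media bus code
    return ioctl(subdevFd, VIDIOC_SUBDEV_S_FMT, &fmt);
}
```

On a real system mediaFd/subdevFd would come from opening /dev/mediaX and the subdev's /dev/v4l-subdevX node; the HAL's MediaCtlHelper performs exactly this kind of replay from the stored configs.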
3.2
ImguUnit::configStreams(std::vector<camera3_stream_t*> &activeStreams, bool configChanged)
{
……
status_t status = createProcessingTasks(graphConfig);
……
status = mPollerThread->init(mCurPipeConfig->nodes,
this, POLLPRI | POLLIN | POLLOUT | POLLERR, false);
What follows initializes the thread that polls the listening nodes.
The main thing to look at is createProcessingTasks, which does a great deal:
ImguUnit::createProcessingTasks(std::shared_ptr<GraphConfig> graphConfig)
{
    ……
    status = mMediaCtlHelper.configure(mGCM, IStreamConfigProvider::CIO2);
    ……
    // this call and the one above apply the configuration collected earlier
    status = mMediaCtlHelper.configure(mGCM, IStreamConfigProvider::IMGU_COMMON);
    ……
    mMainOutWorker->attachNode(it.second);                      // mNode = node; a V4L2VideoNode
    mMainOutWorker->attachStream(mStreamNodeMapping[it.first]); // mStream = stream;
    // add the node to videoConfig
    videoConfig->deviceWorkers.push_back(mMainOutWorker);
    videoConfig->pollableWorkers.push_back(mMainOutWorker);
    videoConfig->nodes.push_back(mMainOutWorker->getNode());
    // if the listener type matches this type, addListener registers the stream on mMainOutWorker
    setStreamListeners(it.first, mMainOutWorker);
    ……
    // these two blocks initialize the MP and SP workers and add them to mCurPipeConfig
    mSelfOutWorker->attachNode(it.second);
    mSelfOutWorker->attachStream(mStreamNodeMapping[it.first]);
    videoConfig->deviceWorkers.push_back(mSelfOutWorker);
    videoConfig->pollableWorkers.push_back(mSelfOutWorker);
    videoConfig->nodes.push_back(mSelfOutWorker->getNode());
    setStreamListeners(it.first, mSelfOutWorker);
    //shutter event for non isys, zyc
    mListenerDeviceWorkers.push_back(mSelfOutWorker.get());
    ……
    for (const auto &it : mCurPipeConfig->deviceWorkers) {  // mCurPipeConfig should cover both capture and preview
        // discussed separately below, see note 1
        ret = (*it).configure(graphConfig, mConfigChanged); // configure each worker
Note 1
ret = (*it).configure(graphConfig, mConfigChanged); // this is OutputFrameWorker::configure, which calls configPostPipeLine
configPostPipeLine:
mPostPipeline->prepare(sourceFmt, streams, mNeedPostProcess, mPipelineDepth); // initializes the post-processing classes and the ordering of the processing modules, i.e. their listener relationships
mPostPipeline->start(); // starts each processing unit's thread
setWorkerDeviceBuffers
mNode->setBufferPool
requestBuffers — issues the ioctl
pioctl(mFd, VIDIOC_REQBUFS, &req_buf, mName.c_str())
newBuffer — issues the ioctl
pioctl(mFd, VIDIOC_QUERYBUF, vbuf.get(), mName.c_str());
allocateWorkerBuffers — allocates the buffers
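The requestBuffers/newBuffer chain above (and the startWorker step in section 3.4) is the stock V4L2 buffer negotiation. A minimal sketch using only the standard <linux/videodev2.h> UAPI (no real video node is opened here; the test calls the helpers on an invalid fd):

```cpp
#include <cerrno>
#include <cstring>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

// Ask the driver for `count` MMAP buffers, as requestBuffers does.
int requestBuffers(int fd, unsigned count) {
    struct v4l2_requestbuffers req;
    memset(&req, 0, sizeof(req));
    req.count  = count;
    req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    return ioctl(fd, VIDIOC_REQBUFS, &req);   // pioctl(.., VIDIOC_REQBUFS, ..)
}

// Query one buffer's offset/length, as newBuffer does before mmap.
int queryBuffer(int fd, unsigned index, struct v4l2_buffer* vbuf) {
    memset(vbuf, 0, sizeof(*vbuf));
    vbuf->index  = index;
    vbuf->type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    vbuf->memory = V4L2_MEMORY_MMAP;
    return ioctl(fd, VIDIOC_QUERYBUF, vbuf);  // pioctl(.., VIDIOC_QUERYBUF, ..)
}

// Turn the stream on, which is all kickstart ultimately does (section 3.4).
int streamOn(int fd) {
    int type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    return ioctl(fd, VIDIOC_STREAMON, &type); // pioctl(.., VIDIOC_STREAMON, ..)
}
```

In the real worker, each queried buffer is then mmap'ed (or exported) and tracked in the worker's buffer pool before any QBUF/DQBUF happens.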
3.3
ControlUnit::configStreams(bool configChanged)
{
……
//store the device paths of each subdev (isp, sensor, stat, lens, para) in mDevPathsMap
getDevicesPath();
……
//prepare and start the 3A work by calling the following
if (mCtrlLoop) {
status = mCtrlLoop->start(prepareParams);
3.4
Only one function is called: inside kickstart, each entry of mCurPipeConfig->deviceWorkers has its startWorker called, which calls V4L2VideoNode::start and sends pioctl(mFd, VIDIOC_STREAMON, &mBufType, mName.c_str()); to the driver to start streaming.
Summary:
doConfigureStreams first collects the media-ctl entity names and link topology into mGCM. mImguUnit (the main logic) then makes the connections, allocates buffers, sets formats, and drives the corresponding V4L2 ioctls; mControlUnit handles the 3A-related work. Configuring streams is, in short, performing everything that has to happen before data can start flowing.
4. The interface function process_capture_request
It is passed a camera3_capture_request structure
and runs directly into RequestThread::handleProcessCaptureRequest:
status_t
RequestThread::handleProcessCaptureRequest(Message & msg)
{
……
//Camera3Request is the local wrapper; mRequestsPool is the pool used to store requests.
status = mRequestsPool.acquireItem(&request);
……
//classify and assign the relevant streams in the request, initializing the local request
status = request->init(msg.data.request3.request3,
mResultProcessor,
mLastSettings, mCameraId);
……
//analyzed in detail below
status = captureRequest(request);
RequestThread::captureRequest(Camera3Request* request)
{
    ……
    status = mResultProcessor->registerRequest(request);        //4.1
    ……
    for (unsigned int i = 0; i < outStreams->size(); i++) {
        streamNode = outStreams->at(i);
        stream = static_cast<CameraStream *>(streamNode);
        stream->processRequest(request);                        //4.2
    }
    for (unsigned int i = 0; i < inStreams->size(); i++) {
        streamNode = inStreams->at(i);
        stream = static_cast<CameraStream *>(streamNode);
        status = stream->processRequest(request);               //4.3
        if (status != NO_ERROR) {
            LOGE("%s, fail to process stream request", __FUNCTION__);
            break;
        }
    }
    status = mCameraHw->processRequest(request, mRequestsInHAL); //4.4
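Almost every class met so far (RequestThread, ControlUnit, ImguUnit, ResultProcessor) follows the same pattern: the public method packs a Message with an id, posts it to a queue, and a dedicated worker thread switches on the id. A generic sketch of that pattern (the names are illustrative, not the HAL's own):

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>
#include <thread>

enum MessageId { MESSAGE_ID_REGISTER_REQUEST, MESSAGE_ID_EXIT };

struct Message { MessageId id; int requestId; };

class MessageThread {
public:
    MessageThread() : mWorker([this] { loop(); }) {}
    ~MessageThread() { post({MESSAGE_ID_EXIT, 0}); mWorker.join(); }

    // Public API: just enqueue; the worker thread does the real work.
    void registerRequest(int id) { post({MESSAGE_ID_REGISTER_REQUEST, id}); }

    int handledCount() {
        std::lock_guard<std::mutex> l(mLock);
        return mHandled;
    }

private:
    void post(Message m) {
        { std::lock_guard<std::mutex> l(mLock); mQueue.push_back(m); }
        mCond.notify_one();
    }
    void loop() {
        for (;;) {
            Message m;
            {
                std::unique_lock<std::mutex> l(mLock);
                mCond.wait(l, [this] { return !mQueue.empty(); });
                m = mQueue.front(); mQueue.pop_front();
            }
            switch (m.id) {                      // the "case" dispatch
            case MESSAGE_ID_REGISTER_REQUEST: {  // -> e.g. handleRegisterRequest
                std::lock_guard<std::mutex> l(mLock);
                mHandled++;
                break;
            }
            case MESSAGE_ID_EXIT:
                return;
            }
        }
    }
    std::mutex mLock;
    std::condition_variable mCond;
    std::deque<Message> mQueue;
    int mHandled = 0;
    std::thread mWorker;  // declared last: starts after the state above exists
};
```

The FIFO queue is what makes "send MESSAGE_ID_X, then the corresponding case runs handleX" a safe way to serialize work without locking every member.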
4.1 ResultProcessor::registerRequest
sends a MESSAGE_ID_REGISTER_REQUEST message; the corresponding case executes
ResultProcessor::handleRegisterRequest,
which stores the request in mRequestsInTransit.
4.2
stream->processRequest
only appends the request to each stream's mPendingRequests and calls mProducer->capture(buffer, request); as far as the code shows, this has no real implementation.
4.3 Same as 4.2.
4.4 mCameraHw->processRequest(request, mRequestsInHAL);
mRequestsInHAL records how many requests are currently pending in the HAL. Following this function, the final implementation again lives almost entirely in rkisp1: RKISP1CameraHw::processRequest calls
mControlUnit->processRequest
ControlUnit::processRequest initializes a RequestCtrlState structure and sends a MESSAGE_ID_NEW_REQUEST message, whose case executes
handleNewRequest
ControlUnit::handleNewRequest(Message &msg)
{
const CameraMetadata *reqSettings = reqState->request->getSettings();
……
status = mSettingsProcessor->processRequestSettings(*reqSettings, *reqState);//4.41
……
status = processRequestForCapture(reqState);//4.42
}
4.41 This directly calls
SettingsProcessor::processCroppingRegion(const CameraMetadata &settings,
RequestCtrlState &reqCfg)
which first sets the crop region and then calls CameraMetadata::updateImpl
to update the entries of ctrlUnitResult inside RequestCtrlState.
4.42
processRequestForCapture then runs the 3A algorithms on the capture data:
mCtrlLoop->setFrameParams is called first to set the frame parameters (3A related), followed by completeProcessing, which first uses the metadata that came down with the request to update the metadata returned in ctrlUnitResult.
Then mImguUnit->completeRequest wraps a DeviceMessage structure
and sends a MESSAGE_COMPLETE_REQ message, which executes handleMessageCompleteReq:
ImguUnit::handleMessageCompleteReq(DeviceMessage &msg)
{……
status = processNextRequest();
……
status |= mPollerThread->pollRequest(request->getId(), 3000,
&(mCurPipeConfig->nodes));
}
Look at processNextRequest, which calls OutputFrameWorker::prepareRun.
This performs the qbuf and fills mPostWorkingBufs and mOutputBuffers, which are the first and last buffers of the whole processing chain; the intermediate ones come from pre-allocated buffers.
mPollerThread->pollRequest
sends a message into PollerThread.
Everything up to here can be regarded as the first half of the request path: ControlUnit handles RequestCtrlState messages and ImguUnit handles DeviceMessage messages.
Afterwards PollerThread::handlePollRequest runs.
It enters a loop whose core is pollDevices, which blocks until the underlying devices report events:
ret = V4L2DeviceBase::pollDevices(mPollingDevices, mActiveDevices,
mInactiveDevices,
msg.data.request.timeout, mFlushFd[0],
mEvents);
status = notifyListener(&outMsg);
Once a node has data, the pollEvent structure inside DeviceMessage is filled in with the id, the devices that have data, and the devices being polled. Knowing which nodes have events,
ImguUnit::notifyPollEvent sends a MESSAGE_ID_POLL message, which executes handleMessagePoll,
and the case in turn runs startProcessing.
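PollerThread's blocking wait, including the mFlushFd trick (a pipe polled alongside the device nodes so a flush can wake the thread early), can be reproduced with plain poll(2). A sketch in which a pipe stands in for the V4L2 node:

```cpp
#include <poll.h>
#include <unistd.h>

// Wait until either the "device" fd has data (POLLIN) or the flush fd fires.
// Returns 1 if the device is ready, 0 on a flush wake-up, -1 on timeout/error.
int pollDevices(int deviceFd, int flushFd, int timeoutMs) {
    struct pollfd fds[2];
    fds[0].fd      = deviceFd;
    fds[0].events  = POLLIN | POLLPRI | POLLERR;  // like the HAL's poll mask
    fds[0].revents = 0;
    fds[1].fd      = flushFd;                     // the mFlushFd read end
    fds[1].events  = POLLIN;
    fds[1].revents = 0;

    int ret = poll(fds, 2, timeoutMs);
    if (ret <= 0)
        return -1;                 // timeout or error
    if (fds[1].revents & POLLIN) { // flush requested: drain the byte and report
        char c;
        if (read(flushFd, &c, 1) < 0)
            return -1;
        return 0;
    }
    return (fds[0].revents & POLLIN) ? 1 : -1;
}
```

In the test, writing to the "device" pipe plays the role of the driver signaling a filled buffer, and writing to the flush pipe plays the role of a flush request interrupting the wait.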
OutputFrameWorker::run executes
status = mNode->grabFrame(&outBuf); // dqbuf, grabbing the sensor data
Before that, it sends CAPTURE_EVENT_SHUTTER, which is translated into a MESSAGE_ID_NEW_SHUTTER message to ControlUnit.
On receiving it, handleNewShutter runs (this is a separate thread, so it may execute after postRun).
It updates the timestamp in the returned metadata, copies the request fields not yet assigned into reqState.ctrlUnitResult (provided the metadata has already come back from the 3A program), and flags it as filled with mMetadtaFilled = true.
It then sends MESSAGE_ID_SHUTTER_DONE to ResultProcessor,
which calls returnShutterDone and enters mCallbackOps->notify(mCallbackOps, &shutter); to deliver the shutter event.
If allMetaReceived shows everything has arrived (the flags are set in handleMetadataDone, which usually happens later), the metadata is returned as well.
postRun->
After setting up the input and output buffers, this calls
mPostPipeline->processFrame(tempBuf, outBufs, mMsg->pMsg.processingSettings);
which sends a MESSAGE_ID_PROCESSFRAME message, handled by
PostProcessPipeLine::handleProcessFrame
This routes the post buffer into mOutputBuffer, i.e. the incoming buffer into the request's display buffer.
When a buffer has the ext attribute, the output buffer added is the one that came down with the request.
The first unit, mPostProcUnitArray[kFirstLevel], runs notifyNewFrame,
which executes mInBufferPool.push_back(std::make_pair(buf, settings)); mCondition.notify_all();
adding to the current unit's mInBufferPool and signaling that data has arrived.
The unit's PostProcessUnit::prepareProcess, once both awaited conditions hold, proceeds:
it sets mCurPostProcBufOut, using a pre-allocated buffer for the int type, or directly the display buffer for the external type,
and then calls the unit's
PostProcessUnit::processFrame
Once the image is processed, relayToNextProcUnit moves to the next unit: for a non-ext buffer, notifyListeners notifies the next processing unit; for an ext buffer this is the last stage, and the notifyListeners target is mOutputBuffersHandler.
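The notifyNewFrame → prepareProcess → processFrame → relayToNextProcUnit hand-off is a chain of producer/consumer queues, one worker thread per unit. A compact sketch (integer "frames" and a +1 step stand in for real image processing; the member names mirror the HAL's, the rest is invented):

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

// One processing stage: a queue fed by notifyNewFrame, drained by its own
// thread, which "processes" the frame and relays it to the next stage.
class ProcUnit {
public:
    explicit ProcUnit(ProcUnit* next) : mNext(next), mThread([this] { loop(); }) {}
    ~ProcUnit() { notifyNewFrame(-1); mThread.join(); }   // -1 = stop marker

    void notifyNewFrame(int frame) {                      // producer side
        { std::lock_guard<std::mutex> l(mLock); mInBufferPool.push_back(frame); }
        mCondition.notify_all();
    }
    std::vector<int> results() {
        std::lock_guard<std::mutex> l(mLock);
        return mDone;
    }
private:
    void loop() {
        for (;;) {
            int frame;
            {   // prepareProcess: wait until a frame is queued
                std::unique_lock<std::mutex> l(mLock);
                mCondition.wait(l, [this] { return !mInBufferPool.empty(); });
                frame = mInBufferPool.front();
                mInBufferPool.erase(mInBufferPool.begin());
            }
            if (frame < 0) {                              // stop: relay and exit
                if (mNext) mNext->notifyNewFrame(frame);
                return;
            }
            frame += 1;                                   // "processFrame"
            { std::lock_guard<std::mutex> l(mLock); mDone.push_back(frame); }
            if (mNext) mNext->notifyNewFrame(frame);      // relayToNextProcUnit
        }
    }
    ProcUnit* mNext;
    std::mutex mLock;
    std::condition_variable mCondition;
    std::vector<int> mInBufferPool;   // mirrors the HAL's mInBufferPool
    std::vector<int> mDone;
    std::thread mThread;              // declared last: starts after members exist
};
```

The last unit in the chain (mNext == nullptr) corresponds to the ext case above, where mOutputBuffersHandler hands the finished buffer back instead of relaying further.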
mPipeline->mPostProcFrameListener->notifyNewFrame
OutputFrameWorker::notifyNewFrame
buf->cambuf->captureDone
CameraBuffer::captureDone unlocks the buffer and handles the fence,
then executes mOwner->captureDone
mCallback->bufferDone(pendingRequest, aBuffer);
inside which showDebugFPS prints the frame rate, and MESSAGE_ID_BUFFER_DONE is sent to ResultProcessor.
Finally this lands in handleBufferDone
returnPendingBuffers
processCaptureResult
mCallbackOps->process_capture_result(mCallbackOps, result);
which appears to be the same function used later to return the buffer.
Then CAPTURE_EVENT_SHUTTER is sent; on reception a MESSAGE_ID_NEW_SHUTTER message goes out:
ControlUnit::handleNewShutter
handleShutterDone
returnShutterDone
mCallbackOps->notify(mCallbackOps, &shutter);
After startProcessing finishes postRun, listener->notifyCaptureEvent is called,
sending CAPTURE_REQUEST_DONE, translated into a MESSAGE_ID_NEW_REQUEST_DONE message to
ControlUnit, which then executes
handleNewRequestDone
request->mCallback->metadataDone
sending MESSAGE_ID_METADATA_DONE to ResultProcessor,
which executes ResultProcessor::handleMetadataDone
and then returnPendingPartials
mCallbackOps->process_capture_result(mCallbackOps, &result); returning the metadata
(note that ctrlUnitResult = request->getPartialResultBuffer(CONTROL_UNIT_PARTIAL_RESULT) was assigned earlier).
This should complete the delivery of one frame. Compared with HAL1, the main additions are the pipeline concept and the fact that cameraservice can control every frame; the display side has moved entirely up the stack, so the HAL no longer sees nativewindow and the like. Whether recording grabs frames above the HAL and then encodes them is something to trace further up the stack.