MTK Multi-Frame Algorithm Integration

By reading this article, you will learn the following:

1. Selecting a feature and configuring the feature table
2. Mounting the algorithm
3. Custom metadata
4. Calling the algorithm from the APP
5. Conclusion

1. Selecting a feature and configuring the feature table

1.1 Selecting a feature

Multi-frame noise reduction (MFNR) is a very common multi-frame algorithm, and MTK's predefined features already include MTK_FEATURE_MFNR and TP_FEATURE_MFNR, so we can reuse one of them instead of adding a new feature. Since we are integrating a third-party algorithm, we choose TP_FEATURE_MFNR.

1.2 Configuring the feature table

Having settled on TP_FEATURE_MFNR, we still need to add it to the feature table:

diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp
index f14ff8a6e2..38365e0602 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp
@@ -106,6 +106,7 @@ using namespace NSCam::v3::pipeline::policy::scenariomgr;
 #define MTK_FEATURE_COMBINATION_TP_VSDOF_MFNR (MTK_FEATURE_MFNR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_VSDOF| TP_FEATURE_WATERMARK)
 #define MTK_FEATURE_COMBINATION_TP_FUSION (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_FUSION| TP_FEATURE_WATERMARK)
 #define MTK_FEATURE_COMBINATION_TP_PUREBOKEH (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_PUREBOKEH| TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_TP_MFNR (TP_FEATURE_MFNR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| MTK_FEATURE_MFNR)
 // streaming feature combination (TODO: it should be refined by streaming scenario feature)
 #define MTK_FEATURE_COMBINATION_VIDEO_NORMAL (MTK_FEATURE_FB|TP_FEATURE_FB|TP_FEATURE_WATERMARK)
@@ -136,6 +137,7 @@ const std::vector<std::unordered_map<int32_t, ScenarioFeatures>> gMtkScenarioFe
     ADD_CAMERA_FEATURE_SET(TP_FEATURE_HDR, MTK_FEATURE_COMBINATION_HDR)
     ADD_CAMERA_FEATURE_SET(MTK_FEATURE_AINR, MTK_FEATURE_COMBINATION_AINR)
     ADD_CAMERA_FEATURE_SET(MTK_FEATURE_MFNR, MTK_FEATURE_COMBINATION_MFNR)
+    ADD_CAMERA_FEATURE_SET(TP_FEATURE_MFNR, MTK_FEATURE_COMBINATION_TP_MFNR)
     ADD_CAMERA_FEATURE_SET(MTK_FEATURE_REMOSAIC, MTK_FEATURE_COMBINATION_REMOSAIC)
     ADD_CAMERA_FEATURE_SET(NO_FEATURE_NORMAL, MTK_FEATURE_COMBINATION_SINGLE)
 CAMERA_SCENARIO_END
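Here MTK_FEATURE_COMBINATION_TP_MFNR lists the features that are allowed to run together with TP_FEATURE_MFNR in the same capture, and the new ADD_CAMERA_FEATURE_SET line registers that pair in the capture scenario's feature list, so the scenario manager can select TP_FEATURE_MFNR once the plugin added in section 2 accepts the request.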

Note:

Starting with Android Q (10.0), MTK reworked how the scenario configuration table is customized. On Android Q and later, the feature has to be configured in
vendor/mediatek/proprietary/custom/[platform]/hal/camera/camera_custom_feature_table.cpp, where [platform] is a platform name such as mt6580 or mt6763. A hypothetical sketch of such an entry is shown below.
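The following is only an illustrative sketch, not part of the original patch: it assumes the custom table uses the same CAMERA_SCENARIO_BEGIN / ADD_CAMERA_FEATURE_SET / CAMERA_SCENARIO_END macros as mtk_scenario_mgr.cpp above; the scenario name and the exact feature combination must be taken from the copy of the file on your platform.

// Hypothetical example; verify against your platform's camera_custom_feature_table.cpp.
#define MTK_FEATURE_COMBINATION_TP_MFNR (TP_FEATURE_MFNR | MTK_FEATURE_NR | MTK_FEATURE_ABF | \
        MTK_FEATURE_CZ | MTK_FEATURE_DRE | MTK_FEATURE_HFG | MTK_FEATURE_DCE | MTK_FEATURE_FB)

CAMERA_SCENARIO_BEGIN(/* capture scenario id defined in the platform's table */)
    ADD_CAMERA_FEATURE_SET(TP_FEATURE_MFNR, MTK_FEATURE_COMBINATION_TP_MFNR)
    ADD_CAMERA_FEATURE_SET(NO_FEATURE_NORMAL, MTK_FEATURE_COMBINATION_SINGLE)
CAMERA_SCENARIO_END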

2. Mounting the algorithm

2.1 Choosing a plugin for the algorithm

In vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/3rdparty/plugin/PipelinePluginType.h, MTK HAL3 divides the mount points for third-party algorithms into roughly the following categories:

  • BokehPlugin: mount point for bokeh algorithms, i.e. the blurring half of a dual-camera depth algorithm.

  • DepthPlugin: mount point for depth algorithms, i.e. the depth-computation half of a dual-camera depth algorithm.

  • FusionPlugin: mount point for a combined dual-camera depth algorithm, where Depth and Bokeh are merged into a single algorithm.

  • JoinPlugin: mount point for streaming-related algorithms; preview algorithms are mounted here.

  • MultiFramePlugin: mount point for multi-frame algorithms, both YUV and RAW, e.g. MFNR/HDR.

  • RawPlugin: mount point for RAW algorithms, e.g. remosaic.

  • YuvPlugin: mount point for single-frame YUV algorithms, e.g. beautification or wide-angle lens distortion correction.

Match the algorithm you are integrating to the corresponding plugin. Since this is a multi-frame algorithm, MultiFramePlugin is the only choice. Also, as a rule, multi-frame algorithms are used only for capture, not for preview.

2.2 Adding a global macro switch

To control whether a given project integrates this algorithm, we add a macro to device/mediateksample/[platform]/ProjectConfig.mk that gates the compilation of the newly integrated algorithm:

QXT_MFNR_SUPPORT = yes

When a project does not need this algorithm, simply set QXT_MFNR_SUPPORT to no in device/mediateksample/[platform]/ProjectConfig.mk.

2.3 Writing the algorithm integration files

Implement MFNR capture by referring to vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mfnr/MFNRImpl.cpp. The directory structure is as follows:
vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/cp_tp_mfnr/
├── Android.mk
├── include
│   └── mf_processor.h
├── lib
│   ├── arm64-v8a
│   │   └── libmultiframe.so
│   └── armeabi-v7a
│       └── libmultiframe.so
└── MFNRImpl.cpp

File descriptions:

  • Android.mk configures the algorithm library, the header file, and the integration source file MFNRImpl.cpp, and builds them into the library libmtkcam.plugin.tp_mfnr, on which libmtkcam_3rdparty.customer depends.

  • libmultiframe.so downscales four consecutive frames and stitches them into a single image; it stands in for the third-party multi-frame algorithm library to be integrated. mf_processor.h is its header file.

  • MFNRImpl.cpp is the integration source file.

2.3.1 mtkcam3/3rdparty/customer/cp_tp_mfnr/Android.mk
ifeq ($(QXT_MFNR_SUPPORT),yes)
LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)
LOCAL_MODULE := libmultiframe
LOCAL_SRC_FILES_32 := lib/armeabi-v7a/libmultiframe.so
LOCAL_SRC_FILES_64 := lib/arm64-v8a/libmultiframe.so
LOCAL_MODULE_TAGS := optional
LOCAL_MODULE_CLASS := SHARED_LIBRARIES
LOCAL_MODULE_SUFFIX := .so
LOCAL_PROPRIETARY_MODULE := true
LOCAL_MULTILIB := both
include $(BUILD_PREBUILT)

################################################################################
#
################################################################################
include $(CLEAR_VARS)

#-----------------------------------------------------------
-include $(TOP)/$(MTK_PATH_SOURCE)/hardware/mtkcam/mtkcam.mk

#-----------------------------------------------------------
LOCAL_SRC_FILES += MFNRImpl.cpp

#-----------------------------------------------------------
LOCAL_C_INCLUDES += $(MTKCAM_C_INCLUDES)
LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_SOURCE)/hardware/mtkcam3/include $(MTK_PATH_SOURCE)/hardware/mtkcam/include
LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_COMMON)/hal/inc
LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_CUSTOM_PLATFORM)/hal/inc
LOCAL_C_INCLUDES += $(TOP)/external/libyuv/files/include/
LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_SOURCE)/hardware/mtkcam3/3rdparty/customer/cp_tp_mfnr/include
#
LOCAL_C_INCLUDES += system/media/camera/include

#-----------------------------------------------------------
LOCAL_CFLAGS += $(MTKCAM_CFLAGS)
#

#-----------------------------------------------------------
LOCAL_STATIC_LIBRARIES +=
#
LOCAL_WHOLE_STATIC_LIBRARIES +=

#-----------------------------------------------------------
LOCAL_SHARED_LIBRARIES += liblog
LOCAL_SHARED_LIBRARIES += libutils
LOCAL_SHARED_LIBRARIES += libcutils
LOCAL_SHARED_LIBRARIES += libmtkcam_modulehelper
LOCAL_SHARED_LIBRARIES += libmtkcam_stdutils
LOCAL_SHARED_LIBRARIES += libmtkcam_pipeline
LOCAL_SHARED_LIBRARIES += libmtkcam_metadata
LOCAL_SHARED_LIBRARIES += libmtkcam_metastore
LOCAL_SHARED_LIBRARIES += libmtkcam_streamutils
LOCAL_SHARED_LIBRARIES += libmtkcam_imgbuf
LOCAL_SHARED_LIBRARIES += libmtkcam_exif
#LOCAL_SHARED_LIBRARIES += libmtkcam_3rdparty

#-----------------------------------------------------------
LOCAL_HEADER_LIBRARIES := libutils_headers liblog_headers libhardware_headers

#-----------------------------------------------------------
LOCAL_MODULE := libmtkcam.plugin.tp_mfnr
LOCAL_PROPRIETARY_MODULE := true
LOCAL_MODULE_OWNER := mtk
LOCAL_MODULE_TAGS := optional
include $(MTK_STATIC_LIBRARY)

################################################################################
#
################################################################################
include $(call all-makefiles-under,$(LOCAL_PATH))
endif
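One design point worth noting: the plugin is built as a static library (include $(MTK_STATIC_LIBRARY)) rather than a shared one. It only takes effect once it is whole-archived into libmtkcam_3rdparty.customer (section 2.3.4); linking it with LOCAL_WHOLE_STATIC_LIBRARIES keeps the static registration object created by REGISTER_PLUGIN_PROVIDER from being dropped by the linker.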
2.3.2 mtkcam3/3rdparty/customer/cp_tp_mfnr/include/mf_processor.h
#ifndef QXT_MULTI_FRAME_H
#define QXT_MULTI_FRAME_H

class MFProcessor {
public:
    virtual ~MFProcessor() {}
    virtual void setFrameCount(int num) = 0;
    virtual void setParams() = 0;
    virtual void addFrame(unsigned char *src, int srcWidth, int srcHeight) = 0;
    virtual void addFrame(unsigned char *srcY, unsigned char *srcU, unsigned char *srcV,
                          int srcWidth, int srcHeight) = 0;
    virtual void scale(unsigned char *src, int srcWidth, int srcHeight,
                       unsigned char *dst, int dstWidth, int dstHeight) = 0;
    virtual void process(unsigned char *output, int outputWidth, int outputHeight) = 0;
    virtual void process(unsigned char *outputY, unsigned char *outputU, unsigned char *outputV,
                         int outputWidth, int outputHeight) = 0;

    static MFProcessor* createInstance(int width, int height);
};

#endif //QXT_MULTI_FRAME_H

The interface functions declared in the header (a short usage sketch follows the list):

  • setFrameCount: has no real effect; it simulates setting the frame count for a third-party multi-frame algorithm, since some third-party multi-frame algorithms require different frame counts in different scenes.

  • setParams: also has no real effect; it simulates setting the parameters a third-party multi-frame algorithm needs.

  • addFrame: adds one frame of image data, simulating how a third-party multi-frame algorithm is fed its input frames.

  • process: downscales the four frames added earlier and stitches them into one image of the original size.

  • createInstance: creates an instance of the interface class.
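To make the call flow concrete, here is a minimal usage sketch (not part of the original sources) of how the plugin in section 2.3.3 drives this interface; the I420 plane pointers and the frame size are assumed to come from the pipeline's input buffers:

#include "mf_processor.h"

// Feed four I420 frames to the simulated multi-frame library and fetch the tiled result.
void runMultiFrame(unsigned char *planes[4][3], int width, int height,
                   unsigned char *outY, unsigned char *outU, unsigned char *outV) {
    MFProcessor *proc = MFProcessor::createInstance(width, height);
    proc->setFrameCount(4);                              // MFNR_FRAME_COUNT in the plugin
    for (int i = 0; i < 4; ++i) {                        // one call per captured frame
        proc->addFrame(planes[i][0], planes[i][1], planes[i][2], width, height);
    }
    proc->process(outY, outU, outV, width, height);      // write the merged image
    delete proc;                                         // the plugin does the same in uninit()
}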

For readers who are interested, the implementation, mf_processor_impl.cpp, is also included:

#include <libyuv/scale.h>
#include <cstring>
#include "mf_processor.h"

using namespace std;
using namespace libyuv;

class MFProcessorImpl : public MFProcessor {
private:
    int frameCount = 4;
    int currentIndex = 0;
    unsigned char *dstBuf = nullptr;
    unsigned char *tmpBuf = nullptr;
public:
    MFProcessorImpl();
    MFProcessorImpl(int width, int height);
    ~MFProcessorImpl() override;
    void setFrameCount(int num) override;
    void setParams() override;
    void addFrame(unsigned char *src, int srcWidth, int srcHeight) override;
    void addFrame(unsigned char *srcY, unsigned char *srcU, unsigned char *srcV,
                  int srcWidth, int srcHeight) override;
    void scale(unsigned char *src, int srcWidth, int srcHeight,
               unsigned char *dst, int dstWidth, int dstHeight) override;
    void process(unsigned char *output, int outputWidth, int outputHeight) override;
    void process(unsigned char *outputY, unsigned char *outputU, unsigned char *outputV,
                 int outputWidth, int outputHeight) override;

    static MFProcessor *createInstance(int width, int height);
};

MFProcessorImpl::MFProcessorImpl() = default;

MFProcessorImpl::MFProcessorImpl(int width, int height) {
    if (dstBuf == nullptr) {
        dstBuf = new unsigned char[width * height * 3 / 2];
    }
    if (tmpBuf == nullptr) {
        tmpBuf = new unsigned char[width / 2 * height / 2 * 3 / 2];
    }
}

MFProcessorImpl::~MFProcessorImpl() {
    if (dstBuf != nullptr) {
        delete[] dstBuf;
    }
    if (tmpBuf != nullptr) {
        delete[] tmpBuf;
    }
}

void MFProcessorImpl::setFrameCount(int num) {
    frameCount = num;
}

void MFProcessorImpl::setParams() {
}

void MFProcessorImpl::addFrame(unsigned char *src, int srcWidth, int srcHeight) {
    int srcYCount = srcWidth * srcHeight;
    int srcUVCount = srcWidth * srcHeight / 4;
    int tmpWidth = srcWidth >> 1;
    int tmpHeight = srcHeight >> 1;
    int tmpYCount = tmpWidth * tmpHeight;
    int tmpUVCount = tmpWidth * tmpHeight / 4;
    //scale
    I420Scale(src, srcWidth,
              src + srcYCount, srcWidth >> 1,
              src + srcYCount + srcUVCount, srcWidth >> 1,
              srcWidth, srcHeight,
              tmpBuf, tmpWidth,
              tmpBuf + tmpYCount, tmpWidth >> 1,
              tmpBuf + tmpYCount + tmpUVCount, tmpWidth >> 1,
              tmpWidth, tmpHeight,
              kFilterNone);
    //merge
    unsigned char *pDstY;
    unsigned char *pTmpY;
    for (int i = 0; i < tmpHeight; i++) {
        pTmpY = tmpBuf + i * tmpWidth;
        if (currentIndex == 0) {
            pDstY = dstBuf + i * srcWidth;
        } else if (currentIndex == 1) {
            pDstY = dstBuf + i * srcWidth + tmpWidth;
        } else if (currentIndex == 2) {
            pDstY = dstBuf + (i + tmpHeight) * srcWidth;
        } else {
            pDstY = dstBuf + (i + tmpHeight) * srcWidth + tmpWidth;
        }
        memcpy(pDstY, pTmpY, tmpWidth);
    }
    int uvHeight = tmpHeight / 2;
    int uvWidth = tmpWidth / 2;
    unsigned char *pDstU;
    unsigned char *pDstV;
    unsigned char *pTmpU;
    unsigned char *pTmpV;
    for (int i = 0; i < uvHeight; i++) {
        pTmpU = tmpBuf + tmpYCount + uvWidth * i;
        pTmpV = tmpBuf + tmpYCount + tmpUVCount + uvWidth * i;
        if (currentIndex == 0) {
            pDstU = dstBuf + srcYCount + i * tmpWidth;
            pDstV = dstBuf + srcYCount + srcUVCount + i * tmpWidth;
        } else if (currentIndex == 1) {
            pDstU = dstBuf + srcYCount + i * tmpWidth + uvWidth;
            pDstV = dstBuf + srcYCount + srcUVCount + i * tmpWidth + uvWidth;
        } else if (currentIndex == 2) {
            pDstU = dstBuf + srcYCount + (i + uvHeight) * tmpWidth;
            pDstV = dstBuf + srcYCount + srcUVCount + (i + uvHeight) * tmpWidth;
        } else {
            pDstU = dstBuf + srcYCount + (i + uvHeight) * tmpWidth + uvWidth;
            pDstV = dstBuf + srcYCount + srcUVCount + (i + uvHeight) * tmpWidth + uvWidth;
        }
        memcpy(pDstU, pTmpU, uvWidth);
        memcpy(pDstV, pTmpV, uvWidth);
    }
    if (currentIndex < frameCount) currentIndex++;
}

void MFProcessorImpl::addFrame(unsigned char *srcY, unsigned char *srcU, unsigned char *srcV,
                               int srcWidth, int srcHeight) {
    int srcYCount = srcWidth * srcHeight;
    int srcUVCount = srcWidth * srcHeight / 4;
    int tmpWidth = srcWidth >> 1;
    int tmpHeight = srcHeight >> 1;
    int tmpYCount = tmpWidth * tmpHeight;
    int tmpUVCount = tmpWidth * tmpHeight / 4;
    //scale
    I420Scale(srcY, srcWidth,
              srcU, srcWidth >> 1,
              srcV, srcWidth >> 1,
              srcWidth, srcHeight,
              tmpBuf, tmpWidth,
              tmpBuf + tmpYCount, tmpWidth >> 1,
              tmpBuf + tmpYCount + tmpUVCount, tmpWidth >> 1,
              tmpWidth, tmpHeight,
              kFilterNone);
    //merge
    unsigned char *pDstY;
    unsigned char *pTmpY;
    for (int i = 0; i < tmpHeight; i++) {
        pTmpY = tmpBuf + i * tmpWidth;
        if (currentIndex == 0) {
            pDstY = dstBuf + i * srcWidth;
        } else if (currentIndex == 1) {
            pDstY = dstBuf + i * srcWidth + tmpWidth;
        } else if (currentIndex == 2) {
            pDstY = dstBuf + (i + tmpHeight) * srcWidth;
        } else {
            pDstY = dstBuf + (i + tmpHeight) * srcWidth + tmpWidth;
        }
        memcpy(pDstY, pTmpY, tmpWidth);
    }
    int uvHeight = tmpHeight / 2;
    int uvWidth = tmpWidth / 2;
    unsigned char *pDstU;
    unsigned char *pDstV;
    unsigned char *pTmpU;
    unsigned char *pTmpV;
    for (int i = 0; i < uvHeight; i++) {
        pTmpU = tmpBuf + tmpYCount + uvWidth * i;
        pTmpV = tmpBuf + tmpYCount + tmpUVCount + uvWidth * i;
        if (currentIndex == 0) {
            pDstU = dstBuf + srcYCount + i * tmpWidth;
            pDstV = dstBuf + srcYCount + srcUVCount + i * tmpWidth;
        } else if (currentIndex == 1) {
            pDstU = dstBuf + srcYCount + i * tmpWidth + uvWidth;
            pDstV = dstBuf + srcYCount + srcUVCount + i * tmpWidth + uvWidth;
        } else if (currentIndex == 2) {
            pDstU = dstBuf + srcYCount + (i + uvHeight) * tmpWidth;
            pDstV = dstBuf + srcYCount + srcUVCount + (i + uvHeight) * tmpWidth;
        } else {
            pDstU = dstBuf + srcYCount + (i + uvHeight) * tmpWidth + uvWidth;
            pDstV = dstBuf + srcYCount + srcUVCount + (i + uvHeight) * tmpWidth + uvWidth;
        }
        memcpy(pDstU, pTmpU, uvWidth);
        memcpy(pDstV, pTmpV, uvWidth);
    }
    if (currentIndex < frameCount) currentIndex++;
}

void MFProcessorImpl::scale(unsigned char *src, int srcWidth, int srcHeight,
                            unsigned char *dst, int dstWidth, int dstHeight) {
    I420Scale(src, srcWidth,//Y
              src + srcWidth * srcHeight, srcWidth >> 1,//U
              src + srcWidth * srcHeight * 5 / 4, srcWidth >> 1,//V
              srcWidth, srcHeight,
              dst, dstWidth,//Y
              dst + dstWidth * dstHeight, dstWidth >> 1,//U
              dst + dstWidth * dstHeight * 5 / 4, dstWidth >> 1,//V
              dstWidth, dstHeight,
              kFilterNone);
}

void MFProcessorImpl::process(unsigned char *output, int outputWidth, int outputHeight) {
    memcpy(output, dstBuf, outputWidth * outputHeight * 3 / 2);
    currentIndex = 0;
}

void MFProcessorImpl::process(unsigned char *outputY, unsigned char *outputU, unsigned char *outputV,
                              int outputWidth, int outputHeight) {
    int yCount = outputWidth * outputHeight;
    int uvCount = yCount / 4;
    memcpy(outputY, dstBuf, yCount);
    memcpy(outputU, dstBuf + yCount, uvCount);
    memcpy(outputV, dstBuf + yCount + uvCount, uvCount);
    currentIndex = 0;
}

MFProcessor* MFProcessor::createInstance(int width, int height) {
    return new MFProcessorImpl(width, height);
}
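As the merge loops show, each downscaled frame is copied into one quadrant of dstBuf: the frame at currentIndex 0 lands in the top-left, 1 in the top-right, 2 in the bottom-left, and 3 (or any later frame) in the bottom-right, which produces the 2x2 collage seen in the verification capture in section 4.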
2.3.3 mtkcam3/3rdparty/customer/cp_tp_mfnr/MFNRImpl.cpp
#ifdef LOG_TAG
#undef LOG_TAG
#endif // LOG_TAG
#define LOG_TAG "MFNRProvider"

static const char *__CALLERNAME__ = LOG_TAG;
//
#include <mtkcam/utils/std/Log.h>
//
#include <stdlib.h>
#include <utils/Errors.h>
#include <utils/List.h>
#include <utils/RefBase.h>
#include <sstream>
#include <unordered_map> // std::unordered_map
#include <cutils/properties.h> // property_get_bool(), used in init()
//
#include <mtkcam/utils/metadata/client/mtk_metadata_tag.h>
#include <mtkcam/utils/metadata/hal/mtk_platform_metadata_tag.h>
//zHDR
#include <mtkcam/utils/hw/HwInfoHelper.h> // NSCamHw::HwInfoHelper
#include <mtkcam3/feature/utils/FeatureProfileHelper.h> //ProfileParam
#include <mtkcam/drv/IHalSensor.h>
//
#include <mtkcam/utils/imgbuf/IIonImageBufferHeap.h>
//
#include <mtkcam/utils/std/Format.h>
#include <mtkcam/utils/std/Time.h>
//
#include <mtkcam3/pipeline/hwnode/NodeId.h>
//
#include <mtkcam/utils/metastore/IMetadataProvider.h>
#include <mtkcam/utils/metastore/ITemplateRequest.h>
#include <mtkcam/utils/metastore/IMetadataProvider.h>
#include <mtkcam3/3rdparty/plugin/PipelinePlugin.h>
#include <mtkcam3/3rdparty/plugin/PipelinePluginType.h>
//
#include <isp_tuning/isp_tuning.h> //EIspProfile_T, EOperMode_*
//
#include <custom_metadata/custom_metadata_tag.h>
//
#include <libyuv.h>
#include <mf_processor.h>

using namespace NSCam;
using namespace android;
using namespace std;
using namespace NSCam::NSPipelinePlugin;
using namespace NSIspTuning;

/******************************************************************************
 *
 ******************************************************************************/
#define MY_LOGV(fmt, arg...) CAM_LOGV("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
#define MY_LOGD(fmt, arg...) CAM_LOGD("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
#define MY_LOGI(fmt, arg...) CAM_LOGI("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
#define MY_LOGW(fmt, arg...) CAM_LOGW("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
#define MY_LOGE(fmt, arg...) CAM_LOGE("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
//
#define MY_LOGV_IF(cond, ...) do { if ( (cond) ) { MY_LOGV(__VA_ARGS__); } }while(0)
#define MY_LOGD_IF(cond, ...) do { if ( (cond) ) { MY_LOGD(__VA_ARGS__); } }while(0)
#define MY_LOGI_IF(cond, ...) do { if ( (cond) ) { MY_LOGI(__VA_ARGS__); } }while(0)
#define MY_LOGW_IF(cond, ...) do { if ( (cond) ) { MY_LOGW(__VA_ARGS__); } }while(0)
#define MY_LOGE_IF(cond, ...) do { if ( (cond) ) { MY_LOGE(__VA_ARGS__); } }while(0)
//
#define ASSERT(cond, msg) do { if (!(cond)) { printf("Failed: %s\n", msg); return; } }while(0)

#define __DEBUG // enable debug
#ifdef __DEBUG
#include <memory>
#define FUNCTION_SCOPE \
    auto __scope_logger__ = [](char const* f)->std::shared_ptr<const char>{ \
        CAM_LOGD("(%d)[%s] + ", ::gettid(), f); \
        return std::shared_ptr<const char>(f, [](char const* p){CAM_LOGD("(%d)[%s] -", ::gettid(), p);}); \
    }(__FUNCTION__)
#else
#define FUNCTION_SCOPE
#endif

template <typename T>
inline MBOOL
tryGetMetadata(
    IMetadata* pMetadata,
    MUINT32 const tag,
    T & rVal
)
{
    if (pMetadata == NULL) {
        MY_LOGW("pMetadata == NULL");
        return MFALSE;
    }
    IMetadata::IEntry entry = pMetadata->entryFor(tag);
    if (!entry.isEmpty()) {
        rVal = entry.itemAt(0, Type2Type<T>());
        return MTRUE;
    }
    return MFALSE;
}

#define MFNR_FRAME_COUNT 4

/******************************************************************************
 *
 ******************************************************************************/
class MFNRProviderImpl : public MultiFramePlugin::IProvider {
    typedef MultiFramePlugin::Property Property;
    typedef MultiFramePlugin::Selection Selection;
    typedef MultiFramePlugin::Request::Ptr RequestPtr;
    typedef MultiFramePlugin::RequestCallback::Ptr RequestCallbackPtr;

public:
    virtual void set(MINT32 iOpenId, MINT32 iOpenId2) {
        MY_LOGD("set openId:%d openId2:%d", iOpenId, iOpenId2);
        mOpenId = iOpenId;
    }

    virtual const Property& property() {
        FUNCTION_SCOPE;
        static Property prop;
        static bool inited;
        if (!inited) {
            prop.mName = "TP_MFNR";
            prop.mFeatures = TP_FEATURE_MFNR;
            prop.mThumbnailTiming = eTiming_P2;
            prop.mPriority = ePriority_Highest;
            prop.mZsdBufferMaxNum = 8; // maximum frames requirement
            prop.mNeedRrzoBuffer = MTRUE; // rrzo requirement for BSS
            inited = MTRUE;
        }
        return prop;
    };

    virtual MERROR negotiate(Selection& sel) {
        FUNCTION_SCOPE;
        IMetadata* appInMeta = sel.mIMetadataApp.getControl().get();
        tryGetMetadata<MINT32>(appInMeta, QXT_FEATURE_MFNR, mEnable);
        MY_LOGD("mEnable: %d", mEnable);
        if (!mEnable) {
            MY_LOGD("Force off TP_MFNR shot");
            return BAD_VALUE;
        }
        sel.mRequestCount = MFNR_FRAME_COUNT;
        MY_LOGD("mRequestCount=%d", sel.mRequestCount);
        sel.mIBufferFull
            .setRequired(MTRUE)
            .addAcceptedFormat(eImgFmt_I420) // I420 first
            .addAcceptedFormat(eImgFmt_YV12)
            .addAcceptedFormat(eImgFmt_NV21)
            .addAcceptedFormat(eImgFmt_NV12)
            .addAcceptedSize(eImgSize_Full);
        //sel.mIBufferSpecified.setRequired(MTRUE).setAlignment(16, 16);
        sel.mIMetadataDynamic.setRequired(MTRUE);
        sel.mIMetadataApp.setRequired(MTRUE);
        sel.mIMetadataHal.setRequired(MTRUE);
        if (sel.mRequestIndex == 0) {
            sel.mOBufferFull
                .setRequired(MTRUE)
                .addAcceptedFormat(eImgFmt_I420) // I420 first
                .addAcceptedFormat(eImgFmt_YV12)
                .addAcceptedFormat(eImgFmt_NV21)
                .addAcceptedFormat(eImgFmt_NV12)
                .addAcceptedSize(eImgSize_Full);
            sel.mOMetadataApp.setRequired(MTRUE);
            sel.mOMetadataHal.setRequired(MTRUE);
        } else {
            sel.mOBufferFull.setRequired(MFALSE);
            sel.mOMetadataApp.setRequired(MFALSE);
            sel.mOMetadataHal.setRequired(MFALSE);
        }
        return OK;
    };

    virtual void init() {
        FUNCTION_SCOPE;
        mDump = property_get_bool("vendor.debug.camera.mfnr.dump", 0);
        //nothing to do for MFNR
    };

    virtual MERROR process(RequestPtr pRequest, RequestCallbackPtr pCallback) {
        FUNCTION_SCOPE;
        MERROR ret = 0;
        // restore callback function for abort API
        if (pCallback != nullptr) {
            m_callbackprt = pCallback;
        }
        //maybe need to keep a copy in member<sp>
        IMetadata* pAppMeta = pRequest->mIMetadataApp->acquire();
        IMetadata* pHalMeta = pRequest->mIMetadataHal->acquire();
        IMetadata* pHalMetaDynamic = pRequest->mIMetadataDynamic->acquire();
        MINT32 processUniqueKey = 0;
        IImageBuffer* pInImgBuffer = NULL;
        uint32_t width = 0;
        uint32_t height = 0;
        if (!IMetadata::getEntry<MINT32>(pHalMeta, MTK_PIPELINE_UNIQUE_KEY, processUniqueKey)) {
            MY_LOGE("cannot get unique about MFNR capture");
            return BAD_VALUE;
        }
        if (pRequest->mIBufferFull != nullptr) {
            pInImgBuffer = pRequest->mIBufferFull->acquire();
            width = pInImgBuffer->getImgSize().w;
            height = pInImgBuffer->getImgSize().h;
            MY_LOGD("[IN] Full image VA: 0x%p, Size(%dx%d), Format: %s",
                    pInImgBuffer->getBufVA(0), width, height, format2String(pInImgBuffer->getImgFormat()));
            if (mDump) {
                char path[256];
                snprintf(path, sizeof(path), "/data/vendor/camera_dump/mfnr_capture_in_%d_%dx%d.%s",
                         pRequest->mRequestIndex, width, height, format2String(pInImgBuffer->getImgFormat()));
                pInImgBuffer->saveToFile(path);
            }
        }
        if (pRequest->mIBufferSpecified != nullptr) {
            IImageBuffer* pImgBuffer = pRequest->mIBufferSpecified->acquire();
            MY_LOGD("[IN] Specified image VA: 0x%p, Size(%dx%d)", pImgBuffer->getBufVA(0), pImgBuffer->getImgSize().w, pImgBuffer->getImgSize().h);
        }
        if (pRequest->mOBufferFull != nullptr) {
            mOutImgBuffer = pRequest->mOBufferFull->acquire();
            MY_LOGD("[OUT] Full image VA: 0x%p, Size(%dx%d)", mOutImgBuffer->getBufVA(0), mOutImgBuffer->getImgSize().w, mOutImgBuffer->getImgSize().h);
        }
        if (pRequest->mIMetadataDynamic != nullptr) {
            IMetadata *meta = pRequest->mIMetadataDynamic->acquire();
            if (meta != NULL)
                MY_LOGD("[IN] Dynamic metadata count: %d", meta->count());
            else
                MY_LOGD("[IN] Dynamic metadata Empty");
        }
        MY_LOGD("frame:%d/%d, width:%d, height:%d", pRequest->mRequestIndex, pRequest->mRequestCount, width, height);
        if (pInImgBuffer != NULL && mOutImgBuffer != NULL) {
            uint32_t yLength = pInImgBuffer->getBufSizeInBytes(0);
            uint32_t uLength = pInImgBuffer->getBufSizeInBytes(1);
            uint32_t vLength = pInImgBuffer->getBufSizeInBytes(2);
            uint32_t yuvLength = yLength + uLength + vLength;
            if (pRequest->mRequestIndex == 0) {//First frame
                //When width or height changed, recreate multiFrame
                if (mLatestWidth != width || mLatestHeight != height) {
                    if (mMFProcessor != NULL) {
                        delete mMFProcessor;
                        mMFProcessor = NULL;
                    }
                    mLatestWidth = width;
                    mLatestHeight = height;
                }
                if (mMFProcessor == NULL) {
                    MY_LOGD("create mMFProcessor %dx%d", mLatestWidth, mLatestHeight);
                    mMFProcessor = MFProcessor::createInstance(mLatestWidth, mLatestHeight);
                    mMFProcessor->setFrameCount(pRequest->mRequestCount);
                }
            }
            mMFProcessor->addFrame((uint8_t *)pInImgBuffer->getBufVA(0),
                                   (uint8_t *)pInImgBuffer->getBufVA(1),
                                   (uint8_t *)pInImgBuffer->getBufVA(2),
                                   mLatestWidth, mLatestHeight);
            if (pRequest->mRequestIndex == pRequest->mRequestCount - 1) {//Last frame
                if (mMFProcessor != NULL) {
                    mMFProcessor->process((uint8_t *)mOutImgBuffer->getBufVA(0),
                                          (uint8_t *)mOutImgBuffer->getBufVA(1),
                                          (uint8_t *)mOutImgBuffer->getBufVA(2),
                                          mLatestWidth, mLatestHeight);
                    if (mDump) {
                        char path[256];
                        snprintf(path, sizeof(path), "/data/vendor/camera_dump/mfnr_capture_out_%d_%dx%d.%s",
                                 pRequest->mRequestIndex, mOutImgBuffer->getImgSize().w, mOutImgBuffer->getImgSize().h,
                                 format2String(mOutImgBuffer->getImgFormat()));
                        mOutImgBuffer->saveToFile(path);
                    }
                } else {
                    memcpy((uint8_t *)mOutImgBuffer->getBufVA(0),
                           (uint8_t *)pInImgBuffer->getBufVA(0),
                           pInImgBuffer->getBufSizeInBytes(0));
                    memcpy((uint8_t *)mOutImgBuffer->getBufVA(1),
                           (uint8_t *)pInImgBuffer->getBufVA(1),
                           pInImgBuffer->getBufSizeInBytes(1));
                    memcpy((uint8_t *)mOutImgBuffer->getBufVA(2),
                           (uint8_t *)pInImgBuffer->getBufVA(2),
                           pInImgBuffer->getBufSizeInBytes(2));
                }
                mOutImgBuffer = NULL;
            }
        }
        if (pRequest->mIBufferFull != nullptr) {
            pRequest->mIBufferFull->release();
        }
        if (pRequest->mIBufferSpecified != nullptr) {
            pRequest->mIBufferSpecified->release();
        }
        if (pRequest->mOBufferFull != nullptr) {
            pRequest->mOBufferFull->release();
        }
        if (pRequest->mIMetadataDynamic != nullptr) {
            pRequest->mIMetadataDynamic->release();
        }
        mvRequests.push_back(pRequest);
        MY_LOGD("collected request(%d/%d)", pRequest->mRequestIndex, pRequest->mRequestCount);
        if (pRequest->mRequestIndex == pRequest->mRequestCount - 1) {
            for (auto req : mvRequests) {
                MY_LOGD("callback request(%d/%d) %p", req->mRequestIndex, req->mRequestCount, pCallback.get());
                if (pCallback != nullptr) {
                    pCallback->onCompleted(req, 0);
                }
            }
            mvRequests.clear();
        }
        return ret;
    };

    virtual void abort(vector<RequestPtr>& pRequests) {
        FUNCTION_SCOPE;
        bool bAbort = false;
        IMetadata *pHalMeta;
        MINT32 processUniqueKey = 0;
        for (auto req : pRequests) {
            bAbort = false;
            pHalMeta = req->mIMetadataHal->acquire();
            if (!IMetadata::getEntry<MINT32>(pHalMeta, MTK_PIPELINE_UNIQUE_KEY, processUniqueKey)) {
                MY_LOGW("cannot get unique about MFNR capture");
            }
            if (m_callbackprt != nullptr) {
                MY_LOGD("m_callbackprt is %p", m_callbackprt.get());
                /*MFNR plugin callback request to MultiFrameNode */
                for (Vector<RequestPtr>::iterator it = mvRequests.begin(); it != mvRequests.end(); it++) {
                    if ((*it) == req) {
                        mvRequests.erase(it);
                        m_callbackprt->onAborted(req);
                        bAbort = true;
                        break;
                    }
                }
            } else {
                MY_LOGW("callbackptr is null");
            }
            if (!bAbort) {
                MY_LOGW("Desire abort request[%d] is not found", req->mRequestIndex);
            }
        }
    };

    virtual void uninit() {
        FUNCTION_SCOPE;
        if (mMFProcessor != NULL) {
            delete mMFProcessor;
            mMFProcessor = NULL;
        }
        mLatestWidth = 0;
        mLatestHeight = 0;
    };

    virtual ~MFNRProviderImpl() {
        FUNCTION_SCOPE;
    };

    const char * format2String(MINT format) {
        switch (format) {
            case NSCam::eImgFmt_RGBA8888:   return "rgba";
            case NSCam::eImgFmt_RGB888:     return "rgb";
            case NSCam::eImgFmt_RGB565:     return "rgb565";
            case NSCam::eImgFmt_STA_BYTE:   return "byte";
            case NSCam::eImgFmt_YVYU:       return "yvyu";
            case NSCam::eImgFmt_UYVY:       return "uyvy";
            case NSCam::eImgFmt_VYUY:       return "vyuy";
            case NSCam::eImgFmt_YUY2:       return "yuy2";
            case NSCam::eImgFmt_YV12:       return "yv12";
            case NSCam::eImgFmt_YV16:       return "yv16";
            case NSCam::eImgFmt_NV16:       return "nv16";
            case NSCam::eImgFmt_NV61:       return "nv61";
            case NSCam::eImgFmt_NV12:       return "nv12";
            case NSCam::eImgFmt_NV21:       return "nv21";
            case NSCam::eImgFmt_I420:       return "i420";
            case NSCam::eImgFmt_I422:       return "i422";
            case NSCam::eImgFmt_Y800:       return "y800";
            case NSCam::eImgFmt_BAYER8:     return "bayer8";
            case NSCam::eImgFmt_BAYER10:    return "bayer10";
            case NSCam::eImgFmt_BAYER12:    return "bayer12";
            case NSCam::eImgFmt_BAYER14:    return "bayer14";
            case NSCam::eImgFmt_FG_BAYER8:  return "fg_bayer8";
            case NSCam::eImgFmt_FG_BAYER10: return "fg_bayer10";
            case NSCam::eImgFmt_FG_BAYER12: return "fg_bayer12";
            case NSCam::eImgFmt_FG_BAYER14: return "fg_bayer14";
            default: return "unknown";
        };
    };

private:
    MINT32 mUniqueKey;
    MINT32 mOpenId;
    MINT32 mRealIso;
    MINT32 mShutterTime;
    MBOOL  mZSDMode;
    MBOOL  mFlashOn;
    Vector<RequestPtr> mvRequests;
    RequestCallbackPtr m_callbackprt;
    MFProcessor* mMFProcessor = NULL;
    IImageBuffer* mOutImgBuffer = NULL;
    uint32_t mLatestWidth = 0;
    uint32_t mLatestHeight = 0;
    MINT32 mEnable = 0;
    MINT32 mDump = 0;
    // add end
};

REGISTER_PLUGIN_PROVIDER(MultiFrame, MFNRProviderImpl);

Key functions:

  • In property(), the feature type is set to TP_FEATURE_MFNR, along with the name, priority, maximum frame count, and other attributes. Pay particular attention to mNeedRrzoBuffer: for multi-frame algorithms it generally must be set to MTRUE (per the code comment, the rrzo buffer is required for BSS).

  • In negotiate(), the input and output image formats and sizes the algorithm needs are configured. Note that a multi-frame algorithm has multiple input frames but only needs one output frame, so mOBufferFull is requested only when mRequestIndex == 0; in other words, only the first frame has both an input and an output, and the remaining frames have input only.
    negotiate() also reads the metadata passed down from the upper layer and uses those parameters to decide whether the algorithm runs at all.

  • In process(), the algorithm is hooked in: the algorithm interface object is created on the first frame, addFrame() is called for every frame, and on the last frame process() is called to run the algorithm and fetch the output.
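In addition, init() reads the system property vendor.debug.camera.mfnr.dump, and process() saves the input and output buffers to /data/vendor/camera_dump/ when it is set. The intermediate images can therefore be dumped for debugging with adb shell setprop vendor.debug.camera.mfnr.dump 1, assuming the dump directory exists and is writable on your build.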

2.3.4 mtkcam3/3rdparty/customer/Android.mk

The target shared library that ultimately goes into vendor.img is libmtkcam_3rdparty.customer.so. We therefore also need to modify Android.mk so that the module libmtkcam_3rdparty.customer depends on libmtkcam.plugin.tp_mfnr.

diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk
index ff5763d3c2..5e5dd6524f 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk
@@ -77,6 +77,12 @@ LOCAL_SHARED_LIBRARIES += libyuv.vendor
 LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.tp_watermark
 endif

+ifeq ($(QXT_MFNR_SUPPORT), yes)
+LOCAL_SHARED_LIBRARIES += libmultiframe
+LOCAL_SHARED_LIBRARIES += libyuv.vendor
+LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.tp_mfnr
+endif
+
 # for app super night ev decision (experimental for customer only)
 LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.control.customersupernightevdecision
 ################################################################################
2.3.5 Removing MTK's sample MFNR algorithm

Normally only one MFNR algorithm is allowed to run at a time, so MTK's sample MFNR algorithm has to be removed. This could also be gated with a macro; here we keep it crude and simply comment it out.

diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/Android.mk b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/Android.mk
index 4e2bc68dff..da98ebd0ad 100644
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/Android.mk
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/Android.mk
@@ -118,7 +118,7 @@ LOCAL_SHARED_LIBRARIES += libfeature.stereo.provider
 #-----------------------------------------------------------
 ifneq ($(strip $(MTKCAM_HAVE_MFB_SUPPORT)),0)
-LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.mfnr
+#LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.mfnr
 endif
 #4 Cell
 LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.remosaic

3. Custom metadata

The metadata is added so that the APP layer can pass parameters down to the HAL layer and control at runtime whether the algorithm is enabled. The APP sets these parameters through CaptureRequest.Builder.set(@NonNull Key<T> key, T value). Since MTK's stock camera APP has no multi-frame noise-reduction mode, we define our own metadata tag to verify the integration.

vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h:

diff --git a/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h b/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h
index b020352092..714d05f350 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h
+++ b/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h
@@ -602,6 +602,7 @@ typedef enum mtk_camera_metadata_tag {
     MTK_FLASH_FEATURE_END,
     QXT_FEATURE_WATERMARK = QXT_FEATURE_START,
+    QXT_FEATURE_MFNR,
     QXT_FEATURE_END,
 } mtk_camera_metadata_tag_t;

vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl:

diff --git a/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl b/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl
index 1b4fc75a0e..cba4511511 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl
+++ b/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl
@@ -95,6 +95,8 @@ _IMP_SECTION_INFO_(QXT_FEATURE, "com.qxt.camera")
 _IMP_TAG_INFO_( QXT_FEATURE_WATERMARK,
                 MINT32, "watermark")
+_IMP_TAG_INFO_( QXT_FEATURE_MFNR,
+                MINT32, "mfnr")
 /******************************************************************************
  *
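With the section registered as "com.qxt.camera" and the tag named "mfnr", the full vendor-tag name visible to an app is com.qxt.camera.mfnr, which is the key string used in section 4.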

vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h :

diff --git a/vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h b/vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h
index 33e581adfd..4f4772424d 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h
+++ b/vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h
@@ -383,6 +383,8 @@ static auto& _QxtFeature_()
     sInst = {
         _TAG_(QXT_FEATURE_WATERMARK,
               "watermark", TYPE_INT32),
+        _TAG_(QXT_FEATURE_MFNR,
+              "mfnr", TYPE_INT32),
     };
     //
     return sInst;
vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp :

diff --git a/vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp b/vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp
index 591b25b162..9c3db8b1d1 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp
+++ b/vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp
@@ -583,10 +583,12 @@ updateData(IMetadata &rMetadata)
 {
     IMetadata::IEntry qxtAvailRequestEntry = rMetadata.entryFor(MTK_REQUEST_AVAILABLE_REQUEST_KEYS);
     qxtAvailRequestEntry.push_back(QXT_FEATURE_WATERMARK , Type2Type< MINT32 >());
+    qxtAvailRequestEntry.push_back(QXT_FEATURE_MFNR , Type2Type< MINT32 >());
     rMetadata.update(qxtAvailRequestEntry.tag(), qxtAvailRequestEntry);

     IMetadata::IEntry qxtAvailSessionEntry = rMetadata.entryFor(MTK_REQUEST_AVAILABLE_SESSION_KEYS);
     qxtAvailSessionEntry.push_back(QXT_FEATURE_WATERMARK , Type2Type< MINT32 >());
+    qxtAvailSessionEntry.push_back(QXT_FEATURE_MFNR , Type2Type< MINT32 >());
     rMetadata.update(qxtAvailSessionEntry.tag(), qxtAvailSessionEntry);
 }
 #endif
@@ -605,7 +607,7 @@ updateData(IMetadata &rMetadata)
     // to store manual update metadata for sensor driver.
     IMetadata::IEntry availCharactsEntry = rMetadata.entryFor(MTK_REQUEST_AVAILABLE_CHARACTERISTICS_KEYS);
     availCharactsEntry.push_back(MTK_MULTI_CAM_FEATURE_SENSOR_MANUAL_UPDATED , Type2Type< MINT32 >());
-    rMetadata.update(availCharactsEntry.tag(), availCharactsEntry);
+    rMetadata.update(availCharactsEntry.tag(), availCharactsEntry);
 }
 if(physicIdsList.size() > 1)
 {

With the steps above completed, the integration work is basically done. Rebuild the system source; to save time, you can also build only vendor.img.

4. Calling the algorithm from the APP

To verify the algorithm there is no need to write a new APP. We keep using the demo code from 《MTK HAL算法集成之单帧算法》 (MTK HAL algorithm integration: single-frame algorithms) and only change the value of KEY_WATERMARK to "com.qxt.camera.mfnr", i.e. the vendor tag defined in section 3. Flash the full system image or just vendor.img onto the device, boot it, install the demo, and take a photo to check the result:

[Image: sample capture showing the four input frames downscaled and tiled into one photo]

As the result shows, after integration this simulated MFNR multi-frame algorithm has downscaled the four consecutive frames and stitched them into a single image.

5. Conclusion

A real multi-frame algorithm is more involved. For example, an MFNR algorithm may decide whether to run based on the exposure value (off in good light, on in low light); an HDR algorithm may require several consecutive frames at different exposures; and there may be smart scene detection and so on. However the details vary, the overall integration steps for a multi-frame algorithm are much the same; for a different requirement you may just need to adapt the code accordingly.

Original article: https://www.jianshu.com/p/f0b57072ea74
