This example is a car-shape recognition program.
Source code download: https://download.csdn.net/download/tanmx219/10623808
The trained XML classifier file looks like this:
```xml
<?xml version="1.0"?>
<opencv_storage>
<cascade>
  <stageType>BOOST</stageType>
  <featureType>LBP</featureType>
  <height>24</height>
  <width>48</width>
  <stageParams>
    <boostType>GAB</boostType>
    <minHitRate>9.9500000476837158e-01</minHitRate>
    <maxFalseAlarm>5.0000000000000000e-01</maxFalseAlarm>
    <weightTrimRate>9.4999999999999996e-01</weightTrimRate>
    <maxDepth>1</maxDepth>
    <maxWeakCount>100</maxWeakCount></stageParams>
  <featureParams>
    <maxCatCount>256</maxCatCount>
    <featSize>1</featSize></featureParams>
  <stageNum>10</stageNum>
  <stages>
    <!-- stage 0 -->
    <_>
      <maxWeakCount>3</maxWeakCount>
      <stageThreshold>-1.0521354675292969e+00</stageThreshold>
      <weakClassifiers>
        <_>
```
Most of these parameters are easy to understand, so I will only explain the harder ones. In the code, `subsets` refers to the `internalNodes` entries under each feature tree of the classifier.

- `<maxWeakCount>` is the number of weak classifiers in this stage.
- `<stageThreshold>` is the threshold of this stage; the sum of the outputs of all its weak classifiers is compared against it to decide whether the stage outputs 0 or 1.

`internalNodes` describes one weak classifier. Taking an actual `internalNodes` entry as an example,
```xml
<_>
  <internalNodes>
    0 -1 4 1438712153 -216989541 -1056911107 -721088487
    277360815 1535640456 706631871 1068310687</internalNodes>
  <leafValues>
    -8.9545452594757080e-01 7.0357143878936768e-01</leafValues></_>
```
The leading 0 and -1 are leaf indices (leafIndex).
The next value, 4, is `stump.featureIdx` in `predictCategoricalStump`; it tells which feature this node uses, i.e. the feature index. Together with the index of the input image it can be used to quickly locate the integral image of the input and compute the feature value for that index (in versions before 3.0 this corresponds to `Node->split->var_idx`).
The following eight numbers are the contents of the `subset` of `CvDTreeSplit` (`subsets[0]`~`subsets[7]`); the computed feature value is tested against this subset to decide whether it belongs to it.
`leafValues` holds the values of the left and right children: if the binary tree branches left, the left value is used; if it branches right, the right value is used.
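To make the roles of `internalNodes`, `subsets` and `leafValues` concrete, here is a minimal sketch of how a categorical-stump stage could be evaluated. The names (`WeakStump`, `stumpValue`, `evaluateStage`) are my own, not OpenCV's; only the logic mirrors what `predictCategoricalStump` does: the 256-valued LBP code selects one bit out of the 8×32-bit subset, which picks the left or right leaf, and the leaf values are summed and compared to `stageThreshold`.

```cpp
#include <vector>

// Hypothetical stand-ins for illustration only; OpenCV's real types differ.
struct WeakStump {
    int   featureIdx;   // which LBP feature to evaluate (the "4" in the example above)
    int   subset[8];    // 8 x 32 bits = one bit per LBP category (0..255)
    float leftLeaf;     // leafValues[0]
    float rightLeaf;    // leafValues[1]
};

// c is the LBP code (0..255) computed for stump featureIdx at the current window.
// A set bit in subset means "go left", otherwise "go right".
static float stumpValue(const WeakStump& s, int c)
{
    return (s.subset[c >> 5] & (1 << (c & 31))) ? s.leftLeaf : s.rightLeaf;
}

// A stage passes when the sum of its weak-classifier outputs reaches stageThreshold.
static bool evaluateStage(const std::vector<WeakStump>& stumps,
                          const std::vector<int>& lbpCodes,   // one code per featureIdx
                          float stageThreshold)
{
    float sum = 0.f;
    for (const WeakStump& s : stumps)
        sum += stumpValue(s, lbpCodes[s.featureIdx]);
    return sum >= stageThreshold;
}
```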
As the structure above illustrates, OpenCV combines weak classifiers "in parallel" (their outputs are summed) to form a strong classifier, and chains strong classifiers "in series" to form the cascade classifier.
To detect targets of different sizes there are generally two approaches: progressively shrink the image, or progressively enlarge the detection window. Shrinking the image means scaling its width and height down by a fixed factor (1.1 or 1.2 by default) step by step and detecting on each scaled image; enlarging the detection window means scaling the window up step by step, with the features inside the window scaled accordingly, and then detecting. By default OpenCV shrinks the image, so the first image examined is the largest one at the bottom of the pyramid.
Then, for each image, a detection window of fixed size is swept over the image so that targets at different positions can be found. In the code, this fixed window size is determined by the parameters in the XML classifier:
```xml
<height>24</height>
<width>48</width>
```
So, to find targets at different positions in the image, the detection window is moved step by step; the features move with the window, and in this way every position of the image is visited and all the features are evaluated. A simplified sketch of this scan follows.
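The sketch below is a deliberately naive version of the scale pyramid plus sliding window described above; it is not OpenCV's actual implementation (which, as shown later, packs the integral images into sbuf and parallelizes the scan in CascadeClassifierInvoker). `winW`/`winH` correspond to the `<width>`/`<height>` values above, and the commented-out `evaluateWindow` stands for running all cascade stages at one position.

```cpp
#include <opencv2/opencv.hpp>

// Shrink the image by scaleFactor each round and slide a fixed winW x winH
// window over every position of the scaled image.
static void naiveScan(const cv::Mat& gray, double scaleFactor, int winW, int winH)
{
    for (double factor = 1.0; ; factor *= scaleFactor)
    {
        cv::Size scaled(cvRound(gray.cols / factor), cvRound(gray.rows / factor));
        if (scaled.width < winW || scaled.height < winH)
            break;                              // the window no longer fits

        cv::Mat img;
        cv::resize(gray, img, scaled, 0, 0, cv::INTER_LINEAR);

        for (int y = 0; y + winH <= img.rows; y += 2)       // step of 2 pixels, as OpenCV uses at small scales
            for (int x = 0; x + winW <= img.cols; x += 2)
            {
                cv::Rect window(x, y, winW, winH);
                // evaluateWindow(img, window);  // run the cascade stages here
                (void)window;
            }
    }
}
```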
The test program that drives the detector is shown below:
```cpp
#include <opencv2/opencv.hpp>
#pragma comment(lib, "opencv_world341d.lib")

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
    String TEST_DIR = "data_file\\test";
    String CAR_CXML = "data_file\\cascade.xml";
    String NAME_WIN = "Entrenar OpenCV";

    CascadeClassifier car_detector;
    // read all the data
    if (!car_detector.load(CAR_CXML)) { cout << "Error en el archivo: " + CAR_CXML << endl; return -1; };

    for (int i = 0; i < 20; i++)
    {
        std::stringstream number;
        number << i;

        String image_test = TEST_DIR + "\\test-" + number.str() + ".pgm";
        Mat image = imread(image_test, 1);

        if (!image.data) { cout << "No image data." << endl; return -1; }

        std::vector<Rect> rc;

        car_detector.detectMultiScale(image, rc, 1.5, 3);

        for (size_t i = 0; i < rc.size(); i++)
        {
            rectangle(image, Point(rc[i].x, rc[i].y),
                      Point(rc[i].x + rc[i].width, rc[i].y + rc[i].height),
                      CV_RGB(0, 255, 0), 1);
        }

        namedWindow(NAME_WIN, WINDOW_AUTOSIZE);
        imshow(NAME_WIN, image);
        waitKey(0);
    }

    return 0;
}
```
Here, car_detector.load(CAR_CXML) reads the classifier and retrieves its parameters. Roughly, car_detector.load ends up calling
```cpp
bool CascadeClassifierImpl::load(const String& filename)
{
    oldCascade.release();
    data = Data();
    featureEvaluator.release();
    // Read the file contents and build the corresponding node tree
    FileStorage fs(filename, FileStorage::READ);
    if( !fs.isOpened() )
        return false;

    // Read all the features from the nodes
    if( read_(fs.getFirstTopLevelNode()) )
        return true;

    fs.release();

    oldCascade.reset((CvHaarClassifierCascade*)cvLoad(filename.c_str(), 0, 0, 0));
    return !oldCascade.empty();
}
```
Here,
`bool CascadeClassifierImpl::read_(const FileNode& root)`
creates the featureEvaluator and then calls
`bool CascadeClassifierImpl::Data::read(const FileNode &root)`
to initialize the data.
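As a side note, the same parameters can also be read directly through the public FileStorage API. A minimal sketch, assuming the cascade.xml shown at the top of this article and the data_file path used by the demo program:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // Open the trained classifier (path taken from the demo program above).
    cv::FileStorage fs("data_file\\cascade.xml", cv::FileStorage::READ);
    if (!fs.isOpened())
        return -1;

    // The top-level node inside <opencv_storage> is <cascade>.
    cv::FileNode cascade = fs["cascade"];
    std::cout << "featureType: " << (cv::String)cascade["featureType"] << std::endl;
    std::cout << "window:      " << (int)cascade["width"] << "x"
              << (int)cascade["height"] << std::endl;
    std::cout << "stageNum:    " << (int)cascade["stageNum"] << std::endl;

    // Each stage lists its weak classifiers under <weakClassifiers>.
    cv::FileNode stages = cascade["stages"];
    std::cout << "stage 0 has "
              << (int)(*stages.begin())["maxWeakCount"] << " weak classifiers" << std::endl;
    return 0;
}
```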
The call chain is:
```
CascadeClassifierImpl::load(const String& filename)
  -> FileStorage::FileStorage(const String& filename, int flags, const String& encoding)
  -> open( filename, flags, encoding );
  -> fs.reset(cvOpenFileStorage( filename.c_str(), 0, flags, !encoding.empty() ? encoding.c_str() : 0));
```
The key piece is cvOpenFileStorage, which calls
`void icvXMLParse( CvFileStorage* fs )`
in persistence_xml.cpp to parse the XML classifier. During parsing, the following two functions
```cpp
static char* icvXMLParseTag( CvFileStorage* fs, char* ptr, CvStringHashNode** _tag, CvAttrList** _list, int* _tag_type )
static char* icvXMLParseValue( CvFileStorage* fs, char* ptr, CvFileNode* node, int value_type CV_DEFAULT(CV_NODE_NONE))
```
are called recursively, in turn, until all tags and values have been produced. So to understand the reading process, the key is these two functions, icvXMLParseTag and icvXMLParseValue.
Purpose: read values according to the tags in the XML file; both tags and values are stored as strings in the XML classifier source file.
```cpp
static char*
icvXMLParseTag( CvFileStorage* fs, char* ptr, CvStringHashNode** _tag,
                CvAttrList** _list, int* _tag_type )
{
    int tag_type = 0;
    CvStringHashNode* tagname = 0;
    CvAttrList *first = 0, *last = 0;
    int count = 0, max_count = 4;
    int attr_buf_size = (max_count*2 + 1)*sizeof(char*) + sizeof(CvAttrList);
    char* endptr;
    char c;
    int have_space;

    if( *ptr == '\0' )
        CV_PARSE_ERROR( "Preliminary end of the stream" );

    if( *ptr != '<' )
        CV_PARSE_ERROR( "Tag should start with \'<\'" );

    // determine the type of the current tag
    ptr++;
    if( cv_isalnum(*ptr) || *ptr == '_' )
        tag_type = CV_XML_OPENING_TAG;
    else if( *ptr == '/' )
    {
        tag_type = CV_XML_CLOSING_TAG;
        ptr++;
    }
    else if( *ptr == '?' )
    {
        tag_type = CV_XML_HEADER_TAG;
        ptr++;
    }
    else if( *ptr == '!' )
    {
        tag_type = CV_XML_DIRECTIVE_TAG;
        assert( ptr[1] != '-' || ptr[2] != '-' );
        ptr++;
    }
    else
        CV_PARSE_ERROR( "Unknown tag type" );

    // read the tag name and its attributes according to the tag type
    for(;;)
    {
        CvStringHashNode* attrname;

        if( !cv_isalpha(*ptr) && *ptr != '_' )
            CV_PARSE_ERROR( "Name should start with a letter or underscore" );

        endptr = ptr - 1;
        do c = *++endptr;
        while( cv_isalnum(c) || c == '_' || c == '-' );

        // create a hash node and get its key; the string from ptr to endptr
        // is copied into the buffer pointed to by attrname->str
        attrname = cvGetHashedKey( fs, ptr, (int)(endptr - ptr), 1 );

        CV_Assert(attrname);
        ptr = endptr;

        if( !tagname ) // if the tag name has not been read yet, attrname is the tag name
            tagname = attrname;
        else // otherwise attrname is an attribute name; append it to the attribute list
        {
            if( tag_type == CV_XML_CLOSING_TAG )
                CV_PARSE_ERROR( "Closing tag should not contain any attributes" );

            // last == 0 means the attribute list has not been started yet;
            // count >= max_count means the current chunk is full and a new one is needed
            if( !last || count >= max_count )
            {
                CvAttrList* chunk;

                // allocate attr_buf_size bytes from fs->memstorage and return the pointer
                chunk = (CvAttrList*)cvMemStorageAlloc( fs->memstorage, attr_buf_size );
                memset( chunk, 0, attr_buf_size );
                chunk->attr = (const char**)(chunk + 1);
                count = 0;
                // if the list is still empty, this chunk becomes the first one;
                // otherwise append it as the next chunk
                if( !last )
                    first = last = chunk;
                else
                    last = last->next = chunk;
            }
            last->attr[count*2] = attrname->str.ptr; // last->attr[count*2] points at the attribute name
        }

        if( last ) // read the attribute value (last != 0 means an attribute name was just stored)
        {
            CvFileNode stub;

            // skip spaces and other irrelevant characters
            if( *ptr != '=' )
            {
                ptr = icvXMLSkipSpaces( fs, ptr, CV_XML_INSIDE_TAG );
                if( *ptr != '=' )
                    CV_PARSE_ERROR( "Attribute name should be followed by \'=\'" );
            }

            // make sure the value is enclosed in single or double quotes
            c = *++ptr;
            if( c != '\"' && c != '\'' )
            {
                ptr = icvXMLSkipSpaces( fs, ptr, CV_XML_INSIDE_TAG );
                if( *ptr != '\"' && *ptr != '\'' )
                    CV_PARSE_ERROR( "Attribute value should be put into single or double quotes" );
            }

            // read the value; the result is stored in stub.data.str
            ptr = icvXMLParseValue( fs, ptr, &stub, CV_NODE_STRING );
            assert( stub.tag == CV_NODE_STRING );
            // note: last->attr only stores pointers to the tag and value data
            last->attr[count*2+1] = stub.data.str.ptr;
            count++;
        }

        c = *ptr;
        have_space = cv_isspace(c) || c == '\0';

        if( c != '>' )
        {
            ptr = icvXMLSkipSpaces( fs, ptr, CV_XML_INSIDE_TAG );
            c = *ptr;
        }

        if( c == '>' )
        {
            if( tag_type == CV_XML_HEADER_TAG )
                CV_PARSE_ERROR( "Invalid closing tag for <?xml ..." );
            ptr++;
            break;
        }
        else if( c == '?' && tag_type == CV_XML_HEADER_TAG )
        {
            if( ptr[1] != '>' )
                CV_PARSE_ERROR( "Invalid closing tag for <?xml ..." );
            ptr += 2;
            break;
        }
        else if( c == '/' && ptr[1] == '>' && tag_type == CV_XML_OPENING_TAG )
        {
            tag_type = CV_XML_EMPTY_TAG;
            ptr += 2;
            break;
        }

        if( !have_space )
            CV_PARSE_ERROR( "There should be space between attributes" );
    }
    // store the results
    *_tag = tagname;       // the whole tag-name node
    *_tag_type = tag_type;
    *_list = first;        // position of the first attribute chunk

    return ptr;
}
```
Reading values (strings or data):
In general, for files with nested structure, icvXMLParseValue recursively calls icvXMLParseValue and icvXMLParseTag until the whole file has been read.
The final result is stored in CvFileNode structures:
```cpp
typedef struct CvString
{
    int len;
    char* ptr;
}
CvString;

/** All the keys (names) of elements in the readed file storage
   are stored in the hash to speed up the lookup operations: */
typedef struct CvStringHashNode
{
    unsigned hashval;
    CvString str;
    struct CvStringHashNode* next;
}
CvStringHashNode;

typedef struct CvGenericHash CvFileNodeHash;

/** Basic element of the file storage - scalar or collection: */
typedef struct CvFileNode
{
    int tag;
    struct CvTypeInfo* info; /**< type information
            (only for user-defined object, for others it is 0) */
    union
    {
        double f; /**< scalar floating-point number */
        int i; /**< scalar integer number */
        CvString str; /**< text string */
        CvSeq* seq; /**< sequence (ordered collection of file nodes) */
        CvFileNodeHash* map; /**< map (collection of named file nodes) */
    } data;
}
CvFileNode;
```
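To see these C structures in action, here is a minimal sketch that reads the same cascade file through the legacy C persistence API (the path is the one assumed by the demo program; cvOpenFileStorage, cvGetFileNodeByName and cvReadIntByName are part of the OpenCV 3.x C API in core_c.h):

```cpp
#include <opencv2/core/core_c.h>
#include <stdio.h>

int main(void)
{
    // The C-API view of the same parse result discussed above.
    CvFileStorage* fs = cvOpenFileStorage("data_file\\cascade.xml", 0, CV_STORAGE_READ, 0);
    if (!fs)
        return -1;

    // "cascade" is a map node; its children are CvFileNode entries.
    CvFileNode* cascade = cvGetFileNodeByName(fs, 0, "cascade");
    int width  = cvReadIntByName(fs, cascade, "width",  -1);
    int height = cvReadIntByName(fs, cascade, "height", -1);
    printf("window: %dx%d\n", width, height);

    cvReleaseFileStorage(&fs);
    return 0;
}
```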
Multi-scale detection uses a typical map structure to manage memory. For details see:
https://blog.csdn.net/tanmx219/article/details/82011196
As discussed in the article on dynamic memory management, the low-level structure OpenCV generally uses to manage this kind of data is CvFileStorage:
```cpp
typedef struct CvFileStorage
{
    int flags;
    int fmt;
    int write_mode;
    int is_first;
    CvMemStorage* memstorage;
    CvMemStorage* dststorage;
    CvMemStorage* strstorage;
    CvStringHash* str_hash;
    CvSeq* roots;
    CvSeq* write_stack;
    int struct_indent;
    int struct_flags;
    CvString struct_tag;
    int space;
    char* filename;
    FILE* file;
    gzFile gzfile;
    char* buffer;
    char* buffer_start;
    char* buffer_end;
    int wrap_margin;
    int lineno;
    int dummy_eof;
    const char* errmsg;
    char errmsgbuf[128];

    CvStartWriteStruct start_write_struct;
    CvEndWriteStruct end_write_struct;
    CvWriteInt write_int;
    CvWriteReal write_real;
    CvWriteString write_string;
    CvWriteComment write_comment;
    CvStartNextStream start_next_stream;

    const char* strbuf;
    size_t strbufsize, strbufpos;
    std::deque<char>* outbuf;

    base64::Base64Writer * base64_writer;
    bool is_default_using_base64;
    base64::fs::State state_of_writing_base64; /**< used in WriteRawData only */

    bool is_write_struct_delayed;
    char* delayed_struct_key;
    int delayed_struct_flags;
    char* delayed_type_name;

    bool is_opened;
}
CvFileStorage;
```
To speed up lookups, OpenCV manages node lookup with a hash table. Note the definitions below; they are used repeatedly later on (see persistence.hpp for the full definitions):
```cpp
typedef struct CvGenericHash
{
    CV_SET_FIELDS()
    int tab_size;
    void** table;
}
CvGenericHash;
typedef CvGenericHash CvStringHash;
typedef struct CvGenericHash CvFileNodeHash;
```
Here CV_SET_FIELDS() is just the fields of CvSet. Expanded, the structure can be written as:
```cpp
typedef struct CvGenericHash
{
    //typedef struct CvSet{
    // CvSeq part
    int flags;               /**< Miscellaneous flags. */
    int header_size;         /**< Size of sequence header. */
    struct CvSeq* h_prev;    /**< Previous sequence. */
    struct CvSeq* h_next;    /**< Next sequence. */
    struct CvSeq* v_prev;    /**< 2nd previous sequence. */
    struct CvSeq* v_next;    /**< 2nd next sequence. */
    int total;               /**< Total number of elements. */
    int elem_size;           /**< Size of sequence element in bytes. */
    schar* block_max;        /**< Maximal bound of the last block. */
    schar* ptr;              /**< Current write pointer. */
    int delta_elems;         /**< Grow seq this many at a time. */
    CvMemStorage* storage;   /**< Where the seq is stored. */
    CvSeqBlock* free_blocks; /**< Free blocks list. */
    CvSeqBlock* first;       /**< Pointer to the first sequence block. */

    // CvSetElem part and others
    CvSetElem* free_elems;
    int active_count;
    //}CvSet;

    int tab_size;
    void** table;
}
CvGenericHash;
```
So, compared with CvSet, CvStringHash only adds two members: tab_size and table. In the article on OpenCV dynamic memory management we also covered CvSetElem:
```cpp
#define CV_SET_ELEM_FIELDS(elem_type)  \
    int flags;                         \
    struct elem_type* next_free;

typedef struct CvSetElem
{
    CV_SET_ELEM_FIELDS(CvSetElem)
}
CvSetElem;
```
Expanded, this becomes:
```cpp
typedef struct CvSetElem
{
    int flags;
    struct CvSetElem* next_free;
}
CvSetElem;
```
Example of casting a set element to a CvStringHashNode:
```cpp
node = (CvStringHashNode*)cvSetNew( (CvSet*)map );

typedef struct CvStringHashNode
{
    unsigned hashval;
    CvString str;
    struct CvStringHashNode* next;
}
CvStringHashNode;
```
Example of casting to a CvFileNode (wrapped in a CvFileMapNode):
```cpp
CvFileMapNode* node = (CvFileMapNode*)cvSetNew( (CvSet*)map );

/** Basic element of the file storage - scalar or collection: */
typedef struct CvFileNode
{
    int tag;
    struct CvTypeInfo* info; /**< type information
            (only for user-defined object, for others it is 0) */
    union
    {
        double f; /**< scalar floating-point number */
        int i; /**< scalar integer number */
        CvString str; /**< text string */
        CvSeq* seq; /**< sequence (ordered collection of file nodes) */
        CvFileNodeHash* map; /**< map (collection of named file nodes) */
    } data;
}
CvFileNode;

typedef struct CvFileMapNode
{
    CvFileNode value;
    const CvStringHashNode* key;
    struct CvFileMapNode* next;
}
CvFileMapNode;
```
Parameters used throughout this walkthrough: scaleFactor = 1.5, i.e. the window grows by 50% each step; the initial window size is `<height>24</height>` by `<width>48</width>`, as in the XML classifier shown earlier.
For the whole detection process, the most important pieces are the integral image computation and the LBP feature computation, and closely tied to both of them is the buffer sbuf. Let us go through this in detail.
Because the image being searched is rescaled by a fixed ratio (see the discussion of the XML classifier and detection principle earlier in this article), there are several images, and all of their integral images are packed into sbuf. In other words, sbuf holds the integral images of all the differently sized images. To see how these integral images are laid out in memory, refer to the walkthrough of FeatureEvaluator::updateScaleData later in this article.
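As a reminder of why the integral image matters here: once the integral image ii is available, the sum of any rectangle can be obtained with four lookups, which is exactly what the LBP feature computation relies on. A minimal sketch (sumRect is my own helper, not an OpenCV function):

```cpp
#include <opencv2/opencv.hpp>

// Sum of the pixels inside rect r, using an integral image ii of type CV_32S.
// ii has one extra row and column (as produced by cv::integral), so
// ii(y, x) is the sum of all pixels above and to the left of (x, y).
static int sumRect(const cv::Mat& ii, const cv::Rect& r)
{
    return ii.at<int>(r.y,            r.x)
         + ii.at<int>(r.y + r.height, r.x + r.width)
         - ii.at<int>(r.y,            r.x + r.width)
         - ii.at<int>(r.y + r.height, r.x);
}

// Usage sketch:
//   cv::Mat gray = cv::imread("data_file\\test\\test-0.pgm", cv::IMREAD_GRAYSCALE);
//   cv::Mat ii;
//   cv::integral(gray, ii, CV_32S);
//   int s = sumRect(ii, cv::Rect(10, 10, 16, 8));
```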
The integral images are filled in FeatureEvaluator::setImage via computeChannels. Now let us look at the whole process.
First, the real detection routine is CascadeClassifierImpl::detectMultiScaleNoGrouping. It calls FeatureEvaluator::setImage to set the target image, and computeChannels in turn calls integral() to compute the integral images.
Then, at the end of CascadeClassifierImpl::detectMultiScaleNoGrouping, it runs
```cpp
CascadeClassifierInvoker invoker(...);
parallel_for_(Range(0, nstripes), invoker);
```
to execute the invoker class in parallel. Its overloaded `void operator()(const Range& range) const` is the core of the computation, and within it the most important call is `classifier->runAt`, which does two things.
The first part of runAt: through `evaluator->setWindow(pt, scaleIdx)` it sets the window pointer
`pwin = &sbuf.at<int>(pt) + s.layer_ofs;`
so that pwin points into the integral image of the scale currently needed.
The second part of runAt: through predictCategoricalStump it invokes LBPEvaluator's overloaded operator()
```cpp
int operator()(int featureIdx) const
{ return optfeaturesPtr[featureIdx].calc(pwin); }
```
i.e. LBPEvaluator::OptFeature::calc, which computes the LBP feature on the integral image.
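To make that last step concrete, here is a minimal sketch of how an LBP code is derived from cell sums of the integral image. The helper names (cellSum, lbpCode) and the plain coordinate bookkeeping are mine; OpenCV's OptFeature::calc performs the same comparisons but with 16 precomputed pointer offsets into sbuf for speed, and its exact bit ordering may differ.

```cpp
#include <opencv2/opencv.hpp>

// Sum over one cell of the 3x3 LBP grid, using an integral image ii
// (CV_32S, one extra row/column, as produced by cv::integral).
static int cellSum(const cv::Mat& ii, int x, int y, int cw, int ch)
{
    return ii.at<int>(y,      x)      + ii.at<int>(y + ch, x + cw)
         - ii.at<int>(y,      x + cw) - ii.at<int>(y + ch, x);
}

// LBP code of the 3x3-cell block whose top-left corner is (x, y) and whose
// cells are cw x ch pixels: each of the 8 outer cells contributes one bit,
// set when its sum is >= the sum of the centre cell.
static int lbpCode(const cv::Mat& ii, int x, int y, int cw, int ch)
{
    int c = cellSum(ii, x + cw, y + ch, cw, ch);      // centre cell
    int code = 0;
    const int dx[8] = { 0, 1, 2, 2, 2, 1, 0, 0 };     // clockwise around the centre
    const int dy[8] = { 0, 0, 0, 1, 2, 2, 2, 1 };
    for (int k = 0; k < 8; k++)
        if (cellSum(ii, x + dx[k] * cw, y + dy[k] * ch, cw, ch) >= c)
            code |= 1 << k;
    return code;   // 0..255: the category that is tested against the stump's subsets
}
```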
Now let us look at detectMultiScale.
Parameter 1: image – the image to be searched, usually a grayscale image to speed up detection.
Parameter 2: objects – output vector of rectangles, one per detected object.
Parameter 3: scaleFactor – the ratio between the search-window sizes of two consecutive scans. The default is 1.1, i.e. the window grows by 10% each step.
Parameter 4: minNeighbors – the minimum number of neighbouring rectangles that make up a detection (3 by default). Candidate groups with fewer than min_neighbors - 1 rectangles are rejected. If minNeighbors is 0, the function returns all candidate rectangles without any grouping, which is useful when you want to apply your own grouping to the raw detections.
Parameter 5: flags – either the default value or CV_HAAR_DO_CANNY_PRUNING. With CV_HAAR_DO_CANNY_PRUNING the function uses Canny edge detection to skip regions with too many or too few edges, since such regions are unlikely to contain the target.
Parameters 6 and 7: minSize and maxSize – bounds on the size of the returned object regions, i.e. the actual object sizes expected for the trained model.
About the function: detectMultiScale performs multi-scale, multi-target detection. Multi-scale: the template (search window) has a fixed size, but images and therefore objects come in different sizes, so the image is repeatedly rescaled (to match the template) and scanned with the sliding window; the same object may respond at several scales, and the output is the merged result across scales. Multi-target: every match against the template is a candidate, so several targets can be found, and all of them are written to the objects vector.
```cpp
void CascadeClassifier::detectMultiScale( InputArray image,
                                          CV_OUT std::vector<Rect>& objects,
                                          double scaleFactor,
                                          int minNeighbors, int flags,
                                          Size minSize,
                                          Size maxSize )
```
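As a quick usage sketch showing the size bounds that the demo program at the top did not use (the file names and the 192x96 upper bound here are illustrative, not from the original project):

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    cv::CascadeClassifier car_detector;
    if (!car_detector.load("data_file\\cascade.xml"))
        return -1;

    cv::Mat gray = cv::imread("data_file\\test\\test-0.pgm", cv::IMREAD_GRAYSCALE);
    if (gray.empty())
        return -1;

    std::vector<cv::Rect> cars;
    // scaleFactor = 1.5, minNeighbors = 3, no flags,
    // and only accept detections between 48x24 and 192x96 pixels.
    car_detector.detectMultiScale(gray, cars, 1.5, 3, 0,
                                  cv::Size(48, 24), cv::Size(192, 96));
    return (int)cars.size();
}
```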
The function that actually does the work behind detectMultiScale is CascadeClassifierImpl::detectMultiScaleNoGrouping:
```cpp
void CascadeClassifierImpl::detectMultiScaleNoGrouping( InputArray _image, std::vector<Rect>& candidates,
                                    std::vector<int>& rejectLevels, std::vector<double>& levelWeights,
                                    double scaleFactor, Size minObjectSize, Size maxObjectSize,
                                    bool outputRejectLevels )
{
    CV_INSTRUMENT_REGION()

    Size imgsz = _image.size();
    Size originalWindowSize = getOriginalWindowSize(); // size of the initial search window

    if( maxObjectSize.height == 0 || maxObjectSize.width == 0 )
        maxObjectSize = imgsz;

    // If a too small image patch is entering the function, break early before any processing
    if( (imgsz.height < originalWindowSize.height) || (imgsz.width < originalWindowSize.width) )
        return;

    std::vector<float> all_scales, scales;
    all_scales.reserve(1024);
    scales.reserve(1024);

    // Grow the search window by scaleFactor until its width or height
    // exceeds the image width or height (such scales are not kept).
    // First calculate all possible scales for the given image and model,
    // then remove undesired scales
    // This allows us to cope with single scale detections
    // (minSize == maxSize) that do not fall on precalculated scale
    for( double factor = 1; ; factor *= scaleFactor )
    {
        Size windowSize( cvRound(originalWindowSize.width*factor), cvRound(originalWindowSize.height*factor) );
        if( windowSize.width > imgsz.width || windowSize.height > imgsz.height )
            break;
        all_scales.push_back((float)factor);
    }

    // 'scales' records the ratios of all valid search windows.
    // This will capture allowed scales and a minSize==maxSize scale, if it is in the precalculated scales
    for( size_t index = 0; index < all_scales.size(); index++){
        Size windowSize( cvRound(originalWindowSize.width*all_scales[index]), cvRound(originalWindowSize.height*all_scales[index]) );
        if( windowSize.width > maxObjectSize.width || windowSize.height > maxObjectSize.height)
            break;    // the window (width or height) must not be larger than the maximum object size
        if( windowSize.width < minObjectSize.width || windowSize.height < minObjectSize.height )
            continue; // the window (width or height) must not be smaller than the minimum object size
        scales.push_back(all_scales[index]);
    }

    // If minSize and maxSize parameter are equal and scales is not filled yet,
    // then the scale was not available in the precalculated scales
    // In that case we want to return the most fitting scale
    // (closest corresponding scale using L2 distance)
    if( scales.empty() && !all_scales.empty() ){
        std::vector<double> distances;

        // Calculate distances
        for(size_t v = 0; v < all_scales.size(); v++){
            Size windowSize( cvRound(originalWindowSize.width*all_scales[v]),
                             cvRound(originalWindowSize.height*all_scales[v]) );
            double d = (minObjectSize.width - windowSize.width) *
                       (minObjectSize.width - windowSize.width)
                       + (minObjectSize.height - windowSize.height) *
                       (minObjectSize.height - windowSize.height);
            distances.push_back(d);
        }
        // Take the index of lowest value
        // Use that index to push the correct scale parameter
        size_t iMin=0;
        for(size_t i = 0; i < distances.size(); ++i){
            if(distances[iMin] > distances[i])
                iMin=i;
        }
        scales.push_back(all_scales[iMin]);
    }

    candidates.clear();
    rejectLevels.clear();
    levelWeights.clear();

#ifdef HAVE_OPENCL
    bool use_ocl = tryOpenCL && ocl::isOpenCLActivated() &&
         OCL_FORCE_CHECK(_image.isUMat()) &&
         featureEvaluator->getLocalSize().area() > 0 &&
         (data.minNodesPerTree == data.maxNodesPerTree) &&
         !isOldFormatCascade() &&
         maskGenerator.empty() &&
         !outputRejectLevels;
#endif

    Mat grayImage;
    _InputArray gray;

    if (_image.channels() > 1)
        cvtColor(_image, grayImage, COLOR_BGR2GRAY);
    else if (_image.isMat())
        grayImage = _image.getMat();
    else
        _image.copyTo(grayImage);
    gray = grayImage;

    if( !featureEvaluator->setImage(gray, scales) )
        return;

#ifdef HAVE_OPENCL
    // OpenCL code
    CV_OCL_RUN(use_ocl, ocl_detectMultiScaleNoGrouping( scales, candidates ))

    if (use_ocl)
        tryOpenCL = false;
#endif

    // CPU code
    featureEvaluator->getMats(); // make sure sbuf (holding the integral images) is valid
    {
        Mat currentMask;
        if (maskGenerator)
            currentMask = maskGenerator->generateMask(gray.getMat());

        size_t i, nscales = scales.size();
        cv::AutoBuffer<int> stripeSizeBuf(nscales);
        int* stripeSizes = stripeSizeBuf;
        const FeatureEvaluator::ScaleData* s = &featureEvaluator->getScaleData(0);

        // szw = (163, 92) = (211-48, 116-24): the working size after subtracting
        // the search window, i.e. the range over which the window can slide
        Size szw = s->getWorkingSize(data.origWinSize);
        // nstripes is the number of stripes for the parallel loop
        int nstripes = cvCeil(szw.width/32.);
        for( i = 0; i < nscales; i++ )
        {
            szw = s[i].getWorkingSize(data.origWinSize);
            // round szw.height up so that it divides evenly into nstripes stripes
            stripeSizes[i] = std::max((szw.height/s[i].ystep +
                                       nstripes-1)/nstripes, 1)*s[i].ystep;
        }

        CascadeClassifierInvoker invoker(*this, (int)nscales, nstripes, s, stripeSizes,
                                         candidates, rejectLevels, levelWeights,
                                         outputRejectLevels, currentMask, &mtx);
        parallel_for_(Range(0, nstripes), invoker);
    }
}
```
Purpose of FeatureEvaluator::updateScaleData:
Compute the buffer size (sbufSize) needed by all the search scales and lay the scaled layers out in that buffer.
Explanation:
The approach is to reserve a memory block wide enough for the widest layer, with the width rounded up to a multiple of 32. Within that block the layers are placed from largest to smallest: the largest layer sits in the top-left corner, and when the next layer no longer fits in the remaining width, it starts a new row below.
For our example, following the logic of detectMultiScaleNoGrouping, a 210x115 image with an initial 48x24 search window allows 4 scales:
{1, 1.5, 2.25, 3.375}
In practice the size of each scaled layer is recorded in scaleData; the largest is
(cvRound(imgsz.width/sc)+1, cvRound(imgsz.height/sc)+1) = (210/1+1, 115/1+1) = (211, 116)
so the scale and size of the four layers are
{scale=1.00000000 szi={width=211 height=116} ...}
{scale=1.50000000 szi={width=141 height=78} ...}
{scale=2.25000000 szi={width=94 height=52} ...}
{scale=3.37500000 szi={width=63 height=35} ...}
The maximum width allocated for these layers is
`sbufSize.width = std::max(sbufSize.width, (int)alignSize(cvRound(imgsz.width/_scales[0]) + 31, 32));`
i.e. the width of the widest layer, rounded up to a multiple of 32.
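As a worked example of the layout that the code below produces for these four layers (assuming this is the first call, so sbufSize starts at zero and sbufSize.width = alignSize(210 + 31, 32) = 256):

- scale 1.0, 211x116: placed at (0, 0), layer_ofs = 0
- scale 1.5, 141x78: 211 + 141 > 256, so it starts a new row at (0, 116), layer_ofs = 116*256 = 29696
- scale 2.25, 94x52: fits beside the previous one at (141, 116), layer_ofs = 116*256 + 141 = 29837
- scale 3.375, 63x35: 141 + 94 + 63 > 256, so it starts another row at (0, 194), layer_ofs = 194*256 = 49664

and sbufSize.height ends up as 194 + 35 = 229.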
The actual allocation is done like this:
```cpp
bool FeatureEvaluator::updateScaleData( Size imgsz, const std::vector<float>& _scales )
{
    if( scaleData.empty() )
        scaleData = makePtr<std::vector<ScaleData> >();

    // nscales is the number of scales, i.e. how many layers the image is rescaled into
    size_t i, nscales = _scales.size();
    bool recalcOptFeatures = nscales != scaleData->size();
    scaleData->resize(nscales); // resize (initialize) to the number of scales

    int layer_dy = 0;
    Point layer_ofs(0,0);        // layer offset point
    Size prevBufSize = sbufSize; // local buf size
    // reserve a block wide enough for the widest layer, rounded up to a multiple of 32
    sbufSize.width = std::max(sbufSize.width,
                              (int)alignSize(cvRound(imgsz.width/_scales[0]) + 31, 32));
    recalcOptFeatures = recalcOptFeatures || sbufSize.width != prevBufSize.width;

    // fill in scaleData
    for( i = 0; i < nscales; i++ )
    {
        FeatureEvaluator::ScaleData& s = scaleData->at(i);
        if( !recalcOptFeatures && fabs(s.scale - _scales[i])
                                  > FLT_EPSILON*100*_scales[i] )
            recalcOptFeatures = true;
        float sc = _scales[i];
        Size sz;
        sz.width = cvRound(imgsz.width/sc);
        sz.height = cvRound(imgsz.height/sc);
        s.ystep = sc >= 2 ? 1 : 2;
        s.scale = sc;
        s.szi = Size(sz.width+1, sz.height+1);

        if( i == 0 )
        {
            layer_dy = s.szi.height;
        }

        // if larger than the image buf width, then restart from the left lower
        if( layer_ofs.x + s.szi.width > sbufSize.width )
        {
            layer_ofs = Point(0, layer_ofs.y + layer_dy); // left lower point
            layer_dy = s.szi.height;
        }
        s.layer_ofs = layer_ofs.y*sbufSize.width + layer_ofs.x;
        layer_ofs.x += s.szi.width;
    }

    layer_ofs.y += layer_dy;
    sbufSize.height = std::max(sbufSize.height, layer_ofs.y);
    recalcOptFeatures = recalcOptFeatures || sbufSize.height != prevBufSize.height;
    return recalcOptFeatures;
}
```
Purpose of FeatureEvaluator::setImage: set up the image layers and compute the integral features for every search scale; the results are stored in sbuf.
```cpp
bool FeatureEvaluator::setImage( InputArray _image, const std::vector<float>& _scales )
{
    CV_INSTRUMENT_REGION()

    Size imgsz = _image.size();
    bool recalcOptFeatures = updateScaleData(imgsz, _scales);

    size_t i, nscales = scaleData->size();
    if (nscales == 0)
    {
        return false;
    }
    Size sz0 = scaleData->at(0).szi; // the largest layer (scale 1)
    sz0 = Size(std::max(rbuf.cols, (int)alignSize(sz0.width, 16)), std::max(rbuf.rows, sz0.height)); // width rounded up to a multiple of 16

    if (recalcOptFeatures)
    {
        computeOptFeatures();
        copyVectorToUMat(*scaleData, uscaleData); // copy scaleData into uscaleData
    }

    if (_image.isUMat() && localSize.area() > 0)
    {
        usbuf.create(sbufSize.height*nchannels, sbufSize.width, CV_32S);
        urbuf.create(sz0, CV_8U);

        for (i = 0; i < nscales; i++)
        {
            const ScaleData& s = scaleData->at(i);
            UMat dst(urbuf, Rect(0, 0, s.szi.width - 1, s.szi.height - 1));
            resize(_image, dst, dst.size(), 1. / s.scale, 1. / s.scale, INTER_LINEAR_EXACT);
            computeChannels((int)i, dst);
        }
        sbufFlag = USBUF_VALID;
    }
    else
    {
        Mat image = _image.getMat();
        // allocate the buffers; sbufSize was updated in updateScaleData
        sbuf.create(sbufSize.height*nchannels, sbufSize.width, CV_32S);
        rbuf.create(sz0, CV_8U); // buffer the size of the original-scale image

        for (i = 0; i < nscales; i++)
        {
            // for each scale, build a matrix dst of the size recorded in scaleData,
            // resize the image into it, then compute its features with computeChannels
            const ScaleData& s = scaleData->at(i);
            Mat dst(s.szi.height - 1, s.szi.width - 1, CV_8U, rbuf.ptr());
            resize(image, dst, dst.size(), 1. / s.scale, 1. / s.scale, INTER_LINEAR_EXACT);
            computeChannels((int)i, dst); // compute the integral features; results go into sbuf
        }
        sbufFlag = SBUF_VALID;
    }

    return true;
}
```
The sliding-window mechanism produces many candidate rectangles that overlap or contain each other, so in the end the candidates have to be merged to obtain the region covering the whole object.
A small helper class is used to decide when two rectangles are similar:
```cpp
// Ref. objdetect.hpp
//! class for grouping object candidates, detected by Cascade Classifier, HOG etc.
//! instance of the class is to be passed to cv::partition (see cxoperations.hpp)

// How SimilarRects measures similarity:
inline bool operator()(const Rect& r1, const Rect& r2) const
{
    // delta = eps times the average of the smaller width and the smaller height
    double delta = eps*(std::min(r1.width, r2.width) +
                        std::min(r1.height, r2.height))*0.5;
    // the rectangles are similar if all four edge coordinates differ by at most delta
    return std::abs(r1.x - r2.x) <= delta &&
           std::abs(r1.y - r2.y) <= delta &&
           std::abs(r1.x + r1.width - r2.x - r2.width) <= delta &&
           std::abs(r1.y + r1.height - r2.y - r2.height) <= delta;
}
```
The grouping itself is done by groupRectangles:
```cpp
void groupRectangles(std::vector<Rect>& rectList, int groupThreshold, double eps,
                     std::vector<int>* weights, std::vector<double>* levelWeights)
{
    CV_INSTRUMENT_REGION()

    // If groupThreshold <= 0 (or there is nothing to group): when weights are
    // requested, fill them with as many 1s as there are rectangles, then return.
    if( groupThreshold <= 0 || rectList.empty() )
    {
        if( weights && !levelWeights )
        {
            size_t i, sz = rectList.size();
            weights->resize(sz);
            for( i = 0; i < sz; i++ )
                (*weights)[i] = 1;
        }
        return;
    }

    // Otherwise do the actual grouping below.
    std::vector<int> labels; // labels[i] tells which class each rect belongs to
    // nclasses is the number of classes (similarity is measured with SimilarRects);
    // partition the rectangles in rectList into classes
    int nclasses = partition(rectList, labels, SimilarRects(eps));

    std::vector<Rect> rrects(nclasses);
    std::vector<int> rweights(nclasses, 0);
    std::vector<int> rejectLevels(nclasses, 0);
    std::vector<double> rejectWeights(nclasses, DBL_MIN);
    int i, j, nlabels = (int)labels.size();

    // accumulate the rectangles assigned to the same class
    for( i = 0; i < nlabels; i++ )
    {
        int cls = labels[i];
        rrects[cls].x += rectList[i].x;
        rrects[cls].y += rectList[i].y;
        rrects[cls].width += rectList[i].width;
        rrects[cls].height += rectList[i].height;
        rweights[cls]++;
    }

    bool useDefaultWeights = false; // also keep, per class, the highest stage reached and the largest level weight

    if ( levelWeights && weights && !weights->empty() && !levelWeights->empty() )
    {
        for( i = 0; i < nlabels; i++ )
        {
            int cls = labels[i];
            if( (*weights)[i] > rejectLevels[cls] )
            {
                rejectLevels[cls] = (*weights)[i];
                rejectWeights[cls] = (*levelWeights)[i];
            }
            else if( ( (*weights)[i] == rejectLevels[cls] ) &&
                     ( (*levelWeights)[i] > rejectWeights[cls] ) )
                rejectWeights[cls] = (*levelWeights)[i];
        }
    }
    else
        useDefaultWeights = true;

    // compute the average rectangle of each class
    for( i = 0; i < nclasses; i++ )
    {
        Rect r = rrects[i];
        float s = 1.f/rweights[i];
        rrects[i] = Rect(saturate_cast<int>(r.x*s),
                         saturate_cast<int>(r.y*s),
                         saturate_cast<int>(r.width*s),
                         saturate_cast<int>(r.height*s));
    }

    rectList.clear();
    if( weights )
        weights->clear();
    if( levelWeights )
        levelWeights->clear();

    for( i = 0; i < nclasses; i++ )
    {
        Rect r1 = rrects[i];
        int n1 = rweights[i];
        double w1 = rejectWeights[i];
        int l1 = rejectLevels[i];

        // filter out rectangles which don't have enough similar rectangles
        if( n1 <= groupThreshold )
            continue;
        // filter out small face rectangles inside large rectangles
        for( j = 0; j < nclasses; j++ )
        {
            int n2 = rweights[j];

            if( j == i || n2 <= groupThreshold )
                continue;
            // here r1 and r2 are the averaged rectangles of two different classes,
            // both with more than groupThreshold members
            Rect r2 = rrects[j];

            int dx = saturate_cast<int>( r2.width * eps );
            int dy = saturate_cast<int>( r2.height * eps );
            // if r1 lies inside r2 (and r2 has clearly more support), r1 is redundant: break
            if( i != j &&
                r1.x >= r2.x - dx &&
                r1.y >= r2.y - dy &&
                r1.x + r1.width <= r2.x + r2.width + dx &&
                r1.y + r1.height <= r2.y + r2.height + dy &&
                (n2 > std::max(3, n1) || n1 < 3) )
                break;
        }

        if( j == nclasses )
        {
            rectList.push_back(r1);
            if( weights )
                weights->push_back(useDefaultWeights ? n1 : l1);
            if( levelWeights )
                levelWeights->push_back(w1);
        }
    }
}
```
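For reference, the public overload without weights can be called directly on raw candidates; a minimal sketch (the rectangles here are made up, and eps = 0.2 is the default value):

```cpp
#include <opencv2/objdetect.hpp>
#include <vector>

int main()
{
    std::vector<cv::Rect> candidates = {
        {100, 50, 48, 24}, {102, 52, 48, 24}, {98, 49, 50, 25},  // three similar hits
        {300, 80, 48, 24}                                        // an isolated hit
    };

    // Keep only groups with more than 2 similar rectangles: the first three
    // are merged into one averaged rectangle, the isolated one is dropped.
    cv::groupRectangles(candidates, 2, 0.2);
    return (int)candidates.size();   // 1
}
```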