
A Summary of Image Feature Matching


I. Brute-Force Matching Basics

Brute-force matcher: take a keypoint from the first image, measure the descriptor distance to every keypoint in the second image in turn, and return the keypoint with the smallest distance. The constructor is BFMatcher::BFMatcher(int normType=NORM_L2, bool crossCheck=false), with the following parameters:

1. normType: specifies the distance metric to use. The default is cv2.NORM_L2, which suits float descriptors such as SIFT and SURF (cv2.NORM_L1 also works). For binary descriptors such as ORB, BRIEF, and BRISK, use cv2.NORM_HAMMING, which returns the Hamming distance between the two descriptors. If ORB was created with WTA_K == 3 or 4, normType should be set to cv2.NORM_HAMMING2.

2. crossCheck: defaults to False. When set to True, matching becomes stricter: a pair (i, j) is returned only if the i-th descriptor in set A has the j-th descriptor in set B as its nearest neighbor and, conversely, the j-th descriptor in B has the i-th descriptor in A as its nearest neighbor; that is, the two features must match each other.

A BFMatcher object has two methods, BFMatcher.match() and BFMatcher.knnMatch(). The first returns the single best match for each descriptor; the second returns the k best matches for each descriptor, where k is set by the user. cv2.drawMatches() draws the matched points: it stacks the two images side by side and draws a line between each pair of best matches. If BFMatcher.knnMatch() was used, cv2.drawMatchesKnn() can instead draw the k best match lines for each keypoint. To draw only a selection of matches, pass a mask to the function.
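The difference in return shape matters when writing match loops; a minimal sketch, assuming des1 and des2 are ORB descriptor arrays computed as in the next section:

    # match() returns a flat list with one DMatch per query descriptor;
    # knnMatch() returns a list of lists with k DMatch objects each.
    bf = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = bf.match(des1, des2)
    knn_matches = bf.knnMatch(des1, des2, k=2)
    best, second_best = knn_matches[0]  # the two closest matches of descriptor 0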

 

II. Brute-Force Matching with ORB Descriptors

Use ORB descriptors for feature matching to find the location of the query image inside the target (train) image:

    import cv2
    from matplotlib import pyplot as plt

    img1 = cv2.imread('box.png', 0)           # queryImage (grayscale)
    img2 = cv2.imread('box_in_scene.png', 0)  # trainImage (grayscale)

    # Initiate ORB detector (cv2.ORB() in OpenCV 2.x)
    orb = cv2.ORB_create()

    # find the keypoints and descriptors with ORB
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # create BFMatcher object
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    # Match descriptors.
    matches = bf.match(des1, des2)

    # Sort them in the order of their distance.
    matches = sorted(matches, key=lambda x: x.distance)

    # Draw first 10 matches; the outImg argument must be passed, and
    # flags=2 suppresses keypoints without a match.
    img3 = cv2.drawMatches(img1, kp1, img2, kp2, matches[:10], None, flags=2)
    plt.imshow(img3)
    plt.show()

(Result image omitted: the ten closest ORB matches drawn between the two images.)

matches = bf.match(des1, des2) returns a list of DMatch objects. A DMatch object has these attributes: DMatch.distance is the distance between the descriptors (the smaller the better); DMatch.trainIdx is the index of the descriptor in the train (target) image; DMatch.queryIdx is the index of the descriptor in the query image; DMatch.imgIdx is the index of the train image.
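A quick way to see these attributes is to inspect the best match from the listing above; a small sketch reusing the sorted matches list:

    # Inspect the closest match from the sorted list above.
    best = matches[0]
    print(best.distance)  # descriptor distance, smaller is better
    print(best.queryIdx)  # index into kp1/des1 (query image)
    print(best.trainIdx)  # index into kp2/des2 (train image)
    print(best.imgIdx)    # train image index (0 when matching one image)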

 

III. Brute-Force Matching of SIFT Descriptors with a Ratio Test

With k equal to 2, knnMatch returns the two best matches for every keypoint, and two match lines would be drawn per keypoint. The ratio test works as follows: for a query descriptor A, take its nearest neighbor B and its second-nearest neighbor C in the other image; the match (A, B) is accepted only if distance(A, B) / distance(A, C) is below a threshold (0.75 here). Since matching is assumed to be one-to-one, a true match should be clearly closer than the second-best candidate; when the two distances are nearly equal, the match is ambiguous and is discarded.

    import cv2
    from matplotlib import pyplot as plt

    img1 = cv2.imread('box.png', 0)           # queryImage
    img2 = cv2.imread('box_in_scene.png', 0)  # trainImage

    # Initiate SIFT detector (cv2.xfeatures2d.SIFT_create() in OpenCV 3.x contrib)
    sift = cv2.SIFT_create()

    # find the keypoints and descriptors with SIFT
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # BFMatcher with default params (NORM_L2, suitable for SIFT)
    bf = cv2.BFMatcher()
    matches = bf.knnMatch(des1, des2, k=2)

    # Apply ratio test
    good = []
    for m, n in matches:
        if m.distance < 0.75 * n.distance:
            good.append([m])

    # cv2.drawMatchesKnn expects a list of lists as matches
    img3 = cv2.drawMatchesKnn(img1, kp1, img2, kp2, good, None, flags=2)
    plt.imshow(img3)
    plt.show()

(Result image omitted: only the matches that pass the ratio test are drawn.)

 

IV. The FLANN Matcher

FLANN (Fast Library for Approximate Nearest Neighbors) is a collection of algorithms for nearest-neighbor search in large datasets and on high-dimensional features; on large datasets it performs better than BFMatcher. Using the FLANN matcher requires two dictionary parameters. The first is IndexParams: for SIFT and SURF, pass index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5); for ORB, pass index_params = dict(algorithm=FLANN_INDEX_LSH, table_number=6, key_size=12, multi_probe_level=1). The second is SearchParams, e.g. search_params = dict(checks=100), which specifies how many times the trees in the index are traversed recursively; higher values give more accurate results but take more time.
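The listing below uses the SIFT/KD-tree parameters; for completeness, here is a minimal sketch of the ORB/LSH variant (FLANN_INDEX_LSH is 6 in FLANN's algorithm enum; des1 and des2 are assumed to be ORB descriptors computed as in section II):

    # FLANN with locality-sensitive hashing for binary (ORB) descriptors.
    FLANN_INDEX_LSH = 6
    index_params = dict(algorithm=FLANN_INDEX_LSH,
                        table_number=6,       # number of hash tables
                        key_size=12,          # bits per hash key
                        multi_probe_level=1)  # neighboring buckets to probe
    search_params = dict(checks=50)
    flann = cv2.FlannBasedMatcher(index_params, search_params)
    # Note: with LSH some result entries may contain fewer than k matches.
    matches = flann.knnMatch(des1, des2, k=2)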

    import cv2
    from matplotlib import pyplot as plt

    img1 = cv2.imread('box.png', 0)           # queryImage
    img2 = cv2.imread('box_in_scene.png', 0)  # trainImage

    # Initiate SIFT detector
    sift = cv2.SIFT_create()

    # find the keypoints and descriptors with SIFT
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # FLANN parameters (in FLANN's enum the KD-tree index is 1, not 0)
    FLANN_INDEX_KDTREE = 1
    index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
    search_params = dict(checks=50)
    flann = cv2.FlannBasedMatcher(index_params, search_params)
    matches = flann.knnMatch(des1, des2, k=2)

    # Need to draw only good matches, so create a mask
    matchesMask = [[0, 0] for i in range(len(matches))]

    # ratio test as per Lowe's paper
    for i, (m, n) in enumerate(matches):
        if m.distance < 0.7 * n.distance:
            matchesMask[i] = [1, 0]

    draw_params = dict(matchColor=(0, 255, 0),
                       singlePointColor=(255, 0, 0),
                       matchesMask=matchesMask,
                       flags=0)
    img3 = cv2.drawMatchesKnn(img1, kp1, img2, kp2, matches, None, **draw_params)
    plt.imshow(img3)
    plt.show()

(Result image omitted: matches that pass the ratio test drawn in green, single keypoints in red.)

The prototype is cv2.drawMatchesKnn(img1, keypoints1, img2, keypoints2, matches1to2[, outImg[, matchColor[, singlePointColor[, matchesMask[, flags]]]]]) → outImg, where:

1. matches1to2: Matches from the first image to the second one, which means that keypoints1[i] has a corresponding point in keypoints2[matches[i]].

2. matchesMask: Mask determining which matches are drawn. If the mask is empty, all matches are drawn.

3. flags: Flags setting drawing features. Possible flags bit values are defined by DrawMatchesFlags.
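The integer flags used in these listings (0 and 2) correspond to named DrawMatchesFlags constants. A small sketch; the constant names below are those exposed by recent OpenCV-Python builds (older 3.x builds spell them cv2.DRAW_MATCHES_FLAGS_*), and img1/kp1/good are reused from section III:

    # 0 = DEFAULT: allocate a new output image, draw all keypoints and matches
    # 1 = DRAW_OVER_OUTIMG: draw on top of an existing outImg
    # 2 = NOT_DRAW_SINGLE_POINTS: omit keypoints that have no match
    # 4 = DRAW_RICH_KEYPOINTS: draw keypoint size and orientation
    img3 = cv2.drawMatchesKnn(img1, kp1, img2, kp2, good, None,
                              flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)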

 

V. Finding Objects with Feature Matching and Homography

Here feature extraction is combined with findHomography from the calib3d module to locate a known object in a cluttered image. The basic idea: pass the two matched point sets to findHomography, which estimates the perspective transform of the object; then use cv2.perspectiveTransform() to project the object's outline into the scene. At least 4 correct point pairs are required to compute this transform. To cope with mismatches, a robust estimator such as cv2.RANSAC or cv2.LMEDS (least-median of squares) is used: matches consistent with the estimated transform are called inliers, the rest outliers. cv2.findHomography() returns a mask that marks which point pairs are inliers and which are outliers.

    import numpy as np
    import cv2
    from matplotlib import pyplot as plt

    MIN_MATCH_COUNT = 10

    img1 = cv2.imread('box.png', 0)           # queryImage
    img2 = cv2.imread('box_in_scene.png', 0)  # trainImage

    # Initiate SIFT detector
    sift = cv2.SIFT_create()

    # find the keypoints and descriptors with SIFT
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    FLANN_INDEX_KDTREE = 1
    index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
    search_params = dict(checks=50)
    flann = cv2.FlannBasedMatcher(index_params, search_params)
    matches = flann.knnMatch(des1, des2, k=2)

    # store all the good matches as per Lowe's ratio test.
    good = []
    for m, n in matches:
        if m.distance < 0.7 * n.distance:
            good.append(m)

    if len(good) > MIN_MATCH_COUNT:
        # collect the coordinates of the matched keypoints
        src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
        matchesMask = mask.ravel().tolist()
        h, w = img1.shape
        # project the four corners of the query image into the target image
        pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
        dst = cv2.perspectiveTransform(pts, M)
        cv2.polylines(img2, [np.int32(dst)], True, 255, 10, cv2.LINE_AA)
    else:
        print("Not enough matches are found - %d/%d" % (len(good), MIN_MATCH_COUNT))
        matchesMask = None

    draw_params = dict(matchColor=(0, 255, 0),
                       singlePointColor=None,
                       matchesMask=matchesMask,
                       flags=2)
    img3 = cv2.drawMatches(img1, kp1, img2, kp2, good, None, **draw_params)
    plt.imshow(img3, 'gray')
    plt.show()

(Result image omitted: the located object outlined in the scene, inlier matches drawn in green.)
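Because the mask from cv2.findHomography() marks each point pair as inlier (1) or outlier (0), a quick sanity check on the estimate is to count the inliers; a small sketch reusing mask and good from the listing above:

    # Count how many of the good matches RANSAC kept as inliers.
    inliers = int(mask.ravel().sum())
    print("%d / %d matches are inliers" % (inliers, len(good)))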

 

 

