The latest source code has been updated (test images included):
https://github.com/XuPeng23/CV/tree/main/Difference%20Detection%20In%20Similar%20Images
Unzip the image archive into the same directory as the Python file and it should run out of the box.
If two similar images share the same overall layout and differ only in a few places (Figure 1-1), simply subtracting one from the other reveals the differences (a small sketch follows the figures). Real cases are usually far messier: because the two pictures were taken at different times and from different angles, they may differ in viewpoint, gray level, rotation, and scale (Figure 1-2), so recovering the transformation between the two images is a prerequisite for this task.
Figure 1-1: Two images with the same overall layout
Figure 1-2: Two similar images differing in several ways
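For the aligned case of Figure 1-1, pixel-wise subtraction really is all it takes. A minimal sketch using OpenCV's absdiff (the file names here are illustrative, not from the repository):

import cv2

# Hypothetical file names; any pair of pre-aligned images works
a = cv2.imread('aligned1.png')
b = cv2.imread('aligned2.png')

# Absolute per-pixel difference, then a binary threshold to keep changed regions
diff = cv2.absdiff(a, b)
gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)
cv2.imwrite('diff_mask.png', mask)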
This article models the transformation between the two images as a homography, which takes at least 4 matched point pairs to estimate. SIFT descriptors are used for the feature-matching stage because SIFT offers good invariance to gray-level changes, rotation, and scale. SIFT keypoints are detected and matched across the two images to form a pre-match set; a portion of the matches is then discarded by comparing each match's distance against that of its closest rival (Lowe's ratio test), and the surviving matches are treated as reliable and used to estimate the homography.
import cv2
import math
import numpy as np

# Load the two images
img1 = cv2.imread('./datahomo6/img1.png')
img2 = cv2.imread('./datahomo6/img2.png')

sift = cv2.xfeatures2d.SIFT_create()

# Detect keypoints and compute their descriptors
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match keypoints with a FLANN kd-tree
FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=6)
search_params = dict(checks=10)
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)

# Lowe's ratio test: keep a match only if it clearly beats the runner-up
good = []
for m, n in matches:
    if m.distance < 0.6 * n.distance:
        good.append(m)
The result of this step:
A homography needs at least 4 point pairs to compute; the homography matrix itself is not covered in depth here. This article obtains it with OpenCV's findHomography function and maps probe points from the left image to their corresponding positions in the right image. The range of the left-image probe points is set by the region covered by the matches judged correct (the wider the matches' coverage, the wider the probe points can range).
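findHomography consumes the good matches as two aligned coordinate arrays. The excerpt does not show that conversion, so here is a minimal sketch of the usual construction (the names pts_src and pts_dst are chosen to match the block below):

# N x 1 x 2 float32 arrays, the shape findHomography expects;
# queryIdx indexes the keypoints of img1, trainIdx those of img2
pts_src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
pts_dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)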
# Estimate the homography with RANSAC
M, mask = cv2.findHomography(pts_src, pts_dst, cv2.RANSAC, 5.0)

# Lay out a grid of probe points over the region covered by the good matches;
# xMinLeft/xMaxLeft/yMinLeft/yMaxLeft, size, output, width, and func all come
# from earlier in the full script on GitHub
interval = 2.5 * size  # spacing between probe points
searchWidth = int((xMaxLeft - xMinLeft) / interval)
searchHeight = int((yMaxLeft - yMinLeft) / interval)
searchNum = searchWidth * searchHeight
demo_src = np.zeros((searchNum, 1, 2), dtype=np.float32)
for i in range(searchWidth):
    for j in range(searchHeight):
        demo_src[i + j * searchWidth][0][0] = xMinLeft + i * interval
        demo_src[i + j * searchWidth][0][1] = yMinLeft + j * interval

# Homography: map the probe points from the left image into the right image
demo_dst = cv2.perspectiveTransform(demo_src, M)

# Wrap the points as KeyPoints so SIFT descriptors can be computed at them
kp_src = [cv2.KeyPoint(float(demo_src[i][0][0]), float(demo_src[i][0][1]), size) for i in range(demo_src.shape[0])]
kp_dst = [cv2.KeyPoint(float(demo_dst[i][0][0]), float(demo_dst[i][0][1]), size) for i in range(demo_dst.shape[0])]

# Compute SIFT descriptors at these fixed keypoints
keypoints_image1, descriptors_image1 = sift.compute(img1, kp_src)
keypoints_image2, descriptors_image2 = sift.compute(img2, kp_dst)

# Difference points
diffLeft = []
diffRight = []

# Compare descriptors: a large Euclidean distance marks a difference
threshold = 470  # empirical descriptor-distance threshold
for i in range(searchNum):
    difference = 0
    for j in range(128):
        d = abs(descriptors_image1[i][j] - descriptors_image2[i][j])
        difference = difference + d * d
    difference = math.sqrt(difference)
    # Skip probe points mapped to negative coordinates in the right image
    if demo_dst[i][0][0] >= 0 and demo_dst[i][0][1] >= 0:
        x1, y1 = int(demo_src[i][0][0]), int(demo_src[i][0][1])
        x2, y2 = int(demo_dst[i][0][0]), int(demo_dst[i][0][1])
        if difference <= threshold:
            # Matching content: draw green on the side-by-side canvas
            cv2.circle(output, (x1, y1), 1, (0, 255, 0), 2)
            cv2.circle(output, (x2 + width, y2), 1, (0, 255, 0), 2)
        else:
            if func == 1:
                # Differing content: draw red
                cv2.circle(output, (x1, y1), 1, (0, 0, 255), 2)
                cv2.circle(output, (x2 + width, y2), 1, (0, 0, 255), 2)
            if func == 2:
                # Collect the difference points for clustering later
                diffLeft.append([demo_src[i][0][0], demo_src[i][0][1]])
                diffRight.append([demo_dst[i][0][0], demo_dst[i][0][1]])
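As an aside, the per-dimension loop above can be collapsed into a single vectorized call; an equivalent computation on the same descriptor arrays:

# Euclidean distance between corresponding 128-D SIFT descriptors, all points at once
distances = np.linalg.norm(descriptors_image1 - descriptors_image2, axis=1)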
The point pairs whose difference exceeds the threshold are then output:
Finally, the detected difference points are clustered.
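The excerpt ends before the clustering code; a minimal sketch, assuming scikit-learn's DBSCAN (the eps and min_samples values are illustrative, and the original repository may cluster differently):

from sklearn.cluster import DBSCAN

# Group nearby difference points; label -1 marks DBSCAN noise
pts = np.array(diffLeft, dtype=np.float32)
labels = DBSCAN(eps=3 * interval, min_samples=3).fit_predict(pts)

# Draw one bounding box per cluster on the left half of the canvas
for label in set(labels) - {-1}:
    cluster = pts[labels == label].astype(np.int32)
    x, y, w, h = cv2.boundingRect(cluster)
    cv2.rectangle(output, (x, y), (x + w, y + h), (0, 0, 255), 2)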