(With a detailed explanation and slightly modified code.)
Reference: http://www.cnblogs.com/xianglan/archive/2011/01/01/1923779.html
This is the original blogger's post.
Image thinning:
Image thinning mainly applies to binary images. The skeleton can be understood as the medial axis of a shape: the skeleton of a rectangle is its central axis along the long direction, the skeleton of a circle is its center, the skeleton of a straight line is the line itself, and the skeleton of an isolated point is the point itself. Let's look at the skeletons of some typical shapes (drawn with thick lines).
There are many thinning algorithms, but one of the most commonly used is the lookup-table method.
Thinning removes points from the original image while preserving its shape; in effect, it preserves the skeleton of the original. Whether a point can be removed is judged from its 8 neighbors (8-connectivity), according to the following rules:
1. Interior points cannot be deleted.
2. Isolated points cannot be deleted.
3. Endpoints of a line cannot be deleted.
4. If P is a boundary point and removing P does not increase the number of connected components, then P can be deleted (see the sketch below).
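Rule 4 is the subtle one. As a minimal illustrative sketch (not the lookup-table code used later in this post), one common way to test it is to walk the 8 neighbors of P once around and count background-to-object transitions; exactly one transition means removing P keeps the local foreground in one piece. The function names below are my own, and the convention black = 0 (object), white = 255 (background) is assumed to match the code later in this post; the pixel is also assumed not to lie on the image border.

# Minimal sketch of rule 4, assuming black = 0 is the object, white = 255 is
# the background, and (i, j) is not on the image border.
def count_transitions(image, i, j):
    """Count background->object transitions while walking the 8 neighbors
    of pixel (i, j) once around in clockwise order."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise, starting top-left
    bits = [1 if image[i + di, j + dj] == 0 else 0 for di, dj in offsets]
    return sum(1 for k in range(8) if bits[k] == 0 and bits[(k + 1) % 8] == 1)

def keeps_connectivity(image, i, j):
    """Removing a boundary point does not increase the number of connected
    components exactly when there is a single background->object transition."""
    return count_transitions(image, i, j) == 1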
The points referred to above are each the center of a 3×3 window.
Take the 8-neighborhood of such a pixel and assign each of the 9 positions a weight; going row by row the weights are 1, 2, 4 / 8, 0, 16 / 32, 64, 128 (this matches the sum expression in the code below). Why these particular numbers? Because no matter which neighbors are present, their weights never sum to the same value twice, so any weighting with this property would work. Why is the center 0? Because the premise is that we are deciding whether this black point should be deleted, so the center itself contributes nothing to the index.
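To make the encoding concrete, here is a small sketch of how a 3×3 patch is packed into a single index in the range 0 to 255; the weights are taken from the sum expression in the code below, and the function name neighborhood_index is my own.

# Sketch: pack the 8 neighbors of a pixel into one lookup index in [0, 255].
# Weight layout (row-major), matching the sum expression in the code below:
#    1   2   4
#    8   0  16
#   32  64 128
def neighborhood_index(a):
    """a is a flat list of 9 values for the 3x3 patch (row-major),
    with 1 meaning the neighbor is white and 0 otherwise."""
    weights = [1, 2, 4, 8, 0, 16, 32, 64, 128]  # a[4], the center, carries no weight
    return sum(ai * wi for ai, wi in zip(a, weights))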
In effect, the original blogger simply enumerated all 256 possible neighborhoods in a 16×16 table:
array = [0,0,1,1,0,0,1,1,1,1,0,1,1,1,0,1,
1,1,0,0,1,1,1,1,0,0,0,0,0,0,0,1,
0,0,1,1,0,0,1,1,1,1,0,1,1,1,0,1,
1,1,0,0,1,1,1,1,0,0,0,0,0,0,0,1,
1,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
1,1,0,0,1,1,0,0,1,1,0,1,1,1,0,1,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,1,1,0,0,1,1,1,1,0,1,1,1,0,1,
1,1,0,0,1,1,1,1,0,0,0,0,0,0,0,1,
0,0,1,1,0,0,1,1,1,1,0,1,1,1,0,1,
1,1,0,0,1,1,1,1,0,0,0,0,0,0,0,0,
1,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,
1,1,0,0,1,1,1,1,0,0,0,0,0,0,0,0,
1,1,0,0,1,1,0,0,1,1,0,1,1,1,0,0,
1,1,0,0,1,1,1,0,1,1,0,0,1,0,0,0]
For example, the third point shown earlier gives 1 + 2 + 4 + 32 + 64 + 128 = 231, and the corresponding entry is array[231] = 0. In the code this is computed as
sum = a[0]*1 + a[1]*2 + a[2]*4 + a[3]*8 + a[5]*16 + a[6]*32 + a[7]*64 + a[8]*128
image[i,j] = array[sum]*255
so a table value of 0 means the point is not deleted (0 is black). You can work through the other points yourself to check that they match.
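As a quick sanity check of that example (assuming array is the 256-entry table listed above), the patch with a white top and bottom row and black left and right neighbors gives exactly 231:

# Worked check of the example above; assumes `array` is the 256-entry table
# listed earlier in this post.
a = [1, 1, 1,
     0, 0, 0,   # a[4] is the center pixel being judged; a[3] and a[5] are black
     1, 1, 1]
sum_val = a[0]*1 + a[1]*2 + a[2]*4 + a[3]*8 + a[5]*16 + a[6]*32 + a[7]*64 + a[8]*128
print(sum_val)          # 231
print(array[sum_val])   # 0 -> the pixel stays black, i.e. it is not deleted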
Below is the code (slightly modified from the original blogger's version) together with the results.
import cv2
import numpy as np

def VThin(image, array):
    h, w = image.shape
    NEXT = 1
    for i in range(h):
        for j in range(w):
            if NEXT == 0:
                NEXT = 1
            else:
                M = image[i, j-1] + image[i, j] + image[i, j+1] if 0 < j < w-1 else 1
                if image[i, j] == 0 and M != 0:  # black pixel with a white neighbor to the left or right
                    a = [0]*9
                    for k in range(3):
                        for l in range(3):
                            if -1 < (i-1+k) < h and -1 < (j-1+l) < w and image[i-1+k, j-1+l] == 255:
                                a[k*3+l] = 1
                    sum = a[0]*1 + a[1]*2 + a[2]*4 + a[3]*8 + a[5]*16 + a[6]*32 + a[7]*64 + a[8]*128
                    image[i, j] = array[sum]*255
                    if array[sum] == 1:  # a point was deleted, so skip the pixel to its right
                        NEXT = 0
    return image

def HThin(image, array):
    h, w = image.shape
    NEXT = 1
    for j in range(w):
        for i in range(h):
            if NEXT == 0:
                NEXT = 1
            else:
                M = image[i-1, j] + image[i, j] + image[i+1, j] if 0 < i < h-1 else 1
                if image[i, j] == 0 and M != 0:  # black pixel with a white neighbor above or below
                    a = [0]*9
                    for k in range(3):
                        for l in range(3):
                            if -1 < (i-1+k) < h and -1 < (j-1+l) < w and image[i-1+k, j-1+l] == 255:
                                a[k*3+l] = 1
                    sum = a[0]*1 + a[1]*2 + a[2]*4 + a[3]*8 + a[5]*16 + a[6]*32 + a[7]*64 + a[8]*128
                    image[i, j] = array[sum]*255
                    if array[sum] == 1:  # a point was deleted, so skip the pixel below it
                        NEXT = 0
    return image

def Xihua(image, array, num=10):
    iXihua = image
    for i in range(num):  # number of passes; not a convergence criterion, adjust as needed
        VThin(iXihua, array)   # iXihua = VThin(iXihua, array)
        HThin(iXihua, array)   # iXihua = HThin(iXihua, array)
    return iXihua

array = [0,0,1,1,0,0,1,1,1,1,0,1,1,1,0,1,
         1,1,0,0,1,1,1,1,0,0,0,0,0,0,0,1,
         0,0,1,1,0,0,1,1,1,1,0,1,1,1,0,1,
         1,1,0,0,1,1,1,1,0,0,0,0,0,0,0,1,
         1,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,
         0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
         1,1,0,0,1,1,0,0,1,1,0,1,1,1,0,1,
         0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
         0,0,1,1,0,0,1,1,1,1,0,1,1,1,0,1,
         1,1,0,0,1,1,1,1,0,0,0,0,0,0,0,1,
         0,0,1,1,0,0,1,1,1,1,0,1,1,1,0,1,
         1,1,0,0,1,1,1,1,0,0,0,0,0,0,0,0,
         1,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,
         1,1,0,0,1,1,1,1,0,0,0,0,0,0,0,0,
         1,1,0,0,1,1,0,0,1,1,0,1,1,1,0,0,
         1,1,0,0,1,1,1,0,1,1,0,0,1,0,0,0]

image = cv2.imread('25.jpg', 1)                            # read your own image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)             # convert to grayscale
ret1, th1 = cv2.threshold(gray, 0, 255, cv2.THRESH_OTSU)   # binarize with the Otsu threshold
cv2.imshow('image', image)
iTwo = th1                                                 # binary image
cv2.imshow('iTwo', iTwo)
iThin = Xihua(iTwo, array)
cv2.imshow('iThin', iThin)
cv2.waitKey(0)                                             # keep the windows open
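For comparison only (not part of the original post): if opencv-contrib-python is installed, cv2.ximgproc.thinning provides a Zhang-Suen thinning that can serve as a reference result. It expects a white object on a black background, whereas the code above treats black pixels as the object, and Xihua modifies its input in place, so the sketch below works on an inverted copy of the binary image taken before Xihua is called.

# Optional cross-check; assumes opencv-contrib-python (which provides cv2.ximgproc).
binary = th1.copy()                     # copy taken before Xihua, which thins in place
inverted = cv2.bitwise_not(binary)      # ximgproc.thinning wants a white object
ref = cv2.ximgproc.thinning(inverted)   # Zhang-Suen thinning by default
cv2.imshow('reference', cv2.bitwise_not(ref))
cv2.waitKey(0)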
This is the result produced with the original blogger's code.
Comparison with the result of this run.
The method itself works well; what really matters is the precision of the boundary extraction: the more accurate the binary boundary, the better the skeleton the algorithm extracts.
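Since the result hinges on how clean the binary boundary is, a little preprocessing before Otsu thresholding can help; the blur kernel size below is an arbitrary illustrative choice, not a value from the original post.

# Illustrative preprocessing before thresholding; the parameters are arbitrary choices.
import cv2

gray = cv2.cvtColor(cv2.imread('25.jpg', 1), cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # suppress noise before Otsu
ret, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)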
If this post infringes on any rights, please contact me and I will deal with it promptly.