An image is made up of pixels; when a computer receives an image, it first converts the pixels into an array. Take a 5×5 pixel matrix as an example, with values ranging from 0 to 255, where 0 is pure black and 255 is pure white. We can set a threshold (thresh) on the pixel values, and pixels below or above that threshold are then processed according to a chosen rule.
```python
import cv2
import matplotlib.pyplot as plt

glnz = cv2.imread("E:/Machine Vision/Computer Vision/Pictures and Videos/dlrb.jpg")
glnz1 = cv2.cvtColor(glnz, cv2.COLOR_BGR2GRAY)

retval0, dst1 = cv2.threshold(glnz1, 80, 255, cv2.THRESH_BINARY)
retval1, dst2 = cv2.threshold(glnz1, 80, 255, cv2.THRESH_BINARY_INV)
retval2, dst3 = cv2.threshold(glnz1, 80, 255, cv2.THRESH_TRUNC)
retval3, dst4 = cv2.threshold(glnz1, 80, 255, cv2.THRESH_TOZERO)
retval4, dst5 = cv2.threshold(glnz1, 80, 255, cv2.THRESH_TOZERO_INV)

# Display the images
titles = ["lbxx", "BINARY", "BINARY_INV", "TRUNC", "TOZERO", "TOZERO_INV"]
images = [glnz1, dst1, dst2, dst3, dst4, dst5]

for i in range(6):
    plt.subplot(2, 3, i + 1), plt.imshow(images[i], 'gray')
    plt.xticks([]), plt.yticks([])
    plt.title(titles[i])
plt.show()
```
Compared with the original image, BINARY (at threshold 80) makes the bright areas brighter (set to 255) and the dark areas darker (set to 0). BINARY_INV does the opposite. The remaining three modes behave analogously, as explained above.
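The per-pixel rules of the five threshold types can be sketched in plain Python. This is a toy illustration of the rules described above, not the cv2 implementation; the sample row and the `apply_threshold` helper are made up for demonstration:

```python
# Per-pixel rules of the five threshold modes; thresh=80 and maxval=255
# mirror the cv2.threshold calls above.
def apply_threshold(pixels, thresh=80, maxval=255, mode="BINARY"):
    rules = {
        "BINARY":     lambda p: maxval if p > thresh else 0,
        "BINARY_INV": lambda p: 0 if p > thresh else maxval,
        "TRUNC":      lambda p: thresh if p > thresh else p,
        "TOZERO":     lambda p: p if p > thresh else 0,
        "TOZERO_INV": lambda p: 0 if p > thresh else p,
    }
    return [rules[mode](p) for p in pixels]

row = [0, 60, 80, 120, 255]                 # one row of a toy grayscale image
print(apply_threshold(row, mode="BINARY"))  # [0, 0, 0, 255, 255]
print(apply_threshold(row, mode="TRUNC"))   # [0, 60, 80, 80, 80]
print(apply_threshold(row, mode="TOZERO"))  # [0, 0, 0, 120, 255]
```

Note that values exactly equal to the threshold (80 here) are treated as "not above" it, which matches the strict `>` comparison.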
Image smoothing adjusts pixels whose values differ sharply from their neighbours toward values close to the surrounding pixels. Rather than hunting for a test image, it is easy to make a noisy photo ourselves, for example like this:
```python
import cv2
import numpy as np
import matplotlib.pyplot as plt

def gaussian_noise(image, mean=0, var=0.001):
    # Add Gaussian noise
    # mean : mean of the noise
    # var  : variance of the noise
    image = np.array(image / 255, dtype=float)
    noise = np.random.normal(mean, var ** 0.5, image.shape)
    out = image + noise
    if out.min() < 0:
        low_clip = -1.
    else:
        low_clip = 0.
    out = np.clip(out, low_clip, 1.0)
    out = np.uint8(out * 255)
    return out

src = cv2.imread('E:/Machine Vision/Computer Vision/Pictures and Videos/dlrb.jpg')
img_gaussian = gaussian_noise(src, 0.1, 0.03)

cv2.imshow('img_gaussian', img_gaussian)
cv2.imwrite('E:/Machine Vision/Computer Vision/Pictures and Videos/dlrb2t.png', img_gaussian)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
With this noisy image in hand, we can apply image filtering. There are five common filtering methods:
```python
import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread("E:/Machine Vision/Computer Vision/Pictures and Videos/glnz.png")
cv2.imshow('glnz', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
2.1 Mean filtering: a simple averaging convolution (a 3×3 kernel whose entries are all 1; the parameter below averages over the 3×3 window).
```python
blur = cv2.blur(img, (3, 3))
cv2.imshow('blur', blur)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
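What the mean filter computes at each interior pixel can be hand-rolled in a few lines. This is a toy sketch of the averaging step, not the cv2 implementation (borders are simply left unchanged here, whereas cv2.blur extrapolates them):

```python
# A hand-rolled 3x3 mean filter on a toy grayscale grid: each interior
# pixel becomes the average of the 9 values in its surrounding window.
def mean3x3(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]           # borders left unchanged here
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(window) // 9    # integer mean, like uint8 output
    return out

noisy = [[10, 10, 10],
         [10, 255, 10],                     # a single bright noise pixel
         [10, 10, 10]]
print(mean3x3(noisy)[1][1])                 # (8*10 + 255) // 9 = 37
```

The hot pixel is pulled down from 255 to 37, but note it is smeared rather than removed; the median filter below handles such outliers more cleanly.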
2.2 Box filtering: essentially the same as mean filtering, with optional normalization (-1 means the output has the same number of channels as the input; the 3×3 kernel is as above; with normalize=True the result is identical to the mean filter).
```python
box = cv2.boxFilter(img, -1, (3, 3), normalize=True)
cv2.imshow('box', box)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
2.3 Box filtering without normalization: essentially the same, but prone to overflow (the window sum is no longer divided by 9, and sums above 255 are saturated to 255).
```python
box = cv2.boxFilter(img, -1, (3, 3), normalize=False)
cv2.imshow('box', box)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
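The saturation behaviour with normalize=False can be sketched in plain Python. A toy model, not the cv2 code: the window values are summed instead of averaged, and anything above 255 is clamped:

```python
# Box filter without normalization: sum the 3x3 window and saturate at 255,
# as a uint8 image would.
def box3x3_sum(img, clip=255):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = sum(img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = min(s, clip)        # saturate instead of wrapping
    return out

img = [[30, 30, 30],
       [30, 30, 30],
       [30, 30, 30]]
print(box3x3_sum(img)[1][1])                # 9 * 30 = 270 -> clipped to 255
```

With moderately bright pixels even a flat region saturates, which is why the non-normalized result looks mostly white.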
2.4 Gaussian filtering: the kernel values follow a Gaussian distribution, giving more weight to the centre (the kernel entries are no longer all 1: pixels near the centre get larger weights, distant pixels smaller ones).
```python
aussian = cv2.GaussianBlur(img, (5, 5), 1)
cv2.imshow('aussian', aussian)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
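The weighting described above can be made concrete by computing a Gaussian kernel by hand. The `gaussian_kernel` helper below is an illustrative sketch, not OpenCV's exact construction (cv2 builds its separable kernel via cv2.getGaussianKernel); sigma=1 matches the GaussianBlur call above:

```python
import math

# A normalized 5x5 Gaussian kernel: the centre gets the largest weight,
# and weights fall off with distance from the centre.
def gaussian_kernel(size=5, sigma=1.0):
    half = size // 2
    k = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
          for x in range(-half, half + 1)]
         for y in range(-half, half + 1)]
    total = sum(map(sum, k))
    return [[v / total for v in row] for row in k]   # weights sum to 1

k = gaussian_kernel()
print(round(k[2][2], 4))    # centre weight: the largest
print(round(k[0][0], 4))    # corner weight: the smallest
```

Because the weights sum to 1, the filter preserves overall brightness while favouring nearby pixels, unlike the uniform mean filter.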
2.5 Median filtering: replaces each pixel with the median of its window (for a 5×5 window, the middle of the 25 sorted values is the result).
```python
median = cv2.medianBlur(img, 5)
cv2.imshow('median', median)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
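Why the median filter is so effective against salt-and-pepper noise is easy to see on one window. A toy sketch of what cv2.medianBlur computes at a single pixel, not the library code:

```python
# The median filter replaces a pixel with the middle of the sorted window
# values, so a single extreme outlier is discarded entirely.
def median_of_window(window):
    vals = sorted(window)
    return vals[len(vals) // 2]

window = [10, 12, 11, 10, 255, 12, 11, 10, 12]   # 3x3 window, one hot pixel
print(median_of_window(window))                  # 11: the outlier is gone
print(sum(window) // len(window))                # 38: what a mean filter gives
```

The mean filter lets the outlier pull the result up to 38, while the median drops it completely.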
Comparison of the original noisy image with the mean, Gaussian, and median filters:
```python
res = np.hstack((img, blur, aussian, median))  # hstack() stacks horizontally, vstack() vertically
cv2.imshow('median vs average', res)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
Like soil erosion, the erosion operation eats away at the boundaries of the content in an image. Take the character "大" below: it is surrounded by many small stray strokes, which we want to remove by erosion:
```python
import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread("E:/Machine Vision/Computer Vision/Pictures and Videos/da.png")

# Initialize a 7x7 kernel of ones (dtype uint8)
kernel = np.ones((7, 7), np.uint8)
erosion = cv2.erode(img, kernel, iterations=1)
cv2.imshow('erosion', erosion)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
On choosing the kernel size and iteration count:
The example above uses a 7×7 kernel. Taking the "大" character as an example, whenever the 7×7 window contains differing values (here 0 and 255), the anchor pixel is eroded away.
If the kernel is too large, the shape may be eroded away entirely.
```python
pie = cv2.imread('./data/pie.png')
kernel = np.ones((50, 50), np.uint8)
erosion_1 = cv2.erode(pie, kernel, iterations=1)
erosion_2 = cv2.erode(pie, kernel, iterations=2)
erosion_3 = cv2.erode(pie, kernel, iterations=3)
res = np.hstack((erosion_1, erosion_2, erosion_3))
cv2.imshow('res', res)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
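The "one differing value in the window erodes the pixel" rule is exactly a minimum over the window (for an all-ones kernel). A plain-Python 3×3 sketch, not the cv2 implementation; cv2.erode generalizes this to any kernel shape and iteration count:

```python
# Erosion with an all-ones kernel: a pixel stays white only if every pixel
# under the kernel is white - one black neighbour "eats" it.
def erode3x3(img):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = min(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return out

square = [[0, 0, 0, 0, 0],
          [0, 255, 255, 255, 0],
          [0, 255, 255, 255, 0],
          [0, 255, 255, 255, 0],
          [0, 0, 0, 0, 0]]
eroded = erode3x3(square)
print(eroded[2][2])     # 255: the centre of the square survives
print(eroded[1][1])     # 0: the edge of the square is eaten away
```

The 3×3 white square shrinks to a single white pixel after one pass, which is why a 50×50 kernel or repeated iterations can erase a shape entirely.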
The erosion of the "大" character above removed the stray strokes, but it introduced a problem: the strokes are now thinner than before. To restore the original stroke width we need the dilation operation, which fattens the shape back up.
```python
img = cv2.imread("E:/Machine Vision/Computer Vision/Pictures and Videos/da1.png")
kernel = np.ones((15, 15), np.uint8)
dige_dilate = cv2.dilate(img, kernel, iterations=1)
cv2.imshow('dige_dilate', dige_dilate)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
Likewise:
```python
pie = cv2.imread("E:/Machine Vision/Computer Vision/Pictures and Videos/pie.png")
kernel = np.ones((30, 30), np.uint8)
dilate_1 = cv2.dilate(pie, kernel, iterations=1)
dilate_2 = cv2.dilate(pie, kernel, iterations=2)
dilate_3 = cv2.dilate(pie, kernel, iterations=3)
res = np.hstack((pie, dilate_1, dilate_2, dilate_3))

cv2.imshow('res', res)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
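Dilation is the dual of erosion: with an all-ones kernel it is a maximum over the window, so a pixel becomes white if any pixel under the kernel is white. A plain-Python 3×3 sketch, not the cv2 implementation:

```python
# Dilation with an all-ones kernel: a pixel becomes white if ANY pixel
# under the kernel is white, so shapes grow outward.
def dilate3x3(img):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = max(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return out

dot = [[0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 255, 0, 0],      # a single white pixel
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0]]
grown = dilate3x3(dot)
print(grown[1][1])             # 255: the dot has expanded to a 3x3 square
```

A single white pixel grows into a 3×3 square after one pass, which is how dilation restores the stroke width that erosion removed.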
Opening: erosion followed by dilation is called opening. As introduced above, it is used to remove noise.
```python
import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread("E:/Machine Vision/Computer Vision/Pictures and Videos/da.png")
kernel = np.ones((7, 7), np.uint8)
opening = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
cv2.imshow('opening', opening)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
Closing: dilation followed by erosion is called closing. It is often used to fill small holes in foreground objects, or small black spots on them.
```python
import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread("E:/Machine Vision/Computer Vision/Pictures and Videos/da.png")
kernel = np.ones((7, 7), np.uint8)
closing = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
cv2.imshow('closing', closing)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
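Both compound operations can be sketched on a 1-D strip in plain Python. A toy model, not the cv2 implementation: a 3-pixel window stands in for the kernel, and each pass shortens the strip by one pixel on each side:

```python
# Opening (erode then dilate) removes small white specks;
# closing (dilate then erode) fills small black holes.
def erode1d(row):
    return [min(row[i - 1:i + 2]) for i in range(1, len(row) - 1)]

def dilate1d(row):
    return [max(row[i - 1:i + 2]) for i in range(1, len(row) - 1)]

speck = [0, 0, 0, 255, 0, 0, 0]            # isolated white noise pixel
print(dilate1d(erode1d(speck)))            # opening: speck removed -> all 0

hole = [255, 255, 255, 0, 255, 255, 255]   # small black hole in foreground
print(erode1d(dilate1d(hole)))             # closing: hole filled -> all 255
```

Erosion alone would also remove the speck but would shrink any real foreground; the trailing dilation restores the size, which is the point of combining the two.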
Gradient = dilation - erosion
```python
# The images after 5 erosions and 5 dilations:
pie = cv2.imread("E:/Machine Vision/Computer Vision/Pictures and Videos/pie.png")
kernel = np.ones((7, 7), np.uint8)
dilate = cv2.dilate(pie, kernel, iterations=5)
erosion = cv2.erode(pie, kernel, iterations=5)

res = np.hstack((dilate, erosion))
cv2.imshow('res', res)

# Extract the boundary: the morphological gradient
gradient = cv2.morphologyEx(pie, cv2.MORPH_GRADIENT, kernel)

cv2.imshow('gradient', gradient)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
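Why dilation minus erosion yields only the boundary can be checked on a 1-D strip. A toy sketch, not the cv2 code: inside a flat region the dilated and eroded values are equal, so the difference is 0 everywhere except at the edges:

```python
# Morphological gradient = dilation - erosion, on a 1-D strip
# with a 3-pixel window.
def erode1d(row):
    return [min(row[i - 1:i + 2]) for i in range(1, len(row) - 1)]

def dilate1d(row):
    return [max(row[i - 1:i + 2]) for i in range(1, len(row) - 1)]

strip = [0, 0, 0, 255, 255, 255, 0, 0, 0]
grad = [d - e for d, e in zip(dilate1d(strip), erode1d(strip))]
print(grad)    # 255 only where the white block meets the background
```

The flat interior of the white block and the flat background both cancel out, leaving a band of 255 at each transition, i.e. the outline of the shape.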
```python
# Top hat = original - opening
img = cv2.imread("E:/Machine Vision/Computer Vision/Pictures and Videos/da.png")
kernel = np.ones((7, 7), np.uint8)
tophat = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, kernel)
cv2.imshow('tophat', tophat)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
```python
# Black hat = closing - original
img = cv2.imread("E:/Machine Vision/Computer Vision/Pictures and Videos/da.png")
kernel = np.ones((7, 7), np.uint8)
blackhat = cv2.morphologyEx(img, cv2.MORPH_BLACKHAT, kernel)
cv2.imshow('blackhat', blackhat)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
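The two "hat" operations isolate exactly what opening removes and what closing fills. A 1-D plain-Python sketch (a toy model, not the cv2 code; the helpers clamp at the borders so the result lines up with the source for subtraction):

```python
# Top hat = source - opening (the small bright specks opening removed).
# Black hat = closing - source (the small dark holes closing filled).
def erode1d(row):
    return [min(row[max(i - 1, 0):i + 2]) for i in range(len(row))]

def dilate1d(row):
    return [max(row[max(i - 1, 0):i + 2]) for i in range(len(row))]

speck = [0, 0, 255, 0, 0]
opening = dilate1d(erode1d(speck))
tophat = [s - o for s, o in zip(speck, opening)]
print(tophat)        # [0, 0, 255, 0, 0]: only the bright speck remains

hole = [255, 255, 0, 255, 255]
closing = erode1d(dilate1d(hole))
blackhat = [c - s for c, s in zip(closing, hole)]
print(blackhat)      # [0, 0, 255, 0, 0]: only the dark hole remains
```

On the "大" image this means the top hat keeps the stray strokes themselves, while the black hat keeps the small dark gaps inside the strokes.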
Take the pie image again as an example. In this circle, the gradient marks the boundary points (where, at the pixel level, the values differ).
The Sobel operator finds these positions where a gradient exists (edge detection).
```python
img = cv2.imread('./pie.png', cv2.IMREAD_GRAYSCALE)
cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
We introduce Gx and Gy to handle the gradient in the horizontal and vertical directions. Gx subtracts the values on the left of a pixel from those on its right, and Gy subtracts the values above from those below, to judge whether a gradient exists around the pixel.
```python
# Compute the horizontal direction; the result is shown on the right below
import cv2

img = cv2.imread('./pie.png', cv2.IMREAD_GRAYSCALE)

def cv_show(name, img):
    cv2.imshow(name, img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

sobelx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
cv_show('sobelx', sobelx)
```
How to read the result: Gx subtracts the left values from the right, with white = 255 and black = 0. On the left edge of the white circle, right minus left is positive, so the edge appears white; on the right edge, right minus left is negative and is truncated to 0, so that edge appears black.
White-to-black transitions give positive values and black-to-white transitions give negative ones; all negatives are truncated to 0, so we must take the absolute value. Hence the conversion below:
```python
# Horizontal direction, with absolute value
sobelx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
sobelx = cv2.convertScaleAbs(sobelx)
cv_show('sobelx', sobelx)

# Vertical direction, with absolute value
sobely = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
sobely = cv2.convertScaleAbs(sobely)
cv_show('sobely', sobely)

# cv2.addWeighted() blends two images with weights: R = alpha*X1 + beta*X2 + C, bias C defaults to 0
sobelxy = cv2.addWeighted(sobelx, 0.5, sobely, 0.5, 0)
cv_show('sobelxy', sobelxy)
```
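The truncation-then-absolute-value story can be checked in plain Python on a 1-D black-white-black strip. This is a simplified central difference (right minus left), a toy model of Gx; the real 3×3 Sobel kernel additionally applies smoothing weights across rows:

```python
# Right minus left on a black-white-black strip: positive on the left
# edge, negative on the right edge. Storing into uint8 would truncate
# the negative edge to 0, which is why the absolute value
# (cv2.convertScaleAbs) is taken first.
strip = [0, 0, 255, 255, 0, 0]
gx = [strip[i + 1] - strip[i - 1] for i in range(1, len(strip) - 1)]
print(gx)                          # [255, 255, -255, -255]

truncated = [max(g, 0) for g in gx]
print(truncated)                   # [255, 255, 0, 0]: right edge lost

abs_gx = [abs(g) for g in gx]
print(abs_gx)                      # [255, 255, 255, 255]: both edges kept
```

Without the absolute value, half of every closed contour would disappear, exactly the effect seen on the right edge of the pie circle.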
Example:
```python
# Compute the two directions separately, then combine
import cv2

img = cv2.imread("E:/Machine Vision/Computer Vision/Pictures and Videos/glnz.jpg", cv2.IMREAD_GRAYSCALE)
sobelx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
sobelx = cv2.convertScaleAbs(sobelx)
sobely = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
sobely = cv2.convertScaleAbs(sobely)
sobelxy = cv2.addWeighted(sobelx, 0.5, sobely, 0.5, 0)

cv2.imshow('sobelxy', sobelxy)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
Computing both directions in a single call is not recommended; in practice the result is worse than computing them separately, as the rightmost image below shows.
```python
# Combined computation (dx=1 and dy=1 in a single call)
sobelxy = cv2.Sobel(img, cv2.CV_64F, 1, 1, ksize=3)
sobelxy = cv2.convertScaleAbs(sobelxy)
```
```python
# Comparison of the three operators
import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread("./glnz.jpg", cv2.IMREAD_GRAYSCALE)

sobelx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
sobelx = cv2.convertScaleAbs(sobelx)
sobely = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
sobely = cv2.convertScaleAbs(sobely)
sobelxy = cv2.addWeighted(sobelx, 0.5, sobely, 0.5, 0)

scharrx = cv2.Scharr(img, cv2.CV_64F, 1, 0)
scharrx = cv2.convertScaleAbs(scharrx)
scharry = cv2.Scharr(img, cv2.CV_64F, 0, 1)
scharry = cv2.convertScaleAbs(scharry)
scharrxy = cv2.addWeighted(scharrx, 0.5, scharry, 0.5, 0)

laplacian = cv2.Laplacian(img, cv2.CV_64F)
laplacianxy = cv2.convertScaleAbs(laplacian)

# Display the images
titles = ["img", "sobelxy", "scharrxy", "laplacianxy"]
images = [img, sobelxy, scharrxy, laplacianxy]

for i in range(4):
    plt.subplot(1, 4, i + 1), plt.imshow(images[i], 'gray')
    plt.xticks([]), plt.yticks([])
    plt.title(titles[i])
plt.show()
```
As the figure shows, the Scharr operator captures more gradient information than the Sobel operator, producing richer line detail in the image; the Laplacian operator performs poorly on its own and is best used in combination with other tools rather than alone.