
FPGA-Based License Plate Recognition System

The program targets the Xilinx PYNQ-Z2 development board and uses the OpenCV library to perform license plate recognition.

Project Background and Design Goals

•       License plate recognition applies computer vision to vehicle number plates, and is widely used on highways and in parking lots, residential communities, and other road environments.

•       The goal is to make full use of the PYNQ's internal resources, together with Python programming and the OpenCV computer vision library, to build a reasonably reliable license plate recognition system that shows its output, including the plate number and vehicle speed, on an external monitor.

•       In scenarios such as parking-lot entrances or toll stations, the system can also drive a servo directly to control vehicle entry and exit.
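The vehicle-speed readout mentioned above is derived in the script further below from how far the detected plate moves between frames, using the plate's known physical width (0.44 m) as the pixel-to-metre scale. A minimal sketch of that conversion (the helper name `pixel_speed` is ours, not the project's):

```python
import math

PLATE_WIDTH = 0.44  # physical width of a Chinese licence plate, in metres

def pixel_speed(x, y, x_last, y_last, plate_w_px, dt):
    """Estimate speed in m/s from plate movement between two frames.

    plate_w_px is the detected plate width in pixels, so
    plate_w_px / PLATE_WIDTH is the pixels-per-metre scale at the
    plate's distance from the camera; dt is the frame interval in s.
    """
    pixels_per_metre = plate_w_px / PLATE_WIDTH
    displacement_px = math.hypot(x - x_last, y - y_last)
    return displacement_px / dt / pixels_per_metre

# A 160 px wide plate moving 40 px in 0.1 s:
# 40 px / 0.1 s = 400 px/s; 160 px / 0.44 m is about 363.6 px/m
print(round(pixel_speed(140, 200, 100, 200, 160, 0.1), 2))
```

The script applies the same formula inline (its `plateScale` variable is the pixels-per-metre factor) and truncates the result to two decimal places for display.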

Features

•       The design combines Python programming and the OpenCV computer vision library with the ZYNQ-7020 chip, which integrates an ARM processor with FPGA fabric and so brings together the advantages of both.

•       A license plate recognition system on an FPGA+ARM platform can use the ARM processor's operating-system support to provide good human-machine interaction, while the FPGA's parallelism and dynamic reconfigurability provide hardware acceleration, add flexibility, and speed up recognition.

•       A USB monocular camera captures vehicle images accurately and clearly, at a resolution that meets the design requirement; an external monitor displays the recognised plate information.

•       An external servo simulates the barrier at a residential-community entrance or a highway toll station.
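The barrier servo is driven over a Pmod PWM channel; the script further below uses a 20 ms period with about 6% duty to hold the barrier closed and about 11% to open it. As a rough sketch, assuming a standard hobby servo (1 to 2 ms pulse over a 20 ms period; the script's exact duty values differ slightly), the angle-to-duty mapping looks like this (`angle_to_duty` is our illustrative helper):

```python
PERIOD_US = 20000  # 20 ms PWM period (50 Hz), matching the script's Pmod_PWM setup

def angle_to_duty(angle_deg):
    """Map a servo angle in [0, 180] degrees to a duty-cycle percentage.

    Assumes a typical hobby servo: 1.0 ms pulse at 0 degrees, 2.0 ms at
    180 degrees; on a 20 ms period that is a 5% to 10% duty cycle.
    """
    pulse_us = 1000 + (angle_deg / 180.0) * 1000
    return round(100 * pulse_us / PERIOD_US)

# On the board this would drive the barrier, e.g.:
#   pwm = Pmod_PWM(PMODA, 0)
#   pwm.generate(PERIOD_US, angle_to_duty(90))
```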

Hardware Platform

    The system uses the Xilinx PYNQ-Z2 development board, which carries a ZYNQ-7020 chip with a dual-core ARM Cortex-A9 processor. The board is programmed in Python, with the hardware described in Verilog HDL.

Performance and Cost

Comparing field-programmable gate arrays (FPGAs) with graphics processors (GPUs): an FPGA consumes far less power than a GPU, and its hardware can be programmed for a specific application, although its raw compute speed is lower. Which of the two is ultimately better suited to machine vision remains a question for further study, but many companies are optimistic about FPGAs, which gives them strong research value and application prospects.

FPGAs are also cheaper to build than GPUs. As the technology matures and sees wider use, both manufacturing cost and price can be expected to fall further, so the economic return from FPGAs now and in the future should exceed that of GPUs; in this respect, applying FPGAs to machine vision carries strong potential socio-economic benefits.

Overall Block Diagram

FPGA Processing Module

Algorithm Flowchart

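The character-segmentation step in this flow works by vertical projection: the binarised plate is summed column by column, and runs of near-empty columns form the gaps between characters (the script marks the middle of each gap in the projection array). A minimal numpy sketch of the idea, with our own simplified threshold:

```python
import numpy as np

def segment_columns(binary, gap_thresh=1):
    """Return (start, end) column spans of characters in a binary image.

    A column is part of a gap when it has fewer than gap_thresh
    foreground pixels; each maximal run of non-gap columns is one glyph.
    """
    projection = (binary > 0).sum(axis=0)   # foreground pixels per column
    spans, start = [], None
    for col, count in enumerate(projection):
        if count >= gap_thresh and start is None:
            start = col                     # a character begins
        elif count < gap_thresh and start is not None:
            spans.append((start, col))      # it ends at the next gap
            start = None
    if start is not None:                   # character touching the right edge
        spans.append((start, binary.shape[1]))
    return spans

# Two 3-column-wide "characters" separated by gaps:
img = np.zeros((5, 10), dtype=np.uint8)
img[:, 1:4] = 255
img[:, 6:9] = 255
print(segment_columns(img))  # [(1, 4), (6, 9)]
```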

Results


Possible Improvements

Extending the image processing. This design does not yet read vehicle type or driver attributes (for example, whether a seat belt is worn); further work could add recognition of these.

The acceleration module is not yet complete, so the speed-up is not significant.

Going further with IoT. This design has no online comparison feature. With one, the system could check against police data in real time: using attributes such as vehicle colour to detect cloned plates, flagging forged plates that do not appear in the database, and identifying stolen vehicles, vehicles without compulsory traffic insurance, or hit-and-run vehicles. That would make the functionality far more complete.

Python Code

    The Python code is listed below. It does not include the KNN model data; for the complete project, see:

https://github.com/chenjianqu/Pynq-LicensePlateRecognition
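The character classifier used in the script below is k-nearest neighbours over 20x20 character images flattened into 400-element vectors (cv2.ml.KNearest with k=6). The same idea in plain numpy, on toy 2-D points instead of glyph vectors (`knn_predict` is our illustrative helper, not the project's API):

```python
import numpy as np

def knn_predict(train_data, train_labels, sample, k=3):
    """Classify one sample by majority vote among its k nearest neighbours.

    Mirrors what cv2.ml.KNearest.findNearest does: Euclidean distances
    to all training vectors, then a vote over the k closest labels.
    """
    dists = np.linalg.norm(train_data - sample, axis=1)  # distance to every training sample
    nearest = train_labels[np.argsort(dists)[:k]]        # labels of the k closest
    values, counts = np.unique(nearest, return_counts=True)
    return int(values[np.argmax(counts)])

# Toy data: class 0 clusters near the origin, class 1 near (10, 10)
train = np.array([[0, 0], [1, 0], [0, 1], [10, 10], [9, 10], [10, 9]], dtype=np.float32)
labels = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(train, labels, np.array([0.5, 0.5])))  # 0
print(knn_predict(train, labels, np.array([9.5, 9.8])))  # 1
```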

from pynq.overlays.base import BaseOverlay
from pynq.lib.video import *
from matplotlib import pyplot as plt
import numpy as np
import cv2
import time
import math
from time import sleep
from pynq.lib.pmod import Pmod_PWM
from pynq.lib.pmod import PMODA


base = BaseOverlay("base.bit")
pwm = Pmod_PWM(PMODA, 0)

# Load the label dictionary and the KNN training data
with open('knn_dict_letter.txt','r') as f:
    labelDict = eval(f.read())
with np.load('knn_data.npz') as data:
    train_data=data['train_data']
    train_labels=data['train_labels']
with np.load('knn_data_zh.npz') as data:
    train_data_zh=data['train_data_zh']
    train_labels_zh=data['train_labels_zh']

# Define the cascade classifiers
cascade_path = 'cascade.xml'
cascade_car_path = 'cars.xml'
car_cascade = cv2.CascadeClassifier(cascade_car_path)
cascade = cv2.CascadeClassifier(cascade_path)


# Monitor configuration: 640*480 @ 24Hz
Mode = VideoMode(640,480,24)
hdmi_out = base.video.hdmi_out
hdmi_out.configure(Mode,PIXEL_BGR)
hdmi_out.start()

# Monitor (output) frame buffer size
frame_out_w = 640
frame_out_h = 480

scale=400/480 # scaling factor

PLATE_WIDTH=0.44 # physical plate width in metres

# HSV range of the blue plate background
Hmin=100
Hmax=124
Smin=120
Smax=255
Vmin=120
Vmax=255


# Plates registered to pass the barrier
reg=["ShangHaiF19911","ShangHaiB6N866","NingXia1B6N86","ShangHaiB6N866","ShangHaiB8N866","ShangHaiB0N866","NingXiaB19873","HeNanP8A629"]


isRecognize=3

print('Read Data Done')

def KeyCallBack():
    global isRecognize  # 'global' added: without it the assignments below only bind a local
    if(base.buttons[0].read()==1):
        isRecognize=0
    elif(base.buttons[1].read()==1):
        isRecognize=1
    elif(base.buttons[2].read()==1):
        isRecognize=2
    elif(base.buttons[3].read()==1):
        isRecognize=3

def inPlateColor(pix):
    # True if an HSV pixel falls inside the blue-plate range
    return (Hmin<=pix[0]<=Hmax) and (Smin<=pix[1]<=Smax) and (Vmin<=pix[2]<=Vmax)

# Locate the plate precisely: shrink each border inwards until more than
# 60% of the pixels on that row/column are plate-coloured
def locateAccuracy(img):
    frame=cv2.cvtColor(img,cv2.COLOR_BGR2HSV)
    height = frame.shape[0]
    width = frame.shape[1]
    top=0
    bottom=height-1
    left=0
    right=width-1

    for row in range(height):            # top edge
        count=sum(1 for col in range(width) if inPlateColor(frame[row,col]))
        if(count/width>0.6):
            top=row
            break
    for row in range(bottom,0,-1):       # bottom edge
        count=sum(1 for col in range(width) if inPlateColor(frame[row,col]))
        if(count/width>0.6):
            bottom=row
            break
    for col in range(right,0,-1):        # right edge
        count=sum(1 for row in range(height) if inPlateColor(frame[row,col]))
        if(count/height>0.6):
            right=col
            break
    for col in range(width):             # left edge
        count=sum(1 for row in range(height) if inPlateColor(frame[row,col]))
        if(count/height>0.6):
            left=col
            break
    return top,bottom,left,right

def recognize(img):
    top,bottom,left,right=locateAccuracy(img)
    image=img[top:bottom,left:right]
    if(image.size==0):
        return []
    img_gray=cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
    img_thre=cv2.adaptiveThreshold(img_gray,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C,cv2.THRESH_BINARY,35,2) # adaptive binarisation
    # Vertical projection of the binarised image
    arr=np.zeros(image.shape[1])
    for col in range(img_thre.shape[1]):
        count=0
        for row in range(img_thre.shape[0]):
            count+=img_thre[row,col]
        arr[col]=int(count/255)
    # Segment the characters at the gaps in the projection
    count_1=0
    flag=0
    flag_index=0
    i=0
    for c in arr:
        if(c<10):
            count_1+=1
        else:
            if(count_1>flag):            # remember the widest gap
                flag=count_1
                flag_index=int(i-count_1/2)
            if(count_1>3):
                arr[int(i-count_1/2)]=-1 # mark the middle of each gap
            count_1=0
        i+=1
    i=0
    j=0
    x=0
    y=top
    h=bottom-top

    # Collect the character bounding boxes
    charList=[]
    for c in arr:
        if(c==-1):
            w=i-x
            charList.append([x+left,y,w,h])
            x=i
            j+=1
        # The widest gap should be the 1st or 2nd separator (after the
        # province character); reject the candidate otherwise.  This check
        # was garbled in transcription; the comparison with i and the use
        # of j as a separator counter are our reconstruction.
        if(flag_index==i and j!=1 and j!=2):
            return []
        i+=1
    charList.append([x+left,y,right-x,h])
    if(len(charList)<=5 or len(charList)>8):
        return []
    return charList

def recognizeRect(img_thre,rect,knn):
    x,y,w,h=rect
    roi=img_thre[y:(y+h),x:(x+w)]
    if h>w:
        # pad to square before resizing so the glyph is not distorted
        roi=cv2.copyMakeBorder(roi,0,0,int((h-w)/2),int((h-w)/2),cv2.BORDER_CONSTANT,value=[0,0,0])
    roi=cv2.resize(roi,(20,20))
    roivector=roi.reshape(1,400).astype(np.float32)
    ret,result,neighbours,dist=knn.findNearest(roivector,k=6) # predict
    return int(ret)

def access_pixels(frame):
    # True if more than half of the pixels look like plate background
    frame = cv2.cvtColor(frame,cv2.COLOR_BGR2HSV)
    height = frame.shape[0]  # shape is (height, width, channels)
    width = frame.shape[1]
    count=0
    for row in range(height):
        for col in range(width):
            if((frame[row,col,0]>=100 and frame[row,col,0]<=124) and (frame[row,col,1]>=43 and frame[row,col,1]<=255) and (frame[row,col,2]>=46 and frame[row,col,2]<=255)):
                count+=1
    return count/(height*width)>0.5

def isPlate(frame):
    # Rough aspect-ratio check (currently unused in the main loop)
    return frame.shape[1]>frame.shape[0]/2 or frame.shape[1]*5<frame.shape[0]


cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

print("Capture device is open: " + str(cap.isOpened()))

# Hold the barrier closed: 20 ms PWM period, 6% duty
period=20000
duty=6
pwm.generate(period,duty)

time_start=time.time()
time_last=0
i=1
x_last=0
y_last=0
isRecognize=3
str_plate=""
print("start while")
while(cap.isOpened()):
    # Buttons 0-2 select the mode; button 3 resets and closes the barrier
    if(base.buttons[0].read()==1):
        isRecognize=0
    elif(base.buttons[1].read()==1):
        isRecognize=1
    elif(base.buttons[2].read()==1):
        isRecognize=2
    elif(base.buttons[3].read()==1):
        isRecognize=3
        period=20000
        duty=6
        pwm.generate(period,duty)
        str_plate=""


    time_start=time.time()
    ret, frame = cap.read()

    if (ret):
        if(isRecognize==3):
            # Pass-through: show the camera frame directly
            outframe = hdmi_out.newframe()
            outframe[0:480,0:640,:] = frame[0:480,0:640,:] # (height, width) slice order
            hdmi_out.writeframe(outframe)
        else:
            time_last=time_start
            time_start=time.time()
            image=cv2.resize(frame,(int(640*scale),400))
            img_gray=cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
            # Coarse plate detection (commas restored: scaleFactor 1.1, minNeighbors 2)
            car_plates = cascade.detectMultiScale(img_gray, 1.1, 2, minSize=(36, 9), maxSize=(36 * 40, 9 * 40))

            for car_plate in car_plates:
                x, y, w, h = car_plate
                #plate = image[y: y + h, x: x + w]
                #if(isPlate(plate)==False):        # filter by aspect ratio
                #    continue
                #if(access_pixels(plate)==False):  # filter by plate colour
                #    continue

                if(isRecognize==0):
                    # Estimate the vehicle speed from the plate displacement
                    plateScale=(w/PLATE_WIDTH)     # pixels per metre
                    v=(math.sqrt((x-x_last)*(x-x_last)+(y-y_last)*(y-y_last))/(time_start-time_last)/plateScale)/i
                    v=int(v*100)/100
                    x_last=x
                    y_last=y
                    i=0
                    cv2.putText(image,"Vehicle Velocity:"+str(v)+" m/s",(20,image.shape[0]-10),cv2.FONT_HERSHEY_PLAIN, 1.5, (0, 0, 255), 2)
                    i+=1
                elif(isRecognize==1):
                    KeyCallBack()
                    if(isRecognize==3):            # button 3: reset and close the barrier
                        period=20000
                        duty=6
                        pwm.generate(period,duty)
                        str_plate=""

                    # Segment the characters
                    img_thre=cv2.adaptiveThreshold(img_gray,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C,cv2.THRESH_BINARY,35,2) # adaptive binarisation
                    charRect=recognize(image[y: y + h, x: x + w])
                    if(len(charRect)==0):
                        continue
                    if(len(charRect)>7):
                        charRect.pop()

                    str_plate=""
                    KeyCallBack()
                    # Create the KNN model for Chinese characters
                    knn_zh=cv2.ml.KNearest_create()
                    knn_zh.train(train_data_zh,cv2.ml.ROW_SAMPLE,train_labels_zh)

                    # Recognise the Chinese (province) character
                    rect=charRect[0]
                    x1,y1,w1,h1=rect
                    x1=x+x1
                    y1=y+y1
                    str_plate=labelDict[recognizeRect(img_thre,(x1,y1,w1,h1),knn_zh)]
                    if(len(charRect)>0):
                        x1,y1,w1,h1=charRect[0]
                        cv2.rectangle(image,(x1,y1),(x1+w1,y1+h1),(255,0,0),1)
                    KeyCallBack()
                    if(isRecognize==3):            # button 3: reset and close the barrier
                        str_plate=""
                        period=20000
                        duty=6
                        pwm.generate(period,duty)

                    # KNN model for letters and digits
                    knn=cv2.ml.KNearest_create()
                    knn.train(train_data,cv2.ml.ROW_SAMPLE,train_labels)
                    for rect in charRect[1:]:
                        x1,y1,w1,h1=rect
                        x1=x+x1
                        y1=y+y1
                        cv2.rectangle(image,(x1,y1),(x1+w1,y1+h1),(255,0,0),1) # box each character
                        s=labelDict[recognizeRect(img_thre,(x1,y1,w1,h1),knn)]
                        str_plate=str_plate+s

                    # Open the barrier if the plate is registered
                    for re in reg:
                        if(re in str_plate):
                            period=20000
                            duty=11
                            pwm.generate(period,duty)
                            print("********************ok")

                    print('recognize time:',time.time()-time_start,'s')
                    image=cv2.putText(image,str_plate,(20,image.shape[0]-10),cv2.FONT_HERSHEY_PLAIN, 2.0, (0, 0, 255), 2)
                elif(isRecognize==2):
                    image=cv2.putText(image,str_plate,(20,image.shape[0]-10),cv2.FONT_HERSHEY_PLAIN, 2.0, (0, 0, 255), 2)
                # Mark the coarsely located plate
                cv2.rectangle(image, (x - 10, y - 10), (x + w + 10, y + h + 10), (255, 255, 255), 1)
            image=cv2.resize(image,(640,480))
            outframe = hdmi_out.newframe()
            outframe[0:480,0:640,:] = image[0:480,0:640,:] # (height, width) slice order
            hdmi_out.writeframe(outframe)

    else:
        raise RuntimeError("Failed to read from camera.")
print("end program")
