
yolov5_master: download, environment setup, data processing, and the full training workflow


This article borrows from the blog post below. I found it fairly complete, so I copied it here, worked through the setup myself, and confirmed that training runs; I'm keeping it as a note so it is easy to find next time. With this yolov5_master workflow, a trained .pt model file can be converted to an ONNX file and then to an RKNN file, so the model can run inference on the NPU of Rockchip's small-form-factor devices.

Reference: Orange Pi 5 using the RK3588S built-in NPU to accelerate yolov5 inference, real-time digit recognition at 50 fps (CSDN blog)

1. Setting up the yolov5_master environment

First, here is the officially specified yolov5 version: official yolov5

I recommend training with the official yolov5; that gives the best final results on the Orange Pi 5.

First download yolov5 from GitHub. Then install Anaconda on your machine to create a virtual environment, and install yolov5's dependencies into that environment so that different environments do not interfere with or pollute each other. I won't repeat the Anaconda installation steps here; if you are unsure, search for a tutorial. After activating the virtual environment, install the dependencies:

pip install -r requirements.txt

One thing to note: graphics cards and driver versions differ from machine to machine, so after installing the requirements, run the following snippet to check whether your installed torch build matches your GPU.

import torch
torch.cuda.is_available()

If the result is True, the GPU build of torch is installed; otherwise you need to install a GPU build of torch matching your own environment. I installed mine from the terminal with the command below (this particular version may not suit your setup):

conda install pytorch==1.12.0 torchvision==0.13.0 torchaudio==0.12.0 cudatoolkit=11.6 -c pytorch -c conda-forge

After installation, run the check again

import torch
torch.cuda.is_available()

until it returns True.

With the environment ready, let's do some preparation before training.

2. Dataset processing

Annotating data should be familiar to everyone; here we focus on how to process an annotated .xml (VOC-format) dataset.

First create a folder under the yolov5 directory, named VOCData here, and inside it create two folders: Annotations and images. Annotations holds the annotated .xml files; images holds the pictures we captured.

Splitting the dataset

Next, create split_train_val.py under VOCData to split the dataset (no modification needed; just run it):

# coding:utf-8
import os
import random
import argparse

parser = argparse.ArgumentParser()
# Path to the xml files; adjust for your own data. The xml files usually live in Annotations
parser.add_argument('--xml_path', default='Annotations', type=str, help='input xml label path')
# Output location for the split lists: ImageSets/Main under your dataset
parser.add_argument('--txt_path', default='ImageSets/Main', type=str, help='output txt label path')
opt = parser.parse_args()

trainval_percent = 1.0  # fraction used for train + val; no test set is held out here
train_percent = 0.9     # fraction of trainval used for training; adjust as needed
xmlfilepath = opt.xml_path
txtsavepath = opt.txt_path
total_xml = os.listdir(xmlfilepath)
if not os.path.exists(txtsavepath):
    os.makedirs(txtsavepath)

num = len(total_xml)
list_index = range(num)
tv = int(num * trainval_percent)
tr = int(tv * train_percent)
trainval = random.sample(list_index, tv)
train = random.sample(trainval, tr)

file_trainval = open(txtsavepath + '/trainval.txt', 'w')
file_test = open(txtsavepath + '/test.txt', 'w')
file_train = open(txtsavepath + '/train.txt', 'w')
file_val = open(txtsavepath + '/val.txt', 'w')

for i in list_index:
    name = total_xml[i][:-4] + '\n'
    if i in trainval:
        file_trainval.write(name)
        if i in train:
            file_train.write(name)
        else:
            file_val.write(name)
    else:
        file_test.write(name)

file_trainval.close()
file_train.close()
file_val.close()
file_test.close()

After running it, the test, train, trainval, and val lists are generated under VOCData\ImageSets\Main.
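The split arithmetic in split_train_val.py can be sanity-checked on paper. Here is a minimal sketch (the helper name `split_counts` is mine, not part of the script) that mirrors how the script derives the four list sizes from the two percentages:

```python
def split_counts(num, trainval_percent=1.0, train_percent=0.9):
    """Mirror of split_train_val.py's arithmetic: sizes of (trainval, train, val, test)."""
    tv = int(num * trainval_percent)  # images kept for train + val
    tr = int(tv * train_percent)      # of those, images used for training
    return tv, tr, tv - tr, num - tv

# with the script's defaults, 100 annotated images split 90/10 with no test set
print(split_counts(100))  # (100, 90, 10, 0)
```

With `trainval_percent = 1.0`, test.txt always ends up empty, which matches the comment in the script.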

Converting .xml files to .txt

Create text_to_yolo.py under the VOCData directory and run it; change the classes list at the top to your own categories.

# -*- coding: utf-8 -*-
import xml.etree.ElementTree as ET
import os
from os import getcwd

sets = ['train', 'val', 'test']
classes = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]  # change to your own classes
abs_path = os.getcwd()
print(abs_path)


def convert(size, box):
    dw = 1. / (size[0])
    dh = 1. / (size[1])
    x = (box[0] + box[1]) / 2.0 - 1
    y = (box[2] + box[3]) / 2.0 - 1
    w = box[1] - box[0]
    h = box[3] - box[2]
    x = x * dw
    w = w * dw
    y = y * dh
    h = h * dh
    return x, y, w, h


def convert_annotation(image_id):
    in_file = open('E:/SQY/new/yolov5-master/VOCData/Annotations/%s.xml' % (image_id), encoding='UTF-8')
    out_file = open('E:/SQY/new/yolov5-master/VOCData/labels/%s.txt' % (image_id), 'w')
    tree = ET.parse(in_file)
    root = tree.getroot()
    size = root.find('size')
    w = int(size.find('width').text)
    h = int(size.find('height').text)
    for obj in root.iter('object'):
        difficult = obj.find('difficult').text
        # difficult = obj.find('Difficult').text
        cls = obj.find('name').text
        if cls not in classes or int(difficult) == 1:
            continue
        cls_id = classes.index(cls)
        xmlbox = obj.find('bndbox')
        b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text), float(xmlbox.find('ymin').text),
             float(xmlbox.find('ymax').text))
        b1, b2, b3, b4 = b
        # clamp annotations that run past the image boundary
        if b2 > w:
            b2 = w
        if b4 > h:
            b4 = h
        b = (b1, b2, b3, b4)
        bb = convert((w, h), b)
        out_file.write(str(cls_id) + " " + " ".join([str(a) for a in bb]) + '\n')


wd = getcwd()
for image_set in sets:
    if not os.path.exists('E:/SQY/new/yolov5-master/VOCData/labels/'):
        os.makedirs('E:/SQY/new/yolov5-master/VOCData/labels/')
    image_ids = open('E:/SQY/new/yolov5-master/VOCData/ImageSets/Main/%s.txt' % (image_set)).read().strip().split()
    if not os.path.exists('E:/SQY/new/yolov5-master/VOCData/dataSet_path/'):
        os.makedirs('E:/SQY/new/yolov5-master/VOCData/dataSet_path/')
    list_file = open('dataSet_path/%s.txt' % (image_set), 'w')
    for image_id in image_ids:
        list_file.write('E:/SQY/new/yolov5-master/VOCData/images/%s.JPG\n' % (image_id))
        convert_annotation(image_id)
    list_file.close()

After running it, the labels folder and the dataSet_path folder are generated.

The labels folder holds the annotation files for the images: one txt file per image, with one object per line in the format class, x_center, y_center, width, height. This is the yolo_txt format. The dataSet_path folder contains one txt file per split; each lists the absolute paths of the images in that split, e.g. train.txt contains the absolute paths of all training images.
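To make the yolo_txt numbers concrete, here is the convert() arithmetic from text_to_yolo.py applied to one hand-picked box (the image size and coordinates below are illustrative):

```python
def convert(size, box):
    """text_to_yolo.py's convert(): (img_w, img_h), (xmin, xmax, ymin, ymax) -> normalized x, y, w, h."""
    dw, dh = 1.0 / size[0], 1.0 / size[1]
    x = (box[0] + box[1]) / 2.0 - 1  # box centre; the -1 offsets VOC's 1-based pixel coordinates
    y = (box[2] + box[3]) / 2.0 - 1
    w = box[1] - box[0]
    h = box[3] - box[2]
    return x * dw, y * dh, w * dw, h * dh

# a 100x200-pixel box in a 640x480 image: xmin=270, xmax=370, ymin=140, ymax=340
x, y, w, h = convert((640, 480), (270, 370, 140, 340))
print(round(x, 4), round(y, 4), round(w, 4), round(h, 4))  # 0.4984 0.4979 0.1562 0.4167
```

All four values are fractions of the image dimensions, which is why they stay valid regardless of the input resolution used at training time.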
 

Configuring the myvoc.yaml file

Create a new myvoc.yaml file under the data folder of the yolov5 directory:

train: E:\SQY\new\yolov5-master\VOCData\dataSet_path\train.txt
val: E:\SQY\new\yolov5-master\VOCData\dataSet_path\val.txt

# number of classes
nc: 10

# class names
names: ["0","1","2","3","4","5","6","7","8","9"]

Clustering prior (anchor) boxes

To generate the anchors file, create two programs under VOCData: kmeans.py and clauculate_anchors.py. There is no need to run kmeans.py; running clauculate_anchors.py is enough.

kmeans.py is listed below. It does not need to be run or modified; if it raises an error, see the comment on the ValueError("Box has no area") line.

import numpy as np


def iou(box, clusters):
    """
    Calculates the Intersection over Union (IoU) between a box and k clusters.
    :param box: tuple or array, shifted to the origin (i.e. width and height)
    :param clusters: numpy array of shape (k, 2) where k is the number of clusters
    :return: numpy array of shape (k, 0) where k is the number of clusters
    """
    x = np.minimum(clusters[:, 0], box[0])
    y = np.minimum(clusters[:, 1], box[1])
    if np.count_nonzero(x == 0) > 0 or np.count_nonzero(y == 0) > 0:
        raise ValueError("Box has no area")  # if this error is raised, replacing this line with `pass` works
    intersection = x * y
    box_area = box[0] * box[1]
    cluster_area = clusters[:, 0] * clusters[:, 1]
    iou_ = intersection / (box_area + cluster_area - intersection)
    return iou_


def avg_iou(boxes, clusters):
    """
    Calculates the average Intersection over Union (IoU) between a numpy array of boxes and k clusters.
    :param boxes: numpy array of shape (r, 2), where r is the number of rows
    :param clusters: numpy array of shape (k, 2) where k is the number of clusters
    :return: average IoU as a single float
    """
    return np.mean([np.max(iou(boxes[i], clusters)) for i in range(boxes.shape[0])])


def translate_boxes(boxes):
    """
    Translates all the boxes to the origin.
    :param boxes: numpy array of shape (r, 4)
    :return: numpy array of shape (r, 2)
    """
    new_boxes = boxes.copy()
    for row in range(new_boxes.shape[0]):
        new_boxes[row][2] = np.abs(new_boxes[row][2] - new_boxes[row][0])
        new_boxes[row][3] = np.abs(new_boxes[row][3] - new_boxes[row][1])
    return np.delete(new_boxes, [0, 1], axis=1)


def kmeans(boxes, k, dist=np.median):
    """
    Calculates k-means clustering with the Intersection over Union (IoU) metric.
    :param boxes: numpy array of shape (r, 2), where r is the number of rows
    :param k: number of clusters
    :param dist: distance function
    :return: numpy array of shape (k, 2)
    """
    rows = boxes.shape[0]
    distances = np.empty((rows, k))
    last_clusters = np.zeros((rows,))
    np.random.seed()
    # the Forgy method will fail if the whole array contains the same rows
    clusters = boxes[np.random.choice(rows, k, replace=False)]
    while True:
        for row in range(rows):
            distances[row] = 1 - iou(boxes[row], clusters)
        nearest_clusters = np.argmin(distances, axis=1)
        if (last_clusters == nearest_clusters).all():
            break
        for cluster in range(k):
            clusters[cluster] = dist(boxes[nearest_clusters == cluster], axis=0)
        last_clusters = nearest_clusters
    return clusters


if __name__ == '__main__':
    a = np.array([[1, 2, 3, 4], [5, 7, 6, 8]])
    print(translate_boxes(a))

Running clauculate_anchors.py calls kmeans.py to cluster and generate the new anchors file. The program is below; change the FILE_ROOT and ANCHORS_TXT_PATH paths and the CLASS_NAMES list to match your data.

# -*- coding: utf-8 -*-
# Derive prior (anchor) boxes from the label files
import os
import numpy as np
import xml.etree.cElementTree as et
from kmeans import kmeans, avg_iou

FILE_ROOT = "E:/SQY/new/yolov5-master/VOCData/"  # root path
ANNOTATION_ROOT = "Annotations"  # label folder under the dataset
ANNOTATION_PATH = FILE_ROOT + ANNOTATION_ROOT
ANCHORS_TXT_PATH = "E:/SQY/new/yolov5-master/VOCData/anchors.txt"  # where to save the anchors file
CLUSTERS = 9
CLASS_NAMES = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']  # class names


def load_data(anno_dir, class_names):
    xml_names = os.listdir(anno_dir)
    boxes = []
    for xml_name in xml_names:
        xml_pth = os.path.join(anno_dir, xml_name)
        tree = et.parse(xml_pth)
        width = float(tree.findtext("./size/width"))
        height = float(tree.findtext("./size/height"))
        for obj in tree.findall("./object"):
            cls_name = obj.findtext("name")
            if cls_name in class_names:
                xmin = float(obj.findtext("bndbox/xmin")) / width
                ymin = float(obj.findtext("bndbox/ymin")) / height
                xmax = float(obj.findtext("bndbox/xmax")) / width
                ymax = float(obj.findtext("bndbox/ymax")) / height
                box = [xmax - xmin, ymax - ymin]
                boxes.append(box)
            else:
                continue
    return np.array(boxes)


if __name__ == '__main__':
    anchors_txt = open(ANCHORS_TXT_PATH, "w")
    train_boxes = load_data(ANNOTATION_PATH, CLASS_NAMES)
    count = 1
    best_accuracy = 0
    best_anchors = []
    best_ratios = []
    for i in range(10):  # number of clustering runs; don't make this too large or it takes a long time
        anchors_tmp = []
        clusters = kmeans(train_boxes, k=CLUSTERS)
        idx = clusters[:, 0].argsort()
        clusters = clusters[idx]
        # print(clusters)
        for j in range(CLUSTERS):
            anchor = [round(clusters[j][0] * 640, 2), round(clusters[j][1] * 640, 2)]
            anchors_tmp.append(anchor)
            print(f"Anchors:{anchor}")
        temp_accuracy = avg_iou(train_boxes, clusters) * 100
        print("Train_Accuracy:{:.2f}%".format(temp_accuracy))
        ratios = np.around(clusters[:, 0] / clusters[:, 1], decimals=2).tolist()
        ratios.sort()
        print("Ratios:{}".format(ratios))
        print(20 * "*" + " {} ".format(count) + 20 * "*")
        count += 1
        if temp_accuracy > best_accuracy:
            best_accuracy = temp_accuracy
            best_anchors = anchors_tmp
            best_ratios = ratios
    anchors_txt.write("Best Accuracy = " + str(round(best_accuracy, 2)) + '%' + "\r\n")
    anchors_txt.write("Best Anchors = " + str(best_anchors) + "\r\n")
    anchors_txt.write("Best Ratios = " + str(best_ratios))
    anchors_txt.close()
  64. anchors_txt.close()

This produces an anchors file; on its second line are the Best Anchors, which we need to copy into yolov5s.yaml. The anchors section takes three lines of six numbers each (three width-height pairs per detection scale). Note that decimals are rounded to the nearest integer: everything in yolov5s.yaml's anchors section must be an integer, and the entire anchors section must be replaced with your Best Anchors. Also update nc, the number of classes to detect.
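As an example of the rounding and grouping step, here is a sketch that takes a hypothetical Best Anchors list (the numbers below are made up; use the values from your own anchors file) and prints the three integer anchor lines in the layout yolov5s.yaml expects:

```python
# hypothetical Best Anchors as clauculate_anchors.py writes them: nine [w, h] float pairs
best_anchors = [[10.2, 13.7], [16.6, 30.1], [33.9, 23.4],
                [30.2, 61.2], [62.4, 45.8], [59.1, 119.3],
                [116.3, 90.6], [156.4, 198.7], [373.2, 326.1]]

# round each value to the nearest integer and group three pairs per detection scale
ints = [[round(w), round(h)] for w, h in best_anchors]
for scale, i in zip(("P3/8", "P4/16", "P5/32"), range(0, 9, 3)):
    pairs = ", ".join(f"{w},{h}" for w, h in ints[i:i + 3])
    print(f"  - [{pairs}]  # {scale}")
```

The first printed line, for example, is `  - [10,14, 17,30, 34,23]  # P3/8`; paste the three lines over the existing anchors entries in yolov5s.yaml.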

Downloading the weights:

The official GitHub repository contains the weights file yolov5s.pt that we need. For convenience, I also uploaded it to Baidu Netdisk (extraction code: 2471). Create a weights folder under yolov5 and put the weights file in it.

Preparing to train:

Activate the Anaconda virtual environment, and then we can run the following command:

python train.py --weights weights/yolov5s.pt  --cfg models/yolov5s.yaml  --data data/myvoc.yaml --epoch 200 --batch-size 8 --img 640   --device 0 

Then the long training phase begins. Before that, you may still hit some small problems, such as a "paging file is too small to complete the operation" error; that is easy to fix by allocating more virtual memory. For details, see this article: error fix

Converting the trained .pt file to .onnx

First, following the official RKNN documentation, we modify the corresponding part. RKNN official docs


Following the official steps, we change the code below

def forward(self, x):
    z = []  # inference output
    for i in range(self.nl):
        if os.getenv('RKNN_model_hack', '0') != '0':
            z.append(torch.sigmoid(self.m[i](x[i])))
            continue
        x[i] = self.m[i](x[i])  # conv
        bs, _, ny, nx = x[i].shape  # x(bs,255,20,20) to x(bs,3,20,20,85)
        x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
        if not self.training:  # inference
            if self.onnx_dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]:
                self.grid[i], self.anchor_grid[i] = self._make_grid(nx, ny, i)
            y = x[i].sigmoid()
            if self.inplace:
                y[..., 0:2] = (y[..., 0:2] * 2 + self.grid[i]) * self.stride[i]  # xy
                y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i]  # wh
            else:  # for YOLOv5 on AWS Inferentia https://github.com/ultralytics/yolov5/pull/2953
                xy, wh, conf = y.split((2, 2, self.nc + 1), 4)  # y.tensor_split((2, 4, 5), 4)  # torch 1.8.0
                xy = (xy * 2 + self.grid[i]) * self.stride[i]  # xy
                wh = (wh * 2) ** 2 * self.anchor_grid[i]  # wh
                y = torch.cat((xy, wh, conf), 4)
            z.append(y.view(bs, -1, self.no))
    if os.getenv('RKNN_model_hack', '0') != '0':
        return z
    return x if self.training else (torch.cat(z, 1),) if self.export else (torch.cat(z, 1), x)

to:

def forward(self, x):
    z = []  # inference output
    for i in range(self.nl):
        x[i] = self.m[i](x[i])  # conv
    return x

Be sure to wait until training has completely finished before making this modification for export; then copy the resulting best.pt into the same folder as export.py.

Next, we can convert the .pt model to .onnx in the PyCharm terminal:

python export.py --weights best.pt --img 640 --batch 1 --include onnx

Afterwards, best.onnx is generated in the same directory as export.py. Copy this file to an Ubuntu 20.04 system for the next processing step.

At the same time, it generates a corresponding RK.anchor file in that directory.

This file stores the anchor points of our prior boxes. Be careful to copy these anchor values into the test.py used in the next step for the RKNN conversion, otherwise the predicted boxes come out wrong.

Converting best.onnx to RKNN format

This step is done in the Ubuntu 20.04 system. My Ubuntu install already has Anaconda; its advantage is that libraries install easily, and conda virtual environments keep each environment independent of the others.

conda create -n rknn_new python=3.8

Then download the whole project from the RKNN GitHub repository, unpack it, and configure rknn-toolkit2 inside the virtual environment we just created. Enter the doc directory and run:

pip install -r requirements_cp38-1.4.0.txt -i https://mirror.baidu.com/pypi/simple

As I said before, be sure to use the Baidu mirror source here, otherwise the install will fail.

Once the install output finishes cleanly, the environment is in place.

Next, enter the packages folder and run the following command:

pip install rknn_toolkit2-1.4.0_22dcfef4-cp38-cp38-linux_x86_64.whl

If you use this official package, you won't run into the error I described in my previous post. After installation, start python in the terminal and run the following import; if it raises no error, the environment is ready.

from rknn.api import RKNN

Next, go into the examples/onnx/yolov5 folder, find test.py, and modify the model path and our class names.

Then we also need to modify the post-processing process function as follows (I learned this by asking an expert on Bilibili; thanks for the generous help):

def process(input, mask, anchors):
    anchors = [anchors[i] for i in mask]
    grid_h, grid_w = map(int, input.shape[0:2])

    box_confidence = input[..., 4]
    box_confidence = np.expand_dims(box_confidence, axis=-1)

    box_class_probs = input[..., 5:]

    box_xy = input[..., :2] * 2 - 0.5

    col = np.tile(np.arange(0, grid_w), grid_w).reshape(-1, grid_w)
    row = np.tile(np.arange(0, grid_h).reshape(-1, 1), grid_h)
    col = col.reshape(grid_h, grid_w, 1, 1).repeat(3, axis=-2)
    row = row.reshape(grid_h, grid_w, 1, 1).repeat(3, axis=-2)
    grid = np.concatenate((col, row), axis=-1)

    box_xy += grid
    box_xy *= int(IMG_SIZE / grid_h)

    box_wh = pow(input[..., 2:4] * 2, 2)
    box_wh = box_wh * anchors

    box = np.concatenate((box_xy, box_wh), axis=-1)
    return box, box_confidence, box_class_probs
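The col/row tiling in process() builds a per-cell (x, y) offset grid that gets added to the predicted box centres. A tiny numpy check on a 2x2 grid shows the shapes and values involved (grid size 2 is just for illustration; the real feature maps at 640 input are 80, 40, and 20 cells wide):

```python
import numpy as np

# replicate the grid-offset construction from process() for a 2x2 grid with 3 anchors per cell
grid_h = grid_w = 2
col = np.tile(np.arange(0, grid_w), grid_w).reshape(-1, grid_w)
row = np.tile(np.arange(0, grid_h).reshape(-1, 1), grid_h)
col = col.reshape(grid_h, grid_w, 1, 1).repeat(3, axis=-2)  # 3 anchors per cell
row = row.reshape(grid_h, grid_w, 1, 1).repeat(3, axis=-2)
grid = np.concatenate((col, row), axis=-1)

print(grid.shape)     # (2, 2, 3, 2): cell rows, cell cols, anchors, (x, y)
print(grid[1, 0, 0])  # cell at row 1, col 0 -> offset [0 1]
```

After adding these offsets, multiplying by `IMG_SIZE / grid_h` (the stride) maps each centre from grid units back to pixel coordinates.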

Otherwise you will get a result with a very large number of spurious boxes.

Once the modifications are done, run from the command line:

python test.py

Afterwards, a file named best.rknn is produced at the specified location.

Running best.rknn on the Orange Pi 5

At this point we need to move the model onto the Orange Pi. I use the Python version of RKNN to get NPU acceleration. Download the official RKNN tutorial repository from GitHub, enter the folder, and run:

cd /examples/onnx/yolov5

Inside the folder, create a file named demo.py and paste in the code below; it already does real-time video. Again, remember to update the RK_anchor anchor values here as well, and likewise adjust the post-processing, i.e. the process function mentioned above. (One odd thing: according to the expert, process should be modified here too, but in my own tests modifying it actually produced many extra boxes, so adapt it to your actual situation.)

import os
import urllib
import traceback
import time
import datetime as dt
import sys
import numpy as np
import cv2
from rknnlite.api import RKNNLite

# RKNN_MODEL = 'yolov5s-640-640.rknn'
RKNN_MODEL = 'new/best.rknn'
# DATASET = './dataset.txt'

QUANTIZE_ON = True

OBJ_THRESH = 0.25
NMS_THRESH = 0.45
IMG_SIZE = 640

'''CLASSES = ("person", "bicycle", "car", "motorbike ", "aeroplane ", "bus ", "train", "truck ", "boat", "traffic light",
"fire hydrant", "stop sign ", "parking meter", "bench", "bird", "cat", "dog ", "horse ", "sheep", "cow", "elephant",
"bear", "zebra ", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite",
"baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife ",
"spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza ", "donut", "cake", "chair", "sofa",
"pottedplant", "bed", "diningtable", "toilet ", "tvmonitor", "laptop ", "mouse ", "remote ", "keyboard ", "cell phone", "microwave ",
"oven ", "toaster", "sink", "refrigerator ", "book", "clock", "vase", "scissors ", "teddy bear ", "hair drier", "toothbrush ")
'''
CLASSES = ("0", "1", "2", "3", "4", "5", "6", "7", "8", "9")


def sigmoid(x):
    return 1 / (1 + np.exp(-x))


def xywh2xyxy(x):
    # Convert [x, y, w, h] to [x1, y1, x2, y2]
    y = np.copy(x)
    y[:, 0] = x[:, 0] - x[:, 2] / 2  # top left x
    y[:, 1] = x[:, 1] - x[:, 3] / 2  # top left y
    y[:, 2] = x[:, 0] + x[:, 2] / 2  # bottom right x
    y[:, 3] = x[:, 1] + x[:, 3] / 2  # bottom right y
    return y


def process(input, mask, anchors):
    anchors = [anchors[i] for i in mask]
    grid_h, grid_w = map(int, input.shape[0:2])

    box_confidence = sigmoid(input[..., 4])
    box_confidence = np.expand_dims(box_confidence, axis=-1)

    box_class_probs = sigmoid(input[..., 5:])

    box_xy = sigmoid(input[..., :2]) * 2 - 0.5

    col = np.tile(np.arange(0, grid_w), grid_w).reshape(-1, grid_w)
    row = np.tile(np.arange(0, grid_h).reshape(-1, 1), grid_h)
    col = col.reshape(grid_h, grid_w, 1, 1).repeat(3, axis=-2)
    row = row.reshape(grid_h, grid_w, 1, 1).repeat(3, axis=-2)
    grid = np.concatenate((col, row), axis=-1)

    box_xy += grid
    box_xy *= int(IMG_SIZE / grid_h)

    box_wh = pow(sigmoid(input[..., 2:4]) * 2, 2)
    box_wh = box_wh * anchors

    box = np.concatenate((box_xy, box_wh), axis=-1)
    return box, box_confidence, box_class_probs


def filter_boxes(boxes, box_confidences, box_class_probs):
    """Filter boxes with box threshold. It's a bit different with origin yolov5 post process!
    # Arguments
        boxes: ndarray, boxes of objects.
        box_confidences: ndarray, confidences of objects.
        box_class_probs: ndarray, class_probs of objects.
    # Returns
        boxes: ndarray, filtered boxes.
        classes: ndarray, classes for boxes.
        scores: ndarray, scores for boxes.
    """
    boxes = boxes.reshape(-1, 4)
    box_confidences = box_confidences.reshape(-1)
    box_class_probs = box_class_probs.reshape(-1, box_class_probs.shape[-1])

    _box_pos = np.where(box_confidences >= OBJ_THRESH)
    boxes = boxes[_box_pos]
    box_confidences = box_confidences[_box_pos]
    box_class_probs = box_class_probs[_box_pos]

    class_max_score = np.max(box_class_probs, axis=-1)
    classes = np.argmax(box_class_probs, axis=-1)
    _class_pos = np.where(class_max_score >= OBJ_THRESH)

    boxes = boxes[_class_pos]
    classes = classes[_class_pos]
    scores = (class_max_score * box_confidences)[_class_pos]

    return boxes, classes, scores


def nms_boxes(boxes, scores):
    """Suppress non-maximal boxes.
    # Arguments
        boxes: ndarray, boxes of objects.
        scores: ndarray, scores of objects.
    # Returns
        keep: ndarray, index of effective boxes.
    """
    x = boxes[:, 0]
    y = boxes[:, 1]
    w = boxes[:, 2] - boxes[:, 0]
    h = boxes[:, 3] - boxes[:, 1]

    areas = w * h
    order = scores.argsort()[::-1]

    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)

        xx1 = np.maximum(x[i], x[order[1:]])
        yy1 = np.maximum(y[i], y[order[1:]])
        xx2 = np.minimum(x[i] + w[i], x[order[1:]] + w[order[1:]])
        yy2 = np.minimum(y[i] + h[i], y[order[1:]] + h[order[1:]])

        w1 = np.maximum(0.0, xx2 - xx1 + 0.00001)
        h1 = np.maximum(0.0, yy2 - yy1 + 0.00001)
        inter = w1 * h1

        ovr = inter / (areas[i] + areas[order[1:]] - inter)
        inds = np.where(ovr <= NMS_THRESH)[0]
        order = order[inds + 1]
    keep = np.array(keep)
    return keep


def yolov5_post_process(input_data):
    masks = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
    anchors = [[199, 371], [223, 481], [263, 428], [278, 516], [320, 539],
               [323, 464], [361, 563], [402, 505], [441, 584]]

    boxes, classes, scores = [], [], []
    for input, mask in zip(input_data, masks):
        b, c, s = process(input, mask, anchors)
        b, c, s = filter_boxes(b, c, s)
        boxes.append(b)
        classes.append(c)
        scores.append(s)

    boxes = np.concatenate(boxes)
    boxes = xywh2xyxy(boxes)
    classes = np.concatenate(classes)
    scores = np.concatenate(scores)

    nboxes, nclasses, nscores = [], [], []
    for c in set(classes):
        inds = np.where(classes == c)
        b = boxes[inds]
        c = classes[inds]
        s = scores[inds]

        keep = nms_boxes(b, s)

        nboxes.append(b[keep])
        nclasses.append(c[keep])
        nscores.append(s[keep])

    if not nclasses and not nscores:
        return None, None, None

    boxes = np.concatenate(nboxes)
    classes = np.concatenate(nclasses)
    scores = np.concatenate(nscores)

    return boxes, classes, scores


def draw(image, boxes, scores, classes, fps):
    """Draw the boxes on the image.
    # Argument:
        image: original image.
        boxes: ndarray, boxes of objects.
        classes: ndarray, classes of objects.
        scores: ndarray, scores of objects.
        fps: int.
        all_classes: all classes name.
    """
    for box, score, cl in zip(boxes, scores, classes):
        top, left, right, bottom = box
        print('class: {}, score: {}'.format(CLASSES[cl], score))
        print('box coordinate left,top,right,down: [{}, {}, {}, {}]'.format(top, left, right, bottom))
        top = int(top)
        left = int(left)
        right = int(right)
        bottom = int(bottom)

        cv2.rectangle(image, (top, left), (right, bottom), (255, 0, 0), 2)
        cv2.putText(image, '{0} {1:.2f}'.format(CLASSES[cl], score),
                    (top, left - 6),
                    cv2.FONT_HERSHEY_SIMPLEX,
                    0.6, (0, 0, 255), 2)


def letterbox(im, new_shape=(640, 640), color=(0, 0, 0)):
    # Resize and pad image while meeting stride-multiple constraints
    shape = im.shape[:2]  # current shape [height, width]
    if isinstance(new_shape, int):
        new_shape = (new_shape, new_shape)

    # Scale ratio (new / old)
    r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])

    # Compute padding
    ratio = r, r  # width, height ratios
    new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
    dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1]  # wh padding

    dw /= 2  # divide padding into 2 sides
    dh /= 2

    if shape[::-1] != new_unpad:  # resize
        im = cv2.resize(im, new_unpad, interpolation=cv2.INTER_LINEAR)
    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
    im = cv2.copyMakeBorder(im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)  # add border
    return im, ratio, (dw, dh)


# ==================================
# The part below is the modified section: the model-conversion code from the official
# demo is removed, the rknn model is loaded directly, and the RKNN class is replaced
# with RKNNLite from rknn_toolkit2_lite
# ==================================
rknn = RKNNLite()

# load RKNN model
print('--> Load RKNN model')
ret = rknn.load_rknn(RKNN_MODEL)

# Init runtime environment
print('--> Init runtime environment')
# use NPU cores 0, 1, 2
ret = rknn.init_runtime(core_mask=RKNNLite.NPU_CORE_0_1_2)
if ret != 0:
    print('Init runtime environment failed!')
    exit(ret)
print('done')

# Create a VideoCapture object and read from input file
# If the input is the camera, pass 0 instead of the video file name
cap = cv2.VideoCapture(0)

# Check if camera opened successfully
if (cap.isOpened() == False):
    print("Error opening video stream or file")

# Read until video is completed
while (cap.isOpened()):
    start = dt.datetime.utcnow()
    # Capture frame-by-frame
    ret, img = cap.read()
    if not ret:
        break

    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))

    # Inference
    # print('--> Running model')
    outputs = rknn.inference(inputs=[img])
    # print('done')

    # post process
    input0_data = outputs[0]
    input1_data = outputs[1]
    input2_data = outputs[2]

    input0_data = input0_data.reshape([3, -1] + list(input0_data.shape[-2:]))
    input1_data = input1_data.reshape([3, -1] + list(input1_data.shape[-2:]))
    input2_data = input2_data.reshape([3, -1] + list(input2_data.shape[-2:]))

    input_data = list()
    input_data.append(np.transpose(input0_data, (2, 3, 0, 1)))
    input_data.append(np.transpose(input1_data, (2, 3, 0, 1)))
    input_data.append(np.transpose(input2_data, (2, 3, 0, 1)))

    boxes, classes, scores = yolov5_post_process(input_data)

    duration = dt.datetime.utcnow() - start
    fps = round(1000000 / duration.microseconds)

    # draw process result and fps
    img_1 = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
    cv2.putText(img_1, f'fps: {fps}',
                (20, 20),
                cv2.FONT_HERSHEY_SIMPLEX,
                0.6, (0, 125, 125), 2)
    if boxes is not None:
        draw(img_1, boxes, scores, classes, fps)

    # show output
    cv2.imshow("post process result", img_1)

    # Press Q on keyboard to exit
    if cv2.waitKey(25) & 0xFF == ord('q'):
        break

# When everything done, release the video capture object
cap.release()

# Closes all the frames
cv2.destroyAllWindows()

We're not done yet: to let the NPU run at full speed, the CPU and NPU need their frequencies pinned.

Pinning the CPU and NPU frequencies

The commands here are excerpted from the official RKNPU documentation.

First switch to the root user; just type su.

Check the CPU frequency:

# method 1
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
# method 2
cat /sys/kernel/debug/clk/clk_summary | grep arm

Pin the CPU frequency:

# list the available CPU frequencies
cat /sys/devices/system/cpu/cpufreq/policy0/scaling_available_frequencies
# output => 408000 600000 816000 1008000 1200000 1416000 1608000 1800000

# set the CPU frequency, e.g. to the maximum 1.8 GHz
echo userspace > /sys/devices/system/cpu/cpufreq/policy0/scaling_governor
echo 1800000 > /sys/devices/system/cpu/cpufreq/policy0/scaling_setspeed

Check the NPU frequency (RK3588-specific):

cat /sys/class/devfreq/fdab0000.npu/cur_freq

Pin the NPU frequency (RK3588-specific):

# list the available NPU frequencies
cat /sys/class/devfreq/fdab0000.npu/available_frequencies
# => 300000000 400000000 500000000 600000000 700000000 800000000 900000000 1000000000

# set the NPU frequency, e.g. to the maximum 1 GHz
echo userspace > /sys/class/devfreq/fdab0000.npu/governor
echo 1000000000 > /sys/kernel/debug/clk/clk_npu_dsu0/clk_rate

Note, however, that since NPU driver version 0.7.2, the NPU's power must be switched on before its frequency can be set.

In practice I found that if the CPU is pinned this session, it reverts to its default frequency after the next boot. So I learned a bit of shell and wrote two .sh scripts so I don't have to redo it by hand every time.

The first is root_set.sh:

#!/usr/bin/expect
set password "orangepi"
spawn su root -c "/home/orangepi/NPU_run.sh"
expect "密码:"
send "$password\r"
interact

The second is NPU_run.sh:

#!/bin/bash
#sudo apt update
echo userspace > /sys/devices/system/cpu/cpufreq/policy0/scaling_governor
echo 1800000 > /sys/devices/system/cpu/cpufreq/policy0/scaling_setspeed
echo userspace > /sys/devices/system/cpu/cpufreq/policy4/scaling_governor
echo 2400000 > /sys/devices/system/cpu/cpufreq/policy4/scaling_setspeed
echo userspace > /sys/devices/system/cpu/cpufreq/policy6/scaling_governor
echo 2400000 > /sys/devices/system/cpu/cpufreq/polic6/scaling_setspeed
echo "CPU is done"
cat /sys/devices/system/cpu/cpufreq/policy0/cpuinfo_cur_freq
cat /sys/devices/system/cpu/cpufreq/policy4/cpuinfo_cur_freq
cat /sys/devices/system/cpu/cpufreq/policy6/cpuinfo_cur_freq

Invoking root_set.sh pins the CPU frequencies automatically, and the script prints the resulting frequencies so we can verify them.

After pinning the frequencies, the NPU reaches around 50 FPS.

The above is essentially transcribed from the blog post cited at the top; this is just a note for my own future use.
