
YOLOv8 Object Detection and Instance Segmentation: Deploying the Detection Model with ONNX


1. Model conversion

ONNX Runtime is an open-source, high-performance inference engine for deploying and running machine learning models. It is designed to optimize the execution of models defined in the Open Neural Network Exchange (ONNX) format, an open standard for representing machine learning models. ONNX Runtime provides several key features and advantages:

a. Cross-platform compatibility: ONNX Runtime is designed to work with a wide range of hardware and operating systems, including Windows and Linux, as well as accelerators such as CPUs, GPUs, and FPGAs, making it easy to deploy and run machine learning models in different environments.

b. High performance: ONNX Runtime is optimized for efficient model execution and ships with platform-specific optimizations for different targets.

c. Multi-framework support: ONNX Runtime can run models created with different machine learning frameworks, including PyTorch and TensorFlow.

d. Model conversion: models from supported frameworks can be converted to the ONNX format, making them easier to deploy across a variety of scenarios.

e. Multi-language support: ONNX Runtime offers APIs in several programming languages, including C++, C#, and Python, so it fits development workflows in different languages.

f. Custom operators: ONNX Runtime supports custom operators, allowing developers to extend its functionality to support specific operations or hardware acceleration.

ONNX Runtime is widely used in production deployments of machine learning applications, including computer vision and natural language processing. It is actively maintained by the ONNX community and receives continuous updates and improvements; a minimal usage sketch follows.
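As a small sketch of the Python API (using the yolov8m.onnx file exported in section 3 below), creating a session and inspecting the model signature looks like this:

  import onnxruntime as ort

  # Create an inference session; onnxruntime picks from the available
  # execution providers (CPU by default, CUDA if onnxruntime-gpu is installed)
  session = ort.InferenceSession('yolov8m.onnx', providers=ort.get_available_providers())

  # Inspect the model's input and output signatures
  for inp in session.get_inputs():
      print(inp.name, inp.shape, inp.type)
  for out in session.get_outputs():
      print(out.name, out.shape, out.type)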

2. Differences between .pt and ONNX models

.pt and ONNX models are both common file formats for representing machine learning models. The main differences are:

a. File format:

pt model: the weight file format of the PyTorch framework, usually saved with a .pt or .pth extension; it contains the model's weight parameters and the definition of the model structure.

onnx model: a model file in the ONNX format, usually saved with a .onnx extension. An ONNX file is a framework-neutral representation, independent of any particular deep learning framework, used to convert and deploy models across frameworks.

b. Framework dependency:

pt model: depends on the PyTorch framework; loading and running it requires the PyTorch library, which limits the direct use of such models in other frameworks.

onnx model: independent of any deep learning framework; it can be loaded and run in any framework that supports ONNX, such as TensorFlow, Caffe2, and ONNX Runtime.

c. Cross-platform compatibility:

pt model: requires PyTorch compatibility to be configured on each target platform, which means extra work and dependency management.

onnx model: its framework independence makes it easier to deploy on different platforms and hardware, without worrying about framework dependencies. The short sketch below makes the loading difference concrete.
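A minimal sketch of the loading difference (the file names here are placeholders, not files from this article):

  import torch               # required only for the .pt checkpoint
  import onnxruntime as ort  # all that is needed for the .onnx model

  # .pt: needs the PyTorch runtime (and usually the original model code)
  checkpoint = torch.load('model.pt', map_location='cpu')

  # .onnx: framework-independent; the runtime loads it directly
  session = ort.InferenceSession('model.onnx', providers=['CPUExecutionProvider'])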

3. Converting the YOLOv8 .pt model to ONNX

To deploy a trained .pt model on a different framework or platform, convert it to the ONNX format using the export tooling:

  from ultralytics import YOLO

  # Load model
  model = YOLO('yolov8m.pt')

  # Export model to ONNX
  success = model.export(format="onnx")
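The same export is also available from the Ultralytics command line (assuming the ultralytics package is installed); the CLI form below should be equivalent to the Python call above:

  yolo export model=yolov8m.pt format=onnx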

4. Building the inference model

a. Environment configuration

ONNX model inference depends only on the onnxruntime library, and the image handling depends on OpenCV, so both must be installed; the demo code below additionally uses numpy and gradio. A quick sanity check follows the install commands.

  pip3 install onnxruntime
  pip3 install opencv-python
  pip3 install numpy
  pip3 install gradio
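A quick way to verify the installation, and to see which execution providers onnxruntime can actually use on this machine, is for example:

  python3 -c "import onnxruntime; print(onnxruntime.__version__, onnxruntime.get_available_providers())"
  python3 -c "import cv2; print(cv2.__version__)"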

b. Deployment code

utils.py

  import numpy as np
  import cv2

  class_names = ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
                 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
                 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
                 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
                 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
                 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
                 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard',
                 'cell phone', 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase',
                 'scissors', 'teddy bear', 'hair drier', 'toothbrush']

  # Create a list of colors for each class where each color is a tuple of 3 integer values
  rng = np.random.default_rng(3)
  colors = rng.uniform(0, 255, size=(len(class_names), 3))

  def nms(boxes, scores, iou_threshold):
      # Sort by score (highest first)
      sorted_indices = np.argsort(scores)[::-1]

      keep_boxes = []
      while sorted_indices.size > 0:
          # Pick the box with the highest remaining score
          box_id = sorted_indices[0]
          keep_boxes.append(box_id)

          # Compute IoU of the picked box with the rest
          ious = compute_iou(boxes[box_id, :], boxes[sorted_indices[1:], :])

          # Remove boxes with IoU over the threshold
          keep_indices = np.where(ious < iou_threshold)[0]
          sorted_indices = sorted_indices[keep_indices + 1]

      return keep_boxes

  def multiclass_nms(boxes, scores, class_ids, iou_threshold):
      unique_class_ids = np.unique(class_ids)

      keep_boxes = []
      for class_id in unique_class_ids:
          class_indices = np.where(class_ids == class_id)[0]
          class_boxes = boxes[class_indices, :]
          class_scores = scores[class_indices]

          class_keep_boxes = nms(class_boxes, class_scores, iou_threshold)
          keep_boxes.extend(class_indices[class_keep_boxes])

      return keep_boxes

  def compute_iou(box, boxes):
      # Compute xmin, ymin, xmax, ymax of the intersections
      xmin = np.maximum(box[0], boxes[:, 0])
      ymin = np.maximum(box[1], boxes[:, 1])
      xmax = np.minimum(box[2], boxes[:, 2])
      ymax = np.minimum(box[3], boxes[:, 3])

      # Compute intersection area
      intersection_area = np.maximum(0, xmax - xmin) * np.maximum(0, ymax - ymin)

      # Compute union area
      box_area = (box[2] - box[0]) * (box[3] - box[1])
      boxes_area = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
      union_area = box_area + boxes_area - intersection_area

      # Compute IoU
      iou = intersection_area / union_area
      return iou

  def xywh2xyxy(x):
      # Convert bounding box (x, y, w, h) to bounding box (x1, y1, x2, y2)
      y = np.copy(x)
      y[..., 0] = x[..., 0] - x[..., 2] / 2
      y[..., 1] = x[..., 1] - x[..., 3] / 2
      y[..., 2] = x[..., 0] + x[..., 2] / 2
      y[..., 3] = x[..., 1] + x[..., 3] / 2
      return y

  def draw_detections(image, boxes, scores, class_ids, mask_alpha=0.3):
      det_img = image.copy()

      img_height, img_width = image.shape[:2]
      font_size = min([img_height, img_width]) * 0.0006
      text_thickness = int(min([img_height, img_width]) * 0.001)

      det_img = draw_masks(det_img, boxes, class_ids, mask_alpha)

      # Draw bounding boxes and labels of detections
      for class_id, box, score in zip(class_ids, boxes, scores):
          color = colors[class_id]
          draw_box(det_img, box, color)

          label = class_names[class_id]
          caption = f'{label} {int(score * 100)}%'
          draw_text(det_img, caption, box, color, font_size, text_thickness)

      return det_img

  def detections_dog(image, boxes, scores, class_ids, mask_alpha=0.3):
      # Same as draw_detections, but without the semi-transparent mask fill
      det_img = image.copy()

      img_height, img_width = image.shape[:2]
      font_size = min([img_height, img_width]) * 0.0006
      text_thickness = int(min([img_height, img_width]) * 0.001)

      # Draw bounding boxes and labels of detections
      for class_id, box, score in zip(class_ids, boxes, scores):
          color = colors[class_id]
          draw_box(det_img, box, color)

          label = class_names[class_id]
          caption = f'{label} {int(score * 100)}%'
          draw_text(det_img, caption, box, color, font_size, text_thickness)

      return det_img

  def draw_box(image: np.ndarray, box: np.ndarray, color: tuple[int, int, int] = (0, 0, 255),
               thickness: int = 2) -> np.ndarray:
      x1, y1, x2, y2 = box.astype(int)
      return cv2.rectangle(image, (x1, y1), (x2, y2), color, thickness)

  def draw_text(image: np.ndarray, text: str, box: np.ndarray, color: tuple[int, int, int] = (0, 0, 255),
                font_size: float = 0.001, text_thickness: int = 2) -> np.ndarray:
      x1, y1, x2, y2 = box.astype(int)
      (tw, th), _ = cv2.getTextSize(text=text, fontFace=cv2.FONT_HERSHEY_SIMPLEX,
                                    fontScale=font_size, thickness=text_thickness)
      th = int(th * 1.2)

      # Filled label background, then the label text on top
      cv2.rectangle(image, (x1, y1), (x1 + tw, y1 - th), color, -1)
      return cv2.putText(image, text, (x1, y1), cv2.FONT_HERSHEY_SIMPLEX, font_size, (255, 255, 255), text_thickness, cv2.LINE_AA)

  def draw_masks(image: np.ndarray, boxes: np.ndarray, classes: np.ndarray, mask_alpha: float = 0.3) -> np.ndarray:
      mask_img = image.copy()

      # Draw a filled rectangle in the mask image for each detection
      for box, class_id in zip(boxes, classes):
          color = colors[class_id]
          x1, y1, x2, y2 = box.astype(int)
          cv2.rectangle(mask_img, (x1, y1), (x2, y2), color, -1)

      return cv2.addWeighted(mask_img, mask_alpha, image, 1 - mask_alpha, 0)
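As a quick sanity check of the helpers above (with made-up boxes rather than model output), nms should keep the higher-scoring of two heavily overlapping boxes and leave non-overlapping boxes alone:

  import numpy as np
  from detection.utils import nms  # assumes utils.py lives in a 'detection' package, as imported below

  boxes = np.array([[10, 10, 100, 100],
                    [12, 12, 102, 102],    # overlaps the first box almost completely
                    [200, 200, 260, 260]], dtype=np.float32)
  scores = np.array([0.9, 0.8, 0.7])

  print(nms(boxes, scores, iou_threshold=0.5))  # keeps boxes 0 and 2; box 1 is suppressed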

YOLODet.py

  import time
  import cv2
  import numpy as np
  import onnxruntime

  from detection.utils import xywh2xyxy, draw_detections, multiclass_nms, detections_dog

  class YOLODet:

      def __init__(self, path, conf_thresh=0.7, iou_thresh=0.5):
          self.conf_threshold = conf_thresh
          self.iou_threshold = iou_thresh

          # Initialize model
          self.initialize_model(path)

      def __call__(self, image):
          return self.detect_objects(image)

      def initialize_model(self, path):
          self.session = onnxruntime.InferenceSession(path, providers=onnxruntime.get_available_providers())
          # Get model info
          self.get_input_details()
          self.get_output_details()

      def detect_objects(self, image):
          input_tensor = self.prepare_input(image)

          # Perform inference on the image
          outputs = self.inference(input_tensor)

          self.boxes, self.scores, self.class_ids = self.process_output(outputs)
          return self.boxes, self.scores, self.class_ids

      def prepare_input(self, image):
          self.img_height, self.img_width = image.shape[:2]

          input_img = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

          # Resize input image
          input_img = cv2.resize(input_img, (self.input_width, self.input_height))

          # Scale input pixel values to 0 to 1
          input_img = input_img / 255.0
          input_img = input_img.transpose(2, 0, 1)
          input_tensor = input_img[np.newaxis, :, :, :].astype(np.float32)

          return input_tensor

      def inference(self, input_tensor):
          start = time.perf_counter()
          outputs = self.session.run(self.output_names, {self.input_names[0]: input_tensor})
          # print(f"inference time: {(time.perf_counter() - start)*1000:.2f} ms")
          return outputs

      def process_output(self, output):
          predictions = np.squeeze(output[0]).T

          # Filter out object confidence scores below threshold
          scores = np.max(predictions[:, 4:], axis=1)
          predictions = predictions[scores > self.conf_threshold, :]
          scores = scores[scores > self.conf_threshold]

          if len(scores) == 0:
              return [], [], []

          # Get the class with the highest confidence
          class_ids = np.argmax(predictions[:, 4:], axis=1)

          # Get bounding boxes for each object
          boxes = self.extract_boxes(predictions)

          # Apply non-maximum suppression to suppress weak, overlapping bounding boxes
          indices = multiclass_nms(boxes, scores, class_ids, self.iou_threshold)

          return boxes[indices], scores[indices], class_ids[indices]

      def extract_boxes(self, predictions):
          # Extract boxes from predictions
          boxes = predictions[:, :4]

          # Scale boxes to original image dimensions
          boxes = self.rescale_boxes(boxes)

          # Convert boxes to xyxy format
          boxes = xywh2xyxy(boxes)

          return boxes

      def rescale_boxes(self, boxes):
          # Rescale boxes to original image dimensions
          input_shape = np.array([self.input_width, self.input_height, self.input_width, self.input_height])
          boxes = np.divide(boxes, input_shape, dtype=np.float32)
          boxes *= np.array([self.img_width, self.img_height, self.img_width, self.img_height])
          return boxes

      def draw_detections(self, image, draw_scores=True, mask_alpha=0.4):
          return detections_dog(image, self.boxes, self.scores, self.class_ids, mask_alpha)

      def get_input_details(self):
          model_inputs = self.session.get_inputs()
          self.input_names = [model_inputs[i].name for i in range(len(model_inputs))]

          self.input_shape = model_inputs[0].shape
          self.input_height = self.input_shape[2]
          self.input_width = self.input_shape[3]

      def get_output_details(self):
          model_outputs = self.session.get_outputs()
          self.output_names = [model_outputs[i].name for i in range(len(model_outputs))]
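For orientation: the exported YOLOv8 detection model has a single output of shape (1, 84, 8400) for the standard 640x640 COCO models (84 = 4 box coordinates + 80 class scores, over 8400 candidate boxes), which is why process_output squeezes and transposes it to (8400, 84) before thresholding. A minimal end-to-end check, assuming yolov8m.onnx and a test image are present:

  import cv2
  from detection.YOLODet import YOLODet  # adjust to your package layout

  det = YOLODet('yolov8m.onnx', conf_thresh=0.5, iou_thresh=0.5)
  img = cv2.imread('test.jpg')  # placeholder image path
  boxes, scores, class_ids = det(img)
  print(f'{len(boxes)} objects detected')
  cv2.imwrite('result.jpg', det.draw_detections(img))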

5. Testing the model

Image test

  import cv2
  import numpy as np
  from detection import YOLODet
  import gradio as gr

  model = 'yolov8m.onnx'
  yolo_det = YOLODet(model, conf_thresh=0.5, iou_thresh=0.3)

  def det_img(cv_src):
      yolo_det(cv_src)
      cv_dst = yolo_det.draw_detections(cv_src)
      return cv_dst

  if __name__ == '__main__':
      input = gr.Image()
      output = gr.Image()
      demo = gr.Interface(fn=det_img, inputs=input, outputs=output)
      demo.launch()

Video inference

  def detect_video(input_path, model_path, output_path):
      cap = cv2.VideoCapture(input_path)
      fps = int(cap.get(cv2.CAP_PROP_FPS))
      t = int(1000 / fps)
      videoWriter = None

      det = YOLODet(model_path, conf_thresh=0.3, iou_thresh=0.5)

      while True:
          _, img = cap.read()
          if img is None:
              break

          det(img)
          cv_dst = det.draw_detections(img)

          if videoWriter is None:
              fourcc = cv2.VideoWriter_fourcc('m', 'p', '4', 'v')
              videoWriter = cv2.VideoWriter(output_path, fourcc, fps, (cv_dst.shape[1], cv_dst.shape[0]))

          videoWriter.write(cv_dst)
          cv2.imshow("detection", cv_dst)
          cv2.waitKey(t)

          # Stop when the preview window is closed
          if cv2.getWindowProperty("detection", cv2.WND_PROP_AUTOSIZE) < 1:
              break

      cap.release()
      if videoWriter is not None:
          videoWriter.release()
      cv2.destroyAllWindows()
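Called, for example, like this (the paths are placeholders):

  if __name__ == '__main__':
      detect_video('input.mp4', 'yolov8m.onnx', 'output.mp4')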
