The previous chapter implemented the YOLOv3 model; next we will deploy it with FastAPI. The code is as follows.
First, create a folder to hold the annotated images returned by the prediction endpoint:
import os

dir_name = "images_uploaded"
if not os.path.exists(dir_name):
    os.mkdir(dir_name)
Next, implement the deployment code itself:
import io
import os
import uvicorn
import numpy as np
import nest_asyncio
from enum import Enum
from fastapi import FastAPI, UploadFile, File, HTTPException
from fastapi.responses import StreamingResponse
import cv2
import cvlib as cv
from cvlib.object_detection import draw_bbox


# Assign an instance of the FastAPI class to the variable "app".
# You will interact with your api using this instance.
app = FastAPI(title='Deploying a ML Model with FastAPI: It finally works!!!')


# List available models using Enum for convenience. This is useful when the options are pre-defined.
class Model(str, Enum):
    yolov3tiny = "yolov3-tiny"
    yolov3 = "yolov3"


# By using @app.get("/") you are allowing the GET method to work for the / endpoint.
@app.get("/")
def home():
    return "Congratulations! Your API is working as expected. Now head over to http://localhost:8000/docs."


# This endpoint handles all the logic necessary for the object detection to work.
# It requires the desired model and the image in which to perform object detection.
@app.post("/predict")
def prediction(model: Model, file: UploadFile = File(...)):
    # 1. VALIDATE INPUT FILE
    filename = file.filename
    fileExtension = filename.split(".")[-1] in ("jpg", "jpeg", "png")
    if not fileExtension:
        raise HTTPException(status_code=415, detail="Unsupported file provided.")

    # 2. TRANSFORM RAW IMAGE INTO CV2 image

    # Read image as a stream of bytes
    image_stream = io.BytesIO(file.file.read())

    # Start the stream from the beginning (position zero)
    image_stream.seek(0)

    # Write the stream of bytes into a numpy array
    file_bytes = np.asarray(bytearray(image_stream.read()), dtype=np.uint8)

    # Decode the numpy array as an image
    image = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)

    # 3. RUN OBJECT DETECTION MODEL

    # Run object detection
    bbox, label, conf = cv.detect_common_objects(image, model=model)

    # Create image that includes bounding boxes and labels
    output_image = draw_bbox(image, bbox, label, conf)

    # Save it in a folder within the server
    cv2.imwrite(f'images_uploaded/{filename}', output_image)

    # 4. STREAM THE RESPONSE BACK TO THE CLIENT

    # Open the saved image for reading in binary mode
    file_image = open(f'images_uploaded/{filename}', mode="rb")

    # Return the image as a stream specifying media type
    return StreamingResponse(file_image, media_type="image/jpeg")


# Allows the server to be run in this interactive environment
nest_asyncio.apply()

# Host depends on the setup you selected (docker or virtual env)
host = "0.0.0.0" if os.getenv("DOCKER-SETUP") else "127.0.0.1"

# Spin up the server!
uvicorn.run(app, host=host, port=8000)
After running it, the server starts up and prints the address it is listening on. Clicking that link returns the welcome message from the home endpoint.
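If you prefer to verify from Python rather than the browser, here is a minimal sketch of a health check using the `requests` package (an illustration, not part of the original code; it assumes `requests` is installed and the server above is running on the default 127.0.0.1:8000):

import requests

# Quick health check against the home endpoint
response = requests.get("http://127.0.0.1:8000/")
print(response.status_code)  # expected: 200
print(response.json())       # the welcome string returned by home()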
Next, open http://localhost:8000/docs, click on the /predict endpoint, and then click Try it out.
Choose an image to upload and click Execute. After a successful request, the annotated result can be found in the images_uploaded folder.
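The Swagger UI is convenient for manual testing, but the /predict endpoint can also be called programmatically. Below is a minimal client sketch using the `requests` package; the file name sample.jpg and the output name prediction_result.jpg are hypothetical placeholders, and the server is assumed to be running on 127.0.0.1:8000.

import requests

url = "http://127.0.0.1:8000/predict"
params = {"model": "yolov3-tiny"}      # "model" is a query parameter; "yolov3" also works
image_path = "sample.jpg"              # hypothetical test image, replace with your own

# Send the image as multipart form data under the "file" field expected by the endpoint
with open(image_path, "rb") as f:
    files = {"file": (image_path, f, "image/jpeg")}
    response = requests.post(url, params=params, files=files)

print(response.status_code)            # expected: 200

# The endpoint streams the annotated image back; write it to disk for inspection
with open("prediction_result.jpg", "wb") as out:
    out.write(response.content)

As with the Swagger UI workflow, the annotated copy is also saved server-side in the images_uploaded folder.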