
【Deep Learning】InternVL2-8B, image-to-text, Docker deployment

Basics

https://huggingface.co/OpenGVLab/InternVL2-8B#%E7%AE%80%E4%BB%8B

InternVL2-26B would presumably perform better, but my GPU does not have enough memory for it, so I can only run InternVL2-8B.

Download the model:

cd /ssd/xiedong/InternVL2-26B
git clone https://huggingface.co/OpenGVLab/InternVL2-8B
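One pitfall with `git clone` on Hugging Face repos: if git-lfs is not installed, the weight files come down as tiny pointer stubs rather than real weights. A hedged sanity check before launching the container (my own helper, assuming the usual safetensors/bin layout; not part of the original post):

```python
from pathlib import Path

def model_dir_ready(model_dir: str) -> bool:
    """Heuristic check that a Hugging Face model directory was fully
    downloaded: config.json is present and at least one weight shard is
    larger than a git-lfs pointer stub (stubs are ~130 bytes of text)."""
    d = Path(model_dir)
    if not (d / "config.json").is_file():
        return False
    shards = list(d.glob("*.safetensors")) + list(d.glob("*.bin"))
    return any(f.stat().st_size > 1024 for f in shards)
```

If this returns False after cloning, run `git lfs install` and `git lfs pull` inside the repo.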

Start the Docker container:

docker run -it -v /ssd/xiedong/InternVL2-26B:/ssd/xiedong/InternVL2-26B --gpus device=3 -p 7895:7860 kevinchina/deeplearning:pytorch2.3.0-cuda12.1-cudnn8-devel-InternVL2 bash

Inside the container, change into the mounted directory:

cd /ssd/xiedong/InternVL2-26B

Run this Python code:

from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image

model = '/ssd/xiedong/InternVL2-26B/InternVL2-8B'
system_prompt = 'Describe this image in English with no more than 50 words.'
# load_image accepts either a URL or a local file path
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
# InternVL2-8B uses the internvl-internlm2 chat template
chat_template_config = ChatTemplateConfig('internvl-internlm2')
chat_template_config.meta_instruction = system_prompt
pipe = pipeline(model, chat_template_config=chat_template_config,
                backend_config=TurbomindEngineConfig(session_len=8192))
response = pipe(('Describe this image in English with no more than 50 words.', image))
print(response.text)

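The prompt asks for at most 50 words, but prompt-level length limits are soft and the model may overshoot. If a hard cap matters downstream, a small post-processing helper (my own addition, not part of the original code) can enforce it:

```python
def clamp_words(text: str, max_words: int = 50) -> str:
    """Trim a caption to at most max_words whitespace-delimited words."""
    words = text.split()
    if len(words) <= max_words:
        return text
    return " ".join(words[:max_words])
```

For example, `clamp_words(response.text)` would guarantee the stored caption never exceeds 50 words.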

Or copy an image into the directory and run this variant, which loads a local file:

from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image

model = '/ssd/xiedong/InternVL2-26B/InternVL2-8B'
system_prompt = 'Describe this image in English with no more than 50 words.'
image = load_image('/ssd/xiedong/InternVL2-26B/000030982.jpg')
chat_template_config = ChatTemplateConfig('internvl-internlm2')
chat_template_config.meta_instruction = system_prompt
pipe = pipeline(model, chat_template_config=chat_template_config,
                backend_config=TurbomindEngineConfig(session_len=8192))
response = pipe(('Describe this image in English with no more than 50 words.', image))
print(response.text)

FastAPI service

Running the following code starts a FastAPI service; container port 7860 is mapped to host port 7895 by the docker run command above.

from fastapi import FastAPI, File, UploadFile
from fastapi.responses import JSONResponse
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image
from PIL import Image
import io

app = FastAPI()

model = '/ssd/xiedong/InternVL2-26B/InternVL2-8B'
system_prompt = 'Describe this image in English with no more than 50 words.'
chat_template_config = ChatTemplateConfig('internvl-internlm2')
chat_template_config.meta_instruction = system_prompt
pipe = pipeline(model, chat_template_config=chat_template_config,
                backend_config=TurbomindEngineConfig(session_len=8192))

@app.post("/describe-image")
def describe_image(file: UploadFile = File(...)):
    try:
        # Convert the uploaded bytes into a Pillow image object
        image = Image.open(io.BytesIO(file.file.read()))

        # Load the image via lmdeploy's load_image helper
        loaded_image = load_image(image)

        # Run the model on the image
        response = pipe(('Describe this image in English with no more than 50 words, just need to output a captioning of the image.', loaded_image))

        # Return the generated description
        return JSONResponse(content={"description": response.text})

    except Exception as e:
        return JSONResponse(content={"error": str(e)}, status_code=500)
    finally:
        file.file.close()

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=7860)


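The handler above decodes the upload with `Image.open`. One caveat worth noting: PNG uploads may arrive as RGBA or palette images, while vision-language preprocessors generally expect 3-channel input. Normalizing to RGB before passing the image on is a safe extra step (a sketch of my own, not in the original service):

```python
import io
from PIL import Image

def bytes_to_rgb(data: bytes) -> Image.Image:
    """Decode uploaded image bytes and normalize to 3-channel RGB,
    so RGBA or palette PNGs do not trip up downstream preprocessing."""
    img = Image.open(io.BytesIO(data))
    return img.convert("RGB")
```

In the endpoint this would replace the bare `Image.open(...)` call, e.g. `image = bytes_to_rgb(file.file.read())`.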

Calling the FastAPI endpoint

import requests

url = "http://10.136.19.26:7895/describe-image"

# Path of the image file to upload
file_path = "output_image.png"

# Open the file and send a POST request
with open(file_path, "rb") as file:
    files = {"file": file}
    response = requests.post(url, files=files)

# Check the response and print the result
if response.status_code == 200:
    print("Description:", response.json().get("description"))
else:
    print("Error:", response.json().get("error"))

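For large photos, upload size can dominate request latency. A client-side downscale before posting (my own sketch; the 1024-pixel cap and JPEG quality 90 are arbitrary choices whose effect on caption quality is worth validating) keeps the payload small:

```python
import io
from PIL import Image

def downscale_for_upload(img: Image.Image, max_side: int = 1024,
                         quality: int = 90) -> bytes:
    """Re-encode an image as JPEG with its longer side capped at max_side.
    thumbnail() preserves aspect ratio and never upscales."""
    img = img.convert("RGB")            # JPEG cannot store alpha
    img.thumbnail((max_side, max_side))
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.getvalue()
```

The client would then post the compressed bytes instead of the raw file, e.g. `files = {"file": ("small.jpg", downscale_for_upload(Image.open(file_path)))}`.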

