
Complete tutorial: deploying and running a local LLM on Linux with Ollama (OpenAI-compatible API, Llama 2 example)

# Install the required packages

```shell
# Linux install
curl -fsSL https://ollama.com/install.sh | sh
pip install ollama
```

# Start the Ollama server

```shell
$ ollama serve
```
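Before moving on, you can check that the server is actually listening. This is a minimal sketch (the helper name `ollama_is_up` is my own, not part of the `ollama` package) that probes the root endpoint, which responds with HTTP 200 and "Ollama is running" when the server is up:

```python
# Hypothetical helper (not part of the ollama package): probe the local server.
import urllib.request
import urllib.error

def ollama_is_up(base_url="http://localhost:11434", timeout=2.0):
    """Return True if an Ollama server answers at base_url."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            # The root endpoint returns HTTP 200 when the server is running.
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

print(ollama_is_up())  # False unless `ollama serve` is running
```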

# Run the Llama 2 model (open a new terminal)

```shell
# Enable AutoDL's network accelerator (skip this on other platforms)
$ source /etc/network_turbo
$ ollama run llama2-uncensored:7b-chat-q6_K
```

# To download the model without running it

```shell
# Pull the model only
$ ollama pull llama2-uncensored:7b-chat-q6_K
```

Once the model has started, you can chat with it.

# Chatting via the Python API

```python
import ollama

response = ollama.chat(model='llama2', messages=[
    {
        'role': 'user',
        'content': 'Why is the sky blue?',
    },
])
print(response['message']['content'])
```
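Note that `ollama.chat` is stateless: for a multi-turn conversation you resend the whole message list on each call. A minimal sketch of that pattern (the `add_turn` helper is my own, not part of the library):

```python
# Sketch of multi-turn chat state; add_turn is a hypothetical helper.
def add_turn(history, role, content):
    """Append one message dict in the shape ollama.chat expects."""
    history.append({"role": role, "content": content})
    return history

history = []
add_turn(history, "user", "Why is the sky blue?")
# reply = ollama.chat(model="llama2", messages=history)  # needs `ollama serve`
# add_turn(history, "assistant", reply["message"]["content"])
print(history)
```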

# Chatting via the OpenAI-compatible API

```python
from openai import OpenAI

client = OpenAI(
    base_url='http://localhost:11434/v1',
    api_key='ollama',  # required, but unused
)
response = client.chat.completions.create(
    model="llama2",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The LA Dodgers won in 2020."},
        {"role": "user", "content": "Where was it played?"}
    ]
)
print(response.choices[0].message.content)
```

# Streaming via cURL

```shell
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'
```
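By default `/api/generate` streams its reply as newline-delimited JSON objects, each carrying a `response` fragment and a `done` flag. A sketch of reassembling the full answer from such a stream (the sample input below is fabricated for illustration, not real model output):

```python
# Reassemble a streamed Ollama reply from newline-delimited JSON chunks.
import json

def join_stream(ndjson_text):
    """Concatenate the 'response' fragments from a streamed reply."""
    parts = []
    for line in ndjson_text.strip().splitlines():
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):  # final chunk carries done: true
            break
    return "".join(parts)

# Fabricated sample of the stream format for illustration:
sample = ('{"response": "The sky", "done": false}\n'
          '{"response": " is blue.", "done": true}')
print(join_stream(sample))  # → The sky is blue.
```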

# References

- [llama2 (ollama.com)](https://ollama.com/library/llama2)
- [OpenAI compatibility · Ollama Blog](https://ollama.com/blog/openai-compatibility)
