Hardware: AMD Ryzen 3
Software: WSL Debian (installed from the Microsoft Store)
Introduction (from the official site):
Get up and running with large language models, locally.
Run Llama 2, Code Llama, and other models. Customize and create your own.
The following shows how to download and install Ollama on a Linux environment:

```shell
curl -fsSL https://ollama.com/install.sh | sh
```

The command above downloads the install script and runs it automatically; wait for the installation to finish.

Once installed, running `ollama` with no arguments prints the usage and available commands:
```shell
~$ ollama
Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve       Start ollama
  create      Create a model from a Modelfile
  show        Show information for a model
  run         Run a model
  pull        Pull a model from a registry
  push        Push a model to a registry
  list        List models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command

Flags:
  -h, --help      help for ollama
  -v, --version   Show version information

Use "ollama [command] --help" for more information about a command.
```
Before pulling any model, the Ollama service must be started:

```shell
ollama serve &
```
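By default, `ollama serve` listens on `127.0.0.1:11434`. A minimal sketch, using only the Python standard library, to check whether the server is reachable before pulling models (the port is Ollama's documented default; adjust if you changed it):

```python
import urllib.request
import urllib.error

OLLAMA_URL = "http://127.0.0.1:11434"  # default address for `ollama serve`

def is_up(base_url: str, timeout: float = 2.0) -> bool:
    """Return True if an HTTP server answers at base_url."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            # A running Ollama server responds 200 with "Ollama is running"
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    print("server up:", is_up(OLLAMA_URL))
```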
With the service running, download and run the Llama2-Chinese model:

```shell
ollama pull llama2-chinese
ollama run llama2-chinese
```
Running the model drops you into an interactive session; type a question to start chatting:

```shell
~$ ollama run llama2-chinese
[GIN] 2024/03/01 - 00:47:32 | 200 | 30.418µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/03/01 - 00:47:32 | 200 | 366.398µs | 127.0.0.1 | POST "/api/show"
[GIN] 2024/03/01 - 00:47:32 | 200 | 406.273µs | 127.0.0.1 | POST "/api/show"
[GIN] 2024/03/01 - 00:47:32 | 200 | 446.069µs | 127.0.0.1 | POST "/api/chat"
>>> Hello

Name: Hello
[GIN] 2024/03/01 - 00:47:41 | 200 | 1.65430645s | 127.0.0.1 | POST "/api/chat"


>>> Who are you?

Name: Don't you know? I am a language model, used to answer questions and provide information.
[GIN] 2024/03/01 - 00:48:03 | 200 | 11.945286677s | 127.0.0.1 | POST "/api/chat"


>>> Send a message (/? for help)
```
Press Ctrl+D to end the chat session.
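Besides the interactive CLI, the same model can be queried over Ollama's REST API, which is what the `[GIN] ... POST "/api/chat"` log lines above are showing. A sketch using only the standard library, assuming the server is running on the default port and the `llama2-chinese` model pulled above; `"stream": false` requests one complete JSON reply instead of chunked output:

```python
import json
import urllib.request

def build_chat_payload(model: str, prompt: str) -> dict:
    """Build a non-streaming request body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # one complete JSON response instead of chunks
    }

def chat(prompt: str, model: str = "llama2-chinese",
         base_url: str = "http://127.0.0.1:11434") -> str:
    """Send one user message and return the model's reply text."""
    body = json.dumps(build_chat_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        base_url + "/api/chat",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    return reply["message"]["content"]

if __name__ == "__main__":
    print(chat("Hello"))
```

This is handy for scripting question-and-answer runs without keeping an interactive terminal open.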