The OpenAI API calls were switched over to Ollama, which broadens the range of models the code can use. See the self-cognition-instuctions project for details; the API-based calling approach is kept.
First, install Ollama (Linux):
curl -fsSL https://ollama.com/install.sh | sh
Pull the model you want through Ollama; qwen1.5-32b is used as the example here:
ollama pull qwen:32b
Start the Ollama server:
ollama serve
Install the ollama Python package, which the API calls below rely on:
pip install ollama
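The ollama package exposes a `chat()` call that mirrors the OpenAI one; a minimal sketch of a single-turn call (the model name and prompt are placeholders, and `ollama serve` must already be running):

```python
def ask(prompt, model="qwen:32b"):
    """Send one user message to a local Ollama model and return its reply."""
    # Imported inside the function so this sketch can be loaded even on a
    # machine without the ollama package or a running server.
    import ollama

    response = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response["message"]["content"]

# Example (requires a running Ollama server with the model pulled):
# print(ask("你是谁?"))
```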
Original code:
import traceback

import openai
import yaml
from template.prompts import prompt_template
from template.questions import questions
from tqdm import tqdm

CONFIG = yaml.load(open("./config.yml", "r", encoding="utf-8"), Loader=yaml.FullLoader)

openai.api_base = CONFIG["openai"]["api_url"]
openai.api_key = CONFIG["openai"]["api_key"]


def main():
    samples = []
    max_samples = CONFIG["data"]["num_samples"]
    pbar = tqdm(total=max_samples, desc="Generating self cognition data")

    while True:
        exit_flag = False
        for question in questions:
            prompt = prompt_template.format(
                name=CONFIG["about"]["name"],
                company=CONFIG["about"]["company"],
                version=CONFIG["about"]["version"],
                date=CONFIG["about"]["date"],
                description=CONFIG["about"]["description"],
                ability=CONFIG["about"]["ability"],
                limitation=CONFIG["about"]["limitation"],
                author=CONFIG["about"]["author"],
                user_input=question,
                role=CONFIG["about"]["role"],
            )
            try:
                chat_completion = openai.ChatCompletion.create(
                    model=CONFIG["openai"]["model"],
                    messages=[{"role": "user", "content": prompt}],
                )
                sample = chat_completion.choices[0].message.content
                json_sample = eval(sample)
                samples.append(json_sample)
Modified code:
import ollama  # use the ollama client instead of openai
import os
import yaml
import json
import time
from tqdm import tqdm
import traceback

from template.prompts import prompt_template
from template.questions import questions


CONFIG = yaml.load(open("./config.yml", "r", encoding="utf-8"), Loader=yaml.FullLoader)


def main():
    samples = []
    max_samples = CONFIG["data"]["num_samples"]
    pbar = tqdm(total=max_samples, desc="Generating self cognition data")

    while True:
        exit_flag = False
        for question in questions:
            prompt = prompt_template.format(
                name=CONFIG["about"]["name"],
                company=CONFIG["about"]["company"],
                version=CONFIG["about"]["version"],
                date=CONFIG["about"]["date"],
                description=CONFIG["about"]["description"],
                ability=CONFIG["about"]["ability"],
                limitation=CONFIG["about"]["limitation"],
                author=CONFIG["about"]["author"],
                user_input=question,
                role=CONFIG["about"]["role"],
            )
            try:
                response = ollama.chat(model='qwen:32b', messages=[{"role": "user", "content": prompt}])  # call qwen1.5-32b
                sample = response['message']['content']
                # print(sample)
                # The model often wraps its output in ```json / ``` fences,
                # which breaks parsing, so strip them first.
                sample = sample.replace("```json", "").replace("```", "")
                # print(sample)
                json_sample = eval(sample)
                samples.append(json_sample)
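The fence-stripping step above can be exercised on its own. A minimal sketch with a made-up model reply (the JSON content is hypothetical, and the fence markers are built with `"`" * 3` so the snippet stays copy-paste safe):

```python
import json

# A hypothetical model reply wrapped in a Markdown code fence, as the
# model often produces.
raw = "`" * 3 + 'json\n{"instruction": "你是谁?", "output": "我是一个AI助手。"}\n' + "`" * 3

# Strip the fence markers, then parse. json.loads is a safer alternative
# to eval() for untrusted model output.
cleaned = raw.replace("`" * 3 + "json", "").replace("`" * 3, "").strip()
record = json.loads(cleaned)
print(record["instruction"])  # → 你是谁?
```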