Calling Domestic LLM APIs in Practice

In this post we take a concrete look at how to call the APIs of the major domestic LLM vendors.

1. APIs

  • DeepSeek
    DeepSeek can be called either directly via requests or through an OpenAI-style interface; newly registered users get 5 million free tokens.

      # Option 1: call the REST endpoint directly with requests
      import requests
      import json

      url = "https://api.deepseek.com/chat/completions"

      payload = json.dumps({
          "messages": [
              {"role": "system", "content": "You are a helpful assistant"},
              {"role": "user", "content": "Hi"}
          ],
          "model": "deepseek-coder",
          "frequency_penalty": 0,
          "max_tokens": 2048,
          "presence_penalty": 0,
          "stop": None,
          "stream": False,
          "temperature": 1,
          "top_p": 1,
          "logprobs": False,
          "top_logprobs": None
      })
      headers = {
          'Content-Type': 'application/json',
          'Accept': 'application/json',
          'Authorization': 'Bearer <key>'
      }
      response = requests.request("POST", url, headers=headers, data=payload)

      print(response.text)

      # Option 2: use the OpenAI SDK against DeepSeek's OpenAI-compatible endpoint
      from openai import OpenAI
      # for backward compatibility, you can still use `https://api.deepseek.com/v1` as `base_url`.
      client = OpenAI(api_key="<your api key>", base_url="https://api.deepseek.com")

      response = client.chat.completions.create(
          model="deepseek-chat",
          messages=[
              {"role": "system", "content": "You are a helpful assistant"},
              {"role": "user", "content": "Hello"},
          ],
          max_tokens=1024,
          temperature=0.7,
          stream=False
      )

      print(response.choices[0].message.content)
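    If you call the REST endpoint directly, the reply comes back as a JSON string rather than a parsed object. Below is a minimal sketch of pulling out just the assistant message from the first example, assuming the endpoint returns the OpenAI-style chat completion schema (as the SDK example implies):

      # Re-issue the raw request from the first example and parse the JSON body;
      # the reply text sits under choices[0].message.content.
      resp = requests.request("POST", url, headers=headers, data=payload)
      data = resp.json()
      print(data["choices"][0]["message"]["content"])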
    
  • Zhipu AI (智谱AI)
    Requires the zhipuai package; newly registered users receive complimentary tokens.

    • pip install zhipuai

    • Python>=3.7

      from zhipuai import ZhipuAI

      client = ZhipuAI(api_key="<your api key>")  # fill in your own API key
      response = client.chat.completions.create(
          model="glm-4",  # name of the model to call
          messages=[
              {"role": "user", "content": "你好!你叫什么名字"},
          ],
          stream=True,
      )
      for chunk in response:
          print(chunk.choices[0].delta)
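    Instead of printing each raw delta object as above, you would usually stitch the increments together into the full reply. A minimal sketch, assuming the zhipuai stream chunks expose the text under `delta.content` in the OpenAI-style layout:

      # Accumulate the streamed increments into one string instead of
      # printing each delta object (replaces the print loop above).
      reply = ""
      for chunk in response:
          delta = chunk.choices[0].delta
          if delta.content:  # some chunks may carry no text
              reply += delta.content
      print(reply)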
      
  • Kimi
    Kimi also supports OpenAI-style calls; newly registered users receive a ¥15 credit.

    • pip install --upgrade 'openai>=1.0'

      from openai import OpenAI

      client = OpenAI(
          api_key="<your api key>",
          base_url="https://api.moonshot.cn/v1",
      )

      completion = client.chat.completions.create(
          model="moonshot-v1-8k",
          messages=[
              {"role": "system", "content": "你是 Kimi,由 Moonshot AI 提供的人工智能助手,你更擅长中文和英文的对话。你会为用户提供安全,有帮助,准确的回答。同时,你会拒绝一切涉及恐怖主义,种族歧视,黄色暴力等问题的回答。Moonshot AI 为专有名词,不可翻译成其他语言。"},
              {"role": "user", "content": "你好,我叫李雷,1+1等于多少?"}
          ],
          temperature=0.3,
      )

      print(completion.choices[0].message.content)
      
  • ByteDance Doubao
    Doubao requires the Volcengine Ark SDK.

    • pip install 'volcengine-python-sdk[ark]'

      from volcenginesdkarkruntime import Ark

      # Authentication
      # 1. If you authorize your endpoint with an API key, set it in the environment variable "ARK_API_KEY"
      #    or pass it explicitly with Ark(api_key="${YOUR_API_KEY}").
      #    Note: an API key is not refreshed automatically. To prevent calls from failing after some time,
      #    choose an API key with no expiration date.
      # 2. If you authorize your endpoint with Volcengine Identity and Access Management (IAM), set the
      #    environment variables "VOLC_ACCESSKEY" and "VOLC_SECRETKEY", or pass them with Ark(ak="${YOUR_AK}", sk="${YOUR_SK}").
      #    To get your AK/SK, refer to https://www.volcengine.com/docs/6291/65568
      #    For more information, see https://www.volcengine.com/docs/82379/1263279
      client = Ark()

      # Non-streaming:
      print("----- standard request -----")
      completion = client.chat.completions.create(
          model="${YOUR_ENDPOINT_ID}",
          messages=[
              {"role": "system", "content": "你是豆包,是由字节跳动开发的 AI 人工智能助手"},
              {"role": "user", "content": "常见的十字花科植物有哪些?"},
          ],
      )
      print(completion.choices[0].message.content)

      # Streaming:
      print("----- streaming request -----")
      stream = client.chat.completions.create(
          model="${YOUR_ENDPOINT_ID}",
          messages=[
              {"role": "system", "content": "你是豆包,是由字节跳动开发的 AI 人工智能助手"},
              {"role": "user", "content": "常见的十字花科植物有哪些?"},
          ],
          stream=True
      )
      for chunk in stream:
          if not chunk.choices:
              continue
          print(chunk.choices[0].delta.content, end="")
      print()
      
  • iFlytek Spark (讯飞星火)
    iFlytek Spark gives newly registered users a free quota, which differs by model: Spark 3.5 comes with 100,000 tokens, while Spark 2.0 comes with 2,000,000 tokens valid for one year. The interface uses WebSocket.

    • pip install --upgrade spark_ai_python

    • Python >= 3.8

      from sparkai.llm.llm import ChatSparkLLM, ChunkPrintHandler
      from sparkai.core.messages import ChatMessage

      # URL for Spark 3.5 Max; for other model versions, see https://www.xfyun.cn/doc/spark/Web.html
      SPARKAI_URL = 'wss://spark-api.xf-yun.com/v3.5/chat'
      # Spark API credentials; find them in the iFlytek open platform console: https://console.xfyun.cn/services/bm35
      SPARKAI_APP_ID = ''
      SPARKAI_API_SECRET = ''
      SPARKAI_API_KEY = ''
      # domain value for Spark 3.5 Max; for other model versions, see https://www.xfyun.cn/doc/spark/Web.html
      SPARKAI_DOMAIN = 'generalv3.5'

      if __name__ == '__main__':
          spark = ChatSparkLLM(
              spark_api_url=SPARKAI_URL,
              spark_app_id=SPARKAI_APP_ID,
              spark_api_key=SPARKAI_API_KEY,
              spark_api_secret=SPARKAI_API_SECRET,
              spark_llm_domain=SPARKAI_DOMAIN,
              streaming=False,
          )
          messages = [ChatMessage(
              role="user",
              content='你好呀'
          )]
          handler = ChunkPrintHandler()
          a = spark.generate([messages], callbacks=[handler])
          print(a)
      

2. API usage summary

As you can see, these APIs work in much the same way, and several of them are directly compatible with the OpenAI calling format:

  • You need to register and obtain an API key for authorization.
  • The endpoints generally support streaming (stream) output.
  • They all follow the typical chat pattern with distinct roles (system, user, assistant); for multi-turn conversations, the earlier turns are appended to the message history, as shown in the sketch after this list.
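For the multi-turn point, here is a minimal sketch using the OpenAI-compatible interface. DeepSeek's endpoint from section 1 is assumed; the key placeholder, model name, and prompts are illustrative, and the same pattern works for the other OpenAI-compatible vendors. The assistant's reply is appended to the message list before the next user turn is sent.

    from openai import OpenAI

    # Assumed: DeepSeek's OpenAI-compatible endpoint from section 1; swap in any vendor above.
    client = OpenAI(api_key="<your api key>", base_url="https://api.deepseek.com")

    # The running conversation: a system prompt plus alternating user/assistant turns.
    messages = [
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "What is the capital of France?"},
    ]

    first = client.chat.completions.create(model="deepseek-chat", messages=messages)
    answer = first.choices[0].message.content
    print(answer)

    # Append the assistant's reply and the follow-up question, then call again
    # with the full history so the model sees the earlier turns.
    messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": "And roughly how many people live there?"})

    second = client.chat.completions.create(model="deepseek-chat", messages=messages)
    print(second.choices[0].message.content)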

Go give them a try.

