
Running ollama @focal jammy Ubuntu @FreeBSD jail: error: could not connect to ollama app, is it running?

error: could not connect to ollama app, is it running?

For building and installing ollama, see:

尝试FreeBSD下安装ollama-CSDN博客

ollama编译安装@focal jammy Ubuntu @FreeBSD jail-CSDN博客

Run the model:

    ./ollama run phi3
    Error: could not connect to ollama app, is it running?

The command fails because the ollama server has to be started first:

    ./ollama serve

With the server running, ./ollama run phi3 works as expected.
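Before running a model it can help to verify that the server is actually listening. A minimal sketch, assuming the default ollama HTTP endpoint on http://127.0.0.1:11434; the helper name `ollama_is_up` is hypothetical:

```python
import urllib.request
import urllib.error

def ollama_is_up(base_url="http://127.0.0.1:11434", timeout=2.0):
    """Return True if an ollama server answers at base_url.

    ollama's HTTP endpoint replies with "Ollama is running" on GET /;
    any successful HTTP response is treated as "up" here.
    """
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    if ollama_is_up():
        print("server is up; ./ollama run phi3 should work")
    else:
        print("start it first: ./ollama serve")
```

Running this before `./ollama run phi3` distinguishes "server not started" from other failures.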

ollama models

Here are some example models that can be downloaded:

| Model              | Parameters | Size  | Download                       |
| ------------------ | ---------- | ----- | ------------------------------ |
| Llama 3            | 8B         | 4.7GB | ollama run llama3              |
| Llama 3            | 70B        | 40GB  | ollama run llama3:70b          |
| Phi 3 Mini         | 3.8B       | 2.3GB | ollama run phi3                |
| Phi 3 Medium       | 14B        | 7.9GB | ollama run phi3:medium         |
| Gemma 2            | 9B         | 5.5GB | ollama run gemma2              |
| Gemma 2            | 27B        | 16GB  | ollama run gemma2:27b          |
| Mistral            | 7B         | 4.1GB | ollama run mistral             |
| Moondream 2        | 1.4B       | 829MB | ollama run moondream           |
| Neural Chat        | 7B         | 4.1GB | ollama run neural-chat         |
| Starling           | 7B         | 4.1GB | ollama run starling-lm         |
| Code Llama         | 7B         | 3.8GB | ollama run codellama           |
| Llama 2 Uncensored | 7B         | 3.8GB | ollama run llama2-uncensored   |
| LLaVA              | 7B         | 4.5GB | ollama run llava               |
| Solar              | 10.7B      | 6.1GB | ollama run solar               |

Note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
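The RAM guidance above can be written as a lookup. A small sketch of that rule only; the helper name `min_ram_gb` is hypothetical and the thresholds come straight from the note:

```python
def min_ram_gb(param_billions):
    """Minimum RAM suggested by the note above:
    8 GB for models up to 7B, 16 GB up to 13B, 32 GB up to 33B."""
    if param_billions <= 7:
        return 8
    if param_billions <= 13:
        return 16
    if param_billions <= 33:
        return 32
    raise ValueError("no guidance above 33B in the note")

print(min_ram_gb(3.8))  # phi3 (3.8B) fits in the 8 GB tier
```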


Customize a model

Import from GGUF

Ollama supports importing GGUF models in the Modelfile:

  1. Create a file named Modelfile, with a FROM instruction with the local filepath to the model you want to import.

    FROM ./vicuna-33b.Q4_0.gguf
    
  2. Create the model in Ollama

    ollama create example -f Modelfile
    
  3. Run the model

    ollama run example
    
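The import steps above can be scripted. A minimal sketch that only writes the Modelfile; the GGUF path and the helper name `write_modelfile` are from the example, not a fixed API:

```python
from pathlib import Path

def write_modelfile(gguf_path, dest="Modelfile"):
    """Write a minimal Modelfile whose FROM points at a local GGUF file."""
    text = f"FROM {gguf_path}\n"
    Path(dest).write_text(text)
    return text

# After this, create and run the model as in the steps above:
#   ollama create example -f Modelfile
#   ollama run example
write_modelfile("./vicuna-33b.Q4_0.gguf")
```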

Import from PyTorch or Safetensors

See the guide on importing models for more information.

Customize a prompt

Models from the Ollama library can be customized with a prompt. For example, to customize the llama3 model:

ollama pull llama3

Create a Modelfile:

    FROM llama3
    # set the temperature to 1 [higher is more creative, lower is more coherent]
    PARAMETER temperature 1
    # set the system message
    SYSTEM """
    You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
    """

Next, create and run the model:

    ollama create mario -f ./Modelfile
    ollama run mario
    >>> hi
    Hello! It's your friend Mario.
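The same customization can also be applied per request over ollama's HTTP API instead of baking it into a Modelfile. A sketch that only builds the request body, assuming the default POST /api/generate endpoint and a running server; `generate_payload` is a hypothetical helper:

```python
import json

def generate_payload(model, prompt, system=None, stream=False):
    """Build the JSON body for ollama's POST /api/generate endpoint.

    The system argument plays the same role as SYSTEM in the Modelfile.
    """
    body = {"model": model, "prompt": prompt, "stream": stream}
    if system is not None:
        body["system"] = system
    return json.dumps(body)

payload = generate_payload(
    "llama3",
    "hi",
    system="You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.",
)
# Send it with e.g.:
#   curl http://127.0.0.1:11434/api/generate -d '<payload>'
```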

For more examples, see the examples directory. For more information on working with a Modelfile, see the Modelfile documentation.
