The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide variety of hardware - locally and in the cloud.
Supported platforms:
Mac OS
Linux
Windows (via CMake)
Docker
FreeBSD
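To illustrate the "minimal setup" claim, a quickstart might look like the following sketch. The CMake invocation and the `llama-cli` binary name reflect the project's current build layout, but the model path and prompt are placeholders, and flags can vary between releases:

```shell
# Clone and build with CMake (the build path used on Linux, macOS, and Windows)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

# Run inference with a local GGUF model file
# (path/to/model.gguf is a placeholder for a model you have downloaded)
./build/bin/llama-cli -m path/to/model.gguf -p "Hello, world"
```

No external runtime or Python environment is required for this path; the resulting binaries run the model directly on local hardware.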
Supported models:
LLaMA