
llama-cpp-python installation bug: error: subprocess-exited-with-error, scikit-build-core 0.8.2 using CMake 3.28.3 (wheel)


1 Bug details

System
Linux, Python 3.10
Install command
pip install llama-cpp-python

Error message
Building wheel for llama-cpp-python (pyproject.toml) ... error
  error: subprocess-exited-with-error
  
  × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [325 lines of output]
      *** scikit-build-core 0.8.2 using CMake 3.28.3 (wheel)
      *** Configuring CMake...
      2024-03-04 17:29:03,943 - scikit_build_core - WARNING - libdir/ldlibrary: /home/zmp/.conda/envs/base_torch_201cu118_cp310/lib/libpython3.10.a is not a real file!
      2024-03-04 17:29:03,943 - scikit_build_core - WARNING - Can't find a Python library, got libdir=/home/zmp/.conda/envs/base_torch_201cu118_cp310/lib, ldlibrary=libpython3.10.a, multiarch=x86_64-linux-gnu, masd=None
      loading initial cache file /tmp/tmpcjlys8ws/build/CMakeInit.txt
      -- The C compiler identification is GNU 4.8.5
      -- The CXX compiler identification is GNU 4.8.5
      -- Detecting C compiler ABI info
      -- Detecting C compiler ABI info - done
      -- Check for working C compiler: /usr/bin/cc - skipped
      -- Detecting C compile features
      -- Detecting C compile features - done
      -- Detecting CXX compiler ABI info
      -- Detecting CXX compiler ABI info - done
      -- Check for working CXX compiler: /usr/bin/c++ - skipped
      -- Detecting CXX compile features
      -- Detecting CXX compile features - done
      -- Found Git: /usr/bin/git (found version "1.8.3.1")
      -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
      -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
      -- Check if compiler accepts -pthread
      -- Check if compiler accepts -pthread - yes
      -- Found Threads: TRUE
      -- Warning: ccache not found - consider installing it for faster compilation or disable this warning with LLAMA_CCACHE=OFF
      -- CMAKE_SYSTEM_PROCESSOR: x86_64
      -- x86 detected
      CMake Warning (dev) at CMakeLists.txt:21 (install):
        Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
      This warning is for project developers.  Use -Wno-dev to suppress it.
      
      CMake Warning (dev) at CMakeLists.txt:30 (install):
        Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
      This warning is for project developers.  Use -Wno-dev to suppress it.
      
      -- Configuring done (1.1s)
      -- Generating done (0.0s)
      -- Build files have been written to: /tmp/tmpcjlys8ws/build
      *** Building project with Ninja...
      Change Dir: '/tmp/tmpcjlys8ws/build'
......
      /tmp/pip-install-qpsadbts/llama-cpp-python_6fb3cbcdc9d540c5b4fb53a5f6f8cd97/vendor/llama.cpp/ggml-quants.c:9656:13: error: implicit declaration of function '_mm256_set_m128i' [-Werror=implicit-function-declaration]
                   const __m256i full_signs_1 = _mm256_set_m128i(full_signs_l, full_signs_l);
                   ^
      /tmp/pip-install-qpsadbts/llama-cpp-python_6fb3cbcdc9d540c5b4fb53a5f6f8cd97/vendor/llama.cpp/ggml-quants.c:9656:42: error: incompatible types when initializing type '__m256i' using type 'int'
                   const __m256i full_signs_1 = _mm256_set_m128i(full_signs_l, full_signs_l);
                                                ^
      /tmp/pip-install-qpsadbts/llama-cpp-python_6fb3cbcdc9d540c5b4fb53a5f6f8cd97/vendor/llama.cpp/ggml-quants.c:9657:42: error: incompatible types when initializing type '__m256i' using type 'int'
                   const __m256i full_signs_2 = _mm256_set_m128i(full_signs_h, full_signs_h);
.......
 ninja: build stopped: subcommand failed.
      
      
      *** CMake build failed
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects

Cause: pip install llama-cpp-python downloads the source distribution (llama_cpp_python-0.2.55.tar.gz, 36.8 MB) and compiles it locally. If the system does not have suitable cmake and gcc versions, the build fails with this error. In this log the compiler is GCC 4.8.5, which is too old to provide the AVX2 intrinsic _mm256_set_m128i that llama.cpp uses, hence the implicit-declaration errors above.
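
A quick way to confirm the toolchain is the culprit is to check the versions pip will pick up (a sanity check only; the comment reflects what this article's log shows):

gcc --version      # 4.8.5 here, which predates _mm256_set_m128i support
cmake --version
python --version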

2 Solution

Download the prebuilt wheel that matches your system from the official releases and install it offline.

Official release page:
Releases · abetlen/llama-cpp-python (github.com)
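
The wheel filename encodes the target platform: cp310 means CPython 3.10, and manylinux_2_17 means glibc 2.17 or newer. A quick check before choosing a file, assuming a glibc-based distro:

python --version   # 3.10.x -> pick a cp310-cp310 wheel
ldd --version      # first line shows the glibc version; >= 2.17 satisfies manylinux_2_17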


Example install command

pip install llama_cpp_python-0.2.55-cp310-cp310-manylinux_2_17_x86_64.whl
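
After installing, a quick import check verifies the wheel works (llama_cpp is the module name the package installs; recent releases expose __version__):

python -c "import llama_cpp; print(llama_cpp.__version__)"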

Note: the official project only builds wheels for common platforms; on unusual systems you still have to compile it yourself.

Updating cmake and gcc on a shared Linux system is generally not recommended in order to install this package. Instead, build a Docker image with the required cmake and gcc versions (gcc should be above 11.0.0, e.g. 11.3.0). Check your gcc version with: gcc --version
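
A minimal Dockerfile sketch of that approach, assuming the official gcc:11.3.0 base image (Debian-based, gcc/g++ preinstalled) and the 0.2.55 version used in this article:

FROM gcc:11.3.0
# Debian-based image that ships gcc/g++ 11.3.0, new enough for the AVX2 intrinsics
RUN apt-get update && apt-get install -y python3 python3-pip cmake \
    && rm -rf /var/lib/apt/lists/*
# Build llama-cpp-python from source with the newer toolchain
RUN pip3 install llama-cpp-python==0.2.55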
