Background:
Running CodeShell with the following script:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Pick the GPU if CUDA is available, otherwise fall back to CPU
device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = AutoTokenizer.from_pretrained("codeshell-7b")
# trust_remote_code=True is needed because CodeShell ships custom model code
model = AutoModelForCausalLM.from_pretrained("codeshell-7b", trust_remote_code=True, torch_dtype=torch.bfloat16).to(device)
inputs = tokenizer('def merge_sort():', return_tensors='pt').to(device)
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
After installing bitsandbytes with pip install bitsandbytes, running the script still fails with the following error:
False

===================================BUG REPORT===================================
C:\Users\Ma\AppData\Roaming\Python\Python311\site-packages\bitsandbytes\cuda_setup\main.py:166: UserWarning: Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes


  warn(msg)
================================================================================
The following directories listed in your path were found to be non-existent: {WindowsPath('/Anaconda/Anaconda/envs/CodeLLM/lib'), WindowsPath('D')}
C:\Users\Ma\AppData\Roaming\Python\Python311\site-packages\bitsandbytes\cuda_setup\main.py:166: UserWarning: D:\Anaconda\Anaconda\envs\CodeLLM did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
  warn(msg)
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
The following directories listed in your path were found to be non-existent: {WindowsPath('/usr/local/cuda/lib64')}
DEBUG: Possible options found for libcudart.so: set()
CUDA SETUP: PyTorch settings found: CUDA_VERSION=118, Highest Compute Capability: 8.9.
CUDA SETUP: To manually override the PyTorch CUDA version please see: https://github.com/TimDettmers/bitsandbytes/blob/main/how_to_use_nonpytorch_cuda.md
CUDA SETUP: Loading binary C:\Users\Ma\AppData\Roaming\Python\Python311\site-packages\bitsandbytes\libbitsandbytes_cuda118.so...
argument of type 'WindowsPath' is not iterable
CUDA SETUP: Problem: The main issue seems to be that the main CUDA runtime library was not detected.
CUDA SETUP: Solution 1: To solve the issue the libcudart.so location needs to be added to the LD_LIBRARY_PATH variable
CUDA SETUP: Solution 1a): Find the cuda runtime library via: find / -name libcudart.so 2>/dev/null
CUDA SETUP: Solution 1b): Once the library is found add it to the LD_LIBRARY_PATH: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:FOUND_PATH_FROM_1a
CUDA SETUP: Solution 1c): For a permanent solution add the export from 1b into your .bashrc file, located at ~/.bashrc
CUDA SETUP: Solution 2: If no library was found in step 1a) you need to install CUDA.
CUDA SETUP: Solution 2a): Download CUDA install script: wget https://github.com/TimDettmers/bitsandbytes/blob/main/cuda_install.sh
CUDA SETUP: Solution 2b): Install desired CUDA version to desired location. The syntax is bash cuda_install.sh CUDA_VERSION PATH_TO_INSTALL_INTO.
CUDA SETUP: Solution 2b): For example, "bash cuda_install.sh 113 ~/local/" will download CUDA 11.3 and install into the folder ~/local
Traceback (most recent call last):
  File "<frozen runpy>", line 189, in _run_module_as_main
  File "<frozen runpy>", line 148, in _get_module_details
  File "<frozen runpy>", line 112, in _get_module_details
  File "C:\Users\Ma\AppData\Roaming\Python\Python311\site-packages\bitsandbytes\__init__.py", line 6, in <module>
    from . import cuda_setup, utils, research
  File "C:\Users\Ma\AppData\Roaming\Python\Python311\site-packages\bitsandbytes\research\__init__.py", line 1, in <module>
    from . import nn
  File "C:\Users\Ma\AppData\Roaming\Python\Python311\site-packages\bitsandbytes\research\nn\__init__.py", line 1, in <module>
    from .modules import LinearFP8Mixed, LinearFP8Global
  File "C:\Users\Ma\AppData\Roaming\Python\Python311\site-packages\bitsandbytes\research\nn\modules.py", line 8, in <module>
    from bitsandbytes.optim import GlobalOptimManager
RuntimeError:
        CUDA Setup failed despite GPU being available. Please run the following command to get more information:

        python -m bitsandbytes

        Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
        to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
        and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues
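Notice that every remedy the log proposes (LD_LIBRARY_PATH, .bashrc, libcudart.so) is Linux-specific, even though the log also shows PyTorch itself already found a working CUDA 11.8 setup (CUDA_VERSION=118, Highest Compute Capability: 8.9). A minimal sketch to confirm the GPU is visible to PyTorch, which isolates the failure to bitsandbytes' own library discovery rather than the driver or toolkit:

import torch

# PyTorch's view of CUDA: if these look right, the problem is not the
# driver/toolkit but bitsandbytes' Linux-only search for libcudart.so
print(torch.__version__)          # e.g. a +cu118 build, matching CUDA_VERSION=118 above
print(torch.version.cuda)         # CUDA version PyTorch was compiled against
print(torch.cuda.is_available())  # should be True on this machine
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))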
After investigating, the likely cause is that bitsandbytes did not support Windows at the time.
Solution:
pip install bitsandbytes-windows
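As a quick sanity check, the import that failed in the traceback above can be retried on its own (a minimal sketch; bitsandbytes-windows installs under the same bitsandbytes module name):

import bitsandbytes  # provided by bitsandbytes-windows under the same name

# If this import completes without the earlier "CUDA Setup failed"
# RuntimeError, the Windows DLL (libbitsandbytes_cuda*.dll) was found
print("bitsandbytes loaded OK")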
Running the script again afterwards gives the following output:
Explicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
You are using a model of type kclgpt to instantiate a model of type codeshell. This is not supported for all configurations of models and can yield errors.
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
binary_path: C:\Users\Ma\AppData\Roaming\Python\Python311\site-packages\bitsandbytes\cuda_setup\libbitsandbytes_cuda116.dll
CUDA SETUP: Loading binary C:\Users\Ma\AppData\Roaming\Python\Python311\site-packages\bitsandbytes\cuda_setup\libbitsandbytes_cuda116.dll...
Loading checkpoint shards: 100%|██████████| 2/2 [00:12<00:00, 6.09s/it]
Some weights of CodeShellForCausalLM were not initialized from the model checkpoint at codeshell-7b and are newly initialized: ['lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
C:\Users\Ma\AppData\Roaming\Python\Python311\site-packages\transformers\generation\utils.py:1201: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)
  warnings.warn(
Setting `pad_token_id` to `eos_token_id`:70000 for open-end generation.
C:\Users\Ma\AppData\Roaming\Python\Python311\site-packages\transformers\generation\utils.py:1288: UserWarning: Using `max_length`'s default (20) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.
  warnings.warn(
def merge_sort():潻潻潻潻潻潻潻潻潻潻潻潻潻潻潻
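The garbled completion is consistent with two warnings in the log: lm_head.weight was newly initialized rather than loaded from the checkpoint (likely related to the kclgpt vs. codeshell model-type mismatch), so the output projection is effectively random, and generation ran with the deprecated default max_length of 20. Fixing the output itself means using weights whose model type matches the custom code (for example the official WisdomShell CodeShell-7B release). The generation-length warning can be addressed as the message recommends; a minimal sketch, reusing the tokenizer, model, and device from the script above:

# Follow the transformers warning: bound generation with max_new_tokens
# instead of relying on the deprecated default max_length=20
inputs = tokenizer('def merge_sort():', return_tensors='pt').to(device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Note that this only silences the length warning; the mojibake will persist until the lm_head weights actually load from the checkpoint.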