
Large Model Deployment Notes (6): Tongyi Qianwen (Qwen) + Jetson AGX Orin


1. Introduction

Organization: Alibaba

Code repository: https://github.com/QwenLM/Qwen

Model: Qwen/Qwen-7B-Chat-Int4

Download: http://huggingface.co/Qwen/Qwen-7B-Chat-Int4

ModelScope download: https://modelscope.cn/models/qwen/Qwen-7B-Chat-Int4/summary

Hardware: Jetson AGX Orin

2. Downloading the Code and Model

cd /home1/zhanghui

git clone https://github.com/QwenLM/Qwen

cd Qwen

Create a directory for the model under this folder:

Open http://huggingface.co/Qwen/Qwen-7B-Chat-Int4 and download the model files:

The downloaded files are saved under ~/Downloads:

Move them into /home1/zhanghui/Qwen/Qwen/Qwen-7B-Chat-Int4, as sketched below:
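A minimal sketch of these steps, assuming the browser saved everything under ~/Downloads/Qwen-7B-Chat-Int4 (a hypothetical layout; adjust the source path to wherever the files actually landed):

mkdir -p /home1/zhanghui/Qwen/Qwen/Qwen-7B-Chat-Int4
# Move the downloaded weights, config, and tokenizer files into place.
mv ~/Downloads/Qwen-7B-Chat-Int4/* /home1/zhanghui/Qwen/Qwen/Qwen-7B-Chat-Int4/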

3. Installing Dependencies

The PyTorch builds available for the Orin only come in a Python 3.8 variant, so create a matching environment:

conda create -n model38 python=3.8

conda activate model38

cd /home1/zhanghui/

Install PyTorch 2.0:

pip install ./torch-2.0.0+nv23.05-cp38-cp38-linux_aarch64.whl
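A quick sanity check that the wheel installed and can see the GPU (a sketch; the version string should match the wheel name):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
# Expected output along the lines of: 2.0.0+nv23.05 True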

Install the project's dependency packages:

cd Qwen

pip install -r requirements.txt

Install the quantization dependencies:

pip install auto-gptq optimum

Install the other dependencies:

pip install transformers_stream_generator tiktoken

It turns out these are already installed, so nothing new gets pulled in. A quick way to confirm is sketched below.
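A minimal check (a sketch; pip normalizes the underscore/hyphen spelling of the package name):

pip show transformers-stream-generator tiktoken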

Install the flash-attention library:

cd ..

git clone https://github.com/Dao-AILab/flash-attention -b v1.0.8

cd flash-attention

pip install .

Compilation is quite slow, so be patient. If you want reassurance, open another terminal and check whether it is still compiling, as sketched below:
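One way to check, assuming the build spawns the usual C++/CUDA compiler processes (process names may vary by toolchain):

# Refresh every 10 s; if the list goes empty after compilation was running, the build finished (or died).
watch -n 10 "ps aux | grep -E 'cc1plus|nvcc|ninja' | grep -v grep"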

Compilation finished.

pip install chardet cchardet

4. Deployment Verification

cd /home1/zhanghui

cd Qwen

Modify cli_demo.py: point the default checkpoint path at the local model,

DEFAULT_CKPT_PATH = './Qwen/Qwen-7B-Chat-Int4'

and comment out the screen clearing. The modified file:

# Copyright (c) Alibaba Cloud.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

"""A simple command-line interactive chat demo."""

import argparse
import os
import platform
import shutil
from copy import deepcopy

from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
from transformers.trainer_utils import set_seed

#DEFAULT_CKPT_PATH = 'Qwen/Qwen-7B-Chat'
DEFAULT_CKPT_PATH = './Qwen/Qwen-7B-Chat-Int4'

_WELCOME_MSG = '''\
Welcome to use Qwen-Chat model, type text to start chat, type :h to show command help.
(欢迎使用 Qwen-Chat 模型,输入内容即可进行对话,:h 显示命令帮助。)

Note: This demo is governed by the original license of Qwen.
We strongly advise users not to knowingly generate or allow others to knowingly generate harmful content, including hate speech, violence, pornography, deception, etc.
(注:本演示受Qwen的许可协议限制。我们强烈建议,用户不应传播及不应允许他人传播以下内容,包括但不限于仇恨言论、暴力、色情、欺诈相关的有害信息。)
'''

_HELP_MSG = '''\
Commands:
    :help / :h              Show this help message              显示帮助信息
    :exit / :quit / :q      Exit the demo                       退出Demo
    :clear / :cl            Clear screen                        清屏
    :clear-his / :clh       Clear history                       清除对话历史
    :history / :his         Show history                        显示对话历史
    :seed                   Show current random seed            显示当前随机种子
    :seed <N>               Set random seed to <N>              设置随机种子
    :conf                   Show current generation config      显示生成配置
    :conf <key>=<value>     Change generation config            修改生成配置
    :reset-conf             Reset generation config             重置生成配置
'''


def _load_model_tokenizer(args):
    tokenizer = AutoTokenizer.from_pretrained(
        args.checkpoint_path, trust_remote_code=True, resume_download=True,
    )

    if args.cpu_only:
        device_map = "cpu"
    else:
        device_map = "auto"

    model = AutoModelForCausalLM.from_pretrained(
        args.checkpoint_path,
        device_map=device_map,
        trust_remote_code=True,
        resume_download=True,
    ).eval()

    config = GenerationConfig.from_pretrained(
        args.checkpoint_path, trust_remote_code=True, resume_download=True,
    )

    return model, tokenizer, config


def _clear_screen():
    if platform.system() == "Windows":
        os.system("cls")
    else:
        os.system("clear")


def _print_history(history):
    terminal_width = shutil.get_terminal_size()[0]
    print(f'History ({len(history)})'.center(terminal_width, '='))
    for index, (query, response) in enumerate(history):
        print(f'User[{index}]: {query}')
        print(f'QWen[{index}]: {response}')
    print('=' * terminal_width)


def _get_input() -> str:
    while True:
        try:
            message = input('User> ').strip()
        except UnicodeDecodeError:
            print('[ERROR] Encoding error in input')
            continue
        except KeyboardInterrupt:
            exit(1)
        if message:
            return message
        print('[ERROR] Query is empty')


def main():
    parser = argparse.ArgumentParser(
        description='QWen-Chat command-line interactive chat demo.')
    parser.add_argument("-c", "--checkpoint-path", type=str, default=DEFAULT_CKPT_PATH,
                        help="Checkpoint name or path, default to %(default)r")
    parser.add_argument("-s", "--seed", type=int, default=1234, help="Random seed")
    parser.add_argument("--cpu-only", action="store_true", help="Run demo with CPU only")
    args = parser.parse_args()

    history, response = [], ''

    model, tokenizer, config = _load_model_tokenizer(args)
    orig_gen_config = deepcopy(model.generation_config)

    #_clear_screen()
    print(_WELCOME_MSG)

    seed = args.seed

    while True:
        query = _get_input()

        # Process commands.
        if query.startswith(':'):
            command_words = query[1:].strip().split()
            if not command_words:
                command = ''
            else:
                command = command_words[0]

            if command in ['exit', 'quit', 'q']:
                break
            elif command in ['clear', 'cl']:
                _clear_screen()
                print(_WELCOME_MSG)
                continue
            elif command in ['clear-history', 'clh']:
                print(f'[INFO] All {len(history)} history cleared')
                history.clear()
                continue
            elif command in ['help', 'h']:
                print(_HELP_MSG)
                continue
            elif command in ['history', 'his']:
                _print_history(history)
                continue
            elif command in ['seed']:
                if len(command_words) == 1:
                    print(f'[INFO] Current random seed: {seed}')
                    continue
                else:
                    new_seed_s = command_words[1]
                    try:
                        new_seed = int(new_seed_s)
                    except ValueError:
                        print(f'[WARNING] Fail to change random seed: {new_seed_s!r} is not a valid number')
                    else:
                        print(f'[INFO] Random seed changed to {new_seed}')
                        seed = new_seed
                    continue
            elif command in ['conf']:
                if len(command_words) == 1:
                    print(model.generation_config)
                else:
                    for key_value_pairs_str in command_words[1:]:
                        eq_idx = key_value_pairs_str.find('=')
                        if eq_idx == -1:
                            print('[WARNING] format: <key>=<value>')
                            continue
                        conf_key, conf_value_str = key_value_pairs_str[:eq_idx], key_value_pairs_str[eq_idx + 1:]
                        try:
                            conf_value = eval(conf_value_str)
                        except Exception as e:
                            print(e)
                            continue
                        else:
                            print(f'[INFO] Change config: model.generation_config.{conf_key} = {conf_value}')
                            setattr(model.generation_config, conf_key, conf_value)
                continue
            elif command in ['reset-conf']:
                print('[INFO] Reset generation config')
                model.generation_config = deepcopy(orig_gen_config)
                print(model.generation_config)
                continue
            else:
                # As normal query.
                pass

        # Run chat.
        set_seed(seed)
        try:
            for response in model.chat_stream(tokenizer, query, history=history, generation_config=config):
                pass
            # print(f"\nUser: {query}")
            print(f"\nQwen-Chat: {response}")
            # _clear_screen()
            # print(f"\nUser: {query}")
            # print(f"\nQwen-Chat: {response}")
        except KeyboardInterrupt:
            print('[WARNING] Generation interrupted')
            continue

        history.append((query, response))


if __name__ == "__main__":
    main()

Run: python cli_demo.py

It seems that without modelscope installed, quite a few other packages are missing as well.

pip install modelscope

Try again: python cli_demo.py

What? There is no aarch64 build of auto-gptq?

pip install auto-gptq==0.4.2

That fails too, so the only option is to build it from source.

Open the AutoGPTQ repository: https://github.com/PanQiWei/AutoGPTQ

Be careful not to confuse it with AutoGPT (https://github.com/Significant-Gravitas/AutoGPT): AutoGPTQ is a quantization toolkit, while AutoGPT is an autonomous chatbot.

cd ..

git clone https://github.com/PanQiWei/AutoGPTQ.git -b v0.4.2

cd AutoGPTQ

pip install -v .

Wait patiently for the build to finish, then confirm the install with the check below.
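A minimal post-build check (a sketch; the local-version suffix may differ on your machine):

pip show auto-gptq
# The Version field should read 0.4.2+cu114 or similar.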

cd ../Qwen

Try again: python cli_demo.py

Reference: "Local free GPT-4? One-click deployment of the open-source Llama 2 model with no hardware requirements" (CSDN blog)

It suggests editing /home/zhanghui/archiconda3/envs/model38/lib/python3.8/site-packages/peft/utils/other.py

and changing is_npu_available to is_tpu_available, as sketched below.
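The same edit as a one-liner (a sketch; note this is a blunt workaround, and it gets reverted later in this article):

sed -i 's/is_npu_available/is_tpu_available/g' /home/zhanghui/archiconda3/envs/model38/lib/python3.8/site-packages/peft/utils/other.py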

Try again: python cli_demo.py

Does it finally run?

Apparently not.

Check the torch version:

How about trying to run it the ModelScope way instead?

vi Qwen-7B-Chat-Int4.py

from modelscope import AutoTokenizer, AutoModelForCausalLM, snapshot_download

model_dir = snapshot_download("qwen/Qwen-7B-Chat-Int4", revision='v1.1.3')

# Note: The default behavior now has injection attack prevention off.
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    device_map="auto",
    trust_remote_code=True
).eval()

response, history = model.chat(tokenizer, "你好", history=None)
print(response)
# 你好!很高兴为你提供帮助。

And run it: python Qwen-7B-Chat-Int4.py

It downloads the model into ~/.cache/modelscope/hub/qwen/Qwen-7B-Chat-Int4. (Next time you could stage the files in this directory in advance, as sketched below.)
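A sketch of pre-staging the cache from the earlier Hugging Face download, assuming the two repositories ship the same files (snapshot_download may still contact the server for metadata):

mkdir -p ~/.cache/modelscope/hub/qwen/Qwen-7B-Chat-Int4
cp -r /home1/zhanghui/Qwen/Qwen/Qwen-7B-Chat-Int4/. ~/.cache/modelscope/hub/qwen/Qwen-7B-Chat-Int4/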

This fails immediately, with exactly the same error as cli_demo.py.

Revert ~/archiconda3/envs/model38/lib/python3.8/site-packages/peft/utils/other.py to the original import:

from accelerate.utils import is_npu_available, is_xpu_available
#from accelerate.utils import is_tpu_available, is_xpu_available

See https://blog.51cto.com/u_9453611/7671814

It may be a problem with the peft package itself, so reinstall it:

pip uninstall peft

pip install peft@git+https://github.com/huggingface/peft.git

Try again: python Qwen-7B-Chat-Int4.py

(model38) zhanghui@ubuntu:/home1/zhanghui/Qwen$ python Qwen-7B-Chat-Int4.py
2023-10-01 19:55:46,315 - modelscope - INFO - PyTorch version 2.0.0+nv23.5 Found.
2023-10-01 19:55:46,316 - modelscope - INFO - Loading ast index from /home/zhanghui/.cache/modelscope/ast_indexer
2023-10-01 19:55:46,365 - modelscope - INFO - Loading done! Current index file version is 1.9.1, with md5 d2574d97b79a12fd280c8b43dde90408 and a total number of 924 components indexed
2023-10-01 19:55:47,331 - modelscope - INFO - Use user-specified model revision: v1.1.3
Warning: please make sure that you are using the latest codes and checkpoints, especially if you used Qwen-7B before 09.25.2023.请使用最新模型和代码,尤其如果你在9月25日前已经开始使用Qwen-7B,千万注意不要使用错误代码和模型。
Try importing flash-attention for faster inference...
Warning: import flash_attn rotary fail, please install FlashAttention rotary to get higher efficiency https://github.com/Dao-AILab/flash-attention/tree/main/csrc/rotary
Warning: import flash_attn rms_norm fail, please install FlashAttention layer_norm to get higher efficiency https://github.com/Dao-AILab/flash-attention/tree/main/csrc/layer_norm
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████| 3/3 [00:03<00:00,  1.11s/it]
CUDA error (/home1/zhanghui/flash-attention/csrc/flash_attn/src/fmha_fwd_launch_template.h:89): no kernel image is available for execution on the device
/arrow/cpp/src/arrow/filesystem/s3fs.cc:2829:  arrow::fs::FinalizeS3 was not called even though S3 was initialized.  This could lead to a segmentation fault at exit
(model38) zhanghui@ubuntu:/home1/zhanghui/Qwen$

As one last-ditch effort, rebuild flash-attention from source.

pip uninstall flash-attn

Try installing the latest flash-attn:

cd /home1/zhanghui

mkdir new

cd new

git clone https://github.com/Dao-AILab/flash-attention

cd flash-attention

pip install flash-attn --no-build-isolation

python setup.py install

The installed nvcc is version 11.4, but the latest flash-attn requires CUDA 11.6 or newer. You can check your own toolkit as below.
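Confirm the CUDA toolkit version before picking a flash-attn tag:

nvcc --version
# On this JetPack release the last line reads: Cuda compilation tools, release 11.4, ...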

So should we install CUDA 11.6? Experience says that is not a good choice; it might well crash the Orin.

The fallback is to check whether some flash-attention release still supports CUDA 11.4: https://github.com/Dao-AILab/flash-attention/tree/v2.1.0

As it happens, v2.1.1 still targets CUDA 11.4, while v2.1.2 moves to CUDA 11.6.

cd /home1/zhanghui

pip uninstall flash-attn

mkdir new2

cd new2

git clone https://github.com/Dao-AILab/flash-attention -b v2.1.0

cd flash-attention

python setup.py install

Then the Orin seems to have frozen again.

After rebooting the Orin, try a single-job build:

export MAX_JOBS=1

export FLASH_ATTENTION_FORCE_SINGLE_THREAD=True

python setup.py install

Be patient.

This time the build should not saturate the CPU.

But no progress is visible either; the parallelism is probably too low now, which is just as unbearable.

Let's switch to v2.1.1 instead!

cd /home1/zhanghui

cd new2

mv flash-attention flash-attention-2.1.0

git clone https://github.com/Dao-AILab/flash-attention -b v2.1.1

cd flash-attention

export MAX_JOBS=4

python setup.py install

The build succeeds. A quick import check is sketched below.
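A minimal verification that the new build is the one Python sees (a sketch):

python -c "import flash_attn; print(flash_attn.__version__)"
# Expect: 2.1.1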

Then try it again:

cd /home1/zhanghui

cd Qwen

python Qwen-7B-Chat-Int4.py

It fails as follows:

(model38) zhanghui@ubuntu:/home1/zhanghui/Qwen$ python Qwen-7B-Chat-Int4.py
2023-10-01 21:32:31,642 - modelscope - INFO - PyTorch version 2.0.0+nv23.5 Found.
2023-10-01 21:32:31,644 - modelscope - INFO - Loading ast index from /home/zhanghui/.cache/modelscope/ast_indexer
2023-10-01 21:32:31,691 - modelscope - INFO - Loading done! Current index file version is 1.9.1, with md5 d2574d97b79a12fd280c8b43dde90408 and a total number of 924 components indexed
2023-10-01 21:32:32,625 - modelscope - INFO - Use user-specified model revision: v1.1.3
Warning: please make sure that you are using the latest codes and checkpoints, especially if you used Qwen-7B before 09.25.2023.请使用最新模型和代码,尤其如果你在9月25日前已经开始使用Qwen-7B,千万注意不要使用错误代码和模型。
Try importing flash-attention for faster inference...
Warning: import flash_attn rotary fail, please install FlashAttention rotary to get higher efficiency https://github.com/Dao-AILab/flash-attention/tree/main/csrc/rotary
Warning: import flash_attn rms_norm fail, please install FlashAttention layer_norm to get higher efficiency https://github.com/Dao-AILab/flash-attention/tree/main/csrc/layer_norm
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████| 3/3 [00:03<00:00,  1.20s/it]
Traceback (most recent call last):
  File "Qwen-7B-Chat-Int4.py", line 12, in <module>
    response, history = model.chat(tokenizer, "你好", history=None)
  File "/home/zhanghui/.cache/huggingface/modules/transformers_modules/Qwen-7B-Chat-Int4/modeling_qwen.py", line 1195, in chat
    outputs = self.generate(
  File "/home/zhanghui/.cache/huggingface/modules/transformers_modules/Qwen-7B-Chat-Int4/modeling_qwen.py", line 1314, in generate
    return super().generate(
  File "/home/zhanghui/archiconda3/envs/model38/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/zhanghui/archiconda3/envs/model38/lib/python3.8/site-packages/transformers/generation/utils.py", line 1642, in generate
    return self.sample(
  File "/home/zhanghui/archiconda3/envs/model38/lib/python3.8/site-packages/transformers/generation/utils.py", line 2724, in sample
    outputs = self(
  File "/home/zhanghui/archiconda3/envs/model38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/zhanghui/.cache/huggingface/modules/transformers_modules/Qwen-7B-Chat-Int4/modeling_qwen.py", line 1104, in forward
    transformer_outputs = self.transformer(
  File "/home/zhanghui/archiconda3/envs/model38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/zhanghui/.cache/huggingface/modules/transformers_modules/Qwen-7B-Chat-Int4/modeling_qwen.py", line 934, in forward
    outputs = block(
  File "/home/zhanghui/archiconda3/envs/model38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/zhanghui/.cache/huggingface/modules/transformers_modules/Qwen-7B-Chat-Int4/modeling_qwen.py", line 635, in forward
    attn_outputs = self.attn(
  File "/home/zhanghui/archiconda3/envs/model38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/zhanghui/.cache/huggingface/modules/transformers_modules/Qwen-7B-Chat-Int4/modeling_qwen.py", line 542, in forward
    context_layer = self.core_attention_flash(q, k, v, attention_mask=attention_mask)
  File "/home/zhanghui/archiconda3/envs/model38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/zhanghui/.cache/huggingface/modules/transformers_modules/Qwen-7B-Chat-Int4/modeling_qwen.py", line 213, in forward
    output = flash_attn_unpadded_func(
  File "/home/zhanghui/archiconda3/envs/model38/lib/python3.8/site-packages/flash_attn-2.1.1-py3.8-linux-aarch64.egg/flash_attn/flash_attn_interface.py", line 780, in flash_attn_varlen_func
    return FlashAttnVarlenFunc.apply(
  File "/home/zhanghui/archiconda3/envs/model38/lib/python3.8/site-packages/torch/autograd/function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/home/zhanghui/archiconda3/envs/model38/lib/python3.8/site-packages/flash_attn-2.1.1-py3.8-linux-aarch64.egg/flash_attn/flash_attn_interface.py", line 436, in forward
    out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = _flash_attn_varlen_forward(
  File "/home/zhanghui/archiconda3/envs/model38/lib/python3.8/site-packages/flash_attn-2.1.1-py3.8-linux-aarch64.egg/flash_attn/flash_attn_interface.py", line 66, in _flash_attn_varlen_forward
    out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = flash_attn_cuda.varlen_fwd(
RuntimeError: CUDA error: invalid device function
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

Searching for this error turns up many similar reports online; it is most likely some version mismatch in the software stack, which is rather awkward.

For example, try installing the Jetson build of torch 2.1:

pip install ./torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl
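Why the wheel swap can matter: the AGX Orin's GPU is compute capability 8.7 (sm_87), and "invalid device function" typically means a kernel was not compiled for the running architecture. After installing the new wheel, a quick check (a sketch):

python -c "import torch; print(torch.__version__); print(torch.cuda.get_device_capability(0))"
# Expect: 2.1.0a0+41361538.nv23.06 and (8, 7) on the AGX Orin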

python Qwen-7B-Chat-Int4.py

Huh?

It actually looks like it succeeded...

python cli_demo.py

Against all odds, it works in the end. A real National Day holiday treat!

Finally, the package versions in the environment (pip list):

(model38) zhanghui@ubuntu:~$ pip list
Package                       Version                 Editable project location
----------------------------- ----------------------- -------------------------------
absl-py                       2.0.0
accelerate                    0.23.0
addict                        2.4.0
aiofiles                      23.1.0
aiohttp                       3.8.4
aiosignal                     1.3.1
aliyun-python-sdk-core        2.14.0
aliyun-python-sdk-kms         2.16.2
altair                        5.0.1
anyio                         3.7.0
appdirs                       1.4.4
async-timeout                 4.0.2
attrs                         23.1.0
auto-gptq                     0.4.2+cu114
cachetools                    5.3.1
cchardet                      2.1.7
certifi                       2023.7.22
cffi                          1.16.0
chardet                       5.2.0
charset-normalizer            3.1.0
click                         8.1.7
coloredlogs                   15.0.1
contourpy                     1.1.1
crcmod                        1.7
cryptography                  41.0.4
cycler                        0.12.0
datasets                      2.13.0
diffusers                     0.21.4
dill                          0.3.6
docker-pycreds                0.4.0
einops                        0.7.0
exceptiongroup                1.1.1
fastapi                       0.99.0
ffmpy                         0.3.0
filelock                      3.12.2
flash-attn                    2.1.1
fonttools                     4.43.0
frozenlist                    1.3.3
fschat                        0.2.16
fsspec                        2023.6.0
gast                          0.5.4
gitdb                         4.0.10
GitPython                     3.1.31
google-auth                   2.23.2
google-auth-oauthlib          1.0.0
gradio                        3.35.2
gradio_client                 0.2.7
grpcio                        1.59.0
h11                           0.14.0
httpcore                      0.17.2
httpx                         0.24.1
huggingface-hub               0.15.1
humanfriendly                 10.0
idna                          3.4
importlib-metadata            6.8.0
importlib-resources           5.12.0
jieba                         0.42.1
Jinja2                        3.1.2
jmespath                      0.10.0
joblib                        1.3.2
jsonschema                    4.17.3
kiwisolver                    1.4.5
linkify-it-py                 2.0.2
loralib                       0.1.2
Markdown                      3.4.4
markdown-it-py                2.2.0
markdown2                     2.4.9
MarkupSafe                    2.1.3
matplotlib                    3.7.3
mdit-py-plugins               0.3.3
mdurl                         0.1.2
modelscope                    1.9.1
mpmath                        1.3.0
ms-swift                      1.1.0
multidict                     6.0.4
multiprocess                  0.70.14
nanosam                       0.0                     /home1/zhanghui/nanosam/nanosam
networkx                      3.1
nh3                           0.2.13
ninja                         1.11.1
nltk                          3.8.1
numpy                         1.24.4
oauthlib                      3.2.2
optimum                       1.13.2
orjson                        3.9.1
oss2                          2.18.2
packaging                     23.1
pandas                        2.0.3
pathtools                     0.1.2
peft                          0.6.0.dev0
Pillow                        10.0.1
pip                           22.3.1
pkgutil_resolve_name          1.3.10
platformdirs                  3.10.0
prompt-toolkit                3.0.38
protobuf                      4.24.3
psutil                        5.9.5
pyarrow                       13.0.0
pyasn1                        0.5.0
pyasn1-modules                0.3.0
pycparser                     2.21
pycryptodome                  3.19.0
pydantic                      1.10.10
pydub                         0.25.1
Pygments                      2.15.1
pyparsing                     3.1.1
pyrsistent                    0.19.3
python-dateutil               2.8.2
python-multipart              0.0.6
pytz                          2023.3.post1
PyYAML                        6.0.1
regex                         2023.6.3
requests                      2.31.0
requests-oauthlib             1.3.1
rich                          13.4.2
rouge                         1.0.1
rsa                           4.9
safetensors                   0.3.3
scipy                         1.10.1
semantic-version              2.10.0
sentencepiece                 0.1.99
sentry-sdk                    1.26.0
setproctitle                  1.3.2
setuptools                    65.5.1
shortuuid                     1.0.11
simplejson                    3.19.1
six                           1.16.0
smmap                         5.0.0
sniffio                       1.3.0
sortedcontainers              2.4.0
starlette                     0.27.0
svgwrite                      1.4.3
sympy                         1.12
tensorboard                   2.14.0
tensorboard-data-server       0.7.1
tiktoken                      0.4.0
tokenizers                    0.13.3
tomli                         2.0.1
toolz                         0.12.0
torch                         2.1.0a0+41361538.nv23.6
tqdm                          4.65.0
transformers                  4.32.0
transformers-stream-generator 0.0.4
typing_extensions             4.7.0
tzdata                        2023.3
uc-micro-py                   1.0.2
urllib3                       2.0.5
uvicorn                       0.22.0
wandb                         0.15.4
wavedrom                      2.0.3.post3
wcwidth                       0.2.6
websockets                    11.0.3
Werkzeug                      3.0.0
wheel                         0.38.4
xxhash                        3.3.0
yapf                          0.40.2
yarl                          1.9.2
zipp                          3.15.0
(model38) zhanghui@ubuntu:~$

(End of article. Thanks for reading.)
