
Implementing RAG with llama-index and the Qwen Large Language Model

Background

Most llama-index RAG examples are built on English models such as Llama, and there are comparatively few worked examples for Chinese models. This post walks through a llama-index RAG pipeline built on the Qwen large language model.

Environment Setup

(1) pip packages

llama-index pulls in a large number of dependencies. Below is the pip package list from my working environment, as captured in requirements.txt.

absl-py==1.4.0
accelerate==0.27.2
aiohttp==3.9.3
aiosignal==1.3.1
aliyun-python-sdk-core==2.13.36
aliyun-python-sdk-kms==2.16.1
annotated-types==0.6.0
anyio==3.7.1
apphub @ file:///environment/apps/apphub/dist/apphub-1.0.0.tar.gz#sha256=260f99c0de4c575b19ab913aa134877e9efd81b820b97511fc8379674643c253
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
asgiref==3.7.2
asttokens==2.2.1
astunparse==1.6.3
async-timeout==4.0.3
attrs==23.1.0
Babel==2.12.1
backcall==0.2.0
backoff==2.2.1
bcrypt==4.1.2
beautifulsoup4==4.12.3
bleach==6.0.0
boltons @ file:///croot/boltons_1677628692245/work
brotlipy==0.7.0
bs4==0.0.2
build==1.1.1
cachetools==5.3.1
certifi @ file:///croot/certifi_1690232220950/work/certifi
cffi @ file:///croot/cffi_1670423208954/work
chardet==3.0.4
charset-normalizer @ file:///tmp/build/80754af9/charset-normalizer_1630003229654/work
chroma-hnswlib==0.7.3
chromadb==0.4.24
click==7.1.2
cmake==3.25.0
coloredlogs==15.0.1
comm==0.1.4
conda @ file:///croot/conda_1690494963117/work
conda-content-trust @ file:///tmp/abs_5952f1c8-355c-4855-ad2e-538535021ba5h26t22e5/croots/recipe/conda-content-trust_1658126371814/work
conda-libmamba-solver @ file:///croot/conda-libmamba-solver_1685032319139/work/src
conda-package-handling @ file:///croot/conda-package-handling_1685024767917/work
conda_package_streaming @ file:///croot/conda-package-streaming_1685019673878/work
contourpy==1.2.0
crcmod==1.7
cryptography @ file:///croot/cryptography_1686613057838/work
cycler==0.12.1
dataclasses-json==0.6.4
debugpy==1.6.7
decorator==5.1.1
defusedxml==0.7.1
Deprecated==1.2.14
dirtyjson==1.0.8
distro==1.9.0
ecdsa==0.18.0
exceptiongroup==1.1.2
executing==1.2.0
fastapi==0.104.1
fastjsonschema==2.18.0
featurize==0.0.24
filelock==3.9.0
flatbuffers==23.5.26
fonttools==4.44.0
frozenlist==1.4.1
fsspec==2024.2.0
gast==0.4.0
google-auth==2.22.0
google-auth-oauthlib==1.0.0
google-pasta==0.2.0
googleapis-common-protos==1.62.0
greenlet==3.0.3
grpcio==1.62.0
gunicorn==21.2.0
h11==0.14.0
h5py==3.9.0
httpcore==0.17.3
httptools==0.6.1
httpx==0.24.1
huggingface-hub==0.20.3
humanfriendly==10.0
idna==2.10
imageio==2.32.0
importlib-metadata==6.11.0
importlib_resources==6.1.3
ipykernel==6.25.0
ipython==8.14.0
ipython-genutils==0.2.0
ipywidgets==8.1.2
jedi==0.19.0
Jinja2==3.1.2
jmespath==0.10.0
joblib==1.3.2
json5==0.9.14
jsonpatch @ file:///tmp/build/80754af9/jsonpatch_1615747632069/work
jsonpointer==2.1
jsonschema==4.18.6
jsonschema-specifications==2023.7.1
jupyter-server==1.24.0
jupyter_client==8.3.0
jupyter_core==5.3.1
jupyterlab==3.2.9
jupyterlab-pygments==0.2.2
jupyterlab_server==2.24.0
jupyterlab_widgets==3.0.10
keras==2.13.1
kiwisolver==1.4.5
kubernetes==29.0.0
lazy_loader==0.3
libclang==16.0.6
libmambapy @ file:///croot/mamba-split_1685993156657/work/libmambapy
lit==15.0.7
llama-index==0.10.17
llama-index-agent-openai==0.1.5
llama-index-cli==0.1.8
llama-index-core==0.10.17
llama-index-embeddings-huggingface==0.1.4
llama-index-embeddings-openai==0.1.6
llama-index-indices-managed-llama-cloud==0.1.3
llama-index-legacy==0.9.48
llama-index-llms-huggingface==0.1.3
llama-index-llms-openai==0.1.7
llama-index-multi-modal-llms-openai==0.1.4
llama-index-program-openai==0.1.4
llama-index-question-gen-openai==0.1.3
llama-index-readers-file==0.1.8
llama-index-readers-llama-parse==0.1.3
llama-index-vector-stores-chroma==0.1.5
llama-parse==0.3.8
llamaindex-py-client==0.1.13
Markdown==3.4.4
MarkupSafe==2.1.2
marshmallow==3.21.1
matplotlib==3.8.1
matplotlib-inline==0.1.6
mistune==3.0.1
mmh3==4.1.0
monotonic==1.6
mpmath==1.2.1
multidict==6.0.4
mypy-extensions==1.0.0
nbclassic==0.2.8
nbclient==0.8.0
nbconvert==7.7.3
nbformat==5.9.2
nest-asyncio==1.6.0
networkx==3.0
nltk==3.8.1
notebook==6.4.12
numpy==1.24.1
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-nccl-cu12==2.19.3
nvidia-nvjitlink-cu12==12.4.99
nvidia-nvtx-cu12==12.1.105
oauthlib==3.2.2
onnxruntime==1.17.1
openai==1.13.3
opencv-python==4.8.1.78
opentelemetry-api==1.23.0
opentelemetry-exporter-otlp-proto-common==1.23.0
opentelemetry-exporter-otlp-proto-grpc==1.23.0
opentelemetry-instrumentation==0.44b0
opentelemetry-instrumentation-asgi==0.44b0
opentelemetry-instrumentation-fastapi==0.44b0
opentelemetry-proto==1.23.0
opentelemetry-sdk==1.23.0
opentelemetry-semantic-conventions==0.44b0
opentelemetry-util-http==0.44b0
opt-einsum==3.3.0
orjson==3.9.15
oss2==2.18.1
overrides==7.7.0
packaging @ file:///croot/packaging_1678965309396/work
pandas==2.1.2
pandocfilters==1.5.0
parso==0.8.3
pexpect==4.8.0
pickleshare==0.7.5
Pillow==9.3.0
platformdirs==3.10.0
pluggy @ file:///tmp/build/80754af9/pluggy_1648024709248/work
posthog==3.5.0
prometheus-client==0.17.1
prompt-toolkit==3.0.39
protobuf==4.23.4
psutil==5.9.5
ptyprocess==0.7.0
pulsar-client==3.4.0
pure-eval==0.2.2
pyasn1==0.5.0
pyasn1-modules==0.3.0
pycosat @ file:///croot/pycosat_1666805502580/work
pycparser @ file:///tmp/build/80754af9/pycparser_1636541352034/work
pycryptodome==3.18.0
pydantic==2.4.2
pydantic_core==2.10.1
Pygments==2.15.1
PyMuPDF==1.23.26
PyMuPDFb==1.23.22
pyOpenSSL @ file:///croot/pyopenssl_1677607685877/work
pyparsing==3.1.1
pypdf==4.1.0
PyPika==0.48.9
pyproject_hooks==1.0.0
PySocks @ file:///home/builder/ci_310/pysocks_1640793678128/work
python-dateutil==2.8.2
python-dotenv==1.0.0
pytz==2023.3.post1
PyYAML==6.0.1
pyzmq==25.1.0
referencing==0.30.0
regex==2023.12.25
requests==2.31.0
requests-oauthlib==1.3.1
rpds-py==0.9.2
rsa==4.9
ruamel.yaml @ file:///croot/ruamel.yaml_1666304550667/work
ruamel.yaml.clib @ file:///croot/ruamel.yaml.clib_1666302247304/work
safetensors==0.4.2
scikit-image==0.22.0
scikit-learn==1.3.2
scipy==1.11.3
seaborn==0.13.0
Send2Trash==1.8.2
six @ file:///tmp/build/80754af9/six_1644875935023/work
sniffio==1.3.0
socksio==1.0.0
soupsieve==2.4.1
SQLAlchemy==2.0.28
sshpubkeys==3.3.1
stack-data==0.6.2
starlette==0.27.0
sympy==1.11.1
tabulate==0.8.7
tenacity==8.2.3
tensorboard==2.13.0
tensorboard-data-server==0.7.1
tensorflow==2.13.0
tensorflow-estimator==2.13.0
tensorflow-io-gcs-filesystem==0.33.0
termcolor==2.3.0
terminado==0.17.1
threadpoolctl==3.2.0
tifffile==2023.9.26
tiktoken==0.6.0
tinycss2==1.2.1
tokenizers==0.15.2
tomli==2.0.1
toolz @ file:///croot/toolz_1667464077321/work
torch==2.2.1
torchaudio==2.0.2+cu118
torchvision==0.15.2+cu118
tornado==6.3.2
tqdm==4.66.2
traitlets==5.9.0
transformers==4.38.2
triton==2.2.0
typer==0.9.0
typing-inspect==0.9.0
typing_extensions==4.8.0
tzdata==2023.3
urllib3==1.25.11
uvicorn==0.23.2
uvloop==0.19.0
watchfiles==0.21.0
wcwidth==0.2.5
webencodings==0.5.1
websocket-client==1.2.1
websockets==12.0
Werkzeug==2.3.6
widgetsnbextension==4.0.10
workspace @ file:///home/featurize/work/workspace/dist/workspace-0.1.0.tar.gz#sha256=b292beb3599f79d3791771eff9dc422cc37c58c1fc8daadeafbf025a2e7ea986
wrapt==1.15.0
yarl==1.9.2
zipp==3.17.0
zstandard @ file:///croot/zstandard_1677013143055/work

(2) Python environment

(3) Install commands

!pip install llama-index
!pip install llama-index-llms-huggingface
!pip install llama-index-embeddings-huggingface
!pip install llama-index ipywidgets
!pip install torch
!git clone https://www.modelscope.cn/AI-ModelScope/bge-small-zh-v1.5.git
!git clone https://www.modelscope.cn/qwen/Qwen1.5-4B-Chat.git

(4) Directory structure

Code

(1) Load the model

import torch
from llama_index.llms.huggingface import HuggingFaceLLM
from llama_index.core import PromptTemplate
import os

os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'

# Local model paths. The LLAMA2_* variable names are kept from the original
# Llama-2 example; both active entries point at the local Qwen1.5-4B-Chat clone.
LLAMA2_7B = "/home/featurize/Qwen1.5-4B-Chat"
# LLAMA2_7B_CHAT = "meta-llama/Llama-2-7b-chat-hf"
# LLAMA2_13B = "meta-llama/Llama-2-13b-hf"
LLAMA2_13B_CHAT = "/home/featurize/Qwen1.5-4B-Chat"
# LLAMA2_70B = "meta-llama/Llama-2-70b-hf"
# LLAMA2_70B_CHAT = "meta-llama/Llama-2-70b-chat-hf"

selected_model = LLAMA2_13B_CHAT

SYSTEM_PROMPT = """You are an AI assistant that answers questions in a friendly manner, based on the given source documents. Here are some rules you always follow:
- Generate human readable output, avoid creating output with gibberish text.
- Generate only the requested output, don't include any other language before or after the requested output.
- Never say thank you, that you are happy to help, that you are an AI agent, etc. Just answer directly.
- Generate professional language typically used in business documents in North America.
- Never generate offensive or foul language.
"""

query_wrapper_prompt = PromptTemplate(
    "[INST]<<SYS>>\n" + SYSTEM_PROMPT + "<</SYS>>\n\n{query_str}[/INST] "
)

llm = HuggingFaceLLM(
    context_window=4096,
    max_new_tokens=2048,
    generate_kwargs={"temperature": 0.0, "do_sample": False},
    query_wrapper_prompt=query_wrapper_prompt,
    tokenizer_name=selected_model,
    model_name=selected_model,
    device_map="auto",
)

(2) Load the embedding model and build the index

from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex

# Use the local bge-small-zh-v1.5 clone as the embedding model
embed_model = HuggingFaceEmbedding(model_name="/home/featurize/bge-small-zh-v1.5")

# Register the LLM and embedding model as global defaults
Settings.llm = llm
Settings.embed_model = embed_model

# Load documents from the knowledge-base directory
documents = SimpleDirectoryReader("./data/").load_data()

# Embed the documents and build the vector index
index = VectorStoreIndex.from_documents(documents)
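Under the hood, `VectorStoreIndex` stores an embedding vector per document chunk and answers queries by nearest-neighbour search over those vectors. A toy, dependency-free sketch of that retrieval step follows; the three "chunks" and their 2-D embeddings are invented for illustration (bge-small-zh-v1.5 actually produces high-dimensional vectors).

```python
import math

# Toy corpus with made-up 2-D embeddings standing in for real ones.
corpus = {
    "Microloans are capped at 200,000 yuan.": (0.9, 0.1),
    "The cafeteria opens at 8 a.m.":          (0.1, 0.9),
    "Loan applications need two guarantors.": (0.8, 0.3),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, k=2):
    """Return the k chunks whose embeddings are closest to the query vector."""
    ranked = sorted(corpus, key=lambda t: cosine(query_vec, corpus[t]), reverse=True)
    return ranked[:k]

# A query embedded near the "loan" direction retrieves the loan chunks.
print(retrieve((1.0, 0.2)))
```

In the real pipeline the query is embedded with the same bge model as the documents, which is why the embedding model must stay consistent between indexing and querying.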

 


 

# Set logging to DEBUG for more detailed outputs
query_engine = index.as_query_engine()

response = query_engine.query("小额贷款咋规定的?")  # "How are microloans regulated?"
print(response)
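`as_query_engine()` ties retrieval and generation together: the top-scoring chunks are placed into the prompt ahead of the question, and the LLM answers from that context. A dependency-free sketch of that final assembly step is below; the chunk texts and the template wording are illustrative, not llama-index's exact defaults.

```python
# Hypothetical retrieved chunks; in the real pipeline these come from
# the vector index's nearest-neighbour search.
retrieved_chunks = [
    "Article 3: Microloans to a single borrower may not exceed 200,000 yuan.",
    "Article 7: Applications must include proof of income.",
]

QA_TEMPLATE = (
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information, answer the query.\n"
    "Query: {query_str}\n"
    "Answer: "
)

def build_qa_prompt(chunks, query_str):
    """Join retrieved chunks and substitute them into the QA template."""
    context_str = "\n\n".join(chunks)
    return QA_TEMPLATE.format(context_str=context_str, query_str=query_str)

print(build_qa_prompt(retrieved_chunks, "How are microloans regulated?"))
```

The assembled prompt is then passed through the query wrapper from step (1), so the model sees the system rules, the retrieved context, and the question in one input.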

 

Knowledge Base

A key ingredient of a llama-index RAG pipeline is the knowledge base, which can be made up of documents of many types. The knowledge base used here is a single PDF file placed under the `./data/` directory.
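Before indexing, llama-index splits each loaded document into smaller overlapping nodes (its default splitter works on sentences with configurable chunk size and overlap). A simplified, dependency-free sketch of fixed-size chunking with overlap is below; the character-based sizes are illustrative, not llama-index's defaults.

```python
def chunk_text(text: str, chunk_size: int = 20, overlap: int = 5):
    """Split text into fixed-size character chunks that overlap,
    so content cut at a boundary still appears whole in some chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

doc = "Microloan rules: single-borrower cap is 200,000 yuan."
for c in chunk_text(doc):
    print(repr(c))
```

Smaller chunks retrieve more precisely but carry less context; the overlap guards against answers that straddle a chunk boundary.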

Summary

As the code above shows, the Qwen LLM combined with the bge-small-zh embedding model gives a complete RAG pipeline using locally downloaded models, and the knowledge base supports question answering in Chinese. This makes the setup well suited to private, on-premises deployments that we can extend with further functionality.
