
llamaIndex: loading a local embedding model on the GPU

The script below tunes PyTorch's CUDA allocator, reads the sample documents, registers a local bge-large-zh-v1.5 model as the global embedding model, and builds a vector index with it.
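Before running it, it helps to confirm that PyTorch can actually see a CUDA device; otherwise the embedding model silently falls back to the CPU. A minimal sanity check:

import torch

# Report the visible GPU, or warn that inference will run on the CPU.
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
else:
    print("No CUDA device visible; embeddings will run on the CPU.")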

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
import os

# Configure the PyTorch CUDA allocator. Assigning the variable twice would
# overwrite the first value, so both options go in one comma-separated string.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = (
    "max_split_size_mb:4000,expandable_segments:True"
)

# Load the documents to be indexed.
documents = SimpleDirectoryReader("./data/paul_graham").load_data()

# Load the local BGE model and register it as the global embedding model.
# device="cuda" places the model on the GPU.
Settings.embed_model = HuggingFaceEmbedding(
    model_name="/home/leicq/Documents/LLM_models/bge-large-zh-v1.5",
    device="cuda",
)

# Build the vector index; the embeddings are computed on the GPU.
index = VectorStoreIndex.from_documents(documents)
print("hello")