Here we go one step further and swap the embedding model for a local one as well, getting familiar with the workflow and picking up a few new things along the way.
1. The environment is the same as before. First, download the LLM.
Then you will hit a wall when downloading the nomic model: searching fails and so does the download.
A workaround is described here:
lm studio 0.2.24国内下载模型_lm studio 国内源-CSDN博客
Even following that tutorial, the model still cannot be downloaded; search works afterwards, but that alone is not much use.
Instead, download the model directly from Hugging Face and import it into LM Studio's models folder:
C:\Users\Administrator\.cache\lm-studio\models
Note that the folder layout has a required format:
C:\Users\Administrator\.cache\lm-studio\models\Publisher\Repository
The model is only recognized when it sits inside this Publisher\Repository structure; after that, load the model. A scripted version of this step is sketched below.
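For repeatability, here is a minimal sketch of the download step using the huggingface_hub package; the repo id, filename, and target path are assumptions from my setup and may need adjusting:

# Sketch: fetch the GGUF and drop it where LM Studio looks for models.
# Assumptions: huggingface_hub is installed (pip install huggingface_hub) and
# the quantized file lives in the nomic-ai/nomic-embed-text-v1.5-GGUF repo.
from pathlib import Path
from huggingface_hub import hf_hub_download

# LM Studio only recognizes models placed under <models>\<Publisher>\<Repository>
target_dir = Path(r"C:\Users\Administrator\.cache\lm-studio\models\Publisher\Repository")
target_dir.mkdir(parents=True, exist_ok=True)

hf_hub_download(
    repo_id="nomic-ai/nomic-embed-text-v1.5-GGUF",   # assumed Hugging Face repo id
    filename="nomic-embed-text-v1.5.Q5_K_M.gguf",
    local_dir=target_dir,
)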
Then modify the configuration:
settings.yaml
## I'm running LM Studio on another computer here, which is why the embedding api_base IP is 192.168.1.127
encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ollama
  type: openai_chat # or azure_openai_chat
  model: llama3
  model_supports_json: true # recommended if this is available for your model.
  # max_tokens: 4000
  # request_timeout: 180.0
  api_base: http://127.0.0.1:11434/v1
  # api_version: 2024-02-15-preview
  # organization: <organization_id>
  # deployment_name: <azure_model_deployment_name>
  # tokens_per_minute: 150_000 # set a leaky bucket throttle
  # requests_per_minute: 10_000 # set a leaky bucket throttle
  # max_retries: 10
  # max_retry_wait: 10.0
  # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
  # concurrent_requests: 25 # the number of parallel inflight requests that may be made

parallelization:
  stagger: 0.3
  # num_threads: 50 # the number of threads to use for parallel processing

async_mode: threaded # or asyncio

embeddings:
  ## parallelization: override the global parallelization settings for embeddings
  async_mode: threaded # or asyncio
  llm:
    api_key: lm-studio
    type: openai_embedding # or azure_openai_embedding
    model: Publisher/Repository/nomic-embed-text-v1.5.Q5_K_M.gguf
    api_base: http://192.168.1.127:1234/v1
    # api_version: 2024-02-15-preview
    # organization: <organization_id>
    # deployment_name: <azure_model_deployment_name>
    # tokens_per_minute: 150_000 # set a leaky bucket throttle
    # requests_per_minute: 10_000 # set a leaky bucket throttle
    # max_retries: 10
    # max_retry_wait: 10.0
    # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
    # concurrent_requests: 25 # the number of parallel inflight requests that may be made
    # batch_size: 16 # the number of documents to send in a single request
    # batch_max_tokens: 8191 # the maximum number of tokens to send in a single request
    # target: required # or optional

chunks:
  size: 300
  overlap: 100
  group_by_columns: [id] # by default, we don't allow chunks to cross documents

input:
  type: file # or blob
  file_type: text # or csv
  base_dir: "input"
  file_encoding: utf-8
  file_pattern: ".*\\.txt$"

cache:
  type: file # or blob
  base_dir: "cache"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

storage:
  type: file # or blob
  base_dir: "output/${timestamp}/artifacts"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

reporting:
  type: file # or console, blob
  base_dir: "output/${timestamp}/reports"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

entity_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/entity_extraction.txt"
  entity_types: [organization,person,geo,event]
  max_gleanings: 0

summarize_descriptions:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/summarize_descriptions.txt"
  max_length: 500

claim_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  # enabled: true
  prompt: "prompts/claim_extraction.txt"
  description: "Any claims or facts that could be relevant to information discovery."
  max_gleanings: 0

community_report:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/community_report.txt"
  max_length: 2000
  max_input_length: 8000

cluster_graph:
  max_cluster_size: 10

embed_graph:
  enabled: false # if true, will generate node2vec embeddings for nodes
  # num_walks: 10
  # walk_length: 40
  # window_size: 2
  # iterations: 3
  # random_seed: 597832

umap:
  enabled: false # if true, will generate UMAP embeddings for nodes

snapshots:
  graphml: false
  raw_entities: false
  top_level_nodes: false

local_search:
  # text_unit_prop: 0.5
  # community_prop: 0.1
  # conversation_history_max_turns: 5
  # top_k_mapped_entities: 10
  # top_k_relationships: 10
  # max_tokens: 12000

global_search:
  # max_tokens: 12000
  # data_max_tokens: 12000
  # map_max_tokens: 1000
  # reduce_max_tokens: 2000
  # concurrency: 32
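Before kicking off indexing, it is worth sanity-checking both OpenAI-compatible endpoints. A minimal sketch using the openai v1 Python client, reusing the hosts and model names from the config above:

# Sketch: verify the Ollama chat endpoint and the LM Studio embedding endpoint.
# Assumption: the openai package (v1.x) is installed: pip install openai
from openai import OpenAI

chat = OpenAI(base_url="http://127.0.0.1:11434/v1", api_key="ollama")
reply = chat.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Say hi in one word."}],
)
print(reply.choices[0].message.content)

embed = OpenAI(base_url="http://192.168.1.127:1234/v1", api_key="lm-studio")
vec = embed.embeddings.create(
    model="Publisher/Repository/nomic-embed-text-v1.5.Q5_K_M.gguf",
    input=["hello graphrag"],
)
print(len(vec.data[0].embedding))  # should be 768 for nomic-embed-text-v1.5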
# Test document: https://github.com/win4r/mytest/blob/main/book.pdf

pip install marker-pdf

marker_single ./book.pdf ./pdftxt --batch_multiplier 2 --max_pages 60 --langs English

# Convert Markdown to txt (a sketch of the helper script follows below)
python markdown_to_text.py book.md book.txt
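The markdown_to_text.py helper is not shown in the original; here is a minimal sketch of what it could look like, assuming the markdown and beautifulsoup4 packages are installed:

# Sketch: strip Markdown formatting by rendering to HTML, then extracting text.
# Assumptions: pip install markdown beautifulsoup4
# Usage: python markdown_to_text.py book.md book.txt
import sys
from bs4 import BeautifulSoup
from markdown import markdown

def markdown_to_text(md: str) -> str:
    html = markdown(md)  # Markdown -> HTML
    return BeautifulSoup(html, "html.parser").get_text(separator="\n")

if __name__ == "__main__":
    src, dst = sys.argv[1], sys.argv[2]
    with open(src, encoding="utf-8") as f:
        text = markdown_to_text(f.read())
    with open(dst, "w", encoding="utf-8") as f:
        f.write(text)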