
LLM Agents: A Detailed Guide to the Introduction and Usage of Personal_LLM_Agents_Survey

Overview: This project maintains a curated list of papers on Personal LLM Agents. Browsing these papers gives a view of the latest research progress in this emerging direction, such as how conversational ability, knowledge representation, and privacy protection are being improved to enhance the user experience. The papers also cover application cases, open challenges, and proposed solutions, for example how to apply LLM agents as education or healthcare assistants, how to make their dialogue more natural, and how to keep user privacy from being misused.
Overall, the project provides a systematically organized list of papers on personal LLM agents, discussing the current state and future directions of this new field from multiple angles, which helps researchers and developers grasp the trends and plan their own work.

Table of Contents

Introduction to Personal_LLM_Agents_Survey

How to Use Personal_LLM_Agents_Survey

1. Key Capabilities of Personal LLM Agents

(1) Task Automation

UI-based Task Automation Agents

LLM-based Approaches

Traditional Approaches

Benchmarks for UI Automation

(2) Sensing

LLM-based Approaches

Traditional Approaches

(3) Memory

Memory Acquisition

Memory Management

Agent Self-Evolution

2. Efficiency of LLM Agents

(1) Efficient LLM Inference and Training

(2) Efficient Memory Retrieval and Management

Organizing Memory

Optimizing Memory Efficiency

Searching Design

Searching Execution

Efficient Indexing

3. Security and Privacy of Personal LLM Agents

(1) Confidentiality (of User Data)

(2) Integrity (of Agent Behavior)

Adversarial Attacks

Backdoor Attacks

Prompt Injection Attacks

(3) Reliability (of Agent Decisions)

Problems

Improvement

Inspection


Introduction to Personal_LLM_Agents_Survey

Personal LLM Agents are defined as a special type of LLM-based agent that is deeply integrated with personal data, personal devices, and personal services. They are preferably deployed on resource-constrained mobile/edge devices and/or powered by lightweight AI models. The main purpose of personal LLM agents is to assist end users and augment their abilities, helping them focus on, and do better at, the things they find interesting and important.

This paper list covers several main aspects of personal LLM agents, including capability, efficiency, and security.

GitHub: https://github.com/MobileLLM/Personal_LLM_Agents_Survey

How to Use Personal_LLM_Agents_Survey

1. Key Capabilities of Personal LLM Agents

(1) Task Automation

Task automation is a core capability of personal LLM agents. It determines how well the agent can respond to user commands and/or automatically complete tasks on the user's behalf. Since UI-based task automation agents are popular in this list and closely related to personal devices, we focus on this aspect; a minimal sketch of the typical agent loop is shown below.
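Most of the LLM-based approaches listed next follow a similar observe-decide-act loop: read the current UI state, ask the LLM to choose the next action, execute it, and repeat until the task is finished. The sketch below is only illustrative; `dump_ui`, `perform`, and `call_llm` are hypothetical placeholders for a real UI toolkit (e.g., accessibility APIs) and a real LLM API.

```python
# Illustrative observe-decide-act loop for a UI automation agent.
# dump_ui(), perform(), and call_llm() are hypothetical placeholders.
import json

def dump_ui() -> str:
    """Return a simplified text description of the current screen."""
    raise NotImplementedError  # e.g., parse the view hierarchy via accessibility APIs

def perform(action: dict) -> None:
    """Execute one UI action, such as a tap or text input."""
    raise NotImplementedError  # e.g., dispatch an input event to the device

def call_llm(prompt: str) -> str:
    """Query an LLM and return its raw text completion."""
    raise NotImplementedError  # e.g., call any chat-completion API

def automate(task: str, max_steps: int = 10) -> None:
    for _ in range(max_steps):
        prompt = (
            f"Task: {task}\n"
            f"Current UI:\n{dump_ui()}\n"
            'Respond with JSON: {"action": "tap|input|finish", '
            '"target": "<element id>", "text": "<optional text>"}'
        )
        action = json.loads(call_llm(prompt))
        if action["action"] == "finish":
            break
        perform(action)
```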

UI-based Task Automation Agents

LLM-based Approaches
  • WebGPT: Browser-assisted question-answering with human feedback. [paper]
  • Enabling Conversational Interaction with Mobile UI Using Large Language Models. [CHI 2023] [paper]
  • Language Models can Solve Computer Tasks. [NeurIPS 2023] [paper]
  • DroidBot-GPT: GPT-powered UI Automation for Android. [arxiv] [code]
  • Responsible Task Automation: Empowering Large Language Models as Responsible Task Automators. [paper]
  • Mind2Web: Towards a Generalist Agent for the Web. arxiv 2023 [paper][code][code]
  • (AutoDroid) Empowering LLM to use Smartphone for Intelligent Task Automation. [paper] [code]
  • You Only Look at Screens: Multimodal Chain-of-Action Agents. ArXiv Preprint [paper] [code]
  • AXNav: Replaying Accessibility Tests from Natural Language. [paper]
  • Automatic Macro Mining from Interaction Traces at Scale. [paper]
  • A Zero-Shot Language Agent for Computer Control with Structured Reflection. [paper]
  • Reinforced UI Instruction Grounding: Towards a Generic UI Task Automation API. [paper]
  • GPT-4V in Wonderland: Large Multimodal Models for Zero-Shot Smartphone GUI Navigation. [paper][code]
  • UGIF: UI Grounded Instruction Following. [paper]
  • Explore, Select, Derive, and Recall: Augmenting LLM with Human-like Memory for Mobile Task Automation. [paper][code]
  • CogAgent: A Visual Language Model for GUI Agents. [paper][code]
  • AppAgent: Multimodal Agents as Smartphone Users. [paper][code]
Traditional Approaches
  • uLink: Enabling User-Defined Deep Linking to App Content. [Mobisys 2016]
  • SUGILITE: Creating Multimodal Smartphone Automation by Demonstration. [CHI 2017] [paper][code]
  • Programming IoT devices by demonstration using mobile apps. [IS-EUD 2017]
  • Kite: Building Conversational Bots from Mobile Apps. [MobiSys 2018]. [paper]
  • Reinforcement Learning on Web Interfaces using Workflow-Guided Exploration. [ICLR 2018]. [paper][code]
  • Mapping Natural Language Instructions to Mobile UI Action Sequences. [ACL 2020] [paper][code]
  • Glider: A Reinforcement Learning Approach to Extract UI Scripts from Websites. [SIGIR 2021] [paper]
  • UIBert: Learning Generic Multimodal Representations for UI Understanding. [IJCAI-21] [paper]
  • META-GUI: Towards Multi-modal Conversational Agents on Mobile GUI. [EMNLP 2022][paper][code]
  • UINav: A maker of UI automation agents. [paper]

Benchmarks for UI Automation
  • Mapping natural language commands to web elements. [EMNLP 2018] [paper][code]
  • UIBert: Learning Generic Multimodal Representations for UI Understanding. [IJCAI-21] [paper]
  • Mapping Natural Language Instructions to Mobile UI Action Sequences. [ACL 2020] [paper][code]
  • A Dataset for Interactive Vision Language Navigation with Unknown Command Feasibility. [ECCV 2022][paper] [code]
  • META-GUI: Towards Multi-modal Conversational Agents on Mobile GUI. [EMNLP 2022][paper][code]
  • UGIF: UI Grounded Instruction Following. [paper]
  • ASSISTGUI: Task-Oriented Desktop Graphical User Interface Automation. [paper][code]
  • Mind2Web: Towards a Generalist Agent for the Web. arxiv 2023 [paper][code][code]
  • Android in the Wild: A Large-Scale Dataset for Android Device Control. [paper][code]
  • Empowering LLM to use Smartphone for Intelligent Task Automation. [paper] [code]
  • World of Bits: An Open-Domain Platform for Web-Based Agents. [ICML 2017] [paper][code]
  • Reinforcement Learning on Web Interfaces using Workflow-Guided Exploration. [ICLR 2018]. [paper][code]
  • WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents. [NeurIPS 2022] [paper]
  • AndroidEnv: A Reinforcement Learning Platform for Android [paper][code]
  • Mobile-Env: An Evaluation Platform and Benchmark for Interactive Agents in LLM Era. [paper][code]
  • WebArena: A Realistic Web Environment for Building Autonomous Agents. [paper][code]

(2) Sensing

The ability to understand the current context is crucial for a personal LLM agent to deliver personalized, context-aware services. This covers techniques for sensing user activities, mental states, environment dynamics, and more.
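A recurring pattern in the LLM-based approaches below (e.g., Penetrative AI) is to serialize raw sensor readings into a prompt and let the model infer the user's context. The following sketch is a loose illustration of that idea rather than any specific paper's method; `call_llm` is a hypothetical placeholder for an LLM API.

```python
# Loose illustration of LLM-based context sensing: format sensor
# readings as text and ask the model to classify the user's activity.
def call_llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder for any chat-completion API

def infer_activity(accel_samples: list[tuple[float, float, float]],
                   screen_on: bool) -> str:
    readings = "\n".join(f"{x:.2f}, {y:.2f}, {z:.2f}" for x, y, z in accel_samples)
    prompt = (
        "Accelerometer samples (x, y, z in m/s^2) from the last 10 seconds:\n"
        f"{readings}\n"
        f"Screen on: {screen_on}\n"
        "Answer with exactly one word: walking, sitting, running, or unknown."
    )
    return call_llm(prompt).strip().lower()
```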

LLM-based Approaches

  • “Automated Mobile Sensing Strategies Generation for Human Behaviour Understanding” (Gao et al., 2023, p. 521) arxiv
  • “Cue-CoT: Chain-of-thought Prompting for Responding to In-depth Dialogue Questions with LLMs” (Wang et al., 2023, p. 1) EMNLP 2023
  • “Exploring Large Language Models for Human Mobility Prediction under Public Events” (Liang et al., 2023, p. 1) arxiv
  • “Penetrative AI: Making LLMs Comprehend the Physical World” (Xu et al., 2023, p. 1) arxiv
  • “Evaluating Subjective Cognitive Appraisals of Emotions from Large Language Models” (Zhan et al., 2023, p. 1) arxiv
  • “PALR: Personalization Aware LLMs for Recommendation” (Yang et al., 2023, p. 1) arxiv
  • “Sentiment Analysis through LLM Negotiations” (Sun et al., 2023, p. 1) arxiv
  • “Bridging the Information Gap Between Domain-Specific Model and General LLM for Personalized Recommendation” (Zhang et al., 2023, p. 1) arxiv
  • “Conversational Health Agents: A Personalized LLM-Powered Agent Framework” (Abbasian et al., 2023, p. 1) arxiv

Traditional Approaches

  • “Affective State Prediction from Smartphone Touch and Sensor Data in the Wild” (Wampfler et al., 2022, p. 1) CHI'22

  • “Mobile Localization Techniques for Wireless Sensor Networks: Survey and Recommendations” (Oliveira et al., 2023, p. 361) ACM Transactions on Sensor Networks

  • “Are You Killing Time? Predicting Smartphone Users’ Time-killing Moments via Fusion of Smartphone Sensor Data and Screenshots” (Chen et al., 2023, p. 1) CHI'23

  • “Remote Breathing Rate Tracking in Stationary Position Using the Motion and Acoustic Sensors of Earables” (Ahmed et al., 2023, p. 1) CHI'23

  • “SAMoSA: Sensing Activities with Motion and Subsampled Audio” (Mollyn et al., 2022, p. 1321) IMWUT

  • “A Systematic Survey on Android API Usage for Data-Driven Analytics with Smartphones” (Lee et al., 2023, p. 1) ACM Computing Surveys

  • “A Multi-Sensor Approach to Automatically Recognize Breaks and Work Activities of Knowledge Workers in Academia” (Di Lascio et al., 2020, p. 781) IMWUT

  • “Robust Inertial Motion Tracking through Deep Sensor Fusion across Smart Earbuds and Smartphone” (Gong et al., 2021, p. 621) IMWUT

  • “DancingAnt: Body-empowered Wireless Sensing Utilizing Pervasive Radiations from Powerline” (Cui et al., 2023, p. 873) ACM MobiCom'23

  • “DeXAR: Deep Explainable Sensor-Based Activity Recognition in Smart-Home Environments” (Arrotta et al., 2022, p. 11) IMWUT

  • “MUSE-Fi: Contactless MUti-person SEnsing Exploiting Near-field Wi-Fi Channel Variation” (Hu et al., 2023, p. 1135) IMWUT

  • “SenCom: Integrated Sensing and Communication with Practical WiFi” (He et al., 2023, p. 903) ACM MobiCom'23

  • “SleepMore: Inferring Sleep Duration at Scale via Multi-Device WiFi Sensing” (Zakaria et al., 2022, p. 1931) IMWUT

  • “COCOA: Cross Modality Contrastive Learning for Sensor Data” (Deldari et al., 2022, p. 1081) ACM MobiCom'23

  • “M3Sense: Affect-Agnostic Multitask Representation Learning Using Multimodal Wearable Sensors” (Samyoun et al., 2022, p. 731) IMWUT

  • “Predicting Subjective Measures of Social Anxiety from Sparsely Collected Mobile Sensor Data” (Rashid et al., 2020, p. 1091) IMWUT

  • “Attend and Discriminate: Beyond the State-of-the-Art for Human Activity Recognition Using Wearable Sensors” (Abedin et al., 2021, p. 11) IMWUT

  • “Fall Detection based on Interpretation of Important Features with Wrist-Wearable Sensors” (Kim et al., 2022, p. 1) IMWUT

  • “PowerPhone: Unleashing the Acoustic Sensing Capability of Smartphones” (Cao et al., 2023, p. 842) ACM MobiCom'23

  • “I Spy You: Eavesdropping Continuous Speech on Smartphones via Motion Sensors” (Zhang et al., 2022, p. 1971) IMWUT

  • “Watching Your Phone’s Back: Gesture Recognition by Sensing Acoustical Structure-borne Propagation” (Wang et al., 2021, p. 821) IMWUT

  • “Gesture Recognition Method Using Acoustic Sensing on Usual Garment” (Amesaka et al., 2022, p. 411) IMWUT

  • “Complex Daily Activities, Country-Level Diversity, and Smartphone Sensing: A Study in Denmark, Italy, Mongolia, Paraguay, and UK” (Assi et al., 2023, p. 1) CHI'23
  • “Generalization and Personalization of Mobile Sensing-Based Mood Inference Models: An Analysis of College Students in Eight Countries” (Meegahapola et al., 2022, p. 1761) IMWUT
  • “Detecting Social Contexts from Mobile Sensing Indicators in Virtual Interactions with Socially Anxious Individuals” (Wang et al., 2023, p. 1341) IMWUT
  • “Examining the Social Context of Alcohol Drinking in Young Adults with Smartphone Sensing” (Meegahapola et al., 2021, p. 1211) IMWUT
  • “Towards Open-Domain Twitter User Profile Inference” (Wen et al., 2023, p. 3172) ACL 2023
  • “One More Bite? Inferring Food Consumption Level of College Students Using Smartphone Sensing and Self-Reports” (Meegahapola et al., 2021, p. 261) IMWUT
  • “FlowSense: Monitoring Airflow in Building Ventilation Systems Using Audio Sensing” (Chhaglani et al., 2022, p. 51) IMWUT
  • “MicroCam: Leveraging Smartphone Microscope Camera for Context-Aware Contact Surface Sensing” (Hu et al., 2023, p. 981) IMWUT

  • “Mobile and Wearable Sensing Frameworks for mHealth Studies and Applications: A Systematic Review” (Kumar et al., 2021, p. 81) ACM Transactions on Computing for Healthcare



  • “FeverPhone: Accessible Core-Body Temperature Sensing for Fever Monitoring Using Commodity Smartphones” (Breda et al., 2022, p. 31) IMWUT

  • “Guard Your Heart Silently: Continuous Electrocardiogram Waveform Monitoring with Wrist-Worn Motion Sensor” (Cao et al., 2022, p. 1031) IMWUT

  • “Listen2Cough: Leveraging End-to-End Deep Learning Cough Detection Model to Enhance Lung Health Assessment Using Passively Sensed Audio” (Xu et al., 2021, p. 431) IMWUT

  • “HealthWalks: Sensing Fine-grained Individual Health Condition via Mobility Data” (Lin et al., 2020, p. 1381) IMWUT

  • “Identifying Mobile Sensing Indicators of Stress-Resilience” (Adler et al., 2021, p. 511) IMWUT

  • “MoodExplorer: Towards Compound Emotion Detection via Smartphone Sensing” (Zhang et al., 2018, p. 1761) IMWUT

  • “mTeeth: Identifying Brushing Teeth Surfaces Using Wrist-Worn Inertial Sensors” (Akther et al., 2021, p. 531) IMWUT

  • “Detecting Job Promotion in Information Workers Using Mobile Sensing” (Nepal et al., 2020, p. 1131) IMWUT

  • “First-Gen Lens: Assessing Mental Health of First-Generation Students across Their First Year at College Using Mobile Sensing” (Wang et al., 2022, p. 951) IMWUT

  • “Predicting Personality Traits from Physical Activity Intensity” (Gao et al., 2019, p. 1) IEEE Computer

  • “Predicting Symptom Trajectories of Schizophrenia using Mobile Sensing” (Wang et al., 2017, p. 1101) IMWUT

  • “Predictors of Life Satisfaction based on Daily Activities from Mobile Sensor Data” (Yürüten et al., 2014, p. 1) CHI'14

  • “SmartGPA: How Smartphones Can Assess and Predict Academic Performance of College Students” (Wang et al., 2015, p. 1) UbiComp'15

  • “Social Sensing: Assessing Social Functioning of Patients Living with Schizophrenia using Mobile Phone Sensing” (Wang et al., 2020, p. 1) CHI'20

  • “SmokingOpp: Detecting the Smoking ‘Opportunity’ Context Using Mobile Sensors” (Chatterjee et al., 2020, p. 41) IMWUT

(3) Memory

Memory is the agent's capability to retain information about the user, which enables it to provide more customized services and to evolve itself according to the user's preferences; an illustrative sketch of these sub-topics follows the headings below.

Memory Acquisition

Memory Management

Agent Self-Evolution
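The three sub-topics above can be loosely pictured as a small memory module that records new observations (acquisition), compacts or prunes old ones (management), and updates a distilled user profile over time (self-evolution). The sketch below is purely illustrative and not tied to any specific paper; all names are hypothetical.

```python
# Purely illustrative memory module for a personal LLM agent.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class MemoryItem:
    text: str
    timestamp: datetime = field(default_factory=datetime.now)

class AgentMemory:
    def __init__(self, capacity: int = 100):
        self.events: list[MemoryItem] = []     # raw observations
        self.preferences: dict[str, str] = {}  # distilled user profile
        self.capacity = capacity

    def acquire(self, observation: str) -> None:
        """Memory acquisition: record a new observation."""
        self.events.append(MemoryItem(observation))

    def manage(self) -> None:
        """Memory management: drop the oldest items when over capacity.
        A real system would summarize or index them instead of deleting."""
        overflow = len(self.events) - self.capacity
        if overflow > 0:
            del self.events[:overflow]

    def evolve(self, key: str, value: str) -> None:
        """Self-evolution: update the distilled preference profile."""
        self.preferences[key] = value
```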

2. Efficiency of LLM Agents

The efficiency of LLM agents is closely tied to the efficiency of LLM inference, LLM training/customization, and memory management.

(1) Efficient LLM Inference and Training

The efficiency of LLM inference/training has already been comprehensively summarized in existing surveys (e.g., this link). We therefore omit this part from the list.

(2) Efficient Memory Retrieval and Management

Here we mainly list papers related to efficient memory management, which is an important component of LLM agents.
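Most of the systems below organize agent memory as embedding vectors and retrieve relevant items by similarity search. A minimal sketch of that pattern is shown here, assuming the faiss-cpu package is installed; the embed() function is a hypothetical placeholder for a real text-embedding model (here it returns random vectors, so the results are meaningless).

```python
# Minimal sketch of vector-based memory retrieval with Faiss.
# embed() is a placeholder for a real embedding model.
import numpy as np
import faiss

DIM = 384  # embedding dimensionality (depends on the embedding model)

def embed(texts: list[str]) -> np.ndarray:
    rng = np.random.default_rng(0)  # placeholder: returns random vectors
    return rng.standard_normal((len(texts), DIM)).astype("float32")

memories = [
    "User prefers dark mode in all apps",
    "User has a recurring meeting every Monday at 9am",
    "User asked to be reminded about medication at 8pm",
]

index = faiss.IndexFlatL2(DIM)   # exact L2 search over memory vectors
index.add(embed(memories))

query = "What are the user's UI preferences?"
_, ids = index.search(embed([query]), k=2)
print([memories[i] for i in ids[0]])
```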

Organizing Memory

(with vector library, vector DB, and others)

Vector Library

  • RETRO: Improving language models by retrieving from trillions of tokens. [ICML, 2021] [paper]
  • RETA-LLM: A Retrieval-Augmented Large Language Model Toolkit. [arXiv, 2023] [paper] [code]
  • TRIME: Training Language Models with Memory Augmentation. [EMNLP, 2022] [paper] [code]
  • Enhancing LLM Intelligence with ARM-RAG: Auxiliary Rationale Memory for Retrieval Augmented Generation. [arXiv, 2023] [paper] [code]

Vector Database

  • Survey of Vector Database Management Systems. [arXiv, 2023] [paper]
  • Vector database management systems: Fundamental concepts, use-cases, and current challenges. [arXiv, 2023] [paper]
  • A Comprehensive Survey on Vector Database: Storage and Retrieval Technique, Challenge. [arXiv, 2023] [paper]

Other Forms of Memory

  • Memorizing Transformers. [ICLR, 2022] [paper] [code]
  • RET-LLM: Towards a General Read-Write Memory for Large Language Models. [arXiv, 2023] [paper]

Optimizing Memory Efficiency

Searching Design
Searching Execution
  • Faiss: Facebook AI Similarity Search. [wiki] [code]
  • Milvus: A purpose-built vector data management system. [SIGMOD, 2021] [paper] [code]
  • Quicker ADC : Unlocking the Hidden Potential of Product Quantization With SIMD. [IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019] [paper] [code]
Efficient Indexing
  • LSH: Locality-sensitive hashing scheme based on p-stable distributions. [SCG, 2004] [paper]
  • Random projection trees and low dimensional manifolds. [STOC, 2008] [paper]
  • SPANN: Highly-efficient Billion-scale Approximate Nearest Neighborhood Search. [NeurIPS, 2021] [paper] [code]
  • Efficient and Robust Approximate Nearest Neighbor Search Using Hierarchical Navigable Small World Graphs. [IEEE Transactions on Pattern Analysis and Machine Intelligence, VOL. 42, NO. 4, 2020] [paper]
  • DiskANN: Fast Accurate Billion-point Nearest Neighbor Search on a Single Node. [NeurIPS, 2019] [paper] [code]
  • DiskANN++: Efficient Page-based Search over Isomorphic Mapped Graph Index using Query-sensitivity Entry Vertex. [arXiv, 2023] [paper]
  • CXL-ANNS: Software-Hardware Collaborative Memory Disaggregation and Computation for Billion-Scale Approximate Nearest Neighbor Search. [USENIX ATC, 2023] [paper]
  • Co-design Hardware and Algorithm for Vector Search. [SC, 2023] [paper] [code]
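The indexing papers above (LSH, HNSW, SPANN, DiskANN, and others) trade a small amount of recall for much faster search than an exact flat index. As a rough illustration, again assuming faiss-cpu and with arbitrary, untuned parameters, the exact index from the earlier sketch can be swapped for an HNSW index:

```python
# Rough illustration of approximate nearest-neighbor indexing with
# HNSW in Faiss; M and efSearch are arbitrary and would need tuning.
import numpy as np
import faiss

dim, n = 128, 100_000
vectors = np.random.random((n, dim)).astype("float32")

index = faiss.IndexHNSWFlat(dim, 32)  # 32 = graph connectivity (M)
index.hnsw.efSearch = 64              # search-time recall/latency knob
index.add(vectors)

query = np.random.random((1, dim)).astype("float32")
distances, ids = index.search(query, k=10)  # approximate top-10 neighbors
print(ids[0])
```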

3. Security and Privacy of Personal LLM Agents

Security and privacy of AI/ML is a huge area with a large body of related work. Here we only focus on papers related to LLMs and LLM agents.

(1) Confidentiality (of User Data)

  • THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption. [ACL, 2022][paper]
  • TextFusion: Privacy-Preserving Pre-trained Model Inference via Token Fusion [EMNLP, 2022] [paper][code]
  • TextObfuscator: Making Pre-trained Language Model a Privacy Protector via Obfuscating Word Representations. [ACL, 2023] [paper][code]
  • Adversarial Training for Large Neural Language Models. [arXiv, 2020] [paper][code]

(2) Integrity (of Agent Behavior)

Adversarial Attacks
  • Certifying LLM Safety against Adversarial Prompting. [arXiv, 2023] [paper][code]
  • On evaluating adversarial robustness of large vision-language models. [arXiv, 2023] [paper][code]
  • Jailbroken: How Does LLM Safety Training Fail? [arXiv, 2023] [paper]
  • On the adversarial robustness of multi-modal foundation models. [arXiv, 2023] [paper]
  • Misusing Tools in Large Language Models With Visual Adversarial Examples. [arXiv, 2023] [paper]
  • Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal Language Models. [arXiv, 2023] [paper]
Backdoor Attacks
  • Backdoor attacks for in-context learning with language models. [arXiv, 2023] [paper]
  • Prompt as Triggers for Backdoor Attack: Examining the Vulnerability in Language Models. [arXiv, 2023] [paper]
  • PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models. [arXiv, 2023] [paper][code]
  • Defending against backdoor attacks in natural language generation. [arXiv, 2021] [paper][code]
Prompt Injection Attacks
  • Not What You've Signed Up For: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection. [arXiv, 2023] [paper]
  • Ignore Previous Prompt: Attack Techniques For Language Models. [arXiv, 2022] [paper][code]
  • Prompt Injection attack against LLM-integrated Applications. [arXiv, 2023] [paper][code]
  • Jailbreaking Black Box Large Language Models in Twenty Queries. [arXiv, 2023] [paper][code]
  • Extracting Training Data from Large Language Models. [arXiv, 2020] [paper]
  • SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks. [arXiv, 2023] [paper][code]

(3) Reliability (of Agent Decisions)

Problems
  • Survey of Hallucination in Natural Language Generation. [ACM Computing Surveys 2023] [paper]
  • A Survey of Hallucination in Large Foundation Models. [arXiv, 2023] [paper]
  • DERA: Enhancing Large Language Model Completions with Dialog-Enabled Resolving Agents. [arXiv, 2023] [paper]
  • Cumulative Reasoning with Large Language Models. [arXiv, 2023] [paper]
  • Learning From Mistakes Makes LLM Better Reasoner. [arXiv, 2023] [paper]
  • Large Language Models can Learn Rules. [arXiv, 2023] [paper]
Improvement
  • PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts. [ACL 2022] [paper]
  • Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks. [EMNLP 2022] [paper]
  • Finetuned Language Models are Zero-Shot Learners. [ICLR 2022] [paper]
  • SELFCHECKGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models. [EMNLP 2023] [paper]
  • Large Language Models Can Self-Improve. [arXiv, 2022] [paper]
  • Self-Refine: Iterative Refinement with Self-Feedback. [arXiv, 2023] [paper]
  • Teaching Large Language Models to Self-Debug. [arXiv, 2023] [paper]
  • Prompt-Guided Retrieval Augmentation for Non-Knowledge-Intensive Tasks. [ACL 2023] [paper]
  • Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models. [arXiv, 2023] [paper]
  • Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection. [arXiv, 2023] [paper]
  • Self-Knowledge Guided Retrieval Augmentation for Large Language Models. [Findings of EMNLP, 2023] [paper]
Inspection
  • CGMH: Constrained Sentence Generation by Metropolis-Hastings Sampling. [AAAI 2019] [paper]
  • Gradient-Based Constrained Sampling from Language Models. [EMNLP 2022] [paper]
  • Large Language Models are Better Reasoners with Self-Verification. [Findings of EMNLP 2023] [paper]
  • Explainability for Large Language Models: A Survey. [arXiv, 2023] [paper]
  • Self-Consistency Improves Chain of Thought Reasoning in Language Models. [ICLR, 2023] [paper]
  • Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models. [arXiv, 2023] [paper]
  • Mutual Information Alleviates Hallucinations in Abstractive Summarization. [EMNLP, 2023] [paper]
  • Overthinking the Truth: Understanding how Language Models Process False Demonstrations. [arXiv, 2023] [paper]
  • Inference-Time Intervention: Eliciting Truthful Answers from a Language Model. [NeurIPS, 2023] [paper]
