[晓理紫] Daily paper digest (with Chinese abstracts, code, or project links)
Title: A Comprehensive Study of Knowledge Editing for Large Language Models
Authors: Ningyu Zhang, Yunzhi Yao, Bozhong Tian, et al.
Abstract: Large Language Models (LLMs) have shown extraordinary capabilities in
understanding and generating text that closely mirrors human communication.
However, a primary limitation lies in the significant computational demands
during training, arising from their extensive parameterization. This challenge
is further intensified by the dynamic nature of the world, necessitating
frequent updates to LLMs to correct outdated information or integrate new
knowledge, thereby ensuring their continued relevance. Note that many
applications demand continual model adjustments post-training to address
deficiencies or undesirable behaviors. There is an increasing interest in
efficient, lightweight methods for on-the-fly model modifications. To this end,
recent years have seen a burgeoning in the techniques of knowledge editing for
LLMs, which aim to efficiently modify LLMs’ behaviors within specific domains
while preserving overall performance across various inputs. In this paper, we
first define the knowledge editing problem and then provide a comprehensive
review of cutting-edge approaches. Drawing inspiration from educational and
cognitive research theories, we propose a unified categorization criterion that
classifies knowledge editing methods into three groups: resorting to external
knowledge, merging knowledge into the model, and editing intrinsic knowledge.
Furthermore, we introduce a new benchmark, KnowEdit, for a comprehensive
empirical evaluation of representative knowledge editing approaches.
Additionally, we provide an in-depth analysis of knowledge location, which can
provide a deeper understanding of the knowledge structures inherent within
LLMs. Finally, we discuss several potential applications of knowledge editing,
outlining its broad and impactful implications.
[Paper download:] http://arxiv.org/abs/2401.01286v2
[Project page:] https://huggingface.co/datasets/zjunlp/KnowEdit
[GitHub:] https://github.com/zjunlp/EasyEdit | https://github.com/zjunlp/KnowledgeEditingPapers
Follow 晓理紫 for daily paper updates, and please forward this to anyone who may find it useful.
{晓理紫} loves to share and appreciates your support; if you enjoyed this, leave a like or comment!