Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.

Keywords: transfer learning, natural language processing, multi-task learning, attention-based models, deep learning
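The unified text-to-text format mentioned in the abstract means that every task, whether translation, classification, or summarization, is posed as feeding the model an input string and training it to produce an output string. A minimal sketch of that framing is shown below; the helper function and the exact example strings are hypothetical, written in the style of the task prefixes illustrated in the paper, not code from the released implementation.

```python
# Hypothetical illustration of the text-to-text framing: every labeled example
# becomes a (source text, target text) pair, with the task named by a prefix.

def to_text_to_text(task_prefix: str, inputs: str, target: str) -> dict:
    """Cast any supervised example as a plain string-to-string pair."""
    return {"source": f"{task_prefix}: {inputs}", "target": target}

# Illustrative examples (strings are placeholders, not taken from a dataset).
examples = [
    to_text_to_text("translate English to German", "That is good.", "Das ist gut."),
    to_text_to_text("cola sentence", "The course is jumping well.", "not acceptable"),
    to_text_to_text("summarize", "severe storms swept the region on tuesday ...",
                    "storms hit the region tuesday ..."),
]

for ex in examples:
    print(ex["source"], "->", ex["target"])
```

Because every task shares this interface, the same model, training objective, and decoding procedure can be reused across tasks; only the prefixed input text changes.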
Training a machine learning model to perform natural language processing (NLP) tasks often requires that the model can process text in a way that is amenable to downstream learning. This can be loosely viewed as developing general-purpose knowledge that allows the model to “understand” text. This knowledge can range from low-level (e.g. the spelling or meaning of words) to high-level (e.g. that a tuba is too large to fit in most backpacks). In modern machine learning practice, providing this knowledge is rarely done explicitly; instead, it is often learned as part of an auxiliary task.