
"Everything Can Be Seq2Seq" | A Faithful Hand Translation of the T5 Paper

"Everything can be seq2seq" | Running Google's multilingual T5 in a Chinese-language setting with bert4keras

《Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer》

Abstract

     Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.

     Keywords: transfer learning, natural language processing, multi-task learning, attention-based models, deep learning

     Transfer learning, in which a model is first pre-trained on a data-rich task and then fine-tuned for a downstream task, is a powerful technique in natural language processing. Its effectiveness has given rise to a diversity of approaches, methods, and practices. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. We systematically compare pre-training objectives, architectures, unlabeled datasets, transfer methods, and other factors on dozens of language understanding tasks. By combining this exploration with scale and the new "Colossal Clean Crawled Corpus" (C4), we achieve state-of-the-art results on many benchmarks, including text summarization, question answering, and text classification. To facilitate the development of transfer learning for NLP, we release our dataset, pre-trained models, and code.
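     To make the text-to-text idea concrete, the short sketch below casts two different tasks (translation and summarization) as the same string-in, string-out problem. This is only an illustration under stated assumptions: it uses the Hugging Face transformers library and the public "t5-small" checkpoint rather than the bert4keras / multilingual T5 setup referred to in this article's title, and the task prefixes follow the convention described in the paper.

```python
# Minimal sketch of the text-to-text format.
# Assumes the Hugging Face `transformers` library (plus sentencepiece) is
# installed and the public "t5-small" checkpoint can be downloaded.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is expressed as plain text (a task prefix plus the input),
# and the answer is decoded back as plain text: "everything is seq2seq".
examples = [
    "translate English to German: The house is wonderful.",
    "summarize: Transfer learning, where a model is first pre-trained on a "
    "data-rich task before being fine-tuned on a downstream task, has emerged "
    "as a powerful technique in natural language processing.",
]

for text in examples:
    inputs = tokenizer(text, return_tensors="pt")
    outputs = model.generate(**inputs, max_length=50)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

     The same pattern extends to classification or question answering by choosing a different prefix and reading the label or answer out of the decoded text.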

Section 1: Introduction

     Training a machine learning model to perform natural language processing (NLP) tasks often requires that the model can process text in a way that is amenable to downstream learning. This can be loosely viewed as developing general-purpose knowledge that allows the model to "understand" text. This knowledge can range from low-level (e.g. the spelling or meaning of words) to high-level (e.g. that a tuba is too large to fit in most backpacks). In modern machine learning practice, providing this knowledge is rarely done explicitly; instead, it is often learned as part of an auxiliary task.
