
A Curated Collection of Classic Natural Language Processing Models and Papers


    This article collects dozens of classic deep-learning-based natural language processing models from recent years, together with their associated papers, and shares them here.

    The resources were compiled from the web; original source: https://github.com/gyunggyung/NLP-Papers

Paper List

    [2013/01] Efficient Estimation of Word Representations in Vector Space

    [2014/12] Dependency-Based Word Embeddings

    [2015/07] Neural Machine Translation of Rare Words with Subword Units

    [2014/07] GloVe: Global Vectors for Word Representation : GloVe

    [2016/06] Siamese CBOW: Optimizing Word Embeddings for Sentence Representations : Siamese CBOW

    [2016/07] Enriching Word Vectors with Subword Information : fastText

    [2014/09] Sequence to Sequence Learning with Neural Networks : seq2seq

    [2017/07] Attention Is All You Need : Transformer

    [2017/08] Learned in Translation: Contextualized Word Vectors : CoVe

    [2018/01] Universal Language Model Fine-tuning for Text Classification : ULMFIT

    [2018/02] Deep contextualized word representations : ELMo

    [2018/06] Improving Language Understanding by Generative Pre-Training : GPT-1

    [2018/10] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding : BERT

    [2019/02] Language Models are Unsupervised Multitask Learners : GPT-2

    [2019/04] Language Models with Transformers

    [2019/08] Neural Text Generation with Unlikelihood Training

    [2019/01] Cross-lingual Language Model Pretraining : XLM

    [2019/01] Multi-Task Deep Neural Networks for Natural Language Understanding : MT-DNN

    [2019/01] Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context : Transformer-XL

    [2019/06] XLNet: Generalized Autoregressive Pretraining for Language Understanding : XLNet

    [2019/04] The Curious Case of Neural Text Degeneration

    [2019/09] Fine-Tuning Language Models from Human Preferences

    [2019/01] BioBERT: a pre-trained biomedical language representation model for biomedical text mining : BioBERT

    [2019/03] SciBERT: A Pretrained Language Model for Scientific Text : SciBERT

    [2019/04] ClinicalBERT: Modeling Clinical Notes and Predicting Hospital Readmission : ClinicalBERT

    [2019/06] HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization : HIBERT

    [2019/07] SpanBERT: Improving Pre-training by Representing and Predicting Spans : SpanBERT

    [2019/04] Publicly Available Clinical BERT Embeddings

    [2019/08] Pre-Training with Whole Word Masking for Chinese BERT

    [2019/07] Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment

    [2019/07] R-Transformer: Recurrent Neural Network Enhanced Transformer : R-Transformer

    [2019/09] FreeLB: Enhanced Adversarial Training for Natural Language Understanding : FreeLB

    [2019/09] Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks

    [2019/10] Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer : T5

    [2018/07] Subword-level Word Vector Representations for Korean

    [2019/08] Zero-shot Word Sense Disambiguation using Sense Definition Embeddings

    [2019/06] Bridging the Gap between Training and Inference for Neural Machine Translation

    [2019/06] Emotion-Cause Pair Extraction: A New Task to Emotion Analysis in Texts

    [2019/07] A Simple Theoretical Model of Importance for Summarization

    [2019/05] Transferable Multi-Domain State Generator for Task-Oriented Dialogue Systems

    [2019/07] We need to talk about standard splits

    [2019/07] ERNIE 2.0: A Continual Pre-training Framework for Language Understanding : ERNIE 2.0

    [2019/07] Multi-Task Deep Neural Networks for Natural Language Understanding : mt-dnn

    [2019/05] SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems : SuperGLUE

    [2020/01] Towards a Human-like Open-Domain Chatbot + Google AI Blog

    [2020/03] ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators : ELECTRA

    [2019/04] Mask-Predict: Parallel Decoding of Conditional Masked Language Models : Mask-Predict

    [2020/01] Reformer: The Efficient Transformer : Reformer

    [2020/04] Longformer: The Long-Document Transformer : Longformer

    [2019/11] DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation : DialoGPT

    [2020/01] Towards a Human-like Open-Domain Chatbot

    [2020/04] You Impress Me: Dialogue Generation via Mutual Persona Perception

    [2020/04] Recipes for building an open-domain chatbot

    [2020/04] ToD-BERT: Pre-trained Natural Language Understanding for Task-Oriented Dialogues : ToD-BERT

    [2020/04] SOLOIST: Few-shot Task-Oriented Dialog with A Single Pre-trained Auto-regressive Model : SOLOIST

    [2020/05] A Simple Language Model for Task-Oriented Dialogue

    [2019/07] ReCoSa: Detecting the Relevant Contexts with Self-Attention for Multi-turn Dialogue Generation : ReCoSa

    [2020/04] FastBERT: a Self-distilling BERT with Adaptive Inference Time : FastBERT

    [2020/01] PoWER-BERT: Accelerating BERT inference for Classification Tasks : PoWER-BERT

    [2019/10] DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter : DistilBERT

    [2019/10] TinyBERT: Distilling BERT for Natural Language Understanding : TinyBERT

    [2019/11] Not Enough Data? Deep Learning to the Rescue!

    [2018/12] Conditional BERT Contextual Augmentation

    [2020/03] Data Augmentation using Pre-trained Transformer Models

    [2020/04] FLAT: Chinese NER Using Flat-Lattice Transformer : FLAT

    [2019/12] Big Transfer (BiT): General Visual Representation Learning : BiT
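
As a quick illustration (not part of the original list), many of the pretrained models above, such as BERT and GPT-2, can be loaded through the Hugging Face transformers library. This is a minimal sketch under that assumption; the library and the model identifiers ("bert-base-uncased", "gpt2") are illustrative choices, not something the papers themselves prescribe.

```python
# Minimal sketch: loading two of the pretrained models listed above
# (BERT and GPT-2) with the Hugging Face `transformers` library.
# Model names ("bert-base-uncased", "gpt2") are illustrative choices.
from transformers import AutoModel, AutoModelForCausalLM, AutoTokenizer

# BERT: encode a sentence and take its contextual token embeddings.
bert_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
inputs = bert_tok("Attention is all you need.", return_tensors="pt")
hidden = bert(**inputs).last_hidden_state  # shape: (1, seq_len, 768)

# GPT-2: generate a short continuation from a prompt.
gpt2_tok = AutoTokenizer.from_pretrained("gpt2")
gpt2 = AutoModelForCausalLM.from_pretrained("gpt2")
prompt = gpt2_tok("Natural language processing", return_tensors="pt")
out = gpt2.generate(**prompt, max_new_tokens=20)
print(gpt2_tok.decode(out[0], skip_special_tokens=True))
```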

 
