
A Summary of Large Language Models (LLMs)

Large language models (LLMs) are among the most important directions in current AI and NLP research and industry.

This post summarizes the mainstream large models of the moment. (*Updated 2023.03.19)

For the purposes of this post, any model with 1B or more parameters is treated as a large model.

Model Overview

| Model | Author | Size | Type | Open source? |
|---|---|---|---|---|
| LLaMA | Meta AI | 7B-65B | Decoder | open |
| OPT | Meta AI | 125M-175B | Decoder | open |
| T5 | Google | 220M-11B | Encoder-Decoder | open |
| mT5 | Google | 235M-13B | Encoder-Decoder | open |
| UL2 | Google | 20B | Encoder-Decoder | open |
| PaLM | Google | 540B | Decoder | no |
| LaMDA | Google | 2B-137B | Decoder | no |
| Flan-T5 | Google | same as T5 | Encoder-Decoder | open |
| Flan-UL2 | Google | same as UL2 | Encoder-Decoder | open |
| Flan-PaLM | Google | same as PaLM | Decoder | no |
| Flan | Google | same as LaMDA | Decoder | no |
| BLOOM | BigScience | 176B | Decoder | open |
| T0 | BigScience | 3B | Encoder-Decoder | open |
| BLOOMZ | BigScience | same as BLOOM | Decoder | open |
| mT0 | BigScience | same as T0 | Encoder-Decoder | open |
| GPT-Neo | EleutherAI | 125M-2.7B | Decoder | open |
| GPT-NeoX | EleutherAI | 20B | Decoder | open |
| GPT-3 | OpenAI | 175B (davinci) | Decoder | no |
| GPT-4 | OpenAI | unknown | unknown | no |
| InstructGPT | OpenAI | 1.3B | Decoder | no |
| Alpaca | Stanford | same as LLaMA | Decoder | open |

Meta/Facebook AI

  • LLaMA: Open and Efficient Foundation Language Models

https://arxiv.org/pdf/2302.13971v1.pdf

https://github.com/facebookresearch/llama

  • OPT: Open Pre-trained Transformer Language Models

https://arxiv.org/pdf/2205.01068.pdf

https://github.com/facebookresearch/metaseq

Google

  • T5: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

https://arxiv.org/pdf/1910.10683.pdf

https://github.com/google-research/text-to-text-transfer-transformer

Note: T5's code and models are also open-sourced on the Hugging Face Hub.

https://huggingface.co/google?sort_models=likes#models
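
As a quick illustration of T5's text-to-text interface, here is a minimal sketch (not from the paper) that loads one of the released checkpoints from Hugging Face with the transformers library; the checkpoint name and task prefix are just one possible choice.

```python
# A minimal sketch of using a released T5 checkpoint via Hugging Face
# transformers. "t5-base" (220M) is chosen only to keep the example light;
# the same code works for the larger checkpoints up to t5-11b.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# T5 casts every task as text-to-text, selected by a task prefix.
inputs = tokenizer(
    "translate English to German: The house is wonderful.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```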

  • mT5: A Massively Multilingual Pre-trained Text-to-Text Transformer

https://arxiv.org/pdf/2010.11934.pdf

https://huggingface.co/models?search=mt5

  • UL2 and Flan-UL2: Unifying Language Learning Paradigms

https://arxiv.org/pdf/2205.05131.pdf

blog:

https://www.yitay.net/blog/flan-ul2-20b

model:

https://huggingface.co/google/ul2

https://huggingface.co/google/flan-ul2

  • PaLM: Scaling Language Modeling with Pathways

https://arxiv.org/pdf/2204.02311.pdf

  • LaMDA: Language Models for Dialog Applications

https://arxiv.org/pdf/2201.08239.pdf

blog:

https://blog.google/technology/ai/lamda/

  • Flan-T5 and Flan-PaLM: Scaling Instruction-Finetuned Language Models

https://arxiv.org/pdf/2210.11416.pdf

https://huggingface.co/google/flan-t5-large

  • Flan: Finetuned Language Models Are Zero-Shot Learners

https://arxiv.org/pdf/2109.01652.pdf

Note: in Google's naming scheme, the Flan prefix essentially means the model has been instruction-tuned.
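
To make the note above concrete, here is a minimal sketch (assuming the open google/flan-t5-large checkpoint and the transformers library) showing that an instruction-tuned model accepts free-form instructions zero-shot, instead of T5's fixed task prefixes.

```python
# A minimal sketch of zero-shot instruction following with an open Flan
# checkpoint; the prompt wording is arbitrary, not a required format.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")

prompt = "Answer the following question. What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # e.g. "Paris"
```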

BigScience (a non-profit open-research collective)

  • BLOOM: A 176B-Parameter Open-Access Multilingual Language Model

https://arxiv.org/pdf/2211.05100.pdf

https://huggingface.co/bigscience/bloom
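
For readers who want to try BLOOM, here is a hedged sketch using a small released variant (bigscience/bloom-560m); the full 176B checkpoint requires multi-GPU sharding (e.g. loading with device_map="auto" via accelerate), which is omitted here.

```python
# A minimal sketch of decoder-only generation with a small BLOOM variant.
# Unlike the encoder-decoder models above, a decoder-only model simply
# continues the prompt left to right.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("Large language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```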

  • T0: Multitask Prompted Training Enables Zero-Shot Task Generalization

https://arxiv.org/pdf/2110.08207.pdf

https://huggingface.co/bigscience/T0

  • BLOOMZ and mT0: multilingual versions of BLOOM and T0

https://arxiv.org/pdf/2211.01786.pdf

EleutherAI

  • GPT-Neo

https://github.com/EleutherAI/gpt-neo

  • GPT-NeoX

https://arxiv.org/pdf/2204.06745.pdf

https://huggingface.co/EleutherAI/gpt-neox-20b

OpenAI

None of OpenAI's large models from GPT-3 onward have been open-sourced. For the API to OpenAI's GPT-series models, see:

九号: "OpenAI API 所有 GPT Models 详解" (a Zhihu article walking through all GPT models available via the OpenAI API)
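
Since the GPT-series models are only reachable through the API, a minimal call sketch follows. It assumes the openai Python package as it existed around the time of writing (early 2023); the client interface has since been revised, so treat this as illustrative rather than current.

```python
# A minimal sketch of querying a GPT-series model through the OpenAI API
# (openai Python package, pre-1.0 interface). The API key is a placeholder.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is a large language model?"}],
)
print(response["choices"][0]["message"]["content"])
```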

Stanford

Alpaca, an instruction-fine-tuned model built on LLaMA, is reported to reach GPT-3.5-level quality.

https://github.com/tatsu-lab/stanford_alpaca
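
To show what this instruction fine-tuning looks like in practice, here is a sketch of the (instruction, input, output) record format and prompt template used in the stanford_alpaca repo; the sample record itself is made up for illustration.

```python
# A sketch of Alpaca-style instruction-tuning data: each record is an
# (instruction, input, output) triple rendered into one training prompt.
# The record below is a made-up example.
record = {
    "instruction": "Classify the sentiment of the sentence.",
    "input": "I loved this movie.",
    "output": "Positive",
}

PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:"
)

prompt = PROMPT_WITH_INPUT.format(**record)
# During fine-tuning the model learns to continue `prompt` with
# record["output"]; at inference time, only the prompt is supplied.
print(prompt)
```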

Latest: A Summary of Open-Source Prompt/Instruct Tuning Data

九号: "总结开源可用的 Instruct/Prompt Tuning 数据" (a Zhihu article summarizing openly available instruct/prompt tuning data)

If there are large models this post has missed, readers are welcome to mention them in the comments.
