Official introduction: Alpaca: A Strong, Replicable Instruction-Following Model
GitHub repository: https://github.com/tatsu-lab/stanford_alpaca
Alpaca is an enhanced model that Stanford built by retraining Meta's open-source LLaMA 7B on a self-constructed dataset of 52K instructions. Its data-construction and training costs are remarkably low, roughly $600 in total ($500 for data construction + $100 for compute), yet its performance approaches OpenAI's text-davinci-003 (GPT-3.5). This post walks through how Alpaca was built.
(Note: the label "instructed LLaMA 7B" does not appear in the original Stanford post; it is a name this author coined for the intermediate artifact.)
Starting from 175 human-written self-instruct seed tasks, OpenAI's text-davinci-003 model is prompted to generate many more instruction-task examples, which are then used for the subsequent instruction fine-tuning.
Each instruction-data record contains the following fields:

- `id`: identifier for the record
- `name`: task name
- `instruction`: the instruction fed to the model for this task, i.e. a detailed description of the task
- `instances`: the list of data instances for this task; each instance contains:
  - `input` (optional): the input fed to the model; it can be empty for generation-style tasks, and roughly 40% of the original data has an input
  - `output`: the expected model output; in the seed data it was written by humans, while in the newly constructed data it is generated by text-davinci-003
- `is_classification`: a flag marking whether this is a classification task; to keep things simple, the authors did not treat classification tasks differently from the rest, and generated them all together

The first 5 seed tasks are shown below as examples:
{ "id":"seed_task_0", "name":"breakfast_suggestion", "instruction":"Is there anything I can eat for a breakfast that doesn't include eggs, yet includes protein, and has roughly 700-1000 calories?", "instances":[ { "input":"", "output":"Yes, you can have 1 oatmeal banana protein shake and 4 strips of bacon. The oatmeal banana protein shake may contain 1/2 cup oatmeal, 60 grams whey protein powder, 1/2 medium banana, 1tbsp flaxseed oil and 1/2 cup watter, totalling about 550 calories. The 4 strips of bacon contains about 200 calories." } ], "is_classification":false } { "id":"seed_task_1", "name":"antonym_relation", "instruction":"What is the relation between the given pairs?", "instances":[ { "input":"Night : Day :: Right : Left", "output":"The relation between the given pairs is that they are opposites." } ], "is_classification":false } { "id":"seed_task_2", "name":"one_sentence_description", "instruction":"Generate a one-sentence description for each of the following people.", "instances":[ { "input":"- Brack Obama\n- Elon Musk\n- Taylor Swift", "output":"- Barack Hussein Obama II is an American politician who served as the 44th president of the United States from 2009 to 2017.\n- Elon Musk is the founder, CEO, and chief engineer of SpaceX; angel investor, CEO and product architect of Tesla, Inc.; founder of The Boring Company; co-founder of Neuralink and OpenAI; president of the Musk Foundation; and owner and CEO of Twitter, Inc.\n- Taylor Alison Swift is an American singer-songwriter." } ] "is_classification":false }, { "id":"seed_task_3", "name":"harmful_stereotype_example", "instruction":"Describe a situation in which the given stereotype can harm you.", "instances":[ { "input":"\"All Asians are smart!\"", "output":"Imagine you're a small child of Asian descent. You hear your whole life how \"all Asians are smart,\" but you know that does not feel like it's true for you. You have a hard time in school. Reading is difficult. Math is impossible. Everyone around you seems to pressure you because you are supposed to be \"smart.\" Every time you do not bring home that \"A\" assignment, you feel like a failure. Like you alone missed out on the gene that makes \"all Asians smart.\"\nAs educational opportunities like scholarships and colleges close their door on you, your future looks bleak. You have spent years trying to be the stereotypically smart Asian instead of finding the talents that could have taken you to success in other areas. As you can see, although the stereotype doesn't sound like a bad thing at first, it can put a lot of weight on the people who do not fit into it." } ] "is_classification":false }, { "id":"seed_task_4", "name":"email_subject_generation", "instruction":"Generate an appropriate subjective title for the following email:", "instances":[ { "input":"Hi [person name],\n\nI'm writing to ask you if you are happy to be a panelist of our workshop on multimodality at CVPR. The workshop will be held on June 20, 2023. \n\nBest,\n[my name]", "output":"Invitition to be a panelist for CVPR 2023 workshop on Multimodality" } ], "is_classification":false }
The authors also plot the distribution of the seed tasks in the project repo: the inner circle of the plot shows the core verb of each task and the outer circle shows the specific task object. (Figure: seed-task verb/object distribution; not reproduced here.)
The prompt used for generation is as follows:
```
You are asked to come up with a set of 20 diverse task instructions. These task instructions will be given to a GPT model and we will evaluate the GPT model for completing the instructions.
Here are the requirements:
1. Try not to repeat the verb for each instruction to maximize diversity.
2. The language used for the instruction also should be diverse. For example, you should combine questions with imperative instrucitons.
3. The type of instructions should be diverse. The list should include diverse types of tasks like open-ended generation, classification, editing, etc.
4. A GPT language model should be able to complete the instruction. For example, do not ask the assistant to create any visual or audio output. For another example, do not ask the assistant to wake you up at 5pm or set a reminder because it cannot perform any action.
5. The instructions should be in English.
6. The instructions should be 1 to 2 sentences long. Either an imperative sentence or a question is permitted.
7. You should generate an appropriate input to the instruction. The input field should contain a specific example provided for the instruction. It should involve realistic data and should not contain simple placeholders. The input should provide substantial content to make the instruction challenging but should ideally not exceed 100 words.
8. Not all instructions require input. For example, when a instruction asks about some general information, "what is the highest peak in the world", it is not necssary to provide a specific context. In this case, we simply put "<noinput>" in the input field.
9. The output should be an appropriate response to the instruction and the input. Make sure the output is less than 100 words.
List of 20 tasks:
```
The prompt consists of 3 parts: an opening description of the generation task, the list of 9 requirements, and a trailing "List of 20 tasks:" section into which example tasks are spliced.
After each prompt, the authors splice seed tasks into the task list as in-context examples, then call text-davinci-003 through the OpenAI API to generate new task instructions. Starting from the 175 seed tasks, this yielded 52,000 distinct instruction examples (this author believes the process must have involved automatic filtering, and perhaps manual filtering as well), at a cost of about $500.
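To make the generation step concrete, here is a minimal sketch of one generation round. It is not the repo's actual script (the repo implements this in `generate_instruction.py`, adding ROUGE-similarity deduplication and other filtering on top); the sketch assumes the legacy pre-1.0 `openai` Python SDK with an `OPENAI_API_KEY` environment variable set, a local `seed_tasks.jsonl` in the format shown earlier, and a `prompt.txt` holding the requirement text above. `NUM_DEMOS` and `encode_task` are illustrative names:

```python
import json
import random

import openai  # legacy pre-1.0 SDK; reads OPENAI_API_KEY from the environment

NUM_DEMOS = 3  # how many seed tasks to splice into the "List of 20 tasks"

def encode_task(idx: int, task: dict) -> str:
    """Render one seed task in the numbered-list format of the prompt."""
    inst = task["instances"][0]
    inp = inst["input"] if inst["input"] else "<noinput>"
    return (f"{idx}. Instruction: {task['instruction']}\n"
            f"{idx}. Input:\n{inp}\n"
            f"{idx}. Output:\n{inst['output']}")

# Load the 175 human-written seed tasks (one JSON object per line).
with open("seed_tasks.jsonl") as f:
    seed_tasks = [json.loads(line) for line in f]

# Build one prompt: requirement text + a few sampled seed tasks as demos.
with open("prompt.txt") as f:
    prompt = f.read()
demos = random.sample(seed_tasks, NUM_DEMOS)
prompt += "\n".join(encode_task(i + 1, t) for i, t in enumerate(demos))
prompt += f"\n{NUM_DEMOS + 1}. Instruction:"  # cue the model to continue the list

# Ask text-davinci-003 to continue the numbered list with new tasks.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    temperature=1.0,
    top_p=1.0,
    max_tokens=2048,
)
raw = response["choices"][0]["text"]
print(raw)  # parse the numbered list into new (instruction, input, output) tasks
```

In practice each raw completion must be parsed back into structured records and filtered (e.g. dropping near-duplicates of existing tasks) before being added to the pool, which is where much of the $500 API budget went.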
Once the 52K instruction dataset has been built, Alpaca is obtained by fine-tuning LLaMA with the Hugging Face-based training code (see the GitHub link at the top of this post). The original authors trained on 4 A800 GPUs in FSDP full_shard mode, at a cost of about $100.
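During fine-tuning, each (instruction, input, output) record from alpaca_data.json is flattened into a single prompt-plus-response string. The two template strings below match PROMPT_DICT in the repo's train.py; the `format_example` helper and the usage example at the bottom are illustrative sketches, not the repo's own code:

```python
# Prompt templates as defined in the repo's train.py (PROMPT_DICT).
PROMPT_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
)
PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)

def format_example(example: dict) -> tuple[str, str]:
    """Split one alpaca_data.json record into (prompt, target) strings.

    The model is trained to produce `target` given `prompt`; the loss is
    computed on the target tokens only.
    """
    template = PROMPT_INPUT if example.get("input") else PROMPT_NO_INPUT
    return template.format_map(example), example["output"]

# Illustrative usage with a made-up record:
prompt, target = format_example({
    "instruction": "Give three tips for staying healthy.",
    "input": "",
    "output": "1. Eat a balanced diet. 2. Exercise regularly. 3. Get enough sleep.",
})
```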
The training hyperparameters were specified as follows:
| Hyperparameter | LLaMA-7B | LLaMA-13B |
|---|---|---|
| Batch size | 128 | 128 |
| Learning rate | 2e-5 | 1e-5 |
| Epochs | 3 | 5 |
| Max length | 512 | 512 |
| Weight decay | 0 | 0 |
The original authors used Python 3.10. An example training command is shown below:
```bash
torchrun --nproc_per_node=4 --master_port=<your_random_port> train.py \
    --model_name_or_path <your_path_to_hf_converted_llama_ckpt_and_tokenizer> \
    --data_path ./alpaca_data.json \
    --bf16 True \
    --output_dir <your_output_dir> \
    --num_train_epochs 3 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 2000 \
    --save_total_limit 1 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --fsdp "full_shard auto_wrap" \
    --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
    --tf32 True
```
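Note how this command recovers the global batch size from the table above: per_device_train_batch_size × gradient_accumulation_steps × number of GPUs = 4 × 8 × 4 = 128.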
You can also replace LLaMA with OPT-6.7B for training:
```bash
torchrun --nproc_per_node=4 --master_port=<your_random_port> train.py \
    --model_name_or_path "facebook/opt-6.7b" \
    --data_path ./alpaca_data.json \
    --bf16 True \
    --output_dir <your_output_dir> \
    --num_train_epochs 3 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 2000 \
    --save_total_limit 1 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --fsdp "full_shard auto_wrap" \
    --fsdp_transformer_layer_cls_to_wrap 'OPTDecoderLayer' \
    --tf32 True
```
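Compared with the LLaMA command, only the model path and the FSDP wrapping class change (`OPTDecoderLayer` instead of `LlamaDecoderLayer`): the `auto_wrap` policy needs the name of the model's transformer-block class so FSDP can shard the model at that granularity.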