
Command-R Model Introduction


Reference

https://huggingface.co/CohereForAI/c4ai-command-r-v01

Model Summary

C4AI Command-R is a research release of a 35 billion parameter highly performant generative model. Command-R is a large language model with open weights optimized for a variety of use cases including reasoning, summarization, and question answering. Command-R supports multilingual generation, evaluated in 10 languages, and offers highly performant RAG capabilities.

Model Architecture: This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety.

Languages covered: The model is optimized to perform well in the following languages: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic.

Pre-training data additionally included the following 13 languages: Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, Persian.

Context length: Command-R supports a context length of 128K.

Tool use capabilities:

Command-R has been specifically trained with conversational tool use capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template will likely reduce performance, but we encourage experimentation.

Command-R’s tool use functionality takes a conversation as input (with an optional user-system preamble), along with a list of available tools. The model will then generate a json-formatted list of actions to execute on a subset of those tools. Command-R may use one of its supplied tools more than once.
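The flow above can be sketched in plain Python. The tool schema, conversation shape, and the sample action list below are illustrative assumptions for demonstration, not literal model input/output formats; consult the model card's prompt template for the exact structure.

```python
import json

# Hypothetical tool definition: a name, a description, and named parameters.
tools = [
    {
        "name": "internet_search",
        "description": "Returns relevant snippets from a web search.",
        "parameter_definitions": {
            "query": {"description": "Search query", "type": "str", "required": True}
        },
    }
]

# Conversation supplied as input alongside the tool list.
conversation = [{"role": "user", "content": "What's the weather in Toronto?"}]

# The model emits a json-formatted list of actions; this string is a
# stand-in for real model output (illustrative only).
model_output = '[{"tool_name": "internet_search", "parameters": {"query": "weather Toronto"}}]'

# Parse the action list and inspect each requested tool call.
actions = json.loads(model_output)
for action in actions:
    print(action["tool_name"], action["parameters"])
```

Because the model may call the same tool more than once, the parsed list can contain repeated `tool_name` entries; an executor should loop over all actions rather than assume one call per tool.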

The model has been trained to recognise a special directly_answer tool, which it uses to indicate that it doesn’t want to use any of its other tools. The ability to abstain from calling a specific tool can be useful in a range of situations, such as greeting a user, or asking clarifying questions. We recommend including the directly_answer tool, but it can be removed or renamed if required.
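A minimal dispatch loop can make the role of `directly_answer` concrete: when the model selects it, the application skips tool execution and answers in plain text. The registry and function names here are illustrative assumptions.

```python
# Dispatch a parsed action list against a registry of callable tools.
# "directly_answer" is the special no-op tool described above.
def dispatch(actions, tool_registry):
    results = []
    for action in actions:
        name = action["tool_name"]
        if name == "directly_answer":
            # The model declined to call any other tool; fall through
            # to ordinary text generation.
            results.append("answer_directly")
        else:
            results.append(tool_registry[name](**action.get("parameters", {})))
    return results

# Hypothetical tool implementation for demonstration.
registry = {"internet_search": lambda query: f"results for {query!r}"}

print(dispatch([{"tool_name": "directly_answer"}], registry))
print(dispatch([{"tool_name": "internet_search", "parameters": {"query": "RAG"}}], registry))
```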

Grounded Generation and RAG Capabilities:

Command-R has been specifically trained with grounded generation capabilities. This means that it can generate responses based on a list of supplied document snippets, and it will include grounding spans (citations) in its response indicating the source of the information. This can be used to enable behaviors such as grounded summarization and the final step of Retrieval Augmented Generation (RAG). This behavior has been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance, but we encourage experimentation.

Command-R’s grounded generation behavior takes a conversation as input (with an optional user-supplied system preamble, indicating task, context and desired output style), along with a list of retrieved document snippets. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. The keys should be short descriptive strings, the values can be text or semi-structured.
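The chunking advice above can be sketched as a small helper that splits a long document into snippets of roughly the recommended size, each expressed as key-value pairs with short descriptive keys. The key names (`title`, `snippet`) and the 150-word chunk size are illustrative assumptions.

```python
# Split a long document into ~100-400 word key-value snippets, as
# recommended for grounded generation input.
def to_snippets(title, text, words_per_chunk=150):
    words = text.split()
    snippets = []
    for i in range(0, len(words), words_per_chunk):
        chunk = " ".join(words[i:i + words_per_chunk])
        snippets.append({
            "title": f"{title} (part {i // words_per_chunk + 1})",
            "snippet": chunk,
        })
    return snippets

doc = "word " * 320  # stand-in for a real retrieved document
chunks = to_snippets("Example Doc", doc)
print(len(chunks), [c["title"] for c in chunks])
```

In a real RAG pipeline these snippet dictionaries would be the retrieved-document list passed alongside the conversation.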

By default, Command-R will generate grounded responses by first predicting which documents are relevant, then predicting which ones it will cite, then generating an answer. Finally, it will insert grounding spans into the answer. See below for an example. This is referred to as accurate grounded generation.
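On the consumer side, grounding spans can be extracted from the answer text with a small parser. The `<co: i>...</co: i>` span markup used below is an assumption for demonstration; the exact citation format is defined by the model's prompt template, so adapt the pattern to the output your template produces.

```python
import re

# Match spans of the assumed form <co: 0>text</co: 0> or <co: 0,1>text</co: 0,1>,
# where the numbers are indices into the supplied document list.
SPAN_RE = re.compile(r"<co: (\d+(?:,\d+)*)>(.*?)</co: \1>")

def extract_citations(answer):
    citations = []
    for match in SPAN_RE.finditer(answer):
        doc_ids = [int(d) for d in match.group(1).split(",")]
        citations.append({"text": match.group(2), "document_ids": doc_ids})
    return citations

# Illustrative answer text, not real model output.
answer = "The model supports <co: 0>a 128K context length</co: 0> and <co: 0,1>tool use</co: 0,1>."
print(extract_citations(answer))
```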

The model is trained with a number of other answering modes, which can be selected by prompt changes. A fast citation mode is supported in the tokenizer, which will directly generate an answer with grounding spans in it, without first writing the answer out in full. This sacrifices some grounding accuracy in favor of generating fewer tokens.

Code Capabilities:

Command-R has been optimized to interact with your code, by requesting code snippets, code explanations, or code rewrites. It might not perform well out-of-the-box for pure code completion. For better performance, we also recommend using a low temperature (and even greedy decoding) for code-generation related instructions.
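The decoding advice above maps to generation settings like the following. The keyword names mirror the common Hugging Face transformers `generate()` kwargs; whether they apply verbatim depends on your serving stack, so treat this as a sketch.

```python
# Low-temperature sampling for code-related instructions.
code_gen_kwargs = {
    "max_new_tokens": 512,
    "temperature": 0.2,   # low temperature keeps code output focused
    "do_sample": True,
}

# Fully greedy decoding: always pick the highest-probability token.
greedy_kwargs = {
    "max_new_tokens": 512,
    "do_sample": False,
}

print(code_gen_kwargs["temperature"], greedy_kwargs["do_sample"])
```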
