
Moore's Law and Metcalfe's Law

GPT-3 (Generative Pre-trained Transformer 3) is OpenAI's latest and greatest natural language prediction model. Simply put, it generates text in response to any input text. It is a program that responds to questions or statements.


GPT-3 is pre-trained on a large amount of natural language text from the Internet (45 TB of training text containing 499 billion words). It cost at least 4.6 million US dollars to train on GPUs. The resulting model has 175 billion parameters.


The first wave of GPT-3-enabled applications has stunned developer Twitter. They offer a glimpse of our AI future.


In the official GPT-3 research study, the OpenAI team demonstrated that GPT-3 achieves state-of-the-art performance out of the box, without any fine-tuning. But how does it work in the real world? Is it just another toy, or a serious threat to humanity?


How it works

The GPT-3 model generates text one word at a time. As a hypothetical example, let’s say that a developer gives it the following words as input.


“Answer to the Ultimate Question of Life, the Universe, and Everything is”


The AI model could generate the word “forty” as the response. And then, the developer appends the generated word to the input and runs the model again.


“Answer to the Ultimate Question of Life, the Universe, and Everything is forty”


This time, the AI model could generate the word “two” as the response. Repeat again, and the next response should be the period sign, hence completing a sentence.


“Answer to the Ultimate Question of Life, the Universe, and Everything is forty-two.”

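The word-by-word loop described above can be sketched in a few lines of Python. This is a toy illustration, not real GPT-3 code: the lookup table stands in for the neural network's "next word" prediction.

```python
def toy_model(prompt: str) -> str:
    """Hypothetical stand-in for GPT-3: predicts the next token for a prompt.

    A lookup table mimics the pop-culture completion from the example above;
    the real model produces a probability distribution over its vocabulary.
    """
    base = "Answer to the Ultimate Question of Life, the Universe, and Everything is"
    table = {
        base: "forty",
        base + " forty": "two",
        base + " forty two": ".",
    }
    return table.get(prompt, "<end>")

def generate(prompt: str, max_tokens: int = 10) -> str:
    """Autoregressive loop: append each predicted token and run the model again."""
    for _ in range(max_tokens):
        token = toy_model(prompt)
        if token == "<end>":
            break
        # Punctuation attaches directly; words get a separating space.
        prompt = prompt + token if token == "." else prompt + " " + token
    return prompt

print(generate("Answer to the Ultimate Question of Life, the Universe, and Everything is"))
```

The same append-and-rerun loop drives real GPT-3 generation; only the prediction step differs.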

GPT-3 can do this because it has seen this particular pop culture reference many times from the text in its training. So, its neural network can guess the “next word” with a high degree of statistical certainty.


However, in natural language, predictions are not always so clear-cut. The word that follows an input often depends on the context. That is where GPT-3's strength as a few-shot learner shows. Few-shot learning means priming GPT-3 with a few examples and then asking it to make predictions. That allows the user to give the AI model a language context and dramatically improves accuracy. Figure 1 shows examples of zero-shot, one-shot, and few-shot learning to prime an AI model to generate foreign-language translations.


Figure 1. Three types of learning for an AI translator. Image courtesy of Language Models are Few-Shot Learners, Fig. 2.1

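A few-shot prompt is just text in a fixed pattern: a task description, a handful of worked examples, and the query left for the model to complete. A minimal builder, with example pairs echoing the translation setting of Figure 1 (the `=>` separator is an illustrative choice, not a requirement of GPT-3):

```python
def few_shot_prompt(task: str, examples: list, query: str) -> str:
    """Build a few-shot prompt: task description, K worked examples,
    and a final line the model is asked to complete."""
    lines = [task]
    for source, target in examples:
        lines.append(f"{source} => {target}")
    lines.append(f"{query} =>")  # the model fills in the target
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Translate English to French:",
    [("sea otter", "loutre de mer"), ("cheese", "fromage")],
    "peppermint",
)
print(prompt)
```

Zero-shot priming is the same call with an empty examples list; one-shot passes a single pair.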

Few-shot learning is remarkably similar to how human babies learn languages. The learner learns from language examples, not from grammar rules. As we shall see, by priming GPT-3 with different examples, developers can create very different applications.


Does it pass the Turing test?

One of the first questions people ask about a language AI is whether it can pass the Turing test and fool humans into thinking that it is human. Well, some argue that GPT-3 can already fool human beings. Figure 2 shows an essay generated by GPT-3. According to the GPT-3 team, less than 12% of humans could tell it was written by a machine.



Figure 2. An original article written by GPT-3. Image courtesy of Language Models are Few-Shot Learners, Fig. 3.14


With a little priming, GPT-3 can mimic the writing styles of famous people. The "Learn from anyone" project lets users pick a famous person and then provide a topic. It primes GPT-3 with known writings of that person and then uses the topic text as input. It returns a 200-word essay subsequently generated by GPT-3. The results speak for themselves.


Perhaps more interesting is GPT-3's ability to perform paragraph-based English-to-English "translation": rephrasing a paragraph of English text to make it simpler or more rigorous. Obviously, it is difficult to take machine-generated legal language at face value, but even legal experts note that GPT-3 could be an assistant to attorneys and increase their productivity. Only a few examples are needed to prime GPT-3. The results are not perfect, but quite close.


One of the fascinating aspects of GPT-3 is the AI's ability to "learn math" from language. The AI is never taught the underlying structure and theorems of math. Yet it can generate the correct answers to math questions. For simple two-number additions, GPT-3 is almost 100% accurate, despite having never learned what numbers mean. Figure 3 shows some examples.



Figure 3. GPT-3 does math. Image courtesy of Language Models are Few-Shot Learners, Figs. G.42 to G.48

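The two-number addition setting is easy to probe systematically: generate question prompts alongside their ground-truth answers and check the model's completions against them. A small harness sketch (the Q/A prompt wording below is illustrative, not the paper's exact format):

```python
import random

def addition_eval_items(n: int, seed: int = 0) -> list:
    """Generate two-digit addition questions with ground-truth answers,
    in a Q/A prompt style suitable for probing a language model."""
    rng = random.Random(seed)  # seeded, so the eval set is reproducible
    items = []
    for _ in range(n):
        a, b = rng.randint(10, 99), rng.randint(10, 99)
        items.append((f"Q: What is {a} plus {b}? A:", str(a + b)))
    return items

for prompt, expected in addition_eval_items(3):
    print(prompt, expected)
```

Scoring is then a string comparison between each expected answer and the model's completion for the corresponding prompt.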

Combine this math capability with the fact that GPT-3 has seen a lot of structured data in its training, and it seems possible to prime the AI to respond to English inputs with structured output such as JSON or XML.

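Such priming might look like the sketch below. The example sentences, the JSON field names, and the model's response are all invented for illustration; the one firm recommendation is to parse and validate any generated JSON before acting on it.

```python
import json

# Hypothetical few-shot priming text that teaches the model to answer in JSON.
priming = """English: Add a user named Alice who is 30 years old.
JSON: {"action": "add_user", "name": "Alice", "age": 30}
English: Delete the user named Bob.
JSON: {"action": "delete_user", "name": "Bob"}
English: Add a user named Carol who is 25 years old.
JSON:"""

# A plausible model completion for the final line; always validate it first.
response = '{"action": "add_user", "name": "Carol", "age": 25}'
data = json.loads(response)  # raises ValueError if the model emitted bad JSON
print(data["action"], data["name"], data["age"])
```

If `json.loads` fails, the application can re-prompt or fall back rather than pass malformed output downstream.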

If GPT-3 can generate structured data used by computer programs, maybe it could go one step further and generate computer programs directly. Is this possible? The answer appears to be yes!


Like other deep neural networks, GPT-3 is mostly a black box to humans, which makes it challenging to come up with the right examples to prime it for exact outputs. It is a trial-and-error process that can take days.


Developing GPT-3 apps is not about writing algorithms in traditional programming languages, but about coming up with natural-language examples to prime the AI. It requires a new type of no-code skill that will create new jobs in software development.


“Human describes, AI builds, human debugs.” — Vitalik Buterin, Creator of Ethereum


Machine-generated code could be a fascinating (and profitable) area of research going forward.


An AI we can understand

Despite the OpenAI name, GPT-3 is neither open source nor open access. It provides a simple web-services API for developers to prime the model and then send in text to get a response. The API is simple, but there is currently a waiting list.

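A call to that API is an ordinary authenticated HTTPS request. The sketch below only builds the request (it does not send it, since access requires an API key from the waiting list); the URL and field names follow OpenAI's public documentation at the time of writing, so treat them as illustrative rather than authoritative.

```python
import json
import os

def build_completion_request(prompt: str, max_tokens: int = 64) -> dict:
    """Prepare (but do not send) a request to the GPT-3 completions endpoint."""
    return {
        "url": "https://api.openai.com/v1/engines/davinci/completions",
        "headers": {
            # The API key is read from the environment, never hard-coded.
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"prompt": prompt, "max_tokens": max_tokens}),
    }

req = build_completion_request("Q: Who wrote The Hitchhiker's Guide to the Galaxy?\nA:")
# Once off the waiting list, send with any HTTP client, e.g.:
# requests.post(req["url"], headers=req["headers"], data=req["body"])
```

The priming examples discussed earlier simply become part of the `prompt` string sent in the request body.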

The barrier to GPT-3 access is deliberate. GPT-3 is a powerful piece of software. However, since it is a black box, we cannot easily predict or control the text it generates. As we discussed, priming for the exact output is mostly a trial-and-error process. Given the amount of misogyny and other hateful content on the Internet, and the fact that GPT-3's 499 billion words of training data were drawn from it, an unsupervised GPT-3 could generate text that is biased or hurtful. For example, just think about the kind of convincing-sounding fake news articles GPT-3 could generate.


The developer community must use powerful AI systems responsibly. That will probably require humans to have a deeper understanding of how language models work, as opposed to just putting up banned-word lists.


While it is tough for humans to understand, much less explain and control, the reasoning inside the AI black box, could the AI explain itself to us? Perhaps developers will keep pushing the boundaries of what GPT-3 can create and explain!


A black box

While GPT-3 has shown great promise, it still exhibits some issues that have long plagued neural-network AIs. Does the AI really understand the tasks given to it?


At the philosophical level, it might not matter. After all, the AI can do math, translation, and grammar checks. Does it matter that the AI was never taught the concepts of math and grammar? GPT-3 was able to derive math and grammar rules and apply them. Yet for developers who build GPT-3 applications, it is troubling not to know the boundaries of the AI's "knowledge", and they must watch out for cases the AI cannot handle.


An AI for everything

GPT-3 demonstrates that AI performance increases with model size in a power-law relationship. Ever-growing model sizes will produce more powerful and more accurate AI. Could this be the Moore's law of our time?

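The power-law relationship can be written out explicitly. A minimal sketch, with the functional form and approximate constants taken from OpenAI's scaling-laws work (Kaplan et al.), not from this article:

```latex
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
\qquad \alpha_N \approx 0.076,\; N_c \approx 8.8 \times 10^{13}
```

Here L is the model's cross-entropy loss and N is its number of non-embedding parameters: each multiplicative increase in model size buys a predictable multiplicative decrease in loss, which is why ever-larger models keep getting better.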

Translated from: https://medium.com/swlh/moores-law-f-452d0803b8c0

