Data Science, AI and Dune
Why talk of Data Science, AI and Dune in the same breath?
When you have read this article, I hope you will know better.
If you have not seen the trailer for the upcoming release of Dune in December 2020, I suggest you do so:
Even more so, I would recommend that you read the first book in the series, by the author Frank Herbert.
A hot tip: you can do so for free at this online archive.
I will not cover the book in detail. Rather, I will talk of a few momentous aspects of this book, first published in 1965.
Namely, I will discuss the Butlerian Jihad and the Mentat.
These may seem like strange terms, but stay with me.
If you are not already familiar with these two concepts, you will come to understand how they are linked together.
Dune is set in a fictional universe, and within it the Butlerian Jihad, although not immediately named, quickly becomes apparent:
Butlerian Jihad is a conflict taking place over 11,000 years in the future (and over 10,000 years before the events of Dune) which results in the total destruction of virtually all forms of “computers, thinking machines, and conscious robots”
That is, one of the first imaginable things happened: man fighting robot.
Powerful artificial general intelligence (AGI) was made and a fight ensued.
Yet, the god of machine-logic was overthrown by the masses and a new concept was raised.
“Man may not be replaced.”
At the beginning of the first book, you can see a question raised when Paul (the protagonist) is being tested for his humanity.
“Why do you test for humans?” he asked.
“To set you free.”
Not long after this, a quote proclaimed as historic is repeated in the book.
Anti-AI laws had been put in effect; the punishment for owning any such AI device was immediate death.
“Thou shalt not make a machine
in the likeness of a human mind”
Thus, with AI banned in this fictional universe, humans are instead trained to computer-like capabilities of computation.
The humans trained in this manner are called Mentats.
A Mentat is a fictional type of human.
They are trained to mimic the cognitive and analytical ability of computers.
However, they are no simple calculators. Mentats have memory and perception that enable them to process large amounts of data.
Through this they devise concise analyses, assessing both people and situations by interpreting minor changes in body language or intonation.
Already in the first book the limitations of this are presented.
Consider a quote from Vladimir Harkonnen, the main antagonist of the first book. Here he speaks to his guard commander, instructing him in how to control a Mentat.
Why do I find this information interesting?
Well, providing false information is nothing new; yet this combination of cognitive ability and computation is an interesting aspect.
In August 2019, I wrote an article about adversarial machine learning and poisoning attacks.
In the article I covered recent research at IBM on the new kinds of security threats we were seeing.
One of these was called a poisoning attack:
“Poisoning attacks: machine learning algorithms are often re-trained on data collected during operation to adapt to changes in the underlying data distribution. For instance, intrusion detection systems (IDSs) are often re-trained on a set of samples collected during network operation. Within this scenario, an attacker may poison the training data by injecting carefully designed samples to eventually compromise the whole learning process. Poisoning may thus be regarded as an adversarial contamination of the training data.”
In this manner, by feeding false data, the overall purpose can be distorted.
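As a rough illustration of the quoted IDS scenario, here is a minimal sketch in plain Python. The two-cluster data, the "benign"/"attack" labels, and the nearest-centroid model are illustrative assumptions of mine, not taken from the IBM research: an attacker injects mislabeled samples into the retraining set and degrades the detector.

```python
import random

random.seed(0)

def make_data(n, center, label):
    """Generate n 2-D points scattered around a class center."""
    return [((center[0] + random.gauss(0, 0.5),
              center[1] + random.gauss(0, 0.5)), label) for _ in range(n)]

def train_centroids(data):
    """A trivial classifier: the mean point (centroid) of each class."""
    sums, counts = {}, {}
    for (x, y), label in data:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {l: (sx / counts[l], sy / counts[l])
            for l, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Assign the label of the nearest centroid."""
    return min(centroids, key=lambda l: (centroids[l][0] - point[0]) ** 2
                                        + (centroids[l][1] - point[1]) ** 2)

def accuracy(centroids, data):
    return sum(predict(centroids, p) == l for p, l in data) / len(data)

# A toy "intrusion detector": benign traffic near (0, 0), attacks near (3, 3).
train = make_data(100, (0, 0), "benign") + make_data(100, (3, 3), "attack")
test = make_data(50, (0, 0), "benign") + make_data(50, (3, 3), "attack")

# The attacker floods the retraining data with benign-looking samples
# labeled "attack", dragging the attack centroid toward legitimate traffic.
poison = make_data(600, (0, 0), "attack")

clean_acc = accuracy(train_centroids(train), test)
poisoned_acc = accuracy(train_centroids(train + poison), test)
print(f"clean accuracy: {clean_acc:.2f}, poisoned accuracy: {poisoned_acc:.2f}")
```

With the attack centroid dragged toward the benign cluster, the detector begins flagging legitimate traffic, and the accuracy drop is visible directly in the printed numbers.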
One prominent example that has been repeated so often that it has become a cliché is the chatbot Tay.
In 2016 Microsoft unleashed Tay, the teen-talking AI chatbot built to mimic and converse with users in real-time.
Tay was designed to mimic the language patterns of a 19-year-old American girl, and to learn from interacting with human users of Twitter.
It went very wrong.
It is possible to distort the input of algorithms.
Highly possible.
Sadly, this seems to be the case.
The way to control and direct machine learning is through its information input. False information in, false results out.
How do we control for error?
With billions of parameters, can we control for error responsibly?
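For a taste of what "controlling for error" might look like in practice, below is a toy sketch of one well-known defense idea, training-data sanitization: discard samples that lie suspiciously far from the rest of their own class before retraining. The data, the threshold, and the two-cluster setup are illustrative assumptions of mine, not from the article or any real system.

```python
import random

random.seed(1)

def centroid(points):
    """Mean point of a list of 2-D points."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def sanitize(data, max_dist=2.0):
    """Keep only samples within max_dist of their own class's centroid."""
    by_label = {}
    for point, label in data:
        by_label.setdefault(label, []).append(point)
    centers = {l: centroid(pts) for l, pts in by_label.items()}
    return [(p, l) for p, l in data if dist(p, centers[l]) <= max_dist]

# Legitimate "benign" samples cluster near (0, 0); an attacker has
# injected mislabeled samples near (5, 5) under the same label.
clean = [((random.gauss(0, 0.5), random.gauss(0, 0.5)), "benign")
         for _ in range(200)]
poison = [((5 + random.gauss(0, 0.5), 5 + random.gauss(0, 0.5)), "benign")
          for _ in range(20)]

kept = sanitize(clean + poison)
print(f"kept {len(kept)} of {len(clean) + len(poison)} samples")
```

Here the injected points sit far from the class centroid and are filtered out, but a subtler attacker can stay inside the threshold, which is exactly why controlling for error at scale is so hard.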
The Mentat machine learning problem is unlikely to be overcome.
Then again, we as humans do often receive false information.
It is not as if humans are immune to manipulation.
Simply put, we should not expect machines to be beyond manipulation.
This is especially true of machine behaviour based on large datasets.
Do you see how a science-fiction vision from the 60s can remind us both that we need to remember to be human, and that even extraordinarily well-calculated decisions can be horribly wrong?
A fair warning for data scientists and those working in the field of artificial intelligence.
This is #500daysofAI and you are reading article 471. I am writing one new article about or related to artificial intelligence every day for 500 days.
Translated from: https://medium.com/dataseries/data-science-ai-and-dune-53b9f5512f31