The Complete, Interesting, and Convoluted History of Neural Networks

We will be looking at the history of neural networks. While going through various sources, I found that this history piqued my interest, and I became engrossed; researching the topic was genuinely gratifying. Below is the table of contents. Feel free to skip to the topic that fascinates you the most.

Table of Contents:

  1. Introduction
  2. The initiation of thoughts and ideas
  3. The golden era
  4. The advancements
  5. The failure and disbandment of neural networks
  6. The re-emergence and complete domination of neural networks
  7. Conclusion

Introduction:

Neural networks and artificial intelligence have been popular topics since the past century. The popularity of pop-culture movies in which artificial intelligence robots take over the world has undeniably intrigued a lot of curious minds. Neural networks draw inspiration from biological neurons: they are a biologically inspired programming paradigm that enables deep learning models to learn and train effectively on complex observational datasets. Over the past century, neural networks have been through various phases: they went from being a strong prospect for solving complex computational problems, to being ridiculed as a merely theoretical idea, to finally becoming prominent again with a promising future. Let us revisit each stage in the history of neural networks in chronological order.

Note: This is the very first part of the series titled “Everything About Deep Learning.” In this series, we will try to cover every fact and algorithm, the activation functions, and what the future holds for artificial neural networks and deep learning. Today, we start with the complete history of neural networks. In the next part, we will cover the basics that enable the functioning of neurons, and in subsequent parts, we will cover all the concepts related to deep learning.

The Initiation of Thoughts and Ideas:

Biologists, neurologists, and researchers have been working on the functionality of neurons since the nineteenth century. In 1890, the American philosopher William James proposed an insightful theory that foreshadowed the subsequent work of many researchers. The hypothesis, in simple terms, states that the activity at any given point in the brain cortex is the sum of the tendencies of all the activity discharged into it. Put briefly, this means that the excitation of one neuron excites the neurons connected to it, until the signal successfully reaches its target.

The credit for developing the first mathematical model of a single neuron goes to McCulloch and Pitts, in the year 1943. The neuron model they constructed was comprehensive and far-reaching; it has since been modified and is widely used even in the modern era. This moment brought about a colossal shift in the minds of researchers and practitioners of neural networks. A mathematical model of a neuron that functioned similarly to those in the human brain was flabbergasting to most biologists. Both the bandwagon of support for AI's success and the concerns over AI taking over the world date from this moment onwards.
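
To make the idea concrete, here is a minimal sketch of a McCulloch-Pitts-style threshold unit, written in Python with illustrative names (an assumption about the common textbook form, not the authors' original notation): binary inputs are summed, and the unit fires only when the sum reaches a threshold, which is enough to realize simple logic gates.

```python
def mcculloch_pitts_neuron(inputs, threshold):
    """Fire (return 1) if the number of active binary inputs meets or
    exceeds the threshold; otherwise stay silent (return 0)."""
    return 1 if sum(inputs) >= threshold else 0

# A 2-input unit with threshold 2 behaves like logical AND,
# while the same unit with threshold 1 behaves like logical OR.
print(mcculloch_pitts_neuron([1, 1], threshold=2))  # 1 (AND fires)
print(mcculloch_pitts_neuron([1, 0], threshold=2))  # 0
print(mcculloch_pitts_neuron([1, 0], threshold=1))  # 1 (OR fires)
```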

We will look at each of these concepts in further detail over the next tutorials in this series. Understanding every concept of neural networks from scratch, including the working of a single neuron, will also be accomplished over the series.

The Golden Era:

Over the next two decades, ranging from 1949 to 1969, a wide array of experiments was performed, and there were massive developments and expansions of the existing methodologies. It would not be wrong to say that this period was the golden era of neural networks. The era started with a bang thanks to the Hebbian theory, introduced by Donald Hebb in his 1949 book titled “The Organization of Behavior.” In simple terms, the Hebbian theory states that the conductance of a particular synapse increases with the repeated activation of one neuron by another across it.
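
Hebb's postulate is often summarized as “cells that fire together wire together,” and it can be written as a simple weight update. Below is a minimal sketch assuming the common textbook form of the rule, Δw = η · x · y, where the learning rate η and the variable names are illustrative rather than Hebb's own notation.

```python
import numpy as np

def hebbian_update(w, x, y, lr=0.1):
    """One Hebbian step: strengthen each weight in proportion to the
    co-activity of its presynaptic input x and the postsynaptic output y."""
    return w + lr * y * x

# When input and output are repeatedly active together, the connecting
# weights grow; weights of inactive inputs are left untouched.
w = np.zeros(3)
x = np.array([1.0, 0.0, 1.0])  # presynaptic activity pattern
y = 1.0                        # postsynaptic neuron fires
for _ in range(5):
    w = hebbian_update(w, x, y)
print(w)  # [0.5 0.  0.5]
```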

During this phase, there were several evolutions in salient topics such as learning filters, gradient descent, developments in neurodynamics, and the triggering and propagation of large-scale brain activity. There was extensive research into the synchronous activation of multiple neurons to represent each bit of information. Information theory, with the principles of Shannon's entropy, became an important area of research. The most significant invention, however, was the perceptron model by Rosenblatt in the year 1958.

The perceptron model is one of the most substantial discoveries in neural networks. Rosenblatt also experimented with error back-propagating correction procedures, a forerunner of the backpropagation methods later used for training multi-layered networks. This era was candidly the golden era of neural networks, thanks to the extensive research and continuous developments. Taylor constructed a winner-take-all circuit with inhibition among output units, and other progressions of the perceptron model were also accomplished.
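
As an illustration, here is a minimal sketch of the classic perceptron learning rule (the function name, learning rate, and the 0/1 label encoding are illustrative choices): the weights are nudged only when the model misclassifies an example, which provably converges whenever the classes are linearly separable.

```python
import numpy as np

def perceptron_train(X, y, epochs=10, lr=1.0):
    """Train a single perceptron on inputs X and labels y in {0, 1}.
    Weights move toward an example only when its prediction is wrong."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            error = yi - (1 if xi @ w + b > 0 else 0)
            w += lr * error * xi  # push the decision boundary toward the mistake
            b += lr * error
    return w, b

# The linearly separable AND function is learned within a few epochs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y_and = np.array([0, 0, 0, 1])
w, b = perceptron_train(X, y_and)
print([1 if x @ w + b > 0 else 0 for x in X])  # [0, 0, 0, 1]
```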

The Advancements:

Many topics were researched and investigated during the 1970s to 1990s. Unfortunately, the developments were of little avail. There was research into combining many neurons into neural networks that would be more powerful than a single neuron and could perform complex computations. Since gradient descent did not succeed in obtaining the desired solutions to complex tasks, the development of other random, probabilistic, and stochastic mathematical methods became necessary. Further theoretical results and analyses were established during this time frame.

Boltzmann machines and hybrid systems for complex computational problems were also developed successfully during this period of advancements. Boltzmann machines offered a stochastic way to attack such mathematical problems. Solutions to various remaining drawbacks could not be accomplished due to hardware and software limitations. Nonetheless, a significant amount of successful research was conducted during this period, and updates and improvements to existing studies became established.
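
For context, a Boltzmann machine is a network of stochastic binary units whose joint state is scored by an energy function, with low-energy configurations being the more probable ones. The sketch below assumes the standard textbook energy E(s) = -½ sᵀWs - bᵀs with symmetric weights and a zero diagonal; all names are illustrative.

```python
import numpy as np

def boltzmann_energy(s, W, b):
    """Energy of binary state s under symmetric weights W (zero diagonal)
    and biases b; lower energy means a more probable configuration."""
    return -0.5 * s @ W @ s - b @ s

# Two mutually excitatory units prefer to be switched on together.
W = np.array([[0.0, 2.0],
              [2.0, 0.0]])
b = np.array([-0.5, -0.5])
for state in ([0, 0], [0, 1], [1, 0], [1, 1]):
    s = np.array(state, dtype=float)
    print(state, boltzmann_energy(s, W, b))
# The state [1, 1] has the lowest energy (-1.0) of the four.
```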

However, despite these advancements, none of this proved decisive or fruitful enough for the development of neural networks, and the burgeoning demand for artificial neural networks no longer existed. One of the significant reasons for this was the demonstration of the limitations of the simple perceptron. In 1969, Minsky and Papert conducted this demonstration and showcased its flaws: they proved theoretically that the simple perceptron model is not computationally universal; most famously, a single-layer perceptron cannot compute the XOR function, because its two classes are not linearly separable. This moment became infamous, marking a black day for neural networks. Funding support for research in the field of neural networks was reduced drastically, and this set the fall of neural networks in motion.
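
A tiny self-contained sketch makes this limitation concrete (again with illustrative names and a plain error-correction update): no matter how long a single perceptron trains on XOR, no linear decision boundary can separate the four points correctly.

```python
import numpy as np

# The four XOR points cannot be split by any single straight line
# (Minsky & Papert, 1969), so a lone perceptron keeps oscillating.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y_xor = np.array([0, 1, 1, 0])

w, b = np.zeros(2), 0.0
for _ in range(1000):  # far more passes than the AND example needed
    for xi, yi in zip(X, y_xor):
        error = yi - (1 if xi @ w + b > 0 else 0)
        w += error * xi
        b += error

print([1 if x @ w + b > 0 else 0 for x in X])  # never equals [0, 1, 1, 0]
```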

The Failure and Disbandment of Neural Networks:

The hype for artificial neural networks had been at an all-time peak, but in due course all of it simply vanished. Artificial intelligence being the next big thing was no longer the talking point for intellectuals, and artificial neural networks and deep learning were ridiculed as a merely theoretical concept. The main reasons for this were the lack of data and the lack of advanced computing technology.

At that time, there were not enough computational resources for complex tasks such as image segmentation, image classification, face recognition, or natural-language-processing-based chatbots. The data available was quite limited, and there was not enough of it for a complex neural network architecture to provide the desired results. And even with the required data, computing on that scale with the resources available at the time would have been an incredibly challenging task.

There were signs of optimism, such as successes in reinforcement learning and other smaller positives. Unfortunately, this was not enough to rebuild the massive hype the field once had. Thanks to researchers and scientists with extraordinary vision, developments in the field of artificial neural networks continued. However, it would take another 20 years for artificial neural networks to regain their lost prestige and hype.

The Re-emergence and Complete Domination of Neural Networks:

The next two decades were dry for the standing and popularity of deep learning. During this era, support vector machines (SVMs) and other similar machine learning algorithms were more dominant and were practiced to solve complex tasks. These algorithms performed well on most datasets, but their performance did not improve significantly on bigger datasets; after a certain threshold, it stagnated. Models that could learn and improve continuously with increasing data therefore became important.

In 2012, a team led by George E. Dahl won the “Merck Molecular Activity Challenge” using multi-task deep neural networks to predict the biomolecular target of one drug. In 2014, Hochreiter's group used deep learning to detect off-target and toxic effects of environmental chemicals in nutrients, household products, and drugs, and won the “Tox21 Data Challenge” of NIH, FDA, and NCATS. (Reference: Wikipedia)

A revolution started at this precise moment, and deep neural networks were now considered a game-changer. Deep learning and neural networks are now the salient techniques contemplated for any high-level competition. Convolutional neural networks, long short-term memory networks (LSTMs), and generative adversarial networks are exceedingly popular.

The stature of deep learning is rapidly increasing each day, with vast improvements. It is exciting to see what the future holds for deep neural networks and artificial intelligence.

Conclusion:

The journey of neural networks is one to remember for the ages. Neural networks and deep learning went from being a fantastic prospect to now being one of the best methods for solving almost any complex problem whatsoever. I am thrilled to see the advancements that will take place in the deep learning field, and I am delighted to be part of the generation that can contribute to this change.

Most readers of this article are probably fascinated as well. I will try to cover every topic, from the history of neural networks to the working and understanding of every deep learning algorithm and architecture, in the series titled “Everything About DL.” Let's stick together on this journey and conquer deep learning.

I hope all of you enjoyed reading this article. Have a wonderful day!

Translated from: https://towardsdatascience.com/the-complete-interesting-and-convoluted-history-of-neural-networks-2764a54e9e76
