
Why Deep Learning is Radically Different from Machine Learning


This article is a translation of the LinkedIn post "Why Deep Learning is Radically Different from Machine Learning" by Carlos E. Perez. Original article:
http://www.linkedin.com/pulse/why-deep-learning-radically-different-from-machine-carlos-e-perez/


(1)
AI is the all-encompassing umbrella that covers everything from Good Old Fashioned AI (GOFAI, the symbolic school built on the physical symbol system hypothesis and bounded rationality) all the way to connectionist architectures like Deep Learning.


ML is a sub-field of AI that covers anything to do with the study of learning algorithms trained with data.


There is a whole swath of techniques that have been developed over the years, such as Linear Regression, K-means, Decision Trees, Random Forests, PCA, SVMs, and finally Artificial Neural Networks (ANNs).
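
As a rough illustration of how interchangeable these classic techniques are at the API level, the sketch below fits a few of them on the same toy dataset. This is not from the original article; it assumes scikit-learn is installed, and the models, dataset, and hyperparameters are arbitrary choices.

```python
# Illustrative sketch only: fit several classic ML models on one toy dataset.
# Assumes scikit-learn is installed; all hyperparameters are arbitrary.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# Synthetic binary classification data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(),
    "random forest": RandomForestClassifier(n_estimators=100),
    "svm": SVC(),
    "ann (mlp)": MLPClassifier(hidden_layer_sizes=(50,), max_iter=500),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")
```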


Artificial Neural Networks are where the field of Deep Learning had its genesis.


(2)
Some ML practitioners with previous exposure to Artificial Neural Networks (which were, after all, invented in the early 1960s) may come away with the first impression that Deep Learning is nothing more than an ANN with multiple layers.
Furthermore, on this view, the success of DL owes more to the availability of more data and of more powerful computational engines like Graphics Processing Units (GPUs).
This of course is true: the emergence of DL is essentially due to these two advances. However, concluding that DL is just a better algorithm than SVMs or Decision Trees is akin to focusing only on the trees and not seeing the forest.
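
To see what the naive "just an ANN with multiple layers" picture literally amounts to, here is a minimal forward-pass sketch in plain NumPy. It is illustrative only: layer sizes are arbitrary, and there is no training loop.

```python
import numpy as np

# Minimal sketch of "an ANN with multiple layers": a forward pass through a
# stack of affine transforms and nonlinearities. Illustrative only.
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    # Each layer is a (weights, bias) pair; apply them in sequence.
    for W, b in layers[:-1]:
        x = relu(x @ W + b)
    W, b = layers[-1]
    return x @ W + b  # final layer left linear (e.g. logits)

sizes = [20, 64, 64, 2]  # input -> two hidden layers -> output
layers = [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

x = rng.standard_normal((5, 20))  # batch of 5 examples
print(forward(x, layers).shape)   # (5, 2)
```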


(3)
To paraphrase Andreessen, who said "Software is eating the world": "Deep Learning is eating ML."
Two publications by practitioners in different machine learning fields summarize best why DL is taking over the world.
Chris Manning, an expert in NLP, writes about the "Deep Learning Tsunami":

"Deep Learning waves have lapped at the shores of computational linguistics for several years now, but 2015 seems like the year when the full force of the tsunami hit the major Natural Language Processing (NLP) conferences. However, some pundits are predicting that the final damage will be even worse."


Nicholas Paragios writes about "Computer Vision Research: the Deep Depression":

"It might be simply because deep learning on highly complex, hugely determined in terms of degrees of freedom graphs once endowed with massive amount of annotated data and unthinkable — until very recently — computing power can solve all computer vision problems. If this is the case, well it is simply a matter of time that industry (which seems to be already the case) takes over, research in computer vision becomes a marginal academic objective and the field follows the path of computer graphics (in terms of activity and volume of academic research)."


These two articles highlight how the field of Deep Learning is fundamentally disruptive to conventional ML practice.


Certainly it should be equally disruptive in the business world.


I am, however, stunned and perplexed that even Gartner fails to recognize the difference between ML and DL. Here is their August 2016 Hype Cycle, and Deep Learning isn't even mentioned on the slide:

[Gartner Hype Cycle for Emerging Technologies, August 2016]


(4)
What a travesty!
Anyway, despite being ignored, DL continues to be hyped.


The current DL hype tends to be that we have this commoditized machinery that, given enough data and enough training time, is able to learn on its own.


This is of course either an exaggeration of what the state of the art is capable of or an oversimplification of the actual practice of DL.


Over the past few years, DL has given rise to a massive collection of ideas and techniques that were previously either unknown or known to be untenable.


At first this collection of concepts seems fragmented and disparate. However, over time, patterns and methodologies begin to emerge, and we are frantically attempting to cover this space in "Design Patterns of Deep Learning".

 

(5)

Deep Learning today goes beyond just multi-layer perceptrons; it is a collection of techniques and methods used to build composable differentiable architectures.
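
One way to picture "composable differentiable architectures": small differentiable building blocks are composed into larger ones, and gradients flow end to end through the whole composition. The sketch below is illustrative only; it assumes PyTorch is installed, and the module sizes are arbitrary.

```python
import torch
import torch.nn as nn

# Illustrative sketch of composing differentiable building blocks (assumes PyTorch).
# Each piece is differentiable, so the composition can be trained end to end.
encoder = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 32), nn.ReLU())
head = nn.Linear(32, 2)
model = nn.Sequential(encoder, head)  # compose blocks into a larger architecture

x = torch.randn(8, 20)          # batch of 8 examples
y = torch.randint(0, 2, (8,))   # dummy labels
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()                 # gradients flow through the whole composition

# Every parameter in every composed block received a gradient.
print(all(p.grad is not None for p in model.parameters()))
```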


These are extremely capable machine learning systems, and we are only now seeing the tip of the iceberg.


The key takeaway is that Deep Learning may look like alchemy today, but we will eventually learn to practice it like chemistry.


That is, we will have a more solid foundation, so as to be able to build our learning machines with greater predictability of their capabilities.

---------------------

This article is from haimianjie2012's CSDN blog. Full text: https://blog.csdn.net/haimianjie2012/article/details/78242136?utm_source=copy
