Deep learning trends at Google. Source: SIGMOD 2016/Jeff Dean
1958: Perceptron (linear model)
1969: Perceptron has limitations
1980s: Multi-layer perceptron
Not significantly different from today's DNNs
1986: Backpropagation
Using more than 3 hidden layers was usually not helpful
1989: 1 hidden layer is “good enough”, so why go deep?
2006: RBM initialization
2009: GPU
2011: Started to become popular in speech recognition
2012: Won the ILSVRC image competition
2015.2: Image recognition surpassing human-level performance
2016.3: AlphaGo beats Lee Sedol
2016.10: Speech recognition system as good as humans
Network parameters θ: all the weights and biases in the “neurons”.
Fully Connected Feedforward Network:
Deep = many hidden layers.
Matrix Operation: each layer computes its output as σ(Wa + b), a matrix–vector product followed by an element-wise activation.
Neural Network: the whole network is the nested composition of these layer-wise matrix operations (a minimal sketch follows).
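As a rough illustration of the matrix view above, here is a minimal NumPy sketch of a fully connected feedforward pass; the layer sizes, the sigmoid activation, and the random initialization are assumptions for illustration, not anything fixed by the notes.

```python
import numpy as np

def sigmoid(z):
    """Element-wise sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    """Fully connected feedforward pass: each layer computes a = sigmoid(W a_prev + b)."""
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)   # one matrix operation per layer
    return a

# Hypothetical structure: 4 inputs -> two hidden layers of 8 -> 3 outputs.
rng = np.random.default_rng(0)
sizes = [4, 8, 8, 3]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal(m) for m in sizes[1:]]

y = forward(rng.standard_normal(4), weights, biases)
print(y)   # 3 output values in (0, 1)
```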
Example Application: Handwriting Digit Recognition
You need to decide the network structure so that a good function is contained in your function set (a structural sketch follows the questions below).
Q: How many layers? How many neurons for each layer?
Q: Can the structure be automatically determined?
E.g. Evolutionary Artificial Neural Networks
Q: Can we design the network structure?
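For the digit recognition task, the structure decision amounts to fixing the input and output dimensions (the task forces those) and choosing the hidden layers yourself. A hedged sketch, assuming 16x16 grayscale images flattened to 256 inputs, 10 softmax outputs, and two arbitrarily sized hidden layers:

```python
import numpy as np

def softmax(z):
    """Turn output-layer scores into class probabilities."""
    e = np.exp(z - z.max())
    return e / e.sum()

def digit_network(image, params):
    """Classify a flattened 16x16 image (256 values) into 10 digit classes.

    The 256-in / 10-out shape is fixed by the task; the hidden layers are
    a design choice left to the user.
    """
    a = image
    for W, b in params[:-1]:
        a = np.maximum(0.0, W @ a + b)   # hidden layers (ReLU chosen arbitrarily here)
    W_out, b_out = params[-1]
    return softmax(W_out @ a + b_out)

# Hypothetical structure: 256 -> 500 -> 500 -> 10.
rng = np.random.default_rng(1)
sizes = [256, 500, 500, 10]
params = [(0.01 * rng.standard_normal((m, n)), np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]

probs = digit_network(rng.random(256), params)   # stand-in for a real digit image
print(probs.argmax(), probs.sum())               # predicted digit, probabilities sum to 1
```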
Find the function in the function set that minimizes the total loss L.
Equivalently, find the network parameters θ* that minimize the total loss L.
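Picking the best function therefore means picking θ* = argmin_θ L(θ). A minimal gradient descent sketch (the learning rate, step count, and the toy loss are placeholders, not the course's actual setup):

```python
import numpy as np

def gradient_descent(loss_grad, theta, lr=0.1, steps=1000):
    """Generic gradient descent: theta <- theta - lr * dL/dtheta.

    `loss_grad(theta)` returns the gradient of the total loss L at theta;
    for a neural network, that gradient is what backpropagation provides.
    """
    for _ in range(steps):
        theta = theta - lr * loss_grad(theta)
    return theta

# Toy loss L(theta) = ||theta - 3||^2 with gradient 2 * (theta - 3).
theta_star = gradient_descent(lambda t: 2.0 * (t - 3.0), np.array([10.0, -5.0]))
print(theta_star)   # approximately [3. 3.]
```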
Backpropagation: an efficient way to compute ∂L/∂w in a neural network.
Forward pass: compute ∂z/∂w for all parameters; the backward pass then supplies ∂L/∂z, and their product (chain rule) gives ∂L/∂w.
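To make the bookkeeping concrete, below is a small backpropagation sketch for a single sigmoid layer with a squared-error loss (both are assumptions for illustration): the forward pass gives z = Wx + b, so ∂z/∂w is just the input x, and the backward pass supplies ∂L/∂z; multiplying the two yields ∂L/∂w.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_backward(x, y_target, W, b):
    """Backprop for one sigmoid layer with squared-error loss.

    Forward pass: z = W x + b (so dz/dW_ij = x_j), a = sigmoid(z).
    Backward pass: dL/dz = (a - y_target) * sigmoid'(z),
    then dL/dW = outer(dL/dz, x) and dL/db = dL/dz.
    """
    z = W @ x + b
    a = sigmoid(z)
    L = 0.5 * np.sum((a - y_target) ** 2)

    dL_da = a - y_target
    dL_dz = dL_da * a * (1.0 - a)    # sigmoid'(z) = a * (1 - a)
    dL_dW = np.outer(dL_dz, x)       # combines dL/dz with dz/dw = x
    dL_db = dL_dz
    return L, dL_dW, dL_db

rng = np.random.default_rng(2)
x, y_target = rng.random(4), rng.random(3)
W, b = rng.standard_normal((3, 4)), rng.standard_normal(3)
L, dL_dW, dL_db = forward_backward(x, y_target, W, b)
print(L, dL_dW.shape, dL_db.shape)   # scalar loss, (3, 4), (3,)
```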