A loss layer compares the network's output against the target, and training iteratively optimizes the parameters to reduce the loss.
Softmax (with Loss)
Example:
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip1"
  bottom: "label"
  top: "loss"
}
Conceptually this is equivalent to a softmax layer followed by a multinomial logistic loss layer, but it provides a more numerically stable gradient. A plain Softmax layer only outputs the probability of each class and makes no comparison against the label.
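For contrast, a standalone Softmax layer outputs only the per-class probabilities; a minimal sketch, reusing the "ip1" blob from the example above:

layer {
  name: "prob"
  type: "Softmax"  # probabilities only: no label input, no loss
  bottom: "ip1"
  top: "prob"
}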
Sum-of-Squares / Euclidean
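The Euclidean loss computes the sum of squares of the differences between its two inputs, E = 1/(2N) * Σ_n ||x1_n − x2_n||², over the N samples in a batch. A minimal sketch of a layer definition, assuming prediction and label blobs named "pred" and "label":

layer {
  name: "loss"
  type: "EuclideanLoss"  # sum-of-squares / Euclidean regression loss
  bottom: "pred"
  bottom: "label"
  top: "loss"
}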
Hinge / Margin
Parameters (HingeLossParameter hinge_loss_param):
Optional: norm, the norm to use; one of L1 (the default) or L2
Examples:
# L1 norm (the default)
layer {
  name: "loss"
  type: "HingeLoss"
  bottom: "pred"
  bottom: "label"
}
# L2 norm
layer {
  name: "loss"
  type: "HingeLoss"
  bottom: "pred"
  bottom: "label"
  top: "loss"
  hinge_loss_param {
    norm: L2
  }
}
The hinge loss is the loss used in SVMs; this layer lets a network be trained with an SVM-style objective.
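For reference, over a batch of N samples with K classes, the hinge loss this layer computes (stated here from the layer's documented definition; the notation is mine) is

E = (1/N) * Σ_n Σ_k [max(0, 1 − δ(l_n = k) · t_nk)]^p

where t_nk is the k-th prediction for sample n, l_n is its label, δ(l_n = k) is +1 when k = l_n and −1 otherwise, and p is 1 under the L1 norm or 2 under the L2 norm.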
Accuracy
Example:
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
Accuracy is only computed during the test phase, so the include parameter restricting it to phase: TEST is required. It is not actually a loss and has no backward step.
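AccuracyParameter also has a top_k field; a minimal sketch that counts a prediction as correct whenever the true label appears among the 5 highest-scoring classes:

layer {
  name: "accuracy_top5"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy_top5"
  accuracy_param {
    top_k: 5  # a hit if the label is in the top 5 predictions
  }
  include {
    phase: TEST
  }
}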
Inner Product
The InnerProduct (fully connected) layer treats its input as a single vector and produces an output vector of size num_output. Example:
layer {
  name: "fc8"
  type: "InnerProduct"
  # learning rate and decay multipliers for the weights
  param { lr_mult: 1 decay_mult: 1 }
  # learning rate and decay multipliers for the biases
  param { lr_mult: 2 decay_mult: 0 }
  inner_product_param {
    num_output: 1000
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
  bottom: "fc7"
  top: "fc8"
}
Reshape
Parameters (ReshapeParameter reshape_param):
Optional: shape, the target output dimensions (see below)
Input: a single blob
Example:
layer {
  name: "reshape"
  type: "Reshape"
  bottom: "input"
  top: "output"
  reshape_param {
    shape {
      dim: 0  # copy the dimension from below
      dim: 2
      dim: 3
      dim: -1 # infer it from the other dimensions
    }
  }
}
This operation does not change the data, only its dimensions, and no data is copied in the process. The output dimensions are specified by the shape parameter: a positive value sets the corresponding output dimension directly, and in addition there are two special values:

0: copy the corresponding dimension from the bottom blob unchanged;
-1: infer this dimension from the remaining ones (at most one dim may be -1).

In particular, when using the parameters reshape_param { shape { dim: 0 dim: -1 } }, the Reshape layer behaves like a Flatten layer, transforming data of shape n * c * h * w into shape n * (c*h*w).
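Written out as a full layer definition, a minimal sketch of that flatten usage (the blob names "conv_out" and "flat" are hypothetical):

layer {
  name: "flatten"
  type: "Reshape"
  bottom: "conv_out"  # hypothetical n * c * h * w input
  top: "flat"         # becomes n * (c*h*w)
  reshape_param {
    shape {
      dim: 0   # keep the batch dimension n
      dim: -1  # fold c, h, and w into one dimension
    }
  }
}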
Concatenation
This layer concatenates multiple input blobs into a single output blob.
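A minimal sketch, assuming two input blobs "in1" and "in2" that agree in every dimension except the one being concatenated; axis: 1 joins them along the channel axis:

layer {
  name: "concat"
  type: "Concat"
  bottom: "in1"
  bottom: "in2"
  top: "out"
  concat_param {
    axis: 1  # 0 would concatenate along the batch (num) axis instead
  }
}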
That is all on layers for now.