local_response_normalization was introduced in the paper "ImageNet Classification with Deep Convolutional Neural Networks", which reports that this kind of normalization helps generalization.
tf.nn.local_response_normalization(input, depth_radius=None, bias=None, alpha=None, beta=None, name=None)
'''
Local Response Normalization.
The 4-D input tensor is treated as a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently. Within a given vector, each component is divided by the weighted, squared sum of inputs within depth_radius. In detail,

    sqr_sum[a, b, c, d] =
        sum(input[a, b, c, d - depth_radius : d + depth_radius + 1] ** 2)
    output = input / (bias + alpha * sqr_sum) ** beta
'''
"""
input: A Tensor. Must be one of the following types: float32, half. 4-D.
depth_radius: An optional int. Defaults to 5. 0-D. Half-width of the 1-D normalization window.
bias: An optional float. Defaults to 1. An offset (usually positive to avoid dividing by 0).
alpha: An optional float. Defaults to 1. A scale factor, usually positive.
beta: An optional float. Defaults to 0.5. An exponent.
name: A name for the operation (optional).
"""
batch_normalization, as the name suggests, normalizes activations over a mini-batch.
- Input: a mini-batch $\mathcal{B} = \{x_1, \dots, x_m\}$, plus learnable parameters $\gamma$ and $\beta$
- Output: $\{y_i = \mathrm{BN}_{\gamma,\beta}(x_i)\}$

The algorithm is:

(1) mini-batch mean: $\mu_{\mathcal{B}} = \frac{1}{m}\sum_{i=1}^{m} x_i$
(2) mini-batch variance: $\sigma_{\mathcal{B}}^2 = \frac{1}{m}\sum_{i=1}^{m}(x_i - \mu_{\mathcal{B}})^2$
(3) normalize: $\hat{x}_i = (x_i - \mu_{\mathcal{B}})/\sqrt{\sigma_{\mathcal{B}}^2 + \epsilon}$
(4) scale and shift: $y_i = \gamma\hat{x}_i + \beta$
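To make the four steps concrete, here is a minimal NumPy sketch of the training-time forward pass (the function name and the epsilon value are illustrative choices, not from any particular library):

import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-3):
    # x: [m, d] mini-batch; gamma, beta: [d] learnable parameters.
    mu = x.mean(axis=0)                    # (1) mini-batch mean
    var = x.var(axis=0)                    # (2) mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # (3) normalize
    return gamma * x_hat + beta            # (4) scale and shift

TensorFlow exposes this computation directly as tf.nn.batch_normalization: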
def batch_normalization(x,
                        mean,
                        variance,
                        offset,
                        scale,
                        variance_epsilon,
                        name=None):
Args:
  x: Input Tensor of arbitrary dimensionality.
  mean: A mean Tensor.
  variance: A variance Tensor.
  offset: An offset Tensor, often denoted beta in equations, or None. If present, it will be added to the normalized tensor.
  scale: A scale Tensor, often denoted gamma in equations, or None. If present, the scale is applied to the normalized tensor.
  variance_epsilon: A small float number to avoid dividing by 0.
  name: A name for this operation (optional).

Now we need a function that returns the mean and variance; see below.
def moments(x, axes, shift=None, name=None, keep_dims=False):
# for simple batch normalization pass `axes=[0]` (batch only).
For the batch normalization of a convolutional layer, x has shape [batch_size, height, width, depth]; passing axes=[0, 1, 2] averages over the batch and spatial dimensions, so moments returns (mean, variance) where mean and variance each have shape [depth], one value per channel.
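Putting moments and batch_normalization together for a convolutional activation (the shapes and the epsilon below are illustrative assumptions):

import tensorflow as tf

x = tf.random.normal([32, 28, 28, 64])             # NHWC conv activations
mean, variance = tf.nn.moments(x, axes=[0, 1, 2])  # per-channel stats, shape [64]
offset = tf.zeros([64])                            # beta; learnable in practice
scale = tf.ones([64])                              # gamma; learnable in practice
y = tf.nn.batch_normalization(x, mean, variance, offset, scale,
                              variance_epsilon=1e-3)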