
Loss functions: MarginRankingLoss

A ranking loss function.

For a batch of $N$ samples $D(x_1, x_2, y)$, $x_1$ and $x_2$ are the two inputs to be ranked, and $y \in \{1, -1\}$ is the ground-truth label. When $y = 1$, $x_1$ should be ranked before $x_2$; when $y = -1$, $x_1$ should be ranked after $x_2$. The loss for the $n$-th sample is computed as:

$l_{n}=\max (0,\, -y \cdot (x_1 - x_2)+\operatorname{margin})$

When $x_1$ and $x_2$ are ranked correctly and $y \cdot (x_1 - x_2) > \operatorname{margin}$, the loss is 0.
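The per-pair formula can be checked by hand; the sketch below (illustrative values only) evaluates $\max(0, -y(x_1 - x_2) + \text{margin})$ for both label values, using `torch.clamp` for the $\max(0, \cdot)$ part:

```python
import torch

# Illustrative values (not from the article's example).
x1 = torch.tensor(1.0)
x2 = torch.tensor(0.5)
margin = 0.25

# y = 1: x1 should rank above x2. Here x1 - x2 = 0.5 > margin, so the loss is 0.
loss_correct = torch.clamp(-1.0 * (x1 - x2) + margin, min=0)
print(loss_correct.item())  # 0.0

# y = -1: x2 should rank above x1, but x1 > x2, so the pair is penalized:
# -(-1) * 0.5 + 0.25 = 0.75
loss_wrong = torch.clamp(-(-1.0) * (x1 - x2) + margin, min=0)
print(loss_wrong.item())  # 0.75
```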

# Excerpt from the PyTorch source (torch/nn/modules/loss.py):
class MarginRankingLoss(_Loss):
    __constants__ = ['margin', 'reduction']

    def __init__(self, margin=0., size_average=None, reduce=None, reduction='mean'):
        super(MarginRankingLoss, self).__init__(size_average, reduce, reduction)
        self.margin = margin

    def forward(self, input1, input2, target):
        return F.margin_ranking_loss(input1, input2, target, margin=self.margin, reduction=self.reduction)

PyTorch implements this via the torch.nn.MarginRankingLoss class; the F.margin_ranking_loss function can also be called directly. The size_average and reduce arguments in the code above are deprecated. reduction takes one of three values, mean, sum, or none, each corresponding to a different return value $\ell(x, y)$. The default is mean, which corresponds to the loss computation above. Let

$L=\left\{l_{1}, \ldots, l_{N}\right\}$

$\ell(x, y)=\begin{cases} L, & \text{if reduction = 'none'} \\ \frac{1}{N} \sum_{i=1}^{N} l_{i}, & \text{if reduction = 'mean'} \\ \sum_{i=1}^{N} l_{i}, & \text{if reduction = 'sum'} \end{cases}$
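The three reduction modes can be compared side by side. The sketch below (with made-up input values) shows that mean and sum are just the average and total of the element-wise losses returned by none:

```python
import torch
import torch.nn as nn

# Illustrative batch of three pairs (values chosen for this sketch).
input1 = torch.tensor([1.0, 0.2, 0.5])
input2 = torch.tensor([0.5, 0.8, 0.8])
target = torch.tensor([1.0, -1.0, 1.0])

per_pair = nn.MarginRankingLoss(reduction="none")(input1, input2, target)
mean_loss = nn.MarginRankingLoss(reduction="mean")(input1, input2, target)
sum_loss = nn.MarginRankingLoss(reduction="sum")(input1, input2, target)

print(per_pair)          # element-wise losses l_1 .. l_N
print(mean_loss.item())  # their average (the default)
print(sum_loss.item())   # their sum
```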

margin defaults to 0.

Example:

import torch
import torch.nn.functional as F
import torch.nn as nn
import math


def validate_MarginRankingLoss(input1, input2, target, margin):
    # Manually compute l_n = max(0, -y * (x1 - x2) + margin) for each pair,
    # then average over the batch (equivalent to reduction='mean').
    val = 0
    for x1, x2, y in zip(input1, input2, target):
        loss_val = max(0, -y * (x1 - x2) + margin)
        val += loss_val
    return val / input1.nelement()


torch.manual_seed(10)
margin = 0
loss = nn.MarginRankingLoss()
input1 = torch.randn([3], requires_grad=True)
input2 = torch.randn([3], requires_grad=True)
target = torch.tensor([1, -1, -1])
print(target)
output = loss(input1, input2, target)
print(output.item())

output = validate_MarginRankingLoss(input1, input2, target, margin)
print(output.item())

loss = nn.MarginRankingLoss(reduction="none")
output = loss(input1, input2, target)
print(output)

Output:

tensor([ 1, -1, -1])
0.015400052070617676
0.015400052070617676
tensor([0.0000, 0.0000, 0.0462], grad_fn=<ClampMinBackward>)
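To see the effect of a nonzero margin (the example above uses the default 0), here is a small sketch with made-up values: a correctly ordered pair still incurs loss when its gap is smaller than the margin.

```python
import torch
import torch.nn as nn

# Both pairs are correctly ordered (input1 > input2, target = 1),
# but only the first pair's gap exceeds the margin.
input1 = torch.tensor([1.0, 1.0])
input2 = torch.tensor([0.5, 0.9])
target = torch.tensor([1.0, 1.0])

loss_fn = nn.MarginRankingLoss(margin=0.3, reduction="none")
per_pair = loss_fn(input1, input2, target)
print(per_pair)  # pair 0: gap 0.5 >= 0.3 -> 0; pair 1: gap ~0.1 < 0.3 -> ~0.2
```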