When learning with long-tailed data, a common challenge is that instance-rich (or head) classes dominate the training procedure. The learned classification model tends to perform better on these classes, while performance is significantly worse for instance-scarce (or tail) classes (under-fitting).
Existing schemes for long-tailed recognition generally fall into two camps: classifiers are either learned jointly with the representations end-to-end, or via a two-stage approach where the classifier and the representation are jointly fine-tuned with variants of class-balanced sampling as a second stage.
In our work, we argue for decoupling representation and classification. We demonstrate that in a long-tailed scenario, this separation allows straightforward approaches to achieve high recognition performance, without the need for designing sampling strategies, balance-aware losses or adding memory modules.
Directions explored in recent studies on the long-tailed recognition problem:
For most sampling strategies presented below, the probability $p_j$ of sampling a data point from class $j$ is given by: $p_{j}=\frac{n_{j}^{q}}{\sum_{i=1}^{C} n_{i}^{q}}$, where $q \in [0, 1]$, $n_j$ denotes the number of training samples for class $j$, and $C$ is the number of classes.
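This one formula covers the common strategies by varying $q$: $q=1$ gives instance-balanced sampling, $q=0$ gives class-balanced sampling, and $q=1/2$ gives square-root sampling. A minimal sketch (the class counts below are made up for illustration):

```python
def sampling_probs(class_counts, q):
    """Per-class sampling probability p_j = n_j^q / sum_i n_i^q, for q in [0, 1]."""
    weights = [n ** q for n in class_counts]
    total = sum(weights)
    return [w / total for w in weights]

# A toy long-tailed distribution: one head class and two tail classes.
counts = [1000, 100, 10]

for q in (1.0, 0.5, 0.0):  # instance-balanced, square-root, class-balanced
    probs = sampling_probs(counts, q)
    print(f"q={q}: {[round(p, 3) for p in probs]}")
```

With $q=1$ the head class dominates sampling in proportion to its size; with $q=0$ every class is drawn with equal probability regardless of its count; $q=1/2$ interpolates between the two.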