
CV dataset usage notes: how to use image_info_test-dev2017.json

How to use image_info_test-dev2017.json

Annotation tools

Use labelme to annotate object instances, build a custom dataset, and convert it to the COCO dataset format

Some point coordinates in the .json files produced by labelme are negative

  Noted on 2021-01-07, after the fact: with the earlier labelme version (labelme_v3.16.7), when object instances are annotated as polygons, some point (x, y) values in the resulting .json files are negative or lie outside the image boundary. The labelme maintainer has confirmed this is a bug, and it has been fixed in later releases.
An error occurred while aiming at the boundary · Issue #398 · wkentaro/labelme 20190731
  For the .json files annotated before that, I tentatively (note: I am not sure this handling is appropriate) modified the code in customize_datasets_labelme2coco.py as shown below and regenerated the custom dataset.

Excerpt from customize_datasets_labelme2coco.py:

# (excerpt; the full script imports json and numpy as np, among others)
class labelme2coco(object):
    # ... (other methods omitted)

    def data_transfer(self):
        for num, json_file in enumerate(self.labelme_json):
            print('json file is :', json_file)
            with open(json_file, 'r') as fp:
                data = json.load(fp)  # load the labelme .json file
                self.images.append(self.image(data, num))  # calls self.image()
                img_wh = (data['imageWidth'], data['imageHeight'])
                # [An error occurred while aiming at the boundary · Issue #398 · wkentaro/labelme 20190731](https://github.com/wkentaro/labelme/issues/398)
                # c-y_note0107: older labelme versions have a bug that lets some point (x, y)
                # values be negative or exceed the image boundary, so clip them into the image
                flag_point_coordinate_clip = True
                for shapes in data['shapes']:
                    label = shapes['label']
                    if label not in self.label:
                        self.label.append(label)
                        self.categories.append(self.category(label))  # calls self.category()
                    if flag_point_coordinate_clip is True:
                        tmp_points = np.asarray(shapes['points'])
                        np.clip(tmp_points[:, 0], 0, img_wh[0], out=tmp_points[:, 0])
                        np.clip(tmp_points[:, 1], 0, img_wh[1], out=tmp_points[:, 1])
                        points = tmp_points.tolist()
                    else:
                        points = shapes['points']
                    # a valid polygon needs at least 3 points; note that with this assert in
                    # place the len(points) == 2 branch below can never be reached
                    assert len(points) > 2, "error segmentation, the number of points is less than 3;"
                    if len(points) == 2:
                        print("error segmentation, there are only two points;")
                        points.append(points[-1])
                    self.annotations.append(self.annotation(points, label, num))  # calls self.annotation()
                    self.annID += 1
            print(f"finish the data_transfer of {num}th image")


Contents of the single .json file produced by labelme for one annotated image

Noted on 2021-01-07:
  The single .json file produced by labelme when one image is annotated with polygons looks like this:

{
  "version": "3.16.7",
  "flags": {},
  "shapes": [
    {
      "label": "outer-all",
      "line_color": null,
      "fill_color": null,
      "points": [
        [
          254.0434782608695,
          706.5217391304348
        ],
        [
          230.13043478260863,
          678.2608695652174
        ],
        [
          204.0434782608695,
          295.65217391304344
        ],
        [
          234.47826086956513,
          286.95652173913044
        ],
        [
          1306.2173913043478,
          236.95652173913044
        ],
        [
          1319.2608695652173,
          260.8695652173913
        ],
        [
          1348.8372093023256,
          484.8837209302326
        ],
        [
          1347.6744186046512,
          639.5348837209302
        ],
        [
          1313.953488372093,
          654.6511627906976
        ]
      ],
      "shape_type": "polygon",
      "flags": {}
    },
    ... (remaining shapes omitted)
  ],
  "lineColor": [
    0,
    255,
    0,
    128
  ],
  "fillColor": [
    255,
    0,
    0,
    128
  ],
  "imagePath": "202.jpg",
  "imageData": "/9j/4AAQSkZJRgABAQAAAQABAA略略略YmfypMn1oooA//Z",
  "imageHeight": 5110,
  "imageWidth": 2090
}


Converting the multiple per-image .json files into the COCO dataset format with labelme2coco.py

Noted on 2021-01-07:
  The single .json file obtained after converting the per-image labelme .json files into the COCO dataset format with labelme2coco.py looks like this:

{
 "info": "spytensor created",
 "license": [
  "license"
 ],
 "images": [
  {
   "height": 2224,
   "width": 3554,
   "id": 0,
   "file_name": "1889.jpg"
  },
  ... (more image entries omitted)
  {
   "height": 4031,
   "width": 2553,
   "id": 1,
   "file_name": "1030.jpg"
  }
  ],
 "annotations": [
  {
   "id": 0,  # object instance的id编号从0开始
   "image_id": 0,  # 图片的image_id编号从0开始
   "category_id": 1,
   "segmentation": [
    [
     471.23255813953494,
     726.8604651162791,
     1001.4651162790699,
     661.7441860465117,
     1582.860465116279,
     666.3953488372093,
     1415.418604651163,
     705.9302325581396,
     1422.3953488372092,
     1910.5813953488373,
     513.0930232558139,
     1929.1860465116279
    ]
   ],
   "bbox": [
    471.23255813953494,
    661.7441860465117,
    1111.6279069767443,
    1267.4418604651162
   ],
   "iscrowd": 0,
   "area": 1408923.7425635478
  },
  ... (more annotations omitted)
  {
   "id": 8,
   "image_id": 0,
   "category_id": 1,
   "segmentation": [
    [
     1924.7209302325582,
     10.581395348837209,
     2392.1627906976746,
     1.279069767441861,
     2389.837209302326,
     105.93023255813954,
     1929.372093023256,
     110.5813953488372
    ]
   ],
   "bbox": [
    1924.7209302325582,
    1.279069767441861,
    467.4418604651164,
    109.30232558139534
   ],
   "iscrowd": 0,
   "area": 51092.48242293132
  },
  {
   "id": 9,  # 不同图片中的object instance的id是继续编号的
   "image_id": 1,
   "category_id": 1,
   "segmentation": [
    [
     1860.5,
     137.0,
     377.16666666666697,
     153.66666666666669,
     410.5,
     1149.5,
     1864.666666666667,
     1137.0
    ]
   ],
   "bbox": [
    377.16666666666697,
    137.0,
    1487.5,
    1012.5
   ],
   "iscrowd": 0,
   "area": 1506093.75
  }
  ],
 "categories": [
  {
   "id": 1,
   "name": "Carton"
  }
 ]
}
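
From the numbers above you can read off how this converter fills in "bbox" and "area": "bbox" is the axis-aligned box [x_min, y_min, w, h] around the polygon, and "area" here equals w * h of that box rather than the polygon area (1111.63 × 1267.44 ≈ 1408923.74 for annotation id 0, 1487.5 × 1012.5 = 1506093.75 for id 9). A quick check against annotation id 0 (this just re-reads the numbers printed above, it is not the converter's own code):

# verify how "bbox" and "area" of annotation id 0 were derived from its polygon
seg = [471.23255813953494, 726.8604651162791, 1001.4651162790699, 661.7441860465117,
       1582.860465116279, 666.3953488372093, 1415.418604651163, 705.9302325581396,
       1422.3953488372092, 1910.5813953488373, 513.0930232558139, 1929.1860465116279]
xs, ys = seg[0::2], seg[1::2]        # COCO stores the polygon as [x1, y1, x2, y2, ...]
x_min, y_min = min(xs), min(ys)
w, h = max(xs) - x_min, max(ys) - y_min
print([x_min, y_min, w, h])          # -> the "bbox" shown above
print(w * h)                         # -> 1408923.74..., the "area" shown above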


labelme usage notes

Customizing the colors labelme uses when annotating instances

  I wanted to set the edge and vertex colors of labelme's Polygons manually;
How to manually set label line colour? · Issue #712 · wkentaro/labelme

The relevant labelme source files are:
https://github.com/wkentaro/labelme/blob/master/labelme/app.py#L1135
https://github.com/wkentaro/labelme/blob/master/labelme/config/default_config.yaml#L18

E:\OtherProgramFiles\Anaconda3\envs\my_labelme2_py3\Lib\site-packages\labelme\config\default_config.yaml
In default_config.yaml, change

shape_color: auto  # null, 'auto', 'manual'
label_colors: null

to

shape_color: 'manual'
label_colors: {'car': [255, 0, 0],}

  If you further want to control the line thickness of the polygon edges and vertices, it is easier to write your own OpenCV visualization code than to dig into labelme's source; see the sketch below.

The relevant labelme source files are:
E:\OtherProgramFiles\Anaconda3\envs\my_labelme2_py3\Lib\site-packages\labelme\app.py
E:\OtherProgramFiles\Anaconda3\envs\my_labelme2_py3\Lib\site-packages\labelme\cli
E:\OtherProgramFiles\Anaconda3\envs\my_labelme2_py3\Lib\site-packages\imgviz
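
A minimal sketch along those lines (my own code, not labelme's): read a labelme .json (the file name 202.json is just an example) and draw each polygon's edges and vertices with OpenCV, using whatever color and thickness you like.

import json

import cv2
import numpy as np

# hypothetical input: a labelme .json lying next to the image it annotates
with open("202.json", "r") as fp:
    data = json.load(fp)

img = cv2.imread(data["imagePath"])  # imagePath is relative to the .json file's directory

for shape in data["shapes"]:
    if shape["shape_type"] != "polygon":
        continue
    pts = np.asarray(shape["points"], dtype=np.int32)
    # edges: red (BGR), 3 px thick; vertices: blue filled circles, radius 5 px
    cv2.polylines(img, [pts.reshape(-1, 1, 2)], isClosed=True, color=(0, 0, 255), thickness=3)
    for x, y in pts:
        cv2.circle(img, (int(x), int(y)), 5, (255, 0, 0), -1)

cv2.imwrite("202_vis.jpg", img)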

(Not yet read; may not apply) labelme标注不同物体显示不同颜色_樊城的博客-CSDN博客 20200403

voc_classes and coco_classes

def voc_classes():
    # the 20 PASCAL VOC object classes
    return [
        'aeroplane', 'bicycle', 'bird', 'boat', 'bottle',
        'bus', 'car', 'cat', 'chair', 'cow',
        'diningtable', 'dog', 'horse', 'motorbike', 'person',
        'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor'
    ]

def coco_classes():
    # the 80 COCO object classes
    # note: 'skis' means one ski per foot with an upturned tip, while a 'snowboard'
    # is a single board the rider stands on sideways
    return [
        'person', 'bicycle', 'car', 'motorcycle', 'airplane',
        'bus', 'train', 'truck', 'boat', 'traffic_light',
        'fire_hydrant', 'stop_sign', 'parking_meter', 'bench', 'bird',
        'cat', 'dog', 'horse', 'sheep', 'cow',
        'elephant', 'bear', 'zebra', 'giraffe', 'backpack',
        'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
        'skis', 'snowboard', 'sports_ball', 'kite', 'baseball_bat',
        'baseball_glove', 'skateboard', 'surfboard', 'tennis_racket', 'bottle',
        'wine_glass', 'cup', 'fork', 'knife', 'spoon',
        'bowl', 'banana', 'apple', 'sandwich', 'orange',
        'broccoli', 'carrot', 'hot_dog', 'pizza', 'donut',
        'cake', 'chair', 'couch', 'potted_plant', 'bed',
        'dining_table', 'toilet', 'tv', 'laptop', 'mouse',
        'remote', 'keyboard', 'cell_phone', 'microwave', 'oven',
        'toaster', 'sink', 'refrigerator', 'book', 'clock',
        'vase', 'scissors', 'teddy_bear', 'hair_drier', 'toothbrush'
    ]


COCO dataset

Downloading the COCO dataset

Download COCO dataset. Run under ‘datasets’ directory.
yolov5/get_coco.sh at master · ultralytics/yolov5

# https://gist.github.com/mkocabas/a6177fc00315403d31572e17700d7fd9#gistcomment-3964091
mkdir coco
cd coco
mkdir images
cd images

wget -c http://images.cocodataset.org/zips/train2017.zip
wget -c http://images.cocodataset.org/zips/val2017.zip
wget -c http://images.cocodataset.org/zips/test2017.zip
wget -c http://images.cocodataset.org/zips/unlabeled2017.zip

unzip train2017.zip
unzip val2017.zip
unzip test2017.zip
unzip unlabeled2017.zip

rm train2017.zip
rm val2017.zip
rm test2017.zip
rm unlabeled2017.zip 

cd ../
wget -c http://images.cocodataset.org/annotations/annotations_trainval2017.zip
wget -c http://images.cocodataset.org/annotations/stuff_annotations_trainval2017.zip
wget -c http://images.cocodataset.org/annotations/image_info_test2017.zip
wget -c http://images.cocodataset.org/annotations/image_info_unlabeled2017.zip

unzip annotations_trainval2017.zip
unzip stuff_annotations_trainval2017.zip
unzip image_info_test2017.zip
unzip image_info_unlabeled2017.zip

rm annotations_trainval2017.zip
rm stuff_annotations_trainval2017.zip
rm image_info_test2017.zip
rm image_info_unlabeled2017.zip

COCO dataset: contents of the .json annotation files

*** MSCOCO数据标注详解_风吴痕的博客-CSDN博客 20180319
*** COCO 数据集的使用 - xinet - 博客园20180428

COCO dataset: using the pycocotools API

GitHub - cocodataset/cocoapi: COCO API - Dataset @ http://cocodataset.org/


  E:/OtherProgramFiles/Anaconda3/envs/my_gpu_py3/lib/site-packages/pycocotools/coco.py
__author__ = 'tylin'
__version__ = '2.0'
Interface for accessing the Microsoft COCO dataset.

  Microsoft COCO is a large image dataset designed for object detection, segmentation, and caption generation. pycocotools is a Python API that assists in loading, parsing and visualizing the annotations in COCO. Please visit http://mscoco.org/ for more information on COCO, including for the data, paper, and tutorials. The exact format of the annotations is also described on the COCO website. For example usage of the pycocotools please see pycocotools_demo.ipynb. In addition to this API, please download both the COCO images and annotations in order to run the demo.

  An alternative to using the API is to load the annotations directly into Python dictionary. Using the API provides additional utility functions. Note that this API supports both instance and caption annotations. In the case of captions not all functions are defined (e.g. categories are undefined).

  The following API functions are defined:

  • COCO - COCO api class that loads COCO annotation file and prepare data structures.
  • decodeMask - Decode binary mask M encoded via run-length encoding.
  • encodeMask - Encode binary mask M using run-length encoding.
  • getAnnIds - Get ann ids that satisfy given filter conditions.
  • getCatIds - Get cat ids that satisfy given filter conditions.
  • getImgIds - Get img ids that satisfy given filter conditions.
  • loadAnns - Load anns with the specified ids.
  • loadCats - Load cats with the specified ids.
  • loadImgs - Load imgs with the specified ids.
  • annToMask - Convert segmentation in an annotation to binary mask.
  • showAnns - Display the specified annotations.
  • loadRes - Load algorithm results and create API for accessing them.
  • download - Download COCO images from mscoco.org server.
    Throughout the API “ann”=annotation, “cat”=category, and “img”=image.
    Help on each functions can be accessed by: “help COCO>function”.
    See also COCO>decodeMask,
    COCO>encodeMask, COCO>getAnnIds, COCO>getCatIds,
    COCO>getImgIds, COCO>loadAnns, COCO>loadCats,
    COCO>loadImgs, COCO>annToMask, COCO>showAnns

from pycocotools.coco import COCO
help(COCO.getImgIds)
------------------------------------------------------
class COCO:
    def __init__(self, annotation_file=None):
        """
        Constructor of Microsoft COCO helper class for reading and visualizing annotations.
        :param annotation_file (str): location of annotation file
        :param image_folder (str): location to the folder that hosts images.
        :return:
        """
        pass

    def createIndex(self):
        # create index and create class members
        # ann['image_id'] is the id of the image the annotation belongs to;
        # ann['id'] is the id of the object instance (the annotation itself);
        # ann['category_id'] is the id of the category of that object;
        #
        # imgToAnns: defaultdict[list], built with ann['image_id'] as key and [ann] as value;
        # anns:      dict[dict],        built with ann['id'] as key and ann as value;
        # catToImgs: defaultdict[list], built with ann['category_id'] as key and ann['image_id'] as value;
        #
        # imgToAnns = defaultdict(<class 'list'>, {image id: [ann], image id: [ann], ...})
        # anns = {annotation id: ann, annotation id: ann, ...}

    def info(self):
        # Print information about the annotation file.

    def getAnnIds(self, imgIds=[], catIds=[], areaRng=[], iscrowd=None):
        # Get ann ids that satisfy given filter conditions. default skips that filter

    def getCatIds(self, catNms=[], supNms=[], catIds=[]):
        # Get cat ids that satisfy given filter conditions. default skips that filter.

    def getImgIds(self, imgIds=[], catIds=[]):
        # Get img ids that satisfy given filter conditions.

    def loadAnns(self, ids=[]):
        # Load anns with the specified ids.

    def loadCats(self, ids=[]):
        # Load cats with the specified ids.
    
    def loadImgs(self, ids=[]):
        # Load imgs with the specified ids.

    def showAnns(self, anns, draw_bbox=False):
        # Display the specified annotations.

    def loadRes(self, resFile):
        # Load result file and return a result api object.

    def download(self, tarDir = None, imgIds = [] ):
        # Download COCO images from mscoco.org server.

    def loadNumpyAnnotations(self, data):
        # Convert result data from a numpy array [Nx7] where each row contains {imageID,x1,y1,w,h,score,class}

    def annToRLE(self, ann):
        # Convert annotation which can be polygons, uncompressed RLE to RLE.

    def annToMask(self, ann):
        # Convert annotation which can be polygons, uncompressed RLE, or RLE to binary mask.
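
A quick usage sketch of this interface (the annotation file path and the category name "person" are just examples; any COCO-style instances file works):

from pycocotools.coco import COCO

ann_file = "coco/annotations/instances_val2017.json"  # example path
coco = COCO(ann_file)

cat_ids = coco.getCatIds(catNms=["person"])   # category ids for the given category names
img_ids = coco.getImgIds(catIds=cat_ids)      # images containing those categories
img_info = coco.loadImgs(img_ids[0])[0]       # dict with 'file_name', 'height', 'width', ...

ann_ids = coco.getAnnIds(imgIds=img_info["id"], catIds=cat_ids, iscrowd=None)
anns = coco.loadAnns(ann_ids)
print(img_info["file_name"], "has", len(anns), "person annotations")

mask = coco.annToMask(anns[0])                # (H, W) binary mask of the first annotation
print("mask shape:", mask.shape, "mask area:", mask.sum())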

About COCOeval

The "default values" of COCOeval(cocoGt, cocoDt, iou_type) when evaluating AP, AR, and f-measures

  The "default values" that COCOeval(cocoGt, cocoDt, iou_type) reports for Average Precision, Average Recall, and Maximum f-measures for classes (everything is -1, which is what you get when there is nothing valid to evaluate, e.g. no ground-truth annotations available locally) are:

Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000

Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000

Maximum f-measures for classes:
[-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0]

Score thresholds for classes (used in demos for visualization purposes):
[-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0]

2020-06-11 19:17:02,969 atss_core.inference INFO: OrderedDict([('bbox', OrderedDict([('AP', -1.0), ('AP50', -1.0), ('AP75', -1.0), ('APs', -1.0), ('APm', -1.0), ('APl', -1.0)]))])
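
For reference, the Average Precision / Average Recall lines above are what COCOeval's summarize() prints. A minimal sketch of the usual call sequence (file paths are examples; the -1.000 entries are what summarize() reports when a metric has nothing valid to average over):

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("coco/annotations/instances_val2017.json")     # ground truth (example path)
coco_dt = coco_gt.loadRes("detections_val2017_results.json")  # detections in COCO result format

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")        # or "segm" / "keypoints"
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints the "Average Precision / Average Recall" table shown above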


use ‘test_net.py’ evaluate on ‘coco_2017_test_dev’ · Issue #46 · sfzhang15/ATSS · GitHub 20200611

Evaluating and visualizing model predictions on the COCO dataset

With FiftyOne, you can now download specific subsets of COCO, visualize the data and labels, and evaluate your models on COCO more easily and in fewer lines of code than ever.

  • you can use the FiftyOne App to investigate the quality of a dataset and annotations, to visualize them and scroll through some examples;
  • you can use the FiftyOne App to analyze individual TP/FP/FN examples and cross-reference with additional attributes like whether the annotation is a crowd;
  • Once you have evaluated your model in FiftyOne, you can use the returned results object to view the AP, plot precision-recall curves, and interact with confusion matrices to quickly find the exact samples where your model is correct and incorrect for each class.

The COCO Dataset: Best Practices for Downloading, Visualization, and Evaluation | by Eric Hofesmann | Voxel51 | Medium 20210629

How the COCO dataset divides objects into small, medium, and large

Finer splits such as tiny, extra small, small, medium, large, extra large appear in some benchmarks; COCO itself uses three ranges based on annotation area: small (area < 32²), medium (32² ≤ area < 96²), and large (area ≥ 96²).
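
For reference, these ranges show up as the default area parameters of COCOeval in pycocotools (detection settings), roughly:

# default area ranges used by COCOeval for the size breakdown (squared pixel areas)
area_rng = [
    [0 ** 2, 1e5 ** 2],   # 'all'
    [0 ** 2, 32 ** 2],    # 'small'  : area < 32**2
    [32 ** 2, 96 ** 2],   # 'medium' : 32**2 <= area < 96**2
    [96 ** 2, 1e5 ** 2],  # 'large'  : area >= 96**2
]
area_rng_lbl = ['all', 'small', 'medium', 'large']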

COCO dataset: installing pycocotools

Problems encountered while installing pycocotools

  First this error appeared: error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio"
  Then a second error appeared.
Problem description:

(my_gpu_py3) E:\WorkSpace\Pytorch_WorkSpace\OpenSourcePlatform\cocoapi-master\PythonAPI>python setup.py build_ext install
running build_ext
skipping 'pycocotools\_mask.c' Cython extension (up-to-date)
building 'pycocotools._mask' extension
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -IE:\OtherProgramFiles\Anaconda3\envs\my_gpu_py3\lib\site-packages
\numpy\core\include -I../common -IE:\OtherProgramFiles\Anaconda3\envs\my_gpu_py3\include -IE:\OtherProgramFiles\Anaconda3\envs\my_gpu_py3\include "-IC:\Program Files (x86)\Micros
oft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\8.1\include\shared" "-IC:\Program F
iles (x86)\Windows Kits\8.1\include\um" "-IC:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tc../common/maskApi.c /Fobuild\temp.win-amd64-3.8\Release\../common/maskApi.obj
 -Wno-cpp -Wno-unused-function -std=c99
cl : Command line error D8021 : invalid numeric argument '/Wno-cpp'
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\cl.exe' failed with exit status 2

Solutions:
*** Windows下安装 pycocotools - 简书 20190601
  Solution 1: switch the installation method and install via pip from the terminal: pip install git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI

(my_gpu_py3) E:\WorkSpace\Pytorch_WorkSpace\OpenSourcePlatform\cocoapi-master\PythonAPI>pip install git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI
Collecting git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI
  Cloning https://github.com/philferriere/cocoapi.git to c:\users\lu\appdata\local\temp\pip-req-build-pqrd7e6h
  Running command git clone -q https://github.com/philferriere/cocoapi.git 'C:\Users\lu\AppData\Local\Temp\pip-req-build-pqrd7e6h'
Building wheels for collected packages: pycocotools
  Building wheel for pycocotools (setup.py) ... done
  Created wheel for pycocotools: filename=pycocotools-2.0-cp38-cp38-win_amd64.whl size=77297 sha256=32b309889a84f74e6278760aa9bdc1c7224cf67e9d7d3acdda811f058767fc49
  Stored in directory: C:\Users\lu\AppData\Local\Temp\pip-ephem-wheel-cache-bgtbl965\wheels\bd\1c\0d\8c82e1b9bc855b82e1eb53eadea4459efe171d2daf5a222701
Successfully built pycocotools
Installing collected packages: pycocotools
Successfully installed pycocotools-2.0

(my_gpu_py3) E:\WorkSpace\Pytorch_WorkSpace\OpenSourcePlatform\cocoapi-master\PythonAPI>

  Solution 2: delete the 'Wno-cpp' and 'Wno-unused-function' arguments in cocoapi/PythonAPI/setup.py.
  This addresses both cl : Command line error D8021 : invalid numeric argument '/Wno-cpp' and cl : Command line error D8021 : invalid numeric argument '/Wno-unused-function'.

Delete the 'Wno-cpp' and 'Wno-unused-function' arguments:
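
Roughly, the edit looks like this (a sketch based on the stock cocoapi PythonAPI/setup.py; your local copy may differ, and some guides also drop '-std=c99'):

import numpy as np
from setuptools import Extension

# relevant part of cocoapi/PythonAPI/setup.py after removing the two GCC-only warning
# flags that MSVC rejects with "error D8021: invalid numeric argument"
ext_modules = [
    Extension(
        'pycocotools._mask',
        sources=['../common/maskApi.c', 'pycocotools/_mask.pyx'],
        include_dirs=[np.get_include(), '../common'],
        # was: extra_compile_args=['-Wno-cpp', '-Wno-unused-function', '-std=c99']
        extra_compile_args=['-std=c99'],
    )
]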
Convert COCO format annotations to YOLO format annotations

convert COCO format annotations to YOLO format annotations: https://github.com/ultralytics/yolov5/issues/5076#issuecomment-939028996

SKU110K product detection dataset

Downloading the SKU110K product detection dataset

cd data
wget http://trax-geometry.s3.amazonaws.com/cvpr_challenge/SKU110K_fixed.tar.gz
tar -zxvf SKU110K_fixed.tar.gz

cd SKU110K_fixed

The file tree of SKU110K_fixed is as follows

data/SKU110K_fixed$ tree -a > SKU110K_fixed_filetree.txt
.
├── annotations
│   ├── annotations_test.csv
│   ├── annotations_train.csv
│   ├── annotations_val.csv
│   └── readme.txt
├── images
│   ├── test_xxx.jpg
│   ├── ...
│   ├── test_xxx.jpg
│   ├── train_xxx.jpg
│   ├── ...
│   ├── train_xxx.jpg
│   ├── val_xxx.jpg
│   ├── ...
│   └── val_xxx.jpg
├── LICENSE.txt
└── SKU110K_fixed_filetree.txt
2 directories, 11749 files

----------
Noted on 2021-05-20:
/images contains 11743 images = 2936 test + 8219 train + 588 val;

After filtering out the truncated_imgs images listed in truncated_filename_v1.txt:

/annotations/sku110k_test.json contains annotations for 2920 images;
/annotations/sku110k_train.json contains annotations for 8185 images;
/annotations/sku110k_val.json contains annotations for 584 images;

Contents of annotations_val.csv

The CSV columns are: image_name,x1,y1,x2,y2,class,image_width,image_height; for example:
val_0.jpg,5,1429,219,1612,object,2336,4160

Converting the SKU110K_fixed annotation files to COCO-style format

  In the sku110k_xxx.json files produced by the sku110k_to_coco_v2.py script, annotations related to the truncated_imgs are not included. The image_id numbers are derived from the file names, so because of the truncated_imgs the image_id numbering is not consecutive; the ann["id"] numbers are consecutive integers starting from 0.
*** tyomj/product_detection: Working with scale: 2nd place solution to Product Detection in Densely Packed Scenes
SKU110K商品检测数据集处理_songwsx的博客-CSDN博客 20201223
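
A minimal sketch of such a conversion (my own simplification, not sku110k_to_coco_v2.py: it does not filter truncated images and simply numbers image_id consecutively; the file paths are examples):

import csv
import json

csv_path = "annotations/annotations_val.csv"   # columns: image_name,x1,y1,x2,y2,class,image_width,image_height
out_path = "annotations/sku110k_val.json"

images, annotations = {}, []
with open(csv_path, "r") as fp:
    for name, x1, y1, x2, y2, cls, img_w, img_h in csv.reader(fp):
        x1, y1, x2, y2 = map(float, (x1, y1, x2, y2))
        if name not in images:
            images[name] = {"id": len(images), "file_name": name,
                            "width": int(img_w), "height": int(img_h)}
        annotations.append({
            "id": len(annotations),
            "image_id": images[name]["id"],
            "category_id": 1,                    # SKU110K has a single class, "object"
            "bbox": [x1, y1, x2 - x1, y2 - y1],  # COCO bbox format is [x, y, w, h]
            "area": (x2 - x1) * (y2 - y1),
            "iscrowd": 0,
        })

coco = {"images": list(images.values()),
        "annotations": annotations,
        "categories": [{"id": 1, "name": "object"}]}
with open(out_path, "w") as fp:
    json.dump(coco, fp)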

Links related to the SKU110K dataset

# Detectron2 uses Pillow library to read images. We found out that some images in 
# the SKU dataset are corrupted, which causes the dataloader to raise an IOError 
# exception. Therefore, we remove them from the dataset.
CORRUPTED_IMAGES = {
    "training": ("train_4222.jpg", "train_5822.jpg", "train_882.jpg", "train_924.jpg"),
    "validation": tuple(),
    "test": ("test_274.jpg", "test_2924.jpg"),
}

Panxjia commented on 6 Aug 2020
@qjadud1994 Sorry for the delay. We found the author of SKU110K released a revised version of data set. Specifically, compared with the previous version, there are 14 (8233Vs8219) fewer images in the training set and 5(2941Vs2936) images in the test set. We revised the annotations and the augment script accordingly. Another thing to note is that in the author’s newly released data set, some pictures may not be completed(train_5822, train_882, train_4222, train_924, test_274), the errors may be reported during using(cv2.imread, PIL.Image.open). Due to the permissions issues, we can only release the rotation augment script and the annotations of SKU110K-R. Both will be uploaded today.
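
A small sketch (my own, not from that thread) for locating such unreadable images before training; note that Image.verify() is only a cheap integrity check and may not catch every form of truncation:

import os
from PIL import Image

image_dir = "data/SKU110K_fixed/images"   # example path
bad = []
for name in sorted(os.listdir(image_dir)):
    try:
        with Image.open(os.path.join(image_dir, name)) as im:
            im.verify()        # raises an exception on many corrupted/truncated files
    except Exception:
        bad.append(name)
print("corrupted/truncated images:", bad)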
