In conjunction with this workshop, we will hold four challenge tracks this year.

Track 1

Weakly-supervised Semantic Segmentation

This track targets learning to perform object semantic segmentation using only image-level annotations as supervision [1, 2, 3]. The dataset is built upon the object detection track of the ImageNet Large Scale Visual Recognition Competition (ILSVRC) [4], which includes 456,567 training images from 200 categories in total. We provide pixel-level annotations of 15K images (validation/testing: 5,000/10,000) for evaluation.
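
As a concrete illustration of this setting, below is a minimal sketch of the CAM-style pseudo-labeling step in the spirit of [1, 2, 3]: a classifier trained with image-level labels yields class activation maps that are thresholded into rough masks. The ImageNet-pretrained ResNet-50 backbone and the 0.3 threshold are illustrative assumptions, not part of the challenge kit.

    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    # Illustrative CAM-based pseudo-mask generation (not an official baseline).
    model = models.resnet50(pretrained=True).eval()
    features = torch.nn.Sequential(*list(model.children())[:-2])  # conv feature maps
    fc_weight = model.fc.weight                                   # (1000, 2048)

    @torch.no_grad()
    def cam_pseudo_mask(image, class_idx, threshold=0.3):
        """image: (1, 3, H, W) normalized tensor; returns a binary (H, W) mask."""
        fmap = features(image)                                    # (1, 2048, h, w)
        cam = F.relu(torch.einsum('c,bchw->bhw', fc_weight[class_idx], fmap))
        cam = F.interpolate(cam[:, None], size=image.shape[-2:],
                            mode='bilinear', align_corners=False)[0, 0]
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # scale to [0, 1]
        return cam > threshold                                    # pseudo foreground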

Track 2

Weakly-supervised Product Retrieval

Given a photo containing multiple product instances and a user-provided description, this track aims to detect the bounding box of each product and retrieve the correct single-product image from a gallery. We collected 1,132,830 real-world product photos from e-commerce websites, where each photo contains 2.83 products on average and corresponds to a user-provided description, together with a single-product gallery of 40,033 images for evaluating retrieval performance. We split off 9,220 photos and their corresponding descriptions as the test set and provide product-level bounding boxes for each photo. This new track poses a very common setting in real-world applications (e.g., e-commerce) and an interesting testbed for learning from imperfect data, evaluating weakly-supervised object detection given a caption, fine-grained instance recognition, and cross-modality (i.e., text and image) object-level retrieval. More details of this challenge are provided at https://competitions.codalab.org/competitions/30123
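
To make the retrieval step concrete, here is a minimal sketch of matching detected product crops against the gallery by cosine similarity; it assumes crop and gallery embeddings have already been produced by some image/text encoder, and the retrieve helper is hypothetical rather than part of the official pipeline.

    import torch
    import torch.nn.functional as F

    # Hypothetical retrieval step: for each detected product embedding, return
    # the index of the most similar gallery image under cosine similarity.
    def retrieve(crop_emb: torch.Tensor, gallery_emb: torch.Tensor) -> torch.Tensor:
        """crop_emb: (N, D) detected-product embeddings; gallery_emb: (M, D)."""
        crop_emb = F.normalize(crop_emb, dim=-1)
        gallery_emb = F.normalize(gallery_emb, dim=-1)
        sim = crop_emb @ gallery_emb.T          # (N, M) cosine similarities
        return sim.argmax(dim=-1)               # best gallery match per product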

Track 3

Weakly-supervised Object Localization

This track aims to equip classification networks with the ability of object localization [7, 8, 9]. The dataset is built upon the image classification/localization track of the ImageNet Large Scale Visual Recognition Competition (ILSVRC), which includes 1.2 million training images from 1,000 categories in total. We provide pixel-level annotations of 44,271 images (validation/testing: 23,151/21,120) for evaluation.

  • Evaluation: IoU curve. Given the predicted object localization map, we calculate the IoU score between the foreground pixels and the ground-truth mask under each threshold in [0, 255]. In the ideal curve, the highest IoU score is expected to be close to 1.0, and the threshold corresponding to it is expected to be 255, since a higher peak threshold reflects a higher contrast between the target object and the background. A minimal sketch of this computation appears after this list.
  • Download: the validation dataset, test list, and evaluation scripts are available at Baidu Drive (pwd: z5yp) and Google Drive.
  • Submission: https://evalai.cloudcv.org/web/challenges/challenge-page/557/overview
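
Here is a minimal sketch of the IoU-curve computation as described above, assuming a uint8 localization map and a binary ground-truth mask; the threshold convention (>=) is our assumption, and the official evaluation scripts in the download remain authoritative.

    import numpy as np

    # Sketch of the IoU curve over all 256 thresholds (the provided evaluation
    # scripts are authoritative; this only mirrors the description above).
    def iou_curve(loc_map: np.ndarray, gt_mask: np.ndarray) -> np.ndarray:
        """loc_map: (H, W) uint8 map in [0, 255]; gt_mask: (H, W) bool."""
        ious = np.zeros(256)
        for t in range(256):
            fg = loc_map >= t                            # foreground at threshold t
            inter = np.logical_and(fg, gt_mask).sum()
            union = np.logical_or(fg, gt_mask).sum()
            ious[t] = inter / union if union > 0 else 0.0
        return ious  # ideally peaks near 1.0 at a high threshold

In the ideal case described above, predicted foreground values saturate at 255 while background values stay near 0, so the curve remains near 1.0 all the way up to the highest threshold.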

Track 4

High-resolution Human Parsing

This track aims to recognize human parts (19 semantic categories in total) within high-resolution images by learning from low-resolution ones, a setting that has rarely been explored before. To this end, we annotated 10,500 single-person images (training/validation/testing: 6,000/500/4,000) with an average resolution of 3950 × 2200. Besides the provided high-resolution images, off-the-shelf low-resolution datasets such as LIP and Pascal-Person-Part are welcome for pre-training. This new track poses a new task of learning from imperfect data: transferring knowledge learned from low-resolution images to high-resolution images. More details of this challenge are provided at https://competitions.codalab.org/competitions/30375


RULES

This year, we have two strict rules for all competitors.

  1. For training, only the images provided in the training set are permitted. Competitors may use classification models pre-trained on the training set of ILSVRC CLS-LOC to initialize parameters, but CANNOT leverage any datasets with pixel-level annotations. In particular, for Track 1 and Track 3, only the image-level annotations of the training images may be used for supervision; bounding-box annotations are NOT permitted.
  2. We encourage competitors to design elegant and effective models for all the tracks rather than ensembling multiple models. Therefore, we require that the parameter size of the inference model(s) be LESS than 150M (slightly more than two DeepLab V3+ [10] models using ResNet-101 as the backbone). Competitors ranked in the top 3 are required to submit their inference code for verification. A quick parameter-count check is sketched below.
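
For convenience, the parameter budget can be checked with a few lines of PyTorch before submission; the build_my_inference_model helper below is hypothetical and stands in for whatever model a team actually submits.

    import torch

    # Count all parameters (trainable and non-trainable) of the inference model.
    def count_params(model: torch.nn.Module) -> int:
        return sum(p.numel() for p in model.parameters())

    # Usage with an assumed model builder (hypothetical):
    # model = build_my_inference_model()
    # assert count_params(model) < 150_000_000, "exceeds the 150M parameter cap"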

References

[1] George Papandreou, Liang-Chieh Chen, Kevin Murphy, and Alan L. Yuille. Weakly- and semi-supervised learning of a DCNN for semantic image segmentation. In ICCV, 2015.

[2] Yunchao Wei, Huaxin Xiao, Honghui Shi, Zequn Jie, Jiashi Feng, and Thomas S. Huang. Revisiting dilated convolution: A simple approach for weakly- and semi-supervised semantic segmentation. In CVPR, 2018.

[3] Peng-Tao Jiang, Qibin Hou, Yang Cao, Ming-Ming Cheng, Yunchao Wei, and Hong-Kai Xiong. Integral object mining via online attention accumulation. In ICCV, 2019.

[4] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.

[5] Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Scene parsing through ADE20K dataset. In CVPR, 2017.

[6] Rui Qian, Yunchao Wei, Honghui Shi, Jiachen Li, Jiaying Liu, and Thomas Huang. Weakly supervised scene parsing with point-based distance metric learning. In AAAI, 2019.

[7] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In CVPR, 2016.

[8] Xiaolin Zhang, Yunchao Wei, Jiashi Feng, Yi Yang, and Thomas Huang. Adversarial complementary learning for weakly supervised object localization. In CVPR, 2018.

[9] Xiaolin Zhang, Yunchao Wei, Guoliang Kang, Yi Yang, and Thomas Huang. Self-produced guidance for weakly-supervised object localization. In ECCV, 2018.

[10] Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In ECCV, 2018.