Important Dates

Submission Deadline: May 14th (New Deadline: May 28th)

Leaderboard Published / Invitations Sent: June 4th

In conjunction with this workshop, we will host a classification challenge with three tracks, investigating cross-domain, multi-source settings as well as discrimination across a larger number of classes, bridging the gap between few-shot learning, domain adaptation, and semi-supervised learning. All tracks include multiple sources and allow the use of unlabeled data to support semi-supervised algorithms.

Track 1

Cross-domain, *small* scale

This setting is similar to the previous VL3 workshop challenge (https://www.learning-with-limited-labels.com/challenge), supporting teams that would like to continue development for cross-domain few-shot learning. However, we will have multiple sources rather than relying solely on ImageNet, with no explicit label overlap between sources and targets. These additional sources remain consistent with the prior literature, allowing results to be directly comparable to prior results (https://arxiv.org/abs/1912.07200). The datasets are listed below, with a small configuration sketch after the list.

  • Sources: MiniImageNet + {CIFAR100, CUBS, Caltech256, DTD}.
  • Targets: EuroSAT, ISIC2018, Plant Disease, ChestX-Ray8.
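
For concreteness, the sketch below shows one way a participant might register the Track 1 datasets and sanity-check the no-label-overlap rule. All names (the registry lists, the check_label_disjointness helper, and the toy label sets) are illustrative assumptions, not part of the official challenge code.

    # A minimal Track 1 registry plus a sanity check for the
    # "no explicit label overlap" rule; names are placeholders.
    TRACK1_SOURCES = ["miniImageNet", "CIFAR100", "CUBS", "Caltech256", "DTD"]
    TRACK1_TARGETS = ["EuroSAT", "ISIC2018", "PlantDisease", "ChestX-Ray8"]

    def check_label_disjointness(source_labels, target_labels):
        """Raise if any class name appears in both source and target.

        Both arguments are iterables of class-name strings gathered
        from each dataset's metadata (a hypothetical earlier step).
        """
        overlap = set(source_labels) & set(target_labels)
        if overlap:
            raise ValueError(f"Source/target label overlap: {sorted(overlap)}")

    # Toy usage with made-up label sets; passes silently:
    check_label_disjointness({"dog", "cat"}, {"forest", "melanoma"})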

Track 2

Cross-domain, *LARGE* scale

In this track, we add further datasets to both the source and target sets for participants with sufficient compute resources. Importantly, in addition to the multiple sources, this track provides *multiple tasks* from which to draw source data or models (see the sketch after the list below).

  • Source: MiniImageNet, CIFAR100, CUBS, Caltech256, DTD, + DomainNet*, COCO*, PASCAL*, KITTI*, Cityscapes* (*any version or task).
  • Target: EuroSAT, ISIC2018, Plant Disease, ChestX-Ray8, + PatchCamelyon, KenyanFood13, IP102, Bark-101.
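
To make the multi-task aspect concrete, a rough sketch of the Track 2 sources tagged by task follows, so that data or pretrained models can be selected per task. The task tags reflect each dataset's usual benchmarks and are our assumption, not an official specification.

    # Illustrative Track 2 source registry tagged by task; the tags
    # mirror each dataset's common benchmarks and are assumptions.
    TRACK2_SOURCES = {
        "miniImageNet": {"classification"},
        "CIFAR100": {"classification"},
        "CUBS": {"classification"},
        "Caltech256": {"classification"},
        "DTD": {"classification"},
        "DomainNet": {"classification"},  # any version
        "COCO": {"detection", "segmentation"},  # any task
        "PASCAL": {"classification", "detection", "segmentation"},
        "KITTI": {"detection", "depth"},  # any task
        "Cityscapes": {"segmentation"},  # any task
    }

    # E.g., pick every source offering a segmentation task, perhaps to
    # initialize a backbone from a pretrained segmentation model:
    seg_sources = [name for name, tasks in TRACK2_SOURCES.items()
                   if "segmentation" in tasks]
    print(seg_sources)  # ['COCO', 'PASCAL', 'Cityscapes']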

Track 3

Discrimination across a larger number of classes

In this track, we will increase the “wayness” of few-shot learning to bring it closer to semi-supervised learning.

  • Source: DomainNet domains distinct from the target domain (with no overlapping classes).
  • Target: the 126-way DomainNet split from [1].

RULES

The following evaluation framework is established across these datasets (a sketch of the trial structure follows the list):

General Information:

  • No meta-learning in-domain
  • 5-way classification for tracks 1 and 2, 126-way for track 3
  • k-shot, for varying k per dataset
  • 100 randomly selected few-shot trials (scripts provided to generate the trials)
  • Average accuracy across all trials is reported for evaluation.
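
As a rough illustration of this protocol (the official trial-generation scripts are provided by the organizers; treat the helper below purely as a sketch), each trial draws an N-way support set of K labeled images per class plus a query set, and the final score is the mean accuracy over all trials. The query-set size of 15 per class and the classifier interface are assumptions made for this sketch; for Tracks 1 and 2, n_way is 5, and for Track 3 it is 126.

    import random
    from statistics import mean

    # A sketch of the N-way, K-shot evaluation loop described above;
    # not the official script. `classifier` is any callable that fits
    # on a support set and returns a predict function (a hypothetical
    # interface used here for illustration).

    def sample_trial(images_by_class, n_way, k_shot, n_query, rng):
        """Sample one N-way trial from a {class_name: [image, ...]} dict."""
        classes = rng.sample(sorted(images_by_class), n_way)
        support, query = [], []
        for cls in classes:
            imgs = rng.sample(images_by_class[cls], k_shot + n_query)
            support += [(img, cls) for img in imgs[:k_shot]]
            query += [(img, cls) for img in imgs[k_shot:]]
        return support, query

    def evaluate(classifier, images_by_class, n_way=5, k_shot=5,
                 n_query=15, n_trials=100, seed=0):
        """Mean accuracy over `n_trials` randomly sampled trials."""
        rng = random.Random(seed)
        accuracies = []
        for _ in range(n_trials):
            support, query = sample_trial(images_by_class, n_way,
                                          k_shot, n_query, rng)
            predict = classifier(support)  # fit/adapt on the support set
            accuracies.append(mean(predict(img) == cls
                                   for img, cls in query))
        return mean(accuracies)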

Paper: https://arxiv.org/abs/1912.07200

Data and Evaluation Code: https://github.com/yunhuiguo/cdfsl-benchmark

Challenge Submission Guidelines

The Cross-Domain Few-Shot Learning (CD-FSL) challenge benchmark includes data from the CropDiseases, EuroSAT, ISIC2018, and ChestX datasets, which cover plant disease images, satellite images, dermoscopic images of skin lesions, and X-ray images, respectively. The selected datasets reflect real-world use cases for few-shot learning, since collecting enough examples from the above domains is often difficult, expensive, or in some cases impossible. In addition, they demonstrate the following spectrum of readily quantifiable domain shifts from ImageNet:

  • CropDiseases images are the most similar, as they are perspective color images of natural elements, but they are more specialized than anything available in ImageNet.
  • EuroSAT images are less similar, as they have lost perspective distortion but are still color images of natural scenes.
  • ISIC2018 images are even less similar, as they have lost perspective distortion and no longer represent natural scenes.
  • ChestX images are the most dissimilar, as they have lost perspective distortion and all color, and do not represent natural scenes.

Participants are expected to run their own evaluations against the benchmark datasets according to the evaluation protocol and submit the following three items:

  • Link to a publicly accessible arXiv paper, minimum 2 pages and maximum 4 pages in length (including references), that describes the proposed method and the evaluation results.
  • Link to publicly accessible code on GitHub that reproduces all experiments (necessary models/resources must also be supplied as links).
  • Average accuracy across all tasks (determines challenge ranking).

Submission Link: https://cmt3.research.microsoft.com/L2IDChallenge2021

References

[1] Kuniaki Saito, Donghyun Kim, Stan Sclaroff, Trevor Darrell, and Kate Saenko. Semi-supervised domain adaptation via minimax entropy. In ICCV, 2019.

[2] Sharada P. Mohanty, David P. Hughes, and Marcel Salathé. Using deep learning for image-based plant disease detection. Frontiers in Plant Science, 7:1419, 2016.

[3] Patrick Helber, Benjamin Bischke, Andreas Dengel, and Damian Borth. EuroSAT: A novel dataset and deep learning benchmark for land use and land cover classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 12(7):2217–2226, 2019.

[4] Philipp Tschandl, Cliff Rosendahl, and Harald Kittler. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Scientific Data, 5:180161, 2018.

[5] Noel Codella, Veronica Rotemberg, Philipp Tschandl, M. Emre Celebi, Stephen Dusza, David Gutman, Brian Helba, Aadi Kalloo, Konstantinos Liopyris, Michael Marchetti, et al. Skin lesion analysis toward melanoma detection 2018: A challenge hosted by the International Skin Imaging Collaboration (ISIC). arXiv preprint arXiv:1902.03368, 2019.

[6] Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, and Ronald M. Summers. ChestX-ray8: Hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2097–2106, 2017.