Learning from Limited and Imperfect Data (L2ID)

A joint workshop combining Learning from Imperfect Data (LID) and Visual Learning with Limited Labels (VL3)

June 20, 2021 (Full Day, Virtual Online)


Learning from limited or imperfect data (L^2ID) refers to a variety of studies that attempt to address challenging pattern recognition tasks by learning from limited, weak, or noisy supervision. Supervised learning methods, including deep convolutional neural networks, have significantly improved performance on many computer vision problems, thanks to the rise of large-scale annotated datasets and advances in computing hardware. However, these supervised approaches are notoriously "data hungry", which often makes them impractical in real-world industrial applications. The scarcity of labeled data becomes even more severe for visual classes whose annotation requires expert knowledge (e.g., medical imaging), for classes that rarely occur, and for object detection and instance segmentation tasks where labeling requires more effort. To address this problem, many approaches, e.g., weakly supervised learning, few-shot learning, self-/semi-supervised learning, cross-domain few-shot learning, and domain adaptation, have been proposed to improve robustness in these scenarios. The goal of this workshop is to bring together researchers to discuss emerging technologies related to visual learning with limited or imperfectly labeled data. Topics of special interest include (though submissions are not limited to these):

  • Few-shot learning for image classification, object detection, etc.
  • Cross-domain few-shot learning
  • Weakly-/semi- supervised learning algorithms
  • Zero-shot learning
  • Learning in the “long-tail” scenario
  • Self-supervised learning and unsupervised representation learning
  • Learning with noisy data
  • Any-shot learning – transitioning between few-shot, mid-shot, and many-shot training
  • Optimal data and source selection for effective meta-training with a known or unknown set of target categories
  • Data augmentation
  • New datasets and metrics to evaluate the benefit of such methods
  • Real world applications such as object semantic segmentation/detection/localization, scene parsing, video processing (e.g. action recognition, event detection, and object tracking)

Challenge Information

This year we have two groups of challenges: 1) Localization and 2) Classification. The submission deadline is May 14th, 2021 (extended to May 28th!).


Workshop Paper Submission Information

The contributions can have two formats:
  • Extended Abstracts of max 4 pages (excluding references)
  • Papers of the same length as CVPR submissions
We encourage authors who want to present and discuss their ongoing work to choose the Extended Abstract format.
According to the CVPR rules, extended abstracts will not count as archival.
Submissions should be formatted in the CVPR 2021 format and uploaded through the L2ID CMT Site.

Please feel free to contact us if you have any suggestions to improve our workshop!    l2idcvpr@gmail.com


Time Speaker Topic
8:00-8:05 Organizers Introduction and opening
8:05-8:45 [A, B, D, H], Challenge 1 participants, Guoliang, Sanja, Angela, Colin Unlabeled data / Self/Semi-Supervised, Domain Adaptation
8:45-9:15 [A, B, D, H] Paper Spotlight Talks
9:15-9:55 [G, J, L], Trevor, Chelsea, Rogerio Few Shot Learning
9:55-10:10 Coffee Break
10:10-10:50 [E, K], Boqing, Olga, Dina, Vahan Robustness, adversarial, bias/fairness, deployment/industry
10:50-11:20 [G, J, L, E, K] Paper Spotlight Talks
11:20-12:00 [C, F, I], Challenge 2 participants, Anurag, Alexander, Humphrey Imperfect/Noisy/Weakly supervised
12:00-14:00 Gatherly Poster / Lunch Break
14:00-14:40 Aarti, Philip Theory/Optimization
14:40-15:10 [C, F, I] Paper Spotlight Talks
15:10-15:50 All available Future Directions
15:50-16:00 Organizers Wrap-up Discussion

The list of accepted papers

Accepted Oral Papers

ID Title
A Training Deep Generative Models in Highly Incomplete Data Scenarios with Prior Regularization
B Unsupervised Discriminative Embedding for Sub-Action Learning in Complex Activities
C Unlocking the Full Potential of Small Data with Diverse Supervision
D Distill on the Go: Online knowledge distillation in self supervised learning
E Learning Unbiased Representations via Mutual Information Backpropagation
F PLM: Partial Label Masking for Imbalanced Multi-label Classification
G ReMP: Rectified Metric Propagation for Few-Shot Learning
H A Closer Look at Self-training for Zero-Label Semantic Segmentation
I An Exploration into why Output Regularization Mitigates Label Noise
J Shot in the Dark: Few-Shot Learning with No Base-Class Labels
K Contrastive Learning Improves Model Robustness Under Label Noise
L A Simple Framework for Cross-Domain Few-Shot Recognition with Unlabeled Data


Important Dates

Description Date
Paper submission deadline March 25th, 2021
Notification to authors April 8th, 2021 (extended to Apr 13)
Camera-ready deadline April 20th, 2021
Challenge submission deadline May 14th, 2021