Learning from limited or imperfect data (L^2ID) covers a variety of studies that address challenging pattern recognition tasks by learning from limited, weak, or noisy supervision. Supervised learning methods, including deep convolutional neural networks, have significantly improved performance on many computer vision problems, thanks to the rise of large-scale annotated datasets and advances in computing hardware. However, these supervised approaches are notoriously "data hungry", which often makes them impractical in real-world industrial applications. The shortage of labeled data becomes even more severe for visual classes whose annotation requires expert knowledge (e.g., medical imaging), for classes that rarely occur, and for tasks such as object detection and instance segmentation, where labeling demands more effort. To address this problem, many lines of work, e.g., weakly supervised learning, few-shot learning, self- and semi-supervised learning, cross-domain few-shot learning, and domain adaptation, aim to improve robustness in this scenario; a minimal illustrative sketch of one such approach appears below. The goal of this workshop is to bring together researchers to discuss emerging technologies for visual learning with limited or imperfectly labeled data. Topics of special interest include the approaches above, though submissions are not limited to these.
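As a concrete illustration of the semi-supervised theme, the sketch below shows confidence-thresholded pseudo-labeling (self-training), in the spirit of the proxy-label methods discussed at the workshop. It is a minimal example under assumed names, not code from any workshop paper: `model`, `optimizer`, the batch variables, and the `THRESHOLD` value are illustrative placeholders.

```python
# Minimal pseudo-labeling sketch (assumes a PyTorch classifier).
# All names here (model, optimizer, THRESHOLD) are hypothetical.
import torch
import torch.nn.functional as F

THRESHOLD = 0.95  # assumed confidence cutoff; tune per task


def semi_supervised_step(model, optimizer, labeled_batch, unlabeled_batch):
    """One training step mixing a supervised loss with a pseudo-label loss."""
    x_l, y_l = labeled_batch  # labeled images and ground-truth labels
    x_u = unlabeled_batch     # unlabeled images only

    # Supervised cross-entropy on the labeled data.
    loss = F.cross_entropy(model(x_l), y_l)

    # Pseudo-labels: keep only predictions the model is confident about.
    with torch.no_grad():
        probs = F.softmax(model(x_u), dim=1)
        conf, pseudo_y = probs.max(dim=1)
        mask = conf >= THRESHOLD

    if mask.any():
        loss = loss + F.cross_entropy(model(x_u[mask]), pseudo_y[mask])

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, methods in this family typically pair the pseudo-label loss with data augmentation or consistency regularization on the unlabeled branch rather than using raw predictions alone.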
Please feel free to contact us at l2idcvpr@gmail.com if you have any suggestions to improve our workshop!
Time | Speakers & Talks | Topic |
---|---|---|
8:00-8:05 PT / 11:00-11:05 EDT | Organizers | Introduction and opening |
8:05-8:45 PT / 11:05-11:45 EDT | Guoliang Kang - Pixel-Level Cycle Association: Domain Adaptive Semantic Segmentation; Angela Dai - Learning from Imperfect RGB-D Scan Data; Colin Raffel - Explicit and Implicit Entropy Minimization in Proxy-Label-Based SSL; Sanja Fidler - Image GANs for Reducing Pixel-Wise Supervision; Oral Papers: [A, B, D, H] | Unlabeled data / Self/Semi-Supervised, Domain Adaptation |
8:45-9:15 PT / 11:45-12:15 EDT | [A, B, D, H, Classification Challenge Participants] | Paper Spotlight Talks |
9:15-9:55 PT / 12:15-12:55 EDT | Chelsea Finn - Few-Shot Learning in the Real World; Rogerio Feris - How Transferable are Contrastive Representations?; Trevor Darrell - Recent Progress on Unsupervised Detection and Adaptation; Oral Papers: [G, J, L] | Few-Shot Learning |
9:55-10:10 PT / 12:55-13:10 EDT | Coffee Break | |
10:10-10:50 PT / 13:10-13:50 EDT | Boqing Gong - When Vision Transformers Outperform ResNets; Vahan Petrosyan - Tools to Share Datasets and Find Imperfect Data in CV; Olga Russakovsky - Mitigating Bias and Privacy Concerns in Visual Data; Dina Katabi - Making Contrastive Learning Robust to Shortcuts and Generalizing It to New Modalities; Oral Papers: [E, K] | Robustness, adversarial, bias/fairness, deployment/industry |
10:50-11:20 PT / 13:50-14:20 EDT | Oral Papers: [G, J, L, E, K] | Paper Spotlight Talks |
11:20-12:00 PT / 14:20-15:00 EDT | Alexander Schwing - Not All Unlabeled Data are Equal; Humphrey Shi - Escaping the Big Data Paradigm with Compact Transformers; Anurag Arnab - Video Understanding with Imperfect Data; Oral Papers: [C, F, I] | Imperfect/Noisy/Weakly Supervised |
12:00-14:00 PT / 15:00-17:00 EDT | Gatherly Poster Session / Lunch Break | |
14:00-14:40 PT / 17:00-17:40 EDT | Aarti Singh - Learning from Preferences and Labels; Philip Isola - When and Why Does Contrastive Learning Work? | Theory/Optimization |
14:40-15:10 PT / 17:40-18:10 EDT | Oral Papers: [C, F, I, Localization Challenge Participants] | Paper Spotlight Talks |
15:10-15:50 PT / 18:10-18:50 EDT | All available speakers | Future Directions |
15:50-16:00 PT / 18:50-19:00 EDT | Organizers | Wrap-up Discussion |
ID | Title |
---|---|
A | Training Deep Generative Models in Highly Incomplete Data Scenarios with Prior Regularization |
B | Unsupervised Discriminative Embedding for Sub-Action Learning in Complex Activities |
C | Unlocking the Full Potential of Small Data with Diverse Supervision |
D | Distill on the Go: Online Knowledge Distillation in Self-Supervised Learning |
E | Learning Unbiased Representations via Mutual Information Backpropagation |
F | PLM: Partial Label Masking for Imbalanced Multi-label Classification |
G | ReMP: Rectified Metric Propagation for Few-Shot Learning |
H | A Closer Look at Self-training for Zero-Label Semantic Segmentation |
I | An Exploration into why Output Regularization Mitigates Label Noise |
J | Shot in the Dark: Few-Shot Learning with No Base-Class Labels |
K | Contrastive Learning Improves Model Robustness Under Label Noise |
L | A Simple Framework for Cross-Domain Few-Shot Recognition with Unlabeled Data |
Description | Date |
---|---|
Paper submission deadline | March 25th, 2021 |
Notification to authors | April 8th, 2021 (extended to April 13th, 2021) |
Camera-ready deadline | April 20th, 2021 |
Challenge submission deadline | May 14th, 2021 |