Learning from Limited and Imperfect Data (L2ID)

A joint workshop combining Learning from Imperfect Data (LID) and Visual Learning with Limited Labels (VL3)

Oct. 24-28, 2022, Tel Aviv, Israel (Online)

News

  • Website is up.

Introduction

Learning from limited or imperfect data (L^2ID) refers to a variety of studies that attempt to address challenging pattern recognition tasks by learning from limited, weak, or noisy supervision. Supervised learning methods, including deep convolutional neural networks, have significantly improved performance on many computer vision problems, thanks to large-scale annotated datasets and advances in computing hardware. However, these supervised approaches are notoriously "data hungry", which often makes them impractical in real-world industrial applications. The scarcity of labeled data becomes even more severe for visual classes whose annotation requires expert knowledge (e.g., medical imaging), for classes that rarely occur, and for tasks such as object detection and instance segmentation, where labeling demands more effort. To address this problem, many lines of work, such as weakly supervised learning, few-shot learning, self- and semi-supervised learning, cross-domain few-shot learning, and domain adaptation, have sought to improve robustness in these settings. The goal of this workshop, which builds on the successful CVPR 2021 L2ID workshop, is to bring together researchers across several computer vision and machine learning communities to navigate the complex landscape of methods that enable moving beyond fully supervised learning towards limited and imperfect label settings. Topics of special interest include (though submissions are not limited to these):

  • Few-shot learning for image classification, object detection, etc.
  • Cross-domain few-shot learning
  • Weakly-/semi-supervised learning algorithms
  • Zero-shot learning
  • Learning in the “long-tail” scenario
  • Self-supervised learning and unsupervised representation learning
  • Learning with noisy data
  • Any-shot learning: transitioning between few-shot, mid-shot, and many-shot training
  • Optimal data and source selection for effective meta-training with a known or unknown set of target categories
  • Data augmentation
  • New datasets and metrics to evaluate the benefit of such methods
  • Real-world applications such as semantic segmentation, object detection/localization, scene parsing, and video processing (e.g., action recognition, event detection, and object tracking)

Workshop Paper Submission Information

Submissions should be formatted in the ECCV 2022 format and uploaded through the L2ID CMT Site.
Submitted papers can have one of the following formats:
  • Extended abstracts of at most 4 pages (not eligible for the proceedings)
  • Full papers of the same length as ECCV submissions (eligible for the proceedings)
We encourage authors who wish to present and discuss ongoing work to choose the extended abstract format. In accordance with the ECCV rules, extended abstracts will not count as archival.

Important Dates

Description                 Date
Paper submission deadline   July 15, 2022
Notification to authors     Early August 2022 (exact date TBA)
Camera-ready deadline       TBA, 2022

People