Call for Papers
IEEE Transactions on Pattern Analysis and Machine Intelligence
Special Issue on Learning with Fewer Labels in Computer Vision
The past several years have witnessed an explosion of interest in, and a dizzyingly fast development of, machine learning, a subfield of artificial intelligence. Foremost among these approaches are Deep Neural Networks (DNNs), which can learn powerful feature representations with multiple levels of abstraction directly from data when large amounts of labeled data are available. Object classification, one of the core computer vision problems, achieved a significant breakthrough when a deep convolutional neural network was trained on the large-scale ImageNet dataset, a result that arguably reignited the field of artificial neural networks and triggered the recent revolution in Artificial Intelligence (AI). Nowadays, AI has spread across almost all fields of science and technology. Yet computer vision remains at the heart of these advances when it comes to visual data analysis, offering some of the largest data sources and enabling advanced AI solutions to be developed.
Undoubtedly, DNNs have shown remarkable success in many computer vision tasks, such as recognizing, localizing, and segmenting faces, persons, objects, scenes, actions, and gestures, and recognizing human expressions, emotions, object relations, and interactions in images or videos. Despite this wide range of impressive results, current DNN-based methods typically depend on massive amounts of accurately annotated training data to achieve high performance, and are brittle in that their performance can degrade severely with small changes in their operating environment. Collecting large-scale training datasets is generally time-consuming, costly, and in many applications even infeasible, since in certain fields (such as visual inspection or the medical domain) only very limited examples, or none at all, can be gathered, even though for some computer vision tasks large amounts of unlabeled data may be relatively easy to collect, e.g., from the web or via synthesis. Moreover, labeling and vetting massive amounts of real-world training data is difficult, expensive, and time-consuming, as it requires the painstaking effort of experienced human annotators or experts, and in many cases it is prohibitively costly or impossible for reasons such as privacy, safety, or ethical concerns (e.g., endangered species, drug discovery, medical diagnostics, and industrial inspection).
DNNs lack the ability to learn from limited exemplars and to generalize quickly to new tasks. However, real-world computer vision applications often require models that are able to (a) learn from few annotated samples and (b) continually adapt to new data without forgetting prior knowledge. By contrast, humans can learn from just one or a handful of examples (i.e., few-shot learning), can learn over very long time horizons, and can form abstract models of a situation and manipulate these models to achieve extreme generalization. As a result, one of the next big challenges in computer vision is to develop learning approaches that address these important shortcomings of existing methods. Therefore, in order to overcome the current inefficiency of machine learning, there is a pressing need to research methods that (1) drastically reduce the requirement for labeled training data, (2) significantly reduce the amount of data necessary to adapt models to new environments, and (3) ultimately use as little labeled training data as people need.
This special issue focuses on learning with fewer labels for computer vision tasks such as image classification, object detection, semantic segmentation, instance segmentation, and many others. Topics of interest include (but are not limited to) the following areas:
• Self-supervised learning methods
• New methods for few-/zero-shot learning
• Meta-learning methods
• Life-long/continual/incremental learning methods
• Novel domain adaptation methods
• Semi-supervised learning methods
• Weakly-supervised learning methods
Priority will be given to research papers with high novelty and originality, and to survey/overview papers with high potential impact.
Paper submission and review:
Authors should submit full papers online through the TPAMI submission site at https://mc.manuscriptcentral.com/tpami-cs, selecting the option that indicates this special issue. Peer review will follow the standard IEEE review process. Full-length manuscripts are expected to follow the TPAMI guidelines at https://www.computer.org/tpami-author-information
Paper Submission Deadline: April 30, 2021.
Guest Editors
• Li Liu: National University of Defense Technology, China, and Center for Machine Vision and Signal Analysis (CMVS), University of Oulu, Finland, li.liu@oulu.fi
• Timothy Hospedales: Professor, University of Edinburgh, UK; Principal Scientist, Samsung AI Research Centre; Fellow, Alan Turing Institute; t.hospedales@ed.ac.uk
• Yann LeCun: Silver Professor, New York University, United States; VP and Chief AI Scientist, Facebook; yann@fb.com
• Mingsheng Long: Associate Professor, Tsinghua University, China, mingsheng@tsinghua.edu.cn
• Jiebo Luo: Professor, University of Rochester, United States, jluo@cs.rochester.edu
• Wanli Ouyang: Senior Lecturer, University of Sydney, Australia, wanli.ouyang@sydney.edu.au
• Matti Pietikäinen: Professor (IEEE Fellow), Center for Machine Vision and Signal Analysis (CMVS), University of Oulu, Finland, matti.pietikainen@oulu.fi
• Tinne Tuytelaars: Professor, KU Leuven, Belgium, Tinne.Tuytelaars@esat.kuleuven.be
Main Contact
Li Liu
Email: li.liu@oulu.fi, dreamliu2010@gmail.com
National University of Defense Technology, China
Center for Machine Vision and Signal Analysis (CMVS), University of Oulu, Finland