
Over the past few years, deep learning (DL) has advanced computer vision by leaps and bounds. From achieving super-human accuracy in image classification to dramatically improving image generation, deep-learning-based algorithms have dominated performance charts and established the state of the art time after time. Although deep learning has reduced the reliance on domain-specific hand-crafted features, it is in fact hundreds of hours of human machine-learning expertise that go into squeezing the last bit of performance from these models. From data processing to choosing the right model architecture and its associated hyperparameters, deep learning still depends heavily on humans to achieve the desired results. This dependence limits the application of deep learning in many domains, especially non-technical ones such as health, education and retail, where human expertise in deep learning may not be readily available. Automated Machine Learning, which incorporates methods such as neural architecture search (NAS) and hyperparameter optimization (HPO), provides approaches and systems that make deep learning usable in various applications without expert knowledge of deep learning. It can help democratize deep learning by reducing the need for DL expertise in application development.
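As a rough illustration of the kind of loop that HPO automates, the sketch below runs a random search over a learning rate and a network depth. The objective function and search space here are hypothetical stand-ins for a real train-and-validate pipeline, not the API of any particular AutoML system:

```python
# Minimal random-search HPO sketch. In a real system, validation_score
# would train a model with the given hyperparameters and return its
# validation accuracy; here it is a hypothetical stand-in.
import random

def validation_score(lr, num_layers):
    # Hypothetical proxy objective: peaks at lr=0.01 and num_layers=4.
    return 1.0 - abs(lr - 0.01) * 10 - abs(num_layers - 4) * 0.05

def random_search(num_trials=50, seed=0):
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(num_trials):
        config = {
            "lr": 10 ** rng.uniform(-4, -1),  # learning rate, log-uniform
            "num_layers": rng.randint(2, 8),  # network depth, uniform
        }
        score = validation_score(**config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

best_config, best_score = random_search()
print(best_config, best_score)
```

Random search is only the simplest strategy; AutoML systems typically replace it with Bayesian optimization or, for architectures, NAS methods that search over network structure as well as scalar hyperparameters.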

Call for Papers

Recent years have witnessed a significant rise in research on NAS, which automatically discovers deep network architectures. These architectures often outperform state-of-the-art networks carefully designed by DL researchers. Although NAS shows promise by achieving superior performance on standard benchmarks such as CIFAR-10/100 and ImageNet, evidence is scarce that these methods work equally well on real-world datasets. Moreover, NAS research has rarely explored vision tasks such as pose estimation, activity recognition in videos, generative models, vision-language tasks and real-time vision applications. This gap between the published NAS literature and performance on real-world datasets and applications has yet to be addressed. The aim of this workshop is to advocate NAS for in-the-wild computer vision across this wide range of tasks, and potentially across a range of computing platforms.

The workshop scope includes (but is not limited to):

Please refer here for Submission Instructions. Reviews will be double-blind, and there will be no rebuttal phase.

Workshop papers will be included in IEEE Xplore.


CMT Submission Website


Debadeepta Dey

Dr. Debadeepta Dey is a Principal Researcher in the Adaptive Systems and Interaction (ASI) group led by Dr. Eric Horvitz at Microsoft Research, Redmond. He received his PhD from the Robotics Institute, Carnegie Mellon University, Pittsburgh, USA, where he was advised by Prof. Drew Bagnell. He conducts both fundamental and applied research in machine learning, control and computer vision, with applications to autonomous agents in general and robotics in particular.

He is interested in bridging the gap between perception and planning for autonomous ground and aerial vehicles, as well as in decision-making under uncertainty, reinforcement learning, artificial intelligence and machine learning. His recent work includes “Efficient Forward Architecture Search”, which was accepted at NeurIPS 2019. He graduated in 2007 from Delhi College of Engineering with a Bachelor’s degree in Electrical Engineering, and from 2007 to 2010 he was a researcher at the Field Robotics Center, Robotics Institute, Carnegie Mellon University.

Xia “Ben” Hu

Dr. Xia Hu has been an Assistant Professor of Computer Science and Engineering at Texas A&M University since Fall 2015, and is also a member of the Center for Remote Health Technologies and Systems and the Center for the Study of Digital Libraries. He directs the DATA (Data Analytics at Texas A&M) Lab. His research develops automated and interpretable data mining and machine learning algorithms with theoretical properties to better discover actionable patterns from large-scale, networked, dynamic and sparse data. His research is directly motivated by, and contributes to, applications in social informatics, health informatics and information security. His lab’s work has been featured in various news media, including MIT Technology Review, ACM TechNews, New Scientist, Fast Company and The Economic Times. His research is generously supported by federal agencies such as DARPA (XAI, D3M and NGS2) and NSF (CAREER, III, SaTC, CRII, S&AS), and by industrial sponsors such as Adobe, Apple, Alibaba and JP Morgan. He has published several notable works on Automated Machine Learning at venues such as KDD and ICDM. His lab developed AutoKeras, a popular open-source library for automated machine learning with over 6,000 stars and around 1,000 forks on GitHub.

Peter Vajda

Dr. Peter Vajda has been a Research Scientist working on computer vision at Facebook since 2014. His recent work includes FBNet and ChamNet, both of which focus on finding platform-aware, efficient neural network architectures. Before joining Facebook, he was a Visiting Assistant Professor in Professor Bernd Girod’s group at Stanford University, Stanford, USA, working on personalized multimedia systems and mobile visual search. He received an M.Sc. in Computer Science from the Vrije Universiteit, Amsterdam, Netherlands, and an M.Sc. as a Program Designer Mathematician from Eötvös Loránd University, Budapest, Hungary. He completed his Ph.D. with Prof. Touradj Ebrahimi at the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, in 2012.


Gaurav Mittal

Mei Chen
