It is well known that deep neural networks are universal function approximators and generalize well when the training and test datasets are sampled from the same distribution. Most deep-learning-based applications and theories of the past decade are built upon this setup. While the view of learning function approximators has been rewarding to the community, we are seeing more and more of its limitations when dealing with real-world problem spaces that are combinatorially explosive. In this talk, I will discuss a possible shift of view, from learning function approximators to learning algorithm approximators, through some preliminary work in my lab. Our ultimate goal is to achieve generalizability when learning in a problem space of combinatorial complexity; we refer to this desired generalizability as compositional generalizability. Toward this goal, we take important problems in geometry, physics, and policy learning as testbeds. In particular, I will introduce how we build algorithms with state-of-the-art compositional generalizability on these testbeds, following a bottom-up principle and a modularization principle.
Hao Su is an Assistant Professor of Computer Science and Engineering at UC San Diego. He is interested in fundamental problems across broad disciplines related to artificial intelligence, including machine learning, computer vision, computer graphics, and robotics. His most recent work focuses on integrating these disciplines to build and train embodied AI that can interact with the physical world. In the past, his work on ShapeNet, the PointNet series, and graph neural networks significantly impacted the emergence and growth of a new field, 3D deep learning. He also participated in the development of ImageNet, a large-scale 2D image database. He has served as an Area Chair, Associate Editor, and in other comparable positions on the program committees of CVPR, ICCV, ECCV, ICRA, Transactions on Graphics (TOG), and AAAI.