About me
I am a Ph.D. student at Cornell University, advised by Prof. Kevin Ellis. I completed my bachelor’s and master’s degrees at Shanghai Jiao Tong University.
My research interest lies in the generalizability of AI models, in particular generalizing and adapting to out-of-distribution samples (e.g., inputs of larger scale, unseen combinations, and new concepts or domains). I currently focus on neuro-symbolic program synthesis, integrating neural networks with programmatic priors for better generalizability and interpretability. This work draws on program synthesis, latent abstraction learning, and learning to optimize.
Publications
- LLM-Guided Probabilistic Program Induction for POMDP Model Estimation
Aidan Curtis, Hao Tang, Thiago Veloso, Kevin Ellis, Joshua Tenenbaum, Tomás Lozano-Pérez, Leslie Pack Kaelbling
CoRL 2025 [arxiv].
- Programmatic Video Prediction Using Large Language Models
Hao Tang, Kevin Ellis, Suhas Lohit, Michael J Jones, Moitreya Chatterjee
ICLR Workshop World Models 2025 [arxiv].
- Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning
Yichao Liang, Nishanth Kumar, Hao Tang, Adrian Weller, Joshua B. Tenenbaum, Tom Silver, João F. Henriques, Kevin Ellis
ICLR 2025 [arxiv].
- Combining Induction and Transduction for Abstract Reasoning
Wen-Ding Li*, Keya Hu*, Carter Larsen, Yuqing Wu, Simon Alford, Caleb Woo, Spencer M. Dunn, Hao Tang, Michelangelo Naim, Dat Nguyen, Wei-Long Zheng, Zenna Tavares, Yewen Pu†, Kevin Ellis†
ICLR 2025 & Best Paper at ARC Prize [arxiv].
- WorldCoder, a Model-Based LLM Agent: Building World Models by Writing Code and Interacting with the Environment
Hao Tang, Darren Key, and Kevin Ellis
NeurIPS 2024 [project] [arxiv] [code].
- Code Repair with LLMs gives an Exploration-Exploitation Tradeoff
Hao Tang, Keya Hu, Jin Peng Zhou, Sicheng Zhong, Wei-Long Zheng, Xujie Si, and Kevin Ellis
NeurIPS 2024 [project] [arxiv] [code].
- From Perception to Programs: Regularize, Overparameterize, and Amortize
Hao Tang and Kevin Ellis
ICML 2023 [arxiv],
ICML Differentiable Everything workshop 2023, PLDI MAPS symposium 2022.
- Towards Scale-Invariant Graph-related Problem Solving by Iterative Homogeneous GNNs
Hao Tang, Zhiao Huang, Jiayuan Gu, Bao-Liang Lu, and Hao Su
NeurIPS 2020 [arxiv] [code] [short-video] [poster] [pdf] [appendix].
- Refactoring Policy for Compositional Generalizability using Self-Supervised Object Proposals
Tongzhou Mu*, Jiayuan Gu*, Zhiwei Jia, Hao Tang, and Hao Su
NeurIPS 2020 [arxiv] [code].
- Belief Propagation Neural Networks
Jonathan Kuck, Shuvam Chakraborty, Hao Tang, Rachel Luo, Jiaming Song, Ashish Sabharwal, and Stefano Ermon
NeurIPS 2020 [arxiv].
- Emotion Recognition using Multimodal Residual LSTM Network
Jiaxin Ma*, Hao Tang*, Wei-Long Zheng, and Bao-Liang Lu
ACM Multimedia 2019 [pdf].
- Investigating Sex Differences in Classification of Five Emotions from EEG and Eye Movement Signals
Lan-Qing Bao, Jie-Lin Qiu, Hao Tang, Wei-Long Zheng, and Bao-Liang Lu
IEEE International Engineering in Medicine and Biology Conference (EMBC) 2019.
- Multimodal Emotion Recognition Using Deep Neural Networks
Hao Tang, Wei Liu, Wei-Long Zheng, and Bao-Liang Lu
International Conference on Neural Information Processing (ICONIP) 2017 [pdf].
Preprints
- PoE-World: Compositional World Modeling with Products of Programmatic Experts
Wasu Top Piriyakulkij, Yichao Liang, Hao Tang, Adrian Weller, Marta Kryven, Kevin Ellis
[arxiv].
Misc.
- Top Reviewer, NeurIPS 2022.