About me

I am a Ph.D. student at Cornell University working with Kevin Ellis. I received my bachelor’s and master’s degrees from Shanghai Jiao Tong University.

My research aims at sample-efficient and generalizable AI through neuro-symbolic approaches, complemented by computationally efficient methods that address practical challenges. I have worked on LLM-based code generation [ROAP] and world modeling [WorldCoder], including applications to real robot systems [POMDPCoder]. During my internship at Microsoft Research, I also improved the reasoning efficiency of LLMs through selective attention.

Preprints

  • Not All Thoughts Matter: Selective Attention for Efficient Reasoning
    Hao Tang, Guoqing Zheng, Kanishk Gandhi, Harkirat Behl, Vaishnavi Shrivastava, Mojan Javaheripi, Kevin Ellis, Shivam Garg, Dimitris Papailiopoulos

Publications

  • LLM-Guided Probabilistic Program Induction for POMDP Model Estimation
    Aidan Curtis, Hao Tang, Thiago Veloso, Kevin Ellis, Joshua Tenenbaum, Tomás Lozano-Pérez, Leslie Pack Kaelbling
    CoRL 2025 [arxiv].
  • PoE-World: Compositional World Modeling with Products of Programmatic Experts
    Wasu Top Piriyakulkij, Yichao Liang, Hao Tang, Adrian Weller, Marta Kryven, Kevin Ellis
    NeurIPS 2025 [arxiv].
  • Programmatic Video Prediction Using Large Language Models
    Hao Tang, Kevin Ellis, Suhas Lohit, Michael J Jones, Moitreya Chatterjee
    ICLR 2025 Workshop on World Models [arxiv].
  • Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning
    Yichao Liang, Nishanth Kumar, Hao Tang, Adrian Weller, Joshua B Tenenbaum, Tom Silver, João F Henriques, Kevin Ellis
    ICLR 2025 [arxiv].
  • Combining Induction and Transduction for Abstract Reasoning
    Wen-Ding Li*, Keya Hu*, Carter Larsen, Yuqing Wu, Simon Alford, Caleb Woo, Spencer M. Dunn, Hao Tang, Michelangelo Naim, Dat Nguyen, Wei-Long Zheng, Zenna Tavares, Yewen Pu†, Kevin Ellis†
    ICLR 2025 & Best Paper at ARC Prize [arxiv].
  • WorldCoder, a Model-Based LLM Agent: Building World Models by Writing Code and Interacting with the Environment
    Hao Tang, Darren Key, and Kevin Ellis
    NeurIPS 2024 [project] [arxiv] [code].
  • Code Repair with LLMs gives an Exploration-Exploitation Tradeoff
    Hao Tang, Keya Hu, Jin Peng Zhou, Sicheng Zhong, Wei-Long Zheng, Xujie Si, and Kevin Ellis
    NeurIPS 2024 [project] [arxiv] [code].
  • From Perception to Programs: Regularize, Overparameterize, and Amortize
    Hao Tang and Kevin Ellis
    ICML 2023 [arxiv],
    ICML Differentiable Everything Workshop 2023, PLDI MAPS Symposium 2022.
  • Towards Scale-Invariant Graph-related Problem Solving by Iterative Homogeneous GNNs
    Hao Tang, Zhiao Huang, Jiayuan Gu, Bao-Liang Lu, and Hao Su
    NeurIPS 2020 [arxiv] [code] [short-video] [poster] [pdf] [appendix].
  • Refactoring Policy for Compositional Generalizability using Self-Supervised Object Proposals
    Tongzhou Mu*, Jiayuan Gu*, Zhiwei Jia, Hao Tang, and Hao Su
    NeurIPS 2020 [arxiv] [code].
  • Belief Propagation Neural Networks
    Jonathan Kuck, Shuvam Chakraborty, Hao Tang, Rachel Luo, Jiaming Song, Ashish Sabharwal, and Stefano Ermon
    NeurIPS 2020 [arxiv].
  • Emotion Recognition using Multimodal Residual LSTM Network
    Jiaxin Ma*, Hao Tang*, Wei-Long Zheng, and Bao-Liang Lu
    ACM Multimedia 2019 [pdf].
  • Investigating Sex Differences in Classification of Five Emotions from EEG and Eye Movement Signals
    Lan-Qing Bao, Jie-Lin Qiu, Hao Tang, Wei-Long Zheng, and Bao-Liang Lu
    IEEE International Engineering in Medicine and Biology Conference (EMBC) 2019.
  • Multimodal Emotion Recognition Using Deep Neural Networks
    Hao Tang, Wei Liu, Wei-Long Zheng, and Bao-Liang Lu
    International Conference on Neural Information Processing (ICONIP) 2017 [pdf].

Misc.

  • Top Reviewer of NeurIPS, 2022.
  • Organizer of the ICML Workshop on Assessing World Models.