ㄓㄨㄛˊㄩㄝˋ

Terry Yue Zhuo

I am affiliated with CSIRO's Data61 and Monash University. I am currently visiting Singapore Management University while working as a research engineer at CSIRO's Data61.

My research concerns the reliability of programming agents, mainly function calling, program reasoning, and code generation. Recently, I have been working on large language models of code, focusing on scaling, datasets, and openness.

I am always open to collaboration. Please contact me if you are interested in working with me.


Recent Publications

Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra and Shayne Longpre, “OctoPack: Instruction Tuning Code Large Language Models”, International Conference on Learning Representations (ICLR), 2024. [Github]

Terry Yue Zhuo, “ICE-Score: Instructing Large Language Models to Evaluate Code”, Findings of the European Chapter of the Association for Computational Linguistics (EACL), 2024. [Github]

Terry Yue Zhuo, Armel Zebaze, Nitchakarn Suppattarachai, Leandro von Werra, Harm de Vries, Qian Liu and Niklas Muennighoff, “Astraios: Parameter-Efficient Instruction Tuning Code Large Language Models”, Manuscript, 2024. [MarkTechPost] [Tweet] [Github]

Terry Yue Zhuo, Zhou Yang, Zhensu Sun, Yufei Wang, Li Li, Xiaoning Du, Zhenchang Xing and David Lo, “Data Augmentation Approaches for Source Code Models: A Survey”, Manuscript, 2023. [Github]

BigCode Team, “StarCoder: May The Source Be With You!”, Transactions on Machine Learning Research (TMLR), 2023. [Tweet 1] [Tweet 2] [TechCrunch] [StarCoderBase] [StarCoder] [Github]

BigCode Team, “SantaCoder: don't reach for the stars!”, Deep Learning for Code Workshop (DL4C) @ ICLR, 2023. [Best Paper Award] [Tweet 1] [Tweet 2] [SantaCoder]