I'm a third-year Ph.D. student in the Multimedia Lab at CUHK, supervised by Prof. Hongsheng Li and Prof. Xiaogang Wang.
My research interests include multimodal reasoning, agentic vision, and unified understanding & generation MLLMs. Please email me if you would like to collaborate on academic research or have any questions.
I will be entering the job market in 2027. Please feel free to reach out if you have opportunities!
🔥 News
- 2026.02: 🔥 Seed 2.0 is released.
- 2026.02: 🎉 Three papers (DraCo, MME-CoF, and AR3D-R1) are accepted by CVPR 2026.
- 2025.12: 🔥 We release DraCo: Draft as CoT for Text-to-Image Preview and Rare Concept Generation, an interleaved reasoning framework for improved image generation.
- 2025.10: 🔥 We release UlmEvalkit, an open-source toolkit for evaluating unified MLLMs and generative models on image generation tasks.
- 2025.09: 🎉 Three papers (T2I-R1, MINT-CoT, and BLINK-Twice) are accepted by NeurIPS 2025.
- 2025.05: 🎉 Two papers (MME-CoT and EasyRef) are accepted by ICML 2025.
- 2025.01: 🎉 One paper (MMSearch) is accepted by ICLR 2025.
📝 Selected Publications
Multi-modal Reasoning & Agentic Vision
- [Technical Report] Seed2.0 Model Card: Towards Intelligence Frontier for Real-World Complexity
  Contributor (VLM Post-train Group)
- [ICLR 2025] MMSearch: Unveiling the Potential of Large Models as Multi-modal Search Engines
  Dongzhi Jiang*, Renrui Zhang*, Ziyu Guo, Yanmin Wu, Jiayi Lei, Pengshuo Qiu, Pan Lu, Zehui Chen, Guanglu Song, Peng Gao, Yu Liu, Chunyuan Li, Hongsheng Li.
- [ICML 2025] MME-CoT: Benchmarking Chain-of-Thought in Large Multimodal Models for Reasoning Quality, Robustness, and Efficiency
  Dongzhi Jiang*, Renrui Zhang*, Ziyu Guo, Yanwei Li, Yu Qi, Xinyan Chen, Liuhui Wang, Jianhan Jin, Claire Guo, Shen Yan, Bo Zhang, Chaoyou Fu, Peng Gao, Hongsheng Li.
- [Technical Report] Seed1.8 Model Card: Towards Generalized Real-World Agency
  Contributor (VLM Post-train Group)
Multi-modal Reasoning for Generation
- [NeurIPS 2025] T2I-R1: Reinforcing Image Generation with Collaborative Semantic-level and Token-level CoT
  Dongzhi Jiang*, Ziyu Guo*, Renrui Zhang*, Zhuofan Zong, Hao Li, Le Zhuo, Shilin Yan, Pheng-Ann Heng, Hongsheng Li.
- [CVPR 2026] DraCo: Draft as CoT for Text-to-Image Preview and Rare Concept Generation
  Dongzhi Jiang, Renrui Zhang, Haodong Li, Zhuofan Zong, Ziyu Guo, Jun He, Claire Guo, Junyan Ye, Rongyao Fang, Weijia Li, Rui Liu, Hongsheng Li.
All Publications
Multi-modal Reasoning for Understanding & Generation
- [ECCV 2024] MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems?
  Renrui Zhang*, Dongzhi Jiang*, Yichi Zhang*, Haokun Lin, Ziyu Guo, Pengshuo Qiu, Aojun Zhou, Pan Lu, Kai-Wei Chang, Peng Gao, Hongsheng Li.
- [NeurIPS 2025] MINT-CoT: Enabling Interleaved Visual Tokens in Mathematical Chain-of-Thought Reasoning
  Xinyan Chen, Renrui Zhang, Dongzhi Jiang, Aojun Zhou, Shilin Yan, Weifeng Lin, Hongsheng Li.
- [NeurIPS 2025 D&B Track] BLINK-Twice: You see, but do you observe? A Reasoning Benchmark on Visual Perception
  Junyan Ye, Dongzhi Jiang, Jun He, Baichuan Zhou, Zilong Huang, Zhiyuan Yan, Hongsheng Li, Conghui He, Weijia Li.
- [ICLR 2025] MAVIS: Mathematical Visual Instruction Tuning with an Automatic Data Engine
  Renrui Zhang, Xinyu Wei, Dongzhi Jiang, Ziyu Guo, Yichi Zhang, Chengzhuo Tong, Jiaming Liu, Aojun Zhou, Shanghang Zhang, Peng Gao, Hongsheng Li.
- [CVPR 2026] Are Video Models Ready as Zero-Shot Reasoners? An Empirical Study with the MME-CoF Benchmark
  Ziyu Guo, Xinyan Chen, Renrui Zhang, Ruichuan An, Yu Qi, Dongzhi Jiang, Xiangtai Li, Manyuan Zhang, Hongsheng Li, Pheng-Ann Heng.
- [arXiv] Echo-4o: Harnessing the Power of GPT-4o Synthetic Images for Improved Image Generation
  Junyan Ye*, Dongzhi Jiang*, Zihao Wang, Leqi Zhu, Zhenghao Hu, Zilong Huang, Jun He, Zhiyuan Yan, Jinghua Yu, Hongsheng Li, Conghui He, Weijia Li.
- [arXiv] RealGen: Photorealistic Text-to-Image Generation via Detector-Guided Rewards
  Junyan Ye, Leiqi Zhu, Yuncheng Guo, Dongzhi Jiang, Zilong Huang, Yifan Zhang, Zhiyuan Yan, Haohuan Fu, Conghui He, Weijia Li.
- [CVPR 2026] Are We Ready for RL in Text-to-3D Generation? A Progressive Investigation
  Yiwen Tang, Zoey Guo, Kaixin Zhu, Ray Zhang, Qizhi Chen, Dongzhi Jiang, Junli Liu, Bohan Zeng, Haoming Song, Delin Qu, Tianyi Bai, Dan Xu, Wentao Zhang, Bin Zhao.
- [Blog] Nano-Consistent-150K
  Junyan Ye, Dongzhi Jiang, Zilong Huang, Jun He, Leqi Zhu, Zhiyuan Yan, Ruichuan An, Hongsheng Li, Conghui He, Weijia Li.
Agentic Multi-modal Generation
- [arXiv] Mind-Brush: Integrating Agentic Cognitive Search and Reasoning into Image Generation
  Jun He, Junyan Ye, Zilong Huang, Dongzhi Jiang, Chenjue Zhang, Leqi Zhu, Renrui Zhang, Xiang Zhang, Weijia Li.
- [arXiv] CoCo: Code as CoT for Text-to-Image Preview and Rare Concept Generation
  Haodong Li, Chunmei Qing, Huanyu Zhang, Dongzhi Jiang, Yihang Zou, Hongbo Peng, Dingming Li, Yuhong Dai, ZePeng Lin, Juanxi Tian, Yi Zhou, Siqi Dai, Jingwei Wu.
Multimodal Large Language Models
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context
  Zhuofan Zong, Bingqi Ma, Dazhong Shen, Guanglu Song, Hao Shao, Dongzhi Jiang, Hongsheng Li, Yu Liu.
- [WACV 2026] PiSA: A Self-Augmented Data Engine and Training Strategy for 3D Understanding with Large Models
  Zilu Guo, Hongbin Lin, Zhihao Yuan, Chaoda Zheng, Pengshuo Qiu, Dongzhi Jiang, Renrui Zhang, Chun-Mei Feng, Zhen Li.
Diffusion Models
- [NeurIPS 2024] CoMat: Aligning Text-to-Image Diffusion Model with Image-to-Text Concept Matching
  Dongzhi Jiang, Guanglu Song, Xiaoshi Wu, Renrui Zhang, Dazhong Shen, Zhuofan Zong, Yu Liu, Hongsheng Li.
- [ICML 2025] EasyRef: Omni-Generalized Group Image Reference for Diffusion Models via Multimodal LLM
  Zhuofan Zong, Dongzhi Jiang, Bingqi Ma, Guanglu Song, Hao Shao, Dazhong Shen, Yu Liu, Hongsheng Li.
Autonomous Driving
- [ICCV 2023] Temporal Enhanced Training of Multi-view 3D Object Detector via Historical Object Prediction
  Zhuofan Zong*, Dongzhi Jiang*, Guanglu Song, Zeyue Xue, Jingyong Su, Hongsheng Li, Yu Liu.
🛠️ Projects
- UlmEvalkit: An open-source toolkit for evaluating unified large multi-modal models and generative models
  Dongzhi Jiang, Renrui Zhang, Yankai Shu, Yuyang Peng, Zhuofan Zong, Yuchen Duan, Zihao Wang, Jiaming Liu, Hao Chen, Ziyu Guo, Junyan Ye, Rui Liu, Pheng-Ann Heng, Shanghang Zhang, Hongsheng Li.
💼 Experience
- 2025.10 - Present: Research Intern, Seed, ByteDance
- 2022.11 - 2024.05: Research Intern, Base Model Group, SenseTime
🎓 Education
- 2023.08 - Present: Ph.D. student, Multimedia Lab, CUHK
- 2019.09 - 2023.06: B.Eng. in Computer Science and Technology, Harbin Institute of Technology, Shenzhen
🏆 Awards
- 2020, 2022: National Scholarship, Ministry of Education, China