2026

ArXiv-2602
Q-DiT4SR: Exploration of Detail-Preserving Diffusion Transformer Quantization for Real-World Image Super-Resolution
Xun Zhang†, Kaicheng Yang†, Hongliang Lu, Haotong Qin, Yong Guo, and 1 more author
Under Review, Feb 2026
arXiv | PDF | Code

ArXiv-2602
AdaTSQ: Pushing the Pareto Frontier of Diffusion Transformers via Temporal-Sensitivity Quantization
Shaoqiu Zhang†, Zizhong Ding†, Kaicheng Yang, Junyi Wu, Xianglong Yan, and 4 more authors
Under Review, Feb 2026
arXiv | PDF | Code

Improving Sampling for Masked Diffusion Models via Information Gain
Kaisen Yang, Jayden Teoh, Kaicheng Yang, Yitong Zhang, and Alex Lamb
Under Review, Feb 2026
arXiv | PDF | Code

2025

ArXiv-2503
QArtSR: Quantization via Reverse-Module and Timestep-Retraining in One-Step Diffusion based Image Super-Resolution
Libo Zhu, Haotong Qin, Kaicheng Yang, Wenbo Li, Yong Guo, and 3 more authors
Under Review, Mar 2025
arXiv | PDF | Code

ICML-2502
BiMaCoSR: Binary One-Step Diffusion Model Leveraging Flexible Matrix Compression for Real Super-Resolution
Kai Liu†, Kaicheng Yang†, Zheng Chen, Zhiteng Li, Yong Guo, and 3 more authors
In ICML, Feb 2025
arXiv | PDF | Code

ArXiv-2509
RobuQ: Pushing DiTs to W1.58A2 via Robust Activation Quantization
Kaicheng Yang†, Xun Zhang†, Haotong Qin, Yucheng Lin, Kaisen Yang, and 1 more author
Under Review, Sep 2025
arXiv | PDF | Code

ArXiv-2509
Explore-Execute Chain: Towards an Efficient Structured Reasoning Paradigm
Kaisen Yang†, Lixuan He†, Rushi Shah, Kaicheng Yang, Qinwei Ma, and 2 more authors
Under Review, Sep 2025
arXiv | PDF | Code

ICLR2026
PT^2-LLM: Post-Training Ternarization for Large Language Models
Xianglong Yan†, Chengzhu Bao†, Zhiteng Li, Tianao Zhang, Kaicheng Yang, and 4 more authors
In ICLR, Oct 2025
arXiv | PDF | Code

ArXiv-2512
TreeQ: Pushing the Quantization Boundary of Diffusion Transformer via Tree-Structured Mixed-Precision Search
Kaicheng Yang, Kaisen Yang, Baiting Wu, Xun Zhang, Qianrui Yang, and 3 more authors
Under Review, Dec 2025
arXiv | PDF | Code