Chulin Xie
Hi! I am a Ph.D. student in Computer Science at the University of Illinois Urbana-Champaign, advised by Prof. Bo Li.

My research focuses on enhancing the robustness, privacy, and generalization of machine learning, and the intersections of these topics, especially in language models, generative models, and federated learning. Previously, I received my Bachelor's degree from the Computer Science department at Zhejiang University in July 2020. I was a research intern at Microsoft Research and NVIDIA Research.

News
Feb 2, 2024 Our work on private and communication-efficient vertical FL got accepted to SaTML 2024.
Jan 17, 2024 Our work on a red-teaming tool for diffusion models and our work on hybrid FL were accepted to ICLR 2024.
Dec 11, 2023 Our LLM trustworthiness benchmark DecodingTrust won an Outstanding Paper Award at NeurIPS 2023!
Sep 21, 2023 Our work on federated learning fairness will be presented as an oral at the NeurIPS 2023 FL workshop.
Sep 2, 2023 Our work on differential privacy and certified robustness in FL got accepted to ACM CCS 2023.
May 22, 2023 I started an internship at Microsoft Research Redmond, working on the privacy of LLMs.

Selected Publications

  1. Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM
    Chulin Xie, Pin-Yu Chen, Qinbin Li, Arash Nourian, Ce Zhang, and Bo Li
    SaTML 2024
  2. Ring-A-Bell! How Reliable are Concept Removal Methods for Diffusion Models?
    Yu-Lin Tsai, Chia-Yi Hsu, Chulin Xie, Chih-Hsun Lin, Jia-You Chen, Bo Li, Pin-Yu Chen, Chia-Mu Yu, and Chun-Ying Huang
    ICLR 2024
  3. Effective and Efficient Federated Tree Learning on Hybrid Data
    Qinbin Li, Chulin Xie, Xiaojun Xu, Xiaoyuan Liu, Ce Zhang, Bo Li, Bingsheng He, and Dawn Song
    ICLR 2024
  4. Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks
    Chulin Xie, Yunhui Long, Pin-Yu Chen, Qinbin Li, Sanmi Koyejo, and Bo Li
    ACM CCS 2023
  5. DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
    Boxin Wang*, Weixin Chen*, Hengzhi Pei*, Chulin Xie*, Mintong Kang*, Chenhui Zhang*, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, and Bo Li
NeurIPS Datasets & Benchmarks 2023 (Oral, Outstanding Paper Award)
  6. CoPur: Certifiably Robust Collaborative Inference via Feature Purification
    Jing Liu, Chulin Xie, Sanmi Koyejo, and Bo Li
    NeurIPS 2022
  7. Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
    Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry, Bo Li, and Tom Goldstein
    TPAMI 2022
  8. CRFL: Certifiably Robust Federated Learning against Backdoor Attacks
    Chulin Xie, Minghao Chen, Pin-Yu Chen, and Bo Li
    ICML 2021
  9. Style-based Point Generator with Adversarial Rendering for Point Cloud Completion
    Chulin Xie*, Chuxin Wang*, Bo Zhang, Hao Yang, Dong Chen, and Fang Wen
    CVPR 2021
  10. DBA: Distributed Backdoor Attacks against Federated Learning
    Chulin Xie, Keli Huang, Pin-Yu Chen, and Bo Li
    ICLR 2020

Invited Talks
  • Trustworthiness Evaluation in GPT Models
    • Fairness & Inclusiveness Monthly Community Meeting at Microsoft Research, Jan. 2024
    • Reading Group at Tsinghua University, Oct. 2023
    • Responsible AI Reading Group at Microsoft Research, Sept. 2023
  • On the Trustworthiness of LLMs and Underlying Connections Between Different Safety Perspectives
  • Fairness via Agent-Awareness for FL on Heterogeneous Data, NeurIPS FL workshop, Dec. 2023
  • Connections between Differential Privacy and Certified Robustness in FL, CCS, Nov. 2023
  • Distributed Backdoor Attacks, Federated Learning One World (FLOW) Seminar, Sept. 2020

Teaching
  • Teaching Assistant for CS 446: Machine Learning, Fall 2023
  • Guest Lecturer for CS 598: Special Topics on Adversarial Machine Learning, Fall 2021