FedSecurity: A Benchmark for Attacks and Defenses in Federated Learning and Federated LLMs
Shanshan Han,
Baturalp Buyukates,
Zijian Hu,
Han Jin,
Weizhao Jin,
Lichao Sun,
Xiaoyang Wang,
Chulin Xie,
Kai Zhang,
Qifan Zhang,
Yuhui Zhang,
Chaoyang He,
Salman Avestimehr
August 2024
Abstract
This paper introduces FedSecurity, a benchmark designed to simulate adversarial attacks and the corresponding defense mechanisms in Federated Learning (FL). As an integral module of the open-source FedML library, which facilitates FL algorithm development and performance comparison, FedSecurity enhances FedML’s ability to evaluate security issues and potential remedies in FL. FedSecurity comprises two major components: FedAttacker, which simulates attacks injected during FL training, and FedDefender, which simulates defensive mechanisms that mitigate the impact of those attacks. FedSecurity is open-sourced and can be customized to a wide range of machine learning models (e.g., Logistic Regression, ResNet, and GANs) and federated optimizers (e.g., FedAVG, FedOPT, and FedNOVA). FedSecurity can also be readily applied to Large Language Models (LLMs), demonstrating its adaptability and applicability across a variety of scenarios.
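To make the attack-and-defense workflow concrete, the minimal Python sketch below shows how an attacker hook and a defender hook of this kind could wrap a FedAvg aggregation step on the server. It is an illustrative sketch only, not FedSecurity's actual API; the function names (byzantine_attack, norm_clipping_defense, fedavg) and all parameters are hypothetical.

import numpy as np

def byzantine_attack(client_updates, malicious_ids, scale=10.0):
    # Hypothetical attacker hook: malicious clients replace their model
    # updates with large random noise (a simple Byzantine-style attack).
    for cid in malicious_ids:
        client_updates[cid] = scale * np.random.randn(*client_updates[cid].shape)
    return client_updates

def norm_clipping_defense(client_updates, clip_norm=1.0):
    # Hypothetical defender hook: clip each update's L2 norm before
    # aggregation so outlier (potentially malicious) updates cannot
    # dominate the global model.
    clipped = []
    for update in client_updates:
        norm = np.linalg.norm(update)
        clipped.append(update * min(1.0, clip_norm / (norm + 1e-12)))
    return clipped

def fedavg(client_updates):
    # Plain FedAvg aggregation: average the (possibly defended) updates.
    return np.mean(client_updates, axis=0)

# Toy round with 10 clients, 2 of which are compromised.
updates = [np.random.randn(100) * 0.01 for _ in range(10)]
updates = byzantine_attack(updates, malicious_ids=[0, 1])
updates = norm_clipping_defense(updates, clip_norm=0.5)
global_update = fedavg(updates)

In a real FL pipeline, the attack would typically be injected on selected clients before their updates reach the server, while the defense would run server-side immediately before aggregation.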
Publication
In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '24), Applied Data Science Track
Senior Staff Researcher
Dr. Qifan Zhang (张起帆) is now a Senior Staff Researcher at Palo Alto Networks. His research focuses on safeguarding critical internet infrastructure and addressing emerging threats in networked systems, centering on network security with deep expertise in the Domain Name System (DNS), the backbone of internet communication. By combining protocol analysis, fuzzing techniques, and formal methods, he designs automated tools that uncover high-risk vulnerabilities in DNS implementations and standards.
One of his flagship projects, ResolverFuzz, is a novel testing framework that exposed critical flaws in widely deployed DNS resolvers, including protocol-level security gaps (e.g., cache poisoning) and implementation errors (e.g., memory corruption). These discoveries have directly strengthened cybersecurity practices across industry, open-source communities, and public infrastructure providers, earning recognition from organizations such as CERT/CC and the CVE Program.
Beyond DNS, he also explores the intersection of AI and security, investigating risks in real-world machine learning deployments. His research, published at ACSAC 2022, demonstrated the first practical model extraction attacks against autonomous vehicle (AV) systems, using gradient-descent-based methods to reverse-engineer proprietary AI models. This work underscores the urgent need for robust defenses in safety-critical AI applications.
Prior to joining Palo Alto Networks, he earned his Ph.D. in Computer Engineering from the University of California, Irvine in 2025, advised by Prof. Zhou Li, and his B.Eng. in Computer Science and Technology from ShanghaiTech University in 2020, complemented by a summer session at the University of California, Berkeley in 2017.
Pronunciation of his name: Chee-Fan Jang.
His Curriculum Vitae (last updated on March 14, 2025)