FedSecurity: A Benchmark for Attacks and Defenses in Federated Learning and Federated LLMs

Abstract

This paper introduces FedMLSecurity, a benchmark designed to simulate adversarial attacks and corresponding defense mechanisms in Federated Learning (FL). As an integral module of the open-sourced library FedML that facilitates FL algorithm development and performance comparison, FedMLSecurity enhances FedML’s capabilities to evaluate security issues and potential remedies in FL. FedMLSecurity comprises two major components: FedMLAttacker, which simulates attacks injected during FL training, and FedMLDefender, which simulates defensive mechanisms to mitigate the impacts of the attacks. FedMLSecurity is open-sourced and can be customized to a wide range of machine learning models (e.g., Logistic Regression, ResNet, GAN, etc.) and federated optimizers (e.g., FedAVG, FedOPT, FedNOVA, etc.). FedMLSecurity can also be readily applied to Large Language Models (LLMs), demonstrating its adaptability and applicability in various scenarios.
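The two components map naturally onto a single federated training round: the attacker perturbs client updates before aggregation, and the defender filters or replaces the aggregation rule. The sketch below illustrates this interplay in plain NumPy; it is not the FedMLSecurity API, and the function names (`honest_update`, `poisoned_update`, `median_defense`), the scaling-based model-poisoning attack, and the coordinate-wise-median defense are illustrative assumptions.

```python
# Illustrative sketch only (NOT FedML/FedMLSecurity code): a model-poisoning
# attacker scales malicious client updates, and a defender replaces mean
# aggregation (FedAvg) with a coordinate-wise median.
import numpy as np

rng = np.random.default_rng(0)
DIM, NUM_CLIENTS, NUM_MALICIOUS = 10, 8, 2

def honest_update(global_model: np.ndarray) -> np.ndarray:
    """Simulate a benign client's local update (small gradient-like step)."""
    return global_model + 0.1 * rng.normal(size=DIM)

def poisoned_update(global_model: np.ndarray, scale: float = 10.0) -> np.ndarray:
    """Model-poisoning attack: push the global model in an adversarial direction."""
    return global_model - scale * np.ones(DIM)

def fedavg(updates: list[np.ndarray]) -> np.ndarray:
    """Vanilla FedAvg aggregation (no defense)."""
    return np.mean(updates, axis=0)

def median_defense(updates: list[np.ndarray]) -> np.ndarray:
    """Robust-aggregation defense: coordinate-wise median of client updates."""
    return np.median(np.stack(updates), axis=0)

global_model = np.zeros(DIM)
updates = [honest_update(global_model) for _ in range(NUM_CLIENTS - NUM_MALICIOUS)]
updates += [poisoned_update(global_model) for _ in range(NUM_MALICIOUS)]

print("FedAvg under attack:        ", np.round(fedavg(updates), 2))
print("Median defense under attack:", np.round(median_defense(updates), 2))
```

Running the sketch shows the undefended FedAvg average dragged far off by the two malicious clients, while the median-based aggregate stays close to the honest updates, which is the kind of attack/defense comparison the benchmark is built to automate across models and federated optimizers.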

Publication
In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining - Applied Data Science Track
Qifan Zhang
Ph.D. candidate

Qifan Zhang (张起帆) is a fifth- and final-year Ph.D. candidate in the Department of Electrical Engineering & Computer Science at the University of California, Irvine, focusing on Computer Security and advised by Prof. Zhou Li. His research interests include Network Security, especially the Domain Name System (DNS), as well as Machine Learning Security and Privacy. Before that, he received his B.Eng. degree in Computer Science and Technology from ShanghaiTech University in 2020, with an interim summer session at the University of California, Berkeley in 2017.

Pronunciation of his name: Chee-Fan Jang.
His Curriculum Vitae (last updated on Aug 26, 2024)