
Mitigating Poisoning Attacks in Decentralized Federated Learning through Moving Target Defense

MA
State: Assigned to Zi Ye
Published: 2023-08-09

Decentralized Federated Learning (DFL) has emerged as a powerful approach to collaborative model training across a network of devices while preserving data privacy [1]. However, DFL systems are vulnerable to adversarial attacks, particularly poisoning attacks, where malicious participants intentionally inject biased or harmful data to compromise the integrity of the global model [2]. This project proposes the design and implementation of a mitigation strategy using Moving Target Defense (MTD) techniques to enhance the security and robustness of DFL systems against poisoning attacks [3], [4].
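To make the threat model concrete, the short Python sketch below illustrates one common poisoning strategy, label flipping, in which a malicious participant corrupts a fraction of its local training labels before computing a model update. The function and parameter names are illustrative only and do not come from the referenced works.

import numpy as np

def flip_labels(labels, num_classes, flip_rate=0.5, rng=None):
    # Illustrative label-flipping poisoning: replace a fraction of the labels
    # with a different, randomly chosen class before local training starts.
    rng = rng if rng is not None else np.random.default_rng()
    poisoned = labels.copy()
    n_flip = int(flip_rate * len(labels))
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    offsets = rng.integers(1, num_classes, size=n_flip)  # non-zero shift guarantees the class changes
    poisoned[idx] = (poisoned[idx] + offsets) % num_classes
    return poisoned

# Example: a malicious node poisons half of a 10-class local label set.
clean = np.random.default_rng(0).integers(0, 10, size=100)
poisoned = flip_labels(clean, num_classes=10, flip_rate=0.5, rng=np.random.default_rng(1))
print(int((clean != poisoned).sum()), "labels flipped out of", len(clean))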

The main objectives of this project are as follows:

a) Poisoning Attack Analysis: Study and analyze various poisoning attack strategies targeting DFL systems to understand their impact on model convergence and overall system performance.

b) MTD Design: Devise a Dynamic MTD framework tailored to DFL that alters the network topology at runtime to create an unpredictable training environment, thereby thwarting poisoning attacks (a minimal sketch of this idea follows the list).

c) Implementation: Implement the designed Dynamic MTD strategies within a decentralized federated learning framework, ensuring compatibility and minimal overhead on the system.

d) Evaluation: Conduct extensive experiments to evaluate the effectiveness of the Dynamic MTD techniques in mitigating poisoning attacks, comparing the defended DFL system against an undefended baseline in terms of model accuracy, robustness, and resilience to adversarial attacks.

e) Documentation: Create comprehensive documentation covering the design, implementation, experimentation, and results of the Dynamic MTD-based defense mechanism.
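As a rough illustration of objective (b), the sketch below re-samples the DFL communication topology at every aggregation round, so that no participant, malicious or benign, can rely on a fixed neighbourhood. The use of networkx and all function names are assumptions made for this example; they are not part of the Fedstellar platform.

import random
import networkx as nx

def rotate_topology(node_ids, degree, round_idx, seed="mtd"):
    # Re-sample a random d-regular overlay for the given aggregation round.
    # A fresh neighbourhood per round is the "moving target": poisoned updates
    # from a malicious node reach a different, unpredictable set of peers each time.
    rng = random.Random(f"{seed}-{round_idx}")
    n = len(node_ids)
    graph = nx.random_regular_graph(degree, n, seed=rng.randint(0, 2**31 - 1))
    return nx.relabel_nodes(graph, dict(enumerate(node_ids)))

# Example: 10 participants, each exchanging models with 4 neighbours per round.
nodes = [f"node-{i}" for i in range(10)]
for r in range(3):
    topology = rotate_topology(nodes, degree=4, round_idx=r)
    print(f"round {r}: neighbours of node-0 ->", sorted(topology.neighbors("node-0")))

In a real deployment the honest nodes would need to agree on each new topology, for example by deriving it from a shared per-round seed; otherwise an attacker could simply ignore the rotation.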


[1] Beltrán, E. T. M., Pérez, M. Q., Sánchez, P. M. S., Bernal, S. L., Bovet, G., Pérez, M. G., ... & Celdrán, A. H. (2022). Decentralized federated learning: Fundamentals, state-of-the-art, frameworks, trends, and challenges. arXiv preprint arXiv:2211.08413.

[2] Bouacida, N., & Mohapatra, P. (2021). Vulnerabilities in federated learning. IEEE Access, 9, 63229-63249.

[3] Fedstellar repository: https://github.com/enriquetomasmb/fedstellar

[4] Beltrán, E. T. M., Gómez, Á. L. P., Feng, C., Sánchez, P. M. S., Bernal, S. L., Bovet, G., ... & Celdrán, A. H. (2023). Fedstellar: A Platform for Decentralized Federated Learning. arXiv preprint arXiv:2306.09750.

20% Design, 70% Implementation, 10% Documentation
Machine Learning, Python

Supervisor: Chao Feng
