
Design and Implementation of a Black-box Robustness Analysis Module for a DFL Platform

MA
State: Assigned to Wenzhe Li
Published: 2024-01-07

The development of a black-box robustness analysis module for our Decentralized Federated Learning (DFL) platform is driven by the need to strengthen machine learning models against adversarial threats while maintaining the decentralized and privacy-preserving nature of the platform [1]. Although a white-box robustness analysis, which involves accessing internal model parameters, is possible, it often compromises privacy and transparency, which are fundamental principles of federated learning. The black-box approach aims to assess model robustness without examining the internal workings of the model, thereby ensuring that the evaluation process preserves privacy and aligns with the principles of decentralization [2]. The introduction of this module is intended to improve the security and reliability of the DFL platform, creating a resilient ecosystem where machine learning models can withstand adversarial challenges while safeguarding the privacy and integrity of decentralized data sources.

Below are the key components and considerations for such a module:

Adversarial Testing:

Implement techniques for generating adversarial examples that can expose vulnerabilities in machine learning models.

Evaluate model performance under perturbations and attacks without access to the model's internal parameters.
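
As a concrete illustration, the minimal sketch below performs such a query-only adversarial test, assuming nothing more than a hypothetical predict_proba(x) callable that returns class probabilities (no access to weights or gradients). It greedily keeps random single-feature perturbations that reduce the model's confidence in the true class, similar in spirit to simple score-based black-box attacks; the interface, parameter names, and value ranges are illustrative assumptions, not part of the platform.

import numpy as np

def random_search_attack(predict_proba, x, true_label, eps=0.05, queries=500, seed=None):
    # Query-only (black-box) attack sketch: perturb one feature at a time and
    # keep the change whenever it lowers the model's confidence in the true class.
    rng = np.random.default_rng(seed)
    x_adv = x.astype(float).copy()
    best = predict_proba(x_adv)[true_label]
    for _ in range(queries):
        idx = rng.integers(x_adv.size)            # random feature to perturb
        step = eps * rng.choice([-1.0, 1.0])      # signed perturbation of size eps
        candidate = x_adv.copy()
        candidate.flat[idx] = np.clip(candidate.flat[idx] + step, 0.0, 1.0)
        score = predict_proba(candidate)[true_label]
        if score < best:                          # keep only helpful perturbations
            x_adv, best = candidate, score
        if np.argmax(predict_proba(x_adv)) != true_label:
            break                                 # stop once the model is fooled
    return x_adv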

Input Diversity:

Design tests that cover a diverse set of input data to ensure the model's generalization capabilities in real-world scenarios.

Consider various data distributions and edge cases relevant to the application domain.
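
One way to realize this, sketched below under the assumption that inputs are numeric arrays scaled to [0, 1], is to derive several distribution-shifted variants of the clean test set (additive noise, reduced contrast, feature dropout); the specific corruptions are illustrative and would be tailored to the application domain.

import numpy as np

def make_diverse_test_sets(x_test, seed=None):
    # Sketch: derive distribution-shifted variants of a clean test set to probe
    # generalization; assumes feature values are scaled to [0, 1].
    rng = np.random.default_rng(seed)
    return {
        "clean": x_test,
        "gaussian_noise": np.clip(x_test + rng.normal(0.0, 0.1, x_test.shape), 0.0, 1.0),
        "low_contrast": np.clip(0.5 + 0.5 * (x_test - 0.5), 0.0, 1.0),
        "feature_dropout": x_test * rng.binomial(1, 0.9, x_test.shape),
    }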

Metrics Collection:

Define robustness metrics to quantify the model's performance under different adversarial scenarios.

Collect and analyze metrics such as accuracy, sensitivity, and specificity.

Generate reports on robustness evaluations to guide model enhancements.
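
A possible shape for this component is sketched below, assuming a binary classification task and a query-only predict(x) interface; it computes accuracy, sensitivity, and specificity per scenario and aggregates them into a simple report. Function and parameter names are placeholders.

import numpy as np

def robustness_metrics(y_true, y_pred):
    # Binary-classification robustness metrics computed from black-box predictions.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return {
        "accuracy": (tp + tn) / max(len(y_true), 1),
        "sensitivity": tp / max(tp + fn, 1),   # true positive rate
        "specificity": tn / max(tn + fp, 1),   # true negative rate
    }

def robustness_report(predict, scenarios, y_true):
    # Evaluate the same model under each adversarial or shifted scenario,
    # e.g. the dictionary returned by make_diverse_test_sets above.
    return {name: robustness_metrics(y_true, predict(x)) for name, x in scenarios.items()}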

Integration with DFL Platform:

Integrate the black-box robustness analysis module seamlessly into the DFL platform [3].

Ensure compatibility with existing infrastructure and workflows.
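
Since the concrete platform API is out of scope here, the sketch below assumes only that each participant exposes a prediction callable and that the platform can invoke a per-round evaluation hook; the class and method names are hypothetical, and it reuses the robustness_report helper sketched above.

class BlackBoxRobustnessModule:
    # Hypothetical integration point: the module needs only a prediction callable
    # exposed by a DFL participant, never the participant's model weights.

    def __init__(self, predict, scenarios, y_true):
        self.predict = predict        # query-only interface to the local model
        self.scenarios = scenarios    # e.g. output of make_diverse_test_sets(...)
        self.y_true = y_true          # labels for the held-out evaluation data

    def run_round_evaluation(self, round_id):
        # Intended to run after each federation round (the hook name is an assumption).
        report = robustness_report(self.predict, self.scenarios, self.y_true)
        return {"round": round_id, "metrics": report}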


[1] Bouacida, N. and Mohapatra, P., 2021. Vulnerabilities in federated learning. IEEE Access, 9, pp.63229-63249.

[2] Wang, S., Ko, R.K., Bai, G., Dong, N., Choi, T. and Zhang, Y., 2023. Evasion attack and defense on machine learning models in cyber-physical systems: A survey. IEEE Communications Surveys & Tutorials.

[3] Beltrán, E.T.M., Gómez, Á.L.P., Feng, C., Sánchez, P.M.S., Bernal, S.L., Bovet, G., Pérez, M.G., Pérez, G.M. and Celdrán, A.H., 2024. Fedstellar: A platform for decentralized federated learning. Expert Systems with Applications, 242, p.122861.

20% Design, 70% Implementation, 10% Documentation
Programming language: Python

Supervisors: Chao Feng
