Adversarial attacks and defences in Federated Learning
- Rodríguez Barroso, Nuria
- Francisco Herrera Triguero (Supervisor)
University of defence: Universidad de Granada
Date of defence: 1 December 2023
- Óscar Cordón García (Chair)
- Rocío C. Romero Zaliz (Secretary)
- María José del Jesús Díaz (Examiner)
- Pietro Ducange (Examiner)
- Senén Barro (Examiner)
Type: Thesis
Abstract
Artificial Intelligence (AI) is revolutionising many facets of everyday life, yet as it advances, the associated risks grow with it. Although its full potential remains uncertain, there is increasing concern about its deployment in sensitive domains such as education, culture, and medicine. One of the foremost challenges today is striking a balance between the prospective benefits and the attendant risks, so that precaution does not stifle innovation. This requires AI systems that are robust, secure, transparent, fair, respectful of privacy and autonomy, clearly traceable, and accountable to auditing. In essence, it means ensuring their ethical and responsible application, which gives rise to the concept of trustworthy AI.

In this context, Federated Learning (FL) emerges as a distributed learning paradigm that preserves the privacy of training data while still harnessing global knowledge: clients train locally and share only model updates, which a central server aggregates into a global model. Although its primary objective is data privacy, FL also brings cross-cutting benefits such as robustness and reduced communication cost. Like any learning paradigm, however, FL is susceptible to adversarial attacks that aim to alter the model's behaviour or to infer private information from the shared updates. The central focus of this thesis is the development of defence mechanisms against adversarial attacks that compromise the model's behaviour, while simultaneously promoting the other requirements of trustworthy AI.
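The abstract refers to FL's server-side aggregation step and to the model-poisoning attacks it must withstand. The sketch below is a generic illustration of that idea, not the thesis's actual algorithms: it contrasts plain federated averaging (FedAvg-style mean) with a coordinate-wise median, a classic robust aggregator. All function names, client counts, and values are illustrative assumptions.

```python
# Minimal sketch: FedAvg-style mean vs. a robust median aggregator.
# Illustrative only; not the defence mechanisms proposed in the thesis.
import numpy as np

def aggregate_mean(client_updates: list[np.ndarray]) -> np.ndarray:
    """FedAvg-style aggregation: average the clients' model updates."""
    return np.mean(client_updates, axis=0)

def aggregate_median(client_updates: list[np.ndarray]) -> np.ndarray:
    """Coordinate-wise median: limits the influence of a minority
    of poisoned updates."""
    return np.median(client_updates, axis=0)

rng = np.random.default_rng(0)

# Nine honest clients send updates close to the true direction (~1.0)...
honest = [rng.normal(loc=1.0, scale=0.1, size=4) for _ in range(9)]
# ...while one adversary submits a large poisoned update (model poisoning).
poisoned = honest + [np.full(4, -100.0)]

print("mean  :", aggregate_mean(poisoned))    # dragged far from 1.0
print("median:", aggregate_median(poisoned))  # stays close to 1.0
```

Robust statistics of this kind (median, trimmed mean, Krum, and relatives) are a standard family of server-side defences against model poisoning in the FL literature, which gives a sense of the attack-versus-defence setting the thesis addresses.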