FedGuard-CI: Federated Defense Architecture for Privacy-Preserving Collaborative Learning against Model Inversion Attacks
Abstract
Federated learning in collaborative intelligence (CI) environments introduces critical privacy risks, including model inversion and gradient leakage attacks, particularly in sensitive domains such as healthcare and finance. This paper presents FedGuard-CI, a novel privacy-preserving framework that integrates dual-stage differential privacy, trust-aware secure aggregation, and a Model Inversion Risk Estimator (MIRE) to mitigate these threats. Experimental evaluation across multiple datasets demonstrates that FedGuard-CI achieves 93.1% accuracy at a privacy budget of \epsilon = 3.2, outperforming FLAME and DP-FedAvg in both utility and privacy preservation. The framework reduces the inversion success rate (ISR) by 85% relative to FedAvg, achieving a 9.6% ISR and a 0.18 SSIM score, while maintaining low communication overhead (585 KB) and efficient runtime (30.2 s per round). Ablation studies confirm that MIRE and trust-aware aggregation are essential to both security and model performance. These results highlight FedGuard-CI's practicality, scalability, and effectiveness as a foundation for secure and trustworthy federated intelligence.
Keywords
Federated Learning, Differential Privacy, Model Inversion Attack, Secure Aggregation, Collaborative Intelligence
