This paper introduces COSMOS, a model-agnostic personalized federated learning framework that uses clustered server models and pseudo-label-only communication. It provides theoretical analysis showing exponential personalization risk contraction and demonstrates superior performance over existing baselines in heterogeneous environments.
This paper presents a comprehensive experimental comparison of various federated learning aggregation strategies, analyzing their performance and efficiency under both homogeneous and heterogeneous data distributions.
This paper introduces a simulation framework for federated analysis of Multiple Sclerosis brain lesions, combining image segmentation with clinical data analysis to test federated learning methods while preserving patient privacy.
FairHealth is an open-source Python library designed for trustworthy healthcare AI in low-resource settings, offering modules for fairness auditing, privacy-preserving federated learning, and explainability.
This paper introduces GCD-FGL, a federated graph learning framework designed for generalized category discovery in dynamic environments. It addresses challenges like the neighborhood absorption effect and global semantic inconsistency to improve novel category detection across distributed clients.
This paper introduces EdgeFlowerTune, a benchmark for evaluating federated LLM fine-tuning under realistic edge system constraints, demonstrating that accuracy-only metrics can be misleading regarding deployability.
This paper introduces GLoRA, a gauge-aware server representation for Federated LoRA that addresses the semantic mismatch in factor aggregation by estimating a consensus update subspace. Experiments show GLoRA outperforms baselines in performance and efficiency across heterogeneous client scenarios.
This paper introduces FedeKD, a reliability-aware framework for federated knowledge distillation that uses an energy-based gating mechanism to mitigate negative transfer in heterogeneous settings. The authors demonstrate that weighting knowledge transfer based on sample-wise trust improves robustness and predictive performance without requiring public datasets.
The paper introduces MEMOA, a decentralized coordination strategy for massive populations of online agents that achieves optimality via mean-field Nash equilibria, outperforming greedy baselines while scaling better than centralized approaches.
MIT researchers developed a new framework called FTTE that accelerates privacy-preserving federated learning by 81%, enabling efficient AI training on resource-constrained edge devices like smartwatches and sensors.
This paper introduces 'dictator clients'—a novel class of malicious participants in federated learning capable of erasing other clients' contributions while preserving their own—and provides theoretical analysis of their impact on model convergence, including scenarios with multiple adversarial clients.
EdgeDetect is a federated intrusion detection system for 6G-IoT environments that combines importance-aware gradient binarization (32× compression) with Paillier homomorphic encryption. It achieves 98% accuracy on CIC-IDS2017 while reducing communication overhead by 96.9%, and it is lightweight enough to deploy on resource-constrained devices such as a Raspberry Pi 4.
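The 32× figure follows from replacing each 32-bit float gradient entry with a single sign bit plus one shared scale factor. A minimal sketch of this idea (plain sign binarization, not EdgeDetect's importance-aware variant or its encryption layer; function names are illustrative):

```python
import numpy as np

def binarize_gradient(grad: np.ndarray):
    """Compress a float32 gradient to per-element signs plus one scale.

    One sign bit per element instead of a 32-bit float gives the
    32x compression ratio cited above (scale overhead is negligible
    for large tensors).
    """
    scale = float(np.abs(grad).mean())      # single per-tensor magnitude
    signs = np.sign(grad).astype(np.int8)   # +1 / -1 / 0 per element
    return signs, scale

def debinarize(signs: np.ndarray, scale: float) -> np.ndarray:
    """Server-side reconstruction of the approximate gradient."""
    return signs.astype(np.float32) * scale
```

In a full pipeline, the signs (packed to bits) would be what gets encrypted and transmitted, and the server would aggregate the reconstructed approximations.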
The article explains the concept of Federated Learning as a privacy-preserving machine learning technique that trains models on local devices rather than central servers. It details the process of encrypted parameter updates and aggregation to mitigate data leakage risks while maintaining model performance.
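The local-train-then-aggregate process the article describes can be sketched with a FedAvg-style weighted average, the canonical aggregation rule (the encryption step is omitted here; the function name is illustrative):

```python
import numpy as np

def federated_average(client_updates, client_sizes):
    """Aggregate client model updates weighted by local dataset size.

    Each client trains on-device and sends only its parameter update;
    the server averages the updates, so raw data never leaves the device.
    """
    total = sum(client_sizes)
    return sum((n / total) * update
               for n, update in zip(client_sizes, client_updates))
```

In deployments like the one the article describes, each update would be encrypted before upload and the server would aggregate without seeing individual contributions in the clear.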