Abstract
The Internet of Medical Things (IoMT) is transforming healthcare by enabling smart devices to generate vast amounts of sensitive data. Federated learning (FL) allows collaborative model training on distributed IoMT data while preserving privacy, but conventional FL methods face critical challenges in this setting: IoMT devices are often bandwidth-constrained and heterogeneous, making communication efficiency a top priority; faulty client updates can severely degrade global model performance; and client data is frequently highly non-IID. This work proposes SEFA (Sparse & Efficient Federated Aggregation), which combines client-side dual compression with server-side two-stage robust aggregation. On the client side, SEFA applies top-k gradient sparsification and an adaptive 8-bit quantization scheme to drastically reduce uplink communication per round while incurring minimal accuracy loss. On the server side, SEFA performs a two-stage robust aggregation: first, Multi-Krum with neighbor selection identifies a subset of updates closest to the majority; second, a coordinate-wise median is computed over the selected updates to produce a fault-tolerant global model update. This design allows SEFA to achieve accuracy comparable to standard FL in IID settings while maintaining high accuracy under non-IID data with 30% faulty clients, outperforming baseline methods including FedAvg, coordinate-wise median (FedMedian), single-stage Multi-Krum, and a recent IoMT FL framework by a modest margin. SEFA is evaluated on three representative IoMT healthcare datasets (Pima Indians Diabetes, Body Performance, and Maternal Health Risk) using lightweight neural network models, demonstrating that SEFA offers a practical and robust solution for deploying federated learning in IoMT environments, with strong fault tolerance, efficiency, and minimal performance trade-offs.
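To make the two stages concrete, the following is a minimal NumPy sketch of the pipeline the abstract describes: top-k sparsification with uniform 8-bit quantization on the client, and Multi-Krum selection followed by a coordinate-wise median on the server. The function names, the uniform quantizer, and the Krum neighbor count (n − f − 2) are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep the k largest-magnitude gradient entries; drop the rest."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]  # only indices + values are transmitted

def quantize_8bit(values):
    """Uniform 8-bit quantization of the retained values into [0, 255]."""
    lo, hi = float(values.min()), float(values.max())
    scale = (hi - lo) / 255 if hi > lo else 1.0
    q = np.round((values - lo) / scale).astype(np.uint8)
    return q, lo, scale

def dequantize_8bit(q, lo, scale):
    """Reverse the quantization on the server."""
    return q.astype(np.float64) * scale + lo

def multi_krum_select(updates, f, m):
    """Select the m updates with the smallest Krum scores.
    Each update's score is the sum of squared distances to its
    n - f - 2 nearest neighbors (f = assumed number of faulty clients)."""
    n = len(updates)
    dists = np.array([[np.sum((u - v) ** 2) for v in updates] for u in updates])
    scores = []
    for i in range(n):
        nearest = np.sort(dists[i])[1:n - f - 1]  # skip self (distance 0)
        scores.append(nearest.sum())
    chosen = np.argsort(scores)[:m]
    return [updates[i] for i in chosen]

def sefa_aggregate(updates, f, m):
    """Two-stage robust aggregation: Multi-Krum selection, then
    coordinate-wise median over the selected updates."""
    selected = multi_krum_select(updates, f, m)
    return np.median(np.stack(selected), axis=0)
```

For example, with seven honest updates clustered near the true gradient and three large random faulty updates, `sefa_aggregate(updates, f=3, m=5)` discards the outliers in the Krum stage, and the median over the survivors stays close to the honest consensus.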
