A Comprehensive Survey on Split-Fed Learning: Methods, Innovations, and Future Directions

Document Type

Article

Publication Title

IEEE Access

Abstract

In this work, we survey Split-Fed Learning (SFL), a framework that combines the concepts of Federated Learning (FL) and Split Learning (SL) to provide privacy-aware and scalable training of machine learning models in settings with distributed data. As organizations increasingly need to take advantage of decentralized data sources without compromising security, the appeal of SFL becomes more apparent. We identify studies that assess SFL’s performance against FL (both conventional and naive approaches) and SL on several datasets. Experimental results across many studies show that SFL can reduce communication overhead and training time by a large margin while achieving competitive final-model accuracy. Notably, SFL delivered 30% faster training and 15% higher accuracy than baseline models, particularly under non-IID data distributions. Finally, a thorough breakdown of the SFL architecture explains its novelty in separating model training between clients and servers through model partitioning. This design not only improves the efficiency of resource utilization but also strengthens the resilience of the learning process against data heterogeneity. These results demonstrate the potential of SFL as a scalable and efficient approach to privacy-preserving machine learning, offering an alternative solution to challenges currently faced by organizations that handle sensitive data. To the best of our knowledge, this is the first survey of its kind for SFL, representing an initial step toward a theoretical foundation for future research on distributed learning frameworks and, subsequently, toward practical scenarios for deploying and testing SFL.
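
For readers unfamiliar with the mechanism the abstract describes, the following is a minimal illustrative sketch (not taken from the article) of a split-fed training round: each client computes a forward pass up to a cut layer, the server completes the forward and backward passes, gradients are returned to the client, and the client-side models are then federated-averaged. The network shapes, cut-layer placement, and helper names (ClientNet, ServerNet, fedavg) are assumptions for illustration only.

```python
# Illustrative Split-Fed Learning (SFL) sketch in PyTorch.
# Assumed, not from the surveyed paper: model split point, shapes, hyperparameters.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

class ClientNet(nn.Module):  # client-side portion, up to the cut layer
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Linear(20, 32), nn.ReLU())
    def forward(self, x):
        return self.layers(x)

class ServerNet(nn.Module):  # server-side portion, after the cut layer
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))
    def forward(self, h):
        return self.layers(h)

def fedavg(states):
    """Average client-side weights: the 'Fed' half of SFL."""
    avg = copy.deepcopy(states[0])
    for k in avg:
        avg[k] = torch.stack([s[k].float() for s in states]).mean(0)
    return avg

n_clients, rounds = 3, 5
server = ServerNet()
clients = [ClientNet() for _ in range(n_clients)]
# Synthetic per-client data shards standing in for distributed (non-IID) data.
data = [(torch.randn(64, 20), torch.randint(0, 2, (64,))) for _ in range(n_clients)]
opt_s = torch.optim.SGD(server.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for r in range(rounds):
    for c, (X, y) in zip(clients, data):
        opt_c = torch.optim.SGD(c.parameters(), lr=0.1)
        # Split step: client computes activations up to the cut layer ...
        h = c(X)
        h_server = h.detach().requires_grad_()  # "transmitted" to the server
        out = server(h_server)                  # server finishes the forward pass
        loss = loss_fn(out, y)
        opt_s.zero_grad(); opt_c.zero_grad()
        loss.backward()                         # server-side backward pass
        h.backward(h_server.grad)               # cut-layer gradient "returned" to client
        opt_s.step(); opt_c.step()
    # Federated averaging of the client-side models closes the round.
    avg = fedavg([c.state_dict() for c in clients])
    for c in clients:
        c.load_state_dict(avg)
    print(f"round {r}: last client loss = {loss.item():.3f}")
```

Only the cut-layer activations and their gradients cross the client-server boundary in this scheme, which is the source of the communication and privacy characteristics the abstract attributes to SFL.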

First Page

46312

Last Page

46333

DOI

10.1109/ACCESS.2025.3547641

Publication Date

1-1-2025
