Vol. 2, Issue 2, Part A (2025)
Machine learning for edge computing: Challenges and future prospects
Adrian Lim Wei Jun, Chloe Tan Hui Min and Marcus Ong Jian Hao
The rapid evolution of edge computing and machine learning (ML) has reshaped modern computing paradigms by enabling data processing at the network periphery, closer to data sources. This study investigates the performance, efficiency, and feasibility of deploying adaptive ML models in edge environments characterized by resource constraints and heterogeneous hardware. Through a systematic literature-based analysis supplemented by simulation and statistical evaluation, four strategies were compared across latency, energy consumption, and model accuracy metrics: Baseline (Static), Compression, Federated Learning with Compression (FL+Compression), and Adaptive Pipeline. Results revealed that the Adaptive Pipeline, which integrates compression, selective offloading, and asynchronous federated updates, achieved up to a 38% reduction in latency and 20-30% savings in energy with minimal accuracy loss (<2%). These improvements were validated using permutation-based ANOVA and pairwise tests, confirming statistically significant performance advantages over static models. The discussion highlights that the efficiency gains stem from the interplay between lightweight model architectures, energy-aware scheduling, and distributed learning optimization. Despite these advancements, security vulnerabilities and non-IID data challenges persist, underscoring the need for resilient federated frameworks and adversarial defense mechanisms. The study concludes that adaptive, context-aware ML pipelines represent the most practical approach to achieving low-latency, energy-efficient, and secure inference at the edge. It proposes actionable recommendations, including automated model management, hardware-software co-design, federated learning standardization, and energy-conscious runtime scheduling. Collectively, the findings provide a structured roadmap for researchers and practitioners seeking to optimize ML deployment within edge ecosystems, paving the way for scalable, intelligent, and sustainable edge computing infrastructures.
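To make the Adaptive Pipeline's selective offloading concrete, the sketch below shows one plausible decision policy in Python. The EdgeProfile fields, the latency budget, and the weighted latency/energy score are illustrative assumptions for this sketch, not the paper's exact algorithm.

```python
# Illustrative sketch of a selective-offloading policy: choose where to run
# inference from estimated per-request latency and energy costs.
from dataclasses import dataclass


@dataclass
class EdgeProfile:
    local_latency_ms: float   # estimated on-device inference latency
    cloud_latency_ms: float   # estimated round-trip plus server latency
    local_energy_mj: float    # estimated on-device energy cost
    tx_energy_mj: float       # estimated radio energy to offload the request


def choose_execution_site(p: EdgeProfile,
                          latency_budget_ms: float = 50.0,
                          energy_weight: float = 0.5) -> str:
    """Pick 'local' or 'offload' using a weighted latency/energy score."""
    def score(latency_ms: float, energy_mj: float) -> float:
        return (1.0 - energy_weight) * latency_ms + energy_weight * energy_mj

    # Hard latency constraint first, then the weighted score.
    if (p.local_latency_ms > latency_budget_ms
            and p.cloud_latency_ms <= latency_budget_ms):
        return "offload"
    local = score(p.local_latency_ms, p.local_energy_mj)
    remote = score(p.cloud_latency_ms, p.tx_energy_mj)
    return "local" if local <= remote else "offload"


# Example: slow on-device inference, fast network -> offload.
print(choose_execution_site(EdgeProfile(80.0, 35.0, 12.0, 20.0)))  # "offload"
```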
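Similarly, the FL+Compression strategy can be pictured as clients sending compressed model updates that the server applies with a staleness-aware weight, consistent with the asynchronous federated updates described above. The top-k sparsifier and the 1/(1 + staleness) scaling below are common choices assumed for illustration; the abstract does not specify the compressor or aggregation rule.

```python
# Sketch of one asynchronous federated step with top-k update sparsification.
import numpy as np


def top_k_sparsify(update: np.ndarray, k_frac: float = 0.1) -> np.ndarray:
    """Keep only the largest-magnitude k_frac of entries; zero the rest."""
    flat = update.ravel()
    k = max(1, int(k_frac * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of k largest
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(update.shape)


def apply_async_update(global_weights: np.ndarray,
                       client_update: np.ndarray,
                       staleness: int,
                       lr: float = 0.1) -> np.ndarray:
    """Apply a compressed client update, down-weighting stale contributions."""
    scale = lr / (1.0 + staleness)
    return global_weights + scale * top_k_sparsify(client_update)
```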
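Finally, the permutation-based ANOVA used for validation follows a standard recipe: compute the observed F statistic, then repeatedly shuffle the group labels and count how often a permuted F matches or exceeds it. The routine below is a generic implementation with synthetic latency samples, not the authors' code or data.

```python
# Minimal permutation-based one-way ANOVA over strategy latency samples.
import numpy as np


def f_statistic(groups: list[np.ndarray]) -> float:
    """Classic one-way ANOVA F statistic for a list of 1-D samples."""
    pooled = np.concatenate(groups)
    grand_mean = pooled.mean()
    k, n = len(groups), pooled.size
    ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))


def permutation_anova(groups, n_perm=5_000, seed=0):
    """Return (observed F, permutation p-value) for the group difference."""
    rng = np.random.default_rng(seed)
    sizes = [g.size for g in groups]
    pooled = np.concatenate(groups)
    f_obs = f_statistic(groups)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of all observations
        perm_groups = np.split(pooled, np.cumsum(sizes)[:-1])
        exceed += f_statistic(perm_groups) >= f_obs
    return f_obs, (exceed + 1) / (n_perm + 1)


# Synthetic latency samples (ms) for four strategies, for demonstration only.
rng = np.random.default_rng(1)
groups = [rng.normal(loc=m, scale=5.0, size=30) for m in (100, 95, 90, 62)]
f, p = permutation_anova(groups)
print(f"F = {f:.2f}, permutation p = {p:.4f}")
```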
Pages: 104-109