Vol. 2, Issue 2, Part A (2025)

Energy-efficient deep learning models for edge devices

Author(s):

Aditya Pramana and Siti Rahmawati

Abstract:

The rapid expansion of the Internet of Things (IoT) and edge computing ecosystems has intensified the demand for deploying deep learning models on low-power devices with limited computational capacity. Traditional deep neural networks, though highly accurate, are typically resource-intensive and unsuitable for real-time inference on embedded systems. This study presents a hybrid optimization framework that integrates model compression, quantization, and adaptive inference mechanisms to achieve energy-efficient deep learning on heterogeneous edge hardware. Experimental evaluations were conducted on multiple platforms, including the NVIDIA Jetson Nano, Raspberry Pi 4, Google Coral Dev Board, and an ARM Cortex-M7 microcontroller, using benchmark datasets such as CIFAR-10 and ImageNet. Statistical analysis using ANOVA and pairwise comparison tests confirmed significant improvements in energy efficiency across all configurations. The proposed hybrid model achieved up to a 42% reduction in energy consumption compared with Once-for-All (OFA) networks while keeping accuracy losses within 1-2%, thereby validating the hypothesis that hybrid static-dynamic optimization can deliver sustainable performance without sacrificing prediction quality. Furthermore, the adaptive inference feature dynamically adjusted computational depth based on input complexity, leading to improved accuracy-per-joule ratios and consistent latency. These results demonstrate the potential of integrating hardware-aware neural architecture design with runtime adaptability to bridge the gap between computational capability and energy sustainability. The findings not only contribute to the growing field of green artificial intelligence but also establish practical design principles for scalable, environmentally friendly deployment of AI systems at the network edge.
The research concludes by recommending hardware-software co-design practices, runtime-aware architectures, and policy frameworks emphasizing energy efficiency as a key criterion for future AI innovation.
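The adaptive inference mechanism described above, in which computational depth is adjusted per input, can be illustrated with a toy early-exit sketch. The class and weight names below are hypothetical, not the authors' actual architecture: a shallow stage produces a prediction through a cheap exit head, and the deeper (costlier) head runs only when the early head's confidence falls below a threshold.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D logit vector.
    e = np.exp(z - z.max())
    return e / e.sum()

class EarlyExitNet:
    """Toy two-head classifier illustrating confidence-gated early exit.

    If the early head's softmax confidence reaches `threshold`, inference
    stops after the shallow stage, skipping the deeper head entirely.
    All names here are illustrative, not the paper's implementation.
    """

    def __init__(self, w_shallow, w_exit, w_deep, threshold=0.9):
        self.w_shallow = w_shallow  # input features -> hidden layer
        self.w_exit = w_exit        # hidden -> classes (cheap exit head)
        self.w_deep = w_deep        # hidden -> classes (full-depth head)
        self.threshold = threshold  # confidence gate for the early exit

    def predict(self, x):
        h = np.maximum(x @ self.w_shallow, 0.0)  # ReLU hidden activations
        p_exit = softmax(h @ self.w_exit)
        if p_exit.max() >= self.threshold:
            # "Easy" input: confident early prediction, deep head skipped.
            return int(p_exit.argmax()), "early"
        # "Hard" input: fall through to the more expensive head.
        p_deep = softmax(h @ self.w_deep)
        return int(p_deep.argmax()), "deep"

# Fixed toy weights: a sharply scaled exit head makes confident inputs
# trigger the early path, while low-magnitude inputs stay ambiguous.
net = EarlyExitNet(
    w_shallow=np.eye(2),
    w_exit=np.array([[5.0, 0.0], [0.0, 5.0]]),
    w_deep=np.eye(2),
)

label_easy, path_easy = net.predict(np.array([2.0, 0.0]))  # high confidence
label_hard, path_hard = net.predict(np.array([0.1, 0.1]))  # ambiguous input
```

In a real deployment the same gate would be combined with quantized weights, so easy inputs consume both fewer multiply-accumulates and lower-precision arithmetic, which is the accuracy-per-joule trade-off the abstract reports.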

Pages: 110-118

How to cite this article:
Aditya Pramana and Siti Rahmawati. Energy-efficient deep learning models for edge devices. J. Mach. Learn. Data Sci. Artif. Intell. 2025;2(2):110-118.