Abstract
The rapid growth of connected devices and associated data has increased pressure on cloud-edge infrastructures, where cloud servers (CSs) saturate as the number of devices scales. Most existing studies minimize execution time but overlook system scalability, and many simulation tools cannot model how performance bottlenecks evolve as systems grow. In this work, we propose a server capacity-driven methodology using the VisualSim simulator that improves scalability by reducing the utilization of CSs and edge servers (ESs), enabling additional devices to be supported without infrastructure upgrades. The methodology proceeds in three stages: saturating the CSs with computations to determine their maximum capacity, offloading computations to ESs to assess their limits, and distributing computations across CSs and ESs to create additional headroom. Scalability is then measured by integrating new device data into the system for processing. The approach is evaluated on three heterogeneous systems, System-1, System-2, and System-3, with 20, 30, and 40 devices, respectively, and compared against Greedy Earliest Finish Time (EFT), a Genetic Algorithm (GA), and Particle Swarm Optimization (PSO). The experimental results show that our method enables System-1, System-2, and System-3 to process 66.67%, 62.50%, and 60% more device data, respectively, while reducing execution time by up to 48.40% and energy consumption by up to 12.50% for System-3. The proposed method consistently outperforms Greedy EFT, GA, and PSO, achieving up to 40% faster execution, 16% energy savings, and 7% lower CS utilization. This methodology offers a generalizable approach for analyzing the scalability of complex systems used in defense, healthcare, and other fields.
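To make the capacity-driven idea concrete, the sketch below illustrates one way such a distribution step could be expressed: tasks are placed on the least-utilized CS until a utilization threshold is reached, after which they are offloaded to ESs. This is a minimal illustration under assumed names and numbers, not the paper's VisualSim model; the `Server` class, the `distribute` function, the 0.85 threshold, and all workload figures are hypothetical.

```python
# Minimal sketch of a capacity-driven distribution step (illustrative only,
# not the paper's VisualSim methodology). Tasks go to cloud servers (CSs)
# until an assumed utilization threshold is reached, then are offloaded to
# edge servers (ESs). All names, units, and values are hypothetical.

from dataclasses import dataclass, field


@dataclass
class Server:
    name: str
    capacity: float            # maximum load the server can absorb (assumed units)
    load: float = 0.0          # load currently assigned
    tasks: list = field(default_factory=list)

    def utilization(self) -> float:
        return self.load / self.capacity

    def can_accept(self, demand: float, threshold: float) -> bool:
        return (self.load + demand) / self.capacity <= threshold


def distribute(tasks, cloud_servers, edge_servers, threshold=0.85):
    """Assign each task to the least-utilized CS below the threshold,
    otherwise offload it to the least-utilized ES; return unplaced tasks."""
    unplaced = []
    for name, demand in tasks:                      # each task: (id, load demand)
        for pool in (cloud_servers, edge_servers):  # prefer CSs, then fall back to ESs
            candidates = [s for s in pool if s.can_accept(demand, threshold)]
            if candidates:
                target = min(candidates, key=Server.utilization)
                target.load += demand
                target.tasks.append(name)
                break
        else:
            unplaced.append((name, demand))         # both tiers have saturated
    return unplaced


if __name__ == "__main__":
    # Toy workload: 20 devices against two CSs and two ESs (invented numbers).
    cs = [Server("CS-1", 100.0), Server("CS-2", 100.0)]
    es = [Server("ES-1", 40.0), Server("ES-2", 40.0)]
    workload = [(f"dev-{i}", 12.0) for i in range(20)]
    leftover = distribute(workload, cs, es)
    for s in cs + es:
        print(f"{s.name}: {s.utilization():.0%} utilized, {len(s.tasks)} tasks")
    print(f"unplaced tasks: {len(leftover)}")
```

In this sketch, the threshold plays the role of the capacity limit that the paper's methodology probes by saturating the CSs first; lowering it offloads more work to the ESs and leaves headroom for additional device data.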
