Abstract
Modern vehicles have evolved into sophisticated mobile computing platforms executing computationally intensive and latency-sensitive applications. However, on-board units are limited in computational and storage capabilities. Mobile edge computing mitigates this issue, but the complex application structures, high computational demands, and limited resources of intelligent vehicles still present significant challenges. To address task offloading and resource optimization for dependent tasks in vehicular applications under dynamically changing computational loads, this paper proposes a deep reinforcement learning-based method called the joint multi-dimensional decision algorithm for dependent task offloading and resource allocation (JMDD-DTORA). The method models applications as directed acyclic graphs and schedules tasks layer by layer using a hierarchical concurrent task scheduling scheme. A three-level offloading framework dynamically adjusts task offloading paths to ensure load balancing and resource optimization, and the deep deterministic policy gradient algorithm is improved for joint optimization of task offloading and resource allocation. Simulation results show that JMDD-DTORA outperforms baseline methods on key metrics, including task offloading cost, demonstrating its effectiveness in the Internet of Vehicles environment.
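To illustrate the layering idea behind hierarchical concurrent scheduling of dependent tasks, the sketch below groups a DAG's tasks into layers by topological level: every task in a layer has all of its dependencies in earlier layers, so tasks within one layer can be dispatched concurrently. This is a minimal, generic sketch for intuition only, not the paper's JMDD-DTORA algorithm; the task names and the `layer_schedule` helper are hypothetical.

```python
from collections import defaultdict


def layer_schedule(tasks, deps):
    """Partition DAG tasks into concurrent layers.

    tasks: iterable of task identifiers.
    deps:  list of (u, v) edges meaning task v depends on task u.
    Returns a list of layers; tasks in the same layer have no
    unresolved dependencies and may be offloaded in parallel.
    """
    indegree = {t: 0 for t in tasks}
    successors = defaultdict(list)
    for u, v in deps:
        indegree[v] += 1
        successors[u].append(v)

    layers = []
    # Layer 0: tasks with no dependencies at all.
    current = [t for t in tasks if indegree[t] == 0]
    while current:
        layers.append(sorted(current))
        nxt = []
        for u in current:
            for v in successors[u]:
                indegree[v] -= 1
                if indegree[v] == 0:  # all of v's parents are scheduled
                    nxt.append(v)
        current = nxt
    return layers


# Example: a diamond-shaped application DAG (hypothetical tasks).
layers = layer_schedule(
    ["decode", "detect", "track", "fuse"],
    [("decode", "detect"), ("decode", "track"),
     ("detect", "fuse"), ("track", "fuse")],
)
print(layers)  # [['decode'], ['detect', 'track'], ['fuse']]
```

In a real offloading framework each layer would then be assigned across the local unit, edge servers, and the cloud subject to load and resource constraints, which is the joint decision the paper optimizes with deep reinforcement learning.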
