
Deep reinforcement learning‐based joint optimization of computation offloading and resource allocation in F‐RAN

Abstract
The fog radio access network (F‐RAN) has been regarded as a promising wireless access network architecture in fifth generation (5G) and beyond systems to satisfy the increasing requirements for low‐latency and high‐throughput services by providing fog computing. However, because the cloud computing centre and the fog computing‐enabled access points (F‐APs) in the F‐RAN have different computation and communication capabilities, it is crucial to devise an efficient computation offloading and resource allocation strategy that can fully exploit the potential of the F‐RAN system. In this paper, the authors investigate a decentralized, low‐complexity deep reinforcement learning (DRL)‐based framework for joint computation task offloading and resource allocation in the F‐RAN, which supports assistive computing‐enabled task offloading between F‐APs. Considering the constraints of task latency, wireless transmission rate, transmission power, and computational resource capacity, the authors formulate a system processing efficiency maximization problem by jointly optimizing offloading mode selection, channel allocation, power control, and computation resource allocation in the F‐RAN. To solve this non‐linear, non‐convex problem, the authors propose a federated DRL‐based computation offloading and resource allocation algorithm that improves task processing efficiency and preserves privacy, and that significantly reduces the computational complexity and signalling overhead of the training process compared with a centralized learning‐based method. Specifically, each local F‐AP agent combines a dueling deep Q‐network (DDQN) and a deep deterministic policy gradient (DDPG) network to handle the discrete and continuous action spaces, respectively. Finally, simulation results show that the proposed federated DRL algorithm achieves significant performance improvements in system processing efficiency and task latency compared with other benchmarks.
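The abstract does not give the agents' exact network architectures, but the split it describes can be illustrated with a minimal sketch: a dueling decomposition Q(s, a) = V(s) + A(s, a) − mean_a A(s, a) scores a discrete choice (e.g. an offloading mode), while a deterministic policy maps state features to a continuous control (e.g. a transmit power level). The sizes, the three-mode action set, `p_max`, and the sigmoid squashing below are illustrative assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(0)

def dueling_q(value, advantages):
    # Dueling decomposition: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    # Subtracting the mean advantage keeps V and A identifiable.
    return value + advantages - advantages.mean()

# Hypothetical discrete action set: 3 offloading modes (local / neighbour F-AP / cloud).
advantages = np.array([1.0, 3.0, 2.0])        # A(s, a) per offloading mode
q_values = dueling_q(value=0.5, advantages=advantages)
mode = int(np.argmax(q_values))               # DDQN-style branch: greedy discrete choice

# DDPG-style branch: a deterministic policy maps state features to one continuous
# power level, squashed into [0, p_max] with a sigmoid (illustrative only).
p_max = 1.0
state = rng.standard_normal(4)                # hypothetical state features
w = rng.standard_normal(4) * 0.1              # hypothetical policy weights
power = p_max / (1.0 + np.exp(-(state @ w)))
```

The joint action the environment receives is then the pair `(mode, power)`, which is why a single network family handling only discrete or only continuous actions would not suffice.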
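The abstract states that federated training reduces signalling overhead and preserves privacy but does not spell out the aggregation rule. A common choice, shown here only as an assumed sketch, is FedAvg-style weighted parameter averaging: each F‐AP agent trains on its local tasks and shares only model weights, never raw task data.

```python
import numpy as np

def federated_average(local_weights, sample_counts):
    # FedAvg-style aggregation: weight each agent's parameter vector by its
    # local sample count; only parameters cross the network, so raw task
    # data never leaves the F-AP.
    counts = np.asarray(sample_counts, dtype=float)
    stacked = np.stack(local_weights)                 # (n_agents, n_params)
    return (counts[:, None] * stacked).sum(axis=0) / counts.sum()

# Three hypothetical F-AP agents with locally trained weight vectors.
w1, w2, w3 = np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])
global_w = federated_average([w1, w2, w3], sample_counts=[1, 1, 2])
# (1*w1 + 1*w2 + 2*w3) / 4 = [3.5, 4.5]
```

After each aggregation round the averaged weights would be broadcast back to the agents, which is where the signalling savings over shipping raw experience to a central trainer come from.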

Related Results

Secured Computation Offloading in Multi-Access Mobile Edge Computing Networks through Deep Reinforcement Learning
Mobile edge computing (MEC) has emerged as a pivotal technology to address the computational demands of resource-constrained mobile devices by offloading tasks to nearby edge serve...
Confidence Guides Spontaneous Cognitive Offloading
Background: Cognitive offloading is the use of physical action to reduce the cognitive demands of a task. Everyday memory relies heavily on this practice, for example when we write...
An Energy-efficient Task Offloading Model based on Trust Mechanism and Multi-agent Reinforcement Learning
Abstract A task offloading model based on deep reinforcement learning and user experience degree is proposed. Firstly, after users generate blockchain tasks, Proof of Work ...
Shuttle Tankers in the Oil Export of Cascade and Chinook Fields
Abstract This paper is based on the work performed during the offloading operations in the Cascade and Chinook fields in Ultra Deep Waters of the U.S. Gulf of Mex...
QoE Aware and Cell Capacity Enhanced Computation Offloading for Multi-Server Mobile Edge Computing Systems with Energy Harvesting Devices
The increasing complexity of intelligent services requires new paradigm to overcome the problems caused by resource-limited mobile devices. Mobile edge computing systems with energ...
A Novel SDN-Based Architecture of Task Offloading in Mobile Ad-Hoc Cloud
As the core function of mobile Ad-hoc cloud, task offloading has always been a research hotspot of mobile cloud computing, and the construction, offloading decision, task division ...
SRA-E-ABCO: Terminal Task Offloading for Cloud-Edge-End Environments
Abstract With the rapid development of Internet technology, the cloud-edge-end computing model has gradually become an essential new computing model. Under this model, term...
Dependency-Aware Task Offloading for Vehicular Edge Computing with End-Edge-Cloud Collaborative Computing
Abstract Vehicular edge computing (VEC) is emerging as a new computing paradigm to improve the quality of vehicular services and enhance the capabilities of vehicles. It ai...
