A Novel Task Provisioning Approach Fusing Reinforcement Learning for Big Data
Large-scale task processing for big data using cloud computing has become a hot research topic. Most previous work on task processing is customized directly from existing methods, which can result in longer system response times, higher algorithm complexity, and resource waste...
Main Authors: | Yongyi Cheng, Gaochao Xu |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2019-01-01 |
Series: | IEEE Access |
Subjects: | Large-scale tasks; big data; two-phase optimization; reinforcement learning; fat-tree |
Online Access: | https://ieeexplore.ieee.org/document/8846672/ |
_version_ | 1819073716555874304 |
---|---|
author | Yongyi Cheng Gaochao Xu |
author_facet | Yongyi Cheng Gaochao Xu |
author_sort | Yongyi Cheng |
collection | DOAJ |
description | Large-scale task processing for big data using cloud computing has become a hot research topic. Most previous work on task processing is customized directly from existing methods, which can result in longer system response times, higher algorithm complexity, and resource waste. Motivated by this, and aiming to realize overall load balancing, bandwidth cost minimization, and energy conservation while satisfying resource requirements, a novel large-scale task processing approach called TOPE (Two-phase Optimization for Parallel Execution) is developed. A deep reinforcement learning model is designed for virtual link mapping decisions. We treat the whole network as a multi-agent system, and the process of selecting each node's next-hop node is formalized as a Markov decision process. We train the learning agent with a deep neural network that approximates the value function, storing only the parameters of the deep network model rather than a huge number of state-action values. Virtual node mapping is achieved by a purpose-designed distributed multi-objective swarm intelligence algorithm, realizing our two-phase optimization for task allocation in a Fat-tree topology. We provide experiments that show the ability of TOPE to analyze task requests and the infrastructure network. The superiority of TOPE for large-scale task processing is convincingly demonstrated by comparison with state-of-the-art approaches in a cloud environment. |
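The abstract above describes next-hop selection as a Markov decision process with a deep network approximating the value function instead of a state-action table. The paper's actual model is not reproduced in this record; the following minimal Python/PyTorch sketch only illustrates that general idea under assumed, hypothetical names and dimensions (STATE_DIM, NUM_NEIGHBORS, the placeholder reward), not the authors' TOPE implementation.

```python
# Minimal sketch (assumption, not the authors' code): each node-agent picks a
# next-hop node via a small Q-network that approximates Q(state, next_hop).
import random
import torch
import torch.nn as nn

STATE_DIM = 8      # hypothetical features: residual bandwidth, load, hop count, ...
NUM_NEIGHBORS = 4  # hypothetical fan-out of a switch in the Fat-tree

class NextHopQNet(nn.Module):
    """Approximates the value of choosing each candidate next hop."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, NUM_NEIGHBORS),   # one Q-value per candidate next hop
        )

    def forward(self, state):
        return self.net(state)

def select_next_hop(qnet, state, epsilon=0.1):
    """Epsilon-greedy choice over the candidate next-hop nodes."""
    if random.random() < epsilon:
        return random.randrange(NUM_NEIGHBORS)
    with torch.no_grad():
        return int(qnet(state).argmax().item())

def td_update(qnet, optimizer, state, action, reward, next_state, gamma=0.9):
    """One temporal-difference step toward r + gamma * max_a' Q(s', a')."""
    q_pred = qnet(state)[action]
    with torch.no_grad():
        q_target = reward + gamma * qnet(next_state).max()
    loss = (q_pred - q_target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

if __name__ == "__main__":
    qnet = NextHopQNet()
    optimizer = torch.optim.Adam(qnet.parameters(), lr=1e-3)
    state = torch.rand(STATE_DIM)       # placeholder link/load features
    action = select_next_hop(qnet, state)
    next_state = torch.rand(STATE_DIM)  # placeholder post-transition features
    reward = 1.0                        # placeholder; e.g., lower bandwidth cost
    td_update(qnet, optimizer, state, action, reward, next_state)
```

Only the network parameters are learned and stored, which is the point the description makes about avoiding "tons of state-action values" for a large infrastructure network.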
first_indexed | 2024-12-21T17:58:03Z |
format | Article |
id | doaj.art-2f05473df3af473b9aa1b6a7195bbb57 |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-12-21T17:58:03Z |
publishDate | 2019-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-2f05473df3af473b9aa1b6a7195bbb57; 2022-12-21T18:55:08Z; eng; IEEE; IEEE Access; ISSN 2169-3536; 2019-01-01; vol. 7, pp. 143699-143709; DOI 10.1109/ACCESS.2019.2943193; IEEE document 8846672; A Novel Task Provisioning Approach Fusing Reinforcement Learning for Big Data; Yongyi Cheng (https://orcid.org/0000-0002-8300-8950); Gaochao Xu; College of Computer Science and Technology, Jilin University, Changchun, China (both authors); https://ieeexplore.ieee.org/document/8846672/; Large-scale tasks; big data; two-phase optimization; reinforcement learning; fat-tree |
spellingShingle | Yongyi Cheng Gaochao Xu A Novel Task Provisioning Approach Fusing Reinforcement Learning for Big Data IEEE Access Large-scale tasks big data two-phase optimization reinforcement learning fat-tree |
title | A Novel Task Provisioning Approach Fusing Reinforcement Learning for Big Data |
title_full | A Novel Task Provisioning Approach Fusing Reinforcement Learning for Big Data |
title_fullStr | A Novel Task Provisioning Approach Fusing Reinforcement Learning for Big Data |
title_full_unstemmed | A Novel Task Provisioning Approach Fusing Reinforcement Learning for Big Data |
title_short | A Novel Task Provisioning Approach Fusing Reinforcement Learning for Big Data |
title_sort | novel task provisioning approach fusing reinforcement learning for big data |
topic | Large-scale tasks big data two-phase optimization reinforcement learning fat-tree |
url | https://ieeexplore.ieee.org/document/8846672/ |
work_keys_str_mv | AT yongyicheng anoveltaskprovisioningapproachfusingreinforcementlearningforbigdata AT gaochaoxu anoveltaskprovisioningapproachfusingreinforcementlearningforbigdata AT yongyicheng noveltaskprovisioningapproachfusingreinforcementlearningforbigdata AT gaochaoxu noveltaskprovisioningapproachfusingreinforcementlearningforbigdata |