Fractal Parallel Computing

Bibliographic Details
Main Authors: Yongwei Zhao, Yunji Chen, Zhiwei Xu
Format: Article
Language: English
Published: American Association for the Advancement of Science (AAAS), 2022-01-01
Series: Intelligent Computing
Online Access:https://spj.science.org/doi/10.34133/2022/9797623
collection DOAJ
description As machine learning (ML) becomes the prominent technology for many emerging problems, dedicated ML computers are being developed at a variety of scales, from clouds to edge devices. However, the heterogeneous, parallel, and multilayer characteristics of conventional ML computers concentrate the cost of development on the software stack, namely, ML frameworks, compute libraries, and compilers, which limits the productivity of new ML computers. Fractal von Neumann architecture (FvNA) is proposed to address the programming productivity issue for ML computers. FvNA is scale-invariant to program, thus making the development of a family of scaled ML computers as easy as a single node. In this study, we generalize FvNA to the field of general-purpose parallel computing. We model FvNA as an abstract parallel computer, referred to as the fractal parallel machine (FPM), to demonstrate several representative general-purpose tasks that are efficiently programmable. FPM limits the entropy of programming by applying constraints on the control pattern of the parallel computing systems. However, FPM is still general-purpose and cost-optimal. We settle some preliminary results showing that FPM is as powerful as many fundamental parallel computing models such as BSP and alternating Turing machine. Therefore, FvNA is also generally applicable to various fields other than ML.
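The description's central idea, that a fractal machine is "scale-invariant to program" because one and the same node program runs at every scale, can be illustrated with a toy recursion. This sketch is not from the paper; the names `fractal_reduce`, `FANOUT`, and `LEAF_SIZE` are hypothetical, chosen only to show the control pattern.

```python
# Illustrative sketch of scale-invariant (fractal) execution: a node
# either computes a small task directly, or splits it among FANOUT
# identically programmed children and combines their partial results.
# The program never changes when the machine grows or shrinks.

FANOUT = 4       # children per node at every scale (assumed value)
LEAF_SIZE = 8    # below this size, compute directly (assumed value)

def fractal_reduce(data, combine, leaf_op):
    """Run the same node program at every scale of the machine."""
    if len(data) <= LEAF_SIZE:
        return leaf_op(data)                      # leaf-scale work
    step = -(-len(data) // FANOUT)                # ceiling division
    parts = [data[i:i + step] for i in range(0, len(data), step)]
    # Each "child" executes the very same routine: scale invariance.
    partials = [fractal_reduce(p, combine, leaf_op) for p in parts]
    result = partials[0]
    for p in partials[1:]:
        result = combine(result, p)               # merge child results
    return result

total = fractal_reduce(list(range(100)), lambda a, b: a + b, sum)
```

A programmer writes only the single-node logic (`leaf_op` and `combine`); the same code then drives one node or a whole scaled family of them, which is the productivity claim the abstract makes for FvNA.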
issn 2771-5892
affiliations Yongwei Zhao: State Key Lab of Processors, ICT, CAS, China; Yunji Chen: State Key Lab of Processors, ICT, CAS, China; Zhiwei Xu: University of CAS, China