Custom Hardware Inference Accelerator for TensorFlow Lite for Microcontrollers

In recent years, the need for efficient deployment of Neural Networks (NN) on edge devices has been steadily increasing. However, the high computational demand of Machine Learning (ML) inference on tiny microcontroller-based IoT devices prevents direct software deployment on such resou...
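The article targets the TensorFlow Lite for Microcontrollers (TFLM) runtime. For orientation, the sketch below shows a standard software-only TFLM inference flow on a microcontroller, i.e. the baseline that a custom hardware accelerator would offload. It is a minimal sketch, not the authors' implementation: the model symbol g_model_data, the function RunInference, the tensor-arena size, and the operator set are illustrative assumptions.

    // Minimal TFLM inference sketch (software baseline, no custom accelerator).
    // Assumes a quantized int8 model compiled into the binary as g_model_data (hypothetical).
    #include <cstdint>

    #include "tensorflow/lite/micro/micro_interpreter.h"
    #include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
    #include "tensorflow/lite/schema/schema_generated.h"

    extern const unsigned char g_model_data[];   // hypothetical flatbuffer model array

    constexpr int kTensorArenaSize = 16 * 1024;  // illustrative; actual size is model-dependent
    alignas(16) static uint8_t tensor_arena[kTensorArenaSize];

    int RunInference(const int8_t* input_data, int input_len) {
      // Map the flatbuffer model baked into flash.
      const tflite::Model* model = tflite::GetModel(g_model_data);

      // Register only the operators the model actually uses.
      tflite::MicroMutableOpResolver<3> resolver;
      resolver.AddConv2D();
      resolver.AddFullyConnected();
      resolver.AddSoftmax();

      tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                           kTensorArenaSize);
      if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

      // Copy the quantized input and run the graph entirely in software.
      TfLiteTensor* input = interpreter.input(0);
      for (int i = 0; i < input_len; ++i) input->data.int8[i] = input_data[i];

      if (interpreter.Invoke() != kTfLiteOk) return -1;

      // Return the index of the highest-scoring output class.
      TfLiteTensor* output = interpreter.output(0);
      int best = 0;
      for (int i = 1; i < output->dims->data[output->dims->size - 1]; ++i) {
        if (output->data.int8[i] > output->data.int8[best]) best = i;
      }
      return best;
    }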


Bibliographic Details
Main Authors: Erez Manor, Shlomo Greenberg
Format: Article
Language: English
Published: IEEE, 2022-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9825651/