DMS: Dynamic Model Scaling for Quality-Aware Deep Learning Inference in Mobile and Embedded Devices
Recently, deep learning has brought revolutions to many mobile and embedded systems that interact with the physical world using continuous video streams. Although there have been significant efforts to reduce the computational overheads of deep learning inference in such systems, previous approaches...
| Main Authors | Woochul Kang, Daeyeon Kim, Junyoung Park |
|---|---|
| Format | Article |
| Language | English |
| Published | IEEE, 2019-01-01 |
| Series | IEEE Access |
| Online Access | https://ieeexplore.ieee.org/document/8907822/ |
Similar Items
- Deep Neural Network Compression Technique Towards Efficient Digital Signal Modulation Recognition in Edge Device
  by: Ya Tu, et al.
  Published: (2019-01-01)
- Processing at the Edge: A Case Study with an Ultrasound Sensor-Based Embedded Smart Device
  by: Jose-Luis Poza-Lujan, et al.
  Published: (2022-02-01)
- Efficient federated learning on resource-constrained edge devices based on model pruning
  by: Tingting Wu, et al.
  Published: (2023-06-01)
- QoS-Aware Inference Acceleration Using Adaptive Depth Neural Networks
  by: Woochul Kang
  Published: (2024-01-01)
- Target Capacity Filter Pruning Method for Optimized Inference Time Based on YOLOv5 in Embedded Systems
  by: Jihun Jeon, et al.
  Published: (2022-01-01)