Deep learning acceleration: from quantization to in-memory computing
Deep learning has demonstrated high accuracy and efficiency in various applications. For example, Convolutional Neural Networks (CNNs), widely adopted in Computer Vision (CV), and Transformers, broadly applied in Natural Language Processing (NLP), are representative deep learning models. Deep learning m...
Main Author: Zhu, Shien
Other Authors: Weichen Liu
Format: Thesis - Doctor of Philosophy
Language: English
Published: Nanyang Technological University, 2022
Online Access: https://hdl.handle.net/10356/163448
Similar Items
- iMAD: an in-memory accelerator for AdderNet with efficient 8-bit addition and subtraction operations
  by: Zhu, Shien, et al.
  Published: (2022)
- FAT: an in-memory accelerator with fast addition for ternary weight neural networks
  by: Zhu, Shien, et al.
  Published: (2022)
- Deep neuromorphic controller with dynamic topology for aerial robots
  by: Dhanetwal, Manish
  Published: (2021)
- Development of a computer-aided design environment for hardware oriented design
  by: Hong, Teck Huat
  Published: (2008)
- A predictive maintenance framework for data analysis and resource optimization in industrial IoT
  by: Ong, Kevin Shen Hoong
  Published: (2022)