AnalogVNN: A fully modular framework for modeling and optimizing photonic neural networks


Bibliographic Details
Main Authors: Vivswan Shah, Nathan Youngblood
Format: Article
Language: English
Published: AIP Publishing LLC 2023-06-01
Series: APL Machine Learning
Online Access: http://dx.doi.org/10.1063/5.0134156
Description
Summary: In this paper, we present AnalogVNN, a simulation framework built on PyTorch that can simulate the effects of optoelectronic noise, limited precision, and signal normalization present in photonic neural network accelerators. We use this framework to train and optimize linear and convolutional neural networks with up to nine layers and ∼1.7 × 10⁶ parameters, while gaining insights into how normalization, activation function, reduced precision, and noise influence accuracy in analog photonic neural networks. By following the same layer structure design present in PyTorch, the AnalogVNN framework allows users to convert most digital neural network models to their analog counterparts with just a few lines of code, taking full advantage of the open-source optimization, deep learning, and GPU acceleration libraries available through PyTorch.
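The conversion described in the summary can be illustrated with a conceptual sketch. Note this is not AnalogVNN's actual API (its class names and signatures may differ); the `ReducePrecision` and `GaussianNoise` modules below are hypothetical helpers showing the general idea of wrapping a standard PyTorch layer with reduced-precision and noise stages, as an analog photonic accelerator model would:

```python
# Conceptual sketch only: AnalogVNN's real API may differ. This shows the idea
# of inserting limited-precision and optoelectronic-noise stages around a
# standard digital PyTorch layer.
import torch
import torch.nn as nn


class ReducePrecision(nn.Module):
    """Quantize values to 2**bits uniform levels (hypothetical helper)."""

    def __init__(self, bits: int):
        super().__init__()
        self.levels = 2 ** bits - 1

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Snap each value to the nearest representable level.
        return torch.round(x * self.levels) / self.levels


class GaussianNoise(nn.Module):
    """Additive Gaussian noise modeling optoelectronic noise (hypothetical helper)."""

    def __init__(self, std: float):
        super().__init__()
        self.std = std

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + torch.randn_like(x) * self.std


# A digital nn.Linear layer wrapped with analog-style effects on its
# inputs and outputs -- the "few lines of code" conversion in spirit.
analog_linear = nn.Sequential(
    ReducePrecision(bits=4),
    GaussianNoise(std=0.01),
    nn.Linear(8, 4),
    ReducePrecision(bits=4),
    GaussianNoise(std=0.01),
)

x = torch.rand(2, 8)
y = analog_linear(x)
print(y.shape)
```

Because each stage is an ordinary `nn.Module`, the wrapped layer trains with the same optimizers, autograd machinery, and GPU acceleration as any other PyTorch model, which is the design advantage the summary highlights.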
ISSN:2770-9019