Investigation of Memristor-Based Neural Networks on Pattern Recognition

Bibliographic Details
Main Authors: Gayatri Routhu, Ngangbam Phalguni Singh, Selvakumar Raja, Eppala Shashi Kumar Reddy
Format: Article
Language: English
Published: MDPI AG, 2023-03-01
Series: Engineering Proceedings
Online Access: https://www.mdpi.com/2673-4591/34/1/9
Description
Summary: Mobile phones, laptops, computers, digital watches, and digital calculators are among the most used products in our daily life. Behind these gadgets, many simple components are needed for the electronics to function, such as resistors, capacitors, and inductors, the three basic circuit elements. The memristor is one such component. This paper provides simulation results for a memristor circuit and its V-I characteristics under different input signal waveforms. A well-trained ANN is able to recognize images with high precision. To improve properties such as accuracy, precision, and efficiency in recognition, memristor characteristics are introduced into the neural network; however, older devices suffer from non-linearity issues that cause conductance-tuning problems. At the same time, for use in advanced applications, an ANN requires a huge amount of vector-matrix multiplication as the network becomes deeper. An ionic floating gate (IFG) device with the characteristics of a memristive device can solve these problems. This work proposes a fully connected ANN based on the IFG model, with the simulated IFG devices acting as synapses in deep learning. We use gradient descent with forward and backward propagation to build the network and set its weights, enhancing its ability to recognize images. A well-trained network is formed by tuning the memristive devices to an optimized state. The synaptic memory obtained from the IFG device will be used in other deep neural networks to increase recognition accuracy. The sigmoid function was initially used as the activation function but was later replaced by ReLU to avoid vanishing gradients. This paper shows how images are recognized from their front, top, and side views.
ISSN: 2673-4591
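
The abstract describes a fully connected ANN trained with gradient descent, forward and backward propagation, and ReLU activations, with memristive (IFG) synapses whose conductances are tuned within a device range. The sketch below illustrates that general idea in NumPy; it is not the authors' code, and the class name, layer sizes, conductance limits, learning rate, and toy data are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch (assumed, not the paper's implementation): a two-layer
# fully connected network whose weights stand in for memristive/IFG
# synaptic conductances, trained by gradient descent with ReLU activations.
import numpy as np

rng = np.random.default_rng(0)

# Assumed effective weight range; real devices have positive conductance,
# and signed weights are usually realized with a differential pair.
G_MIN, G_MAX = -1.0, 1.0

def relu(x):
    return np.maximum(0.0, x)

def clip_to_device(w):
    # Conductance tuning: keep each synaptic weight inside the device range.
    return np.clip(w, G_MIN, G_MAX)

class TinyMemristiveNet:
    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = clip_to_device(rng.uniform(0.0, 0.1, (n_in, n_hidden)))
        self.W2 = clip_to_device(rng.uniform(0.0, 0.1, (n_hidden, n_out)))

    def forward(self, x):
        # Each matrix product models the crossbar's vector-matrix multiplication.
        self.h = relu(x @ self.W1)
        return self.h @ self.W2

    def train_step(self, x, y, lr=0.01):
        out = self.forward(x)
        err = out - y                                # squared-error gradient at the output
        grad_W2 = self.h.T @ err
        grad_h = (err @ self.W2.T) * (self.h > 0)    # backpropagate through ReLU
        grad_W1 = x.T @ grad_h
        # Gradient-descent update followed by conductance clipping (device tuning).
        self.W2 = clip_to_device(self.W2 - lr * grad_W2)
        self.W1 = clip_to_device(self.W1 - lr * grad_W1)
        return float(np.mean(err ** 2))

# Toy usage: map random feature vectors (e.g. flattened view images) to one-hot labels.
net = TinyMemristiveNet(n_in=64, n_hidden=32, n_out=3)
x = rng.random((8, 64))
y = np.eye(3)[rng.integers(0, 3, 8)]
for epoch in range(200):
    loss = net.train_step(x, y)
print("final training loss:", loss)
```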