Hidden Hypergraphs, Error-Correcting Codes, and Critical Learning in Hopfield Networks

Bibliographic Details
Main Authors: Christopher Hillar, Tenzin Chan, Rachel Taubman, David Rolnick
Format: Article
Language: English
Published: MDPI AG, 2021-11-01
Series: Entropy
Subjects: Hopfield networks; clustering; error-correcting codes; exponential memory; hidden graph; neuroscience
Online Access: https://www.mdpi.com/1099-4300/23/11/1494
collection DOAJ
description In 1943, McCulloch and Pitts introduced a discrete recurrent neural network as a model for computation in brains. The work inspired breakthroughs such as the first computer design and the theory of finite automata. We focus on learning in Hopfield networks, a special case with symmetric weights and fixed-point attractor dynamics. Specifically, we explore minimum energy flow (MEF) as a scalable convex objective for determining network parameters. We catalog various properties of MEF, such as biological plausibility, and then compare it to classical approaches in the theory of learning. Trained Hopfield networks can perform unsupervised clustering and define novel error-correcting coding schemes. They also efficiently find hidden structures (cliques) in graphs. We extend this known connection from graphs to hypergraphs and discover <i>n</i>-node networks with robust storage of <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><msup><mn>2</mn><mrow><mi mathvariant="sans-serif">Ω</mi><mo>(</mo><msup><mi>n</mi><mrow><mn>1</mn><mo>−</mo><mi>ϵ</mi></mrow></msup><mo>)</mo></mrow></msup></semantics></math></inline-formula> memories for any <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><mrow><mi>ϵ</mi><mo>></mo><mn>0</mn></mrow></semantics></math></inline-formula>. In the case of graphs, we also determine a critical ratio of training samples at which networks generalize completely.
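The fixed-point attractor dynamics mentioned in the description can be sketched in a few lines. The example below uses classical outer-product (Hebbian) storage, not the paper's MEF objective, and is an illustrative assumption only: a symmetric-weight network whose sign updates never increase the energy, so a corrupted probe settles into a stored memory.

```python
import numpy as np

def hebbian_weights(patterns):
    """Outer-product (Hebbian) weights storing the given +/-1 row patterns."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)  # Hopfield networks have no self-connections
    return W

def energy(W, x):
    """Hopfield energy; non-increasing under the update dynamics below."""
    return -0.5 * x @ W @ x

def update(W, x, max_sweeps=100):
    """Asynchronous sign updates until a fixed point (or sweep limit)."""
    x = x.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(x)):
            s = np.sign(W[i] @ x)
            if s != 0 and s != x[i]:
                x[i] = s
                changed = True
        if not changed:  # fixed point reached: x is an attractor
            break
    return x

# Store one +/-1 pattern, corrupt one bit, and let the dynamics repair it.
p = np.array([[1, -1, 1, -1, 1, -1]])
W = hebbian_weights(p)
probe = p[0].copy()
probe[0] *= -1  # flip one bit
recovered = update(W, probe)
```

Here `recovered` equals the stored pattern, and its energy is strictly lower than the probe's, illustrating error correction by descent to a fixed-point attractor.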
id doaj.art-530fe3130c4548da95e254bcc23a234c
institution Directory Open Access Journal
issn 1099-4300
doi 10.3390/e23111494
citation Entropy, Vol. 23, Iss. 11, Art. 1494 (2021-11-01)
affiliations Christopher Hillar: Awecom, Inc., San Francisco, CA 94103, USA; Tenzin Chan: Singapore University of Technology and Design, Singapore 487372, Singapore; Rachel Taubman: Awecom, Inc., San Francisco, CA 94103, USA; David Rolnick: School of Computer Science, McGill University, Montreal, QC H3A 0G4, Canada
topic Hopfield networks
clustering
error-correcting codes
exponential memory
hidden graph
neuroscience