Perceptron Hamming-stability learning rule for Hopfield associative memory
16 December 1992
Xinhua Zhuang, Yan Huang, Yunxin Zhao
Abstract
In this paper, we design an optimal learning rule for the Hopfield associative memory (HAM) based on three well-recognized criteria: every desired attractor must be made not only individually stable but also asymptotically stable, and the spurious stable states should be as few as possible. These criteria are crucial for constructing a satisfactory associative memory. We first analyze the real cause of the unsatisfactory performance of the Hebb rule and many other existing learning rules designed for HAMs, and then show that the three criteria together amount to widely expanding the basin of attraction around each desired attractor. One effective way to expand the basins of attraction of all desired attractors is to appropriately dig a steep kernel basin of attraction around each of them. To this end, we introduce a concept called Hamming stability. Surprisingly, we find that Hamming stability for all desired attractors reduces to a moderately expansive linear separability condition at each neuron, so the well-known Rosenblatt perceptron learning rule is exactly the right tool for learning it. Extensive experiments were conducted, convincingly showing that the proposed perceptron Hamming-stability learning rule takes good care of all three criteria.
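The abstract does not give the update equations, but the reduction it describes (per-neuron stability as a linear separability condition, learnable by Rosenblatt's perceptron rule) can be illustrated with a short sketch. The Python code below is an assumption-laden illustration, not the authors' algorithm: it enforces, for each stored bipolar pattern, a positive-margin sign condition at every neuron via perceptron-style row updates. The margin, learning rate, zeroed diagonal, and synchronous recall dynamics are all illustrative choices.

import numpy as np

def perceptron_train(patterns, margin=1.0, lr=0.1, epochs=100):
    """Learn a weight matrix so each stored +/-1 pattern satisfies, at every
    neuron i, the stability condition x_i * (W @ x)_i >= margin (assumed form)."""
    patterns = np.asarray(patterns, dtype=float)
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for _ in range(epochs):
        updated = False
        for x in patterns:
            h = W @ x                      # local fields at all neurons
            violated = x * h < margin      # neurons where stability fails
            if violated.any():
                # Rosenblatt-style correction, applied row-wise per neuron:
                # w_i += lr * x_i * x for each violated neuron i
                W[violated] += lr * np.outer(x[violated], x)
                updated = True
        np.fill_diagonal(W, 0.0)           # no self-connections (assumption)
        if not updated:                    # all patterns stable with margin
            break
    return W

def recall(W, probe, steps=50):
    """Synchronous sign-threshold dynamics from a (possibly noisy) probe."""
    s = np.sign(probe).astype(float)
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1.0            # break ties toward +1
        if np.array_equal(s_new, s):       # reached a fixed point
            break
        s = s_new
    return s

# Example: store two 8-neuron patterns, then recall one from a one-bit-flipped probe.
pats = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                 [1, 1, 1, 1, -1, -1, -1, -1]])
W = perceptron_train(pats)
noisy = pats[0].astype(float)
noisy[0] *= -1
print(recall(W, noisy))

With margin set to zero this reduces to plain per-neuron perceptron stability; a positive margin is one simple stand-in for the "steep kernel basin" idea, since (for weights of comparable norm) larger margins tend to enlarge the basin of attraction around each stored pattern.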
© (1992) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Xinhua Zhuang, Yan Huang, and Yunxin Zhao "Perceptron Hamming-stability learning rule for Hopfield associative memory", Proc. SPIE 1766, Neural and Stochastic Methods in Image and Signal Processing, (16 December 1992); https://doi.org/10.1117/12.130856
KEYWORDS
Neurons, Content addressable memory, Image processing, Signal processing, Stochastic processes, Neural networks, Artificial neural networks