p. 107-120
In incremental learning, the dilemma between plasticity and stability must be overcome. Because neural networks usually employ a continuously distributed representation of the state space, learning newly added data disrupts existing memories. We apply a neural network with an algebraic (lattice) structure to incremental learning; this network was originally proposed as a mathematical model of information processing in the dendrites of neurons. Because the 'maximum' operation of lattice algebra weakens the continuously distributed representation, our proposed model succeeds at incremental learning.
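The abstract's key idea, that replacing the weighted sum with a lattice 'maximum' localizes the effect of weight updates, can be illustrated with a minimal sketch. The max-plus neuron below follows the standard morphological (lattice) neuron formulation, output = max over (input + weight); the function name and example values are illustrative, not taken from the paper.

```python
import numpy as np

def lattice_neuron(x, w):
    """Morphological (max-plus) neuron: output is max_j (x_j + w_j).

    Only the single maximizing synapse determines the output, so adjusting
    weights to store a new pattern leaves responses driven by other synapses
    largely intact -- the intuition behind reduced interference (and hence
    better stability) in incremental learning.
    """
    return np.max(x + w)

# Illustrative values (integer max-plus arithmetic for clarity)
x = np.array([1, 4, 2])
w = np.array([3, -1, 0])
print(lattice_neuron(x, w))  # -> 4 (from the first synapse: 1 + 3)
```

In contrast, in a conventional sum-based neuron every weight contributes to every output, so any weight change perturbs all stored associations; the maximum operation breaks that global coupling.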
Daisuke Uragami, Hiroyuki Ohta and Tatsuji Takahashi, « Lattice Neural Networks for Incremental Learning », CASYS, 24 | 2010, 107-120.
Daisuke Uragami, Hiroyuki Ohta and Tatsuji Takahashi, « Lattice Neural Networks for Incremental Learning », CASYS [Online], 24 | 2010, Online since 06 September 2024, connection on 27 December 2024. URL: http://popups.uliege.be/3041-539x/index.php?id=3065
School of Computer Science, Tokyo University of Technology, 1404-1 Katakuramachi, Hachioji City, Tokyo 192-0982, Japan
Department of Physiology, National Defense Medical College, 3-2 Namiki, Tokorozawa, Saitama 359-8513, Japan
Division of Information System Design, School of Science and Technology, Tokyo Denki University, Hatoyama, Hiki, Saitama 350-0394, Japan