Institutional Repository

Temporal difference method with Hebbian plasticity rule as a learning algorithm in networks of chaotic spiking neurons


dc.contributor.author Aoun, Mario A.
dc.date.accessioned 2021-06-16T08:53:52Z
dc.date.available 2021-06-16T08:53:52Z
dc.date.issued 2007
dc.identifier.citation Aoun, M. A. (2007). Temporal difference method with Hebbian plasticity rule as a learning algorithm in networks of chaotic spiking neurons (Master's thesis, Notre Dame University-Louaize, Zouk Mosbeh, Lebanon). Retrieved from http://ir.ndu.edu.lb/123456789/1322 en_US
dc.identifier.uri http://ir.ndu.edu.lb/123456789/1322
dc.description M.S. -- Faculty of Natural and Applied Sciences, Department of Computer Science, Notre Dame University, Louaize, 2007; "A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science"; Includes bibliographical references (leaves 52-55). en_US
dc.description.abstract The aim of this thesis is to survey the latest studies in chaotic dynamics and their relevance to neural computing, and to investigate a new learning algorithm for a network of chaotic spiking neurons recently proposed by Nigel Crook et al. [1]. The thesis reviews current research in information processing and chaotic neural networks, and contributes to the work of Crook et al. [1] by finding a suitable learning algorithm for chaotic neurons. A biologically realistic learning algorithm is implemented in a network of chaotic spiking neurons and evaluated accordingly. Extensive neuroscientific studies over the last decade, supported by experimental evidence, have revealed much about the internal brain processes that allow the brain to learn and exhibit 'intelligent behavior'. This explanation relies on chaotic neurodynamics and is originally based on the experiments of Freeman and Skarda [2, 3], which show chaotic activity in the olfactory bulb of a rabbit's brain while the rabbit is in a 'perceptual' state [2, 3]. These studies led to the hypothesis put forward by Freeman in "The Physiology of Perception" (1991), which claims that "chaos may be the chief property that makes the brain different from an artificial-intelligence machine" [3]. Walter J. Freeman is considered the 'father' of chaotic neurodynamics; his hypothesis gained major attention in the scientific community and led to further results [7, 39, 40, 41]. We conclude that emulating chaotic neurodynamics is a fundamental strategy in the design of new models of artificial neural networks, and we point out the necessary links between neural computing, cognitive computation, nonlinear dynamics, and chaotic neurodynamics. This research extends a wider effort to improve the capabilities of artificial neural networks by exploiting nonlinear dynamical systems [1]. The thesis presents the topic in depth and contributes to the latest research on applying chaotic dynamics to neural information processing, specifically to the work of N. Crook [1]. The fundamental theories behind the Nonlinear Dynamic State (NDS) neuron [1] are presented, together with its roots in Pyragas' theory of chaos control [5, 26] and its links to Freeman's theories on chaotic neurodynamics and strange attractors. The main idea of the NDS neuron research is to stabilize the Unstable Periodic Orbits (UPOs) of a strange attractor and to model neural states with these UPOs. If this is achieved, the NDS neuron would have an unlimited number of states onto which it could synchronize, since the number of possible UPOs is theoretically infinite [1]. My contribution is an analysis of the dynamics of NDS neurons and of their behavior over time when they are recurrently connected. This analysis, research, and development revealed an expedient biological phenomenon that suits the dynamics of these neurons and their learning capabilities, while ensuring the adaptation, stability, and synchronization of their states. (An illustrative sketch of these ideas follows the record below.) en_US
dc.format.extent 55 leaves : color illustrations
dc.language.iso en en_US
dc.publisher Notre Dame University-Louaize. en_US
dc.rights Attribution-NonCommercial-NoDerivs 3.0 United States
dc.rights.uri http://creativecommons.org/licenses/by-nc-nd/3.0/us/
dc.subject.lcsh Neural computers
dc.subject.lcsh Neural networks (Computer science)
dc.subject.lcsh Algorithms--Data processing
dc.subject.lcsh Neurosciences
dc.title Temporal difference method with Hebbian plasticity rule as a learning algorithm in networks of chaotic spiking neurons en_US
dc.type Thesis en_US
dc.rights.license This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 United States License (CC BY-NC-ND 3.0 US).
dc.contributor.supervisor Al Khalidi, Khaldoun, Ph.D. en_US
dc.contributor.department Notre Dame University-Louaize. Department of Computer Science en_US
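
The abstract above rests on two technical ingredients: stabilizing an unstable periodic orbit (UPO) of a chaotic system with a small feedback signal, and a learning rule that couples a temporal difference (TD) error with Hebbian plasticity. The Python sketch below illustrates these two ideas only; it is not the thesis's NDS neuron model or its actual learning rule. The logistic map stands in for a chaotic unit, the control is a simplified proportional feedback toward a known fixed point (Pyragas' delayed feedback would instead use the term K*(x(t - tau) - x(t)), which requires no prior knowledge of the orbit), and every constant, function name, and parameter below is an assumption chosen for brevity.

# Illustrative sketch only: not the NDS neuron equations or the thesis's learning rule.

def stabilize_fixed_point(r=3.9, eps=0.05, steps=500, x0=0.3):
    """Iterate the chaotic logistic map x -> r*x*(1-x); whenever the state
    wanders within eps of the unstable fixed point x* = 1 - 1/r (a period-1
    UPO), apply a small corrective feedback gain*(x* - x). The gain equals
    the local slope f'(x*), so the linear part of the deviation cancels and
    the orbit is pinned onto the otherwise unstable fixed point once the
    chaotic trajectory happens to pass close enough."""
    x_star = 1.0 - 1.0 / r                  # unstable fixed point of the map
    gain = r * (1.0 - 2.0 * x_star)         # f'(x*), here 2 - r = -1.9
    x = x0
    for _ in range(steps):
        u = gain * (x_star - x) if abs(x - x_star) < eps else 0.0
        x = r * x * (1.0 - x) + u           # chaotic map plus small control
    return x, x_star

def td_hebbian_update(w, pre, post, reward, v_now, v_next, gamma=0.9, eta=0.01):
    """Generic TD-error-modulated Hebbian step (a placeholder rule, not the
    thesis's): the correlational term pre*post is scaled by the TD error
    delta = reward + gamma*V(s') - V(s)."""
    delta = reward + gamma * v_next - v_now
    return w + eta * delta * pre * post

if __name__ == "__main__":
    x_final, x_star = stabilize_fixed_point()
    print("final state %.6f vs. fixed point %.6f" % (x_final, x_star))
    print("updated weight:", td_hebbian_update(w=0.2, pre=1.0, post=0.8,
                                               reward=1.0, v_now=0.5, v_next=0.6))

In the thesis's setting, as the abstract describes, the chaotic dynamics belong to the NDS neuron rather than a logistic map, and the stabilized UPOs serve as the neural states among which the learning rule selects.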

