Tuesday 21 January 2020, 16:00 – 17:00, A. Payatakes Seminar Room
“Neuro-inspired deep learning architectures”
Dr Chavlis Spyridon, Institute of Molecular Biology and Biotechnology (IMBB)
Abstract
A typical biological neuron, such as a pyramidal neuron of the hippocampus or the neocortex, receives thousands of afferent synaptic inputs onto its dendritic tree and propagates its output downstream via efferent axonal transmission. In conventional Artificial Neural Networks (ANNs), this process is modeled as a linear weighted sum of synaptic inputs at the cell body. However, numerous experimental and theoretical studies have shown that dendritic arbors are far more than simple linear integrators: synaptic inputs can actively modulate the effect of neighboring inputs, making dendritic integration highly nonlinear. In addition, the most widely used ANN learning algorithm is backpropagation, which propagates a global error signal backward through the network to fine-tune model parameters. In biological systems, however, neurons adjust their connections via different rules, based on quantities other than a global error (e.g., variance, correlation). In this study, inspired by the rules governing animal brains, we model dendritic structures together with their nonlinearities and incorporate biologically plausible learning rules. We apply this novel architecture to a typical machine learning task, namely image classification, and show that it surpasses naive deep neural networks of the same complexity, i.e., the same number of parameters.
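The contrast between a conventional point neuron and nonlinear dendritic integration can be sketched in a few lines of Python. The snippet below is a minimal, illustrative example only, assuming a tanh branch nonlinearity and an even split of inputs across branches; it does not reproduce the specific architecture or learning rules presented in the talk.

import numpy as np

def dendritic_neuron(x, branch_weights, soma_weights):
    """Toy dendritic unit: inputs are split into one segment per branch,
    each branch computes its own weighted sum followed by a saturating
    nonlinearity, and the soma combines the branch outputs."""
    segments = np.split(x, len(branch_weights))          # one input segment per branch
    branch_out = np.array([np.tanh(w @ seg)              # nonlinear dendritic integration
                           for w, seg in zip(branch_weights, segments)])
    return np.tanh(soma_weights @ branch_out)            # somatic integration

def point_neuron(x, weights):
    """Conventional ANN unit: a single linear weighted sum of all
    inputs followed by one nonlinearity at the 'cell body'."""
    return np.tanh(weights @ x)

# Example: 12 synaptic inputs, 3 dendritic branches of 4 synapses each.
rng = np.random.default_rng(0)
x = rng.normal(size=12)
branch_w = [rng.normal(size=4) for _ in range(3)]
soma_w = rng.normal(size=3)
print(dendritic_neuron(x, branch_w, soma_w))
print(point_neuron(x, rng.normal(size=12)))

In the dendritic version, an input can only interact with the inputs on its own branch before the branch nonlinearity is applied, which is the kind of local, nonlinear interaction the abstract contrasts with the purely linear summation of a standard ANN unit.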