Bar-Ilan University researchers have shown that brain-inspired shallow neural networks can match the classification success rates of deep learning architectures consisting of many layers and filters, at lower computational cost. The findings suggest that non-trivial classification tasks can be learned efficiently by shallow feedforward networks.
The earliest artificial neural network, the Perceptron, was introduced approximately 65 years ago and consisted of just one layer. To tackle more complex classification tasks, later architectures stacked numerous feedforward layers. This multi-layer design is the essential component of today's deep learning algorithms, which underpin everyday automation products such as emerging technologies for self-driving cars and autonomous chatbots.
The study asked whether brain-inspired shallow networks can compete with deep ones. According to Prof. Ido Kanter, of Bar-Ilan’s Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center, who led the research, “A positive answer questions the need for deep learning architectures, and might direct the development of unique hardware for the efficient and fast implementation of shallow learning.”
The study showed that efficient learning on an artificial shallow architecture can achieve the same classification success rates previously reached only by deep learning architectures with many layers and filters, but with less computational complexity. Realizing shallow architectures efficiently, however, will require a shift in the properties of advanced GPU technology and future dedicated hardware development.
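To make the structural contrast concrete, the sketch below compares a deep, narrow feedforward network with a shallow, wide one, as in the scheme described in the figure. The layer sizes are illustrative assumptions, not the architectures used in the Bar-Ilan study, and the sketch shows only depth and parameter counts, not the learning algorithm itself.

```python
# Schematic comparison of a deep, narrow feedforward network versus a
# shallow, wide one. All layer sizes are hypothetical, chosen only to
# illustrate the "many layers" vs. "few layers with enlarged width" contrast.

def param_count(layer_sizes):
    """Number of weights and biases in a fully connected feedforward net."""
    return sum((n_in + 1) * n_out  # +1 accounts for the bias of each unit
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# A deep architecture: many layers of modest width (hypothetical sizes).
deep = [784] + [128] * 8 + [10]

# A shallow architecture: a few layers with enlarged width (hypothetical sizes).
shallow = [784, 1024, 10]

print("deep:    depth =", len(deep) - 1, " params =", param_count(deep))
print("shallow: depth =", len(shallow) - 1, " params =", param_count(shallow))
```

The shallow network trades depth for width: its few wide layers can be evaluated in fewer sequential steps, which is one reason the authors argue that dedicated hardware, rather than depth-oriented GPUs, would suit shallow learning.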
Efficient learning on brain-inspired shallow architectures goes hand in hand with efficient dendritic-tree learning, which builds on Prof. Kanter's earlier experimental research on sub-dendritic adaptation in neuronal cultures, together with other anisotropic properties of neurons, such as differing spike waveforms, refractory periods, and maximal transmission rates.
For years, brain dynamics and machine learning were researched independently; recently, however, brain dynamics have been revealed as a source of new types of efficient artificial intelligence. The findings provide insight into developing new hardware and software that mimic capabilities of the human brain, potentially leading to significant advances in the field of artificial intelligence.
Figure: Scheme of Deep Machine Learning consisting of many layers (left) vs. Shallow Brain Learning consisting of a few layers with enlarged width (right). Credit: Prof. Ido Kanter, Bar-Ilan University