Modeling Integrated Cortical Learning: Explorations of Cortical Map Development, Unit Selectivity, and Object Recognition







One of the most formative theories in neuroscience is the Hierarchical Theory of Cortex (HTC), which postulates a hierarchy of simple and complex cells within each cortical visual area. The Deep Convolutional Neural Network (DCNN) architecture is the most computationally successful implementation of HTC and has been adopted as a tool for linking cognition to neural processes. However, DCNNs are exceedingly abstract models of cortical learning. First, DCNNs use fixed connectivity, whereas cortical connectivity is plastic. Second, DCNNs use convolutional weight-sharing, whereas simple cells in visual cortex learn using local competition rules. Third, DCNNs use fixed pools, whereas complex cells in visual cortex may learn their pooling structure. As a consequence, DCNNs do not develop an analogue of the cortical maps developed by cortex. In addition, differences in feature learning may mean that DCNNs learn very different high-level unit representations than those found in high-level visual cortex. In this dissertation, I introduce a biologically inspired framework for understanding unsupervised visual category learning, called the Temporal Relation Manifold (TRM) framework, which extends the object manifold framework of vision. Within this framework, I develop a model of hierarchical cortical learning that integrates biologically plausible models of axon development, simple cell learning, and complex cell learning into a single model called the Integrated Cortical Learning Model (ICL). As part of these efforts, I also introduce novel methods for incorporating axonal learning and development into artificial neural networks, called the Axon Game and the Arbor Layer. I examined the utility of this new cortical model in three main sets of simulation studies. First, I explored its ability to develop high-level cortical maps organized by semantic categories.
Second, I explored whether the ICL model would develop functionally specialized or unspecialized unit representations. Third, I tested the performance of several versions of the model on two image recognition benchmarks (Fashion-MNIST and ImageNette). These simulation studies yielded three main results. First, the ICL model developed continuous topological maps in its upper layers, but in key ways these maps did not differ substantially from those developed in its lower layers. Second, the ICL model developed unspecialized unit representations similar to those of DCNNs, though this result may be due to the propagation of shallow representations. Third, with very modest tuning, the ICL model performed at a level comparable to similarly sized DCNNs (91% accuracy on Fashion-MNIST and 40% accuracy on ImageNette). Post-hoc analyses suggested that the proposed complex cell model may have been a limiting factor, highlighting an area for future study. The deep ICL model built for this dissertation showed the novel ability to learn hierarchical cortical maps, agreement with DCNN work on unit-level representations, and promising performance, all while using more biologically motivated unsupervised learning rules. In summary, this dissertation introduces a framework (TRM) and a bio-inspired model (ICL) as an alternative to DCNNs, and evaluates this new model in terms of cortical map development, unit representations, and classification performance.
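To make the second contrast above concrete (convolutional weight-sharing versus locally competitive simple cell learning), the following sketch compares parameter counts for a shared-kernel layer and a locally connected layer, and shows a toy winner-take-all Hebbian update. This is an illustrative assumption, not code from the dissertation; the function names, sizes, and learning rate are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# A convolutional layer reuses ONE kernel at every spatial position,
# so its parameter count is independent of image size.
def conv_param_count(kernel_size, n_filters):
    return kernel_size * kernel_size * n_filters

# A locally connected layer (no weight-sharing) keeps a separate kernel
# at every valid position, as cortical simple cells would.
def local_param_count(image_size, kernel_size, n_filters):
    n_positions = (image_size - kernel_size + 1) ** 2
    return n_positions * kernel_size * kernel_size * n_filters

# Toy local-competition rule for one image patch: only the most active
# filter at that position takes a Hebbian step toward the input.
def wta_update(weights, patch, lr=0.1):
    # weights: (n_filters, patch_dim); patch: (patch_dim,)
    activations = weights @ patch
    winner = int(np.argmax(activations))
    weights[winner] += lr * (patch - weights[winner])
    return winner

patch_dim = 9                       # a 3x3 patch, flattened
W = rng.normal(size=(4, patch_dim))
patch = rng.normal(size=patch_dim)
winner = wta_update(W, patch)

print(conv_param_count(3, 4))       # 36 parameters, regardless of image size
print(local_param_count(28, 3, 4))  # 24336 parameters for a 28x28 image
```

The parameter counts highlight why weight-sharing is such a strong abstraction: a shared 3x3 kernel bank costs 36 weights at any image size, while per-position kernels grow quadratically with the image, which is part of what makes biologically local learning rules harder to scale.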



Biology, Neuroscience, Psychology, Cognitive, Artificial Intelligence