Despite major advances in artificial intelligence through deep learning methods, computer algorithms remain vastly inferior to mammalian brains at natural tasks.
Here I will report on our efforts to engineer novel cortically inspired neural networks by integrating neuroscience experiments on the mammalian neocortex, computational circuit models based on these data, and the theory that the brain embodies a generative model of the world.
Using in vitro multi-patching and in vivo two-photon imaging, followed by a dense reconstruction of the circuit, we obtain a faithful functional
and anatomical characterization of cortical activity in a large-scale dataset that is unique worldwide.
We integrate the functional and neuroanatomical constraints of the mammalian neocortex into a novel computational model architecture (NetGard).
NetGard has neural components such as cell types, cortical layers, recurrent lateral and feedback connections, and dynamics, all of which are conspicuously absent from most deep learning models.
When NetGard is trained to predict single-trial population responses of neurons to natural stimuli, it achieves state-of-the-art prediction performance on neural data,
and reproduces several tuning properties and connectivity principles of neocortical circuits without being explicitly trained to do so.
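To make the training objective concrete, the sketch below shows one common way such "predict single-trial neural responses" models are fit: spike counts are treated as Poisson-distributed, and a model mapping stimuli to firing rates is optimized under the Poisson negative log-likelihood. This is a minimal illustrative example with synthetic data and a linear-softplus model; it is not NetGard's actual architecture, and all names and parameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 200 trials of a 10-dimensional stimulus; spike
# counts of 5 neurons drawn from a ground-truth linear-softplus model.
X = rng.normal(size=(200, 10))
W_true = rng.normal(scale=0.5, size=(10, 5))
Y = rng.poisson(np.log1p(np.exp(X @ W_true)))  # single-trial spike counts

def softplus(z):
    # Softplus keeps predicted firing rates positive.
    return np.log1p(np.exp(z))

def poisson_nll(W, X, Y):
    """Poisson negative log-likelihood (dropping the constant log Y! term)."""
    r = softplus(X @ W)
    return np.mean(r - Y * np.log(r + 1e-8))

# Plain gradient descent on the Poisson NLL.
W = np.zeros((10, 5))
lr = 0.1
for _ in range(500):
    z = X @ W
    r = softplus(z)
    dr = 1.0 / (1.0 + np.exp(-z))  # d softplus / dz = sigmoid(z)
    grad = X.T @ ((1.0 - Y / (r + 1e-8)) * dr) / len(X)
    W -= lr * grad

# After training, the fitted model should explain the spike counts far
# better than the zero-weight initialization.
print(poisson_nll(W, X, Y), poisson_nll(np.zeros((10, 5)), X, Y))
```

Deep-learning versions of this objective replace the linear map with a recurrent network, but the Poisson likelihood on single-trial counts is the same basic idea.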
The design principles of NetGard enable us to also test the utility of cortical circuitry in machine learning.
This architecture is the first step towards a trainable model of the cortical microcircuit that can provide a bridge between neuroscience and machine learning for the mutual benefit of both fields.
I will also report on our efforts to analyze how information is represented and transformed by populations of neurons in NetGard and in real brains,
in order to infer novel nonlinear recurrent computations, namely a message-passing algorithm within the generative model that is implicit in the neural circuitry.