Computation with sequences of neural assemblies

Series
ACO Student Seminar
Time
Friday, February 10, 2023 - 1:00pm for 50 minutes
Location
Skiles 005
Speaker
Max Dabagia – Georgia Tech CS – maxdabagia@gatech.edu – https://mdabagia.github.io/
Organizer
Abhishek Dhawan

Assemblies are subsets of neurons whose coordinated excitation is hypothesized to represent the subject's thinking of an object, idea, episode, or word. Consequently, they provide a promising basis for a theory of how neurons and synapses give rise to higher-level cognitive phenomena. The existence (and pivotal role) of assemblies was first proposed by Hebb, and has since been experimentally confirmed, as well as rigorously proven to emerge in the model of computation in the brain recently developed by Papadimitriou & Vempala. In light of contemporary studies which have documented the creation and activation of sequences of assemblies of neurons following training on tasks with sequential decisions, we study here the brain's mechanisms for working with sequences in the assemblies model of Papadimitriou & Vempala.

We show that (1) repeated presentation of a sequence of stimuli leads to the creation of a sequence of corresponding assemblies -- upon future presentation of any contiguous sub-sequence of stimuli, the corresponding assemblies are activated and continue until the end of the sequence; (2) when the stimulus sequence is projected to two brain areas in a "scaffold", both memorization and recall are more efficient, giving rigorous backing to the cognitive phenomenon that memorization and recall are easier with scaffolded memories; and (3) existing assemblies can be quite easily linked to simulate an arbitrary finite state machine (FSM), thereby capturing the brain's ability to memorize algorithms. This also makes the assemblies model capable of arbitrary computation simply in response to presentation of a suitable stimulus sequence, without explicit control commands.

These findings provide a rigorous, theoretical explanation at the neuronal level of complex phenomena such as sequence memorization in rats and algorithm learning in humans, as well as a concrete hypothesis as to how the brain's remarkable computing and learning abilities could be realized.
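For readers unfamiliar with the underlying model, the core dynamics of the assemblies framework of Papadimitriou & Vempala can be caricatured in a few lines: a brain area of n neurons with sparse random synapses, winner-take-all "k-cap" firing, and Hebbian plasticity that multiplies the weights into each round's winners. The Python sketch below is illustrative only; all parameter values, function names, and the simplified single-area setup are assumptions made for this sketch, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy parameters (assumptions, not values from the talk):
# n neurons, k-cap size k, connection probability p, plasticity rate beta.
n, k, p, beta = 1000, 50, 0.05, 0.1

# Sparse random synapses: stimulus-to-area and recurrent area-to-area.
afferent = (rng.random((n, n)) < p).astype(float)
recurrent = (rng.random((n, n)) < p).astype(float)

def k_cap(drive, k):
    """Winner-take-all: indices of the k neurons with largest total input."""
    return np.argsort(drive)[-k:]

def project(stimulus, rounds=10):
    """Present a stimulus repeatedly; return the assembly that forms.

    Each round, every neuron sums its afferent input from the firing
    stimulus neurons and its recurrent input from the previous winners;
    the top k fire, and synapses into them are scaled by (1 + beta).
    """
    active = np.zeros(n, dtype=bool)
    for _ in range(rounds):
        drive = (afferent[:, stimulus].sum(axis=1)
                 + recurrent[:, active].sum(axis=1))
        winners = k_cap(drive, k)
        # Hebbian plasticity: strengthen synapses into this round's winners.
        afferent[np.ix_(winners, stimulus)] *= 1 + beta
        recurrent[np.ix_(winners, np.flatnonzero(active))] *= 1 + beta
        active = np.zeros(n, dtype=bool)
        active[winners] = True
    return np.flatnonzero(active)

# A stimulus is a random set of k firing input neurons.
stimulus = rng.choice(n, size=k, replace=False)
assembly_first = project(stimulus)
assembly_again = project(stimulus)
# After plasticity, re-presenting the same stimulus should recover a
# heavily overlapping assembly.
overlap = len(set(assembly_first) & set(assembly_again)) / k
```

In the talk's setting this projection step is iterated over a *sequence* of stimuli, so that each assembly's recurrent links chain it to the next; the sketch above shows only the single-stimulus building block.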
Joint work with Christos Papadimitriou and Santosh Vempala.