A General Strategy for Deciphering the Brain: State Mapping

The brain is a dynamic system with distinct states, or modes, regulated by brainstem control systems, and brain function must be understood in the context of these modes. The two most obvious are wake and sleep. But within these are specific sequences of activity, nested oscillations, in various parts of the network, each phase of which can be viewed as a discrete period of time in which the system functions in a particular way before switching to the next state. Brain states can be of any length, months even, but the key is that they must be hierarchically organized, with long states giving rise to unique sets of shorter states and so on. Transition probabilities are likely to be high between short states that are members of the same long state, but relatively low between those belonging to separate ones. By identifying common sequences and correlating them with behavior, we can begin to understand how the brain moves through time. Because states exert a strong influence on the underlying activity, all other brain-related phenomena must be considered in the context of the state in which they occur and the causal chain that led to that state. This is pointedly not done in the vast majority of neuroscience experiments (unless, of course, the state is fixed at a particular phase of the awake, attentive state, as in many task-performance studies).
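As a deliberately simplified illustration of what that transition structure might look like in practice, here is a minimal Python sketch. It assumes we already have a discrete state label for every time bin from some upstream clustering of population activity (a hypothetical input), builds the empirical transition matrix, and looks for the block structure that would signal short states nested inside longer ones.

```python
import numpy as np

def transition_matrix(state_labels, n_states):
    """Empirical transition probabilities between discrete brain states.

    state_labels : 1-D array of integer state IDs, one per time bin
                   (assumed to come from an upstream clustering step).
    """
    counts = np.zeros((n_states, n_states))
    for a, b in zip(state_labels[:-1], state_labels[1:]):
        counts[a, b] += 1
    # Normalize each row to get P(next state | current state)
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

# Toy example: states 0-2 belong to one "long" state, 3-5 to another.
rng = np.random.default_rng(0)
labels = np.concatenate([rng.integers(0, 3, 500), rng.integers(3, 6, 500)])
P = transition_matrix(labels, n_states=6)
# Block structure in P (high within-block, low between-block probabilities)
# is the signature of short states nested inside longer ones.
print(np.round(P, 2))
```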

Using dense recording arrays or in vivo calcium/voltage imaging in certain structures, it may be possible to gain fragmented access to the information contained in the neural network itself. But how do we decode it? We have no sense of what the time base of the brain is, or how long a “packet” lasts. Another way to think of this is settling time: at what point have all the effects of upstream inputs had a chance to modulate the activity of receiving cells? And how persistent are those effects at the network scale? In most artificial neural networks, time is irrelevant because the steps are known and defined. But in the brain, timing is critical. To understand the context of spiking activity, we must understand its phase relationship with the rest of the network. Like a timing light, the phase-locked strobe used to diagnose engine ignition timing, we need phase information to make sense of neural activity. To obtain it, we first need to identify key signatures of sequential network activity, nested within a larger hierarchy of potential brain states. Many of the important clues about these parameters are contained in non-stationary, non-sinusoidal signals and will require a deeper understanding of the underlying biophysics. These efforts would be greatly aided by isolating the activity of certain classes of interneurons (using fiber photometry, for instance). With this information, it should be possible to interpret neural activity on a step-by-step basis and build up sequences of activity.
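To make the phase idea concrete, here is a minimal sketch of one standard way to extract phase information, under assumed inputs (an LFP trace, its sampling rate, and a list of spike times; the theta band edges are also an assumption): band-pass filter the LFP, take the Hilbert transform to get instantaneous phase, read off the phase at each spike, and summarize locking with the mean resultant vector length.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def spike_phase_locking(lfp, spike_times, fs, band=(6.0, 10.0)):
    """Phase of each spike relative to a band-limited LFP oscillation.

    lfp         : 1-D LFP trace
    spike_times : spike times in seconds
    fs          : LFP sampling rate in Hz
    band        : frequency band of interest (theta here, an assumption)
    """
    # Band-pass filter the LFP, then extract instantaneous phase
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase = np.angle(hilbert(filtfilt(b, a, lfp)))

    # Phase of the oscillation at each spike time
    idx = np.clip((np.asarray(spike_times) * fs).astype(int), 0, len(lfp) - 1)
    spike_phases = phase[idx]

    # Mean resultant vector length: 0 = no locking, 1 = perfect locking
    locking_strength = np.abs(np.mean(np.exp(1j * spike_phases)))
    return spike_phases, locking_strength
```

This is only one window onto timing, of course; non-sinusoidal and non-stationary signals will need more careful treatment than a fixed band-pass filter.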

No matter the coverage, important pieces will be missing, tucked away in cells outside the recorded area. It is not possible to record everywhere, and there is a significant cost, in terms of brain damage, to implanting more electrodes and/or optical components than necessary. The best we can hope for is to find the right combination of high- and low-density recordings in the right areas, maximizing access to the information we want to decipher while minimizing the number and size of implanted devices.

Algorithms of the Brain… and why we should care

Like many of those in attendance, I was in awe of the achievements of the DeepMind team after seeing a talk by Demis Hassabis at the annual Society for Neuroscience conference this past November.

Here is a similar talk he gave at CSAR: https://www.youtube.com/watch?v=ZyUFy29z3Cw

It struck me that, despite vastly different methods of implementation, artificial neural networks are essentially solving the same problems the brain solves: pattern matching by looking for statistical (ir)regularities in large datasets, then building a network that embodies those abstractions for rapid categorization of incoming sensory information and decision making.

Despite the obvious differences in network architecture, and even in the concept of a “neuron,” I think there is much to be learned from how deep learning works in silico; by analogy, it can help us understand how our wet, mushy neural networks are constructed.

The problem is that we as neuroscientists simply do not possess the experience and intuition to understand how the brain might work on a mechanistic level, because we have no frame of reference from which to draw. We can draw analogies from whatever other technical expertise we happen to have, whether it's the inner workings of other complex machines like a gasoline engine, a transistor radio, or a desktop computer. But these systems aren't particularly helpful for understanding the brain because they solve fundamentally different problems.

Artificial neural networks, especially general-purpose ones like those DeepMind works on, capture one important piece of the brain's architecture that makes them worth considering: learning. A well-trained network can be conceptualized as a model of the abstract features of the training dataset itself, the features that allow for categorization and decision making. I think this holds whether we are talking about ANNs or the brain itself.
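To make that point concrete, here is a minimal toy sketch (the data and training details are my own assumptions, nothing to do with DeepMind's systems): a single logistic “neuron” trained on two Gaussian categories. After training, the learned weights point along the direction that statistically separates the categories; the network has stored an abstraction of the dataset, not the examples themselves.

```python
import numpy as np

# Toy dataset: two Gaussian "categories" differing along both dimensions.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1.0, 1.0, (200, 2)), rng.normal(1.0, 1.0, (200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Single "neuron" with logistic output, trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted category probability
    grad_w = X.T @ (p - y) / len(y)          # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= 0.1 * grad_w
    b -= 0.1 * grad_b

# The learned weights summarize the statistical regularity that separates
# the categories -- an abstraction of the dataset, not a copy of it.
print("learned weight direction:", w / np.linalg.norm(w))
```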

In order to understand what algorithms the brain might be running, it is not enough to know neuroscience; one must also know algorithms. To the extent that algorithms can be considered mathematical solutions to problems regardless of implementation, there are likely to be a finite number of solutions to a particular problem, far fewer than the number of ways to implement them. So by focusing solely on a reductionist approach to the brain, in which all the interacting pieces are isolated, studied, and weighted equally no matter their functional importance, one might be inadvertently obscuring the solution. It is hard to imagine someone with nothing but knowledge of every interacting part coming up with a useful understanding of the brain, just as a parts manifest for a jetliner wouldn't be much use for understanding flight. By starting with the problems and the algorithmic solutions themselves, we can arrive at a correct understanding of brain function much more quickly.

This is also the strategy used by those currently developing artificial intelligence unconstrained by the details of the brain itself. As daunting as the task sounds, it is actually easier than the alternatives. When you have a good working model of a system, all you need to do experimentally is find evidence for or against a particular wetware implementation. This can be done at many levels of abstraction, for which many models already exist. In addition, because computational models offer near-perfect information (if one chooses to save and decipher it), the iterative process of gathering data and adjusting the model can be much faster. Predictions of the model can then be tested experimentally to determine their validity. The model itself also serves as a sort of abstract data repository. The brain is too complex to be described adequately in written language. Data that would be indecipherably complex when written out in a results section can instead be stored away as the parameters of a well-fit model.
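As a small illustration of that last point, here is a hedged sketch using simulated data and a hypothetical sigmoid response model: once the model is fit, a handful of parameters stand in for the raw measurements, and predictions at untested values can be taken back to the bench.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, x0, k, a):
    """Hypothetical parametric model of a response curve."""
    return a / (1.0 + np.exp(-k * (x - x0)))

# Simulated "experimental" measurements (a stand-in for real data).
rng = np.random.default_rng(2)
stim = np.linspace(0, 10, 50)
resp = sigmoid(stim, x0=5.0, k=1.5, a=2.0) + rng.normal(0, 0.1, stim.size)

# Fit the model; three parameters now summarize the whole dataset.
params, _ = curve_fit(sigmoid, stim, resp, p0=[4.0, 1.0, 1.0])
print("fitted (x0, k, a):", np.round(params, 2))

# Predictions at untested stimulus values can be checked experimentally,
# closing the model -> prediction -> experiment loop described above.
prediction = sigmoid(np.array([2.5, 7.5]), *params)
```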

All experimentalists owe it to themselves to obtain a working knowledge of computer science and deep learning (and whatever its descendants will be). Experiments should be designed with models in mind, no matter your specialty. The models will suggest experiments, and the experiments will allow effective tuning of the models. Having tasted the power of this arrangement in my work on the sharp-wave ripple, I now view it as the only way forward, the only foothold we have in the study of such an incredibly complex system.

For this reason, I think that all successful future neuroscientists will also be computer scientists with a deep understanding of a wide variety of algorithms. Such a background will be critical not only for understanding how the brain works, but also for coming up with novel ways to make sense of the data we collect from it.

Notes:

There is an excellent article by Yves Frégnac that covers many of the same points discussed here, and many more. I highly recommend it.

http://science.sciencemag.org/content/358/6362/470.full