The brain is a dynamic system with different states, or modes, regulated by brainstem control systems, and we need to understand brain function in the context of these modes. The two most obvious are wake and sleep. But within these are specific sequences of activity, nested oscillations, in various parts of the network, each phase of which can be viewed as a discrete period in which the system functions in a particular way, followed by a switch to the next state. Brain states can be of any length, even months, but the key is that they must be hierarchically organized, with long states giving rise to unique sets of shorter states, and so on. Transition probabilities are likely to be high between short states that are members of the same long state, but relatively low between those belonging to separate ones. By identifying common sequences and correlating them with behavior, we will understand how the brain moves through time. Because states can exert a great influence on the underlying activity, all other brain-related phenomena must be considered in the context of the state in which they occur and the causal chain that led to that state. This is pointedly not done in the vast majority of neuroscience experiments (unless, of course, that state is fixed at a particular phase of awake attentiveness, as in many task-performance studies).
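The hierarchical picture above can be sketched as a minimal two-level Markov chain. Everything here is illustrative: the state names, the two-level hierarchy, and the within-state probability `P_STAY` are assumptions, not measured values; the only claim carried over from the text is that transitions between short states belonging to the same long state should be far more probable than transitions that cross into another long state.

```python
import random

random.seed(0)

# Hypothetical two-level hierarchy: each "long" state contains its own
# set of "short" states (names chosen for illustration only).
hierarchy = {
    "wake": ["attentive", "quiet", "locomotion"],
    "nrem": ["spindle", "slow-wave", "k-complex"],
}

P_STAY = 0.95  # assumed probability that the next short state stays within the current long state


def step(long_state, rng=random):
    """Sample the next (long, short) state pair given the current long state."""
    if rng.random() < P_STAY:
        next_long = long_state
    else:
        next_long = rng.choice([s for s in hierarchy if s != long_state])
    return next_long, rng.choice(hierarchy[next_long])


# Generate a sequence, then measure how often consecutive short states
# share the same parent long state.
long_state, seq = "wake", []
for _ in range(10_000):
    long_state, short_state = step(long_state)
    seq.append((long_state, short_state))

same_long = sum(a[0] == b[0] for a, b in zip(seq, seq[1:])) / (len(seq) - 1)
print(f"fraction of transitions within the same long state: {same_long:.2f}")
```

Fitting such a model to real recordings would run in the opposite direction: given an observed short-state sequence, estimate the transition matrix and look for the block structure that reveals the long states.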
Using dense recording arrays or in vivo calcium/voltage imaging in certain structures, it may be possible to gain fragmented access to the information contained in the neural network itself. But how do we decode it? We have no sense of what the time base of the brain is, or how long a “packet” is. Another way to think of this is settling time: at what point have all the effects of upstream inputs had a chance to modulate the activity of receiving cells? And how persistent are those effects at the network scale? In most artificial neural networks, time is irrelevant because the steps are known and defined; in the brain, timing is critical. To understand the context of spiking activity, it is essential to know its phase relationship with the rest of the network. Just as a timing gun, the phase-locked strobe used to diagnose engine ignition timing, reveals when a spark fires relative to the engine cycle, we need phase information to make sense of neural activity. To obtain it, we first need to identify key signatures of sequential network activity, nested within a larger hierarchy of potential brain states. Many of the important clues about these parameters are contained in non-stationary, non-sinusoidal signals and will require a deeper understanding of the underlying biophysics. These efforts would be greatly aided by isolating the activity of specific classes of interneurons (using fiber photometry, for instance). With this information, it should be possible to interpret neural activity on a step-by-step basis and build up sequences of activity.
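One standard way to extract the phase information described above is to take the instantaneous phase of a field-potential oscillation via the analytic signal, and then ask how tightly spikes cluster at a preferred phase. The sketch below is a toy example, not an analysis of real data: the 8 Hz "theta-like" signal, the noise level, and the trough-locked spiking are all simulated assumptions, and real recordings would first need band-pass filtering around the rhythm of interest before the Hilbert transform is meaningful.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)

# Simulated field potential: an 8 Hz oscillation plus noise, sampled at 1 kHz.
fs, f_osc = 1000, 8.0
t = np.arange(0, 10, 1 / fs)
lfp = np.sin(2 * np.pi * f_osc * t) + 0.3 * rng.standard_normal(t.size)

# Instantaneous phase from the analytic signal -- the "strobe" that tells us
# where in the cycle each moment of spiking falls.
phase = np.angle(hilbert(lfp))

# Simulated spike train that preferentially fires near the trough (phase ~ pi).
p_spike = 0.02 * (1 + np.cos(phase - np.pi))
spikes = rng.random(t.size) < p_spike

# Phase-locking value: length of the mean resultant vector of spike phases.
# 0 = spikes uniformly spread over the cycle, 1 = perfectly phase-locked.
spike_phases = phase[spikes]
plv = np.abs(np.mean(np.exp(1j * spike_phases)))
print(f"spikes: {spikes.sum()}, phase-locking value: {plv:.2f}")
```

The same mean-resultant-vector statistic applied cell by cell, and rhythm by rhythm, is one concrete way to build up the phase relationships the text argues are needed to interpret spiking in context.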
No matter the coverage, important pieces will be missing, tucked away in cells outside of the recorded area. It is not possible to record everywhere and there is a significant cost, in terms of brain damage, to implanting more electrodes and/or optical components than necessary. The best we can hope for is to find the right combination of high and low density recordings in the right areas, in order to maximize access to the information we are interested in deciphering, while minimizing the number and size of implanted devices.