The authors have declared that no competing interests exist.
Conceived and designed the experiments: BIR LT RS. Performed the experiments: BIR LT WW TJD. Analyzed the data: BIR LT. Contributed reagents/materials/analysis tools: TJD. Wrote the paper: BIR LT.
The ability to decode neural activity into meaningful control signals for prosthetic devices is critical to the development of clinically useful brain–machine interfaces (BMIs). Such systems require input from tens to hundreds of brain-implanted recording electrodes in order to deliver robust and accurate performance; in serving that primary function they should also minimize power dissipation in order to avoid damaging neural tissue; and they should transmit data wirelessly in order to minimize the risk of infection associated with chronic, transcutaneous implants. Electronic architectures for brain–machine interfaces must therefore minimize size and power consumption, while maximizing the ability to compress data to be transmitted over limited-bandwidth wireless channels. Here we present a system of extremely low computational complexity, designed for real-time decoding of neural signals, and suited for highly scalable implantable systems. Our programmable architecture is an explicit implementation of a universal computing machine emulating the dynamics of a network of integrate-and-fire neurons; it requires no arithmetic operations except for counting, and decodes neural signals using only computationally inexpensive logic operations. The simplicity of this architecture does not compromise its ability to compress raw neural data by factors greater than
Recent years have seen dramatic progress in the field of brain–machine interfaces, with implications for rehabilitation medicine and basic neuroscience
Electronics implanted in the brain must be sufficiently energy-efficient to dissipate very little power while operating, so as to avoid damaging neural tissue; conserving power also extends device lifetimes and reduces system size
Considerable attention has been devoted to meeting the low-power operation constraint for brain implantation in the context of signal amplification
Multiple approaches to neural decoding have been implemented by several research groups. As we have discussed in
The overall architecture of a neural decoding system is decomposed into a set of operations implemented by Turing-type computing machines, shown here as a collection of
As we have described in previous work
Our computational architecture for neural decoding operates explicitly as a Turing-type universal computing machine, in which the decoding operation is programmed by selecting the rule array of the machine, which can also reprogram itself, resulting in an overall system that emulates the dynamics of a network of integrate-and-fire neurons. In contrast with existing approaches to neural decoding, this framework facilitates extreme power efficiency,
Our architecture decomposes the operation of neural signal decoding, allocating the computational load across two processing units: one implanted within the body, and therefore power-constrained, and the other located outside the body, and therefore less power-constrained. The overall architecture strategically imbalances the computational load of decoding in a way that leverages the relatively high computational power of the external unit to minimize power consumption in the implanted unit. Simultaneously, the system minimizes data throughput between the internal and external units in order to reduce the power costs of wireless communication between the two units.
The core decoding function executed by the internal unit is an evaluation of the probability that the system is in each of its available states. At each time step,
The detailed operation of the internal unit is diagrammed in
Block diagram of the low-power processing system of the internal component of our neural decoder, as implemented in one instantiation of our architecture. Functional blocks are color-coded in accord with the scheme used in
We use the terms
The components of
Pattern matching algorithms conceptually related to the one implemented in our internal unit have previously been used to decode neuronal activity, notably in the context of memory replay during sleep
The efficiency of application-specific digital microcontrollers and digital signal processors (DSPs) arises in large part from their ability to identify and prioritize the computational primitives, such as Fourier transformation or specific kinds of filtering, that are of greatest importance in particular applications
This paper is structured as follows: In this
We applied our neural decoding system to decode head-position trajectories from place cell ensemble activity in the hippocampus of a behaving rat. Place cells in rat hippocampus exhibit receptive fields tuned to specific locations in the environment
In the context of our place-cell–based position decoding problem, the states to be decoded,
Normalized spike rate for each of
Decoding template array
Rows of the decoding template array, as extracted:
Row 1: 1 1 10 15 15 8 7 7 4 11 11 1 14 11 1 14 24 15 17 20 10 14 16 16 25 24 27 3 20 5 3 7
Row 2: 4 4 3 3 3 3 7 7 1 2 2 4 2 2 4 2 4 3 4 7 3 2 3 3 2 4 4 3 7 4 3 7
Row 3: 3 29 16 16 13 17 10 20 16 16 17 29 15 24 27 17 32 20 16 17 17 26 26 32 19 31 31 31 17
Row 4: 3 4 3 3 1 4 3 7 3 3 4 4 3 4 4 4 1 7 3 4 4 4 4 1 3 3 3 3 4
The decoding rules displayed graphically and as fine print in
Our system decodes the location of a maze-roaming rat, from spike trains recorded from thirty-two hippocampal place cells. Raw output of the decoding algorithm,
Real-time decoding compresses neural data as it is acquired, by extracting only the information of interest from a high-bandwidth, multichannel signal. For data acquired from an
We explicitly calculate the computational efficiency of our decoding architecture in the
Input noise degrades the performance of the decoder, but our system has a number of mechanisms for mitigating the effects of noise. In considering the impact of noise on system performance, it is helpful to distinguish between correlated and uncorrelated noise, where the correlation is with reference to the collection of neural input signals,
A useful feature of our internal decoding algorithm is its ability to explicitly suppress output noise and tune performance by adjusting
The effects of correlated noise can also be suppressed in postprocessing, by the nonimplanted component of the system designated ‘External Computations’ in
Uncorrelated noise restricted to individual channels, associated with low channel-specific signal-to-noise ratios (SNRs), can be handled in the context of our decoding architecture by tuning the value of
Our system for neural decoding, implemented as described here, can be generalized in several ways.
First, consider the combinational logic we use to interpret spike train data. We designed the decoding architecture to facilitate applying primitive operations derived from the structure of neuronal receptive fields. We model the receptive field of a neuron with respect to a set of states as the probability distribution of action potential firing as a function of state index. Discretizing this model to a finite level of precision transforms the frequency distribution to a histogram over states. The general problem of evaluating the degree to which input spike counts represent evidence of particular states has often been cast in terms of Bayesian analysis
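As a concrete illustration, the discretization step described above can be sketched in a few lines. The function and variable names here are hypothetical, not part of the system described in this paper; this is a minimal sketch assuming labeled training windows are available:

```python
def receptive_field_histogram(spike_counts, state_labels, n_states, max_count):
    """Discretized receptive-field model for one channel: estimate the
    distribution of windowed spike counts conditioned on each state."""
    # hist[s][c] approximates P(count == c | state == s)
    hist = [[0] * (max_count + 1) for _ in range(n_states)]
    totals = [0] * n_states
    for count, state in zip(spike_counts, state_labels):
        c = min(count, max_count)  # clip to the finite precision level
        hist[state][c] += 1
        totals[state] += 1
    for s in range(n_states):
        if totals[s]:
            hist[s] = [h / totals[s] for h in hist[s]]
    return hist

# Toy example: two states; the channel fires more in state 1.
counts = [0, 1, 0, 4, 5, 3]
states = [0, 0, 0, 1, 1, 1]
h = receptive_field_histogram(counts, states, n_states=2, max_count=5)
```

With these toy inputs, the state-0 histogram concentrates near zero counts and the state-1 histogram near high counts, which is exactly the separation the threshold-derivation step exploits.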
The combinational logic function we implement explicitly in this work, described by
While such receptive-field encoding patterns are common, a more precise and more general model based on the one we describe here will have broader applicability. In a system with more memory or more logic circuits, a decoding scheme could use more logical functions with more conditions, operating on multiple upper- and lower-bound thresholds per channel. Such an approach could more smoothly model the state dependence of neuronal activity present in most receptive fields, through level-dependent conditions, where spike count levels are defined by pairs of upper- and lower-bound thresholds. This approach should be especially useful in decoding from multimodal receptive fields, or from input channels carrying unsorted multiunit activity, as in
Second, consider the information content of channel silence. Our decoding scheme takes action on the basis of observed spikes. Yet the absence of spikes also conveys information. A more general version of the system we describe here could exploit the information content of channel silence, constructing histograms and templates for time-windowed spike absence in analogy with those described here for observed spikes.
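In that spirit, a "silence" statistic could be tabulated per state for each channel, in direct analogy with the spike-count histograms. This is a minimal sketch with hypothetical names, not part of the implemented system:

```python
def silence_template(spike_counts, state_labels, n_states):
    """For each state, the fraction of time windows in which the channel
    was silent (zero spikes) -- the silence analogue of a spike histogram."""
    silent = [0] * n_states
    total = [0] * n_states
    for count, state in zip(spike_counts, state_labels):
        total[state] += 1
        if count == 0:
            silent[state] += 1
    return [s / t if t else 0.0 for s, t in zip(silent, total)]

# Toy example: the channel is silent more often in state 0 than in state 1.
rates = silence_template([0, 0, 2, 0, 3, 4], [0, 0, 0, 1, 1, 1], n_states=2)
```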
Third, consider cross-channel correlations in neural activity. The logical operations and probabilistic assumptions we have described here treat input channels, and indeed channel activity in each position, as independent. While it is convenient to assume such independence, neural activity across channels and positions may in general exhibit nonzero correlations. In a more elaborate version of our decoding system, such correlations could be exploited, for example by implementing combinational logic functions depending simultaneously on activity across multiple channels.
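One way such a multi-channel rule might look, sketched with hypothetical names and bounds (the implemented system evaluates channels independently; this is only an illustration of the generalization):

```python
def joint_rule(counts, rule):
    """Cross-channel combinational rule: fires only when every listed
    channel simultaneously satisfies its (lower, upper) spike-count bounds.
    rule is a list of (channel_index, lower_bound, upper_bound) triples."""
    return all(lo <= counts[ch] <= hi for ch, lo, hi in rule)

# Hypothetical rule: evidence for a state requires channel 0 to be active
# (3..10 spikes) AND channel 1 to be nearly quiet (0..1 spikes).
rule_state_s = [(0, 3, 10), (1, 0, 1)]
```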
Our decoding system, programmed as we have described here, illustrates how a simple universal computing architecture can implement effective, biomimetic neural decoding. In particular, the rule-based decoding program we describe can be understood as implementing a two-layer network of digitized integrate-and-fire neurons. The first layer of this network consists of
The particular scheme we describe here for programming our decoding architecture has an intuitive interpretation as an emulation of a network of integrate-and-fire neurons. However, the rule-based programming structure of the system is extremely versatile, and by no means limited to intuitive, biomimetic computations. Indeed, the computational universality of such rule-based systems has been explored and described in detail by several investigators, including Wolfram
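Under the integrate-and-fire interpretation described above, the decode step can be sketched as a two-layer network. All names, thresholds, and template contents here are hypothetical placeholders for illustration:

```python
def decode_two_layer(counts, thresholds, templates):
    """Two-layer digitized integrate-and-fire emulation (a sketch):
    layer 1: one binary neuron per channel, firing iff its windowed
             spike count reaches that channel's threshold;
    layer 2: one counting neuron per state, integrating the layer-1
             outputs of the channels listed in its template.
    Returns the index of the winning (maximally activated) state."""
    layer1 = [int(c >= t) for c, t in zip(counts, thresholds)]
    scores = [sum(layer1[ch] for ch in tpl) for tpl in templates]
    return max(range(len(scores)), key=scores.__getitem__)  # winner-take-all

# Toy example: three channels, two states; only channel 2 crosses threshold.
winner = decode_two_layer([0, 1, 4], [2, 2, 2], [[0, 1], [2]])
```

Note that the only operations required are comparison, counting, and a maximum selection, consistent with the arithmetic-free character of the architecture.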
This study uses behavioral and neurophysiologic data acquired from a live rodent. The data were provided to the authors by collaborators (T. J. Davidson) at the Picower Institute for Learning and Memory at the Massachusetts Institute of Technology. Experimental protocols were designed in accordance with guidelines set forth by the Department of Comparative Medicine at the Massachusetts Institute of Technology in the 2010 Laboratory Animal Users' Handbook, and were consistent with the goals of minimizing suffering in all work involving nonhuman vertebrate animals. All experimental methods were approved by the Committee on Animal Care at the Massachusetts Institute of Technology under protocol 0505-032-08, ‘Cortico-Hippocampal Ensemble Memory Formation.’
Our neural decoder is designed to process full-bandwidth, multichannel neural signals in real time, by interfacing directly with the digitized output stream from an array of amplifiers and neural recording electrodes. Such a direct connection of the logic to the amplifier array facilitates great flexibility in the spike detection and decoding algorithms that the associated system can implement. In the present work we describe a system for detecting neuronal action potentials (‘spikes’) using a single-threshold method; individual thresholds are programmable, and can be fine-tuned channel by channel. In the tests of the decoding system described here, however, we used acausally filtered, prerecorded neural data, as described in this section.
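A minimal sketch of single-threshold crossing detection on one channel follows; the rising-crossing convention and function name are illustrative choices, and the actual hardware logic may differ in detail:

```python
def detect_spikes(samples, threshold):
    """Single-threshold spike detection on one channel: report one event
    per upward crossing of the programmable threshold (negative-going
    thresholds work analogously with the comparison reversed)."""
    events = []
    above = False
    for i, x in enumerate(samples):
        if x >= threshold and not above:
            events.append(i)  # one event per crossing, not per sample
        above = x >= threshold
    return events

# Toy trace with two suprathreshold excursions.
events = detect_spikes([0, 1, 5, 6, 1, 0, 7, 2], threshold=4)
```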
More advanced spike-detection methods are also possible. One such approach would be to implement dual-threshold detection, with or without a refractory period. Another would be to use more computationally intensive detection criteria, such as nonlinear energy operators
Importantly, we can also estimate the noise level on each channel in real time. Noise-level estimation matters in high-channel-count systems, where it is typically desirable to set spike detection thresholds automatically. A common approach is to set the detection threshold to a multiple of the noise level, isolating spike events with approximate statistical confidence bounds.
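For example, one widely used robust estimator scales the median absolute sample value to estimate the noise standard deviation; both the estimator and the multiplier k below are illustrative choices, not necessarily those used in this system:

```python
import statistics

def auto_threshold(samples, k=4.5):
    """Set a spike-detection threshold as a multiple of an estimated
    noise level. The median-based estimate is robust to the spikes
    themselves, since spikes occupy only a small fraction of samples;
    for Gaussian noise, median(|x|)/0.6745 approximates sigma."""
    sigma = statistics.median(abs(x) for x in samples) / 0.6745
    return k * sigma

# Toy trace: mostly unit-amplitude noise plus one large spike sample.
thr = auto_threshold([-1, 1, -1, 1, 20])
```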
The results presented here were obtained from neural data collected as follows: Spike train inputs were derived from hippocampal recordings made while a rat repeatedly and bidirectionally traversed a linear track for food reward, as described in
The objective in neural decoding is to infer an aggregate neural state, such as the intention to move a limb in a particular way, from signals sampled from a population of neurons encoding that state. The computational task of a neural decoding system operating in discrete time can therefore be described in the language of universal computing architectures, and our neural decoding architecture can be understood in terms of Turing-machine–like architectures with tapes and heads, as shown in
The rules for symbol interpretation are derived using a statistical procedure that we describe here in detail. Intuitively, the procedure treats neural spike counts from each input channel as test statistics, used to evaluate the probability that the system is in each of the possible states. The distributions of these test statistics, conditioned on being in or out of each of the possible states, are approximated as histograms collected during a learning phase. Maximally informative spike-count values (thresholds) can therefore be derived and converted to logical rules involving only comparison operations.
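The threshold-derivation step can be sketched as follows, with a simple sensitivity-plus-specificity score standing in for "maximally informative"; the names and the particular scoring heuristic are illustrative, not the exact procedure used in the paper:

```python
def best_threshold(in_counts, out_counts, max_count):
    """Turn two empirical spike-count samples (windows in-state vs.
    out-of-state) into a single comparison rule: choose the threshold t
    that best separates them, so 'count >= t' becomes the decoding rule
    for that channel/state pair."""
    best_t, best_score = 0, -1.0
    for t in range(max_count + 1):
        sens = sum(c >= t for c in in_counts) / len(in_counts)   # sensitivity
        spec = sum(c < t for c in out_counts) / len(out_counts)  # specificity
        if sens + spec > best_score:
            best_t, best_score = t, sens + spec
    return best_t

# Toy example: in-state windows show 4-6 spikes, out-of-state windows 0-2.
t = best_threshold([4, 5, 6, 5], [0, 1, 0, 2], max_count=6)
```

Once derived, each rule reduces at run time to a single comparison against a stored template value, which is what keeps the internal unit free of arithmetic beyond counting.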
Our decoding architecture operates as follows:
Neural spikes are detected on each of
At the end of every time window,
In practice, as discussed in the
The decoding algorithm itself makes no
In order to determine the total computational load associated with our decoding architecture, we account for the number of operations required in each computational frame, defined as a single cycle through the input channels, in which a single spike-count vector,
Operation | Instances per Frame
Clock Counter |
Memory Access |
Multiplexer |
Comparison |
Binary Logic |
Shift Register |
Shift Register |
Total |
Detailed accounting of the basic operations required in each computational frame of neural decoding, corresponding to the system block diagram of
Spike-count vectors
Decoding templates are constructed so as to be, in a statistical sense, both sensitive and specific to their associated states. By
The decoding templates are computed based on data gathered during a training period, as shown in
Histograms collected during the training phase of the decoding algorithm facilitate computation of thresholds for windowed spike activity, which are stored as templates in memory and used to discriminate between states. This histogram of spike activity, collected from recording channel
The template construction algorithm begins by computing two sets,
We use the following heuristic for setting these thresholds, designed to guarantee at least a tunable minimum level of decoding quality. We begin by setting two parameters,
Our system accepts input from
As described in the
In our implementation we use
Our decoding architecture separately implements routines based purely on
We applied our decoding system to a problem involving the decoding of continuous movement, as described in the
The classic Viterbi algorithm is an efficient, recursive approach to state sequence estimation; it is optimal in the sense that it generates a maximum
In the context of our decoder, the observed states are the values generated at the output of the decoder,
The first term in
The second term in
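The classic Viterbi recursion applied to a sequence of decoder outputs can be sketched as follows. The transition and emission probabilities here are hypothetical placeholders; in the system described above they would be derived from the trajectory statistics and the decoder's output statistics, respectively:

```python
import math

def viterbi(obs, trans, emit, init):
    """Most probable hidden-state sequence given observed decoder outputs.
    trans[i][j]: P(next state j | state i); emit[s][o]: P(observe o | state s);
    init[s]: prior over the initial state. Works in log space for stability."""
    n = len(trans)
    logp = [math.log(init[s] * emit[s][obs[0]]) for s in range(n)]
    back = []
    for o in obs[1:]:
        prev, step, logp = logp, [], []
        for j in range(n):
            best_i = max(range(n), key=lambda i: prev[i] + math.log(trans[i][j]))
            logp.append(prev[best_i] + math.log(trans[best_i][j])
                        + math.log(emit[j][o]))
            step.append(best_i)
        back.append(step)
    # Trace the best path backward through the stored pointers.
    path = [max(range(n), key=logp.__getitem__)]
    for step in reversed(back):
        path.append(step[path[-1]])
    return path[::-1]

# Toy two-state example with sticky transitions and reliable emissions.
trans = [[0.9, 0.1], [0.1, 0.9]]
emit = [[0.8, 0.2], [0.2, 0.8]]
path = viterbi([0, 0, 1, 1, 1], trans, emit, init=[0.5, 0.5])
```

Because the recursion is causal in its forward pass, this smoothing naturally belongs in the less power-constrained external unit, as described above.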
Behavioral and neurophysiologic data used in the present work were collected in the laboratory of Matthew A. Wilson, Picower Institute for Learning and Memory, MIT Department of Brain and Cognitive Sciences.