Embedding optimization reveals long-lasting history dependence in neural spiking activity
Cold Spring Harbor Laboratory
Abstract
Information processing can leave distinct footprints on the statistics of neural spiking.
For example, efficient coding minimizes the statistical dependencies on the spiking history, while temporal integration of information may require the maintenance of information over different timescales.
To investigate these footprints, we developed a novel approach to quantify history dependence within the spiking of a single neuron, using the mutual information between the entire past and current spiking.
This measure captures how much past information is necessary to predict current spiking.
In contrast, classical time-lagged measures of temporal dependence like the autocorrelation capture how long—potentially redundant—past information can still be read out.
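To make this measure concrete, here is a minimal sketch (a naive plug-in estimate, not the paper's estimator) of the mutual information between the current time bin of a binary spike train and a short binary word formed from its past bins; the toy spike train and the embedding depth are hypothetical choices.

```python
# Minimal sketch, not the paper's estimator: plug-in mutual information (bits)
# between the current time bin and a binary word of `depth` past bins.
# The toy spike train and the embedding depth are hypothetical choices.
import numpy as np

def past_symbols(spikes, depth):
    """Encode each window of `depth` past bins as one integer symbol."""
    weights = 2 ** np.arange(depth)
    past = np.array([spikes[t - depth:t] @ weights for t in range(depth, len(spikes))])
    return past, spikes[depth:]

def plugin_mi(past, current):
    """Naive plug-in estimate of I(past; current); biased upward for limited data."""
    joint = np.zeros((past.max() + 1, 2))
    np.add.at(joint, (past, current), 1.0)          # joint histogram of (past word, spike/no spike)
    joint /= joint.sum()
    indep = joint.sum(1, keepdims=True) @ joint.sum(0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / indep[nz])))

rng = np.random.default_rng(0)
spikes = (rng.random(50_000) < 0.05).astype(int)    # memoryless toy spike train
past, current = past_symbols(spikes, depth=5)
print(plugin_mi(past, current))                     # close to 0 bits: no history dependence
```

For this memoryless toy train the estimate is close to zero; for real recordings the plug-in estimate is biased upward when data are limited, which is exactly why a reliable estimation strategy is needed.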
Strikingly, we find for model neurons that our method disentangles the strength and timescale of history dependence, whereas the two are mixed in classical approaches.
When applying the method to experimental data, which are necessarily of limited size, a reliable estimation of mutual information is only possible for a coarse temporal binning of past spiking, a so-called past embedding.
To still account for the vastly different spiking statistics and potentially long history dependence of living neurons, we developed an embedding-optimization approach that varies not only the number and size of past bins, but also their exponential stretching.
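The following sketch illustrates what such an exponentially stretched past embedding could look like: past bins whose widths grow geometrically with distance into the past, each holding a spike count. The parameter names (first_bin, stretch, n_bins) and their values are illustrative assumptions, not the interface of the accompanying toolbox.

```python
# Illustrative sketch of an exponentially stretched past embedding; parameter
# names and values are hypothetical, not the published toolbox's interface.
import numpy as np

def stretched_edges(first_bin, stretch, n_bins):
    """Past-bin edges (seconds before time t), widths growing by `stretch` per bin."""
    widths = first_bin * stretch ** np.arange(n_bins)
    return np.concatenate(([0.0], np.cumsum(widths)))

def embed_past(spike_times, t, edges):
    """Spike counts in each stretched past bin preceding time t."""
    lags = t - spike_times[(spike_times < t) & (spike_times >= t - edges[-1])]
    return np.histogram(lags, bins=edges)[0]

edges = stretched_edges(first_bin=0.005, stretch=1.5, n_bins=8)          # ~5 ms up to ~250 ms
spike_times = np.sort(np.random.default_rng(1).uniform(0.0, 10.0, 80))   # toy spike train
print(np.round(edges, 4))
print(embed_past(spike_times, t=5.0, edges=edges))
```

Varying the number of bins, their sizes, and the stretching factor per neuron is what allows the embedding to adapt to vastly different spiking statistics while keeping the estimation reliable.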
For extracellular spike recordings, we found that the strength and timescale of history dependence can indeed vary independently across experimental preparations.
In hippocampus, history dependence was strong and long-lasting; in visual cortex it was weak and short; in vitro it was strong but short.
This work enables an information-theoretic characterization of history dependence in recorded spike trains, capturing a footprint of information processing that goes beyond time-lagged measures of temporal dependence.
To facilitate the application of the method, we provide practical guidelines and a toolbox.
Author summary
Even with exciting advances in techniques for recording neural spiking activity, experiments provide only a comparatively short glimpse into the activity of a tiny subset of all neurons.
How can we learn from these experiments about the organization of information processing in the brain? To that end, we exploit that different properties of information processing leave distinct footprints on the firing statistics of individual spiking neurons.
In our work, we focus on a particular statistical footprint: how much a single neuron’s spiking depends on its own preceding activity, which we call history dependence.
By quantifying history dependence in neural spike recordings, one can, in turn, infer some of the properties of information processing.
Because recording lengths are limited in practice, a direct estimation of history dependence from experiments is challenging.
The embedding optimization approach that we present in this paper aims at extracting a maximum of history dependence within the limits set by a reliable estimation.
The approach is highly adaptive and thereby enables a meaningful comparison of history dependence between neurons with vastly different spiking statistics, which we demonstrate on a diverse set of spike recordings.
In conjunction with recent, highly parallel spike recording techniques, the approach could yield valuable insights on how hierarchical processing is organized in the brain.
Related Results
Evaluating the Science to Inform the Physical Activity Guidelines for Americans Midcourse Report
Abstract
The Physical Activity Guidelines for Americans (Guidelines) advises older adults to be as active as possible. Yet, despite the well documented benefits of physical a...
Adaptive Drop Approaches to Train Spiking-YOLO Network for Traffic Flow Counting
Abstract
Traffic flow counting is an object detection problem. YOLO ("You Only Look Once") is a popular object detection network. Spiking-YOLO converts the YOLO network f...
Spiking neural network with local plasticity and sparse connectivity for audio classification
Purpose. Studying the possibility of implementing a data classification method based on a spiking neural network, which has a low number of connections and is trained based on loca...
Subthreshold variability of neuronal populations driven by synchronous synaptic inputs
Abstract
Even when driven by the same stimulus, neuronal responses are well-known to exhibit a striking level of spiking variability. In-vivo electrophysiological recordings also re...
Information-Theoretic Limits for Steganography in Multimedia
Steganography in multimedia aims to embed secret data into an innocent multimedia cover object. The embedding introduces some distortion to the cover object and produces...
Sample-efficient Optimization Using Neural Networks
The solution to many science and engineering problems includes identifying the minimum or maximum of an unknown continuous function whose evaluation inflicts non-negligibl...
Backpropagation With Sparsity Regularization for Spiking Neural Network Learning
The spiking neural network (SNN) is a possible pathway for low-power and energy-efficient processing and computing exploiting spiking-driven and sparsity features of biological sys...
Neural stemness contributes to cell tumorigenicity
Abstract
Background: Previous studies demonstrated the dependence of cancer on nerve. Recently, a growing number of studies reveal that cancer cells share the property and ...

