Search engine for discovering works of Art, research articles, and books related to Art and Culture

The Computational Structure of Spike Trains

View through CrossRef
Neurons perform computations, and convey the results of those computations through the statistical structure of their output spike trains. Here we present a practical method, grounded in the information-theoretic analysis of prediction, for inferring a minimal representation of that structure and for characterizing its complexity. Starting from spike trains, our approach finds their causal state models (CSMs), the minimal hidden Markov models or stochastic automata capable of generating statistically identical time series. We then use these CSMs to objectively quantify both the generalizable structure and the idiosyncratic randomness of the spike train. Specifically, we show that the expected algorithmic information content (the information needed to describe the spike train exactly) can be split into three parts describing (1) the time-invariant structure (complexity) of the minimal spike-generating process, which describes the spike train statistically; (2) the randomness (internal entropy rate) of the minimal spike-generating process; and (3) a residual pure noise term not described by the minimal spike-generating process. We use CSMs to approximate each of these quantities. The CSMs are inferred nonparametrically from the data, making only mild regularity assumptions, via the causal state splitting reconstruction algorithm. The methods presented here complement more traditional spike train analyses by describing not only spiking probability and spike train entropy, but also the complexity of a spike train's structure. We demonstrate our approach using both simulated spike trains and experimental data recorded in rat barrel cortex during vibrissa stimulation.
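The abstract describes two technical ideas: grouping spike-train histories that predict the future identically into causal states (inferred with the causal state splitting reconstruction, CSSR, algorithm), and splitting the expected description length of a spike train into a structural (complexity) term, an internal entropy rate, and a residual noise term. The sketch below is a rough, self-contained illustration of the first idea and is not the authors' CSSR implementation: it simulates a binary spike train, groups fixed-length histories by their empirical next-spike probability, and reports the entropy of the grouped "states" (in the spirit of the complexity term) and the average conditional next-spike entropy (in the spirit of the internal entropy rate). The history length, grouping tolerance, and simulated process are all illustrative assumptions.

```python
# Toy illustration only: group spike-train histories by their empirical
# next-spike probability and summarize the result with two entropies.
# This is NOT the CSSR algorithm from the paper; history length L, the
# grouping tolerance, and the simulated process are illustrative choices.
from collections import Counter, defaultdict
import math
import random


def simulate_spike_train(n, p_burst=0.3, p_quiet=0.05, p_switch=0.02, seed=0):
    """Binary spike train from a two-state (bursting/quiet) hidden Markov process."""
    rng = random.Random(seed)
    state, out = "quiet", []
    for _ in range(n):
        p = p_burst if state == "burst" else p_quiet
        out.append(1 if rng.random() < p else 0)
        if rng.random() < p_switch:
            state = "burst" if state == "quiet" else "quiet"
    return out


def next_spike_distributions(x, L):
    """Empirical P(spike at t | previous L symbols) for every observed history."""
    counts = defaultdict(Counter)
    for t in range(L, len(x)):
        counts[tuple(x[t - L:t])][x[t]] += 1
    return {h: (c[1] / sum(c.values()), sum(c.values())) for h, c in counts.items()}


def group_histories(dists, tol=0.05):
    """Greedily merge histories whose predictive spike probabilities are within tol.

    This greedy grouping stands in for CSSR's statistical splitting tests."""
    states = []  # each entry: [histories, pooled spike probability, pooled count]
    for h, (p, n) in sorted(dists.items(), key=lambda kv: kv[1][0]):
        for s in states:
            if abs(s[1] - p) < tol:
                pooled = s[2] + n
                s[1] = (s[1] * s[2] + p * n) / pooled  # count-weighted probability
                s[2] = pooled
                s[0].append(h)
                break
        else:
            states.append([[h], p, n])
    return states


def entropy(probs):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)


if __name__ == "__main__":
    x = simulate_spike_train(50_000)
    states = group_histories(next_spike_distributions(x, L=4))

    total = sum(s[2] for s in states)
    weights = [s[2] / total for s in states]

    # Entropy of the state-occupation distribution: a rough stand-in for the
    # time-invariant structure (complexity) term described in the abstract.
    complexity_like = entropy(weights)
    # Average next-spike entropy given the state: a rough stand-in for the
    # internal entropy rate term.
    entropy_rate_like = sum(w * entropy([s[1], 1 - s[1]]) for w, s in zip(weights, states))

    print(f"grouped states        : {len(states)}")
    print(f"complexity-like term  : {complexity_like:.3f} bits")
    print(f"entropy-rate-like term: {entropy_rate_like:.3f} bits/symbol")
```

A full CSSR implementation differs from this toy grouping in at least two ways: it uses statistical significance tests to decide when a history must be split off into a new state, and it refines the states until transitions are deterministic given the next symbol.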

Related Results

Generation of Spike Trains with Controlled Auto- and Cross-Correlation Functions
Emerging evidence indicates that information processing, as well as learning and memory processes, at both the network and single-neuron levels are highly dependent on the correlat...
A New Class of Metrics for Spike Trains
The distance between a pair of spike trains, quantifying the differences between them, can be measured using various metrics. Here we introduce a new class of spike train metrics, ...
Decoding Poisson Spike Trains by Gaussian Filtering
The temporal waveform of neural activity is commonly estimated by low-pass filtering spike train data through convolution with a gaussian kernel. However, the criteria for selectin... (a generic smoothing sketch appears after this list)
Generation of Synthetic Spike Trains with Defined Pairwise Correlations
Recent technological advances as well as progress in theoretical understanding of neural systems have created a need for synthetic spike trains with controlled mean rate and pairwi...
Generation of Correlated Spike Trains
Neuronal spike trains display correlations at diverse timescales throughout the nervous system. The functional significance of these correlations is largely unknown, and computatio...
Rate Coding Versus Temporal Order Coding: What the Retinal Ganglion Cells Tell the Visual Cortex
It is often supposed that the messages sent to the visual cortex by the retinal ganglion cells are encoded by the mean firing rates observed on spike trains generated with a Poisso...
On the Mathematical Consequences of Binning Spike Trains
We initiate a mathematical analysis of hidden effects induced by binning spike trains of neurons. Assuming that the original spike train has been generated by a discrete Markov pro...
A New Multineuron Spike Train Metric
The Victor-Purpura spike train metric has recently been extended to a family of multineuron metrics and used to analyze spike trains recorded simultaneously from pairs of proximate...
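
The entry "Decoding Poisson Spike Trains by Gaussian Filtering" above mentions estimating the temporal waveform of neural activity by convolving spike train data with a Gaussian kernel. The snippet below is a generic sketch of that smoothing step, not the method proposed in that paper; the bin size and kernel width are arbitrary illustrative choices.

```python
# Minimal sketch: estimate a firing-rate waveform by convolving a binned
# spike train with a Gaussian kernel. Bin size and kernel width (sigma)
# are illustrative values, not parameters from any of the papers listed here.
import numpy as np

def gaussian_smooth(spikes, bin_size_s=0.001, sigma_s=0.010):
    """Return a smoothed firing-rate estimate (spikes/s) for a binned spike train."""
    sigma_bins = sigma_s / bin_size_s
    half_width = int(4 * sigma_bins)           # truncate the kernel at +/- 4 sigma
    t = np.arange(-half_width, half_width + 1)
    kernel = np.exp(-0.5 * (t / sigma_bins) ** 2)
    kernel /= kernel.sum() * bin_size_s        # normalize so the output is a rate
    return np.convolve(spikes, kernel, mode="same")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    spikes = (rng.random(2000) < 0.02).astype(float)  # ~20 Hz Poisson-like train, 1 ms bins
    rate = gaussian_smooth(spikes)
    print(rate[:5])
```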

Recent Results

Warman's carnival glass
Ellen Tischbein Schroy, Carnival glass, 2004, KP Books...
Don’t Leave Indonesian Manuscripts in Danger: An Analysis of Digitalization and Preservation
Although manuscript digitalization helps to safeguard old manuscripts, the challenges of old manuscripts’ preservation remain present. Old manuscripts are increasingly negl...
LXII Motifs of Cultural Eschatology in German Poetry from Naturalism to Expressionism
When Gottfried Keller in his poetic rejoinder to Justinus Kerner's romantic plaint extolled the world-transforming forces of technology, he expressed the dominant faith of his time...
Philosophic sur ordinateur ou intelligence artificielle
With computer science defined as the rational processing of information by an automatic machine, and intelligence characterized by the same capacity for rational...
