Dynamic Connectivity¶
Exploring the dynamics of timeseries data with timecorr¶
Written by Jeremy Manning
The timecorr Python toolbox provides tools for computing and exploring the correlational structure of timeseries data. The details of this approach are described in this preprint; some excerpts from the methods of that paper are reproduced below for convenience.
In its most basic usage, the timecorr function provided by the toolbox takes as input a number-of-timepoints (\(T\)) by number-of-features (\(K\)) matrix, and returns as output a \(K \times K \times T\) tensor containing a timeseries of the moment-by-moment correlations reflected in the data. There are two additional features of the timecorr toolbox that we’ll also explore in this tutorial: dynamic inter-subject functional correlations and high-order dynamic correlations. Before explaining those features, we’ll expand on how timecorr estimates dynamic correlations from multi-dimensional timeseries data.
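To make this concrete, here is a minimal sketch of the basic usage on simulated data. The Gaussian kernel and its width ({'var': 10}) are arbitrary choices for illustration, and the vectorized output format and the vec2mat helper are assumptions about the current timecorr API:

import numpy as np
import timecorr as tc

# simulated data: T = 100 timepoints, K = 5 features
data = np.random.randn(100, 5)

# estimate moment-by-moment correlations using a Gaussian kernel
dynamic_corrs = tc.timecorr(data,
                            weights_function=tc.gaussian_weights,
                            weights_params={'var': 10})

# the toolbox returns each timepoint's correlation matrix in vectorized form
# (the upper triangle, including the diagonal): a T x (K * (K + 1) / 2) array
print(dynamic_corrs.shape)  # (100, 15)

# vec2mat unpacks the result into the K x K x T tensor described above
corr_tensor = tc.vec2mat(dynamic_corrs)
print(corr_tensor.shape)  # (5, 5, 100)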
Given a \(T\) by \(K\) matrix of observations, \(\mathbf{X}\), we can compute the (static) Pearson’s correlation between any pair of columns, \(\mathbf{X}(\cdot, i)\) and \(\mathbf{X}(\cdot, j)\), using:
\[
\mathrm{corr}\left(\mathbf{X}(\cdot, i), \mathbf{X}(\cdot, j)\right) = \frac{\sum_{t=1}^{T} \left(\mathbf{X}(t, i) - \bar{\mathbf{X}}(\cdot, i)\right)\left(\mathbf{X}(t, j) - \bar{\mathbf{X}}(\cdot, j)\right)}{\sqrt{\sum_{t=1}^{T} \left(\mathbf{X}(t, i) - \bar{\mathbf{X}}(\cdot, i)\right)^2} \sqrt{\sum_{t=1}^{T} \left(\mathbf{X}(t, j) - \bar{\mathbf{X}}(\cdot, j)\right)^2}},
\]
where \(\bar{\mathbf{X}}(\cdot, k)\) denotes the mean of column \(k\) of \(\mathbf{X}\).
We can generalize this formula to compute time-varying correlations by incorporating a kernel function that takes a time \(t\) as input, and returns how much the observed data at each timepoint \(\tau \in \left( -\infty, \infty \right)\) contributes to the estimated instantaneous correlation at time \(t\).
Given a kernel function \(\kappa_t(\cdot)\) for timepoint \(t\), evaluated at timepoints \(\tau \in \left\{ 1, ..., T \right\}\), we can update the static correlation formula above to estimate the instantaneous correlation at timepoint \(t\):
\[
\mathrm{timecorr}_{\kappa_t}\left(\mathbf{X}(\cdot, i), \mathbf{X}(\cdot, j)\right) = \frac{\sum_{\tau=1}^{T} \kappa_t(\tau) \left(\mathbf{X}(\tau, i) - \widetilde{\mathbf{X}}_{\kappa_t}(\cdot, i)\right)\left(\mathbf{X}(\tau, j) - \widetilde{\mathbf{X}}_{\kappa_t}(\cdot, j)\right)}{\sqrt{\sum_{\tau=1}^{T} \kappa_t(\tau) \left(\mathbf{X}(\tau, i) - \widetilde{\mathbf{X}}_{\kappa_t}(\cdot, i)\right)^2} \sqrt{\sum_{\tau=1}^{T} \kappa_t(\tau) \left(\mathbf{X}(\tau, j) - \widetilde{\mathbf{X}}_{\kappa_t}(\cdot, j)\right)^2}},
\]
where \(\widetilde{\mathbf{X}}_{\kappa_t}(\cdot, k) = \frac{\sum_{\tau=1}^{T} \kappa_t(\tau) \mathbf{X}(\tau, k)}{\sum_{\tau=1}^{T} \kappa_t(\tau)}\) is the kernel-weighted mean of column \(k\).
Here \(\mathrm{timecorr}_{\kappa_t}(\mathbf{X}(\cdot, i), \mathbf{X}(\cdot, j))\) reflects the correlation at time \(t\) between columns \(i\) and \(j\) of \(\mathbf{X}\), estimated using the kernel \(\kappa_t\). We evaluate the timecorr equation in turn for each pair of columns in \(\mathbf{X}\), using kernels centered on each timepoint in the timeseries, to obtain a \(K \times K \times T\) timeseries of dynamic correlations, \(\mathbf{Y}\).
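To connect the formula to code, here is a minimal NumPy sketch that estimates the kernel-weighted correlation matrix at a single timepoint. This illustrates the math above rather than reproducing the toolbox’s implementation; the Gaussian kernel and its variance are arbitrary choices:

import numpy as np

def kernel_corr(X, t, var=10):
    # evaluate a Gaussian kernel centered on timepoint t at every timepoint,
    # then normalize the weights to sum to 1
    T = X.shape[0]
    taus = np.arange(T)
    kappa = np.exp(-(taus - t) ** 2 / (2 * var))
    kappa /= kappa.sum()

    # kernel-weighted column means; center each column around its weighted mean
    mu = kappa @ X
    Xc = X - mu

    # kernel-weighted covariance, normalized to a correlation matrix
    cov = (kappa[:, None] * Xc).T @ Xc
    sd = np.sqrt(np.diag(cov))
    return cov / np.outer(sd, sd)

X = np.random.randn(100, 5)
print(kernel_corr(X, t=50).shape)  # (5, 5): the estimated correlation matrix at t = 50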
Dynamic inter-subject functional correlations (dISFC)¶
Given a multi-subject dataset, Simony et al. (2016) define a measure of stimulus-driven inter-regional correlations called inter-subject functional correlation (ISFC). The timecorr toolbox extends this idea to compute a timeseries of ISFC matrices from a multi-subject dataset. (The original ISFC approach computes a static ISFC matrix, although the authors of the original study use a sliding window approach to approximate a timeseries.)
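As a rough sketch of how this looks in code, with random data standing in for a real multi-subject dataset (tc.isfc and tc.corrmean_combine are assumed to be the toolbox’s ISFC correlation and averaging functions, and the kernel parameters are again arbitrary):

# hypothetical multi-subject data: a list of T x K matrices, one per subject
subjects = [np.random.randn(100, 5) for _ in range(8)]

# dynamic ISFC: at each timepoint, correlate each subject's activity with
# the average activity of the remaining subjects, then average the results
disfc = tc.timecorr(subjects,
                    cfun=tc.isfc,
                    combine=tc.corrmean_combine,
                    weights_function=tc.gaussian_weights,
                    weights_params={'var': 10})
print(disfc.shape)  # (100, 15): one vectorized K x K dISFC matrix per timepoint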
Dynamic high-order functional correlations¶
The image below summarizes a variety of neural patterns that one could (in principle) compute or estimate from a neural dataset. Within-brain analyses are carried out within a single brain, whereas across-brain analyses compare neural patterns across two or more individuals’ brains. Univariate analyses characterize the activities of individual units (e.g., nodes, small networks, or hierarchies of networks), whereas multivariate analyses characterize the patterns of activities across units. Order 0 patterns involve individual nodes; order 1 patterns involve node-node interactions; order 2 (and higher) patterns relate to interactions between homologous networks. Each of these patterns may be static (e.g., averaged over time) or dynamic.
Computing high-order functional correlations naively would be computationally intractable for even modest numbers of nodes (brain regions) and orders, because the number of resulting patterns at each timepoint grows exponentially with the order of the interactions one attempts to investigate. To make the computations tractable, we use the so-called kernel trick popularized in classification approaches. Rather than carrying out the computations in the “native” feature space (whose dimensionality scales exponentially with order), we first project the data onto a much lower-dimensional space (with \(K\) dimensions), and then we perform the key computations in that low-dimensional space. This enables us to achieve linear scaling with the order of the functional correlations, at the expense of precision (since low-dimensional embeddings are lossy). We primarily use two approaches for “embedding” the high-dimensional dynamic correlations in a \(K\)-dimensional space:
Dimensionality reduction approaches take the \(\mathcal{O}(K^2)\) patterns and embed them in a \(K\)-dimensional space that preserves (to the extent possible) the relations between the patterns at different timepoints. Graph measure approaches forgo attempts to preserve the original activity patterns in favor of instead preserving each node’s changing positions within the broader network (with respect to other nodes).
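As a sketch of a single step of this procedure, reusing the simulated data from above and assuming that timecorr’s rfun argument accepts 'PCA' as a dimensionality-reduction option:

# compute dynamic correlations, then embed each timepoint's vectorized
# correlation matrix back into a K-dimensional space
order1 = tc.timecorr(data,
                     weights_function=tc.gaussian_weights,
                     weights_params={'var': 10},
                     rfun='PCA')
print(order1.shape)  # (100, 5): same shape as the original data

# feeding the result back through the same operation yields (embedded)
# order-2 dynamic correlations; repeating climbs to higher orders
order2 = tc.timecorr(order1,
                     weights_function=tc.gaussian_weights,
                     weights_params={'var': 10},
                     rfun='PCA')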
An overview of the timecorr approach, along with a summary of some of the key findings from this paper, may be found in this video (credit: Lucy Owen; a recording of a talk she gave at Indiana University).
from IPython.display import YouTubeVideo
YouTubeVideo('y1HYFXVJ5to')
Getting Started¶
Before getting started with this tutorial, we need to make sure you have the necessary software installed and data downloaded.
Software¶
This tutorial requires the following Python packages to be installed. See the Software Installation tutorial for more information.
seaborn
matplotlib
bokeh
holoviews
panel
numpy
pandas
nltools
nilearn
timecorr
datalad
Let’s now load the modules we will be using for this tutorial.
import os
from glob import glob as lsdir

import numpy as np
import pandas as pd

# neuroimaging data structures and plotting
from nltools.mask import create_sphere, expand_mask
from nltools.data import Brain_Data, Adjacency
from nilearn.plotting import plot_stat_map

# dynamic correlations
import timecorr as tc

# static and interactive visualization
import seaborn as sns
import holoviews as hv
from holoviews import opts, dim
from bokeh.io import curdoc
from bokeh.layouts import layout
from bokeh.models import Slider, Button
from bokeh.embed import file_html
from bokeh.resources import CDN
import panel as pn
import IPython

# data management
import datalad.api as dl

import warnings
warnings.simplefilter('ignore')

%matplotlib inline
hv.extension('bokeh')
hv.output(size=200)