Calculate an FFT on a TimeSeries DataType and return a FourierSpectrum DataType.
A module for calculating the FFT of a TimeSeries object of TVB and returning a FourierSpectrum object. A segment length and windowing function can be optionally specified. By default the time series is segmented into 1 second blocks and no windowing function is applied.
# type: (TimeSeries, float, function, bool) -> FourierSpectrum
Calculate the FFT of time_series, broken into segments of length segment_length and filtered by window_function.
time_series : TimeSeries The TimeSeries to which the FFT is to be applied.
segment_length : float The segment length determines the frequency resolution of the resulting power spectra – longer windows produce finer frequency resolution.
window_function : str Windowing functions can be applied before the FFT is performed. Default is None; possibilities are: 'hamming', 'bartlett', 'blackman', and 'hanning'. See numpy.<function_name>.
detrend : bool Default is True, False means no detrending is performed on the time series.
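A minimal sketch of this segmentation scheme on a plain (time_points, nodes) numpy array, assuming uniform sampling; the function name and signature are illustrative, not the TVB API::

    import numpy as np
    from scipy import signal

    def segmented_fft(data, sample_period, segment_length=1.0,
                      window_function=None, detrend=True):
        # Break the series into non-overlapping segments of segment_length seconds.
        seg_tpts = int(segment_length / sample_period)
        n_segs = data.shape[0] // seg_tpts
        segments = data[:n_segs * seg_tpts].reshape(n_segs, seg_tpts, -1)
        if detrend:
            segments = signal.detrend(segments, axis=1)  # linear detrend per segment
        if window_function is not None:
            # e.g. 'hamming', 'bartlett', 'blackman', 'hanning' -> numpy functions
            window = getattr(np, window_function)(seg_tpts)
            segments = segments * window[np.newaxis, :, np.newaxis]
        spectra = np.fft.rfft(segments, axis=1)           # FFT along time within segments
        freqs = np.fft.rfftfreq(seg_tpts, d=sample_period)
        return freqs, spectra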
Implementation of different BOLD signal models. Four different models are distinguished:
Classical means that the coefficients used to compute the BOLD signal are derived as described in [Obata2004]. Revised coefficients are defined in [Stephan2007].
References:
[Stephan2007] Stephan KE, Weiskopf N, Drysdale PM, Robinson PA, Friston KJ (2007). Comparing hemodynamic models with DCM. NeuroImage 38: 387-401.
[Obata2004] Obata T, Liu TT, Miller KL, Luh WM, Wong EC, Frank LR, Buxton RB (2004). Discrepancies between BOLD and flow dynamics in primary and supplementary motor areas: application of the balloon model to the interpretation of BOLD transients. NeuroImage 21: 144-153.
Bases: tvb.basic.neotraits._core.HasTraits
A class for calculating the simulated BOLD signal given a TimeSeries object of TVB and returning another TimeSeries object.
The haemodynamic model parameters are based on constants for a 1.5 T scanner.
tau_o : tvb.analyzers.fmri_balloon.BalloonModel.tau_o = Float(field_type=<class 'float'>, default=0.98, required=True)
Balloon model parameter. Haemodynamic transit time (s). The average time blood takes to traverse the venous compartment. It is the ratio of resting blood volume (V0) to resting blood flow (F0).
gid : tvb.basic.neotraits._core.HasTraits.gid = Attr(field_type=<class 'uuid.UUID'>, default=None, required=True)
Declares a float. This is different from Attr(field_type=float). The former enforces float subtypes. This allows any type that can be safely cast to the declared float type according to numpy rules.
Reading and writing this attribute is slower than a plain python attribute. In performance sensitive code you might want to use plain python attributes or even better local variables.
An Attr declares the following about the attribute it describes: the type; a default value shared by all instances; whether the value might be missing; documentation. It will resolve to attributes on the instance.
The Balloon model equations. See Eqs. (4-10) in [Stephan2007].

.. math::

    \frac{ds}{dt} &= x - \kappa\,s - \gamma\,(f-1) \\
    \frac{df}{dt} &= s \\
    \frac{dv}{dt} &= \frac{1}{\tau_o}\,(f - v^{1/\alpha}) \\
    \frac{dq}{dt} &= \frac{1}{\tau_o}\left(f\,\frac{1-(1-E_0)^{1/f}}{E_0} - v^{1/\alpha}\,\frac{q}{v}\right) \\
    \kappa &= \frac{1}{\tau_s} \\
    \gamma &= \frac{1}{\tau_f}
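As a concrete reading of these equations, a bare Euler integration over a precomputed neural activity trace x; tau_o uses the 0.98 s transit-time default quoted above, while the remaining parameter values are illustrative stand-ins, not necessarily the TVB defaults::

    import numpy as np

    def balloon_dfun(state, x_t, tau_s=0.8, tau_f=0.4, tau_o=0.98,
                     alpha=0.32, E0=0.4):
        s, f, v, q = state
        kappa, gamma = 1.0 / tau_s, 1.0 / tau_f
        ds = x_t - kappa * s - gamma * (f - 1.0)
        df = s
        dv = (f - v ** (1.0 / alpha)) / tau_o
        dq = (f * (1.0 - (1.0 - E0) ** (1.0 / f)) / E0
              - v ** (1.0 / alpha) * q / v) / tau_o
        return np.array([ds, df, dv, dq])

    def integrate(x, dt=1e-3):
        state = np.array([0.0, 1.0, 1.0, 1.0])  # s = 0, f = v = q = 1 at rest
        out = np.empty((len(x), 4))
        for i, x_t in enumerate(x):
            state = state + dt * balloon_dfun(state, x_t)  # forward Euler step
            out[i] = state
        return out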
Declares a numpy array. dtype enforces the dtype; the default dtype is float32. An optional symbolic shape can be given as a tuple of Dim attributes from the owning class. The shape will be enforced, but no broadcasting will be done. domain declares what values are allowed in this array; it can be any object that can be checked for membership. Defaults are checked if they are in the declared domain. For performance reasons this does not happen on every attribute set.
Returns the storage size in Bytes of the extended result of the .... That is, it includes storage of the evaluated ... attributes such as ..., etc.
Useful graph analyses.
Node betweenness centrality is the fraction of all shortest paths in the network that contain a given node. Nodes with high values of betweenness centrality participate in a large number of shortest paths.
Parameters: A – binary (directed/undirected) connection matrix (array).
Returns: BC – vector of node betweenness centrality values.
Notes:
Betweenness centrality may be normalised to the range [0,1] as BC/[(N-1)(N-2)], where N is the number of nodes in the network.
Original: Mika Rubinov, UNSW/U Cambridge, 2007-2012. From BCT 2012-12-04.
Reference: [1] Kintali (2008) arXiv:0809.1906v2 [cs.DS] (generalization to directed and disconnected graphs)
Author: Paula Sanz Leon
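As a hedged cross-check of this definition (using NetworkX rather than the BCT port documented here), raw betweenness counts followed by the BC/((N-1)(N-2)) normalisation from the note above::

    import numpy as np
    import networkx as nx

    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]])        # a 4-node path graph
    G = nx.from_numpy_array(A)
    bc = nx.betweenness_centrality(G, normalized=False)
    N = A.shape[0]
    bc_norm = {n: v / ((N - 1) * (N - 2)) for n, v in bc.items()}
    # interior nodes of the path participate in the most shortest paths
    print(bc)       # {0: 0.0, 1: 2.0, 2: 2.0, 3: 0.0}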
Compute the inverse shortest path lengths of G.
Parameters: G – binary undirected connection matrix.
Returns: D – matrix of inverse distances.
Computes global efficiency or local efficiency of a connectivity matrix. The global efficiency is the average of inverse shortest path length, and is inversely related to the characteristic path length.
The local efficiency is the global efficiency computed on the neighborhood of the node, and is related to the clustering coefficient.
Parameters: A – binary (un)directed connection matrix (array); compute_local_efficiency – bool, optional. If True, compute the local efficiency for every node instead of the global efficiency.
Returns: E – global efficiency (scalar), or local efficiency (vector) when compute_local_efficiency is True.
References: [1] Latora and Marchiori (2001) Phys Rev Lett 87:198701.
Note: Algorithm – algebraic path count.
Note: Original – Mika Rubinov, UNSW, 2008-2010. From BCT 2012-12-04.
Note: Tested with Numpy 1.7.
Warning: tested against the Matlab version... needs indexing improvement.
Example:
>>> import numpy as np
>>> A = np.random.rand(5, 5)
>>> E = efficiency_bin(A)
>>> E.shape == (1, )
True
If you want to compute the local efficiency for every node in the network:
>>> E = efficiency_bin(A, compute_local_efficiency=True)
>>> E.shape == (5, 1)
True
Author: Paula Sanz Leon
Get connected components sizes. Returns the size of the largest component of an undirected graph specified by the binary and undirected connection matrix A.
Parameters: A – array; binary undirected (BU) connectivity matrix.
Returns: the size of the largest connected component.
Raises: ValueError – if A is not square.
Warning: requires NetworkX.
Author: Paula Sanz Leon
A strategy to lesion a connectivity matrix.
A single node is removed at each step until the network is reduced to only 2 nodes. This method represents a structural failure analysis and it should be run several times with different random sequences.
References: Alstott J, Breakspear M, Hagmann P, Cammoun L, Sporns O (2009). Modeling the impact of lesions in the human brain. PLoS Comput Biol 5(6): e1000408.
Author: Paula Sanz Leon
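A minimal sketch of the random-deletion loop described above, on a plain weighted adjacency matrix rather than a TVB Connectivity object, tracking the size of the largest remaining component as one representative metric::

    import numpy as np
    import networkx as nx

    def sequential_random_deletion(W, seed=42):
        rng = np.random.default_rng(seed)
        alive = list(range(W.shape[0]))
        largest = []
        while len(alive) > 2:
            victim = int(rng.choice(alive))     # remove one random node per step
            alive.remove(victim)
            sub = (W[np.ix_(alive, alive)] > 0).astype(int)
            G = nx.from_numpy_array(sub)
            largest.append(max(len(c) for c in nx.connected_components(G)))
        return np.array(largest)

Because the sequence is random, repeated runs with different seeds give the distribution of failure trajectories the text calls for.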
A strategy to lesion a connectivity matrix.
A single node is removed at each step until the network is reduced to only 2 nodes. At each step different graph metrics are computed (degree, strength and betweenness centrality). The single node with the highest degree, strength or centrality is removed.
See also: sequential_random_deletion, localized_area_deletion
References: Alstott J, Breakspear M, Hagmann P, Cammoun L, Sporns O (2009). Modeling the impact of lesions in the human brain. PLoS Comput Biol 5(6): e1000408.
Author: Paula Sanz Leon
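The targeted variant differs only in how the victim is chosen; a sketch using degree as the ranking metric (strength or betweenness centrality would slot in the same way)::

    import numpy as np

    def sequential_targeted_deletion(W):
        B = (W > 0).astype(int)
        alive = list(range(B.shape[0]))
        removal_order = []
        while len(alive) > 2:
            sub = B[np.ix_(alive, alive)]
            degrees = sub.sum(axis=1)
            victim = alive[int(np.argmax(degrees))]  # hub-first removal
            removal_order.append(victim)
            alive.remove(victim)
        return removal_order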
Perform Independent Component Analysis on a TimeSeries Object and returns an IndependentComponents datatype.
# type: (TimeSeries, int) -> IndependentComponents
Run FastICA on the given time series data.
time_series : TimeSeries The timeseries to which the ICA is to be applied.
n_components : int Number of independent components to unmix.
Perform Fast Independent Component Analysis.
The functional form of the G function used in the approximation to neg-entropy. Could be either 'logcosh', 'exp', or 'cube'. You can also provide your own function. It should return a tuple containing the value of the function and of its derivative at the point. The derivative should be averaged along its last dimension. Example:
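For instance, a user-supplied G reproducing the built-in 'cube' nonlinearity could look like this sketch (illustrative, following the contract above)::

    def my_g(x):
        # returns (G(x), mean of G'(x) along the last axis)
        return x ** 3, (3 * x ** 2).mean(axis=-1)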
Estimated un-mixing matrix. The mixing matrix can be obtained by::

    w = np.dot(W, K.T)
    A = w.T * (w * w.T).I
The data matrix X is considered to be a linear combination of non-Gaussian (independent) components, i.e. X = AS, where columns of S contain the independent components and A is a linear mixing matrix. In short, ICA attempts to 'un-mix' the data by estimating an un-mixing matrix W where S = W K X.
This implementation was originally made for data of shape [n_features, n_samples]. Now the input is transposed before the algorithm is applied. This makes it slightly faster for Fortran-ordered input.
Implemented using FastICA: A. Hyvarinen and E. Oja, Independent Component Analysis: Algorithms and Applications, Neural Networks, 13(4-5), 2000, pp. 411-430
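A minimal stand-alone usage sketch with scikit-learn's FastICA, which implements the algorithm documented above; the synthetic sources are illustrative::

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    S = rng.laplace(size=(1000, 3))        # non-Gaussian sources
    A = rng.normal(size=(3, 3))            # mixing matrix
    X = S @ A.T                            # observed mixtures, (n_samples, n_features)

    ica = FastICA(n_components=3, whiten="unit-variance", random_state=0)
    S_est = ica.fit_transform(X)           # estimated sources
    W = ica.components_                    # estimated un-mixing matrix
    A_est = ica.mixing_                    # estimated mixing matrix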
This module implements information theoretic analyses.
TODO: Fix docstring of sampen.
TODO: Convert sampen to a traited class.
TODO: Fix compatibility with Python 3 and recent numpy.
Computes (quadratic) sample entropy of a given input signal y, with embedding dimension m, and a match tolerance of r (ref 2). If an array of scale factors, taus, is given, the signal will be coarsened by each factor and a corresponding entropy will be computed (ref 1). If no value for r is given, it will be set to 0.15*y.std().
Currently, the implementation is lazy and expects or coerces scale factors to integer values.
With qse=True (default) the probability p is normalized for the value of r, giving the quadratic sample entropy, such that results from different values of r can be meaningfully compared (ref 2).
To check that the algorithm is working, look at ref 1, fig 1, and run
>>> import numpy
>>> sampen(numpy.random.randn(3*10000), r=.15, taus=numpy.r_[1:20], qse=False, m=2)
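For reference, a compact (and slow) implementation of the underlying sample-entropy count at a single scale, following the definition above; sampen_naive is an illustrative name, not the module's function::

    import numpy as np

    def sampen_naive(y, m=2, r=None):
        if r is None:
            r = 0.15 * y.std()
        n = len(y)

        def count_matches(k):
            # pairwise template matches of length k under Chebyshev distance < r
            templates = np.array([y[i:i + k] for i in range(n - m)])
            count = 0
            for i in range(len(templates) - 1):
                d = np.abs(templates[i + 1:] - templates[i]).max(axis=1)
                count += int((d < r).sum())
            return count

        B = count_matches(m)       # matches at length m
        A = count_matches(m + 1)   # matches at length m + 1
        return -np.log(A / B)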
Filler analyzer: Takes a TimeSeries object and returns a Float.
Filler analyzer: Takes a TimeSeries object and returns two Floats.
These metrics are described and used in:
Hellyer et al. (2014). The Control of Global Brain Dynamics: Opposing Actions of Frontoparietal Control and Default Mode Networks on Attention. J Neurosci 34(2): 451-461.
Proxy of spatial coherence (V):
Proxy metastability (M): the variability in spatial coherence of the signal globally or locally (within a network) over time.
Proxy synchrony (S) : the reciprocal of mean spatial variance across time.
# type: dict(TimeSeries, float, int) -> (float, float)
Compute the zero centered variance of node variances for the time_series.
time_series : TimeSeries Input time series for which the metric will be computed.
start_point : float Determines how many points of the TimeSeries will be discarded before computing the metric.
segment : int Divides the input time series into discrete, equally sized sequences and uses the last segment to compute the metric. Only used when the start point is larger than the time-series length.
Filler analyzer: Takes a TimeSeries object and returns a Float.
# type: dict(TimeSeries, float, int) -> float
Compute the zero centered global variance of the time_series.
time_series : TimeSeries Input time series for which the metric will be computed.
start_point : float Determines how many points of the TimeSeries will be discarded before computing the metric.
segment : int Divides the input time series into discrete, equally sized sequences and uses the last segment to compute the metric. Only used when the start point is larger than the time-series length.
Filler analyzer: Takes a TimeSeries object and returns a Float.
# type: dict(TimeSeries, float, int) -> float
Compute the zero centered variance of node variances for the time_series.
time_series : TimeSeries Input time series for which the metric will be computed.
start_point : float Determines how many points of the TimeSeries will be discarded before computing the metric.
segment : int Divides the input time series into discrete, equally sized sequences and uses the last segment to compute the metric. Only used when the start point is larger than the time-series length.
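The variance metrics above share the same skeleton; a minimal sketch on a plain (time, nodes) array, with start_point reduced to a sample offset::

    import numpy as np

    def variance_metrics(data, start_tpt=0):
        y = data[start_tpt:]
        y = y - y.mean(axis=0)                 # zero-centre each node
        global_variance = y.var()              # variance over all entries
        node_variances = y.var(axis=0)
        variance_of_node_variance = node_variances.var()
        return global_variance, variance_of_node_variance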
Compute cross coherence between all nodes in a time series.
# type: (TimeSeries, int) -> CoherenceSpectrum
Adapter for cross-coherence algorithm(s); evaluate coherence on time series.
time_series : TimeSeries The TimeSeries to which the Cross Coherence is to be applied.
nfft : int Data-points per block (should be a power of 2).
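A hedged pairwise cross-check with scipy.signal.coherence, whose Welch-style blockwise estimate matches the scheme described here (nfft maps onto nperseg); the test signals are illustrative::

    import numpy as np
    from scipy import signal

    fs = 250.0                                 # sampling frequency in Hz (illustrative)
    t = np.arange(0, 10, 1 / fs)
    x = np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)
    y = np.sin(2 * np.pi * 10 * t + 0.5) + np.random.randn(t.size)
    freqs, coh = signal.coherence(x, y, fs=fs, nperseg=256)   # coherence peaks near 10 Hz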
Calculate the cross spectrum and complex coherence on a TimeSeries datatype and return a ComplexCoherence datatype.
# type: (TimeSeries, float, float, float, str, bool, bool, int, bool, float, float) -> ComplexCoherenceSpectrum
Calculate the FFT, Cross Coherence and Complex Coherence of time_series broken into (possibly) epochs and segments of length epoch_length and segment_length respectively, filtered by window_function.
time_series : TimeSeries The timeseries for which the CrossCoherence and ComplexCoherence is to be computed.
epoch_length : float In general for lengthy EEG recordings (~30 min), the timeseries are divided into equally sized segments (~ 20-40s). These contain the event that is to be characterized by means of the cross coherence. Additionally each epoch block will be further divided into segments to which the FFT will be applied.
segment_length : float The segment length determines the frequency resolution of the resulting power spectra – longer windows produce finer frequency resolution.
segment_shift : float Time length by which neighboring segments are shifted. e.g. segment shift = segment_length / 2 means 50% overlapping segments.
window_function : str Windowing functions can be applied before the FFT is performed.
average_segments : bool Flag. If True, compute the mean Cross Spectrum across segments.
subtract_epoch_average : bool Flag. If True and the number of epochs is > 1, the mean across epochs is subtracted before computing the complex coherence.
zeropad : int Adds n zeros at the end of each segment and at the end of window_function. It is not yet functional.
detrend_ts : bool Flag. If True removes linear trend along the time dimension before applying FFT.
max_freq : float Maximum frequency points (e.g. 32., 64., 128.) represented in the output. Default is segment_length / 2 + 1.
npat : float This attribute appears to be related to an input projection matrix... which is not yet implemented.
Returns the shape of the main result and the average over epochs.
A module for calculating the FFT of a TimeSeries and returning a ComplexCoherenceSpectrum datatype.
[Freyer_2012] Freyer F, Reinacher M, Nolte G, Dinse HR, Ritter P (2012). Repetitive tactile stimulation changes resting-state functional connectivity: implications for treatment of sensorimotor decline. Front Hum Neurosci 6: 144.
Input: originally the input could be 2D (tpts x nodes/channels), and it was possible to give a 3D array (e.g., tpts x nodes/channels x trials) via the segment_length attribute. The current TVB implementation can handle 4D or 2D TimeSeries datatypes. Be warned: the 4D TimeSeries will be averaged and squeezed.
Output (main arrays): the cross-spectrum, and the complex coherence, from which the imaginary part can be extracted.
By default the time series is segmented into 1 second epoch blocks and 0.5 second segments with 50% overlap, to which a Hanning window is applied.
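A minimal sketch of the core quantities: segment-averaged cross-spectra S_ij(f) from Hanning-windowed FFTs, then the complex coherence C_ij(f) = S_ij / sqrt(S_ii * S_jj), whose imaginary part is the component emphasised in [Freyer_2012]; epoching and the other options above are omitted::

    import numpy as np

    def complex_coherence(data, seg_len, seg_shift):
        # data: (time, nodes); seg_len and seg_shift in samples
        starts = range(0, data.shape[0] - seg_len + 1, seg_shift)
        window = np.hanning(seg_len)
        cross = 0
        for s in starts:
            seg = data[s:s + seg_len] * window[:, None]
            F = np.fft.rfft(seg, axis=0)                  # (freq, nodes)
            cross = cross + np.einsum('fi,fj->fij', F, F.conj())
        cross = cross / len(starts)                       # mean cross-spectrum
        auto = np.sqrt(np.einsum('fii->fi', cross).real)  # sqrt of auto-spectra
        return cross / (auto[:, :, None] * auto[:, None, :])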
Perform Principal Component Analysis (PCA) on a TimeSeries datatype and return a PrincipalComponents datatype.
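A minimal SVD-based sketch of that pipeline on a plain (time, nodes) array; the names are illustrative, not the TVB API::

    import numpy as np

    def pca(data):
        centred = data - data.mean(axis=0)
        U, s, Vt = np.linalg.svd(centred, full_matrices=False)
        weights = Vt                            # component loadings per node
        fractions = s ** 2 / np.sum(s ** 2)     # fraction of variance per component
        projections = centred @ Vt.T            # component time courses
        return weights, fractions, projections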
Calculate a wavelet transform on a TimeSeries datatype and return a WaveletSpectrum datatype.
# type: (TimeSeries, Range, float, float, str, str) -> WaveletCoefficients Calculate the continuous wavelet transform of time_series.
time_series : TimeSeries The timeseries to which the wavelet is to be applied.
frequencies : Range The frequency resolution and range returned. Requested frequencies are converted internally into appropriate scales.
sample_period : float The sampling period of the computed wavelet spectrum.
q_ratio : float NFC. Must be greater than 5. Ratio of the center frequencies to bandwidths.
normalisation : str The type of normalisation for the resulting wavelet spectrum. Default is 'energy'; options are 'energy' and 'gabor'.
mother : str The mother wavelet function used in the transform.
A module for calculating the wavelet transform of a TimeSeries object of TVB and returning a WaveletSpectrum object. The sampling period and frequency range of the result can be specified. The mother wavelet can also be specified... (So far, only Morlet.)
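A bare-bones Morlet transform consistent with that description, built directly on numpy convolution; the sigma_t = q_ratio / (2*pi*f) width and the absence of 'energy'/'gabor' normalisation are simplifying assumptions::

    import numpy as np

    def morlet_cwt(x, freqs, sample_period, q_ratio=5.0):
        # x: 1-D signal; freqs in Hz; sample_period in seconds
        out = np.empty((len(freqs), len(x)), dtype=complex)
        for k, f in enumerate(freqs):
            sigma_t = q_ratio / (2.0 * np.pi * f)         # temporal width from Q
            t = np.arange(-4 * sigma_t, 4 * sigma_t, sample_period)
            wavelet = (np.exp(-t ** 2 / (2 * sigma_t ** 2))
                       * np.exp(2j * np.pi * f * t))      # complex Morlet carrier
            out[k] = np.convolve(x, wavelet, mode='same')
        return out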
[TBetal_1996] Tallon-Baudry C, et al. (1996). Stimulus Specificity of Phase-Locked and Non-Phase-Locked 40 Hz Visual Responses in Human. J Neurosci 16(13): 4240-4249.
[Mallat_1999] Mallat S (1999). A Wavelet Tour of Signal Processing. Academic Press.