Commit 445918f ("updated preprint"), 1 parent: a0cb67e

File tree

4 files changed (+5, -5 lines)


preprint/ms_thoughtExperiment2.log

Lines changed: 2 additions & 2 deletions
@@ -1,4 +1,4 @@
-This is pdfTeX, Version 3.14159265-2.6-1.40.15 (TeX Live 2015/dev/Debian) (preloaded format=pdflatex 2017.4.4) 6 AUG 2018 11:20
+This is pdfTeX, Version 3.14159265-2.6-1.40.15 (TeX Live 2015/dev/Debian) (preloaded format=pdflatex 2017.4.4) 6 AUG 2018 11:41
 entering extended mode
 restricted \write18 enabled.
 %&-line parsing enabled.
@@ -1169,7 +1169,7 @@ Here is how much of TeX's memory you used:
 e/texmf-dist/fonts/type1/urw/helvetic/uhvb8a.pfb></usr/share/texlive/texmf-dist
 /fonts/type1/urw/helvetic/uhvr8a.pfb></usr/share/texlive/texmf-dist/fonts/type1
 /urw/helvetic/uhvro8a.pfb>
-Output written on ms_thoughtExperiment2.pdf (17 pages, 7590621 bytes).
+Output written on ms_thoughtExperiment2.pdf (17 pages, 7590620 bytes).
 PDF statistics:
 492 PDF objects out of 1000 (max. 8388607)
 417 compressed objects within 5 object streams

preprint/ms_thoughtExperiment2.pdf

-1 Bytes
Binary file not shown.
-10 Bytes
Binary file not shown.

preprint/ms_thoughtExperiment2.tex

Lines changed: 3 additions & 3 deletions
@@ -201,18 +201,18 @@ \subsection{Study design}
 
 \subsection{Data acquisition}
 
-MRI data were collected using a 3T Siemens Verio scanner. A high-resolution MPRAGE structural scan was acquired with 192 sagittal slices (TR=1900 msec, TE=2.5 msec, 0.8mm slice thickness, 0.75x0.75 in-plane resolution), using a 32-channel head coil. Functional echo-planar images (EPI) were acquired with 21 axial slices oriented along the rostrum and splenium of the corpus callosum (slice thickness of 5 mm, in-plane resolution 2.4x2.4 mm), using a 12-channel head coil. To allow for audible instructions during scanning, a sparse temporal sampling strategy was used (TR=3000ms with 1800ms acquisition time and 1200ms pause between acquisitions). Excluding two dummy scans, a total of 253 volumes were collected for each run. The full raw data are available on OpenNeuro \href{https://openneuro.org/datasets/ds001419}{openneuro.org/datasets/ds001419}.
+MRI data were collected using a 3T Siemens Verio scanner. A high-resolution MPRAGE structural scan was acquired with 192 sagittal slices (TR=1900 msec, TE=2.5 msec, 0.8mm slice thickness, 0.75x0.75 in-plane resolution), using a 32-channel head coil. Functional echo-planar images (EPI) were acquired with 21 axial slices oriented along the rostrum and splenium of the corpus callosum (slice thickness of 5 mm, in-plane resolution 2.4x2.4 mm), using a 12-channel head coil. To allow for audible instructions during scanning, a sparse temporal sampling strategy was used (TR=3000ms with 1800ms acquisition time and 1200ms pause between acquisitions). Excluding two dummy scans, a total of 253 volumes were collected for each run. The full raw data are available on OpenNeuro (\href{https://openneuro.org/datasets/ds001419}{openneuro.org/datasets/ds001419}).
 
 \subsection{Data preprocessing}
 
-Basic preprocessing was performed using SPM12 (www.fil.ion.ucl.ac.uk/spm). Functional images were motion corrected using the realign function. The structural image was co-registered to the mean image of the functional time series and then used to derive deformation maps using the segment function \citep{Ashburner_2005}. The deformation fields were then applied to all images (structural and functional) to transform them into MNI standard space and up-sample them to 2mm isotropic voxel size. The full normalized fMRI time courses are available online ( \href{https://doi.org/10.6084/m9.figshare.5951563.v1}{doi.org/10.6084/m9.figshare.5951563.v1}). All further preprocessing steps were carried out using Nilearn 0.2.5 \citep{Abraham_2014} in Python 2.7. To generate an activity map for each of the 75 blocks, each voxel's time course was z-transformed to have mean zero and standard deviation one. Time courses were detrended using a linear function and movement parameters were added as confounds. Then TRs were grouped into blocks using a simple boxcar design shifted by 2 TR (the expected shift in the hemodynamic response function) and averaged, to give one averaged image per block. These images were used for all further analyses and are available on NeuroVault (\href{https://neurovault.org/collections/3467}{neurovault.org/collections/3467}).
+Basic preprocessing was performed using SPM12 (www.fil.ion.ucl.ac.uk/spm). Functional images were motion corrected using the realign function. The structural image was co-registered to the mean image of the functional time series and then used to derive deformation maps using the segment function \citep{Ashburner_2005}. The deformation fields were then applied to all images (structural and functional) to transform them into MNI standard space and up-sample them to 2mm isotropic voxel size. The full normalized fMRI time courses are available online (\href{https://doi.org/10.6084/m9.figshare.5951563.v1}{doi.org/10.6084/m9.figshare.5951563.v1}). All further preprocessing steps were carried out using Nilearn 0.2.5 \citep{Abraham_2014} in Python 2.7. To generate an activity map for each of the 75 blocks, each voxel's time course was z-transformed to have mean zero and standard deviation one. Time courses were detrended using a linear function and movement parameters were added as confounds. Then TRs were grouped into blocks using a simple boxcar design shifted by 2 TR (the expected shift in the hemodynamic response function) and averaged, to give one averaged image per block. These images were used for all further analyses and are available on NeuroVault (\href{https://neurovault.org/collections/3467}{neurovault.org/collections/3467}).
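The z-transform / boxcar-averaging step described in the paragraph above can be sketched as follows. This is an illustrative NumPy reconstruction, not the released analysis code: the function name, array layout, and onset bookkeeping are assumptions; only the mean-zero/sd-one scaling, the 2-TR boxcar shift, and the one-map-per-block averaging come from the text.

```python
import numpy as np

def block_average(timecourse, block_onsets, block_len, shift=2):
    """Average z-scored TRs into one activity map per block.

    timecourse   : (n_trs, n_voxels) array, already detrended and
                   confound-regressed
    block_onsets : first TR index of each block
    block_len    : number of TRs per block
    shift        : boxcar shift in TRs, accounting for the expected
                   hemodynamic delay (2 TRs in the text)
    """
    # z-transform each voxel's time course to mean zero, sd one
    tc = (timecourse - timecourse.mean(axis=0)) / timecourse.std(axis=0)
    maps = []
    for onset in block_onsets:
        start = onset + shift
        # average the shifted boxcar window into a single map
        maps.append(tc[start:start + block_len].mean(axis=0))
    return np.vstack(maps)  # (n_blocks, n_voxels)
```

Applied to the study's three runs this would yield the 75 block maps (one row per block) used in all further analyses.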
 
 \subsection{Data analysis}
 
 Emulating the “common task framework” \citep{Liberman_2015,Donoho_2017}, the study's data were analyzed with regard to a clearly defined objective and a metric for evaluating success. In the “common task framework”, data for training are shared and used by different parties. The parties try to learn a prediction rule from the training data, which can be applied to a set of test data. Only after the predictions have been submitted are the predictions on the test data evaluated. It can then be explored how different approaches to prediction compare to one another, given the same dataset and objective.
 Accordingly, the first two fMRI runs (50 blocks total, 10 blocks per condition) of our study were used as a training set and the third fMRI run (25 blocks total, 5 blocks per condition) was used as the held-out test set. To ensure proper blinding of the test data, the block order was randomly shuffled and the 25 blocks were then assigned letters from A to Y. The true labels of the blocks were known only to the first author (MW), who did not participate in making predictions for the test data. Fifteen of the authors formed four groups. Each group had to submit its predictions regarding the domain (e.g. “motor imagery”) and specific content (e.g. “tennis”) for each block in written form.
 The authors making the predictions were all graduate students of psychology, enrolled in a project seminar at Bielefeld University. Only after all predictions were submitted were the true labels of the test blocks revealed.
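The blinding and scoring procedure above can be illustrated with a short sketch. This is a toy reconstruction under stated assumptions (the actual bookkeeping was done by the first author): test blocks are indexed 0-24, `blind_blocks` and `score` are hypothetical helper names, and accuracy is used as the success metric.

```python
import random
import string

def blind_blocks(n_blocks=25, seed=None):
    """Shuffle the block order and assign letters A-Y; the key stays secret."""
    rng = random.Random(seed)
    order = list(range(n_blocks))
    rng.shuffle(order)
    letters = string.ascii_uppercase[:n_blocks]  # 'A' .. 'Y' for 25 blocks
    # key maps each public letter to the original (true-label) block index
    return dict(zip(letters, order))

def score(predictions, true_labels, key):
    """Fraction of letter-coded test blocks whose predicted label is correct.

    predictions : {letter: predicted label} as submitted by a group
    true_labels : list of true labels, indexed by original block number
    key         : the secret letter-to-block mapping
    """
    hits = sum(predictions[letter] == true_labels[key[letter]]
               for letter in predictions)
    return hits / len(predictions)
```

Only after all groups' `predictions` dictionaries are collected would `key` be used to evaluate them.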
-The groups were allowed to analyze the training and test data in any way they deemed fit, but all used a combination of the following methods: (i) Visual inspection with dynamic varying of thresholds using software such as Mricron or FSLView. (ii) Voxel-wise correlation of brain maps from the training and the test set, to find the blocks that are most similar to each other. (iii) Voxel-wise correlations of brain maps with maps from NeuroSynth \citep{Yarkoni_2011}, to find the keywords from the NeuroSynth database whose posterior probability maps are most similar to the participant's activity patterns. The basic principles of these analyses are presented in the following sections of the manuscript. Full code is available online (\href{(https://doi.org/10.5281/zenodo.1323665}{(doi.org/10.5281/zenodo.1323665}).
+The groups were allowed to analyze the training and test data in any way they deemed fit, but all used a combination of the following methods: (i) Visual inspection with dynamic varying of thresholds using software such as Mricron or FSLView. (ii) Voxel-wise correlation of brain maps from the training and the test set, to find the blocks that are most similar to each other. (iii) Voxel-wise correlations of brain maps with maps from NeuroSynth \citep{Yarkoni_2011}, to find the keywords from the NeuroSynth database whose posterior probability maps are most similar to the participant's activity patterns. The basic principles of these analyses are presented in the following sections of the manuscript. Full code is available online (\href{https://doi.org/10.5281/zenodo.1323665}{doi.org/10.5281/zenodo.1323665}).
 
 \subsubsection*{Similarity of blocks}
 For similarity analyses, Pearson correlations between the voxels of two brain images were computed. This was done either by correlating the activity maps of two individual blocks with each other, or by correlating an individual block with an average of all independent blocks belonging to the same condition.
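The block-vs-condition-average variant of this similarity analysis can be sketched as follows, a minimal NumPy illustration rather than the released code. It assumes each brain map is already a 1-D voxel vector (e.g. a flattened, masked image); `pearson_similarity` and `match_to_conditions` are illustrative names.

```python
import numpy as np

def pearson_similarity(map_a, map_b):
    """Pearson correlation between the voxel values of two brain maps."""
    return np.corrcoef(map_a, map_b)[0, 1]

def match_to_conditions(test_map, train_maps, train_labels):
    """Correlate one test-block map with the average training map of each
    condition; return the best-matching label and all similarity scores.

    train_maps   : (n_blocks, n_voxels) array of training block maps
    train_labels : condition label for each training block
    """
    scores = {}
    for label in sorted(set(train_labels)):
        idx = [i for i, l in enumerate(train_labels) if l == label]
        avg = train_maps[idx].mean(axis=0)  # condition-average map
        scores[label] = pearson_similarity(test_map, avg)
    return max(scores, key=scores.get), scores
```

The block-to-block variant mentioned in the text is just `pearson_similarity` applied to two individual block maps directly.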
