Revisit permutation test methodology #102

Open
@tsalo

Description

In working on #101, I've come across a few things in the permutation test methods that confuse me.

First, the permutation tests loop over datasets and parallelize across permutations. This makes sense in a non-imaging context, when you won't have many, if any, parallel datasets. However, in neuroimaging meta-analyses, you'll typically have many more parallel datasets (e.g., voxels) than permutations. Would it make sense to flip the approach in PyMARE, or would that cause too many problems for non-imaging meta-analyses?
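To make the trade-off concrete, here is a minimal sketch of the "flipped" approach: looping over permutations while vectorizing across parallel datasets (e.g., voxels) with NumPy broadcasting. This is an illustrative sign-flipping test on per-study estimates, not PyMARE's actual implementation; the function name and the choice of a mean-based statistic are assumptions.

```python
import numpy as np

def permutation_pvalues(y, n_perm=1000, seed=0):
    """Sign-flipping permutation test on a (n_studies, n_datasets) array.

    Vectorized across datasets (columns) and looping over permutations,
    i.e., the opposite of parallelizing across permutations per dataset.
    Hypothetical sketch -- not PyMARE's actual implementation.
    """
    rng = np.random.default_rng(seed)
    n_studies, n_datasets = y.shape
    observed = y.mean(axis=0)  # test statistic per dataset, shape (n_datasets,)
    exceed = np.zeros(n_datasets)
    for _ in range(n_perm):
        # One sign flip per study, broadcast across all datasets at once.
        signs = rng.choice([-1.0, 1.0], size=(n_studies, 1))
        permuted = (signs * y).mean(axis=0)
        exceed += np.abs(permuted) >= np.abs(observed)
    # Add-one correction keeps p-values strictly positive.
    return (exceed + 1) / (n_perm + 1)
```

With tens of thousands of voxels and only hundreds to thousands of permutations, the inner broadcast does far more work per iteration than a per-dataset loop would, which is the argument for flipping the parallelization axis.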

Second, I'm comparing PyMARE's approach to Nilearn's permuted_ols function. I've noticed that there are a few steps in Nilearn's procedure that aren't in PyMARE, including preprocessing applied to the target_vars (y), tested_vars (X), and confounding_vars (also X). Should we (1) adopt these preprocessing steps and/or (2) treat confounding variables differently from tested variables?
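For reference, the core of the confound handling in question is ordinary-least-squares residualization: regressing the confounds out of the other variables before permuting. The sketch below shows that step in isolation; it is a rough illustration of the kind of preprocessing permuted_ols applies, not Nilearn's exact code, and the variable names are illustrative.

```python
import numpy as np

def residualize(target, confounds):
    """Return OLS residuals of ``target`` after regressing out ``confounds``.

    Rough sketch of a confound-removal step like the one in Nilearn's
    permuted_ols; not their exact implementation.
    """
    beta, *_ = np.linalg.lstsq(confounds, target, rcond=None)
    return target - confounds @ beta

# Illustrative usage: orthogonalize both the tested design columns and the
# targets with respect to the confounds before running permutations.
# tested_resid = residualize(tested_vars, confounding_vars)
# y_resid = residualize(target_vars, confounding_vars)
```

Treating confounds this way (removed once, then held fixed while only the tested variables are permuted) is what distinguishes them from tested variables in Nilearn's procedure, which is why question (2) matters independently of question (1).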

Metadata

Labels: help wanted (Extra attention is needed), question (Further information is requested)
