- Why is my MRI/fMRI analysis giving "bad/wrong/unexpected" results?
- Why do the raw MRI images look weird (blurry, stripy, not brain-like, heavily distorted)?
- Why is it that when I run my "bread and butter" localizer, nothing survives significance?
- Why are my result maps (slice or on the surface) noisy-looking/speckly/no-strong-blobs?
- Why am I getting "variance explained" / significant contrasts (differences) outside the brain or in ventricles or white matter?
- Why are my subjects giving very different results? E.g., the final "paper figure" result looks completely qualitatively different across subjects.
- If I change a supposedly incidental way of performing an analysis (e.g. choice of metric, choice of atlas, using one arbitrary threshold), why do the downstream results look qualitatively/substantially different?
- Bear in mind, just because you are staring at something unexpected doesn't necessarily mean that something is wrong.
- Distinction between actually wrong ("bug") vs. just sub-optimal measurements.
- List of potential suspects, topics, and issues. Diagnosis is hard: you have to proceed methodically, and often you have to deal with uncertainty (sometimes you can't definitively rule a suspect out).
What exactly is the effect?
Why is it bad? (What consequences does it have?)
In what circumstances does the effect seem to happen?
What causes the effect?
How can we prevent the effect from happening?
If the effect already exists, how can we compensate for (mitigate) the effect?
- Timing/synchronization problems (including mis-recording, or failing to save, exactly what happened at every moment in the experiment)
- Be meticulous. Keep extensive records. Synchronize with the scanner. Test the timing and record empirical results. Try your experiment on a phantom or a test human.
- If you know what the timing was, you can resample your data. If you don't know what it was but know that it was a fixed offset, you can empirically determine the optimal shift.
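If the offset is fixed but unknown, one way to determine it empirically is to slide a predicted time course against the measured signal and take the shift that maximizes correlation. A minimal sketch, assuming integer sample shifts (the function name and interface are illustrative, not a standard API):

```python
import numpy as np

def estimate_fixed_offset(design, signal, max_shift):
    """Estimate a fixed integer shift (in samples) between a predicted
    design time course and the measured signal by maximizing the
    Pearson correlation over candidate shifts.

    A positive shift means the signal lags the design.
    Returns (best_shift, correlation_at_best_shift).
    """
    best_shift, best_r = 0, -np.inf
    for shift in range(-max_shift, max_shift + 1):
        shifted = np.roll(design, shift)  # candidate lagged prediction
        r = np.corrcoef(shifted, signal)[0, 1]
        if r > best_r:
            best_shift, best_r = shift, r
    return best_shift, best_r
```

In practice you would use the HRF-convolved design matrix column as `design`; sub-sample shifts would require interpolation, which this sketch omits.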
- Stimulus presentation problems (sharpness, calibration, viewing angle; volume calibration, audibility over scanner noise; glitches in the experimental presentation)
- Get calibration measurements. Fix the setup.
- Interpret your results with lots of caveats.
- Experimental design was crappy or insufficient (either the stimulus design, the cognitive trial design, the distribution of trials, the inter-trial intervals, etc.)
- Think about experimental design optimization: you can run simulations, or you can consult someone with expertise.
- There are no real fixes other than trying to do better in your fMRI signal extraction ("denoising"). Pooling your data across subjects is one way to approach this.
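Simulating design efficiency before scanning is cheap. The sketch below scores an event design by the inverse variance of a GLM amplitude estimate, a standard textbook simplification; the HRF shape and all names here are illustrative assumptions:

```python
import numpy as np

def hrf(t):
    # Crude gamma-shaped hemodynamic response (illustrative only;
    # real analyses use a calibrated double-gamma HRF).
    return t**5 * np.exp(-t) / 120.0

def design_efficiency(onsets, n_scans, tr=2.0):
    """Efficiency of detecting a single condition's amplitude:
    1 / variance of its GLM estimate (up to the noise scale).
    Higher is better. Onsets are in seconds."""
    stick = np.zeros(n_scans)
    for o in onsets:  # place a delta at each event onset
        idx = int(round(o / tr))
        if idx < n_scans:
            stick[idx] += 1
    h = hrf(np.arange(0, 30, tr))              # 30-s HRF window
    reg = np.convolve(stick, h)[:n_scans]      # predicted BOLD regressor
    X = np.column_stack([reg, np.ones(n_scans)])  # regressor + intercept
    c = np.array([1.0, 0.0])                   # contrast on the regressor
    return 1.0 / (c @ np.linalg.inv(X.T @ X) @ c)
```

Comparing candidate trial sequences this way (e.g., different inter-trial intervals) lets you catch weak designs before any scanning happens.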
- Subjects falling asleep, not doing what you want, being cognitively impaired, or not understanding the task
- Check / train your subjects; encourage them to drink caffeine; give monetary incentives for performance?; bribe them with food?; stress to your subjects the importance of e.g. fixation, being still; practice your tasks (either outside the scanner or potentially even in mock scanners); screen for good subjects; give behavioral feedback to your subjects; threaten to make them do more runs or scan sessions. Don't schedule at sleepy times. Consider more short-ish runs as opposed to fewer very long runs. Make sure your scanning process is as streamlined and no-hiccups as possible (get your subjects in and out as efficiently as possible).
- At a minimum, diagnose who performed badly (or when) based on behavioral records; exclude bad subjects (based on evidence).
- Stimulus delivery issues (eye movements when you don't want them; subjects didn't have corrected acuity; subjects didn't receive the right auditory levels)
- Have good screening procedures for subjects, and calibrate. You could imagine doing precise eye tracking outside of the scanner to measure how good subjects are (and to train them).
- Compensate for eye movements if you have a good guess as to where the eyes were.
- Too much noise (or too little signal)
- Too much head motion (drifts vs. jerks; the latter is the main problem)
- Train your subjects; find still subjects
- Too much thermal noise
- Think about your MRI parameters
- Too much physiological noise (breathing / respiratory)
- Ask your subjects to keep calm and steady
- Subject's hemodynamic responses are just really weak
- Pre-select subjects?
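Head-motion jerks (the main problem noted above) are often flagged from the realignment parameters via framewise displacement. A rough sketch following the common Power-style definition, where the 50 mm head radius used to convert rotations into millimeters is a conventional assumption:

```python
import numpy as np

def framewise_displacement(motion, radius=50.0):
    """Framewise displacement from 6 rigid-body motion parameters
    per frame: 3 translations (mm) then 3 rotations (radians).

    Rotations are converted to arc length on a sphere of the given
    radius (mm); FD is the sum of absolute frame-to-frame changes.
    Returns one value per frame (first frame is 0)."""
    motion = np.asarray(motion, float).copy()
    motion[:, 3:] *= radius                 # radians -> mm of arc
    d = np.abs(np.diff(motion, axis=0))     # frame-to-frame deltas
    return np.concatenate([[0.0], d.sum(axis=1)])
```

Frames with FD above a threshold (often a fraction of a millimeter) mark candidate jerks worth censoring or at least inspecting.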
- Your anatomical scans just don't have good enough gray/white contrast (sequence parameters are sub-optimal, your subject moved, you scanned at high magnetic field, or your scans have huge bias fields (smooth spatial intensity gradients))
- Quickly inspect them visually at the scanner and re-collect if necessary.
- The "pre-processing" you applied just failed miserably (for a variety of possible reasons)
- Partial brain coverage can make co-registration hard
- Functional data registration to anatomical data failed miserably
- Cortical surface reconstruction had massive errors
- Stability of the brain from run to run or scan session to scan session is just poor (e.g. the motion correction is giving sub-optimal results)
- Atlas registration can be off (e.g., registration to MNI or fsaverage can be substantially inaccurate)
- LOOK and INSPECT the results of these steps instead of just assuming that everything was fine.
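One inspection that is easy to automate is the run-to-run stability check: correlate the mean volume of each run against the others, and treat low correlations as a flag for motion or registration problems worth eyeballing. A hypothetical sketch, assuming the runs are already loaded as 4-D arrays (the function name is illustrative):

```python
import numpy as np

def run_stability(runs):
    """Pairwise Pearson correlation of the mean volume of each run.

    runs: list of 4-D arrays shaped (x, y, z, time), assumed to be
    motion-corrected into a common space. Values near 1 indicate
    stable runs; low values flag runs to inspect by eye."""
    means = [r.mean(axis=-1).ravel() for r in runs]  # mean volume per run
    n = len(means)
    out = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            r = np.corrcoef(means[i], means[j])[0, 1]
            out[i, j] = out[j, i] = r
    return out
```

This does not replace looking at the images, but it cheaply surfaces which runs (or sessions) deserve a closer look.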
- Your scientific analysis is wrong and doesn't quantify things well.
- Run through the entire process from start to finish for either an "easy" fMRI experiment or the actual experiment (i.e. pilot).
- Consult with people who have expertise on MRI/fMRI/processing.
- Be conservative. If it ain't broke, don't fix it. (This applies to all aspects: the experiment, the pre-processing, the MRI data collection parameters, etc.) If you do change things, change one thing at a time.
- Create safeguards and sanity checks.
- For example, if you have no idea whether the T1 data are okay, they are a potential suspect. So create ways to assess data quality and rule suspects out.
- As another example, if you confirm that the fMRI data from a given subject are visually stable from run to run (and look like a high-quality brain), this provides a good safeguard against low-level image artifacts and pre-processing robustness/sanity issues, etc.
- Create a list of suspects and rule them out one at a time.
- Make a list of suspects: basically, everything on this page.
- For example, if you have a bread-and-butter fMRI experiment and you can establish that you get sane fMRI-based analysis results from your subjects, this is strong evidence that you can rule out egregious pre-processing issues, subject problems, etc.
- One strategy to check bleeding-edge code development is to run the code on "known good" experimental data and/or simulated data.
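The simulated-data strategy can be sketched as a tiny harness: generate data with known ground-truth parameters, run the analysis code, and check that it recovers them. Everything below (names, the OLS fit, the tolerance) is illustrative, not a prescribed implementation:

```python
import numpy as np

def sanity_check_glm(fit_fn, n=200, noise_sd=0.1, tol=0.05, seed=0):
    """Run a GLM-fitting routine on simulated data with known betas
    and report whether it recovers them within tolerance."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([rng.standard_normal(n), np.ones(n)])  # regressor + intercept
    true_beta = np.array([2.0, 5.0])                           # known ground truth
    y = X @ true_beta + noise_sd * rng.standard_normal(n)      # simulated data
    est = fit_fn(X, y)
    return bool(np.max(np.abs(est - true_beta)) < tol)

def ols(X, y):
    """Ordinary least squares, as a 'known good' reference fit."""
    return np.linalg.lstsq(X, y, rcond=None)[0]
```

A new, fancier fitting routine should pass the same harness that `ols` passes; a silent failure here is far cheaper to find than one discovered in real data.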