General reasons why you might want to collect eye tracking data:
Eye gaze behavior is arguably interesting in and of itself.
E.g. priming changes eye behavior
Also for finding/interpreting group differences
Eyetracking to confirm that the subject is awake and that central fixation is stable
To know where gaze was DURING free-viewing (in the scanner or outside) in order to better interpret the data
Central fixation is especially critical for covert attention studies
How would you use it?
Gaze-contingent presentation - referring to changing the stimulus in real-time as a function of where subjects are looking
"Just record" the data
Include calibration stages DURING the experiment too, as an extra safeguard?
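The gaze-contingent idea can be sketched as a simple loop: check each new gaze sample against a fixation window and pause/blank the stimulus when gaze wanders. This is just an illustrative sketch; the threshold value and function names are hypothetical, and the actual tracker polling (e.g. via the EyeLink API) is omitted.

```python
import math

FIXATION = (0.0, 0.0)   # fixation point (degrees of visual angle); illustrative
THRESHOLD_DEG = 1.5     # hypothetical tolerance for "still fixating"

def is_fixating(gaze_xy, fixation=FIXATION, threshold=THRESHOLD_DEG):
    """Return True if gaze is within <threshold> degrees of the fixation point."""
    dx = gaze_xy[0] - fixation[0]
    dy = gaze_xy[1] - fixation[1]
    return math.hypot(dx, dy) <= threshold

def gaze_contingent_step(gaze_xy):
    """One iteration of a (hypothetical) experiment loop.

    In a real experiment, gaze_xy would come from polling the tracker."""
    if is_fixating(gaze_xy):
        return "show stimulus"    # subject is fixating: proceed
    else:
        return "pause stimulus"   # gaze wandered: pause/blank the display
```

In practice this check would run once per display frame, with the gaze sample fetched from the tracker's real-time link.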
Hardware systems
EyeLink is the "standard" and is expensive. Sampling rate can be up to 2000 Hz (it is user-changeable).
Avotec: can provide the raw eye videos which allows you to postprocess yourself! (e.g. 320x240, 60 fps)
Tobii. 60 Hz (for glasses); up to 1200 Hz for desktop system
Obviously, in-scanner setups are more difficult than out-of-scanner setups
Issues/problems with eyetracking:
Takes time to set up
During the experiment it may start failing
Sometimes it just "doesn't work" no matter what you try
Corrective lenses in the scanner tend to degrade tracking quality
Eyetracking accuracy is limited
The hardware is expensive
Analyzing eyetracking data can become quite difficult and tedious
Shifts of head position can (and will) invalidate the accuracy of the gaze estimates
Depending on the clarity of the eye video, large saccades may cause loss of the eye signal; hence, the tracker effectively has a limited usable field of view
Eyetracking in the scanner is especially confined and constrained (e.g. with respect to the RF coil) and a big pain in the ass
Currently, the vast majority of fMRI experiments unfortunately do not include eyetracking
It is tricky to distinguish bad eye fixation from bad eyetracking data
Examples
notice blink removal; line plot vs. 2D histogram
Two separate questions: Is the eyetracking working or failing? Is the subject's fixation stability good or bad? It is hard to distinguish these two possibilities. Also, notice that the bottom plot seems to indicate that the preprocessing may have artificially introduced some drift.
Green indicates samples that were deemed blinks/garbage; red indicates samples left in. This is an example of "pre-processing".
Right is after high-pass filtering (notice it's roughly centered on 0 and that the very slow trends are gone)
2D histograms and elliptical fit (2D Gaussian) as a function of different stimulus conditions / tasks / etc.
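The elliptical-fit idea in the last caption amounts to fitting a 2D Gaussian via the sample covariance of the gaze positions and reading the ellipse parameters off its eigendecomposition. A minimal NumPy sketch (the bin count and 2-sigma scaling are arbitrary illustrative choices):

```python
import numpy as np

def gaze_histogram_and_ellipse(x, y, bins=50):
    """2D histogram of gaze positions plus a covariance ("2-sigma") ellipse.

    Returns the histogram and the ellipse's center, axis lengths, and
    the orientation (degrees) of the major axis."""
    hist, xedges, yedges = np.histogram2d(x, y, bins=bins)
    center = (np.mean(x), np.mean(y))
    cov = np.cov(x, y)                          # 2x2 sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    axes = 2.0 * np.sqrt(eigvals)               # 2-sigma semi-axis lengths
    angle = np.degrees(np.arctan2(eigvecs[1, 1], eigvecs[0, 1]))
    return hist, center, axes, angle
```

Computing this separately per stimulus condition or task gives the condition-wise ellipses described above.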
What does one typically do to preprocess/analyze eyetracking data?
Load the data and separate the raw data into fixations vs. saccades vs. actual data (Eyelink provides both the raw data and annotations of it, e.g. blink events that the software thought occurred)
Bookkeeping and synchronization with other data (e.g. fMRI data)
Detect and reject blinks? Delete them (with some window around the blink events)?
Smooth (i.e. low-pass filter) and downsample if there is too much high-frequency noise?
Explicitly detect saccades?
Detrend the data (i.e., high-pass filter the data)?
Enforce zero-mean and/or zero-median (this is in effect like detrending)
Interpolate missing data (whether due to failure of the eyetracking video/hardware, blinks, or partial eyelid closure)
Be careful about order of operations — if you have missing data, you might have to fill in data, do some processing, and then re-censor the data.
Visualization: 2D histogram, line plots (good for highlighting long saccades)
Quality control and sanity-checking all of the data. Make sure you know the cases where eyetracking FAILED and hence the data are not trustable.
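Several of the steps above (blink censoring with a window, interpolation, detrending, downsampling) can be sketched in a few NumPy functions. This is a minimal sketch, not a prescription: the pad/window sizes are illustrative, and the running-mean subtraction is only a crude stand-in for a proper high-pass filter. Note the order-of-operations point from above: interpolate over the NaN gaps before filtering, then re-censor the filled-in stretches afterwards if needed.

```python
import numpy as np

def censor_blinks(x, blink_mask, pad=5):
    """Set samples flagged as blinks (plus <pad> samples on each side) to NaN."""
    x = x.astype(float).copy()
    for i in np.flatnonzero(blink_mask):
        x[max(0, i - pad):i + pad + 1] = np.nan
    return x

def interpolate_nans(x):
    """Linearly interpolate over NaN gaps (fill in before any filtering)."""
    x = x.copy()
    nans = np.isnan(x)
    x[nans] = np.interp(np.flatnonzero(nans), np.flatnonzero(~nans), x[~nans])
    return x

def detrend_highpass(x, win=501):
    """Crude high-pass: subtract a running mean with a <win>-sample window."""
    trend = np.convolve(x, np.ones(win) / win, mode="same")
    return x - trend

def downsample(x, factor=10):
    """Average non-overlapping blocks of <factor> samples (smooth + decimate)."""
    n = (len(x) // factor) * factor
    return x[:n].reshape(-1, factor).mean(axis=1)
```

A typical pipeline would be: censor, interpolate, detrend, downsample, and then re-apply the censoring mask (suitably downsampled) so that interpolated stretches are not mistaken for real data.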
How does one USE the eyetracking data?
Plot visualizations and/or summary analyses to show how well central fixation was maintained
Gaze-contingent visual stimulus analysis (e.g. adjust the experimental stimulus contingent on where the subject was supposedly looking)
Plot where the eye was (e.g. in free viewing paradigm) on top of the visual stimulus? E.g., where do people tend to look in an image?
Simple t-tests (or nonparametric equivalents) to see if eye position changed as a function of condition (or across groups, etc.)
Compute 'generalized variance' (or some other 2D Gaussian fit) to summarize how large the spread is of x- and y- positions
Bin the eyetracking data and use that to index into other measures (behavioral data, fMRI data, EEG data, etc.)
Analyze pupil size!?
Analyze microsaccades?
Quantify time spent looking at some particular part of the screen
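For the 'generalized variance' summary mentioned above: it is just the determinant of the 2x2 covariance matrix of the x/y gaze positions, a single scalar capturing the overall 2D spread. A minimal sketch:

```python
import numpy as np

def generalized_variance(x, y):
    """Determinant of the 2x2 covariance matrix of (x, y) gaze positions.

    A scalar summary of 2D spread; larger values = less stable fixation."""
    cov = np.cov(np.vstack([x, y]))
    return float(np.linalg.det(cov))
```

Comparing this value across conditions or groups is one simple way to quantify fixation stability differences.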
edf2asc -miss NaN eye01.edf - This output includes metadata, including events/messages sent to the Eyelink. I use it to figure out time synchronization.
edf2asc -s -miss NaN eye01.edf - This includes just the actual data samples.
Note! There is a PHYSICAL.INI file that lives on the Eyelink computer (in C:\ELCL\EXE). See Eyelink installation manual for details. It sets initial values for screen_phys_coords, screen_distance, screen_pixel_coords. We should get these values right. It appears that we can use the PTB Eyelink commands to set these on the fly.
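A minimal parser for the samples-only output of edf2asc -s -miss NaN might look like the following. This assumes the common monocular line format <time> <x> <y> <pupil>; the actual column layout varies with recording settings, so treat this as a sketch.

```python
import numpy as np

def load_asc_samples(path):
    """Parse an edf2asc -s output file into time, x, y, pupil arrays.

    Assumes monocular samples: <time> <x> <y> <pupil> per line (layout
    varies with settings). Missing samples parse to NaN because float("NaN")
    is valid in Python, so the -miss NaN flag "just works"."""
    rows = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 4:
                continue
            try:
                rows.append([float(p) for p in parts[:4]])
            except ValueError:
                continue  # skip non-sample lines (messages, events, flags)
    data = np.array(rows)
    return data[:, 0], data[:, 1], data[:, 2], data[:, 3]
```

The metadata-bearing output (without -s) would then be parsed separately to pull out the MSG lines needed for time synchronization.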