Volumes and voxels: The "raw" form of the data you get from the scanner (for most neuroscientists). However, keep in mind that to MRI physicists, "raw" data means samples in k-space and/or data from individual RF coil elements.
Segmentation: Assigning labels to different tissues (often done in volumes/voxels). This is a crucial step before creating "surfaces". Errors in segmentation will propagate to all of the surface analyses.
Surfaces:
Typically represented as a triangular mesh which has a certain discrete resolution (i.e. spacing between vertices).
Typically derived from anatomical data (that are separate from, e.g., the fMRI data).
Triangular meshes store data in terms of vertices (also called points or nodes) in the vast majority of uses (due to the prevalence of FreeSurfer).
In the vast majority of uses (due to the prevalence of FreeSurfer), the number of vertices in a triangular mesh is not equal to the number of voxels that the mesh is derived from (and, moreover, there is no easy/straightforward way of making them nicely correspond).
One can compute the initial number of vertices generated from an anatomical volume by looking at the implementation of "marching cubes" tessellation.
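To make the mesh concept concrete, here is a minimal numpy sketch (a made-up 4-vertex tetrahedron, not a real cortical mesh) showing how vertices and faces are stored, and how vertex spacing and surface area follow from them:

```python
import numpy as np

# A minimal triangular mesh: vertices (N x 3 coordinates in mm) and
# faces (M x 3 indices into the vertex array). This is the same basic
# structure that FreeSurfer and GIFTI surface files use.
vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])
faces = np.array([[0, 1, 2],
                  [0, 1, 3],
                  [0, 2, 3],
                  [1, 2, 3]])

# Mesh "resolution": average edge length (spacing between vertices)
edges = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [0, 2]]])
edge_lengths = np.linalg.norm(vertices[edges[:, 0]] - vertices[edges[:, 1]], axis=1)
print("mean vertex spacing (mm):", edge_lengths.mean())

# Total surface area: sum of triangle areas via the cross product
a = vertices[faces[:, 1]] - vertices[faces[:, 0]]
b = vertices[faces[:, 2]] - vertices[faces[:, 0]]
tri_areas = 0.5 * np.linalg.norm(np.cross(a, b), axis=1)
print("total surface area (mm^2):", tri_areas.sum())
```

Note that a real cortical mesh has hundreds of thousands of vertices, but the data layout is exactly this: one coordinate array and one index array.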
Cortical thickness: The cortical sheet actually is a surface with thickness (like a real blanket). Cortical thickness is known to vary across areas and in disease.
Mid-gray, white, pial surfaces: The cortical sheet has thickness; therefore different surfaces can be generated at different "depth levels" of the cortex. These depth levels are normalized by the cortical thickness, so they indicate the position of a surface relative to the local thickness (e.g. mid-gray is equidistant from the white and pial surfaces).
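Because FreeSurfer-style meshes keep the white and pial surfaces in 1-to-1 vertex correspondence, a surface at any normalized depth can be sketched as a vertex-wise linear blend. The coordinates below are invented for illustration:

```python
import numpy as np

# Hypothetical white and pial surfaces for 3 vertices. FreeSurfer-style
# meshes have 1-to-1 vertex correspondence, so we can interpolate
# coordinates vertex-by-vertex.
white = np.array([[0.0, 0.0, 0.0],
                  [10.0, 0.0, 0.0],
                  [0.0, 10.0, 0.0]])
pial = np.array([[0.0, 0.0, 3.0],
                 [10.0, 0.0, 2.0],
                 [0.0, 10.0, 4.0]])

def depth_surface(white, pial, frac):
    """Surface at a normalized depth: frac=0 -> white, frac=1 -> pial.
    Because frac is a fraction of the local thickness, the resulting
    surface adapts to thickness variation across the cortex."""
    return (1.0 - frac) * white + frac * pial

midgray = depth_surface(white, pial, 0.5)  # equidistant from white and pial
print(midgray)
```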
Curvature: This makes the human brain hard to visualize/analyze. (It would be nice if the human cortex was not curved.) There are several algorithms to make a computer quantify curvature (each having different advantages and disadvantages compared to the pure mathematical definition of curvature).
Curvature is just one anatomical feature and typically curvature is only useful at the macroscale for intersubject alignment. (Human brains don't really align well at the finer scale.)
Software (major ones) for working with volumes and surfaces: FreeSurfer, Connectome Workbench (HCP), FSL, freeview (part of FreeSurfer), AFNI SUMA, BrainVoyager, FSLeyes, ITK-SNAP, 3D Slicer, MRIcron, ANTs
File formats:
.niml: AFNI format
.mgz: FreeSurfer format
.nii: NIFTI format (typically used for volume), most common.
.gii: GIFTI format (typically used for surface)
grayordinates: the HCP term that is used to refer to both cortical "nodes" and subcortical voxels.
.vmr, .fmr, .vmp, .srf, .smp ... : Volume MR data (vmr), functional MR data (fmr), volume map (vmp), surface (srf), surface map (smp)... These are the most common BrainVoyager formats that can be encountered in the wild.
Linear volumetric registration: Typically refers to spatially matching two volumes. Partially respects the geometry of cortical surfaces. Also referred to as "spatial normalization" when done with regards to a template volume (e.g. MNI).
Rigid-body (6 degrees of freedom): Translate (3 DOF), rotate (3 DOF).
Rigid-body with scaling (7 degrees of freedom): Translate (3 DOF), rotate (3 DOF), global scaling (1 DOF).
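These parameterizations can be sketched as the construction of a 4x4 affine matrix. The rotation order below is one arbitrary convention (real packages differ), and the function name is just for illustration:

```python
import numpy as np

def rigid_affine(tx, ty, tz, rx, ry, rz, scale=1.0):
    """4x4 affine from 6 rigid-body parameters (translation in mm,
    rotation in radians about x/y/z) plus optional global scaling
    (the 7th DOF). Rotation order here is Rz @ Ry @ Rx (a convention;
    different software packages use different conventions)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    A = np.eye(4)
    A[:3, :3] = scale * (Rz @ Ry @ Rx)
    A[:3, 3] = [tx, ty, tz]
    return A

# Pure translation: a point just shifts (homogeneous coordinates)
A = rigid_affine(5.0, 0.0, 0.0, 0.0, 0.0, 0.0)
p = A @ np.array([1.0, 2.0, 3.0, 1.0])
print(p[:3])
```

A rigid-body transform preserves all distances; adding the scale parameter uniformly stretches or shrinks them.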
Nonlinear volumetric registration / warping: Provides more parameters to match two volumes. Commonly used to register across different people (or to atlases) or to account for geometric distortions across imaging sequences (e.g. FSL TOPUP). State-of-the-art software seems to be ANTs.
Native space: True to the subject's actual reality (i.e. not warped or put into a group space)
Scanner space: Relative to the actual spatial arrangement of the head with respect to the scanner that occurred during scanning.
Most registration programs initialize the alignment/registration process using the DICOM/NIfTI headers, which contain a 4x4 affine matrix describing how the data array (x, y, z in data array/voxel indices) is positioned with respect to the canonical axes of the scanner (x, y, z in scanner coordinates).
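For example, applying such a header affine to map voxel indices to scanner coordinates is just a matrix multiply in homogeneous coordinates (the affine below is a made-up example, not from any real header):

```python
import numpy as np

# A hypothetical NIfTI-style affine: 1 mm isotropic voxels, with the
# data-array origin placed at scanner coordinate (-90, -126, -72) mm.
affine = np.array([[1.0, 0.0, 0.0, -90.0],
                   [0.0, 1.0, 0.0, -126.0],
                   [0.0, 0.0, 1.0, -72.0],
                   [0.0, 0.0, 0.0, 1.0]])

def voxel_to_scanner(affine, ijk):
    """Map voxel indices (i, j, k) to scanner-space (x, y, z) in mm."""
    ijk1 = np.append(np.asarray(ijk, dtype=float), 1.0)  # homogeneous
    return (affine @ ijk1)[:3]

print(voxel_to_scanner(affine, (90, 126, 72)))
```

Inverting the affine gives the reverse mapping (scanner mm to voxel indices), which is exactly what is needed when sampling a volume at arbitrary spatial locations.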
Standard space: A template/atlas space like MNI, Talairach, etc...
Atlases/templates/standard spaces:
MNI: The most commonly used volumetric anatomical brain space for both linear and nonlinear registration/standardization.
Talairach: An older, nowadays less-used volumetric space based on a single human brain (a French woman). Talairach registration means piecewise (12 boxes) linear scaling, not registration of gyri/sulci.
fsaverage: A commonly used FreeSurfer "group template surface". This template is generated by nonlinear curvature-based surface registration of multiple brains.
Often done in a multi-scale fashion (minimizing curvature smoothed at multiple scales).
It "works" for the coarse scale (very large sulci/gyri), but obviously has accuracy limits at the fine scale.
Motion correction: Registration of multiple 3D volumes one by one. E.g., in fMRI, each volume is volumetrically registered (and typically with rigid-body) to a reference volume.
Surface/curvature/cortex-based registration: This does respect surface geometry, but, of course, assumes that surfaces are accurate enough (requires good segmentation and good surfaces).
Hyperalignment: Some people propose abandoning spatial constraints (i.e. giving up on macroanatomy) and instead using a more abstract relationship, such as the stimulus (or whatever) selectivity shared between two different things (e.g. ROIs).
Anatomical alignment vs. functional alignment: distinct brain areas may or may not align well even after you anatomically align them.
Gradient: Sometimes used to mean just "gradual", but it has many other technical definitions. For example, you might refer to gradients across the cortical surface. Or you might actually mean, in a mathematical sense, the derivative of some quantity.
Interpolation: Taking a set of data values defined in one gridding/space and transforming them to try to get a representation of that data in a different gridding/space
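A 1D sketch of this regridding idea, contrasting linear and nearest-neighbor interpolation (the grids and values here are hypothetical):

```python
import numpy as np

# Data sampled on a coarse grid (think 2 mm spacing) ...
coarse_x = np.array([0.0, 2.0, 4.0, 6.0])
coarse_vals = np.array([1.0, 3.0, 2.0, 5.0])

# ... regridded onto a finer grid (think 1 mm spacing)
fine_x = np.arange(0.0, 6.1, 1.0)

# Linear interpolation: new values are weighted averages of the two
# nearest old samples (this induces some smoothing).
linear = np.interp(fine_x, coarse_x, coarse_vals)

# Nearest-neighbor: each new point just copies the closest old value
# (no smoothing, but blocky duplication of values).
idx = np.abs(fine_x[:, None] - coarse_x[None, :]).argmin(axis=1)
nearest = coarse_vals[idx]

print(linear)
print(nearest)
```

The same trade-off (averaging vs. duplication) carries over to 3D volumes and to volume-to-surface mapping; cubic interpolation is a higher-order variant of the averaging approach.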
Smoothing / filtering: Sometimes used to "dampen" the effect of intersubject variability.
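A toy illustration of why smoothing dampens intersubject variability: two simulated "subjects" with slightly offset activation peaks become more similar after Gaussian smoothing. All signals and parameters here are synthetic:

```python
import numpy as np

# Two "subjects" with similar signals but slightly offset peaks
# (a caricature of intersubject spatial variability).
x = np.arange(100)
subj1 = np.exp(-(x - 48) ** 2 / 20.0)
subj2 = np.exp(-(x - 52) ** 2 / 20.0)

# A Gaussian smoothing kernel (sigma = 3 samples), normalized to sum to 1
k = np.exp(-np.arange(-10, 11) ** 2 / (2 * 3.0 ** 2))
k /= k.sum()

smooth1 = np.convolve(subj1, k, mode="same")
smooth2 = np.convolve(subj2, k, mode="same")

def sim(a, b):
    """Cosine similarity between two signals."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# After smoothing, the two subjects overlap more.
print("similarity before:", sim(subj1, subj2))
print("similarity after: ", sim(smooth1, smooth2))
```

The cost, of course, is resolution: the smoothed signals are blurrier than the originals.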
fsLR, fsaverage_sym - some variants of fsaverage in which you deliberately impose hemispheric symmetry. Might be convenient when you intend to average hemispheres together.
Nature of the fsaverage (or curvature-based, or cortex-based (CBA)) alignment
This is the fsaverage surface (sphere version).
This is a single subject's fsaverage.reg surface (which is supposedly registered to the fsaverage sphere).
This shows both. The idea is that regridding/interpolating data from a subject onto fsaverage then allows comparison across subjects.
This is an example "real" (non-inflated) cortical surface (this is a white-matter surface). Notice the variation in triangle density and size.
Ways of using group spaces
Group spaces can be useful.
You can use a group space to average data across people (at the expense of some inaccuracy at the fine scale).
You can define rough ROIs in a group space and then project to your individual subjects.
You can use the group space just for visualization purposes. Like, at least the spatial layout of your different subjects is roughly comparable.
You can report results from your subjects in the group space so that people have some chance of relating it to their subjects.
You can take advantage of 'atlases' prepared in a group space.
(Note that some software tools use group spaces under the hood as spatial priors.)
Pros and cons of Volume-based Processing
Here we are talking about, e.g., pre-processing each fMRI subject (say, acquired at 2mm isotropic resolution) to fix head motion (and maybe slice timing) and then what you get is just volumes over time. And then all analyses just work with voxels in those volumes (2mm voxels).
Pro: It's easy to think about. It's simple. It is closer to how the data are actually acquired at the scanner.
Pro: One voxel always means the same mm extent.
Pro: Analyzing data from the whole volume is useful because you can use ventricles and/or white matter to serve as a 'control' for the voxels you care about.
Pro: Staring at slices can reveal lots of weird MR image artifact issues.
Pro: It's easier to say "voxel" than "vertex".
Pro: Visualization is more straightforward. You just show images of slices, where pixels are voxels, and you just color the pixels.
Pro or con: Data size (and CPU requirements) may be smaller (or larger), but it really depends on how you have set things up...
Pro: Volumes are true to 3D anatomy, e.g., up is up, down is down.
Con: Voxels in and of themselves don't actually tell you anything about the true topology of the cortical surface.
Con: Visualization of cortical organization is basically impossible to see by eye in a volumetric format.
Con: Looking at 100s of slices is really painful.
Con: If you live only in volumes, you can't take advantage of the "fsaverage" group concept.
Pros and cons of Surface-based processing
Pro: Visualizing surfaces is critical for understanding cortical organization. You can see your entire dataset in one compact picture (assuming you primarily care about cortex). Arguably, the main point of surface processing is to help us UNDERSTAND the complicated spatial organization of the brain.
Pro: Allows you to exploit surface-based spaces (e.g. fsaverage)
Pro: You can do "smart" smoothing that respects the topology of the cortical surface. (For example, if you volumetrically smooth, you might inadvertently smooth kissing gyri.)
Pro: You can generate "sexy/pretty" figures. (But is that a good reason?)
Con: It complicates the concepts, it involves extra processing (CPU time, pipeline generation), it may "fail" (surface quality might be horrible).
Con: Visualization of meshes is not as straightforward as for voxels; see below.
Data defined on surfaces are quite tricky to render.
This is an example of "nearest" or Voronoi-style visualization.
Con: You have to think about and make choices about how to "transfer/interpolate/map" volumetric data onto surface vertices, including WHEN in your processing to do it (e.g. in pre-processing, or at the very end only for making pretty pictures).
In general, any transformation of your data is not ideal... you generally induce smoothing and resolution loss.
There are different types of interpolation, e.g. nearest, linear, cubic, which to use?
Some approaches try to take into account the thickness of the cortex (but these approaches can be tricky/finicky/inaccurate).
The accuracy of results will depend, obviously, on quality of surfaces and the quality of the co-registration between fMRI data and the anatomy/surfaces.
Certainly, depending on your fMRI resolution (and/or any spatial smoothing you performed), the step in which you put the data on the surface might be creating really weird partial-voluming issues.
Con: If all your data are vertex-oriented, and that's all you look at, it's hard to know how well the surface mapping has been done, and you lose the "control comparisons" of non-gray-matter voxels.
Con: For the untrained observer, it may be very hard to get "used to" looking at various inflated and/or flattened views of surfaces.
Con: Surface data are (somewhat annoyingly) stored as vectors of vertex values, which often require a different file format, which can cause headaches.
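The "smart" surface-based smoothing mentioned in the pros above can be sketched as iterative averaging over mesh neighbors; because neighborhoods are defined by mesh edges, the smoothing stays on the cortical sheet rather than jumping across kissing gyri. The tiny mesh below is invented:

```python
import numpy as np

# Toy mesh: a strip of 3 triangles over 5 vertices, with a data "spike"
# at vertex 2 that we will smooth along the mesh.
faces = np.array([[0, 1, 2], [1, 2, 3], [2, 3, 4]])
n_vertices = 5
values = np.array([0.0, 0.0, 10.0, 0.0, 0.0])

# Build neighbor sets from the triangle edges
neighbors = [set() for _ in range(n_vertices)]
for i, j, k in faces:
    neighbors[i] |= {j, k}
    neighbors[j] |= {i, k}
    neighbors[k] |= {i, j}

def smooth_once(values, neighbors):
    """One pass of smoothing: replace each vertex's value with the mean
    of itself and its mesh neighbors (neighborhoods follow the surface
    topology, not 3D Euclidean proximity)."""
    out = values.copy()
    for v, nbrs in enumerate(neighbors):
        out[v] = np.mean([values[v]] + [values[n] for n in nbrs])
    return out

smoothed = smooth_once(values, neighbors)
print(smoothed)
```

Repeating the pass increases the effective smoothing extent; real implementations typically weight neighbors by edge length or area rather than averaging uniformly.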
FAQ / tricky issues
Volumes vs. surfaces: the two approaches are very different. Blanket statements are hard to make; you have to carefully consider all the details. Just because someone uses surfaces (or volumes) doesn't mean that the results are good (or bad).
fsaverage/MNI. Bear in mind the distinction between the space (or coordinate system) vs. the data vs. the surface (triangulated) mesh vs. the "subject" vs. "atlases or templates" in that space.
"Why isn't there an MNI surface?" MNI is a volumetric space and typically registration approaches for volumetric data doesn't really respect sulci/gyri structure. So, one could make an "MNI surface" but it's a bit rough/coarse/approximate. We know that if you start with surface parameterizations and average people with respect to the surface, the results are substantially better.
Counting voxels, counting vertices. Typically, the number of vertices for a given area (a typical 1.0-mm T1 fed to FreeSurfer gives you vertices with about ~1 mm spacing) is going to substantially exceed the number of voxels (e.g. for 3-mm or 2-mm fMRI data). (But certainly the underlying issues are a lot more complex than that; marching-cubes algorithm, etc.)
But the real question here is: why do you want to count voxels or vertices?
If the goal is to quantify cortical surface area, then certainly that is a valid issue and there are accurate ways to do that (as opposed to simplistic "counting")
Note that depending on the nature of the surface mesh, you may get behavior like: a given voxel doesn't actually contribute to any vertex, and multiple vertices inherit data from the same voxel.
Certainly, if volume data gets put onto a surface, you may have "oversampled" the volume data. This isn't necessarily a problem, but just be careful of what you do and how you interpret what you are doing.
Upsampling isn't intrinsically "bad". Some people worry that vertices are not statistically independent. But note that your original voxels are not independent either.
Note that FreeSurfer's surfaces (native subject surfaces e.g. lh.white, lh.pial, lh.inflated) have exactly the same number of vertices and the theory is that the vertices are in 1-to-1 correspondence with each other. Like, if you think of the cortex as columns, each given vertex has its corresponding vertex "along the column".
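A sketch of the vertex/voxel bookkeeping described above: when finely spaced vertices sample a coarse (e.g. 3 mm) volume with nearest-neighbor mapping, multiple vertices inherit the same voxel (and some voxels contribute to no vertex). All coordinates and the affine below are invented for illustration:

```python
import numpy as np

# A hypothetical 3 mm functional volume (values are just voxel indices
# flattened, so we can see which voxel each vertex inherited).
vol = np.arange(4 * 4 * 4, dtype=float).reshape(4, 4, 4)
affine = np.diag([3.0, 3.0, 3.0, 1.0])  # 3 mm voxels, origin at (0, 0, 0)
inv_affine = np.linalg.inv(affine)

# Surface vertices at roughly 1 mm spacing (scanner-space mm)
vertices = np.array([[0.2, 0.1, 0.0],
                     [1.1, 0.0, 0.3],
                     [1.4, 0.9, 0.1],
                     [4.6, 0.2, 0.0]])

# Nearest-neighbor mapping: convert each vertex to voxel indices, round
ijk = np.round(
    (inv_affine @ np.c_[vertices, np.ones(len(vertices))].T)[:3].T
).astype(int)
samples = vol[ijk[:, 0], ijk[:, 1], ijk[:, 2]]
print(ijk)      # the first three vertices all round to the same voxel
print(samples)  # ... so they inherit the same value ("oversampling")
```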
Surface accuracy? Obviously, the value of surface-based analysis is limited by the accuracy of a given software/algorithm in reconstructing the surface.
Quality may vary. Good to check quality (with your eyeballs). Accuracy will depend on the quality of the segmentation. It also depends on whether you manually edit or just do it fully automated. And there are different algorithms/methods for creating surfaces, and their results vary even given the exact same segmentation.
Be very careful about expending effort on manual corrections, since whether or not it will make things better will depend on the software...
Note that the resolution of the segmentation (e.g. 1 mm or is it higher resolution?) is also going to place limits on the surface precision/accuracy.
Quality is complicated. It includes both topological-type metrics (are there weird defects in your surface?) as well as accuracy in following the gray matter. Both are needed to have a good surface.
How much you care about surface quality really depends on spatial scale of what you care about (e.g. one voxel vs. a brain area vs. group-average data analysis approaches).
Example of some questionable surface quality issues
How can you equate the sizes of two ROIs (if one wanted to do that)?
Two ROIs can have the same surface area but a different number of vertices.
Two ROIs can have the same number of vertices but different surface areas.
One approach is to lean on fsaverage... but there are lots of pitfalls/details-to-think-about there too.
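A sketch of the surface-area vs. vertex-count distinction: one common heuristic (though not the only one) assigns each vertex one third of the area of each adjacent triangle; the toy mesh below then yields two ROIs with equal vertex counts but unequal areas:

```python
import numpy as np

def triangle_areas(vertices, faces):
    """Area of each triangle via the cross product."""
    a = vertices[faces[:, 1]] - vertices[faces[:, 0]]
    b = vertices[faces[:, 2]] - vertices[faces[:, 0]]
    return 0.5 * np.linalg.norm(np.cross(a, b), axis=1)

def vertex_areas(vertices, faces):
    """Assign each vertex one third of each adjacent triangle's area
    (so the per-vertex areas sum to the total surface area)."""
    tri = triangle_areas(vertices, faces)
    out = np.zeros(len(vertices))
    for f, area in zip(faces, tri):
        out[f] += area / 3.0
    return out

# Toy planar patch of 4 triangles with deliberately non-uniform spacing
vertices = np.array([[0, 0, 0], [1, 0, 0], [3, 0, 0],
                     [0, 1, 0], [1, 1, 0], [3, 1, 0]], dtype=float)
faces = np.array([[0, 1, 4], [0, 4, 3], [1, 2, 5], [1, 5, 4]])

va = vertex_areas(vertices, faces)
roi_a = [0, 3]   # two ROIs with the same number of vertices ...
roi_b = [2, 5]
print(va[roi_a].sum(), va[roi_b].sum())  # ... but different surface areas
```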
Searchlight issues. This is tricky to do accurately for surfaces. The most accurate approach is to quantify exactly the native surface anatomy of each given subject. The most convenient approach is to take everyone to, say, fsaverage, and then do some heuristic approach on the fsaverage sphere. Or, alternatively, use each subject's FreeSurfer sphere surface and just "cut off" the top of the sphere (centered on the vertex of interest). See https://www.dropbox.com/sh/1betxbakqkzv6hh/AAAt68q7u6QinivnAFjDoCU7a?dl=0 for some more information and helpful code.
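The "cut off the top of the sphere" heuristic can be sketched as selecting all vertices within an angular radius of a center vertex on the sphere surface. The random points below stand in for a real FreeSurfer sphere; note that angular distance on the sphere only approximates geodesic distance on the folded cortex, because the sphere is metrically distorted:

```python
import numpy as np

# Toy stand-in for a subject's FreeSurfer sphere: random unit vectors
# (a real lh.sphere has on the order of 100k+ vertices).
rng = np.random.default_rng(0)
v = rng.normal(size=(1000, 3))
sphere = v / np.linalg.norm(v, axis=1, keepdims=True)

def sphere_searchlight(sphere, center_idx, max_angle_deg):
    """Indices of vertices within an angular radius of the center vertex,
    i.e. a spherical cap centered on that vertex. For unit vectors, the
    angle between vertices is recovered from their dot product."""
    cosines = sphere @ sphere[center_idx]
    return np.nonzero(cosines >= np.cos(np.radians(max_angle_deg)))[0]

cap = sphere_searchlight(sphere, 0, 10.0)
print(len(cap), "vertices in the searchlight")
```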
Further reading
[On surface rendering] Gao, J. S., Huth, A. G., Lescroart, M. D., & Gallant, J. L. (2015). Pycortex: an interactive surface visualizer for fMRI. Frontiers in Neuroinformatics, 9(September), 1–12. https://doi.org/10.3389/fninf.2015.00023
[fsaverage] Fischl, B., Sereno, M. I., Tootell, R. B. H., & Dale, A. M. (1999). High-resolution intersubject averaging and a coordinate system for the cortical surface. Human Brain Mapping, 8(4), 272–284. https://doi.org/10.1002/(SICI)1097-0193(1999)8:4<272::AID-HBM10>3.0.CO;2-4
[fsaverage] Fischl, B., Sereno, M. I., & Dale, A. M. (1999). Cortical surface-based analysis: II. Inflation, flattening, and a surface-based coordinate system. NeuroImage, 9(2), 195–207. https://doi.org/10.1006/nimg.
[Volume vs surface, see Fig. 1 here] Oosterhof, N. N., Wiestler, T., Downing, P. E., & Diedrichsen, J. (2011). A comparison of volume-based and surface-based multi-voxel pattern analysis. NeuroImage, 56(2), 593–600. https://doi.org/10.1016/j.neuroimage.2010.04.270
[Fine-scale surface, processing, visualization, depth considerations] Kay, K., Jamison, K. W., Vizioli, L., Zhang, R., Margalit, E., & Ugurbil, K. (2019). A critical assessment of data quality and venous effects in sub-millimeter fMRI. NeuroImage. https://doi.org/10.1016/j.neuroimage.2019.02.006