Single-subject data processing usually involves several preprocessing steps to reduce the influence of non-experimental factors on the data. These steps typically include correcting for differences in slice acquisition time within each volume (TR), realigning each volume over the time course to correct for motion, coregistering the EPI (functional) images with the high-resolution anatomical (structural) images, normalizing the data to a standard template, and smoothing the data with a Gaussian kernel of a given FWHM.
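In AFNI, each of these preprocessing steps corresponds to its own program. The sketch below lists one representative program per step; the dataset names (epi+orig, anat+orig, and the various prefixes) are placeholders, and each command is printed rather than executed so the sketch runs even without AFNI installed.

```shell
#!/bin/sh
# Print (rather than run) one representative AFNI command per
# preprocessing step; dataset names are placeholders.
run() { echo "+ $*"; }

{
run 3dTshift -tzero 0 -prefix epi_ts epi+orig                       # slice-timing correction
run 3dvolreg -base 'epi_ts+orig[0]' -prefix epi_vr epi_ts+orig      # motion correction (realign to first TR)
run align_epi_anat.py -anat anat+orig -epi epi_vr+orig -epi_base 0  # EPI/anatomical coregistration
run @auto_tlrc -base TT_N27+tlrc -input anat+orig                   # normalize anatomical to template
run 3dmerge -1blur_fwhm 6.0 -doall -prefix epi_blur epi_vr+orig     # smooth with a 6 mm FWHM kernel
} | tee pipeline.txt
```

In practice you rarely call these programs by hand; afni_proc.py strings them together (with consistent prefixes and bookkeeping) for you, as described below.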
After these steps, fMRI data acquired during a functional task are fit with a linear regression model representing the onsets and durations of each condition. It is this last step that finally generates the “pretty pictures” of activation on the brain.
AFNI has both a Graphical User Interface (GUI) and a Command-Line Interface (CLI) for performing all of these single-subject processing steps. The GUI option is uber_subject.py, which requires PyQt to be installed, whereas the CLI option is afni_proc.py, which should run on most computers with a fresh install of AFNI.
These instructions are specific to Mac OS X 10.7 or later.
I've launched uber_subject.py and specified information about my files and design. In particular, I've kept the default processing blocks (time shift, coregister, normalize, motion correct, smooth, mask, and regress) and specified an anatomical data file (MPRAGE) and two functional runs. I've written out four stimulus timing files, one per condition. Onsets are specified in SECONDS, and within each file there is one row for each functional run.
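To make the timing-file format concrete, here is a minimal example of what one condition's file might look like. The onset values are invented for illustration; the only structural requirements are onsets in seconds and one row per run (two rows here, matching the two functional runs).

```shell
#!/bin/sh
# Write an example AFNI stimulus timing file for one condition:
# onsets in seconds, one row per functional run (onsets invented).
cat > times-afni_cond1.txt <<'EOF'
10.0 40.0 90.0 150.0
20.0 60.0 110.0 170.0
EOF
wc -l < times-afni_cond1.txt   # 2 rows = 2 runs
```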
Then it's on to specifying the options. I'll set an outlier limit of 10% (0.1), meaning that if more than 10% of the voxels in a TR are outliers, that TR is censored. I'll allow two CPUs to be used for processing and fit the data with both a standard regression model and a REML estimator. The EPI data will be coregistered to the anatomical using the LPC cost function and normalized to the TT_N27 (Colin 27) template. I'll set some contrasts of interest and remove the first 6 TRs of each run to allow for scanner warmup. Finally, the data will be blurred with a 6 mm FWHM kernel and TRs censored when the per-TR motion measure exceeds 0.3 (roughly 0.3 mm or 0.3 degrees).
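The outlier-censoring rule is easy to illustrate outside of AFNI. Given the fraction of outlier voxels at each TR (the values below are invented), a censor file marks each TR with 1 (keep) or 0 (censor) depending on whether the fraction exceeds the 0.1 limit — the same logic afni_proc.py applies with -regress_censor_outliers 0.1.

```shell
#!/bin/sh
# Toy illustration of outlier censoring at the 0.1 limit: the
# per-TR outlier fractions are invented; output is an AFNI-style
# censor file with 1 = keep, 0 = censor.
printf '%s\n' 0.02 0.05 0.23 0.01 0.14 > outcount.1D
awk '{ if ($1 > 0.1) print 0; else print 1 }' outcount.1D > censor.1D
cat censor.1D   # TRs 3 and 5 are censored
```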
Whether you use uber_subject.py or specify your analysis directly to afni_proc.py, you will end up with something like what is printed below. This syntax (generated by uber_subject.py) does all of the things described above, with the advantage that you could have written it at the command line and scripted it. It can be useful to use uber_subject.py to set up your initial options and then modify the resulting afni_proc.py command as you see fit. There are a number of options in afni_proc.py that are not (yet) available in uber_subject.py.
set top_dir  = demo
set anat_dir = $top_dir/anat
set epi_dir  = $top_dir/func
set stim_dir = $top_dir/stim_times

# set subject and group identifiers
set subj     = Subject1
set group_id = Controls

# run afni_proc.py to create a single subject processing script
afni_proc.py -subj_id $subj \
    -script proc.$subj -scr_overwrite \
    -blocks tshift align tlrc volreg blur mask scale regress \
    -copy_anat $anat_dir/T1MEMPRAGEs021a1001.nii.gz \
    -tcat_remove_first_trs 0 \
    -dsets \
        $epi_dir/fMRIFastLocalizer1s004a001.nii.gz \
        $epi_dir/fMRIFastLocalizer2s006a001.nii.gz \
    -align_opts_aea -giant_move \
    -volreg_align_to first \
    -volreg_align_e2a \
    -volreg_tlrc_warp \
    -blur_size 6.0 \
    -regress_stim_times \
        $stim_dir/times-afni_cond1.txt \
        $stim_dir/times-afni_cond2.txt \
        $stim_dir/times-afni_cond3.txt \
        $stim_dir/times-afni_cond4.txt \
    -regress_stim_labels \
        Cond1 Cond2 Cond3 Cond4 \
    -regress_basis 'GAM' \
    -regress_censor_motion 0.3 \
    -regress_censor_outliers 0.1 \
    -regress_opts_3dD \
        -jobs 2 \
        -gltsym 'SYM: Cond1 -Cond2' -glt_label 1 Cond1-Cond2 \
        -gltsym 'SYM: Cond1 -Cond3' -glt_label 2 Cond1-Cond3 \
        -gltsym 'SYM: Cond2 -Cond3' -glt_label 3 Cond2-Cond3 \
        -gltsym 'SYM: 0.333*Cond1 +0.333*Cond2 +0.333*Cond3' -glt_label 4 mean.CCC \
        -gltsym 'SYM: Cond1 -0.5*Cond2 -0.5*Cond3' -glt_label 5 C-CC \
    -regress_reml_exec \
    -regress_make_ideal_sum sum_ideal.1D \
    -regress_est_blur_epits \
    -regress_est_blur_errts
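Note that afni_proc.py writes the processing script (proc.$subj, i.e. proc.Subject1 here) but does not execute it unless asked. The conventional AFNI way to run the generated tcsh script while keeping a log is shown below; the command is printed rather than executed here, since proc.Subject1 only exists after the afni_proc.py call above has been run.

```shell
#!/bin/sh
# Print (not run) the standard execution line for the generated
# proc script; it is a tcsh script, and tee captures a log.
echo "tcsh -xef proc.Subject1 2>&1 | tee output.proc.Subject1"
```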
uber_subject.py/afni_proc.py will copy all of the input files into a new results directory and then run processing on those copies. Should you want to change options and re-run, you can simply delete that results folder and start again without worrying about losing your original files.
Inside the results folder is a series of scripts that you should run. @ss_review_driver is likely the one to start with. Launching this script will take you through a series of steps to check your data: it begins with the censored outliers and motion, checks registration accuracy, and even pulls up activation maps for inspection. I highly recommend keeping a log of this information in a lab notebook or database for later reference!