Designing your study


Thinking about your design #


Designing your study is a key element of ERP research. Several considerations can guide your design and help ensure that the valuable time spent acquiring, pre-processing and analysing data produces valuable (and hopefully publishable) results.

Whilst I list a number of resources here, most ERP research overlooks one important principle. Termed the Hillyard Principle by Steve Luck (see Luck, 2014), its basic premise is that ERP responses should be elicited by stimuli that are - in principle - physically identical across conditions. Many ERP studies violate this principle, mostly because it is usually far easier to create stimuli that do not adhere to it.

Suppose I want to run a study that examines how people respond to target words that are related vs unrelated to a preceding prime (see this article if you’re unsure which component this experiment might study). Below are examples of my stimuli:

|           | Prime | Target |
|-----------|-------|--------|
| Related   | dog   | cat    |
| Unrelated | apple | garage |

Here, my targets (to which I’ll measure my ERP response) are physically different across conditions. This introduces a potential confound. Instead of being able to attribute any differences in my ERPs to the related vs unrelated manipulation, it’s very possible that a host of stimulus features (word length, orthography, frequency, and so on) is actually the driving force behind any differences I find. Instead, I could improve my design as follows:

|           | Prime | Target |
|-----------|-------|--------|
| Related   | dog   | cat    |
| Unrelated | apple | cat    |

Now my ERPs are elicited by the word cat in both conditions, so the eliciting stimulus is physically identical; any difference between the conditions must arise purely from the target’s relationship to the prime.
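In practice, you would build this structure into your stimulus lists and counterbalance across participants so that each target is seen only once per person. Below is a minimal sketch of that idea; the word pairs, function name and list structure are all hypothetical illustrations, not a prescribed toolchain.

```python
# Sketch: build two counterbalanced stimulus lists in which every target
# word appears in both a related and an unrelated condition across lists,
# so the ERP-eliciting stimuli are physically identical between conditions.
# The word pairs below are hypothetical examples.

related_pairs = [("dog", "cat"), ("bread", "butter"), ("nurse", "doctor")]

def make_lists(related_pairs):
    """Return two lists (trials of (condition, prime, target)); each
    target occurs once per list, with condition counterbalanced."""
    primes = [p for p, _ in related_pairs]
    # Unrelated prime: rotate the primes by one so each target is paired
    # with a prime from a different (semantically unrelated) pair.
    unrelated_pairs = [(primes[(i + 1) % len(primes)], t)
                       for i, (_, t) in enumerate(related_pairs)]
    list_a, list_b = [], []
    for i, (rel, unrel) in enumerate(zip(related_pairs, unrelated_pairs)):
        # Alternate condition assignment across the two lists.
        if i % 2 == 0:
            list_a.append(("related",) + rel)
            list_b.append(("unrelated",) + unrel)
        else:
            list_a.append(("unrelated",) + unrel)
            list_b.append(("related",) + rel)
    return list_a, list_b

list_a, list_b = make_lists(related_pairs)
# Hillyard check: the targets (the stimuli that elicit the ERP) are
# identical across the two lists, so physical properties are matched.
assert {t for _, _, t in list_a} == {t for _, _, t in list_b}
```

The key property is the final assertion: whatever differs between conditions, it is not the physical stimulus.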

The Hillyard Principle is by no means the only thing to consider in your design, but it is a key issue that deserves more attention than it is typically afforded.

Other important literature #

General design #

Picton, T. W., Bentin, S., Berg, P., Donchin, E., Hillyard, S. A., Johnson Jr, R., … & Taylor, M. J. (2000). Guidelines for using human event‐related potentials to study cognition: Recording standards and publication criteria. Psychophysiology, 37(2), 127-152.

Optimising your paradigm for a given component #

Kappenman, E. S., Farrens, J. L., Zhang, W., Stewart, A. X., & Luck, S. J. (2021). ERP CORE: An open resource for human event-related potential research. NeuroImage, 225, 117465.

Sample size & power #

Politzer-Ahles (unpublished) ERP power analyzer.

Clayson, P. E., Carbine, K. A., Baldwin, S. A., & Larson, M. J. (2019). Methodological reporting behavior, sample sizes, and statistical power in studies of event‐related potentials: Barriers to reproducibility and replicability. Psychophysiology, 56(11), e13437.

Larson, M. J., & Carbine, K. A. (2017). Sample size calculations in human electrophysiology (EEG and ERP) studies: A systematic review and recommendations for increased rigor. International Journal of Psychophysiology, 111, 33-41.
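As a rough illustration of the kind of a priori calculation these papers recommend (not a substitute for the ERP-specific methods they discuss), here is a back-of-envelope sample-size estimate for a within-subjects amplitude difference. The effect size is a hypothetical assumption, and the normal approximation slightly underestimates n relative to an exact t-based calculation, so treat the result as a lower bound.

```python
# Normal-approximation sample size for a paired t-test on a mean
# amplitude difference. d = 0.5 is an assumed (hypothetical) effect size.
import math
from statistics import NormalDist

d = 0.5        # assumed standardized effect size (Cohen's d_z)
alpha = 0.05   # two-sided significance level
power = 0.80   # desired power

z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
z_power = NormalDist().inv_cdf(power)
n = ((z_alpha + z_power) / d) ** 2

print(math.ceil(n))  # → 32 participants, before accounting for exclusions
```

Note that published ERP effect sizes are often inflated by publication bias, so a more conservative d (and therefore a larger n) is usually wise.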

Misuse of null-hypothesis testing for nuisance effects #

Sassenhagen, J., & Alday, P. M. (2016). A common misapplication of statistical inference: Nuisance control with null-hypothesis significance tests. Brain and Language, 162, 42-45.