Speech-to-Speech Synchronization test (SSS test)

The Speech-to-Speech Synchronization Test is a simple behavioral tool in which individuals listen to a rhythmic train of syllables while concurrently and continuously whispering the syllable “tah”. This task makes it possible to split the general population into two groups according to individual differences in the degree of speech auditory-motor synchronization (Assaneo et al., 2019b): some individuals spontaneously align their produced syllabic rate to the perceived one (high synchrony group), while others do not (low synchrony group). Group membership (i.e. being a high or a low synchronizer) is predictive of performance on a set of cognitive tasks as well as of brain structural and functional features (Assaneo et al., 2019b; M.F. Assaneo et al., 2019a; Kern et al., 2021).

Currently, there are two different versions of the test:

  • The Implicit Fixed Version: the external syllabic rate remains constant at 4.5 Hz, and participants are not explicitly instructed to synchronize their vocalizations to the auditory stimulus.
  • The Explicit Accelerated Version: the external syllabic rate starts at 4.3 Hz and increases in steps of 0.1 Hz every 10 seconds until it reaches 4.7 Hz. Participants are instructed to synchronize their production to the perceived rate.
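As an illustrative sketch (not part of the original materials), the accelerated schedule can be written as a simple function of elapsed time; the starting rate (4.3 Hz), step size (0.1 Hz), step duration (10 s), and ceiling (4.7 Hz) come directly from the description above.

```python
def syllabic_rate(t_seconds: float) -> float:
    """Stimulus syllabic rate (Hz) at time t for the Explicit
    Accelerated Version: starts at 4.3 Hz and rises by 0.1 Hz
    every 10 s, capping at 4.7 Hz."""
    start_hz, step_hz, step_s, max_hz = 4.3, 0.1, 10.0, 4.7
    rate = start_hz + step_hz * int(t_seconds // step_s)
    return round(min(rate, max_hz), 1)  # round to avoid float drift
```

For example, `syllabic_rate(0)` gives 4.3 Hz, `syllabic_rate(25)` gives 4.5 Hz, and any time from 40 s onward gives 4.7 Hz.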

While distinctive brain features between synchrony groups have only been assessed with the Implicit Fixed Version (Assaneo et al., 2019b, 2020; M.F. Assaneo et al., 2019a), behavioral differences have been reported with both the Implicit Fixed (Assaneo et al., 2019b, 2021, 2020) and the Explicit Accelerated version (Kern et al., 2021). Which version is the most suitable remains an open question, and the choice between the two is left to the researcher.



SSStest_explicit

The subject is instructed to complete a synchronization task. They listen to a 60-second audio comprising a rhythmic train of syllables and, for the entire minute, continuously whisper the syllable "tah" in sync with the external stimulus. In this version, the subject is explicitly instructed to whisper at the same frequency as the auditory stimulus. Participants' vocalizations are recorded and can later be analyzed using the scripts found at: https://github.com/fassaneo/SpeechSynchrony/tree/main/Analysis_matlab
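The linked analysis scripts are in MATLAB. As an illustrative sketch only (not the authors' actual pipeline), auditory-motor synchronization between the produced and perceived speech is commonly quantified as a phase-locking value between the two amplitude envelopes; in Python that could look like the following, where the toy 4.5 Hz envelopes stand in for real recordings:

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(env_produced, env_stimulus):
    """Phase-locking value between two amplitude envelopes.

    Returns a number in [0, 1]: 1 means the instantaneous phase
    difference is constant (perfect synchrony), values near 0 mean
    the phases drift independently.
    """
    # Instantaneous phase via the analytic (Hilbert) signal,
    # after removing the mean so the phase is well defined.
    ph_p = np.angle(hilbert(env_produced - np.mean(env_produced)))
    ph_s = np.angle(hilbert(env_stimulus - np.mean(env_stimulus)))
    return float(np.abs(np.mean(np.exp(1j * (ph_p - ph_s)))))

# Toy check: two 4.5 Hz envelopes sampled at 100 Hz for 60 s,
# with a constant phase lag between them (a well-synchronized talker).
fs, dur = 100, 60
t = np.arange(0, dur, 1 / fs)
stim = 1 + np.cos(2 * np.pi * 4.5 * t)
prod = 1 + np.cos(2 * np.pi * 4.5 * t + 0.3)
plv = phase_locking_value(prod, stim)  # close to 1
```

A real analysis would first extract and band-pass filter the speech envelopes around the syllabic rate; see the repository scripts for the actual procedure.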

Gorilla Open Materials Attribution-NonCommercial Research-Only




SSStest_implicit

For this version of the SSS test, subjects are instructed to perform a syllable detection task. For one minute, they are presented with a rhythmic train of syllables. Their goal is to pay attention to the audio and try to remember the syllables while continuously and repeatedly whispering the syllable "tah". Once the audio presentation ends, participants complete four two-alternative forced-choice questions, answering whether a particular syllable was presented or not (e.g. "Did you hear the syllable BAH?"). They answer on the keyboard, pressing Y for yes or N for no. Participants' vocalizations are recorded and can later be analyzed using the scripts found at: https://github.com/fassaneo/SpeechSynchrony/tree/main/Analysis_matlab

Gorilla Open Materials Attribution-NonCommercial Research-Only

Preferred Citation: Assaneo, M.F., Ripollés, P., Orpella, J., Lin, W.M., de Diego-Balaguer, R., Poeppel, D., 2019. Spontaneous synchronization to speech reveals neural mechanisms facilitating language learning. Nat. Neurosci. 22, 627–632.
https://doi.org/10.1038/s41593-019-0353-z
Kern, P., Assaneo, M.F., Endres, D., Poeppel, D., Rimmele, J.M., 2021. Preferred auditory temporal processing regimes and auditory-motor synchronization. Psychon. Bull. Rev.
https://doi.org/10.3758/s13423-021-01933-w
Assaneo, M.F., Orpella, J., Ripollés, P., Noejovich, L., López-Barroso, D., de Diego-Balaguer, R., Poeppel, D., 2020. Population-level differences in the neural substrates supporting Statistical Learning. bioRxiv 2020.07.03.187260.
https://doi.org/10.1101/2020.07.03.187260
Assaneo, M.F., Rimmele, J.M., Sanz Perl, Y., Poeppel, D., 2021. Speaking rhythmically can shape hearing. Nat. Hum. Behav. 5, 71–82.
https://doi.org/10.1038/s41562-020-00962-0
Assaneo, M.F., Rimmele, J.M., Orpella, J., Ripollés, P., de Diego-Balaguer, R., Poeppel, D., 2019. The Lateralization of Speech-Brain Coupling Is Differentially Modulated by Intrinsic Auditory and Top-Down Mechanisms. Front. Integr. Neurosci. 13.
https://doi.org/10.3389/fnint.2019.00028
Conducted at New York University and Universidad Nacional Autónoma de México
Published on 16 December 2021
Corresponding author: Ireri Gomez, Master's student
Universidad Nacional Autónoma de México