Gorilla Academy: The visual speech head start improves speech perception

Visual information about speech content from the talker's mouth is often available before auditory information from the talker's voice. This experiment examines perceptual responses to words presented with and without this visual head start.

Participants were presented with audio or audiovisual stimuli and asked to type what they heard into a text entry box. The stimuli varied along three factors, each with two levels:

  • Type (Audio/AudioVisual)
  • Condition (Mouth-leading/Voice-leading)
  • Noise (12 dB noise/clean)
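The three two-level factors above fully cross to give eight stimulus cells. A minimal sketch of that 2 × 2 × 2 design is below; the factor names follow the bullet list, while the dictionary layout and level labels are illustrative assumptions, not the actual Gorilla spreadsheet format.

```python
from itertools import product

# Factors and levels as described in the bullet list above
# (labels are assumptions about exact spelling).
factors = {
    "Type": ["Audio", "AudioVisual"],
    "Condition": ["Mouth-leading", "Voice-leading"],
    "Noise": ["12 dB noise", "clean"],
}

# Fully cross the levels to enumerate every stimulus cell.
conditions = [dict(zip(factors, levels)) for levels in product(*factors.values())]

print(len(conditions))  # 2 x 2 x 2 = 8 cells
```

Each entry in `conditions` is one cell of the design, e.g. an audiovisual, mouth-leading word in clean audio.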

If you want to see how I created this experiment, organised and analysed the data, you can watch my video tutorials on Gorilla Academy.

This is a replication of Karas et al. (2019).


  • Karas experiment — Creative Commons Attribution (CC BY)
  • Karas task — Creative Commons Attribution (CC BY)
  • Generic Consent — Creative Commons Attribution (CC BY)
  • Demographic Questions — Creative Commons Attribution (CC BY)

Preferred Citation: Karas et al. (2019)
Published on 28 October 2020