
Gorilla Academy: The visual speech head start improves speech perception

Visual information about speech content from the talker's mouth is often available before auditory information from the talker's voice. This experiment examines perceptual responses to words with and without this visual head start.

Participants were presented with audio or audiovisual stimuli and asked to respond by typing what they heard into a text entry box. The stimuli vary across three factors, each with two levels:

  • Type (Audio/AudioVisual)
  • Condition (Mouth-leading/Voice-leading)
  • Noise (12 dB noise/clean)
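The three two-level factors above form a fully crossed 2 × 2 × 2 design. As an illustration (not part of the original materials), the eight stimulus cells can be enumerated like this; the variable names are hypothetical:

```python
from itertools import product

# Hypothetical enumeration of the 2 x 2 x 2 factorial design described above.
types = ["Audio", "AudioVisual"]
conditions = ["Mouth-leading", "Voice-leading"]
noises = ["12 dB noise", "clean"]

# Each dict is one cell of the design: a unique combination of levels.
stimulus_cells = [
    {"type": t, "condition": c, "noise": n}
    for t, c, n in product(types, conditions, noises)
]

# 2 x 2 x 2 = 8 cells in total
print(len(stimulus_cells))
```

Each participant's typed responses can then be scored and analysed per cell of this design.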

If you want to see how I created this experiment, organised and analysed the data, you can watch my video tutorials on Gorilla Academy.

This is a replication of Karas et al. (2019).



Karas experiment

Built with Experiment

Creative Commons Attribution (CC BY)


Karas task

Built with Task Builder 1

Creative Commons Attribution (CC BY)


Generic Consent

Built with Questionnaire Builder 1

Creative Commons Attribution (CC BY)


Demographic Questions

Built with Questionnaire Builder 1

Creative Commons Attribution (CC BY)

Public

Fully open! Access by URL and searchable from the Open Materials search page

Preferred Citation: Karas et al. (2019)
https://doi.org/10.7554/eLife.48116
Published on 28 October 2020