Description
The procedure begins with instructions for a parent to prepare their device, environment, and child for testing.
A calibration phase of 4 trials acts as a baseline measure: it allows looking behaviour to be observed and confirms that the child looks to the side of the screen on which an image is first presented.
A short attention-getting video of black and white objects and sounds begins the experimental phase. Two blocks of 12 trials are divided by the same video, which is also repeated at the very end of the 24 trials. Trials are not infant-initiated, as they would be in the lab, but run automatically, without pause, from start to finish.
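As a rough illustration only, the run order described above could be assembled as in the Python sketch below. The function name, trial labels, and the "attention_video" marker are hypothetical, not part of the actual Gorilla materials, where ordering is configured in the builder rather than in code.

```python
N_TRIALS = 24
BLOCK_SIZE = 12

def build_sequence(trials):
    """Return the full run order: video, block 1, video, block 2, video."""
    assert len(trials) == N_TRIALS
    blocks = [trials[i:i + BLOCK_SIZE]
              for i in range(0, N_TRIALS, BLOCK_SIZE)]
    sequence = []
    for block in blocks:
        sequence.append("attention_video")   # video before each block
        sequence.extend(block)               # 12 automatic trials
    sequence.append("attention_video")       # video repeated at the very end
    return sequence

order = build_sequence([f"trial_{i + 1}" for i in range(N_TRIALS)])
print(order[0], order[13], order[-1])  # the attention video at all three points
```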
For each trial, a smiley fixation point precedes two images displayed on the left and right of the screen: one target image and one distractor image. The two images are shown in silence for 2500 ms (the pre-naming phase); at 2500 ms, an auditory stimulus labels the target image. The two images then remain on screen for a further 2500 ms of free-looking time (the post-naming phase).
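To make the timing concrete, here is a minimal Python sketch of a single trial's timeline under the timings given above. The helper names, file names, and print-based "display" are hypothetical placeholders, not Gorilla's actual presentation mechanics.

```python
import time

PRE_NAMING_MS = 2500   # silent looking before the label
POST_NAMING_MS = 2500  # free looking after the label

def wait_ms(ms):
    time.sleep(ms / 1000)

def run_trial(target_img, distractor_img, label_audio, target_side):
    """One automatic trial: fixation -> images -> label -> free looking."""
    print("show smiley fixation point")
    left, right = ((target_img, distractor_img) if target_side == "left"
                   else (distractor_img, target_img))
    print(f"images on: left={left}, right={right}")
    wait_ms(PRE_NAMING_MS)               # pre-naming phase
    print(f"play label: {label_audio}")  # auditory stimulus at 2500 ms
    wait_ms(POST_NAMING_MS)              # post-naming phase
    print("images off")

run_trial("dog.png", "ball.png", "dog_label.mp3", target_side="left")
```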
Trials are recorded individually and require offline manual coding of eye position. We used a bespoke piece of software to divide each trial into 50 ms stills and coded gaze in each still as: left, right, on-screen but at an indeterminate location, or off-screen. For each trial, we then calculated the proportion of looking time to the target image as looks to target / (looks to target + looks to distractor), separately for the pre-naming and post-naming phases. The expectation is that a child will look longer at the picture that is labelled auditorily, indexing word recognition.
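As a concrete example of this calculation, the sketch below assumes each coded 50 ms still is stored as one of the four gaze labels; the function and the sample data are hypothetical, not the bespoke coding software itself.

```python
from collections import Counter

FRAME_MS = 50  # each coded still represents 50 ms of the trial

def proportion_to_target(codes, target_side):
    """Proportion of looking to the target for one phase of one trial.

    `codes` is a list of per-still gaze labels: "left", "right",
    "indeterminate", or "offscreen". Stills not on either image are
    excluded from both numerator and denominator.
    """
    counts = Counter(codes)
    distractor_side = "right" if target_side == "left" else "left"
    on_target = counts[target_side]
    on_distractor = counts[distractor_side]
    if on_target + on_distractor == 0:
        return None  # no usable looks in this phase
    return on_target / (on_target + on_distractor)

# Pre- and post-naming phases are coded and scored separately:
pre = ["left", "left", "right", "offscreen", "right"]
post = ["right", "right", "right", "left", "indeterminate"]
print(proportion_to_target(pre, target_side="right"))   # 0.5
print(proportion_to_target(post, target_side="right"))  # 0.75
```

A post-naming proportion that exceeds the pre-naming proportion is the signature of word recognition described above.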
Gorilla Open Materials Attribution-NonCommercial Research-Only