
Intermodal Preferential Looking Paradigm (IPL)- Word Recognition in 2-year-olds

This online adaptation of the Intermodal Preferential Looking Task was developed to see if in-lab findings in very young children could be replicated using an online modality.

To see the full experimental procedure from start to finish, use the link below named: FULL EXPERIMENT_Intermodal Preferential Looking Paradigm (IPL) Experiment- Word Recognition in 2-year-olds.

All the individual components to this experiment are also listed below.

To replicate this study, you will need to request access to the webcam recording feature in Gorilla, since it is in beta and not a default option.

This study has since been developed into a semantic priming study and used with 30-month-olds and 36-month-olds. Link coming soon.



Audio set-up (IPL)

Built with Questionnaire Builder 1

This is a short set of questions to determine whether the participant's device audio is suitable for the IPL study.

Gorilla Open Materials Attribution-NonCommercial Research-Only


Consent Form (IPL)

Built with Questionnaire Builder 1

This is the informed consent form for parents/carers to complete before beginning the experiment. It explains why the research is being done, what the experiment involves, how data will be handled, and how a parent/carer can opt out.



Debrief (IPL)

Built with Questionnaire Builder 1

This screen appears at the end of the experimental procedure. Parents/carers can leave comments and express a preference for how they are reimbursed for their participation.



Demographic- Parent A (IPL)

Built with Questionnaire Builder 1

This is a demographic questionnaire which relates to the parent/carer who is doing the IPL study with the child. It covers questions such as the relationship to the child, education, and language use.



Demographic- Parent B (IPL)

Built with Questionnaire Builder 1

This is a demographic questionnaire which relates to any other parent/carer in the child's life. It covers questions such as the relationship to the child, education, and language use.



Experiment overview screen (IPL)

Built with Task Builder 1

This is the landing page for the experiment, which outlines the eligibility criteria for the infant IPL word recognition task in 2-year-olds.



Ineligible End Screen (IPL)

Built with Task Builder 1

This is the end screen shown when a parent/carer's answers make them ineligible for the infant IPL word recognition task in 2-year-olds. For example, indicating that their child is bilingual would make them ineligible to continue.



Ineligible device set-up (IPL)

Built with Task Builder 1

This is the end screen shown when a parent/carer's device information makes them ineligible for the infant IPL word recognition task in 2-year-olds. For example, if they indicate that the audio did not play, they cannot continue.



Infant Intermodal Preferential Looking Task (IPL: Word recognition_List1)

Built with Task Builder 1

The procedure begins with instructions for a parent/carer to prepare their device, environment, and child for testing. There is an audio test task so the parent/carer can see what the webcam will record.

A calibration phase of 4 trials acts as a baseline measure to observe looking behaviour and ensure the child looks to the side of the screen that the image is first presented on.

A short attention-getting video of black and white objects and sounds begins the experimental phase. Two blocks of 12 trials are separated by the same video, which is also repeated at the very end of the 24 trials. Trials are not infant-initiated as they would be in the lab, but run automatically, without pause, from start to finish.

For each trial, a smiley fixation point precedes 2 images displayed on the left and right sides of the screen: one target image and one distractor image. The 2 images are displayed for 2500ms (pre-naming phase); at 2500ms, an auditory stimulus labels the target image. The 2 images then remain on screen for a further 2500ms of free-looking time (post-naming phase).

Trials are recorded individually and require offline manual coding of eye position. We used bespoke software to divide each trial into 50ms stills and coded gaze in each still as: left, right, on-screen but at an indeterminate location, or off-screen. For each trial, we then calculated the proportion of looking time to the target image as: looks to target / (looks to target + looks to distractor). This was calculated separately for the pre-naming and post-naming phases. The expectation is that a child will look longer at the picture that is labelled auditorily, indexing word recognition.
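The gaze-coding calculation described above can be sketched in Python. This is an illustrative sketch only: the function names and the frame-code labels ("L", "R", "on", "off") are assumptions for the example, not the bespoke software used in the study. It assumes each 5000ms trial yields 100 stills of 50ms, with the first 50 forming the pre-naming phase and the last 50 the post-naming phase.

```python
# Illustrative sketch of the offline gaze-coding analysis (not the study's software).
# Each 50ms still is coded as one of: "L" (left), "R" (right),
# "on" (on-screen, indeterminate location), "off" (off-screen).

FRAMES_PER_PHASE = 2500 // 50  # each 2500ms phase = 50 stills of 50ms

def proportion_to_target(frames, target_side):
    """Proportion of looking time to the target for one phase:
    looks to target / (looks to target + looks to distractor)."""
    distractor_side = "R" if target_side == "L" else "L"
    target = frames.count(target_side)
    distractor = frames.count(distractor_side)
    if target + distractor == 0:
        return None  # no codable left/right looks in this phase
    return target / (target + distractor)

def split_phases(trial_frames):
    """Split one trial's 100 frame codes into pre- and post-naming phases:
    the first 50 stills (0-2500ms) and the last 50 stills (2500-5000ms)."""
    pre = trial_frames[:FRAMES_PER_PHASE]
    post = trial_frames[-FRAMES_PER_PHASE:]
    return pre, post
```

A child who recognises the word should show a higher proportion in the post-naming phase than in the pre-naming phase; frames coded "on" or "off" are simply excluded from the denominator.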



Infant Intermodal Preferential Looking Task (IPL: Word recognition_List2)

Built with Task Builder 1

Same as Infant Intermodal Preferential Looking Task (IPL: Word recognition_List1) using List order 2



Infant Intermodal Preferential Looking Task (IPL: Word recognition_List3)

Built with Task Builder 1

Same as Infant Intermodal Preferential Looking Task (IPL: Word recognition_List1) using List order 3



Infant Intermodal Preferential Looking Task (IPL: Word recognition_List4)

Built with Task Builder 1

Same as Infant Intermodal Preferential Looking Task (IPL: Word recognition_List1) using List order 4



FULL EXPERIMENT_Intermodal Preferential Looking Paradigm (IPL) Experiment- Word Recognition in 2-year-olds

Built with Experiment

This is the complete experimental procedure that was used for the infant IPL word recognition study in 2-year-olds, including the demographic questionnaires, debrief, etc.

To use this procedure without the questionnaires, see the experiment NO QUESTIONNAIRES_Intermodal Preferential Looking Paradigm (IPL) Experiment- Word Recognition in 2-year-olds



NO QUESTIONNAIRES_Intermodal Preferential Looking Paradigm (IPL) Experiment- Word Recognition in 2-year-olds

Built with Experiment

This version of the experiment can be used if the parent/carer does not need to complete the demographic questionnaires, for example because they started the process but encountered a technical error at the point of starting the experiment. In that scenario, the questionnaire data will already have been saved; to avoid having the parent/carer repeat that part of the procedure, this link gives them access to the experiment only.



Participant info (IPL)

Built with Questionnaire Builder 1

This series of questions collects information from the parent/carer about the child doing the infant IPL word recognition task in 2-year-olds. It covers questions such as age, gender, siblings, languages spoken etc.



Infant Intermodal Preferential Looking Task (IPL, Word recognition, List1-4)

Built with Task Builder 1

Same as Infant Intermodal Preferential Looking Task (IPL: Word recognition_List1); this version covers List orders 1-4.



Screen audio recording (IPL)

Built with Task Builder 1

This is a webcam screening task for the infant IPL word recognition task in 2-year-olds. This step was added so that the Gorilla pop-up asking for permission to record appears, and parental consent is given, before the experimental procedure begins.



Screening (IPL)

Built with Questionnaire Builder 1

This series of questions screens for eligibility for the infant IPL word recognition task in 2-year-olds. It asks about age, languages used, and any developmental or language delay, as well as whether the child was born prematurely.



Target Word Checklist (IPL)

Built with Questionnaire Builder 1

This is a list of the words used in the infant IPL word recognition task in 2-year-olds. It is presented in the same format as the CDI assessment, to record whether the child: understands the word, understands and says it, or does not understand it.


Public

Fully open! Access by URL and searchable from the Open Materials search page

Preferred Citation: Nguyen, D.*, Fitzpatrick, N.*, & Floccia, C. (2023). Adapting language development research paradigms to online testing: Data from preferential looking, word learning and vocabulary assessment in toddlers. Journal of Child Language.
*Joint first authors. Paper under review.

Conducted at University of Plymouth, Plymouth, UK
Plymouth BabyLab
Author: Nadine Fitzpatrick
nadine.fitzpatrick@plymouth.ac.uk
Published on 07 June 2023