From the 1st to the 3rd of July, lab members travelled to Philadelphia, USA, to attend the International Congress of Infant Studies. Many of them gave excellent presentations in the poster and talk sessions. Here are the titles and abstracts of the presentations:
Pedagogical cues and action complexity affect transmission of information in two-year-old children
Marina Bazhydai¹, Priya Silverstein¹, Gert Westermann¹, Eugenio Parise¹
¹Lancaster University
It has been argued that both pedagogical communication (using explicit teaching-oriented verbal cues, direct eye contact, and child-directed speech; Csibra & Gergely,
2009) and intentional but non-pedagogical communication allow for learning to occur (e.g., Gopnik & Schulz, 2004). While receptivity to information presented using both intentional and pedagogical cues has been studied extensively, children’s active transmission of information following these cues is understudied. For example, Vredenburgh, Kushnir and Casasola (2015) showed that 2-year-olds are more likely to demonstrate an action to an adult after being shown it in a pedagogical than in an intentional but non-pedagogical context, despite having learnt both actions equally well.
In the current study we aimed to replicate this finding and to extend it by investigating whether manipulating action complexity, as well as pedagogical cues, when demonstrating an action selectively affects the likelihood of that action being shown to an ignorant adult.

Exp. 1. In a pre-test phase, 24-month-old children (N = 20) interacted with two unfamiliar adults who demonstrated two actions leading to a comparable outcome. One demonstrator showed an action in an intentional but non-pedagogical manner, while the other showed another action in a pedagogical manner, using explicit verbal cues ("This is how you do it!"), direct eye contact, and child-directed speech. At test, the children were encouraged to show a familiar adult, who had not been present during the demonstrations, how to play with the toy. We measured which action they performed first, as well as the time spent performing each action during transmission. There were two trials with two novel toys, resulting in a total of four action demonstrations and two transmission phases. While children showed both actions during transmission, they were more likely to perform the pedagogically demonstrated action first (see Figure 1a). These results are in the same direction as the previous findings (Vredenburgh, Kushnir, & Casasola, 2015), but no statistical analyses have been performed, as data collection is ongoing.

Exp. 2. To investigate how other factors mediate this effect, we manipulated both pedagogical demonstration and action complexity in a second experiment (N = 31). At pre-test, one demonstrator showed a simpler action in a non-pedagogical manner, while the other showed a more complex action in a pedagogical manner. As in Exp. 1, children showed both actions during transmission, but they were significantly more likely to demonstrate the simple action first (see Figure 1b), even though it was presented without the explicit pedagogical cues (t(30) = -2.68, p < 0.01).

Natural pedagogy theory (Csibra & Gergely, 2009) predicts that even when an action is more complex, children will still transmit it preferentially, because they have interpreted it as kind-generalisable knowledge to be taught to another conspecific. However, since we found instead that less complex actions are transmitted preferentially, our findings may favour a salience-based interpretation: pedagogical cues and ease of execution both enhance the salience of an action, increasing the likelihood that children transmit it first, but when pitted against each other, ease of execution wins.
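As a rough illustration of the statistical test reported for Exp. 2, here is a minimal Python sketch, with invented data and not the authors' analysis code, of a one-sample t-test comparing how often children transmitted the pedagogically demonstrated action first against chance.

```python
# A minimal sketch (not the authors' analysis code) of the kind of test reported
# for Exp. 2: does children's tendency to transmit the pedagogically demonstrated
# (complex) action first differ from chance? Data below are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-child scores: proportion of the two transmission trials
# (0, .5 or 1) on which the pedagogical/complex action was performed first.
first_choice_pedagogical = rng.choice([0.0, 0.5, 1.0], size=31, p=[0.5, 0.3, 0.2])

# Test against the chance level of 0.5; a negative t-value would indicate that
# the simpler, non-pedagogically demonstrated action tended to be shown first.
t, p = stats.ttest_1samp(first_choice_pedagogical, popmean=0.5)
print(f"t({len(first_choice_pedagogical) - 1}) = {t:.2f}, p = {p:.3f}")
```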
Labels in infants’ object categorization: Facilitative, or merely nondisruptive?
Kin Chung Jacky Chan¹, Gert Westermann¹
¹Lancaster University
Previous research has claimed that both labels and novel communicative signals (such as beeps used in a communicative context) can facilitate object categorization in 6-month-olds, and that such effects can be attributed to their communicative nature (e.g., Balaban & Waxman, 1995; Ferguson & Waxman, 2016). Yet other research has shown that young infants can form categories based on visual information alone, in the absence of any auditory signals (e.g., Behl-Chadha, 1996; Quinn, Eimas & Rosenkrantz, 1993). Further, the auditory overshadowing hypothesis suggests that auditory input disrupts concurrent visual processing (and thus categorization) in infants, with more familiar auditory signals interfering less (Sloutsky & Robinson, 2008). Therefore, two questions arise: (1) do communicative auditory signals truly facilitate categorization in 6-month-olds when compared to silence; and (2) can the comparable influence of labels and novel communicative signals be explained by familiarity with these auditory stimuli?

The present study addressed these questions by familiarizing 6-month-old infants (N = 28) with a sequence of animals (dinosaurs or fish) either in silence or in the presence of pre-familiarized tone sequences. These stimuli have been used in previous studies to argue that labels and tones used in a communicative sense during familiarization enable infants to form categories (Ferguson & Waxman, 2016). Nevertheless, we found that infants in both conditions looked longer at an out-of-category object during test (silent: t(11) = 3.25, p = .005, d = 1.02; familiar tones: t(12) = 2.31, p = .040, d = 0.64; Figure 1), indicating successful categorization both in the absence of labels or communicative tones and in the presence of familiar tones.

These results are consistent with the auditory overshadowing hypothesis in showing that infants can form categories in silence, and that labels do not facilitate, but merely do not disrupt, categorization. Furthermore, our results suggest that the effect of communicative tones is one of familiarity rather than communicative function. The findings of the present study warrant a reinterpretation of previous results obtained with the same stimulus set: labels and familiar auditory signals do not facilitate category formation in infants but, due to familiarity, also do not disrupt it, whereas less familiar auditory signals disrupt categorization. An important implication of these findings is that it is vital to include appropriate control conditions in studies of linguistic effects on object categorization.
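For illustration, the sketch below shows, with invented numbers, the general form of the analysis reported above: a per-infant novelty-preference score tested against chance with a one-sample t-test, together with Cohen's d. It is not the study's actual code.

```python
# A minimal sketch, with invented numbers, of a looking-time categorization
# analysis: per-infant preference for the out-of-category test object, tested
# against chance, plus a one-sample Cohen's d.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical preference scores for one condition (e.g., the silent condition):
# looking to the out-of-category object as a proportion of total test looking.
novelty_preference = np.clip(rng.normal(loc=0.57, scale=0.08, size=12), 0, 1)

t, p = stats.ttest_1samp(novelty_preference, popmean=0.5)
cohens_d = (novelty_preference.mean() - 0.5) / novelty_preference.std(ddof=1)
print(f"t({len(novelty_preference) - 1}) = {t:.2f}, p = {p:.3f}, d = {cohens_d:.2f}")
```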
The effect of labeling on infants' novel object exploration
Marina Loucaides¹, Gert Westermann¹, Katherine Twomey²
¹Lancaster University, ²Manchester University
Children learn their first object names by associating the items they see with the words they hear. Understanding how children link words with objects offers important insight into cognitive development (Yurovsky, Smith, & Yu, 2013). One significant aspect of learning word-object associations is the way in which children interact with objects in a dynamically complex everyday environment (Pereira, Smith, & Yu, 2014). For a full understanding of infants' object exploration it is important to examine where children look during labeling and non-labeling events, whether they use different exploratory styles during physical interaction versus passive observation, and whether language level affects the learning of new labels. This study used head-mounted eye-tracking to investigate how 24-month-old children (N = 24) interact with novel objects. Infants sat on a chair at a table across from
the experimenter and next to their parent. They were assigned either to a physical interaction group, in which they handled each object for 30 s, or to a non-physical interaction group, in which only the experimenter handled the objects. Within each group, half of the novel objects were labeled by the experimenter with novel labels (e.g., "Look, a blicket!"), and half of the objects were unlabeled. Following this session and a five-minute break, the experimenter tested children's retention of the label-object mappings by presenting the labeled objects and asking children for each in turn (e.g., "Which one's the blicket?"). Parents also completed a vocabulary inventory (UK-CDI).

Participants looked significantly longer at the objects in the non-physical interaction condition than in the physical interaction condition. There was no significant difference between looking times to the labeled and the unlabeled objects in either interaction condition, so there was no main effect of labeling. In the physical interaction condition participants showed a range of exploratory styles, looking at the experimenter's face and hands, the parent's face and hands, and their own hands. In contrast, participants in the non-physical interaction condition looked at the experimenter's face and hands but not at the parent's face and hands or their own hands. In the physical condition, a significant effect of labeling was found for looking at the experimenter's face: participants looked longer at the experimenter's face when the object was labeled than when it was not. Furthermore, the number of switches between the objects and other areas of interest during the physical condition was significantly higher when a novel object was labeled than when it was not. Productive vocabulary scores, looking time at the labeled objects, and switches during looking at the labeled objects did not predict retention of the novel words in either the physical or the non-physical condition.

This research helps to enhance our understanding of early cognition by demonstrating how different environmental settings and the labeling of novel objects can influence toddlers' interaction with objects, and how this affects their word learning.
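The retention analysis could take several forms; the hypothetical sketch below shows one simple way to ask whether vocabulary and looking measures predict retention, using an ordinary least-squares regression on invented data. Variable names and values are assumptions for illustration only, not the study's actual analysis.

```python
# A minimal, hypothetical sketch of one way to test whether vocabulary and
# looking measures predict retention of the novel labels (the abstract reports
# no such relationship). All data and variable names are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 24

cdi_vocab = rng.integers(50, 400, size=n)        # UK-CDI productive vocabulary
look_labeled = rng.normal(10, 3, size=n)         # looking time to labeled objects (s)
switches = rng.poisson(6, size=n)                # gaze switches during labeling
retention = rng.uniform(0, 1, size=n)            # proportion of labels retained at test

# Regress retention on the three predictors (plus an intercept).
X = sm.add_constant(np.column_stack([cdi_vocab, look_labeled, switches]))
model = sm.OLS(retention, X).fit()
print(model.summary())  # with real data, non-significant coefficients would mirror the null result
```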
Probing Communication-Induced Memory Biases in Preverbal Infants: Two Replication Attempts of Yoon, Johnson and Csibra (2008)
Priya Silverstein¹, Gert Westermann¹, Teodora Gliga², Eugenio Parise¹
¹Lancaster University, ²Birkbeck, University of London
In a seminal study, Yoon, Johnson and Csibra [PNAS, 105, 36 (2008)] showed that nine-month-old infants retained qualitatively different information about novel objects in
communicative and non-communicative contexts. In a communicative context, the infants encoded the identity of novel objects at the expense of encoding their location, which
was preferentially retained in non-communicative contexts. This result has not yet been replicated. Here we attempted two replications, while also including eye-tracking to obtain a more detailed measure of infants' attention allocation during stimulus presentation. Experiment 1 was designed following the methods described in the original paper. After discussion with one of the original authors, some key changes were made to the methodology in Experiment 2. Neither experiment replicated the results of the original study, with a Bayes factor analysis suggesting moderate support for the null hypothesis. Both experiments found differential attention allocation in communicative and non-communicative contexts, with more looking to the face in communicative than in non-communicative contexts, and more looking to the hand in non-communicative than in communicative contexts. High- and low-level accounts of these attentional differences are discussed.
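As an illustration of how a Bayes factor analysis can quantify support for the null in a replication attempt, here is a minimal Python sketch using invented numbers; pingouin's bayesfactor_ttest (a JZS Bayes factor) is assumed here as one convenient implementation, not necessarily the tool used in the study.

```python
# A minimal sketch, not the authors' analysis, of quantifying support for the
# null hypothesis with a JZS Bayes factor for a t-test. The t-value and sample
# sizes below are invented for illustration.
from pingouin import bayesfactor_ttest

t_value = 0.6    # hypothetical t from comparing conditions
n1, n2 = 16, 16  # hypothetical group sizes

bf10 = float(bayesfactor_ttest(t_value, nx=n1, ny=n2))
bf01 = 1.0 / bf10  # evidence for the null relative to the alternative
print(f"BF10 = {bf10:.2f}, BF01 = {bf01:.2f}")
# A BF01 between roughly 3 and 10 is conventionally read as moderate evidence for the null.
```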
Han Ke also gave a talk in a symposium session: Using innovative methods to understand children's curiosity-driven learning.
New evidence for systematicity in infants’ curiosity-driven learning
Han Ke¹, Gert Westermann¹, Katherine Twomey²
¹Lancaster University, ²Manchester University
Decades of research demonstrate that infants' learning is sensitive to task features. However, what level of complexity best supports learning is unclear. Moreover, infancy studies typically employ carefully designed experiments with complexity determined a priori. Whether infants systematically generate a particular level of difficulty during everyday, curiosity-driven exploration is therefore unknown. Twomey & Westermann's (2017) model of visual curiosity-driven learning predicted that infants will generate intermediate task complexity (cf. Kidd, Piantadosi & Aslin, 2012), but to date this prediction has not been tested. In the current work, we explored the possibility that infants will systematically prefer intermediate task complexity when allowed to freely explore a set of 3D objects. To test this hypothesis in a naturalistic environment, we developed a shape priming paradigm using 3D-printed toy-like stimuli and head-mounted
eye-tracking. Twelve-month-old (N = 18), 18-month-old (N = 18) and 24-month-old (N = 18) infants were tested. Stimuli were sets of 3D-printed objects whose edges varied along a continuum from angular to rounded. Differences between exemplars were manipulated such that each object was perceptually distinct from every other object, while differences between successive stimuli on the continuum were controlled. Infants were presented with a single 'prime' object from a set for free exploration, followed by the full remaining set. Infants' first touches and subsequent exploratory sequences were coded offline. Regardless of age group, infants showed the same pattern of object selection, first selecting the exemplars of greatest perceptual difference from the prime (see Figure 2). Cluster analysis demonstrated that infants exhibited two or three exploratory styles overall, generating either high-intermediate or low-intermediate complexity sequences. Overall, this study offers new evidence that infants as young as 12 months actively impose structure on their learning environment outside the constrained lab environment.
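As a rough sketch of the kind of cluster analysis described, the hypothetical example below groups infants by simple features of their exploration sequences. The features and data are invented for illustration; this is not the study's actual pipeline.

```python
# A minimal, hypothetical sketch of clustering infants by the complexity of
# their exploration sequences (e.g., mean and variability of the perceptual
# distance between successively touched objects). All numbers are invented.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# One row per infant: [mean step distance along the shape continuum,
#                      SD of step distances] for their touch sequence.
features = np.vstack([
    rng.normal([2.0, 0.5], 0.3, size=(27, 2)),  # lower-intermediate complexity
    rng.normal([3.5, 0.8], 0.3, size=(27, 2)),  # higher-intermediate complexity
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
print("Cluster sizes:", np.bincount(kmeans.labels_))
print("Cluster centres (mean step, SD step):\n", kmeans.cluster_centers_)
```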