Lab Blog

September 2018

Being involved in The ManyBabies Project – the biggest ever infancy study! by Priya Silverstein

When studying babies, we often try to make generalisations about what all babies do. However, whether you’re a researcher who studies babies or a parent or carer bringing your baby in for a study, I’m sure you’ll be aware that babies are highly variable! For example, in a simple eye-tracking study, one baby might look at the screen for most of the time, another may look mostly at the adult whose lap they’re sitting on, and another may look all around the room! So far we don’t really know what makes babies behave so differently in our studies, and little research has been done on this.

One of the problems with the studies we usually run is that we’re not able to get massive numbers of babies to take part in each one. Because of this, we often can’t look at all of this interesting variability across infants (you need pretty big numbers to do that). Another issue is that because we usually run a study in only one lab (e.g. our very own Lancaster Babylab), we don’t get much variability in background and culture, with all of the babies in one study coming from the same area.

So, following in the footsteps of adult psychologists (e.g. The ManyLabs Project), some infancy researchers got together and decided to create something called the ManyBabies Project. This is where babylabs all over the world get infants to take part in the exact same study. This means we’re able to get thousands of infants to take part, and we get loads of interesting variability to look at.

For the first version of the ManyBabies Project, we’re looking at ‘The Infant-Directed Speech Preference’. This is the finding that babies tend to prefer listening to the sing-songy voice we put on when talking to babies over the voice we use when talking to other adults. The first part of this project has already finished, so if your baby was in this version of the study (run by Katie Twomey) then I can give you a snippet of the results!

We found that although babies all over the world did prefer to listen to infant-directed speech, there was a lot of variability, with North American babies responding most strongly (no surprise, as the people the babies were listening to had North American accents)!

Another interesting finding was that the preference was strongest in the ‘Headturn Preference Procedure’. This was the version of the study where your baby heard the speech coming from speakers on either side of them (one side for infant-directed speech and one for adult-directed speech), and the speech stopped when they turned away. We think the preference may have been strongest in this version because turning towards one side is a more active response than simply looking forwards. In looking-time and eye-tracking studies we take looking forwards to mean that babies are interested, but it could also just mean that they’re bored!

As a follow-up, I’m currently running the second part of this study, which assesses ‘Test-Retest Reliability’. This checks whether the same baby will show the same amount of preference for infant-directed speech in two sessions a week apart. It will tell us whether the magnitude of the preference is something that varies across babies (with some babies preferring it a lot, and others not at all) or across sessions (more a matter of how the baby is feeling on the day). Keep your eyes and ears out for updates!
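For the statistically curious, the basic idea can be sketched in a few lines of code. This is purely an illustration of what ‘reliability’ means – the numbers are completely made up, and it is not the ManyBabies analysis itself: if each baby’s preference is a stable trait, scores from the two sessions should line up.

```python
# A made-up illustration (NOT the ManyBabies analysis code) of test-retest
# reliability: if preference is a stable trait of each baby, the two sessions
# should be strongly correlated.

import numpy as np

# Each position: one hypothetical baby's infant-directed-speech preference
# score in session 1 and in session 2, a week later.
session1 = np.array([4.2, 1.0, 3.1, 0.2, 2.8, 5.0])
session2 = np.array([3.9, 1.4, 2.5, 0.5, 3.2, 4.6])

r = np.corrcoef(session1, session2)[0, 1]
print(f"Test-retest correlation: r = {r:.2f}")
# A high r suggests the preference varies across babies rather than sessions.
```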

There are lots of other exciting things being explored in these data, including things that will be useful for the rest of our studies in babylabs across the world (e.g. whether it matters if babies are sitting on someone’s lap or in a highchair, and whether it matters if the baby has just woken up or just eaten).

We’re very proud to be a part of this exciting project, and look forward to being a part of other versions of the ManyBabies project in the future!

More detail about the ManyBabies Project: https://manybabies.github.io/


September 2016

Learning words affects how babies see objects by Dr. Katie Twomey

Anyone who’s spent time with a young child knows that they pick up words very quickly! But it’s less well-known that learning a word for something fundamentally changes the way children interact with that thing. In a fascinating recent study, researchers from Lancaster University showed 10-month-old babies a series of pictures of made-up cartoon animals. All the animals belonged to the same category, but importantly, the pictures were created especially for the study. Using these “novel” stimuli meant that the researchers could be sure that babies’ behaviour was related only to what they’d learned from the pictures that day, and not based on anything they might have learned at home, for example.

Half the babies saw a series of eight pictures on a computer screen and heard a nonsense word (e.g., “Look! A blicket!”), and the other half saw the same eight pictures, but heard two words which divided the pictures into two categories. As a real-world example, imagine a series of pictures: four cats and four kittens. One group of babies would see all eight pictures and hear “Look! A cat!”, while the other would hear “Look! A kitten!” during the kitten pictures, and “Look! A cat!” during the cat pictures. Importantly, all babies saw the same pictures in the same order: the only difference between the two groups was the word they heard.

After babies had been shown all eight pictures (known as “familiarisation” in the trade!) the researchers showed them new pictures – some of which were members of the familiarised category, and some of which were not. Astonishingly, the way in which babies looked at these new pictures suggested that simply hearing different words had affected how they had learned about the pictures: babies in the first group formed a single category (e.g., all the pictures are cats), whereas babies in the second group grouped the pictures into two categories (e.g., four pictures are cats, four pictures are kittens). This is the kind of finding that developmental psychologists get very excited about! It suggests that all else being equal, the words babies learn can actually affect how they see the world.

Our new study extends this idea. We’re interested in what might be going on when word learning interacts with learning about objects. We think that if a baby has an equal amount of experience with two items, but knows a word for one of those items, they should interact differently with the named item even when they don’t hear its name. To explore this idea, we’ve been giving parents two toys and asking them to play with them with their baby every day for a week, or asking them to read their baby a book containing photos of the objects, again for a week. The clever bit is that only one of the toys will have a name. After a week, we invite parents back to the Babylab to take part in an eye-tracking study, during which we show babies pictures of the two objects on a computer screen and record where, and for how long, they look (more information on eye-tracking is available here). If we’re right, and knowing a name for an item does change the way babies react to it, we should see a difference in the amount of time babies spend looking at the named and unnamed objects.

What’s most intriguing about this study is that the babies taking part are at the very beginning of language acquisition – at this age they say few, if any, words. So, it’s possible that this study might reveal the astonishing effect of learning words on our perception of the world even before speech starts! If so, this will provide evidence that language influences the way we think from a very early age indeed. We’ll be in touch once we’ve collected all our data, but if you have any questions in the meantime please drop Katie a line on k.twomey[at]lancaster.ac.uk.

References:

[1] Althaus, N., & Westermann, G. (2016). Labels constructively shape categories in 10-month-old infants. Journal of Experimental Child Psychology.

Acknowledgements:

We are very grateful to all the children and adults who took part – we couldn’t do it without you! This work was funded by the  ESRC International Centre for Language and Communicative Development (LuCiD) [ES/L008955/1].


May 2016

Toddler robots help solve the language puzzle by Dr. Katie Twomey

The world is an incredibly complicated place when it comes to learning your first words. Imagine a toddler playing with a toy duck, a toy rabbit, and a brand new, orange toy with a very long neck. The toddler’s mum or dad points at the toys and says “Look, giraffe!”. But how does the child know what “giraffe” means? Is it the colour of the toys? The game being played? Is it another word for “rabbit”, perhaps? If you think about it, “giraffe” could refer to anything the child can see, hear, taste or smell! But despite having to solve this puzzle every time they hear a new word, by around a year babies begin to speak.

Understandably, how children crack the word learning problem has been studied intensively for decades, and we now know that two-year-old children can work out the meaning of a new word based on words they already know. That is, our toddler can work out that “giraffe” refers to the new toy, because the other two are called “duck” and “rabbit”. This clever behaviour has been demonstrated many times, all over the world, with toddlers from 18 months (e.g., [1]–[3]). So how do they do it? There are two likely possibilities. The first is that toddlers associate new words (e.g. “giraffe”) with new things (e.g., the orange toy). We’ll call this a “new-to-new” strategy. The second is that they use a process of elimination to work out that because the brown toy is called “rabbit”, and the yellow toy is called “duck”, then the orange toy must be “giraffe”. This is known as “mutual exclusivity”.

In a recent study with two-year-old children [4], toddlers were better at learning words for a new toy when there were only two other familiar toys present. When there were more than two familiar toys, toddlers couldn’t learn the words. If children use a new-to-new strategy, then the number of toys shouldn’t affect how easy it is to learn words, because they only need to look at the new one to link it with the word. The authors of this study concluded that children must be using mutual exclusivity: looking at four familiar toys to rule them out before linking the new word to the new toy is harder than looking at just two familiar toys.

So, it looks like toddlers use mutual exclusivity to work out what words mean. So far, so astonishing! But where do the robots come in? With any study in developmental psychology, we can observe what children do but only guess what they’re thinking. With robots, we can observe what they do and observe what goes on inside their “brains” – after all, robots are just computer programs with bodies. And if robots do what children do, the way the robot is programmed may have something in common with how children’s thinking works.

LuCiD researcher Dr. Katie Twomey, with Dr. Jessica Horst from the University of Sussex, and Dr. Anthony Morse and Prof. Angelo Cangelosi from the University of Plymouth, wanted to find out what was going on as the children in the study learned words. They used iCub [5], a humanoid robot designed to have similar physical proportions to a three-year-old child. They programmed it using simple software [6] that lets the robot link the words it hears (through its microphones) to objects it sees (with its cameras). So, like the children in the study, iCub could use either a new-to-new strategy or mutual exclusivity to link words to objects.

First, the researchers trained iCub to point at toys when it heard their names. You can watch an example of this here: https://www.youtube.com/watch?v=6aHrYkQhWcY. Then iCub did the same task as the children: the researchers showed it several familiar toys and one brand-new toy. When iCub heard a word (e.g., “bus”) the robot pointed, indicating which toy it associated the word with. With just its simple software, iCub could solve the word-learning puzzle: it pointed at new toys when it heard new words. Amazingly, when the researchers tested iCub’s word learning they found that iCub did exactly what children do: it only learned words in the context of two familiar objects – not three or four. So, like toddlers, iCub learned words using mutual exclusivity.

But why use a cutting-edge piece of technology when children do this all the time? Remember, we can see what iCub is thinking! What this new study shows is that mutual exclusivity behaviour can be achieved with a very simple “brain” that just learns associations between words and objects – there’s no need for iCub to reason in depth about what it sees and what it already knows. In fact, intelligent as iCub seems, it actually can’t think “I know that the brown toy is a rabbit, and I know that the yellow toy is a duck, so this new toy must be the giraffe”, because its software is too simple. This suggests that at least some aspects of early learning are based on an astonishingly powerful – but elegantly simple – association-making ability that allows babies and toddlers to rapidly absorb information from the very complicated learning environment.
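For readers who like to peek under the bonnet, here is a tiny sketch of the general idea. It is not the actual iCub/ERA software described in [6], and all the toy and word names are invented, but it shows how a learner that only stores word–object association strengths can still end up pointing at the new toy when it hears a new word:

```python
# A toy sketch of a purely associative word learner (NOT the actual iCub/ERA
# software from [6]; all names below are invented for illustration).
from collections import defaultdict

class AssociativeLearner:
    def __init__(self):
        # assoc[word][obj] = accumulated word-object co-occurrence strength
        self.assoc = defaultdict(lambda: defaultdict(float))

    def hear_and_see(self, word, obj, strength=1.0):
        """Strengthen the link between a heard word and a seen object."""
        self.assoc[word][obj] += strength

    def point_at(self, word, visible_objects):
        """Point at the object that wins the association competition for `word`:
        its own link to the word, minus how strongly it is already 'claimed'
        by other words. No explicit reasoning happens anywhere."""
        def score(obj):
            own = self.assoc.get(word, {}).get(obj, 0.0)
            claimed = sum(links.get(obj, 0.0)
                          for w, links in self.assoc.items() if w != word)
            return own - claimed
        return max(visible_objects, key=score)

learner = AssociativeLearner()
for _ in range(5):                            # play sessions with familiar toys
    learner.hear_and_see("duck", "yellow_toy")
    learner.hear_and_see("rabbit", "brown_toy")

toys = ["yellow_toy", "brown_toy", "orange_toy"]
print(learner.point_at("duck", toys))         # -> yellow_toy (a known word)
print(learner.point_at("giraffe", toys))      # -> orange_toy (new word, new toy)
```

The “mutual exclusivity” here simply falls out of competition between stored associations – there is no line of code that reasons about what the other toys are called.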

More adventures with iCub are underway – watch this space! And remember, next time you say your child has a brain like a sponge, you might be closer than you think!

The full study is reported in the following article, available open-access from early July 2016:

K. E. Twomey, A. F. Morse, A. Cangelosi, and J. S. Horst, “Children’s referent selection and word learning: insights from a developmental robotic system,” Interaction Studies, vol. 17, no. 1, 2016.

References:

[1] E. M. Markman, “Constraints on word meaning in early language acquisition,” Lingua, vol. 92, no. 1–4, pp. 199–227, 1994.

[2] J. Halberda, “The development of a word-learning strategy,” Cognition, vol. 87, no. 1, pp. B23–B34, 2003.

[3] C. Houston-Price, K. Plunkett, and P. Harris, “‘Word-learning wizardry’ at 1;6,” J. Child Lang., vol. 32, no. 1, pp. 175–189, 2005.

[4] J. S. Horst, E. J. Scott, and J. P. Pollard, “The role of competition in word learning via referent selection,” Dev. Sci., vol. 13, no. 5, pp. 706–713, 2010.

[5] G. Metta, L. Natale, F. Nori, G. Sandini, D. Vernon, L. Fadiga, C. von Hofsten, K. Rosander, M. Lopes, J. Santos-Victor, A. Bernardino, and L. Montesano, “The iCub humanoid robot: An open-systems platform for research in cognitive development,” Neural Netw., vol. 23, no. 8–9, pp. 1125–1134, 2010.

[6] A. F. Morse, J. de Greeff, T. Belpaeme, and A. Cangelosi, “Epigenetic Robotics Architecture (ERA),” IEEE Trans. Auton. Ment. Dev., vol. 2, no. 4, pp. 325–339, 2010.

This blog was originally written for the ESRC International Centre for Language and Communicative Development. See the original post here.

March 2016

More than meets the eye: how children can learn verbs from what they hear, not what they see by Dr. Katie Twomey

How children learn verbs is one of the trickiest conundrums facing researchers in language acquisition. Nouns are easy: it’s not surprising that the first object names babies learn are for the objects they see and interact with on a day-to-day basis, like shoe, bottle, and blanket. Verbs are more of a challenge, though, because verbs refer to actions, which may only be visible for a short space of time (e.g., throw) – if at all (e.g., like). To make things more confusing, as well as learning the verbs themselves, children have to learn how to use them properly. For example, we know that five-year-olds tend to make mistakes like *mummy filled milk into the bottle or *daddy poured the bath with water. The question is, how do children learn to stop making these mistakes?

Over the years various explanations have been offered. Perhaps the most influential theory, which we’ll call the what they see approach, assumes that children decide how to use verbs based on the visual scene. Specifically, they mention the most-changed thing first (Pinker, 1989). So, they would say “mummy filled the bottle with milk” when the bottle becomes completely full but the movement of the milk isn’t noticeable, and “mummy poured milk into the bottle” when the movement of the milk is more visible and the bottle doesn’t become completely full.

However, what if the bottle isn’t transparent, or they can’t see the milk? More generally, it’s likely that the visual information children need to decide how to use these verbs isn’t consistently available, especially for rare verbs like “infuse”. Luckily there’s an alternative way of learning this – the what they hear approach (Twomey, Chang & Ambridge, 2014). This explanation only requires children to listen to how verbs are used, and then copy it. These verbs tend to be followed by certain types of noun. For example, fill is usually followed by container-type words like bucket, cup and box, and pour is followed by liquid-type words like water, paint and juice. To avoid making a mistake, children only have to learn that fill is followed by a container-type word, and pour by a liquid-type word – intriguingly, they don’t need to look at anything at all.
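To make the what they hear idea a bit more concrete, here is a very simple sketch. It is not the model from Twomey, Chang & Ambridge (2014), and the example ‘utterances’ are invented, but it shows how a learner could simply keep count of which type of noun follows each verb and then reuse the most frequent pattern:

```python
# A toy illustration (NOT the model from Twomey, Chang & Ambridge, 2014) of the
# "what they hear" idea: tally which type of noun tends to follow each verb,
# then reuse the most frequent type. Example data are invented.

from collections import Counter, defaultdict

NOUN_TYPE = {"cup": "container", "bucket": "container", "box": "container",
             "water": "liquid", "paint": "liquid", "juice": "liquid"}

# Verb-noun pairs the learner has "heard" (verb followed directly by a noun).
heard = [("fill", "cup"), ("fill", "bucket"), ("fill", "box"),
         ("pour", "water"), ("pour", "paint"), ("pour", "juice")]

# Tally how often each verb is followed by each noun type.
counts = defaultdict(Counter)
for verb, noun in heard:
    counts[verb][NOUN_TYPE[noun]] += 1

def preferred_noun_type(verb):
    """Return the noun type most often heard directly after this verb."""
    return counts[verb].most_common(1)[0][0]

print(preferred_noun_type("fill"))  # -> container ("fill the bottle with milk")
print(preferred_noun_type("pour"))  # -> liquid    ("pour milk into the bottle")
```

Notice that nothing in this sketch ever looks at a visual scene – the pattern comes entirely from the language the learner hears.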

Researchers from the ESRC International Centre for Language and Communicative Development tested these two theories by showing 5-year-olds, 9-year-olds and adults animations of a robot on a spaceship performing made-up actions with two objects, for example filling a cone with blobs of oil using a shooting motion. In each action, one object was changed more than the other (e.g., the cone became completely full), but the experimenter also described the action using a made-up verb (e.g., “pilked”) and either a container-type or a liquid-type noun (e.g., the robot pilked the oil). After seeing several of these “learning” animations, participants saw a new “recap” animation with the same action but new, equally-changed objects, and were asked to describe it. If the what they see theory is correct, then participants should describe the scene by mentioning first the object that was most changed in the training scenes, for example the robot pilked the cone with oil. If the what they hear theory is correct, though, participants should describe the scene by mentioning first the type of object (container or liquid) that came first in training, for example the robot pilked the oil into the cone.

Perhaps not surprisingly, 5-year-olds didn’t seem to be using either strategy. This might be because they needed more practice with the learning animations – after all, at this age they still make mistakes with this kind of verb. However, 9-year-olds and adults followed the what they hear strategy, placing the object that they’d heard first on the learning trials earlier in their descriptions of the scene. This suggests that older children can learn how to use these more difficult verbs by listening to how they are used by adults.

Excitingly, this is one of the first times children have been shown to be able to learn how verbs work without paying detailed attention to the visual scene. While many studies have shown that the quality and quantity of language matters for babies’ very first words, this work shows that it is just as important to give older children plenty of opportunities to hear and use language.

Further reading:

The original paper is freely available at http://www.sciencedirect.com/science/article/pii/S0010027716301160

For more information about early word learning see our Nursery World articles here http://www.lucid.ac.uk/resources/for-practitioners/nursery-world-magazine.

References:

Pinker, S. (1989). Learnability and cognition: The acquisition of argument structure. Cambridge, Mass.: Harvard University Press.

Twomey, K. E., Chang, F., & Ambridge, B. (2016). Lexical distributional cues, but not situational cues, are readily used to learn abstract locative verb-structure associations. Cognition, 153, 124–139.

Twomey, K. E., Chang, F., & Ambridge, B. (2014). Do as I say, not as I do: A lexical distributional account of English locative verb class acquisition. Cognitive Psychology, 73, 41–71.

Acknowledgements:

We are very grateful to all the children and adults who took part – we couldn’t do it without you! This work was funded by Leverhulme Research Project Grant RPG-158 and ESRC Centre Grant ES/L008955/1.

This blog was originally written for the ESRC International Centre for Language and Communicative Development. See the original post here.


May 2015

Understanding how children’s curiosity drives their learning by Dr. Katie Twomey

For decades, developmental psychologists have used ingeniously-designed studies to investigate how children learn about the things they see.  Typically, we show children a category of items and let them become familiar with them.  Then, we introduce them to new things. Because we know that toddlers prefer to look at new things rather than old things, their reaction to the new items can tell us a lot about how they learn.  For example, we could show a toddler several different pictures of cats, and then show them a picture of a new cat next to a picture of a dog.  If the toddler is more interested in the dog than the cat, we can tell they recognise the dog as new.  In other words, they’ve learned a CAT category which excludes the dog picture.

These studies allow us to change aspects of the categories toddlers see, and based on these techniques it’s now well-established that small changes to these categories can have a big effect on what toddlers learn, and even how they learn their first words.  For example, we know that experience with lots of different examples of a category makes it easier to learn the word for that category, and can even speed up vocabulary growth!

Despite knowing so much about how children behave in these studies, until recently there’s been a gap in our knowledge: our understanding of learning is limited to what toddlers and babies do in the lab.  Toddlers outside the lab rarely come across a whole category of items at once, and they’re even less likely to interact with each item for the same amount of time.  Instead, as any parent knows, toddlers are curious explorers who play with what they want, when they want!  Only very recently have researchers started to examine how toddlers and babies learn when learning is curiosity-driven.  For example, recent work has shown that when parents and babies play with toys together, what babies see is surprisingly different to what their parents see.  Objects appear substantially bigger to babies than they do to adults, and it’s thought that this predominance of objects in babies’ visual field may be important for early category and word learning.

So, we know that toddlers love exploring, and that the way in which they explore affects their learning.  But what drives them to explore?  Why keep trying to walk – and fall over again, and again, and again – when you’re already an expert crawler?  Where does this curiosity come from?  Colleagues in robotics have explored this question with robots which learn via an internal reward system called “intrinsic motivation”, but so far there’s been little work investigating what happens in children (although this forthcoming work is a great example using puppy robots!).

We decided to tackle this problem using a computer model of how children behave in a recent category learning study.  Instead of deciding what the model saw, we let it learn via curiosity.  The model chose the order in which it saw images by comparing what it had already learned with how easily it could learn new items.  We showed that the model learned best from images that were complex – but not too complex!  This “Goldilocks” effect, by which learning is best supported by just the right amount of information, has been seen in related studies with babies.
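For the technically minded, here is a deliberately simplified sketch of the general principle.  It is not the model reported in our paper, and the “images” are just random numbers, but it shows the flavour of curiosity-driven selection: the learner picks whichever item’s novelty is closest to a “just right” level, then updates what it knows.

```python
# A toy sketch (NOT the actual model from the paper) of Goldilocks-style,
# curiosity-driven item selection. The learner keeps a running average of what
# it has seen and prefers items whose novelty is neither too low nor too high.
# All numbers are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
images = rng.random((10, 16))      # 10 candidate "images" as feature vectors
memory = images[0].copy()          # what the model has learned so far
seen = {0}
TARGET_NOVELTY = 0.4               # the "just right" level of novelty

def novelty(img):
    """Distance between an image and the model's current knowledge."""
    return np.linalg.norm(img - memory) / np.sqrt(img.size)

for step in range(5):
    unseen = [i for i in range(len(images)) if i not in seen]
    # Curiosity: pick the unseen image whose novelty is closest to "just right".
    choice = min(unseen, key=lambda i: abs(novelty(images[i]) - TARGET_NOVELTY))
    print(f"step {step}: chose image {choice} (novelty {novelty(images[choice]):.2f})")
    memory += 0.3 * (images[choice] - memory)   # learning: nudge memory towards it
    seen.add(choice)
```

The key point is that the order of learning is chosen by the learner itself, based on what it already knows – which is exactly what we mean by curiosity-driven learning.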

What does this computational work tell us?  It shows that given the chance to choose what it learned from and when, the model created an environment for itself that was just right for learning.  It also suggests that toddlers’ curiosity is based on what they already know and how quickly they can learn, as well as what they’re seeing at the time.  Of course, this is a computer simulation.  Our next, important task is to explore some of these ideas with babies. Excitingly, because babies learn their first words for the categories that they see day-to-day, like CAT or SHOE, understanding how curiosity-driven learning works will help us understand the critical first steps in one of the most valuable skills a human being can learn – language.

Research into curiosity-driven learning is important because it helps us understand how children learn in the real world, rather than in a cleverly-designed study.  After all, how many parents have time to meticulously plan everything their child sees that day?  The important message that seems to be emerging from these early studies is that because they can construct their own “best” learning environment, it’s good to let toddlers explore!

This work was presented at the Fifth Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics, Brown University, USA, August 2015. This blog was originally published via the ESRC International Centre for Language and Communicative Development.

References

Kidd, C., Piantadosi, S. T., & Aslin, R. N. (2012). The Goldilocks effect: Human infants allocate attention to visual sequences that are neither too simple nor too complex. PloS One, 7(5), e36399.

Mather, E., & Plunkett, K. (2011). Same items, different order: Effects of temporal variability on infant categorization. Cognition, 119(3), 438–447.

Oudeyer, P.-Y., & Smith, L. (in press). How evolution may work through curiosity-driven developmental process. Topics in Cognitive Science.

Perry, L. K., Samuelson, L. K., Malloy, L. M., & Schiffer, R. N. (2010). Learn locally, think globally: Exemplar variability supports higher-order generalization and word learning. Psychological Science, 21(12), 1894–1902.

Smith, L. B., Yu, C., & Pereira, A. F. (2011). Not your mother’s view: The dynamics of toddler visual experience. Developmental Science, 14(1), 9–17.

Twomey, K. E., Ranson, S. L., & Horst, J. S. (2014). That’s more like it: Multiple exemplars facilitate word learning. Infant and Child Development, 23(2), 105–122.


March 2015

Watching TV can actually be good for toddlers by Dr. Gemma Taylor

Scaremongering about the negative effects of children’s TV-watching is not new. But in our busy lives it’s more and more tempting to let your child watch television for half an hour or so while you tidy up, wash up, make phone calls, pay your bills or simply take a moment to sit down.

Despite the prevalence of television programmes targeting young children, the American Academy of Paediatrics discourages television exposure for children under the age of two years and recommends that exposure is restricted to less than one to two hours thereafter. But new research has shown that after watching an educational children’s television programme, toddlers can learn to count to five and learn to read a simple map presented on the show.

Bad reputation

In its 2011 policy statement, the American Academy of Paediatrics reported that television viewing was associated with an overall reduction in both parent-child interactions and children’s creative play, irrespective of whether the television was on in the background or the foreground. Television itself does not offer an ideal learning situation for children. We know that children up to three years of age exhibit a video deficit – meaning they learn less from television than they do from a live interaction. So it’s clear that children’s television exposure should be moderated.

When presented with TV programmes, children are faced with a transfer task, meaning that they must transfer what they learn from a 2D television screen to the 3D world. The poorer quality of visual and social information presented on TV can lead to a less detailed representation of the information in children’s memory and subsequent difficulties transferring learnt information to the real world.

Visual cues such as size and depth are reduced on a 2D television screen compared to the 3D world. Likewise, in contrast to a real-world social situation, an actor or character on television cannot respond to what the child is looking at, saying or doing at a given moment in time. Characterising the video deficit as a transfer problem is helpful for understanding how to support children’s learning from television.

Helping children learn from television

There is still hope for educational children’s TV programmes. Children’s television producers and parents can employ a number of techniques to enhance learning from TV and support children’s knowledge transfer to the real world. One reason that children often ask to watch the same TV programmes over and over again is that they learn better from repeatedly being exposed to the same thing.

Repetition within a TV show, such as repeating sequences or new words, or repeatedly watching the same show across a number of days can enhance children’s learning, memory and transfer of the information to the real world.

What’s more, the more familiar the television character, the more likely it is that children will learn from a television programme featuring that character. Repetition helps children to store more detailed representations of the information in their memory. While watching the same children’s TV show with your child may be tedious for parents, it is beneficial for children.

Interactivity is key

Making children’s TV an interactive activity is also beneficial for children’s learning and later knowledge transfer to the real world. Television programmes aimed at children aged two years and above, such as Dora the Explorer and Blue’s Clues, try to promote a social interaction between the child and the television character by getting characters to look directly at the camera, and by using questions and pauses to allow time for children’s responses.

Children are more likely to understand the content of a children’s television programme when they respond to a character’s interactive questions. In programmes like Dora the Explorer, television characters’ feedback responses to children are limited to things like “Good job” and “That’s right”. Watching television with your child and giving them better feedback on their responses will give you the opportunity to further support their learning from television.

So yes, children’s TV does have the potential to be educational for young children. But not all children’s TV programmes are created equal. While some provide a good learning platform for young children, others are better suited to entertainment purposes only. Watching TV with your child and making the experience more interactive can enhance the educational value for your child.

This article was originally published on The Conversation and via the ESRC International Centre for Language and Communicative Development.