Study blogs for parents

Welcome to our study blogs page! Below you will find descriptions of the studies you may have taken part in. Studies are listed by the age group of the children who have taken part. If you have any questions please get in touch!

Studies with 6-month-old babies


Infant Media Exposure and Language Development Questionnaire (Spring/Summer 2015; Researcher: Gemma Taylor)

Screen media such as television and touchscreen apps are becoming increasingly available to children in their day-to-day lives. In fact, in 2013, 31% of 0-2-year-old and 67% of 2-4-year-old American children were watching television at least once a day, for totals of 44 and 64 minutes a day, respectively [1]. Importantly, when adult-directed television is on in the background, parents talk less to their children [2]. Given that children’s language development is strongly related to the number of words that children hear on a daily basis [3], we need to consider the implications of screen media exposure for children’s language development.

We used an online questionnaire to ask parents of 6- to 36-month-old children in the UK about their children’s experience with different types of baby media and their language skills. This research will help shape our understanding of how young children use media to acquire language.

Details of related studies can be found by entering the following titles into Google Scholar (www.scholar.google.com).

[1] Rideout, V. (2013). Zero to eight: Children’s media use in America 2013. San Francisco, CA: Common Sense Media.

[2] Pempek, T. A., Kirkorian, H. L., & Anderson, D. R. (2014). The Effects of Background Television on the Quantity and Quality of Child-Directed Speech by Parents. Journal of Children and Media, 8(3), 211–222.

[3] Weisleder, A., & Fernald, A. (2013). Talking to children matters: early language experience strengthens processing and builds vocabulary. Psychological Science, 24(11), 2143–52.

 

Studies with 10-month-old babies


How do labels affect babies’ interactions with objects? Study 2 (Autumn 2016; Researchers: Katie Twomey, Elle Hart, Charlotte Rothwell; 10-month-old and 14-month-old babies)

In the previous study (see below) we demonstrated that teaching babies a word for an object affects how they later examine that object, even when they see it in silence. This exciting finding suggests that babies’ language knowledge is very closely linked with what they know about the world. However, babies in the first study only learned names for single objects. In reality, babies are much more likely to learn names for object categories. For example, they might see their ginger pet cat at home, then a tabby cat at Grandma’s, and read about a black-and-white cat in their favourite book. Although these are different objects, they’re still all called “cat”.

In this study, then, we’re interested in how learning a word for a category of objects – rather than a single object – affects how babies later examine members of that category. We’re asking parents to read their child a very simple storybook containing photos of simple toys and their names, twice a day for a week. Then, we invite parents to the Babylab to take part in a simple looking time task in which we will show their baby pictures of the objects they’ve seen in the storybook, and record how long they look at the pictures (for more details, see below). As in the first study, the results will tell us about the relationship between what infants know about object categories, and what they know about words.

How bilingual babies listen to and process speech sounds from different languages (Summer 2016; Researcher: Shirley Cheung)

During the first 6-8 months of life, infants are sensitive to small changes in the sounds of language, even from languages they’ve never heard before! After that, their sensitivity to these small changes in sounds (phonemic contrasts) shifts from universal to language-specific, in what is called perceptual narrowing [1]. Perceptual narrowing is crucial for language acquisition: as infants become increasingly committed to the native language(s) in their environment, their sensitivity to non-native sounds decreases.

My study asks whether language background affects the perception of these small changes in sounds, especially at the time when perceptual narrowing occurs. Do bilingual babies detect the changes better than monolingual babies, given the extra work of managing a second language? If both groups of babies detect the contrasts, do they process them in different parts of the brain?

I will use a technique called functional near-infrared spectroscopy (fNIRS for short) to measure 10- to 12-month-old infants’ detection of phonemic contrasts and the brain areas activated in response. fNIRS shines harmless infrared light on the scalp to measure changes in blood-oxygen levels in the brain [2]. It is quite comfortable for infants and is used by many labs across the world to study infant development.

For even more detailed information, click here.

Details of related studies can be found by entering the following titles into Google Scholar (www.scholar.google.com).

[1] Kuhl, P. K. (2004). Early Language Acquisition: Cracking the Speech Code. Nature Reviews Neuroscience, 5, 831–843.

[2] Minagawa-Kawai, Y., Mori, K., Hebden, J.C., Dupoux, E., 2008. Optical imaging of infants’ neurocognitive development: recent advances and perspectives. Dev. Neurobiol. 68, 712–728.

How do labels affect babies’ interactions with objects? Study 1 (Summer 2015; Researcher: Katie Twomey)

UPDATE: This work was presented at the 38th Annual Cognitive Science Society Meeting, Philadelphia, Pennsylvania, USA (August 2016) and the 1st Lancaster Conference on Infancy and Child Development (August 2016). Read the paper here. See above for our second study.

Anyone who’s spent time with a young child knows that they pick up words very quickly! But it’s less well-known that learning a word for something fundamentally changes the way children interact with that thing. In a fascinating recent study, researchers from Oxford University showed 10-month-old babies a series of pictures of made-up cartoon animals [1]. All the animals belonged to the same category, but importantly, the pictures were created especially for the study. Using these “novel” stimuli meant that the researchers could be sure that babies’ behaviour was related only to what they’d learned from the pictures that day, and not based on anything they might have learned at home, for example.

Half the babies saw a series of eight pictures on a computer screen and heard a nonsense word (e.g., “Look! A blicket!”), and the other half saw the same eight pictures, but heard two words which divided the pictures into two categories. As a real-world example, imagine a series of pictures: four cats and four kittens. One group of babies would see all eight pictures and hear “Look! A cat!”, while the other would hear “Look! A kitten!” during the kitten pictures, and “Look! A cat!” during the cat pictures. Importantly, all babies saw the same pictures in the same order: the only difference between the two groups was the word they heard.

After babies had been shown all eight pictures (known as “familiarisation” in the trade!) the researchers showed babies new pictures – some of which were members of the familiarised category, and some of which were not. Astonishingly, the way in which babies looked at these new pictures suggested that simply hearing different words had affected how they had learned about the pictures: babies in the first group formed a single category (e.g., all the pictures are cats), whereas babies in the second group grouped the pictures into two categories (e.g., four pictures are cats, four pictures are kittens). This is the kind of finding that developmental psychologists get very excited about! It suggests that all else being equal, the words babies learn can actually affect how they see the world.

This study extends this idea. Based on predictions made by a computer simulation of babies’ word learning [2], we’re interested in what might be going on when word learning interacts with learning about objects. The simulation predicts that if a baby has an equal amount of experience of two items, but knows a word for one of those items, they should interact differently with the named item even when they don’t hear the word for that item. To explore this idea, we’ve been giving parents two toys and asking them to play with their baby with both toys, every day for a week. The clever bit is that only one of the toys has a name. After a week, we invite parents back to the Babylab to take part in an eye-tracking study, during which we show babies pictures of the two objects on a computer screen and record where and for how long they look (more information on eye-tracking is available here). If we’re right, and knowing a name for an item does change the way babies react to it, we should see a difference in the amount of time babies spend looking at the named and unnamed objects.

What’s most intriguing about this study is that the babies taking part are at the very beginning of language acquisition – at this age they say few, if any, words. So, it’s possible that this study might reveal the astonishing effect of learning words on our perception of the world even before speech starts! If so, this will provide evidence that language influences the way we think from a very early age indeed. We’ll be in touch once we’ve collected all our data, but if you have any questions in the meantime please drop Katie a line on k.twomey[at]lancaster.ac.uk.

Details of related studies can be found by entering the following titles into Google Scholar (www.scholar.google.com).

[1] Plunkett, K., Hu, J. F., & Cohen, L. B. (2008). Labels can override perceptual categories in early infancy. Cognition, 106(2), 665–681. http://doi.org/10.1016/j.cognition.2007.04.003

[2] Westermann, G., & Mareschal, D. (2014). From perceptual to language-mediated categorization. Philosophical Transactions of the Royal Society B: Biological Sciences, 369(1634), 20120391.

Fast learning of word-picture mappings (Summer/Autumn 2015; Researchers: Katie Twomey & Ben Malem)

We know that babies are amazingly good at linking words to things in the world, or “fast mapping”. For example, we know that 18-month-old babies can quickly link words to pictures in the course of a short study [1], and we know that hearing a word alongside its picture many times helps older children learn that word [2]. What we don’t know, however, is whether young babies at the very beginning of language acquisition can fast map words to pictures after encountering them just a handful of times.

In this study, we explore this by showing babies pictures on a computer screen and recording how long they look for. For example, we might show your baby a picture of a cat and a picture of a dog and say “Look! A dog!”. If he or she looks for longer at the dog than at the cat, we can conclude that they have learned the link between the DOG picture and the word “dog”.
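For curious readers, the logic of this looking-time measure can be sketched in a few lines of Python. This is only an illustration of how such data might be scored – the looking times and the scoring rule below are invented for the example, not our actual data or analysis:

```python
# Hypothetical illustration of scoring preferential-looking data.
# Each trial records looking time (in seconds) to the named picture
# ("target") vs. the other picture ("distractor"). All values invented.

trials = [
    {"target": 3.2, "distractor": 1.8},  # e.g., "Look! A dog!" with dog + cat shown
    {"target": 2.9, "distractor": 2.1},
    {"target": 4.0, "distractor": 1.5},
]

def proportion_target_looking(trial):
    """Fraction of total looking time spent on the named picture."""
    total = trial["target"] + trial["distractor"]
    return trial["target"] / total

scores = [proportion_target_looking(t) for t in trials]
mean_score = sum(scores) / len(scores)

# A mean reliably above 0.5 (chance) would suggest babies have
# linked the word to the picture.
print(f"Mean proportion of looking to target: {mean_score:.2f}")
```

In practice the analysis is more involved (many babies, many trials, statistical tests), but the core idea is the same: more looking to the named picture than chance would predict suggests the word-picture link has been learned.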

Importantly, if the babies in this study do show that they’ve learned to link the word to the picture, this will be the first study to show such young children are capable of doing so. Perhaps the wonders of word learning begin younger than we expected! We’ll be in touch once we have collected all our data, but please drop Katie a line on k.twomey[at]lancaster.ac.uk if you have any questions.

Details of related studies can be found by entering the following titles into Google Scholar (www.scholar.google.com).

[1] Houston-Price, C., Plunkett, K., & Harris, P. (2005). “Word-learning wizardry” at 1;6. Journal of Child Language, 32(1), 175–189.

[2] Mather, E., & Plunkett, K. (2009). Learning Words Over Time: The Role of Stimulus Repetition in Mutual Exclusivity. Infancy, 14(1), 60–76.

Studies with 12-month-old babies


Investigating the Goldilocks effect (Summer/Autumn 2015; Researchers: Ben Malem & Katie Twomey)

UPDATE: This work was presented at the 2016 International Conference on Infant Studies, New Orleans, USA (May 2016). View the talk here.

This project was funded by an ESRC International Centre for Language and Communicative Development (LuCiD) Summer Internship awarded to Ben Malem and Katie Twomey.

Infants are naturally curious. They love to explore the world around them either in order to help them achieve goals or just to see whether a toy tastes as good as it looks. This inner drive to explore for exploring’s sake is hard to explain as there is seemingly little motivation for infants to do this – why would they want to spend time learning useless information that they are unable to use to accomplish a goal? [1] Research has suggested that the brain actually gives infants internal rewards for the acquisition of knowledge, which, in turn, drives children to seek out all sorts of new information, a function which is thought to have been a product of evolution – the more we know, the better we are at surviving!

It’s unclear, though, exactly how infants go about viewing things in their environment and why they might choose to play with one toy over another. One thought is that the perceived complexity of an object determines whether or not an infant will want to look at it, as a way for infants to be economical with their attention [2,3]. They don’t want to waste their precious time on things that are too complicated for them, but they also don’t want to be bored by things that are too easy, and so tend to seek out those that are just right, as Goldilocks did. Known as the “Goldilocks” effect, this is thought to emerge from the way in which infants efficiently explore their immediate environment – and has even been shown to be the best way for computers and robots to learn! [4]

In order to explore the Goldilocks effect in early learning, we created a study to examine how 12-month-old infants choose to explore their visual scene. We were interested in how what an infant looks at now affects what they look at next – do they look at the next most difficult or next simplest thing? We decided to explore this idea in a looking time experiment in which we showed infants pictures on a computer screen. We designed stimuli incorporating systematic differences so that we could measure the difference in complexity between subsequent pictures. To create our stimuli we designed two novel shapes that differed in shape and colour. We morphed these two shapes together and took screenshots at different stages of this morphing. This resulted in a continuum of five shapes A-E that differed systematically in shape and colour. For example, the difference between shapes A and B was small: A was short, wide and blue, while B was slightly taller, thinner and indigo. In contrast, the difference between shapes A and E was pronounced, with E much narrower, shorter and bright red. We used the difference between the shapes as a measure of the complexity of infants’ learning environment.

Earlier work suggests that the relative complexity of a stimulus depends on what children already know; for example, a picture of a cat might be complex to an infant who has no cats at home, but simple to a child with pets [5]. To track the effect of learning about one stimulus on infants’ interaction with subsequent stimuli, we presented our shapes in different combinations; the complexity of each shape on a given presentation would therefore be entirely dependent on which shape the infant saw in the previous presentation. We then recorded where and for how long infants looked, and calculated the complexity of the stimulus sequences infants generated.
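For readers who like the nuts and bolts, the idea of quantifying the complexity of a viewing sequence can be sketched very simply. In this illustrative Python snippet the five shapes A-E sit at equal steps along one continuum, and the “complexity” of each look is just the size of the jump from the previously viewed shape – the positions and sequences are invented for the example, not our actual stimulus values:

```python
# Simplified sketch: quantifying the complexity of an infant-generated
# viewing sequence. Shapes A-E sit at equal steps on a single continuum;
# each look's complexity is the distance from the previous shape.
# (Positions and sequences are invented for illustration.)

positions = {"A": 0, "B": 1, "C": 2, "D": 3, "E": 4}

def sequence_complexity(sequence):
    """Mean step size between consecutive shapes in a viewing sequence."""
    steps = [abs(positions[b] - positions[a])
             for a, b in zip(sequence, sequence[1:])]
    return sum(steps) / len(steps)

# Three hypothetical viewing sequences:
low = sequence_complexity(["A", "B", "A", "B"])     # tiny steps: "too simple"
medium = sequence_complexity(["A", "C", "A", "C"])  # intermediate steps
high = sequence_complexity(["A", "E", "A", "E"])    # big jumps: "too complex"

print(low, medium, high)
```

A Goldilocks effect would show up as infants generating sequences clustered in the intermediate range, avoiding both the smallest and the largest jumps.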

Based on our existing work we expect to find a Goldilocks effect [6] by which infants generate sequences of intermediate complexity. More generally, though, this research is fascinating as it opens up new avenues for further studies and has huge potential for practical applications. Most importantly, if we possess a greater understanding of how infants best acquire knowledge then we can adapt learning environments to be appropriate for certain age groups in order to maximise learning.

If your child has taken part in this study, we’ll be in touch once data collection has concluded and we’ve analysed the results! If you have any questions in the meantime, please contact Ben at b.malem@lancaster.ac.uk or Katie at k.twomey@lancaster.ac.uk.

Details of related studies can be found by entering the following titles into Google Scholar (www.scholar.google.com).

[1] Gottlieb, J., Oudeyer, P., Lopes, M., & Baranes, A. (2013). Information-seeking, curiosity, and attention: computational and neural mechanisms. Trends in Cognitive Sciences, 17(11), 585-593.

[2] Kidd, C., Piantadosi, S. T., & Aslin, R. N. (2012). The goldilocks effect: Human infants allocate attention to visual sequences that are neither too simple nor too complex. PLoS ONE, 7(5), 1-8.

[3] Kidd, C., Piantadosi, S. T., & Aslin, R. N. (2014). The Goldilocks effect in infant auditory attention. Child Development, 85(5), 1795–1804.

[4] Oudeyer, P.-Y., & Kaplan, F. (2007). What is intrinsic motivation? a typology of computational approaches. Frontiers in Neurorobotics, 1. Retrieved from http://www.ncbi.nlm.nih.gov/pmc/articles/pmc2533589/

[5] Kovack-Lesh, K. A., McMurray, B., & Oakes, L. M. (2014). Four-month-old infants’ visual investigation of cats and dogs: Relations with pet experience and attentional strategy. Developmental Psychology, 50(2), 402–413. http://doi.org/10.1037/a0033195

[6] Twomey, K. E., & Westermann, G. A neural network model of curiosity-driven infant categorization. Manuscript in preparation.

Studies with 14-month-old babies


How do labels affect babies’ interactions with objects? Study 2 (Autumn 2016; Researchers: Katie Twomey, Elle Hart, Charlotte Rothwell; 10-month-old and 14-month-old babies)

In the previous study (see below) we demonstrated that teaching babies a word for an object affects how they later examine that object, even when they see it in silence. This exciting finding suggests that babies’ language knowledge is very closely linked with what they know about the world. However, babies in the first study only learned names for single objects. In reality, babies are much more likely to learn names for object categories. For example, they might see their ginger pet cat at home, then a tabby cat at Grandma’s, and read about a black-and-white cat in their favourite book. Although these are different objects, they’re still all called “cat”.

In this study, then, we’re interested in how learning a word for a category of objects – rather than a single object – affects how babies later examine members of that category. We’re asking parents to read their child a very simple storybook containing photos of simple toys and their names, twice a day for a week. Then, we invite parents to the Babylab to take part in a simple looking time task in which we will show their baby pictures of the objects they’ve seen in the storybook, and record how long they look at the pictures (for more details, see below). As in the first study, the results will tell us about the relationship between what infants know about object categories, and what they know about words.

Studies with 24-month-old babies


How can we help word learning? (Autumn 2016 onwards; Researcher: Katie Twomey; 24-36-month-olds)

Previous work from the Developmental Cognitive Science lab has shown that, perhaps surprisingly, a little bit of distraction does toddlers good, at least when it comes to learning new words! In a computerised word learning task we showed that toddlers who saw objects on different coloured backgrounds learned the names of those objects better than toddlers who saw objects on the same background repeatedly. Now we’re interested in whether background noise in general helps, or whether the boost to word learning is specifically linked to the background colour.

In our new word learning study we’re teaching two-year-old children new words by presenting them with pictures of objects on a computer screen and recording how long they look, and what they look at. The previous study made the learning task noisy by changing the colour of the background objects appeared on; this time we’re going to make the task noisy by varying the locations that objects appear in. So, for half the children objects will appear in a straight line, and for the other half, objects will appear in lots of different locations. If noise in general helps toddlers to focus on what they need to learn, then we expect the children who see lots of different locations to learn words best.

This study is part of the ESRC International Centre for Language and Communicative Development (LuCiD) and will run from Autumn 2016 onwards.

Word learning in toddlers (Summer 2015; Researcher: Matt Hilton; 24-36 month olds)

Interactions between a child and their caregiver are very important for the child’s language development. But children cannot rely on their caregiver to teach them the meaning of every word that they need to learn – this would take far too long! Instead, children must work hard to decipher the meaning of the words that they hear. This is particularly tricky because the things that children need to learn words for come in all shapes and sizes. For example, Labradors and Chihuahuas look very different, but the child soon comes to learn that they are both called a “dog”. Or maybe a child needs to very quickly work out what a caregiver is talking about: you might find yourself in the supermarket and ask your child if they would like to ride in the trolley. They may not have heard the word “trolley” before, but they will likely know what you mean from the context.
 
This study seeks to understand how children work out the meaning of words that they hear, and specifically to investigate how children learn these words when they are spoken by their caregiver. To investigate this, children will play a game with one of their caregivers. In the game, each caregiver shows their child some toys, and then requests one of the toys from their child, for example “show Mummy which one is the car!”. The caregiver then presents some unfamiliar toys to the child, and asks for them using unfamiliar nonsense words, for example “can you pass me the blicket?”. We can then determine whether children select the correct familiar objects, and we can also assess whether children will assume that the unfamiliar word refers to one of the unfamiliar toys. This allows us to investigate whether children perform differently on this task when they are interacting with a previously unknown research assistant instead.