{"id":53,"date":"2015-08-03T21:15:28","date_gmt":"2015-08-03T21:15:28","guid":{"rendered":"http:\/\/wp.lancs.ac.uk\/westermann-lab\/?page_id=53"},"modified":"2018-09-20T09:38:40","modified_gmt":"2018-09-20T09:38:40","slug":"lab-blog","status":"publish","type":"page","link":"http:\/\/wp.lancs.ac.uk\/westermann-lab\/lab-blog\/","title":{"rendered":"Lab Blog"},"content":{"rendered":"<p>September 2018<\/p>\n<h3><strong>Being involved in The ManyBabies Project<\/strong> <strong>\u2013 the biggest ever infancy study!\u00a0\u00a0<\/strong>\u00a0<i>by\u00a0<\/i>\u00a0<a href=\"http:\/\/wp.lancs.ac.uk\/westermann-lab\/people\/\">Priya Silverstein<\/a><\/h3>\n<p>When studying babies, often we try to make generalisations about what all babies do. However, as a researcher that studies babies, or as a parent or carer bringing your baby in for a study, I\u2019m sure you\u2019ll be aware that babies are highly variable! For example, in a simple eye-tracking study, one baby might look at the screen for most of the time, whereas another may look mostly at the adult whose lap they\u2019re sitting on, and another may look all around the room! So far, we don\u2019t really know what makes babies act differently in our studies, and little research has been done on this.<\/p>\n<p>One of the problems with studies that we usually do is that we\u2019re not able to get massive numbers of babies to take part in each study. Because of this, we can\u2019t often look at all of this interesting variability across infants (as you need pretty big numbers to do this). Another issue is that as we usually do studies only in one lab (e.g. our very own Lancaster Babylab), we don\u2019t get as much variability in terms of background and culture, with all of the babies in one study coming from the same area.<\/p>\n<p>So, following in the footsteps of adult psychologists (e.g. The ManyLabs Project), some infancy researchers got together and decided to create something called the ManyBabies Project. 
This is where babylabs all over the world get infants to take part in the exact same study. This means we\u2019re able to get thousands of infants to take part, and we get loads of interesting variability to look at.<\/p>\n<p>For the first version of the ManyBabies Project, we\u2019re looking at \u2018The Infant-Directed Speech Preference\u2019. This is the finding that babies tend to prefer to listen to that sing-songy voice that we put on when we\u2019re talking to babies, rather than the voice we use when we\u2019re talking to other adults. The first part of this project has already finished, so if your baby was in this version of the study (run by Katie Twomey) then I can give you a snippet of the results!<a href=\"http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2018\/09\/123.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\" wp-image-480 alignright\" src=\"http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2018\/09\/123-300x210.jpg\" alt=\"\" width=\"320\" height=\"224\" srcset=\"http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2018\/09\/123-300x210.jpg 300w, http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2018\/09\/123.jpg 572w\" sizes=\"auto, (max-width: 320px) 100vw, 320px\" \/><\/a><\/p>\n<p>We found that although babies all over the world did prefer to listen to infant-directed speech, there was a lot of variability across the world, with North American babies responding the strongest (no surprise, as the people the babies were listening to had North American accents)!<\/p>\n<p>Another interesting finding was that the strongest preference was in the \u2018Headturn Preference Procedure\u2019. This was the version of the study where your baby heard the speech coming from speakers on either side of them (one side for infant-directed speech and one for adult-directed speech), and the speaking stopped when they turned away. 
We think the preference may have been strongest in this version because turning towards a side is a more active response than simply looking forwards. In looking-time and eye-tracking studies we take looking to mean that babies are interested, but it could also just mean that they\u2019re bored!<\/p>\n<p>As a follow-up, I\u2019m currently running the second part of this study, which is to assess \u2018Test-Retest Reliability\u2019. This is to check whether the same single baby will show the same amount of preference for infant-directed speech in two sessions a week apart. This will tell us whether magnitude of preference is something that varies across babies (with some babies preferring it a lot, and others not at all), or across sessions (depending more on how the baby is feeling on the day). Keep your eyes and ears out for updates!<\/p>\n<p>There are lots of other exciting things being explored in these data, including things that will be useful for the rest of our studies in babylabs across the world (e.g. whether it matters if babies are sitting on someone\u2019s lap or in a highchair, and whether it matters if the baby has just woken up or just eaten).<\/p>\n<p>We\u2019re very proud to be a part of this exciting project, and look forward to being a part of other versions of the ManyBabies Project in the future!<\/p>\n<p>More detail about the ManyBabies Project:\u00a0<a href=\"https:\/\/manybabies.github.io\/\">https:\/\/manybabies.github.io\/<\/a><\/p>\n<div>\n<hr \/>\n<\/div>\n<div class=\"cms-content\" style=\"text-align: justify\">September 2016<\/div>\n<h3 class=\"cms-content\" style=\"text-align: justify\"><strong>Learning words affects how babies\u00a0see objects\u00a0<\/strong><em>by\u00a0<\/em><strong><strong><strong><a href=\"http:\/\/wp.lancs.ac.uk\/westermann-lab\/people\/\" target=\"_blank\" rel=\"noopener\">Dr. 
Katie Twomey<\/a><\/strong><\/strong><\/strong><\/h3>\n<div class=\"cms-content\" style=\"text-align: justify\"><\/div>\n<div class=\"cms-content\" style=\"text-align: justify\"><\/div>\n<div class=\"cms-content\" style=\"text-align: justify\">\n<p>Anyone who\u2019s spent time with a young child knows that they pick up words very quickly! But it\u2019s less well-known that learning a word for something fundamentally changes the way children interact with that thing. In a fascinating recent study, researchers from Lancaster University showed 10-month-old babies a series of pictures of made-up cartoon animals. All the animals belonged to the same category, but importantly, the pictures were created especially for the study. Using these \u201cnovel\u201d stimuli meant that the researchers could be sure that babies\u2019 behaviour was related only to what they\u2019d learned from the pictures that day, and not based on anything they might have learned at home, for example.<\/p>\n<p><a href=\"http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2015\/08\/D7H_6498.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\" wp-image-283 alignright\" src=\"http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2015\/08\/D7H_6498-300x215.jpg\" alt=\"D7H_6498\" width=\"272\" height=\"194\" srcset=\"http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2015\/08\/D7H_6498-300x215.jpg 300w, http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2015\/08\/D7H_6498-768x549.jpg 768w, http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2015\/08\/D7H_6498.jpg 800w\" sizes=\"auto, (max-width: 272px) 100vw, 272px\" \/><\/a>Half the babies saw a series of eight pictures on a computer screen and heard a nonsense word (e.g., \u201cLook! A <em>blicket<\/em>!\u201d), and the other half saw the same eight pictures, but heard two words which divided the pictures into two categories. As a real-world example, imagine a series of pictures: four cats and four kittens. 
One group of babies would see all eight pictures and hear \u201cLook! A cat!\u201d, while the other would hear \u201cLook! A kitten!\u201d during the kitten pictures, and \u201cLook! A cat!\u201d during the cat pictures. Importantly, all babies saw <em>the same pictures in the same order<\/em>: the only difference between the two groups was the word they heard.<\/p>\n<p>After babies had been shown all eight pictures (known as \u201cfamiliarisation\u201d in the trade!) the researchers showed them new pictures \u2013 some of which were members of the familiarised category, and some of which were not. Astonishingly, the way in which babies looked at these new pictures suggested that simply hearing different words had affected how they had learned about the pictures: babies in the first group formed a single category (e.g., all the pictures are cats), whereas babies in the second group grouped the pictures into two categories (e.g., four pictures are cats, four pictures are kittens). This is the kind of finding that developmental psychologists get very excited about! It suggests that all else being equal, the words babies learn can actually affect how they see the world.<\/p>\n<p><a href=\"http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2015\/08\/baby-reading-book.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\" wp-image-284 alignleft\" src=\"http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2015\/08\/baby-reading-book-241x300.jpg\" alt=\"baby-reading-book\" width=\"238\" height=\"296\" srcset=\"http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2015\/08\/baby-reading-book-241x300.jpg 241w, http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2015\/08\/baby-reading-book.jpg 283w\" sizes=\"auto, (max-width: 238px) 100vw, 238px\" \/><\/a>Our new study extends this idea. We\u2019re interested in what might be going on when word learning interacts with learning about objects. 
We think that if a baby has an equal amount of experience of two items, but knows a word for one of those items, they should interact differently with the named item even when they don\u2019t hear its name. To explore this idea, we\u2019ve been giving parents two toys and asking them to play with both toys with their baby every day for a week, or asking them to read their baby a book containing photos of the objects, again for a week. The clever bit is that only one of the toys will have a name. After a week, we invite parents back to the Babylab to take part in an eye-tracking study, during which we show babies pictures of the two objects on a computer screen and record where and for how long they look (more information on eye-tracking is available <a href=\"http:\/\/www.lancaster.ac.uk\/babylab\/research\/\">here<\/a>). If we\u2019re right, and knowing a name for an item does change the way babies react to it, we should see a difference in the amount of time babies spend looking at the named and unnamed objects.<\/p>\n<p>What\u2019s most intriguing about this study is that the babies taking part are at the very beginning of language acquisition \u2013 at this age they say few, if any, words. So, it\u2019s possible that this study might reveal the astonishing effect of learning words on our perception of the world even before speech starts! If so, this will provide evidence that language influences the way we think from a very early age indeed. We\u2019ll be in touch once we\u2019ve collected all our data, but if you have any questions in the meantime please drop Katie a line on k.twomey[at]lancaster.ac.uk.<\/p>\n<h3><strong>References:<\/strong><\/h3>\n<p>[1] Althaus, N., &amp; Westermann, G. (2016). Labels constructively shape categories in 10-month-old infants. 
<em>Journal of Experimental Child Psychology<\/em>.<\/p>\n<\/div>\n<h3 style=\"text-align: justify\"><strong>Acknowledgements:<\/strong><\/h3>\n<p style=\"text-align: justify\">We are very grateful to all the children and adults who took part \u2013 we couldn\u2019t do it without you! This work was funded by the ESRC International Centre for Language and Communicative Development (LuCiD) [ES\/L008955\/1].<\/p>\n<div class=\"cms-content\" style=\"text-align: justify\">\n<hr \/>\n<\/div>\n<div class=\"cms-content\" style=\"text-align: justify\">May 2016<\/div>\n<div class=\"cms-content\" style=\"text-align: justify\"><\/div>\n<div class=\"cms-content\" style=\"text-align: justify\">\n<h3><strong>Toddler robots help solve the language puzzle\u00a0<\/strong><em>by\u00a0<\/em><strong><strong><a href=\"http:\/\/wp.lancs.ac.uk\/westermann-lab\/people\/\" target=\"_blank\" rel=\"noopener\">Dr. Katie Twomey<\/a><\/strong><\/strong><\/h3>\n<p>The world is an incredibly complicated place when it comes to learning your first words. Imagine a toddler playing with a toy duck, a toy rabbit, and a brand new, orange toy with a very long neck. The toddler\u2019s mum or dad points at the toys and says \u201cLook, giraffe!\u201d. But how does the child know what \u201cgiraffe\u201d means? Is it the colour of the toys? The game being played? Is it another word for \u201crabbit\u201d, perhaps? If you think about it, \u201cgiraffe\u201d could refer to anything the child can see, hear, taste or smell! 
But despite having to solve this puzzle every time they hear a new word, by around a year babies begin to speak.<\/p>\n<p style=\"text-align: justify\"><a href=\"http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2015\/08\/IMAG0115.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-241 alignleft\" src=\"http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2015\/08\/IMAG0115-300x169.jpg\" alt=\"IMAG0115\" width=\"300\" height=\"169\" srcset=\"http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2015\/08\/IMAG0115-300x169.jpg 300w, http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2015\/08\/IMAG0115-768x433.jpg 768w, http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2015\/08\/IMAG0115-1024x577.jpg 1024w, http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2015\/08\/IMAG0115-460x260.jpg 460w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a>Understandably, how children crack the word learning problem has been studied intensively for decades, and we now know that two-year-old children can work out the meaning of a new word based on words they already know. That is, our toddler can work out that \u201cgiraffe\u201d refers to the new toy, because the other two are called \u201cduck\u201d and \u201crabbit\u201d. This clever behaviour has been demonstrated many times, all over the world, with toddlers from 18 months e.g., [1]\u2013[3]. So how do they do it? There are two likely possibilities. The first is that toddlers associate new words (e.g. \u201cgiraffe\u201d) with new things (e.g., the orange toy). We\u2019ll call this a \u201cnew-to-new\u201d strategy. The second is that they use a process of elimination to work out that because the brown toy is called \u201crabbit\u201d, and the yellow toy is called \u201cduck\u201d, then the orange toy must be \u201cgiraffe\u201d. 
This is known as \u201cmutual exclusivity\u201d.<\/p>\n<p>In a recent study with two-year-old children [4], toddlers were better at learning words for a new toy when there were only two other familiar toys present. When there were more than two familiar toys, toddlers couldn\u2019t learn the words. If children use a new-to-new strategy, then the number of toys shouldn\u2019t affect how easy it is to learn words, because they only need to look at the new one to link it with the word. The authors of this study concluded that children must be using mutual exclusivity: looking at four familiar toys to rule them out before linking the new word to the new toy is harder than looking at just two familiar toys.<\/p>\n<p>So, it looks like toddlers use mutual exclusivity to work out what words mean. So far, so astonishing! But where do the robots come in? With any study in developmental psychology, we can observe what children do but only guess what they\u2019re thinking. With robots, we can observe what they do <em>and<\/em> observe what goes on inside their \u201cbrains\u201d \u2013 after all, robots are just computer programs with bodies. And if robots do what children do, the way the robot is programmed may have something in common with how children\u2019s thinking works.<\/p>\n<p>LuCiD researcher Dr. Katie Twomey, with Dr. Jessica Horst from the University of Sussex, and Dr. Anthony Morse and Prof. Angelo Cangelosi from the University of Plymouth, wanted to find out what was going on as the children in the study learned words. They used iCub [5], a humanoid robot designed to have similar physical proportions to a three-year-old child. They programmed it using simple software [6] that lets the robot link the words it hears (through its microphones) to objects it sees (with its cameras). 
So, like children in the study, iCub could use either a new-to-new strategy or mutual exclusivity to link words to objects.<\/p>\n<p>First the researchers trained iCub to recognise toys and point at them when it heard their names. You can watch an example of this here: <a href=\"https:\/\/www.youtube.com\/watch?v=6aHrYkQhWcY\">https:\/\/www.youtube.com\/watch?v=6aHrYkQhWcY<\/a>. Then, iCub did the same task as children: the researchers showed it several familiar toys and one brand new toy. When iCub heard a word (e.g., \u201cbus\u201d) the robot pointed, indicating which toy it associated the word with. With just its simple software, iCub could solve the word learning puzzle: it pointed at new toys when it heard new words. Amazingly, when the researchers tested iCub\u2019s word learning they found that iCub did exactly what children did: it only learned words in the context of two familiar objects \u2013 not three or four. So, like toddlers, iCub learned words using mutual exclusivity.<\/p>\n<p><a href=\"http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2015\/08\/IMAG0135.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-242 alignright\" src=\"http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2015\/08\/IMAG0135-169x300.jpg\" alt=\"IMAG0135\" width=\"169\" height=\"300\" srcset=\"http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2015\/08\/IMAG0135-169x300.jpg 169w, http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2015\/08\/IMAG0135-768x1362.jpg 768w, http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2015\/08\/IMAG0135-577x1024.jpg 577w, http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2015\/08\/IMAG0135.jpg 1840w\" sizes=\"auto, (max-width: 169px) 100vw, 169px\" \/><\/a>But why use a cutting-edge piece of technology when children do this all the time? Remember, we can see what iCub is thinking! 
What this new study shows is that mutual exclusivity behaviour can be achieved with a very simple \u201cbrain\u201d that just learns associations between words and objects \u2013 there\u2019s no need for iCub to reason in-depth about what it sees, and what it already knows. In fact, intelligent as iCub seems, it actually <em>can\u2019t <\/em>think \u201cI know that the brown toy is a rabbit, and I know that the yellow toy is a duck, so this new toy must be <em>giraffe<\/em>\u201d, because its software is too simple. This suggests that at least some aspects of early learning are based on an astonishingly powerful \u2013 but elegantly simple \u2013 association-making ability that allows babies and toddlers to rapidly absorb information from the very complicated learning environment.<\/p>\n<p>More adventures with iCub are underway \u2013 watch this space! And remember, next time you say your child has a brain like a sponge, you might be closer than you think!<\/p>\n<p>The full study is reported in the following article, available open-access from early July 2016:<\/p>\n<p>K. E. Twomey, A. F. Morse, A. Cangelosi, and J. S. Horst, \u201cChildren\u2019s referent selection and word learning: insights from a developmental robotic system,\u201d <em>Interaction Studies<\/em>, vol. 17, no. 1, 2016.<\/p>\n<h3><strong>References:<\/strong><\/h3>\n<p>[1] E. M. Markman, \u201cConstraints on Word Meaning in Early Language-Acquisition,\u201d <em>Lingua<\/em>, vol. 92, no. 1\u20134, pp. 199\u2013227, 1994.<\/p>\n<p>[2] J. Halberda, \u201cThe development of a word-learning strategy,\u201d <em>Cognition<\/em>, vol. 87, no. 1, pp. B23\u2013B34, 2003.<\/p>\n<p>[3] C. Houston-Price, K. Plunkett, and P. Harris, \u201c\u2018Word-learning wizardry\u2019 at 1;6,\u201d <em>J. Child Lang.<\/em>, vol. 32, no. 1, pp. 
175\u2013189, 2005.<\/p>\n<p>[4] J. S. Horst, E. J. Scott, and J. P. Pollard, \u201cThe Role of Competition in Word Learning Via Referent Selection,\u201d <em>Dev. Sci.<\/em>, vol. 13, no. 5, pp. 706\u2013713, 2010.<\/p>\n<p>[5] G. Metta, L. Natale, F. Nori, G. Sandini, D. Vernon, L. Fadiga, C. von Hofsten, K. Rosander, M. Lopes, J. Santos-Victor, A. Bernardino, and L. Montesano, \u201cThe iCub humanoid robot: An open-systems platform for research in cognitive development,\u201d <em>Neural Netw.<\/em>, vol. 23, no. 8\u20139, pp. 1125\u20131134, 2010.<\/p>\n<p>[6] A. F. Morse, J. de Greeff, T. Belpaeme, and A. Cangelosi, \u201cEpigenetic Robotics Architecture (ERA),\u201d <em>IEEE Trans. Auton. Ment. Dev.<\/em>, vol. 2, no. 4, pp. 325\u2013339, 2010.<\/p>\n<\/div>\n<div class=\"cms-content\" style=\"text-align: justify\">This blog was originally written for\u00a0the ESRC International Centre for Language and Communicative Development. See the original post <a href=\"http:\/\/www.lucid.ac.uk\/news-and-events\/blogs\/toddler-robots-help-solve-the-language-puzzle\/\">here<\/a>.<\/div>\n<div class=\"cms-content\" style=\"text-align: justify\"><\/div>\n<div class=\"cms-content\" style=\"text-align: justify\">\n<hr \/>\n<\/div>\n<div class=\"cms-content\" style=\"text-align: justify\">March 2016<\/div>\n<div class=\"cms-content\" style=\"text-align: justify\"><\/div>\n<h3 class=\"cms-content\" style=\"text-align: justify\"><strong>More than meets the eye: how children can learn verbs from what they hear, not what they see<\/strong><strong>\u00a0<\/strong><em>by\u00a0<\/em><strong><strong><a href=\"http:\/\/wp.lancs.ac.uk\/westermann-lab\/people\/\" target=\"_blank\" rel=\"noopener\">Dr. 
Katie Twomey<\/a><\/strong><\/strong><\/h3>\n<p class=\"cms-content\" style=\"text-align: justify\">How children learn verbs is one of the trickiest conundrums facing researchers in language acquisition. Nouns are easy: it\u2019s not surprising that the first object names babies learn are for the objects they see and interact with on a day-to-day basis, like <em>shoe<\/em>, <em>bottle<\/em>, and <em>blanket<\/em>. Verbs are more of a challenge, though, because verbs refer to actions, which may only be visible for a short space of time (e.g., <em>throw<\/em>) \u2013 if at all (e.g., <em>like<\/em>). To make things more confusing, as well as learning the verbs themselves, children have to learn how to use them properly. For example, we know that five-year-olds tend to make mistakes like *<em>mummy filled milk into the bottle <\/em>or *<em>daddy poured the bath with water<\/em>. The question is, how do children learn to stop making these mistakes?<\/p>\n<p class=\"cms-content\" style=\"text-align: justify\">Over the years various explanations have been offered. Perhaps the most influential theory, which we\u2019ll call the <em>what they see<\/em> approach, assumes that children decide how to use verbs based on the visual scene. Specifically, they mention the most-changed thing first (Pinker, 1989). So, they would say \u201cmummy filled the bottle with milk\u201d when the bottle becomes completely full but the movement of the milk isn\u2019t noticeable, and \u201cmummy poured milk into the bottle\u201d when the movement of the milk is more visible, and the bottle doesn\u2019t become completely full.<\/p>\n<p class=\"cms-content\" style=\"text-align: justify\">However, what if the bottle isn\u2019t transparent, or they can\u2019t see the milk? More generally, it\u2019s likely that the visual information children need to decide how to use these verbs isn\u2019t consistently available, especially for rare verbs like \u201cinfuse\u201d. 
Luckily there\u2019s an alternative way of learning this \u2013 the <em>what they hear<\/em> approach (Twomey, Chang &amp; Ambridge, 2014). This explanation only requires children to listen to how verbs are used, and then copy it. These verbs tend to be followed by certain types of noun. For example, <em>fill<\/em> is usually followed by container-type words like <em>bucket<\/em>, <em>cup <\/em>and <em>box<\/em>, and <em>pour<\/em> is followed by liquid-type words like <em>water<\/em>, <em>paint <\/em>and <em>juice<\/em>. To avoid making a mistake, children only have to learn that <em>fill<\/em> is followed by a container-type word, and <em>pour<\/em> by a liquid-type word \u2013 intriguingly, they don\u2019t need to look at anything at all.<\/p>\n<p class=\"cms-content\" style=\"text-align: justify\"><a href=\"http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2015\/08\/Untitled.png\"><img loading=\"lazy\" decoding=\"async\" class=\"size-medium wp-image-243 alignright\" src=\"http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2015\/08\/Untitled-300x125.png\" alt=\"Untitled\" width=\"300\" height=\"125\" srcset=\"http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2015\/08\/Untitled-300x125.png 300w, http:\/\/wp.lancs.ac.uk\/westermann-lab\/files\/2015\/08\/Untitled.png 734w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a>Researchers from the ESRC International Centre for Language and Communicative Development tested these two theories by showing 5-year-olds, 9-year-olds and adults animations of a robot on a spaceship performing made-up actions with two objects, for example filling a cone with blobs of oil using a shooting motion. In each action, one object was changed more than the other (e.g., the cone became completely full), and the experimenter also described the action using a made-up verb (e.g., \u201cpilked\u201d) and either a container-type or a liquid-type noun (e.g., <em>the robot pilked the oil<\/em>). 
After seeing several of these \u201clearning\u201d animations, participants saw a new \u201crecap\u201d animation with the same action but new, equally-changed objects, and were asked to describe it. If the <em>what they see<\/em> theory is correct, then participants should describe the scene by mentioning the most-changed object in the training scenes first, for example <em>the robot pilked the cone with oil<\/em>. If the <em>what they hear<\/em> theory is correct, though, participants should describe the scene by mentioning the container-type or liquid-type object which came first in training, for example <em>the robot pilked the oil into the cone.<\/em><\/p>\n<p class=\"cms-content\" style=\"text-align: justify\">Perhaps not surprisingly, 5-year-olds didn\u2019t seem to be using either strategy. This might be because they needed more practice with the learning animations \u2013 after all, at this age they still make mistakes with this kind of verb. However, 9-year-olds and adults followed the <em>what they hear <\/em>strategy, placing the object that they\u2019d heard first on the learning trials earlier in their descriptions of the scene. This suggests that older children can learn how to use these more difficult verbs by listening to how they are used by adults.<\/p>\n<p class=\"cms-content\" style=\"text-align: justify\">Excitingly, this is one of the first times children have been shown to be able to learn how verbs work without paying detailed attention to the visual scene. 
While many studies have shown that the quality and quantity of language babies hear matter for their very first words, this work shows it is just as important to give older children plenty of opportunities to hear and use language.<\/p>\n<h3 class=\"cms-content\" style=\"text-align: justify\"><strong>Further reading:<\/strong><\/h3>\n<p class=\"cms-content\" style=\"text-align: justify\">The original paper is freely available at <a href=\"http:\/\/www.sciencedirect.com\/science\/article\/pii\/S0010027716301160\">http:\/\/www.sciencedirect.com\/science\/article\/pii\/S0010027716301160<\/a>.<\/p>\n<p class=\"cms-content\" style=\"text-align: justify\">For more information about early word learning see our Nursery World articles here: <a href=\"http:\/\/www.lucid.ac.uk\/resources\/for-practitioners\/nursery-world-magazine\">http:\/\/www.lucid.ac.uk\/resources\/for-practitioners\/nursery-world-magazine<\/a>.<\/p>\n<h3 class=\"cms-content\" style=\"text-align: justify\"><strong>References:<\/strong><\/h3>\n<p class=\"cms-content\" style=\"text-align: justify\">Pinker, S. (1989). <em>Learnability and cognition: The acquisition of argument structure.<\/em> Cambridge, Mass.: Harvard University Press.<\/p>\n<p class=\"cms-content\" style=\"text-align: justify\">Twomey, K. E., Chang, F., &amp; Ambridge, B. (2016). Lexical distributional cues, but not situational cues, are readily used to learn abstract locative verb-structure associations. <em>Cognition<\/em>, <em>153<\/em>, 124\u2013139.<\/p>\n<p class=\"cms-content\" style=\"text-align: justify\">Twomey, K. E., Chang, F., &amp; Ambridge, B. (2014). Do as I say, not as I do: A lexical distributional account of English locative verb class acquisition. <em>Cognitive Psychology<\/em>, <em>73<\/em>, 41\u201371.<\/p>\n<h3 class=\"cms-content\" style=\"text-align: justify\"><strong>Acknowledgements:<\/strong><\/h3>\n<p class=\"cms-content\" style=\"text-align: justify\">We are very grateful to all the children and adults who took part \u2013 we couldn\u2019t do it without you! 
This work was funded by Leverhulme Research Project Grant RPG-158 and ESRC Centre Grant ES\/L008955\/1.<\/p>\n<p class=\"cms-content\" style=\"text-align: justify\">This blog was originally written for\u00a0the ESRC International Centre for Language and Communicative Development. See the original post <a href=\"http:\/\/www.lucid.ac.uk\/news-and-events\/blogs\/more-than-meets-the-eye-how-children-can-learn-verbs-from-what-they-hear-not-what-they-see\/\">here<\/a>.<\/p>\n<hr \/>\n<p class=\"cms-content\" style=\"text-align: justify\">May 2015<\/p>\n<h3 class=\"cms-content\" style=\"text-align: justify\"><strong>Understanding how children&#8217;s curiosity drives their learning\u00a0<\/strong><em>by\u00a0<\/em><strong><a href=\"http:\/\/wp.lancs.ac.uk\/westermann-lab\/people\/\" target=\"_blank\" rel=\"noopener\">Dr. Katie Twomey<\/a><\/strong><\/h3>\n<p class=\"cms-content\" style=\"text-align: justify\">For decades, developmental psychologists have used ingeniously-designed studies to investigate how children learn about the things they see.\u00a0 Typically, we show children a category of items and let them become familiar with them.\u00a0 Then, we introduce them to new things. 
Because we know that toddlers prefer to look at new things rather than old things, their reaction to the new items can tell us a lot about how they learn.\u00a0 For example, we could show a toddler several different pictures of cats, and then show them a picture of a new cat next to a picture of a dog.\u00a0 If the toddler is more interested in the dog than the cat, we can tell they recognise the dog as new.\u00a0 In other words, they\u2019ve learned a CAT category which excludes the dog picture.<\/p>\n<p class=\"cms-content\" style=\"text-align: justify\">These studies allow us to change aspects of the categories toddlers see, and based on these techniques it\u2019s now well-established that small changes to these categories can have a big effect on what toddlers learn, and even how they learn their first words.\u00a0 For example, we know that experience with lots of different examples of a category makes it <a href=\"http:\/\/onlinelibrary.wiley.com\/doi\/10.1002\/icd.1824\/abstract\">easier to learn the word for that category<\/a>, and can even <a href=\"http:\/\/www.ncbi.nlm.nih.gov\/pubmed\/21106892\">speed up vocabulary growth<\/a>!<\/p>\n<p class=\"cms-content\" style=\"text-align: justify\">Despite knowing so much about how children behave in these studies, until recently there\u2019s been a gap in our knowledge: our understanding of learning is limited to what toddlers and babies do in the lab.\u00a0 Toddlers outside the lab rarely come across a whole category of items at once, and they\u2019re even less likely to interact with each item for the same amount of time.\u00a0 Instead, as any parent knows, toddlers are curious explorers who play with what they want, when they want!\u00a0 Only very recently have researchers started to examine how toddlers and babies learn when learning is curiosity-driven.\u00a0 For example, <a href=\"http:\/\/onlinelibrary.wiley.com\/doi\/10.1111\/j.1467-7687.2009.00947.x\/full\">recent work<\/a> has shown that when parents and 
babies play with toys together, what babies see is surprisingly different to what their parents see. \u00a0Objects appear substantially bigger to babies than they do to adults, and it\u2019s thought that this predominance of objects in babies\u2019 visual field may be important for early category and word learning.<\/p>\n<p class=\"cms-content\" style=\"text-align: justify\">So, we know that toddlers love exploring, and that the way in which they explore affects their learning.\u00a0 But what drives them to explore?\u00a0 Why keep trying to walk \u2013 and fall over again, and again, and again \u2013 when you\u2019re already an expert crawler?\u00a0 Where does this curiosity come from?\u00a0 Colleagues in robotics have explored this question with robots which learn via an internal reward system called \u201c<a href=\"http:\/\/journal.frontiersin.org\/researchtopic\/intrinsic-motivations-and-open-ended-development-in-animals-humans-and-robots-1326\">intrinsic motivation<\/a>\u201d, but so far there\u2019s been little work investigating what happens in children (although <a href=\"http:\/\/www.pyoudeyer.com\/OudeyerSmithTopicsCogSci14.pdf\">this forthcoming work<\/a> is a great example using puppy robots!).<\/p>\n<p class=\"cms-content\" style=\"text-align: justify\">We decided to tackle this problem using a computer model of how children behave in a recent\u00a0<a href=\"http:\/\/www.sciencedirect.com\/science\/article\/pii\/S0010027711000680\">category learning study<\/a>.\u00a0 Instead of deciding what the model saw, we let it learn via curiosity.\u00a0 The model chose the order in which it saw images by comparing what it had already learned with how easily it could learn new items.\u00a0 We showed that the model learned best from images which are complex \u2013 but not too complex!\u00a0 This \u201cGoldilocks\u201d effect, by which learning is best supported by just the right amount of information, has been seen in <a 
href=\"http:\/\/dx.plos.org\/10.1371\/journal.pone.0036399\">related studies with babies<\/a>.<\/p>\n<p class=\"cms-content\" style=\"text-align: justify\">What does this computational work tell us?\u00a0 It shows that given the chance to choose what it learned from and when, the model created an environment for itself that was just right for learning.\u00a0 It also suggests that toddlers\u2019 curiosity is based on what they already know and how quickly they can learn, as well as what they\u2019re seeing at the time.\u00a0 Of course, this is a computer simulation.\u00a0 Our next, important task is to explore some of these ideas with babies. Excitingly, because babies learn their first words for the categories that they see day-to-day, like CAT or SHOE<em>, <\/em>understanding how curiosity-driven learning works will help us understand the critical first steps in one of the most valuable skills a human being can learn \u2013 language.<\/p>\n<p class=\"cms-content\" style=\"text-align: justify\">Research into curiosity-driven learning is important because it helps us understand how children learn in the real world, rather than in a cleverly-designed study.\u00a0 After all, how many parents have time to meticulously plan everything their child sees that day?\u00a0 The important message that seems to be emerging from these early studies is that because they can construct their own \u201cbest\u201d learning environment, it\u2019s good to let toddlers explore!<\/p>\n<p class=\"cms-content\" style=\"text-align: justify\"><em>This work was\u00a0presented at the\u00a0<a href=\"http:\/\/www.icdl-epirob.org\/\" target=\"_blank\" rel=\"noopener\">Fifth Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics<\/a>, Brown University, USA, August 2015. 
This blog was originally published via the<\/em> <a href=\"http:\/\/www.lucid.ac.uk\/news-and-events\/blogs\/understanding-how-children-s-curiosity-drives-their-learning\/\" target=\"_blank\" rel=\"noopener\">ESRC\u00a0International Centre for Language and Communicative Development<\/a><\/p>\n<p class=\"cms-content\" style=\"text-align: justify\"><strong>References<\/strong><\/p>\n<p class=\"cms-content\" style=\"text-align: justify\">Kidd, C., Piantadosi, S. T., &amp; Aslin, R. N. (2012). <a href=\"http:\/\/journals.plos.org\/plosone\/article?id=10.1371\/journal.pone.0036399\" target=\"_blank\" rel=\"noopener\">The Goldilocks effect: Human infants allocate attention to visual sequences that are neither too simple nor too complex<\/a>. <em>PLoS ONE<\/em>, <em>7<\/em>(5), e36399.<\/p>\n<p class=\"cms-content\" style=\"text-align: justify\">Mather, E., &amp; Plunkett, K. (2011). <a href=\"http:\/\/www.ncbi.nlm.nih.gov\/pubmed\/21382616\" target=\"_blank\" rel=\"noopener\">Same items, different order: Effects of temporal variability on infant categorization.<\/a> <em>Cognition<\/em>, <em>119<\/em>(3), 438\u2013447.<\/p>\n<p class=\"cms-content\" style=\"text-align: justify\">Oudeyer, P.-Y., &amp; Smith, L. (in press). <a href=\"http:\/\/www.pyoudeyer.com\/OudeyerSmithTopicsCogSci14.pdf\" target=\"_blank\" rel=\"noopener\">How evolution may work through curiosity-driven developmental process.<\/a> <em>Topics in Cognitive Science<\/em>.<\/p>\n<p class=\"cms-content\" style=\"text-align: justify\">Perry, L. K., Samuelson, L. K., Malloy, L. M., &amp; Schiffer, R. N. (2010). <a href=\"http:\/\/pss.sagepub.com\/content\/21\/12\/1894\" target=\"_blank\" rel=\"noopener\">Learn locally, think globally: Exemplar variability supports higher-order generalization and word learning<\/a>. <em>Psychological Science<\/em>, <em>21<\/em>(12), 1894\u20131902.<\/p>\n<p class=\"cms-content\" style=\"text-align: justify\">Smith, L. B., Yu, C., &amp; Pereira, A. F. (2011). 
<a href=\"http:\/\/www.ncbi.nlm.nih.gov\/pubmed\/21159083\" target=\"_blank\" rel=\"noopener\">Not your mother\u2019s view: The dynamics of toddler visual experience.<\/a> <em>Developmental Science<\/em>, <em>14<\/em>(1), 9\u201317.<\/p>\n<p class=\"cms-content\" style=\"text-align: justify\">Twomey, K. E., Ranson, S. L., &amp; Horst, J. S. (2014). <a href=\"http:\/\/onlinelibrary.wiley.com\/doi\/10.1002\/icd.1824\/abstract\" target=\"_blank\" rel=\"noopener\">That\u2019s more like it: Multiple exemplars facilitate word learning.<\/a> <em>Infant and Child Development<\/em>, <em>23<\/em>(2), 105\u2013122.<\/p>\n<div id=\"blog\" class=\"blog\">\n<article class=\"post\">\n<div class=\"cms-content\">\n<div class=\"cms-content\">\n<hr \/>\n<p style=\"text-align: justify\">March 2015<\/p>\n<p style=\"text-align: justify\"><strong>Watching TV can actually be good for toddlers <\/strong><em>by\u00a0<\/em><strong><a href=\"http:\/\/wp.lancs.ac.uk\/westermann-lab\/people\/\" target=\"_blank\" rel=\"noopener\">Dr. Gemma Taylor<\/a><\/strong><\/p>\n<p class=\"cms-content\" style=\"text-align: justify\">Scaremongering about the negative effects of children\u2019s TV-watching is not new. But in our busy lives it\u2019s more and more tempting to let your child watch television for half an hour or so while you tidy up, wash up, make phone calls, pay your bills or simply take a moment to sit down.<\/p>\n<article class=\"post\">\n<div class=\"cms-content\">\n<p style=\"text-align: justify\">Despite the prevalence of television programmes targeting young children, the <a href=\"http:\/\/pediatrics.aappublications.org\/content\/early\/2013\/10\/24\/peds.2013-2656.abstract\">American Academy of Paediatrics discourages<\/a> television exposure for children under the age of two years and recommends that exposure is restricted to less than one to two hours thereafter. 
But <a href=\"http:\/\/www.tandfonline.com\/doi\/abs\/10.1080\/15213269.2014.932288#.VPmGnGSsXiQ\">new research<\/a> has shown that after watching an educational children\u2019s television programme, toddlers can learn to count to five and learn to read a simple map presented on the show.<\/p>\n<p style=\"text-align: justify\"><strong>Bad reputation<\/strong><\/p>\n<p style=\"text-align: justify\">In its <a href=\"http:\/\/pediatrics.aappublications.org\/content\/early\/2011\/10\/12\/peds.2011-1753\">2011 policy statement<\/a>, the American Academy of Paediatrics reported that television viewing was associated with an overall reduction in both parent-child interactions and children\u2019s creative play, irrespective of whether the television was on in the background or the foreground. Television itself does not offer an ideal learning situation for children. We know that <a href=\"http:\/\/onlinelibrary.wiley.com\/doi\/10.1002\/dev.21068\/abstract\">children up to three years of age exhibit a video deficit<\/a> \u2013 meaning they learn less from television than they do from a live interaction. So it\u2019s clear that children\u2019s television exposure should be moderated.<\/p>\n<p style=\"text-align: justify\">When presented with TV programmes, children are faced with a transfer task, meaning that they must transfer what they learn from a 2D television screen to the 3D world. The poorer quality of visual and social information presented on TV can lead to a less detailed representation of the information in children\u2019s memory and subsequent difficulties transferring learnt information to the real world.<\/p>\n<p style=\"text-align: justify\">Visual cues such as size and depth are reduced on a 2D television screen compared to the 3D world. Likewise, in contrast to a real-world social situation, an actor or character on television cannot respond to what the child is looking at, saying or doing at a given moment in time. 
Characterising the video deficit as a <a href=\"http:\/\/onlinelibrary.wiley.com\/doi\/10.1111\/cdep.12041\/abstract\">transfer problem<\/a> is helpful for understanding how to support children\u2019s learning from television.<\/p>\n<p style=\"text-align: justify\"><strong>Helping children learn from television<\/strong><\/p>\n<p style=\"text-align: justify\">There is still hope for educational children\u2019s TV programmes. Children\u2019s television producers and parents can employ a number of techniques to enhance learning from TV and support children\u2019s knowledge transfer to the real world. One reason that children often ask to watch the same TV programmes over and over again is that they <a href=\"http:\/\/psycnet.apa.org\/psycinfo\/1999-15231-005\">learn better from repeated exposure<\/a> to the same thing.<\/p>\n<p style=\"text-align: justify\">Repetition <a href=\"http:\/\/onlinelibrary.wiley.com\/doi\/10.1002\/dev.20208\/abstract\">within a TV show<\/a>, such as repeating sequences or new words, or <a href=\"http:\/\/psycnet.apa.org\/psycinfo\/1999-15231-005\">repeatedly watching the same show<\/a> across a number of days can enhance children\u2019s learning, memory and transfer of the information to the real world.<\/p>\n<p style=\"text-align: justify\">What\u2019s more, <a href=\"http:\/\/www.tandfonline.com\/doi\/pdf\/10.1080\/15213269.2011.573465\">the more familiar the television character<\/a>, the more likely it is that children will learn from a television programme featuring that character. Repetition helps children to store more detailed representations of the information in their memory. 
While watching the same children\u2019s TV show with your child may be tedious for parents, it is beneficial for children.<\/p>\n<p style=\"text-align: justify\"><strong>Interactivity is key<\/strong><\/p>\n<p style=\"text-align: justify\">Making children\u2019s TV an interactive activity is also beneficial for children\u2019s learning and later knowledge transfer to the real world. Television programmes aimed at children aged 2 years and above such as <a href=\"http:\/\/www.imdb.com\/title\/tt0235917\/\">Dora the Explorer<\/a> and <a href=\"http:\/\/www.imdb.com\/title\/tt0163929\/\">Blue\u2019s Clues<\/a> try to promote a social interaction between the child and the television character by getting characters to look directly at the camera, and using questions and pauses to allow time for children\u2019s responses.<\/p>\n<p style=\"text-align: justify\">Children are more likely to understand the content of a children\u2019s television programme when they <a href=\"http:\/\/www.tandfonline.com\/doi\/abs\/10.1080\/15213260701291379#.VPl3n2SsXiQ\">respond<\/a> to a character\u2019s interactive questions. In programmes like Dora the Explorer, television characters\u2019 feedback responses to children are limited to things like \u201cGood job\u201d and \u201cThat\u2019s right\u201d. Watching television with your child and giving them better feedback on their responses will give you the opportunity to further support their learning from television.<\/p>\n<p style=\"text-align: justify\">So yes, children\u2019s TV does have the potential to be educational for young children. But not all children\u2019s TV programmes are created equal. While some provide a good learning platform for young children, others are better suited to entertainment purposes only. 
Watching TV with your child and making the experience more interactive can enhance the educational value for your child.<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/counter.theconversation.edu.au\/content\/38455\/count.gif\" alt=\"The Conversation\" width=\"1\" height=\"1\" \/><\/p>\n<p style=\"text-align: justify\">This article was originally published on <a href=\"http:\/\/theconversation.com\/\">The Conversation<\/a>\u00a0and via the <a href=\"http:\/\/www.lucid.ac.uk\/news-and-events\/blogs\/can-watching-tv-actually-be-good-for-toddlers\/\">ESRC International Centre for Language and Communicative Development<\/a>.<\/p>\n<\/div>\n<\/article>\n<\/div>\n<\/div>\n<\/article>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>September 2018 Being involved in The ManyBabies Project \u2013 the biggest ever infancy study!\u00a0\u00a0\u00a0by\u00a0\u00a0Priya Silverstein When studying babies, often we try to make generalisations about what all babies do. However, as a researcher that studies babies, or as a parent or carer bringing your baby in for a study, I\u2019m sure you\u2019ll be aware 
that&hellip;<\/p>\n","protected":false},"author":299,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-53","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"http:\/\/wp.lancs.ac.uk\/westermann-lab\/wp-json\/wp\/v2\/pages\/53","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/wp.lancs.ac.uk\/westermann-lab\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"http:\/\/wp.lancs.ac.uk\/westermann-lab\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"http:\/\/wp.lancs.ac.uk\/westermann-lab\/wp-json\/wp\/v2\/users\/299"}],"replies":[{"embeddable":true,"href":"http:\/\/wp.lancs.ac.uk\/westermann-lab\/wp-json\/wp\/v2\/comments?post=53"}],"version-history":[{"count":23,"href":"http:\/\/wp.lancs.ac.uk\/westermann-lab\/wp-json\/wp\/v2\/pages\/53\/revisions"}],"predecessor-version":[{"id":485,"href":"http:\/\/wp.lancs.ac.uk\/westermann-lab\/wp-json\/wp\/v2\/pages\/53\/revisions\/485"}],"wp:attachment":[{"href":"http:\/\/wp.lancs.ac.uk\/westermann-lab\/wp-json\/wp\/v2\/media?parent=53"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}