
Sound-symbolism boosts novel word learning: the MS Paint version

I have a new article out!

Gwilym Lockwood, Mark Dingemanse, and Peter Hagoort. 2016. “Sound-Symbolism Boosts Novel Word Learning.” Journal of Experimental Psychology: Learning, Memory, and Cognition. doi:10.1037/xlm0000235 (download link, go on, it’s only eight pages)

and I’m particularly proud of this one because:

a) it’s a full article discussing some of the stats I’ve been talking about at conferences for almost two years, and

b) it’s probably the only scientific article to formally cite Professor Oak’s Pokédex.

So, if you like things like iconicity and logit mixed models and flawed experiments cunningly disguised as pre-tests that I meant to do all along, you can read it here.

Enough of that, though. I know that what you’re really here for is Sound-symbolism boosts novel word learning: the MS Paint version.

The first thing we did was to select our words from almost a hundred ideophones and arbitrary adjectives. Participants heard the Japanese word, then saw two possible translations – one real, one opposite – and had to guess which one was correct. This was pretty easy for the ideophone task. People can generally guess the correct meaning with some confidence, because one of the options just kind of sounds right (due to the cross-modal correspondences between the sound of the word and its sensory meaning). It was a fair bit harder for the arbitrary adjectives, where there are no giveaways in the sound of the word.

[Figure: 2AFC stimuli selection]

It’s kind of taken for granted in the literature that people can guess the meanings of ideophones at above-chance accuracy in a 2AFC test, but I’ve always struggled to find a body of research which shows this. This pre-test shows that people can indeed guess ideophones at above-chance accuracy in a 2AFC test – at 63.1% accuracy across 95 ideophones, in fact, where chance is 50% (p < 0.001). So, now, anybody who wants to make that claim has the stats to do so. Nice. We’re now rerunning this online with thousands of people as part of the Groot Nationaal Onderzoek project, so stay tuned for more on that.
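If you want to run the bare-bones version of that chance comparison yourself, here’s a minimal Python sketch using a simple binomial test. To be clear, this is not the analysis in the paper (that uses logit mixed models to account for participant and item variation), and the trial counts below are invented just to show the mechanics:

```python
# Bare-bones check of 2AFC guessing accuracy against the 50% chance level.
# The paper itself uses logit mixed models; this is the back-of-the-envelope
# version, and the trial counts below are invented for illustration.
from scipy.stats import binomtest  # requires scipy >= 1.7

n_trials = 1900    # e.g. 20 participants x 95 ideophones (hypothetical)
n_correct = 1199   # roughly 63.1% correct (hypothetical)

result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"accuracy = {n_correct / n_trials:.1%}, p = {result.pvalue:.2g}")
```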

Then, two different groups did a learning task. We originally had the learning task as a 2AFC set-up in which participants learned by guessing and then getting feedback. In terms of results, this did work… but about a third of the participants realised that they could “learn” by ignoring the Japanese words completely and just remembering to pick fat when they saw the options fat and thin. Damn.

[Figure: the failed 2AFC learning task]

Anyway. We got two more groups in to do separate learning and test rounds with a much better design. One group got all the ideophones, half with their real meanings, half with their opposite meanings. The other group got all the arbitrary adjectives, half with their real meanings, half with their opposite meanings.

In the same way that it’s easy to guess the meanings of the ideophones, we predicted that the ideophones with their real translations would be easy to learn because of the cross-modal correspondences between linguistic sound and sensory meaning…

[Figure: learning the ideophones with their real meanings]

…that the ideophones with their opposite translations would be hard to learn, because the sounds and meanings clash rather than match…

[Figure: learning the ideophones with their opposite meanings]

…and that there wouldn’t be much difference between conditions for the arbitrary adjectives, because there’s no real association between sound and meaning in arbitrary words anyway.

[Figure: learning the arbitrary adjectives]

And sure enough, that’s exactly what we found. Participants were right 86.1% of the time for ideophones in the real condition, but only 71.1% of the time for ideophones in the opposite condition. With the arbitrary adjectives, it was 79.1% versus 77%, which isn’t a reliable difference.
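For the curious, here’s roughly what that comparison looks like in code. Again, this isn’t the paper’s actual analysis (that’s a logit mixed model over trial-level responses); it’s a per-participant sketch in Python, and the file name and column names are entirely hypothetical:

```python
# Rough sketch of the real vs. opposite comparison within each word-type group.
# The published analysis uses logit mixed models over trial-level responses;
# this per-participant version only shows the logic of the comparison.
import pandas as pd
from scipy.stats import ttest_rel

trials = pd.read_csv("learning_test_trials.csv")  # hypothetical file
# expected columns: participant, group ("ideophone" / "arbitrary"),
#                   condition ("real" / "opposite"), correct (0 or 1)

for group, data in trials.groupby("group"):
    # per-participant accuracy in each condition
    acc = (data.groupby(["participant", "condition"])["correct"]
               .mean()
               .unstack("condition"))
    t, p = ttest_rel(acc["real"], acc["opposite"])
    print(f"{group}: real = {acc['real'].mean():.1%}, "
          f"opposite = {acc['opposite'].mean():.1%}, t = {t:.2f}, p = {p:.3g}")
```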

Additional bonus for replication fans! (that’s everybody, right?): in a follow-up EEG experiment using this exact same task with Japanese ideophones, another 29 participants got basically the same results (86.7% for the real condition, 71.3% for the opposite condition). That’s going to be submitted in the next couple of weeks.

Here’s the histogram from the paper… but in glorious technicolour:

[Figure: accuracy for each condition in both experiments, in colour]

(It would have cost us $900 to put one colour figure in the article, even though it’s the publisher who’s printing it and making money from it. The whole situation is quite silly.)

The point of this study is that it’s easier to learn words that sound like what they mean than words that sound like the opposite of what they mean, and that words that don’t particularly sound like anything are somewhere in the middle. This seems fairly obvious, but people have long assumed that it doesn’t really happen. There’s been a fair bit of research on onomatopoeia and ideophones helping babies learn their first language, but not much yet with adults. It also provides some support for the broader suggestion that we use similar sounds to talk about and understand sensory things across languages, but not so much for other kinds of meaning – and if that’s right, sound-symbolic words may well have been how language started out in the first place.

I’d love to re-run this study on a more informal (and probably unethical) basis where a class of school students learning Japanese are given a week to learn the same word list for a vocab test where they’d have to write down the Japanese words on a piece of paper. I reckon that there’d be the same kind of difference between conditions, but it’d be nice to see that happen when they really have to learn the words to produce a week later, not just recognise a few minutes later. If anybody wants to offer me a teaching position at a high school where I can try this out and probably upset lots of parents, get in touch; I need a job when my PhD contract runs out in August.

The thing I find funniest about this entire study is that when I was studying Japanese during my undergrad degree, I found ideophones really difficult to learn. I thought they all sounded kind of the same, and pretty daft to boot. The ideophone for “exciting/excited” is wakuwaku, which I felt so uncomfortable saying that I feigned indifference about things in oral exams to avoid saying it (but to be fair, feigned indifference was my approach to most things in my late teens and early twenties). There’s probably an ideophone to express the internal psychological conflict you get when you realise you’re doing a PhD in something you always tried to ignore during your undergrad degree, but I’m not sure what it is. I’ll bet my old Japanese lecturers would be pretty niyaniya if they knew, though.


(almost) everything you ever wanted to know about sound-symbolism research but were too afraid to ask.

Publications are like buses. Not because you spend most of your PhD with no publications and then two turn up at once (although that is what’s just happened to me), but because you might get overtaken by another bus going the same way, and you might want to be somewhere else by the time you get to your original destination.

The bus I’ve just taken is my new review paper:

Lockwood, G., & Dingemanse, M. (2015). Iconicity in the lab: a review of behavioral, developmental, and neuroimaging research into sound-symbolism. Frontiers in Psychology, 6, 1246. http://doi.org/10.3389/fpsyg.2015.01246

I wrote it along with Mark Dingemanse, my supervisor at the Max Planck Institute. It covers experimental research on sound-symbolism from the last few years and pulls together the main themes and findings so far. To summarise, these are:

  1. That large vowels (e.g. a, o) are associated with large things and slow things and dark things and heavy things
  2. That small vowels (e.g. i, e) are associated with small things and fast things and bright things and light things
  3. That voiced consonants (e.g. b, g) have the same kind of associations as large vowels
  4. That voiceless consonants (e.g. p, k) have the same kind of associations as small vowels
  5. That this is probably due to a combination of acoustic properties (i.e. the way something sounds when you hear it) and articulatory properties (i.e. the way something feels when you say it)
  6. That these cross-modal associations mean people can guess the meanings of sound-symbolic words in languages that they don’t know
  7. That these cross-modal associations mean children and adults learn sound-symbolic words more easily
  8. That these cross-modal associations in sound-symbolic words elicit either different brain processes from regular words and/or stronger versions of the same brain processes as regular words
  9. That it’s more informative to investigate these cross-modal associations using real sound-symbolic words from real languages than using non-words from made-up languages
  10. That it’s more informative to investigate these cross-modal associations using more complex experimental tasks than by simply asking participants to choose between two options
  11. That it’s not accurate to treat arbitrariness and iconicity as two competitors in a zero-sum language game, even if it does make our work seem more important

We’re pretty happy with this, and the paper is a nice one-stop shop for everything you’ve ever wanted to know about sound-symbolism research but were too afraid to ask. We don’t finish it off with a grand model of how it works, because we don’t really know (and because I’ve still got at least two more experiments to do in my PhD before I’ll have a decent idea), but we do collect a lot of individual strands of research into a few coherent themes which should be useful for anybody else who’s doing similar stuff.

Even though it’s hot off the press this morning, it’s taken a long time to get to this stage. I started doing all the reading and the writing in spring 2014, then Mark and I restructured it quite a lot, and then it got put on the back burner while I read more things and did more things. We came back to it at the start of this year, added and changed a few things, and submitted it earlier this summer. After a fairly quick and painless review process, it’s now out.

There are a couple of frustrations, though. The first is that there was a small but important misprint in the text; it’s frustrating that it’s there, it’s frustrating that it slipped past the two authors, two reviewers, and the editor, and it’s frustrating that Frontiers won’t amend it (despite being an online-only journal). In this misprint, we accidentally misreport Moos et al. (2014). They found that people associate the vowel [a] with the colour red, and that this colour association becomes more yellow/green as the vowel gets smaller (like the vowel [i]). However, we wrote this the wrong way round in the text and accompanying figure. So, here’s the correct version of Figure 1 from the review paper:

[Figure 1, corrected: cross-modal mappings in the vowel space]


Secondly, since submitting the article and having the positive reviews back, I’ve come across two studies in particular which I wish we could have included but couldn’t because we were already on that bus. These studies are:

Sidhu, D. M., & Pexman, P. M. (2015). What’s in a Name? Sound Symbolism and Gender in First Names. PLoS ONE, 10(5), e0126809. http://doi.org/10.1371/journal.pone.0126809 (which starts and ends with the Shakespeare quote about roses by different names smelling as sweet to describe arbitrariness and iconicity – a quote I’ve always wanted to use myself, so good on them)

Jones, M., Vinson, D., Clostre, N., Zhu, A. L., Santiago, J., & Vigliocco, G. (2014). The bouba effect: sound-shape iconicity in iterated and implicit learning. In Proceedings of the 36th Annual Meeting of the Cognitive Science Society (pp. 2459–2464). Québec. (which I’d seen referred to in various presentations as a work in progress, but I hadn’t come across the actual, citable CogSci conference paper until a couple of weeks ago)

Both these studies investigate the kiki/bouba effect, which is the way people associate spiky shapes with spiky sounds (i.e. small vowels and voiceless consonants) and round shapes with round sounds (i.e. rounded vowels like o and voiced consonants). Both studies have well-designed methods which are quite complicated to explain but address the questions really well, and find similar things. The original kiki/bouba studies found the split between round and spiky from making people choose between two options, and so people chose round shapes with round sounds and spiky shapes with spiky sounds. Simple enough.

However, these two studies show that roundness and spikiness don’t contribute equally to the effect. Rather, there’s a massive effect of roundness, while the association between spiky sounds and spiky shapes is much less strong, and may even just be a default association because it was the other option in the original studies.

Had I known about these studies in time, I’d have included another paragraph or two in the review paper about how future studies can and should address whether the associations outlined in points 1–4 fall along an even continuum (in the way that size associations seem to fall evenly between i and a) or whether one particular feature is driving the effect (in the way that roundness drives the round/spiky non-continuum). Sadly, I only came across them after it was too late to include them, but hopefully they’ll be picked up by others in future!


Ideophones in Japanese modulate the P2 and late positive complex responses: MS Paint version

I just had my first paper published:

Lockwood, G., & Tuomainen, J. (2015). Ideophones in Japanese modulate the P2 and late positive complex responses. Frontiers in Psychology, 6, 933. http://doi.org/10.3389/fpsyg.2015.00933

It’s completely open access, so have a look (and download the PDF, because it looks a lot nicer than the full text).

It’s a fuller, better version of my MSc thesis, which means that I’ve been working on this project on and off since about April 2013. Testing was done in June/July 2013 and November 2013. Early versions of this paper have been presented at an ideophone workshop in Tokyo in December 2013, a synaesthesia conference in Hamburg in February 2014, and a neurobiology of language conference in Amsterdam in August 2014. It was rejected once from one journal in August 2014, and was submitted to this journal in October 2014. It feels great to have it finally published, but also kind of anticlimactic, given that I’m focusing on some different research now.

I feel like the abstract and full article describe what’s going on quite well; this is a generally under-researched area within the (neuro)science of language as it is, so it’s written for the sizeable number of people who aren’t knowledgeable about ideophones in the first place. However, if you can’t explain your research using shoddy MS Paint figures, then you can’t explain it at all, so here goes.

Ideophones are “marked words which depict sensory imagery” (Dingemanse, 2012). In essence, this means that ideophones stick out compared to regular words, ideophones are real words (not just off-the-cuff onomatopoeia), ideophones try to imitate the thing they mean rather than just describing it, and ideophones mean things to do with sensory experiences. This sounds like onomatopoeia, but it’s a lot more than that. Ideophones have been kind of sidelined within traditional approaches to language because of a strange fluke whereby the original languages of academia (i.e. European languages, and especially French, German, and English) are from one of the very few language families across the world which don’t have ideophones. Since ideophones aren’t really present in the languages of the people who wrote about languages most often, those writers kind of just ignored them. There is, however, a less well-known linguistic literature on ideophones going back decades, which variously describes them as vivid, quasi-synaesthetic, expressive, and so on.

What this boils down to is that for speakers of languages with ideophones, listening to somebody say a regular word is like this:

[Figure: listening to a regular word]

and listening to somebody say an ideophone is like this:

[Figure: listening to an ideophone]

Why, though?

Ideophones are iconic and/or sound-symbolic. These terms are slightly different but are often used interchangeably and both mean that there’s a link between the sound of something language-y (or the shape/form of something language-y in signed languages) and its meaning. This means that, when you’re listening to a regular word, you’re generally just relying on your existing knowledge of the combinations of sounds in your language to know what the meaning is:

[Figure: regular word processing]

…whereas when a speaker of a language with ideophones listens to an ideophone, they feel a rather more direct connection between what the ideophone sounds like and what the meaning of the ideophone is:

[Figure: ideophone processing]

These links between sound and meaning are known as cross-modal correspondences.

Thing is, it’s one thing for various linguists and speakers of languages with ideophones to identify and describe what’s happening; it’s quite another to see if that has any psycho/neurolinguistic basis. This is where my research comes in.

I took a set of Japanese ideophones (e.g. perapera, which means “fluently” when talking about somebody’s language skills; I certainly wish my Japanese was a lot more perapera) and compared them with regular Japanese words (e.g. ryuuchou-ni, which also means “fluently” when talking about somebody’s language skills, but isn’t an ideophone). My Japanese participants read sentences which were the same apart from swapping the ideophones and the arbitrary words around, like:

花子は ぺらぺらと フランス語を話す
Hanako speaks French fluently (where “fluently” = perapera).

花子は りゅうちょうに フランス語を話す
Hanako speaks French fluently (where “fluently” = ryuuchou-ni).

While they read these sentences, I used EEG (or electroencephalography) to measure their brain activity. This is done by putting a load of electrodes in a swimming cap like this:

[Figure: the electrode set-up]

After measuring a lot of participants reading a lot of sentences in the two conditions, I averaged the brain responses together to see if there was a difference between the two conditions… and indeed there was:

[Figure 1 from the paper: brain responses to ideophones vs. arbitrary words]

The red line shows the brain activity in response to the ideophones, and the blue line shows the brain activity in response to the arbitrary words. The red line is higher than the blue line at two important points: the peak at about 250ms after the word was presented (the P2 component), and the consistent bit for the last 400ms (the late positive complex).
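In case you’re wondering where those lines come from: ERPs are just averages over lots of EEG epochs time-locked to the word. Here’s a minimal numpy sketch of the idea – averaging epochs per condition and taking the mean amplitude in a P2-ish window – where the data, channel count, sampling rate, and time window are placeholders rather than the actual parameters from the study:

```python
# Minimal sketch of how condition-average ERPs and a P2 window measure are
# computed. Shapes, sampling rate, channel count, and the 200-300 ms window
# are placeholders, not the parameters from the actual study.
import numpy as np

srate = 500                               # Hz (assumed)
times = np.arange(-0.2, 0.8, 1 / srate)   # -200 ms to 800 ms around word onset

# Epoch arrays: (n_trials, n_channels, n_samples) of baseline-corrected EEG,
# one epoch per word presentation; random placeholder data here.
rng = np.random.default_rng(0)
epochs_ideophone = rng.normal(size=(80, 32, times.size))
epochs_arbitrary = rng.normal(size=(80, 32, times.size))

# ERP = average over trials (and, for simplicity here, over channels too)
erp_ideo = epochs_ideophone.mean(axis=(0, 1))
erp_arb = epochs_arbitrary.mean(axis=(0, 1))

# Mean amplitude in a P2-ish window, 200-300 ms after word onset
p2_window = (times >= 0.2) & (times <= 0.3)
print("P2 mean amplitude, ideophones:     ", erp_ideo[p2_window].mean())
print("P2 mean amplitude, arbitrary words:", erp_arb[p2_window].mean())
```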

Various other research has found that a higher P2 component is elicited by cross-modally congruent stimuli… i.e. this particular brain response is bigger when two things match nicely (such as a high-pitched sound and a small object). Finding this in response to the Japanese ideophones suggests that the brain recognises that the sounds of the ideophones cross-modally match the meanings of the ideophones much more than the sounds of the arbitrary words match the meanings of the arbitrary words. This may be why ideophones are experienced more vividly than arbitrary words.

higher P2 for ideophones

lower P2 for arbitrary words

As for the late positive complex, it’s hard to say. It could be that the cross-modal matching of sound and meaning in ideophones actually makes it harder for the brain to work out the ideophone’s role in a sentence because it has to do all the cross-modal sensory processing on top of all the grammatical stuff it’s doing in the first place. It’s very much up for discussion.


Papers of the Year: 2014

I’m not really one for new year’s resolutions, but they are a useful crutch for getting things done sometimes. And so, 2015 will herald the dawn of a brand new academic blog, packed full of information and insights from the business end of sound-symbolism and synaesthesia research, along with a sprinkling of observations and anecdotes about early-career academic life in general.

December, though, is a great time to start. What better way to begin a new blog than by tapping into the Buzzfeed zeitgeist and having a listicle with gifs? Without further ado, I hereby present the moderately prestigious, barely anticipated, inaugural annual Papers of the Year awards listicle. In no particular order, here are the five most interesting and/or important papers I’ve read this year.

1. Behme (2014). “A ‘Galilean’ Science of Language.” Journal of Linguistics 50, no. 03: 671–704. doi:10.1017/S0022226714000061.

(.pdf here)

[gif: mjpopcorn]

Far more august minds than mine have spilled a lot of virtual ink over Behme’s book review … well, I say book review, but it’s more like a brief section on Chomsky’s book The Science of Language which is then used as a launchpad to critically assess Chomsky’s entire scholarship. From the strictly academic side of things, I’d say that the majority of the criticism is justified, although I’m not sure I agree with Behme’s rather absolutist stance that ignoring or discarding any single piece of evidence that conflicts with your theory is absolutely reprehensible and invalidates your entire research programme. To do so on a massive scale is of course problematic, but I think there is a little more leeway in linguistics than Behme makes out. This is also a really interesting paper because of the reactions it inspires. We had a journal club session in the Neurobiology of Language department at MPI about this paper, and it was fascinating to see people’s opinions about the tone and style. Some (myself included) believe that reviews like this are perfectly fine if the author accepts that they have to stand behind their rather direct points of view; others feel that the tone was aggressive and that there’s no place in science for this kind of attack. Either way, it’s beautifully written and addresses some hugely important and uncomfortable truths about the science of language and The Science of Language.

2. Revill, Namy, DeFife, and Nygaard (2014). “Cross-Linguistic Sound Symbolism and Crossmodal Correspondence: Evidence from fMRI and DTI.” Brain and Language 128, no. 1: 18–24. doi:10.1016/j.bandl.2013.11.002.

(no free .pdf available)

[gif: excited duck]

I’ve been reading and re-reading this paper quite a lot this year. It’s an fMRI study on sound-symbolism which finds increased activation for sound-symbolic words in the left superior parietal cortex, which the authors take to mean the engagement of cross-modal sensory integration networks. That is to say, it seems that monolingual native English speakers are able to integrate sound and sensory meaning when the sound of the word naturally fits the meaning. My experiments use a similar approach with EEG, so it was very exciting to read a paper which independently expressed the same kind of ideas using a different imaging technique. Sadly, the wider behavioural experiment which they used to test the stimuli hasn’t been published yet – I’m interested to see the variation in the words they used, as some words were from languages without much sound-symbolism (Dutch, for example), while other words were from languages with lots of ideophones (e.g. Yoruba). I’m looking forward to reading about that in more detail.

3. Skipper (2014). “Echoes of the Spoken Past: How Auditory Cortex Hears Context during Speech Perception.” Philosophical Transactions of the Royal Society B: Biological Sciences 369, no. 1651: 20130297. doi:10.1098/rstb.2013.0297.

(open access paper available here)

[gif: husky listening, looking quizzical]

This paper addresses context beyond language and asks why neuroimaging meta-analyses show that the auditory cortex is less active (and sometimes deactivated) when people listen to meaningful speech compared to less meaningful sounds. Skipper’s model suggests that the auditory cortex doesn’t “listen” to speech, but instead matches the input to predictions made from context; the closer the prediction matches the input, the less error checking there is, and consequently the less activation of the auditory cortex there is. The role of the auditory cortex, therefore, is to confirm or deny internal predictions about the identity of sounds. When predictions originating from PVF-SP regions (posterior ventral frontal regions for speech perception) are accurate, no error signal is generated in the auditory cortex and so less processing is required. More accurate predictions could be generated from verbal and non-verbal context (indeed, Skipper argues that verbal vs. non-verbal is a false distinction), resulting in less error signal, and therefore less metabolic expenditure (suggesting a metabolic conservation basis for the existence of the predictive model).

It’s interesting, and definitely plausible, but I think he goes too far. He throws the baby out with the bathwater when arguing against the necessity of traditional linguistic units; just because context (rather than specifically phonemes, syllables, etc.) seems to be the basis for predictions and error checking, that doesn’t mean that well-attested traditional linguistic units aren’t important or aren’t there. Indeed, if they’re not important, why are they there, and why are they so consistently distinctive?

Linguistic reservations aside, this is one of the most interesting ideas I’ve read this year.

4. Perniss and Vigliocco (2014). “The Bridge of Iconicity: From a World of Experience to the Experience of Language.” Philosophical Transactions of the Royal Society B: Biological Sciences 369, no. 1651: 20130300. doi:10.1098/rstb.2013.0300.

(open access paper available here)

Another paper from the special issue of Phil. Trans. Royal Society B on language as a multimodal phenomenon. I like how the three functions of iconicity are made clear here: displacement, referentiality, and embodiment. I also like how an attempt is made at categorising and more precisely defining iconicity, as pinning it down has been quite tricky and different researchers use different terms in different ways. Their definition of iconicity has undergone a (welcome) narrowing compared to their definition in Perniss et al. (2010); they now equate it directly to sound-symbolism (which I’m not sure I fully agree with), and define it as “putatively universal as well as language-specific mappings between given sounds and properties of referents”. This version of iconicity does not include systematicity, or any “non-arbitrary mappings achieved simply through regularity or systematicity of mappings between phonology and meaning”. I’m neutral on this. Certainly, statistical sound-symbolism is different from sensory sound-symbolism, but where do we draw the line between conventionalised language-specific sound-symbolism and statistical sound-symbolism? How is it possible to differentiate them, given that language-specific sound-symbolism will also be statistically overrepresented with certain concepts? Moreover, what are phonaesthemes now? Can you distinguish between statistical phonaesthemes and sensory phonaesthemes which are also very common? This paper goes further than most in categorising and defining the casserole of concepts related to iconicity, and it captures the state and purpose of iconicity very well.

5. Shin and Kim (2014). “Both ‘나’ and ‘な’ Are Yellow: Cross-Linguistic Investigation in Search of the Determinants of Synesthetic Color.” Neuropsychologia. doi:10.1016/j.neuropsychologia.2014.09.032.

(no free .pdf available)

[gif: Adventure Time fist bump]

This is a study of four trilingual Korean-Japanese-English speakers who also have grapheme-colour synaesthesia (which wins the award of “most niche participant group of 2014” for me). They found that all four of them had broadly similar colours for the same characters across languages, and that the effect was driven more strongly by the sounds of the graphemes than by their visual features. This is rather an exciting find, because it hints that a phenomenon previously thought of as non-linguistic may well be rooted in language, and this may have interesting implications for the processing of cross-modal correspondences in language in non-synaesthetes too.
