
Not all who wander are lost… but it takes work to wander well.

[I originally wrote this as a guest blog for My Scholarly Goop; I’m now crossposting it to my own blog]

I used to be jealous of my friend Dave when we were at school. He’s wanted to be a doctor ever since he can remember, and that gave purpose to everything he did. We’d be in a chemistry lesson, and he’d be listening intently, even though he knew the topic, and he knew that he knew the topic, because a mastery of chemistry would be the foundation of the rest of his career. He wouldn’t let losing focus on a dull Tuesday afternoon potentially jeopardise his university applications a few years down the line. I’d be sitting next to him, quietly filling out the sudoku I ripped out of the newspaper in the library.

I envied his sense of purpose and direction. Our lives looked a bit like this:

[Figure: dave vs me]

But in time, I grew a lot less jealous, and embraced my lack of direction. There’s something liberating about not being focused on any particular thing; it gives you room to explore all the spaces in between.

My academic career veered from Japanese to linguistics to cognitive neuroscience, with a smattering of international relations, public policy, and statistics. Along the way, I worked in various jobs in financial software consultancy, accounting, commercial law, translation, and selling dog food. I’m now a data scientist, and I’ve done projects with a Premiership football team, an auction house, a scientific research funding body, a cargo company, and two different medical charities. I’m currently on a six-month placement with Solar Turbines, working on making huge turbine engines more efficient and reliable. I haven’t followed a path so much as got lost in the forest, and I’ve thoroughly enjoyed taking in all the trees.

Now, if you’re like my friend Dave, then great! You know what your goal is, and you’re probably doing what you need to do to get there. But if you’re reading a blog about post-PhD career paths, you’re probably a bit more like me. It’s exciting, because you can do anything! But it’s also overwhelming, because you should consider everything.

Writing about the joys of not having goals may sound flippant, but wandering aimlessly while still wandering well takes a lot of effort. To put it in context, this is what my summer in 2016 looked like:

[Figure: jobs applied for]

I applied for something like 35 or 40 jobs in various fields, I went to six interviews, and I received one job offer (I withdrew from another two interviews before attending, because I’d have taken my current job at The Information Lab even if I’d been offered any of the others). Each job application took a good two hours or so to write, so that’s maybe eighty hours of work. The job application process for The Information Lab meant I had to download and learn a new bit of software, so I probably spent at least ten hours on that application alone.

But the hours spent applying for jobs weren’t enough on their own. My PhD was about how special kinds of onomatopoeic words might or might not be related to multisensory processing and perhaps synaesthesia… and that’s the general summary version. PhD research is highly specific, which also means it’s highly irrelevant to most non-academic jobs. When you talk about your PhD, this is what you think you’re saying, and how it compares to what most people hear:

[Figure: pie charts]

Unlike the content of your PhD itself, the wider skills you learn are valuable, widely transferable, and highly sought. However, it’s your responsibility to show this to people. You could be a brilliant analyst, but nobody’s going to listen if you only talk about your skills in the context of your research. So, for every two hours or so I spent on a particular job application, I probably spent another two hours creating a portfolio on my blog. I used my coding, statistical, and data visualisation skills to play with various different data sources, such as looking at how the gender gap in GCSE results in England corresponds to various measures of how “good” a school is, or looking at football stats to show that the 2016 Portugal side are the worst winners of a European Championship. This took a lot of time and effort, but every single organisation that invited me to an interview said that it was this portfolio of work on my blog that had got me there. Not my PhD work.

This means that I spent about 160 hours, or twenty eight-hour days, or just under one working month, on getting one job offer. This probably sounds quite daunting. And yeah, it is. Getting a job outside academia was like writing an extra chapter for my PhD.

But the good news is that while it might be hard work, it won’t be wasted work. I’ve spoken to a fair few PhD students looking for a career outside academia, and I think a lot of people fall into a zone of self-defeat; they’re more than qualified for the jobs they’re applying for, but they undersell themselves because they’ve spent years surrounded by frighteningly competent people during their PhDs and only see themselves in relative terms. I felt like this for a long time myself, and it takes a while to get out of this mindset. I think academia does this to people’s perceptions of themselves:

[Figure: perceptions (annotated)]

All this means is that academia itself is probably the biggest obstacle to getting a job outside academia. Take the time to research different careers, different people, different ideas; once you can frame your skills and abilities independently of your own research, you’re halfway there.

If you’re considering a career outside academia, you’re already far more qualified than you think you are, you’ve got more to offer than you think you do, and you can be far more than you think you can. But it’s up to you to prove it.


Sound-symbolism boosts novel word learning: the MS Paint version

I have a new article out!

Gwilym Lockwood, Mark Dingemanse, and Peter Hagoort. 2016. “Sound-Symbolism Boosts Novel Word Learning.” Journal of Experimental Psychology: Learning, Memory, and Cognition. doi:10.1037/xlm0000235 (download link, go on, it’s only eight pages)

and I’m particularly proud of this one because:

a) it’s a full article discussing some of the stats I’ve been talking about at conferences for almost two years, and

b) it’s probably the only scientific article to formally cite Professor Oak’s Pokédex.

So, if you like things like iconicity and logit mixed models and flawed experiments cunningly disguised as pre-tests that I meant to do all along, you can read it here.

Enough of that, though. I know that what you’re really here for is Sound-symbolism boosts novel word learning: the MS Paint version.

The first thing we did was to select our words from almost a hundred ideophones and arbitrary adjectives. Participants heard the Japanese word, then saw two possible translations – one real, one opposite – and they had to guess which one was correct. This was pretty easy for the ideophone task. People can generally guess the correct meaning with some certainty, because it just kind of sounds right for one of the options (due to the cross-modal correspondences between the sound of the word and its sensory meaning). It was a fair bit harder for the arbitrary adjectives, where there are no giveaways in the sound of the word.

[Figure: 2AFC stimuli selection]

It’s kind of taken for granted in the literature that people can guess the meanings of ideophones at above chance accuracy in a 2AFC test, but I’ve always struggled to find a body of research which shows this. This pre-test shows that people can indeed guess ideophones at above chance accuracy in a 2AFC test – at 63.1% accuracy (μ=50%, p<0.001) across 95 ideophones, in fact. So, now, anybody who wants to make that claim has the stats to do so. Nice. We’re now rerunning this online with thousands of people as part of the Groot Nationaal Onderzoek project, so stay tuned for more on that.
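(If you’re curious what a test like that looks like in practice, here’s a minimal sketch in R. It isn’t the analysis script from the paper, and the numbers of guesses and the accuracies are made up; it just illustrates testing by-item guessing accuracy against a 50% chance level.)

set.seed(1)
n_items <- 95                      # 95 ideophones, as above
guesses_per_item <- 20             # hypothetical number of guesses per ideophone
# simulated proportion correct for each ideophone, hovering around 63%
item_accuracy <- rbinom(n_items, size = guesses_per_item, prob = 0.63) / guesses_per_item
t.test(item_accuracy, mu = 0.5)    # one-sample test against chance (50%)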

Then, two different groups did a learning task. We originally had the learning task as a 2AFC setup where participants learned by guessing and then getting feedback. In terms of results, this did work… but about a third of the participants realised that they could “learn” by ignoring the Japanese words completely and just remembering to pick fat when they saw the options fat and thin. Damn.

[Figure: 2AFC failed test]

Anyway. We got two more groups in to do separate learning and test rounds with a much better design. One group got all the ideophones, half with their real meanings, half with their opposite meanings. The other group got all the arbitrary adjectives, half with their real meanings, half with their opposite meanings.

In the same way that it’s easy to guess the meanings of the ideophones, we predicted that the ideophones with their real translations would be easy to learn because of the cross-modal correspondences between linguistic sound and sensory meaning…

[Figure: concept sounds and participants – real condition]

…that the ideophones with their opposite translations would be hard to learn, because the sounds and meanings clash rather than match…

[Figure: concept sounds and participants – opposite condition]

…and that there wouldn’t be much difference between conditions for the arbitrary adjectives, because there’s no real association between sound and meaning in arbitrary words anyway.

[Figure: concept sounds and participants – arbitrary adjectives]

And sure enough, that’s exactly what we found. Participants were right 86.1% of the time for ideophones in the real condition, but only 71.1% for ideophones in the opposite condition. With the arbitrary adjectives, it was 79.1% versus 77%, which isn’t a proper difference.
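(For the statistically curious, here’s a rough sketch in R of how a comparison like the ideophone one can be modelled with a logit mixed model, which is the kind of analysis the paper uses. The data below are simulated to loosely echo the 86% and 71% figures above; the column names, number of participants, and number of items are hypothetical, not the paper’s.)

library(lme4)

set.seed(2)
# simulated trial-level data for the ideophone group: each participant learns
# some ideophones with their real meaning and some with the opposite meaning
d <- expand.grid(participant = factor(1:30), item = factor(1:38))
d$condition <- ifelse(as.integer(d$item) %% 2 == 0, "real", "opposite")
d$correct <- rbinom(nrow(d), 1, ifelse(d$condition == "real", 0.86, 0.71))

# logit mixed model of accuracy by condition, with random intercepts
# for participants and items
m <- glmer(correct ~ condition + (1 | participant) + (1 | item),
           data = d, family = binomial)
summary(m)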

Additional bonus for replication fans! (that’s everybody, right?): in a follow-up EEG experiment doing this exact same task with Japanese ideophones, another 29 participants got basically the same results (86.7% for the real condition, 71.3% for the opposite condition). That’s going to be submitted in the next couple of weeks.

Here’s the histogram from the paper… but in glorious technicolour:

[Figure: accuracy for each condition, with both experiments, in colour]

(It would have cost us $900 to put one colour figure in the article, even though it’s the publisher who’s printing it and making money from it. The whole situation is quite silly.)

The point of this study is that it’s easier to learn words that sound like what they mean than words that don’t sound like what they mean, and that words that don’t particularly sound like anything are somewhere in the middle. This seems fairly obvious, but people have assumed for a long time that this doesn’t really happen. There’s been a fair bit of research about onomatopoeia and ideophones helping babies learn their first language, but not that many studies with adults yet. It also provides some support for the broader suggestion that, across languages, we use similar sounds to talk about and understand sensory things but not other things, so words with sound-symbolism may well have been how language started out in the first place.

I’d love to re-run this study on a more informal (and probably unethical) basis where a class of school students learning Japanese are given a week to learn the same word list for a vocab test where they’d have to write down the Japanese words on a piece of paper. I reckon that there’d be the same kind of difference between conditions, but it’d be nice to see that happen when they really have to learn the words to produce a week later, not just recognise a few minutes later. If anybody wants to offer me a teaching position at a high school where I can try this out and probably upset lots of parents, get in touch; I need a job when my PhD contract runs out in August.

The thing I find funniest about this entire study is that when I was studying Japanese during my undergrad degree, I found ideophones really difficult to learn. I thought they all sounded kind of the same, and pretty daft to boot. The ideophone for “exciting/excited” is wakuwaku, which I felt so uncomfortable saying that I feigned indifference about things in oral exams to avoid saying it (but to be fair, feigned indifference was my approach to most things in my late teens and early twenties). There’s probably an ideophone to express the internal psychological conflict you get when you realise you’re doing a PhD in something you always tried to ignore during your undergrad degree, but I’m not sure what it is. I’ll bet my old Japanese lecturers would be pretty niyaniya if they knew, though.


scatterplot / dotplot / losttheplot

I’m not sure how to game search engine optimisation algorithms, but hopefully you’ll end up here if you’ve googled “things that are better than histograms” or “like scatter plots but with groups and paired and with lines” or “Weissgerber but in R not Excel” or something similar.

Anyway. Weissgerber et al. (2015) have a fantastic paper on data visualisation which is well worth a read.

(tl;dr version: bar graphs are dishonest and you should plot individual data points instead)

Helpfully, Weissgerber et al. include instructions for plotting these graphs in MS Excel at the end should you wish to give it a go. But, if MS Excel isn’t your bag, it’s easy enough to try in R…

…apart from the fact that nobody really agrees on what to call these plots, which makes it really hard to search for code examples online. Weissgerber et al. refer to them as scatterplots, but in most people’s minds, scatterplots are for plotting two continuous variables against each other. Other writers refer to them as dotplots or stripplots or stripcharts, but if you don’t know the name, you don’t know that this is what you’re looking for, and all you can find is advice on creating different graphs from the ones you want.

[Figure: Jedi Knight - “these aren’t the scatterplots you’re looking for”]

As an example, here’s some of my own data from a behavioural task in which participants had to remember things in two different conditions. The bar chart with 95% confidence intervals makes it fairly clear that participants are more accurate in condition one than condition two:

[Figure: accuracy for each condition, in percent]

The scatterplots / dotplots / whateverplots also show the distribution of the data quite nicely, and because it’s paired data (each participant does both conditions), you can draw a line between each participant’s data points and make it obvious that most of the participants are better in condition one than in condition two. I’ve also jittered the dots so that multiple data points with the same value (e.g. the two 100% points in condition_one) don’t overlap:

[Figure: accuracy for each condition, in percent - jitterdots]

It’s easy to generate these plots using ggplot2. All you need is a long form or melted dataframe (called dotdata here) with three columns: participant, condition, and accuracy.
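(If you want to try the code below without your own data, here’s a made-up dotdata in that shape; the participant numbers and accuracy values are entirely invented, just to give the plotting code something to work with.)

set.seed(42)
# hypothetical paired data: 15 participants, one accuracy score per condition
dotdata <- data.frame(
  participant = factor(rep(1:15, times = 2)),
  condition   = rep(c("condition_one", "condition_two"), each = 15),
  accuracy    = c(round(runif(15, 80, 100)), round(runif(15, 60, 95)))
)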

library(ggplot2)

dotdata$condition <- factor(dotdata$condition, levels = unique(as.character(dotdata$condition)))
# re-order the levels in their order of appearance in the dataframe
# (otherwise ggplot plots them in alphabetical order)
 
ggplot(dotdata, aes(x=condition, y=accuracy, group=participant)) +
  geom_point(aes(colour=condition), size=4.5, position=position_dodge(width=0.1)) +
  geom_line(size=1, alpha=0.5, position=position_dodge(width=0.1)) +
  xlab('Condition') +
  ylab('Accuracy (%)') +
  scale_colour_manual(values=c("#009E73", "#D55E00"), guide=FALSE) + 
  theme_bw()

The only way is ethics

Ethics in scientific research can be very, very frustrating. At MPI, we’re pretty lucky in that we have blanket ethical approval for all studies which use standard methodologies (behavioural, eye-tracking, EEG, and fMRI at the Donders Institute) and non-vulnerable populations (i.e. not children, not the elderly, and not adults with medical or mental disorders). Even then, though, it’s complicated.

For example, I have to include a section in my EEG consent forms which says that if I see any indication of a neurological abnormality in the signal, I will report it to a clinical neurologist. The thing is, EEG doesn’t work like that; you can’t look at the signal, point to it, and say, “yup, this bit’s gone wrong” like you can with an X-ray or a structural MRI scan. Interpreting EEG signals depends on whatever the person is doing at the time, and unless they’re doing a specific task for making a specific diagnosis, all you can really tell with EEG is whether somebody is moving, blinking, or currently having an epileptic seizure (or if they have them often).

[Figure: EEG artifact]

As another example, there’s a difficulty in reconciling data protection (which is a good thing) and Open Science (which is also a good thing). The Open Science movement advocates archiving your raw data and participants’ metadata so that other scientists can scrutinise your analysis and replicate – or not – your work. This is easy enough for behavioural data; we just ask participants whether they consent to the anonymised sharing of their raw data. With fMRI data, though, it’s technically possible to reconstruct a participant’s face from the structural scans, which could violate participant anonymity. And with video corpora, this is hugely problematic. The Language and Cognition group at MPI do a lot of work with video corpora for conversation analysis, which involves extra layers of consent from the participants so that the videos can be analysed and shown at conferences. After several hours of recording, they find this one perfect example of a particular gesture or phrase or turn-taking strategy… and then they realise that somebody’s just walked past in the background, and so the video can’t be used because that person hasn’t given their consent.

Dealing with ethics and consent creates a huge pile of admin work where a common sense strategy would be much quicker and easier… but on balance, this is definitely preferable to an experiment that puts people in any kind of danger. The problem is that outside academia (and similarly-controlled corporate and governmental research), all kinds of ethically questionable experiments are happening.

This is a long, roundabout introduction to an anecdote about how I was recently contacted by a high school student who wanted to know how EEG works with paralysis. I assumed they were asking about a brain-machine interface, such as the one in the 2014 World Cup opening ceremony where a paralysed man wearing an EEG cap was able to control an exoskeleton and kick a football…

Nope. They were actually asking about something they’d seen in an anime. After living in Japan for a year, one of my rules to live by is that the sentence “No, it’s okay, I’ve seen it in an anime” never indicates anything good, and this rule was proven again on this occasion. The anime in question is called Sword Art Online, and I’m not really sure what it’s about other than it features a virtual reality helmet which paralyses the characters from the neck down and overrides their sensory systems, thereby making the virtual reality feel real as well as look real. I wrote back to the student and said that people are doing all kinds of interesting VR research and brain-machine interface research, but that EEG is kind of like a set of scales for weighing things; it can tell you what your weight is, but that doesn’t mean it can change your weight.

The student wrote back to me saying that people are doing research on this in America. These teams are apparently attempting to induce paralysis from the neck down, but are running into problems with their “body stopper”, like vertigo, nausea, paralysis lasting long after the machine was turned off, and some body functions not working for a while afterwards. I did a bit of googling and found out that the people working on this are amateurs who have taken apart a taser that they’ve bought from a hardware store, messed about with the power settings, and strapped it to each other’s necks to try to induce temporary paralysis (and the guy in charge of it seems to want to run his own maid café, which pretty much says it all).

It goes without saying that this wouldn’t get ethical approval at MPI or any other university, and that it is, to use the technical term, really fucking dangerous.

It’s a bit more complicated than that, though. It’s easy enough to look at people making their own TMS machines or buying tDCS sets because they think they can zap themselves smart (even though it doesn’t really work like that anyway) and write them off as potential Darwin Award winners… but science is somewhat complicit in this too. The mainstream media coverage of scientific findings is hugely exaggerated, mostly due to the media’s need to sell itself, but also because of the need for academics to overhype their own research. If people are presented with stories about how something about the brain and electricity can make you smarter or make paralysed people walk, and if scientific research isn’t all that open to non-scientists, it’s not really surprising that people are trying it out for themselves.

It boils down to science communication in the end. It’s one thing to talk about how amazing your own research is or how these great findings could mean brilliant things, but that’s actually kind of irresponsible without also talking about the ethics approval boards, the consent forms, the participant safety measures… in short, all the boring but essential things that make scientific research safe. Hence the long, roundabout introduction to this anecdote. You’ll remember the bit about the homemade paralysis machine from a taser, but I’d rather you remember the bit about all the ethics forms I have to fill in before I can do any kind of experiments myself.


An ode to participants

[click here to read this in Dutch]

I’ve been a PhD student at the MPI for 18 months now, and in that time I have tested 147 different participants in 4 different experiments here, and an extra 23 in another experiment in London. That’s about nine and a half times a month, which falls somewhere between the number of times I go to the gym and the number of times I just watch TV eating biscuits (I’ll let you decide which is which).

That’s a lot of people. That’s a lot of times that I’ve been saying “Thanks for doing the experiment” and that’s a lot of times that I’ve forgotten whether it’s de experiment or het experiment. That’s a lot of times I’ve inflicted post-rock on my participant while setting up the electrodes. That’s a lot of times I’ve heard the same stimuli, to the point where I almost feel more familiar with the voice of the woman who recorded the words than the voice of my own girlfriend. That’s a lot of times that I’ve said “press the left button if the word is correct, and press the right button if the word is wrong”, so much so that it’s become burned into my mind and I can’t say it without feeling like I’m singing it. That’s a lot of conversations where I ask things like “So, are you a student here? What do you study? Is my experiment more fun than my office mate’s experiment? [it definitely is]”. That’s a lot of conversations where participants ask things like “are you German? Oh, you’re British, I thought your accent sounded German. How long have you been in Nijmegen? Can you say Scheveningen? [I sort of can, yeah] Is the UK really like Geordie Shore? [it sort of is, yeah]”. I appreciate the Dutch practice, although I hope there won’t be many situations in daily life where I have to tell people “please read and sign the consent form before we go any further” or “don’t worry, this won’t actually electrocute you”.

The really funny thing is the disparity in how each of us sees the experiment. To my participant, it’s a strange, maybe slightly boring, task that takes about an hour. It’s not a bad way to earn a bit of beer money: it’s two drinks at the Cultuur Café on campus, maybe three if they settle for Jupiler instead of something actually nice, and there was that two-hour gap between lectures that afternoon anyway. It’s pretty forgettable. A week later, my participant vaguely remembers doing my experiment, but not really what it was about, apart from that it had some Japanese words in it and there was that bit where I turned the electrode impedance check on and that weird swimming cap thing made their head light up like a Christmas tree, and there was something about how blinking made their brainwaves go funny.

To me, though, it’s everything. My career completely depends on the research that I do, and the research that I do completely depends on the kind people who turn up to do these strange, maybe slightly boring tasks, even though it’s 9am and it’s raining outside. I have talked about the results from my experiments all over the place, from a beautiful old room in the KNAW in Amsterdam with oil paintings of 18th century Dutch writers on the walls, to a wooden boat on the river in Tokyo from which my supervisor could see a fireworks display and where I tried to hide the fact that I’d spilled shochu down my shirt.

Without my participants, I would have never seen any of this; without my participants, I wouldn’t be able to do the job that I love (or the job that I think is frustratingly terrible, if you’re asking when I’m cleaning gel out of electrodes with a toothbrush or if there’s a typo in my code that I just can’t find). The department blog hettaligebrein.nl often talks about the research that we do at MPI, and some of it even makes the national news. This can make the scientists involved seem like the most important part… but I hope you appreciate that behind every MPI study is a scientist who is quietly very grateful for the bemused participants who do their experiments. Especially the ones who still turn up at 9am when it’s raining outside.

[this blog was originally written for hettaligebrein.nl, the Dutch-language blog for the Neurobiology of Language department at the MPI for Psycholinguistics]


From codas to coding: how to make the move from linguistics into experimental research.

I was testing a participant the other day when I had a moment. I had electrode gel all over my hands, I was saying something about measuring action potentials, and I just thought, wait, what? How did I end up here?

See, I dropped science at sixteen. I have A-levels in French, Latin, History, and Maths. Even at degree level, I did Japanese and Linguistics, and yet here I am, programming and running my own EEG experiments looking at cross-modal integration in language. Academia is funny like that.

It’s great that you can start specialising at sixteen and still end up doing things that are almost completely unrelated; there’s something reassuring about having the freedom to drift. But, the downside is that you’re always trying to catch up with things that you should have learned much, much earlier. I get asked about how to transition from a languages/linguistics degree towards the experimental side of things quite often; this blog is part answer, part letter to my younger self (who should have learned this stuff, and got a proper haircut, much earlier). If you don’t fancy reading through it all, there are four main points:

  1. Take a two year long Master’s course, so that you have time to develop a) your knowledge, and b) your interests.
  2. Take a more general cognitive neuroscience Master’s course rather than anything that sounds really specific.
  3. Read around about things like how approaches to the neuroscience of language have developed and what sort of questions we should be asking.
  4. Learn statistics. Learn R. Learn programming too, if you can.

…and don’t forget about the cost of it. I can’t speak for many countries, but the Netherlands is much cheaper than the UK, and just as good, if not better.

Okay. Here it is in detail.

I’m a linguistics student, and it’s great! …but that one lecture I had about Broca’s area and Wernicke’s area was really interesting, and I want to do this kind of thing in my Master’s, but I don’t know where to start.

It ultimately depends on what you want to get out of a Master’s. Do you want to go on and do a PhD and research? Or do you want to explore something you find interesting? Because my advice is different depending on whether or not you want to stay in academia.

I’m interested, but I don’t know if a PhD is for me. I’d quite like to have a job where I’m not blogging about my own job at 11pm on a Thursday night…

Fair point. In that case, it’s relatively straightforward – pick something that interests you and meets all your criteria about location, cost, etc, and just make sure you enjoy it! If you know you want to have a non-academic job, then a Master’s is about self-fulfilment / self-development / self-whatever. Transferable skills too, of course, and so a lot of what I’m about to say will also apply, but isn’t quite as crucial.

Then again, staying in school does kind of appeal… you get paid to start work at 1pm, eat nothing but pot noodles, and still get to call yourself Dr. Lockwood afterwards? Sign me up!

That’s not how it is at all (honestly, mum, it isn’t).

Oh. Well, it still sounds good. I find a one year Master’s course and race through it so I can quickly get settled into the life of luxury you’re living as a PhD student, right?

I wouldn’t recommend that, actually. The Master’s course I did was one year long, and at the time I thought that was fine – I’d spent four years doing my undergrad degree, I had itchy feet and I wanted to move on, I liked the idea of quick progress. However, it’s just not possible to learn all the things you need to be prepared for a PhD in one year. It’s too rushed, both in terms of the amount you can learn, and also in terms of the development of your own thinking and interests. If you already have a specific idea of what you want to specialise in, then that’s great; but if you’re generally interested in psycholinguistics / cognitive neuroscience of language, then a year is not enough time. During my Master’s (and presumably in most one-year Master’s courses), I started in late September and had to have a clear idea of what I wanted to write my thesis about by December. This means that you’ve basically got to have worked out exactly what you’re interested in researching within two months of being there, and that’s still while you’re learning the basics of a new field! A lot of people on my course ended up doing a thesis project about something that they were only generally interested in. This isn’t a problem if you don’t want to go on and do a PhD, but it is a problem if you do – if your Master’s thesis is about X, then you will have to base your PhD application on X, which limits your PhD research to things related to X. Luckily, I enjoy my research area, but I do sometimes think I’d be researching something slightly different, or researching the same thing but in a slightly different way, if I’d had more time.

But I found this Master’s course which really interests me and the title is something like MSc cognitive neuroscience of language and communication with an experimental focus on acquisition and development and and and… surely it only takes a year to do something so specific?

It probably does… but I would also recommend taking a more general Master’s (e.g. cognitive neuroscience or logic), rather than one that focuses on one particular thing like the course you suggest. You’re doing a linguistics BA, so I can guarantee you that you know more about the theoretical structure of language than most computational or psycholinguistics / neuroscience of language researchers do. Neuroscience of language generally works at a far more general level of linguistic analysis than you’re used to. Instead, you’ll find that you need a lot more general neuroscience information, so you should rely on your BA having provided you with enough strictly linguistic information; do a Master’s in general cognitive neuroscience in order to get as much knowledge about the brain as you possibly can. Then, you can go back to a more language-focused PhD, but with a much better set of skills and much wider knowledge than I had.

You keep talking about developing your “set of skills” and it sounds horribly corporate. What are you on about?

The first one is doing as much reading about the neuroscience of language as you can. That’s a big field, and I don’t know what your main interests are – let me know, and I can send you some more specific things. A good place to start is Poeppel and Embick (2005), “Defining the Relation between Linguistics and Neuroscience”, which is a book chapter about what the field is and what it should be doing when looking at the brain and language. Another good one is Hagoort (2014), “Nodes and networks in the neural architecture for language: Broca’s region and beyond”, which is a summary of how the traditional view of language in the brain is defunct and outlines the recent developments (the traditional view is probably what you learned in that one lecture on language and the brain, where Broca’s area and Wernicke’s area play separate roles for syntax and semantics).

That’s just reading! I can do that already.

Sorry, I don’t mean to be patronising, but sometimes it’s useful to have a place to start.

The next thing is to start reading about statistics. If you do any kind of experimental linguistics, you will spend far more time working with numbers than with phonemes or words or sentences. Understanding the methods of analysis is as important as, if not more important than, understanding the concepts you’re researching. It’s difficult to recommend something for this, as nothing works the same for everybody – just read as many things as you can, and see what works for you. I find that for any given concept, I could read three different explanations and not understand a thing, another person’s explanation and kind of get it, and one magic way of phrasing things which makes it completely clear what’s going on. For me, that phrasing magician is Daniel Lakens, who writes an interesting and readable blog on statistics as applied to psychology (but which is easily transferable to language research). Just read through it; it’s surprising how much you’ll pick up from blogs rather than textbooks. The main thing to remember is that statistics is complicated, but it isn’t inherently difficult.

Well, reading about statistics is one thing, but how do I actually do statistics?

There are a ton of statistics programmes out there (as well as MS Excel, which I often forget about). The best thing I’ve found for mucking about with data is R… and it’s also completely free to download. R involves a steep learning curve, but it’s also almost instantly rewarding – it’s clear what you’re doing, and it’s easy to see how you can apply it to whatever you’re interested in.

As with statistics, learning data manipulation and statistical programming can feel impossible when one person explains it and easy when another does, depending on how well their instruction style works for you. I recommend an excellent set of free courses hosted by Johns Hopkins University about data science. The first two courses on there are a great introduction to data analysis and to R, and you can take it at your own pace. Also, if you have twitter, you should follow Hadley Wickham, the guru of all things R. He writes packages which make grappling with R code much easier (such as dplyr, for which there’s a great tutorial video here), and frequently tweets useful links and resources. I’ve figured out all kinds of things in my scripts just from procrastinating on twitter. When it’s not just an echo chamber for outrage and hatred, social media can actually be pretty great.
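(As a taste of the sort of thing dplyr makes easy, here’s a tiny sketch with made-up reaction time data; nothing here comes from a real experiment, it just shows the filter / group_by / summarise pattern.)

library(dplyr)

set.seed(3)
# made-up reaction time data: 10 participants, two conditions
rt_data <- data.frame(
  participant = rep(1:10, each = 20),
  condition   = rep(c("ideophone", "arbitrary"), times = 100),
  rt          = rnorm(200, mean = 650, sd = 80)
)

# a typical pipeline: drop implausibly fast responses, then get each
# participant's mean reaction time per condition
rt_data %>%
  filter(rt > 200) %>%
  group_by(participant, condition) %>%
  summarise(mean_rt = mean(rt))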

Oh, and finally, if you’re not already using referencing software, start now. Again, there are loads out there, but I recommend Zotero. It’s free, it’s really simple to use once you’ve installed it, and it’s brilliantly intuitive.

That sounds like a lot of work! I’m in my final year of my undergrad, and I’ve got these three essays, and…

You will never have as much spare time as you do right now. In my final year, I worked three jobs (as a proofreader, a translator, and best of all, a dog food seller), took an evening course in Russian, played the piano in a musical, did some stand-up comedy gigs… and still had the time to binge-watch all ten series of Friends in about six weeks. I don’t have anywhere near that much spare time to waste anymore; it’s hard enough to find the time to improve my statistics and my R skills, and that’s part of my job. I wish I’d put that time into working on some useful skills a few years ago, rather than watching improbably affluent fictional twentysomethings drink coffee.

You mean, watching Friends hasn’t prepared you for PhD life?

Only for the amount of coffee that’s required.


Ghost literature is haunting science

Scientists properly referencing things is great[citation needed], but sometimes proper referencing leads to improper science.

With the apparent need for scientists to produce more papers, it is increasingly common to see three or four separate short papers on the same subject – the same experiment, even – rather than one big paper which rounds them all up. This isn’t necessarily a bad thing. Reading a long paper with several experiments which are variations on a theme can take all morning, and sometimes you’re just looking for one specific bit of information anyway. Papers written like this (or articlettes, as I think of them) generally cite their sister articlettes to avoid repeating things every time. It often looks something like this:

Methods

The methods are the same as in Me et al. (20XX), but this time the stimuli were presented visually instead of auditorily. See Me et al. (20XX) for a detailed description.

This isn’t too much of a problem. It cuts down on the length of the articlette by removing material which has already been written and published. If you want to see the detailed methods, it tells you exactly where to find them (and if you’re not interested in the detailed methods, then maybe you should read papers more thoroughly). The author also benefits by sneakily increasing their citations by citing themself.

The problem is when the articlettes read something like this:

Methods

We did blah blah blah with stimuli that were designed based on Me et al. (submitted).

…or even worse,

Methods

We did blah blah blah with stimuli that were designed based on Me et al. (in prep).

In this case, the author is citing their own work which has been submitted to (but not yet accepted and published by) a journal, or isn’t even ready to be submitted. Either way, it’s impossible for the enthusiastic reader to follow up and have a look at their experimental manipulations more closely, because the sister articlette is unavailable (I think of this sort of thing as ghost literature). This is really frustrating – it’s very difficult to know what to think of an articlette’s conclusion when all kinds of things could depend on the manipulations. It could be just as the author describes; but there could also be various things in the experimental set-up which could easily determine the results, possibly more so than the main manipulation which the author thinks is responsible, and it’s just not possible to have a look. Moreover, the articlette which has been published will always look like that – somebody could be reading it years later, and not know where to find the sister articlette with all the interesting information in it, even if it has since been published.

There are mitigating factors, of course. Given the appeal of articlettes to both reader and author, the author can’t necessarily be blamed for putting something in one articlette and then citing it in another. It’s easily possible that the two articlettes were submitted for review at exactly the same time, and that the second one was reviewed and published more quickly than the first one. This would lead to the second one, which depends on the first, citing a paper which is not yet available. It’s unfortunate, but understandable, considering the casserole of nonsense that is the scientific journal system. A rather more cynical interpretation would be that the author knows that their work wouldn’t pass peer-review if submitted completely, and therefore cites ghost literature to obscure the deficiencies of the articlette in question.

I’m surprised that there aren’t more measures or rules against these things. Some, but not that many, journals have an editorial policy which states that papers citing as yet unavailable manuscripts will be rejected (and hey, journals will find all kinds of excuses to reject papers). It shouldn’t be too hard to fix. Either the journals prevent the citation of ghost literature, or the authors go full Open Science and publish their stimuli on open repositories online somewhere.

As it stands, ghost literature is haunting science, making it hard to evaluate a paper for what it is. Thing is, I don’t know who to call.
