Free Card Friday: Evidence Indict


Today’s free card is useful for debaters in every event. It argues that social science research should be viewed with skepticism, especially when it is used to guide public policy, because cultural and institutional pressures within the discipline encourage the publication of inaccurate, irreproducible, or exaggerated findings. Debaters can read this card any time an opponent supports a claim with evidence drawn from a single piece of social science research.


ONE-OFF STUDIES SHOULD BE DISREGARDED - THERE’S TOO MUCH ROOM FOR ERROR

Sutherland & Janz 12/11/13

(Alex Sutherland, D.Phil. in sociology from Oxford with a joint appointment at RAND Europe and Cambridge University, and Nicole Janz, professor of social science research methods at Oxford, The Alliance for Useful Evidence, “Social Science and Replication,” http://www.alliance4usefulevidence.org/replication/)

Social science is broken. The far-too-prevalent reporting of ‘statistically significant’ results, results ‘bordering on significance’ creeping in, focusing on significance alone rather than on significance and effect size, undeclared conflicts of interest, weak or non-existent peer-review processes, and unwillingness to retract articles all create headaches for anyone trying to make sense of research output in social science (and science in general). But all is not lost. There are strident attempts to push for greater reproducibility and replication of research findings. In fact, replication may be the tool that can help empirical social science avoid the pernicious problems set out above. How? Well, if there’s a “knock-out-once-in-a-lifetime-would-you-believe-it” result from a single study that everyone pays attention to (especially if it influences policy), then at the very least researchers should seek to: (a) reproduce the result (significance and effect size) using the same dataset; and/or (b) replicate the analysis using a different dataset (i.e. verify the result); and/or (c) replicate and improve the original using updated data/methods. Policy makers, on the other hand, who may not be in a position to replicate, should be demanding ‘what other evidence is there for this finding?’, or commissioning a replication, rather than relying on one-off results from a single study, no matter how high the quality of that piece of research. Why does all this matter? The Reinhart and Rogoff replication scandal earlier this year, where a student found that two Harvard economists had made mistakes and omitted data (since corrected but still disputed), showed that by holding back data sets and analytical steps, errors may only be discovered years later – if at all. (The need for replication doesn’t stop with the economy. How about the effects of nuclear proliferation? Or thirteen major effects in psychology?) Despite the importance of ‘getting it right’, particularly for policy-related research, replication is uncommon. The problem is cultural: Gherghina and Katsanidou found that only 18 of 120 political science journals have a replication policy in which they state authors should upload data for their papers. But journals are not the only driver. In an age where ‘publish or perish’ looms larger in academia than ever, the fear may be that such papers will not see the light of day because they are not ‘new’, or that these studies remain undone because they are not ‘glamorous’. How do we change this? By making replication something that is accepted as routine, whether done by other academics or by analysts within government organisations if data are sensitive. A key lever for such change is including replication courses at universities as core teaching for (under?)graduate students. Replication is an unparalleled tool for teaching students about the trials and tribulations of real-world research – including all the shortcuts an author might have taken to get ‘that result’. If enough students are exposed to this idea, some will go on to become academics, editors and policy-makers themselves, and will then be in a position to influence those around them and the wider research community. The consequences of not reproducing research findings are not just poor quality control and less transparency, which may seem of concern mainly to academics; they can (and do) affect our everyday lives – from ‘being in it together’ in terms of austerity (a lot of credence was given to the 2010 Reinhart and Rogoff paper when it came to how countries handled their recession) to how we might raise our children.


Of course, you should refrain from reading this evidence if you are also going to cite one-off social science studies!


Good luck, and don’t forget to send us your case for a free critique!

