This article proposes a checklist to improve statistical reporting in manuscripts submitted to Public Understanding of Science. These guidelines will allow reviewers (and readers) to judge whether the evidence presented in a manuscript actually supports its claims. The article closes with further suggestions for improving the statistical quality of the journal.