Introduction
Amongst the diverse array of submissions received at Paramedicine, those employing survey methodology are the most common. Unfortunately, we have found studies using this methodology to frequently be problematic in terms of methodological rigour and quality, and they therefore tend to be declined more often than studies using other methodologies. To be clear, Paramedicine recognises survey-based research as a useful means of evidence production; we have published strong survey research capable of having tangible impact and will continue to do so. Survey research has long proven itself to be a valuable approach to gathering data from populations such as patients, clinicians, or administrators, to inform and develop practice and policy in healthcare. 1 My sense, however, is that there exists an under-appreciation of the complexity of survey-based research, resulting in good research questions being poorly investigated and producing flawed results that, whilst interesting in content and attractive in presentation, are ultimately not helpful in moving the discipline forward. At Paramedicine, we have written about the dangers of ‘hurtful science’, 2 and it seems to be within and around survey-based research that this concept is most starkly evident.
Concerns regarding quality in survey-based research, particularly when conducted online as most presently are, are not unique to paramedicine and have been raised in various contexts over many years.3–6 These concerns have arguably been magnified by the COVID-19 pandemic era, which saw a proliferation in the use of online surveys and brought increased awareness and prominence to undermining issues such as ‘survey fraud’, ‘imposter participants’, and ‘participant disinterest’.7–9 For example, in their 2024 study of the consequences of COVID-19 on children in the United States, Nur et al. 10 found over 70% of 6536 online responses to be fraudulent.
This editorial seeks to draw attention to the problems found in survey-based research as experienced by this journal, and in doing so to provoke reflection in paramedicine researchers conducting such research and in end-users who consume research or who are charged with its synthesis and translation. This piece is not written as methodological guidance, as that is readily available in good measure from many sources, and the author does not claim to be an expert in survey design and conduct. It does, however, constitute editorial advice and posits broad suggestions for pragmatic, high-level solutions that, if adopted, would most likely enhance quality overall and increase the likelihood of publication in Paramedicine.
A personal reflection
‘Let's do a survey!’ These were my words, spoken enthusiastically as a relative novice to a small group of similarly experienced colleagues about 15 years ago as we worked to conceive a research study that would constitute the first within a new research group established in an ambulance service. Thinking back, we could have used more suitable research designs to answer our question, but the idea of a survey came to us immediately and seemed to fit, so off we went. It felt ‘easy’, nothing too complex, and would facilitate an early and, dare I say it, quick win for us. We were at that time novices in the research world, with modest formal research training and little insight into our own limitations (though I’m sure these were apparent to others around us!).
We did not, however, engage as a team in what I refer to as a ‘methodological pause’, akin to the ‘clinical pause’ we might see in clinical practice: a pause that creates an intentional inertia in decision-making and planning and may to some degree serve to counter cognitive traps such as confirmation bias, over-confidence bias, and groupthink. 11 If we had embraced a ‘methodological pause’, or something similar, and reflected on the path we were heading down, perhaps we might have identified several flaws in our approach, which ultimately did lead to a published piece of research, but one that, whilst well-written and attractive in message, most likely sits in the ‘hurtful science’ category despite our best efforts and intentions. If we had paused, we might have recognised that none of us had undertaken structured training in survey design and methodology and that over-confidence existed. We had superficial knowledge and were content with that, not knowing what we didn’t know or what we had chosen not to learn. If we had taken the time to study the methodology more authentically, we might have realised we had a significant under-appreciation of the complexity of the design. This self-assessment of our capability as a team might have led us to collaborate more effectively, looking beyond our immediate familiar circle of associates and strengthening our team by adding a researcher with demonstrable expertise in survey methodology or in the content area we were investigating. Consequently, we might have become heightened in our awareness of, and appreciation for, how to design a survey instrument and test its validity; how, if using an existing validated psychometric instrument, to test its performance in a new cohort of participants; how to determine an a-priori sample size and why that is important; how to sample properly to ensure representativeness and generalisability; how to mitigate the risks of self-selection bias, response bias, and sampling error; how to clearly identify the population of interest and how best to access them; how to navigate the increasing perils of online data collection and the challenges posed to the integrity of response data; and finally, how to correctly analyse and interpret the data and findings.
But we didn’t pause, and consequently, hard lessons were learned. Yes, it was published, and in time, modestly cited by others. But it was probably not good science. We could have done better; we should have been better. This author does not claim to now have expertise in survey design; one can’t claim expertise in every research methodology, and for me, probably not even in one. The author does, however, claim insight into what he doesn’t know, together with a learned appreciation for embracing reflection and collaboration to ensure true methodological expertise is present from the outset.
Areas of survey research methodology on which we might reflect
Our collective experience as an editorial team has enabled us to identify recurring patterns within survey research methodology, and with them the areas in most need of discussion and methodological reflection. I summarise those of particular importance below, but emphasise that the shortlist is not exhaustive. I offer some guidance and a few recommendations but avoid drifting into specific methodological guidance, as this can be found elsewhere, explained more cleverly and clearly by experts – the very people we should be looking to collaborate with and learn from. The onus remains with researchers to conduct their methodological homework.
Sampling and recruitment
We have on many occasions received submissions in which the instrumentation and analysis were robust but sadly undermined by an unclear population of interest and flawed sampling and recruitment strategies. Many survey studies submitted to Paramedicine make no mention of a-priori sample size calculations, instead referring to being ‘exploratory’ in nature, making it difficult to assess the true intent of the study and its statistical power. Increasingly common are broad surveys of ‘the paramedic profession’, domestically or internationally, which offer potential for detailed insights but are fraught with undefined populations, afford minimal control over who accesses the dispersed survey web link, and are at high risk of sampling error. Also problematic is the rise of recruitment via social media, which, whilst opening a new avenue of access to potential participants and offering some benefits including larger samples, cost-effectiveness, and efficiency, may exacerbate sampling and response issues that are rarely demonstrably mitigated in design or discussed when reporting. 12 These issues highlight the importance of identifying clear sampling frames, calculating and reporting response rates transparently, and using strategies such as non-responder follow-up to assess for material differences between responders and non-responders.
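To make the a-priori step concrete, the sketch below shows one common form such a calculation might take, assuming a survey aiming to estimate a simple proportion within a defined sampling frame; the population size, margin of error, and confidence level are purely illustrative, and other designs will require different approaches.

```python
import math

def required_sample(population: int, margin: float = 0.05,
                    z: float = 1.96, proportion: float = 0.5) -> int:
    """A-priori sample size for estimating a proportion (Cochran's formula),
    followed by a finite population correction."""
    n0 = (z ** 2) * proportion * (1 - proportion) / margin ** 2
    n = n0 / (1 + (n0 - 1) / population)  # finite population correction
    return math.ceil(n)

# e.g. a register of 2000 paramedics, 5% margin of error, 95% confidence
print(required_sample(2000))  # -> 323 completed responses needed
```

Note that the target is completed responses; the anticipated response and attrition rates then determine how many invitations must actually be issued.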
Finally, the emerging issue of ‘survey fraud’, referred to earlier in these writings, poses increasing risk, particularly in the context of these large profession-based surveys. 8 We recommend authors read widely, engage with emerging guidance on strategies to combat such risks, and report measures implemented to mitigate them in the final submission.7,10
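By way of illustration only, and assuming a raw survey export with hypothetical column names, the screening described in that guidance might begin with simple heuristics such as implausibly fast completion times and duplicated responses; this is a sketch, not a validated fraud-detection method.

```python
import pandas as pd

def flag_suspect_responses(df: pd.DataFrame) -> pd.DataFrame:
    """Flag common online-survey fraud signals; column names are hypothetical."""
    flags = pd.DataFrame(index=df.index)
    # 'Speeders': completed far faster than the typical respondent
    flags["speeder"] = df["duration_seconds"] < 0.33 * df["duration_seconds"].median()
    # Duplicated contact details suggest repeat or automated submissions
    flags["duplicate_email"] = df["email"].duplicated(keep=False)
    # Identical free-text answers across 'different' respondents
    flags["copied_text"] = df["free_text"].notna() & df["free_text"].duplicated(keep=False)
    out = df.copy()
    out["suspect"] = flags.any(axis=1)
    return out
```

Such flags are prompts for manual review rather than grounds for automatic exclusion, and whatever criteria are applied should be decided a-priori and reported transparently.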
Use of validated survey instruments and psychometric measures
We have witnessed a proliferation of survey research in paramedicine in which an existing validated survey instrument has been used for data collection. This is of course reasonable and, where suitable, a recommended approach, but few instruments, particularly psychometric instruments, have been developed and validated using paramedic populations. There are three primary concerns to which I wish to draw attention. First, it appears increasingly common for teams of paramedic researchers to use validated instruments without apparent collaboration with a researcher or clinician who possesses demonstrable expertise in the constructs and domains of a given instrument, or in its application and interpretation in a research context. Paramedicine has previously published editorial content arguing for paramedicine researchers to collaborate more widely in a targeted manner to enhance research quality, and we echo those sentiments again here in the context of survey-based research. 13 This concern is often seen, for example, in survey-based studies investigating the mental health of paramedics, where validated instruments and inventories measuring concepts such as burnout and coping, or conditions such as post-traumatic stress disorder, are deployed seemingly without specialist consultation and collaboration. This creates a risk of inappropriate instrument selection and outcome measurement, methodologically flawed application, and erroneous interpretation of results, creating ethical and quality concerns. 14 The importance of determining the most appropriate outcome measure has been discussed recently in Paramedicine by Eastwood et al. 15 Selection of the most appropriate instrument to measure the desired outcome is equally important, and a process with more complexity than is often acknowledged. Researchers may find value in increasing the systematicity of the instrument selection process by engaging with a resource such as the consensus-based standards for the selection of health measurement instruments (COSMIN). 16 COSMIN is an initiative of an international multidisciplinary team of researchers with backgrounds in epidemiology, psychometrics, qualitative research, and health care, and with expertise in the development and evaluation of outcome measurement instruments; it aims to improve the selection of outcome measurement instruments in both research and clinical practice by developing tools for selecting the most appropriate instrument. 16
Second, it seems common for a validated instrument to be altered. Undoubtedly, this is done with good intention, in recognition of differences in the new population to whom it will be administered and how it might be interpreted. There is often, however, an under-appreciation of the potentially serious impact such changes have on an instrument's validity and subsequent performance in the new study. This is not to say adjustments to a validated instrument cannot be made, but rather to highlight the need to acknowledge them, to compensate methodologically (and statistically) for the changes, and to report them transparently.
Finally, many authors appear to neglect the importance of testing a validated instrument's performance in the new population of interest. It may have performed well in the original participant cohort, but it may not do so in a cohort consisting of people of a different discipline, education, age, nationality, language, and so forth. Performing appropriate statistical analysis to examine consistency, validity, and reliability, and reporting these transparently, is fundamentally important.
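Internal consistency is only one of the properties worth re-examining, but it illustrates how little effort the check can require. Below is a minimal sketch of re-estimating Cronbach's alpha in a new cohort, using simulated Likert-style data in place of real responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha; rows are respondents, columns are scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

# Simulated correlated responses: 120 respondents, 8 items on one construct
rng = np.random.default_rng(0)
latent = rng.normal(size=(120, 1))
scores = latent + rng.normal(scale=0.8, size=(120, 8))
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

Reporting the coefficient obtained in the study cohort alongside the value published for the original validation cohort is a simple step toward the transparency described above.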
Use of non-validated instruments
The most common scenario in submissions to Paramedicine is that in which an existing validated survey instrument could not be identified, resulting in a survey instrument being developed specifically for a new study. Again, this is a perfectly reasonable approach, but one that does require fundamental development steps to establish the new instrument's validity and reliability. Unfortunately, our experience is that this aspect of survey instrument design is frequently neglected, or at least poorly articulated. For example, the first step might be the establishment of ‘face’ and ‘content’ validity, beginning with piloting of the instrument in people from the population of interest and refinement over several rounds, with concurrent consultation with those with specific content expertise. Depending on the nature of the survey instrument, further assessment of validity (e.g. criterion validity and construct validity) and reliability may be necessary. 1 A detailed exposition on validity and reliability assessment is beyond the scope of this editorial, but we strongly urge paramedicine researchers to embrace the complexity of these developmental processes where appropriate in order to create robust, quality surveys representative of good science.
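As one commonly described way of quantifying the content-validity step, offered here as an illustration rather than a prescription, an item-level content validity index (I-CVI) can be computed from expert panel relevance ratings; the items, ratings, and panel size below are hypothetical.

```python
# Hypothetical relevance ratings from a panel of six content experts,
# each rating a draft item from 1 (not relevant) to 4 (highly relevant).
expert_ratings = {
    "item_1": [4, 3, 4, 4, 3, 4],
    "item_2": [2, 3, 4, 2, 3, 2],
}

for item, ratings in expert_ratings.items():
    # I-CVI: proportion of experts rating the item 3 or 4
    i_cvi = sum(r >= 3 for r in ratings) / len(ratings)
    print(f"{item}: I-CVI = {i_cvi:.2f}")  # a common threshold is >= 0.78
```

Here the second item would fall below the commonly cited threshold and be revised or discarded before further rounds of piloting.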
Collection and analysis of qualitative data in surveys
Many surveys appear to collect not just quantitative data but qualitative information, by means of free-text responses to open-ended questions or the often-seen ‘other’ option that follows a quantitative question. In these instances, two concerns are frequently flagged during review. First, analytic rigour may be present for the quantitative component but is generally under-developed and under-reported for the qualitative aspect. The description of the analysis is often limited to vague statements around thematic analysis, without a rigorous qualitative methodology being applied that addresses issues of reflexivity, trustworthiness, and credibility – it is as though those data are viewed by the researchers as not quite qualitative enough to deserve true qualitative analysis, but qualitative enough to seem useful. 17 This serves to substantially diminish the credibility of the qualitative component of the study and markedly reduce the quality of the overall paper. We argue that the collection of qualitative data by means of a survey does not negate the need for robust qualitative analysis; the problem may be that survey researchers tend to be more quantitative in orientation and have minimal qualitative training, looping us back to the earlier exposition on the importance of knowing a team's methodological blind spots and leveraging collaboration to mitigate them.
Second, there is frequently what appears to be methodological confusion in terms of why the qualitative data were collected and how they were to be used. In many instances, the qualitative component has no coherent rationale for being included in the study design, feeling like a ‘bolted-on’ afterthought that is not congruent with the aims of the study. At other times, the overall study is presented as ‘mixed methods’ but exhibits neither such a paradigm nor the actual mixing of data one would expect. We encourage those creating survey instruments or questionnaires with quantitative and qualitative questions to engage in a methodological pause, to conduct a ‘sense check’ of sorts: to reflect on whether that design is indeed appropriate; whether a theoretical congruency exists spanning the whole study; whether the qualitative data will add value; and whether the team has the expertise to manage those data in a way that avoids the creation of hurtful science. Guidance on the analysis and presentation of qualitative data gathered from surveys is freely available and should be engaged with early in the conception of a study.17,18
Reporting of survey research
It is in the reporting of survey-based research that improvement might most easily be achieved. Despite clear guidance existing to help authors improve their reporting of survey-based research, submissions to Paramedicine are frequently missing key reporting elements, and those that are present are often under-developed. This can lead to great work looking like mediocre work, and reviewers and editors can only work with what is reported. Submissions often do not include a completed reporting guideline checklist, despite this being requested in the journal's submission guidelines. Reporting guidelines have their limitations, and arguments can be made regarding their ‘fit for purpose’ given the nuance that exists in every study; however, it is clear that engagement with reporting guidelines is most likely to enhance the completeness, transparency, and quality of research. 19
At Paramedicine, we endorse the use of reporting guidelines such as those available via the EQUATOR network (https://www.equator-network.org/). Several exist specifically for survey-based research, for example, CROSS 20 and CHERRIES 21 by Sharma et al. and Eysenbach et al., respectively. Three final pieces of advice regarding these guidelines: first, they are reporting guidelines, not methodological guidance, a distinction that is often not realised by users. Second, they are not simply checklists and should not be used without context; each comes with an ‘elaboration and explanation’ document providing detailed guidance on the reporting standard. Finally, reporting guidelines provide a minimum reporting standard, so authors can still remain creative in their reporting and presentation. 22
Conclusion
Survey-based research is a valuable research methodology that, when conducted rigorously, generates helpful science of sufficient quality to make a meaningful contribution to the discipline of paramedicine. However, an apparent under-appreciation of the complexity of survey research methodology may instead be resulting in the proliferation of hurtful science, threatening its potential; publishing such research is not in the interests of the discipline or the profession, but giving voice to the issue and reflecting on it is. At Paramedicine, we argue that an uplift in survey research quality is necessary, and we urge paramedic researchers and those conducting such research into paramedicine to first reflect on its methodological complexity and then respond by embracing methodological rigour when conceiving and designing survey-based research. Paramedicine does not argue for perfect research; we’ve stated before that perfection is not possible. 2 But through methodological pauses, team reflection, effective collaboration, and a stronger appreciation of survey research as a complex methodology demanding specific expertise, we can improve the research being generated and continue the development of a high-quality body of knowledge that will ultimately move us forward. We can, and must, do better.
Acknowledgements
The author wishes to acknowledge Professor Elizabeth Donnelly and Associate Professor Kathryn Eastwood for their critical review of, and contribution to, the manuscript.
Declaration of conflicting interests
The author declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: Paul Simpson is the Editor-in-Chief of Paramedicine, and a Director of the Australasian College of Paramedicine, the primary funder and publisher of Paramedicine.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
