Abstract
Background:
There is a well-documented divide between education research and practice. In 2004, Mosteller, Nave, and Miech argued for a focus on the research abstract, particularly structured abstracts, to improve the translation of research into practice. Since their call, no study has systematically examined the quality of abstracts in education research or the degree to which structured abstracts are utilized.
Purpose:
This study addresses two questions. First, what are the characteristics of the research abstracts required by journals in the field of education research? Second, to what extent do research abstracts in the field of education research contain the basic components of a research study?
Data:
Original data are drawn from the top 150 education research journals. Data include the instructions to authors regarding abstracts for each journal (n = 150) and a random sample of abstracts (n = 189).
Methods:
Journal instructions and abstracts were coded. Codes included whether they were structured and whether they included components of a research study, such as the data or findings.
Results:
A nontrivial proportion of abstracts fail to include important components of a research study. More than one in three lacked information regarding the background, and a similar proportion lacked information on conclusions. Over one quarter omitted information regarding the data, and a similar proportion lacked information on methodology. Only 7% of the top 150 journals explicitly require a structured abstract.
Conclusions:
The quality of abstracts in educational research could be improved. Suggestions for improving abstracts, such as shifting toward structured abstracts, are offered.
Each year, education researchers produce a substantial body of research aimed at understanding and improving educational practice. Unfortunately, much of this research fails to be translated into practice. This research–practice divide has been a well-documented phenomenon for education researchers for a number of years (Broekkamp & van Hout-Wolters, 2007; Korthagen, 2007). Although numerous factors may contribute to this divide, one contributor is the limited time and resources available to stakeholders to sort through the vast array of education research. With articles frequently extending beyond 30 pages in length and often unavailable outside of libraries and subscription services (Howard, 2012), the time and resources required to identify relevant research can often be prohibitive.
Given the constraints around time and availability of articles, the research abstract often serves as an important indicator of the article’s value, determining whether a reader invests the effort to acquire and read the article in full and, in some cases, even directly influencing practice (Haynes et al., 1990). Abstracts are brief summaries of a research article, typically placed at the beginning of a paper and increasingly available online even when access to a full article is not. In a commentary in Educational Researcher in 2004, Mosteller, Nave, and Miech argued that improvements to research abstracts in education could substantively improve the accessibility of research, thereby lessening the research–practice divide.
In particular, Mosteller and colleagues (2004) argued for the adoption of a structured abstract, a formalized version of an abstract with labeled sections adhering to the typical format of research papers (Mosteller et al., 2004; Miech, Nave, & Mosteller, 2005). Since Mosteller and colleagues’ call for the structured abstract, the Institute of Education Sciences and affiliated What Works Clearinghouse have adopted the structured abstract as a recommended format for abstracts (Coalition for Evidence-Based Policy, 2005), and academics have suggested refinements to the structured abstract in education (Kelly & Yin, 2007).
Despite being over a decade out from their original call for attention to the research abstract in education, we have little empirical evidence on the state of abstracts in the field. Anecdotally, those working in the field are aware that the structured abstract advocated for by Mosteller and colleagues remains rare; however, little empirical research to date has explored the degree to which this is the case. Furthermore, and perhaps more importantly, we know little about the quality of abstracts in education research. Even if abstracts have tended to remain unstructured, great variation can exist in the quality of the abstract, with some clearly communicating key points of a study and others failing to even articulate the primary research question (Hahs-Vaughn & Onwuegbuzie, 2010; Hartley & Betts, 2009). If we know little about the use of structured abstracts in the field, we know even less about the quality of abstracts, both structured and unstructured, and the degree to which they effectively communicate the content of the research they represent.
The purpose of this study is to provide empirical evidence on the state of research abstracts in the field of education research, providing insight into the quality of abstracts while also giving an indication of the degree to which structured abstracts have been adopted in the field. In particular, I address the following research questions:
What format do research abstracts required by journals in the field of education research take? In particular, do the journals specify components to be included in the abstract, and do they require structured or unstructured abstracts?
To what extent do research abstracts in the field of education research contain the basic components of a research study?
Answering these questions has important implications for education researchers and the journals that publish their research. Understanding the quality of abstracts can point to areas where researchers and journal editors can work to improve the information contained in abstracts. Doing so may improve the accessibility of research and, ultimately, educational practice.
In the next section, I review the literature on research abstracts. I then describe the sample of research abstracts studied and the methodology utilized to address each research question. In the final sections, I present the results of the analysis and offer recommendations for researchers and journal editors for the improvement of research abstracts in the field of education research.
Background
The Importance of Abstracts
Abstracts are generally understood to be brief summaries of the research contained within a study. The University of Wisconsin Writing Center defines an abstract as “a concise summary of a larger project . . . that concisely describes the content and scope of the project and identifies the project’s objective, its methodology and its findings, conclusions, or intended results” (Writing Center, 2007). The American Psychological Association, which provides formatting guidance adopted across numerous disciplines and catalogues social science abstracts in its PsycINFO database, suggests that abstracts should “present key elements” of a study and “accurately reflect the content of the article” by including key elements such as the purpose, data, methodology, results, and implications (American Psychological Association, n.d.).
To the extent that abstracts are brief summaries of a broader paper, their content is a function of the nature of the paper. The American Psychological Association notes that the appropriate elements for an abstract will vary between research articles, discussion or theoretical pieces, and literature reviews (American Psychological Association, n.d.). Indeed, even within a category, such as empirical research articles, the content may vary based on the nature and disciplinary approach of the study. Such heterogeneity is reflected in the academic literature on abstracts. Mosteller and colleagues (2004) suggested nine categories to be included in education research abstracts. Kelly and Yin (2007) built on this framework by providing suggestions for further detail to be included within the categories. Likewise, research on abstracts across disciplines has utilized numerous checklists ranging in length from fewer than five categories to more than 20 components (Hartley & Betts, 2009). Looking across these measures, however, reveals general trends in the components suggested for an empirical research study. Although different sources may combine or separate different components, they generally all contain the following components in some fashion: background, aims, methods, data, results, and conclusions.
Despite their short length, abstracts represent a critical component of research studies. In many cases, abstracts are the first, and sometimes only, component of a study read by someone searching the literature. Although research on abstracts in education is limited, a study of medical doctors found that of research articles they accessed, the abstract was the only portion read for over 60% of the studies (Saint et al., 2000). Furthermore, a number of physicians used the content of abstracts to guide their clinical decision making, making abstracts not just an important gateway to a broader research study but a critical component in and of themselves for practice (Barry, Ebell, Shaughnessy, Slawson, & Nietzke, 2001; Marcelo et al., 2013).
In education, the What Works Clearinghouse conducts its initial review of intervention research based on the title and abstract or introduction, meaning that studies with poorly detailed abstracts may not advance to further review (What Works Clearinghouse, n.d.). Resources such as ERIC and other databases often return only the abstract of a study, making it the primary piece of information available for determining whether to invest further effort to acquire the full text. For education practitioners, such as teachers and administrators, this effort is substantial given the demands on their time and their limited access to subscription services of full texts. Even with the rise of more open-access journals, the time spent to read and digest a full-length article means that practitioners may want to do so only for articles they feel confident pertain to their needs. Consequently, for education practitioners, the abstract may be the first and only component of a research study read.
The Structure of Abstracts
Although the format of abstracts varies across disciplines and across journals, the structure of abstracts can be broadly categorized as either unstructured or structured. The unstructured abstract, which is traditionally used in the social sciences, consists of a paragraph-style summary of the research. The structured abstract, in contrast, utilizes labeled sections, often roughly corresponding to the sections of the research paper (see this paper’s abstract for an example of a structured abstract).
Attention to the structure of abstracts gained traction in the medical field in the late 1980s and early 1990s. Following recommendations from the Ad Hoc Working Group for Critical Appraisal of the Medical Literature (1987), a number of medical journals moved to adopt the structured abstract as a requirement for publication. Within 5 years of the call, there was near-ubiquitous adoption of structured abstracts in the medical sciences (Mosteller et al., 2004). Despite their widespread adoption in the medical sciences and some hard sciences, structured abstracts appear less frequently in journals in the social sciences.
In 2004, Mosteller et al. noted this absence of structured abstracts in the education research literature and argued for their adoption by journals in the field. The authors argued that the disconnect between education research and practice exists in part due to the varied number of researchers from different disciplines and methodological approaches conducting education research and the vast number of education stakeholders (from students and parents to school systems and elected officials). The structured abstract, they argued, has the potential to simplify access to and discernment of useful research by education stakeholders and to facilitate easier communication within the education research community (Mosteller et al., 2004).
The body of research on abstracts, although generally coming from outside of the field of education, supports these contentions. Research on structured abstracts suggests that they are more readable and are generally more informative than traditional, unstructured abstracts (Hartley, 2000; Hartley & Sydes, 1997; Hartley, Sydes, & Blurton, 1996; Sharma & Harrison, 2006). For instance, Sharma and Harrison (2006) utilized a differences-in-differences-type approach in which changes in abstract quality were assessed for abstracts in journals that switched from unstructured to structured abstracts as compared to those that did not. They found the use of a structured abstract to be predictive of greater gains in the quality of abstracts with regard to the detail given on various aspects of the study (Sharma & Harrison, 2006). Similarly, Hartley (2000) found structured abstracts to contain more detail on study characteristics.
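The logic of the difference-in-differences comparison described above can be sketched numerically. The quality scores below are invented for illustration only; they are not drawn from Sharma and Harrison (2006).

```python
# Illustrative difference-in-differences calculation on hypothetical
# abstract-quality scores (e.g., on a 0-10 rating scale).
# "Switchers" are journals that adopted structured abstracts;
# "controls" are journals that retained unstructured abstracts.
switchers_before, switchers_after = 5.0, 7.5
controls_before, controls_after = 5.2, 5.7

# The estimate nets out the common time trend (the controls' change)
# from the switchers' gain, isolating the change associated with
# adopting the structured format.
did_estimate = (switchers_after - switchers_before) - (controls_after - controls_before)
print(did_estimate)  # 2.0
```

The key design choice is that journals that never switched serve as a counterfactual for how abstract quality would have trended absent the format change.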
In addition to being more informative, the research suggests structured abstracts may be easier to digest for users. Hartley and Sydes (1997) compared readability of structured and unstructured abstracts, finding that readers rated structured abstracts as easier to read and that when authors revised traditional abstracts into structured ones, the Flesch and Gunning readability scores improved. In another study, researchers examined the time and accuracy with which readers could extract information from an abstract, finding that structured abstracts facilitated both faster and more accurate information retrieval (Hartley et al., 1996).
Despite these advantages of the structured abstract, many journals within the social sciences do not utilize such a format, and their abstracts may consequently be less effective than they could otherwise be. In a review of abstracts in the social sciences, Hartley and Betts (2009) reviewed journal articles from 53 different social science journals. The authors found that nearly half of abstracts contained no background or contextual information, and over one in five lacked information on the conclusions of the study (Hartley & Betts, 2009).
With regard to journals in the field of education research, the evidence is even scarcer. In the study by Hartley and Betts (2009), fewer than half of the journals included were related to the field of education. Furthermore, the journals and articles were not selected randomly, and due to the location of the authors, the majority of the education journals examined were focused on Great Britain. Few other studies of abstracts in education exist, and those that do face serious limitations. For instance, Hahs-Vaughn and Onwuegbuzie (2010) found that many education research abstracts lacked information on data or other important components, such as the research question; however, their work drew on abstracts from a single education research journal, limiting generalizability (Hahs-Vaughn & Onwuegbuzie, 2010; Hahs-Vaughn, Onwuegbuzie, Slate, & Frels, 2009).
The purpose of this research is to provide empirical evidence on the state of abstracts in the field of education research. In particular, I examine the degree to which the abstracts contain the basic elements necessary to effectively communicate the substantive content of the research study and the common format (structured or unstructured) of abstracts required by education journals.
Data
The goal of this study was to speak to the state of abstracts in the field of education research, broadly defined. As such, I began by identifying the top 150 journals categorized under “Education and Educational Research” as rated by the 2014 Thomson Reuters InCites Journal Citation Report (Thomson Reuters, 2015). The InCites search ranks journals based on their journal impact factor. While the merits of the journal impact factor are debated, the measure provides a metric of the frequency with which articles in a journal are cited (Hubbard & McVeigh, 2011) and, for the purposes of this study, provides a proxy for relevance to the field of education research. The resulting journals represented a wide range of educational subdisciplines (policy, teaching and learning, and psychology, among others) as well as a range of contexts (K–12, higher education, and international education, among others). The full list of journals is included in Appendix A.
From the list of journals, I compiled a data set of abstracts published in the 2014 calendar year. Specifically, the ProQuest database was searched for each journal by name, and all available abstracts for articles published in the journal were downloaded. Approximately 10% of the journals did not return results in the initial ProQuest search. In some cases, this resulted from a lack of indexing by ProQuest, and in other cases, it resulted from typographical differences between the name of the journal and its listing in ProQuest. Where possible, abstracts for these journals were collected through searches using revised versions of the journal name or through searches of databases external to ProQuest. The final data set included abstracts (n = 5,950) for all but seven of the original 150 journals. From this set of abstracts, 200 articles were randomly sampled for detailed analysis. Articles that lacked abstracts or were not of an empirical nature (such as an editorial piece) were removed, resulting in an analytic sample of 189 abstracts. In the analyses that follow, I focus on characteristics of the journals (n = 150) as well as characteristics of the content of the sampled abstracts (n = 189).
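The sampling procedure described above can be sketched as follows. This is an illustrative reconstruction, not the author's actual code; the record structure and field names are assumptions, and the synthetic records stand in for the downloaded ProQuest data.

```python
import random

# Stand-in for the full set of downloaded abstract records (n = 5,950).
# The dictionary fields here are hypothetical.
abstracts = [
    {"journal": f"Journal {i % 143}", "text": f"Abstract {i}", "empirical": True}
    for i in range(5950)
]

random.seed(2014)  # fix the seed so the draw is reproducible

# Draw a simple random sample of 200 articles without replacement ...
sample = random.sample(abstracts, 200)

# ... then drop articles lacking an abstract or not empirical in nature
# (e.g., editorials), leaving the analytic sample (n = 189 in the study).
analytic_sample = [a for a in sample if a["text"] and a["empirical"]]

print(len(sample))  # 200
```

Because every synthetic record here is empirical, no records are dropped in this sketch; in the actual data, the filtering step reduced the sample from 200 to 189.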
Methods
Journals
I began by analyzing characteristics of the journals (n = 150) with a specific focus on the nature of the abstract required by the journal. To do so, I drew on information from the manuscript submission page or author instruction page on each journal’s website. The instructions to authors regarding abstracts were then coded. First, the instructions were coded for whether they made note of a required structured abstract, meaning that they either used the term structured abstract or specified that authors must organize their abstract under content headings. For instance, Educational Administration Quarterly specified that “each manuscript should include a structured abstract” that should “include very brief subheaded sections such as Purpose, Research Methods/Approach (e.g., Setting, Participants, Research Design, Data Collection, and Analysis), Findings, and Implications for Research and Practice.”
For abstracts that were unstructured, the instructions were coded for whether they provided detail as to specific content to include in the abstract. Some author instructions specified that abstracts should include detail on the data, methodology, or findings, whereas other sets of instructions did not provide such guidance. For instance, the journal Learning and Instruction, although not requiring a structured abstract, noted that “the abstract should state briefly the purpose of the research, the principal results and major conclusions.”
One limitation of this methodology worth noting is that it focuses on the instructions available to authors via the journal website. In some cases, journals likely communicate further information regarding the requirements and format for an abstract during the submission process, the peer review process, or the editing process. For instance, the Journal of Research in Reading states that it requires a structured abstract on the author submission page but publishes abstracts in an unstructured format. Such practices further down the submission pipeline are not captured in the coding of manuscript submission pages or author instruction pages. Therefore, the coding utilized in this study represents a picture of the initial instructions regarding abstracts available to authors. As will be shown in the results, however, the proportion of journals requiring structured abstracts and the proportion of articles using structured abstracts are similar, suggesting that there are not significant differences between these initial instructions and the characteristics of the abstracts that are ultimately published.
Abstracts
In addition to coding characteristics of the journal instructions, the analytic sample of journal articles was coded for content and structure (n = 189). Each abstract was evaluated for its inclusion of basic components of a research study. I utilized a six-item scale adapted from Hartley and Betts (2009) that included the following categories: background, aims (purpose/question), method, data, results/findings, and conclusions (discussion/implications). Note that the names of the categories are secondary to the purpose they serve. Some papers or abstracts may call the aims section the purpose or research question section. Background information may often be described as a literature review or prior research. The goal was to identify not the use of a specific term but the inclusion of information that generally adheres to the nature of a given category.
The background category included any mention of previous research, policy context, or other relevant prior knowledge. The aims category consisted of purpose statements, explicit research questions, or other statements of the goals of the paper. The method category consisted of mentions of specific statistical methods, qualitative methods, or other details on the procedures of analysis. The data category consisted of references to specific data sets, details about participants, or other characteristics of the unit of analysis. The results/findings category consisted of statements regarding the findings of the study, and the conclusion category consisted of statements interpreting the meaning, application, or implications of the findings.
Each abstract was classified in a binary fashion as either containing the category or not. In cases where the nature of the article did not entail the use of a given section, those categories were coded as missing. Attempts were made to code liberally, erring on the side of giving credit for a particular category rather than not. For instance, an abstract that stated that “policy implications are discussed” would be given credit for the conclusion category. Likewise, an abstract that said “qualitative analysis was conducted” would be given credit for the methods category despite not providing specific details on the type of analysis. In this regard, the findings may be interpreted as upper-bound measurements of the inclusion of specific categories. In addition to content, abstracts were coded for whether they were formatted as a structured abstract. Abstracts that included specific section headings were counted as structured, and those that lacked such headings were coded as unstructured. A detailed description of the guidelines for coding of the abstracts is included in Appendix B.
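The binary coding scheme described above can be represented concretely. In this minimal sketch, the six category names come from the paper, but the coded values are invented for illustration; each abstract maps each category to 1 (present), 0 (absent), or None (not applicable), and the proportion of abstracts including a component is computed over non-missing codes only, mirroring the note to Table 1.

```python
CATEGORIES = ["background", "aims", "method", "data", "results", "conclusions"]

# Hypothetical codes for three abstracts: 1 = category present,
# 0 = category absent, None = category not applicable to the article.
coded = [
    {"background": 1, "aims": 1, "method": 1, "data": 1, "results": 1, "conclusions": 0},
    {"background": 0, "aims": 1, "method": None, "data": None, "results": 1, "conclusions": 1},
    {"background": 0, "aims": 1, "method": 1, "data": 1, "results": 1, "conclusions": 1},
]

def proportion_including(codes, category):
    """Share of abstracts coded 1 for `category`, among non-missing codes."""
    valid = [c[category] for c in codes if c[category] is not None]
    return sum(valid) / len(valid)

print(proportion_including(coded, "background"))  # 1/3 for this toy data
```

Coding "not applicable" as missing, rather than as absent, keeps the denominators honest for article types (such as theoretical pieces) where a category like data would not be expected.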
All coding was done by the author and a research assistant. A random subset of abstracts was coded by both individuals to check for interrater reliability. This cross-coding was conducted in a blinded manner, such that neither coder had knowledge of the other coder’s scores. The joint probability of agreement was 86% for all codes (n = 70) assigned.
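The reliability statistic reported above, the joint probability of agreement, is simply the share of paired codes on which the two coders matched. A minimal sketch follows; the code vectors are invented for illustration.

```python
def joint_agreement(coder_a, coder_b):
    """Proportion of paired codes on which two coders assign the same value."""
    if len(coder_a) != len(coder_b):
        raise ValueError("Coders must rate the same set of items.")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical binary codes for ten items from each coder.
coder_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
coder_b = [1, 1, 0, 1, 1, 1, 1, 0, 0, 1]

print(joint_agreement(coder_a, coder_b))  # 0.8
```

Note that, unlike chance-corrected statistics such as Cohen's kappa, joint agreement does not adjust for the agreement two coders would reach by guessing alone.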
Results
In this section, I present the findings regarding the degree to which sampled abstracts include basic components of an empirical research study as well as the degree to which education research journals require structured abstracts. In short, I find that a nontrivial proportion of such abstracts fail to include important components of an empirical study and that research abstracts in education remain largely unstructured.
Of the top 150 education journals in the field of educational research, I find that only 11 explicitly require a structured abstract. This represents approximately 7% of the journals examined. Furthermore, of the journals that do not require a structured abstract, the vast majority (approximately 83%) do not provide specific instructions regarding the components that should be included in the research abstract. This means that over three quarters of the top education journals neither require a structured abstract nor provide guidance as to the content of the abstract in the initial author submission page. Such lack of guidance increases the probability of variation in the format, quality, and content of actual abstracts.
Indeed, analysis of the sample of actual abstracts confirms the lack of structured abstracts and demonstrates the expected variation in the components included in the abstract. First, analysis of sampled abstracts confirms that the use of structured abstracts is rare in education research. Approximately 7% of journals required structured abstracts, and in line with this requirement, approximately 6% of the sampled abstracts utilized a structured abstract. This suggests that journals are not requiring structured abstracts at later stages of the editorial process and that authors are not opting to utilize them when not required by the journal.
Turning to the content of abstracts, I find that a nontrivial proportion of abstracts fail to include important components of a research study. Although authors tend to include statements of the aim or purpose of their research study as well as statements of the results or findings, they are significantly less likely to include information regarding background, such as prior literature, or to include a statement regarding the conclusions or implications of their study. As shown in Table 1, over one in three abstracts lacked information regarding the background, and a similar proportion lacked information on the conclusions/implications of the study. Furthermore, over one quarter of abstracts failed to include information regarding the data, and a similar proportion lacked information on the methods utilized in the analysis.
Table 1. Proportion of Abstracts Including Components of a Research Study
Note. Sample size varies between components as certain categories were not applicable for all research studies.
Although the number of structured abstracts appearing in the sample was small, a comparison between the content of structured and unstructured abstracts suggests differences between the content included in each. As shown in Table 1, structured abstracts universally included all components of an empirical research study, with the exception of the Background section. To be sure, this difference is to be expected and arises almost mechanically from the required subheadings and sections dictated by a structured abstract.
Conclusions
The findings of this study point to important omissions in many abstracts, particularly in the background and conclusion components but also in the key areas of data and methodology. These shortcomings align with prior research on abstracts in the social sciences (Hahs-Vaughn & Onwuegbuzie, 2010; Hartley & Betts, 2009). Importantly, these omissions may have real implications for the way in which research is accessed and utilized. As previously noted, abstracts serve as a gateway not just for indexing and evaluation of articles by researchers but also for access to research by practitioners. As such, abstracts that do not communicate sufficient detail about the study may result in a decreased probability that the study makes it into the hands of those most equipped to implement the insights of the research.
One potential solution may involve greater attention to the content of abstracts by authors and journal editors. The results of this study suggest that the majority of journal submission guidelines provide no guidance to authors regarding the content of an abstract. One simple and quick change would be to revise such submission guidelines to provide suggestions for the components to be included in an abstract. However, research suggests that the mere provision of instructions regarding abstract content may not be effective at altering their quality (Pitkin & Branagan, 1998). This suggests that editors and reviewers may also need to include a more explicit focus on the abstract during the review and editing process.
In addition to these suggestions, it is possible that a structural change to abstracts could also address issues in abstract quality and accessibility. By design, structured abstracts prompt authors to include a broader range of components in an abstract. It is more difficult to omit a salient portion, such as methodology, if the abstract requires a labeled methodology component. Prior research on abstracts in other disciplines confirms the finding that structured abstracts provide more information on the components of the study (Hartley & Benjamin, 1998).
Although I recommend that more explicit instructions be given to authors regarding the content of abstracts, that editors and reviewers more closely analyze submitted abstracts, and that structured abstracts be used, it must also be noted that the applicability of abstract components differs across subdisciplines of the field. For instance, requiring a Data section may not always be appropriate for a manuscript related to a topic such as the philosophy of education. Nevertheless, these categorizations, when interpreted broadly and used judiciously by journals, have the potential to improve the quality of abstracts.
Over a decade ago, Mosteller and colleagues (2004) initiated a call for the adoption and use of the structured abstract in education research. The evidence presented in this study demonstrates that this call has largely gone unheeded. Of the top journals in the field, just more than one in 20 requires a structured abstract, a proportion that is reflected in the abstracts themselves. Unlike the medical sciences, which responded to a call for structured abstracts in their discipline with near-universal take-up within several years (Mosteller et al., 2004), the field of education research continues to use the traditional, paragraph-style abstract.
In addition to limiting the quality of abstracts, the lack of use of structured abstracts in education research may also serve as a hindrance to the dissemination and use of educational studies. Mosteller et al. (2004) argued that the structured abstract would result in more efficient navigation of research, thereby reducing the time required to gather useful information. Indeed, the research supports this claim, having demonstrated that readers can find information more quickly in a structured abstract and make fewer errors in gathering such information (Hartley et al., 1996; Hartley & Sydes, 1997). This suggests that both education practitioners looking to utilize research in practice and researchers conducting literature reviews and meta-analyses are operating less efficiently than they otherwise could be if more journals utilized structured abstracts.
As a community of education researchers, we are then, by and large, underperforming when it comes to research abstracts. This underperformance potentially impedes our ability to conduct our work as efficiently as possible while also hampering the ability of education practitioners to find and make use of our research to improve outcomes for students. As the volume of education research continues to expand and as more of this research becomes freely available to practitioners through open-access journals, quality abstracts that are easily interpretable will continue to be an important mechanism for facilitating access to research.
The research–practice divide is undoubtedly caused by multiple factors and will hardly be entirely remedied by increased attention to the quality and format of abstracts. Nevertheless, taking steps to improve the quality of research abstracts, such as by providing more guidance to authors or by adopting a structured format, does represent small steps in helping research better move to practice. Importantly, as compared to other causes of the divide, this step is one that is directly within the hands of the education research community.
Appendix A
Appendix B
Acknowledgements
The author is grateful to Ann Kellogg for helpful research assistance on this project.
Author
F. CHRIS CURRAN, PhD, is an assistant professor of public policy at the University of Maryland, Baltimore County (UMBC) School of Public Policy, 1000 Hilltop Circle, Baltimore, MD, 21225.
