Abstract
In this content analysis, a research team examined the articles in 15 journals published over a span of 10 years to obtain an overview of the current field of literacy. Researchers coded the topics, theoretical perspectives, designs, and data sources in a total of 4,305 literacy-related articles. Analyses revealed statistically significant differences in the topics, perspectives, designs, and data sources among literacy articles in journals written for practitioners, those written for researchers, and those written for both practitioners and researchers. Although the topics in journals written for practitioners somewhat reflected the content of those written for researchers, results demonstrated a need to diversify methods used in articles published in journals written for researchers. We argue that this diversity is likely to enhance the ability of research to build the knowledge base in our field.
The field of literacy research has developed in breadth and depth across the 20th and 21st centuries (Beach & O’Brien, 2018; Gaffney & Anderson, 2000; Kamil et al., 2011). Indeed, the term literacy—an inclusive term that refers to reading, writing, language, communication, and more—is relatively new in the history of reading research, a fact that itself reflects the limited attention historically given to broad views of literacy. Recently, advances in technology have led to investigations of new literacies on topics such as online reading comprehension (e.g., Leu et al., 2015) and multimodal literacy studies on topics such as social media and video (e.g., Ajayi, 2015). Scholars have refined research on adolescent literacy instruction, moving from content-area reading to disciplinary literacy (Shanahan & Shanahan, 2008). Meanwhile, topics of historical interest, such as vocabulary and comprehension, continue to receive research attention as researchers increase their understanding of these processes (Lapp & Fisher, 2018).
Literacy researchers have likewise witnessed expansion in research methodologies over the last several decades (Beach & O’Brien, 2018; Duke & Mallette, 2001; Gaffney & Anderson, 2000; Guzzetti et al., 1999; Reutzel & Mohr, 2015). Qualitative approaches to reading research did not fully emerge until the 1980s (Pearson, 2004), which led to the paradigm wars, in which quantitative and qualitative approaches were pitted against one another (Kamil, 1995; Kamil et al., 2011; Stanovich, 1990). This evolution of research methodologies was occurring in educational research generally: over time, the dominant paradigm has shifted from behaviorism to cognitive psychology to social, cultural, and critical views of teaching and learning (Gaffney & Anderson, 2000; Unrau et al., 2018; Wiley & Jee, 2010). Although educational research generally, and literacy research specifically, experienced the “social turn” in the 1980s and 1990s, the field has not necessarily moved away from cognitive psychology; many programs of research continue to use cognitive views of reading instruction and learning. In fact, Purcell-Gates et al. (2016) presented a theoretical framework that integrates cognitive and sociocultural perspectives. They described their framework as follows:

It encompasses both the cognitive in-the-head perspective of reading and reading research and the sociocultural perspectives that focus on literacy as social and cultural practices, situated within contextual layers of power relationships, histories, values, attitudes, and social relationships. This is not an additive view, however, that simply links two different perspectives. Rather, it is a reconfiguration and integration of two perspectives that place cognition, skills teaching, and cognitive-centered research on teaching within sociocultural contexts. (p. 1218)
Purcell-Gates et al.’s (2016) theoretical framework exemplifies that the field of literacy has not shifted away from a cognitive paradigm; instead, the field has expanded to integrate sociocultural and critical perspectives.
The field of literacy also experienced the “reading wars,” in which code-based approaches to teaching beginning reading were pitted against whole-language approaches (Pearson, 2004; Stahl, 1998). These “wars” led to contention and fragmentation in the field (Duke & Mallette, 2001; Kamil, 1995; Stanovich, 1990). This war has reemerged, especially in popular media, in current debates about the “science” of teaching reading. Newspaper op-eds (Pimentel, 2018; Wexler, 2018) and a popular audio documentary (Hanford, 2018) have suggested that teachers are ignorant of, or resistant to, the science of teaching reading; by science, they mean systematic phonics instruction. Meanwhile, literacy researchers generally promote both explicit phonics instruction and authentic reading and writing of connected text for children learning to read (Pressley & Allington, 2015; Purcell-Gates et al., 2016). This current war over the science of teaching reading and teachers’ knowledge of that science highlights the need for different communities (i.e., researchers, practitioners, and policy makers) to have access to the research on effective practice in literacy.
This emphasis on the science of reading is an extension of the focus on scientific research that became codified into policy with No Child Left Behind and Reading First. In this codification, scientific research meant experimental or quasi-experimental research, thereby marginalizing all other methodologies. These movements unfortunately translated, in the minds of some educators, researchers, and policy makers, into a hierarchy of methodologies: Quantitative research is superior to qualitative research, and experimental and quasi-experimental designs are the “gold standard.” Meanwhile, scholars committed to enhancing the lives, experiences, and outcomes of teachers and learners recognize that all methodologies have strengths and limitations and that many pervasive problems in education cannot be addressed by a narrow view of what is classified as scientific research. Indeed, complex problems require complex research designs and methodologies, which lead to new theories to guide future research and practice.
For this reason, research methodologies have expanded beyond simply moving toward appreciation of qualitative approaches. The field has seen increased use of mixed-methods (Creswell & Plano Clark, 2017; Kamil et al., 2011; Onwuegbuzie & Mallette, 2011) and design-based research (Bradley & Reinking, 2011; Fishman et al., 2013). At the same time, quantitative researchers have made advancements in statistical modeling techniques (e.g., latent class modeling, growth mixture modeling), and qualitative researchers have expanded the depth and scope of qualitative inquiry (e.g., qualitative meta-analytic approaches, discourse analysis).
This expansion has symbiotically coincided with an expansion of theoretical perspectives for understanding and studying literacy teaching and learning (Beach & O’Brien, 2018; Stornaiuolo et al., 2018). Literacy researchers, for example, currently use and appreciate sociocultural, constructivist, poststructural, critical, pragmatic, situated, posthumanist, cognitive, and phenomenological approaches, among others (Unrau et al., 2018). Indeed, prominent scholars encourage literacy researchers to use a range of theoretical perspectives to add nuance and depth to their inquiry to approach the complex problems that face education today (Dillon & O’Brien, 2018; Dressman & McCarthey, 2011; Purcell-Gates et al., 2016).
The number of journals that publish literacy research has likewise expanded. For example, in 1950, if one was interested in publishing in a reading journal, options were limited to what are today The Reading Teacher (RT; Bulletin until 1951) or Language Arts (LA; Elementary English in the 1950s). Reading Research Quarterly (RRQ), Journal of Literacy Research (JLR; previously Journal of Reading Behavior), Literacy Research and Instruction (LRI; previously Journal of the Reading Specialist and then Reading Research and Instruction), and Research in the Teaching of English (RTE) were not created until the 1960s. The 1970s added the Journal of Research in Reading (JRR) and Reading Psychology (RP). Then Reading and Writing (R&W) and Reading and Writing Quarterly (RWQ) emerged in the 1980s. Scientific Studies of Reading (SSR) was not founded until 1997.
This expansion and diversity of journals is supportive for the field of literacy as it provides space for the growing number of topics, theoretical perspectives, and methodologies in literacy research, allowing the research community to better address current problems of educational practice. The existence of numerous journals allows for multiple and varied venues to report research, share practical applications of effective practice, engage in scholarly debate, and generally build knowledge and enhance instruction.
Purpose Statement
While there are many affordances offered by this expansion of topics, perspectives, methodologies, and outlets, there are concerns as well, such as the one raised by Duke and Mallette first in 2001 and again in 2011: Is the field moving toward appreciation for diverse theoretical perspectives and methodologies, or is the field fragmenting, “splintering off into subfields” (Duke & Mallette, 2011, p. xviii)? That is, are specific literacy associations and journals aligned with particular perspectives and methodologies and not others? Kamil and colleagues (2011) explained, “Differential preferences within professional organizations render a portrait of diversity or division, depending on one’s perspective” (p. xx). Division, or fragmentation, is problematic because it precludes a robust, comprehensive understanding of the field of literacy, or “ecological balance,” as Duke and Mallette (2001) and Pearson (2004) called it. Beyond an ecological balance across the field of literacy, it is also important for policy makers, researchers, and practitioners (i.e., teachers, administrators, reading specialists) to have access to a robust and comprehensive understanding of literacy processes and instruction.
In the current study, we conducted a content analysis to investigate whether there is presently parity or fragmentation in the topics, theoretical perspectives, designs, and data sources of literacy articles across 15 journals that represent different professional associations and serve different audiences. We also explored whether these topics, theoretical perspectives, designs, and data sources differed for journals that targeted researchers, practitioners, or both. The following research questions guided our investigation:
1. What are the topics, theoretical perspectives, designs, and data sources in the articles published in 15 journals over 10 years?
2. What are the target audiences of the 15 literacy journals?
3. Is there a relationship between the topics, theoretical perspectives, designs, and data sources and the target audiences of the journals in which they are published?
Previous Content Analyses of Literacy Journals
Researchers have previously completed content analyses of literacy journals. However, they are most often longitudinal analyses of a single journal or a single organization’s journals rather than our current approach of studying the content of numerous journals over a period of time. For example, Baldwin et al. (1992) completed the first of several content analyses of publications of the National Reading Conference (NRC), now the Literacy Research Association (LRA). Baldwin et al. studied the contents of the Journal of Reading Behavior (now JLR) and the NRC Yearbook from their inceptions, in 1969 and 1952, respectively, to 1991. They analyzed the 2,139 articles in those publications during that time period, documenting the topics studied and analytic procedures. The most frequent topics studied included comprehension, college reading, reading improvement programs, word recognition, and adult reading. The most frequent analyses included ANOVA, armchair, descriptive statistics, description, and correlation.
Dunston et al. (1998) analyzed the contents of the NRC Yearbook from 1975 to 1995. They documented the topics investigated and the methodologies used. The most common topics identified were adult/college/family literacy, beginning/early/emergent literacy, comprehension, instruction, and students. To study the methodologies used, Dunston et al. analyzed quantitative and qualitative studies separately. The most common qualitative methodologies were constant comparative analysis, categorization, and case study, though nearly half of the qualitative studies’ analysis procedures could not be determined. The most common quantitative methodologies were ANOVA, t-test, correlation, and multiple regression.
Guzzetti et al. (1999) conducted a content analysis of LRA’s JLR after the journal published its 30th volume. They examined the topics and populations studied as well as the methods used over time. They found that the theme “the reader” was studied more than the other topics of “the text,” “instruction or programs,” or “tests and assessment.” Elementary students were studied more than all other age groups combined. The most frequently used methods were correlational, experimental, quasi-experimental, text construction/development/validation/critique, and descriptive/informational, with an increase in qualitative research over the course of the investigation.
Like LRA, the Association of Literacy Educators and Researchers (ALER) has conducted content analyses of its publications. To celebrate its 50th year, ALER commissioned a content analysis of its journal, LRI. Morrison et al. (2011) analyzed 1,099 articles published in LRI from 1961 to 2011. They documented topics and participants studied as well as the methodologies used. Frequently studied topics included comprehension, teacher practices, assessment and measurement, and content-area reading. The researchers noted that the common topics of inquiry varied little across the decades studied. The most common participants included teachers, primary-grade students, college students, intermediate students, and struggling readers. Nonempirical pieces (defined by Morrison et al. as “non-research”; e.g., theoretical articles, literature reviews, research critiques) composed 35% of the articles. Of the empirical articles, the most frequent methodologies included ethnographies, quasi-experimental, experimental, survey, and case study. The researchers found that the frequency of nonempirical articles in LRI decreased during the years studied (1961–2011).
Similarly, Reutzel and Mohr (2015) conducted a content analysis of RRQ, the flagship journal of the International Literacy Association (ILA), on its 50th anniversary. They investigated the most frequently studied topics and populations and looked for trends in analysis. In 1,370 articles, the most commonly studied topics were instructional practices and programs; comprehension; reading research, history, and reviews; beginning reading skills/word recognition; assessment; and vocabulary. The most frequently studied populations included primary grades, intermediate grades, college/adults, early childhood (pre-K, K), and international. They found an increase in empirical research over time, with fewer reviews and syntheses being published in the journal. Beginning in 1985, qualitative studies increased in prevalence, with an increasing balance of quantitative and qualitative research being published in the journal in the 2010s, though quantitative research continues to be most common.
These content analyses demonstrated that methodologies have become more diverse over time: quantitative studies once dominated reading research, and qualitative research has since emerged as a common methodology. The journals reviewed have published primarily research articles but also nonempirical articles such as theoretical pieces and research reviews. Common topics of research across analyses include comprehension, instruction, early reading, and adult reading. However, these analyses have focused on the contents of one or two publications associated with particular organizations; few studies have examined the contents of multiple journals at a single point in time to capture the “lay of the land” of literacy research, and content analysts have rarely examined publications across numerous literacy organizations.
Our research team previously completed a content analysis (Parsons et al., 2016). Instead of focusing on one journal or organization, we conducted a content analysis of the topics, theoretical perspectives, designs, and data sources in nine journals from 2009 to 2014: JLR, JRR, LRI, RP, RRQ, R&W, RWQ, RTE, and SSR. The most frequent topics of articles included comprehension, bilinguals/English language learners (ELLs), phonemic/phonological awareness, instruction, and writing. Researchers did not specify theoretical perspectives in 76% of articles. The most common reported theoretical perspectives that were identified included sociocultural, cognitive/information processing, and new literacies. The most frequently used designs were general quantitative, experimental/quasi-experimental, nonempirical, mixed methods, and case study. The most common data sources included student assessments, observations/video transcripts/field notes, and surveys/questionnaires.
The current content analysis, which encompasses the data from Parsons et al. (2016), expanded the scope of the analysis (i.e., we analyzed six additional journals: RT, Journal of Adolescent and Adult Literacy [JAAL], LA, English Journal [EJ], Elementary School Journal [ESJ], and Journal of Educational Psychology [JEP]) and expanded the duration of the analysis to include articles from 10 years (2007–2016) for 15 journals. In addition, we analyzed the differences in the data (i.e., topics, theoretical perspectives, design, and data sources) in journals aimed at different target audiences. The current study, then, builds upon and extends previous content analyses to provide a snapshot of the current field of literacy and to explore the contents of journals published for different audiences.
Theoretical Framework
This research was guided by a history/philosophy of science lens. According to Kuhn (1962), scientific advancement proceeds through “paradigm shifts.” That is, scientists experience eras of shared understanding (i.e., a paradigm) that are periodically altered by new revelations that dispel or disrupt a long-held accepted understanding (i.e., a paradigm shift). A shift occurs when overwhelming evidence disputing the existing paradigm produces widespread distrust of it.
Fleck (1979), a Polish medical doctor and philosopher, had earlier presented the ideas of “thought collectives” and “thought styles” (this work was written in German in the 1930s and not translated into English until the 1970s). Thought collectives are a “community of persons mutually exchanging ideas or maintaining intellectual interactions” (p. 39). Thought styles are the common theoretical understandings (paradigms) held and methodological techniques used by thought collectives. Fleck saw benefit in multiple thought styles of multiple thought collectives within a field of study (his was medicine), as diversity prevents dogmatic thinking in a field. Nonetheless, the thought style influences what evidence is analyzed and what conclusions are drawn.
Mosenthal (1985) presented the idea of “speech communities”: “communities consist[ing] of researchers who share the same belief as to how a phenomenon should be defined” (p. 7). He explained that speech communities have sub-speech communities, which use different discursive practices to define and study phenomena. Mosenthal used this approach to demonstrate how researchers and practitioners may be of different speech communities that have different understandings of what educational progress is. This idea speaks to how research influences practice, which hinges upon the degree to which researchers and practitioners hold the same understandings of what defines reading achievement, for example.
The current content analysis seeks to investigate the current paradigm(s) or thought style(s) in the field of literacy. Is there one thought collective or multiple? Do different audiences (e.g., researchers or practitioners) represent different thought collectives? Do different professional organizations represent different thought collectives? Do quantitative and qualitative researchers represent different thought styles? We do not claim to answer these questions definitively, but the evidence we gathered through a content analysis of 15 journals’ literacy research over 10 years provides data that offer an opportunity for the field of literacy to take stock of what exists.
Method
We used content analysis to answer our research questions (Holsti, 1969; Krippendorff, 2004). Stemler (2001) described content analysis as “a systematic, replicable technique for compressing many words into fewer content categories based on explicit rules of coding.” Holsti’s and Krippendorff’s approaches to content analysis highlight sampling, context, and analysis. We selected journals through multiple processes to identify those that best represent the field of literacy: we asked scholars in the field for recommendations, cross-referenced those recommendations with two impact-factor indexes, and consulted national literacy organizations’ journals (see Parsons et al., 2016, and the “Data Sources” section below for details).
Content analysis uses both text and context to answer the research questions and draw conclusions. The current study analyzes text (i.e., 4,305 articles in 15 literacy journals) within the context (i.e., the current field of literacy scholarship reviewed above) to answer Research Questions 1 and 3 (i.e., What are the topics, theoretical perspectives, designs, and data sources in the articles published in 15 journals over 10 years? Is there a relationship between the topics, theoretical perspectives, designs, and data sources and the target audiences of the journals in which they are published?). For Research Question 2 (What are the target audiences of the 15 literacy journals?), we analyzed the journal aims and scope listed on each journal’s website.
Data Sources
In this study, the data sources are the journal articles in each issue of each journal in the time period outlined (2007–2016). For the current analysis, we used the journals from Parsons et al. (2016) and added more journals associated with literacy organizations: RT and JAAL (International Literacy Association) and LA and EJ (National Council for the Teaching of English). We also added ESJ and JEP. We recognize that ESJ and JEP are not literacy journals; however, we included them because they have a history of publishing impactful literacy research (e.g., Juel, 1988; Pressley et al., 1992).
While the sampling of journals is an important design decision, we would like to additionally acknowledge the important role that editors and reviewers play in determining what is published. Their epistemological and methodological experiences and preferences inevitably influenced what appeared in the journals, but studying editors’ and reviewers’ epistemologies and methodological preferences and how these influence journals’ publications are beyond the scope of this study.
In the current research, we analyzed journals’ content from 4 years beyond our previous study: 2 years back (2007–2008) and 2 years forward (2015–2016), providing data from 15 journals over a 10-year period. We investigated this time frame because it comprised the most recent 10 years of literature with complete volumes at the time of data collection.
This content analysis also sought to compare the content of literacy journals that may or may not represent different thought collectives. The research team examined each journal’s “aims and scope” on its website. By reviewing the journal’s self-described mission, the researchers determined whether the primary focus of the journal was to contribute to Research, Hybrid, or Practice. We put Hybrid in the middle because we view this as a continuum from Research to Practice. We recognize that all journals hope to contribute to research and practice. However, we also recognize that some journals specifically strive to build knowledge through scholarship and other journals specifically strive to impact practice. This is a gray area, so we used the journals’ own missions to make a determination, identifying each journal as focused primarily on Research, Hybrid, or Practice.
Data Analysis
The research team coded every literacy-related article in the 15 journals from 2007 to 2016 (N = 4,305) with a priori codes (Parsons et al., 2016) that were defined on a coding sheet used by all coders (see the appendix). More specifically, we coded all articles in the literacy journals but only the literacy-related articles in ESJ and JEP. Each researcher coded approximately 20 journal issues from the corpus of literature. All coders were trained to code based on predetermined terms for topic (up to three topics per article), theoretical perspective, research design, and data sources (up to four data sources per article). To code the topic, coders first examined keywords (if they were included), then the title, and finally skimmed the abstract to determine the topic(s) covered by the article.
To determine the theoretical perspective used in the article, coders looked for headings labeled “theoretical/conceptual framework” or for a sentence in the article explicitly stating the framework used (for instance, “A sociocultural lens was used to frame this study”). We recognize the limitations of this approach, as many articles clearly situated their studies within the relevant literature and theories, but to increase the reliability of coding, we made the explicitness of the framework a condition of coding.
Similarly, the research design needed to be explicitly stated in the article: for instance, “In this case study, authors . . .” Where a research design was not explicitly stated, coders classified the study as general qualitative (referred to as qualitative throughout), general quantitative (referred to as quantitative), mixed methods, or nonempirical. These determinations were made based on the data analysis sections. Specifically, if the data collected were numerical and were analyzed using statistical methods (e.g., ANOVAs, descriptive statistics, regression), the article was identified as quantitative. If the data were words or images and analyzed using qualitative methods (e.g., constant comparative analysis), the article was classified as qualitative. If the study described used both types of data, it was classified as mixed methods. These distinctions came from Creswell (2015). Nonempirical articles were those that had no data analysis sections. Thus, nonempirical does not mean that the articles were not based on research, but rather that they were not presenting a study.
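The design-coding decision rule described above can be sketched as a small function. This is a minimal illustration of the logic, not part of the study’s actual coding apparatus; the function name, parameters, and string labels are hypothetical.

```python
def classify_design(explicit_design, has_numeric_data, has_qual_data, has_analysis):
    """Sketch of the design-coding decision rule (hypothetical helper).

    If the article explicitly names a design, that design is coded;
    otherwise, the article is classified from its data analysis section.
    """
    if explicit_design:
        return explicit_design            # e.g., "case study", "experimental"
    if not has_analysis:
        return "nonempirical"             # no data analysis section present
    if has_numeric_data and has_qual_data:
        return "mixed methods"            # both data types analyzed
    if has_numeric_data:
        return "general quantitative"     # statistical analysis of numerical data
    return "general qualitative"          # qualitative analysis of words/images

print(classify_design(None, True, False, True))   # general quantitative
print(classify_design("case study", False, True, True))  # case study
```

Note that the explicit-design check comes first, mirroring the coders’ priority: an author-stated design always overrides inference from the analysis section.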
Finally, data sources were examined and coded based on what was presented in each article. For all four areas (topics, theoretical frameworks, designs, and data sources), we began with an a priori list of codes that we expected to see and included an “Others” category. After all articles were coded once, we examined the “Others” for each area, creating new codes for ideas that appeared multiple times, and recoded the “Others” to reflect these new codes.
Data were organized using Microsoft Excel. After coding, data were shared in one central Excel spreadsheet, and each participant recoded three articles coded by another to check inter-coder consistency, which was calculated at 75%, an acceptable level (Multon & Coleman, 2018).
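The inter-coder consistency check amounts to simple percent agreement between the original and recoded articles. A minimal sketch, with hypothetical code lists standing in for the actual recoded articles:

```python
def percent_agreement(codes_a, codes_b):
    """Percent agreement between two coders' parallel code lists (hypothetical helper)."""
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

# Hypothetical recoding check: coder B recodes items first coded by coder A.
coder_a = ["comprehension", "instruction", "writing", "new literacies"]
coder_b = ["comprehension", "instruction", "vocabulary", "new literacies"]
print(percent_agreement(coder_a, coder_b))  # 0.75
```

Percent agreement is the simplest consistency index; chance-corrected statistics such as Cohen’s kappa are stricter alternatives when categories are few.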
To address Research Question 1 regarding the topics, theoretical frameworks, designs, and data sources that appeared most often from 2007 to 2016, we examined frequency counts. To address Research Question 2, the first author put each journal’s “aims and scope” statement into a table and classified each journal according to its stated target audience or the type of manuscript it sought. Next, the second author reviewed each journal’s aims and scope to determine agreement with the first author’s classifications; the authors agreed on all classifications. To address Research Question 3 regarding differences in content published in different types of journals, we conducted four chi-square tests of independence to determine whether the topics, theoretical frameworks, designs, and data sources varied by target audience (i.e., Research, Practice, or Hybrid). In all, we reviewed 4,305 journal articles in this analysis.
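A chi-square test of independence of this kind, together with the Cramér’s V effect size reported in the results, can be sketched as follows. The contingency table below is illustrative only (invented counts), not the study’s data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = topics, columns = journal type
# (Research, Hybrid, Practice). Counts are invented for illustration.
table = np.array([
    [420, 310, 424],   # instruction
    [390, 180, 124],   # comprehension
    [210, 200, 267],   # writing
])

# Chi-square test of independence: does topic frequency vary by journal type?
chi2, p, dof, expected = chi2_contingency(table)

# Cramér's V effect size: sqrt(chi2 / (N * (min(rows, cols) - 1)))
n = table.sum()
v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4g}, V = {v:.2f}")
```

Cramér’s V rescales the chi-square statistic to a 0–1 range so that effect sizes are comparable across tables of different dimensions, which is why the results below report V alongside each chi-square.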
Results
Rather than presenting our results in order by research question, we organize them by area of inquiry (i.e., topics, theoretical perspectives, designs, and data sources). Thus, we first present the findings for Research Question 2 regarding the classification of each journal; we then provide overall frequency counts, followed by frequencies by aim of the journal (i.e., Research, Hybrid, or Practice), for topics, theoretical perspectives, designs, and data sources.
Target Audience of Journal
We found that six journals had aims and scopes specifically directed at researchers: RRQ, SSR, JLR, JRR, JEP, and R&W. Five of the journals expressed a hybrid focus aimed at both researchers and practitioners: ESJ, LRI, RP, RWQ, and RTE. Four journals indicated that they were primarily aimed at practitioners: RT, JAAL, LA, and EJ. Supplemental Table 1 presents the journals analyzed; their aims and scope, with key target audiences highlighted; and their classification.
Topics
The topics most frequently written about across the 15 journals included the following: (a) instruction (n = 1,154), (b) comprehension (n = 694), (c) writing (n = 677), (d) new literacies (n = 628), and (e) bilinguals/ELLs (n = 546). The topics most frequently studied in Research, Hybrid, and Practice journals are presented in Supplemental Table 2. Table 1 presents complete topic frequency counts. This relationship between topics and type of journal was significant, χ2(70, N = 10,281) = 2,689.29, p < .001. The results from this chi-square indicated a medium effect size, V = .36 (Cohen, 1988).
Table 1: Frequency and Percentage of Topics by Aim of Journal. Note: ELLs = English language learners.
Theoretical Perspectives
The theoretical perspectives most frequently used in these 15 journals included the following: (a) not specified (n = 3,337), (b) sociocultural (n = 183), (c) multiple theories (n = 171), (d) other (n = 108), and (e) critical perspectives (n = 89). “Not specified” indicates that the authors did not explicitly state the theoretical framework used in their study. “Multiple theories” indicates that the authors explicitly identified more than one theoretical framework. The theoretical perspectives most frequently studied in Research, Hybrid, and Practice journals are presented in Supplemental Table 4. See Table 2 for complete theoretical perspectives frequencies. The relationship between theoretical perspectives and type of journal was significant, χ2(82, N = 4,305) = 373.84, p < .001. The results from this chi-square indicated a small effect size, V = .21 (Cohen, 1988).
Table 2: Frequency and Percentage of Theoretical Perspectives by Aim of Journal.
Designs
The designs most frequently used in these 15 journals included (a) nonempirical (n = 1,432), (b) quantitative (n = 1,054), (c) experimental/quasi-experimental (n = 413), (d) qualitative (n = 371), and (e) case study (n = 284). The designs most frequently studied in Research, Hybrid, and Practice journals are presented in Supplemental Table 6. See Supplemental Table 7 for complete design frequencies. This relationship between research designs and type of journal was significant, χ2(34, N = 4,305) = 2,915.10, p < .001. The results from this chi-square indicated a large effect size, V = .58 (Cohen, 1988).
Data Sources
The data sources most frequently collected in these 15 journals included (a) student assessments (n = 1,705), (b) observations (n = 768), (c) interviews (n = 620), (d) artifacts (n = 608), and (e) surveys/questionnaires (n = 573). The data sources most frequently studied in Research, Hybrid, and Practice journals are presented in Supplemental Table 8. See Supplemental Table 9 for complete data source frequencies. The relationship between data sources and type of journal was significant, χ2(14, N = 4,496) = 1,084.45, p < .001. The results from this chi-square indicated a medium effect size, V = .35 (Cohen, 1988).
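Each of the effect sizes reported above is Cramér's V, which for a contingency table is computed as V = √(χ² / (N(k − 1))), where k is the smaller of the table's two dimensions. A minimal Python sketch (our assumption: k = 3, the three journal types of Research, Hybrid, and Practice) reproduces the reported values from the chi-square statistics and sample sizes given above:

```python
from math import sqrt

def cramers_v(chi2: float, n: int, min_dim: int) -> float:
    """Cramer's V effect size for a chi-square test of association.

    chi2    -- the chi-square statistic
    n       -- total number of coded observations
    min_dim -- the smaller of the table's row/column counts
    """
    return sqrt(chi2 / (n * (min_dim - 1)))

# Statistics as reported in the results sections above;
# min_dim = 3 journal types (Research, Hybrid, Practice).
print(round(cramers_v(2689.29, 10281, 3), 2))  # topics        -> 0.36
print(round(cramers_v(373.84, 4305, 3), 2))    # perspectives  -> 0.21
print(round(cramers_v(2915.10, 4305, 3), 2))   # designs       -> 0.58
print(round(cramers_v(1084.45, 4496, 3), 2))   # data sources  -> 0.35
```

That the reported V values are recovered exactly under k = 3 supports reading all four tests as journal-type-by-category tables.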
Discussion
The field of literacy research has expanded in multiple dimensions in recent years, including definitions of literacy, theoretical perspectives informing literacy research, and research designs used. The number of journals publishing literacy scholarship has also grown, including journals focused on research, journals focused on practice, and journals with a hybrid focus. This expansion and the “growing pains” associated with it (i.e., paradigm wars and reading wars) have led to questions about the cohesiveness of the field. Some have wondered if the field is fragmented or simply diverse (Duke & Mallette, 2001, 2011; Kamil et al., 2011). Using a history/philosophy of science lens, the present research investigated the topics, theoretical perspectives, research designs, and data sources for 4,305 articles published in 15 literacy journals across 10 years. We also analyzed differences among journals published for different audiences: Research, Hybrid, or Practice.
When we look at the topics published across the three types of journals, there is much overlap. For example, four topics (instruction, comprehension, writing, and bilinguals/ELLs) are among the 10 most common topics in each type of journal. We were pleased to see writing receiving increased attention after decades of neglect (National Commission on Writing, 2003). Another promising finding was the increase in articles about bilinguals/ELLs and new literacies, which we view as positive progression in the field. Conversely, we were surprised by the low number of articles focusing on literacy coaching (n = 59 in all journals, and only n = 9 in Research journals) given its prevalence in schools and graduate programs. This area is ripe for additional research.
While there was overlap across types of journals, some topics were more prevalent in particular types of journals. For example, new literacies received considerable attention in Practice journals and less in Research and Hybrid journals. Similarly, word recognition/decoding, phonemic/phonological awareness, and spelling were frequently published in Research journals and seldom in Hybrid or Practice journals. We hypothesize that the dearth of these topics in Practice journals may be due to the ubiquity of programs to teach foundational literacy skills in the aftermath of Reading First. Another reason might be related to the limited age range in which teaching those skills is necessary: Of the four Practice journals, two (JAAL and EJ) focus on middle school and higher, where word recognition, phonological awareness, and spelling are less emphasized.
These findings draw attention to the “science of reading” movement. Research on word-level components of reading—decoding, phonemic awareness, phonics, and spelling—was common in Research journals, which were overwhelmingly quantitative. These components of reading are easy to measure. Therefore, they are easy to study in a manner that complies with narrow views of scientific research (i.e., experimental and quasi-experimental designs). Conversely, comprehension, which relies upon the extraction and construction of meaning from connected text, is not easy to measure, and neither are the instructional techniques used to support comprehension and its many components (e.g., background knowledge, metacognition; Leslie & Caldwell, 2017). Yet, comprehension is the ultimate goal of reading. People read to communicate. If comprehension is lacking, then the entire purpose of reading is lost.
The science of reading movement aggrandizes word-level recall (i.e., letters + letter sounds [phonics] = reading). This phonetic knowledge is essential to learning to read, and early-grade teachers absolutely need to teach phonics, but understanding letter–sound relationships is not reading. It is decoding. Reading is far more complex than word calling (Hoffman, 2017). Therefore, the education community should be wary of promoting the science of reading movement because it conflates decoding with reading. This perspective is far too narrow to fully capture the complexity of reading.
We also documented the theoretical perspectives used in these articles. Overall, 78% of the articles did not specify a theoretical perspective. This is similar to what we found in previous analyses of fewer journals and a shorter time period (Parsons et al., 2016), and, as we explained then, one caveat in this result is that we used strict coding rules for coding theoretical perspectives in that they had to be explicitly named (e.g., “The theoretical framework [or perspective or lens, etc.] for this study was . . .”). We recognize that theoretical perspectives are often implied or embedded in particular methodologies. However, we strove to limit the amount of inference required for coding to increase consistency and trustworthiness. We assume that researchers do have and apply theoretical perspectives, yet they do not always explicitly state them. We also know researchers sometimes state their theoretical perspectives in a way that would not necessarily be identified in this study. This finding, then, serves as a reminder to literacy researchers of the importance of explicitly articulating the theory or theories informing the research to optimally explain phenomena (Dillon et al., 2000; Dressman & McCarthey, 2011; Purcell-Gates et al., 2016).
Sociocultural, critical, and multiple theories were commonly used; sociocultural and critical theories were most frequently applied in Practice journals, while multiple theories were the most commonly used theories in Research and Hybrid journals, where sociocultural theories were second most common. Sociocultural theories emerged in social sciences in the 1970s but did not truly gain traction until the late 1980s and 1990s. As Vygotskian theories reached the United States and were built upon by Lave, Wenger, Engeström, and others, they were increasingly applied to literacy research. In much the same fashion, critical theorists such as Freire and Bourdieu were influencing the way scholars thought about and studied phenomena in the social sciences, including literacy. The frequent application of multiple theories aligns with calls from researchers to use more than one theory to better investigate and build knowledge in the field of literacy (Beach & O’Brien, 2018; Dillon et al., 2000). This wide use of multiple theories may also be indicative of the current movement toward integrating cognitive and sociocultural perspectives (Purcell-Gates et al., 2016). Regardless, the diversity of theories used to situate literacy research demonstrates a growing appreciation for theories that come from both cognitive and sociocultural traditions. Therefore, the foundation is set for further integration of theory from these traditions. These patterns—the emergence of sociocultural and critical theories and the wide use of multiple theories—add depth and nuance to the field. This theoretical expansion and development enhances research on literacy teaching and learning by providing new lenses for investigating and interpreting phenomena.
In this study, our findings indicate that the designs used in articles from Research, Hybrid, and Practice journals were also disparate. This difference was statistically significant with a large effect size. Table 3 displays the rough breakdown of quantitative, qualitative, mixed-methods, and nonempirical studies based upon the type of journal they were published in. This breakdown is “rough” because some research designs do not fit neatly into these quantitative, qualitative, mixed-methods, and nonempirical methodological categories. For example, content analyses can be both qualitative and quantitative. The same is true for case studies. Because of this possible overlap, we have chosen to present this information in the discussion section rather than in the results. For the breakdown presented in the tables, the following classifications were used:
Quantitative: Correlational, experimental/quasi-experimental, meta-analysis, quantitative, statistical modeling, survey
Qualitative: Case study, content analysis, discourse analysis, ethnography, grounded theory, narrative, phenomenological, qualitative
Mixed methods: Mixed methods, formative/design-based
Nonempirical: Nonempirical, literature review
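The grouping above can be expressed as a simple lookup that collapses per-article design codes into the four broad categories used in Table 3. This is an illustrative Python sketch, not the team's actual coding software; the lowercase strings are paraphrases of the design codes listed above:

```python
from collections import Counter

# Design code -> broad methodological category, following the
# classification given in the discussion above.
CATEGORY = {
    "correlational": "quantitative",
    "experimental/quasi-experimental": "quantitative",
    "meta-analysis": "quantitative",
    "quantitative": "quantitative",
    "statistical modeling": "quantitative",
    "survey": "quantitative",
    "case study": "qualitative",
    "content analysis": "qualitative",
    "discourse analysis": "qualitative",
    "ethnography": "qualitative",
    "grounded theory": "qualitative",
    "narrative": "qualitative",
    "phenomenological": "qualitative",
    "qualitative": "qualitative",
    "mixed methods": "mixed methods",
    "formative/design-based": "mixed methods",
    "nonempirical": "nonempirical",
    "literature review": "nonempirical",
}

def tally(design_codes):
    """Collapse a list of per-article design codes into category counts."""
    return Counter(CATEGORY[code] for code in design_codes)

# Hypothetical mini-sample of four coded articles.
print(tally(["ethnography", "survey", "case study", "literature review"]))
```

A breakdown like Table 3's follows from running such a tally separately over the articles in each journal type.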
Methodology by Journal Type.
As displayed in Table 3, quantitative research is the most prevalent methodology in both Research journals and Hybrid journals, though Hybrid journals were fairly evenly split between quantitative and qualitative methodologies. Of the empirical studies published in Practice journals (only 30% were empirical), a large majority (75%) were qualitative.
It is logical that most articles in Practice journals would be nonempirical. They are, after all, research-to-practice publications that translate research for practitioners. The large majority of Research journal articles were quantitative (81%). A closer look at the individual Research journals reveals that four of the six Research journals are largely quantitative: JEP 96% quantitative, SSR 93%, R&W 89%, and JRR 82%. The other two Research journals are more balanced. JLR is 30% quantitative, 49% qualitative, and 8% mixed methods. RRQ is 58% quantitative, 23% qualitative, and 8% mixed methods. Supplemental Table 11 displays the data from the Research journals.
It is no surprise that JEP and SSR are largely quantitative. The field of educational psychology is dominated by quantitative approaches to research, though recently scholars have called for more situated, qualitative work to inform that field (Kaplan, 2015; Turner & Nolen, 2015). The Society for the Scientific Study of Reading, the organization that publishes SSR, was created when a group of reading researchers split from the National Reading Conference (currently the Literacy Research Association), objecting to growing attention to qualitative research in NRC. Therefore, it makes sense that they would publish primarily quantitative research. In fact, SSR published zero qualitative studies in the 60 issues we analyzed.
A close look at the Hybrid journals is also revealing (see Supplemental Table 12). All of these journals publish a variety of designs. A majority of RTE’s articles are qualitative (66%), but it also publishes quantitative (10%) and mixed-methods (7%) research. RP and ESJ publish more quantitative (71% and 61%, respectively) than qualitative (12% and 15%) and mixed-methods (8% and 11%) research. LRI and RWQ have more balance, with no methodology being the majority for their publications.
A surprising result was the relative paucity of mixed-methods research in this data set. Literacy scholars have expressed that mixed methods have become more prevalent in the field (Kamil et al., 2011; Onwuegbuzie & Mallette, 2011), and mixed methods are undoubtedly an accepted form of research in the social sciences, with a respected journal, Journal of Mixed Methods Research, published by SAGE and a popular handbook, Handbook of Mixed Methods in Social and Behavioral Research (Tashakkori & Teddlie, 2010). However, of the 2,839 empirical studies published in these 15 journals across this 10-year time frame, only 7% used mixed methods.
We wonder if the relative newness of mixed methods as a design in educational research along with the apparent sustained methodological division in the field of literacy research has made it difficult for mixed-methods studies to get published in literacy journals. If reviewers have perspectives that question the epistemological viability or scientific rigor of mixed-methods research, for example, it would be difficult for researchers to get favorable reviews. Perhaps mixed-methods research is more frequently published in other outlets, though a recent content analysis of other journals found a dearth of mixed-methods research as well (Archibald et al., 2015).
This study imposed strict rules and definitions in our coding processes. Coding rules such as “Theoretical perspectives must be explicitly named” could cause coders to exclude particular data. Our research team made these methodological decisions to minimize inference and move toward objectivity, as recommended by our approach to content analysis (Holsti, 1969; Krippendorff, 2004). We recognize that this analysis is not comprehensive. While the research team took strides to identify the journals that are most pertinent to and representative of the field of literacy, there are other journals within the literacy field that were not included.
Implications for Theory
The history/philosophy of science theoretical lens provides insight into the results of this investigation. From this perspective, researchers are assimilated into a thought collective or speech community that includes a thought style or discursive practices, which have particular theoretical assumptions and investigative approaches (Fleck, 1979; Mosenthal, 1985). The data collected in the current study suggest that the field of literacy has multiple thought collectives that are not necessarily delineated by quantitative or qualitative methodological approaches. While some literacy thought collectives (e.g., educational psychologists) tend to be segregated into clear methodological thought collectives that publish in JEP and SSR, others demonstrate methodological diversity (e.g., JLR and LRI).
Other thought collectives, however, are not as easy to identify. It seems logical that thought collectives would join similar professional organizations, such as the Society for the Scientific Study of Reading mentioned above, but pursuing this logic complicates the picture. Consider the International Literacy Association, which publishes RRQ, JAAL, and RT. First, these journals are published for different audiences: RRQ for researchers, JAAL and RT for practitioners. Therefore, it is understandable that theoretical perspectives, methodologies, and data sources would differ, but topics should be similar if research leads to practice, as assumed.
The lack of cohesion between the contents of Research journals and Practice journals illustrates the most pressing need for dialogue among thought collectives in the field of literacy. Researchers and practitioners play equally important roles in the literacy education enterprise. We all dedicate our time and efforts to enhancing students’ literacy. By learning from one another, through inter-thought-collective dialogue, we could better enhance the practice of both researchers and teachers. We wonder if Mosenthal’s (1985) idea of speech communities related to the alignment of research and practice is a factor here. That is, do researchers and practitioners have the same conceptualizations of success in literacy? If they do not, then they may be working toward different goals or at least taking different paths to similar goals.
Implications for Practice
This content analysis has implications for journal editors and reviewers. The field is enhanced by an ecological balance that respects and disseminates research from various paradigms and epistemologies. Journal editors typically have well-deserved discretion in the direction and operations of the journal. Therefore, editors have an important role in promoting ecological balance. If journal editors support the idea of ecological balance within their publication, then they, along with association leadership, can ensure that the call for manuscripts and journal guidelines invite and encourage the submission of manuscripts with epistemological and methodological diversity. Likewise, journal editors can invite reviewers who come from diverse epistemologies and who use diverse research methods to ensure that submitted manuscripts receive evaluation from reviewers who understand and appreciate various approaches to inquiry. This is not a call to place the supervision of epistemological and methodological diversity solely on journal editors; instead, we recommend journal editors enlist the help of their colleagues in the field so literacy scholars can collectively advocate for diverse representation of literacy research. Having a call for manuscripts that invites research and ideas from various epistemologies, maintaining a robust and diverse reviewer pool, and thoughtfully assigning reviewers will enhance the epistemological and methodological variety of the journal content.
Implications for Research
It appears that the field of literacy research continues to be divided on methodological grounds, with four of the six Research journals publishing quantitative research almost exclusively. The field should continue to work to demonstrate the rigor and insight that qualitative and mixed-methods inquiry can provide. Researchers should especially do this through multiple and diverse dissemination outlets. Making the case for qualitative inquiry or conducting a rigorous and insightful qualitative or mixed-methods study and publishing it in JLR or RTE will build knowledge, but it is not going to expand the reach of qualitative or mixed-methods inquiry. Researchers should consider presenting rigorous qualitative or mixed-methods inquiry at the annual meeting of the Society for the Scientific Study of Reading and submit their rigorous ethnographic studies to R&W or JRR. And quantitative literacy researchers should submit their research to RTE and other outlets that have a history of publishing primarily qualitative work.
Conclusion
The field of literacy research, and of educational research more broadly, is changing. More topics are being investigated, more theories are being used, and more methodologies are being implemented than in the past (Beach & O’Brien, 2018; Reutzel & Mohr, 2015; Unrau et al., 2018). The current analysis illustrates the specific topics, theoretical perspectives, and methods receiving more and less attention and use. Therefore, this “snapshot” of the field is helpful for moving scholarship forward by showing gaps in the existing research and by compelling discussion about theories, methodologies, and publication outlets.
Using our theoretical perspective and the data presented here, our position is that the field is not undergoing a paradigm shift, but rather the paradigm is expanding. As the fields of literacy and education research include more social, cultural, and critical perspectives, they are not necessarily moving away from cognitive psychology. Indeed, cognitive psychology has a strong base in literacy research. We see the expanding paradigm reflected in Purcell-Gates et al.’s (2016) theoretical framework, which integrates cognitive and sociocultural perspectives. However, this is not an easy expansion. The differences we see across journals and publication types, and the debates currently taking place about the science of reading, exhibit the difficulty of paradigmatic growth. Some scholars, like those who support the science of reading movement, are reluctant or resistant to incorporating sociocultural perspectives and the findings from qualitative investigations, which demonstrate reading as a complex and multifaceted phenomenon. The cognitive and quantitative perspective they prefer presents a simple view of reading, one that is easier to measure and easier to package into instructional programs. Taking an expanded view, like the one presented by Purcell-Gates et al. (2016), complicates this perspective.
The fields of educational research broadly and literacy research specifically have a lot of work to do to proceed with this paradigmatic expansion. Such expansion relies upon ongoing theoretical and empirical work and ongoing scholarly discussion and debate as the field enhances its understanding of phenomena, theory, and methodologies. It is an exciting time to be an educational researcher and literacy scholar. Continuing to be innovative in our epistemological and methodological advances and continuing to demand quality, precision, and depth in the research enterprise will ensure that we continue to make advancements in teaching and learning.
Supplemental Material
Parsons_0133_Appendix_RSB_sp_-_dd1 — Supplemental material for An Analysis of 15 Journals’ Literacy Content, 2007–2016, by Seth A. Parsons, Melissa A. Gallagher, Alicia B. Leggett, Samantha T. Ives, and Michelle Lague, in Journal of Literacy Research
Parsons_Online_Tables_RSB_Sp_-_dd1 — Supplemental material for An Analysis of 15 Journals’ Literacy Content, 2007–2016, by Seth A. Parsons, Melissa A. Gallagher, Alicia B. Leggett, Samantha T. Ives, and Michelle Lague, in Journal of Literacy Research
Appendix
Codes for Topics, Theoretical Perspectives, Designs, and Data Sources.
| Code | Definition (if necessary) |
|---|---|
| Topics | |
| Phonological awareness/phonemic awareness | Pertaining to letter and word sounds, e.g., rhyming, blending, segmenting, deleting. |
| Phonics | Pertaining to letter–sound relationships. |
| Fluency | Pertaining to reading rate, prosody, e.g., repeated reading, oral versus silent reading, timed reading. |
| Vocabulary | Pertaining to word meaning, e.g., morphology, morphological interventions, morphological knowledge, bases and suffixes, morphological awareness, semantic knowledge, lexical quality, lexical ambiguity. |
| Comprehension | Pertaining to constructing meaning from text, e.g., connections, predictions, inference, visualization, think-alouds, determining importance, highlighting, pictures–text relationships, background knowledge, listening guide, narrative competence, text signaling, story structure. |
| Writing | Pertaining to writing, e.g., writer’s workshop, process writing, grammar. |
| Spelling | Pertaining to spelling or orthography, e.g., word study, orthographic knowledge, orthographic processing, embedded mnemonics, diacritics. |
| Motivation/engagement | Pertaining to motivation, engagement, attitude, and affect, e.g., goals, self-regulation, self-concept, self-efficacy, engaged reading, recreational reading. |
| Instruction | Pertaining to literacy instruction, e.g., a specific teaching technique, reading programs, fidelity to a reading program, teacher–student interactions, literature discussions, inquiry-based instruction, pedagogy, project-based literacy, read-alouds, book reading. |
| Teacher education | Pertaining to preservice or in-service teacher coursework, e.g., methods courses, field experiences. |
| Professional development | Pertaining to teacher professional development efforts, e.g., school initiatives, action research, professional book clubs. |
| New literacies/digital literacies/technology/multiliteracies | Pertaining to literacies beyond traditional print text, e.g., visual literacies, digital texts, new literacies, online reading, online education, hidden literacies, hybrid literacy practices, multimodal texts, multimodal learning. |
| Disciplinary reading/content-area reading | Pertaining to reading in content areas, e.g., math, historical reasoning. |
| Adult readers | Pertaining to adult literacy, e.g., National Assessment of Adult Literacy, National Adult Literacy Survey. |
| Struggling readers | Pertaining to students who struggle learning to read, e.g., dyslexia, gaps (achievement, gender), language impairment, RTI. |
| Bilinguals/ELLs | Pertaining to students who speak English and another language and students who are learning to speak English. |
| Other | Topics not captured in other categories, e.g., genetics, age of schooling. |
| Cognition | Pertaining to thinking, e.g., memory, auditory tasks |
| Reading processes | Pertaining to the processes of reading that do not have their own category or when general processes are studied, e.g., visual processing, eye tracking, visual skills, visual attention span, speech perception, reading skills, language skills, constrained skills, eye movements, strategies, dimensionality. |
| Development | Pertaining to the phases through which students’ progress as their literacy skills grow, e.g., literacy development, early reading development. |
| Word recognition/decoding | Pertaining to students’ recognition of words, e.g., decoding, syllable-based segmentation, syllables, accuracy. |
| Family literacy | Pertaining to students’ literacy learning at home, e.g., home literacy, family characteristics, parent feedback, maternal language, maternal elaboration, parental involvement. |
| Neuroscience | Pertaining to the neurological study of literacy, e.g., brain scans, cortical thickness. |
| Assessment | Pertaining to the assessment of student literacy learning, e.g., informal reading inventories, literacy outcomes, high-stakes testing, schooling effects, achievement. |
| Adolescent literacy | Pertaining to the literacy learning of adolescents. |
| Students with particular needs | Pertaining to students who have specific learning needs that are not literacy specific, e.g., deaf students, autism, inattention/hyperactivity. |
| Literacy coaching | Pertaining to literacy coaches and coaching. |
| Emergent/preliteracy | Pertaining to students who are not yet readers, e.g., preschoolers, early childhood. |
| Type of text | Pertaining to the type of text, e.g., narrative, informational, graphic novels, metafictive devices, children’s literature. |
| Classroom context | Pertaining to the classroom environment, e.g., social support, classroom peers, social context. |
| Teacher characteristics | Pertaining to characteristics of the teacher, e.g., teacher identity, teacher efficacy, teacher perceptions, expert noticing. |
| Lenses of readers | Pertaining to the lenses readers use when reading; what the reader brings to the text, e.g., reader stance, identity, reader response, response to literature, interpretation, embodied knowing, negotiations. |
| Culture, diversity, and equity | Pertaining to sociocultural factors related to culture, diversity, and equity, e.g., social justice, social/emotional factors, critical literacy, identity and power, stereotype threat, culturally relevant teaching, gender, gender differences, disadvantaged/low SES. |
| Policy | Pertaining to policy, e.g., CCSS. |
| Tutoring | Pertaining to tutoring of students in literacy. |
| Theoretical perspectives: Theoretical framework, perspective, or lens must be explicitly stated. Otherwise, it is categorized as not specified. | |
| Constructivism | |
| Social constructivism | |
| Sociocultural | |
| Metacognition | |
| Cultural–historical activity theory | |
| Situated learning/situative/communities of practice | |
| Cognitive/information processing | |
| Behavioral | |
| Emergent literacy | |
| Family literacy | |
| Critical perspectives | |
| Third space | |
| Not specified | |
| Other: | |
| Double-deficit theory | |
| Phase theories | |
| Transactional theory | |
| Identity theories | |
| Goal theory | |
| Simple view of reading | |
| Agency | |
| Culturally relevant pedagogy | |
| Ecological perspectives | |
| New literacies | |
| Linguistic or sociolinguistic theories | |
| Multiple theories | |
| Reader response | |
| Pragmatic perspectives | |
| Expectancy-value theory | |
| Methodologies: Must be named (e.g., “This is a ______ study” or “This study used a ______ design” or “This study’s design is ______”). If the methodology is not explicitly specified, the study will be categorized as qualitative, quantitative, mixed methods, or nonempirical. | |
| Qualitative | |
| Experimental/quasi-experimental | |
| Case study/collective case study | |
| Ethnography | |
| Phenomenological | |
| Content analysis | |
| Discourse analysis | |
| Formative/design experiment | |
| Meta-analysis | |
| Mixed methods | |
| Narrative | |
| Survey | Use of a survey, not the development of an instrument |
| Other: | |
| Correlational | |
| Statistical modeling/factor analysis | SEM, HLM, CFA, EFA, structural equation modeling, hierarchical linear modeling, factor analysis, and cluster analysis. |
| Nonempirical | Anything not data based. |
| Grounded theory | |
| Quantitative | Multiple quantitative analyses, or general quantitative methods. |
| Literature review | Inclusion criteria and search methods described. |
| Data sources | |
| Interviews/focus group | |
| Observations/video transcripts/field notes | |
| Surveys/questionnaires | |
| Artifacts | Any tangible product. |
| Student assessments | |
| Teacher assessments | |
| Studies/articles (e.g., meta-analysis) | |
Note. RTI = response to intervention; ELL = English language learner; SES = socioeconomic status; CCSS = Common Core State Standards; SEM = structural equation modeling; HLM = hierarchical linear modeling; CFA = confirmatory factor analysis; EFA = exploratory factor analysis.
Authors’ Note
The Content Analysis Team includes Jan Ainger, Marisol Alva, Amanda Ayers, Mary Carmen Bartolini, Ellen Clark, Sarah Crain, Nisreen Daoud, Karen Sutter Doheney, Stacey Duff, Susan V. Groundwater, Jacqueline Heller, Amber Jensen, Lesley A. King, Jennifer Lindenauer, Joanna Newton, Kate Park, Viviano Quiceno, Erin M. Ramirez, Patty Salerno, Jayne Sherman, and Peet Smith.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
