Abstract
Federal accountability policies requiring rapid, measurable outcomes have increasingly shaped the nature and type of public literacy services available to adults. However, little empirical research has explored the impact of accountability policies on program practice in adult basic education, and almost no research has focused on the effect on services for adults who have difficulty reading. This ethnographically grounded research article explores one publicly funded adult basic education program’s efforts to comply with federal accountability policy and the impact these efforts had on services for adults with difficulty reading. Findings suggest that efforts to comply with accountability policies resulted in instructional practices that limited students’ opportunities for substantive engagement with reading and in program policies that excluded students who did not produce outcomes from participation. The findings also suggest that in the context of accountability pressures, student marginalization became normalized as an ordinary part of practice.
Introduction
The educational marginalization of adults who have difficulty reading may be exacerbated, rather than alleviated, by participation in publicly funded adult literacy programs. Although many program administrators and instructors work tirelessly to support their students, public adult basic education (ABE) programs in the United States operate under conditions of chronic, debilitating underfunding; varying instructional quality; a shortage of research; and policies that promote test gains and employment as solutions to complex educational and social problems (Belzer & Kim, 2018; Belzer & St Clair, 2007; Sandlin & Cervero, 2003). Federal accountability policies, in particular, have increasingly shaped the nature and type of public literacy services available to adults. Although the issue of accountability may be a familiar topic to those in the field, empirical research regarding accountability in ABE is scant. Scholars have explored how accountability policies have impacted administrative implementation and teachers’ perspectives (Belzer, 2003, 2007) and professional development (PD) (Smith, 2009, 2010), but few studies have explored the impact of accountability on practice at the classroom level, and almost no research has focused on the impact on adults who have difficulty reading. However, as federal accountability policies increase local emphasis on employment and postsecondary outcomes, reading-related instruction for these students may be most at risk of being substantially narrowed or eliminated altogether (Belzer, 2017; Pickard, 2016).
The purpose of this ethnographically grounded research article is to explore the impact of federal accountability policy on program practice in one publicly funded reading class for adults identified as basic-level readers. In this article, “adult literacy programs” and “adult basic education (ABE) programs” are used interchangeably and are intended to denote all levels of adult basic educational programs up to and including high school equivalency preparation. This article does not address English for speakers of other languages (ESOL) services, which constitute a distinct branch of ABE. Program practice is conceptualized here as including instructional practices, practitioner discourse, and administrative policies such as attendance and enrollment.
This analysis focuses on the following question: How did practitioners’ attempts to comply with federal accountability policy impact services to adults who have difficulty reading? I begin the article by differentiating between the terms “adults who have difficulty reading” and “adults identified as basic-level readers.” I then review the literature explicating the contours of accountability in ABE and what is known about its impact on the field. Next, I articulate two important ways practitioners’ efforts to comply with accountability policies negatively impacted services to adults who have difficulty reading: First, practitioners substantially narrowed the scope of instruction regarding literacy and reading, limiting most class time to standardized test preparation; and second, practitioners began to limit participation in this publicly funded program to only those students who could show rapid test score improvement, framing those outside of this category as ineducable. Importantly, in the context of ongoing accountability pressure, practitioners began to view these marginalizing practices as unavoidable and even routine. These findings point to the need for a critical reexamination of accountability-driven ABE policies and practices and for particular attention to the impact on services to adults who have difficulty reading.
Conceptual Framing
Two key terms used in this article need defining. “Adults identified as basic-level readers” refers to adults whom a program has categorized, typically on the basis of standardized assessment scores, as reading at a basic level. “Adults who have difficulty reading” refers to adults who report or demonstrate difficulty with elements of reading, whatever their program categorization. As the findings below illustrate, these two groups overlap substantially but are not identical.
The theoretical framework guiding my analysis is grounded in a sociocultural perspective that views literacies as social practices (Gee, 2014; Rogers & Street, 2012; Tett et al., 2006) but also acknowledges the role that learned cognitive skills play in adults’ application of literacy across contexts (St Clair, 2010). From a sociocultural perspective, the point of increasing adult literacy skills or expanding the range of literacies of which one has a command is to enhance how one uses that literacy in the real world (Perry, 2012; Rogers & Street, 2012). From this view, individual student goals take on increased importance in shaping curricula in the adult literacy classroom. However, these beliefs are in sharp contrast to many contemporary policy and practitioner discourses regarding ABE, which increasingly frame learning in these programs as preparation for work (Belzer & Kim, 2018), with assessment of this preparation evaluated by standardized tests. A substantial challenge of conducting ABE research is navigating the tension between these different perspectives. However, a view of literacies as social practices can support critical analysis of ABE practice and can provide a window onto meaningful alternatives to outcomes-driven ABE policy.
Literature Review
The policy landscape in which ABE programs serve adults who have difficulty reading has transformed substantially over the last 20 years. Federal policies implemented in the late 1990s shifted the discourse of the field to what Goldrick-Rab and Shaw (2005) described as a “work-first” approach (p. 293), with the result that programs have moved away from supporting literacy for purposes defined by student goals and toward literacy for purposes defined in economic terms (Belzer, 2017; Perry et al., 2017). Accountability in achieving these economically driven purposes was instantiated by federal legislation. The 1998 Workforce Investment Act (WIA) created specific outcome measures regarding students’ test score improvements, employment, and enrollment in postsecondary training or education that federally funded programs had to meet. To record and track these outcomes over time, WIA instituted the National Reporting System (NRS) database, into which states were required to annually enter data reflecting programs’ progress toward meeting these outcomes (Demetrion, 2005). Of particular relevance for adults with difficulty reading is the requirement for practitioners to administer standardized tests. Programs receiving WIA funding became mandated to pretest students using state-approved standardized tests of reading and math at the point of enrollment and to administer a posttest at the end of the class or program year (Smith, 2009). Program funding became contingent on a certain proportion of students demonstrating a gain of one “Educational Functioning Level,” as defined by the U.S. Department of Education (Office of Vocational and Adult Education, Division of Adult Education and Literacy, 2016).
These accountability requirements have influenced the type of PD ABE teachers have access to and thus the instruction students are likely to receive. PD is particularly important in ABE because few teachers have prior professional training in teaching adults or working with learners who have been unsuccessful in traditional K-12 settings (Smith et al., 2003; Smith & Gillespie, 2007; Ziegler et al., 2007). Smith (2009) found that several states were offering PD explicitly geared toward supporting students’ score improvement on the Test of Adult Basic Education (TABE), a standardized assessment instrument commonly used for NRS reporting. As she put it, “This surely defeats the purpose of helping adult students to reach their goals, since no one could ever imagine that an adult student would choose to enroll in adult basic education with the specific goal of doing better on the TABE” (Smith, 2009, pp. 38–39).
Providing service to adults who have difficulty reading in this landscape can be challenging. From the beginning, researchers noted the concern that these accountability measures could shift program emphasis away from learners’ own goals to externally imposed ones and set up unattainable performance standards for programs (Quigley, 2001). Particular concern has been expressed about adults who have difficulty with reading, as these students may not demonstrate rapid gains on standardized tests; many providers were worried that they were implicitly being encouraged to reduce access for these learners (Beder, 1999; Comings, 2007; Condelli, 2007; Demetrion, 2005). Although Condelli (2007) suggested that initial concerns about federal accountability regulations encouraging “creaming” in adult literacy programs were unfounded, his conclusions conflict with what empirical qualitative research has demonstrated. In fact, this practice is a reality in some contemporary programs (Belzer, 2003; Pickard, 2019). In Belzer’s (2003) multicase study of 24 programs in six states, some participants reported feeling pressure to only accept students who could “boost program statistics in favorable ways” (p. 36). Pickard (2019), drawing on the same data set as this study, analyzed how accountability policy created barriers to ABE program access that may disproportionately affect racially minoritized adults and adults who are identified as having difficulty reading.
In 2014, WIA was updated and renamed the Workforce Innovation and Opportunity Act (WIOA). The new act heightened the emphasis on workforce readiness and rapid transition to paid employment and added a new focus on postsecondary education (Jacobson, 2017; Pickard, 2016). These policy purposes may shift program attention even further away from the needs and interests of adults who have difficulty with reading. Despite the potential negative implications of these changes and the sizable proportion of adult literacy participants who may have difficulty with reading, little empirical research has explored how these shifts in policy have influenced instruction and services for these learners. This article explores how efforts to comply with accountability policies in one ABE program negatively impacted services to adults who have difficulty reading.
Method
Setting
This study took place at The Literacy Center (TLC), a well-established adult literacy program situated in a large urban center. (All names of people and organizations in this article are pseudonyms.) TLC was selected as a site for this study because of its long history of providing free literacy services to adults and its community reputation for serving adults identified as basic-level readers in a classroom setting. TLC had been established as a community literacy organization almost 50 years prior to the data collection for this study. At its inception, the center had offered free, one-on-one tutoring services to adults in nearby neighborhoods. Over time, its services had substantially expanded to include free group classes at neighborhood sites across the city, including adult literacy, high school equivalency, English language acquisition, family literacy, and educational services for out-of-school youth. However, at the time data collection began, the scope of TLC’s services was in the process of narrowing. TLC had been one of a handful of adult literacy organizations in the city to survive cutbacks in public funding after the 2008 recession. The program had scaled back its neighborhood-based services until only one site remained, located in a small office building downtown. Program funding had evolved from primarily community support to almost complete dependence on government sources. However, during my fieldwork, it became clear that TLC was struggling to meet its funding-related accountability outcomes for the number of students it enrolled and the percentage of these who demonstrated acceptable progress.
Participants
I focused on a class categorized by TLC as an “ABE” reading class, which was the most basic-level reading class offered at the program. There was a relatively consistent number of students present in each class: For the days I observed, mode and median for attendance were both 17, with a range of 9 to 22 students present. This range and fluctuation can be partially explained by the attendance and enrollment processes at TLC. New students were added to the class frequently, whereas others were promoted to the next level or dropped from the program altogether. In addition, some students attended regularly throughout the study, and some came only occasionally.
Most students in the class were African American; four Latinx students and two White students participated in the class during my fieldwork. This disproportionality is notable, given that the White and African American populations of this city were roughly equal. Furthermore, although the program website and administrators reported that TLC primarily served women, on the days I was present in the classroom, women made up an average of only 40% of participants.
Thirteen learners who were present for all or most of the classes I observed during the first few weeks of fieldwork were invited to be interviewed as part of the study. Eleven students agreed: Nine were African American (three women, six men) and two were Latinx (one woman, one man). The oldest interview participant was a 72-year-old African American woman, and the youngest was a 26-year-old African American man. Both Latinx participants spoke English fluently but reported having difficulty reading and writing in any language and expressed a desire to improve their English print literacy skills.
All of the focal learners were identified by the program as basic-level readers, and most of them reported having difficulty with some elements of reading. However, my observations of their reading in class suggested that they possessed a wide range of reading skills. A few of them demonstrated difficulty identifying single letters and their sounds, whereas others knew letters and sounds but encountered difficulty and frustration when they tried to read short words and sentences. Others reported regularly reading extended texts for work or recreation but would sometimes get stuck on unfamiliar words or make mistakes that obscured the meaning of the text. One learner read all the available classroom material with speed and accuracy but repeatedly received low scores on her assessment tests. Notably, nine of the 11 focal learners reported having been identified as learning disabled or placed in special education when they were children. Although the topic of learning difficulties and the relative merits of assessment and diagnosis in the adult literacy population have been debated at some length (Belzer & Ross-Gordon, 2011; Mellard & Patterson, 2008), the predominance of learners with a history of special education in this group adds weight to calls for improved teacher preparation and PD, so that ABE teachers can provide adequate and equitable educational services to all ABE learners.
In addition to these learners, seven TLC staff members agreed to be interviewed as part of the study: two teachers, a counselor, the tutor coordinator, the associate director of instructional quality, the director of education, and the executive director. These practitioners had worked at TLC for 2 to 17 years, and many had experience with ABE instruction or administration prior to coming to TLC. Ms. Birch, the teacher of the focal reading class, was an African American woman in her sixties and had taught basic reading and math classes for many years. Although the staff of TLC was racially diverse overall, the two uppermost administrators were both White. As I articulate more fully below, my own whiteness and status as a PhD candidate influenced my data collection with both learners and practitioners.
Data Collection
Data collection took place over an 8-month period and included participant observation, writing field notes, conducting interviews, and collecting classroom and program artifacts (Hammersley & Atkinson, 2007). For 4 months, I acted as a volunteer aide in the class, which met twice a week for 2.5 hr per session. Classroom interactions during these visits were audio-recorded and transcribed. As an aide, I mostly assisted those students who Ms. Birch or I felt needed support completing in-class assignments, although I occasionally taught small portions of the class if Ms. Birch needed to step away. In addition, I met one-on-one several times before or after class with three students who had difficulty with letter and sound identification and sight words; together we worked on reading material I had provided to supplement the instruction they were receiving in class. After these 4 months, I conducted monthly or bimonthly follow-up visits to the site for four more months. Over this 8-month period, I also attended a new student orientation and a meeting for all students in the program. In all, I visited the program 44 times. Observational data from all program visits were documented in field notes written during or shortly after visits (Emerson et al., 2011), and I collected copies of any instructional and informational materials given to students.
Throughout the study, I conducted 32 interviews with students (21) and staff (11). All interviews were audio-recorded and transcribed except for one, which was documented by extensive field notes. Interviews ranged in length from 18 to 141 min, but most were in the range of 30 to 80 min. Most interviews took place in a private room at TLC, but eight interviews were conducted in learners’ homes, one was conducted over the phone, and two of the interviews with Ms. Birch were conducted in transportation vehicles (her car and a public bus).
Initial interviews with students were semi-structured, with three or four open-ended questions that asked about their past educational histories, their experiences enrolling at TLC, and their educational goals. Because I hoped to understand learners’ experiences over time, I interviewed most learners twice, with 3 to 6 months between interviews. Two of the 11 learners were unable to complete a second interview, and one learner was interviewed a third time.
At our first interview, one student reported that he and others were nervous about being interviewed, possibly due to students’ perceptions of me as affiliated with the program and thus in a position of relative authority. Although I introduced myself as a graduate student, students often described me as a teacher or tutor and sometimes called me to request homework or explain why they missed class. Furthermore, my social positioning as a White, middle-class woman who started our relationship by requesting that they sign a complex informed consent document likely mirrored other bureaucratic situations in which learners faced potential consequences for the revelation of personal information, such as interactions with welfare, immigration, and criminal justice systems. However, as our relationships deepened over time, many students’ anxiety seemed to diminish, and our interviews became fully unstructured, exploring students’ experiences of program processes as they unfolded. We discussed their thoughts and feelings about their progress, classroom instruction, testing, and TLC’s attempts to produce outcomes. Students’ perspectives of events described in this article are explored more thoroughly by Pickard (2019).
The teachers and administrators also seemed to view me as “one of them,” rather than as a neutral party or as aligned with the students. I was given easy access to upper-level administrators; both the executive director and the director of education participated in interviews that lasted 108 min and 92 min, respectively. It seems likely that this access was facilitated by my status as a White, middle-class doctoral student and that a perceived affiliation supported practitioners’ willingness to speak frankly with me about their perceptions of students. Interviews with staff initially explored their perceptions of state and institutional responses to the needs of adults who have difficulty reading but began to address the program’s efforts to comply with accountability policy as events changed in real time. Ms. Birch, the classroom teacher, participated in four interviews, spread across the study period. The counselor, who also had frequent contact with learners in this class, was interviewed twice, 3 months apart. All other staff were interviewed once each.
Analysis
Data analysis began during the data-collection period and consisted of listening to recorded interviews and classroom interactions, reviewing field notes and program artifacts, and writing memos, to deepen understanding of context, focus the scope of future data collection, identify early themes, and point to potential directions for analysis (Hammersley & Atkinson, 2007; Maxwell, 2012). Simultaneously, open coding of transcripts, field notes, memos, and artifacts regarding focal students’ experiences in the program was conducted using ATLAS.ti qualitative data software. Although the research question originally guiding this ethnography concerned the impact of policy-driven discourses of workforce development on instruction for adults with difficulty reading, early data analysis indicated that accountability-driven standardized testing, rather than workforce development, was the prevalent framework for instruction. Data collection then began to focus more explicitly on the influence of accountability on program practice, teachers’ thinking about their work, and students’ experiences. Coding then proceeded using the constant comparative method (Glaser & Strauss, 1967, as cited in Hammersley & Atkinson, 2007) to organize similar codes into categories and group these categories into themes. Two prevalent themes were identified: limits to educational access for adults with difficulty reading and the influence of accountability on program practices. Given the nature of ethnography, the data presented in this article are necessarily partial (Anderson-Levitt, 2006) and focus on the second theme, the influence of accountability on program practice.
Findings
As practitioners at TLC sought to comply with federal accountability policies, services to adults with difficulty reading became increasingly focused on achieving rapid standardized test score improvements. This focus on accountability limited in-class opportunities for substantive engagement with reading and negatively shaped practitioners’ perspectives of student educability and the contours of acceptable program practice. Below, I explore in depth how practitioners’ attempts to comply with accountability policy negatively influenced services to TLC students who had difficulty reading. I will offer commentary and discussion of these findings as they are presented.
Teaching and Learning as Test Preparation
Although instruction in this reading class addressed a range of topics, such as personal goal-setting, language arts concepts, and short reading passages that Ms. Birch thought would be of interest to students, practitioners regularly framed these activities as in the service of performing well on standardized tests, specifically the General Educational Development (GED) test and the TABE. This framing was apparent in both classroom discourse and instructional materials. For example, 17 of 22 audio-recorded classes included discussions of these tests, some of them quite lengthy; one lecture regarding test preparation lasted just under 19 min. During class time, students mostly completed worksheets and short reading passages photocopied from books targeted to the ABE market, frequently from the series
In my first month of fieldwork, most test-related classroom talk focused on the GED. Having students successfully complete high school equivalency testing has long been a widely recognized ABE program and policy goal. In addition, many students who enroll in ABE programs do wish to obtain a high school equivalency diploma. However, in this class, framing teaching and learning about reading as preparation for the GED disconnected classroom work from the other literacy goals articulated by students, particularly those with difficulty reading. In our interviews, students identified goals such as learning how to read, spell, and sound words out; carrying out real-life reading and writing tasks, such as completing medical forms, independently; reading the driver’s license test manual; and reading well enough to get deeply engaged with a story. However, these goals went largely unaddressed.
Although practitioners at TLC acknowledged in interviews that many students in this class may never take the GED test, references to the GED were regularly invoked to motivate students to embrace classroom activities. For example, the teacher would describe activities, such as summarizing paragraphs or map reading, as similar to activities required on the GED. However, on several occasions, students told me explicitly—and with some emphasis—that they did not want their GEDs, as in this exchange:
What are your goals for the class? What do you want to have happen?
I
Her clarity here may be an indication of the strength of her feelings or perhaps that the ongoing allusions to the GED left her feeling that her goals were unacknowledged. Another student expressed resentment about the emphasis on the GED and the lack of support for reading development:
Everybody on me, push me to get my GED. I wouldn’t mind trying to get it, but I don’t really want it. I just want to read. That’s it. That’s all I want to do.
Yeah, but you feel like Ms. Birch and the tutor are, do they talk to you about the GED?
No, no, they just telling everybody to get it, but it’s like, I want to tell them, I don’t care about it. I just want to read, that’s it.
Yeah. Do you feel like you’re getting support for reading?
A little bit.
Despite this disparity between student interests and program goals, the in-class focus on test preparation only intensified during the course of my fieldwork; however, the focus shifted away from the GED and toward the production of immediate improvements on the TABE test.
About a month into my fieldwork, TLC administrators informed teachers that the program was not meeting its WIOA funding-related accountability outcomes for the number of students enrolled and the percentage of those who demonstrated acceptable progress on the TABE. From that point on, demonstrating progress on the TABE became a focal point of classroom discourse and instruction. This shift in focus was accompanied by a sense of urgency, which was frequently conveyed to the students. In the first class after administrators shared the need to produce outcomes, Ms. Birch administered the TABE test to her students. At the next class, she told the students, “I cannot enter those scores from them TABE tests that you took into the computer. That’s how devastating they are. I don’t know whether to laugh, to cry, to scream, or just pass out . . . One of the things that this class has to do is to show some gains in these reading scores.”
Similar messages of the urgent need to show progress were repeated throughout the next 3 months, and some classroom practices reflected this need. Ms. Birch increased the frequency with which she timed classroom activities, engaged in lengthy discussions with students about test-taking skills, and paid close attention to the feedback she was given by administrators each time her students took the TABE. However, the specific content of the class did not differ substantially from what it had been during my first month of fieldwork. Students still completed photocopied worksheets focused on language arts concepts, read short passages with multiple-choice questions, and talked about life goals. One explanation for this consistency may be that although the TABE was not prevalent in classroom discourse until midway through my fieldwork, the specific skill requirements of the TABE were already informing how teachers and administrators defined teaching, learning, and educational progress within the program.
Each time students at TLC were given the TABE, administrators provided instructors with a computer-generated description of the types of questions each student was and was not correctly completing. Ms. Birch and other teachers at TLC reported how much they appreciated these detailed TABE handouts because they believed these descriptions isolated the specific skills students needed help with to show progress. The following excerpt from class discussion illustrates Ms. Birch’s thinking about how these handouts should influence her reading instruction: “Each class, I’m going to work on the skill that the majority of you seem to need support on when you’re taking TABE tests. I can tell you right now, the three problem areas in reading. Number one is recall, meaning that you are able to read a passage and then answer questions about what the story was about. Number two: You picking the correct word to use appropriately in a sentence. That’s another one. And then number three is what we call consumer materials . . .”
However, this articulation of what constituted important work in this reading class had several negative implications for the students with difficulty reading. First, it suggested to these students that reading is a set of isolatable, rather than interrelated, skills. However, reading is better conceptualized as a complex, interactive set of skills and strategies that vary depending on text, task, purpose, and audience (Pressley, 2002; Purcell-Gates et al., 2002). Second, this framing potentially misdirected students’ time and energy away from reading for the life purposes they articulated in our interviews and toward an exclusive focus on reading for testing. Third, and perhaps most importantly, this instructional emphasis on reading for the TABE suggested to students that their ability to improve their reading was tied to their ability to pass the TABE and, given the recurrent difficulty some of them experienced showing progress on the TABE, reinforced the idea that it was something they might never be very good at.
Practitioners’ reliance on the framework presented by the TABE was likely facilitated by the perennial lack of knowledge about adult beginning reading instruction among ABE teachers (Bell et al., 2004; Smith, 2010; Smith et al., 2003; Smith & Gillespie, 2007; Ziegler et al., 2007). Smith et al.’s (2003) research with 106 ABE teachers found that 53% had completed no formal educational coursework in adult education and an additional 27% reported having completed three or fewer of such classes. These results included formal training in generalized adult education theory and adult English instruction; thus, it is likely that even fewer instructors had any specific training in teaching reading to adults, much less in teaching those who have difficulty reading. Bell et al.’s (2004) survey of 208 ABE practitioners’ instructional knowledge of reading found that educators demonstrated knowledge of only 48% of the content assessed by their tool and that those who taught beginning literacy students had a significantly lower mean score than those who taught multiple levels. Furthermore, a survey of 468 ABE and ESOL teachers found that only 6% of practitioners reported having any formal training addressing learning disabilities (Ziegler et al., 2007). It seems likely in this context that the diagnostic descriptions of students’ skills that accompany tests such as the TABE may offer instructors the most information some of them have ever received about what constitutes reading or the teaching of reading.
A view of TABE score improvement as the defining paradigm for instruction was evident across practitioners and subject matter at TLC. In an interview, an assistant director in the program articulated a similar framing of learning and teaching in the context of math: [Some] students we just talked to them and asked them to let us know what did they think was the reason they’re not making gains. We had this one student who said, “Well I know every time I TABE I get those questions wrong with graphing.” So, OK [teacher], you have to work with her because that could be just one question that she’s missing that can potentially move her up to the next level.
In this perspective, the important work for teachers at TLC to do was target isolatable skills deficiencies and help students make test score gains so they could transition from one educational functioning level to the next. Some might reasonably argue that patching up skills in this “just in time” manner can support learners in passing the gatekeeping tests that may stand between them and their long-term educational or employment goals. Although this may work well for students who already have an established foundation of reading skills, for adults who have difficulty reading, this kind of patchwork approach seems unlikely to lead to improvement in reading skills or progression toward students’ goals. These students are more likely to benefit from instruction in which reading skills are an embedded part of regular opportunities to read a wide range of real-world texts (Purcell-Gates et al., 2002; Rodrigo et al., 2007).
The degree to which TABE testing structured practice at TLC was likely accelerated by the intense pressure practitioners felt to meet their outcomes agreements with the state. Practitioners repeated on numerous occasions that they believed they faced the loss of their jobs and the closure of their program if they did not produce student outcomes on standardized tests. In this context, administrators began to take drastic measures to meet state-required program outcomes.
Testing, Ineducability, and Exclusion
In the fourth month of my fieldwork, several practitioners reported that students deemed unlikely to show progress were being removed from the program. Early in my fieldwork, the director of education suggested that other local programs were already “creaming” lower-scoring students because of their failure to show gains and articulated strong resistance to the idea that such a process would take place at TLC: I can tell you that the [municipal literacy referral agency] would love for me to say, “I’ll take [all lower-scoring students].” But, I can’t take them all . . . You don’t get the same type of outcomes and impact with people who are reading below fourth-grade level that you do with higher level students. But we’re, you know, we’re not, that, that would encourage creaming and I don’t think we’re an organization that creams. I mean, if you come to us and you want services, we do our very best to help you.
However, when faced with the reality that the needed outcomes were not materializing within the state-specified timeline, the idea began to take hold that for the program to survive, students who did not show gains should be removed. In other words, practitioners at TLC came to believe that compliance with accountability policy required actions that were counter to the inclusive purposes expressed by the organization’s director of education, above.
Some practitioners then began what might be described as a process of moral rationalization (Tsang, 2002). In moral rationalization, individuals attempt to reconstruct scenarios in ways that would make their actions consistent with their morality. One way this can be accomplished is by positioning the victims of immoral actions as responsible for those actions. In my conversations with practitioners during this time, several put forward explanations that centered students as primarily responsible for their failure to show sufficient gains. This position is exemplified in the following exchange with the assistant director:
Why do you think that those students aren’t making gains?
There’s a lot of reasons. I think just because they may have just plateaued. Maybe they have learning differences, and maybe they really don’t have any goals, they’re just coming because it’s something to do. And it could be from instruction. It could be just barriers that they have at home that’s interfering with working in class. So it could be a number of things.
We did start taking a look at students that were here for over five years and they were making
In this telling, TLC's accountability problems were located within certain students; therefore, removing those students from the program began to make sense. In my experience as a researcher and practitioner, characterizations of ABE students such as those articulated by this administrator are extremely common. However, the explanations offered in this rationalization merit further exploration.
Foremost among these explanations is the notion that students had “plateaued.” In other words, this practitioner claimed that these students could not benefit from any further instruction—that they had reached the limits of what they were capable of doing, and thus were acceptable targets for dismissal from this program. The claim of student ineducability has historically been invoked to deny access to a range of educational programs, particularly for students of color and students with disabilities (Jensen, 1973; MacDonald, 2004; Reed Martin, 1981; Menchaca, 1997). However, educability is a context-dependent, ideological construct, and the question of whether any student can credibly be categorized as ineducable has been widely debated in the courts (Reed Martin, 1981). Federal legislation, including the 1973 Rehabilitation Act and the 1990 Americans with Disabilities Act (ADA), legally mandates access to public education, including ABE, regardless of perceived educability (Lee, n.d.; Taymans, 2010). Despite these legal protections, student ineducability was utilized as an explanation for excluding students from this publicly funded program. Furthermore, this framing of students as ineducable was facilitated by an accountability-driven view of educational progress and learning as uniquely indicated by score improvement on the TABE. In this way, accountability as articulated in WIA and WIOA may be working at cross-purposes with federal legislation designed to provide legal protection to adults with disabilities.
In addition to promoting a view of students as ineducable, the hyperfocus on rapid TABE score improvement encouraged a deficit view of program participation itself. Students who had invested substantial time in the program—in some cases, up to five years—were framed not as demonstrating persistence or even as in need of support to succeed in a worthwhile activity; rather, they were seen as engaged in a pointless pursuit, regardless of the small test score improvements they may have made or any other benefits they may have experienced as a result of their participation. In the context of accountability, ongoing ABE participation without a gain of an entire Educational Functioning Level (EFL) was described by this practitioner as a meaningless choice of “something to do.” For other types of adult learning, or perhaps with a different lens on adult literacy learners and programs, such long-term participation might be considered a valued form of community engagement and self-improvement, with or without predefined indicators of progress. Many researchers have described the range of benefits that can ensue from long-term participation in an ABE program (Comings, 2007; Nash & Kallenbach, 2009). However, at TLC, accountability encouraged practitioners to view their own work as extremely narrow in scope and to delegitimate the substantial possibilities for educational, social, civic, and personal growth that were once considered central to ABE program participation (Beder, 1991; Belzer, 2017; St Clair, 2010).
Another of the assistant director's claims worth further consideration is the idea that non-progressing students were “taking up a spot for somebody else.” Although TLC did have a long waiting list, this explanation is rooted in an unfounded assumption that the list was full of people who
Even with these rationalizations, few practitioners at TLC felt good about removing these non-progressing students. The assistant director quoted above described her own dislike for the process: It is a tough call. It was sad too. It really was. Because after we did, I said, I don’t want to do this anymore. I mean, this is their livelihood, you know? Some of them, this is what they look forward to do. [But] it doesn’t really help us if they just coming every day and they’re not learning. That’s not really helping us. So . . . yeah, it was hard.
Despite this discomfort, in the context of a poorly funded ABE program with high-stakes accountability such as TLC, practitioners felt that their options for serving students who did not make progress were extremely limited, and learners who did not make sufficient gains were seen as a serious liability to the program’s very existence.
In the sixth month of my fieldwork, students reported participating in meetings with Ms. Birch in which they were apprised of their enrollment status. Those who had shown gains were told they were fine; however, one student was given a month to improve his TABE scores, and two others were informed that it was their last day. According to practitioners, program exclusion at this point was being expanded to include students who had not shown test score gains after shorter periods of time and newer students who teachers predicted would be unlikely to show gains in the near future.
ABE programs have always had to navigate the tricky balance between the disparate goals of funders, practitioners, and students (Beder, 1991; Belzer, 2007; Quigley, 1997). However, accountability policies had created a seemingly unresolvable situation at TLC: Practitioners felt they had to choose between the survival of the program as a publicly funded entity and allowing lower-performing students to participate. In separate interviews with the executive director and the director of education, both articulated having conversations with the board about whether TLC should abandon its mission as a literacy agency, or if it should stop accepting government funding so that the program no longer needed to focus on standardized testing and employment preparation.
However, in the period during which this study was conducted, practitioners at TLC felt stuck. As they were forced to make decisions in these circumstances, marginalizing practices such as the categorization of some students as ineducable and the removal of low-performing students from the program began to make a kind of sense. Furthermore, if low-scoring students were going to be enrolled in the program only to be expelled later, not allowing them to enroll in the first place came to seem to some practitioners as a potentially just, rather than an unjust, procedure. This response from a program counselor was representative of the views I heard from a number of teachers and administrators: I understand if that’s how we’re going to get our funding, I do understand. [But] I think it needs to be clearer. I think we need to have a better process of orientation and interviewing, so that we can accept “the right people” into the program.
Changes to orientation processes like the one recommended by this counselor are designed to better fulfill accountability requirements but would likely mean the exclusion of many adults—some with difficulty reading or learning disabilities and some simply without the experience and skills to show rapid test score improvement. As more publicly funded programs seek to align their offerings to better comply with accountability requirements, fewer public literacy services may be available to these adults.
Discussion
In the current climate of accountability, adults who have difficulty reading are not “the right people” to participate in publicly funded ABE programs. Many of these programs once considered themselves responsible for helping students define and meet personal literacy and educational goals. However, TLC’s evolution to a program with top-down accountability has shifted the focus of its ABE programs to the fulfillment of policy expectations. These expectations are geared to the demonstration of rapid, measurable returns on investment and are, by their very nature, impersonal and ill-suited to adult learners who have difficulty making rapid progress. The data presented here illustrate how, in one setting, accountability policy limited the type of instruction that learners had access to and encouraged the classification of some learners as ineducable. A result of both processes is that adults who wanted help with their reading were not able to get it. This result has negative consequences that reverberate in learners’ lives and throughout our collective communities.
Although the research in this article is specific to the context of TLC, the challenges these practitioners faced are likely not unique, given the widespread influence of federal accountability policy. At TLC, marginalizing practices such as test-specific teaching and program exclusion became a normalized part of compliance with federal policy. These programmatic responses to accountability pressure prompt many questions that could form the basis of further research. Is what happened at TLC typical of what is happening in other federally funded programs? What happens next for students who are asked to leave a program? The answers to these questions are urgently needed if we are to create an adult literacy system that does not marginalize or exclude adult students.
Recommendations
The contemporary social and policy context in which publicly funded adult literacy programs attempt to provide educational services is, at best, trying and, at worst, impossible. Programs are asked to provide an educational panacea for multiple social issues, using extremely limited funds and a relatively untrained workforce. These realities make targeted improvements even more important.
There are a number of fairly straightforward changes that would improve services to adults who have difficulty reading. Individually, these recommendations would have limited impact. Taken together, they could make a substantive contribution to improving literacy education in ABE.
Funding for Reading Specialists and Special Education Training
The obvious elephant in the room is the need for more funding. Approximately 2.5% of federal spending on education is targeted toward adult education, and ABE makes up only about a third of that (Roumell et al., 2019). Furthermore, federal funding for public adult literacy programs has been reduced by 18% since 2001, and substantial additional cuts have been proposed (National Skills Coalition, 2018). This limited funding means teachers’ pay is rarely competitive enough to attract certified reading specialists to the field. Money needs to be directed toward increasing the number of reading specialists engaged in adult literacy education, either by raising teachers’ salaries enough to attract certified reading specialists from outside the field or by paying for the training necessary to certify teachers who are already in the field. There is a growing crop of online, graduate-level programs that offer classes in ABE, and there are many programs that offer online reading specialist certification. PD options should include opportunities to participate in these courses or in the acquisition of reading specialist certification; however, given the extremely low salaries of ABE teachers, this training should be paid for with government funds or included in loan forgiveness programs in exchange for years teaching adult literacy learners. Guaranteeing that each program or at least each region has access to reading specialists who could act as consultants and PD leaders could substantially improve the knowledge base about reading instruction among contemporary adult literacy instructors. While Amstutz and Sheared (2000) have argued that it is adult literacy programs’
Importantly, nine of 11 focal participants in this study indicated that they had been identified as needing special education in their youth. It is possible that the Americans with Disabilities Act Amendments Act (ADAAA) and the Rehabilitation Act, if enforced, would protect these students from exclusion from publicly funded ABE programs. However, the legal requirement for specialized instruction is limited to K-12 schools and times out at the age of 21 years (Taymans, 2010). Special education certification or training in specialized instruction for adults with learning disabilities could substantially improve ABE instructors’ abilities to serve adults with difficulty reading.
Revision of Assessments and Outcomes Expectations
For programs to be willing to direct crucial funding toward adults who have difficulty reading, accountability-driven assessment practices for these learners need to be revised. As this study indicates, enrolling these adults is presently perceived as working against programs because of the belief that such learners do not easily show progress on the TABE and other standardized assessments. Federal policies should allow for alternative assessment of these learners, such as practice-based assessments that keep track of changes in how adults are using literacy in their lives (Reder, 2009).
Furthermore, research, policy, and practice need to move beyond the discourse of immediate “outcomes and impacts” and begin to explore how literacies are intimately interwoven with quality of life and quality of place for the large numbers of adults who have difficulty reading. Literacy growth as contextualized in learners’ lives—and not just as indicated on a test—should be recentered as part of the mission of ABE programs.
Programmatic Changes
The previous suggestions are critical to long-term structural change; however, there are actions programs can take within existing policy constraints that could improve services to adults who have difficulty reading. My overarching suggestion is that practitioners spend substantial time talking with students about their literacy goals and make efforts to connect classwork to these goals. Students could be encouraged to bring reading material from home, and programs could replace worksheet-based instruction with authentic, familiar materials connected to students’ stated purposes for literacy learning, thereby potentially increasing engagement (Rachel Martin, 2001; Purcell-Gates et al., 2002) and comprehension (Perry et al., 2017). Furthermore, there is potential within this approach to strike a balance between attending to students’ interests and supporting programmatic requirements for measurable outcomes. Because genre familiarity can serve as a support to comprehension (Perry et al., 2017), it is possible that research-guided reading skills instruction grounded in authentic, familiar texts may also act as a support to the successful learning of assessment-related skills.
My second suggestion is that programs meaningfully include writing as part of ABE instruction. Writing development can support reading development (Graham & Hebert, 2011) and can substantively expand the range of literacy practices with which adult students are able to engage. However, writing has generally been erased from policy conceptualizations of adult literacy (Perry et al., 2018), and data from this study suggest it received little attention at TLC. Although more research is needed about how best to incorporate writing in literacy instruction for adults, students in this study expressly named writing and spelling among their literacy goals.
Finally, programs should consider incorporating practice-based assessment (Reder, 2009) in addition to the required standardized testing. If programs keep track of changes in how students engage with literacy practices outside of school, those who have difficulty making standardized test score gains—and the practitioners who teach them—might be better able to see and understand their literacy growth.
Conclusion
Despite its public funding, ABE has not been subject to the same oversight regarding equity as other publicly funded education. Thus, investigations that support a nuanced understanding of how processes such as accountability can result in inequitable practices are even more essential. Much of what I have written here may seem familiar to ABE practitioners, and it may not seem noteworthy or concerning. However, as practitioners and researchers, we have both the right and the moral imperative to consider the implications of designing our practice to comply with unjust state mandates. Working in the difficult policy and funding conditions that define the field of adult literacy education can sometimes result in a guiding value statement along the lines of, “It may not be perfect, but it’s better than nothing.” However, if participating in an adult literacy program means being discouraged from authentic engagement with personal educational goals or having one’s motivation, effort, and ability questioned, then perhaps participation is not better than nothing. And that is a state of affairs worth considerable rethinking.
Supplemental Material
Supplemental material, sj-zip-1-jlr-10.1177_1086296X20986910 for Accountability in Adult Basic Education: The Marginalization of Adults with Difficulty Reading by Amy Pickard in Journal of Literacy Research
Footnotes
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
References
