Abstract
Literacy researchers have explored how video games might be used as supplementary texts in secondary English language arts (ELA) classrooms to support reading instruction. However, less attention has been paid to how video games, particularly online educational games designed to teach argumentation, might enhance secondary ELA students’ writing development. In this article, we describe how the pedagogical feedback provided by one such game, Quandary, influenced two seventh graders’ written arguments in advocacy letters addressed to the state governor regarding a local environmental disaster. We compare these two embedded cases to data from 10 focal students, as well as patterns from 114 seventh graders (in five ELA classes). Based on our analysis of screen-capture video of students’ gameplay, drafts of their advocacy letters, and video-stimulated recall interviews, we conclude that game feedback rewarding or penalizing predetermined right or wrong player moves may encourage students to develop argumentation strategies that are less effective in more complex rhetorical situations and may foster a false sense of competence.
In 2018, a Pew Research Center survey of people ages 13 to 17 found that 90% of U.S. adolescents play video games, a figure including both boys and girls, as well as teens of various backgrounds. Accordingly, education researchers have begun to examine how both commercial, off-the-shelf (COTS) video games and games designed specifically for educational purposes can be used for teaching and learning (Clark et al., 2014; Mitchell & Savill-Smith, 2004). Previous studies of how video games can be used in English language arts (ELA) have tended to focus on the integration of digital, online games in reading instruction (Adams, 2009; Steinkuehler, 2010). For example, online video games often have rich storylines that can be “read” and analyzed, much like those of novels, plays, films, or TV shows.
Nevertheless, some prior research has demonstrated that online video games, like other mass-entertainment media, can also inspire adolescents’ writing, both in and outside the classroom. For instance, teens can write fan fiction, create tutorial walk-throughs, or compose essays about the ethical and moral issues raised by online video games (Black & Steinkuehler, 2009; Gerber & Price, 2011). Whereas these previous studies have approached online video games as texts about which adolescents might write arguments, other research has regarded video gameplay itself as ongoing argumentation. For example, COTS video games can invite players into scenarios requiring them to compose emergent fictional narratives (Gee, 2015), and educational games, or “edugames,” may engage students in solving simulated problems that parallel real-world issues (National Research Council, 2011). In particular, “pro-social games” that involve students in positive social interactions with broader communities can be used to promote civic engagement (Gerber & Gaitan, 2017): “Teens with the most (top 25%) civic gaming experiences were more likely to report interest and engagement in civic and political activities than teens with the fewest (bottom 25%)” (Lenhart et al., 2008, p. vi).
In addition, prior research has highlighted the abundance and timeliness of video-game feedback on teen players’ progress toward game goals (Prensky, 2001; Wouters et al., 2013). Such feedback, which can allow students to experiment, to monitor their progress, and to adjust during gameplay (Abrams & Gerber, 2013), may contrast with secondary teachers’ responses to students’ writing, which may address only later drafts of a text at the end of a composing process. Nevertheless, opportunities remain to investigate the pedagogical quality of specific kinds of feedback offered by video games on adolescents’ emerging arguments.
In this article, we describe how feedback provided by the free online edugame Quandary, which was designed to teach argumentation to “players aged 8–14” (Learning Games Network, 2012), influenced two seventh graders’ written arguments in their advocacy letters to the state governor regarding a local environmental disaster. We compare these two cases to data patterns from 10 focal students and a larger data set of 114 seventh graders (in five ELA classes), who also participated in this 1-month unit of study on argument writing for environmental action in a suburban public middle school in the southern United States.
Background
Argument Literacy
For success in their educational, professional, and civic endeavors, adolescents must learn to critically analyze and evaluate evidence, claims, and interpretive perspectives on an issue, and to creatively synthesize and apply these resources in making their own compelling arguments. Thus, a fundamental strand in the secondary ELA curriculum is students’ development of what Graff (2004) has called “argument literacy.” In particular, argument writing is prioritized in national and local standards for secondary-literacy education (National Governors Association Center for Best Practices & Council of Chief State School Officers, 2010). Moreover, standardized tests of secondary students’ persuasive-writing abilities are often used (advisedly or not) to assess students, teachers, and schools. Nevertheless, according to some national studies, many secondary students are not prepared for advanced writing demands (Graham & Perin, 2007). Indeed, despite the importance of learning to write effective arguments for full participation in community life, the National Assessment of Educational Progress (NAEP) “Writing Report Card” has found that only 27% of eighth and 12th graders performed at the “proficient” level on persuasive-writing tasks (National Center for Education Statistics [NCES], 2011). Thus, there is an urgent need for additional research on the teaching and learning of argument writing in secondary schools.
Recent research on argument-literacy development in elementary, secondary, and postsecondary education has identified a number of factors critical to students’ writing of effective arguments, including audience awareness and the integration of opposing viewpoints to enhance one’s own stance on an issue (Felton et al., 2015; Midgette et al., 2008; Newell et al., 2011). Moreover, some literacy researchers have argued that classroom-talk activities, including debates, discussions, and collaborative problem-solving, can scaffold students’ composition and revision of written arguments. These activities can provide students with opportunities to invent and refine arguments orally before they begin writing, and to explore potential consequences of their emerging arguments on actual audiences, who can respond immediately (Kuhn et al., 2016; Reznitskaya & Wilkinson, 2017). In fact, such formative feedback—oral and written, from both teachers and peers—has been shown to be among the most important constructive influences on students’ writing achievement (Graham et al., 2011; Hattie & Timperley, 2007). However, the relative effectiveness of different response contexts, practices, and agents remains to be explored.
Digital Learning Environments
A significant body of research has highlighted the potential of digital, online games for promoting learning in various subject areas, precisely because of the continuous feedback on players’ goal-directed participation that such games provide (Clark et al., 2014; Malone, 1981; Prensky, 2001). While COTS games may have educational as well as entertainment value, researchers have also studied “serious games” and “edugames” designed specifically to immerse students in educational experiences (Sitzmann, 2011; Squire, 2011; Wouters et al., 2013). In such edugames, the cybernetic systems that encourage or discourage certain kinds of player performances also align with learning goals (Salen & Zimmerman, 2006). In particular, an emerging line of inquiry has begun to investigate digital and online tools, including games, as scaffolds for students’ composition and evaluation of written arguments (Scheuer et al., 2010).
Education researchers specializing in the school subjects of science and social studies have examined the use of online games and simulations as tools for fostering students’ inquiry and argumentation (Chapman et al., 2017; National Research Council, 2011; Ravenscroft & Matheson, 2002). For example, computer-based conceptual maps have been shown to support young adults’ independent and collaborative development of evidence-based arguments in problem-based science-learning tasks (Belland, 2010). Similarly, Schwarz and De Groot (2007) have observed that adolescents’ exposure to multiple perspectives and counterarguments while working with “graphic tools for argumentation in e-discussions” was an important factor in how these digital scaffolds helped secondary students to improve their written arguments about historical events (p. 297).
Beyond scaffolding tools that stimulate interest, digital online games may be conceived as “learning environments” that collaborate with learners as they playfully interact with those environments through reading and writing (Alberti, 2008; Berninger & Winn, 2006). Such digital contexts can encourage playfulness that is educational and possibly also supportive of political activism (Glas et al., 2019). For example, “persuasive games” can enlist players through procedural rhetoric, or rules of play, in crafting narratives that argue for social action (Bogost, 2016; Salen & Zimmerman, 2006). Regarding environmental action in particular, researchers have begun to explore the benefits and challenges of using games to encourage ecological citizenship (Gabrys, 2019; Gerber & Gaitan, 2017; Raessens, 2019). As literacy researchers respond to the call to teach environmental literacy (Beach et al., 2017; National Council of Teachers of English, 2019), these previous studies have revealed opportunities for similar research on the use of immersive argumentation games in ELA contexts.
Pedagogical feedback from digital tools, including online video games, has received relatively little attention from English-education researchers, despite scholarship on the educational value of game-based opportunities for students to experiment, to fail, to monitor their progress, and to adjust their performance in response to frequent and instantaneous responses from game-based feedback systems (Abrams & Gerber, 2013; Bogost, 2016; Juul, 2013; Salen & Zimmerman, 2006). The relative lack of ELA research on game-based feedback may relate to the field’s aversion to “machine scoring,” or computerized assessment of students’ writing (Herrington & Moran, 2001). The National Council of Teachers of English (2013) has condemned machine scoring in a recent position paper, emphasizing that computers use “different, cruder methods than human readers to judge students’ writing” (p. 1)—for example, gauging the sophistication of students’ vocabulary by measuring the length of words in their written texts, and evaluating idea development by counting the number of sentences in a paragraph. Notwithstanding those concerns, in this article, we describe positive, as well as negative, influences of different kinds of feedback provided by the online game Quandary on two seventh-grade ELA students’ emerging arguments about science-fiction and real-world environmental crises.
Theoretical Framework
A meta-analysis of research on formative assessment of students’ writing has defined feedback as including oral and written comments and questions provided by either adults or young readers on students’ compositions, and the modeling of writing strategies by teachers or classmates (Graham et al., 2011). However, a synthesis of meta-analyses determining the influence of teacher and peer responses to students’ writing has conceptualized feedback more broadly as any “information provided by an agent (e.g., teacher, peer, book, parent, self, experience), regarding aspects of one’s performance or understanding” (Hattie & Timperley, 2007, p. 81). By this more inclusive definition, feedback is not limited to teachers’ and classmates’ oral or written responses to students’ participation in school tasks; instead, it also encompasses the multimodal discursive interactions in which students engage while playing educational video games (Prensky, 2001; Wouters et al., 2013).
In this article, we describe pedagogical feedback provided by the online game Quandary on two seventh graders’ emerging arguments regarding public water contamination. This feedback included oral and written discourse, as well as other nonverbal (e.g., visual, auditory) communication in the gameworld. For example, while playing Quandary, students made decisions that drew reactions from game characters via oral and written dialogue. Moreover, students’ participation in the game elicited nonverbal feedback (e.g., points, advancement). Applying this expanded definition of feedback to Quandary enabled us to account for the visual syntax (logical sequencing) and the semantics (meaningful relationships) of the gameworld, which Gee (2015) has suggested are important considerations for understanding how video game players accomplish goals and solve problems in fictional, interactive contexts.
Previous studies of oral classroom discourse have distinguished between “closed” pedagogical interactions, in which teachers aim to elicit a single right answer from students, and “open” alternatives, in which teachers invite students to generate multiple responses, none of which is known or favored at the outset of the activity (Nassaji & Wells, 2000; Nystrand et al., 1997). Studies of whole-class discussions as scaffolds for students’ argument writing have drawn on this distinction between closed and open feedback to evaluate teachers’ oral responses to students’ participation in such dialogues (Wilkinson et al., 2017). Similarly, studies of teachers’ written responses to students’ argument writing have differentiated between two kinds of teacher feedback that each set a different task for writers. “Corrective” feedback prompts writers to edit their compositions as though their texts were in a late developmental stage; “facilitative” feedback encourages writers to inquire and experiment with their compositions as though their texts were in an early developmental stage (Beach & Friedrich, 2006).
Educational video games also provide feedback to student players, as the game assesses user input and provides timely information to players about their progress (or lack thereof) toward game goals (Rogers, 2017). Research on edugames has focused primarily on the learning benefits of what Wouters et al. (2013) have termed “corrective” feedback, which might be described in terms of cybernetic systems as “negative” feedback, or responses that limit players’ options and drive their progress toward predetermined goals (Salen & Zimmerman, 2006). However, Ravenscroft and Matheson (2002) have investigated the effectiveness of what they call “facilitative” feedback, which we compare to “positive” feedback in cybernetic systems, promoting expansion (rather than narrowing) of possibilities. In our view, each Quandary game scenario is a “negative,” or closed, cybernetic feedback system that propels players to make choices, all of which culminate in the same fixed outcome.
Feedback in edugames also depends on how well the game’s evaluation of player performances aligns with curricular goals (Gerber & Gaitan, 2017), as well as the perceived connection between feedback and outcomes (Rogers, 2016). Accordingly, in this article, we focused our analysis on two kinds of feedback provided by different stages of the “Water War” game scenario’s globally negative feedback system: “evaluative” feedback, which awards or withholds points and/or game advancement, and “nonevaluative” feedback, which does not. We also attended to whether and how students seemed to take up the game’s feedback, and what that uptake (or lack thereof) suggested about the game’s design.
Methodology
Site Selection
We conducted our qualitative, discourse-analytic case study (e.g., Dyson, 1993, 2003) in five seventh-grade ELA classes at William Bartram Middle School (all names in this article are pseudonyms), a suburban public middle school in the southern United States, during a 1-month unit of study on argument writing for environmental action. This unit was designed in response to an industrial accident at the Kaleidoscope fertilizer plant in a neighboring county, which had leaked waste products into the underground water supply and would potentially contaminate not only local wells but also a state aquifer. During the unit, seventh graders wrote and mailed letters to the governor, advocating solutions to the problem, supported by evidence drawn from news sources.
This site was ideal for our study because the students’ local environmental crisis closely paralleled Quandary’s science-fiction game scenario, which students played during the unit. In that scenario, “Water War,” the public well of a space colony on Planet Braxos becomes contaminated, requiring the colonists and their captain (the role assumed by each game player) to explore and eventually recommend possible solutions to the problem. The real-world civic challenge of public water contamination had also appeared in national media through events like the Flint, Michigan, water crisis (Kennedy, 2016). The local immediacy of the Kaleidoscope accident made the “Water War” scenario even more relevant to students’ learning to write effective arguments. Previous studies of argumentation and online gaming have emphasized the urgent need for research in educational contexts where students engage in writing and revising arguments to participate in community discussions, beyond the classroom, about important real-world issues (Hilliard et al., 2016, p. 10; Newell et al., 2011, p. 297).
In addition, seventh grade is a critical transitional period in the Common Core State Standards for ELA, entailing a considerable intensification of demands for argument literacy, particularly regarding argument writing. For example, in fifth grade, students are expected to “write opinion pieces on topics or texts, supporting a point of view with reasons and information” (National Governors Association Center for Best Practices & Council of Chief State School Officers, 2010). But beginning in sixth grade, students must “write arguments to support claims with clear reasons and relevant evidence.” Seventh graders must also “support claim(s) with logical reasoning and relevant evidence, using accurate, credible sources and demonstrating an understanding of the topic or text,” as well as “acknowledge alternate or opposing claims, and organize the reasons and evidence logically.” Because students in seventh-grade ELA classes may be required for the first time in their educational history to analyze, evaluate, and synthesize in their own writing multiple (even competing) evidence sources, claims, and interpretive perspectives, we chose this school context as an appropriate site for studying how closed and nonevaluative feedback from an online game might shape secondary students’ development of written arguments.
Participant Selection
Among the 114 seventh graders who participated in our qualitative research project, we selected 10 focal students (two from each of the five ELA classes), whom their teacher, Jenna Hughes, had recommended as representatives of their peers (Bogdan & Biklen, 2007). In this article, we focus on two of these seventh graders: Ruby, a Latina girl, and DJ, a white boy. Although adolescent boys are the group most associated with gaming, a 2018 Pew Research Center survey has suggested that 83% of U.S. girls also game and that gaming is on the rise among Latinx teens. Accordingly, Ruby and DJ are relevant focal students to feature in our analysis. Moreover, their embedded cases illustrate patterns in our larger discourse data set.
Quandary
The digital, online game Quandary, which students played at the beginning of the seventh-grade ELA unit on argument writing for environmental action, was not only a site but also a participant in our study, given our theoretical stance on feedback as inclusive of input from nonhuman agents. While others have theorized the role of video games in learning in terms of human–computer interaction (HCI Games Group, n.d.), actor-network theory (Giddings, 2007), and material rhetoric (Bogost, 2008), we address students’ interactions with Quandary much as another study might describe and analyze pedagogical feedback provided by teachers, classmates, or traditional learning environments to highlight how edugame feedback can resemble and reinforce operative classroom discourse.
Quandary is an award-winning edugame that supports young people’s development of argument literacy by immersing them in a science-fiction world in which they play the role of a space-colony captain who must deliberate with fellow colonists on the best community response to problems arising on Planet Braxos. Each Quandary game scenario addresses a different civic challenge. The seventh graders participating in our study played “Water War,” which opens with a comic-style introduction to the quandary: The space colony’s sole public well has become contaminated by an unknown source, and colonists must decide what to do until the well’s undrinkable water has been purified.
Stage 1 of “Water War” introduces players to the 12 community members, who each represent a different vocation (e.g., doctor, farmer, historian) and embody that social role’s expertise, interests, and biases. By clicking each colonist’s character card, players can read that person’s perspective on the water crisis or watch and listen to the colonist’s animated oral delivery of the same text. After considering these viewpoints, players categorize the colonists’ comments as “Fact,” “Solution,” or “Other Opinion” by dragging each character card into one of three bins. (The game recognizes four statements per category as correct.) If players succeed in classifying at least two facts and two solutions accurately, they may proceed to the next game level.

In Stage 2, players choose two of the Stage 1 solutions to explore. In Stage 3, players pick a colonist from a smaller group of community members selected by Quandary, who then elaborates on her or his Stage 1 perspective. Based on this comment (written, oral, animated), players drag each of their two chosen solutions to the colonist to ascertain her or his response, earning points for doing so. Given this additional feedback, players then drag each Stage 1 fact to the colonist. If the fact is relevant (i.e., a counterargument) to the colonist’s stance on that solution, she or he rebuts that objection, and players receive points. Based on this interactive feedback from as many as eight colonists, players choose in Stage 4 what they consider to be the better solution.

In Stage 5, players decide which members of the Stage 3 group will support their chosen solution, picking no more than two proponents and earning points if their assessments are correct. In Stage 6, players similarly identify up to two probable opponents of the recommended solution, winning points for accurate matches. After Stage 6, the proposed solution is automatically submitted to the Colonial Council back on Earth, who approve it.
Finally, in Stage 7, players predict which colonists in the entire Stage 3 group will “agree” and “disagree” with the council’s decision, earning points for correct appraisals. “Water War” ends with a comic-style conclusion, showing the outcomes of implementing players’ recommended solution, which are both positive and negative. The quandary has been addressed, “and within a month, the colony well is clean again.” Across the stages of “Water War,” Quandary provides different kinds of pedagogical feedback on players’ participation: evaluative feedback when rewarding or denying points or advancement for predetermined correct and incorrect moves, and nonevaluative feedback when allowing players to learn about the community, the issue, and the eventual consequences of the council’s decision, without incentivizing any particular conclusion.
Data Sources and Data Collection
After playing “Water War,” the 114 seventh graders wrote drafts of their advocacy letters to the state governor, which we collected and analyzed. We also recorded screen-capture video of the 10 focal students’ individual Quandary gameplay at separate consoles in a school computer laboratory during a 50-minute class period. From these 10, we selected four seventh graders, including Ruby and DJ, with whom to conduct semistructured interviews, based on how well their gameplay illustrated patterns among the focal students (Bogdan & Biklen, 2007). We excerpted video clips of these four seventh graders’ gameplay, which they then viewed and interpreted in individual, 30-minute, video-stimulated recall interviews (Lyle, 2003). During their interviews, Ruby and DJ explained their decision-making in different stages of “Water War,” providing voice-over commentary as they watched their gameplay video clips with an interviewer. Similarly, these focal students clarified whether, how, and why the game’s feedback had influenced their interpretations of, and responses to, the space colony’s water-contamination problem, as well as their composition and revision of their advocacy letters to the state governor. In addition, for this article, we compared Ruby’s and DJ’s gameplay and interviews with their subsequent advocacy-letter drafts to describe how their written argumentation responded to different kinds of pedagogical feedback provided by Quandary.
Data Analysis
To investigate how Quandary’s different kinds of feedback influenced focal students’ development of arguments, we first analyzed each stage of the “Water War” game scenario. Gee (2015) has suggested that video-game players engage in a responsive, turn-by-turn “conversation” with the gameworld as they attempt to accomplish their purposes (p. 26). Analysis of these conversational turns begins with attention to the “information resources needed to conduct a task or solve a problem in the game” (Wouters et al., 2011, p. 330). We noted how textual features like the stage title and the directions for participation, as well as visual elements (e.g., boxes, buttons, and moveable objects), framed the kinds of choices students might make at that stage, as well as whether and how those choices aligned with the learning goals and social awareness the game sought to foster (Gerber & Gaitan, 2017). Next, we played through each stage of “Water War” multiple times, exploring the choices available to players in each stage and assigning “process codes” to the effects produced in the gameworld by each possible decision—for example, earning points, provoking reactions from characters, gaining access to the next stage, and altering the events of the storyline (Saldaña, 2016, pp. 96–100). Then, we categorized these changes as evaluative feedback or nonevaluative feedback, using “theoretical codes” (Saldaña, 2016, pp. 223–228).
In a second phase of data analysis, we compared our coding of “Water War” to screen-capture videos of the 10 focal students’ gameplay. In this phase of process coding (Saldaña, 2016, pp. 96–100), we attended to students’ “manipulation of the information flow” (Wouters et al., 2011, p. 330), or what Gee (2007, 2015) and others have called “game mechanics”: how players use the spaces and objects of the game to take action that has meaning in the gameworld. First, we noted whether and how students responded to the apparent tasks, goals, and resources of each game stage (Garris et al., 2002). Next, we analyzed their choices, attending to the order and amount of time spent during their decision-making processes. Comparisons of these “magnitude codes” (Saldaña, 2016, pp. 72–75) for the 10 focal students enabled us to select video clips to show four of them, including Ruby and DJ, in video-stimulated recall interviews (Lyle, 2003), because their gameplay either epitomized or diverged interestingly from that of the other focal students.
Finally, in a third phase of data analysis, we focused on the seventh graders’ written arguments. First, we compared the solutions that Ruby and DJ had tested while playing “Water War” to those that they had recommended in their subsequent advocacy-letter drafts, determining whether and how their stances had changed from gameplay to letter writing. Then we consulted their interviews to ascertain why. Next, we examined what facts Ruby and DJ (as well as their 112 peers) had mobilized in their advocacy letters, and whether these facts were used to support solutions to the local industrial accident or “other opinions,” for example, regarding the anxiety-producing scope of the environmental problem. We compared Ruby’s and DJ’s evidence–solution combinations in their letter drafts to those that they had explored in Stage 3 of “Water War.” Finally, we considered how the kinds of feedback provided at each game stage (established during the first phase of our coding process) might have influenced the discursive “moves” that the seventh graders made in their letter drafts to accomplish their rhetorical goals (VanDerHeide, 2017, p. 323). Throughout all three phases of qualitative coding, we wrote memos to generate descriptive codes and thematic patterns.
Findings
Evaluative Feedback: Game Stage 1
Game feedback
Entitled “Get Your Facts Right,” Stage 1 of “Water War” invites players to classify the 12 space colonists’ comments on their contaminated public well as “Fact,” “Solution,” or “Other Opinion.” To advance to the next game level, players must drag character cards into one of three bins, successfully distinguishing among the colonists’ statements. Quandary recognizes four statements per category as correct, and each colonist’s comment may be classified as only one of the three categories. For example, these are the four possible solutions:
Eminent domain: “The colony should pay for everyone to have clean water from the private well [of Granik the construction chief].”
Community appropriation of a private well, without compensating the owner: “We should take over Granik’s well so everyone can use it.”
New public well construction: “We should really just dig a new [community] well.”
Inaction: “The best solution for this situation is to leave it alone.”
Players earn points for correct sorting and must “get right” at least two facts and two solutions to progress to the next game level. Thus, when rewarding or penalizing predetermined correct or incorrect answers in Stage 1, Quandary’s feedback on players’ participation is evaluative.
Ten focal students
Screen-capture video of the 10 focal seventh graders’ gameplay revealed that they all struggled with Stage 1 of “Water War.” Indeed, none correctly sorted all of the cards. Moreover, five of these focal students attempted to circumvent the task by simply dragging cards quickly into the three bins, then clicking the “Finish” button to see whether they could advance to the next game level. This task-avoidant behavior resembled instances documented in prior research on oral classroom interactions, in which students responded to a teacher’s evaluative, or known-answer, questions and prompts by resisting the activity or refusing to participate (e.g., Sherry, 2018). Sensing that their involvement will likely elicit disapproval, criticism, or correction—because only one predetermined response will “get it right”—many students choose not to risk investment in these traditional classroom discussions.
Ruby
Of the 10 focal seventh graders, Ruby was one of the five who attempted to “game” Stage 1 by randomly dragging cards into the “Fact,” “Solution,” and “Other Opinion” bins. Unsuccessful in this endeavor, Ruby was forced by Quandary’s game design to redo the sorting task. In response to this evaluative feedback, Ruby clicked on the audio-play button and listened to eight colonists orally deliver their written statements, before sorting them into the bins. Four of these comments were recognized as “Fact” by Quandary, one as “Solution,” and three as “Other Opinion.” However, Ruby ultimately misclassified the metalworker’s “Fact” (“If the colony pays for water, it will cut into the salaries of our experts and craftsmen”) as “Solution,” and the engineer’s “Solution” (“We should really just dig a new well”) as “Other Opinion.” (As we will discuss below, DJ similarly misidentified the engineer’s “Solution” as “Other Opinion,” which meant that neither Ruby nor DJ could test this course of action in subsequent game levels.)
Reflecting on her experiences with Quandary in her video-stimulated recall interview, Ruby explained that during Stage 1, she “didn’t know if I could tell the difference between opinion and fact. Because some of them were kind of this and kind of that, so I was confused.” Indeed, all four of the colonists’ comments accepted as “Fact” by Quandary offer objective evidence, yet also discredit one of the four possible “Solutions.” For example, the historian’s “Fact” problematizes the new well construction “Solution” (the engineer’s comment): “Because of Braxos’ subterranean layout [fact], two wells might connect to same water source [opposed solution].” (In Stage 1, both Ruby and DJ mistook this “Fact” for an “Other Opinion.”)
Perhaps because of her “confus[ion],” Ruby sought other means to complete Stage 1—namely, Quandary’s animated video feature:

Maybe if I listened to them [the space colonists] doing it [orally delivering their perspectives on the contaminated well] . . . So I actually went back and looked at how they were saying it, so that maybe then I could know better.
Unable to sort the colonists’ comments based on their written statements alone, Ruby attended to each colonist’s tone of voice and body language to distinguish fact from opinion. Of Stage 1’s eight possible opinions, the four “Solutions” are delivered with emphatic certainty and assertive hand gestures, like folding arms and pointing index fingers. In contrast, the four “Other Opinions” are voiced with anxious uncertainty, accompanied by shrugging shoulders, head scratching, and sad or scared eyes. (In delivering his “Solution,” the engineer runs his fingers through his hair, an anxious or uncertain gesture that perhaps prompted Ruby to misclassify his comment as “Other Opinion.”) While the four available “Facts” include some of the other two categories’ gestures, these animated comments generally express curiosity through an enthusiastic or questioning intonation, raised eyebrows and shoulders, and chin tapping.
When subsequently drafting her advocacy letter to the state governor, Ruby similarly struggled to distinguish facts from opinions in the four local news articles that she cited. The florid prose of the journalists (e.g., “the hundreds of millions of gallons of contaminated water that poured into the earth when the accident occurred”) was often indistinguishable from the inflammatory remarks of quoted local residents, state government officials, and environmental activists (e.g., “These phosphate companies are playing roulette with our public waters”). Both interested and disinterested parties shared the same passionate tone of voice, which is perhaps why Ruby cited unproven statements as facts in her written description of the state’s environmental problem (e.g., “The Kaleidoscope accident is pouring waste into our aquifer and destroying it”). Likewise, 58 (51%) of the 114 seventh graders, including Ruby and DJ, in their advocacy-letter drafts postgameplay misrepresented opinions drawn from the four news articles as facts. In particular, the seventh graders tended to integrate as facts the uncorroborated opinions of local residents quoted by journalists, perhaps conditioned to trust community perspectives through their experiences with Quandary space colonists who each voiced an earnest, expert viewpoint.
DJ
Like Ruby, DJ also had difficulty in sorting the colonists’ comments into the “Fact,” “Solution,” and “Other Opinion” bins in Stage 1 of “Water War.” During his interview, DJ explained:

I had to figure out what they were saying exactly so that I could put them in the right spot. Because some of them were unclear, and I had to find words that helped me to understand exactly what they were saying . . . Like key words.
Whereas Ruby had relied on the colonists’ tone of voice and body language to distinguish among their comments, DJ attended to “key words” that indicated varying degrees of certainty. For example, three of the four “Solutions” accepted by Quandary employ the insistent modal verb “should.” (The fourth begins, “The best solution for this situation is . . . ,” announcing explicitly that this comment is a “Solution.”) In contrast, most of the “Other Opinions” express “hope” or “worry” about an unknown future. The remaining “Other Opinion” emphasizes thinking rather than feeling, though tentatively: “I don’t think it’s right . . . ,” instead of, for example, “It’s not right.” Such lexical clues enabled DJ to complete Stage 1’s classification task.
Moreover, these (un)certainty indicators seem to have provided DJ with discursive stems for communicating his own recommended solution and other opinions when he subsequently drafted his advocacy letter to the state governor. In advocating “Solutions,” DJ used “should” three times to promote environmental cleanup (warranted real-world action) and new well construction (an inappropriate response, given that any future local wells would draw from the same polluted water source). In expressing “Other Opinions,” DJ wrote of his “concern” about the water crisis, much as the space colonists had expressed their “worry” about the contaminated public well. Similarly, DJ twice employed the hedging phrase “I don’t think” to indicate who is and is not responsible for the environmental damage, marking his assertions as well-reasoned opinion rather than established fact. Ninety-three (82%) of the 114 seventh graders, including Ruby and DJ, similarly integrated these “Solution” and “Other Opinion” discursive stems into their advocacy-letter drafts postgameplay.
In Stage 1 of “Water War,” Quandary provided evaluative feedback on the seventh graders’ participation by penalizing or rewarding them for incorrectly or correctly classifying facts, solutions, and other opinions. However, the game did not teach the students evaluative criteria for making such distinctions. Accordingly, while playing Quandary, Ruby and DJ attended to superficial discursive features rather than complex meaning: tone of voice and body language, and “key words” indicating degrees of certainty and uncertainty, respectively. Although these improvised strategies enabled Ruby and DJ to complete Stage 1, they were less effective when applied to the comments of local residents, government officials, and environmental activists, quoted in four news articles about a real-world environmental crisis. Roughly half of the seventh graders, including Ruby and DJ, used opinions (typically anxiety-producing, though as yet unfounded, speculations of imminent doom) cited from the news articles as facts in their advocacy letters to support their claims that the industrial accident was an urgent, worrisome problem. However, these students did not present evidence (either facts or opinions) to justify a particular solution, as we will discuss in the next section. Nevertheless, both Ruby and DJ maintained that playing Quandary had improved their ability to distinguish facts from opinions. It seems that the binary quality of the evaluative feedback provided in Stage 1 of “Water War” had given the seventh graders a false sense of competence, which was later belied in their advocacy letters by their misuse of community opinions as facts about local water pollution.
Evaluative Feedback: Game Stage 3
Game feedback
Having correctly identified at least two of the four possible solutions in Stage 1, players must choose two courses of action in Stage 2, “Narrow It Down.” Neither choice earns points, so Stage 2 offers players nonevaluative feedback on their participation. In Stage 3, “Investigate Viewpoints,” players can present their two chosen solutions to eight colonists selected by Quandary, exploring each person’s perspective on those possible courses of action and earning points for doing so. Moreover, players can associate a Stage 1 fact with each solution. If the fact functions as a counterargument to the colonist’s stance on the solution, that person rebuts the objection, and players receive points. No points are awarded for facts that corroborate (rather than contest) the solution. Thus, the evaluative feedback provided in Stage 3 does not reward the use of facts as evidentiary support for a recommended course of action—a missed opportunity to teach values, norms, and practices of academic reasoning.
Ten focal students
Screen-capture video revealed that the 10 focal seventh graders spent most of their time on Stage 3 (roughly half of their total gameplay). Moreover, students seemed to devote equal time to exploring each of their two chosen solutions. In contrast with Stage 1, in which students attempted to sidestep the task, during Stage 3, focal students lingered and did even more argumentative work than was required by the game, which allows players to skip Stage 3 if they so choose. Thus, for the seventh graders, Stage 3 elicited the most engagement, perhaps because it was the most interactive in design: While only some fact–solution pairings earned points, all combinations allowed players to provoke reactions from colonists regarding their developing arguments.
Ruby
For focal student Ruby, Stage 3 “wasn’t that confusing,” unlike Stage 1, which she had been obliged to repeat before advancing to Stage 2. In Stage 3, Ruby tested two solutions: eminent domain and inaction. However, because Ruby had not correctly identified new well construction as a “Solution” in Stage 1 (the engineer’s comment), she could not ascertain the colonists’ responses to that course of action in Stage 3. Similarly, because of misclassification in Stage 1, Ruby was unable to use the historian’s “Fact,” regarding the common water source for all of Planet Braxos’s wells, to contest her two chosen solutions in Stage 3.
Accordingly, in drafting her advocacy letter to the state governor after playing Quandary, Ruby recommended new well construction, as did 69 (61%) of her peers, including DJ, despite knowing through a previous seventh-grade science field trip to a water-treatment plant that all local wells would draw on the same potentially tainted aquifer. Moreover, in keeping with Stage 3’s model of argumentation, Ruby did not cite evidence—either facts or opinions—from the news articles or other sources, including her own experience, to support her favored solution. Granted, in her real-world context, new well construction was an irrational proposal from the outset, yet Ruby did not attempt to derive her preferred solution from an investigation of available facts. Nevertheless, though she did not detect this error in her reasoning, Ruby did employ in her letter draft Stage 3’s strategy of using evidence to challenge the validity of a possible solution. However, because Quandary does not provide evaluative criteria for distinguishing facts from opinions, Ruby treated the panicked opinions of local residents quoted in the news articles as evidence for her opposition to the inaction solution, which she had explored in Stage 3 of “Water War”: “We know you [the state governor] say that the water is tested and comes out negative [for toxicity], but people won’t take the risk to drink the water.”
DJ
Given his Stage 1 performance, focal student DJ, like Ruby, was not permitted by the game to test the engineer’s solution of new well construction in Stage 3. Instead, he explored the solutions of public appropriation of a private well either with or without compensation to the owner: “With [each] solution, I matched up a fact, which would change their [colonists’] opinion or strengthen their opinion to what they wanted to do.” In this way, Stage 3 enabled DJ to practice using facts to contest possible solutions, to consider virtual audiences’ attempts to rebut his objections, and generally to investigate the merits and limitations of his two chosen solutions.
However, despite this nuanced inquiry into civic argumentation while playing Stage 3 of “Water War,” in drafting his advocacy letter, DJ recommended the new well construction solution, in addition to environmental cleanup, the only solution that logically addressed the real-world environmental problem. Notwithstanding this error in judgment, all the more remarkable given DJ’s earlier participation in the water-treatment-plant field trip, in his letter drafts, DJ (like Ruby) anticipated plausible counterarguments and used evidence to oppose the solutions he had tested in Stage 3. However, like Ruby’s, the evidence that DJ cited from the news articles constituted opinions, not facts:

I don’t think it would be fair if people had to pay for clean water [rebuttal of eminent domain solution], because they are not at fault [community opinion as evidence for rebuttal] . . . People with clean water shouldn’t be required to share their water [rebuttal of uncompensated appropriation solution], but I believe, with everything going on, it would be very much appreciated [community opinion as evidence for rebuttal].
Moreover, similar to Ruby, DJ cited neither facts nor opinions drawn from the news articles or other sources as evidentiary support for the two solutions that he proposed to the state governor, even though there were ample factual grounds for environmental cleanup.
Because Quandary rewarded Ruby and DJ with points for using facts to contest possible solutions in Stage 3 of “Water War,” this evaluative feedback seemed to encourage them to try this argumentation strategy when they subsequently drafted their advocacy letters. However, Quandary did not teach them in Stage 3—or any other game level—evaluative criteria for determining the persuasiveness of an argument: What makes an effective solution to an environmental problem? Accordingly, both Ruby and DJ recommended the inadequate solution of new well construction to the state governor. Moreover, the question of which approach(es) to take to the complex project of environmental cleanup—the logical frame for proposing solutions to the local industrial accident—was not addressed by any of the 114 seventh graders. When asked what he had learned, if anything, by playing Stage 3 of “Water War,” DJ replied, “That every opinion matters, I guess?”
Nonevaluative Feedback: Game Conclusion
Game feedback
Of the two solutions explored in Stage 3, players must choose one in Stage 4 to recommend to the Colonial Council back on Earth. Quandary awards no points for this decision, providing nonevaluative feedback to players. However, in Stages 5 and 6, players attempt to identify up to two proponents and opponents of their chosen solution, earning points for correct answers. In Stage 7, players then expand this sorting to address the larger group of colonists from Stage 3. Although it is not a game stage, the comic-style conclusion of “Water War” also provides nonevaluative feedback: It presents the popular and unpopular outcomes of the implemented solution—the consequences of players’ argumentation—without favoring one solution above the others.
Ten focal students
Although this conclusion requires players simply to click a button to terminate the game, screen-capture video showed that the 10 focal seventh graders did not race through this ending. Instead, they read or viewed and often reread or reviewed the outcomes of their recommended solution. Some students even replayed the game to test out a different solution. Thus, the conclusion, like Stage 3, seemed to interest the seventh graders more than other game levels, perhaps because its nonevaluative feedback allowed them to receive diverse audience responses to their argumentation.
Ruby
In the final stages of “Water War,” focal student Ruby chose to recommend the eminent domain solution. While engaging with the game’s conclusion, she not only read but viewed and listened to each colonist’s reaction, even when that response was a two-word “Thank you.” Regarding the mix of positive and negative outcomes of her implemented solution, Ruby explained during her interview, “It kind of came out the way I wanted it to.” She also commented that hearing the colonists’ final reactions “helped a lot.” This specific, though nonevaluative, feedback seemed to deepen her appreciation of her game argument’s impact on the community.
However, in subsequently drafting her advocacy letter, Ruby did not recommend the eminent domain solution but instead proposed that Kaleidoscope should pay to build new wells for local residents affected by the industrial accident, even though she also acknowledged in her letter that the aquifer on which such wells would draw might be polluted by the chemical spill. Perhaps the nonevaluative feedback of the game’s conclusion inadvertently encouraged Ruby’s illogical letter argument. In “Water War,” all of the four possible solutions are interpreted by colonists as being equally good and bad, which undermines the importance and urgency of players’ deliberation efforts. Moreover, the colony’s core problem (water contamination by a native parasite) is eventually solved outside the bounds of the game by the resident biologist: The last words of the “Water War” conclusion are “And within a month, the colony well is clean again.” Put differently, “Water War” players explore, weigh, and decide on a solution that is ultimately only a stopgap measure. However, in the real world, the seventh graders needed to address the complex and consequential problem of local environmental pollution, for which there would likely be no “quick fixes.”
DJ
Similarly, DJ selected the eminent domain solution to test in Quandary’s final stages. In his interview, he noted that “not everyone [among the colonists] was happy about paying [the private well owner], but they were happy about getting clean water. So I guess that’s what mattered.” Moreover, DJ noted, “You need to look at all viewpoints before jumping to a conclusion.” Thus, it seems that the game’s nonevaluative feedback, which had enabled DJ to consider multiple viewpoints, without driving him toward one right answer, had enriched his understanding of effective argumentation.
However, like Ruby, DJ also abandoned this solution to argue for new well construction in his subsequent advocacy letter to the state governor, despite acknowledging, “This waste could affect the aquifer, which means all of [the state].”
Even if Ruby and DJ had been able to test the new well construction solution in the game, they would have discovered that it, too, produced equally positive and negative results: Initially, the new well water is contaminated, but then it is purified by the biologist.
By presenting all solutions as equal and providing no evaluative criteria for determining the persuasiveness of an argument, Quandary’s nonevaluative feedback may have contributed to the decision by 70 (61%) of the 114 seventh graders to advocate for an inadequate solution to the real-world water crisis. Indeed, only one seventh grader acknowledged that none of the four “Water War” solutions would address the local industrial accident.
At the end of his letter, DJ wrote, “All I am asking is for something to be done.” After playing Quandary, the seventh graders demonstrated in their advocacy-letter drafts that they could cite evidence from four news articles to argue that the Kaleidoscope chemical spill was an urgent public problem, though they tended to reference community opinions rather than facts—perhaps because of the design and the feedback of Quandary’s Stage 1. The seventh graders also showed that they could recommend solutions, though their proposals tended to be ill-advised because they failed to address the core problem of environmental damage. Moreover, the seventh graders used community opinions (not facts) drawn from the news sources to contest alternative solutions to the problem—perhaps because of the design and feedback of Stage 3. However, the seventh graders generally did not ground their favored solutions in factual support. All of these argumentation moves were consistent with, and perhaps encouraged by, the tasks and feedback provided by Quandary’s “Water War” game scenario, yet they did not fulfill seventh-grade curricular goals regarding argument writing.
Discussion
When an edugame penalizes or rewards players for making predetermined incorrect or correct moves, it resembles traditional oral classroom discourse between teachers and students in which teachers attempt to elicit particular “right” answers from students, called “IRE/F” for “teacher Initiation, student Response, teacher Evaluation/Follow-up” (Mehan, 1979; Wells, 1999). Moreover, “single correct answer” game tasks may be especially likely to cue IRE/F-style responses when young people are assigned to play edugames in the context of school lessons. Our research demonstrates that when students encounter such constraining evaluative feedback in an edugame, which limits their freedom to explore and play, student players may try to circumvent the task. Previous studies of whole-class discussions have similarly shown that students may resist or attempt to subvert such IRE/F exchanges in an effort to avoid embarrassment or to reposition themselves as knowledgeable contributors (e.g., Mehan, 1979; Sherry, 2018). Likewise, prior research on written teacher feedback on students’ compositions has noted that teacher comments aimed at “fixing” students’ writing are often not appreciated by authors (Straub, 1997). Our study extends these earlier inquiries by documenting a similar phenomenon in edugameplay.
Moreover, our findings indicate that when students receive evaluative feedback from an edugame regarding their failure to complete a mandatory, yet confusing, task, they may employ improvised, compensatory strategies in their efforts to navigate the game, which may be less effective when later applied in real-world rhetorical situations. Prior research on oral classroom discourse has likewise demonstrated that in face-to-face IRE/F exchanges, students may rely on the teacher’s tone of voice or the order of proffered response options to determine which answer is “correct” (Nystrand et al., 1997). Similarly, previous studies of teachers’ written comments on students’ compositions have revealed that when teachers give vague feedback on students’ writing, yet assert clear consequences for noncompliance, students may interpret and apply teachers’ comments in unexpected ways (e.g., Sommers, 1982). These earlier inquiries have described how evaluative feedback can transform pedagogical interactions into games. Similarly, the National Council of Teachers of English (2013) has warned that “computer scoring systems can be ‘gamed’ . . . [if students] know and can use machine-tricking strategies” (p. 2). Our study adds to this literature by exposing how edugames may also be “gamed” if players are cued by evaluative feedback.
In addition, our research illustrates how evaluative feedback from an instructional video game, which unequivocally awards points to predetermined “correct” player moves—even when ambiguous task design makes those answers problematic—can foster a false sense of competence at targeted tasks. The seventh graders who participated in our study tended to have difficulty in distinguishing facts from opinions in Stage 1 of “Water War” and beyond. Accordingly, focal students Ruby and DJ relied on superficial stylistic features to make these determinations, which Quandary rewarded. After playing the game, most of the seventh graders, including Ruby and DJ, struggled to separate community opinions quoted in the four news articles from scientifically established facts, favoring strongly expressed emotion as evidence in their developing arguments. Similarly, a recent Pew Research Center report has indicated that 64% of U.S. adults reported being confused by “fake news,” yet 84% nevertheless claimed to feel confident in their ability to identify factual reporting (Bialik & Matsa, 2017, p. 6). Accordingly, English educators have begun to offer strategies for teaching critical media literacy in a posttruth era. However, their efforts have primarily focused on information literacy (e.g., establishing source credibility), rather than on argument literacies (e.g., analyzing the design and effects of arguments in particular communities and cultures of argument, such as disciplinary, ethnic, or gendered) (Goering & Thomas, 2018; Hicks & Turner, 2017).
Finally, our study also exposes limitations of nonevaluative pedagogical feedback. Many ELA teachers not only assume but also teach students that all perspectives on a given issue are equally valid, regardless of their logic and evidentiary support (Wilkinson et al., 2017). However, this epistemological stance may not sufficiently address curricular goals related to students’ critical thinking about arguments they read, hear, or view or their rigorous composition of their own arguments. Indeed, as illustrated by our study, nonevaluative feedback may seem like no feedback at all. Across the stages of the “Water War” game scenario and the game conclusion, Quandary treats all four possible solutions as having similarly weighted benefits and drawbacks. Moreover, after players’ careful deliberation and decision-making, the game conclusion reveals that their recommended solution is ultimately ineffectual because the water-contamination problem is solved offstage by another community member. After playing Quandary, most of the seventh graders who participated in our study wrote an advocacy letter to the state governor, arguing in favor of a losing strategy for addressing a real-world environmental problem, as though all that mattered was action, rather than a well-reasoned course of action: in DJ’s words, “All I am asking is for something to be done.” Why should students come to value logical argumentation if nothing depends on it, even in the edugameworld?
Previous research on oral discourse in ELA classrooms has valorized teachers’ nonevaluative feedback, claiming that it models student-centered, inquiry-driven dialogue and is thus an effective scaffold for written argumentation (Reznitskaya & Wilkinson, 2017). We caution that such latitude for exploration and experimentation may not support students’ achievement of argument-literacy goals, in the absence of rigorous study and practice of (inter)disciplinary and democratic criteria for evaluating argumentation (e.g., evidence sources, evidence–claim connections, and contextual appropriateness). In this way, we draw attention to the forgotten fourth “R” in American public education’s emphasis on “reading,” “riting,” and “rithmetic”: rhetoric.
Implications
To bolster young writers’ sense of agency, ELA educators have praised writing assignments and other activities that enable students to receive responses from authentic audiences (beyond teachers, classmates, and standardized-test scorers), as such experiences emphasize that a fundamental purpose of writing is meaning-making (Christensen, 2000; Lewison & Heffernan, 2008). Our research confirms this trend in that the seventh graders spent the most time on the interactive levels of “Water War.” Unfortunately, none of these middle schoolers received a response from the state governor to their advocacy letters. Thus, a real-world community-literacy project became just another “school game,” a pointless exercise. In fact, the teacher’s thoughtfully designed assignment came to resemble a common timed-writing prompt on the state standardized test, which invites test-takers to make a case for a given course of action in a letter to their state governor. As other English-education researchers have documented, when students risk making arguments for social action in their own communities, lack of feedback from those local audiences can be demoralizing (Kamler, 2001; Lensmire, 2000). We agree that one of the most important goals of feedback on secondary students’ written arguments—be it evaluative or nonevaluative, in or out of school, or from human or nonhuman agents—is that it assures young people that their voices matter and that they can be shaped to even greater effect.
Although our study highlights shortcomings of evaluative and nonevaluative pedagogical feedback, particularly regarding the edugame Quandary, we do not mean to discourage English educators from pursuing the potentials of research and teaching involving video games and other digital tools that provide opportunities for play-based inquiry, as well as feedback on students’ writing. Despite their initial attempts to “game” Stage 1 of “Water War,” focal students in our study engaged seriously with Quandary’s invitation to develop arguments advocating environmental action, and they later used several of the game’s rhetorical moves in their real-world argument writing. Game-studies researchers have similarly shown how edugames can offer opportunities for playful ecological citizenship (e.g., Gabrys, 2019; Raessens, 2019). English-education researchers have likewise begun to explore how role-plays, simulations, and gaming can prompt students’ argument writing about climate change (Beach et al., 2017). Such activities may help students to take an important first step in writing for environmental action, from ignoring the problem to countenancing the dire realities of humanity’s impact on Earth (Sherry, 2019). However, once students move beyond what environmental activist Dr. Joanna Macy has called “business as usual,” into “The Great Unraveling,” where they face hard ecological truths, students may need help to see argument writing for environmental action as part of “The Great Turning,” a collective commitment to environmental awareness and global sustainability (Macy & Johnstone, 2012). The seventh graders who participated in our research did not complete this developmental arc of environmental literacy during the 1-month unit of study. Thus, we call for further research on how pedagogical feedback, in whatever form, might persuade young people that their evidence-based argument writing about environmental issues is urgent, important social action.
Supplemental Material
Supplemental material, sj-zip-1-jlr-10.1177_1086296X20986598 for How Feedback From an Online Video Game Teaches Argument Writing for Environmental Action by Anne M. Lawrence and Michael B. Sherry in Journal of Literacy Research
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.