Abstract
Educational leaders and teachers are increasingly embracing the concept of student voice and implementing student voice practices (SVPs) in schools and classrooms. This study presents the validation of two SVP scales that together reflect the state of SVPs in a school. One 17-item scale focuses on teachers’ SVPs, and the other 11-item scale addresses school-level SVPs. Exploratory and confirmatory factor analyses using a randomly divided sample of 1,751 students from four schools revealed both scales to be multidimensional, consistent with theory. Each scale also showed strong reliability, convergent validity with student-teacher relationships, and concurrent validity with academic engagement. Measurement invariance testing demonstrated that the scales are invariant by gender and racial and ethnic background. The SVP scales can advance research on student voice in schools by helping to clarify the enabling conditions and the effects of practices that give students a say in improving their educational experiences.
Keywords
Introduction
Whether they are filling out feedback forms for their teachers, participating in their principal’s advisory council, collaborating with teachers and administrators on decision-making committees, or organizing to demand change to their school’s policies or practices, students are increasingly using their voices to drive improvements in their classrooms and schools. Indeed, the concept of student voice has gained greater visibility and uptake over the last two decades (Benner et al., 2019; Hendrie, 2023; Mockler & Groundwater-Smith, 2014). The term student voice refers to “a heterogeneous array of programs and practices that incorporate an active student role in the identification of needs in their schools, decisions about improvement strategies and priorities, and decisions about strategy implementation and evaluation” (Giraldo-Garcia et al., 2021, p. 52). As Guzman-Valenzuela (2024) showed, research on student voice has expanded rapidly in recent years, reflecting a surge in student voice practice.
Despite advances in student voice research (Brasof & Levitan, 2022; Cook-Sather, 2018), theory (Cook-Sather & Mathews, 2023; Holquist et al., 2023; Pearce & Wood, 2019), and implementation (Giraldo-García et al., 2021; Mitra, 2021), and despite growing interest in the concept from policymakers and funders, the field lacks a validated measure for assessing the extent and quality of student voice practices (SVPs) in a school. Such a measure is needed not only so that schools can assess and track this facet of their culture over time but also so that the field can gain a deeper understanding of its importance and impact. This study aimed to contribute to the knowledge base on student voice by developing and validating a scale to measure SVPs in a school.
Conceptualizing SVPs in School
We start from the premise that at its most basic, student voice (SV) consists of three quintessential dimensions: (a) opportunities for students to participate in sharing their voices, (b) student participation in those opportunities, and (c) the responsiveness of adults to students’ ideas and insights. Although types of SV occur in the absence of (or with weak forms of) any one of these dimensions (e.g., student activism is SV without formal, approved opportunities; tokenistic SV is participation without responsiveness), the state of SVPs in a school is defined by all three.
Opportunities
Our conceptualization of opportunities for students to share their voices is informed by a framework developed by Holquist et al. (2023). This framework identifies several components of SV opportunities that can be interrogated. These include setting, access, and roles.
Setting
Holquist et al. (2023) observe that SVPs in a school can occur at one of two levels: those that operate at the school level and focus on school policies, practices, and culture (Taines, 2014) and those that operate at the classroom level and concern students’ experiences in their classrooms (Bloemert et al., 2020; Howley & O’Sullivan, 2021). To assess the state of SVPs in a school, it is important to know the extent of both school- and classroom-level opportunities. A school with low school-level SVPs would present few opportunities for students to either voice their opinions about school policy and practice or participate in school-level decision making through committee work, advisory boards, or governance bodies. By contrast, a school with high school-level SVPs would have a range of such opportunities. Similarly, a school with low classroom-level SVPs would be one in which few teachers engage students either in providing input and feedback on issues of curriculum, pedagogy, assessment, or classroom culture or in collaborating on decision making in the classroom. Meanwhile, a school with high classroom-level SVPs would be one in which most of the teachers embrace such practices. Schools therefore can be classified as high in both settings, low in both, or high in one and low in the other. Given conceptual differences between classroom and school SVPs, separate scales may need to be created for each setting, but both scales would be needed to understand the overall state of SVPs within a school.
Access
In each setting, SV opportunities may differ with regard to students’ access to them. Research has found that SV and governance opportunities are relatively rare in urban schools (Bertrand et al., 2023) and more common in schools serving affluent, predominately White students (Kahne & Middaugh, 2009; Kirshner, 2015; McFarland & Starmanns, 2009). Even within schools, access can vary in ways that reflect issues of privilege and power. For instance, some researchers observe that SV or school leadership opportunities tend to be limited, either implicitly or explicitly, to high-achieving, highly involved students or students whose demographic profiles reflect White, college-going identities (Czerniawski, 2012; Hipolito-Delgado & Zion, 2017; Keisu & Ahlström, 2020; Kirshner, 2015). As governance spaces come to be dominated by students with a particular identity, it can become even more difficult for students without that identity to break in. For example, in their case study of student voice at East High, Marsh and Nelms (2020) recounted how LGBTQ+ students reported feeling left out of student decision making. McKay’s (2014) case study of two students with disabilities further revealed how bias and favoritism on the part of adults can play a role in who gets invited to meetings with school leaders to share their voice and who does not. The voices of minoritized students, the voices of those for whom school is not working well, and the voices that can be most difficult for adults to hear tend to be neglected or silenced, significantly limiting potential opportunities for school improvement (Mansfield et al., 2018). Measuring perceived access and disaggregating results along demographic lines are key to determining who has been excluded from SV opportunities.
Access also can vary as a function of the purpose or intent of the SV initiative. As Holquist et al. (2023) observed, some school-level SVPs, such as climate surveys, town halls, or a suggestion box, may be designed for the entire student body to participate, but other initiatives may target a specific set of students, such as those who are chronically absent or struggling in their classes (Superville, 2019). The same may be true at the classroom level, because teachers may decide to distribute feedback forms to their entire class, or they may target particular students to participate in more intimate and in-depth SVPs, such as cogenerative dialogues (Beltramo, 2017; Emdin, 2017) or pedagogic partnership programs (Cook-Sather et al., 2019). In a school with strong SVPs, there should be evidence of both types of access: widespread and targeted.
Roles
Whether at the school or classroom level, SVPs also may differ with respect to the roles accorded to students. Holquist et al. (2023) defined roles in the context of an SVP as the amount of power, responsibility, and leadership students hold with regard to decision making relative to adults. Because SV entails destabilizing, if not upending, traditional power hierarchies in schools, power and leadership have long been considered critical aspects of SV (Cook-Sather, 2006; Fielding & Rudduck, 2002; Taylor & Robinson, 2009). To understand the nature of the opportunities for participation that a school affords its students, one must assess students’ roles or leadership relative to adults.
The field of SV is replete with frameworks that differentiate these roles, including ladders, spectra, and pyramids (Conner, 2015). One of the most parsimonious of these frameworks is Mitra’s (2005) “pyramid of student voice,” which situates being heard (or “listening” to students) on the bottom, “collaborating with adults” in the middle, and “building capacity for leadership” at the top (Mitra & Gross, 2009, p. 523). Translated into practice, the bottom level would entail asking students for their input and feedback and listening to their ideas. Most commonly this is done through surveys or feedback forms, but other mechanisms for listening exist, such as suggestion boxes, town halls, open office hours, and student advisory boards.
The practices at the middle level entail working collaboratively with students to make decisions (Mitra & Gross, 2009). This might include students and adults partnering to collect data on school problems and implement solutions (Biddle & Hufnagel, 2019). Often, at the school level, these types of practices take the form of a committee, such as a continuous-improvement team or a school climate committee, on which students, teachers, and administrators serve. In the classroom, collaborative decision making may involve choosing which activity to do as a class or co-creating an assessment rubric.
The third level, which Mitra and Gross (2009) noted is the rarest, entails practices that enable students to take leadership of a SV initiative. These practices may involve a student-led student voice club, student-led professional development for staff, or even student government with substantial decision-making power. In the classroom, student leadership tends to be exceedingly rare because democratic schooling and even personalized learning require collaboration and shared decision making between students and teachers (Bishop et al., 2020; Tse, 2009); however, some teachers may create opportunities for self-directed or collective learning experiences in which students are in charge of key decisions. For example, youth-led participatory action research projects allow a group of students to choose an issue to study, to determine the best ways to conduct the research, and to identify the actions they want to take based on their findings (Arthurs, 2018; Ozer et al., 2013).
In short, SVPs position students in one of three ways: (a) as sources of information, (b) as collaborators, or (c) as leaders. The SVPs therefore can be understood as seeking students’ input, engaging students in collaborative decision making, or supporting student leadership and decision-making authority.
Participation
The second dimension of a school’s SVPs, participation, pertains to whether students engage once presented with the opportunities described earlier. Of course, such participation depends on having access to the opportunity, not only knowing that it exists but also being invited or encouraged to participate. Although access may constrain participation in particular ways, who decides to participate once invited bears on representativeness—that is, how reflective the SVs within these opportunities are of the intended sample (Holquist et al., 2023). A school could have high or low rates of participation. Moreover, a school can have high or low rates of representation. When students participating in a SVP are not representative of the population the SVP is intended to serve (be it the whole school, an entire class, or a targeted subgroup of students), decisions made as a result of the SVP may not meet the needs of the intended beneficiaries (Caetano et al., 2020; Keisu & Ahlström, 2020). Along with its corollary, representativeness, participation is an integral element of SVPs. After all, without student participation, the SVPs ring hollow.
How students choose to participate in SVPs varies at the school and classroom levels (Holquist et al., 2023). At the school level, participation in SVPs often requires students to opt in: to volunteer, to be selected for participation, or both (Demetriou & Wilson, 2010; Taines, 2014). This is because school-level SVPs are often intended to serve the entire student population, and due to school size, not all students may be able to participate in a given SVP. For example, not all students can serve on a school advisory board. At the classroom level, participation in SVPs tends to be more embedded in the teacher’s instructional practice (Bloemert et al., 2020; Howley & O’Sullivan, 2021). Because classroom-level SVPs are meant to serve only the students in a specific class, students typically must opt out of participating. For example, all students may be invited to participate in developing classroom rules, but individual students may decline to offer their input or feedback. In the case of more targeted invitations, such as an opportunity to participate in a cogenerative dialogue session with their teacher during lunch, the desire to please their teacher may make some students feel reluctant to opt out. At the same time, how forthcoming students’ participation is may be shaped by their fear of retribution for sharing candid opinions or perspectives on the quality of the teaching or the effectiveness of the learning environment. Given the nature of classroom-level SVPs, it may be harder for students to opt out of participating because teachers know their students and can directly encourage them to participate. Conversely, at the school level, it is easier for students to decline to opt in because school leaders may not directly know who is and is not participating. Therefore, it may be more important to measure participation at the school level than at the classroom level when striving to understand the state of SVPs within a school.
Prior studies have noted that students may not participate in SVPs when they do not feel safe, or they may self-censor, remain silent, or go along with the ideas of adults if they perceive risks to sharing their own views, such as retaliation from teachers or administrators (Bahou, 2012; Biddle & Hufnagel, 2019; Groundwater-Smith et al., 2014; Hipolito-Delgado, 2023). Additionally, race and class dynamics may further impinge on student participation. Students might not engage in SV opportunities if they have the impression that the school does not value the experiences and perspectives of “students like them.” Research has found, for example, that White and Black principals have different motivations for soliciting the voices of Black and Brown students, with the Black principals being more interested than their White counterparts in deconstructing oppressive conditions and cultivating the critical consciousness of their students (Flores & Ahn, 2024). These leaders may send different messages to students about why they should engage in SV opportunities, with some messages resonating more than others. Research also demonstrates that students are more inclined to trust educators who share their ethnic or racial background (Irizary & Williams, 2013), so in settings where the ethnoracial identities of the teachers and administrators do not reflect those of the majority of students, participation may be further depressed by a lack of trust (Mitra et al., 2024). Without a robust foundation of trusting student–educator relationships, even SV initiatives designed to improve the school experiences of a targeted group of students (e.g., chronically absent or LGBTQIA2S+ students) may fail to attract sufficient participation (Superville, 2019). If a school with robust SVPs has both widespread and targeted access to SV opportunities, then it also should have stable participation rates that do not vary significantly across different demographic indicators.
Responsiveness
A final dimension of the state of SVPs in a school, responsiveness, refers to “the extent to which a student voice practice contributes to change” (Holquist et al., 2023, p. 721). Research by Kahne et al. (2022) highlighted the importance of responsiveness to SV. Students who perceived their schools as more responsive to their concerns attended school more often and earned higher grades than their counterparts in schools perceived as less responsive to student concerns. Responsiveness encompasses a set of practices that entail listening, acting, and closing the loop by informing students of how their contributions led to changes in policy or practice. These practices can be measured at both classroom and school levels.
As with access to opportunities and participation, research has documented how responsiveness can be shaped by racialized power dynamics (Bertrand, 2019; Salisbury et al., 2020). For example, Hipolito-Delgado (2023) illustrated how some White educators show little responsiveness to the voices of students of color who call for change, resorting to “hierarchies of power” (p. 3) to discount, manipulate, or even silence students’ critiques.
Other Aspects of SVPs
There are two pieces of the framework posited by Holquist et al. (2023) that we do not incorporate into our conceptualization: intent and focus. Although these two dimensions may be important for explicating a specific SVP, they are less pertinent to understanding the generalized state of SVPs in a school. Irrespective of whether the focus of the SVPs is the curriculum, pedagogy, school culture, dress code policies, or the cleanliness of the bathrooms, the intent should always be improving students’ experiences as learners and as members of the school community (Biddle, 2019; Giraldo-García et al., 2021; Halliday et al., 2019); however, adult intent is not always transparent to students. Whether or not an initiative has at its core an equity intent may be especially difficult for students to discern. Generalizing intent across a range of school-level initiatives also may prove to be complicated. As such, focus and intent matter when examining a specific SVP but are less relevant to understanding students’ perceptions of the state of SVPs in their school.
A final nuance in our tripartite conceptualization is worth mentioning. We are interested in understanding the state of SVPs in a school, not the state of SV. The critical distinction here is our focus on practices. A school with few school-sanctioned opportunities for SV and few teachers who seek out SV would have few SVPs, but if student activism is high, the school could be abuzz with SV. Our measure was not designed to capture the level of student activism in a school in the absence of SVPs.
Existing Measures
As Lyons et al. (2020) observed: “There is a growing body of student voice research, . . . yet most studies have been qualitative in design. . . . The number of quantitative studies in the field is limited, perhaps in part due to an absence of validated measurement tools” (p. 1). Because of this gap in the field, some recent quantitative studies have examined SV conditions in schools by using one or two items drawn from larger climate scales (e.g., Conner et al., 2022; Kahne et al., 2022) or by using survey items that have not been subjected to psychometric testing beyond reliability scores (e.g., Anderson & Graham, 2016; Moore, 2022). Although these studies have helped identify significant associations between students’ perceptions of SV and various outcomes, including student agency (Moore, 2022), student well-being (Anderson & Graham, 2016), student engagement (Conner et al., 2022), and student attendance and achievement (Kahne et al., 2022), they have not addressed the need in the field for a validated measure of SVPs.
Four measures pertaining to SV have been developed and validated in recent years. First, Zeldin et al. (2014) developed a measure of youth–adult partnerships, a two-dimensional construct consisting of supportive adult relationships (five items) and youth voice in decision making (four items). Although initially developed and validated for use in community-based programs, the scale has since been profitably applied in high school contexts, where it has been linked to both emotional and cognitive engagement (Zeldin et al., 2018). The youth voice in decision making subscale assesses “the degree to which youth perceive that their ideas are heard, respected, and considered” (Zeldin et al., 2018, p. 360). Items include, “I have a say in planning programs at this school” and “I am expected to voice my concerns when I have them.” Although parsimony is a strength of this measure, its concision does not permit understanding of whether SV in decision making is a singular or multidimensional construct.
In part to address this very question, Anderson (2018) developed and validated a student participation scale consisting of 38 items, which she found to load onto six factors: working together (nine items), voice about schooling (nine items), having a say with influential people (five items), having influence (seven items), voice about activities (three items), and having choice (five items). Most of the questions begin with “At my school,” but seven questions ask specifically about the extracurricular activity context, and eight questions inquire about students’ experiences in their classrooms. These eight classroom-focused items load onto three different factors: working together, having influence, and voice about schooling. Validated with a sample of 1,435 Australian students, aged 11–17 years, the full measure has not yet been used in the U.S. context, where some items (e.g., “At my school, I get the chance to say what I think in year group or house meetings” and “At my school, I get the chance to say what I think to the deputy”) may require adapting due to unfamiliar terminology and different governance structures. Recently, a reduced number of items from two of Anderson’s subscales, voice about schooling (eight items) and voice having influence (four items), were applied in a U.S. context, and the modified subscales were found to have strong reliability (alphas of .887 and .843, respectively) and construct validity (although the exploratory factor-analysis results were not shown; Christensen & Knezek, 2024).
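The reliability coefficients cited above are Cronbach’s alphas, computed as α = (k/(k − 1))(1 − Σσ²ᵢ/σ²ₜₒₜₐₗ), where k is the number of items, σ²ᵢ are the item variances, and σ²ₜₒₜₐₗ is the variance of the summed scale. A minimal sketch of that computation follows; the respondent ratings are invented for illustration and are not data from any of the studies discussed:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented ratings: 6 respondents answering 4 four-point Likert items
scores = np.array([
    [4, 4, 3, 4],
    [3, 3, 3, 2],
    [2, 2, 1, 2],
    [4, 3, 4, 4],
    [1, 2, 2, 1],
    [3, 4, 3, 3],
])
alpha = cronbach_alpha(scores)  # high internal consistency for these toy data
```

Because the toy items rise and fall together across respondents, the resulting alpha is high, illustrating why strongly intercorrelated items yield coefficients like the .887 and .843 reported above.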
Ozer and Schotland (2011) validated a 28-item measure of youth psychological empowerment, including some items that bear on SV in schools, such as “I want to have as much say as possible in making decisions in my school,” “If issues come up that affect students at my school, we do something about it,” and “There is a student council here that gets to decide on some really important things.” Consisting of four factors (i.e., sociopolitical skills, motivation to influence, participatory behavior, and perceived control), this scale was designed to measure young people’s feelings and behaviors as change agents rather than to assess school practices. In fact, each factor includes several items tapping respondents’ beliefs and actions about improving “the city,” thereby extending the sense of empowerment beyond the schoolhouse walls.
Most recently, Lyons et al. (2020) developed and validated a scale of “student leadership capacity building mechanisms” that gauges students’ perceptions of “opportunities in the school to build student leadership” (p. 11). The 18-item scale consists of three dimensions: organizational, interpersonal, and personal. The personal dimension further breaks down into two constructs: inclusive positivity and critical awareness. Sample items in the overall scale include the following: “I know students who are on school committees with other students and teachers” (organizational); “In my school, both teachers and students take time to build relationships with me” (interpersonal); and “My teachers teach me to challenge usual ways of thinking” (personal). This scale represents an important advance in SV research because it helps to illuminate the kinds of leadership opportunities schools are providing to their students, revealing “not simply whether leadership capacity is being developed, but what kind of leadership is being modeled and nurtured and how this is being done” (p. 10). Tested with a diverse but small sample of 280 U.S. high school–aged respondents, this measure focuses on practices related to student leadership development, the top of Mitra’s (2005) pyramid, leaving open the question of whether and how SVPs situated at the bottom and middle levels of the pyramid (e.g., seeking input/feedback and collaborative decision making) may be psychometrically distinct.
This Study
This study aimed to develop and validate two multidimensional scales of SVPs (classroom and school levels) that can be used to measure the state of SVPs in a school. Such scales are needed if researchers are to understand the outcomes associated with different types of SVPs, the enabling conditions that give rise to them, and the ways in which these types of practices relate to one another or interact over time. In addition to disaggregating school-level practices from classroom practices, we sought to improve on existing measures by exploring the dimensionality of SVPs in light of the theoretical framework of Holquist et al. (2023), which posits that SVPs are made up of different kinds of SV opportunities (situated at different levels of Mitra’s [2005] pyramid), students’ participation in these opportunities, and adults’ responsiveness to students’ contributions. These specific factors or aspects of SVPs have not been examined in existing scales.
Approved by an institutional review board (FWA #IRB00008523), this study followed all ethical principles and guidelines for the protection of human subjects in research. Our research team included four senior-level researchers, three research assistants, three graduate students, and two high school–aged youth researchers.
Participants
A total of 1,751 students from four schools within the same school district completed the survey. Fifty-one percent of the students were in middle school, and 49% were in high school. The response rates for the two middle schools were 55% and 58%, whereas the response rates for the two high schools were 10% and 44%. Although the high school response rates fell below the 50% standard generally considered acceptable (Dillman et al., 2014), we decided to retain these respondents because our analysis occurs at the student rather than the school level.1 Furthermore, survey participant characteristics were relatively consistent with the student body demographics of each participating school site; however, students who identified as Latino/a were slightly underrepresented and students who identified as White were slightly overrepresented within the study population.
The district, well respected for its SVPs, chose the schools because they had either established (one middle and one high school) or emerging or nascent SVPs (one middle and one high school) as well as principals who expressed support for SV. Because opportunities for SV were expected to vary across the four schools, we first assessed seven specific school-level SVPs that could be present in each site: students serving on committees with adults; student advisory board(s); open office hours with the principal; a suggestion box for students to share their ideas for school improvement; a survey for students to share their experiences at school; student-led clubs, organizations, or classes focused on improving the school; and student forums or town halls. (Although included on the survey, this block of seven yes/no questions was distinct from the school-level measure we sought to validate. Rather, it served as a check for our school-level opportunities measure, as we discuss later.) Calculating mean scores across this set of questions, we found that opportunities significantly varied between at least two schools (F(3, 1,546) = 33.67; p < .001). Both high schools reported higher levels of school-level SV opportunities (4.0–5.0 opportunities, on average) relative to those of the two middle schools (3.1–3.5 opportunities, on average); however, this result may have been a function of nonresponse bias in the high schools, so no further analyses by school level were conducted.
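The between-school comparison reported above is a one-way ANOVA on students’ counts of available school-level SV opportunities. The procedure can be sketched as follows on simulated data; the sample sizes and spread are invented for illustration (only the school means, 3.1–3.5 vs. 4.0–5.0, echo the text), so the resulting F statistic will not match the reported F(3, 1,546) = 33.67:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated per-student counts of available opportunities (0-7) at four schools;
# middle schools centered lower (3.1, 3.5) than high schools (4.0, 5.0)
schools = [
    rng.normal(3.1, 1.2, 200).clip(0, 7),  # middle school A
    rng.normal(3.5, 1.2, 200).clip(0, 7),  # middle school B
    rng.normal(4.0, 1.2, 200).clip(0, 7),  # high school A
    rng.normal(5.0, 1.2, 200).clip(0, 7),  # high school B
]
# H0: all four school means are equal; a small p-value indicates that
# opportunities differ significantly between at least two schools
f_stat, p_value = stats.f_oneway(*schools)
```

A significant omnibus F, as here, licenses only the conclusion drawn in the text: that at least two schools differ; identifying which pairs differ would require post hoc comparisons.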
The student sample was predominantly Latino/a, socioeconomically diverse, and relatively balanced with regard to gender. About half the participants identified as female (49.1%), 47.2% identified as male, and 3.7% identified as nonbinary. Additionally, 4.9% of students identified as transgender. Most students identified as Hispanic, Latino/a, or Spanish origin (66.5%). Eleven percent identified as White, 8.5% as multiracial, 4.7% as Asian American, 3.5% as Black/African American, 1.8% as Native Hawaiian or Other Pacific Islander, 1.5% as American Indian/Native American, and 2.1% as another race/ethnicity. Because all students in the district receive free and reduced-price lunches, a question regarding family’s financial strain was asked as a proxy for socioeconomic status. A little less than half the students reported experiencing no family financial strain (48.5%), whereas 40.2% reported experiencing some strain, and 11.3% reported experiencing a lot of strain.
Procedures
We used a sequential mixed-methods process in designing our school- and classroom-level SV scales. We first reviewed the existing academic and practitioner research literature and conducted expert interviews and focus groups with students and practitioners from across the United States to develop a theory for what SVPs may look like in a U.S. school. Second, we reviewed existing SV instruments from which potential items could be adapted or borrowed. Third, based on our initial theory and existing instruments, we developed an initial draft of the scales to better understand the SV opportunities, student participation in SVPs, and adult responsiveness to SV across school and classroom settings. These items were original, developed specifically for this study.
Next, we conducted four rounds of cognitive interviews with nine students from across the United States at the same time as we conducted focus groups and interviews involving 63 students and 34 practitioners in the four school sites participating in the study. These focus groups and interviews were designed to learn how respondents conceptualized “student voice,” what their experiences of SVPs were within the school and classroom settings, and how they might interpret the items on our scales (i.e., traditional focus group design paired with cognitive interviews; Desimone et al., 2004). Recent research shows that cognitive interviews are helpful in identifying flaws in items, but they may not be helpful for resolving them (Willis, 2018). To identify and begin resolving issues with items, we paired cognitive interviews with focus groups. This pairing afforded us greater insight into participants’ experiences with SVPs as well as the interpretability and breadth of our items.
Findings from the school-site focus groups and interviews were used to sharpen the conceptual clarity and dimensionality of each construct being measured and to revise specific items. Most notably, we eliminated items pertaining to leadership roles (e.g., student-led school improvement efforts) because these items confused students who had never experienced such opportunities and had trouble imagining them. We also revised many classroom-level SVP items to be more specific and granular because the global questions (e.g., “How many of your teachers involve you in decision making in the classroom?”) did not make sense to students.
The SV survey then was piloted at the four school sites using standardized administration procedures in the spring of 2021. It was administered at all four school sites again in the spring of 2022 after further refinements and streamlining. School partners sent a parent opt-out form to all students’ parents/guardians at least 1 week prior to survey administration. School leaders invited all current students to take the survey. The survey was administered online during class time using a secure data-collection platform and took about 15–20 minutes to complete.
Measures
We theorized that there are important differences in how SVPs are implemented and experienced by students in school and classroom settings. Accordingly, we created two separate scales, one that captured SVPs at the school level and one that captured the extent of SVPs within classroom settings in a school. To verify that these were two distinct constructs, we specified an exploratory factor analysis (EFA) that included all school-level SVP items and all classroom-level SVP items. Eigenvalues confirmed at least two separate factors, with items conceptualized as school-level SVPs loading onto one factor and items conceptualized as classroom SVPs loading onto another, separate factor (see online Supplemental Table S1a, b). To better understand the dimensionality of SVPs in each of these two settings, we chose to run separate EFAs aligned with our proposed conceptualizations of school-level SVPs and classroom-level SVPs in all subsequent analyses.
Student Voice Practice Scales
The initial items for the school- and classroom-level SV scales are provided in Tables 2 and 3, respectively (see online Supplemental Tables S2 and S3 for descriptive statistics). Students were invited to complete the school-level scale if they reported that at least one type of SV opportunity was available at their school, as noted earlier. Based on findings from the interviews and focus groups conducted during the development of the survey, we determined that students could not respond to questions about the state of SVPs in their school if they were unaware of the opportunities available; a precursor question was therefore placed before the school-level scale questions. Most of the sample (92.6%) reported that at least one type of SV opportunity existed within their school; only 130 students reported that no types of SV opportunities existed. All subsequent school-level SVP scale items were rated on a four-point scale ranging from “Strongly disagree” (1) to “Strongly agree” (4). Items that specifically asked about school leaders used the following prompt: “Remember, when we say, ‘school leaders,’ we are referring to your principal, assistant/vice principal(s), and other school administrators.” The scale included 12 items intended to tap the opportunities, participation, and responsiveness constructs.
All students were invited to complete the classroom-level SVP items regardless of how they responded to the question about the types of SV opportunities within their school. We did not condition these responses in part because classroom SVPs were seen by our respondents as embedded in teachers’ instructional practice. For the classroom-level items, students were asked to respond to the prompt, “How many of your teachers do the following?” All classroom-level scale items were rated on a four-point scale with the following response options: “None” (0), “One teacher” (1), “Some, less than half my teachers” (2), and “Most, more than half my teachers” (3). The scale included 17 items intended to assess the constructs of opportunities and responsiveness. We also included one participation item, but it was conditioned on respondents feeling that they had ideas to share with their teachers about how the classroom, the teaching, or the assignments and activities could be improved. This item generated too much missing data to be profitably included in our analyses, so it was excluded from this analysis.
Convergent and Concurrent Validity Scales
This study used previously validated scales to test the convergent and concurrent validity of the proposed school- and classroom-level SVP scales.
Student–Teacher Developmental Relationships
Student–teacher developmental relationships were assessed with a five-item scale informed by Search Institute’s Developmental Relationships Framework (Pekel et al., 2018), which has previously demonstrated strong psychometric properties and has been used to assess student–teacher developmental relationships in other studies (see, e.g., Boat et al., 2024; Scales et al., 2021). Example items include “My teachers show me that I matter to them,” “My teachers challenge me to be my best,” and “My teachers listen to my ideas and take them seriously.” Items were assessed on a four-point scale ranging from “Strongly disagree” (1) to “Strongly agree” (4). The scale is internally consistent (α = .92).
Academic Engagement
Academic engagement was assessed using a previously validated nine-item measure from the Stanford Survey of Adolescent School Experiences (Pope et al., 2015), which taps affective, behavioral, and cognitive engagement (Fredricks et al., 2004). Example items include “How often do you complete your school assignments,” “How often do you have fun in your classes,” “How often do you find value in what you do in your classes,” and “How often do you think your schoolwork helps you to deepen your understanding or improve your skills.” Items were assessed on a four-point scale ranging from “Never” (1) to “Often” (4). The scale is internally consistent (α = .84).
Unweighted Grade Point Average
Schools provided administrative data for all students who participated in the survey, including students’ most recent cumulative grade point average (GPA). GPAs ranged from 0.0 to 4.0 (mean = 2.73; SD = 0.85).
Absent Rate
Schools also provided an absent rate for each student, calculated by dividing the total number of days absent by the total number of possible school days. The absent rate ranged from 0.0 to 0.85 (mean = .11; SD = .12).
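As a minimal illustration of this calculation (the function and variable names are ours, not part of the study’s data pipeline):

```python
def absent_rate(days_absent: int, possible_days: int) -> float:
    """Share of possible school days a student was absent."""
    if possible_days <= 0:
        raise ValueError("possible_days must be positive")
    return days_absent / possible_days

# A student absent 18 of 180 possible school days has an absent rate of 0.10.
```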
Analytic Plan
We first conducted the Kaiser–Meyer–Olkin test and Bartlett’s test of sphericity to confirm significant correlations between proposed items and to examine the proportion of common variance among items. EFA factors were extracted using maximum likelihood with robust standard errors (MLR) estimation. Because we expected factors to be correlated, we conducted EFAs using Geomin rotation, an oblique rotation method and the default in Mplus version 8.7 (Muthen & Muthen, n.d.). Traditionally, EFA and confirmatory factor analysis (CFA) are conducted on two independent samples (Little, 2013). Accordingly, the dataset was randomly split into two independent samples for the EFAs (n = 825 for the school-level scale; n = 889 for the classroom-level scale) and the CFAs (n = 796 for the school-level scale; n = 862 for the classroom-level scale). This approach has been used in previous scale validation research (e.g., Diemer et al., 2017; Wilf & Wray-Lake, 2023). EFA factor solutions were evaluated using the following criteria: Kaiser’s criterion (retaining factors with eigenvalues > 1), the interpretability of obtained factor solutions, the internal consistency of obtained factors, and model-fit indices. The resulting EFA factor solutions then informed the retention and removal of items. Consistent with past research, items with a loading of .40 or above and without significant cross-loadings (i.e., .40 or above) onto other factors were retained (Costello & Osborne, 2005).
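Kaiser’s criterion can be illustrated with a numpy-only sketch that counts the eigenvalues of the item correlation matrix greater than 1 (the study’s analyses were run in Mplus; this helper is ours and is purely illustrative):

```python
import numpy as np

def kaiser_retained(data: np.ndarray) -> int:
    """Count the factors retained under Kaiser's criterion:
    eigenvalues of the item correlation matrix greater than 1.
    `data` holds observations in rows and items in columns."""
    corr = np.corrcoef(data, rowvar=False)          # item correlation matrix
    eigenvalues = np.linalg.eigvalsh(corr)[::-1]    # sorted descending
    return int(np.sum(eigenvalues > 1.0))
```

In practice, as the text notes, this criterion was only one input alongside interpretability, internal consistency, and model fit.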
CFAs on the same set of items were conducted to assess goodness of fit and to confirm the factor structure. Model fit was assessed using a comparative fit index (CFI), standardized root mean square residual (SRMR), and root mean square error of approximation (RMSEA). Common guidelines were used to assess goodness of fit: CFI > .95, SRMR < .06, and RMSEA < .08 (McDonald & Ho, 2002). To leverage all available data, missing data were imputed via expectation-maximization imputation (Dempster et al., 1977).
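The cited goodness-of-fit guidelines can be expressed as a simple boolean check (the helper and its name are ours, not part of the study’s analysis code; the thresholds are those of McDonald & Ho, 2002, as used in the text):

```python
def acceptable_fit(cfi: float, srmr: float, rmsea: float) -> bool:
    """Apply the cited guidelines: CFI > .95, SRMR < .06, RMSEA < .08."""
    return cfi > 0.95 and srmr < 0.06 and rmsea < 0.08
```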
Convergent validity was tested using latent factor correlations between SVP scales and a measure of student–teacher developmental relationships. Concurrent validity was tested by specifying multiple regression models to assess the relationship between the SVP scales and academic engagement and school administrative data including absent rate and GPA while accounting for student grade level, gender, racial-ethnic identity, family financial status, English-language learner status, and school site.
Measurement invariance by gender and race/ethnicity was tested using a series of nested models to determine whether items held similar meaning for participants from different groups. Nested multigroup CFAs were run to examine configural (shown by equivalent factor structure), metric (shown by equivalent factor loadings), and scalar invariance (shown by equivalent item intercepts) across subgroups. Configural invariance is established when the same CFA is valid within each group, as assessed by the overall fit of the model. Subsequently, the configural model with no imposed constraints is compared to the metric model with factor loadings constrained to be equal across groups, and the metric model is then compared with the scalar model with both the factor loadings and the item intercepts constrained to be equal across groups (Putnick & Bornstein, 2016). Because χ2 tests are sensitive to large sample sizes, model invariance was assessed using ΔCFI < .010 and ΔRMSEA < .015 (Cheung & Rensvold, 2002). Little to no change in these metrics indicates no significant decrease in model fit (Little, 2013).
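The ΔCFI/ΔRMSEA decision rule for comparing two adjacent nested models (e.g., metric vs. configural) can be sketched as follows; the function and argument names are ours, while the thresholds are those of Cheung and Rensvold (2002):

```python
def invariance_holds(cfi_constrained: float, cfi_free: float,
                     rmsea_constrained: float, rmsea_free: float) -> bool:
    """Compare a more constrained model (e.g., metric) against the less
    constrained one (e.g., configural): invariance is retained when adding
    constraints worsens fit by less than delta-CFI .010 and delta-RMSEA .015."""
    delta_cfi = cfi_free - cfi_constrained        # drop in CFI from constraints
    delta_rmsea = rmsea_constrained - rmsea_free  # rise in RMSEA from constraints
    return delta_cfi < 0.010 and delta_rmsea < 0.015
```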
Results
EFA
Bartlett’s test of sphericity was significant for both the school-level SVPs scale (χ2(55) = 4,778.44; p < .001) and the classroom-level SVPs scale (χ2(120) = 9,423.69; p < .001). Moreover, the Kaiser–Meyer–Olkin statistic was .87 for the school-level SVPs scale and .95 for the classroom-level SVPs scale, and all items were significantly correlated at or above .30 with at least one other proposed scale item. These findings indicate that the data were suitable for factor analysis. MLR estimation and Geomin oblique rotation were used for both the school- and classroom-level EFA models.
Post Hoc Parallel Analysis
In addition to the EFA models, parallel analyses were used to compare sample eigenvalues with eigenvalues extracted from a series of random datasets generated at the 95th percentile (Horn, 1965). Under this procedure, factors are retained when their sample eigenvalues exceed the corresponding eigenvalues generated from the random datasets (Zwick & Velicer, 1986).
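Horn’s procedure can be sketched with a numpy-only implementation that compares eigenvalues of the item correlation matrix against the 95th percentile of eigenvalues from random normal data of the same dimensions. This is a simplified illustration under our own naming and defaults; the study’s exact implementation may differ:

```python
import numpy as np

def parallel_analysis(data: np.ndarray, n_sims: int = 100,
                      percentile: float = 95.0, seed: int = 0) -> int:
    """Return the number of factors whose sample eigenvalues exceed the
    chosen percentile of eigenvalues from random data of the same shape."""
    rng = np.random.default_rng(seed)
    n_obs, n_items = data.shape
    sample_eigs = np.sort(
        np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    random_eigs = np.empty((n_sims, n_items))
    for i in range(n_sims):
        sim = rng.normal(size=(n_obs, n_items))
        random_eigs[i] = np.sort(
            np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    thresholds = np.percentile(random_eigs, percentile, axis=0)
    # Retain factors until a sample eigenvalue falls at or below its threshold.
    retained = 0
    for sample, threshold in zip(sample_eigs, thresholds):
        if sample > threshold:
            retained += 1
        else:
            break
    return retained
```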
School-Level SVP EFA
Analysis of the eigenvalues of one- through four-factor EFA models indicated that a three-factor solution was the strongest (Table 1). The eigenvalue for the three-factor model was >1, the model fit the data well, and it yielded an interpretable factor structure. Furthermore, parallel analysis also supported a three-factor solution. The eigenvalue associated with the 95th percentile of the random dataset did not exceed the eigenvalue of the sample dataset until the fourth factor. A single item, “I feel I have a say in what happens in my school,” was removed because it did not load onto any of the factors above .40. Item factor loadings ranged from .493 to .897, all above the suggested cutoff of .40. The three factors that emerged were consistent with three components of SVPs at the school level, including opportunities, participation, and responsiveness (Table 2).
Exploratory Factor Analysis for School- and Classroom-Level Student Voice Scales.
RMSEA, root mean square error of approximation; CFI, comparative fit index; SRMR, standardized root mean square residual; SVPs, student voice practices
p < .001.
School-Level Student Voice Scale: Exploratory Factor Analysis (n = 825).
Note. A single item, “I feel I have a say in what happens in my school” was removed because it did not load onto any of the factors above .40.
p < .05.
Classroom-Level SVP EFA
In comparing one- through four-factor EFA models, we identified the three-factor solution as the strongest model. The eigenvalue for the three-factor model was >1, the model fit the data well, and it yielded the most interpretable factor structure (see Table 1). Parallel analysis, however, supported a two-factor solution. The eigenvalue associated with the 95th percentile of the random dataset exceeded the eigenvalue of the sample dataset at the third factor. Because the three factors that emerged in the sample dataset were consistent with theorized components of SVPs within classrooms, including two separate types of opportunities (i.e., seeking input and collaborative decision making) and responsiveness (Table 3), we chose to retain the three-factor solution. Item factor loadings ranged from .454 to .961, all above the suggested cutoff of .40.
Classroom-Level Student Voice Scale: Exploratory Factor Analysis (n = 889).
p < .05.
CFA
The three-factor school- and classroom-level models were cross-validated by conducting CFAs with the random independent sample of students to evaluate how well each item in the scales loaded onto their respective factors. MLR estimation and the fixed-factor method were used to scale the latent constructs in both models. Tables 4 and 5 provide standardized factor loadings, standard errors, and R2 values for the final CFA models.
School-Level Student Voice: Confirmatory Factor Analysis (n = 796).
Notes. All factor loadings were significant at p < .001; standardized coefficients are provided.
Classroom-Level Student Voice: Confirmatory Factor Analysis (n = 862).
Note. All factor loadings were significant at p < .001; standardized coefficients are provided.
School-Level SVP CFA
Model-fit indices indicated that the three-factor solution was a good fit to the data, within the acceptable cutoff ranges: SRMR = .039, RMSEA = .031, and CFI = .977. Factor loadings ranged from .604 to .872, with most loading >.70 (see Table 4). Correlations among the three latent constructs were statistically significant and positive: opportunities correlated with responsiveness (r = .750; p < .001) and participation (r = .526; p < .001), and responsiveness correlated with participation (r = .578; p < .001). The three subscales of the school-level SVPs scale also were internally consistent, demonstrating Cronbach’s alpha estimates of .78 (opportunities), .88 (responsiveness), and .79 (participation). The Cronbach’s alpha for the full school-level SVPs scale was .90.
Classroom-Level SVP CFA
Model-fit indices indicated that the three-factor solution was an adequate fit to the data: SRMR = .045, RMSEA = .066, and CFI = .928. Factor loadings ranged from .598 to .875, with most loading above .70 (see Table 5). Correlations among the three latent constructs were statistically significant and positive: seeking input correlated with collaborative decision-making (r = .767, p < .001) and responsiveness (r = .809, p < .001), and collaborative decision-making correlated with responsiveness (r = .649, p < .001). The three subscales of the classroom-level SVPs scale were also internally consistent, demonstrating Cronbach’s alpha estimates of .91 (seeking input), .88 (collaborative decision-making), and .81 (responsiveness). Cronbach’s alpha for the full classroom-level SVPs scale was .91.
Convergent Validity
Convergent validity was examined to evaluate whether the school- and classroom-level SVP scales were correlated with the conceptually related construct of student–teacher developmental relationships. Latent factor correlations showed that, overall, the school-level SVP scale was positively correlated with stronger student–teacher developmental relationships (r = .424; p < .001), as were its three core dimensions: opportunities (r = .362; p < .001), responsiveness (r = .393; p < .001), and participation (r = .294; p < .001). The overall classroom-level SVP scale (r = .481; p < .001) and its core dimensions—seeking input (r = .482; p < .001), collaborative decision making (r = .345; p < .001), and responsiveness (r = .484; p < .001)—also were all positively correlated with stronger student–teacher relationships.
Concurrent Validity
Concurrent validity was examined to evaluate whether school- and classroom-level SVP scales were associated with academic engagement, absent rate, and GPA while accounting for confounding variables. The overall school-level SVP scale was positively associated with academic engagement (β = .356; SE = .04; p < .001), as were each of its dimensions: opportunities (β = .288; SE = .04; p < .001), responsiveness (β = .279; SE = .04; p < .001), and participation (β = .202; SE = .04; p < .001). The overall school-level SVP scale was significantly associated with lower absent rate (β = −.076; SE = .04; p < .05). Of the school-level SVP dimensions, only participation was significantly associated with a lower absent rate (β = −.077; SE = .03; p < .05). The overall school-level SVP scale and its dimensions were not significantly associated with GPA (see online Supplemental Table S4a–c.)
With one exception, the overall classroom-level SVP scale and each of its dimensions were significantly associated with greater academic engagement and a lower absent rate; for the overall scale, β = .352 (SE = .04; p < .001) for engagement and β = −.079 (SE = .04; p < .05) for absent rate. Seeking input was positively associated with greater academic engagement (β = .335; SE = .03; p < .001) and a lower absent rate (β = −.080; SE = .03; p < .05). Similarly, responsiveness was significantly associated with greater academic engagement (β = .274; SE = .03; p < .001) and a lower absent rate (β = −.073; SE = .03; p < .05). Although collaborative decision making was significantly associated with greater academic engagement (β = .209; SE = .04; p < .001), it was unrelated to absent rate. The overall classroom-level SVP scale and its dimensions of seeking input and responsiveness were not significantly associated with student GPA. Collaborative decision making was negatively associated with student GPA (β = −.086; SE = .04; p < .01; see online Supplemental Table S5a–c).
Measurement Invariance
To examine whether the school- and classroom-level SVP scales were invariant (i.e., whether items were interpreted similarly) across gender and race/ethnicity groups (Latino/a vs. other), we conducted multigroup measurement invariance tests.
Gender
Measurement invariance testing was conducted with two gender groups: cisgender girls and cisgender boys. For both the school- and classroom-level SVP scales, configural, metric, and scalar invariance were all satisfied (Tables 6 and 7). All models provided a good fit to the data, and the changes in CFI and RMSEA did not exceed the evaluation criteria (i.e., ΔCFI < .01 and ΔRMSEA < .015). These results show that the two scales are invariant by gender.
School-Level Student Voice Scale: Goodness-of-Fit Indicators for Measurement Invariance by Subgroup.
Note. Due to small sample size among racial and ethnic groups, students who identified as Latino/a or Hispanic were compared with students in all other racial/ethnic groups.
p < .001.
Classroom-Level Student Voice Scale: Goodness-of-Fit Indicators for Measurement Invariance by Subgroup.
Note. Due to small sample size among racial and ethnic groups, students who identified as Latino/a or Hispanic were compared with students in all other racial/ethnic groups.
p < .001.
Racial and Ethnic Background
Schools in our sample predominantly serve students who identify as Latino/a or Hispanic (66% of sample). Due to the small sample size among other racial and ethnic groups, these groups were combined for invariance testing. For both the school- and classroom-level SVP scales, configural, metric, and scalar invariance were all satisfied (see Tables 6 and 7). All models provided good fits to the data, and the changes in CFI and RMSEA did not exceed the evaluation criteria. These results show that the two scales are invariant among students who identify as Latino/a and students who do not.
Discussion
For many years, SV has been heralded in the literature as an effective way to improve students’ agency, engagement, and learning outcomes (Fielding & Rudduck, 2002; Mitra, 2008; Toshalis & Nakkula, 2012), but these claims have rested primarily on qualitative evidence, because the field has lacked a valid and reliable instrument to measure SVPs in schools. This study addresses this need by validating two separate scales: one that measures school-level SVPs and the other that considers the extent of classroom SVPs in a school. The validation of two separate scales, informed by initial EFAs that showed distinct loadings for school- and classroom-level SVPs, is an important contribution. It demonstrates clearly that classroom-level practices are distinct from school-level practices. These two scales set the stage for future work exploring the different antecedents and effects of these two differently situated sets of SVPs. Taken together, they can also provide insight into the state of SVPs within a school.
In alignment with our theoretical conceptualizations, this study reveals that SVPs at the school level are best understood as a three-dimensional construct consisting of opportunities, participation, and responsiveness. The opportunities construct captures students' access to varying roles across SVPs in their school. The participation construct assesses whether and how students participate across SVPs in their school. Finally, the responsiveness construct measures whether school leaders listen, act, and inform students of changes that result from SVPs in their school. Together these three constructs offer a multidimensional picture of the school-level SVPs within a school.
Interestingly, two of the items intended to tap access, “My school has opportunities to hear from all students about how to improve our school” and “My school seeks out the ideas of the students who are having the hardest time in school about how to improve our school,” loaded on the responsiveness construct rather than the opportunities construct. We believe that these two items loaded on responsiveness because they were written to center the school rather than the student. These items also may be more associated with who is being listened to through the SVP (an indicator of responsiveness) rather than who is invited to speak up (an indicator of opportunities and access).
A three-factor solution also best explained the extent of classroom-level SVPs in a school. Consistent with our theorizing, the three factors reflect three different sets of classroom SVPs: opportunities, seeking input and feedback; opportunities, collaborative decision making; and responsiveness. Although they are conjoined on the school-level scale, seeking input/feedback and collaborative decision making emerged as separate sets of practices on the classroom scale, likely because more items assessing these practices were included on the latter than on the former scale. A great deal of existing research discusses the practices of seeking feedback and collaborative decision making in the classroom (e.g., Davis-Porada, 2023; Kahne et al., 2022; Nelson, 2022; Richter & Tjosvold, 1980), but these practices are not always conceptualized as SVPs, and they are not usually discussed in relation to one another. The high reliability of both the classroom scale as a whole and its separate dimensions means that future research can explore the average as well as the unique and additive effects of these different sets of practices on students’ experiences and outcomes.
Prior research has pointed to a virtuous cycle between SVPs and student–teacher relationships. SVPs depend on strong relationships (Biddle, 2017; Hopkins, 2022; Mitra, 2021), but they also can help to build them (Benner et al., 2019; Lyons et al., 2020; Mitra, 2008). Reflecting the close association between the two phenomena, the measure of youth–adult partnership posited by Zeldin et al. (2014) consists of two subscales, one focused on relationships and the other on youth voice. Similarly, the measure of student leadership capacity building posited by Lyons et al. (2020) includes an interpersonal dimension, with items assessing relationships between teachers and students. In line with prior research suggesting positive links between student–teacher relationships and SVPs, we found significant correlations between the developmental relationship measure and the two SVP scales. Furthermore, each of the three dimensions of both scales correlated significantly with the developmental relationships measure. This robust evidence of convergent validity offers further empirical support for these scales.
This study also advances the literature by demonstrating the concurrent validity of the two scales. A great deal of research has linked opportunities for SV to enhanced student engagement (Anderson, 2018; Conner et al., 2022; Ferguson et al., 2011; Kahne et al., 2022; Mager & Nowak, 2012; Zeldin et al., 2018). Toshalis and Nakkula (2012), for example, argued that “student voice can elevate student motivation and engagement” (p. 23), and Mitra et al. (2012) asserted that “student voice can lead to an increase in student engagement” (p. 104). Our findings offer strong support for these claims. Each scale, and each of its dimensions, is strongly associated with greater academic engagement, even when accounting for student demographics and school site. These results may indicate that schools that work to create an environment in which students can have a say in improving the educational experience for themselves and their peers are more likely to value student engagement and work hard to ensure that students are engaged in learning. Similarly, at the classroom level, educators who seek out and respond to student feedback or who entrust students to make decisions with them in the classroom may be more adept at creating learning environments where students feel interested, challenged, and supported, thereby promoting student engagement.
In addition to engagement, the classroom SVP scale also showed strong concurrent validity with respect to absent rates. The scale and two of its three dimensions were negatively associated with the absent rate: the more of their teachers students reported as seeking their input and responding to their concerns, the better their attendance was. Students may be more inclined to come to school when they feel that their teachers value their feedback and input and take action in response. Classroom SVPs also may be linked to more regular attendance because these practices have helped improve the classroom climate and learning experience for students.
In contrast to the classroom SVP scale, the dimensions of opportunities and responsiveness in the school-level SVP scale were not associated with absent rates. This finding runs counter to prior work finding that school responsiveness to SV predicted better attendance (Kahne et al., 2022). We did, however, find links between a lower absent rate and both the overall school-level SVP scale and the dimension of student participation in school-level SVPs. Although this finding is in line with a wealth of prior qualitative research demonstrating that students who participate in school-level SVPs reap certain academic and developmental benefits (Mager & Nowak, 2012; Mitra, 2008), it also could be that schools are selecting more consistently attending students for SVP roles. Future research could use our measure, focusing particularly on the three items constituting the participation dimension, to examine which students are participating in school-level SVPs. Researchers also could follow the participating students over time to explore how they may be uniquely benefiting from their involvement as compared with otherwise matched respondents who do not participate in school-level SVPs.
Our findings depart slightly from prior literature in that neither scale is significantly associated with student achievement, as indicated by GPA. This finding runs counter to a great deal of SV theory and qualitative research that suggest that SVPs enhance student learning and achievement by improving the learning environment for students and by strengthening students’ metacognition and sense of agency as learners (Beattie & Rich, 2018; Mitra, 2018; Toshalis & Nakkula, 2012). Some scholars have found that certain dimensions of SVPs, such as school-level responsiveness, are associated with higher grades (e.g., Kahne et al., 2022; see also, Mager & Nowak, 2012); however, others have found that participation in school-level SV opportunities is unrelated to improved student achievement (Voight & Velez, 2018). Our findings of no association between either school- or classroom-level SVPs and student GPA (and of a negative association between classroom collaborative decision making and GPA) may have many possible explanations. First, these findings may be a result of our cross-sectional method. It may take time for the greater engagement that SVPs appear to generate to result in increases in GPA. Future research can use our scales to run regression analyses that probe the causal relationship between SVPs and student achievement over time. In addition, longitudinal mediation models with engagement may suggest an indirect impact of SVPs on student achievement. Second, at the classroom level, it may be that teachers are more apt to use classroom SVPs with struggling students than with high-performing students given that high-performing students tend to take advanced courses, such as International Baccalaureate and Advanced Placement courses, which may have tightly constrained curricular and assessment practices that are not amenable to either collaborative decision making or adjustments suggested by student feedback or input. 
Our qualitative research in these schools found that at the classroom level, English learner teachers were particularly committed to SVPs, adding credence to the possibility that in our sample schools, struggling students may have had more exposure to SVPs such as collaborative decision making. Future research can explore whether and how SV opportunities vary for students in different tracks or programs. Additionally, at the school level, we found that administrators were intentional about involving students who were struggling academically in their school-level SVPs, even though these opportunities attracted some high-achieving students as well. These inclusive practices could have shaped the (non-)relationship between school-level SVPs and GPA in our study. Third, it could be the case that quality matters more than quantity when it comes to student learning and achievement. Our scales, and particularly their opportunities dimensions, assess the extent and range of SVPs that exist in a school, but one particular SVP implemented well could be more impactful to a student’s learning than many SVPs implemented poorly. Future research can examine whether curvilinear models are more effective for understanding the relationship between SVPs and student achievement, and qualitative research can help to clarify the importance of high-quality implementation. Finally, it may be that GPA is not the most appropriate indicator of student learning and that other scales or proxies for enhanced understanding or skill development should be used.
Despite the lack of concurrent validity with GPA, the SVP scales did show strong measurement invariance by race and gender. These findings support the use of both scales across a range of groups, but further invariance testing by school level, grade, or age is warranted.
Limitations
This study had several limitations that future research can address. First, we did not have a strong set of items related to student participation in classroom-level SVPs. Although student participation is complicated by power dynamics with the teacher at the classroom level, it should be assessed in future work.

Second, because we conducted the research in one district, in schools that served predominantly Latino/a students, it is imperative that future research test the two scales in schools with different student demographics, including non-U.S. samples. Relatedly, although our sample was selected to include two schools with established SVPs and two without strong SVPs, all four schools were situated in a district that prioritized SV. To further assess the generalizability of the measures, researchers will need to test the scales in districts without strong commitments to SV and in schools with advanced SVPs as well as those with moderate, weak, or nonexistent SVPs. The low response rate of 10% in the high school with the more established SVPs may also have introduced a response bias that further limits the generalizability of our findings, underscoring the need for these measures to be evaluated in more schools.

Third, with the school-level scale, we traded granularity for parsimony when assessing opportunities. Seeking feedback and input and engaging students in collaborative decision making may have emerged as separate practices (or opportunities) at the school level, as they did at the classroom level, had more items been included to capture each set of practices. Relatedly, our decision to eliminate items about leadership roles from the school-level opportunities block of questions (e.g., students lead school improvement efforts; students have power to make decisions about changes to school policy or practice) may limit our understanding of the nature of those opportunities and lower the ceiling for what may exist in a school.
In future administrations of our measure, researchers may include items pertaining to student leadership in school change to see whether they hold together with “seeking input and collaborative decision making” or constitute a separate type of opportunity.

Finally, parallel analysis for the classroom-level SVPs suggests that there may be only two factors present. Because of this finding and several significant cross-loadings across factors, further item generation and refinement may be needed to capture three distinct dimensions of classroom-level SVPs aligned with our theoretical model.
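For readers unfamiliar with the procedure referenced above, Horn’s parallel analysis retains only those factors whose observed eigenvalues exceed the eigenvalues obtained from random data of the same dimensions. A minimal sketch in Python (not the authors’ implementation, and simulated data in place of survey responses):

```python
import numpy as np

def parallel_analysis(data, n_iter=100, percentile=95, seed=0):
    """Horn's parallel analysis: count how many eigenvalues of the
    observed correlation matrix exceed the chosen percentile of
    eigenvalues from random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Eigenvalues of the observed correlation matrix, descending.
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    # Eigenvalues from n_iter random datasets of identical shape.
    rand_eigs = np.empty((n_iter, p))
    for i in range(n_iter):
        rand = rng.standard_normal((n, p))
        rand_eigs[i] = np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[::-1]
    thresholds = np.percentile(rand_eigs, percentile, axis=0)
    return int(np.sum(obs_eig > thresholds))
```

Applied to item-level response data, the returned count is the suggested number of factors to retain, which can then be weighed against theory, as the authors do here.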
Future research testing our measures could include previously established scales, such as the student leadership capacity building scale of Lyons et al. (2020) or select items from the psychological empowerment scale of Ozer and Schotland (2011), to examine convergent validity further. Discriminant validity could be assessed by including scales conceptually unrelated to SVPs, such as students’ extracurricular participation (Fischer et al., 2020). Longitudinal research could assess the predictive validity of the scales by examining their association with expected outcomes, such as youth empowerment or engagement, over time, controlling for other factors known to influence these constructs. Additional sustained qualitative work could help determine the impact validity (Massey & Barreras, 2013) of the scales, or their utility to schools in informing and effecting change.
Conclusion
SVPs have been implemented in schools and classrooms for several decades, with research on these practices stretching back to the 1990s (Corbett & Wilson, 1995; Rudduck, 1999; Rudduck et al., 1996; Soo Hoo, 1993); however, the field is only just beginning to produce and validate scales that can be used to measure these practices (Anderson, 2018; Lyons et al., 2020). Our study builds on this trend by contributing two validated multidimensional SVP scales with strong psychometric properties, one situated at the classroom level and the other at the school level. Although some scholars may question whether SVPs can and should be measured quantitatively because they are so relationally based and contextually dependent, we believe that our measures can be used to help fortify and extend the research base for SV. If our measures can be validated with other samples, many possibilities for future research will open. This research will help advance our understanding of SVPs at the school and classroom levels, hopefully leading to more effective and impactful practice and expanded opportunities for students to have a meaningful say in their education.
Supplemental Material
Supplemental material (sj-docx-1-ero-10.1177_23328584251326893) for “Measuring Student Voice Practices: Development and Validation of School and Classroom Scales” by Jerusha Conner, Samantha E. Holquist, Dana L. Mitra, and Ashley Boat in AERA Open.
Footnotes
Acknowledgements
Chen-Yu Wu, Nikki Wright, Enrique Rosado, Bailey Bonds, and Paul Akapo provided helpful assistance with this research.
Declaration of Conflicting Interests
The author(s) declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
This research was funded by a grant from the Gates Foundation (Grant No. INV-031504). The sponsor played no role in the study design; collection, analysis, and interpretation of data; the writing of the report; or the decision to submit the article for publication. Views expressed here do not necessarily reflect the positions or policies of the foundation.
1.
The high school data are also worth retaining given the paucity of quantitative SV data in the research literature and the low (or unreported) response rates in the few studies that have used a quantitative indicator or measure of SV. Specifically, of the nine quantitative studies that we cite in our literature review, five do not report their response rates (Anderson, 2018; Anderson & Graham, 2016; Christensen & Knezek, 2024; Lyons et al., 2020; Ozer & Schotland, 2011). Among the studies that do report response rates, two conducted at the school level reported rates as low as 21% (Conner et al., 2022) and 36% (Kahne et al., 2022). Another study conducted at the individual level had a response rate of 7% for students and 12% for teachers (Moore, 2022). The study by reported a response rate of 60%, but this rate was calculated based on a group of students specifically recruited for the study, not on an entire school population.