Abstract
Although many in-school activities (e.g., transition services), derived from correlational research, increase the likelihood of positive post-school outcomes, many teachers continue to provide services shown to have little to no effect on outcomes of students with disabilities. The purpose of this study was to operationally define the predictors of post-school success so that educators understand what is necessary to develop, implement, and evaluate secondary transition programs based on predictor research. Results indicated that experts in the field reached consensus on an operational definition and a set of essential program characteristics for each predictor of post-school success to aid practitioners in implementing these practices.
Delivering secondary educational opportunities that keep youth with disabilities in school while preparing them for a productive adulthood is the ultimate challenge facing high schools today (Stormont, Reinke, & Herman, 2011). There are many determinants of in-school and post-school success for students; effective instruction is a primary factor. Although many evidence-based practices and programs are available, many teachers continue to use practices shown to have little to no effect on outcomes of students with disabilities (Cook, Tankersley, & Landrum, 2009). It is important that educators are equipped to choose and implement practices shown to be successful for students.
Research conducted in the past 30 years to identify the most effective practices to teach academic and functional skills (e.g., Test, Fowler, et al., 2009; U.S. Department of Education, 2010, 2012) focused on measuring student performance and did not directly link interventions with post-school outcomes such as enrolling in post-secondary education or attaining employment. In response to the need to establish a link between in-school practices and programs and post-school outcomes, Test, Mazzotti, et al. (2009) reviewed correlational research in secondary transition using the quality indicators suggested by Thompson, Diamond, McWilliam, Snyder, and Snyder (2005) to identify in-school variables, or predictors, associated with improved post-school outcomes.
This research significantly advanced the field of secondary transition in special education by identifying evidence-based predictors of post-school success from high-quality correlational research. The predictors were defined directly from the research studies that established them. Unfortunately, the descriptions of the predictors, as published in empirical research, lack the detail necessary for educators and practitioners to develop, implement, or evaluate local programs aligned with the synthesized predictor research. For example, previous literature described work study only in terms of its outcomes: (a) students who participated in work study were 2 times more likely to be engaged in full-time post-school employment (Baer et al., 2003), and (b) students in the Bridges School to Work Program who accepted a post-internship job offer and who completed the internship were more likely to engage in post-school employment (Fabian, Lent, & Willis, 1998). Although a state or local education agency may report having work study opportunities for students, without operational definitions of work study components and the other predictors, policymakers, administrators, and teachers are left to guess, or assume, that their program is sufficiently aligned with the research literature. To move beyond guesses, assumptions, and speculations, each predictor needed to be operationally defined and its essential program characteristics determined. Therefore, the purpose of this study was to add specificity to the existing definitions and, where necessary, operationally define the predictors of post-school success (Test, Mazzotti, et al., 2009) in such a way as to provide educators with criteria to develop, implement, and evaluate secondary transition programs based on predictor research. The objectives of this study were
to reach consensus on the operational definition of each predictor of post-school success; and
to identify and reach consensus on the program characteristics for each predictor of post-school success.
Method
Group Processes
A Delphi procedure (Linstone & Turoff, 2002) was used to gain consensus on the operational definitions and program characteristics of the predictors. The Delphi procedure is a nominal group technique used to solicit input from experts to reach consensus about a particular topic. Consensus is gained through an iterative process of soliciting information from, and providing information to, experts via a series of questionnaires. The Delphi procedure provides a structured, systematic approach to collecting data in situations where the only available alternative may be anecdotal or subjective (Linstone & Turoff, 2002). Strengths of the Delphi procedure include a structured method of communication for individuals with appropriate knowledge of the content to express differing perspectives, provide ongoing feedback, and provide opportunity to edit previous contributions throughout the process (Hsu & Sandford, 2007).
Participants
The authors developed a set of inclusion criteria to identify a purposeful sample of experts in the field of secondary transition and/or career technical education (CTE). For the purpose of this study, an individual was deemed an expert if he or she met one or more of the following criteria:
author/researcher of scholarly, peer-reviewed work relative to one or more of the predictors of post-school success;
author/researcher of scholarly, peer-reviewed work relative to one or more specific youth populations (e.g., youth who drop out, Native American youth, youth with emotional behavioral disabilities);
author/researcher of scholarly, peer-reviewed work relative to one or more specific post-school outcome categories (e.g., post-secondary education or training, employment, or independent living);
practitioner (e.g., teacher, service provider, educational administrator) with 10 or more years of service to secondary-age youth with disabilities (10 or more years was chosen to ensure practitioners had sufficient experience and expertise).
The authors initially generated a list of 48 potential experts. This list was purposefully reduced to ensure a balance of potential respondents across researchers, local and state education administrators and practitioners, and those with special education or CTE expertise. After reducing the list, 32 experts were sent a letter describing the purpose and process of the study and inviting them to participate. Of the 32 experts, 22 responded to the invitation to participate. Of these 22 respondents, 13 were female; 5 were practitioners with 10 or more years of service to secondary-age youth with disabilities, 3 had 10 or more years of service or research experience in CTE, and 14 were authors/researchers of scholarly, peer-reviewed work relative to predictors of post-school success, post-school outcome categories, or one or more specific youth populations.
Response Rate
Dillman’s (2007) survey methods (e.g., pre-notifications and follow-up reminders to complete the survey) were used to maximize response rate. The overall response rate, calculated as the total number of respondents divided by the total number of possible respondents across all rounds of ranking, was 62%. Response rates varied across rounds, ranging from 32% to 100%. Individuals representing each inclusion criterion (e.g., practitioners with 10 or more years of service to secondary-age youth with disabilities or in CTE; authors/researchers of scholarly, peer-reviewed work relative to the predictors) participated in each round of voting.
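The response-rate arithmetic described above can be sketched as follows. The per-round counts here are hypothetical (the study reports only the aggregate figures of 62% overall and a 32%–100% range); only the formula, total responses received divided by total possible responses across all rounds, follows the text.

```python
# Illustrative sketch of the response-rate calculation.
# NOTE: per-round respondent counts are hypothetical, not the study's data.
respondents_per_round = [22, 14, 7, 12, 11, 14, 15]      # hypothetical counts
possible_per_round = [22] * len(respondents_per_round)   # 22 experts invited each round

# Overall rate: total responses received / total possible responses.
overall_rate = sum(respondents_per_round) / sum(possible_per_round)

# Per-round rates, used to report the range across rounds.
per_round_rates = [r / p for r, p in zip(respondents_per_round, possible_per_round)]

print(f"Overall: {overall_rate:.0%}")  # with these hypothetical counts, 62%
print(f"Range: {min(per_round_rates):.0%} to {max(per_round_rates):.0%}")  # 32% to 100%
```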
Procedures
Data were collected in three phases: (a) clarifying initial definitions, (b) soliciting input from experts and voting on definitions, and (c) reviewing final definitions and characteristics to ensure representation for students with disabilities from culturally and linguistically diverse (CLD) backgrounds (Hughes, 2013; Trainor, Lindstrom, Simon-Burroughs, Martin, & Sorrells, 2008). The process used in each phase of the Delphi procedures is described next.
Phase 1
To clarify the definitions of each predictor, the authors re-examined the original 22 articles used to identify the original 16 predictors (Test, Mazzotti, et al., 2009). We recorded any explicit definitions or descriptions of characteristics in the original literature that could assist in operationalizing each predictor. Next, we reviewed the secondary sources cited in the original 22 articles to better understand each original variable and ensure consistency in definitions across studies, again noting any differences in definitions or characteristics. Last, the authors reviewed other widely disseminated sources (e.g., textbooks) and recorded any additional definitions or characteristics associated with each predictor. All information gathered in Phase 1 was organized into a table (see Table 1 for a sample) and distributed to the experts as the basis for Round 1 in Phase 2.
Sample of Phase 1 Data Collection.
Phase 2
Phase 2 consisted of soliciting information from the experts through multiple rounds of questionnaires distributed via Qualtrics, an online survey software program. In all rounds of ranking, scores were calculated for each definition by summing the rankings assigned by experts for each definition. See Table 2 for an example of calculations. In each subsequent round, we calculated the minimum and maximum scores, mean, and standard deviation for each definition, and the highest ranked definitions for each predictor were carried forward for subsequent ranking. We graphed the rankings and used visual analysis (Kennedy, 2005) to determine the natural breaking point of the scores. Tied statements were included in the next round of ranking.
Example of Calculations for Each Round.
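The scoring and carry-forward procedure described above can be sketched as follows. All rankings and the cutoff are hypothetical: in the study, scores were summed per definition, descriptive statistics were computed, and the cutoff was chosen by visual analysis of the graphed scores, with ties at the cutoff carried forward.

```python
# Illustrative sketch of the per-round Delphi scoring (hypothetical data).
from statistics import mean, stdev

# Hypothetical rankings: expert -> {definition_id: rank}, higher = more preferred.
rankings = {
    "expert_1": {"def_A": 10, "def_B": 9, "def_C": 8},
    "expert_2": {"def_A": 9, "def_B": 10, "def_C": 8},
    "expert_3": {"def_A": 10, "def_B": 8, "def_C": 9},
}

# A definition's score is the sum of the ranks it received across experts.
scores = {}
for expert_ranks in rankings.values():
    for definition, rank in expert_ranks.items():
        scores[definition] = scores.get(definition, 0) + rank

# Descriptive statistics reported for each round.
values = list(scores.values())
summary = {"min": min(values), "max": max(values),
           "mean": mean(values), "sd": stdev(values)}

# Carry the highest-scoring definitions forward; include ties at the cutoff.
cutoff = 2  # hypothetical breaking point; the study chose it by visual analysis
ordered = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
threshold = ordered[cutoff - 1][1]
carried_forward = [d for d, s in ordered if s >= threshold]

print(scores)           # {'def_A': 29, 'def_B': 27, 'def_C': 25}
print(carried_forward)  # ['def_A', 'def_B']
```

Including every definition tied with the score at the cutoff mirrors the study's rule that tied statements advance to the next round of ranking.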
Phase 2, Round 1
For Round 1, experts were asked to review descriptions of each predictor extracted from the literature, write an operational definition for each predictor, and/or add missing program characteristics they deemed necessary for implementation of the predictor in a secondary setting. In the analysis, we combined similar definitions and characteristics, revised definitions to include suggested additions from experts, and eliminated duplicates.
Throughout this consensus building process (i.e., Round 2 and beyond), we were cognizant of remaining in the role of researcher and not letting our personal professional experiences influence results. To this end, we used a number of strategies to ensure results reflected experts’ intent and not that of the researchers. Strategies utilized to reduce researcher bias included using brackets (i.e., [ ]) to indicate a word inserted by us, working in pairs or triads when making decisions, checking our decisions with our co-researchers before continuing, and having experts vote on recommended changes before moving forward to the next round of rankings.
Phase 2, Round 2
In Round 2, we inserted verbs and other words that appeared to be missing from each definition and used brackets to denote our changes to what experts submitted in Round 1. We distributed the complete list of 223 suggested predictor definitions (i.e., 10–15 definitions per predictor) and 211 (i.e., 10–12 sets per predictor) suggested sets of characteristics to the experts and asked experts to rank 10 definitions for each predictor from those generated in Round 1 (with 10 being the highest and 1 being the lowest) and add any explanation, argument, or justification for or against the inclusion of the particular definition or characteristic as part of the final operational definition.
Phase 2, Round 3
As we reviewed the experts’ suggested text additions in Round 2 for the top 10 choices, we realized some suggested revisions could change the definitions or characteristics in substantial ways. For example, the definition of Vocational Education identified in Round 1 was, “Students should enroll in occupationally specific course to learn and practice specific vocational skills.” In Round 2, experts suggested adding “This coursework should also include opportunities for work-based experiences, such as work study, competitive employment, job shadows, etc.” To ensure we honored the experts’ intentions, we sought to gain consensus on the text additions before continuing with the rounds to reduce the definitions. Therefore, for Round 3, we distributed an Excel table containing the original wording of the top 10 definitions from Round 2 alongside the suggested text additions to the experts who responded to Round 2. Experts were asked to vote to accept or reject each suggested text addition.
Phase 2, Round 4
In Round 4, prior to sending out the 164 definitions (i.e., 9–10 definitions per predictor), the research team corrected grammatical and spelling errors and reduced redundancy (e.g., word duplications in a definition) to ensure consistency across definitions, being mindful not to alter the intent of the definition. For example, a suggested definition for self-determination was as follows:
Self-determination, [is] the ability to set goals for oneself, evaluate options, and take initiative to reach them is based on the ability to make choices, solve problems, and accept consequences of one’s own actions.
We re-wrote the definition as follows:
Self-determination is the ability to make choices, solve problems, set goals, evaluate options, take initiative to reach one’s goals, and accept consequences of one’s actions.
The revised definitions were distributed to experts, and experts were asked to rank their top three definitions, with 3 being the highest ranking and 1 being the lowest ranking.
Phase 2, Round 5
In Round 5, 48 definitions (i.e., 3 per predictor) were distributed, and experts were asked to select the one operational definition they thought best represented each predictor from the top three identified in Round 4.
Phase 2, Round 6
Round 6 focused on identifying the program characteristics aligned with the final operational definition for each predictor. In earlier rounds of ranking, the panel of experts had identified from 3 to 12 program characteristics per predictor. Because the impetus for the study was to help educators know what is necessary to develop, implement, and evaluate secondary transition programs based on predictor research, the number of program characteristics needed to be reduced to only those most relevant for program development, implementation, and/or evaluation. The top 10 sets of program characteristics identified in Round 1 were split apart into individual program characteristics, yielding 342 program characteristics (i.e., 10–28 characteristics per predictor). Experts were then asked to use their professional expertise and judgment to classify each suggested program characteristic as essential, ancillary, or irrelevant given the newly defined operational definition.
Program characteristics identified as essential or ancillary were retained and included in Round 7 of ranking. Program characteristics identified as irrelevant were excluded from the list of program characteristics. To reduce redundancy and ensure consistency in the program characteristics, we themed the characteristics into like categories, and when possible, collapsed similar characteristics into one statement. For example, the following program characteristics were identified by the experts:
Use a direct instruction curriculum to teach communication and interpersonal skills.
Use a direct instruction curriculum to teach conversational, negotiation, and conflict skills.
Use a direct instruction curriculum to teach group skills.
The research team collapsed these three statements into the following program characteristic:
Use a direct instruction curriculum to teach communication, interpersonal, conversation, negotiation, conflict, and group skills.
Phase 2, Round 7
For Round 7, researchers asked experts to view each predictor’s operational definition and essential program characteristics as a whole and determine whether they were Acceptable or Required Revision. If an expert felt a revision was required, he or she was asked to re-write the definition or characteristic, incorporating the suggested revision.
Phase 3
Because the number of students from racially and ethnically diverse backgrounds is steadily increasing in our schools (Hughes, 2013), it is important to ensure the secondary transition needs of students from diverse populations are met through effective interventions and programs (Mazzotti, Rowe, Cameto, Test, & Morningstar, 2013; Morningstar, in press; Trainor et al., 2008). The final respondent group of the Delphi procedure was not representative of diverse populations (e.g., most respondents were White). As we reviewed the final definitions and program characteristics identified by the experts, we noted some concerns as to whether they were inclusive of diverse populations. For example, many of the program characteristics appeared to target mainstream transition practices and neglected to recognize possible differences among diverse populations of youth (e.g., differences in expected outcomes of youth from diverse backgrounds, cultural nuances in expected social and workplace behaviors). Therefore, in the final phase, we requested that the Equity Assistance Center (EAC), funded by the U.S. Department of Education, review the operational definitions and program characteristics for each predictor with a lens toward equity, access, and culturally responsive characteristic descriptions.
Results and Discussion
Final Definitions and Program Characteristics
After seven rounds of voting during Phase 2, experts reached consensus on an operational definition and a set of essential program characteristics for each of the 16 predictors of post-school success identified through the review of correlational research. Table 3 presents the final operational definitions and essential program characteristics.
Operational Definitions and Essential Program Characteristics of the 16 Predictors Identified in Test, Mazzotti, et al. (2009).
A recommendation by the Equity Assistance Center (EAC) to address cultural relevance and competency.
To strengthen the final definitions and characteristics in Phase 3, the EAC reviewed and added language to some characteristics to address equity and/or diversity. Table 3 includes the 14 suggestions made. For example, one characteristic of the predictor Student Support read as follows:
Develop and implement procedures for cultivating and maintaining school and community networks to assist students in obtaining their post-secondary goal.
EAC added the following language to this characteristic:
Consider networks that are culturally, racially, and ethnically representative to accommodate the needs of CLD students.
Furthermore, the EAC suggested that every predictor and characteristic be considered in terms of its impact on students from CLD backgrounds, being mindful that mainstream, value-based approaches may not serve the needs of all students. This recommendation is supported by Trainor et al. (2008), who suggested a need for increasing educators’ cultural competence and recommended that “educators consider student’s culture and communities in transition planning and service delivery” (p. 62).
Limitations
When considering these findings, four limitations need to be taken into account: (a) selection of experts, (b) response rate, (c) online communication, and (d) potential researcher bias. First, a recognized limitation of any study utilizing a Delphi procedure is the selection of participants identified to lend their experience and expertise to the research (Linstone & Turoff, 2002; Schmidt, Lyytinen, Keil, & Cule, 2001; Welty, 1972). In spite of adhering to strict inclusion criteria to ensure diversity in content expertise (e.g., CTE, special education), application of knowledge (e.g., researchers/academic and public school educators/practitioners), level of application (state department of education and local school district educators), and those with specific knowledge related to the 16 predictors of post-school success, selection of experts was limited by our personal knowledge of individual experts’ research focus, their years of experience, and professional work. Selection of experts was not based on a broad or comprehensive survey of such professionals (e.g., membership of professional organizations associated with those fields) or analysis of their individual contributions; therefore, we can make no statement about how experts’ opinions would generalize to other experts or practitioners.
Another limitation of this study was the reduced and variable response rate from experts across the rounds of the study. To encourage experts to remain active and complete all rounds of the Delphi study, personal emails were sent inviting individuals to participate. During each consecutive round, we sent multiple emails reminding participants of the goal of the study and the importance of their contribution. Previous literature using Delphi procedures suggests that decreasing response rates after each consecutive round of voting are not uncommon (Linstone & Turoff, 2002). We checked the respondent pool after each round to ensure individuals representing each of the inclusion criteria (e.g., practitioners with 10 or more years of service to secondary-age youth with disabilities or in CTE; authors/researchers of scholarly, peer-reviewed work relative to predictors of post-school success, post-school outcome categories, or one or more specific youth populations) participated in all phases of the study. To increase the response rate after the last round of voting and maintain a steady rate at or above 50% for the remainder of the study, we included all 22 experts in Rounds 5 through 7. Throughout each round of voting, experts could add or revise a characteristic, include an argument for or against including a characteristic, or advocate for specific wording of a characteristic. In this way, experts were afforded the opportunity to have their voices heard even when they had not participated in a previous round of voting. This flexibility also ensured experts had the opportunity to add to or elaborate on characteristics should they realize something had been missed previously.
Third, all interactions with experts were through electronic survey and e-mail; intent of edits could only be articulated in writing. Therefore, meaning expressed through voice or facial expression could not be discerned as neither focus groups nor interviews were conducted.
Finally, although we were mindful not to introduce our personal, professional knowledge into the study by not adding or recommending definitions or characteristics, and took steps to be transparent when changing any wording, no research study is completely free of bias. Other researchers reading the same suggestions might have edited the responses differently. However, all edits were approved by our respondents. In spite of these limitations, there are key implications for practitioners and other researchers to consider based on the findings of this study.
Implications for Practice
Operationally defining the programmatic characteristics of the predictors of post-school success may narrow the research-to-practice gap by giving educators information to align secondary transition programs with high-quality research shown to increase the likelihood of positive post-school outcomes for youth with disabilities. The operational definitions and program characteristics can be used to develop and expand secondary transition programs and/or evaluate existing programs. The findings can assist schools and districts in identifying whether they are implementing practices that have been empirically shown to influence change in the areas identified (e.g., post-school outcome data, in-school outcome data). Doing so will enable schools to invest in transition services with the best chance of improving students’ post-school outcomes, thereby ensuring the biggest return for the resources (e.g., time, effort, finances) invested. Implementing each of the predictors will require a collaborative effort among state and district administrators, teachers, and other school staff. Some predictor program characteristics will require action at a state or district level. For example, among the program characteristics of vocational education, the following clearly requires action at a state/district administrative level and will have effects at a systems level:
Provide opportunities to earn certificates in certain career areas (e.g., Certified Nursing Assistant, Welding, Food Handlers Certification).
Other program characteristics for vocational education require action only on the part of a teacher and have impact at the student level. For example,
Provide accommodation and supports in CTE courses to ensure student access and mastery of content.
Multidisciplinary teams should work together to determine to what extent each predictor and program characteristic is being implemented and who will be responsible for developing an action plan to implement each program characteristic.
In addition, although each characteristic is essential to fully implementing a predictor, a multidisciplinary team should make decisions regarding priorities for action based on student performance data, resource availability, policy alignment, and other data available to a team. Certain characteristics may be of greater importance than others for an individual team. There may also be additional program characteristics not identified in this study given the state/local context.
Recommendations for Further Research
This study moves the field of secondary transition another step forward by establishing operational definitions and program characteristics of the predictors with sufficient detail for educators to develop, implement, and evaluate programs. Nevertheless, more work is needed. Researchers need to conduct long-term experimental research verifying that the characteristics identified by the panel of experts hold in real classroom settings and for a variety of students. A survey of a broader, more comprehensive group of participants could confirm or refute the generalizability of the characteristics identified by the small number of experts participating in this study. Empirical research is needed to determine the effectiveness of the program characteristics and what, if any, revisions are needed to distinguish between essential and ancillary characteristics. Ongoing review of the literature has found evidence to support a 17th predictor (i.e., parent expectations) of post-school success. As new predictors are identified, this procedure will need to be repeated to determine their corresponding operational definitions and program characteristics.
In conclusion, the purpose of this study was to clarify the existing definitions of the 16 predictors of post-school success (Test, Mazzotti, et al., 2009), and where necessary, operationally define them in such a way that local educators know what is necessary to develop, implement, and evaluate secondary transition programs based on predictor research. Results indicated experts in the field of secondary transition reached consensus on an operational definition and a set of essential program characteristics for each of the 16 predictors. Policymakers, administrators, and practitioners now have information to assist them in providing evidence-based programs and services to youth with disabilities to prepare them for the transition from school to post-school life.
Footnotes
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received the following financial support for the research, authorship, and/or publication of this article: This document was developed by the National Post-School Outcomes Center, Eugene, Oregon (funded by Cooperative Agreement Number H326U090001), with the U.S. Department of Education, Office of Special Education and Rehabilitative Services, and the National Secondary Transition Technical Assistance Center, Charlotte, NC (funded by Cooperative Agreement Number H326J11001), with the U.S. Department of Education. Dr. Selete Avoke and Marlene Simon-Burroughs served as project officers. Opinions expressed herein do not necessarily reflect the position or policy of the U.S. Department of Education, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Department of Education.
