Abstract
Experiential learning is used extensively in agriculturally related colleges and courses. There is a need to examine how best to structure meaningful learning experiences to prepare the agriculture, food, and natural resources workforce. This study aimed to investigate how different modes of reflection and levels of transfer affect the development of students’ transfer skills over time in an undergraduate animal science course. A 2 × 3 factorial, quasi-experimental design, analyzed with linear mixed modeling, was used to examine how the two treatment variables (reflection mode and transfer level) impacted students’ transfer skills (the dependent variable) over time. Time (in weeks) had a positive, significant effect on participants’ (N = 123) transfer skill quiz scores. The fixed effect of near transfer had a negative, significant effect on participants’ transfer skills scores as compared to the fixed effect of far transfer. All remaining fixed effects had no significant effect on students’ transfer skills scores. As students spend more time practicing transfer skills, especially in distal contexts (i.e., far transfer), their ability to transfer abstract concepts increases. We recommend that educators who seek to teach for transfer implement intentional scaffolding in which learners progress across multiple levels of transfer context.
Plain Language Summary
This study looked at how college students in an animal science course learn to apply what they have learned to real-life situations. We tested two things: how students reflected (writing on their own vs. talking with a classmate) and what type of practice they received (applying ideas to the same, similar, or very different situations), to see how each influenced how well they could apply what they learned. We found that students got better at applying their knowledge over time, especially when they practiced using it in very different situations. There was no difference in students’ learning whether they reflected by writing independently or by talking with a peer. This matters because many agriculture-related college programs use hands-on learning, but it is not always clear how to design these experiences to help students truly understand and use what they learn. The key takeaway is that giving students time and challenging them with unfamiliar problems helps them learn better. Teachers should focus on giving students real-world practice and encourage them to think deeply about how they can use what they are learning.
Introduction
Experiential learning is used extensively at the post-secondary level (Eyler, 2009; Kolb, 2015; Lin et al., 2025; Nilson, 2016; Valiente-Riedl et al., 2021), especially in agriculturally related colleges and courses (Estepp & Roberts, 2011; Lehrer, 2024). The National Academies of Sciences, Engineering, and Medicine (NASEM, 2021) advised implementing experiential learning to ready the upcoming workforce in the agri-food and natural resources (AFNR) sector. Experiential learning “requires a holistic perspective that considers the nature of the individual student along with industry and academic objectives for learning outcomes” (NASEM, 2021, p. 20). However, many programs in higher education focus too heavily on skill development, and not enough on the development of the holistic person, a trend Kolb (2015) referred to as the vocationalism of higher education. Vocationalism can equip students with experiences to build industry-based skills and trade, but “there are…dangerous currents of anti-intellectualism in this movement” (Kolb, 2015, p. 6). Instead, higher education should also focus on the holistic experiential learning process, rather than only emphasizing content and vocational skill development (Kolb, 2015). Higher education programs must provide students with learning experiences that build humanistic knowledge and competence as well as industry skills and career preparation (Cheng et al., 2022; Chickering, 1977; Goulart et al., 2021; Kolb, 2015). In doing so, “we will recognize the key significance of differences among students, not only in verbal skills and academic preparation but also in learning styles, capacity for independent work, self-understanding, social awareness, and human values” (Chickering, 1977, p. 87).
Eyler (2009) suggested that, while traditional experiences such as internships and fieldwork are important for vocationally and professionally oriented college tracks, experiential learning has the power to build intellectual capacity within liberal education. In the higher education classroom setting, experiential learning can build deep comprehension of subject matter, stimulate critical thinking, foster knowledge application and transfer, and promote life-long learning among students (Eyler, 2009). A fundamental goal of higher education is student mastery of complex bodies of knowledge, and such mastery is often measured through students’ abilities to transfer knowledge to applied settings. Mastery of knowledge and learning transfer occur when students can move beyond simply recalling information and are instead able to identify how knowledge can be utilized and when they develop the ability to apply said knowledge (Eyler, 2009).
Transfer of learning, which is especially important in the context of preparing individuals for vocational and professional settings, is heavily dependent upon the learning experience (Macaulay, 2000; Morris, 2019). Such an experience should be “well taught and well-integrated with previous knowledge, [employ] teaching methods which seek to enhance the ability of students to make connections and [provide] ample scope for putting learning into practice” (Macaulay, 2000, p. 17). More than twenty years later, recommendations for enhancing experiential learning in AFNR contexts continue to be made (NASEM, 2021; Newcomb et al., 2004/2025). These recommendations have included examining how best to structure meaningful learning experiences to prepare the AFNR workforce, encouraging the prompting of learner reflection around their experiences, and identifying which environments and designs are best suited for learning (Coleman et al., 2020, 2021; NASEM, 2021). Considerable research has been conducted on the use of experiential teaching methodologies, such as cooperative learning, service learning, and project-based learning, in agriculturally focused college courses (Beadle et al., 2024; Burmesch et al., 2024; Culhane et al., 2018; Greenhaw et al., 2025; Kricsfalusy et al., 2016; Matriano, 2020; Young, 2023). Much of this work has focused on instructional strategies rooted in experiential learning rather than the broader framework of experiential learning theory and its core tenets. As such, there remains a gap in exploring the effectiveness of experiential learning theory as a guiding model for pedagogical approaches in post-secondary agricultural settings.
This research gap, coupled with the need to prepare a highly skilled AFNR workforce (NASEM, 2021), enhances the need to study theoretical elements (i.e., experience, reflection, conceptualization, and application) of experiential learning through both pedagogical and theoretical lenses in post-secondary agricultural contexts. Therefore, this research aimed to investigate how reflection mode and transfer level influence students’ ability to transfer skills in an undergraduate animal science course through experiential learning.
Theoretical Framework and Related Literature
Experiential learning theory was used as the framework for this study (Dewey, 1938; Joplin, 1981; Kolb, 1984, 2015; Roberts, 2006). Roberts (2006) claimed that many practitioners in agricultural education have studied the practice of experiential learning, but less attention has been paid to the theory behind the practice. Knobloch (2003) issued a challenge to educators, stating we must “move beyond the ‘doing’ and ensure that all learning is connected to thinking and knowledge that will be easily remembered and applied later in life” (p. 31). For an experience to be educative, it requires more than concrete experience alone (Dewey, 1938; Knobloch, 2003; Kolb, 2015). Kolb (2015) claimed that an educative learning experience should encompass four key components: “(a) concrete experience, (b) reflective observation, (c) abstract conceptualization, and (d) active experimentation” (p. 44). A synthesis of prominent theorists’ models of experiential learning (Joplin, 1981; Kolb, 1984; Roberts, 2006) is presented in Figure 1. Additionally, this model addresses a primary critique from researchers: that models of experiential learning are overly simplistic and stepwise in nature.

Figure 1. Visualizing the process of experiential learning.
This model depicts the foundational elements of learning experientially: “(a) experience, (b) reflection, (c) conceptualization, and (d) application” (Coleman et al., 2024). These components are presented in no particular order, and they overlap with one another. This is because the order in which the stages of the learning process occur is of less importance (Kolb, 2015), and stages of the process can occur simultaneously or at multiple points in the process. For example, reflection might occur during a concrete experience, after the experience, and again during the application stage. The interaction of these stages is important in framing this study, as we aim to examine the effects (i.e., main and interaction) of two experiential learning components: reflection and application. These two variables were selected for several reasons. When delivering instruction framed with experiential learning, educators often focus on facilitating the concrete experience for learners while frequently omitting other elements of the framework, especially reflection and application (Shoulders & Myers, 2013). It has been contended that experiences lacking elements of the experiential learning process may not be as educative (Baker et al., 2014; Kolb, 2015; Knobloch, 2003). Early on, Dewey (1938) contended that educational experiences must be well-planned and clearly conceptualized. For that reason, it may not be enough for educators simply to be instructed to prompt or facilitate reflection and application practices with their students. There are countless approaches to reflection and application; therefore, this study sought to test some common approaches to compare their effectiveness.
Reflection
Reflection is a critical learning process component for ensuring that learners’ experiences are educative ones (Dewey, 1938; Kolb, 2015; Lehrer, 2024; Zull, 2002). Kolb (1984, 2015) labeled this stage of the experiential learning cycle reflective observation, which is the process of thinking about our experiences and examining them in our memory, in order to build mental connections between the concrete world and abstract thought. Zull (2002) claimed, “We need reflection to develop complexity. We may start with a direct and sometimes relatively simple concrete experience, but that experience grows richer as we allow our brain the freedom to search for those still unknown connections” (p. 164). While reflection can occur subconsciously within a person’s mind, it is also critical for the educator to prompt meaningful reflection within their students. It is up to the educator to design curricula and assignments that prompt reflection around essential, related concepts (Rivas et al., 2022; Zull, 2002).
In agricultural education settings, studies have measured the impact of Schön’s (1983) two reflective modes (i.e., reflection-in-action and reflection-on-action) while implementing experiential learning (Baker et al., 2014; Blackburn et al., 2015; Coleman et al., 2020, 2021; DiBenedetto et al., 2017). While these modes of reflection are widely recognized, they focus more on when reflection occurs as opposed to how reflection occurs. More recently, attention to reflection mode has been the focus of some researchers (Lin et al., 2025; Sieben et al., 2021), and varying effects have been observed based on the approach prompted by the educator (Lin et al., 2025). A reflection mode suggested by Wright et al. (2013) called hevruta emphasized verbal discussion with a peer learner. The technique was used in two college courses, and it was successful in engaging participants in intellectual conversation and content comprehension around course topics (Wright et al., 2013). Another mode of reflection that has been shown to be effective is written reflection via journals and portfolios (Hubbs & Brand, 2005; Lamm et al., 2011; Loo & Thorpe, 2002; Thorpe, 2004; Yancey et al., 2013). Thus, this study will test the effects of reflection, conducted verbally with peers or via independent written journal reflections, which will serve as the first independent variable.
Application via Transfer
Application, also known as active experimentation, is critical for effective experiential learning (Kolb, 2015; Morris, 2019; Roberts, 2006; Zull, 2002). Applying learned concepts to similar or new situations is known as knowledge transfer (Haskell, 2001; Macaulay, 2000). To accomplish this, learners must be able to implement their knowledge into real-life practice (Macaulay, 2000). However, the role of the teacher is critical in educating learners on how to accomplish this (Newcomb et al., 2025). It is up to the educator to facilitate situations that challenge students and present them with problems so they may “take appropriate action based on the different sources of knowledge that they have accessed” (Macaulay, 2000, p. 18). In educational environments, employing effective teaching strategies (i.e., problem-based learning, simulations, case vignettes, or other reasoning techniques) helps bring the transferable elements and concepts students have learned to the forefront of their minds (Campione et al., 1995; Macaulay, 2000). Learners can then apply concepts to real-life situations “while being a step removed from a ‘messy’ reality where time is of the essence” (Macaulay, 2000, p. 18). Moreover, Morris (2019, p. 1074) contended that conceptualization and application must be context-specific and pragmatic to achieve “deep conceptual understanding.”
The contextual levels at which transfer can occur are also of importance. Macaulay (2000) identified two main categories of transfer: (a) near transfer and (b) far transfer. Essentially, the difference between the two is transferring learned concepts to the same or similar situations in which they were learned, versus transferring concepts to situations largely dissimilar from those in which they were learned. According to Haskell (2001, p. 29), six levels of transfer exist: “(a) nonspecific transfer, (b) application transfer, (c) context transfer, (d) near transfer, (e) far transfer, and (f) displacement or creative transfer.” We consolidated these into three main categories (i.e., same, near, and far transfer), as shown in Table 1, which will be used as the second independent variable in this study.
Table 1. Combined Transfer Levels.
Transfer Skills
The ability to transfer learning is often considered the culmination, or overall objective, of learning. If the learner cannot transfer abstract concepts to real-life situations, then it can be argued that learning has not taken place (Zull, 2002). Haskell (2001) emphasized the importance of practice in achieving transfer, stating, “Whether in the laboratory, in the classroom, or in natural settings, without sufficient practice enabling the adequate encoding of learning, transfer fails” (p. 175). Often, when a learner cannot transfer concepts and knowledge, it is because the time allocated for practice was overly brief (Haskell, 2001). A learner’s ability to achieve transfer also depends on the type of practice they receive. Haskell (2001) posited that when learners process information across situations where stimulus–response pairs are frequently changing, far transfer is often achieved. However, automatic processing, the repetition of the same skill or concept application in the same setting, often leads only to near transfer (Haskell, 2001). This is often the case in vocational settings, where learners are trained for specific skills instead of being educated holistically and encouraged to be reflective in their practice. Such skill-repetition-focused settings are not conducive to building transfer skills (Haskell, 2001; Kemshall, 2000).
In addition to building transfer skills, educators must also consider how they might assess learners’ transfer skills, which can be a difficult concept to measure (Cree, 2000; Galoyan & Betts, 2021). Cree (2000) outlined six characteristics that can serve as indicators of learning transfer: (a) being an active learner by seeking out knowledge, (b) reflecting on previous experience and knowledge, (c) making relevant connections between different experiences and sources of knowledge, (d) being flexible and able to compare critically, (e) using abstract principles appropriately, and (f) integrating personal knowledge and experience with professional knowledge and experience. Assessment methods by which an educator could potentially evaluate these six indicators include (a) observation of direct practice, (b) oral and written evidence, (c) learner self-assessment, and (d) feedback from clients or stakeholders (Cree, 2000). For this study, students’ transfer skills will be the dependent variable and will be measured using written evidence, as recommended by Cree (2000).
Purpose and Questions
This study aimed to investigate how different modes of reflection and levels of transfer affect the development of students’ transfer skills over time in an undergraduate animal science course. An introductory animal science course was selected as the context for this study because within the [College], animal sciences is one of the largest undergraduate departments. Moreover, there has been, and likely will be, an increasing global demand for animal-based food products (Komarek et al., 2021), undergirding the need for well-trained graduates entering this industry. This research was guided by three questions:
How do students’ transfer skill scores change over time, and how are these changes influenced by mode of reflection and level of transfer?
What are the effects of the mode of reflection, level of transfer, and time (in weeks) on students’ transfer skill scores, as measured using statistical modeling?
Which statistical model best predicts students’ transfer skill scores?
Methodology
Research Design
This research is part of a large-scale research project on the effects of experiential learning in a post-secondary agricultural education setting. A 2 × 3 factorial, quasi-experimental design, examined with linear mixed modeling (LMM), was used for this research (Field, 2018, 2024; Singmann & Kellen, 2019). We were most interested in examining the simple main effects and the interaction of two treatments on one dependent variable across time (Terrell, 2012). The first independent variable (IV1), reflection mode, consisted of two categories: written journaling and peer conversation. The second independent variable (IV2), transfer level, included three categories: same, near, and far transfer (see Table 1). The dependent variable consisted of students’ transfer skill scores measured at various time intervals throughout the course. A pre-test was administered at the start of the course, and scores from this pre-test were used as a predictor variable to account for students’ initial content knowledge.
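The factorial structure can be made concrete with a short sketch. This is purely illustrative: it crosses the two factors to enumerate the six treatment cells, with letter labels matching the group assignments described in the procedures (groups A–C written journaling, D–F peer conversation; A/D same, B/E near, C/F far transfer).

```python
from itertools import product

# The two treatment factors and their levels, per the study design.
reflection_modes = ["written journaling", "peer conversation"]
transfer_levels = ["same", "near", "far"]

# Crossing the factors yields the six cells of the 2 x 3 factorial design.
cells = list(product(reflection_modes, transfer_levels))

# Label the cells A-F to mirror the six laboratory sections.
for label, (mode, level) in zip("ABCDEF", cells):
    print(label, mode, level)
```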
Table 2. Participants’ Characteristics.
Note. Totals may not sum to 100% due to rounding or missing data.
Study Participants, Context, and Procedures
This study was conducted in the College of Agricultural and Life Sciences at the University of Florida, which had 4,101 undergraduate students enrolled in the fall semester of 2021. Participants were selected via non-probability convenience sampling from student enrollment in ANS3006L: Introduction to Animal Science Laboratory (Ary et al., 2010; Dooley, 2001). This course included six pre-existing laboratory sections to which students were assigned. We selected this course because it is frequently taught across colleges of agriculture with enrollments large enough to draw a sample for quasi-experimental research. Moreover, this experiential, laboratory-structured course was divided into six sections, which provided pre-existing, equally distributed groups with which to conduct the research design. Students must complete ANS3006: Introduction to Animal Science as a pre- or co-requisite to take the accompanying laboratory course. In total, 123 students were enrolled in the laboratory course.
Before conducting the study, a meeting was held with college administrators and the course instructors to review the research design and procedures. This study received ethical approval from the University of Florida IRB (approval #202101849) on August 10, 2021. At the beginning of the semester, students were informed of the study, and they were given incentives for their participation (i.e., course bonus points and a gift card). Before beginning the study, we obtained written consent to participate from all enrolled students during the first class meeting. To accomplish this, the lead researcher (a) met with each class, (b) verbally outlined the study’s purpose, risks, benefits, and procedures, (c) provided the students with an informed consent form, and (d) obtained signed consent forms from those who wished to participate (N = 123). The design of the study ensured that the students’ educational experience remained the same regardless of their decision to participate. While participants were incentivized, participating in this study required no additional time from the students because the experimental treatments were included in the course design, and data were collected as a function of course assessments. Students’ identities were anonymized before data analysis, and only aggregated data are reported in this study.
Eighty-two percent of participants identified as female (n = 101), 15% as male (n = 19), and 2% as non-binary (n = 3). A majority of participants identified as White (54%; n = 66), 25% as Hispanic or Latino (n = 31), 12% as multiracial (n = 15), 6% as Asian (n = 8), and 2% as Black (n = 3). Fifty-five percent of participants (n = 68) had previously completed at least one post-secondary animal science course. Table 2 displays the characteristics of participants by treatment group.
Experimental designs in the social sciences can be complex, and researchers must recognize study limitations (Diener et al., 2022). Because the six pre-existing laboratory sections served as the six treatment groups, selection bias could have posed a threat to internal validity (Ary et al., 2010). However, treatments were assigned randomly to the laboratory groups. All other threats to validity were accounted for in the study’s design. No students dropped the course, and course attendance averaged 93%. Researcher observations were recorded as field notes across the semester to audit protocol deviations.
The six laboratory sections met separately each week, in person, for 1 hr and 55 min. Over the course of the study, there were 12 class meetings, with 2 weeks of scheduled breaks. Additionally, students were required to complete online pre-lab assignments before each weekly laboratory session. These assignments, which aimed to develop students’ abstract content knowledge, included educational video lectures, PowerPoint© presentations, and readings related to each week’s topic. The instructors of the course, guided by the lead researcher, delivered the experimental treatments to the participants. Frequent meetings were held between the researcher and the instructional team to discuss treatment procedures and ensure consistency. Consistent instructional delivery was implemented across groups to control for instructor effects.
For IV1, reflection mode, students were asked up to five pre-developed reflection questions aligned with each week’s learning objectives. The treatment was administered during the in-person class meeting over 11 of the 12 weeks of instruction. The reflection questions were the same across groups; however, the reflection method varied by treatment group. Groups A, B, and C used written journaling as their method of reflection, while groups D, E, and F engaged in verbal peer reflection. Written journal reflection groups were issued a physical note-taking journal in which to capture their reflective responses to the questions. Verbal peer reflection groups were instructed to discuss their reflective responses with a peer. Allotted reflection time varied across weeks based on the number of reflection questions asked, ranging from approximately 7 to 10 min per week; within a given week, however, reflection duration was kept consistent across treatment groups.
For IV2, transfer level, the treatment included a practice application exercise facilitated during the laboratory meeting. The researchers and the course instructors met to develop the exercises and ensure alignment with both the transfer level and the course content. Students were provided with a case vignette (i.e., a situational problem) in which they were prompted to apply the learning objectives and concepts from the week. This approach was used because case studies and problem scenarios are recommended for practicing learning transfer (Macaulay, 2000). Students were instructed to complete the vignettes during class in small group settings with their classmates. Groups A and D (treatment level one) received a vignette designed to prompt same transfer. Groups B and E (treatment level two) were given a vignette aimed at near transfer. Groups C and F (treatment level three) worked on a vignette intended for far transfer. Following group work on the vignettes, the instructors conducted a whole-class debriefing session to address any student questions. The IV2 treatment was implemented over 8 weeks within the 12-week instructional timeframe.
Instrumentation
Course Pre-test (Covariate)
Given that 55% of students had previously taken an animal science course, it is probable that they brought existing knowledge and experiences to the class. As stated by Streiner (2016, p. 4), “using baseline variables that may be related to the outcome as covariates can reduce the within-group variance, thus increasing the accuracy of the estimates of treatment effects and the power of the statistical test.” A pre-test was administered at the beginning of the course, before the first laboratory session, to measure students’ prior knowledge. This assessment was delivered via Canvas—the primary learning management platform for University of Florida courses. The pre-test consisted of 26 questions that were randomly selected from the course’s weekly, multiple-choice, content-knowledge quiz questions. A panel of experts was commissioned to review the assessment for face and content validity. Before analysis, group mean centering was conducted to help prevent multicollinearity between predictor variables (Field, 2018, 2024; Singer & Willett, 2003). This practice centers each individual’s score on the mean of their subgroup; in this study, pre-test scores were centered on the mean of each respective treatment group.
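Group mean centering of this kind can be illustrated with a minimal sketch. The analysis in the study was conducted in SPSS; the pandas code and scores below are purely hypothetical, showing only the arithmetic of subtracting each treatment group’s mean from the scores within that group.

```python
import pandas as pd

# Hypothetical data: one row per student, with treatment group and pre-test score.
df = pd.DataFrame({
    "group": ["A", "A", "B", "B"],
    "pretest": [18, 22, 15, 25],
})

# Group mean centering: subtract each group's mean pre-test score from the
# individual scores within that group, so each group's centered mean is zero.
df["pretest_centered"] = df["pretest"] - df.groupby("group")["pretest"].transform("mean")

print(df["pretest_centered"].tolist())  # [-2.0, 2.0, -5.0, 5.0]
```

Centering within groups this way removes between-group differences in the covariate, which is what helps guard against multicollinearity with the group-level predictors.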
Transfer Skills Assessment
Assessing transfer skills can be difficult because transfer is influenced by a range of factors (Cree, 2000). Cree (2000) outlined four specific methods for assessing transfer skills: (a) assessment of direct observed practice, (b) assessment of oral and written evidence, (c) learner self-assessment, and (d) feedback from stakeholders. For this study, we opted to assess transfer using written evidence in the form of written-response questions over the course of the semester, an approach also supported by Campione et al. (1995). The questions were developed to present real-world, problem-based scenarios related to the weekly course objectives. The questions prompted learners to transfer the objective-based concepts across three contextual settings (same, near, and far contexts), and all participants received the same questions. The questions were evaluated by an expert panel to ensure face and content validity. Transfer assessment questions were administered via the Canvas learning management system as part of students’ weekly quizzes, during the same 8 weeks in which the transfer level treatments were provided.
To score participants’ transfer skills, a 12-point rubric was developed for evaluating the written responses. The following considerations for constructing rubrics recommended by Butler and McMunn (2006) were implemented: (a) the rubric reflects the skills and content taught; (b) the rubric emphasizes important concepts; (c) the rubric differentiates between superior, adequate, and substandard responses; and (d) the rubric designates the most important characteristics with appropriate point distributions. Additionally, the rubric criteria were developed based on Cree’s (2000) six indicators of transfer of learning: (a) the learner is active in seeking out knowledge and learning, (b) the learner is reflective on their previous experience and knowledge, (c) the learner makes relevant connections between experiences and sources of knowledge, (d) the learner can compare and discriminate critically between concepts, (e) the learner uses abstract principles accurately, and (f) the learner integrates personal knowledge and experience with professional knowledge and experience.
In total, 951 written-response transfer skill assessments were submitted by participants over the 8 weeks in which assessments were provided. Thirty-three assessments (3.3%) were missing at random. Complete case analysis was used because the proportion of missing data was below 5%, which is considered negligible (Jakobsen et al., 2017). All assessments were scored by the lead researcher after interrater reliability was calculated. Interrater reliability was calculated using percent agreement among raters for a sample of the assessments (Dooley, 2001; Stemler, 2004). Acceptable levels of percent agreement for consistency estimates are those at or above 70% (Stemler, 2004). To calculate percent agreement, three assessments were randomly selected from each of the six treatment groups across the 8 weeks of assessments, for a total of 144 assessments (15%), to be scored by two raters, both of whom were researchers on this study. Of the 144 assessments, 125 were exact agreements, for a percent agreement of 86.8%. The 19 scores that were not exact agreements were all near-miss adjacencies (off by only one point). To resolve the differences, the two raters met to discuss the adjacencies, which provided additional clarity around the use of the scoring rubric.
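The percent-agreement computation is straightforward and can be sketched as follows. The rubric scores below are hypothetical, not the study’s data; the function simply counts exact matches between two raters’ scores.

```python
def percent_agreement(rater1, rater2):
    """Percentage of exact score agreements between two raters."""
    matches = sum(a == b for a, b in zip(rater1, rater2))
    return 100 * matches / len(rater1)

# Hypothetical 12-point rubric scores from two raters on five assessments.
rater1 = [12, 10, 8, 11, 9]
rater2 = [12, 10, 7, 11, 9]  # one near-miss adjacency (off by one point)

print(percent_agreement(rater1, rater2))  # 80.0
```

In the study itself, 125 exact matches out of 144 sampled assessments give 125 / 144 ≈ 86.8%, above Stemler’s (2004) 70% threshold.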
Data Analysis
SPSS Version 28 was used for data analysis. An LMM procedure was conducted for this three-level data structure (Field, 2018, 2024; Singer & Willett, 2003; Singmann & Kellen, 2019). The LMM procedure was selected because it is appropriate for experimental design studies that include repeated measures (Singmann & Kellen, 2019), and assumptions of independence and homogeneity are met as a function of the procedure (Fitzmaurice & Ravichandran, 2008). The restricted maximum likelihood (REML) method was used for estimating model parameters.
Figure 2 is a visual display of the three levels: (a) level one, representing each of the 8 weeks in which a measure of transfer skills was observed; (b) level two, each participant within the study; and (c) level three, each of the six treatment groups (A–F). However, upon initial analysis, the level-three parameter accounted for a negligible share of the variance in transfer skill scores (0.04%), so level three was removed from the final analysis.

Three-level linear mixed model design structure.
The intraclass correlation coefficient (ICC), which expresses the between-subjects variance as a proportion of total variance, was .377 for Model One. In total, seven models were fit to examine the effects of all predictor variables (time, the experimental independent variables, and prior knowledge) on transfer skill quiz scores. The starting model included only the fixed effect of time; the remaining predictor variables were then added one at a time so that model fit statistics could be compared across models. However, only estimates from three models are reported in this study: (a) Model One, the baseline model, which included the fixed effect of time (in weeks) on transfer skill scores; (b) Model Two, which included all predictor variables, namely the fixed effects of time (in weeks), prior knowledge, reflection mode, and transfer level, the interaction of reflection mode and transfer level, and the interaction of time and transfer level; and (c) Model Three, the final model, which measured the fixed effects of time and transfer level on transfer skill scores.
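The ICC itself is just a ratio of variance components: between-subjects variance over total (between plus residual) variance. A minimal sketch, where the variance values are hypothetical and chosen only so that the ratio reproduces the reported .377:

```python
def icc(between_var, within_var):
    """Intraclass correlation: between-subjects variance
    as a share of total variance."""
    return between_var / (between_var + within_var)

# Hypothetical variance components; only their ratio matters here.
# An ICC of .377 means ~38% of score variance lies between students,
# which motivates modeling a random intercept per student.
print(round(icc(0.377, 0.623), 3))  # 0.377
```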
Results
Means and standard deviations for students’ transfer skills scores across the eight observations (which spanned 12 weeks), categorized by treatment group and total scores, are reported in Table 3. As for the participants’ total scores, the two highest mean scores were observed in weeks 12 and 14, and the two lowest mean scores were in weeks 2 and 4. For Treatment Group A (same transfer, written reflection), the two highest mean scores were in weeks 9 and 14, and the two lowest scores were in weeks 4 and 8. For Treatment Group D (same transfer, verbal reflection), the two highest mean scores were in weeks 8 and 14, and the two lowest mean scores were in weeks 4 and 5.
Transfer Skill Score Means and Standard Deviations.
In Treatment Group B (near transfer, written reflection), the two highest mean scores were in weeks 12 and 14, and the two lowest mean scores were in weeks 2, 4, and 5, with the means for weeks 2 and 5 tied. For Treatment Group E (near transfer, verbal reflection), the two highest mean scores were in weeks 3 and 14, and the two lowest mean scores were in weeks 4 and 8. In Treatment Group C (far transfer, written reflection), the two highest mean scores were in weeks 12 and 14, and the two lowest mean scores were in weeks 2, 3, and 4, with the means for weeks 2 and 4 tied. Finally, in Treatment Group F (far transfer, verbal reflection), the two highest mean scores were in weeks 12 and 14, and the two lowest mean scores were in weeks 2 and 8. For the most part, the highest scores were observed in the latter portion of the semester, and the lowest scores in the beginning or middle of the semester. Results from the LMM procedure are presented in Tables 4 to 6.
Estimates of Fixed Effects for Predictor Variables across Three Models.
Note. Significant at *p < .05, **p < .01, ***p < .001.
Statistics of Model Fit across Three Models.
Variance Components across Three Models.
Note. Significant at *p < .05. **p < .01. ***p < .001.
Model One included a positive, significant (p < .001) fixed effect of time (in weeks) on participants’ transfer skill scores. In Model Two, which included all variables of interest as fixed effects, time (in weeks) had a positive, significant (p < .001) effect on participants’ transfer skill quiz scores. Additionally, the fixed effect of near transfer had a negative, significant (p < .05) effect on participants’ transfer skill scores as compared to the fixed effect of far transfer. All remaining fixed effects, including prior knowledge, reflection mode, the interaction of reflection mode and transfer level, and the interaction of time and transfer level, had no significant effect on students’ transfer skill scores. Therefore, Model Three retained only the positive, significant (p < .001) fixed effect of time (in weeks) and the fixed effect of transfer level on students’ transfer skill scores. The categorical predictor variable of transfer level included a nonsignificant, negative effect of same transfer when compared to far transfer and a significant (p < .05), negative effect of near transfer when compared to far transfer. Model Three, as indicated by the AIC and BIC measurements (Table 5), was the most parsimonious.
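Selecting the most parsimonious model via AIC and BIC follows the standard information-criterion formulas, where lower values indicate better fit after penalizing parameters. As a sketch, with hypothetical log-likelihoods and parameter counts (not the study's values), a reduced model that loses little likelihood while shedding parameters is favored by both criteria:

```python
import math

def aic(log_lik, k):
    """Akaike information criterion: 2k - 2 * log-likelihood."""
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    """Bayesian information criterion: k * ln(n) - 2 * log-likelihood."""
    return k * math.log(n) - 2 * log_lik

n = 918                           # complete-case observations (951 - 33)
full_ll, full_k = -1200.0, 8      # hypothetical full model
reduced_ll, reduced_k = -1201.5, 4  # hypothetical reduced model

# Both criteria favor the reduced (more parsimonious) model.
print(aic(reduced_ll, reduced_k) < aic(full_ll, full_k))        # True
print(bic(reduced_ll, reduced_k, n) < bic(full_ll, full_k, n))  # True
```

BIC's ln(n) penalty grows with sample size, so with hundreds of observations it favors smaller models even more strongly than AIC does.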
Conclusions and Recommendations
Experiential learning can be an effective teaching methodology for developing post-secondary agriculture students’ transfer skills. The positive, significant trend for the fixed factor of time (in weeks) indicates that growth in students’ transfer skills increased as the semester progressed, and this was irrespective of the experimental treatment group to which participants were assigned. This finding is congruent with Haskell’s (2001) consideration that time spent practicing transfer skills is critical for skill development. The more time students spend interacting with course content and the more time they spend practicing transfer skills, the more expertise is developed, and the ability to transfer concepts to real-life settings increases (Haskell, 2001; Macaulay, 2000).
However, for those aiming to build transfer skills in their learners, we emphasize that time alone is likely not enough. Transfer skill practice should be paired with deliberate learner self-reflection on the act of practicing transfer skills (Campione et al., 1995; Haskell, 2001; Macaulay, 2000; Rivas et al., 2022). Notably, in this study, reflection mode was not a significant predictor of students’ transfer skill scores. When developing transfer skills, however, it may be less important to focus on the method of reflection and more important to focus on its purpose. For example, researchers have emphasized learner self-reflection that is metacognitive, encouraging learners to think deeply about the practice of transferring skills (Campione et al., 1995; Haskell, 2001; Macaulay, 2000; Rivas et al., 2022). Learners need to know why they are applying concepts, and they need to critically examine their own performance and reasoning (Lehrer, 2024). In this study, learners were prompted to reflect on the course content as it related to the weekly course objectives, and reflection was not metacognitively focused. Therefore, we recommend that future research on teaching for transfer investigate the effects of both metacognitive and content-focused reflection practices. Both written and verbal methods of reflection are supported by researchers and theorists (Blackburn et al., 2015; Hubbs & Brand, 2005; Lamm et al., 2011; Lin et al., 2025; Loo & Thorpe, 2002; Thorpe, 2004; Wright et al., 2013; Yancey et al., 2013), and our findings suggest neither method is more effective than the other in predicting transfer skill scores. Congruent with the recommendation by Sieben et al. (2021), offering a variety of reflection practices can provide students with autonomy over their reflection format.
The interaction of reflection mode and transfer level was not significant, indicating that these two independent variables operated independently of one another in predicting transfer skill scores. Additionally, the interaction of transfer level and time (in weeks) was not significant, suggesting that these two variables are also independent of one another when predicting transfer skill scores. However, for the independent variable of transfer level, near transfer had a negative, significant effect when referenced against far transfer, suggesting that learners in far transfer groups would achieve higher transfer skill scores. This overall pattern indicates that far transfer might be more effective and should be further explored. We recommend that educators who seek to teach for transfer implement intentional scaffolding in which learners progress across multiple levels of transfer context (i.e., from same to near to far). Additionally, Macaulay (2000) described only two levels of transfer: near and far. We also recognize that the same and near transfer levels are contextually similar. Given the difference between near and far transfer observed in this study, future research could combine the same and near transfer levels and test the difference between two levels: same/near transfer versus far transfer. Future research should also employ qualitative data collection approaches to explore students’ perceptions and preferences regarding the varying pedagogical approaches.
Moreover, this research tested only two modes of reflection and three levels of application via transfer. Other approaches to these two experiential learning components remain to be tested, and there are two other primary experiential learning components (experience and conceptualization) that could also be tested. However, there are some considerations for those seeking to replicate this research or continue testing these variables. This research tested the effects of reflection, prompted by the instructor in the forms of written journal reflection and peer verbal reflection, on three dependent variables. When developing experimental designs to test reflection, it is important to consider that, because reflection is an internal process, it can be difficult to control. While reflection is a critical part of the learning process, it is often an automatic and subconscious effort by the learner (Quay, 2003; Zull, 2002). Therefore, a true absence of reflection cannot be controlled for, nor can the depth of internal reflection naturally occurring within learners. This leaves researchers able to test only the additional reflection prompted by the educator, which then occurs alongside each individual’s natural, internal reflection processes, and those processes can vary between subjects.
It is also important to consider that, in this study, learners treated with far transfer were prompted to work through three case vignettes. They did not jump straight to far transfer; instead, a scaffolded approach was used, starting with a same-context transfer scenario, moving to a near-context scenario, and finally to a far-context scenario. In this approach, far transfer groups received more in-class practice using transfer skills than the other groups. As previously mentioned, practice is essential when developing learners’ transfer skills (Campione et al., 1995; Galoyan & Betts, 2021; Haskell, 2001; Macaulay, 2000). We recommend that practitioners consider scaffolding learners toward more distant contexts when teaching for transfer.
Discussion
While the far transfer treatment was more effective than the near transfer treatment, we want to highlight some considerations from Haskell (2001) that are important for researchers interested in measuring transfer skills. Caution should be used when designing experiments that measure transfer skills, because:
Teaching the principle in such close association with testing for transfer is not much different from actually telling subjects that they should use the principle just taught [to] them. And telling a subject to use a principle is not transfer. It’s simply following instructions. (Haskell, 2001, p. 37)
Therefore, the dilemma is not that learners are taught transfer skills, but that measuring transfer skills is difficult. When learners are explicitly asked to transfer concepts for the sake of assessment, transfer is neither autonomous nor initiated by the learner, which some may argue is not true transfer (Haskell, 2001). This raises the question of whether we are observing true transfer or merely participants’ ability to follow instructions (Haskell, 2001). In this research, we measured transfer skills using the indicators recommended by Cree (2001) that transfer is taking place, but not without prompting by the course instructor as a function of the research design. We present this dilemma transparently, not to discount our findings, but so that consumers of this research, and those seeking to measure transfer skills experimentally or practically, are informed of its complexity and measurement challenges.
The final model we fitted was the most parsimonious; however, it resulted in low estimates of between-subjects variance. Therefore, replication of this study is suggested. Continued study of these phenomena would provide longitudinal evidence that could help confirm the findings of this study. Additionally, including within-student characteristics as predictor variables, such as race/ethnicity, sex, and academic major, may help account for some of the remaining variance.
These challenges are important for researchers to consider, especially those who are interested in designing experiments in educational settings. This is not said to deter scientists from conducting experimental research in the social sciences, nor to discount the value of such research. However, results from experimental research in education cannot always be copied and pasted across educational settings. Haskell (2001) shared the same sentiment by stating:
…using the detailed computational research findings on how the mind works may not be the best data for designing effective instructional methods; what we know about how the mind is structured isn’t necessarily the model on which to base the structure of teaching (p. 51).
This does not mean that empirical educational research has no value. In fact, the only way we can confirm the theories that support our practice is by testing and observing their effects empirically, and there are dangerous implications to basing theories of educational instruction on claims that lack supportive research findings (Haskell, 2001). Therefore, those designing instructional approaches should ground them in appropriate, research-supported theory. Modeling instruction as a formulated, step-by-step approach only because it is supported by clinical findings may not be the best approach to educational practice. The same can be asserted about the theory of experiential learning. Historically, experiential learning models have been oversimplified, and, intentionally or not, they imply a stepwise do-reflect-apply-repeat approach to the methodology. In reality, developing an educative experience that results in the successful transfer of learning is far more complex and contextually dependent.
It is also important to discuss some considerations around the design of this research and their potential implications. The 2 × 3 factorial design was intended to compare the effects of the variables, and their interactions, against one another as they related to the dependent variables. The design did not include a control group receiving no experimental treatment; it was selected because reflection and teaching for transfer are already time-tested, theory-supported best practices within education and experiential learning. One implication of this design is that significant findings do not indicate that the nonsignificant variables were ineffective, only that they were significantly less effective than the statistically significant independent variables in predicting the dependent variable. Likewise, a lack of significant difference among variables does not mean the approaches are ineffective. The dependent variables measured in this study were specific and limited measures of student achievement. Overall student performance in the course was not measured as part of this research; however, no students performed poorly in the course. Therefore, we encourage practitioners to try the approaches used in this research, acknowledging that their results may be similar or dissimilar to those observed in this context.
Context plays a critical role in experiential learning. All learning experiences are connected to those that have come before them (Dewey, 1938), which is a primary reason a measure of prior knowledge was included as a covariate in this research. Many researchers have tested the effects of the experiential learning process, but the reality is that the process cannot be separated from its context. So, while researchers can test the process, it can be difficult to control for contextual variables and within-subjects variables. Experiential learning emphasizes the importance of the contextual variables that frame the learning process, as well as the experiences that have come before and those that may follow, all of which should be considered by those studying and implementing experiential learning.
For those who aim to teach for transfer, real-life, practical experiences must be included (Cree, 2000; Haskell, 2001; Macaulay, 2000; Morris, 2019). In this study, we utilized realistic, scenario-based case vignettes as the treatment, which also served as one method of simulating reality. However, the laboratory course was also designed to include practical animal science experiences, coupled with the other critical experiential learning elements (i.e., reflection, conceptualization, and application) recommended by theorists (Dewey, 1938; Joplin, 1981; Kolb, 2015; Roberts, 2006). Experiential learning, when implemented holistically, lends itself naturally to knowledge transfer (Macaulay, 2000; Zull, 2002), especially at the post-secondary level (Chickering, 1977; Estepp & Roberts, 2011; Eyler, 2009; NASEM, 2021). Thus, we recommend its use for practitioners interested in developing transfer skills within their learners. However, it is not enough to provide only real-world experiences. When designed intentionally and holistically, marrying experience with critical reflection, conceptualization, and practical application, experiential learning has the power to provide educative experiences for learners and to achieve the end goal of education: knowledge transfer.
Acknowledgements
None.
Ethical Considerations
This study received ethical approval from the University of Florida IRB-02 (approval #202101849) on August 10, 2021.
Consent to Participate
Participants gave written consent for review and signature before participating in the study and before educational assessment data were collected.
Consent for Publication
Not applicable.
Author Contributions
B. Coleman: Conceptualization, Investigation, Formal Analysis, Methodology, Project Administration, Writing – Original Draft; J. Bunch: Conceptualization, Investigation, Formal Analysis, Methodology, Writing – Review and Editing, Supervision, Project Administration; G. Israel: Conceptualization, Methodology, Formal Analysis, Writing – Review and Editing; T. Roberts: Conceptualization, Writing – Review and Editing; A. Wysocki: Conceptualization, Writing – Review and Editing.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Data Availability Statement
Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
