Abstract
This study investigated whether the presence of relevant images or progress indicators in an online multimedia training would improve knowledge gain and satisfaction. It was hypothesized that both knowledge gain and satisfaction increase with the inclusion of images and progress indicators. A randomized pretest-posttest 2 × 2 factorial design was used to examine the impact of relevant images and progress indicators on knowledge gain and satisfaction in an online multimedia environment. Results suggested that satisfaction increased when the training included both images and progress indicators. However, there did not appear to be a significant difference in knowledge gain between conditions.
Introduction
Aviation training has leveraged several different delivery methods, including in-person instruction, simulations, and customized curricula, to achieve effective and efficient results. In recent years, due to its cost-effectiveness, online learning has been a top contender for delivering training in aviation, specifically in low-risk areas like recreational drones. However, the quality of online training can vary, so it is crucial to incorporate learning design principles to optimize training (Craig & Schroeder, 2023; Kearns, 2016).
Drone Pilot Certifications
A drone is an unmanned aerial vehicle that can be controlled remotely. Several industries are finding ways to use drones to provide faster and safer operations. From first responders to the military to e-commerce, drones have proved useful, especially in hazardous conditions (Chamayou, 2015; Maghazei et al., 2022). In the U.S., the Federal Aviation Administration (FAA) requires commercial operators to obtain the small Unmanned Aircraft Systems (sUAS) certification (Small Unmanned Aircraft Systems, 2021) to fly drones.
For recreational operations, the FAA requires individuals to complete The Recreational UAS Safety Test (TRUST) training (Exception for Limited Recreational Operations of Unmanned Aircraft, 2024; FAA, 2024). As with most online training, TRUST includes passive reading with a set of questions at the end. Individuals are required to finish the TRUST training modules and answer all questions correctly to be allowed to fly drones recreationally.
Multimedia Learning
Most online learning is self-directed and requires the learner to be engaged for better cognition and information retention (Markant et al., 2016). There are a variety of ways to increase engagement in online learning. The cognitive theory of multimedia learning states that presenting information in both text and pictures enhances learning compared to text alone (Mayer, 2017; Mayer & Moreno, 1998). When learners receive information through multimedia, they are able to construct two distinct mental representations: a verbal representation and a visual representation. Establishing links between these representations can help learners gain a better grasp of the presented information.
Motivation in Online Learning
A study conducted by Rovai et al. (2007) indicates that online learners exhibit increased intrinsic motivation and greater satisfaction with their learning experience. Motivation and self-monitoring play a substantial role in predicting online learners’ performance and course completion rates (Kizilcec & Schneider, 2015). It is important for online learners to understand their progress when it comes to self-directed learning. This can be achieved through progress indicators that help with tracking and time management. Percent-done progress indicators are visual cues that provide the user with information on how much of a task has been completed and when it will be completed (Myers, 1985). In addition, users respond more positively when they receive information about what stage they are at in the process (Conrad et al., 2010).
Current Study—Hypothesis
This study aimed to investigate whether a combination of relevant images and progress bars in the UAS safety training module would impact knowledge gain and satisfaction. The research on progress indicators (Myers, 1985) and Mayer and Moreno’s (1998) multiple representation principle from the cognitive theory of multimedia learning supports the idea that adding progress indicators and text-relevant images will positively impact learning and learner satisfaction. Specifically, combining images and progress indicators will allow the learner to establish multiple mental connections to the information provided in training (Mayer, 2017), allowing for a better understanding of the material. The inclusion of progress indicators will inform the learner about the training’s duration, reducing the attention the learner must devote to tracking the remaining time and providing encouragement, leading to higher satisfaction (Conrad et al., 2010). Based on this, the current hypothesis is that adding relevant images or progress indicators will positively impact learning and satisfaction. Specifically, combining images and progress indicators will yield higher satisfaction and learning scores than any other combination or text-only training.
Methods
Design
This study implemented a randomized pretest-posttest 2 × 2 factorial design to investigate the effect that relevant images and progress indicators have on knowledge gain and satisfaction in a multimedia environment. Relevant images and progress indicators were each either present or absent from the training to determine how their use impacted knowledge gain and satisfaction. Knowledge gain was measured using the pretest and posttest scores from module one of the TRUST training. Satisfaction was measured using the validated System Usability Scale (SUS; Brooke, 1996).
Participants
Given a medium effect size (f = .25), a power level of 80%, and an alpha level of .05, a power analysis suggested that the study would require a sample size of 128 participants. Participants were recruited from the Arizona State University (ASU) Polytechnic Human Systems Engineering SONA Systems subject pool. These participants were students in the Human Systems Engineering (HSE) department at ASU and received 1 hour of course credit for the time spent on this study as part of their HSE 101 curriculum. Due to time constraints, only 30 participants completed the training.
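The reported figures (power = .80, alpha = .05, N = 128) are consistent with a conventional medium effect of Cohen’s f = .25 for a single-degree-of-freedom main effect in a 2 × 2 between-subjects ANOVA. A minimal sketch of how that sample size can be reproduced with SciPy’s noncentral F distribution follows; the function names are ours, not part of any analysis tool the authors describe.

```python
from scipy.stats import f as f_dist, ncf

def main_effect_power(n_total, f_effect=0.25, alpha=0.05, cells=4):
    """Power for a single-df main effect in a between-subjects factorial ANOVA."""
    df1, df2 = 1, n_total - cells           # numerator / denominator df
    ncp = (f_effect ** 2) * n_total         # noncentrality parameter, lambda = f^2 * N
    crit = f_dist.ppf(1 - alpha, df1, df2)  # critical F under the null
    return ncf.sf(crit, df1, df2, ncp)      # P(reject H0 | effect present)

def required_n(f_effect=0.25, alpha=0.05, target=0.80, cells=4):
    """Smallest total N whose power meets the target."""
    n = cells + 2
    while main_effect_power(n, f_effect, alpha, cells) < target:
        n += 1
    return n
```

With the defaults above, `required_n()` lands at approximately 128 total participants, matching the sample size reported for the study.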
Materials
Demographic Survey
The demographic survey consisted of two questions: an open-ended question asking participants’ age and a multiple-choice question on their knowledge of aeronautics and aviation safety.
Training Manipulation
Participants were randomized into one of four multimedia lessons on the first module of TRUST. All lessons used the information provided by the FAA (2024). The four online TRUST training conditions were as follows: training with both pictorial representations and visual progress status; training with pictorial representations but without visual progress status; training without pictorial representations but with visual progress status; and training with neither pictorial representations nor visual progress status. Relevant images were selected based on what current TRUST providers are using. Current FAA resources were leveraged; these images were not created by the researchers. Images typically used in aviation training, such as the airspace classification chart, were used to support the safety training.
Satisfaction Measures
The SUS (Brooke, 1996) was used to assess participant satisfaction. This survey consists of a 10-item questionnaire using a 5-point Likert scale to measure the participants’ satisfaction with the training. Although the SUS was originally developed to provide a measure of perceived ease of use, Lewis and Sauro (2009) demonstrated that the SUS effectively provides a global measure of satisfaction in terms of usability and learnability. As such, the SUS was selected as the optimal measure for satisfaction since it is a reliable scale that provides information regarding satisfaction in terms of learnability.
Learning Measures
The same seven multiple-choice questions, drawn from the FAA’s (2024) TRUST module one training, were used for both the pretest and the posttest to measure participants’ knowledge gain.
Procedures
Participants entered the study via the ASU Human Systems Engineering SONA Systems subject pool, where they were redirected to Qualtrics. Before beginning the study, participants received the informed consent form. After the consent form, participants answered the demographic survey and the pretest knowledge assessment. Participants were then randomly assigned to one of the four online training methods and had as much time to view the lesson as they needed. Afterward, they were given the satisfaction survey followed by the posttest knowledge assessment.
Data Analysis
Knowledge Gain Score
The knowledge gain score was calculated by comparing the pretest and posttest results. Since there are seven questions, the highest possible knowledge gain score is seven, and the lowest is zero (no gain). A question answered correctly on the pretest cannot show an increase in knowledge on the posttest, so such questions were excluded from the knowledge gain score. Questions answered incorrectly on the pretest and correctly on the posttest were counted as knowledge gain and included in the final score.
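The scoring rule above can be sketched in a few lines. This is an illustrative reconstruction, not the authors’ actual analysis script; the function name and the boolean per-item encoding are our assumptions.

```python
def knowledge_gain(pretest_correct, posttest_correct):
    """Knowledge gain score: count of items answered incorrectly on the
    pretest but correctly on the posttest (range 0-7 for seven items).

    Each argument is a list of booleans, one per question, True = correct.
    """
    return sum(1 for pre, post in zip(pretest_correct, posttest_correct)
               if not pre and post)

# Example: a participant misses items 1, 2, and 4 on the pretest and then
# answers them correctly on the posttest -> gain of 3.
pre  = [False, False, True, False, True, True, True]
post = [True,  True,  True, True,  True, True, True]
# knowledge_gain(pre, post) -> 3
```

Items answered correctly on both tests contribute nothing, mirroring the exclusion rule described above.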
Satisfaction Score
The satisfaction score was calculated in accordance with Brooke’s (1996) guidance. For the odd-numbered questions, the score was calculated by subtracting one from the participant’s selected value (1–5). For the even-numbered questions, the score was calculated by subtracting the participant’s selected value (1–5) from five. The sum of the resulting scores was multiplied by 2.5 to obtain the overall satisfaction score (0–100). Although these scores are presented on a scale from 0 to 100, they are not percentages. A score below 68 is considered below average, and a score above 68 is considered above average.
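Brooke’s (1996) scoring procedure described above can be expressed compactly. This is a sketch for illustration; the function name is ours, but the arithmetic follows the standard SUS rules exactly.

```python
def sus_score(responses):
    """SUS score (0-100) from ten Likert responses (each 1-5), in the
    order administered. Odd items contribute (response - 1); even items
    contribute (5 - response); the sum is scaled by 2.5."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# A neutral respondent (all 3s) scores 50; the most favorable possible
# pattern (5 on odd items, 1 on even items) scores 100.
```

Note that, as stated above, the resulting 0–100 value is not a percentage; 68 is the conventional average used as the benchmark.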
Results
This study implemented two 2 × 2 factorial ANOVAs to investigate the effect of relevant pictorial representations and progress indicators on knowledge gain and satisfaction. For knowledge gain, there was not a significant main effect observed for either pictorial representations F(1,20) = 2.08, p = .17;
Descriptive Statistics for Knowledge Gain and Satisfaction.
Discussion
Online modalities of training delivery can enhance knowledge gain for learners (Means et al., 2009). Means et al. (2009) reviewed comparative studies of online and face-to-face versions of the same course from 1996 to 2008 and concluded that online learning could produce learning outcomes equivalent to or better than face-to-face learning. Thus, it is important to analyze the impact of various components within online training.
This study focused on knowledge gain and satisfaction for the online FAA TRUST training. Relevant images and progress indicators were each either present or absent from the training based on treatment condition. There were no statistically significant differences in knowledge gain between conditions when images and progress bars were used as training features. This does not support the hypothesis that introducing images and progress bars would improve knowledge gain. However, the interaction between progress indicators and relevant images had a significant impact on satisfaction.
When both text and pictures are needed for comprehension and learning, students must integrate verbal and pictorial information into one coherent, task-appropriate mental representation, a process known as text–picture integration (Mayer, 2017; Zhao et al., 2020). Since no significant differences were found for knowledge gain, training providers can assess the implications of visuals for the mental workload of text–picture integration. The textual content could be made more concise, freeing capacity to present a wider range of information for better safety training.
Since the presence of progress bars did not produce the significant effects previously reported by Myers (1985), their impact on learners’ performance could be investigated further. Further evidence-based research could help determine the most effective ways to incorporate progress bars into the learning experience. This could involve conducting user testing to gather learner feedback about progress and using time-based analytics to track the impact of progress bars on performance and motivation.
The key limitation of this study is the small sample size, which could have led to the non-significant results observed. However, the addition of pictures within the TRUST module yielded a partial eta squared of .094 for participants’ knowledge gain, which is a relatively large effect size for an educational study. This suggests that the null effect was potentially due to the small sample size. Future research would be needed to confirm the stability of this effect size and to determine whether a larger sample would show a significant effect.
This kind of online learning can be a valuable tool for providing UAS safety training. This study focused on analyzing two dependent variables: knowledge gain and satisfaction. In the context of the TRUST training, knowledge gain is defined as an increased understanding that an individual acquires pertaining to the safe operation of UAS. This includes understanding UAS protocols, safety regulations, maintenance techniques, flight planning, and emergency operations. Analyzing knowledge gain in online UAS training is important because it helps assess the efficacy of training delivery. Knowledge gain in UAS training has been assessed through pre- and post-assessments. Ultimately, the goal of online UAS training is to increase knowledge and understanding to improve the safety and effectiveness of UAS operations.
The benefit that online UAS training delivers is the spatiotemporal independence to complete the training and testing. The satisfaction score in UAS training refers to the level of satisfaction that learners have with their learning experience. This study used the validated SUS questionnaire (Brooke, 1996) to measure participants’ satisfaction with the training experience in terms of usability and learnability. The satisfaction score can be further studied to identify areas for improvement to ensure that the UAS training program meets the needs and expectations of learners. Learner reaction to the online TRUST training with relevant pictures and progress indicators could also be measured using the first level of Kirkpatrick’s (1994) program evaluation model.
Online learning is a cost-effective way to deliver safety training, as it eliminates the need for travel and reduces the cost of materials such as printed manuals and handouts. This also helps with scheduling due to the elimination of travel time. This study is an example of how customizable online training delivery methods can be. The ability to simulate different conditions in a reasonable amount of time can help meet the specific needs of different learners, allowing for a more personalized and effective learning experience. The satisfaction survey exemplifies how feedback can be used as data points to improve the assessment content and presentation. Overall, online learning can be an effective way to provide UAS safety training by offering a convenient, interactive, cost-effective, customizable, and accessible learning experience.
Conclusion
The FAA TRUST training is designed to help recreational UAS pilots operate their aircraft safely and in compliance with FAA regulations (FAA, 2024). This study found no statistically significant differences in participants’ knowledge gain when relevant images and progress indicators were used in online training. However, there was a statistically significant difference in satisfaction when both relevant images and progress bars were used. Future work can examine satisfaction in terms of learner reaction and learner behavior (Kirkpatrick, 1994), and additional parameters such as which parts of the content or areas of the screen participants focus on, using eye tracking (Sharma et al., 2022). This information could indicate pain points or aspects that demand a deeper sense of attention, providing additional insight into where and how training modifications can be made. This study evaluated the first module of the TRUST training; using a similar approach, the other modules can also be examined. After taking the entire TRUST training, more data would be available to analyze whether learners can effectively fly a drone and adhere to safety protocols. The findings of this study highlight the need for presentation consistency across providers, since the content is uniformly set by the FAA.
Acknowledgements
We would like to thank the recruited participants who volunteered for this study.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
