Abstract
One apparent challenge associated with learning analytics (LA) has been promoting its adoption by university educators. Researchers suggest that a visualization dashboard could help educators use LA to improve learning design (LD) practice. We therefore used an educational design approach to develop a pedagogically useful and easy-to-use LA visualization solution to support data-informed LD. We interviewed four staff in a medical degree program at a New Zealand university, designed and piloted a dashboard, and evaluated it through interviews. As a proof-of-concept project, our study showed that educational design research could be meaningfully used to develop a visualization dashboard that is both easy to use and useful. In particular, the preliminary design principles identified carry implications for practitioners seeking to use LA to inform LD. Finally, we reflect on the purpose of visualization dashboards in relation to the literature and identify areas for future development.
Introduction
Learning analytics (LA) is “the measurement, collection, analysis and reporting of data about learners and their contexts for purposes of understanding and optimizing learning and the environments in which it occurs” (Long & Siemens, 2011, p. 34). While static learner-related data such as student demographics and academic results have a long history of use in educational research, LA provides opportunities for educators to make use of additional data about the process of learning, often in the form of learners’ digital footprints (Gunn et al., 2017). LA has contributed to the improvement of student learning, for instance, through identifying at-risk students (Saqr et al., 2017), predicting retention (Gašević et al., 2017), and understanding learning behavior (Rienties & Toetenel, 2016).
Because it captures learners’ footprints in digital learning environments, LA has been increasingly valued for its capacity to inform learning design (LD) and thereby improve learning experiences and outcomes (Mor et al., 2015). LD concerns the composition and arrangement of learning resources and activities that constitute a learning episode, with an emphasis on enabling the active role of students during learning (Holmes et al., 2019). Conventional approaches to LD are characterized as content-based (e.g., new content and learning outcomes drive curriculum renewal; see Wu & Chen, 2021, for example), perception-based (e.g., student teaching evaluations drive the changes made within the curriculum; see Ballantyne et al., 2000, for example), or driven by explicit frameworks or models (e.g., the conversational framework by Laurillard, 2013). The LA approach to LD is, by contrast, data-driven: it allows educators to estimate students’ actual engagement with learning resources and activities and therefore to make design decisions directly informed by actual student learning behaviors.
In an attempt to understand how LA can be used for LD, Gunn et al. (2017) offered an LA-LD framework, identifying three types of learning-related data that are relevant to LD during a typical cycle of learning. During the planning and development phase, data related to admission, demographics, cohort size, and course content could inform LD. During the teaching and assessment delivery phase, data related to student feedback, learning thresholds, interim grades, participation, and resource use could inform LD. Finally, during the review and evaluation phase, student attendance, engagement, pass rates, and student-generated artefacts could inform LD. An influential literature review on LA for LD (Mangaroska & Giannakos, 2018) further identifies variations in perspectives and data sources when LA is applied for LD purposes. The review suggests that LA solutions could be designed from the perspectives of individual students, student groups, course-level information, content, and teachers, and typically use data from student-generated content, local context, academic profiles, evaluations, course-related performance, and course metadata. While these works help educators organize a variety of learning-related data into actionable design insights (e.g., Gunn et al., 2017), the potential of LA can only be realized if institutions develop relevant tools and build capacity for educators to use LA for LD.
While LA for LD holds much promise, producing actionable information to enhance learning requires advanced technical skills and dedicated time, which most educators lack (Daniel, 2015; Lee et al., 2020). Widespread use of LA in daily teaching practice is therefore mainly seen in STEM disciplines, where staff have adequate technical competence (M. Liu et al., 2019) and where specific funding has been supplied (Tsai et al., 2019). Most staff, on the other hand, require training and support to use LA (Kaliisa et al., 2021; Tsai & Gasevic, 2017) and perceive institutional learning platforms, where most learning-related data are stored, as unable to produce readily interpretable analytics (Lockyer et al., 2013).
To enable educator use of LA, one approach is to design simple and useful visualization dashboards that present data in a way that informs LD activities (Kaliisa et al., 2021). In an earlier review of visualization dashboards, Verbert et al. (2014) reported that dashboards had been primarily student-facing, aiming to enable learners to keep track of their progress. The authors also identified that the data being visualized were often student-created artefacts, learner social interactions, resource use, time logs, and assessment results, all of which correspond to the type of data collected during the teaching and assessment delivery phase, as specified by Gunn et al. (2017). Recent developments have seen some visualization dashboards designed for educators, although such work remains comparatively less common than dashboards designed for students (e.g., Sahin & Ifenthaler, 2021). Kaliisa et al. (2021), in a detailed analysis of two educator-facing LA visualization dashboards in the UK and Norway, recommended that teacher-facing LA dashboards be designed according to teacher needs, in a simple format, based on relevant theoretical perspectives, with ongoing technical support and consideration of the specific local teaching context.
Research Aim
In the present study, we set up a design-based research project with the aim of designing and piloting a pedagogically relevant and easy-to-use visualization dashboard that promotes data-informed improvements in LD. To make LA accessible to educators so that LD activities could gain insights from LA, we prioritized engaging with staff who had no training in LA. This study is thus a proof of concept, serving to explore how LA can be translated and embedded into educators’ ongoing improvement of teaching practice without specific training.
Ease-of-Use and Usefulness
We used the technology acceptance model (Davis, 1989) as an immediate framework to assess the extent to which the LA dashboard meets the research aim. The model identifies two constructs, “ease-of-use” and “usefulness,” as predictive of individual use of technologies, and has been widely used to explain the adoption of various technologies, including those in the educational context (e.g., Al-Nuaimi & Al-Emran, 2021). Ease of use refers to the amount of effort it takes to use a technology, and usefulness refers to the extent to which using a technology would enhance performance (Davis, 1989). According to the model, a technology will be adopted if an individual regards it as useful and easy to use. In line with this idea, for the visualization dashboard to promote future LD, educators need to find the dashboard useful for their LD activities and at the same time easy to use. These two constructs were used as sensitizing concepts to guide our design and evaluation of the visualization dashboard.
Method
We adopted educational design research, “in which the iterative development of solutions to practical and complex educational problems also provides the context for empirical investigation, which yields theoretical understanding that can inform the work of others” (McKenney & Reeves, 2021, p. 83). Our study fits well with a development approach to educational design research, with its emphasis on enhancing interventions, such as visualization dashboards, through “interactive, scientific testing and refinement” (p. 85).
The study took place at a campus of a medical degree program at a research-intensive university in New Zealand. The 6-year medical degree program consists of three phases: 1 year of health sciences learning; 2 years of pre-clinical learning, where students attend timetabled lectures and tutorials; and 3 years of learning based around clinical experience. During clinical learning, students attend a combination of “block modules,” courses organized for cohorts of students during certain periods of the academic year, and “vertical modules,” courses offered to all students weekly throughout the academic year. Each module is led by a module convener and a support staff member. Because the campus where data were collected delivers the clinical learning phase, the dashboard was designed for modules in this phase. The medical degree uses the Moodle learning management system to host most learning resources and activities and all interim and final academic results. The study obtained institutional ethics approval (reference number: D21/118).
The research team consisted of the authors. All team members have experience in conducting applied research and most have experience in curriculum design in health professions education.
Kaliisa et al. (2021) called for the development of teacher-facing visualization dashboards to be human-centered and collaborative, recognizing the centrality of teachers’ needs within their specific educational context. We therefore collaborated with four staff members who had been teaching into or supporting our medical degree program. Among them, Participant One (P1) was the convener of a vertical module on pathology; Participant Two (P2) supported the teaching of a block module on children’s health; Participant Three (P3) was the convener of a block module on general practice; and Participant Four (P4) provided teaching delivery and administration support to that module.
McKenney and Reeves (2021) identify the general process of conducting educational design research as consisting of three main stages, “analysis and exploration,” “design and construction,” and “evaluation and reflection,” which enable progressive deepening of understanding and maturing of the intervention. Our study followed this design process. Work undertaken in each stage is described below in turn.
At the first stage of “analysis and exploration,” researchers collaborate with practitioners in an attempt to develop a better understanding of the nature of the problem to be addressed (McKenney & Reeves, 2021). In our study, this took the form of interviewing study participants to identify what they expected of a visualization dashboard for LD purposes. The interviews were semi-structured and each took around 60 minutes. Data were transcribed and then analyzed in light of the two constructs, “usefulness” and “ease-of-use,” identified by the technology acceptance model (see the Ease-of-Use and Usefulness section above). Analysis of the data was finalized after team discussion.
At the “design and construction” stage, prototypes of possible solutions to the problem being addressed are developed, refined, and operationalized (McKenney & Reeves, 2021). In our study, this began with the identification of four design principles for the visualization dashboard based on themes identified from the interviews in the previous stage. Next, drawing on the design principles, we undertook a stocktake of Moodle built-in analytics features to assess whether existing analytics would meet the design principles. Our analysis showed that existing visualization solutions in Moodle focused on a particular learning activity rather than providing an overview of how students engaged with a range of learning activities and resources. This led us to explore local solutions outside of Moodle. After consultation with our university’s IT services, we identified PowerBI™ as software with suitable functionality for developing the visualization dashboard. We accessed learning-related data recorded in Moodle, used them to sketch a blueprint of the dashboard, developed dashboard prototypes in PowerBI™, and refined and revised the prototypes over several iterations until the consolidation of a dashboard template.

Finally, at the “evaluation and reflection” stage, researchers seek to empirically test the prototype solution and further develop an understanding of how and why the intervention works or does not work (McKenney & Reeves, 2021). In our study, a second round of semi-structured interviews was conducted to test the visualization dashboard and capture evaluative feedback from participants. We presented the refined dashboard to the participants, allowing them to interact with the dashboard while thinking aloud. This was followed by prompts on participants’ perceptions of the potential usefulness and ease-of-use of the dashboard. The interviews were each around 60 minutes in length, transcribed, and analyzed to examine the “usefulness” and “ease-of-use” of the dashboard. The analysis results were discussed and agreed upon by the team.
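To make concrete the kind of aggregation the dashboard template performs, the minimal sketch below shows one way Moodle access data could be summarized by event and by student cohort. It is written in Python/pandas purely for illustration; the study used PowerBI™, and the file name and column names are assumptions about a hypothetical log export rather than the actual Moodle schema.

```python
# Illustrative sketch only; the study's dashboard was built in PowerBI.
# Assumes a hypothetical CSV export of Moodle event logs with columns
# "student_id", "group", "event_name", and "timestamp".
import pandas as pd

logs = pd.read_csv("moodle_event_log.csv", parse_dates=["timestamp"])

# Total interactions per activity/resource, mirroring the event-level view.
total = logs.groupby("event_name").size().rename("total_interactions")

# Unique interactions: each student counted at most once per event.
unique = (
    logs.drop_duplicates(subset=["student_id", "event_name"])
        .groupby("event_name").size().rename("unique_interactions")
)

# Overall interactions by student cohort, mirroring a "student groups" chart.
by_group = logs.groupby("group").size().rename("interactions")

summary = pd.concat([total, unique], axis=1)
print(summary.sort_values("total_interactions", ascending=False))
print(by_group)
```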
Results
Analysis and Exploration
Our analysis of interviews at the “analysis and exploration” stage showed that, despite having used Moodle to teach or support teaching, participants had no engagement with Moodle built-in analytics. The built-in analytics were considered either as advanced Moodle use, which “never quite reached the top of the list to work it out” (P4), or as difficult to access. Furthermore, Moodle analytics were viewed as having little utility:

I’m not even aware if there are any tools [LA] than just clicking through and landing on stuff … I have a vague feeling I’ve seen it somewhere. … I can log on and check up on particular students and see the last time they used Moodle … but you’ve got to be able to remember where to find it … you can’t just do a dashboard of who’s used Moodle this week, which would have been really nice. (P2)
On the whole, participants did not have working knowledge of how students have engaged with most learning resources and activities other than online quizzes.
We don’t give them any printed out material. If they want to access any of the lecture notes or any of the PowerPoints or anything, they have to go into Moodle to get them, but I haven’t looked at how many downloaded them … the quizzes, I can clearly see how many people have done the quiz on the first page. (P1)
Ease-of-Use
Participants confirmed that a dashboard must be easy to use, suggesting that “starting at the front, the first thing is that the dashboard has to be easy to get to” (P1). In particular, the ease-of-use of a dashboard was interpreted as consisting of three subthemes: (a) navigation, (b) amount of information, and (c) aesthetics, which are described below in turn.
All participants had a similar view on dashboard navigation: They considered the existing design of courses offered through Moodle “a little bit clumsy” (P3), described as “clickety click every time just to do one task” (P4). Such experiences led participants to suggest that the design of a dashboard needs to “count everything in the number of clicks” (P4), warning that clumsy navigation would impede adoption. There was a preference for an overview of important information in one aggregated dashboard view, rather than multiple views via hyperlinks.
Not having to click five times to get to something. There’s nothing worse than “Oh, I want that. Let me see, I have to go through here, which will take me to here, which will take me there.” And then “Oh, look, here it is over here.” It’s got to be an easy two clicks. Otherwise, you forget what you’re looking for by the time you find it. (P2)
Participants also commented on the quantity of information in a dashboard, suggesting that a “general principle is to not have too much, not to make it visually confusing” (P3). Similarly, when describing an ideal dashboard, Participant 1 shared:

I’m just like going through all the good and bad dashboards that occur in my life. I’m thinking of MetService. The problem with MetService is the scrolling … there’s just too much scroll. (P1)
Finally, participants commented on the aesthetics of the dashboard, with explicit indication that “it’s very important that it’s (the dashboard) aesthetically clean and clear” (P4). For the participants, “aesthetics is about usability. If it’s put together in a way where the form makes sense and where the form follows the function, then it’s easier to use” (P3). In addition, Participant 2 referred to their approach to color-coding when supporting group-based clinical teaching, suggesting that consistent color-coding had improved their work efficiency.
When probed for what makes a dashboard overall easy-to-use, Participant 3 provided a concise summary below, which captured aspects of the three subthemes described above.
Have decent enough resolution, and not cluttered. Good colors, and nicely kind of positioned, judiciously chosen information. (P3)
Usefulness
Participants differed in their perceptions of usefulness. They held varied assumptions about dashboards in the educational context, which did not necessarily align with our focus on informing LD. For example, Participant 4 expressed a need for the dashboard to monitor students’ weekly progress, communicate with students who are at risk, and aggregate marks across different assessment activities for summative evaluation of academic performance. By contrast, Participant 3 noted the different purposes dashboards could serve, indicating that “there is a real difference between what you look at day to day and what you look at when you’re trying to improve your learning design.” For this participant, a dashboard that is useful for LD purposes involves less frequent monitoring of individual student progress and should certainly not be used to assess student performance.
I don’t think I’ll look at that [dashboard] regularly. I don’t want to use that information to mark, essentially adult students. They’re responsible to complete our module. And I don’t want to monitor the level of detail of them. What I’d be more interested in is summary information. So, twice a year, possibly four times to get the most …“this is how your pages being used.” Like “did you know that … 80% of your students looked at this aspect of it”; “0% of your students opened that link.” That type of information would be more useful to me as convener. Because it’s about course design. (P3)
Participants perceived two other aspects of a dashboard as useful for LD. The first concerned the type of data to be visualized, and the second the way the data are organized in the dashboard.
Participants emphasized visualizing data related to student participation and resource use, broadly corresponding to Gunn et al.’s (2017) engagement data collected during the teaching and assessment delivery phase. Such data would enable participants to understand what “students are looking at on Moodle,” and “knowing how they are using that means we can design that side of it better for them as well” (P2). In particular, participants noted the potential of access data to reflect students’ approaches to learning, explain academic performance, and target supplementary learning support.
Participants also described how the data should be visualized. There was a consensus on displaying data primarily by student cohorts and by individual student as needed. This would allow participants to draw out insights into how students learn with the online learning resources and activities.
I want to be able to see Group A, and how many have completed in Group A, Group B, and Group C. But then I also want to be able to see the individual students about how they’ve individually completed things because Group A, B, and C are doing different things at the same time. (P1)
In addition, being able to choose which information to display on the dashboard based on individual need was appreciated (P3), as Participant 4 commented: “we all run our modules quite differently. For some modules, students are interacting much more on Moodle … [but] that’s certainly not [in other modules]” (P4).
Design and Construction
We identified four design principles (Table 1) for the visualization dashboard based on findings from the interviews. These principles were used to develop and revise dashboard prototypes.
Themes Identified From First Round of Interviews and Design Principles Derived From Themes.
Guided by the design principles in Table 1, we refined the prototypes over four rounds of testing, with each round focusing on different visualizations by system features, student cohorts, events, and individuals. We eventually consolidated a basic dashboard template, which aggregated access data of different learning activities and resources by student cohorts. As shown in Figure 1, users can view the number of unique interactions (i.e., view and completion) and the total number of interactions within a course. They can also filter to view interactions within a specific learning activity or resource via the “event” dropdown menu, to view quiz completion rates by student group, or to view overall interactions by student group via the “student groups” chart.

A visualization dashboard by student cohorts and type of activity or resource.
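As a hedged illustration of the quiz-completion view described above, the sketch below computes a per-cohort completion rate from the same hypothetical event log assumed earlier. The “action” column, quiz name, and cohort sizes are illustrative assumptions and do not reflect the authors’ PowerBI™ logic.

```python
# Hypothetical sketch (not the authors' PowerBI implementation): quiz
# completion rate by student group, as displayed in the dashboard template.
# Assumes the same event-log format as above plus an "action" column;
# the quiz name and cohort sizes below are illustrative placeholders.
import pandas as pd

def quiz_completion_rates(logs: pd.DataFrame, quiz_name: str,
                          cohort_sizes: dict) -> dict:
    """Share of each cohort with at least one 'completed' event for a quiz."""
    completed = logs[(logs["event_name"] == quiz_name)
                     & (logs["action"] == "completed")]
    per_group = (completed.drop_duplicates(subset=["student_id"])
                          .groupby("group")["student_id"].nunique())
    return {g: per_group.get(g, 0) / n for g, n in cohort_sizes.items()}

logs = pd.read_csv("moodle_event_log.csv", parse_dates=["timestamp"])
rates = quiz_completion_rates(logs, "Quiz: Week 1",
                              {"Group A": 45, "Group B": 47, "Group C": 47})
print(rates)
```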
Because of the flexibility offered by PowerBI™, we were also able to further customize the dashboard for the needs of an individual module, for instance, to display the access time of a particular learning activity or resource (Figure 2).

A visualization dashboard by student cohorts, individual activity and resource, and time stamp.
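The time-stamped view in Figure 2 could be approximated, under the same assumptions about the log export, by resampling access events over time. The sketch below is again only a hypothetical Python rendering of what the customized PowerBI™ view displays; the event name is an illustrative placeholder.

```python
# Hypothetical sketch of the customized view in Figure 2: access counts
# over time for a single activity or resource, under the same assumed
# log format; the event name here is an illustrative placeholder.
import pandas as pd
import matplotlib.pyplot as plt

logs = pd.read_csv("moodle_event_log.csv", parse_dates=["timestamp"])
resource = logs[logs["event_name"] == "Lecture notes: Developmental"]

# Count interactions per week to show when students accessed the resource.
weekly = resource.set_index("timestamp").resample("W")["student_id"].count()

weekly.plot(kind="bar", title="Weekly interactions")
plt.tight_layout()
plt.show()
```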
It should be noted that the visualization solution piloted in our study essentially existed outside of Moodle. During project implementation, we sought to establish a real-time connection between the dashboard and Moodle. While this was technically possible, we were not able to achieve it because of IT security concerns. In addition, while we did not assume that every university educator would be able to use PowerBI™ to adapt the visualization template, the program where we piloted the dashboard had faculty-based ongoing eLearning support. This meant that the template could be adapted by eLearning support staff based on the needs of teaching staff.
Evaluation and Reflection
Ease-of-Use
We piloted the basic dashboard template in three modules and presented the relevant dashboard to staff who teach or support teaching in that module. In the interviews, participants expressed that their dashboard was easy to navigate and used appropriate color coding (P1 and P4). More importantly, it was perceived as able to provide insights into how students have engaged with online learning resources and activities, and thereby to facilitate improvements in LD: “I think it’s useful … I think if I knew the dashboard existed, I would modify the Moodle page” (P1).
Within a few minutes of being introduced to the dashboard, participants were able to identify student engagement patterns and relate them to their LD. For example, Participant 1 was able to tell the differences in access patterns between compulsory activities and optional resources, and across student cohorts.
I can see that the things that we’ve made compulsory, like the Tikanga, pretty much 100% of people have done. I can see that things that are just information haven’t recorded many clicks … Going up to quiz things, I can see that lots of people may be downloading stuff … Okay, in this middle group summary we’ve got 139 Students … Let’s filter it by quizzes and see what happens. So 109 students have done a quiz … (P1)
Similarly, in comparing the access patterns between student cohorts, Participant 2 was able to understand the differences in assessment results:

I like this, this is so interesting. Because this less interaction in Group A than Group B, and we had more failures in Group A than we did in Group B. And I am seeing that immediately, that the lack of interaction relates to the group that had the most re-sits … That is really cool … so now I am actually seeing positives for this. I like this, I want this. (P2)
Participant 2 was also able to relate student engagement data to the way teaching was carried out during different periods of the year, some of which involved remote teaching because of COVID-19 lockdowns.
Oh I like it because you can see it easily … You can see the interaction was in March. And we had a face-to-face lecture of March on Developmental. Whereas in July, it was online, and there wasn’t as much interaction. So, I can see immediately what the system was telling me. And that’s really key because in July and August, they were all more or less home and shutdown. (P2)
Usefulness
Such insights allowed participants to understand the efficacy of their design and delivery of learning resources and activities, which enabled them to identify supplementary learning opportunities that could be acted upon.
Because I know what the students miss. It’s almost matching up with how the results came out … I am seeing like, April, coming up to exam time and suddenly there is a whole heap of interactions the whole week before exams. They should be interacting (with Moodle) in February and March … that’s the time they have to actually start looking at everything, and they are not. They are leaving it up to the last minute. Can I have one of these please? Is anyone else this excited about it? … so, my excitement at seeing this, because I can see immediately how it can be applied, may not be the same as other administrators who look at it and think “oh it’s just another tool.” (P2)
However, the perceived usefulness of the dashboard was not shared by all participants. In marked contrast with the enthusiasm of Participant 2, Participant 4 noted:

We do not really … I mean, it’s kind of useful to know whether students have logged in to have a look at the information that we provide to them, but we don’t really need to see it, it’s not something that we assess them on. (P4)
Reflecting on Participant 4’s responses in the interview at the “analysis and exploration” stage, we note that this participant held a view of visualization dashboards as serving primarily to track student progress and academic results. Such a view seemed to have influenced the participant’s expectations of the dashboard, which led to the above evaluative statement during the second round of interviews.
Discussion
Recent work shows that the field of LA has shifted from predicting student retention and grades to obtaining a deeper understanding of student learning (Axelsen et al., 2020; Joksimovic et al., 2019). Although such an application of LA seems promising for improving teaching practice, the adoption of LA in day-to-day teaching still appears challenging (Prieto et al., 2019), as university educators may not necessarily have the expertise and resources to engage with LA for the purpose of improving LD. In the present study, we sought to address this vexed issue by developing a visualization dashboard solution that allows university educators to improve LD based on evidence of student engagement with online learning resources and activities. As a small-scale, proof-of-concept project, our study demonstrated that it is possible for a dashboard to be designed, using locally available software and with limited technical expertise in LA, to provide LD insights from existing data recorded in the institutional learning management system. It thus confirmed the possibility of undertaking educational design research as a way of promoting the adoption of LA to inform local teaching practices. Considering the study as a whole, we note that it addresses a number of identified issues in relation to using LA for LD, which we explain below in detail.
First, while dashboards have been considered a relatively straightforward solution for generating actionable insights into learning (Kaliisa et al., 2021), there has been a lack of teacher-facing dashboard solutions aimed at improving LD activities (Sahin & Ifenthaler, 2021). Our study therefore serves as an exploration of this issue, providing a possible outlook for a visualization dashboard for learning design purposes. The dashboard template we designed captures what Mangaroska and Giannakos (2018) describe as the group, course, and content perspectives of LA, drawing primarily on students’ digital traces as the source of information. While we acknowledge that such a data source provides limited information and a simplistic view of actual learning behavior (Mangaroska & Giannakos, 2018), findings from the interviews showed that participants were able to gain an understanding of how students engaged with the learning resources and activities provided and therefore to develop some actionable insights into future improvements.
Second, our design of the dashboard aligned with the technology acceptance model (Davis, 1989), in that we placed an emphasis on making the dashboard solution easy to use and useful for teaching or teaching support staff. Such an attempt is consistent with current approaches to evaluating LA dashboards, where usability has been the focus of evaluation (Verbert et al., 2013). The four principles derived from participant interviews provide an explanation of what constitutes usefulness and ease-of-use from educators’ perspectives. We note that in our study context, a useful dashboard for LD purposes was expected to display student engagement patterns over a wide range of learning activities or resources within a course, rather than being confined to a particular learning activity such as a quiz. This is in marked contrast with existing analytics features provided by learning management systems, where the analytics primarily focus on a single activity. It seems that for LD purposes, a dashboard that aggregates engagement data across different activities, resources, and student groups is particularly useful in helping staff make comparisons and relate students’ actual engagement to their pedagogical intentions and arrangements. On the other hand, an easy-to-use dashboard was understood as requiring little navigation effort, with fewer clicks and less scrolling, while at the same time presenting graphical representations of engagement data in one view rather than multiple views. While these design principles cannot be readily generalized to other educational contexts, as meaningful adoption will need to be underpinned by considerations of the specific pedagogical context and target audience (Pawson, 2006), they may serve to inform others who are seeking to develop local visualization solutions for LD purposes.
Third, one of the major criticisms of research into LA for LD has been the reliance on quantitative methods to describe and predict learning patterns (Mangaroska & Giannakos, 2018). Although work of this kind has no doubt shown the efficacy of LA in informing LD (e.g., Nguyen et al., 2017), there is a need for a better understanding of the context that shapes learning and of how data generated in that context can be used for subsequent decision-making (Mangaroska & Giannakos, 2018). In this regard, our study demonstrates the potential value of qualitative approaches to exploring LA for LD. Findings from the interviews clearly showed participants’ expectations of the dashboard and, later, their interpretations of and reflections on student engagement in relation to the specific teaching context (e.g., remote teaching during COVID-19 lockdown). They showed that the qualitative interviews employed in this study served not only to collect data for research purposes but also to promote users’ reflective practice, a desired outcome of dashboard applications (e.g., Pozdniakov et al., 2022; Rodríguez-Triana et al., 2015).
Fourth, our study identified an emergent finding relating to the purpose of visualization dashboards. The interview data showed that, for some, dashboards were associated with the specific purpose of tracking student progress and assessing student performance. While this certainly reflects the dominant way of using dashboards in the educational context (e.g., Verbert et al., 2014), it was not our purpose when designing a dashboard. This finding highlights differences in users’ conceptions and expectations of what purpose a dashboard should serve. Given that dashboards have been increasingly applied to and embedded in online learning platforms, we consider it important for designers and educators to understand different ways of using visualization dashboards. Equally, we see it as important that users understand the limitations of a visualization dashboard. For instance, we consider it problematic to use a dashboard that aggregates student digital traces in one platform to assess student learning. This is because learning does not simply happen in one modality and is certainly not restricted to one institutional platform (Q. Liu et al., 2023). Inferences based on engagement data within a single platform are therefore a poor proxy for learning and may not be appropriate for assessing students’ overall performance. That is, data obtained from a single platform can tell educators how well students have engaged with that platform; they do not, however, describe how students learn across different platforms and with different devices, tools, and resources.
Finally, we see merit in careful consideration of software and ongoing maintenance. We designed the dashboard using a locally available software package, PowerBI™. We updated the dashboard template after an annual upgrade of the institutional learning management system; this work involved revalidating only a few field areas in PowerBI™, with minimal maintenance effort. While we did not consider this aspect when choosing PowerBI™, it highlights sustainability as an additional consideration when selecting software.
As a proof-of-concept project, some limitations of our study must be noted. The dashboard was designed and piloted with four participants and three courses, all within one campus of a medical degree program at our university. While we included different teaching-related roles and different courses, the sample we drew on meant that our study did not seek to generalize the findings beyond the project scope. Future endeavors could involve more educational programs and institutions in order to establish generalizable design principles for LD-oriented visualization dashboards. A second limitation relates to the data being visualized. The visualization was based on data obtained from Moodle and is therefore restricted by what Moodle records. With LA becoming increasingly popular, future learning management systems may develop more sophisticated ways of recording learning-related data, which would allow more nuanced visualization solutions. Relatedly, we note that what is presented in a dashboard is influenced not only by the system’s capability for data tracking, but also by the extent to which educators and learning designers develop engaging learning activities in a learning management system.
Conclusion
Our study aimed to address an apparent challenge in promoting the adoption of LA through the development of a proof-of-concept process for designing and developing an LA dashboard. This proof of concept uncovered several important findings about the process of designing an LA dashboard. First, it is possible for educators with limited technical expertise in LA to rapidly develop an easy-to-use and useful dashboard. Second, involving staff from the beginning of the process allowed us to develop a set of guiding principles that focused the design and development of the dashboard. Third, clarity about purpose is paramount to ensure that the appropriate learning analytics are the focus of the dashboard in order to maintain its usefulness and ease-of-use. Finally, choosing tools that are sustainable and scalable facilitates development and simplifies annual review and updating. The educational design approach we employed allowed us to work with staff to design a locally relevant and easy-to-use dashboard, which visualized student engagement data recorded in the learning management system.
Acknowledgements
We would like to thank the study participants for their collaboration in this project.
Data Availability Statement
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The research was funded by the 2022 University Teaching Development Grant from the University of Otago.
