Abstract
The authors explore the validity criteria of digital storytelling when applied as a research method in Participatory Health Research. The article begins with an overview of digital storytelling as a participatory visual research method. To demonstrate these validity criteria, the article then presents a reflexive account of a 2-year Participatory Health Research study that used digital storytelling as a research method to investigate treatment experiences among breast cancer patients. The authors offer a suggested summary of validity criteria for digital storytelling when applied to Participatory Health Research and describe the application of participatory, intersubjective, catalytic, contextual, empathic, and ethical validity. The article concludes with a discussion about resources and distribution.
Digital storytelling involves creating a short video using photographs, images, music, and personal narrative. The origins of digital storytelling are attributed to Dana Atchley and Joe Lambert, who developed a workshop process in the 1990s geared toward assisting people in groups of 8 to 12 with minimal technological fluency (Lambert, 2013). This “classic” digital storytelling approach involved 3- to 5-minute videos put together during 3-day workshops, where participants shared their story ideas (i.e., the storytelling circle), wrote a script, and pulled together photographs on computer software (Matthews & Sunderland, 2017). This design has evolved to include various lengths, processes, and technological formats to guide people in creating a short autobiographical video (Matthews & Sunderland, 2017).
Over the last 25 years, health care practitioners, educators, advocates, and researchers have worked alongside community members in creating digital stories. As a participatory visual research method, digital storytelling is well-documented (Adelson & Olding, 2013; Chalfen, 2012; Cueva et al., 2016; de Jager et al., 2017; Gubrium, 2009; Gubrium & Harper, 2013; Gubrium et al., 2016; Gubrium et al., 2014; Margolis & Pauwels, 2011). Yet there is a paucity of literature about validity criteria: criteria for judging whether research is sound and just are rarely described in the digital storytelling literature (Gubrium & Harper, 2013). To address this need, the authors explore the validity criteria of digital storytelling when applied as a research method in Participatory Health Research. The article begins with an overview of digital storytelling as a participatory visual research method and includes a description of validity criteria. The second section is a reflexive account of a 2-year Participatory Health Research study that used digital storytelling as a research method to investigate treatment experiences among breast cancer patients. The authors also offer a suggested summary of validity criteria for digital storytelling when applied to Participatory Health Research and describe the participatory, intersubjective, catalytic, contextual, ethical, and empathic validity of the respective study, with a particular focus on audiencing. The final section is a discussion about resources and distribution.
Digital Storytelling as a Participatory Visual Research Method
Over the last two decades, researchers have used digital storytelling to support various purposes and outcomes. Examples include its application as a pedagogical tool (Sandars & Murray, 2009), as a form of knowledge translation (Boydell et al., 2012; Scott et al., 2013), and as a method of ongoing community engagement (Gubrium, 2009), particularly in sharing stories and experiences in local, regional, and national settings (de Jager et al., 2017; Wexler et al., 2014; Willox et al., 2013). For instance, Willox et al. (2013) used digital storytelling as a research method working with a remote community in Northern Labrador to understand the impact of climate change on health and well-being in the region of Nunatsiavut. Participants shared their reflections and experiences about how changes in the land, sea, ice, and snow impacted their health and well-being. Wexler et al. (2014) also described a pilot study that examined youth-produced digital stories as representations of everyday lives, values, and identities. A subset of 31 stories was coded and thematically analyzed to understand how young Alaska Inupiaq people are responding to growing up in a world that is markedly different from that of previous generations (Wexler et al., 2014). Researchers included member checking to increase trustworthiness and credibility (Wexler et al., 2014). Adelson and Olding (2013) have additionally written about digital storytelling as a counter-narrative, echoing Willox et al.’s (2013) claims that digital storytelling can serve as a form of “digital decolonization” when working alongside First Nations, Inuit, and Métis communities.
Digital storytelling is also used as a health care intervention strategy in Participatory Health Research (De Vecchi et al., 2016; Wilson et al., 2015). Participatory Health Research focuses on participatory approaches that address health issues (Wright et al., 2009). In these settings, digital stories create a multimodal platform where normative, biomedical health care discourses can be contested by offering important insights into quality of care practices informed by the patient experience (Akard et al., 2015; Cueva et al., 2016; Taylor, 2014). Individual patient stories are not aggregated and blended into a single shared visual story; each experience is distinctly communicated through a multimodal combination of image, voice, and music.
Participatory Visual Research Methods: Locating Digital Storytelling
In the context of research, digital storytelling is often labeled as a participatory visual research method (Adelson & Olding, 2013; Gubrium & Harper, 2013; Margolis & Pauwels, 2011; Wexler et al., 2014; Willox et al., 2013). As Oliveira (2016) described, “participatory visual research includes a range of strategies meant to facilitate participant-centered meaning-making,” which involves individual participation as well as larger group processes (p. 262). According to Chalfen (2012), participatory visual research methods incorporate various types of biodocumentary storytelling. The underlying philosophy is that people are experts in their own situations (Chochinov et al., 2005). Participatory visual methods include various multimedia self-representing practices such as photovoice (Gubrium & Harper, 2013), participatory video (Sitter, 2015), cellphilms (MacEntee et al., 2016), and digital storytelling (Matthews & Sunderland, 2017). These methods often support individual and collective empowerment in developing practical visual media skills such as storyboarding, editing, and visual storytelling (Gubrium & Harper, 2013).
According to Chalfen (2012), the differences between participatory visual methods frequently come down to who controls the camera, who weighs in on content decisions, and the nature of the final artifact (i.e., individual stories vs. group-based perspectives). At the same time, a common element across digital storytelling, photovoice, and participatory video is the collaborative process. In digital storytelling, the storytelling circle involves participants describing the meaning behind their photographs. The process offers insight into a person’s social realities and lived experiences (Foster-Fishman et al., 2005). Through the group dialogue, participants also critically reflect on relevant events associated with their visual stories (Gubrium, 2009). The combined process of individual engagement in crafting short videos with the storytelling circle further enacts Freire’s (1970/2008) theory of critical consciousness and the ongoing process of dialogue, reflection, and action.
Compared to other participatory visual methods, digital storytelling also shares certain similarities with photovoice: Both processes require participants to bring their individual stories and singular visual representations into a group setting and discuss their images around a shared topic. In contrast, participatory video involves collaborative efforts with participants from concept through to production in order to create one—or many—group-based videos (Sitter, 2012). Participatory video members also take on assorted roles with shared, negotiated goals in the development and presentation. Most projects last months, if not years, with various aspects of editing and group involvement throughout the research stages (Sitter, 2015; White, 2003).
According to MacEntee et al. (2016), the reliance on collaborators and researchers to bring technologies into participatory research creates and sustains unequal power differentials. These authors claimed that cellphilms—filming stories via mobile phones—are a more accessible participatory visual method due to the number of people who own mobile devices. The authors “see cellphilms as a tool that can combat the assumption that marginalized individuals need an intermediary to tell their stories” (MacEntee et al., 2016, p. 8), which reiterates Thomas Harding’s (1997) description of video activism as a grassroots movement that does not require intermediaries in filming and distribution.
Like cellphilms, digital storytelling typically yields short videos as its final output. However, while researchers have used laptops and iPads for digital storytelling, they have also used mobile phones (see Alexander, 2017; Christiansen & Koelzer, 2016; Reitmaier, Bidwell, & Marsden, 2011). In these instances, the differentiating factor between cellphilms and digital storytelling is not technology but process; in digital storytelling, participants formulate and work through concepts and ideas in the storytelling circle around a shared topic, where peer feedback is a core aspect of this engagement.
Validity Criteria
Although the role of digital storytelling in research is well documented, researchers rarely describe validity criteria specific to digital storytelling as a research method in Participatory Health Research, or what such criteria look like in praxis. Spencer (2011) pointed out that in the application of visual methods, validity should focus on process: “the language, text, institutional disciplinary discourses which shape our interpretation must be revealed if validity is to be strengthened” (pp. 141–142). Spencer also reiterated the need to triangulate visual representations with other data, as doing so “provides a more trustworthy understanding of a complex situation” and “encourages a carefully analytical and collaborative practice” (p. 141). Similarly, Gubrium and Harper (2013) noted the importance of cross-checking multiple data sources, the triangulation of methods and sources, and the use of member checking as forms of validity with participatory visual media.
Waterman et al. (2001) also contended that participatory-informed research should be judged on its own terms: the extent to which it is participatory, whether it aims at change, and whether it involves movement between reflection, action, and evaluation. The International Collaboration for Participatory Health Research proposed six types of validity to judge the soundness of conducting Participatory Health Research: participatory, intersubjective, catalytic, contextual, ethical, and empathic validity (International Collaboration for Participatory Health Research [ICPHR], 2013). While the above criteria originated in a discussion paper, they have also been published elsewhere (see Wright et al., 2018). The dearth of literature about validity criteria and digital storytelling was a core reason for writing this article. As digital storytelling is often described as a participatory visual research method, the six validity criteria identified by ICPHR are well suited to evaluating adherence to the principles of participatory-informed research.
Background and Context
This multiyear Participatory Health Research study in Atlantic Canada investigated patients’ experiences of breast cancer treatment at a respective Regional Health Authority (RHA). Breast cancer is the most frequently diagnosed cancer in Canadian women, with one in nine females estimated to develop breast cancer in their lifetime (Canadian Cancer Society [CCS], 2015). However, little was known about the factors associated with treatment decisions and the overall patient experience within this RHA.
The study involved three phases: (1) patients created digital stories representing dimensions of their treatment experiences, (2) screenings were held with knowledge users from the respective RHA who completed questionnaires about viewing impact, and (3) focus groups were conducted with a subset of knowledge users to identify barriers and facilitators in supporting breast cancer patients in their treatment at the RHA. The project received approval from the relevant institutional review boards (IRBs).
Phase 1: The Digital Storytelling Process
Recruitment included women between the ages of 30 and 74 who had received breast cancer treatment at this RHA within the last 5 years. This age range was chosen for the study because the largest number of new breast cancer cases occurs within it (CCS, 2015). In Phase 1, participants were asked to (a) consider the determinants that influenced their choices of breast cancer treatment; (b) consider the strengths and challenges related to delivery, format, and timing of information; and (c) create a digital story based on their personal breast cancer care experience. As several participants were receiving treatment at the time of the study, longer workshop days were not possible. To accommodate participants’ unique needs, the classic digital storytelling process was adapted. A series of workshops was held at the RHA. The workshops consisted of 2 half-days and a screening of the completed videos on Day 3. Several participants who could not attend the workshops participated in one-to-one workshop sessions. The first author of this article was a principal investigator (PI) of this study and an experienced digital storytelling facilitator. The PI trained research assistants who assisted with facilitation and technological support. iPads were used, along with iMovie and iPhoto software.
Day 1 included an introduction to digital storytelling and screening examples of digital stories about various health-related topics. Participants then shared their ideas about their own digital stories in the form of a storytelling circle, which provided opportunities for feedback, unpacking concepts, and developing connections within the group. The day finished with storyboard development, which required participants to consider the timing of the image in conjunction with the narrative. Unlike a script, the storyboard affords the ability to pair narrative with images when working through the development of the story.
Day 2 focused on voice recording, choosing photographs and music, and laying everything into the iMovie timeline. At the start of the day, participants received a brief tutorial and a tip sheet on navigating iMovie and iPads. At the end of the day, any participant who had last-minute additions outlined their requests, and facilitators completed them after the workshop. The third day solely focused on the screening to celebrate accomplishments, share videos, and talk further about experiences in relation to the digital stories. The final 18 digital stories were approximately 5 to 12 minutes in length.
Digital Story Analysis
Digital stories were inductively analyzed using critical framing techniques (Sitter, 2015). Several research team members identified, organized, and discussed emergent themes (Spencer, 2011). What followed was a deductive analysis based on the absence, presence, and variance of elements of an evidence-based patient choice framework for breast cancer (Ford et al., 2003). Findings from the Phase 1 analysis informed the development of the Phase 2 questionnaire.
Phase 2: Digital Storytelling Screenings
Digital storytelling screenings were held over a period of 4 months. A total of 117 RHA knowledge users attended the screenings, including surgeons, medical/surgical/radiation oncologists, family practitioners, social workers, administrators, community health nurses, and perioperative teams across the region. Due to time considerations, two to three digital stories were screened at each venue. For each screening, the chosen digital stories collectively communicated all the emergent themes identified from the Phase 1 analysis. At the end of each screening, questionnaires were administered to evaluate the impact and extent to which the digital stories succeeded in identifying opportunities for improving the experiences of breast cancer patients at the RHA. The Phase 2 questionnaire also asked respondents to indicate their interests in participating in a follow-up focus group.
Phase 3: Focus Groups
The third and final phase included two focus groups of 13 health care professionals. Guiding questions were informed by the core themes from the digital stories in Phase 1 and the Phase 2 results. With consideration to the topic of inquiry of breast cancer treatment, themes included (a) delivery, format, and timing of information; (b) patient navigators; (c) whole patient care and complementary treatments; and (d) continuity of care throughout treatment. Focus group questions centered on developing practical outcomes that could address the issues identified in the digital stories that were also supported by the survey responses in Phase 2. Focus group data were audio-recorded, transcribed, and analyzed for emergent themes using thematic analysis. Results included recommendations and quality health care outcomes for breast cancer treatment within the respective region. While the overall study explored the treatment experiences of breast cancer patients, the following section attends to the validity criteria of the application of digital storytelling within this Participatory Health Research study.
Validity Criteria and Digital Storytelling
Participatory Validity
Participatory validity criteria primarily focus on quality engagement by key actors, as identified by ICPHR (2013): To be called participatory, the people whose life or work is the subject of the research need to actively take part in the research process. For example, where the subject of research is improving the health of people in a neighborhood, residents of the neighborhood need to be part of the process. And where the subject of the research is the quality of services being provided by the health-care system, both professionals and service users need to be engaged, as each group is directly affected by issues of quality. (p. 6)
Participants across all phases were either patients or knowledge users. For instance, recruitment called for participants who had experienced breast cancer treatment at this RHA; thus, the digital stories focused on the patient experience within this setting. Audiences at the Phase 2 screenings were health care professionals employed at the RHA, as were the focus group members. The findings and recommendations were thus targeted to consider feasible implementation strategies based on the locality of this study, which also overlaps with contextual validity.
Intersubjective, Catalytic, and Contextual Validity
As explained by Bach et al. (2017), intersubjective, catalytic, and contextual validity are assurances of the project’s relevance for those involved. The role of the patient-collaborator supported intersubjective validity, as this individual was able to provide guidance across key parts of the research process, including recruitment, screening locations, and knowledge translation. In this study, digital stories also became a pedagogical tool to share and create further connections within the community and among breast cancer support group members. For instance, after the focus groups were completed, the research team received requests from local breast cancer support groups to screen the digital stories at their annual conference. The project also received further funding to support a research-based theatre production inspired by and developed from the digital stories.
Contextual validity included recommendations based on the needs and structure of the respective RHA, which were grounded in patient treatment experiences at this particular setting. The overall digital storytelling process was also adapted to meet the needs of the participants. For instance, participant engagement in Phase 1 was shaped by a number of factors, including health considerations, which required shortening workshop timelines.
Contextual Validity
As previously noted, contextual validity requires that the research be explicitly connected to the local context. The findings should be relevant to the participants and local communities where the research is conducted. This Participatory Health Research study involved a critical framing technique with participatory visual methods that located participant stories at the center of analysis (see Sitter, 2015). The visual framing analysis included identifying emergent themes from (a) dominant imagery, (b) narrative, and (c) content. This visual framing technique also involved analyzing the digital stories in connection to the local context of the respective health care setting. In line with thematic visual analysis, differences were discussed and organized into themes (Spencer, 2011). Researchers also compared the core themes from the digital stories with existing research in the respective health care field to understand if the patient experience supported—or diverged from—the current scholarship.
The themes from the digital stories also guided the subsequent research phases, particularly in mixing methods. Examples included applying themes in developing both the screening questionnaire and focus group questions. In doing so, the research design ensured the patient stories of this RHA were at the forefront of the research process and informed the recommendations to the respective RHA. These included extensive adaptations to the current format for patient navigators, implementation of a whole-person care approach to treatment, and a visual decision aid specific to this health authority inclusive of (1) timelines from diagnosis to surgery, (2) follow-up and aftercare, and (3) health care resources and supports within the respective community.
Empathic Validity and Audiencing
Much has been written about the process of creating digital stories (Cueva et al., 2016; de Jager et al., 2017; De Vecchi et al., 2017; Lambert, 2013; Tharp & Hills, 2003). Yet the importance lies not only in the telling but in the listening. The role of audiencing is a critical consideration in participatory visual research, where exhibitions and screenings have increasingly come to be seen as essential components of the research design (Mitchell, 2015). Audiencing is also inherently connected to demonstrating empathic validity. Dadds (2008) described the role of empathy in participatory research: By empathetic validity, I mean the potential of the research in its processes and outcomes to transform the emotional dispositions of people towards each other, such that the more positive feelings are created between them in the form of greater empathy. Related to the growth of empathy is the enhancement of interpersonal understanding and compassion. Research that is high in empathetic validity contributes to positive human relationships and well-being. It brings about new understanding such as respect, compassion or regard. (p. 280)
Matthews and Sunderland (2017) explain how listening includes the visual and audio in concert with one another. In attending to another person’s views, perspective, and story, the active process of listening is also a political act that calls for the “attention and attunement to personal narratives” (Matthews & Sunderland, 2017, p. 7). The authors consider the ways meaning and agency transform and take shape in the process of listening. The act of listening to a digital story is located within social, cultural, and historical contexts and thus involves “mediated movements…as the listener interprets the other’s story and brings it home” to their own way of making sense of—and acting upon—that story (Matthews & Sunderland, 2017, p. 8). This type of encounter with a digital story requires active listening. Participatory traditions call for persons who occupy positions of power to listen to the experiences of those on the margins (Matthews & Sunderland, 2017). Thus, considerations of listening—and of building this concept into the research design in the context of Participatory Health Research with knowledge users—are critical.
In our research study, patients indicated that the digital storytelling process created a space to identify connections among one another through their shared experiences. Bearing witness to these stories also had an impact on knowledge users: From the 117 survey responses from knowledge users who viewed these digital stories, 91% of the sample strongly agreed/agreed that viewing the digital stories increased their understanding of women’s experiences with breast cancer treatment. As one audience member indicated, “listening to the women’s experiences of going through treatment for breast cancer opens my eyes to what some of their needs are. This will help health care professionals provide better care.” Another knowledge user indicated the videos provided a more complete picture of the treatment process: “It was a very emotional experience watching the videos and hearing these stories. I felt I got the whole experience of the patient rather than pre-/post-op when they come to our floor and I can care for them.”
In the open responses, a number of knowledge users also indicated they would change how they interacted with patients. For instance, one respondent stated, “I will be more open to listening to patients and ask them if they have questions. I will not assume that they have the knowledge on what to expect after a mastectomy. Education will be included in post-op care on how to manage drains and what to look for/expect.” Another response included, “In my profession, there is a great deal of emphasis placed on patient understanding but often after time one becomes lazy. Reminds me I need to check in more to clarify understanding.”
Ethical Validity
While audiencing is a core part of empathic validity, it is also tethered to ethical validity, particularly when it comes to public screenings. Our findings also support Mitchell’s (2011) claim that attending to audience impact is critical, particularly the potential risks involved in viewing videos and how to attend to these risks. In our research, we found that the sensitive topic and raw nature of the digital stories had a strong impact on different audiences, particularly community-based ones. Indeed, the impact of viewing was found to be so powerful that we asked ourselves, “what is our ethical responsibility in the act of viewing?” A core outcome of the digital storytelling screenings was a recommendation to show no more than two to three videos at once, followed by facilitated group discussions.
With many health-related topics, stories can be raw and include graphic imagery, which may result in very emotive responses. Incorporating discussion time with audience members was important when recognizing the potential overwhelming impact of viewing numerous digital stories in one setting. In this study, screenings with built-in discussion times between and after each digital story were found to be essential in sharing experiences. Patients attending these screenings indicated the development of the digital stories—as well as the group viewings with family and friends—were very therapeutic.
Banks et al. (2013) also provided a detailed overview of ethics in community-based participatory research. The authors noted that current IRB procedures and guidelines are not particularly well-suited for participatory research (Banks et al., 2013). Using visual methods further contributes to these potential challenges. Gubrium et al. (2014) also wrote about ethical issues and digital storytelling in research implementation. While their article is based on the classic digital storytelling format, it identifies concepts that transcend digital storytelling design variations. Identified challenges included the control to shape one’s story and securing consent from individuals in photographs (Gubrium et al., 2014). A storyteller’s control in shaping their own story is often influenced by time restrictions placed on length. In our study, these time restrictions were not strictly enforced. While some digital stories were 5 minutes, several were up to 12 minutes long. Ultimately, these longer stories created screening barriers in several venues. However, participants were pleased with their final products, which supported their autonomy in creating the story they wanted to tell.
An image-based release form was also developed for this study and given to the participants prior to the workshops. The signature process was outlined in handouts sent out prior to the start of the workshop and also reviewed on the first day. Similarly, we ensured copyright-free music and a repository of stock images were available on site for easy access. All of these elements—release forms, copyright-free music, and stock images—were in place so participants could focus on their story development during the workshop.
Ownership of the visual media in participatory visual research is also a core ethical issue. While participants are the authors of their visual stories, these media are simultaneously considered data in the context of research. To address the issue of ownership, Mitchell (2011) provided examples of how ownership of participant-generated photographs can be recognized in consent forms. Sitter (2015) also wrote about the role of ownership in participatory visual media, where participants owned the videos and the researcher requested permission to analyze and screen them in various settings. In the study described in this article, participants received copies of their digital stories, with all final edits made on-site immediately after the final screening on the last day. The participants provided permission for screenings and analysis, and agreed that their digital stories could be used in the development of a theatre production for knowledge translation. Participants have also shared their digital stories on social media and with community and family members. However, further research is needed in this area, including examples of various consent forms and IRB requirements when it comes to acknowledging the challenges of authorship and ownership while balancing the researcher’s responsibilities to both participants and their respective institutions.
Discussion
The authors explored six validity criteria of digital storytelling in Participatory Health Research through a reflexive account of a 2-year study investigating treatment experiences among breast cancer patients. The validity criteria included participatory, intersubjective, catalytic, contextual, empathic, and ethical validity. The role of audiencing is also a core aspect of these quality criteria, particularly in relation to empathic and ethical validity.
Digital storytelling also requires a number of practical considerations regarding resources, time, anonymity, and screenings. Resources are a key aspect of the research process. Whether using mobile phones, iPads, or computers, technologies are needed—along with facilitators who know how to provide appropriate guidance so the technology does not overpower the storytelling process.
When the process is informed by the participants’ needs, format flexibility is essential. In this study, several participants were in treatment, and full-day workshops were not an option. The workshops thus veered from the classic digital storytelling format into half-day sessions. To balance time with story completion, there was one facilitator for every two participants. This resulted in fewer participants per workshop and a greater number of workshops. The process also required strong facilitation skills to support participants in making editorial choices during the allocated workshop time.
When IRBs are involved, there is also the topic of anonymity and consent. For participants who desire anonymity, careful consideration in supporting this request is needed. Examples include the use of a different story narrator, drawing on metaphorical imagery to communicate the visual aspect of the stories, and so on. In this study, participants also had to secure consent from people in any photographs they chose to include in their digital stories. When this was not possible for several participants, they explored creative ways to communicate these storied elements including drawings, paintings, stock photographs, and abstract imagery.
The topic and content of digital stories can also impact screening formats. Audiencing is a core concept in digital storytelling; we recommend reflecting on the ethical responsibility toward audience members when sharing stories, particularly when crafting IRB submissions. Audience members can have emotional responses to digital stories, which may be heightened when viewing a large number of videos around a shared topic. Following screenings with facilitated discussions can be helpful for audience members who require time to process and reflect on the stories. In the case study described in this article, the decision not to upload the digital stories online was made after careful consideration, to ensure people had options to discuss their experiences after viewing the videos, if needed. Community screenings that included all 18 digital stories were held over 2 days, followed by approximately 2 hours of discussion time. Screenings in educational settings were also restricted to one or two videos, followed by facilitated discussions.
This article provides an example of validity criteria of digital storytelling when applied as a research method in Participatory Health Research. In this study, the digital storytelling process created space to reflect and discuss the patient experience, to bear witness to patient stories, to develop meaningful patient engagement, and to engage knowledge users for effective change. While the authors do not suggest all criteria are essential, the application offers an exemplar of how to consider validity when using digital storytelling in Participatory Health Research.
Acknowledgments
The authors would like to acknowledge and thank the research study participants. The authors would also like to thank Dr. Erin Cameron, Dr. Gail Wideman, Dr. Alex Mathieson, Ms. Rosemary Lester and Ms. Amy Burke. The authors would also like to thank the reviewers for their thoughtful feedback.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by a Patient-Oriented Research Grant awarded by NL SUPPORT.
