Abstract
Background
Feedback and debriefing are crucial components of communication skills training, as they foster reflective learning and support the consolidation of core competencies. This review aims to systematically identify, map, and analyze the literature on how feedback is implemented in communication training in the context of healthcare education.
Methods
This scoping review follows the five stages of the methodological framework proposed by Arksey and O'Malley (2005). Four online databases—MEDLINE, Cochrane Library, PsycInfo via Ovid, and ISI Web of Knowledge—were systematically searched to identify relevant studies.
Results
A total of 365 articles were included in the final analysis. Articles reported that communication skills training is frequently conducted in small-group settings, with participants receiving immediate post-event feedback from multiple perspectives. There was a notable lack of consistency in reporting details related to the characteristics of debriefers and standardized patients, as well as in the approaches used to guide the debriefing process. Structured, multiphase feedback methods were rarely described.
Conclusions
This review shows that, although the importance of feedback in communication skills training is widely acknowledged, it is rarely described in sufficient detail. A clear gap emerges between the theoretical understanding of feedback's value and characteristics and its structured implementation in practice. While there is a slight trend toward increased use of standardized feedback models, this appears unrelated to the specific setting, communication scenario, or profession.
Introduction
Effective communication is widely recognized as a core competency in healthcare education and practice. Strong communication skills are crucial for ensuring patient safety, fostering therapeutic relationships, and promoting effective teamwork across professional boundaries. 1 Inadequate communication is frequently cited as a major factor contributing to adverse events, 2 underscoring the need for systematic communication skills training (CST) in all health professions curricula. 3 Experiential learning, including role-plays and simulations, is a fundamental prerequisite for effective communication training. CST comprises structured teaching and learning strategies designed to systematically develop the communicative competencies of healthcare professionals through practical exercises, feedback, and reflective practice. 3 Research has demonstrated that CST enhances students’ communication abilities4,5 and better prepares them for clinical interactions. 6
A growing body of literature emphasizes that feedback and debriefing are indispensable elements of experiential learning, enabling learners to consolidate knowledge, enhance self-reflection, and reach their full potential.7–9 As Motola et al. 10 note, without a structured and guided reflection process, much of the learning achieved during training would be left to chance, while other authors even argue that conducting simulation without adequate debriefing is not only ineffective but unethical. 11 In line with these findings, the International Nursing Association for Clinical Simulation and Learning (INACSL) updated its standards for simulation-based education in 2021, highlighting core components such as facilitation and debriefing. 12 These guidelines underscore that essential learning during simulation occurs primarily in the debriefing phase, reinforcing the central role of structured feedback in communication skills training.
The boundaries between feedback and debriefing are not always clearly defined, and the literature presents multiple, sometimes overlapping definitions. 13 Feedback has been described as specific information comparing a trainee's observed performance to a standard to guide improvement 14 or more broadly as an ongoing dialogue to support learner reflection and performance change. 15 Debriefing, in turn, is generally framed as the guided reflection process following experiential learning. 16 Some authors even propose grouping both concepts under the umbrella term of a “learning conversation.” 9 The present review does not seek to appraise terminological differences in the use of feedback and debriefing across the literature. Instead, it maps and reports the characteristics of feedback and debriefing described by the original study authors, without applying an external or normative terminological framework.
Communication skills were previously regarded primarily as a soft skill, meaning that they were not considered a core professional competence but were instead treated as a social or interpersonal supplement to technical and clinical expertise. Consequently, communication skills were often assumed to develop naturally through experience and observation rather than through formal education and assessment, as described by Singh et al. 17 In contrast, they are now widely recognized as a central component of professional practice in all healthcare professions.18,19 While the educational value of feedback and debriefing is well established, 20 there remains a limited understanding of how feedback is applied in communication skills training in healthcare education. 21 Consequently, it remains unclear what feedback methods, models, or facilitators are used, and how structured these processes are. To our knowledge, no comprehensive review has yet synthesized existing evidence on the implementation and use of feedback and debriefing within the CST in higher healthcare education. This scoping review therefore aims to map the current landscape of feedback practices in communication training, identify the range of applied approaches, and highlight gaps for future research and educational development.
Methodology
This scoping review follows the five stages of the methodological framework proposed by Arksey and O'Malley. 22 The protocol for this scoping review was published on Open Science Framework Registries, available at https://osf.io/mfhq6. 23 The Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) was used to guide the reporting of this review. 24 We followed a systematic, iterative process throughout, grounded in a constructivist approach to data synthesis.
Step 1: Identifying the Research Question
The PCC (Population/Concept/Context) mnemonic formed the basis for the review questions. Through a systematic mapping of the existing evidence concerning communication skills training, our aim was to elucidate the ways in which feedback and debriefing are integrated within advanced healthcare education. This scoping review is guided by the primary question “How is feedback implemented in communication training within healthcare education?” The secondary questions are “Which definitions or conceptualizations of feedback are employed?,” and “How have feedback practices evolved over time?”
Step 2: Identifying Relevant Articles
The search strategy was iteratively developed by AD, HB, and KR. The search string was additionally peer reviewed by two researchers (AJ and KL) experienced in systematic literature searching. A preliminary pilot search was undertaken in July 2022. Several iterations of the search strategy were tested against key articles to ensure that it captured articles relevant to the review question. The final search across all four databases, MEDLINE/PubMed (biomedical sciences, 1946-present), Cochrane Library (healthcare, 1993-present), PsycInfo via Ovid (psychology and related disciplines, 1806-present), and ISI Web of Knowledge (multidisciplinary current awareness, 1998-present), was conducted on June 14, 2024 (Appendix A).
Step 3: Study Selection
This review included peer-reviewed articles reporting empirical research using observational, randomized, non-randomized, and mixed-methods designs, as well as descriptive educational reports. Conference abstracts, commentaries, and editorial articles were excluded. Only English-language references were reviewed. No date restriction was placed on search results. Further eligibility criteria were defined using the PCC mnemonic, as summarized in Table 1. The criteria were developed and iteratively refined by the review team through the pilot search.
Eligibility Criteria for Article to Be Included.
The web-based platform Rayyan (Rayyan Systems Inc.) was used to manage the study selection process. 25 Five members of the research team (AD, FE, HB, LG, and MMK) independently screened titles and abstracts of the articles using Rayyan with reviewer blinding enabled, to exclude those that did not meet the inclusion criteria. Full-text screening was then performed by reviewers FE, HB, MMK, NS, and LG. In both stages, every article was independently reviewed in duplicate to avoid potential selection bias. The team members met regularly, twice a month over a period of 12 months, during the screening and data extraction stages. Any eligibility disagreements were resolved through discussion to ensure group consensus.
Step 4: Charting the Data
Following the completion of full-text screening, data were systematically extracted using a standardized data extraction template implemented in Microsoft Excel. The template was initially developed by all authors and adapted from the JBI template data extraction instrument, with modifications reflecting the scoping review concept. 26 It was subsequently piloted by MMK, HB, and KR, who each independently reviewed three articles to ensure that the essential data could be extracted. The extraction template was modified based on the pilot, and all team members involved underwent extensive piloting. A narrative and analytic approach was used to extract data most relevant to the research objectives. To summarize and chart the study characteristics, categories were developed for each dimension or theme within the data extraction template; these categories were subsequently operationalized using numerical codes. The extractors continued to meet twice a month to discuss challenges during data extraction and to refine coding. Data extracted from each of the publications included general publication details (year, author, journal, and country), study population and setting, characteristics of the communication training (composition of the group, group size, dialogue partner, communication scenario, duration of training), and characteristics of the feedback (timing of feedback, duration of feedback, feedback method or concept). The characteristics of the identified publications are presented in Supplemental Table 1.
Step 5: Collating, Summarizing, and Reporting Results
Data analysis of key findings followed an iterative process of summarizing, describing, and categorizing themes as they related to the research questions. The data were summarized in a narrative format as well as a tabulation and visual display. A formal quality appraisal of the included articles was not performed, which aligns with methodological guidance for the conduct of scoping reviews. 26
Results
The electronic database literature search yielded 3666 citations. After the removal of 921 duplicates, 2745 articles underwent title and abstract screening. A total of 605 full-text articles were assessed for eligibility. After full-text screening, 365 articles were included in the final synthesis (Figure 1).

PRISMA flow chart.
Characteristics of Included Articles
Included articles were published between 1973 and 2024. The scoping review identified literature from every continent (Figure 2). Overall, 53 (15%) of the included studies were randomized controlled trials, 34 (9%) used a control group design, and 103 studies (28%) used a pre-post design. Nearly half of the included articles (n = 175; 49%) were descriptive in nature, predominantly consisting of single-group post-test designs focused on describing curriculum development.

Sample sizes varied considerably across the included articles, ranging from a single participant 30 to 7200 participants. 31 In 55 articles, the sample size was not reported precisely; instead, authors provided estimates such as “15-20 participants per semester.” 32 This was particularly common in articles focusing on the development of teaching concepts or curriculum designs.
Description of Trainees
Monoprofessional communication training targeting medical students was most frequently reported (n = 260; 71%), followed by training for nursing students (n = 41; 11%). Physiotherapy (n = 3), midwifery (n = 2), and speech and language therapy (n = 2) were comparatively rare. Fifty-one articles (14%) reported multiprofessional samples, including, for example, social workers, physician assistants, and dietetic students. A total of 19% (n = 69) of the included articles involved trainees who could be classified as beginners, as they were in their undergraduate years. In 106 articles (29%), students were more advanced in their education, while 140 articles (38%) included residents. Traditionally, the term resident refers exclusively to physicians who have completed medical school and are undergoing supervised postgraduate specialty training in a clinical setting. For this review, however, the term is used in an extended sense to also include nurses, therapists, and other health professionals who have completed their initial professional education and are engaged in continuing professional development.

Treemap illustrating the distribution of learner levels (resident, advanced, beginner) and training group sizes across included studies.
Description of Training
Duration
In 26% of the articles (n = 95), the communication skills training was limited to a one-day workshop. Most CST interventions (n = 246, 67%) extended over more than one day, while in 6% of the articles (n = 24), the time span of the training was not reported. The duration of the communication scenarios was reported in half of the articles (n = 182, 50%) (Figure 4).

Duration of communication scenarios and feedback/debriefing across included studies.
Group Size
Group sizes—defined as the number of trainees, excluding external experts, facilitators, and similar roles—varied considerably. In 18% of the articles (n = 65), trainees did not participate in group settings but worked individually or in pairs, often alongside a facilitator. When group-based CST was implemented, the number of trainees ranged from 3 to 65. Most groups were small, with 2 to 6 trainees (n = 135; 37%) (Figure 3). In 84 articles (23%), no information on group size was reported.
Communication Scenarios
The most frequently reported types of communication training were taking patients’ medical history and conducting patient interviews, described in 65 articles (18%). A total of 38 articles (10%) specifically addressed training in delivering bad news. Communication in emergency situations represented another major theme, covered in 41 articles (11%). Eleven articles (3%) focused on patient communication during ward round interactions, while 22 (6%) covered other complex patient contexts, including palliative care, perinatal loss, emotional distress, and advanced illness.
Non-patient-related communication scenarios, such as handovers and other forms of interprofessional communication, were reported in 21 articles (6%). Communication with patients’ relatives and family members was the focus of 25 articles (7%).
In an additional 54 articles (15%), the communication scenarios were broadly defined and could therefore not be assigned to any of the specified categories. This included, for example, simulated nursing routines in which students focus on responding to subtle verbal and nonverbal patient cues rather than solely on task completion. 33 In 44 articles (12%), communication scenarios were not described in sufficient detail to allow their setting to be categorized.
Conversation Partner
Standardized patients (SPs), most commonly fellow students trained and instructed to assume specific roles, were the most frequently used conversation partners in CST, as reported in 287 articles (79%). In 57 articles (16%), “real” conversation partners were used. These differed from standardized patients in that they were actual patients, relatives, or colleagues. Virtual simulations were described in 11 articles (3%). In another 10 articles, the type of conversation partner was not clearly specified.
Feedback in Communication Training
Giving and Receiving Feedback.
In most of the included articles (n = 341; 93%), only trainees—whether beginners, advanced students, or residents—received feedback on their performance. In 16 articles (4%), the conversation partners also received feedback. In most cases, trainees received feedback from multiple perspectives (n = 254, 70%), often including self-reflection (n = 178, 49%). In addition to self-reflection, further perspectives included those of lecturers, peers (other trainees), (standardized) patients, or individuals categorized as “other.” The “other” category comprises feedback provided by AI systems or external experts (Figure 5). Twenty-seven articles did not report or clearly specify who provided the feedback.

Receiving feedback. Sunburst diagram showing the number of feedback perspectives provided to trainees (inner ring) and the corresponding sources of feedback (outer ring).
Feedback Timeframe
In the majority of articles (n = 277; 76%), feedback was given immediately after the communication scenario. In 39 articles (11%), it was given later the same day, and in 22 articles, feedback was delivered at a later point, beyond the day of the training. The feedback phase lasted 29 min on average (M = 20), with a minimum of 2 min 34 and a maximum of 120 min.35,36 If a range was reported for the duration of feedback (e.g., 30 to 45 min), the longer value was included in our calculation. In 25 articles (7%), the feedback did not exceed 10 min; in 36 articles (10%), it lasted up to 20 min; and in 46 articles, the feedback lasted up to an hour (Figure 4). In 249 articles (68%), the timeframe of the feedback was not reported.
Feedback Methods
A guiding question of this review was how feedback is defined or conceptually understood within the literature. For this, we examined feedback and debriefing descriptions in the included literature. Overall, these articles presented a highly heterogeneous picture of feedback practices. In some cases, feedback was characterized as “well-structured” or as following a “standardized feedback procedure.” Theory-based learning approaches, such as the SPIKES model, 37 were applied; additional frameworks mentioned included Kirkpatrick's model, 38 among others. Several articles reported the use of feedback forms, structured templates, or checklists to support the feedback process. These tools were typically developed specifically for the respective communication training programs. A structured feedback method or model was reported in 35 (10%) of the 365 articles (Figure 6). Nine different feedback models were reported (Table 2). All nine models share a structured, stepwise approach to guiding feedback or reflective conversations. The most frequently reported methods, each cited in eight articles, were Pendleton's model 39 and 360° Feedback. Although a feedback phase was present in the remaining articles, the specific sequence and structure of the feedback process were often not clearly described.

Year of publication.
Structured Models for Reflection, Feedback, and Debriefing Reported in the Included Studies.
Discussion
This scoping review was conducted to provide a comprehensive overview of the use of feedback in CST. The findings indicate that feedback is employed across diverse contexts within communication training programs, frequently involving medical trainees. The reviewed articles encompass a range of training durations, from brief sessions lasting only a few minutes to comprehensive programs spanning multiple days. The content of the communication scenarios varies considerably, including interactions with patients, relatives, and interprofessional teams. Additionally, the group sizes involved in these training sessions differ, reflecting a broad spectrum of instructional settings.
The articles included in this scoping review illustrated that feedback could take various forms. Only 35 (10%) of the reviewed articles reported a structured and clearly articulated feedback model. The earliest article included in this review that described such a model was published in 2001. This allows us to cautiously conclude that structured communication-skills training and curricula, including feedback, became more widely established beginning in the early 2000s.
Methodologically, the included literature is dominated by descriptive and single-group designs, with a large proportion of curriculum-design-focused studies. Few studies examine the impact of specific feedback models or compare different feedback approaches. In addition, most studies focus on medical students followed by nursing students, while other healthcare professions and interprofessional settings remain underrepresented. Across the included studies, considerable variability was observed in scope, pedagogical approaches, targeted competencies, and evaluation strategies. Accordingly, outcome measures were tailored to the underlying research questions and ranged from written instruments, including checklists or rating scales, to interview-based approaches, which captured not only changes in competencies but also participants’ satisfaction with the overall training.
Returning to the issue of terminology, it is important to note the conceptual distinction between feedback and debriefing. As outlined in the introduction, feedback is often described as a unidirectional transmission of information, whereas debriefing is typically understood as a bidirectional, interactive process that fosters reflection and dialogue. During the extensive data extraction process, it became evident that the terms feedback and debriefing are frequently used interchangeably or without explicit methodological distinction in the included studies. Besides the 35 articles that report on and explicitly name structured feedback and debriefing models, Adamson et al. 40 provide a positive example of a deliberate linguistic distinction between the two terms—an explicit differentiation that remains uncommon in the reviewed literature. In their study, a debrief was conducted with the expert by experience (EBE) “to establish how successful they felt the simulation activity had gone and to discuss any possible improvements that could have been made.” 40 This was followed by feedback, described as “comments from both facilitator and EBE as to what aspects of communication (both verbal and non-verbal) the students did well and in which areas they could improve.” 40 This example illustrates a purposeful separation of debriefing and feedback, thereby clearly distinguishing the reflective from the evaluative components of the intervention.
Although structured feedback models were rarely reported, the literature suggests a shift from directive feedback toward more dialogic and learner-centered approaches. Based on the frequent use of self-reflection and peer feedback reported in the literature, it appears that feedback was rarely applied in a purely directive manner. Rather, the approaches described, such as engaging learners actively and summarizing learning objectives, more closely resembled the components of a classical debriefing process. The mapping shows that feedback and debriefing are often, but not always, a group-based discussion of the preceding communication scenario. Debriefings in small groups are common, as are feedback sessions held immediately after the scenario and within a relatively short timeframe. These practices are consistent with previous recommendations,3,41,42 as multiperspective debriefing has been shown to foster deeper reflection, support self-regulated learning, and encourage interactive dialogue, thereby offering a richer and more nuanced learning experience than traditional, instructor-centered feedback. 43
Although numerous didactic and conceptual educational overview publications emphasize the importance of experienced debriefers and the influence of their skills on learning outcomes,12,44 most of the included articles provided no information regarding the facilitators’ formal training or practical experience. This is particularly noteworthy, given that debriefing in CST is recognized as both essential and demanding. 45 Inadequately conducted feedback, especially when perceived as negative or poorly delivered, can lead to demotivation and impaired performance. 46 It is crucial to avoid such negative experiences. Learners should not perceive CST sessions as stressful or aversive but rather associate them with positive emotions. According to Kurtz et al. 3 positive experiences can foster broader cognitive engagement, promote peer learning, and strengthen interpersonal relationships. Achieving this, however, requires well-trained and experienced facilitators.
The second key group involved in training is the trainees’ conversational partners, commonly referred to as standardized patients (SPs). This term suggests the consistent portrayal of specific communication scenarios or clinical conditions to ensure comparable learning experiences. However, these individuals more closely resembled simulated patients or peers, fellow students trained to varying degrees to realistically assume patient roles. As with debriefers, information on SP preparation was limited, a gap also highlighted by Doyle et al. 21 This is particularly noteworthy, given that the SP's role is not a trivial one. Professional standardized patients are particularly recommended in healthcare simulation and communication training to ensure a realistic, standardized, and high-quality learning environment. 47 Stand-in students or peers, as commonly described in our findings, should not be used as SPs, because real, experienced SPs can provide a deeper, more emotional insight into student performance. 48 Several practical and pedagogical factors may explain why peers are frequently used instead. Resource and cost constraints may play a role: employing trained standardized patients requires staff, preparation, and financial investment, whereas peers are readily available at no additional cost.
They can be integrated into teaching spontaneously, without the need for scheduling, briefing, or coordination with external individuals. Moreover, role-playing with fellow students offers advantages: learners gain insights from both the clinician's and the patient's perspectives, which deepens their understanding of communication dynamics. The use of real patients was rare, despite evidence suggesting that their involvement can enrich training by offering insights into patient-centered perspectives. 49 Similarly, the use of artificial intelligence (AI) as a conversational partner was even less common.
Taken together, many components are either not implemented at all or are reported inadequately. In some cases, the information may exist but was not reported by the authors due to the primary focus of the study, potentially distorting the overall evidence base. In other cases, certain characteristics may not have been considered or measured at all and were deemed irrelevant by the authors. We suspect that the observed gaps represent a combination of both reporting bias and genuinely missing information.
Our findings show a slight trend toward the increasing use of standardized feedback models, with no apparent link to the setting, communication scenario, or profession. Feedback now appears to be less instructor-centered, incorporating multiple perspectives and covering a wide range of conversational situations. Notably, a large part of the included literature provides little information on the qualifications of debriefers and SPs, making it unclear whether this gap stems from insufficient reporting, from a perception that these qualifications are unimportant, or from actual neglect of this aspect of the method.
Despite the extensive literature highlighting feedback as a crucial element of the learning process, most authors did not offer comprehensive descriptions of how feedback was delivered. The lack of detail regarding the structure of feedback and debriefing suggests that their practical relevance may not yet be fully acknowledged or consistently integrated into training approaches. This observation points to a persistent gap between recommended educational practices and their implementation or reporting in published articles. As discussed above, previous studies have identified both facilitator expertise and the preparation of standardized patients as important contextual factors shaping learners’ experiences in communication skills training. Building on these considerations, future studies should explicitly report facilitator qualifications, training, and standardized patient preparation as core components of communication skills training interventions. Such transparency is consistent with the standards of the International Nursing Association for Clinical Simulation and Learning 12 and aligns with reporting frameworks such as the TIDieR checklist, 50 which aim to improve the completeness and transparency of intervention reporting, thereby enabling replication and critical appraisal. Greater adherence to these standards would support more systematic reporting and enable a clearer interpretation of the role of feedback and debriefing in communication skills training.
Limitations
CST typically consists of three key phases: the briefing, the communication scenario, and the debriefing.3,43 In this review, we primarily focused on the third phase, feedback and/or debriefing, which, however, does not represent an isolated activity. Effective debriefing presupposes a prior definition of learning objectives during the briefing phase as well as a tailored communication scenario. We included only those articles in which feedback was explicitly implemented. Articles describing CST without any opportunity for reflection were excluded during the title-abstract or full-text screening stages. Therefore, the earlier phases of CST were not the primary focus of this review.
During data extraction, it became apparent that several relevant parameters were not reported by the authors of the included articles. Pollock et al. 51 recommend contacting study authors when essential information is missing; however, this was beyond the scope of our review. Consequently, it remains unclear whether the lack of information on feedback reflects a reporting bias or whether feedback was not considered a critical element of CST in those cases.
Conclusion
The findings of this review reveal a significant gap between the theoretical understanding of the relevance and characteristics of feedback and its structured implementation in practice. Although most authors acknowledge the importance of feedback and/or debriefing in CST, this component is seldom reported in sufficient detail. Future research should place greater emphasis on this aspect, particularly within the context of curriculum development. To date, feedback and debriefing appear to be applied inconsistently, and their effectiveness has not been sufficiently investigated through systematic or experimental approaches. Beyond establishing consistent terminology, further research is required to determine which features of feedback and debriefing are most effective in specific contexts, aligned with defined learning objectives, target groups, and intended outcomes. Identifying the most effective feedback strategies thus represents a key challenge for the future.
Supplemental Material
sj-docx-1-mde-10.1177_23821205261431560 - Supplemental material for Feedback and Debriefing in Healthcare Education to Improve Communication Skills: A Large Scoping Review
Supplemental material, sj-docx-1-mde-10.1177_23821205261431560 for Feedback and Debriefing in Healthcare Education to Improve Communication Skills: A Large Scoping Review by Hanna Brodowski, Muriel Marieke Kinyara, Leonie Göbel, Felix Edert, Anna Dammermann, Nicole Strutz and Katharina Röse in Journal of Medical Education and Curricular Development
Supplemental Material
sj-xlsx-2-mde-10.1177_23821205261431560 - Supplemental material for Feedback and Debriefing in Healthcare Education to Improve Communication Skills: A Large Scoping Review
Supplemental material, sj-xlsx-2-mde-10.1177_23821205261431560 for Feedback and Debriefing in Healthcare Education to Improve Communication Skills: A Large Scoping Review by Hanna Brodowski, Muriel Marieke Kinyara, Leonie Göbel, Felix Edert, Anna Dammermann, Nicole Strutz and Katharina Röse in Journal of Medical Education and Curricular Development
Acknowledgments
The authors would like to thank Andres Jung and Kerstin Lüdtke for their support in refining the search strategy for this review.
Ethical Considerations
Not applicable.
Consent to Participate
Not applicable.
Authors’ Contributions
HB conceptualized the scoping review and made substantial contributions to the methodology, results, discussion, and conclusion. HB also independently screened records, extracted data, and prepared the original manuscript. MMK, LG, FE, and NS independently screened records and extracted data. MMK, LG, and FE additionally contributed to the drafting of the results section. AD interpreted the data and contributed substantially to the drafting of the introduction. NS also contributed to manuscript review and editing. KR provided methodological oversight. All authors reviewed and approved the final manuscript.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work is funded by the German Federal Ministry of Education and Research (BMBF) and the regional government of the state of Schleswig-Holstein, Germany under project number 16DHBKI075.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
References
