Abstract
Effective translation of data to inform real-time patient care is lacking in inpatient addiction settings. The current study presents the optimization of an assessment report used by clinicians to individualize treatment. A multi-aim, iterative approach, guided by an implementation science perspective, was taken to arrive at a final version of the assessment report. The study occurred at a small inpatient addiction treatment facility. Participants were all available clinical staff (N = 7; female = 71%), and the response rate was 64.3%. A quantitative survey was used for aims 1 and 2 to, respectively, assess motives and context around the report and evaluate its design. Aim 3 focused on optimization via semi-structured interviews. Descriptive and modified content analyses were utilized as appropriate across aims. Five versions of the assessment report were created between February 2021 and August 2022, the most recent of which was adapted into patients’ electronic medical records. We discuss each version of the report in depth, including clinicians’ iterative feedback and researchers’ perceived barriers to this translational process. The current study highlights a replicable approach for optimizing the translation of assessment data into treatment for patients with disorders of addiction, as well as an assessment report that could be utilized by similar facilities with a naturally low sample size.
It is known that addiction treatment facilities collect data as part of standard operations. Translation of real-time patient data into effective clinical care is a challenge in inpatient addiction settings.
The current study used input from various stakeholders to optimize a clinical report based on an assessment battery given to patients. The final report was a user-centered, functional, clinically-relevant tool for communicating research findings about patients to clinicians.
Assessment data and customized reports that have been optimized for a facility can be utilized to enhance patient treatment at other inpatient addiction centers.
Purpose
It is imperative that efforts to facilitate science communication between research and clinical stakeholders in the addiction treatment field continue to improve. To date, there are few examples of organizations where these disparate stakeholders can easily communicate outside of teaching facilities or time-limited research projects that naturally bring clinicians and researchers together. Multiple frameworks and models exist for guiding such relationships,1-4 although following them as closely as intended in community-based, non-research-focused settings is difficult. This study examines the translational efforts made at a new inpatient addiction treatment facility that has an on-site Center for Addiction Science. Specifically, we used an assessment battery as an opportunity to design a clinical report with the intent of integrating the report into clinical care. We describe herein the process for creating and optimizing the clinical report in a real-world situation where the clinical staff sample size and research resources are limited.
Disseminating and Implementing in the Addiction Field
The continuum of care in the addiction field ranges from primary prevention and health promotion tactics to recovery support.5 Treatment options are typically divided into residential inpatient (including acute medical stabilization) and outpatient services, with medication management and clinical therapy found in all types of facilities.5,6 There is also the further distinction of private non-profit organizations, private for-profit organizations, and government-operated facilities, a distinction which affects patient accessibility.6
Disseminating best practice into addiction treatment settings is difficult,7,8 and there is a lack of translation to medical, public health, and other specialty-care settings.9 Specifically, researchers have discussed the lack of translation and evidence-based programs in community addiction programs.10,11 Many leaders in the field are still focusing on the best way to organize and streamline dissemination and implementation efforts,12,13 which is necessary yet adds time to an already-lengthy process. One area of high need is how best to utilize assessment data.14 Specific to the field of addiction, where rates of relapse, morbidity, and mortality are high,5,15,16 it is imperative that new tactics be used to better serve patients and create science-based care.
It is common knowledge that addiction treatment facilities collect data as part of standard operations, largely via various assessment instruments. Yet these data are rarely utilized to their fullest potential either internally (eg, quality reports, patient progress reports) or externally (eg, publications). Thus, there is a need to create a distilled report based on patient assessment data that can be easily translated into clinical care. To date, assessment reports have tended to be long, cumbersome, and often only obtained in special circumstances (eg, neuropsychological testing, other addiction testing).17 The assessment protocols used at our inpatient addiction treatment facility were drawn from decades of clinical research, some of which are used in practice (see Methods for more details). The creation of a clinical report is critical to the translation of such assessment data because otherwise the plethora of information goes unused.
Designing for Accelerated Translation (DART) is a translational model that suits a pre-implementation study such as this. It offers quick translation between research and practice by taking the health system, stakeholder partnerships, and design innovations into account.18 DART guided the current study’s approach to the first few versions of the clinical report via a “simultaneous” approach. With this approach, we started with a useable product (ie, version 1 of the clinician report) that was created with a team approach, knowing that unforeseen barriers would arise. The simultaneous approach allowed for an iterative process through which we optimized the report over multiple versions via stakeholder feedback. We chose DART because it was appropriate given the real-world circumstances of the study and the desired goal of reaching a useable assessment report that could be fully utilized by all stakeholders.
Study Aims
The goal of the current study was to develop a user-centered, functional, clinically-relevant tool for communicating assessment data about patients to clinicians. This was done by utilizing a three-aim, simultaneous translation model via DART (Designing for Accelerated Translation). The aims were as follows: (1) assess the motives and context surrounding the assessment report; (2) evaluate the design development of the assessment report; and (3) optimize adoption of the assessment report. Each aim fulfills a key implementation focus. Aim 1 focuses on motives and context, or the environment in which the assessment report will be utilized. Aim 2 focuses on design because the report must be clear and organized to enhance translation. Aim 3 focuses on optimization which, while unique to each organization, is important to ensure buy-in and utilization. It is hoped that, by presenting this internal quality improvement study, other facilities can better utilize their own assessment data or begin to collect assessment data that can then be translated into clinical care. This paper presents clear implementation steps and guidelines in both the manuscript below and Appendices A and B.
Methods
Design
This study took place at a small, inpatient addiction treatment facility in New York with the goal of utilizing the best available science to inform patient care. Data from clinicians were collected via REDCap19,20 and analyzed using R21 and Microsoft Excel.22 The opportunity to participate was offered to all clinicians (master’s-level social workers or creative arts therapists) who had a patient caseload and would therefore receive the report regularly. There were no other inclusion or exclusion criteria. All clinicians were informed that this study was initially for internal quality and education purposes and might be externally published. Institutional Review Board consent or waivers were not applicable given the scope of the study. Participation was voluntary, and clinicians could skip any questions that they were not comfortable answering. There was no obligation to complete aims 2 and 3 if aim 1 had been completed. A mixed-methods design was implemented whereby different approaches were taken for each aim and each version of the report (ie, quantitative survey and/or qualitative semi-structured interview; see below).
Assessment battery
The assessment battery used by our facility offers a thorough understanding of our patients and collects standardized data at baseline (ie, within a few days of admission to the facility, depending on the severity of withdrawal symptoms). It is administered to all patients who plan to stay for the rehabilitation level of care (ie, not detox only) and is comprised of the following measures (all of which have demonstrated psychometric properties in the addiction literature): Substance Use Chart; Short Inventory of Problems–Revised23; Composite International Diagnostic Interview24-26; Alcohol and Drug Intentions Questionnaire27; Addiction Treatment Attitude Questionnaire28,29; Reasons for Seeking Treatment; Alcohol and Drugs Abstinence Self-Efficacy Scale30,31; Mini-International Neuropsychiatric Interview32; Outcomes Questionnaire-45 33; Family History of Substances (proprietary); and Important People Interview.34 Together, these measures assessed patients’ substance use, treatment processes, psychopathology, and social networks.
Assessment report
Following the assessment, the assessor was trained to condense the information from the assessment into an individualized report for the primary clinician and others on the patient’s treatment team (eg, assigned medical provider, nursing staff, trauma specialist). Patients were also allowed access to the report if they requested it; this request was carried out by the lead scientist (JLB) or the patient’s primary clinician. The first version of the report was Microsoft Word-based22 and utilized Word’s “Developer” features. Version 1 was developed by the lead scientists at the facility and will be described here because it was the foundation of this project. See Appendix A for a copy of all versions of the assessment report. Version 1 prioritized white space over length, and it was 3 pages total. It listed raw numbers where appropriate, such as with average amount used per week, and listed the Likert scale result in other places with the score indicator nearby. This version presented raw numbers and scores as frequently as possible and did not provide detail on the psychiatric screener (only listed disorders for which patients endorsed symptoms or met full criteria) or social network (did not list affected family members, nor list each person in the network).
Sample
Minimal demographic information was collected to reduce the ability to identify participants. While not every clinician participated in each aim, there were N = 8 participants across all aims; 1 completed only demographic information and nothing else, so their information was omitted from analyses (N = 7; Mage = 34.86 years; women = 71.43%; white = 85.71%; Hispanic = 14.29%). The average number of years as a licensed clinician was 6.14 (range = 2-11). This small sample size represents almost all clinical staff at our small inpatient setting, again highlighting the real-world nature of this translational endeavor.
Procedures
The REDCap survey for motives and context (aim 1) was emailed to all clinicians who worked at the facility around the time that the assessment report launched in February 2021; it was also sent to new clinicians as they were hired and onboarded through May 2021. The purpose of this survey was to get a baseline view of the clinicians’ translational practices and interests, so it was appropriate for them to complete the survey before final optimization of the assessment report. The survey was automatically re-sent via REDCap up to 3 times over 9 days or until it was completed.
An iterative approach was taken for aims 2 and 3 via multiple surveys (aim 2) and a semi-structured interview (aim 3) whereby after each survey and interview, the report was updated to reflect comments. Specifically, a separate REDCap survey for the design development (aim 2) of version 1 of the assessment report was sent out after 20 assessment reports had been completed to ensure that every clinician had experience with several reports before being queried. Again, a reminder to complete the survey was sent up to 3 times in 9 days. The results were analyzed quickly such that version 2 of the assessment report was introduced by assessment report #30. The same iterative procedure was repeated after 20 more assessments such that version 3 of the assessment report was introduced by assessment report #70. Around assessment report #100, version 3 of the assessment report was updated based on semi-structured interviews aimed at optimization (aim 3). Interviews were chosen because clinical staff were more willing to sit down for a 1-h interview than to do more electronic surveys. Following that, 2 more assessment report versions (versions 4 and 5) were introduced based on the Center for Addiction Science staff’s impressions of the process and internal reviews. This was again done in an iterative and reflective manner.
Measures
There were 10 outcomes across the 3 aims, as outlined below. See Appendix B for the full surveys and copy of the semi-structured interview given to the clinical team.
Practices (Aim 1 - motives/context)
This outcome did not follow a standardized assessment. It was adapted from Bourdon et al35 and included 5 questions related to length of time as a clinician (in years), experience with the measures in the assessment (4-point scale from “I have never heard of this assessment” to “I have given this assessment in the past year and/or utilized results from this assessment”), perceived self-competency discussing results from the assessment with patients (4-point Likert scale from “strongly disagree” to “strongly agree”), and experience with psychological assessments (4-point Likert scale from “strongly disagree” to “strongly agree”). There were also 3 open-ended questions that asked clinicians to describe any experience that they had with the standardized measures listed as well as with assessments during their clinical training and practice.
Interest (Aim 1)
This outcome was brief and included 3 questions on a 4-point Likert scale (“strongly disagree” to “strongly agree”) related to clinicians’ hypothetical desire and utilization of the assessment data. These questions were customized for the project.
Attitudes (Aim 1)
This outcome included questions adapted from Bourdon et al35 that were related to the clinicians’ views on translational science. Questions ranged from assessing views on whether translation between research and practice is important for disorders of addiction to querying which disorders would benefit the most from such translation. All questions were on a 4-point Likert scale, with answer options specific to the questions.
Appropriateness (Aim 2 - design)
Taken from Weiner et al,36 this outcome assessed whether the assessment report was suitable for this specific inpatient setting. There were 5 questions on a 4-point Likert scale (“completely disagree” to “completely agree”). The section ended with a final open-ended question prompting clinicians to offer any additional thoughts on this topic.
Understandability (Aim 2)
This outcome subsumes format, clarity, and conciseness by following Shoemaker et al’s37 Patient Education Materials Assessment Tool (PEMAT). Specifically, 17 dichotomous yes/no questions were presented to clinicians about the content, word choice/style, use of numbers, organization, layout/design, and use of visual aids of the assessment report.
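Dichotomous items like these are commonly summarized as the percentage of items endorsed. The sketch below illustrates that summary step in Python; the responses are entirely hypothetical, and this is not the study's own scoring code (the full PEMAT guide also allows items to be marked not applicable, which is omitted here).

```python
# Hypothetical PEMAT-style summary: 17 dichotomous (yes/no) items condensed
# into a single percentage of items answered "yes". Responses are invented.
answers = ["yes"] * 13 + ["no"] * 4  # 17 hypothetical item responses

# Score = percent of items endorsed.
score = 100 * answers.count("yes") / len(answers)
print(f"Understandability: {score:.1f}%")
```

A summary like this makes it easy to compare report versions at a glance, even when the underlying sample is too small for inferential statistics.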
General Feedback (Aim 2)
Three open-ended questions were included to give clinicians an opportunity to explain what they like the most/least about the assessment report as well as how the format of the report affects their ability to use the tool.
Uptake (Aim 3 - optimization)
Questions related to this outcome were part of the semi-structured interview. They included questions querying what clinicians did when they received the report (eg, did not read it and did not let it inform therapy; read it and shared the information with patients).
Behavior change (Aim 3)
Questions related to this outcome were also part of the semi-structured interview. These questions asked clinicians how they used the assessment report information (eg, to inform the treatment plan; to spark therapeutic discussion) as well as how their behavior had changed with each new version of the report.
General feedback (Aim 3)
General feedback questions were included in the semi-structured interview to help with optimization. Such questions were on the topics of what clinicians liked the most or least about the assessment report (acceptability), perceived barriers to using it in their everyday practice and in this specific inpatient setting (barriers), and how they would ideally like to use the report (ideal usefulness).
Analysis
Basic descriptives (means, sum scores, percentage breakdowns) were calculated on all quantitative data. No mean difference analyses were conducted due to low and inconsistent sample sizes across aims and versions of the report. A modified content analysis approach was utilized for the semi-structured interviews whereby themes emerged from the data.38 Given the small sample size, pointedness of the questions, and practicality of the optimization process, some steps of typical content analysis were skipped. Specifically, the traditional steps of code, category, and theme were not followed; final themes and summaries were concluded early in the process.
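As a concrete illustration of the descriptive approach, the sketch below computes per-clinician sum scores, a mean sum score, and a percentage breakdown of Likert options. It is written in Python purely for illustration (the study's analyses used R and Excel), and every response value is invented rather than taken from the study data.

```python
# Hypothetical descriptive sketch (the study used R and Excel, not this code).
# Each row is one clinician's responses to a 4-item, 4-point Likert scale.
responses = [
    [3, 4, 2, 3],
    [4, 4, 3, 4],
    [2, 3, 2, 2],
    [3, 3, 3, 4],
    [4, 2, 3, 3],
    [3, 4, 4, 3],
]

# Sum score per clinician (possible range for 4 items on a 4-point scale: 4-16).
sum_scores = [sum(r) for r in responses]

# Mean sum score across clinicians.
mean_sum = sum(sum_scores) / len(sum_scores)

# Percentage breakdown of each Likert option across all item responses.
flat = [x for r in responses for x in r]
pct = {opt: 100 * flat.count(opt) / len(flat) for opt in (1, 2, 3, 4)}

print(f"Mean sum score: {mean_sum:.2f}")
print("Option percentages:", pct)
```

With samples this small, such descriptives are reported directly rather than tested for group differences, which matches the analytic decision described above.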
Results
While there were N = 7 clinicians who took part in the study, n = 6 completed the aim 1 surveys in full (1 completed only demographic information and practices). There were n = 4 responses for the first round of aim 2, n = 3 for the second round of aim 2, and n = 5 semi-structured interview participants for aim 3.
Motives and Context for Translating Assessment Data
See Table 1 for all results related to Aim 1 – motives and context for assessments and an assessment report, as well as translational science as a whole in this field. Overall, clinicians expressed interest in, and favorable attitudes toward, translating research into practice via assessments but reported minimal experience with such practices. Specifically, the current practices of the clinicians were mixed, with most being unfamiliar with most assessments used at the facility. However, they more often than not reported feeling competent with assessment-related tasks, such as reviewing reports with their patients (M sum score for competence = 10.83; possible range 4-16). Further, the clinicians mostly expressed interest in wanting assessments and reviewing the results with their patients (M sum score for interest = 9.83; possible range 3-12). Finally, clinicians reported mostly favorable attitudes toward this topic (M sum score for attitudes = 13.5; possible range 4-16). Specific areas of translation where they reported the most favorable attitudes (all M = 4.00, with every clinician selecting the most affirmative option) included discovering new and better treatments for disorders of addiction; targeting of resources to at-risk populations; and treating schizophrenia spectrum and other psychotic disorders, post-traumatic stress disorder, personality disorders, disruptive, impulse-control, and conduct disorders, and substance use disorders / disorders of addiction.
Frequencies and Scores for Individual Questions from Aim 1 (Motives/Context).
Note 1: Combined Likert options 3 (I have utilized this in the past but not within the past year) and 4 (I have given this assessment in the past year and/or utilized results from this assessment in the past year).
Note 2: One participant only completed half of the aim 1 survey, so N = 7 for demographics and “practices” and N = 6 for “interest” and “attitudes.”
Iteratively Designing the Assessment Report
See Table 2 for all results related to Aim 2 – design of the assessment report. Overall, the appropriateness of the report between versions 1 and 2 remained consistently moderate-to-high. Agreement with specific questions improved between versions, likely indicating better communication about the reports outside of the survey questions. Understandability captured the main design-focused elements of the report via its sub-sections (content, word choice/style, use of numbers, organization, layout/design, and use of visual aids).
Frequencies and Scores for Individual Questions from Aim 2 (Design).
Iteratively Optimizing the Assessment Report
Aim 3 was focused on optimization of the assessment report and assessed version 3 of the report (created after the second round of aim 2 was completed). In regard to uptake, it was revealed that the clinicians relied heavily on the summary paragraph that was included with the report, and even then, primarily focused on any information related to patients’ suicidality. Few of them looked at or relied on the scales in the report. Finally, each clinician appeared to have different “favorite” sections on which they tended to focus, and they reported becoming more engaged with the report across versions 1-3. Relatedly, their behavior change was minimal; they stated that the report mostly confirmed what they already knew about their patients except in complex cases (including patients with suicidality). Despite significant differences between the facility’s intake assessment and the current one, clinicians said that they relied on the intake assessment more often during treatment planning and therapy and that they did not habitually mention the current report in therapy. The clinicians mentioned ways it could be used, or its ideal usefulness, such as putting it into the electronic medical record, including additional information, and using it in supervision. However, several admitted that they were underutilizing this tool. Other comments included barriers such as fitting it into their workflow, the time needed to read and process the information, and continued education on what the information means and how best to use it. This feedback resulted in version 4 of the report.
Updating version 4 of the report did not involve formal evaluation by the clinical team in the same manner as the improvements to versions 1-3. Instead, the research team collectively identified the weak areas that remained in the report, small errors that should be fixed, and other ways to improve workflow. The most significant changes included language in the headers of each section indicating whether the information was useful for a treatment plan, utilization review, discharge planning, or other similar documents (as well as the relevant section of those documents). Smaller changes included typos, slight word changes, and DSM-5 clarifications. The resulting version 5 was adapted for the electronic medical record, but a Word version remained in case it was ever needed.
Discussion
The purpose of the current study was to iteratively develop a user-centered, functional, clinically-relevant report for communicating assessment data about patients to the clinical team in an inpatient addiction treatment facility. We focused on studying (1) the motives and context surrounding the report, (2) the design of the report, and (3) the optimization of the report’s adoption. While we found success in creating a well-liked, informative, visually pleasing report, adoption was difficult. Additionally, there are 2 secondary considerations: clinician engagement throughout this process and using implementation science techniques with minimal resources at an inpatient center. Despite any drawbacks, our strategy and final product offer a technique for similar facilities, as this study took place in a common environment (inpatient addiction treatment) that is often too under-funded and under-staffed to utilize all patient data.
Clinicians in our facility reported favorable attitudes toward the concept of translational science and the integration of research with practice but lacked experience doing so, a trend that has been reported elsewhere.35 They were largely unfamiliar with the measures used in the assessment, which in theory should not prevent them from interpreting a report based on data from such measures. However, it has been well-documented that mental health and addiction clinicians, including medical psychiatrists, are slow to adopt new practices.39-41 Unsurprisingly, every clinician reported that improved translation between research and practice would benefit disorders of addiction as well as disorders that often co-occur with AUD/SUD, including disruptive, impulse-control, and conduct disorders; personality disorders; PTSD; and schizophrenia spectrum and other psychotic disorders.42,43 Every clinician also reported that translational efforts would have a strong influence on discovering new and better treatments for disorders of addiction as well as targeting resources to at-risk populations. Again, though, there may be a disconnect between intent and reality, as clinicians are needed to assist with the translation. For example, clinicians reported feeling competent to discuss the report results with patients while also reporting little-to-no experience with the assessments used to garner the information. Further, despite clinicians hoping for effective translation, they also reported that they did not actively utilize the assessment for better treatment (see below).
In regard to the actual design of the report, we paid careful attention to the clinical team’s continued feedback, which included low ratings in the areas of clutter, distractibility, and clarity for Versions 1 and 2. We triangulated such survey results with our own experiences and feedback from Center for Addiction Science staff in later versions of the report. One of the biggest and most surprising changes occurred between Versions 2 and 3 and involved the elimination of most numbers from the report. It appeared that the qualitative scales were more meaningful for clinical staff than quantitative ones. This highlights the longstanding difference between “clinical significance” and “research significance.” Clinicians work in fast-paced environments and must be able to quickly read an assessment report; changing a numeric Likert 1-5 rating to the words “strongly opposed” or “strongly favorable” is a simple but important change.
Other significant design changes involved the “psychiatric diagnoses and psychiatric problems” section. This changed considerably across versions, starting out as a written list of diagnosable criteria and ending as a list of diagnosable criteria checkboxes with space to write detailed notes. Clinicians often praised the various note sections of the report, whether in the psychiatric section or at the end of the report. This was ultimately a fine line to walk, as we did not want them to become too dependent on our scientific interpretation. Indeed, they commonly reported after Version 3 that they relied on the summary paragraph and reviewed only a few other scales. By Version 4, we internally decided to keep notes at the end of the report minimal except to note significant trends or behaviors exhibited during the assessment. Throughout the entire study, other barriers noted by clinicians included fitting in time to read the reports and process the information, and continued education on what the information meant and how to use it. Clinicians suggested that the report be uploaded into the patient medical record, which occurred with Version 5.
Relatedly, it was difficult to engage clinicians throughout this process, and feedback was minimal at certain stages of the project. Nevertheless, we were able to engage nearly the entire clinical staff at some point in this process and to update the report at each stage, ultimately arriving at a version that holds great promise. Compounding the naturally low sample size (see Limitations), fewer than half of the clinical staff at any given time tended to reply to an electronic survey. Interestingly, a higher proportion of the clinical staff engaged in the semi-structured interview than in any of the quantitative surveys, despite the interview taking an hour and the surveys taking a fraction of that time. While an hour-long interview is not feasible for either research or clinical staff on a regular basis, it could be a methodology worth pursuing less frequently to gain more meaningful information for regular translational and quality efforts.
Our implementation approach, the DART model,18 was a success in this setting, which other inpatient facilities should note. Future organizations could use this approach to create a similar report, including the measures and design choices that we used. Per the DART model, we were able to achieve relatively quick translation between our Center for Addiction Science and the clinical department.18 The “simultaneous” approach worked best for this facility, as we could iteratively update the report and even make changes to our approach along the way. For example, despite rigorously planning how often each survey or interview would be conducted for each report version (eg, after a certain number of assessments), the original timetable of every 20 reports for the first 100 assessments was not feasible. This would have resulted in 5 versions within the first 100 reports, and we only achieved 3. Relatedly, fatigue quickly set in, and survey overexposure became a concern for the science team. The DART model allowed us to be flexible with our original goals, as the crux of the model is an iterative team approach to translation that allows for high demand, quick turnover, and low cost.
Limitations
There are a number of limitations to note. First, because the facility is small, it was difficult to protect the privacy of clinicians; we kept demographic information minimal to avoid it being identifiable. Second, employee turnover coupled with an already small clinical staff meant that few people completed all 3 aims. This made it difficult to track the optimization of the assessment into treatment. Third, studying effective translation of data into patient care in this inpatient rehabilitation center is “real world,” but this study was not controlled in the way similar studies often are. Without grants, a large team of researchers, or other resources common in studies of evidence-based practice (EBP) and implementation, it is difficult to carry out a study such as the current one. This also resulted in a naturally low (N < 10) sample size even though nearly all clinicians at the facility participated in the study. However, studies such as this are important, as they further the conversation about barriers to implementing research into practice in a “real world” environment.
Conclusion
Science communication between research and clinical stakeholders in addiction treatment facilities is in need of improvement. The current study aimed to improve translational efforts whereby data from an assessment battery were used to design a clinician-focused report with the intent of integrating the report into clinical care. A baseline view of clinicians’ translational practices and interests was captured, and over time a total of 5 versions of the clinical report were created with improvements based on clinicians’ feedback. This process can be replicated at similar facilities for optimizing and fully utilizing assessment data.
Supplemental Material
Supplemental material (sj-docx-1-inq-10.1177_00469580241237117 and sj-docx-2-inq-10.1177_00469580241237117) for “An Implementation Approach to Translating Assessment Data into Treatment for Disorders of Addiction” by Jessica L. Bourdon, Taylor Fields, Sidney Judson, Nehal P. Vadhan and Jon Morgenstern in INQUIRY: The Journal of Health Care Organization, Provision, and Financing.
Acknowledgements
We would like to acknowledge all who worked tirelessly to make Wellbridge a reality and who continue to realize its purpose on a daily basis. This includes our clinical, nursing, admissions, administrative, food service, housekeeping, and maintenance staff. We would also like to thank Sabrina Verdecanna for help formatting the final version of the paper for submission.
Author Contributions
NPV and JM developed the initial assessment measures. JLB, NPV, and JM finalized the assessment protocols. JLB implemented the assessment protocols, including collecting data and creating the assessment report. TF and SJ collected assessment data and wrote corresponding reports. JLB wrote the initial draft of the paper with assistance from TF and SJ for literature, tables, appendices, etc. All authors read subsequent drafts and assisted with edits.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Statement on Informed Consent
This study was deemed “not human subjects research” from Pearl IRB (ID 2024-0019).
Supplemental Material
Supplemental material for this article is available online.
References
