Abstract
Highlights
We explored perceptions of a decision aid that combines education about localized prostate cancer treatment with preference elicitation using adaptive conjoint analysis.
Patients found the tool useful but were also confused by it, tried to discern the intent of the questions, and expressed negative emotional reactions.
In particular, there was a disconnect between patients’ negative reactions while using the tool and general satisfaction with the final values profile generated by the tool, which is an area for future research.
Decision aids have been widely promoted to support shared decision making (SDM) for preference-sensitive medical decisions. These tools constitute an expansive category of interventions that seek to aid patients in the medical decision-making process. They have 2 main aims: educating patients about the outcomes and side effects of medical treatments, and prompting a decision based on the patient’s individualized needs and values. 1
Treatment decisions for localized prostate cancer are a rich site for SDM because the decision is preference sensitive, with several options, such as active surveillance, surgery, and radiation, each carrying different side effects, such as urinary and erectile dysfunction. Accordingly, the American Urological Association endorses patient decision aids that elicit preferences to complement the SDM discussion and help patients reach high-quality decisions. 2 A recent review evaluated 21 different decision aids for men with localized prostate cancer, varying in presentation and format: most were designed as computer/Web-based multimedia programs, a few as booklets, and some in video format. 3
Previous evaluation studies of finalized decision aids—a recent Cochrane review of 105 randomized trials is an excellent example—have been mainly quantitative: researchers compare endpoints such as treatment choices, decisional conflict, or decisional regret.4,5 The literature is far sparser for qualitative, postprocess evaluations of how patients feel about the tool. In systematic reviews of prostate cancer treatment decision aids, few tools were tested with patients as part of the development process, and of those that were, most studies used quantitative survey methods rather than observational interviews.6,7 Observational interview studies may give insights into the design and implementation of decision aids that are not captured by quantitative studies that prespecify the endpoints of interest or by qualitative studies that interview patients after they have completed a decision aid.
In reviewing the literature, we found few studies in which patients were asked to describe in their own words their experience of using a decision aid at the time they were using it, and none concerned prostate cancer.8–10
One study used cognitive think-aloud interviews to assess how women interpret a UK National Health Service leaflet on cervical cancer screening. Even though the leaflet had gone through extensive user testing, these interviews made it clear that many participants perceived the leaflet to be too complex. 8 Another study used think-aloud interviews with 26 patients and 26 physicians as part of alpha testing during the development of a prototype treatment decision aid for early-stage breast cancer, in which participants were asked to go through the decision aid and immediately comment on it. They were also asked questions such as, “What is your general impression?” and “What did you particularly appreciate?” 9 However, the comments from both patients and physicians mainly concerned ease of use, readability, and tool navigation, along with feedback on specific content. The study by Hahlweg et al. 10 used cognitive interviews and focus groups to make a few changes to the decision aids, but again, these were focused mainly on comprehension of specific items and the order in which FAQs appeared. In contrast, our study focused on obtaining cognitive and affective responses from patients, that is, what goes on in people’s minds as they go through the tool and how the tool makes them feel.
We explored the perceptions of patients about a decision aid for management of early-stage prostate cancer while they were actively using it. This multicomponent decision aid comprised a preference elicitation and values clarification exercise using conjoint analysis; the generation of a summary report presenting the weighted values and preferences; and guidance toward a decision through information about treatment options. This qualitative methodology allowed for systematic, detailed observation of patients’ behavior and verbalization, and we were able to solicit cognitive and affective responses to the tool. 11
Methods
Goal
This is a quality improvement project conducted between April and November 2019, prior to consideration of implementing a decision support tool in clinical practice. The project was determined not to require institutional review board oversight by the Human Research Protection Program at Memorial Sloan Kettering Cancer Center. The goal of the project was to examine the ability of a Web-based decision support tool to deliver the benefits of SDM to the patient via a novel, integrated technology. To do so, we solicited patients’ feedback on and impressions of the tool: whether its questions made sense to them, what they understood, how they felt about the presentation of the tool’s outputs, and what suggestions they had for improvement.
Population and Setting
The population consisted of 21 patients presenting to a urology clinic following their diagnosis of localized prostate cancer of low or intermediate risk (Gleason score 3+3 = 6 [grade group 1] or 3+4 = 7 [grade group 2], prostate-specific antigen <20 ng/mL and clinical stage T1c-T2b) in a large metropolitan setting. The average age of the patient population in this setting is about 62 to 67 y,12,13 and the racial distribution of male urology patients comprises 82% White, 8% Black, 5% Asian, and 6% other race. Approximately 6% are Hispanic. Purposive sampling was the primary strategy for recruitment, and a target sample size of 20 was chosen to reach content saturation for the project’s goal of assessing a patient’s perceptions and feelings while using a decision support tool when determining treatment options. Content saturation was defined as the point at which a topic was fully explored, with no new information emerging from subsequent interviews. 14
Decision Tool
The development and details of the Web-based decision aid have been described previously.15,16 In brief, the tool was developed for patients with newly diagnosed localized prostate cancer with the goals of providing education about localized prostate cancer and treatment options, preference measurement, and personalized decision analysis, using adaptive conjoint analysis. It took participants about 15 to 20 min to walk through the tool. There is evidence that implementation of this decision tool preconsultation reduces decisional conflict and improves several measures of SDM, such as helping men feel better informed and more involved in the decision-making process. 15 In a randomized trial comparing the tool with an educational brochure about prostate cancer treatment, patients who completed the software-based preference assessment felt more certain about their treatment decisions and reported decreased levels of decisional conflict. 16
After the patient answers the questions posed by the decision aid, the tool generates a personal profile based on the patient’s values and preferences. This is achieved through conjoint analysis, which ranks preferences based on substitution (e.g., avoiding frequent urination > avoiding sexual dysfunction > avoiding a surgery requiring a hospital stay); the treatments that best match those preferences, together with tumor characteristics, are then ranked higher than other options and displayed with a “fit bar” (e.g., radiation therapy > brachytherapy > surgery > active surveillance). Treatments are ranked by an expected value calculation that runs in the background within a decision analytic framework, with the quantified preferences providing the weights in the model. For conjoint analysis to accurately determine a patient’s preferences, the questions must be phrased as tradeoffs. Patients were asked, “Which of these outcomes is more acceptable to you?” and shown 2 groups of 2 to 3 health states, then asked to rate the strength of their preference on a 5-point scale. For instance, a patient might be asked to state a preference between, “After your treatment you quite frequently need to have a bowel movement with little notice. (Within six months you return to normal) AND You avoid an outpatient procedure. AND Your life expectancy is reduced by six months.” versus “Your bowel movement patterns remain the same. AND You need minor surgery, with no hospital stay required. AND You live out your expected lifespan.” The decision aid encompassed a total of 20 such questions.
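The background expected value calculation described above can be illustrated with a minimal sketch. All attribute names, preference weights, and per-treatment burden values below are hypothetical placeholders chosen only for illustration (the tool’s actual model and parameters are not reproduced here); the sketch simply shows how conjoint-derived weights could score and rank treatment options.

```python
# Hypothetical preference weights a conjoint analysis might produce
# (higher = the patient cares more about avoiding this outcome).
weights = {
    "urinary_dysfunction": 0.20,
    "sexual_dysfunction": 0.15,
    "bowel_dysfunction": 0.10,
    "hospital_stay": 0.05,
    "life_expectancy_loss": 0.50,
}

# Hypothetical expected burden of each outcome per treatment (0 = none, 1 = worst).
treatments = {
    "surgery": {"urinary_dysfunction": 0.6, "sexual_dysfunction": 0.7,
                "bowel_dysfunction": 0.1, "hospital_stay": 1.0,
                "life_expectancy_loss": 0.05},
    "radiation_therapy": {"urinary_dysfunction": 0.3, "sexual_dysfunction": 0.4,
                          "bowel_dysfunction": 0.5, "hospital_stay": 0.0,
                          "life_expectancy_loss": 0.10},
    "brachytherapy": {"urinary_dysfunction": 0.5, "sexual_dysfunction": 0.3,
                      "bowel_dysfunction": 0.2, "hospital_stay": 0.2,
                      "life_expectancy_loss": 0.10},
    "active_surveillance": {"urinary_dysfunction": 0.0, "sexual_dysfunction": 0.0,
                            "bowel_dysfunction": 0.0, "hospital_stay": 0.0,
                            "life_expectancy_loss": 0.80},
}

def expected_disutility(profile: dict, weights: dict) -> float:
    """Preference-weighted expected burden; lower means a better fit."""
    return sum(weights[attr] * profile[attr] for attr in weights)

# Rank treatments best-fit first (lowest weighted burden), as a "fit bar" would.
ranking = sorted(treatments, key=lambda t: expected_disutility(treatments[t], weights))
```

With these placeholder numbers, the ranking reproduces the ordering used as an example in the text (radiation therapy > brachytherapy > surgery > active surveillance); a different set of elicited weights would, of course, reorder the options.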
Upon completion of the questions, the tool generates an individualized report displaying a values profile based on the patient’s values and preferences and lists the suggested treatment options. This report is then used during the consultation with the treating urologist.
Data Collection
Usability testing using cognitive interviews occurred in person in a clinic room. Cognitive interviewing is a qualitative approach that is used to assess the content validity of patient-facing materials. 17 Patients were introduced to the quality improvement project by their treating urologist, who explained the purpose and then left the room. Upon confirming verbal interest to participate in the project, patients were joined by 1 or 2 members of the evaluation team and given an electronic tablet to walk through the decision support tool. Interviews were conducted by 1 trained interviewer on the team, with or without another team member observing.
As per observational research methodology, 1 or 2 observers took continuous jottings, including snippets of verbatim conversation, during the interviews to serve as memory aids documenting patient comments. 18 Interviews followed a semi-structured script with open-ended questions about the overall experience of using the decision support tool (for example, “What features of the tool did you like the most/least?” and “Are there any questions you would like to add/remove?”) and its usefulness more specifically (for example, “Would you have found a decision aid like this useful when making your decision about treatment for prostate cancer, and would you recommend this tool to a friend?”). Further probes (proactive and reactive) were used to elucidate emotional states experienced while using the tool: for example, the proactive probe “Is there anything else that, in your opinion, would be important for this tool to ask, but was not included?” or reactive (conditional or emergent) probes in which the interviewer responded to the participant’s behavior and responses during the think-aloud interview. Immediately after each interview, the interviewer expanded the jottings into detailed field notes in a standardized interview summary report of patient feedback, referencing the direct quotes that had been jotted down. The structured report template consisted of the following labels: participant number, date, time of interview, observer’s post-interview thoughts, interview summary, and notable quotes from interview. Interviews were not audio-recorded because of the observational methods used and to ensure anonymity of participants in this quality improvement project.
Analysis
We analyzed the summary reports using inductive thematic content analysis to examine themes and patterns of meaning within the text data, using codes derived from a subset of 5 interview scripts and then applied to all interview scripts. 19 Two members of the project evaluation team analyzed the interviews independently; both were completely independent of the conception, development of aims, and conduct of the interviews. Interview reports were reviewed independently by the 2 project team members and hand-coded to identify the code structure. Interviews were analyzed in groups of 5 to 6 patients, and the team members met for weekly consensus meetings during the ongoing analysis to discuss and resolve any discrepancies in coding and to discuss areas of convergence and divergence among participants’ responses. A summary consensus document with key reference quotes for each section was prepared for each meeting. After coding and consensus discussion were complete, a secondary analysis was conducted by combining the consensus documents and reviewing each section for common themes. Consistent with standard practice for qualitative interview analysis, feedback mentioned by at least 3 participants was included as a common theme. 20 After the completion of 4 consensus meetings, all consensus documents were combined into a larger convergence document that was shared and discussed with the quality improvement project team.
Results
Of the 21 patients with localized prostate cancer, 12 were accompanied by a family member or caregiver. Interviews lasted approximately 15 to 20 min. Patients expressed general satisfaction with the decision aid with respect to several key aims of decision aids. Patients felt that the decision aid was educational and increased their knowledge about treatment options. They felt that it helped clarify values and produce a values-concordant outcome (the “values profile”). They also found that the tool prompted them to think more deeply about treatment decision making. Based on their responses during the semi-structured interviews, patients found the length of the tool acceptable.
After 20 interviews, we identified and reached saturation for 5 themes: 1) patients had a negative emotional reaction to the tool, pointing out the unnecessarily negative framing and language used; 2) patients were forced to stop and think about values and preferences while going through the tool and found this process of deliberation useful; 3) patients were confused by the tool; 4) patients tried to discern the intent of the questions; and 5) there was a disconnect between patients’ negative reactions while using the tool and their contrasting general satisfaction with the final values profile.
Theme 1: Patients Had a Negative Emotional Reaction to the Tool, Pointing Out the Unnecessarily Negative Framing and Language Used
Patients felt overwhelmed by multiple options being presented at once and by what they felt was unnecessarily negative language. Some language used in the tradeoff questions was perceived to induce negative emotional states over and above what is inevitable when discussing a medical decision with important tradeoffs. One patient noted the “extreme language” of some of the questions, for example, the use of the words “permanent” and “no improvement over time.” Upon further questioning, he explained that the “maximizing life span and losing one year” questions were “fear-inducing.” Another patient confirmed this feeling: “The wording makes me nervous. It hits you on the nose and I was just diagnosed three weeks ago.”
The patients’ main negative reaction concerned the choice between symptoms. The language was thought to be scary and overwhelming, as it was a matter of choosing between “the lesser of two evils.” Patients disliked being presented with only negative outcomes in relation to their care. One patient spontaneously stated, “These questions suck!” When asked for clarification, he referred to the tradeoff questions being difficult to think about and choose between, because he felt that 2 different outcomes would both “suck.” Several other patients shared the same experience. One patient said, “These are bad questions. They are almost like psychology questions. It’s like picking the [lesser] of two evils.”
Theme 2: Patients Were Forced to Stop and Think About Values and Preferences While Going Through the Tool and Found This Process of Deliberation to Be Useful
Patients reported that their knowledge of prostate cancer treatments and side effects increased after completing the decision aid. A patient who had come to the urology clinic for a second opinion said, “I felt like the tool helped recall bits and pieces of what I had already learned. It tested my knowledge.”
This recollection of knowledge went hand-in-hand with patients reporting that the tool prompted a deeper consideration about the decision-making process of their treatment options. When probed about this sentiment, one patient said, “It was good to think this through and put my mind in the place of the things that can happen.”
This deeper consideration led another patient to state, “It was useful for quality of life questions since doctors don’t always ask these questions or have the time during appointments.” The same patient went on to describe the usefulness of the experience of completing the tool to “get the mind going,” prior to an in-depth discussion with their clinician.
However, not all patients found the tool useful in terms of prompting consideration of values and preferences. One patient illustrated difficulties with needing to make a decision, saying that a tool like this “wouldn’t be for me” as he preferred to talk to people (doctor, family members, etc.) about his different options, rather than a tool. The patient thought the tool failed to consider the human experience of decision-making.
Theme 3: Patients Were Confused by the Tool
While some patients found the process of deliberation to be useful, others were confused by the tool and had a more difficult time answering certain tradeoff questions.
Subtheme 1: Patients struggled to evaluate health states that they had never been in
It was difficult for patients to imagine themselves in future health states that they had never experienced and to further compare between them.
One patient pointed out, “One year is still a year, especially if you are on your deathbed, but it would depend on your quality of life during that year. People value it differently and you could probably put up with some baloney if you knew you could live longer.”
Another patient described his reasoning in the context of his professional life. He deliberated, “Do I want major abdominal surgery, or do I pee every time I smile? I’m a [profession] and I teach students. If I was incontinent and peed in class, I don’t think I could go to work. . . . This one is a difficult choice, because if I have to run to the bathroom because of an urgent bowel movement, I would have to stay home for 6 months.”
One patient questioned, “Objectively speaking, are these scenarios that would happen in the future? I can’t imagine either one.” These future health states, presented as possible outcomes, even led one patient to express discomfort: “Everyone would want to avoid urinary problems or bowel problems. I wouldn’t imagine someone wanting any of these symptoms.”
Furthermore, the tradeoff questions described symptoms in medical terms and did not prompt patients to consider the social meaning or consequences of those symptoms. Patients reacted negatively to wording that took symptoms such as urinary and bowel problems at face value, using the medical definition without considering their social meaning. The responsibility then fell on the patient to translate medical terminology into socially meaningful (hypothetical) health states relevant to their personal quality of life. One patient noted that the questions assumed he was working and said, “Work isn’t something I have to worry about, so a lot of these questions don’t matter to me.”
Subtheme 2: There was wide patient confusion over comparing between tradeoff questions that paired health states together
Patients expressed further difficulty with questions containing time-based, temporal aspects, citing them as hard to comprehend, specifically when different time frames were presented within a single question.
Another patient was confused and said, “I think it [tradeoff question] was confusing, the ‘AND’s’ and ‘OR’s’ really threw me off. I’m not sure this cleared up anything.”
One patient questioned the design of questions that paired 3 options on one side against 3 on the other: “I’m just thinking about the people who came up with these questions? You didn’t make this tool, did you?”
Theme 4: Patients Tried to Discern the Intent of the Questions
By having to trade off between the listed health states and possible symptoms of each question, patients felt they needed to deconstruct the true meaning and purpose of the questions. With further probing, some patients reported that the intent of the tool felt unclear to them due to its overall design. This questioning of the intent was often presented as general confusion, and patients wondered whether the primary purpose of the tool was for education, for helping to make a treatment decision, or simply a discussion guide. Patients inquired curiously about the development and design of the tool. One patient wondered how this tool would apply to his choice of treatment, stating, “Will it matter?”
Building on patients’ assumed responsibility to infer the intent of the tool, one patient attempted to understand the meaning behind the questions and tease out whether specific side-effects presented pertained to certain treatments: “Oh, so these questions are sort of binary options? So, this is like surgery or radiation? This is definitely asking about surgery versus radiation.”
Theme 5: There Was a Disconnect Between Patients’ Negative Reactions While Using the Tool and the Contrasting General Satisfaction With the Final Values Profile
Upon completing the tool, the patient was shown their personal values profile. The project evaluation team member asked each patient, “Do you think that this values profile aligns with your values and preferences?” Intriguingly, while most patients had expressed confusion and skepticism while using the tool, almost all nevertheless answered positively, with a simple “yes” or “this seems about right.” They felt that the tool had accurately elicited their values and preferences in relation to the various side effects of the available treatment options. The values profile was seen as useful information that could initiate further conversations among patients, their providers, and caregivers. Many patients thought this feature would be beneficial for others facing the same decision: “This [values profile] would be helpful for future patients.” One patient said, “It helped me identify my values, that I desire less side effects such as urinary, sexual, and bowel dysfunction that I don’t want to experience.”
Discussion
In this quality improvement project, we assessed the clinical utility of a decision aid for prostate cancer treatment decisions by interviewing 21 patients using a think-aloud approach as they walked through the tool, under observation by the project evaluation team. While we found that patients expressed some negative emotions while going through the tool and while thinking through the decision, we also found that patients felt that the tool was educational and increased their knowledge, helped them think more deeply about treatment decision making and their own values and preferences, and prompted interactions with caregivers.
The most commonly applied methodology to evaluate a finalized decision support tool is quantitative, using measures such as decisional conflict or satisfaction with decision scores, as documented in systematic reviews.4,5 For example, the decision support tool assessed herein was previously tested with 109 users in a feasibility study showing that decisional conflict decreased by 37%.
Several prior studies have assessed patients’ experience of decision aids in their own words, but many interviewed patients retrospectively, after they had used the decision aid and sometimes even after they had made their treatment decision. This contrasts with our approach of soliciting impressions while the patient was using the aid. Such retrospective designs would likely miss many of the findings we report, which arise as patients look at the aid for the first time.21,22 One study did not even require the participants to have the decision aid on hand while completing the interview, so these participants were evaluating the aids from memory. 23
A prominent finding was the important disconnect between patients’ reactions and emotions while completing the decision aid and their feelings about the final values profile. Many patients felt confused and skeptical about the intent of the decision aid while using it, and these feelings often developed into a general negative response to the decision aid. Arguably, the confusion and skepticism may be tied to patients’ difficulty imagining future, unknown health states and to the nature of prostate cancer itself: these men are generally asymptomatic. Furthermore, prostate cancer treatments may affect several domains of a man’s life (sexual, urinary, and bowel function as well as quality of life), making this a complex decision in which multiple domains are balanced against longevity and the chance of curing the cancer. Even active surveillance involves anxiety and biopsies. In contrast, for a treatment decision such as knee replacement, patients may have a more concrete improvement to think about. However, these initial strong negative reactions were complemented by positive responses once the decision aid produced a values profile, since patients felt that the values profile reflected their preferences accurately.
Almost all patients agreed with what the values profile was showing them; in other words, the values clarification exercise appeared to elicit patients’ preferences accurately, and patients perceived the resulting congruence between their values and the suggested choices as correct. Another way to interpret this is that participants appreciated being told their “answer” but disliked the process of making difficult tradeoffs to get to that answer. Making those tradeoffs feels “bad,” but having some clarity at the end feels “good.” A common view in the SDM community is that negative responses during the decision-making process are natural and inevitable for particularly challenging medical choices, but the observation that people are ultimately happy with the decision tool’s output is a key justification for its use. Engaging with difficult choices up front may prevent unrealistic expectations and reduce decisional regret. In a prior study using this tool, men exhibited very low levels of decisional conflict after using the decision aid. 24
There is also a possibility that the observed disconnect between patients’ views of process and outcome does not accurately reflect patients’ preferences but rather reflects cognitive biases. These might include choice-supportive bias, hindsight bias, and the “IKEA effect.” Choice-supportive bias is misremembering choice-related information in a way that boosts the chosen option or demotes the foregone options. 25 Hindsight bias is the tendency to view events as more predictable than they really are; 26 in other words, after an event, people often believe that they knew, or at least could have predicted, its outcome before it happened. The IKEA effect is the increase in valuation of self-made products. 27 That is, people tend to value “the product of their labor,” which could potentially explain why patients appreciate the values profile produced after spending time and effort completing a decision aid. An element of social desirability (people-pleasing) could also be at play: when asked by an interviewer in a clinical practice room, “Do you think that this values profile aligns with your values and preferences?” patients may be inclined to agree simply to be polite to the medical team or the developers of the tool.
Another limitation is the fact that the patients’ urologist introduced the project and may have biased the patient toward evaluating the tool more positively (e.g., if they admire their doctor they may believe that the fact their doctor endorses the project means that the tool must be worthwhile). To circumvent this, the interviewer verbalized to patients at the beginning of the interview that they could feel free to be as critical about the tool as they wanted, and patients shared both negative and positive feedback.
The observation of decision aids enabling a process of deliberate decision making in our interviews (“pause and think about things”) has been confirmed in prior studies, including qualitative interviews with pregnant women facing the decision whether to screen for fetal abnormalities (“it made you think twice”). 28 Patients also expressed suggestions for improvement regarding the presentation of information in decision aids to avoid patients feeling overwhelmed and addressing literacy. Recommendations for developing plain language decision aids for localized prostate cancer treatment decisions have been described previously, including methods to present medical text and numerical formats. 29 A recent systematic review of 19 decision aids for localized prostate cancer treatment concluded that little attention was paid to communicative aspects (e.g., presentation of information and suitability of information) in many tools, corroborating patients’ perceptions of aspects of the tool evaluation in this project. 30
Strengths and Limitations
Strengths of this project include the requirement that participants have the decision aid on hand, as opposed to having participants try to remember the aids. As described above, some studies interviewed participants about their experiences using the decision aid weeks after they had used them and sometimes even after they had already made a treatment decision. We believe that our approach allowed us to obtain more accurate assessments of participants’ true experience of using the decision aid. Moreover, many studies used quantitative surveys to ask participants about their experiences, which did not allow for many open-ended or spontaneous responses. In contrast, we used a qualitative interview design with the think-aloud interviews, which allowed participants to describe their experiences in their own words as they reviewed the decision aids for the first time. Our approach allowed us to obtain richer data by observing the participants’ process and deliberation throughout the qualitative interviews, as opposed to reading what participants had to say about the experience in a free text box on a questionnaire.
It should also be mentioned that the contemporary version of the tool has undergone edits and revisions since the time these patients were interviewed. There are now screens that introduce medical concepts prior to rating exercises, and the partial profile comparisons involving 3 separate outcomes—explicitly described as confusing by our participants—have been removed.
With regard to limitations, we note that field notes rather than audio recordings were taken; however, we were able to capture near-verbatim quotes because 2 interviewers took continuous handwritten jottings of snippets of conversation and completed the interview report immediately after each interview. We did not collect sociodemographic, education, or health literacy information from participants beyond the average age, race and ethnicity, and clinical tumor characteristics of the typical patient population in this setting. There was also no Spanish version of the tool; 1 participant’s daughter translated it for him, which is not the same experience as reading the tool directly. Participants may also not have felt open to sharing their honest opinions about the decision aid with the project evaluation team; they might have been more comfortable and willing to provide feedback in a more neutral setting, to observers whom they did not associate with the tool.
We also observed that having patients walk through the tool prompted spontaneous interaction with accompanying caregivers. Future research should examine whether decision aids can increase dyadic interaction around important, preference-sensitive medical decisions.
Finally, we note that some of the concerns raised by patients, particularly about being faced with binary choices, reflect the framework of conjoint analysis, which is not a feature of many decision aids; thus, some of these concerns may not generalize to other tools. The tool tested in this study was a multicomponent intervention, and patients may find some features more desirable than others.
Conclusions
Web-based decision aids aim to facilitate SDM for patients with localized prostate cancer considering treatment options and to educate patients about side effects. We found a noticeable disconnect between patients’ negative reactions toward the process of preference elicitation and their satisfaction with the final values profile generated from that elicitation. Given the strong possibility that this effect results from a variety of cognitive biases, further studies are required to determine the degree to which patients’ positive opinions of their values profile truly reflect effective values clarification rather than an effect of cognitive biases. This analysis also demonstrates the value of eliciting patient perceptions of decision aids in their own terms, and we encourage studies of other decision aids using this approach.
Footnotes
Acknowledgements
We sincerely thank Konstantina Matsoukas, research informationist at Memorial Sloan Kettering Cancer Center, for assistance with the literature review.
The authors declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: Andrew J. Vickers is a co-inventor of the 4Kscore, a commercially available reflex test for predicting prostate biopsy. He receives royalties from sales of the test. He owns stock options in Opko, which offers the test. The remaining authors have nothing to disclose. The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Financial support for this quality improvement project was provided by the Movember Foundation. The authors’ work on this article was supported through a National Institute of Health/National Cancer Institute Cancer Center Support Grant (P30-CA008748) to Memorial Sloan Kettering Cancer Center. S.V.C. was further supported by a National Institutes of Health/National Cancer Institute Transition Career Development Award (K22-CA234400). The funding agreement ensured the authors’ independence in designing the project, interpreting the data, writing, and publishing the report. This work was previously presented at the Society for Medical Decision Making (SMDM) virtual meeting 2020.
