Abstract
As the pace of medical discovery widens the knowledge-to-practice gap, technologies that enable peer-to-peer crowdsourcing have become increasingly common. Crowdsourcing has the potential to help medical providers collaborate to solve patient-specific problems in real time. We recently conducted the first trial of a mobile, medical crowdsourcing application among healthcare providers in a university hospital setting. In addition to acknowledging the benefits, our participants also raised concerns regarding the potential negative consequences of this emerging technology. In this commentary, we consider the legal and ethical implications of the major findings identified in our previous trial including compliance with the Health Insurance Portability and Accountability Act, patient protections, healthcare provider liability, data collection, data retention, distracted doctoring, and multi-directional anonymous posting. We believe the commentary and recommendations raised here will provide a frame of reference for individual providers, provider groups, and institutions to explore the salient legal and ethical issues before they implement these systems into their workflow.
Introduction
The Health 2.0 movement has gained a foothold in the healthcare industry as applications that enable crowdsourcing grow in popularity.1,2 By allowing healthcare providers (HCPs) and patients to explore new and innovative approaches to accessing and distributing relevant medical information, crowd-based technologies are now viewed as a potentially viable solution to the increasing complexity of modern medicine.3–8 While the literature has primarily examined peer-to-peer and peer-to-physician crowd-based technologies, the role of technologies that enable physician-to-physician crowdsourcing is typically overlooked.1,9 The purpose of this article is to explore the salient legal and ethical issues that accompany crowdsourcing in the healthcare arena and to increase awareness among HCPs and policymakers before the adoption of crowd-based technologies becomes more widespread.
Over the past decade, the variant of social networking called crowdsourcing has been used to tap into the collective intelligence of skilled workers.10 Growth in the mobile device market, together with the expansion of wireless networks across private and public organizations, has fueled the adoption of crowdsourcing. In the business environment, crowdsourcing provides an opportunity to explore problems at low cost with a crowd that has a range of complementary expertise. Crowdsourcing has also proven to be a powerful tool for scientific research given its ability to enable data capture at reduced cost.2,3,11,12 The corporate environment incentivizes collaboration through crowdsourcing, with companies tracking employee participation as a metric for promotions. Current estimates indicate that the majority of physicians use either smartphones (84%) or tablet devices (53%) for professional purposes at work.13
Historically, crowdsourcing has also appealed to many industries, particularly outside of healthcare, because it enlists a range of people to quickly obtain information or solve a problem at relatively low cost. At the same time, it builds on the natural inclination of people to help others. Helping others, especially when an individual can demonstrate expertise within a field, can produce the “warm glow” effect described by psychologists and thus may carry internal positive reinforcement in addition to the external impact of assisting another professional within a community.14
Within the healthcare industry, peer-to-peer applications like CrowdMed.com and PatientsLikeMe.com try to fill gaps in the naturally existing dialog between patients and physicians by allowing patients to discuss diagnoses and treatment options with other patients, or laypeople, at a distance.8,15 HealthTap, using a peer-to-physician model, boasts a network of 67,000 physicians who can answer patient questions over the Internet within 24 h.16 While social networking websites like Sermo and Doximity have amassed a large panel of physicians, they primarily provide a social networking platform rather than a forum to discuss time-sensitive patient information.17,18 Other applications, like TigerText and Medigram, while initially designed to support timely communication between provider teams, are now being geared to share information intended for use in direct treatment scenarios.19,20 Despite these developments, medical institutions have been slow to implement crowd-based technologies because of their as yet unproven effect on both patient outcomes and cost. And while Sermo, Doximity, CrowdMed, and other companies have incorporated elements of crowdsourcing for marketing and business promotion, up until our trial of the mobile crowdsourcing application DocCHIRP, direct physician-to-physician crowdsourcing remained untested in the healthcare domain.
Closing what has been coined the “knowledge-to-practice gap” at the point of care involves blending clinical experience and organization-specific knowledge with published data.21 As would be predicted, knowledge gaps encountered in practice involve matters related to advances in diagnosis, drug therapy, or treatment.22,23 Despite the volume of published literature, information on rare disorders and guidance regarding complex medical decisions are not well represented. Moreover, many clinicians lack the time and skill to mine the best available evidence efficiently.24 Variability in drug prescription practices, excessive use of surgical services, and the irrational provision of end-of-life care are just a few symptoms of this problem.25 When confronted with these questions, HCPs often turn to colleagues whom they trust and who possess the necessary expertise.26,27 Unfortunately, time constraints and geographic separation limit face-to-face discussions between colleagues, and electronic mail, paging, and text messaging serve as inadequate surrogates for various reasons.28 And despite their pervasive use, electronic medical records are intended, appropriately so, to store and share patient data, not to provide a portal for provider communication.29–31 Given the association between provider communication, patient outcomes, and healthcare costs, payers are beginning to recognize the need to find ways to enhance collaboration across the care spectrum.22,32–34
In this commentary piece, we provide an overview of our experience with the DocCHIRP trial and post-trial survey and focus primarily on the key concerns raised by the participating cohort of providers. In particular, we address potential legal and ethical issues surrounding the use of physician-to-physician crowdsourcing and put forth recommendations for HCPs and medical institutions to consider as they move through the evolving landscape of the Health 2.0 movement. By focusing the conversation on these topics, we hope to inform the future development of crowd-based applications that help medical providers deliver evidence-based, high-quality care in a way that is both efficient and aligned with the core values of the doctor–patient relationship.
Conceptual basis of the commentary
Experience from the DocCHIRP field trial
We developed and tested a crowd-based application with 85 physicians at the University of Rochester Medical Center to investigate the feasibility and practicality of physician-to-physician crowdsourcing.35 DocCHIRP (Crowdsourcing Health Information Retrieval Protocol for Doctors) was designed to help HCPs share explicit and tacit knowledge when making medical decisions in near real time. Developed in 2012 for mobile (iOS (Apple Inc., Cupertino, CA) and Android (Google Inc., Mountain View, CA)) and desktop use, DocCHIRP used push and email notifications to enable near real-time collaboration among HCPs.35 Network access was restricted to verified HCPs, who could select and manage members of their crowd, set notification preferences, and publicly display areas of expertise.
Over the 244-day trial period, 85 providers logged 1544 total visits to the DocCHIRP server. Users included physicians (91%) and nurse practitioners (9%). The majority of HCPs were pediatricians (n = 28, 33%) and neurologists (n = 27, 32%) who favored mobile devices (67.1%), with the majority using iPhones (81.7%). Post-trial surveys show that most users (>80%) felt crowdsourcing could help diagnose unusual cases, facilitate appropriate patient referrals, and support problem-solving at the point of care.36 In addition, users agreed that the approach could help practice groups and institutions establish and publicize standards of care.36
Experience from the post-trial user survey
The post-trial survey was conducted anonymously with 72 of the 85 DocCHIRP participants (85% response rate) using the online SurveyMonkey platform. The survey included questions about provider demographics, use of mobile technologies in clinical practice, frequencies and modes of HCP-to-HCP communication, and impressions of medical crowdsourcing. Survey respondents were asked to self-identify as users or non-users based on their DocCHIRP usage (never, occasionally, regularly), and these self-reports were confirmed against server transcripts. All analyses were performed using Statistical Product and Service Solutions (SPSS) version 15.0 software (SPSS Inc., Chicago, IL, USA). Data were organized in two-by-two tables, and Fisher’s exact tests were performed to look for differences between users and non-users. An exact, two-sided α of less than 0.05 was considered statistically significant.
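The kind of comparison described above can be reproduced with standard tools. As a rough illustration, the following sketch implements a two-sided Fisher's exact test for a two-by-two table using only the Python standard library; the counts shown are hypothetical, not data from the survey.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins whose point probability does not exceed that of the
    observed table (the standard two-sided convention).
    """
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(x):
        # Probability of x in the top-left cell, with margins fixed
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # A small tolerance absorbs floating-point noise in the comparison
    return sum(p for p in (p_table(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

# Hypothetical counts: users vs. non-users endorsing one survey item
p_value = fisher_exact_2x2(30, 10, 15, 17)
print(round(p_value, 4), p_value < 0.05)
```

In practice, `scipy.stats.fisher_exact` performs the same computation and also returns the odds ratio.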
Approach to the commentary
While many users in our trial endorsed the overarching concept of provider-to-provider crowdsourcing, they also identified barriers that could interfere with crowd-based applications. Specifically, HCPs voiced concerns over a range of legal and ethical issues, including (1) privacy, (2) security, (3) personal liability, (4) information discovery and data retention, and (5) patient safety including the impact of digital technology on “distracted doctoring.” To provide complementary perspectives on these topics, we recruited legal experts from the State University of New York office of General Counsel (S.G. and J.S.) with expertise in records management and digital privacy and a bioethicist (M.H.Sh) with expertise in the intersection of law and healthcare. The narrative comments generated through iterative discussion are presented with an introduction of the sub-topic by the study authors (M.H.-S. and M.W.H.), followed by commentaries on the salient legal and ethical implications.
Privacy
In our experience, providers equated program success with receiving high-quality responses promptly from a crowd of sufficient size, experience, and expertise. However, providers also wanted to be able to open the channel for discussion without disclosing their ignorance or otherwise tarnishing their professional reputation. Our post-trial survey asked whether physicians would take advantage of the opportunity to post anonymously. While anonymity could protect the reputation of the index provider, many providers believed it would undermine user confidence in the rapid recommendations made by the crowd at large. Moreover, some thought that blocking the use of anonymous bi-directional posting would increase the likelihood that physicians and other providers would use the system. It is worth noting that most providers felt the ability for users to consult outside of their specialty was a crucial feature of the overall approach. Included within this category of outside specialties were members of the allied health professions, including social workers, physical therapists, pharmacists, and others. Thus, by reaching outside the specialty, it seems less likely that users would be perceived as inexperienced, since the information they seek is outside their expected knowledge base.
Legal issues
When considering data privacy, it is necessary first to define the nature of the data. Certain data receive legal protection. For example, if the data are classified as “protected health information” (PHI), the Health Insurance Portability and Accountability Act (HIPAA) guarantees patients a right of access to information that is part of a “designated record set” used to make decisions about their treatment.37 If the data include PHI, then it is necessary to consider whether questions and answers posted on DocCHIRP would be part of the “designated record set.” Defining the contours of a designated record set can be complicated, but for our purposes can be broken down into a two-step process (we assume that data in DocCHIRP meet the basic HIPAA classification elements). First, are the data used, “in whole or in part … to make decisions about individuals”? Second, do the data contain information that is personally identifiable to any individual patient? If the answer to both is yes, then the information in the application is likely to be included as part of the patient’s designated record set.
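The two-step screen reduces to a simple conjunction. The sketch below is purely illustrative: the function and its argument names are ours, not part of any HIPAA tooling, and real determinations require legal judgment rather than a boolean test.

```python
def likely_in_designated_record_set(used_for_decisions: bool,
                                    identifies_patient: bool) -> bool:
    """Two-step screen described above: a posting is likely part of the
    designated record set only if it is used, in whole or in part, to
    make decisions about an individual AND contains information that is
    personally identifiable to a patient."""
    return used_for_decisions and identifies_patient

# A question about off-label dosing with no patient identifiers fails
# the second step, so it likely falls outside the record set:
print(likely_in_designated_record_set(True, False))
```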
Applying this simple test to our data, we see that if a posting includes questions and answers intended to assist the HCP in making treatment decisions and identifies an individual patient, then that posting is likely part of the patient’s designated record set. If the postings are not targeted at soliciting or offering treatment advice, or do not include identifiable health information, then they would not be protected under HIPAA and would not constitute part of the designated record set. In DocCHIRP, which supported provider collaboration through just-in-time crowdsourcing, users were not prompted to provide any PHI. However, the application supported free text communication, and unique information including the patient’s name, age, and diagnosis could ostensibly be disclosed in violation of use policies designed to protect privacy. In our experience reviewing the use transcripts, however, physicians were typically asking about the acceptable off-label use of medications, seeking advice on the best use of diagnostic tools, and soliciting help in broadening the differential diagnosis of complex cases. (Note: in this trial, the identity of the user was known; because the provider’s census would be easily discoverable through the institution’s electronic medical record (EMR), the identity of the patient could also be readily discoverable.)
Ethical issues
The patient–provider relationship requires trust, including trust in each other to maintain confidences. Only in a setting respectful of confidences can a patient safely divulge her personal and medical history, private thoughts and feelings, and other information necessary for the provider to understand, diagnose, and treat. At the same time, the practice of medicine is both interprofessional and collegial. While law and professional ethics protect the confidentiality of patient information, these protections allow providers to consult with colleagues to provide the best care possible. These consultations can be formal, such as when a provider from another service independently examines the patient or patient information; or informal, such as when a provider asks for a colleague’s thoughts or advice outside of the patient’s room. A significant difference between the two lies in the relationship created: formal consultations create a patient–doctor relationship, whereas informal consults involve only provider-to-provider relationships. Informal consultations, sometimes called curbside consultations, are not new but were previously limited by proximity.3 Use of technology to facilitate informal consultations raises new issues.
A traditional informal consult occurs when a provider has a question or concern about a patient’s care and seeks out a colleague to discuss it. This model allows the provider to select the colleague using criteria that meet specific needs. If the provider is asking a question that requires knowledge, he or she will seek out a colleague known to possess the requisite knowledge. If the provider is asking a question of medical judgment, he or she will seek out a colleague with respected clinical judgment. Notice that these interactions require proximity and trust. Only in a setting respectful of confidences can a provider share uncertainty or vulnerability. This selectivity protects against negative judgment, which explains the concerns expressed above by users of DocCHIRP. Use of technology greatly enhances the potential power of informal consultations by removing the proximity barrier, but realizing that full potential still requires trust among the professional users.
In addition to raising issues of trust in provider–provider relationships, crowdsourcing technology raises issues of trust in the doctor–patient relationship. Patients historically trusted providers’ opaque decision-making processes. Shared decision-making (if known by the patient) complicates the relationship in potentially positive ways. Patients expect providers to have knowledge and expertise and may not appreciate the complexities of medical care today. Honesty and transparency are critical. HCPs must help patients understand the collaborative and collegial nature of medicine and appreciate the contributions of all team members, including those consulted through technology.
Security
Trust in the security of the crowdsourcing network was a dominant theme in the post-trial survey. Despite an architecture biased away from accepting PHI, users remained wary of the potential disclosure of PHI. These concerns were allayed in part by the option to have the application hosted on servers owned by the academic medical center, behind the institutional firewall. Ironically, providers were conflicted on this point: they also felt it would be useful to collaborate beyond the confines of the institution and connect with providers on a regional and national scale. This tension between protection and the opportunity for open, intellectual discussion is a root issue in the use of crowdsourcing in healthcare.
Legal issues
HCPs using an application such as DocCHIRP would need to consider whether postings should be included in the designated record set and be prepared to defend that decision. The best way to do this, and an important precursor to widespread implementation of something like DocCHIRP, is to have clear institutional processes in place for identifying designated record sets in the application, as well as means to grant or deny patient access and to review these decisions institutionally. Deliberately considering the characteristics of the postings allows HCPs the opportunity to reflect on which aspects of clinical decision-making ought to be part of the patient’s record. While the patient record includes data used in clinical decision-making, it does not usually include questions clinicians ask one another with the goal of generally informing their base knowledge or future judgment.
Ethical issues
As seen in the discussion above, trust in the security features of any technological intervention is also critical. Here, the ethical issues overlap with the legal issues. Providers are ethically committed to protecting the personally identifiable health information of patients and will not risk patient exposure even at the cost of inhibiting technological advancement.
Personal liability
The concept that diseases could be reduced to a discrete set of variables and handled using computational algorithms gave rise to expert decision support systems for clinical use.38 Unfortunately, these technologies failed to meet the day-to-day requirements of HCPs in several regards. First, they failed to represent the totality of medical knowledge accurately. Second, without a codified lexicon, the algorithms could not infer complex models of disease from raw data alone. Third, the algorithms commonly returned lists of diagnoses too extensive and abstract to be useful. On a practical level, desktop computers did not fit into the workflow of clinicians. Prior research also shows that providers simply did not trust the advice provided by such systems, even in situations where the reasoning was transparent; this was mainly due to lack of knowledge regarding the provenance of the data and the completeness of the analyses.39 In the end, we recognize that trust is ultimately a human phenomenon; users will only trust the system if they know the information originated from either a trusted individual or institution.
While clinicians have never widely adopted computer-based differential diagnosis programs, most of our study participants were not concerned about the legality of using a human-backed crowdsourcing application at the point of care. This is related in part to the accumulation of quality, evidence-based recommendations for many common disorders, which has led to the creation and dissemination of algorithmic care pathways, particularly in large healthcare organizations. While initially resistant to these approaches, physicians increasingly recognize the role of care pathways in improving outcomes and providing safe and cost-effective care to patients.
Legal issues
In a recent consensus statement, the Food and Drug Administration considered collaborative tools like DocCHIRP to be medical references and, thus, exempt from regulatory oversight.40 To this point, it is worth considering that before access to the Internet became so ubiquitous, it was commonplace for physicians to consult medical texts that were often several years out of date when formulating a therapeutic plan. In the current era, physicians have unlimited access to medical information from print and digital sources as well as peer-to-peer interaction, but it remains the responsibility of each provider to judge the merits of the information before putting it into practice. In fact, some HCPs involved in the DocCHIRP field trial raised concerns that the technology would provide a window into their clinical decision-making process. Consequently, participants also asked if the application could provide anonymity. This fear of transparency may be related to the fear of litigation, but it may also be related to a fear of exposing perceived inadequacies to colleagues and others. The nuances of this fear affect the use of technology designed to facilitate collaboration in clinical decision-making. It is possible that digital natives may be more comfortable with this technology than digital immigrants, and the technology may see greater uptake as digital natives come to dominate the profession.
Ethical issues
Clinical decision-making is complex, individualized, and situated in an ever-changing landscape. It requires teamwork and collaboration. At times, an HCP may feel uncertain, fear inadequate knowledge of a particular topic, or fear being perceived as having insufficient knowledge. In these situations, it may feel safer to stop a trusted colleague in the hallway to ask a question—only the trusted colleague is witness to the uncertainty; the colleague is trusted not to perceive an inadequacy that does not exist, and the colleague is trusted to remediate any inadequacies without judgment. Exposing one’s internal decision-making process requires trust. Applications designed to facilitate collaboration in clinical decision-making can enhance the quality of decisions, but only if HCPs trust the community of users. The difference between the hallway conversation and posting a question electronically is significant because the latter not only requires trust of the community of users but also creates a record that others, people outside of the trusted community, can potentially access. It becomes important to consider who outside of the trusted community might access the data and for what purposes.
Data retention, accuracy, and information discovery
Physicians are bound by a code of conduct established by Hippocrates in the pre-common era to use “treatment to help the sick according to [one’s] ability and judgment, but never with a view to injury and wrong-doing.” It stands to reason that the risk of an adverse outcome would be higher under circumstances of diagnostic or therapeutic uncertainty; these are the same conditions under which a physician may seek advice from one or more colleagues. Under usual conditions, such conversations would be conducted in person or by phone. However, with the ability to recover past email and text communication, there is growing concern that these conversations could ultimately come back to create issues for providers in situations where patients experience adverse outcomes.
Legal issues
If the data are considered useful at the division or institutional level, an oversight organization might also find the data useful. Institutions may consider the data collected to define the current standards of practice and may want to disseminate aggregate findings to all HCPs in the practice field, not just those who use the application. The data might inform decisions about educational programs the institution ought to sponsor. It could identify potentially useful research questions or gaps in clinical knowledge. Individual HCPs would have the same interest as the institution in aggregate data that can improve patient care. Patients may be interested in the specific data relevant to their medical care, even if the data do not include their PHI. Patients may believe that having insight into the HCP’s thought process would help inform their decisions, including their decision about whether to trust their provider. Patients unhappy with their treatment, or their legal representatives, may want access to the data as part of a medical malpractice lawsuit.
The rate of medical advancement is also accelerating. As it does, any library of medical advice must guard against presenting outdated or withdrawn advice as up to date. To that end, designers should consider methods for ensuring that out-of-date information is removed from the library. One option is to enforce automated deletion after a certain amount of time, although this risks discarding relevant and accurate information along with outdated or incorrect information, commonly analogized as “throwing the baby out with the bathwater.” Alternatively, users could flag or vote on items that are outdated, after which they can be reviewed and removed. A third option is to maintain the service “in the present” and not keep a deep back library of past questions and answers. In the legal arena, the currency and accuracy of information shared in a digital environment are also issues of concern. Here, court cases and statutes are subject to a system that details currency (LEXIS-NEXIS calls this “shepardizing” and Westlaw calls it KeyCite). Readers can learn whether a law or case has been supported, criticized, or overturned by later cases or statutes. Short of the Cochrane review process41 or a meta-assessment of a journal’s reputation and an article’s citation score, there is no structured system in medical informatics that incorporates such perspectives. As publication increases and digital publications can live forever, the medical community should consider developing a similar system wherein users can easily determine the currency and accuracy of published articles regarding treatment.
In the virtual context, there are two methods one could consider for providing oversight of crowdsourced content. The first is the Wikipedia method, wherein users edit or flag certain content for editing. The advantage here is that editors of known ability can offer a clear review and act as gatekeepers for content. The disadvantage is the cost in time or other resources for those editors (who likely have other jobs and responsibilities) and for the system. Another method for crowd review is the Yelp method, wherein users and practitioners can rate research as accurate and useful or not. This spreads the responsibility of voting content up or down among many users, reducing resource costs, but it is less trustworthy because there are no designated authoritative editors. In either case, it would help to identify or require disclosure of conflicts of interest by raters or editors, though this is less feasible on an anonymous platform.
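To make the trade-off concrete, the two review models could also be combined: crowd flags and votes surface candidates, while designated editors make the final call. The sketch below is a minimal illustration; the threshold, field names, and workflow are invented for this example, not drawn from any deployed system.

```python
from dataclasses import dataclass

@dataclass
class Posting:
    text: str
    up_votes: int = 0      # "Yelp method": users rate content up or down
    down_votes: int = 0
    flags: int = 0         # users flag content as possibly outdated
    retired: bool = False  # set only by a designated editor ("Wikipedia method")

FLAG_THRESHOLD = 3  # flags needed before a posting is queued for review

def needs_editor_review(p: Posting) -> bool:
    """Crowd signals surface candidates; editors retain final authority."""
    return not p.retired and (p.flags >= FLAG_THRESHOLD
                              or p.down_votes > p.up_votes)

def editor_retire(p: Posting) -> None:
    # A designated editor removes out-of-date advice from the library
    p.retired = True

post = Posting("Drug X is first-line therapy for condition Y", flags=3)
print(needs_editor_review(post))  # queued once the flag threshold is met
editor_retire(post)
print(needs_editor_review(post))  # retired items drop out of the queue
```

The hybrid keeps the low cost of distributed voting while preserving the gatekeeping role of identifiable, accountable editors.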
Whatever information is posted, tracked, stored, or used in a crowdsourcing application will be accessible to outside (nonuser) parties via various legal avenues. The parties to litigation are sometimes given access to a wide swath of information about the claims at issue through discovery—or in this case, e-discovery.42 Considerations of proportionality will sometimes come into play, and a judge will limit the scope of e-discovery in such a way as to equate the burdens of production to the potential value of the information produced to proving or disproving material facts at issue in the litigation.42 However, by and large, the sole standard governing access to data in e-discovery is relevance, and the sole defense is privilege. Privilege does not stand as a defense to disclosure when the adverse party (i.e. the plaintiff) is the patient who controls the provider–patient privilege.43 Therefore, if patient information exists in the application that is relevant to that litigating patient’s claim, it will often be discoverable during litigation. Moreover, a judge could even order that other patient data be de-identified (or, less often, produced under a protective order) if it were shown to be particularly relevant to the claims at issue.
Government entities have broad power to issue subpoenas and similar demands for records and information in the course of administrative and criminal investigations. Often such demands can be issued directly by investigators without judicial oversight.44 Such demands are especially common in the highly regulated healthcare environment. Moreover, many entities grant insurers and financiers auditing rights over treatment records. Whatever information is created and used in the course of treatment will be of interest to the overseers of the healthcare industry, especially when the vehicle for such information is novel.
All of the access considerations above also implicate retention considerations, and institutions adopting crowdsourcing technologies to facilitate clinical decision-making need to create or amend retention policies to address the issues raised in this article. Under HIPAA, various sets of records must be maintained for 6 years or longer.45 Information relevant to litigation must be retained from the moment such litigation is reasonably anticipated until the potential claim is resolved or the statute of limitations has expired.46 State and federal oversight regulations impose a surfeit of retention periods over various types of treatment records, many of which will likely be interpreted or expanded to cover some crowdsourcing postings. Contracts with affiliates and vendors, especially insurance companies, impose additional retention periods that must be considered. Finally, practical considerations concerning the continuing value, or lack thereof, of postings need to be brought to bear in some workable manner.
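A retention policy reconciling these overlapping requirements can be thought of as taking the longest applicable period, with litigation holds overriding everything else. The sketch below is hypothetical: the 6-year floor follows the HIPAA figure cited above, while the function and its inputs are invented for illustration and are no substitute for institutional counsel.

```python
from datetime import date, timedelta
from typing import Optional

# HIPAA floor cited above: records kept "6 years or longer"
HIPAA_MIN_RETENTION = timedelta(days=6 * 365)

def earliest_disposal_date(created: date,
                           litigation_hold: bool,
                           contract_retention: Optional[timedelta] = None) -> Optional[date]:
    """Return the earliest date a posting may be disposed of, or None
    while a litigation hold is in effect (retain until the claim is
    resolved or the statute of limitations expires)."""
    if litigation_hold:
        return None
    keep = HIPAA_MIN_RETENTION
    if contract_retention is not None and contract_retention > keep:
        keep = contract_retention  # longest applicable period governs
    return created + keep

print(earliest_disposal_date(date(2020, 1, 1), litigation_hold=False))
print(earliest_disposal_date(date(2020, 1, 1), litigation_hold=True))
```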
Ethical issues
Crowdsourcing technologies record information allowing for data review and analysis. This has implications for how a provider considers privacy and personal liability because how a provider uses crowdsourcing technology provides information about the provider. The issues around these data depend on who, or what entity, has access and for what purpose. Imagine a provider who only asks questions but fails to offer responses or a provider who only answers questions and never poses one, or a provider who repeatedly asks the same, or essentially the same question. What do these online behaviors disclose about the users? Might reviewers come to different conclusions? While the legal issues describe approaches to data review, it is also important to consider various potential individual perceptions about information disclosure and review. Discussion and transparency about provider decision-making remain the goal, and it is important to consider human barriers to achievement of that goal.
Patient safety and distracted doctoring
It is well documented that interruptions by pagers, smartphones, and digital tablets adversely affect provider–patient interactions and the quality of care delivered, and real-world experience in other settings provides prima facie evidence of the effect.47 If such distractions are considered onerous with one-to-one communication systems, it is easy to understand why physicians participating in a one-to-many mode of communication would be concerned about the string of digital disruptions that could ensue. One of the goals of our study was to understand whether "near real-time" communication, supported through push notifications to the smartphone app, could give physicians the ability to find answers to clinical questions while at the bedside or in the clinic examination room. Interestingly, providers indicated that while they would be willing to respond to consult questions in under 5 min if not otherwise occupied, over half recognized the potential drag on productivity and the negative impact on patient outcomes this might engender.
Legal issues
In addition to their discomfort regarding privacy, data collection, anonymity, liability, and data retention, the HCPs expressed concerns about how this technology might affect patient safety and their ability to provide the highest quality of care. For example, HCPs expressed concern about how and when to use the technology and whether it would ever be appropriate in the presence of the patient. The detrimental effect of computers and electronic records on the patient encounter is well documented.48,49 HCPs worry that this additional technological tool may increase the likelihood of physicians looking at the screen instead of the patient and have the same reported detrimental effect as other screens.47
HCPs might also worry that patients would perceive the use of crowdsourcing applications as an admission of incompetence rather than a collegial effort to provide the best care possible. Few outside of medicine understand the complexities of patient care and the truly collaborative nature of the practice. Physicians may allow patients to continue to harbor misconceptions about the nature of the practice of medicine. This decision is not necessarily malicious: physicians may want to inspire confidence in their abilities to strengthen the therapeutic relationship, or they may not have the time to educate patients about evolving systems of healthcare in this country. Before deciding to use crowdsourcing technology in front of a patient, the HCP would need to consider, as with every clinical decision, the specific application in the particular patient's case. HCPs need to consider whether they can explain to the particular patient the purpose of the handheld device and why the provider is using it at that moment. If the provider determines the technology would not enhance the therapeutic relationship if used in the room with that patient, then the HCP would not use it at that time. At the same time, medical and non-medical professionals alike are so accustomed to the constant use of phones, tablets, and other electronic devices that the use of such a device today may go unnoticed, even if it would have raised eyebrows a decade ago. Most likely, little explanation is needed unless a patient specifically asks.
HCPs also expressed concerns about what they should consider before acting on advice from a trusted member of the crowd. One could assume that if a provider has consulted a trusted colleague, and the mobile application was simply interjected in place of a phone call or email, the provider would likely feel comfortable implementing the recommendation. In cases where the users are not familiar with one another (e.g. with each other's risk tolerance or reputation in the field), however, the user will likely corroborate the recommendations using online tools like UpToDate.50 Even in this case, the information is still useful because it gives the consulting physician the ability to anchor their search using the terms and concepts provided by the colleague, thus providing a shortcut to the decision they will ultimately make after evaluating the collateral information. The other important aspect of crowdsourcing here is that the opinion is returned not from a single provider but from multiple physicians. In the DocCHIRP trial, the ability to upvote or downvote these responses provided additional layers of confidence, and in some cases the scope of the conversation expanded to include a range of new considerations, further enriching the diagnostic or therapeutic process. In the end, just like the classic hallway conversation, the physician can listen to the advice of the crowd of experts but will ultimately be responsible for the medical decision.
Ethical issues
Providers today must focus on patients amid a multitude of distractions. Some of these distractions can provide additional information about the patient, augment clinical observations, inform treatment recommendations, and improve patient care. Technology such as DocCHIRP can help providers take better care of patients, as long as providers know how to manage technological interventions. Providers need guidance on how to best manage information, including how to effectively incorporate technology into patient care. These issues are related to foundational questions about what it means to be a healthcare professional today.
Conclusion
As HCPs and institutions consider the adoption of crowd-based technologies, they need to address the concerns raised during the DocCHIRP studies. Avenues for institutional support may involve creating policies that detail the purpose of the application (e.g. to encourage collaboration that improves patient care), a description of the application, and the dissemination of best approaches to foster trust across provider groups. Employers should also be explicit regarding expectations for proper communication (e.g. guidance on the kinds of questions best suited to this technology and rules for excluding PHI). Of course, systems should also be put in place to monitor appropriate use and respond swiftly when complaints are logged. While these technologies are becoming an integral part of our cultural literacy, HCPs must also reflect on the normal boundaries of professionalism as they experiment en masse with untested modes of communication (e.g. across levels of training or practice specialties).
It is our hope that providers and institutions will consider the issues raised in this article before adopting these approaches into their practice. To maximize benefit while minimizing risk, early adopters should devise strategies to promote trust between providers, their patients, and the institutions supporting these conversations. We also intend this article to serve as a springboard for readers as they contemplate a specific action plan for implementation. Ultimately, the success or failure of crowd-based technologies within institutions is multifactorial and will depend on crowd participation, trust within the network, and trust that the technology can match or improve upon current standard-of-care medical references.
Footnotes
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
