Abstract
In a world increasingly saturated with digital health technologies, the promise of empowerment through information has become almost axiomatic. Yet what if access does not equate to understanding, and what if the sleek interfaces and personalized nudges of today's tools merely simulate agency while displacing it? This commentary interrogates the epistemological and ethical limits of four dominant models underpinning digital health design: the information deficit model, the knowledge–attitude–practice (KAP) framework, health literacy strategies, and behavioral nudging. Despite their differences, each presumes a rational, autonomous user who simply needs the right data or design to act wisely. Drawing on critical public health literature and sociotechnical theory, we argue that these frameworks obscure the structural and social determinants of health (SDoH), such as time poverty, financial stress, and cultural tensions, that fundamentally constrain genuine agency. Rather than merely optimizing individual behavior, this commentary compels the field to confront fundamental questions of power: Who gets to define health? Who designs the algorithm? And who is excluded in the process? By centering these inquiries, we contend that the real frontier is not smarter apps but fairer governance. The paper concludes that addressing the digital divide requires structural interventions, such as participatory oversight and redistributive design, ensuring that digital health systems are grounded in human understanding rather than just administrative efficiency.
Background
Academic writing exemplifies how digitalization has redefined the relationship between input and output. Once grounded in slow, attentive reading and limited access to physical sources, scholarship has become increasingly shaped by speed and abundance. The shift from printed libraries to digital databases has greatly improved the efficiency of information retrieval yet also transformed the practice of engagement: as Tiernan1 observes, it became possible to cite a paper based on a single sentence retrieved via a search engine, often without reading the full text. This acceleration, while promoting a surge in research outputs and reference counts,2 has not deepened understanding.3 The rise of generative AI has further extended this trend from reading to writing. Given the mimetic nature of large language models4 trained on vast academic corpora,5 producing output that mimics the form and style of high-quality research has become remarkably easy. Together, these developments reveal a paradox of the digital age: information and productivity have multiplied, yet comprehension and reflection have not. This tension between the “fast and many” and the “slow and few” also echoes the changing landscape of health information and care.
As one of the domains profoundly disrupted and restructured by digital technologies, healthcare, and health information dissemination in particular,6,7 has undergone a comparable transformation. In formal health systems, whether grounded in biomedical frameworks, public health models, or other established therapeutic traditions, care has traditionally referred to clinical activities such as diagnosis, treatment, and professional mediation between symptoms and medical judgment, or population-based analysis and programmatic solutions, exclusively carried out by trained professionals, who acted as gatekeepers.8,9 Running parallel to those professional systems, laypeople have always practiced their own forms of care through everyday acts, such as how to eat, rest, or seek help, based on their own health knowledge, which was historically rooted in life experience and was local, intergenerational, and informally shared.10,11 In practice, biomedical, population-based, and experiential types of knowledge have continually informed each other: clinicians, public health scientists, and practitioners of diverse health systems interpret symptoms or problems through patients’/users’ narratives or from population-based data. They shape lay sense-making and health literacy through their risk framing and communication, while lay interpretations of health are refined through encounters with this range of professional advice. In this sense, healthcare and health knowledge are mutually constitutive. This relationship has been increasingly reframed by successive waves of digital innovation: from Web 1.0 to Web 2.0, health information has become increasingly accessible via search engines, online forums, and social media platforms.
The proliferation of artificial intelligence and smart health systems has further amplified this trend, offering users an abundance of scientifically validated data and interactive tools that appear to enable new forms of care: individuals can now interpret bodily signs, monitor health metrics, and adjust daily habits in ways once mediated by professionals.12,13 In this paper, “care” is therefore understood broadly, encompassing not only clinical intervention but also the relational, communicative, and informational support through which health is experienced and negotiated in daily life.
However, this apparent empowerment is not equally distributed across social and technological contexts. Much like the evolution of academic writing, the expanded informational access enabled by the digital turn in health communication does not always translate into meaningful understanding or sustainable behavior change.14–16 Vulnerable populations, such as older adults,17 migrants,18 and people living with disabilities,19,20 are frequently overwhelmed with health metrics, alerts, and algorithmically generated “recommendations.” While they may have access to an unprecedented volume of information, this does not equate to enhanced agency or truly informed decision-making.17 Instead, they remain distanced from the support systems necessary to navigate health choices with confidence and autonomy.
Existing literature has taken a critical stance toward the integration of technology in healthcare, acknowledging both its transformative potential21–23 and its limitations.24,25 Although tech tools, from smartphone apps to mobile sensors, aim to improve outcomes by influencing how people know, decide, and act,26,27 and even achieve those objectives for certain populations, it has also been highlighted that technological solutions are far from a panacea.28,29 Scholars increasingly point to the unintended consequence of exacerbating the digital divide, especially among marginalized populations.30–34
The connection between design flaws and the uneven outcomes of digital health interventions across populations and contexts cannot be overlooked; moreover, as Lupton35 highlighted, design is itself a manifestation of governance rationalities that shape how concepts of health and responsibility are technologically imagined. To understand the persistence of such uneven outcomes, this commentary interrogates whether digital technologies genuinely influence health outcomes by looking beyond the tools themselves and examining the behavioral logic behind them. Four dominant approaches, the information deficit model (IDM), the knowledge–attitude–practice (KAP) framework, health literacy, and nudging, will be investigated. We then reflect on how each frames the role of technology in promoting health, leading to an exploration of their structural limitations. This paper argues that these digital health interventions are fundamentally limited in their effectiveness because their underlying behavioral models oversimplify human action and thus fail to account for the structural determinants of health. We will demonstrate that this leads to fragile outcomes (e.g. high abandonment) for many users and becomes actively exclusionary for structurally disadvantaged populations. These dual failures in effectiveness, we argue, are symptoms of flawed governance that ignores real-world contexts and perpetuates inequities.
From information to outcome? The limits of the IDM
In the Web 1.0 era, where information transmission was largely unidirectional, technological tools were primarily seen as mechanisms for expanding the reach of expert-led health communication and public education.36 The goal was to disseminate medical knowledge, originally confined to professional domains, to broader publics, in hopes of achieving improved health outcomes. This strategy was largely grounded in the IDM, which assumes that providing individuals with sufficient and accurate data—about their bodies, behaviors, or biomarkers—will naturally lead to better understanding, behavior change, and ultimately improved health outcomes.37
In the health domain, the challenges are even more pronounced. The IDM not only overstates the reliability of the link between information and behavior but also tends to overlook the critical step of transforming information into usable knowledge. As Wang et al.38 pointed out, a person may be aware of their blood pressure readings (information) without understanding their significance (knowledge), let alone taking sustained action to manage their condition (behavior). Furthermore, the IDM ignores the broader structural, cultural, and institutional conditions that shape an individual's capacity to act,39,40 as health behavior is rarely driven by information alone. In this sense, the model reduces the governance of health to a technical act of information delivery, rather than addressing the structural interventions necessary to support equitable health outcomes. Ultimately, the effectiveness of the IDM as a digital health intervention strategy is severely limited, as it mistakes information delivery for a complete solution while ignoring the structural determinants of health that fundamentally shape these outcomes.
The KAP model: a promise of better information?
In response to the limitations of the IDM, scholars began to examine more closely the process by which information is translated and transformed into actual behavior.38,41,42 As another dominant theory in health promotion, the KAP model was introduced to describe the origin and development of health behavior.43 While it suggests that knowledge forms the basis for attitude, which in turn provides the momentum for behavior, it only vaguely acknowledges the independence and complexity of these three stages.44,45
Under the influence of the KAP model, greater attention was paid to how information could be reshaped into more accessible and culturally acceptable formats to facilitate understanding, belief formation, and ultimately behavioral change.46,47 Technology, in this context, began to play the role of a translator, shifting its function from simply delivering more information to delivering better information: information reprocessed into knowledge using simplified language and visualized formats.
The KAP framework thus contributed to a more phased understanding of how health outcomes may emerge. However, like the IDM, it remains largely individual centered and takes for granted that people will naturally move from knowledge to action, based on the assumption that health is always an individual's top priority.48 This logic overlooks the limited freedom individuals have in making health choices, particularly under cultural norms and structural constraints that shape access, opportunity, and agency.49 Therefore, while the KAP model offers a more nuanced process than the IDM, its real-world effectiveness is similarly constrained by the very structural and cultural realities it fails to incorporate.
Health literacy and its structural dependencies in the Web 2.0 era
Although often incorporated as a measure of knowledge within the KAP model,50 health literacy represents a broader and more nuanced concept. It extends beyond factual knowledge to encompass an individual's ability to access, understand, evaluate, and apply health information in real-world contexts.51 In this sense, health literacy can be understood as both a component of and a critical enhancement to the KAP framework, one that begins to engage with the structural and contextual complexities that traditional KAP approaches tend to overlook.
With the information explosion and the rise of user-generated content (UGC) in the Web 2.0 era, along with the increasing prevalence of digitized health services such as wearable devices, online appointments, and teleconsultations, individuals are now required to navigate more complex forms of health information. Web 2.0 has not only multiplied the sources of health-related content but also redefined health access and reshaped the interaction between the public and clinical systems. In response to these shifts, the notion of health literacy has been extended into eHealth literacy, which refers to an individual's capacity to make sense of health information in digitally mediated environments.52,53
A growing body of literature has demonstrated the positive association between higher eHealth literacy and improved health outcomes.54 As a result, researchers and practitioners have increasingly emphasized the importance of educational interventions aimed at enhancing health literacy.55,56 However, other scholars have cautioned against framing health literacy purely as a matter of teaching and learning, arguing instead that it must be examined within broader structural contexts.57,58 Recent qualitative work on symptom-checker use illustrates why health information processing resists linear modeling. Koch and colleagues59 showed that lay users oscillate between app outputs, prior experiential knowledge, and inputs from family, peers, and clinicians. Rather than following a neat pipeline from “data” to “decision”, users engage in socially distributed sense-making, negotiating uncertainty, moral responsibility, and feasibility. This is precisely the layer that most eHealth interfaces under-support: they deliver metrics and recommendations yet provide little scaffolding for the social negotiations through which recommendations become livable actions. Consequently, what appears as “non-adherence” from a dashboard may in fact be a locally rational compromise negotiated with kin, coworkers, and clinicians under tight temporal and financial constraints. Thus, the effectiveness of literacy-based interventions is not a simple matter of individual skill; it is structurally dependent on broader social determinants of health. Factors such as socioeconomic status and educational background determine whether people have the resources, time, and supportive environments necessary to pursue such competencies.60
Nudging health: behavioral design without structural support?
Noticing the increasingly complex and unreliable transitions between information, knowledge, belief, and behavior, public health practitioners and behavioral scientists have shifted focus from educating individuals to directly shaping their actions.61,62 Rather than expecting people to learn and then change, health promotion strategies now often adopt principles from behavioral economics, particularly the use of nudges, subtle design features that guide people toward healthier choices without overt mandates.63,64
Technology, especially smartphone apps and wearable devices, has become a key platform for these nudges.65 Fitness apps like Keep and Fitbit send reminders to complete daily goals, celebrate streaks, or track steps. Chronic disease apps offer real-time alerts for medication and symptom management. These tools have proven helpful for many; for instance, someone working from home may find a midday nudge to stretch or hydrate quite effective.66
But just like earlier models, nudges depend on assumptions about user freedom.67 A factory worker on a 10-h shift is unlikely to respond to a 5 pm workout prompt, not because of apathy but because they are exhausted. A calorie-tracking app that suggests fresh salmon or quinoa will not work for someone with little money or feeding a family on a tight budget. These everyday realities show how nudges often privilege those with flexibility, time, and resources and leave others behind.
This demonstrates that while nudges may have some effectiveness for small, isolated adjustments, their ability to create meaningful and lasting improvements is fundamentally limited, requiring more than behavioral tweaks.68 As behavioral economists themselves have acknowledged, fundamental change must come from structures and systems,69 because nudges alone fail to address the deeper social and economic constraints that shape people's lives.
In fact, rather than filling a gap in understanding, many digital health tools may inadvertently obscure the very idea of understanding itself. They present polished data, personalized alerts, and automated guidance, but often with little regard for how people actually make sense of health in their everyday lives. Technologies, in this case, may stop short at the level of information provision or surface-level education,70 failing to support deeper learning, reflection, or behavioral transformation. As suggested by Lupton,35 such interfaces often replace embodied understanding with data representation, potentially fostering “data fetishism,” a reduction of meaning to metrics. And in the process, the burden of understanding is quietly outsourced to the interface.
This is where a crucial distinction emerges: while health outcomes are shaped by multiple interacting layers, behavioral, cultural, psychological, and social, technology typically intervenes only at the nonclinical, upstream stages of that chain. To assume that digital tools can “fix” outcomes without addressing these other layers is to misunderstand both the potential and the limits of what technology can do. By juxtaposing theoretical critique with a governance lens, this paper argues that digital health inequities arise not merely from flawed behavioral models but from the systems that perpetuate their assumptions.
Critical discussions
Across the IDM, the KAP framework, health literacy strategies, and behavioral nudges, we see a shared goal: to improve health outcomes by influencing individual behavior. Yet despite their differences in emphasis, whether on information, knowledge, capacity, or behavior, all four approaches rest on a common assumption: that individuals are rational, autonomous actors capable of making health-enhancing decisions, provided they have access to the right resources.
This shared foundation renders each model vulnerable to structural blind spots. The IDM assumes that individuals, once equipped with sufficient data, will act rationally; KAP implies that people naturally move from knowledge to practice; health literacy promotes the idea that navigating complex systems is a skill individuals can acquire; and nudging attempts to circumvent cognitive friction altogether, directly steering behavior through design. But the problem extends beyond these structural blind spots. The limits of these strategies are not confined to structurally disadvantaged groups. Even among resource-rich users, engagement and effectiveness are fragile: adults frequently abandon lifestyle and mental health apps due to cognitive load, motivational fatigue, and misfit with daily routines.71,72 Meta-analytic evidence suggests that the relationship between eHealth literacy and behavior is positive but heterogeneous, contingent on context and sustained support rather than access alone.54 These patterns caution against assuming that “time- and resource-rich” users naturally adopt and maintain the four strategies; effectiveness remains conditional. This is because, in real-world settings, health behavior is rarely the outcome of rational calculation alone. While this fragility manifests as motivational fatigue or cognitive load for resource-rich users, for many others, behavior is even more fundamentally constrained by time poverty, financial stress, limited cognitive bandwidth, and cultural dissonance. As Michael Marmot put it, “People do not choose to be unhealthy; they are denied the choice.”73
Such constraints are not theoretical. Health nudges that are irrelevant to users’ real-life contexts can become a source of disruption rather than support, leading many users to disengage from and ultimately abandon the app.71 Offering a health literacy course to highly mobile or unhoused populations may not only be unhelpful, it may also worsen mental burden and reinforce feelings of inadequacy.74 These examples reveal how interventions that presume freedom of choice often reproduce exclusion, by ignoring the inequities that shape who can choose at all.
The structural barriers discussed above manifest across multiple layers of digital health practice. Table 1 summarizes four common forms of inequity (material, temporal, cultural, and administrative) and outlines corresponding governance-oriented design responses.
Table 1. Illustrative inequities in digital health and corresponding governance levers.
The structural barriers summarized in Table 1 sit uneasily alongside the contemporary healthcare paradigm of “informed choice.” Digital health tools and patient-centered systems increasingly frame individuals as empowered decision-makers. Rather than enforcing compliance, they offer options. Yet this vision of agency presumes a level playing field that simply does not exist. Structural barriers deny many people the ability to access, understand, or act on information. Health systems increasingly acknowledge the need for culturally inclusive care, yet implicit bias and peer pressure can inhibit even those with formal rights. For example, in North West England, women from minority ethnic communities reported that even when cultural or religious accommodations were technically available, clinical staff's indifference or judgement and systemic insensitivity discouraged them from acting on those preferences.75 Furthermore, even where freedom from cultural and structural constraints exists, the overwhelming volume of data and choices can result in what Barry Schwartz termed the “tyranny of choice,” where decision-making itself becomes a source of anxiety and blame.
Across all four models, technology plays a key role: enabling information flow, translating data into knowledge, scaffolding decisions, and automating behavioral cues. While these tools have transformed health promotion and access, their benefits have been unevenly distributed. The resulting digital divide is often mistaken for a product of technology itself; in fact, it is not merely a matter of connectivity or access, but rather a reflection of how digital systems are designed, deployed, and governed.76–78 The problem is not that technology is inherently exclusionary, but that it mirrors and amplifies preexisting structural inequities. It is tempting to import user experience (UX) paradigms such as participatory design, agile methods, and domain-driven design into health. Yet without institutional alignment, these remain theater. Clinical risk management and liability regimes reward caution;79 procurement and reimbursement privilege feature checklists over relational access;80 and “participation” is too often tokenistic.81 A governance turn therefore requires multi-level responsibility: regulators to mandate accessibility and redress, platform owners to embed transparency and auditability, provider organizations to reallocate time for relational work, and civic intermediaries to sustain community accountability. By governance, we refer to a multi-actor configuration spanning regulatory agencies, platform providers, healthcare institutions, and civic intermediaries. Fair governance implies accountability mechanisms across these layers, from enforceable accessibility standards and transparent audit systems to participatory oversight at the community level. It is this cross-sectoral alignment, rather than isolated technical design, that determines whether equity can be institutionalized. In this sense, an “unfriendly” interface is less a technical defect than a social symptom. As Hollimon et al.76 demonstrated, this divide extends beyond mere access: it encompasses availability, adequacy, acceptability, and affordability, revealing how digital inequities stem from governance and policy failures, not just technological gaps. When the social labor of sense-making is invisible to platforms, design choices default to narrow individualism, and governance failures masquerade as “user deficits.”
Yet evaluating effectiveness alone risks missing a deeper question. When the success of digital health is measured purely by usage metrics or short-term behavior change, the more fundamental issue of whose needs and values these technologies serve remains unasked. A program may be statistically effective yet socially exclusionary, efficient in reaching some, but alienating to others. This reveals that effectiveness and fairness are not separate lines of inquiry; they are inextricably linked. The persistent ineffectiveness of interventions for marginalized groups, as demonstrated in the critiques above, is not a random failure. It is a direct consequence of systems designed with structural blind spots and governed without equity as a central goal.
Therefore, the central thesis moves beyond a simple critique of effectiveness to diagnose its root cause. The question is not simply “does it work” but “for whom does it work, and under what conditions.” This pushes the inquiry to the foundational level of governance: Who defines what counts as health? Who governs these platforms, and for whom are they designed? The effects of digital health depend less on the tools themselves and more on the values and intentions embedded in their use. For marginalized communities, a one-size-fits-all model, whether for delivering information, enhancing literacy, or nudging behavior, is unlikely to be effective. Instead, we need context-sensitive, inclusive, and practical designs that respond to real-world constraints and capacities.
Each of these strategies has made important contributions to public health. But none can generate sustainable change in isolation. Health outcomes are not shaped by information or design alone; it is the systems into which these tools are embedded that determine their effectiveness. Digital health, then, should not be treated as a panacea, or a poison, but as a reflection of the institutional logics and social values that govern it. The assumption that technology is neutral, serving simply as a conduit for knowledge or behavior change, ignores how design choices, platform governance, and data infrastructures can encode existing inequities and exclusions. To move toward equitable digital health futures, we must shift focus from optimizing interfaces and algorithms to rethinking the structures, priorities, and participation that shape technological development. Governance, not gadgetry, must be at the center of any strategy that aims to promote health for all. Ultimately, this dual critique of behavioral reductionism and of structural neglect underscores that fair governance is not an alternative to effective models but the very condition for their success.
Conclusions
This commentary is directed primarily toward researchers, policymakers, and digital health designers seeking to align technological innovation with equity-driven governance. It has highlighted the indispensable role of governance and structural design in determining the effectiveness and fairness of digital health interventions. By unpacking the underlying assumptions shared across prevailing theoretical models, we demonstrate that the digital divide originates not from technology itself, but from the governance frameworks and institutional logics that steer its use, often inadvertently reinforcing long-standing inequities.
For digital health to truly embrace inclusion, governance must pivot decisively toward people, balancing streamlined efficiency with genuine equity and embedding collective values within the DNA of technological systems. Here, “systems” encompass not only healthcare delivery and regulatory infrastructure but also the sociotechnical arrangements that mediate data use, consent, and everyday access. Achieving this demands moving beyond simplistic narratives of digital literacy and interface refinement, toward a deeper, more critical exploration of what it means to foster genuine digital participation and sustain meaningful inclusion. Practically, fairer governance entails (1) participatory oversight bodies with decision-making power; (2) explainable and appealable data practices, including opt-out and deletion by default; (3) time-poverty-sensitive service windows (asynchronous channels, low-bandwidth fallbacks); and (4) redistributive design that budgets not only for apps but for devices, connectivity, and human support. Only when these structural commitments are resourced can “user-centered” design in health move beyond rhetoric.
Future research should tackle head-on the inherent tensions within digital governance: How can we craft digital infrastructures that empower without imposing control? What kind of digital literacies are necessary, not simply to navigate tools, but to actively shape and redefine them? Ultimately, the promise of digital health does not lie in sleeker gadgets or smarter nudges, but in systems reimagined through the lens of justice, empathy, and collective participation. Reorienting our focus from mere technological use toward genuine inclusion and from sheer efficiency to deeper equity is no longer just a technical imperative, but an ethical call to action. Equally important is the role of service users and public contributors in shaping digital health systems. Participatory approaches, such as needs assessments, co-design workshops, and citizen panels, can surface context-specific priorities and lived constraints that are invisible to top-down policy design. Embedding these participatory processes within governance structures transforms users from data subjects into co-governors, aligning technological innovation with social justice.
Acknowledgements
The authors would like to thank colleagues and peers who provided valuable feedback on earlier drafts of this manuscript.
Ethical approval
As this study is a commentary/review based on published literature, no ethical approval was required.
Contributorship
All authors contributed to the conception, design, drafting, and critical revision of the manuscript. All authors approved the final version of the article and agree to be accountable for all aspects of the work.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by commissioned research projects, including (1) a project commissioned by Huawei Technology Co., Ltd., Research on the Technical Framework and Applications of Smart Healthy Cities; and (2) a project commissioned by Hangzhou Municipal Patriotic Health Campaign Committee Office, Research on the Current Situation and Development Strategies of Smart Health City Construction in Hangzhou. The funders had no role in the study design, data collection, analysis, interpretation, or the decision to submit the manuscript for publication.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
