Abstract
Background
Remote photoplethysmography (rPPG) is a non-contact method for measuring physiological parameters using smartphone cameras. While the potential for scalable self-monitoring is promising, little is known about its usability and acceptability among patients with chronic cardiac and respiratory conditions.
Objective
This qualitative study explored the user experiences of a smartphone-based rPPG app (Vitacam) to assess its usability, acceptability, and perceived utility in real-world conditions.
Methods
Seven adults with chronic heart or respiratory conditions used the app at home over one week. Semi-structured interviews were conducted and analysed using reflexive thematic analysis.
Results
Participants appreciated the app's simplicity, real-time guidance, and convenience. Key barriers included environmental sensitivity (e.g. lighting), technical limitations, vague error messaging, and lack of clinical integration. Users valued basic self-monitoring features but expressed concerns about accuracy and interpretation, especially for complex conditions like atrial fibrillation.
Conclusions
rPPG via smartphone is a promising, low-burden option for basic self-monitoring in chronic disease management. To increase adoption and utility, future iterations should improve feedback clarity, algorithm sensitivity, and integration with clinical systems. These developments could enhance user trust, accuracy, and long-term engagement.
Introduction
In Europe, it is estimated that there are 60 million people living with cardiovascular disease (CVD) and over 30 million with chronic obstructive pulmonary disease (COPD), with significant overlap between these groups. In the UK alone, the cost of treating CVD and COPD exceeds £9.3 billion. 1 Despite differences both between and within these disease groups, self-monitoring plays a significant role in the management of these diseases and in tracking their progression.
Self-monitoring of key health parameters, such as blood pressure and heart rate, can enhance individuals’ awareness of their condition and the effects of medication. While the clinical relevance and interpretation of such parameters depend on the actual disease and professional expertise, increased awareness may improve adherence to prescribed treatments and empower individuals to take a more active role in managing their health. Greater engagement can also foster a deeper understanding of their condition and support more effective communication with healthcare providers. 2 Ultimately, this can lead to stronger motivation to follow treatment plans and adopt recommended lifestyle changes.
There are increasing initiatives to regularly monitor people with chronic conditions outside of clinical settings, including a component of digital self-management and tracking. The growth and popularity of smartphones underpin these initiatives. One example is NHS England's BP@Home programme, which has seen over 200,000 blood pressure monitors distributed to people with hypertension through primary care, with a mobile app for reminders and recording measurements. Another example is TeleCare Nord in Denmark, which has targeted both heart failure and COPD patients with a self-monitoring kit.
While self-monitoring initiatives have demonstrated some cost-saving benefits, 3 several challenges remain, particularly concerning attrition and clinical outcomes among individuals living with COPD. 4 Challenges include the logistics of distributing, storing, and tracking blood pressure monitors in primary care settings for onward use by those at risk of hypertension; limited resources for integrating self-measured data into electronic medical records and ensuring appropriate follow-up; and varying levels of user acceptance, which can affect the adoption of self-monitoring technologies. 5 In addition, evidence suggests that the accuracy of self-measured vital signs, especially when collected manually, is variable, potentially limiting the reliability of telehealth assessments based on these data. 6
Remote photoplethysmography (rPPG) is a non-contact optical technique that can measure physiological signals, including heart rate, respiratory rate, oxygen saturation and blood pressure, primarily by detecting subtle changes in skin colour caused by blood volume variations. This technology holds significant promise for enhancing self-monitoring capabilities, as it can be delivered using consumer devices such as smartphones. By reducing reliance on dedicated physical devices, it may alleviate logistical challenges in primary care and lower barriers to use, thereby improving access for a wider population. However, challenges remain in its adoption relating to reliability outside controlled laboratory settings. The accuracy of rPPG can be affected by ambient lighting, skin tone, and motion. 7 While this can be mitigated by guiding the user during measurement, user error and misinterpretation of instructions remain possible.
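The core signal-processing idea can be sketched in a few lines: average the green channel over a facial region in each frame, then find the dominant frequency within the physiological pulse band. The sketch below is a minimal illustration of this principle, not the Vitacam app's algorithm; real rPPG pipelines add face tracking, motion and illumination compensation, and multi-channel methods (e.g. CHROM or POS), and the function name and synthetic demo data here are our own assumptions.

```python
import numpy as np

def estimate_heart_rate(frames, fps):
    """Estimate heart rate (bpm) from a stack of face-ROI video frames.

    frames: array of shape (n_frames, height, width, 3), RGB.
    Illustrative sketch only; production systems preprocess heavily
    before trusting the estimate.
    """
    # Blood-volume pulse modulates the green channel most strongly.
    green = frames[..., 1].mean(axis=(1, 2))      # one sample per frame
    green = green - green.mean()                  # remove DC offset
    # Restrict the spectrum to a plausible pulse band (0.7-3.0 Hz = 42-180 bpm).
    freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)
    power = np.abs(np.fft.rfft(green)) ** 2
    band = (freqs >= 0.7) & (freqs <= 3.0)
    peak_hz = freqs[band][np.argmax(power[band])]
    return peak_hz * 60.0

# Synthetic demo: 10 s of 30 fps "video" whose green channel pulses at 1.2 Hz (72 bpm).
fps, n = 30, 300
t = np.arange(n) / fps
frames = np.full((n, 8, 8, 3), 128.0)
frames[..., 1] += (2.0 * np.sin(2 * np.pi * 1.2 * t))[:, None, None]
print(round(estimate_heart_rate(frames, fps), 1))  # prints 72.0
```

This peak-frequency approach also suggests why ambient lighting matters, as the participants found: illumination changes add broadband noise to the same colour signal, which can swamp the small pulse component.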
This study examines a smartphone app designed for people living with chronic conditions, focusing on the self-measurement of vital signs using rPPG. The app aims to provide an accessible way for patients to monitor vital signs, addressing logistical challenges in obtaining and distributing devices and the barriers often associated with their use. Although smartphone apps are increasingly versatile, little research has explored their acceptability and usability as alternatives to traditional monitoring devices. The present study therefore explored patient experiences of using an rPPG-based smartphone app, with particular attention to acceptability and feasibility. The clinical effectiveness of the app was not in scope for this study.
Methods
Design
This qualitative study investigated patient experiences of the Vitacam health monitoring app, developed for individuals with heart or respiratory conditions. As the objective of this study was to assess usability under identical task conditions, participants were recruited as a single, functionally defined cohort rather than stratified by diagnosis. Both cardiac and respiratory participants completed the same app-based tasks, including device positioning, remaining still during measurement, and reporting symptoms. These tasks place comparable cognitive and physical demands on users who commonly experience breathlessness, fatigue, and anxiety, making this grouping appropriate for usability evaluation.
Participants trialled the app for one week before taking part in a remote semi-structured interview conducted via Microsoft Teams or telephone, according to preference.
Participant inclusion criteria
Participants were eligible if they met the following criteria:
Aged 18 years or older
UK-based
Had a formal diagnosis of a heart or respiratory condition
Required regular health monitoring
Study setting
This study was conducted remotely across the United Kingdom, focusing on individuals living with cardiac or respiratory conditions. Participants used the app in their own homes for a period of one week prior to the interview, enabling the research to capture real-world experiences of app usage in everyday life. This remote setting allowed for inclusivity across geographic regions and supported participation from individuals managing chronic health conditions, many of whom may have limited mobility or face barriers to attending in-person research sessions.
Recruitment
Participants were recruited through social media and newsletters distributed by the Health Innovation Network. A study advertisement invited interested individuals to contact the research team. Respondents were provided with a participant information sheet, screened for eligibility by the research team and asked to complete an online consent form. Twelve individuals expressed interest; of these, three withdrew, one was deemed ineligible, and eight provided consent to trial the Vitacam app for one week.
Interviews
Following the one-week app usage period, semi-structured interviews were scheduled with those who had consented. Seven participants completed interviews lasting between 21 and 40 minutes. Interviews were conducted one-to-one, either by telephone or video call, with only the researcher and participant present, and audio recorded with the participants’ consent. Interview questions focused on participants’ experiences using the app, perceived usability, acceptability, and any barriers or facilitators to engagement.
To maximise participation, scheduling was flexible, and participants received a £15 voucher as a token of appreciation. The amount was chosen to ethically acknowledge the time and effort involved without exerting undue influence over participants’ decisions to take part.
Ethical considerations
All participants gave informed consent and were advised of their right to withdraw at any stage. The study was approved by the London South Bank University ethics committee, ETH2324-0273.
Interview team
Interviews were conducted in 2024 by two experienced female qualitative researchers: Dr Kerry V Wood (KVW) and Amelia Moore (AM). KVW is a Chartered Health Psychologist with a PhD and extensive experience in applied health research, particularly in working with clinical and vulnerable populations. AM is a mixed-methods researcher with a background in psychology and health services research. Neither researcher had any prior relationship with participants. Participants were informed that the researchers were conducting an independent study to evaluate experiences of using the app and were not involved in its development or delivery.
Data analysis
We employed reflexive thematic analysis (RTA) to explore participant experiences of using the app, following Braun and Clarke's six-phase framework.8–11 Interviews were transcribed verbatim and managed in NVivo. KVW and AM engaged in repeated reading and inductive coding, sensitised by the study's focus on acceptability and usability. Codes were developed into themes through a reflexive and iterative process of interpretation, with researcher subjectivity treated as a resource rather than a threat to analytic rigour. Regular discussions between the authors facilitated deeper engagement with the data and consideration of alternative readings.
A dataset of seven interviews was judged sufficient to address the study aims. In line with reflexive thematic analysis, sample adequacy was determined by the richness, coherence, and interpretive potential of the data rather than numerical saturation. Data saturation was not used as a methodological goal, as reflexive thematic analysis does not conceptualise themes as discoverable entities that can be exhausted through sampling. The interviews generated detailed, information-rich accounts of usability, trust, and interpretation of rPPG outputs, allowing for robust thematic development. Further recruitment was deemed unlikely to meaningfully extend conceptual understanding and was not pursued, particularly given participants’ chronic health conditions and the exploratory nature of the study. The analysis prioritised depth and interpretive insight over breadth, 9 consistent with qualitative quality principles in reflexive thematic analysis.
Results
Participant characteristics
Seven participants completed the study: four identified as male (57%) and three as female (43%), with a mean age of 66.3 years (range: 57–80). All participants were British nationals and self-identified as White (White UK, White Caucasian, or White Scottish), resulting in a demographically homogenous sample. Participants were living with a range of cardiovascular and respiratory conditions, including hypertrophic cardiomyopathy (n = 2), hypertrophic obstructive cardiomyopathy with atrial fibrillation (n = 1), atrial fibrillation with left ventricular systolic dysfunction (n = 1), COPD/emphysema (n = 1), and pulmonary fibrosis (n = 2, including one with interstitial lung disease). Year of diagnosis ranged from 1986 to 2020, reflecting a mix of long-term and more recent experiences of chronic illness.
Thematic findings
Thematic analysis generated seven themes that reflect the experience of using the app: user experience, usability, app support, technical issues, health metrics tracking and interpretation, satisfaction and perceived utility, and suggestions for improvement. Each theme is described below and further illustrated (with subthemes) in Table 1.
Table 1. Summary of key themes, sub-themes, number of participants referencing each sub-theme, and illustrative quotes.
User experience
Participants described the app as intuitive, with instructions that were generally clear and easy to follow. Many appreciated the real-time guidance, especially prompts about lighting and facial positioning. One participant said, ‘It helped me adjust the angle a few times when I wasn’t quite lined up, it was good at catching that’ (P01). However, some encountered repeated alignment issues and expressed confusion when error messages lacked detail. As P03 noted, ‘It just told me it didn’t work, I wasn’t sure what I did wrong’.
One participant also questioned whether the app was designed with different age groups in mind. ‘It might be more suited to younger people, I had to hold the phone at arm's length without my glasses’, said P06.
Usability
The app's low-burden design was widely praised. Several participants said they were surprised by how easily it fit into their routine. ‘I could use it while watching TV in the evening, it didn’t feel like a task’, reported P02. This was particularly beneficial for participants with limited mobility or low energy levels. One participant remarked, ‘I’m not great with tech, but this one made sense right away’ (P05).
However, those with dexterity or vision issues found it less seamless. ‘Trying to hold the phone, keep my face in the frame, and press start, it was a bit fiddly without help’, commented P06.
App support
Real-time prompts were widely appreciated for helping users correct lighting and posture, contributing to a generally positive experience. ‘The little messages about light or angle kept me on track’, explained P04. Still, there was some frustration when feedback was too vague. ‘I would’ve liked more information when things didn’t work, like, “Was it the light? Was I too far?”’ asked P03.
Participants expressed that more step-by-step feedback could have helped build confidence and reduce abandonment.
Technical issues
Technical challenges emerged as a significant barrier to consistent use. Most commonly, poor lighting conditions led to failed measurements of heart rhythm. One user shared, ‘In my flat the light isn’t great, so I had to move around the room to get a good reading’ (P04).
Connectivity issues also impacted usability for participants in rural or hospital settings. ‘Sometimes I couldn’t upload results because of poor Wi-Fi’, said P01. This highlights the need for offline capability or buffering options to accommodate different environments.
Tracking and interpretation of readings
Participants appreciated the ability to view their heart rate and respiratory rate in real-time. ‘It was interesting to see how my breathing changed in the evening’, said P02. However, the absence of explanatory information or trend feedback limited its practical value. ‘I saw the numbers, but I didn’t really know what they meant for me’, reported P01.
For some, readings contradicted their medical experience. P03, diagnosed with atrial fibrillation, shared, ‘It kept saying my rhythm was normal, but I know it's not. It made me question if it was working properly’. This highlights the difference between a heart rate estimated from the pulse and the actual heart rate derived from electrical activity. In people with atrial fibrillation, it is typical for the pulse rate to diverge from the heart rate.
Satisfaction and perceived utility
Overall satisfaction was high, particularly regarding ease of use. Several participants noted they would be open to using the app again if it could be integrated into a care plan. ‘If my GP could see the results too, then yes it would feel more useful’, said P05.
However, others expressed disappointment that the app did not provide more detailed insight. ‘It's good for reassurance, maybe, but it didn’t tell me anything I could act on’, explained P07. Users with more complex health needs often felt the app fell short in helping them manage their condition more effectively.
Suggestions for improvement
Participants suggested several areas for improvement. Common recommendations included: clearer explanations of readings, better feedback when scans failed, more sensitivity to lighting conditions, and optional reminders for daily use. ‘A little message saying, “Time to check your heart rate”, would help keep me on track’, offered P04.
Several participants wanted integration with healthcare systems: ‘If it linked to my NHS records, it would be much more useful, especially if a nurse or GP could comment on changes’, said P01.
Discussion
This study offers early insights into how individuals with chronic cardiac and respiratory conditions perceive and interact with a smartphone-based rPPG app (Vitacam). The findings suggest that while rPPG offers meaningful advantages in terms of simplicity and accessibility, several barriers, both technical and experiential, must be addressed before such tools can be reliably integrated into chronic disease self-monitoring at scale.
Participants consistently praised the app's usability, particularly the intuitive interface, real-time prompts, and minimal time demands, which encouraged engagement. This aligns with broader evidence that digital tools perceived as easy to use are more likely to be adopted and integrated into daily routines.12,13 The importance of simplicity and low burden is especially relevant for older adults or those managing multiple comorbidities, who may face cognitive or physical challenges with more complex digital interfaces. 14
A key strength of this study is its focus on real-world usability, with patients using rPPG technology independently in their own homes rather than in controlled or supervised settings. This contrasts with previous studies, such as those conducted within U.S. Veterans Affairs facilities, where testing occurred on-site with staff support. 15
Similarly, Carluccio et al. evaluated a non-commercial rPPG system in elderly care facilities following participant training. 16 While usability feedback was generally positive, the controlled environment and assisted setup limit insight into barriers encountered during self-onboarding and unsupervised use. By comparison, the present study highlights usability challenges that emerge specifically under real-world conditions, extending prior work that has primarily focused on technical feasibility rather than patient-led interaction.
Although our study was not theoretically driven at the design stage, the findings align closely with established models of technology acceptance, particularly the Technology Acceptance Model (TAM). 17 Participants’ accounts map onto core TAM constructs, including perceived ease of use, perceived usefulness, and trust in system outputs. Ease of use was reflected in the app's intuitive interface, minimal time burden, and real-time guidance, which supported initial engagement even among participants who described limited confidence with digital technologies.
Lighting and posture are well-established challenges for successful rPPG-based measurement of vital signs. Many studies have therefore focused on quantifying the impact of single environmental variables, such as lighting or posture, with participants typically assessed in controlled settings and guided to remain still or rest their head on a support. 18
Casalino et al., however, demonstrated autonomous use of an rPPG app in natural lighting conditions by a diverse sample of healthy volunteers. 19 In that study, insufficient lighting prevented successful measurement for three of fifteen participants, and no lighting guidance was provided. By contrast, the present study demonstrates successful rPPG use by patients in a home environment, supported by lighting guidance and without in-person onboarding.
Overall, participants reported that the app was easy to use even when unwell and did not perceive it as an inconvenience. Sitting to record a video required additional effort for two participants with kyphosis related to pulmonary fibrosis and COPD, respectively, but this was not reported as a barrier to use. Kyphosis is common in older populations, and its potential impact on rPPG performance warrants further investigation, including through post-market surveillance.
Perceived usefulness, however, was more conditional. While participants valued the convenience of self-monitoring, many questioned the clinical relevance of the data in the absence of interpretation, trend feedback, or integration with healthcare services. Trust emerged as a dynamic and fragile construct, shaped by perceived accuracy, transparency of system limitations, and consistency with users’ embodied knowledge of their condition. For example, participants with atrial fibrillation expressed reduced confidence when app readings contradicted their known diagnosis. These findings suggest that for rPPG-based tools, usability alone is insufficient to sustain engagement; perceived accuracy, interpretability, and clinical legitimacy are equally critical drivers of acceptance and continued use. Potential solutions are to provide more analysis of both trends and individual results, and to summarise the evidence of accuracy in plain language in the app.
One of the core strengths of the app was its ability to provide immediate feedback during measurement attempts. Real-time guidance helped users correct positioning and lighting, improving confidence and reducing the likelihood of dropout. Both the VA study and Carluccio's study proposed the addition of such prompts and guidance to their investigational devices. Such findings echo usability studies in mobile health, which highlight that in-the-moment support is essential to building and sustaining user engagement. 20 However, when feedback was vague, particularly in the form of unexplained error messages, users felt confused or frustrated. This suggests that the design of feedback systems in health apps must not only address functional issues but also support users’ interpretive confidence and self-efficacy.
The non-contact nature of rPPG emerged as a clear advantage over traditional blood pressure monitors and wearables, particularly in terms of ease of access, absence of physical discomfort, and reduced reliance on logistical infrastructure. As other studies have reported, 21 eliminating the need for physical contact or external hardware can enhance adoption, especially in underserved or remote populations. Nevertheless, the performance of the rPPG system remained context-dependent, sensitive to ambient lighting, internet quality, and user movement, factors known to affect signal fidelity and accuracy. 22
Concerns about accuracy were particularly salient among participants with complex clinical histories such as atrial fibrillation (AF). Several reported discrepancies between their known clinical status and the app's readings (e.g. reporting a ‘regular heart rhythm’ in someone with diagnosed AF). These occur because rPPG measures changes in skin blood volume to detect the pulse, rather than tracking the heart's electrical activity. The pulse reflects ventricular contractions, not atrial impulses. In AF, chaotic atrial signals may not reach the ventricles, causing irregular and incomplete contractions. This can result in a weak or irregular pulse, while the actual heart rate may be even faster. Poor signal quality in rPPG makes it harder to distinguish AF from noise without machine learning. 23 This has important implications: without clarity of information or clinical oversight, digital tools may inadvertently provide false reassurance, delay care-seeking, or create anxiety.
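A toy calculation illustrates the mechanism: a steady pulse and an AF-like chaotic pulse can share the same mean rate while differing sharply in beat-to-beat variability, which is one reason a rate-only output can read as ‘normal’ in AF. The function, synthetic interval data, and variability metric below are hypothetical illustrations, not the app's algorithm or a validated AF screen.

```python
import numpy as np

def pulse_stats(ibis):
    """Mean pulse rate (bpm) and beat-to-beat variability (RMSSD, seconds)
    from a series of inter-beat intervals in seconds.

    Illustrative only: a real AF screen would use validated features and a
    trained classifier, not a single summary statistic.
    """
    rate = 60.0 / np.mean(ibis)                   # mean pulse rate, bpm
    rmssd = np.sqrt(np.mean(np.diff(ibis) ** 2))  # successive-difference variability
    return rate, rmssd

rng = np.random.default_rng(0)
regular = np.full(30, 0.8)              # steady intervals: 75 bpm
af_like = rng.uniform(0.4, 1.2, 30)     # chaotic intervals with a similar mean

for label, ibis in [("regular", regular), ("AF-like", af_like)]:
    rate, rmssd = pulse_stats(ibis)
    print(f"{label}: {rate:.0f} bpm, RMSSD {rmssd * 1000:.0f} ms")
```

Both series report a similar average rate, but only the variability statistic separates them; an app that surfaces just the rate discards exactly the information that distinguishes AF from a regular rhythm.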
Hence, it is important for app developers to be very clear on what is actually measured and explain its limitations, such as the differences between pulse and heart rate, to avoid confusion.
Participants expressed a desire for richer feedback, such as trend graphs, contextual explanations, or alerts about abnormal readings. This desire reflects a growing user expectation that self-monitoring tools go beyond data collection and actively support understanding and decision-making. 24 In its current form, the app served primarily as a passive measurement tool. For users managing long-term conditions, the absence of clinical context or interpretive scaffolding limited its perceived value and may explain why some participants did not view it as useful for regular use.
Another key finding relates to the gap between usability and clinical utility. While participants generally found the app simple and easy to use, many questioned its relevance to their care. This tension is well-documented in digital health literature 25 : tools that are technically user-friendly may nonetheless fail if they are not embedded in clinical workflows, linked to decision-support, or validated for specific patient populations. Without integration into a formal care pathway or feedback loop with clinicians, even well-designed apps risk being seen as novel but ultimately non-essential.
For those with long-term cardiac conditions, the app provided reassurance and a sense of empowerment. Participants with cardiomyopathy were especially interested in the heart rate and rhythm, which are more pertinent for monitoring in their condition.
Cardiac conditions can be well controlled by medication, and participants did not feel anxiety when measuring themselves or viewing their results. In contrast, participants with COPD may be aware that, while symptoms can be managed, the underlying disease course cannot be altered by treatment.
Although respiratory rate is an important parameter to monitor in COPD, one participant was reluctant to track this measure, stating that they already knew their breathing was poor. This suggests that app developers could play a role in supporting patient understanding of how changes in vital signs provide additional context beyond symptom awareness and may, in some cases, act as early indicators of deterioration, prompting advice-seeking from a GP or pharmacist.
The demographic homogeneity of the sample raises important considerations regarding digital equity in connected health. All participants identified as White British, limiting insight into how rPPG-based self-monitoring may be experienced by ethnically and culturally diverse populations. This is particularly salient given evidence that rPPG performance may vary across skin tones due to differences in melanin absorption, 26 and that digital health engagement is shaped by intersecting factors including digital literacy, health literacy, language, and trust in healthcare systems.
While this study does not aim for statistical generalisation, it offers analytic transferability by identifying mechanisms likely to be relevant across contexts, such as interpretive uncertainty, trust calibration, and the perceived legitimacy of self-generated data. These mechanisms may manifest differently across populations and healthcare settings, underscoring the importance of future research that intentionally recruits diverse users and incorporates equity-oriented design and evaluation approaches. Without such efforts, there is a risk that rPPG technologies may inadvertently reproduce or exacerbate existing digital health inequalities.
Finally, the study's context, where participants used the app independently without training or clinician input, may not reflect how the tool would function in a more integrated digital health ecosystem. In a structured setting, with onboarding support, tailored alerts, and data sharing with care teams, user experience and perceived value might be substantially enhanced.
Implications for practice and policy
Our findings suggest that smartphone-based rPPG holds promise as an accessible and user-friendly tool for remote vital sign monitoring. For health systems, the low-burden nature of smartphone-based measurement offers an appealing supplement to existing telehealth models, particularly for older adults or those without wearable devices. However, large-scale adoption requires attention to usability in diverse real-world settings. Developers and policymakers should consider adaptive lighting features, clearer error messaging, and accessible onboarding to support users with varied levels of digital literacy. Clinical integration is also essential to enhance the app's relevance, credibility, and sustained use. These directions align with NHS digital health goals, which prioritise patient-centred, personalised, and data-driven care. 1
Limitations
This study has several limitations. The small and demographically homogenous sample (n = 7, all White British) limits insight into how rPPG-based self-monitoring may be experienced across ethnically, culturally, and linguistically diverse populations. While sufficient for in-depth qualitative exploration, the sample does not allow examination of how intersecting factors such as digital literacy, health literacy, or prior experiences of healthcare marginalisation may shape usability, trust, and engagement.
The study did not systematically collect data on participants’ prior exposure to digital health applications or self-monitoring technologies, which may have influenced perceptions of usability and confidence. Although some participants self-identified as having limited confidence with technology, future research would benefit from explicitly characterising digital experience to better understand its role in technology acceptance.
The study was exploratory and did not include clinical validation of app outputs; concerns about accuracy are therefore based on user perceptions rather than objective comparison. Finally, the app was trialled outside of a formal care pathway and without onboarding support, which may have reduced perceived utility but also provided insight into real-world, unsupported use.
Conclusion
This study suggests that smartphone-based rPPG technology can offer a viable and acceptable method for self-monitoring basic health metrics among patients with cardiac and respiratory conditions. Its strengths include ease of use, low burden, and real-time guidance. However, limitations around environmental sensitivity, vague feedback, and lack of clinical integration must be addressed to improve long-term adoption and perceived value. Developers of rPPG-based devices should prioritise adaptive feedback systems, improved accuracy for arrhythmia detection, and seamless integration into clinical workflows. If these challenges are addressed, rPPG has the potential to contribute meaningfully to scalable, equitable models of digital chronic disease management.
More broadly, developers should conduct usability studies within the target patient population and intended use environment. Such studies can identify opportunities for product improvement and provide evidence that may be considered by commissioners prior to piloting new technologies. In addition, usability findings can inform clinicians and support teams about likely challenges following deployment, helping to determine the level and type of patient support required.
Footnotes
Acknowledgements
We would like to thank the Health Innovation Network (HIN) South London for their support in promoting the study and assisting with participant recruitment.
Ethical considerations
The study was approved by the London South Bank University ethics committee, ETH2324-0273.
Consent to participate
Written informed consent was obtained from all participants prior to their involvement in the study.
Author contributions
Kerry V. Wood and Amelia Moore contributed to the conceptualisation, methodology, investigation, and formal analysis of the study. Kerry V. Wood prepared the original draft of the manuscript, and all authors, Kerry V. Wood, Amelia Moore, Moyeen Ahmad, and Dila N. Bostanci, contributed to reviewing and editing the final version. Project administration was led by Kerry V. Wood. All authors read and approved the final manuscript.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This study was funded by the Greater London Authority (grant reference 074).
Declaration of conflicting interests
The authors declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: Moyeen Ahmad is a developer of the Vitacam app evaluated in this study and is listed as a co-author. The study was conducted independently by the research team, and Mr Ahmad had no involvement in data collection, analysis, or interpretation. His contribution was limited to reviewing the final manuscript.
