Abstract
Background
Cervical dystonia (CD) is a movement disorder characterized by involuntary muscle contractions causing abnormal head postures. Current assessments rely primarily on infrequent clinical evaluations (about every 3 to 4 months) and subjective patient-reported outcomes, limiting detailed understanding of symptom progression between treatments.
Objective
Improved understanding of symptom progression would enhance personalized treatment with Botulinum toxin A therapy. For this purpose, we introduce Move2Screen, a smartphone application designed for frequent home-based, asynchronous video monitoring of symptom progression for CD patients. We evaluate the usability and adherence of Move2Screen.
Methods
Adherence was assessed in a longitudinal home-based observational study involving eight of the initial 11 patients with CD who continued participation after the initial on-site acquisition. Usability was measured after the two-week mark using the German mHealth App Usability Questionnaire (G-MAUQ), which evaluates ease-of-use, interface satisfaction, and usefulness. Adherence was analyzed by tracking the frequency and timing of recorded videos.
Results
The app showed good overall usability scores on the G-MAUQ and a high adherence rate, with participants completing 74% of the intended video recordings. However, usability evaluations highlighted some challenges in perceived usefulness and minor technical issues reported by users.
Conclusion
The Move2Screen app demonstrates promising usability and adherence for home-based symptom monitoring in CD. However, addressing technical limitations, expanding participant cohorts, and integrating clinical decision support are essential next steps. These developments may enhance individualized care by enabling more informed therapeutic adjustments and improving long-term disease management.
Introduction
Dystonia is a movement disorder characterized by involuntary, sustained, or intermittent muscle contractions in various parts of the body, resulting in abnormal, often repetitive movements or postures. The most common form is focal cervical dystonia (CD), which occurs in around 6–30 out of 100,000 people.1,2
Currently, the diagnosis and assessment of dystonia are based entirely on clinical examination and symptom description by the patient. Local injection of Botulinum toxin A (BoNT) has become the standard of care to achieve a reduction of dystonic muscular overactivity. 3 The typical frequency of injections is about every 3–4 months.4,5 The injection scheme in terms of total dosage, choice of muscles, and number of injections depends on the individual presentation of the patient and on the clinical observation and examination. Corresponding disease severity and treatment effects are estimated by rating scales such as the Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS) or the collum-caput concept, which has been proposed as a classification of CD phenomenology to guide the process of injection planning. 6
To optimize treatment efficacy, knowledge about the effect is key. However, as treatment appointments are typically scheduled at 3–4-month intervals, the symptom progression resulting from BoNT may only be documented with temporally coarse observations, occurring when the effect of the most recent BoNT injection is already waning. For the assessment of treatment outcomes within the time span between two visits, only the subjective report of the patient is available, making a detailed objective assessment impossible. Nevertheless, a more exact and temporally detailed understanding of the specific effect of the treatment would be beneficial for the implementation of a more individualized therapeutic approach.
Telemedical measures are already used for status updates on the disease to support a higher monitoring frequency. In a 2015 survey, approximately 40% of participating Movement Disorder Society members had already conducted video consultations with patients suffering from movement disorders. 7 A study on telemedicine video consultations showed high satisfaction among dystonia patients. 8
However, the measures mentioned rely on synchronous telemedical methods, which involve real-time communication between clinician and patient. These approaches require scheduling appointments, which may be impractical for high-frequency assessments. In contrast, asynchronous telemedicine interventions, such as recorded videos, online questionnaires, or symptom-tracking apps, do not require real-time interaction, allowing patients to submit information at their convenience. While this approach eliminates the need for scheduling, it also lacks direct clinician–patient interaction. Therefore, it is necessary to guide and systematize symptom assessment through predefined instructions.
Corresponding asynchronous approaches for the assessment of specific neurological disorders are already available: The Emendia app (Emendia MS. 2024. Available at: https://emendia.de/ [Accessed: June 23, 2024].) assists patients and clinicians with multiple sclerosis by collecting various parameters entered through the app’s interface. Treating clinicians can quickly grasp the progression through an overview display. EPI-Vista (EPI-Vista: Therapie Management. German. Available at: https://epivista.de/ [Accessed: September 23, 2024].) enables structured documentation of epileptic seizures and relates these to therapy. Using server-based data storage, clinicians and patients can independently enter and access data and use it as a basis for therapeutic decisions. MyDystonia (Dystonia Europe. MyDystonia – your personal diary. 2024. Available at: https://dystonia-europe.org/projects-1/projects/mydystonia/ [Accessed: July 25, 2024]) is an electronic diary for patients with dystonia. In addition to organizational features such as a calendar or clinician contact information, the user can save their symptoms in the app by answering a series of predefined questions.
Although these solutions enable asynchronous monitoring, they only support the assessment of subjective, patient-reported parameters. To obtain an objective assessment of movement disorders, in contrast, video recording is arguably a good choice, as it is particularly well suited to the evaluation of motor symptoms, given that these are largely visible. The validity of video-based symptom assessment through clinical evaluation has been confirmed by several studies: While evaluating a synchronous telemedicine approach for cervical dystonia, Fraint et al. 8 demonstrated that telemedicine-based evaluations, including video-based assessments using the TWSTRS motor severity subscale, have excellent agreement with in-person evaluation. Dorsey et al. 9 demonstrated in a randomized controlled trial that telemedicine visits for Parkinson’s disease achieved excellent test–retest reliability for motor assessments via video (ICC = 0.82). Srinivasan et al. 10 reviewed evidence for telemedicine applications in hyperkinetic movement disorders, underscoring the feasibility and diagnostic reliability of both synchronous and asynchronous video evaluations for conditions such as tremor and dystonia. Additionally, Bull et al. 11 demonstrated that video-based assessment for patients with Huntington’s disease enabled reliable motor scoring (ICC = 0.78) with excellent test–retest reliability (ICC = 0.90).
Unlike severity capturing methods, such as those using inertial measurement units,12,13 video recordings directly capture observable motor symptoms and can be evaluated without the need for translation into abstract scores, thereby reducing the risk of interpretation errors introduced by algorithmic mapping.
However, the disadvantage of video recording, insofar as it is used as an asynchronous tool, is that it requires guidance to ensure correct completion of the protocol, since incorrect performance cannot be addressed in real time. An asynchronous approach that has already been used in a research setting is the SARA
With the development of Move2Screen, we pursue a similar approach for asynchronous video-based symptom assessment in CD. Our app also enables video recording and submission and features a fixed protocol with detailed timestamps, supporting a standardized execution of tasks. To our knowledge, no comparable app for asynchronous video-based documentation of movement disorder symptoms in CD currently exists in either clinical practice or research settings. In this article, we evaluate the usability of and adherence to Move2Screen in a three-month study with intended weekly recordings.
Methods
This section presents Move2Screen and the methods used for its evaluation. The app is based on the results of an initial usability study, aimed at gathering a first impression of the app from the perspective of the intended users, considering feedback from a cohort of 10 subjects. 15 The second and most recent (longitudinal) study was conducted with a cohort of 11 subjects, of whom eight continued participation after the initial on-site acquisition and were included in the usability and adherence analysis.
Move2Screen app design
Move2Screen is designed to enable intuitive video-based monitoring of movement assessments suitable for tracking disease progression. After positioning themselves within the camera frame (Figure 1(b)), the subjects perform certain movements indicated by spoken and written instructions and by avatar-based illustrations of the movement to be performed. The avatars, shown in Figure 2, are displayed by the app for each movement task. In addition, a timer and a signal tone determine the exact time at which the described movement should be performed. After the video recording, the app provides the EuroQol five-dimensional (EQ-5D) questionnaire on subjective quality of life 16 (Figure 1(a)) for completion. The EQ-5D assesses five domains: mobility, self-care, ability to carry out everyday activities, pain, and depression. The users also assign a score between 0 and 100 to their current state of health. Immediately after completion, the recorded data are encrypted and transferred to a server with the user’s prior permission; local copies are also stored on the user’s smartphone for personal use.

Screenshots of several pages of the Move2Screen app showcase its key functionalities: The questionnaire view (a) presents an interface for manually filling out the EuroQol five-dimensional (EQ-5D) questionnaire. Page (b) is the starting page of a new video protocol, during which the subject is instructed to move to a predefined position, indicated through a silhouette in the video view. After login, the initial page (c) becomes visible and presents two options: to start a new video recording or to view the list of previously recorded videos. Page (d) allows the user to either manually fill out a new questionnaire (a) or to view all completed questionnaires.

Video protocol: A sequence of movements instructed acoustically and visually by the avatars shown in Move2Screen. The respective sections are accompanied by a countdown that indicates how long the exercise still has to be performed. During all sections, the camera image is hidden while the movements are being performed, so that no symptom-correcting self-monitoring can take place.
The initial architecture and design of the app were described by Stenger et al., 15 and minor adaptations have been made since. Changes mainly concern the design, such as the color highlighting of the lower navigation bar (Figure 1(c) and (d)), the wording on the different screens, and a new contour for placement in front of the camera (Figure 1(b)). In addition, notifications have been changed from two selectable times to two fixed reminder times (morning and evening) on a fixed day of the week. Reminders repeat until the recording is made or for up to 3 days, after which the recording is considered missed.
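The reminder and grace-window rule above can be sketched as follows. This is an illustrative reconstruction, not the app’s actual implementation; the function name and classification labels are our own.

```python
from datetime import date, timedelta
from typing import Optional

# Grace window described in the text: a recording is due on a fixed weekday,
# reminders repeat for up to 3 days, after which the recording counts as missed.
GRACE_DAYS = 3

def recording_status(due: date, recorded: Optional[date], today: date) -> str:
    """Classify a weekly recording as 'completed', 'pending', or 'missed'."""
    if recorded is not None and due <= recorded <= due + timedelta(days=GRACE_DAYS):
        return "completed"
    if today <= due + timedelta(days=GRACE_DAYS):
        return "pending"  # reminders are still active
    return "missed"

# A recording made 2 days after the due date still counts as completed:
print(recording_status(date(2023, 11, 6), date(2023, 11, 8), date(2023, 11, 9)))  # completed
```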
Video protocol
The video protocol instructs the user to perform a series of movements. Before the start of the protocol, the subjects are instructed to position themselves within a defined frame in the camera in order to ensure the consistency of all video recordings (Figure 1(b)). The protocol then consists of 10 instructions, each of which is designed to assess a specific aspect of the TWSTRS through an instructed movement or to familiarize the subject with the protocol in general. The instructions are as follows:
Instruction 1, eyes open (6 seconds): This instruction is a warm-up that avoids starting the first exercise immediately and gets the subject used to the app interface.
Instruction 2, eyes closed (10 seconds): During this time, the instruction is to move the head in the most comfortable direction, which may be influenced by dystonia-related muscle contraction. The eyes are closed, thereby preventing any visual self-correction of the actual head posture.
Instruction 3, head still, starting at the neutral midline position indicated through the frame (60 seconds): This tests how long the head can be held at midline position or how fast dystonic contractions appear.
Instruction 4, head still sideways (10 seconds): The subject is instructed to rotate the chair by 90°.
Instructions 5–10, turn left, turn right, tilt left, tilt right, head up, and head down: These exercises are designed to assess the maximum range of head rotation around all three axes of rotation. For instructions 9 and 10, the subject is instructed to turn 90°.
All instructions are used to assess partial aspects of the TWSTRS. During the execution of instruction 2, the severity of torticollis, laterocollis, antero- and retrocollis, as well as the lateral and sagittal shift, is evaluated. Instructions 4–10 are used to assess the range of motion. In addition, any potential shoulder elevation and anterior displacement may be identified in all exercises as part of the symptom spectrum of CD.
To ensure the protection of sensitive healthcare data and the privacy of users, all data are encrypted before uploading. Symmetric encryption was chosen for the videos and the questionnaire. The symmetric key is created individually per user on the subject’s smartphone; it is itself encrypted asymmetrically with the server’s public key before being uploaded, ensuring that decryption requires the server’s private key. Transparency is provided by giving the user full control over which data are actually uploaded: before each saving, the user is informed that automatic uploading will begin once the data are saved. The user can view the data again beforehand and delete them directly if necessary, which prevents them from being uploaded. Saving, and therefore uploading, must be approved twice.
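The hybrid scheme described above can be illustrated with a minimal Python sketch using the `cryptography` package. This is not the app’s actual implementation; the key sizes, primitives (Fernet, RSA-OAEP), and payload are our own assumptions chosen to mirror the described design.

```python
# Illustrative sketch of the hybrid encryption scheme: the payload is encrypted
# with a per-user symmetric key, which is itself wrapped with the server's
# public RSA key so that only the server can unwrap it.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Server key pair (in practice, only the public key is shipped to the phone).
server_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
server_public = server_private.public_key()

# On the smartphone: symmetric encryption of the recording and questionnaire.
user_key = Fernet.generate_key()
ciphertext = Fernet(user_key).encrypt(b"video bytes + EQ-5D answers")

# Wrap the symmetric key with the server's public key before upload.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = server_public.encrypt(user_key, oaep)

# On the server: unwrap the key with the private key, then decrypt the payload.
recovered_key = server_private.decrypt(wrapped_key, oaep)
assert Fernet(recovered_key).decrypt(ciphertext) == b"video bytes + EQ-5D answers"
```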
Usability/adherence study
This study employed the app as optimized following the previous study 15 and tested it in a longitudinal home-based observational study with eight subjects over a period of 3 months. The subjects were recruited on an ongoing basis over a period of 2 weeks starting from 6 November 2023 (until 20 November) from the Botulinum toxin clinic at the University of Luebeck. Inclusion criteria required participants to have a clinical diagnosis of CD and access to an Android smartphone. The app was installed on the subjects’ smartphones on site, and the initial settings were made, including the creation of a personal test subject ID and the choice of the weekday on which video recording would commence. Over the following three months, the app was to be used at home as part of the study, with subjects completing one recording per week. Participants were asked to record on the same day of the week each time, and reminders were set in the app accordingly. To provide users with some flexibility, however, a delayed recording of up to 3 days was deemed acceptable, ensuring that participants were not restricted by unforeseen circumstances such as scheduling conflicts. Given the slow progression of CD symptoms over the 3–4-month injection cycle, we do not expect any significant bias in the results from occasional delayed recordings. This schedule would result in a total of 12 videos and questionnaires per subject. Furthermore, subjects were asked to complete an online questionnaire, the German mHealth App Usability Questionnaire (G-MAUQ), after 2 weeks for usability evaluation.
Usability evaluation
The G-MAUQ 17 is the German version of the MAUQ, 18 a standardized and validated questionnaire developed specifically for medical apps. It consists of 18 questions divided into three groups: ease-of-use (five questions), interface and satisfaction (seven questions), and usefulness (six questions). Each item is rated on a 7-point Likert scale ranging from 1 (“strongly disagree”) to 7 (“strongly agree”). The subjects were asked to complete the G-MAUQ 2 weeks after the start of the study. A score was calculated separately for each of the three question groups by averaging the responses to all questions within the group. These results were compared with those obtained from the initial on-site study using the same questionnaire, in order to determine whether the study context (a three-month usage period vs. a 30-minute on-site session) influenced users’ subjective evaluations of the app. Significance analyses were performed on the results from all three question groups via a Mann–Whitney U-test. To control for the increased risk of type I error due to multiple comparisons, we applied a Bonferroni correction by dividing the conventional alpha level (0.05) by the number of tests (three), resulting in an adjusted threshold of p<0.017 for statistical significance. 19
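The comparison described above can be sketched as follows, using SciPy’s implementation of the Mann–Whitney U-test with the Bonferroni-corrected threshold. The per-subject group scores below are invented for illustration only; they are not the study data, and only one of the three question groups is shown.

```python
# Hypothetical illustration: per-subject G-MAUQ group scores (means of 7-point
# Likert items) compared between the on-site study (A) and the longitudinal
# study (B) with a two-sided Mann-Whitney U-test.
from scipy.stats import mannwhitneyu

ALPHA = 0.05 / 3  # Bonferroni correction for three question groups (~0.017)

study_a = {"ease_of_use": [4.2, 5.0, 4.6, 5.4, 4.0, 4.8, 5.2, 4.4, 5.0, 4.6]}
study_b = {"ease_of_use": [6.2, 6.8, 5.8, 6.4, 6.0, 6.6, 5.6, 6.2]}

for group in study_a:
    _, p = mannwhitneyu(study_a[group], study_b[group], alternative="two-sided")
    print(f"{group}: p = {p:.4f}, significant at corrected level: {p < ALPHA}")
```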
Adherence evaluation
To analyze adherence to the study protocol, the number of video recordings made by each subject during the study period was counted. Adherence is calculated by dividing the total number of videos recorded by the number of planned recordings. Notably, no subject recorded multiple videos in a single week, so no recording was redundant with respect to the study protocol.
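The adherence metric reduces to a simple ratio, sketched below. The per-subject counts are hypothetical values chosen for illustration, not the study’s actual recording counts.

```python
# Adherence metric: total recordings divided by total planned recordings
# (12 weekly videos per subject over the three-month study period).
PLANNED_PER_SUBJECT = 12

def adherence(recorded_counts, planned_per_subject=PLANNED_PER_SUBJECT):
    """Overall adherence across subjects as a fraction in [0, 1]."""
    planned = planned_per_subject * len(recorded_counts)
    return sum(recorded_counts) / planned

# e.g. eight subjects with these hypothetical weekly-recording counts:
counts = [12, 11, 10, 12, 4, 8, 7, 7]
print(f"adherence = {adherence(counts):.0%}")  # adherence = 74%
```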
Results
The study initially included 11 subjects, all diagnosed with CD. Three subjects did not continue after the initial assessment and never used the app; they were therefore excluded from further consideration in this article. The remaining subjects’ ages ranged from 43 to 73 years (median 60.5 years), and they included five women and three men. Two of the subjects used their smartphones less than daily, that is, less than once a week or only a few times per week; the rest used them on a daily basis. Nearly all described their smartphone experience as moderate (three subjects) or good (four subjects); only one subject reported poor experience with smartphones.
The results of the G-MAUQ from the initial study (A) and the current study (B) in this work are shown in Figure 3. The mean and standard deviation of the answers are given (encoded numerically) in Table 1. A Mann–Whitney U-test shows significant differences in ease-of-use and usefulness scores between both studies. The p-values are as follows: ease-of-use (p = 0.008), interface and satisfaction (p = 0.177), and usefulness (p = 0.010). The median ease-of-use score for B is significantly higher than A, indicating better performance in this domain. However, the median usefulness score for B is significantly lower than for A. Question Q9 (I feel comfortable using this app in social settings), which was rated a median of 5 in A, was rated a median of 7 in B. This suggests that the subjects in B feel more comfortable using the app in their home environment than the subjects from A, who tested it in a clinical setting.

Comparison of German mHealth App Usability Questionnaire (G-MAUQ) results from an initial on-site study (A) and the longitudinal study (B), sorted by question groups “ease-of-use,” “interface and satisfaction,” and “usefulness.” The significance was calculated via a Mann–Whitney U-test.
Mean and standard deviation (std) of the responses from the German mHealth App Usability Questionnaire (G-MAUQ) in the initial on-site study (A) and the current study (B). The answer options were converted into a score, from the worst possible rating of the respective question (1) to the best (7).
The time stamps of the video uploads in B allow conclusions to be drawn about usage behavior. Figure 4 shows the times of the video uploads relative to the time elapsed since the study began. In total, 74% of all intended videos were recorded by the subjects who continued with the study after the acquisition, and 50% of the subjects made their last recording in the final scheduled week. All video recordings considered for the adherence analysis fell within the intended time span of 12 weeks from each subject’s individual start. Two subjects made an additional video recording in the 13th week, which was not included in the adherence analysis as it was not scheduled.

Relative time passed after the subjects’ individual start of the study for each video recorded. A total of 74% of all intended video recordings have been performed.
Discussion
In this study, we evaluated the usability of and adherence to the Move2Screen app in two settings: an initial on-site study (Study A) and a subsequent three-month home-based study (Study B). Overall, the results indicate that the app is well accepted by users. Notably, significant improvements in ease-of-use were observed between the two studies, and the higher ratings on item Q9 (I feel comfortable using this app in a social environment) suggest that participants felt more comfortable using the app in their home environment than in a clinical setting. Although the usability questionnaire was administered at the two-week mark, the positive trends suggest sustained acceptance.
These positive experiences are qualified by parts of the G-MAUQ results. While the G-MAUQ indicates a significant increase in the “ease-of-use” domain and relatively stable ratings in the “interface and satisfaction” question group, it shows a significant decline in the “usefulness” ratings in Study B. The perceived “usefulness” of the Move2Screen app in the home environment (Study B, average agreement of 4.9 out of 7) declined by about 15% compared to on-site use (Study A).
We do not believe that the minor design changes made to the app following Study A had a significant influence on the evaluation outcomes. Rather, we attribute the observed differences to variations in the study settings.
The comparatively lower “usefulness” evaluation in both studies may stem from participants not fully recognizing the app’s potential, as its intended benefits of symptom monitoring and clinical decision support were not yet accessible. Due to the lack of EU Medical Device Regulation (MDR) certification, the present study did not integrate clinical decision-support functionalities, and clinicians did not actively use the recorded data to adjust therapy.
A possible explanation for the decline in perceived usefulness in Study B lies in the imbalance between user effort and perceived benefit. While both studies lacked clinical integration due to pending MDR certification, Study B required sustained engagement over three months, whereas Study A involved a single 30-minute session. According to the contrast model from Zentall, 20 greater effort tends to elevate expectations of reward. The higher investment required in Study B may have amplified such expectations, resulting in greater disappointment when no tangible benefit was perceived.
The observed adherence rate of 74% among the eight subjects who continued after the first on-site acquisition further indicates the app’s potential for long-term use outside clinical settings. A critical point is the dropout of three participants, which may indicate that adherence was in fact lower than reported. However, these subjects did not participate in the app evaluation using the G-MAUQ and did not respond to follow-up inquiries, leaving the reasons for their non-participation unclear. As they did not participate in any aspect of the study, they were classified as natural dropouts; consequently, their non-participation cannot conclusively be attributed to the app itself.
An investigation into whether the recorded videos are not only sufficient to capture the presence of symptoms, but are also of high enough quality to detect subtle changes in motor function over an extended time span, remains to be conducted. This is particularly relevant in the context of monitoring treatment effects, such as gradual improvements or fluctuations following BoNT injections. Due to regulatory constraints, no clinical evaluation has been performed on the recorded data yet.
Some technical issues were experienced: feedback from two subjects indicated problems such as poor visibility or missing notifications, and for another subject the app had reportedly disappeared, an issue whose cause could not be determined. Such issues could reduce the perceived reliability and overall user experience and negatively impact adherence. While these occurrences were limited to the three affected users, they highlight potential areas for technical optimization. For example, missing notifications may be due to Android’s system-level battery optimization or background activity restrictions, which can disable notifications for infrequently used apps. To address this, notification permissions and battery optimization settings could be explicitly reviewed and configured during the on-site installation process. Furthermore, additional reminder mechanisms, such as calendar-based alerts, may further support adherence.
The small sample size (n = 8) restricts the generalizability of the findings. While our results provide initial insights, a larger cohort would be necessary to confirm the robustness and broader relevance of our results. Furthermore, including a healthy control group would provide a baseline for usability and adherence, helping to identify whether difficulties are disease-related or inherent to the app itself.
Conclusion
The Move2Screen app shows considerable promise as a tool for remote, home-based symptom monitoring in CD. The significant improvements in ease-of-use and the high adherence rates observed suggest that the app could be a feasible option for long-term use outside the clinical setting. Nevertheless, the decline in perceived usefulness highlights current limitations—particularly the lack of integration into therapeutic decision-making processes due to pending regulatory certification.
Although the potential for clinical decision support was not explored in this study, we regard it as an essential direction for future work. One key challenge is the large amount of video data generated, which is difficult to analyze comprehensively within the limited time available to clinicians. Manual inspection would be both time-consuming and costly. Thus, we intend to leverage machine learning techniques to automate core components of the analysis process. Such automatic analysis could be used to prioritize cases for manual review. Given that the therapeutic effects of BoNT are often subtle, algorithms capable of flagging large deviations from a patient’s baseline would help clinicians identify relevant recordings more efficiently.
Future work should focus on technical refinement, expanding the study cohort to include both patients with the disease and healthy controls, extending the study duration, and integrating decision support into clinical practice. We consider these steps essential for fully leveraging this digital health solution to deliver more personalized and continuous care for patients with CD. Continued development of Move2Screen may enhance clinical monitoring, support more informed therapeutic decisions, and contribute to improved management of CD.
Footnotes
Acknowledgements
We thank all participants of the user study for their time and effort. We acknowledge financial support by the German Federal Ministry of Education and Research (01ZZ2007). We acknowledge financial support by Land Schleswig-Holstein within the funding programme Open Access Publikationsfonds.
Ethical considerations
We have an ethics vote from the institutional ethical committee of the University of Luebeck (reference number: 2023-210).
Consent to participate
All participants provided informed consent prior to study inclusion. Participants were informed about study objectives, procedures, risks, benefits, and their rights regarding data protection and withdrawal. Consent specifically covered the secure collection, transmission, storage, and analysis of personal and health-related data, including smartphone-recorded videos documenting cervical dystonia movements. Data security measures included encryption, restricted researcher access, secure storage for up to 10 years, and subsequent deletion. Participants could withdraw consent at any time without negative consequences, including immediate stop of data collection and deletion. The study protocol and informed consent materials were approved by the responsible Ethics Committee.
Author contributions
Rica Schulze and Roland Stenger were involved in study design, study conduction, software development, data curation, and writing – original draft. Sebastian Loens and Tobias Baeumer were involved in study design, study conduction, and writing – review and editing. Sebastian Fudickar was involved in study design, writing – original draft, and writing – review and editing.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the German Federal Ministry of Education and Research (01ZZ2007).
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and publication of this article.
Data availability statement
The datasets generated and analyzed during the current study are not publicly available due to the absence of participant consent for public data sharing. Subjects participating in this study provided informed consent limited to the use of their data strictly for research analysis by the study investigators.
Guarantor
Roland Stenger
Permissions
All images presented in this article were either created by the authors or derived indirectly from a repository under the MIT License, thus requiring no additional permissions.
