Abstract
Retrospective alcohol use data are prone to recall bias, a limitation that could be addressed with real-time ecological momentary assessment (EMA) tools. We aimed to (1) introduce a simple (single-click) EMA methodology for collecting real-time alcohol use data, and (2) investigate the EMA methodology’s performance relative to established alcohol use data collection tools. In March–April 2021, we sampled undergraduate students (n = 84) and collected a week of alcohol use data. Participants entered their real-time drinking start times using our EMA methodology, self-reported their drinking details in daily surveys, and a subsample recorded their breath alcohol concentration (BrAC) using smart breathalyzers. We estimated the accuracy of our EMA methodology in collecting alcohol use data relative to data collected by daily surveys and breathalyzers. Overall, 199 drinking events were recorded with the EMA methodology. Numbers of drinks recorded with the EMA methodology were correlated with self-reported daily surveys (r = .82, p < .001) and BrAC readings (r = .69, p < .001). Sensitivity and specificity of the EMA methodology in detecting heavy drinking relative to daily surveys were 82% (95% CI [67%, 92%]) and 97% (95% CI [85%, 100%]), respectively. These were 74% (95% CI [64%, 83%]) and 92% (95% CI [85%, 96%]) for binge drinking. Similar results were found when we used breathalyzers as the reference standard test. We developed an EMA methodology for collecting real-time alcohol use data (alcohol drinking start-time, frequency, magnitude, patterns, and pace). Our findings support the utility of our EMA methodology in collecting alcohol use data among college students.
Introduction
Excessive alcohol use is a prevalent, modifiable risk behavior, especially among young adults and college students (Substance Abuse and Mental Health Services Administration, 2019). In 2019, 53% of full-time college students in the United States aged 18 to 22 reported drinking alcohol in the past 30 days, 33% reported binge drinking, and 8% reported heavy drinking (Substance Abuse and Mental Health Services Administration, 2019). Alcohol misuse, described as excess daily and/or total alcohol consumption (Centers for Disease Control and Prevention, 2021a; National Institute on Alcohol Abuse and Alcoholism, 2007), is linked to multiple preventable outcomes and diseases, from unintentional injuries, violence (such as homicide and suicide), and risky sexual behaviors to a range of chronic and infectious diseases (Centers for Disease Control and Prevention, 2021a; Danaei et al., 2009). For public health professionals, researchers, epidemiologists, and others who work to support young adult health, it is important to be able to measure alcohol use accurately. Such accuracy is essential when monitoring prevalence and trends of alcohol use, studying correlates and outcomes of misuse, and developing interventions to reduce misuse.
Alcohol use can be measured using objective approaches (e.g., biological measurements, breathalyzers, transdermal alcohol monitors [Fairbairn & Bosch, 2021]) or subjective approaches (e.g., self-reported questionnaires, interactive voice response, or interviews [Parker et al., 2018; Richter & Johnson, 2016]). The most common, affordable, and convenient method of alcohol use data collection is retrospective self-report (Del Boca & Darkes, 2003; Richter & Johnson, 2016). In general, self-reports, particularly single-item alcohol frequency or quantity screeners (Toner et al., 2019), are considered valid data sources for alcohol use measurement (Del Boca & Darkes, 2003). However, several factors may affect the precision of data collected via self-report. For example, self-reports are prone to measurement error due to recall bias, especially given the cognitive distortion linked to alcohol consumption (Del Boca & Darkes, 2003; Weissenborn & Duka, 2003). Memory errors are more likely to occur as the time between drinking and reporting increases (Ekholm, 2004; Gmel & Daeppen, 2007). Among measurement tools that collect recall data, a daily estimation approach, where participants report their previous day’s consumption in daily increments during the data collection period (daily survey), is considered more accurate because it minimizes recall time to <24 hr (Del Boca & Darkes, 2003).
However, even daily surveys might not be free from recall bias. It is known that alcohol use, even at low doses, disrupts functioning of complex tasks (Field et al., 2010). In such cases, recall bias is expected to have a larger influence on alcohol use magnitude data (e.g., number of drinks) than on frequency data (e.g., number of drinking events) (Laforge et al., 2005). While this detail may seem comparatively trivial, accuracy of this number is imperative for identifying certain drinking patterns, such as binge drinking, which are defined by the number of drinks taken within a specific timeframe—so even an error of a single drink might result in incorrect binary classification (e.g., binge drinking vs. not binge drinking).
Ecological momentary assessment (EMA) of alcohol use is the real-time, repeated evaluation of individuals’ drinking behavior in their naturalistic drinking environment (Shiffman, 2009). Strengths of EMA methodologies include recording dynamic patterns of alcohol drinking, removing measurement error linked to recall bias, and measuring alcohol use in an ethically sound and ecologically valid manner (Shiffman, 2009; Shiffman et al., 2008; Wray et al., 2014). Most EMA methodologies use time-based prompts (regular or random) to capture alcohol use data through screen-touch or text responses (Beckjord & Shiffman, 2014; Shiffman, 2009). Still, completing EMA procedures can be burdensome for participants, particularly if it involves responding to multiple questions and performing complex tasks (e.g., counting a number of drinks). EMA methodologies that capture data using single clicks may reduce participant burden. Lastly, despite the availability of EMA methodologies in the field (Carpenter et al., 2019, 2022; Miranda et al., 2014; Stevenson et al., 2019), some with high compliance among participants (Miranda et al., 2014), there is a lack of validation evidence for these approaches.
Objectives
We aimed to (a) introduce a simple single-click EMA methodology for collecting real-time data on alcohol drinking start time, drinking pace, and alcohol use magnitude, and (b) investigate the EMA methodology’s performance in identifying drinking event start times and in detecting excessive alcohol drinking (binge/heavy drinking) relative to the established alcohol use measurement tools (daily surveys and breathalyzers).
Methods
Study Setting and Design
For this study, we used data from a parent study that aimed to understand the accuracy of a new generation of wearable alcohol monitors in measuring alcohol consumption within naturalistic drinking environments (Kianersi et al., 2023). Here, we only used the EMA methodology, breathalyzer, and daily survey alcohol use data collected in the parent study. The Indiana University Human Subjects and Institutional Review boards reviewed and approved our study protocol (#2012949660). When relevant, we followed the STARD 2015 reporting guidelines (Bossuyt et al., 2015; Cohen et al., 2016).
We conducted the study at Indiana University-Bloomington (IUB). IUB is a large Big Ten state school in Monroe County, Indiana, with an annual undergraduate student enrollment of ~33,000. The IUB student drinking profile is similar to that of other large universities (Cantor et al., 2015; Wiersma-Mosley et al., 2020). We used a prospective study design to collect self-reported alcohol use data from participants over 1 week.
Participants
Eligibility criteria for the parent study were as follows: (a) aged 18 years or older, (b) enrolled in IUB courses during the Spring 2021 semester, (c) living in Monroe County, IN, (d) self-reported alcohol drinking at least once a week, (e) good general health, (f) owning an iPhone (a compatibility requirement for the wearable alcohol monitor), and (g) giving electronically signed informed consent. Exclusion criteria were: (a) using a medicine that was contraindicated with alcohol use, and (b) pregnancy or breastfeeding.
In March-April 2021, we identified potentially eligible participants using two sampling techniques: (a) one-stage random cluster sampling, and (b) a network sampling technique known as acquaintance sampling (Cohen et al., 2003). Clusters were the IUB General Education classes, which are required for all IUB undergraduate degree-seeking students. A total of 339 classes were available. Using the Pandas library in Python, we conducted simple random sampling with replacement on these classes, selecting 56 for invitation. We sent study introduction emails to instructors and asked them if they were willing to forward our study recruitment emails to their students. A total of 17 instructors agreed and shared the study invitation email with all students in their classes. The study recruitment email included a survey link to an eligibility form. If eligible, respondents were directed to an online consent form followed by the baseline survey. In the baseline survey, randomly sampled participants from the one-stage cluster sample were asked to nominate 2 to 3 of their friends with whom they usually “hung out” on a typical night of alcohol drinking. We then sent additional recruitment emails to the nominated friends (i.e., the sample of acquaintances). The same data were solicited from all participants; nominated friends were not asked to nominate others.
Study Procedures
After providing informed consent, participants filled in a baseline survey about demographics and alcohol consumption patterns (Figure 1). Using this survey, participants also scheduled a baseline visit. At their baseline visit, we described the study procedures and helped participants set up our EMA tool on their smartphones for recording real-time drinking patterns (described in detail below). Participants received their alcohol monitor wristbands and wore them throughout the study. We provided breathalyzers to 25 participants, chosen via simple random sampling with the Pandas library in Python. If a selected participant missed the baseline visit, the breathalyzer was given to the next attendee. Starting the day after their baseline visit, participants completed seven consecutive daily surveys; those who had had a drink on the previous day self-reported their drinking start time and number of alcoholic drinks. Participants returned the breathalyzers (and the wristbands) and completed an endline data quality control survey at an endline visit. Participants were compensated with a maximum of $75 for completing all study procedures; they received $3 for submitting each daily survey, $3.50 for recording their drinking using the EMA app, and $20 for recording their breath alcohol concentration (BrAC) using the breathalyzers. Other payments were relevant to the parent study procedures (e.g., wearing the alcohol monitor wristbands). Of note, compensation was not linked to drinking behavior; for instance, participants received $20 even if they had a BrAC reading of 0.000%.

Study procedures.
Baseline Characteristics
We collected data on the following variables in the baseline survey: age (18 to ≥21 years), sex at birth (female/male), race (white/non-white), year in school (freshman to senior), residence (off-campus/on-campus), and annual income, including money from employment, gifts from family, scholarships, and other means (≥$25,000/<$25,000) along with Greek membership (yes/no). The Greek system in U.S. colleges is a network of student-led social organizations. Membership in these social organizations is associated with increased alcohol consumption (Borsari et al., 2009; Kianersi et al., 2022). Further, we collected data on the average number of drinking days in a week (0–7 days) and the average number of standard alcoholic beverages consumed in a typical night of drinking (0.5–50 standard drinks).
EMA Methodology
We used an electronic data capture system, REDCap (Harris et al., 2019; Harris et al., 2009), to collect timestamp data. A link to the survey’s web address was saved as an icon (similar in appearance to a smartphone app) on each participant’s iPhone’s Home screen during their baseline visit (Figure 2, Panel A and B). The survey had a “Now” button that participants could tap to record the current date and time. At the baseline visit, we demonstrated how to use the survey. We asked participants to open the survey and tap on the “Now” button every time they started drinking a new standard alcoholic beverage to record their drinking start time for each drink (Figure 2, Panel C). The EMA methodology recorded two types of timestamps, one when the survey was opened and one when the “Now” button was clicked. In the data cleaning phase, we removed the drinking timestamps that were submitted more than 2 hr after opening the survey to make sure the analyzed drinking timestamps were recorded in real time and with no recall (n = 9 drinking timestamps were removed).
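This cleaning step can be sketched as follows; it is a minimal illustration with hypothetical variable and function names, not the study code:

```python
from datetime import datetime, timedelta

# Per the cleaning rule: "Now" clicks submitted more than 2 hr after the
# survey page was opened are treated as delayed entries and dropped.
MAX_LAG = timedelta(hours=2)

def filter_real_time(records):
    """Keep only (opened_at, clicked_at) timestamp pairs in which the
    "Now" button was clicked within 2 hr of opening the survey."""
    return [(opened, clicked) for opened, clicked in records
            if clicked - opened <= MAX_LAG]
```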

EMA methodology setup on participants’ smartphones.
Number of Standard Drinks and Drinking Event Start Time Recorded in EMA Methodology
Each timestamp in the EMA methodology dataset represented the drinking start time for one standard drink collected in real time. For each participant, we classified the timestamps within the same day as one drinking event; standard drinks consumed after midnight were grouped with those consumed before midnight. “Number of standard drinks” was the total number of timestamps from the same drinking event. The first timestamp in each drinking event was the drinking event start time. Participants could not review their previously recorded timestamps.
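The grouping rule above can be sketched as follows. Note that the cutoff hour used to attach after-midnight drinks to the previous evening's event is our assumption for illustration; the text does not state an exact rule:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Assumed cutoff: timestamps before 06:00 are attributed to the previous
# day's drinking event (hypothetical value, chosen for illustration).
CUTOFF_HOUR = 6

def group_into_events(timestamps):
    """Group one participant's drink timestamps into drinking events.

    Returns {drinking_day: sorted list of timestamps}. The number of
    standard drinks in an event is the length of its list, and the
    event start time is its first timestamp.
    """
    events = defaultdict(list)
    for ts in sorted(timestamps):
        day = (ts - timedelta(hours=CUTOFF_HOUR)).date()
        events[day].append(ts)
    return dict(events)
```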
Daily Surveys
We contrasted the EMA data with data from “the best available method[s] for establishing the presence or absence of the target condition” (Bossuyt et al., 2015), which in this case were (a) daily surveys (subjective) and (b) breathalyzers (objective). Every day at noon (12:00 p.m.) during the data collection week, we sent participants a web page address (URL) linked to a daily survey about their previous day of drinking (SFigure 1 in Supplemental File 1). Each participant received a unique URL. Throughout the day, we sent reminder emails and text messages to daily survey non-responders. We recorded the timestamps at which daily surveys were sent to participants and at which participants submitted them, and calculated the time difference between the two.
In the daily survey, we asked participants if they had any alcoholic drinks on the previous day (Response options: Yes or No). Participants who answered “Yes” to this question could see and answer two follow-up questions about: (1) the alcohol drinking event’s start time, and (2) the number of standard alcohol drinks consumed in that event. An alcohol drinking event was defined as the consumption of 0.5 or more standard alcohol drink(s) in a day. Participants could report one drinking event in each daily survey. A standard drink was defined as a drink that contained 14 grams (0.6 fluid ounces) of alcohol, for example, 12 fl oz regular beer or 5 fl oz table wine (National Institutes of Health, 2021). A picture of common standard alcoholic beverages was included in the survey.
Breathalyzers
We collected objective alcohol use data using smart BACtrack C6™ keychain breathalyzers from a subsample of 25 participants. These small portable devices can estimate BrAC level through exhaled breath and store this estimate and the timestamp of the BrAC reading on the participant’s phone via Bluetooth. At the baseline visit, we helped participants connect the breathalyzers to their phones and provided instructions on how to use the device. We asked participants to record their BrAC level up to four times per drinking event, every 20 to 30 min after their last drink and meal. In our analysis, we used the maximum recorded BrAC estimate for each drinking event.
Calculated Measures
Drinking Event Start Time Comparison
This was the absolute time difference between the drinking event start times recorded in real time with the EMA methodology and those self-reported in the daily surveys. The variable could range from 0 to 1,440 min (24 hr), with a value of 0 implying that the start times recorded by the two data collection tools matched exactly (less than 1 min of variability).
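As a minimal sketch (function and parameter names are ours), the calculation amounts to:

```python
from datetime import datetime

def start_time_difference_min(ema_start, survey_start):
    """Absolute difference, in minutes, between the EMA-recorded and the
    self-reported drinking event start times (range: 0 to 1,440)."""
    return abs((ema_start - survey_start).total_seconds()) / 60
```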
Excessive Alcohol Use
Binge drinking: Binge drinking is a drinking pattern that brings an individual’s BrAC level to ≥0.08 g/dl (National Institute on Alcohol Abuse and Alcoholism, 2004). For breathalyzer data, a drinking event was coded as binge drinking if a participant’s BrAC level reached ≥0.08 g/dl in that event. For daily survey and EMA methodology data, a drinking event was coded as binge drinking if men reported consuming ≥5 standard drinks or women reported having ≥4 standard drinks in that event (National Institute on Alcohol Abuse and Alcoholism, 2004).
Heavy drinking: We used daily survey and EMA data to estimate heavy drinking, defined as consumption of 8 or more drinks for women or 15 or more drinks for men during the data collection week (Centers for Disease Control and Prevention, 2021b).
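The two classifications follow directly from these definitions; the sketch below uses our own function and parameter names:

```python
def is_binge_self_report(n_drinks, sex):
    """Binge drinking from EMA/daily-survey counts: >=5 standard drinks
    in one event for men, >=4 for women (NIAAA, 2004)."""
    return n_drinks >= (5 if sex == "male" else 4)

def is_binge_brac(max_brac_g_dl):
    """Binge drinking from breathalyzer data: peak BrAC >= 0.08 g/dl."""
    return max_brac_g_dl >= 0.08

def is_heavy_weekly(total_weekly_drinks, sex):
    """Heavy drinking over the data collection week: >=15 drinks for
    men, >=8 for women (CDC, 2021b)."""
    return total_weekly_drinks >= (15 if sex == "male" else 8)
```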
Statistical Methods
We first calculated summary measures of the time comparison variable. For continuous measures, we estimated Pearson’s correlation coefficient to quantify the linear correlation between the alcohol drinking recorded in real time through the EMA methodology and self-reported via daily surveys, and we calculated the mean absolute error (MAE) for the same paired observations. For categorical measures, we reported true positive, true negative, false positive, and false negative values, along with sensitivity and specificity. We performed a complete case analysis: in cases where a test value was missing, the observation containing that value was removed from that analysis. We obtained exact 95% confidence intervals (CIs) for the sensitivity and specificity measures. We further assessed the correlation between EMA methodology data and the objective BrAC data. Sample size was determined by the parent study’s a priori power analysis; no power analysis was performed specific to the aims of this study. Data processing, visualization, and analysis were completed in Python (version 3.7.6, Python Software Foundation, Beaverton, OR, US) (Van Rossum & Drake Jr, 1995). We used SAS software, Version 9.4 of the SAS System for Windows (Cary, NC, USA), to obtain the exact CIs. Study materials for reproducing the EMA methodology are included as supplementary materials (Supplemental Files 2 and 3). Code used in the validation analysis is available by emailing the corresponding author.
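For the categorical analyses, sensitivity and specificity with exact (Clopper-Pearson) 95% CIs can be computed from the 2 × 2 counts. The bisection-based interval below is a standard-library sketch of what an exact procedure (such as the one in SAS) provides; it is illustrative, not the study code:

```python
from math import comb

def _binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05):
    """Exact two-sided CI for a binomial proportion k/n, found by
    bisection on the binomial tail probabilities."""
    def solve(cond):
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            if cond(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    # Lower bound: p with P(X >= k | p) = alpha/2 (0 when k = 0).
    lower = 0.0 if k == 0 else solve(
        lambda p: 1 - _binom_cdf(k - 1, n, p) < alpha / 2)
    # Upper bound: p with P(X <= k | p) = alpha/2 (1 when k = n).
    upper = 1.0 if k == n else solve(
        lambda p: _binom_cdf(k, n, p) >= alpha / 2)
    return lower, upper

def sensitivity_specificity(tp, fp, fn, tn):
    """Point estimates and exact 95% CIs from 2x2 table counts."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return ((sens, clopper_pearson(tp, tp + fn)),
            (spec, clopper_pearson(tn, tn + fp)))
```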
Results
Participants
In the random cluster sample, instructors’ response rate was 30%. Overall, 84 students participated in our study (completed baseline visit), n = 46 from the random cluster sample (conservative estimated response rate = 17%) and n = 38 from the sample of friends (response rate = 52%). Because students in the cluster random sample were recruited through their instructors, we do not have a record of who received the study invitation. Therefore, when calculating the response rate for the cluster random sample, the denominator (the overall number of invited students in the random cluster sample) was a conservative estimate based on the maximum possible number of students according to IUB’s published class sizes. Figure 3 shows the study flow diagram. SFigure 2 (in Supplemental File 1) represents the study STARD flow diagrams for binge and heavy alcohol drinking measures.

Study flow diagram.
Descriptive data
Demographics: Participants were on average 19.7 (SD = 1.2) years old; the majority were female (70%), white (73%), off-campus residents (70%), and not Greek-affiliated (70%), and most reported an annual income of less than $25,000 (88%). First-year students made up the largest class group (32%). In the baseline online survey, participants reported drinking on average 2.5 days per week (Median: 2, IQR: 1) and on average 5.1 drinks per drinking night (Median: 5, IQR: 3; Table 1).
Baseline Characteristics of the Study Participants, Indiana University Bloomington Undergraduate Students, March 2021.
Note. IUB: Indiana University Bloomington.
EMA app: Overall, 78 of the 84 participants recorded at least one drinking timestamp with the EMA methodology. A total of 857 timestamps were recorded on the app. Each timestamp in the EMA methodology corresponded to one standard drink. There were 199 recorded timestamps corresponding to drinking event start times (i.e., they were the first drink in an event). The median drinking event start hour in the EMA methodology data was 8:00 p.m. (Q1 = 5:00 p.m., Q3 = 9:00 p.m.). On average, participants reported drinking four standard drinks in each drinking event (Median: 3, IQR: 4). The median time difference between two consecutive drinks in a drinking event (drinking pace) was 26 min, with an interquartile range of 45 min (SFigure 3 in Supplemental File 1). Prevalence of binge drinking events and heavy drinking participants calculated using EMA methodology data were 39% (78 out of 199 drinking events) and 42% (33 out of 78 participants), respectively.
Daily surveys: All 84 participants completed at least one daily survey. A total of 577 daily surveys were sent out to the 84 participants. On average, participants submitted their daily surveys around 2 hr after we sent them the links, at 1:40 p.m. (Median: 12:00 p.m.). Three participants did not receive all the daily survey invitations because they started their week of data collection on the third day of the week. Participants completed 568 (97.6%) of the daily surveys. Overall, 213 drinking days were reported in the 568 completed daily surveys. One participant missed entering the drinking start time for one drinking day. Daily surveys for 21 of the 213 drinking days were submitted more than a day after the drinking event. These late submissions were excluded from the drinking start time comparison analysis because their recall period was more than 24 hr. However, since most of these late submissions (n = 17) were only 1 day overdue, we included them in our sensitivity and specificity analyses; previous studies show that recall bias is less likely to occur for alcohol use quantity measurements when the recall period is less than 2 days (Ekholm, 2004). Participants missed entering the a.m./p.m. value for the drinking start time in 39 daily surveys. All but one of the non-missing drinking event start times were in the afternoon, so we replaced the missing a.m./p.m. values with p.m. The median drinking event start hour in the daily survey data was 8:00 p.m. (Q1 = 5:00 p.m., Q3 = 9:00 p.m.), mirroring the EMA app. Participants reported drinking a total of 1,106 standard drinks in the week of data collection. On average, participants reported drinking 5.2 standard drinks in each drinking event (Median: 4, IQR: 5). Prevalence of binge drinking events and heavy drinking participants calculated using daily survey data were 48% (102 out of 213 drinking events) and 49% (41 out of 84 participants), respectively.
A total of 223 drinking events were reported in either the EMA methodology or the daily surveys, while 189 of these were reported in both (188 drinking event start times). Four drinking events were reported in the EMA methodology but not in the daily surveys, and 24 drinking events were reported in the daily surveys but not in the app. Moreover, for 349 submitted daily surveys where participants reported no drinking, we did not identify any drinking event in the EMA methodology data. However, we identified six drinking events in the EMA methodology data where participants reported no drinking in the daily surveys.
Breathalyzers: Twenty-five participants recorded a total of 142 BrAC readings for 52 drinking events. We analyzed maximum BrAC level recorded for each of these events. The median of these BrAC measurements was 0.042%, with an interquartile range of 0.082%. The prevalence of binge drinking events calculated using BrAC data was 37% (19 out of 52 drinking events). Lastly, 48 of these 52 drinking events corresponded to a drinking event recorded with the EMA methodology and were included in the sensitivity/specificity analysis.
Main Results
Drinking event start time comparison: This analysis was conducted for the 188 drinking event start-times reported in both the EMA and daily survey data. The average absolute time difference between real-time EMA methodology records and daily survey self-reports of drinking event start time was 38 min (Median: 16 min, IQR: 33 min; Figure 4). The time difference was zero for 29 (15%) drinking event start times, the self-reported start times in daily surveys were before the recorded start times in the EMA methodology for 125 (66%) events (mean = 51 min, median = 18 min), and the self-reported start times in daily survey were after the recorded start times in the EMA methodology for 53 (28%) events (mean = 36 min, median = 18 min).

Absolute time difference between real-time EMA methodology records and daily survey self-reports of drinking start time.
Correlation Analyses
Pearson’s correlation coefficient between the numbers of drinks consumed in a drinking event recorded using EMA and self-reported in daily surveys was strong and statistically significant (r = .82, p < .001; Figure 5). Moreover, their MAE, which shows the average absolute difference between the two variables, was small (MAE = 1.48). Similarly, the correlation between the number of drinks consumed in drinking events recorded in the EMA methodology and the maximum BrAC level in the drinking events was moderate and statistically significant (r = .69, p < .001). This correlation was slightly weaker when we compared BrAC data with the daily survey data (r = .62, p < .001). For 77 (45%) drinking events, participants recorded the exact same number of drinks in both the EMA methodology and the daily surveys. In 45 (26%) events, participants recorded more drinks in the app, and in 66 (39%) events, participants reported more drinks in the daily surveys.

Correlation analyses on the number of drinks consumed in a drinking event.
Performance findings
Table 2 provides the sensitivity and specificity of the EMA methodology in detecting heavy drinking and binge drinking patterns, compared to daily surveys and breathalyzers. The EMA methodology had a sensitivity of 82% (95% CI [67%, 92%]) and a specificity of 97% (95% CI [85%, 100%]) in detecting heavy drinkers compared to the daily surveys. Moreover, compared to daily surveys, the EMA methodology had a sensitivity of 74% (95% CI [64%, 83%]) and a specificity of 92% (95% CI [85%, 96%]) in detecting binge drinking. We found similar results, with wider 95% CIs, when we used BrAC estimates as the reference standard for detecting binge drinking. Lastly, as an additional analysis, we assessed the daily survey performance in detecting binge drinking using breathalyzer BrAC estimates as the reference standard. Relative to breathalyzers’ BrAC data, the daily survey performance was similar to the EMA methodology performance [sensitivity/specificity (95% CI): 89% (65%–99%)/84% (66%–95%)].
Performance Analysis.
Discussion
This study introduced a single-click EMA tool and methodology that looks and functions like a smartphone app for collecting real-time alcohol use data. We found that our EMA methodology usually produced accurate data when measuring drinking event start times among undergraduate students. Moreover, the correlations between the number of standard drinks collected via the EMA methodology and that collected via breathalyzers and daily surveys were moderate (above 0.6) to strong (above 0.8). Finally, the sensitivity and specificity of the EMA methodology in detecting binge and heavy drinking were high when compared to daily survey data or breathalyzers’ BrAC readings.
Based on our findings, we propose that the EMA methodology described in this study is likely an accurate and valid tool for alcohol use data collection among college students, particularly undergraduate students at large public universities. In our sensitivity/specificity analysis, the performance of our EMA methodology was comparable to daily surveys, an established alcohol use data collection tool (Del Boca & Darkes, 2003; Ekholm, 2004; Gmel & Daeppen, 2007; Graham et al., 2004; Leigh, 2000). The EMA methodology and daily surveys performed similarly in detecting binge drinking events relative to breathalyzers’ BrAC readings, further indicating that the EMA methodology performs as well as daily surveys. We found that the specificity of our EMA methodology in detecting excessive alcohol use was higher than its sensitivity. This makes the methodology a suitable measurement tool in situations where accurately classifying non-excessive drinking patterns is more important than detecting excessive drinking patterns. Lastly, correlation figures suggested that the EMA methodology might perform better when participants consume a few alcoholic drinks in a drinking event (versus several), though we did not statistically explore this due to the limited number of observations.
Our EMA methodology uses one simple single-click approach to capture multiple dimensions of alcohol use in real time, including drinking event start time, frequency, magnitude, drinking patterns, and drinking pace. Most EMA apps and methodologies collect data intermittently throughout the day, mainly by sending scheduled or random EMA surveys to participants, or by issuing prompts at various times (Paolillo et al., 2018; Serre et al., 2012). This approach might not provide the most precise drinking start time. Drinking event start times represent a critical window within the drinking event experience. This variable could be important in validation studies, such as those on transdermal alcohol monitors (Roache et al., 2019; van Egmond et al., 2020), or in studies that aim to develop EMA apps (Chung et al., 2017). These study types need accurate data on drinking event start time when developing their EMA tools. Moreover, drinking start time can capture critical information when evaluating acute effects of alcohol use (Gmel & Daeppen, 2007). Drinking frequency and magnitude are used in epidemiology studies, for instance, to understand the association or causal relationship between alcohol use and acute and chronic conditions (Kianersi et al., 2022; Shield et al., 2014). They are the building blocks of drinking patterns. Drinking pace can be another important measure when assessing certain drinking patterns, such as binge drinking (National Institute on Alcohol Abuse and Alcoholism, 2004). A faster drinking pace could result in a higher BrAC rise. Researchers might be interested in understanding the dynamics and risk factors for increased drinking pace (Groefsema & Kuntsche, 2019; Thrul & Kuntsche, 2015).
An additional benefit of our approach is that it does not require any installation and runs in the user’s internet browser (Safari or Google Chrome). This could potentially improve users’ trust in the EMA “app.” The EMA methodology icon (i.e., the “app” icon) on participants’ phone Home screens (Figure 2) plausibly acted as a reminder to participants to log their drinking start times, because participants could see the icon every time they unlocked their phones. This could have helped reduce the amount of missing data and the need for sending multiple prompts. Such a relationship is presently speculative, as we did not collect data on the factors that led to our high level of data collection success. Similarly, although we did not collect data on user experience with the EMA methodology, participants frequently mentioned at the study visits that it was easy to use. Nonetheless, combining a random/interval ambulatory assessment of alcohol use with our user-initiated, event-based EMA methodology might provide richer and more precise alcohol use data (Trull & Ebner-Priemer, 2020).
Limitations
Our EMA methodology and study design had some limitations. The EMA methodology inherently depends on the availability of smartphones with internet connectivity and sufficient battery charge. While internet access may be constrained among specific populations or in certain regions, recent data from the Pew Research Center show that nearly all people ages 18 to 29 years in the US have a cellphone, and 96% have a smartphone (Pew Research Center, 2021). Further, cognitive distortion due to alcohol consumption might make it difficult for participants to use our EMA methodology. In general, this is a challenge for EMA tools developed for substance use data collection (Shiffman, 2009). However, in our study, the correlation between the EMA methodology data and the more objective BrAC readings was strong. Additionally, in a related study using the same dataset, we identified strong correlations between self-reported measures and objectively collected transdermal alcohol content data (Kianersi et al., 2023). These results further address concerns regarding impression management, as both self-reports and objective, passively collected data produced similar patterns. Another limitation of the EMA methodology is its inability to differentiate non-drinking periods from missing data. If a participant missed recording a timestamp for a drink, this was captured in the EMA methodology data as a non-drinking period instead of a missing data point. It is possible to address this issue, for example, by including a question asking participants whether they are currently drinking and sending random prompts for them to respond to this question. However, this could increase the burden on participants.
Similar to most studies on sensitivity/specificity (Bachmann et al., 2006), our study sample size was small, particularly for BrAC measurements. This could have introduced bias into our test performance estimates and led to imprecise estimates with wide CIs (Cohen et al., 2016). However, this limitation only influenced our findings when the unit of analysis was the participant (heavy drinkers) and not when the unit of analysis was the drinking event (drinking event start times, number of drinks, or binge drinking), because there were more observations in the latter analyses (n = 188). Our study population was limited to IUB undergraduate students, which limits the generalizability of our findings. Nonetheless, because a portion of our study participants were a random sample of IUB undergraduate students, with drinking patterns similar to those of students at other large American universities (Substance Abuse and Mental Health Services Administration, 2019), we expect that our findings are generalizable to other American college students. Moreover, participants entered their drinking start time in real time before self-reporting it in the next day’s daily survey. Therefore, the daily self-reports (reference standard test) might have been guided by the EMA methodology records. This could have caused an overestimation of the EMA methodology’s performance. We addressed this by using a second, more objective reference standard test (breathalyzers). Lastly, our current study used a quantitative design; a qualitative or mixed-methods approach could provide further insight into the feasibility of using this EMA methodology. However, in prior work using the same data, the acceptability and feasibility of our data collection methods, including the EMA methodology, were high (Rosenberg et al., 2023).
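The relationship between sample size and CI width described above can be illustrated with a short sketch. The counts below are hypothetical (the paper's underlying cell counts and its exact CI method are not stated here); the Wilson score interval is used purely as one standard choice for a binomial proportion.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - half, center + half)

# Hypothetical counts: sensitivity = TP / (TP + FN)
tp, fn = 31, 7
sens = tp / (tp + fn)
lo, hi = wilson_ci(tp, tp + fn)
print(f"sensitivity = {sens:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")

# Same point estimate with five times the observations:
# the interval narrows markedly, showing why event-level analyses
# (more observations) yield more precise estimates than
# participant-level analyses.
lo5, hi5 = wilson_ci(tp * 5, (tp + fn) * 5)
print(f"n x 5:       95% CI [{lo5:.2f}, {hi5:.2f}]")
```

With ~38 observations the interval spans roughly 25 percentage points; quintupling the sample at the same point estimate roughly halves the width, which mirrors the contrast between the participant-level and event-level analyses discussed above.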
Conclusion
We developed a simple tool for collecting real-time alcohol use data. This EMA methodology could be used in research settings to collect accurate, real-time data on different alcohol use variables. Moreover, our validation findings support the utility of our EMA methodology for collecting alcohol use data among college students. Alcohol researchers seeking to develop real-time ecological momentary interventions (EMIs) or to evaluate the effects of alcohol on different outcomes may benefit from adopting our EMA methodology or a similar tool.
Supplemental Material
sj-docx-1-sgo-10.1177_21582440241256531 – Supplemental material for Introduction and Validation of an Ecological Momentary Assessment Methodology to Measure Alcohol Use Among College Students by Sina Kianersi, Maria Parker, Christina Ludema, Jon Agley and Molly Rosenberg in SAGE Open
Supplemental Material
sj-docx-2-sgo-10.1177_21582440241256531 – Supplemental material for Introduction and Validation of an Ecological Momentary Assessment Methodology to Measure Alcohol Use Among College Students by Sina Kianersi, Maria Parker, Christina Ludema, Jon Agley and Molly Rosenberg in SAGE Open
Acknowledgements
This research was conducted while Sina Kianersi was at Indiana University School of Public Health-Bloomington. He is now at Brigham and Women’s Hospital and Harvard Medical School and may be contacted at
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the National Institute on Alcohol Abuse and Alcoholism [NIAAA grant # R25DA051249, 2021] and Prevention Insights at the Indiana University Bloomington School of Public Health. NIAAA had no role in the design, analysis, interpretation, or publication of this study. The content is solely the responsibility of the authors.
Ethical Approval
The Indiana University Human Subjects and Institutional Review boards reviewed and approved our study protocol (#2012949660). Respondents were directed to an online consent form.
Data Availability Statement
Study materials for reproducing the EMA methodology are included as supplementary materials (Supplementary File 2 and Supplementary File 3). Data to reproduce the results are available from the corresponding author upon reasonable request.
References
