Abstract
This study compares estimates of health insurance coverage from the American Community Survey (ACS) to those in twelve state-specific surveys. Uninsurance estimates for the nonelderly are consistently higher in the ACS than in state surveys, as are direct purchase insurance estimates. Estimates for employer-sponsored insurance are similar, but public coverage rates are lower in the ACS. The ACS meets some but not all of the states’ data needs; its large sample size and inclusion of all U.S. counties in the sample allow for comparison of insurance coverage within and across states. State-specific surveys provide the flexibility to add policy-relevant questions, including questions needed to examine how health insurance translates into access, use, and affordability of health services.
Passage of the Affordable Care Act (ACA) created new demand for high-quality state-level estimates of health insurance coverage and access. Under most scenarios, health reform requires state implementation, ensuring variation in the strategies employed to establish reform and the effectiveness of policies in increasing access to health insurance coverage. To monitor and evaluate health reform, policy analysts need reliable state-level estimates to compare change across the fifty states and the District of Columbia, as well as to look more specifically within states to evaluate what is and is not working to increase coverage and access to care.
The Census Bureau’s Current Population Survey (CPS) has been the survey of record to document health insurance coverage estimates across states (Blewett and Davern 2006; Short 2001). Since 1982, the CPS has provided yearly estimates of health coverage by type of insurance, and estimates of uninsurance by state and the nation as a whole. The estimates come out in the fall every year and report on the prior year’s health insurance coverage.
The CPS has been criticized over the years for several inadequacies. As early as 1986, health services researchers cited concerns about the health insurance question, which asks respondents to recall the type of coverage they had for the prior calendar year (Swartz 1986). However, because the CPS estimates are similar to the point-in-time estimates of the National Health Interview Survey (NHIS), some analysts have come to use the CPS as a point-in-time estimate (Blewett and Davern 2006; Congressional Budget Office 2003; Fronstin 2000; Lewis, Ellwood, and Czajka 1998). States have other concerns with the CPS, mostly around sample size, the sampling frame (samples from select counties), and the way in which the Census Bureau imputes health insurance coverage questions that have large rates of nonresponse. These flaws are well documented elsewhere (Blewett et al. 2004; Ziegenfuss and Davern 2011), and the imputation routine has been improved (Boudreaux and Turner 2011).
The Census Bureau’s American Community Survey (ACS) was designed to replace the decennial census “long form.” Conducted every year, the ACS added a health insurance coverage question in 2008. Because of its large sample size (approximately three million every year) and because it draws samples from all U.S. counties, the ACS can provide estimates at the state level and at substate levels of geography, such as cities, counties, and even census tracts, by combining multiple years of data (U.S. Census Bureau 2009). 1 The ACS also addresses analysts’ concerns about the CPS estimates in that it measures health insurance at a point in time rather than looking back to the prior calendar year (Kenney et al. 2012).
Yet, many state analysts are still not quite satisfied with the state-level estimates from the ACS. First, its national estimates are surprisingly similar to the annual coverage estimates produced by the CPS, yet there is significant variability when comparing state-level estimates from the two surveys. However, it can be argued that this similarity raises more concerns about the CPS estimates than those from the ACS. In addition, the ACS estimates of direct coverage (coverage purchased by an individual in the private health insurance market) are significantly larger than direct coverage estimates from the CPS and other federal surveys (Boudreaux et al. 2011; Lynch and Boudreaux 2010; Turner and Boudreaux 2010; Turner, Boudreaux, and Lynch 2009). Finally, the ACS lacks questions about health status, barriers to insurance coverage and health care services (e.g., having a usual source of care), and other information needed to evaluate health reform (e.g., affordability).
Over the years, many states have responded to these concerns with federal data sources by developing their own state-specific health access surveys (Blewett and Davern 2006). We have documented that as many as forty-seven states and several territories have conducted at least one survey, and at least sixteen states and one territory have conducted these surveys on a regular basis to provide trend information on access and coverage. 2 States have greater flexibility to alter their surveys to capture policy-relevant indicators such as reports of the affordability of care, experiences seeking health insurance, difficulty accessing a usual source of care, and use of emergency departments (Long, Stockley, and Dahlen 2012; Blewett and Davern 2006). These states have come to rely on their own estimates, yet turn to the ACS and CPS to see how they compare to other states in the nation and to the national average. Most states with long-standing health access surveys have learned how to reconcile and explain discrepancies among state estimates from these various data sources to local policymakers, stakeholders, and media representatives.
In our previous work, we compared estimates of health insurance coverage from the CPS to a group of twenty-four state-specific surveys. We found that the CPS estimates of uninsurance were, on average, 23 percent higher than the estimates from the state-specific surveys (Call, Davern, and Blewett 2007). This paper is an extension of that work, motivated by the general question of how health insurance coverage estimates from the ACS compare to similar estimates from well-established state surveys. We compare state survey and ACS estimates of health insurance coverage, contrast key components of state surveys that differ from the ACS, and discuss possible reasons for the differences in state-level estimates of health insurance coverage. We close by discussing the unique contributions of each survey—the ACS and the state-specific surveys—for monitoring and evaluating the impact of health reform on the distribution of health insurance coverage including the more specific impact on rates of uninsurance.
Method
Data
We identified thirteen states that fielded household surveys in either 2008 or 2009. 3 Health insurance coverage estimates and metadata documentation (e.g., sample frame and size, mode, operational definition of plan types, and response rates) were culled from published reports and state documents where possible and through one-on-one correspondence with state analysts. We report coverage rates for the noninstitutionalized nonelderly population (ages zero to sixty-four). Adequate data were collected from twelve of the thirteen states. For three states (Massachusetts, Minnesota, and Oklahoma), we produced health insurance estimates directly from the microdata.
We also used the microdata from the 2008 and 2009 ACS, which is well documented by the U.S. Census Bureau. Specifically, we used the edited public-use ACS microdata (Lynch, Boudreaux, and Davern 2010) to generate state-level estimates of health insurance for the noninstitutionalized nonelderly population in the same year the state surveys were conducted. Data from all sources were weighted to reflect their respective state populations. Standard error estimates account for the complexity of the sample design.
Measures
Each state survey includes a series of questions listing various sources of health insurance coverage. All the state surveys have a verification question at the conclusion of the coverage list to confirm the lack of coverage for those reporting “no” to each coverage type. Similarly, the ACS coverage question provides a list of insurance types; respondents are instructed to check “yes” or “no” for each type. The ACS does not include a verification question.
State-level coverage estimates were produced with the ACS data following the same specifications used for each state-specific survey. For example, some states define military coverage as public (California, Georgia, Minnesota, New Jersey, Ohio, Pennsylvania, Utah, and Wisconsin); other states define military coverage as employer-based insurance (Colorado, Massachusetts, Oklahoma, and Washington). Several states apply a hierarchical or primary source of coverage rule such that each person is assigned one type of coverage (California, Massachusetts, Minnesota, and Oklahoma) whereas in all other states, individuals covered by multiple plans are counted under each plan, allowing a double counting of coverage type. The ACS data presented here adhere to the state specifications of coverage categories and hierarchy. Standard errors for the ACS were produced using successive difference replication (U.S. Census Bureau 2009).
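The hierarchical assignment rule described above can be sketched in code. This is an illustrative assumption, not any particular state's actual rule: the category names and the ordering below are hypothetical, since each state (California, Massachusetts, Minnesota, Oklahoma) defines its own hierarchy.

```python
# Illustrative sketch of a primary-coverage hierarchy rule: a person who
# reports multiple coverage types is assigned only the single highest-ranked
# type, avoiding double counting. The ordering here is a hypothetical
# example; actual state hierarchies differ.
HIERARCHY = ["employer", "public", "direct_purchase"]

def assign_primary_coverage(reported_types):
    """Return the highest-ranked coverage type a person reported."""
    for coverage in HIERARCHY:
        if coverage in reported_types:
            return coverage
    return "uninsured"  # reported "no" to every coverage type

# Example: a person reporting both public and direct purchase coverage
# is counted once, under public coverage.
print(assign_primary_coverage({"direct_purchase", "public"}))  # -> public
```

Under the alternative (non-hierarchical) convention used by the other states, the same person would instead be counted under both public and direct purchase coverage.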
Analysis
We compared the ACS with each state survey across a variety of dimensions (see Appendix Table A1). The response rate formula was not consistent across data sources, and thus is not perfectly comparable within or across states. In Table 1, we report actual sample size and effective sample size (ESS). The ESS, defined as the ratio of the sample size to the design effect, adjusts for the fact that each survey is based on a different sample design and, therefore, has a different level of precision. The design effect is defined as the ratio of the variance of the estimate, accounting for the complex sample design, to the variance assuming a simple random sample. It is a construct-specific measure of the relative efficiency of the sample compared to a simple random sample of the same size; here, we average the design effect across the four coverage measures (i.e., employer-based, private direct purchase, public insurance, and uninsurance).
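The design effect and ESS arithmetic described above can be sketched as follows; the survey size and variances are hypothetical numbers chosen only to illustrate the calculation.

```python
# Sketch of the effective sample size (ESS) calculation:
#   design effect = variance under the complex design / variance under SRS
#   ESS           = actual sample size / design effect
def design_effect(var_complex, var_srs):
    return var_complex / var_srs

def effective_sample_size(n, deff):
    return n / deff

# Hypothetical survey: 10,000 completes, and the complex-design variance
# of a coverage estimate is twice what simple random sampling would give.
deff = design_effect(var_complex=0.0002, var_srs=0.0001)  # -> 2.0
print(effective_sample_size(10_000, deff))                # -> 5000.0
```

In the paper, the design effect used in Table 1 is the average of four such construct-specific design effects (employer-based, direct purchase, public, and uninsurance), so a survey's ESS can be far smaller than its raw number of completes.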
Sample Size for State Surveys and ACS for Insurance Coverage, Non-elderly (0-64) Population.
Source: 2008 or 2009 American Community Survey and state surveys
Note: Sample size is defined as number of completed surveys used in uninsurance estimate. ACS = American Community Survey.
ESS = effective sample size, which is the ratio of the sample size to the average design effect across all insurance types (employer, direct purchase, public, uninsurance).
Table 2 provides state-specific coverage estimates for each survey for the nonelderly population. In addition to presenting the estimates to show the absolute difference in the estimate per state, we present the relative difference between estimates derived by each survey, with the ACS as the reference ([state survey − ACS] / ACS). Statistical comparisons between the ACS and the state surveys are done using an independent sample t-test; we report significance at the p < .05, p < .01, and p < .001 levels.
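The two comparisons above can be sketched in code: the relative difference with the ACS as reference, and an independent-sample test statistic for the difference between two survey proportions. The estimates and standard errors below are hypothetical, and the z-style statistic shown is one common way to operationalize an independent-sample comparison of estimates from two surveys.

```python
import math

# Relative difference with the ACS as the reference category:
#   (state survey estimate - ACS estimate) / ACS estimate
def relative_difference(state_est, acs_est):
    return (state_est - acs_est) / acs_est

# Test statistic for the difference between two independent survey
# estimates, using their design-based standard errors.
def diff_t_stat(p1, se1, p2, se2):
    return (p1 - p2) / math.sqrt(se1**2 + se2**2)

# Hypothetical example: state survey uninsurance of 12.0% (SE 0.5 points)
# versus ACS uninsurance of 14.0% (SE 0.2 points).
print(round(relative_difference(0.12, 0.14), 3))            # -> -0.143
print(round(diff_t_stat(0.12, 0.005, 0.14, 0.002), 2))      # -> -3.71
```

A statistic of that magnitude would be significant at the p < .001 level reported in Table 2.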
Contrast of ACS and State-Specific Health Insurance Coverage Rates, Non-elderly (0-64) Population.
Source: 2008 or 2009 American Community Survey and state surveys.
Note: Analyses are based on the noninstitutional, non-elderly (0-64) population. ACS = American Community Survey.
*p < .05. **p < .01. ***p < .001.
Results
The ACS is a mixed-mode survey that collects data primarily by mail using the Master Address File as a sample frame, with telephone follow-up, and in-person follow-up for a sample of nonrespondents; the response rate for 2009 is reported as 98 percent (Hefter 2009). The goal of the ACS is to describe basic population demographic information, whereas the state surveys, for the most part, are designed to provide information needed to monitor health insurance and access to health care services. The California Health Interview Survey and Wisconsin Family Health Survey are more comprehensive health surveys, and the Washington State Population Survey covers a wide range of topics that are of interest to state agencies. The state surveys were primarily fielded by telephone with landline frames or dual landline and cell frames (Massachusetts also includes mail and web-based administration); response rates ranged between 16 percent and 57.2 percent, using varying response rate calculations. (See Appendix Table A1 for general information about the surveys.)
Generally speaking, the larger the sample size, the greater the precision of the survey statistics for the population of interest and policy-relevant subpopulations (e.g., coverage rates by age, race and ethnicity, and poverty status). Table 1 compares the actual sample size and ESS per survey per state (i.e., we account for the design effect, adjusting for the complexity of the sample design). The ACS sample size is much larger than that of the state surveys. On average, the ACS sample size is six times larger than the state surveys, ranging from as little as 2.5 times larger in Pennsylvania to 13.9 times larger in Massachusetts. The ACS ESS is nine times larger than the state surveys on average, ranging from 1.5 times larger in Ohio to thirty-seven times larger in New Jersey. State surveys vary in the complexity of their sample designs, with the smallest reduction in sample size due to design effect in Massachusetts and the largest reduction in Pennsylvania. For the ACS, there is also variation in the design effect across the states, diminishing the capacity for generating precise subpopulation estimates.
Table 2 presents the comparison of state estimates of uninsurance, public, employer-based, and direct purchase insurance by survey source. With the exception of Ohio, uninsurance estimates are consistently lower in the state surveys than in the ACS, and the differences are significant for all but three states: Minnesota, New Jersey, and Wisconsin. The smallest relative difference is in New Jersey (−0.8 percent), where the state survey estimate of uninsurance is nearly equal to the ACS (14.1 percent compared with 14.2 percent). By contrast, in Massachusetts, the state survey estimate of uninsurance in 2009 is 3.1 percent compared with 4.9 percent in the ACS (a 1.8-percentage-point difference). While seemingly small, this represents a relative difference of 37.1 percent.
Public coverage estimates are almost always significantly higher in the state surveys when compared to similar estimates in the ACS. The anomaly is in Wisconsin where the state survey estimate is almost 15 percent smaller than the ACS, with the state survey indicating 16 percent of the nonelderly population in Wisconsin enrolled in public insurance programs compared to 18.7 percent in the ACS (p < .001).
For employer-based insurance, there are small absolute and relative differences for both surveys for each state; however, these differences are significant for eight of the twelve (67 percent) state comparisons. Almost half of the state surveys yield significantly higher estimates of employer-based insurance than the ACS, and for three states, the estimates are significantly lower.
The lower segment of Table 2 presents the direct purchase results. Here again, we see large and significant differences for some states but not all. Estimates of direct purchase are consistently lower for the state surveys than for the ACS. The largest discrepancy is seen in Pennsylvania; the state survey estimate for direct purchase is 64 percent lower than the ACS estimate (4.2 percent in the state survey compared with 11.5 percent in the ACS).
On average across the twelve states, the absolute differences between the estimates of coverage from the state surveys and the ACS are small. For example, for public coverage, the estimates are, on average, 2.5 percentage points higher in the state surveys than in the ACS. The story shifts when looking at relative differences, which are large and varied for all estimates except employer-based insurance. On average, state survey uninsurance rates are 13 percent lower than state uninsurance rates from the ACS. Public coverage is approximately 17 percent higher on average for state surveys, whereas state survey estimates of direct purchase insurance are 25.8 percent lower on average than the ACS state estimates.
Discussion and Conclusion
The ACS is the newest addition to the family of federal population surveys that ask questions about health insurance coverage (Davern et al. 2009; SHADAC 2012). The ACS measures insurance at the time of the survey and draws its sample from all U.S. counties, thereby addressing several weaknesses of the CPS. The key advantages of the ACS include its sample size and inclusion of all counties in the sample design within and across states. The ACS sample size is approximately fifteen times larger than that of the CPS and is on average six times larger than the samples in the state-specific surveys. The ACS uses an address frame and mixed-mode data collection strategy that provide greater sample coverage, and therefore better representation of the population of interest. Researchers are able to obtain direct estimates of health insurance for states and for geographic areas within states. Direct state and substate estimates are also available for subpopulations, including low-income children and adults with and without children, as well as estimates by race/ethnicity, work status, and other key socioeconomic indicators (U.S. Census Bureau 2012).
These new state estimates of health insurance coverage from the ACS provide a valuable resource for all states. For those states that conduct their own survey, the ACS, in theory, could replace the need for state-specific health insurance and access surveys. Yet this is not likely to happen any time soon for the thirteen states we approached for this study or for other states launching surveys in 2014. Even with low response rates and a more limited sample frame due to reliance on telephone surveys, the benefits of state-specific surveys continue to outweigh the limitations. Specifically, states value (1) the ability to easily add state-specific, policy-relevant questions to their survey, (2) timely access to microdata for state-specific analysis, and (3) the ability to respond quickly to legislators and other policymakers seeking state-specific information on health insurance, access to insurance coverage, and the relationship between coverage and access to health care services.
Limitations
A limitation of this study is that we are confined to describing patterns when comparing the ACS to state surveys; we can only speculate about the reasons behind the surveys’ differences in insurance coverage estimates. Further research is needed to understand the insurance distribution differences that occur in the ACS by survey mode (e.g., mail, phone, in-person), which may affect comparability with state-specific surveys that are primarily telephone based. In addition, understanding the reasons behind the observed absolute and relative differences between the ACS and state-specific surveys is made difficult because each state survey varies on a range of factors that could affect coverage estimates (e.g., survey design, sample design, sample coverage, timing of data collection that may reflect seasonal variation in coverage, survey vendor, response rates, and data editing and processing).
What About the Comparison of the Estimates?
With the exception of the estimate of employer-sponsored insurance, there is great variation in state health insurance coverage estimates depending on the survey source (the state-specific survey or the state estimate from the ACS). Yet some patterns do emerge. Similar to the CPS (Call, Davern, and Blewett 2007), the ACS estimates of uninsurance are consistently higher than the estimates from state surveys. For our twelve states, the ACS state estimate of uninsurance was on average almost 2 percentage points higher than the respective state survey estimate (ranging from 4.3 percentage points higher to 0.4 percentage points lower). Uninsurance estimates may be lower in state surveys due to sample coverage issues, particularly for those states with lower response rates and those that do not incorporate cell phone sample frames or weight the data to account for the absence of cell phone households (Call et al. 2011; Lee et al. 2010). The ACS point-in-time measurement is an improvement over the current CPS questions that ask about coverage in the past calendar year. More specifically, the ACS provides a weighted average annual point-in-time measure of insurance coverage; as such, its estimates reflect a measurement period that differs from the data collection period for state-specific survey estimates, which may explain some of the discordance. Finally, although the ACS has no verification question for those responding “no” to all insurance types, it could be argued this is not necessary in a mail form such as the ACS, where the respondent can concurrently review and select from an array of coverage types.
Estimates of public coverage are generally higher in the state surveys than the state-level public coverage estimates from the ACS. State surveys typically include separate questions for each public program offered and include their state-specific name (e.g., BadgerCare, Medi-Cal, MinnesotaCare, Medical Assistance), which is thought to improve measurement (Loomis 2000). However, the evidence concerning the efficacy of including state-specific names is somewhat mixed (Eberly, Pohl, and Davis 2009; Pascale 2010). In addition to having a comprehensive list of insurance sources, all state-specific surveys include a verification question for respondents who say “no” to the full list of coverage types. By contrast, the ACS includes separate questions about Medicare, military, and Veterans Administration coverage and a catch-all public program response option that asks about “Medicaid, Medical Assistance, or any kind of government-assistance plan for those with low incomes or a disability”; however, due to the need for a nationally standardized mail form for the ACS, there are no state-specific names for public programs on the survey. The inclusion of the term “low incomes” may deter a “yes” response from those who are enrolled in the Children’s Health Insurance Program (CHIP) and other state-specific programs with income eligibility above Medicaid thresholds and do not consider themselves “low income.” This may result in underreporting of some public coverage and overreporting of private insurance (O’Hara 2009).
State-specific estimates of direct purchase coverage are on average 25.8 percent lower than the respective ACS state estimates. The comparatively high estimates of direct purchase coverage in the ACS, for both the state and national estimates, raise concern about how respondents are answering this question. For example, it is not clear whether participants in premium support programs—where participants pay a portion of their premium, as does their employer and/or state government—would report private health plans or public coverage in the ACS or instead report that they purchased their health insurance coverage directly (from a private plan). Also, the ACS provides a write-in option, and many of these responses are coded to direct purchase coverage, contributing to the higher estimates in the ACS (Mach and O’Hara 2011). Another potential explanation for the lower rate of direct purchase coverage in state-specific surveys is the reliance on telephone surveys; the one-on-one interaction may ensure respondents are not reporting single-service/benefit policies (e.g., dental, drug, hospital-only coverage) when questioned about self-purchased health insurance coverage. 4 Clearly, an advantage of telephone surveys is the interviewer’s availability and ability to clarify a respondent’s confusion with questions about something as complex as health insurance coverage.
An in-depth exploration of direct purchase insurance coverage and the impact of reporting multiple sources of private and public coverage in the ACS is needed (e.g., coding of someone with employer-based insurance and dental coverage or Medicare plus a supplemental policy), since this may inflate private purchase estimates (Mach and O’Hara 2011). A viable strategy for validating the ACS questions (or state survey questions, for that matter) is to obtain a sample of people with known insurance coverage (e.g., health plan data that include the full array of insurance products) and have them complete the survey to observe the magnitude and direction of measurement bias. This requires coordination, cooperation, and resources. However, such a study would be of great value, especially in light of changes in insurance products, such as the introduction of insurance exchanges that offer plans with and without subsidies. The introduction of exchanges and subsidies is likely to increase confusion and measurement error, which may further threaten confidence in our ability to monitor the types of coverage and characteristics of those who access insurance through these new policy initiatives.
Disadvantages and Advantages of Each Survey Source
Despite the advantages of a large sample size, inclusion of all counties in the sample design, and timely release of annual insurance estimates, the ACS has several disadvantages. For example, the mail format that is uniform across all states restricts the amount of detail about the types of public programs that exist (it includes a general question on Medicaid that captures all public programs together rather than asking separate questions for Medicaid and CHIP, programs with different income-eligibility thresholds). Additionally, the direct purchase question lacks a qualifier that excludes single service plans such as dental or prescription drug coverage; it also fails to explain that direct purchase should not include coverage related to current or former employment (Boudreaux et al. 2011).
Finally, the ACS is not a health-related survey, and therefore lacks additional health-relevant data (e.g., questions about access to insurance, health status, barriers to care, having a usual source of care) that might be used over time to monitor the impact of having or not having health insurance, or of having public or private coverage. Other federal surveys that include questions on health insurance coverage typically ask additional health questions. However, these surveys are nationally representative surveys with limited state-level information. For example, the NHIS includes questions on conditions, health behaviors, access to care, and use of health care services, and the survey was recently expanded to include other reform-relevant questions (National Center for Health Statistics 2012). The Household Component of the Medical Expenditure Panel Survey (MEPS) focuses on health care use, expenditures, and out-of-pocket spending for health care. Finally, the CPS, which does include a state-representative sample, includes questions related to general health status and disability status and added a set of questions about annual out-of-pocket expenditures to the 2010 supplement.
States have been creative in the development of their surveys. State-specific surveys ask respondents about access to employer coverage and the size of their employers; the surveys ask about difficulty in finding a provider, other barriers people face in getting access to needed care, and reasons for emergency room use. They also include questions about medical debt and excessive out-of-pocket costs, and queries about health status, health conditions, and utilization. In fact, states often borrow key questions from an array of federal surveys and combine them into one state-specific survey, thus gaining local-level data they need and the possibility of comparing their findings to national benchmarks.
As mentioned, the disadvantage of the state-specific surveys is that states simply do not have the resources to field a survey as large as the ACS or to cover the cost of in-person surveys. Many state surveys are paid for by state general funds and must compete for resources with other state priorities. To keep costs down, most states rely on random digit dial (RDD) telephone surveys, with some leaving out the growing cell phone population. The National Center for Health Statistics estimates that in 2011, approximately one-third (34 percent) of U.S. households had only cell phones and no landlines, with wide variation from state to state (Blumberg et al. 2012). Not including households with cell phones can lead to biased state estimates, since cell phone users are typically younger, have lower incomes, are more likely to rent than own a home, and are more likely to be uninsured (Blumberg and Luke 2012). Seven of the twelve states used a dual landline and cell phone sampling frame, adding cell phone households to the survey sample. Five of the states did not include a cell phone sample, again, probably because of cost constraints and, potentially, lack of capacity to weight dual-frame data. Adding cell phones can increase the price per completed survey by a factor of two to four (Hu et al. 2011; Link et al. 2007). And while some adjustments can be made to the data to account for these differences, including cell phone households is preferable (Call et al. 2011; Lee et al. 2010).
While response rates are only one measure of quality (Groves 2006), the state surveys represented here had response rates ranging from 16 percent to 57.2 percent compared with a reported response rate of 98 percent for the ACS. State-specific surveys do not carry the power of mandatory participation and do not have the resources to conduct the follow-up required to achieve the high ACS response rates.
Does the ACS Address State Data Needs?
In closing, the ACS meets some but not all of the states’ data needs. The ACS certainly meets the need for timely estimates 5 of health insurance coverage that are consistent over time and available at the state level. Because of the large sample size available through the ACS, states also benefit from estimates of coverage at the substate level (e.g., variation in coverage by income, race/ethnicity, and geography). However, states need more than coverage estimates for monitoring health reform. As health insurance coverage expands in some states, there are growing concerns about difficulties navigating access to insurance and to health services, worries about provider availability, and concerns about the affordability of needed health care (Long, Stockley, and Dahlen 2012; SHADAC 2011b). This information is not collected in the ACS, restricting its usefulness in monitoring how well health insurance coverage translates into the access, use, and affordability of health services.
Many states have invested in their own health insurance and access surveys in part because of the limitations of federal surveys. State-specific surveys have the advantage of being focused on coverage, use, and access, and therefore include an array of policy-relevant questions not available in the ACS. States also have the flexibility to alter their surveys in response to the policy environment rather than wait for an act of Congress. We examined the methods and insurance estimates for twelve states that conducted health coverage surveys in the period 2008–2009. These states have come to rely on their own data for monitoring health insurance coverage and for evaluating the impact of state and national health reform on coverage and, importantly, the relationship between insurance coverage and access to care. States are growing adept at understanding the strengths and weaknesses of the various data sources at their disposal and practiced at trying to reconcile differences between estimates from these various data sources. As is true with the CPS, states can and do rely on the ACS to examine annual uninsurance rates and trends over time, and to draw comparisons with the national coverage estimates and those of their neighboring states. Both the ACS and state-specific surveys contribute to states’ understanding of the dynamics of their health insurance coverage trends; state-specific surveys answer additional questions relevant to monitoring the impact of health reform.
Appendix
Methods, Sample Size, and Response Rates from Thirteen Surveys.
| State | Year | Sample frame | Mode | RR (%) | Scope of survey |
|---|---|---|---|---|---|
| ACS | 2008/2009 | Master Address File | Mail/phone/in-person | 98.0 a | General purpose |
| California | 2009 | Dual-frame RDD | Phone | 17.4 | Comprehensive health and health insurance |
| Colorado | 2008–2009 | Dual-frame RDD | Phone | 35.5 | Health insurance and access |
| Georgia | 2008 | Dual-frame RDD | Phone | 42.0 | Health insurance and access |
| Massachusetts | 2009 | Dual-frame RDD/ABS | Phone/mail/web | 41.0 | Health insurance and access |
| Minnesota | 2009 | Dual-frame RDD | Phone | 45.0 | Health insurance and access |
| New Jersey | 2009 | Dual-frame RDD | Phone | 45.4 | Health insurance and access |
| Ohio | 2008 | Dual-frame RDD | Phone | 34.6 | Health insurance and access |
| Oklahoma | 2008 | Landline RDD | Phone | 16.0 | Health insurance and access |
| Pennsylvania | 2008 | Landline RDD | Phone | 50.0 | Health insurance and access |
| Utah | 2008 | Landline RDD | Phone | 57.2 | Health insurance and access |
| Washington | 2008 | Landline RDD | Phone | 29.2 | General purpose |
| Wisconsin | 2009 | Landline RDD | Phone | 52.0 | Comprehensive health and health insurance |
Source: Published reports and personal communication.
Note: ACS = American Community Survey; RR = response rate (no standard formula used by all states); RDD = random digit dialing; ABS = address-based sampling.
a. The average response rate for 2008 and 2009 for the twelve listed states.
Acknowledgements
We are grateful to the following state analysts who generously provided detailed information about their survey data and/or allowed us to perform special runs of their data: from California, E. Richard Brown, Shana Alex Lavarreda, and Royce J. Park; from Colorado, Jeff R. Bontrager and Rebecca Crepin; from Georgia, William Sanders Custer, Karen Jean Minyard, and Angela Bauer Snyder; from Massachusetts, Cindy Wacks; from Minnesota, Stefan Gildemeister; from New Jersey, Joel Cantor and Dorothy Gaboda; from Ohio, David Dorsky and Timothy Sahr; from Oklahoma, Buffy Heater; from Pennsylvania, Dorothy Gaboda and Edward Naugle; from Utah, Kimberly Partain McNamara and Jennifer Wrathall; from Washington, Thea Mounts; and from Wisconsin, Ann Buedel and Eleanor Cautley.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Support for this article was provided by a grant from the Robert Wood Johnson Foundation to State Health Access Data Assistance Center (SHADAC).
