Abstract
The factors that influenced school districts’ decisions to offer virtual, hybrid, or in-person instruction during the 2020–2021 school year—the first full school year after the emergence of the COVID-19 pandemic—have been the focus of a large body of research in recent years. Some of this research examines the influence of school spending, among other factors; however, these studies do not consider spending in relation to cost, “cost” being the amount needed for a school district to achieve a given outcome. This paper uses a measure of adequacy, which is the amount of spending under or over estimated cost, to determine whether spending correlates with the amount of time a school district offered virtual instruction. We find that spending adequacy significantly and substantially predicts time spent in virtual instruction: for every $1,000 positive change in adequacy (closing a gap and/or adding to a surplus), the time spent in virtual schooling decreases 0.9 percentage points. A one standard deviation positive change in adequacy, therefore, results in 12.8 fewer days of virtual instruction. While our findings are descriptive, they indicate that future researchers should consider school spending adequacy, as much as any other factor, as a predictor of pandemic instructional models.
The closing of schools at the onset of the COVID-19 pandemic in the spring of 2020 is arguably one of the most consequential events in the history of American education. A research consensus is emerging that remote schooling during the 2020–2021 school year had a pronounced effect on the academic progress and mental health of students. School districts, however, varied widely in how they delivered instruction: some districts offered fully remote/virtual schooling, some were fully in-person, and some were a hybrid of the two. In addition, in response to changes in the local COVID-19 case rate or other factors, districts changed their instructional models as the year progressed. Given the consequences of these choices, understanding why school districts offered the instruction that they did is an important policy question.
Over the past several years, researchers have explored the correlation between pandemic instruction and variables that might explain school districts’ choices: political affiliation, teachers union strength, local COVID-19 prevalence, and student demographics have all been assessed as potential predictors of instructional delivery models. A small number of studies have also included spending per pupil in their models; most have found no correlation between spending and the types of instruction offered. What these studies fail to include, however, is a measure of school spending adequacy. Research has established that school districts with different student populations and different economic contexts require different amounts of funding to provide equal educational opportunity. Two school districts spending the same amount per pupil may have very different costs, “cost” being the amount necessary for a district to achieve some common educational goal. A simple measure of spending per pupil will, therefore, be an insufficient measure of a district’s ability to provide an adequate education—and, potentially, its ability to provide a particular type of schooling during a global pandemic.
Instead of spending, we propose in this paper to use a measure of adequacy as our variable of interest. To determine adequacy, we employ a cost model to estimate the amount of funding needed to achieve a particular educational outcome: in this case, national average scores on equated state tests in English language arts and mathematics. Adequacy, in our measure, is the amount above or below this estimated cost that is actually spent. For our spending measure, we use “current spending,” as reported by the U.S. Census Bureau. As in other cost modeling research, we do not include capital spending, as it is difficult to attribute that spending to any particular year; however, current spending does include facilities maintenance and staffing, which are important parts of pandemic schooling readiness.
Certainly, it is true that the cost of attaining a particular educational outcome is not the same as the cost of providing a certain mode of instruction (in-person, hybrid, or virtual) during a pandemic. Yet there is good reason to theorize that these costs are closely linked. During the 2020–2021 school year, school districts were advised to implement a variety of mitigations to reduce the spread of COVID-19 if providing in-person instruction. The resources needed to implement these mitigations were, in many cases, the same resources needed to achieve specific educational outcomes as measured by standardized tests. Social distancing, for example, requires enough staff to adequately supervise students, even as they are spread out into larger areas. Four out of every 5 dollars of current expenditures in schools are spent on staff (National Center for Education Statistics, 2024a); therefore, districts spending over adequacy would likely have “excess” staff—more staff than needed to achieve a particular outcome—and, subsequently, more flexibility in deploying that staff across larger spaces during a pandemic than similar districts spending under adequacy. If this theory holds, pandemic instructional models would correlate with measures of adequacy—even if they did not correlate with per pupil spending measures—because “adequacy” accounts for a variety of school contexts that affect cost, while spending measures do not.
This paper contributes to the literature on pandemic schooling by testing this theory, examining the correlations between measures of funding adequacy and instructional delivery. Our measure of adequacy, which we have described in a previous study (Baker, Weber, & Srikanth, et al., 2021), is based on well-established, empirical methods that determine a school district’s cost of achieving national average outcomes, accounting for that district’s student population and characteristics. After establishing the difference between a district’s estimated adequacy cost and its actual spending—which we refer to as its “adequacy gap/surplus”—we determine the correlation between adequacy, in years before and during the height of the pandemic, and the percentage of time students spent in virtual, hybrid, or in-person schooling during the 2020–2021 school year. Because our analysis focuses on spending relative to cost—as opposed to simply spending—we theorize that we are better able to determine whether fiscal capacity was a relevant factor in districts’ responses to the pandemic.
Our paper begins with a review of the literature on pandemic instructional models, including how spending has been measured and used in regression-based and other analyses. We then recap our previous work in education cost modeling, reviewing both the theoretical underpinnings and methods of our cost estimates. Next, we explain our empirical strategy for determining the correlations between the percentage of time a district’s students spent in different types of pandemic instruction and measures of school funding adequacy. As a contrast, we also employ a model, similar to previous research, that uses a simple measure of per pupil spending among a set of other predictors, with the percentage of time in virtual instruction in 2020–2021 as the dependent variable. After presenting our estimates, we discuss the ramifications of our findings for policymaking and future research.
Literature Review
There is little question that the COVID-19 pandemic, the subsequent closing of schools in the spring of 2020, and the deployment of various types of instructional models during the 2020–2021 school year had a profound effect on students in the United States. The extent of the “learning loss” suffered by students is still to be determined, but there is ample evidence losses did occur (Kuhfeld et al., 2022; Pier et al., 2021). In addition, losses were not evenly distributed across student populations, with students in poverty and students of color suffering larger decreases in educational outcomes (Goldhaber et al., 2022; Kuhfeld et al., 2022). There is also evidence that academic declines were greater in districts with less in-person instruction (Halloran et al., 2021). Given the consequences of pandemic schooling decisions, it is understandable that education researchers have focused on how and why these decisions were made.
A substantial body of research has emerged that attempts to ascertain which factors predict how a school district delivered instruction during the height of the COVID-19 pandemic (Singer, 2022). Several studies have found correlations between political affiliation and whether a school district delivered in-person instruction (Flanders, 2020; Grossmann et al., 2021; Harris & Oliver, 2021; Hartney & Finger, 2022; Kretchmar & Brewer, 2022; Singer, 2022; Valant, 2020). Other research has found that teachers union strength correlates with pandemic instruction models; stronger unions predict a greater chance a district delivered remote instruction (DeAngelis & Makridis, 2021; Flanders, 2020; Grossmann et al., 2021; Hartney & Finger, 2022; Hemphill & Marianno, 2021; Marianno et al., 2022). We note that these studies use a wide variety of measures of union strength; none is a direct measure of how much influence a teachers union in a particular district has over COVID-19 related policy. Some studies have found increases in the local COVID-19 case rate reduced the likelihood of in-person instruction (Christian et al., 2022; Harris & Oliver, 2021). Others have found correlations between student population demographics and instructional models, with remote learning more likely in districts with higher proportions of students of color (Camp & Zamarro, 2022; Harris & Oliver, 2021; Oster et al., 2021; Viner et al., 2022).
All of these studies are subject to one important limitation: they are descriptive, which precludes making any claims of causality. As an example, White parents showed a greater preference for in-person instruction than Black and Hispanic parents during the height of the pandemic (Gilbert et al., 2020). However, it does not necessarily follow that this preference itself is the underlying cause of students of color being more likely to receive remote instruction. It may be that parents’ preferences were driven by their perceptions of the ability of their children’s schools to provide safe in-person instruction. Black and Latinx parents perceive that schools serving students of color are inadequately funded (Leadership Conference Education Fund, 2017). Further, research shows Black/African-American and Hispanic/Latinx students are twice as likely as White students to be enrolled in inadequately funded school districts (Baker, Di Carlo, Reist, et al., 2021). If their children’s schools were better funded, it is possible these parents would feel differently about in-person instruction, as they may also perceive that schools have the staff and facilities needed to educate their children safely. Descriptive research that shows a correlation between student demographics and pandemic instruction types cannot determine whether this is the case, or whether other factors are in play. This is particularly problematic when examining the correlations between pandemic schooling models and political affiliation, which is correlated with student race and ethnicity, poverty, and health (Harris & Oliver, 2021).
Several of these studies include a measure of spending per pupil within a regression model (Harris & Oliver, 2021; Hartney & Finger, 2022; Houston & Steinberg, 2022; Marianno et al., 2022). Only Houston and Steinberg’s (2022) study found spending to be a consistent and statistically significant predictor of the percentage of time in in-person learning. The study is, however, limited to a county-level analysis, using “the values for the largest district by enrollment in each county” (p. 12) as its fiscal measure. There can be significant heterogeneity in fiscal measures across districts within a county; however, this analysis does not account for that variation.
While the studies above use district-level measures of spending per pupil, they are likely inadequate for the task of determining whether school funding plays a role in pandemic instructional decision making. Variations in student populations and school district contexts can greatly influence educational costs (Baker, Weber, & Srikanth, et al., 2021). Disadvantaged students, for example, require more school resources to attain equal educational opportunity (Baker & Duncombe, 2004; Duncombe & Yinger, 2005, 2011). Variations between labor markets can raise or lower the relative cost of hiring qualified staff (Taylor, 2006). Enrollment sizes can affect economies of scale for school districts (Baker & Duncombe, 2004). These and other factors are not captured by simple measures of per pupil spending; two districts with similar expenditures per student may still have very different costs to achieve their educational goals.
There is reason to believe that a school district’s funding relative to its costs would affect its decisions to offer various types of pandemic instruction. Schools were advised by public health officials to implement a series of mitigations before opening, including,
increasing physical distance by dedensifying classrooms and common areas, using hybrid attendance models when needed to limit the total number of contacts and prevent crowding, increasing room air ventilation, and expanding screening testing to rapidly identify and isolate asymptomatic infected individuals. (Honein et al., 2021, p. 824)
A district with greater funding relative to its costs would be more likely to be able to carry out these recommendations, both because its facilities would be better maintained to meet ventilation and density standards, and because it would have the numbers of staff necessary to provide smaller class sizes with student cohorts that do not mix. School facilities improvements—particularly improvements in air quality—have been shown to have a positive effect on student outcomes (Duran et al., 2021; Lafortune & Schönholzer, 2022; Sadrizadeh et al., 2022). Those improvements, which would require increased spending both on initial purchase costs and ongoing maintenance, are likely the same improvements that would heighten the chances of adhering to suggested ventilation mitigations against COVID-19. A district that spent less than its adequacy cost, however, would likely be unable to implement these and other mitigations, increasing the chance it would offer only remote instruction.
This study contributes to the research on pandemic schooling by considering funding adequacy, rather than simple measures of school spending, as a possible predictor of the decisions school districts made to offer in-person, hybrid, or remote schooling at the height of the COVID-19 pandemic. What we present are descriptive estimates; we make no claims of causality, particularly as funding adequacy is highly correlated with student race and ethnicity, a clear predictor of pandemic instructional models. Nevertheless, this research provides a new perspective on why school districts responded as they did to the pandemic. Funding adequacy—as much as political affiliation, teachers union strength, student demographics, or COVID-19 prevalence—should be considered when speculating on why schools made the decisions they made.
Research Questions
Our research addresses two questions:
Q1: Is per pupil spending, adjusted for labor market effects, a predictor of whether schools offered in-person, hybrid, or virtual instruction during the 2020–2021 school year?
Q2: Is school funding adequacy, as measured by how much more or less a school district spends relative to its estimated cost to achieve national average outcomes, a predictor of whether schools offered in-person, hybrid, or virtual instruction during the 2020–2021 school year?
Methodology
Our empirical strategy consists of two models:
(1) A regression model where the time spent in virtual (or other) types of instruction is predicted by a set of variables measuring district and student characteristics, COVID-19 prevalence in the region, and school district spending; this is the focus of our first research question.
(2) A regression model where the time spent in virtual (or other) types of instruction is predicted by COVID-19 prevalence and an estimate of how much more or less a school district spends than the predicted cost of providing an education that meets a common educational outcome (average national test scores); this is the focus of our second research question.
We draw on previous work in educational cost modeling to determine district spending relative to predicted cost, which we refer to as district spending “adequacy.” We begin by summarizing this work and briefly describing the methods and data we employ to calculate our adequacy estimates. Next, we explain our models that predict a district’s pandemic instructional model based on either its spending or its adequacy gap/surplus, holding other district and student characteristics constant.
Modeling and Estimating Educational Costs
The theoretical underpinnings and methodologies of educational cost modeling have been well established in the school finance literature (Duncombe & Yinger, 2000, 2005, 2011). Unlike production models, which use educational outcomes as the dependent variable and spending (along with other covariates) as the independent variable, cost models predict spending with outcomes as an independent variable. A core challenge in cost modeling is producing estimates that are not biased by the endogenous relationship between spending and outcomes. A large and growing body of evidence demonstrates that educational outcomes—such as test scores—have a positive relationship with educational spending (Baker, Weber, & Srikanth, et al., 2021; Jackson, 2018; Jackson et al., 2016; Lafortune et al., 2016; Rothstein & Schanzenbach, 2021). Yet, a school district with better outcomes will likely have higher property values, creating both incentives and the capacity for taxpayers to spend more on schools, thus protecting their investment in their homes by keeping property values high. Researchers employing cost models, therefore, must use methods that address this endogeneity (Baker, Weber, & Srikanth, et al., 2021; Duncombe & Yinger, 2011; Kolbe et al., 2021).
We use a two-stage, instrumental variables model to address this endogeneity in our approach: the National Education Cost Model (NECM) (Baker, Weber, & Srikanth, et al., 2021). In its first stage, the NECM uses the aggregate demographics of surrounding school districts (districts within the same labor market) as plausibly exogenous instrumental variables that define the competitive context in which a district operates. Conceptually, a district surrounded by higher-performing districts faces stronger competition, causing its residents to want to raise student outcomes; this, in turn, will drive that district to spend more to achieve better results. Yet, we argue that the demographics of surrounding districts do not otherwise influence district spending; consequently, if our argument holds, variables that describe neighboring districts’ demographics plausibly meet the exclusion restriction necessary for valid instruments.
Like any IV model, our approach is premised on our instruments affecting the dependent variable through, and only through, the predictor; in other words, surrounding district demographics affect school spending through, and only through, the competitive context of school outcomes. Notably, similar IV approaches have been the standard in education cost modeling for years (Baker, 2006; Duncombe & Yinger, 1997, 1998, 2006, 2011; Gronberg et al., 2017; Kolbe et al., 2021; Zhao, 2023a, 2023b).
We use a panel of data from FY2009 to FY2022 for our model. The first stage in our model instruments outcomes on demographic variables that predict educational outcomes; the second stage then uses this instrumented outcome variable to estimate educational costs. The first stage is specified as:

$$\text{Outcome}_{it} = \pi_0 + \pi_1\,\text{SurroundingPoverty}_{it} + \pi_2\,\text{SurroundingBlack/Hispanic}_{it} + \boldsymbol{\pi}_3 X_{it} + \pi_4 t + \upsilon_{it}$$
where Outcome is the equated average test score for district i in school year t and X is a set of student population and district context factors (see below). Outcome is instrumented on two variables: Surrounding Poverty, the income-to-poverty ratio for all other school districts in a particular district’s labor market (excluding the observed district), and Surrounding Black or Hispanic (or, depending on the second-stage specification, Surrounding Hispanic), the percentage of students who are Black or Hispanic/Latinx (or Hispanic/Latinx only) in all other school districts in the same labor market. We vary the race/ethnicity instrument depending on the specification of the second stage (see below). We apply standard methods for evaluating whether these instruments are valid, using the partial F statistic to evaluate the strength of the excluded instruments in explaining variation in outcomes (the endogenous measure), and a standard test to check for over-identification (Hansen J). In both of our models, these tests exceed commonly accepted thresholds.
The second stage is:

$$\text{Spending}_{it} = \beta_0 + \beta_1\,\widehat{\text{Outcome}}_{it} + \boldsymbol{\beta}_2 X_{it} + \gamma t + \varepsilon_{it}$$
where Spending is the natural log of total current per pupil spending for school district i in year t, and γ is a linear year variable (see below). The set of student population and school district characteristics (X) consists of:
A labor cost index to account for varying costs across different labor markets (Taylor & Fowler, 2006).
The district child poverty rate, adjusted to account for the fact that Census poverty rates apply the same income thresholds despite significant variations in the cost of living from district to district and state to state (Baker et al., 2013).
District percentage of students with disabilities (SWDs).
District percentage of students who are English Language Learners (ELLs).
District percentage of students enrolled in Pre-Kindergarten.
District percentage of students enrolled in Grades 9 to 12.
A categorical variable for district enrollment size.
A density measure: log of population per square mile.
In addition, we include three potential measures of district efficiency that serve as predictors of spending differences not directly associated with outcomes; in other words, measures of inefficiency. Variations in spending not directly related to outcomes are at least partially predictable as a function of measures of competition density, fiscal capacity, and public monitoring (Grosskopf et al., 2014). Leaving these measures out of the model may result in omitted variables bias (Duncombe & Yinger, 2011; Grosskopf et al., 2014). Our efficiency variables are:
The percentage of the population that is between 5 and 17 years old. The theory here is that a higher percentage of school-aged students in the overall population will put greater pressure on policymakers to spend on schooling above what is needed to achieve a desired outcome.
The ratio of housing values in a district to those in the same labor market. Districts with relatively higher property values compared to their neighbors may feel less competitive pressure to spend above cost.
The Herfindahl index, a common measure of market concentration (a sketch of one common construction follows this list). When a labor market has more school districts to choose from—in other words, more competition density—voters can better select where to reside, seeking school districts that match their desired combination of quality of public services and tax rate. Empirical evidence shows that competition density can lead to more productive schools (Hoxby, 2000). With this in mind, and following previous research, we include the Herfindahl index in our model (Grosskopf et al., 2014; Imazeki & Reschovsky, 2004; Kolbe et al., 2021).
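As an illustration, a common construction of the index in this literature is the sum of squared district enrollment shares within a labor market. The following is a minimal sketch with hypothetical data; it is not necessarily the NECM’s exact construction.

```python
import pandas as pd

# Hypothetical districts in two labor markets.
df = pd.DataFrame({
    "labor_market": ["A", "A", "A", "B"],
    "enrollment":   [5000, 3000, 2000, 8000],
})

# Herfindahl index: sum of squared enrollment shares within each labor market.
# A value of 1.0 means a single district, i.e., no competition density.
df["hhi"] = df.groupby("labor_market")["enrollment"].transform(
    lambda e: ((e / e.sum()) ** 2).sum()
)
print(df)  # market A: 0.5^2 + 0.3^2 + 0.2^2 = 0.38; market B: 1.0
```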
As we are using panel data, we include a time period variable in our models. The NECM was developed, in part, to make short-run (1–2 year) forecasts about trends in funding adequacy. Using a linear annual time variable, as opposed to a year fixed effect, facilitated these forecasts. For this paper, we elect to keep the time variable as it is and not use a year fixed effect; however, as a robustness test, we run our spending model (see below) with both a linear time variable and a year FE. Appendix Table 2 shows no effect on the variable of interest (spending per pupil), and little to no effect on the covariates, when changing the model’s time variable. Likewise, in developing the NECM, we have seen little to no effect on cost estimates when substituting a year FE for a linear time variable.
Outcomes are district-level averages of state test scores, collapsed by year across all grade cohorts, from the Stanford Education Data Archive (SEDA; Reardon et al., 2021). These scores are equated and standardized so as to provide a consistent measure of academic outcomes across various states administering different tests (Fahle et al., 2021). Appendix Table 1a shows the NECM estimates; Appendix Table 1b shows the first-stage estimates.
We use two versions of the NECM in this paper; the second mirrors the first but adds the percentage of Black students in a district as a covariate in the second stage, interacted with population density. Previous research in cost modeling shows that excluding race as a covariate can result in biased estimates (Baker, Weber, & Srikanth, et al., 2021; Duncombe & Yinger, 2011; Kolbe et al., 2021). Adding a “pct. Black” variable to the model guards against underpredicting the costs for districts with large proportions of Black students to achieve average national outcomes. Recent work on the NECM—involving a series of model fit, IV diagnostic, prediction accuracy, and residual (omitted variables) bias checks—found that including a “pct. Black by population density” term yielded the optimal model (Baker, 2024). As described above, however, “pct. Black and/or Hispanic” in surrounding districts serves as a first-stage instrument; we therefore substitute “pct. Hispanic” as the first-stage instrument in the model with “pct. Black” as a second-stage covariate.
It is important to note that we do not claim our models precisely estimate the true cost of an adequate education; that would be impossible, as there will always be omitted variables that affect educational costs. What we are able to do, however, is credibly estimate educational costs while addressing the problem of endogeneity between spending and outcomes, given the data available. This results in a measure of educational adequacy superior to simple measures of spending, which do not account at all for a variety of contexts and factors that affect costs. Since our adequacy variable is an estimate, we do acknowledge that our pandemic schooling models include a variable measured with error; we discuss the implications of this below.
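To make the estimation strategy concrete, the following is a minimal sketch of a two-stage least squares specification of the kind described above, using the linearmodels Python package. All column names are hypothetical stand-ins for the variables defined earlier; this illustrates the approach rather than the NECM’s actual implementation.

```python
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS

df = pd.read_csv("necm_panel.csv")  # hypothetical FY2009-FY2022 district-year panel
df["log_spend_pp"] = np.log(df["spend_pp"])

# The bracketed term marks the endogenous regressor (outcome) and its excluded
# instruments (surrounding-district poverty and race/ethnicity). The exogenous
# covariates stand in for the X and efficiency measures listed above.
model = IV2SLS.from_formula(
    "log_spend_pp ~ 1 + poverty + pct_swd + pct_ell + pct_prek + pct_hs"
    " + pct_age_5_17 + housing_ratio + hhi + year"
    " + [outcome ~ surr_poverty + surr_black_hisp]",
    data=df,
)
res = model.fit(cov_type="robust")

print(res.first_stage)  # first-stage diagnostics, including the partial F statistic
print(res.sargan)       # Sargan over-identification test (Hansen J is its robust analogue)
```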
Using the NECM estimates, we calculate the predicted per-pupil cost of all school districts in our dataset, assuming outcomes and efficiency measures for all districts at the national average. We then calculate how much more or less a district actually spends (total current spending) compared to the estimated adequacy amount it would need to spend to reach national average outcomes, assuming average efficiency. This difference becomes the variable of interest in our second model below.
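Continuing the sketch above, the adequacy calculation might proceed as follows; again, column names are hypothetical, and the sample means stand in for national averages.

```python
import numpy as np

# Hold outcomes and the three efficiency measures at their averages.
scenario = df.copy()
scenario["outcome"] = df["outcome"].mean()
for col in ("pct_age_5_17", "housing_ratio", "hhi"):
    scenario[col] = df[col].mean()

# Predicted log cost: intercept plus the scenario covariates times the fitted
# second-stage coefficients (`res` comes from the sketch above).
betas = res.params.drop("Intercept")
log_cost = res.params["Intercept"] + scenario[betas.index] @ betas
df["cost_pp"] = np.exp(log_cost)

# Positive values are a surplus over estimated adequacy; negative values, a gap.
df["adequacy_gap_pp"] = df["spend_pp"] - df["cost_pp"]
```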
Pandemic Instruction and School District Spending
Our first model is a simple regression with percentage of time in virtual schooling as the dependent variable, and a set of district characteristics, including current spending per pupil, as the predictors:

$$\text{PctVirtual}_{i} = \beta_0 + \beta_1\,\text{CurrentSpendingPP}_{i} + \text{State}_{i} + \text{State}_{i} \times \text{CovidRate}_{i} + \boldsymbol{\beta}_2 X_{i} + \varepsilon_{i}$$
where CurrentSpendingPP is district current expenditures per pupil; State is a categorical variable, included as a fixed effect and interacted with the county-level COVID-19 case rate per 100,000 (CovidRate); and X is a set of school district and student population characteristics that affect district spending:
The natural log of student enrollment.
The percentage of the district’s students who are enrolled in high school (Grades 9–12).
The percentage of students who are English Language Learners (ELL).
The percentage of Students With Disabilities (SWDs).
The percentage of 5- to 17-year-olds living within the district’s boundaries who are in poverty.
The COVID-19 case rate is included under the theory that a higher regional case rate would make it more likely a school district would move from in-person to hybrid or remote learning. That likelihood, however, is conditioned on public health policy, much of which is under the jurisdiction of the state, particularly regarding COVID-19 (Kaufman et al., 2021). We therefore interact the case rate with the state categorical variable. As a robustness test, we run models with and without the state fixed effect; we describe the results below. Because case rates are reported by county, we cluster standard errors for our estimates at the county level.
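A minimal sketch of this specification, with hypothetical column names, using the statsmodels Python package:

```python
import statsmodels.formula.api as smf

# Hypothetical analysis frame; drop missing rows so the cluster groups align
# with the estimation sample. C(state) enters as a fixed effect and is
# interacted with the county-level COVID-19 case rate.
model_df = df.dropna(subset=["pct_virtual", "spend_pp_k", "covid_rate", "county"])

fit = smf.ols(
    "pct_virtual ~ spend_pp_k + log_enroll + pct_hs + pct_ell + pct_swd"
    " + poverty + C(state) + C(state):covid_rate",
    data=model_df,
).fit(cov_type="cluster", cov_kwds={"groups": model_df["county"]})

print(fit.params["spend_pp_k"])  # change in virtual share per $1,000 of spending
```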
The second model substitutes our estimation of a district’s adequacy gap (or surplus) for the current spending figure:

$$\text{PctVirtual}_{i} = \beta_0 + \beta_1\,\text{AdequacyGapSurplus}_{i} + \text{State}_{i} + \text{State}_{i} \times \text{CovidRate}_{i} + \varepsilon_{i}$$
This model removes the set of student and district characteristics. Those variables were used in the NECM; consequently, they are correlated with the adequacy gap/surplus estimations. Because adequacy is our variable of interest, including them would bias the estimates. Appendix Table 3 shows the correlations; poverty in particular is highly correlated with our adequacy gap estimations.
We begin with percentage of time in virtual learning as the dependent variable, as both hybrid and fully in-person learning required a school to be operating and open to at least some of a district’s student population. Virtual learning, in contrast, did not require any students to be within school buildings. The facilities requirements and staffing demands of virtual instruction are fundamentally different from those of both hybrid and in-person learning; the difference between hybrid and in-person, in contrast, is only one of degree. However, while we believe estimates with percentage of virtual learning as the dependent variable are the most relevant to our policy question, we also produce estimates using percentages of time in hybrid and in-person learning as the dependent variable.
Data
For our measures of time spent in various pandemic instructional modes, we use data from the COVID-19 School Data Hub (2022). Instructional models are classified as in-person, hybrid, or remote/virtual; the data measure the percentage of the 2020–2021 school year each district spent offering one of these three models. COVID-19 case rate per 100,000 data are from the U.S. Centers for Disease Control and Prevention (U.S. CDC, 2022). The countywide average used here is the unweighted mean of all observations for a county, excluding all “0” observations, averaged from August 15, 2020 to June 15, 2021.
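As an illustration, the county case-rate measure could be constructed as follows; the file and column names are hypothetical.

```python
import pandas as pd

cases = pd.read_csv("cdc_county_cases.csv", parse_dates=["date"])

# Unweighted mean of nonzero observations per county, 8/15/2020 to 6/15/2021.
window = cases[cases["date"].between("2020-08-15", "2021-06-15")]
nonzero = window[window["cases_per_100k"] > 0]
county_rate = nonzero.groupby("county_fips")["cases_per_100k"].mean()
```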
Total current per pupil spending is for FY2019 to FY2021; the data source is the U.S. Census Bureau’s Annual Survey of School System Finances (F33; U.S. Census Bureau, 2022). Spending is adjusted by the Comparable Wage Index for Teachers (CWIFT; National Center for Education Statistics, 2024b). The other covariates in the spending model are from the SY 2018-19 through SY 2020-21 Local Education Agency (School District) Universe Survey Data (National Center for Education Statistics, 2024c).
We begin our analysis with data from FY19 (school year 2018–2019), the last year that schools were uninterrupted by the pandemic. We then run our models for FY20, the year pandemic school closures started, and FY21, the year of our pandemic instruction dataset. We believe that the FY19 analysis is the most relevant to our research questions. No additional funding related to the pandemic was given to school districts in FY19; data from that year are, therefore, the best pre-pandemic measure of the underlying fiscal condition of school districts. That said, school districts did receive supplemental federal funding through the Elementary and Secondary School Emergency Relief Fund (ESSER). This additional funding could have affected how schools chose to deliver instruction. We therefore run our models using data from the next 2 years, which would include reported spending of revenues from ESSER and other federal sources. We also create a 3-year panel and run the model with a linear time variable (see above).
In all of our datasets, variables for New York City are “rolled up” when necessary to the weighted average of its component districts or its five counties. Colorado does not report special education percentages at the district level; consequently, we run two models: one without Colorado data and with a Students With Disabilities percentage variable, and one with the data but without the variable.
Table 1 gives summary statistics for variables used in the models in all 3 years analyzed; means are unweighted by enrollment. Notably, the range of values for the two adequacy gap/surplus models varies substantially, with estimates from the model using race as a covariate showing greater average gaps in later years. There are extreme outliers both in adjusted spending per pupil and in the adequacy gap/surplus. For our estimates below, we exclude observations where adjusted spending is over $50,000 or under $4,000. We further exclude observations where the adequacy gap/surplus from the model without race is less than -$50,000 or greater than $50,000.
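In code, these exclusions amount to a filter like the following (column names hypothetical):

```python
# Keep districts with plausible adjusted spending and adequacy gap/surplus values.
keep = (
    df["spend_pp_adj"].between(4_000, 50_000)
    & df["adequacy_gap_no_race"].between(-50_000, 50_000)
)
analysis = df[keep]
```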
Descriptive Statistics
Note. ELL: English Language Learners; SWD: Students With Disabilities; NECM: National Education Cost Model.
We have documented the variables used and their sources for the NECM in previous work (Baker, Weber, & Srikanth, et al., 2021).
Results
Table 2 presents the estimates from three models using FY19 data with “percentage of time in virtual schooling” as the dependent variable; we consider this the most important of the three pandemic instructional models, as hybrid and in-person instruction would require school buildings to be open, a fundamental difference from virtual instruction. To aid in interpretation, we express both spending per pupil and the adequacy gap/surplus in thousands ($1,000s) of dollars.
Spending and Adequacy Models’ Estimates
Note. SEs clustered at the county level. NECM: National Education Cost Model; ELL: English Language Learners; SWD: Students With Disabilities; SAIPE: Small Area Income and Poverty Estimates; FE: Fixed-effect.
***p < .01, **p < .05, *p < .1.
The spending models—which are the models closest to those employed in previous research on pandemic schooling—show a statistically significant and positive correlation between spending per pupil and the percentage of time a district offered virtual schooling. For every additional $1,000 spent, the time spent in virtual schooling increases 0.4 percentage points; assuming a 180-day school year, this is approximately three-fourths of a day (0.72). This positive correlation aligns with the findings of Houston and Steinberg (2022), although theirs is inconsistent across models, and with those of Harris and Oliver (2021), although theirs is not statistically significant.
Contrast this to the estimates in the adequacy gap/surplus models, which find a statistically significant and negative relationship between spending adequacy and virtual schooling. In the model without race, every $1,000 positive change in adequacy (closing a gap and/or adding to a surplus) decreases the time spent in virtual schooling by 0.6 percentage points, a little over a day (1.08). The model using race estimates an even larger, statistically significant relationship, with a $1,000 positive change in adequacy decreasing time in virtual schooling 0.9 percentage points (1.62 days).
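Expressed in instructional days over a 180-day school year, the two coefficients imply:

$$-0.006 \times 180 \approx -1.08 \text{ days per } \$1{,}000; \qquad -0.009 \times 180 \approx -1.62 \text{ days per } \$1{,}000$$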
Notably, COVID-19 cases per 1000—which is a county-level measure of the prevalence of COVID-19—is not a significant predictor within any of the models. As described above, we interacted our COVID case rate variables with a state fixed effect, under the theory that state health policy may affect the type of instruction a district offered. As a robustness check, we run two models, each with and without the state fixed effect: Spending Model #1 (which omits the pct. SWD variable and includes Colorado data), and Adequacy Model #2 (which includes pct. Black as a covariate in the model that calculates the adequacy gap/surplus). Estimates for these models are found in Appendix Table 4. When the state fixed effect is excluded from the spending model, the effect of spending per pupil on percentage of time in virtual instruction is no longer statistically significant. However, when the fixed effect is excluded in the adequacy model, the coefficient for the adequacy gap/surplus is only slightly reduced (to -0.007 from -0.008), and the p value remains the same (p = .001). In summary: the exclusion of the state fixed effect does not alter the conclusion that the correlation between funding adequacy and pandemic instruction is both substantial and statistically significant.
We next compare spending’s and adequacy’s effects on pandemic instruction across 3 years: FY19, FY20, and FY21. Table 3 draws on estimates from one spending model—the one without pct. Students With Disabilities, so as to retain Colorado’s districts in the dataset—and one adequacy model—the one with pct. Black in the second stage, so as not to run the risk of omitted variable bias. The estimates for the variables of interest are remarkably similar across all 3 years. In addition, the model using the 3-year panel and a time variable also yields estimates that nearly match those of the individual years.
Spending and Adequacy Models’ Estimates, Across 3 Years and in a 3-Year Panel
Note. SEs clustered at the county level. NECM: National Education Cost Model; ELL: English Language Learners; SAIPE: Small Area Income and Poverty Estimates; FE: Fixed-effect.
***p < .01, **p < .05, *p < .1.
To further explore the correlation between adequacy and pandemic instruction, we next use the second adequacy model (with the pct. Black variable) with three different dependent variables: percentage of time in virtual, hybrid, and in-person schooling. The estimates for FY19, and for the 3-year panel, are presented in Table 4. Unsurprisingly, the correlation flips when moving from a virtual to an in-person model: spending adequacy is positively correlated with in-person learning time. The coefficients are nearly perfect mirrors of each other in both models: a $1,000 positive change in adequacy predicts 0.9 or 0.8 percentage points less time in virtual instruction, and 0.9 percentage points more time in in-person learning. The percentage of time in hybrid schooling does not significantly correlate with adequacy.
Adequacy Model Estimates With Three Dependent Variables
Note. SEs clustered at the county level. NECM: National Education Cost Model.
***p < .01, **p < .05, *p < .1.
Discussion
Based on previous research, policymakers might assume school funding has little to do with how school districts responded to the pandemic; at most, they would conclude that spending has a positive but inconsistent relationship with time spent in virtual instruction. But per pupil spending does not measure school funding adequacy; it fails to account for the different costs school districts have due to their differing student populations, labor market pressures, and other characteristics. Models using our measures of adequacy show that districts spending closer to, or above, adequacy were less likely to offer virtual instruction and more likely to offer in-person instruction.
The differences are substantial: assuming a 180-day school year and using our adequacy measure that includes race, a one standard deviation positive change in adequacy results in 12.8 fewer days of virtual instruction. Using our model without a race variable, the same one standard deviation positive change in adequacy predicts 9.8 fewer days in virtual instruction. To put the size of the effect into context: the U.S. Department of Education defines chronic absenteeism as missing at least 15 school days in a year (U.S. Department of Education, n.d.). A one standard deviation change in funding adequacy, therefore, corresponds to an amount of time in virtual instruction equivalent to most of the time needed to be considered chronically absent. Of the 11,291 school districts for which we have estimated adequacy in FY21, 551 are more than a standard deviation below the mean, enrolling more than 2.8 million students. The yearly cost to bring these districts’ current spending up to our estimated adequacy targets is $54.1 billion; as current expenditures for United States K-12 public education were $927 billion in 2020–2021, this would represent an increase of 5.8%.
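The percentage figure follows directly from these totals:

$$\frac{\$54.1\ \text{billion}}{\$927\ \text{billion}} \approx 5.8\%$$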
We note that while our models find a positive correlation between adequacy and in-person instruction, and a negative correlation between adequacy and virtual instruction, we find no correlation between adequacy and hybrid instruction. The COVID-19 School Data Hub defines hybrid as “A blend or combination of in-person and virtual instruction for all or the majority of students” (COVID-19 School Data Hub, 2022, Learning Model Codebook). It is possible that this definition is not precise enough to show a correlation with adequacy in our models, as it may encompass instructional models that include a wide range of time spent in either virtual or in-person instruction. We also note that our dataset includes the spending of additional pandemic-related revenues, made available through ESSER and other sources, up through FY21. The consistency of our estimates across time (both in the individual year models and the 3-year panel model) suggests that any additional pandemic-related spending had little effect on the correlation between adequacy and instructional model.
The primary limitation of our findings is the same limitation that applies to all of the research on this topic: the estimates are descriptive and do not show a causal connection between school spending adequacy and pandemic instructional models. As noted above, the factors that have been shown to be predictors of variation in instructional models are correlated, making it difficult, if not impossible, to disentangle their effects. This may also be true for school spending adequacy. Teachers union strength, for example, has been shown to be associated with increases in per pupil spending (Cowen & Strunk, 2015; Marianno et al., 2021; Strunk, 2011). While spending and adequacy are not the same, they are related; if union strength improved a district’s adequacy, it may have also improved its chances of providing in-person instruction during the pandemic. Similarly, because adequacy is negatively correlated with poverty and race/ethnicity, the preferences of parents in urban communities for virtual learning may be driven by their perception that their children’s schools are inadequately funded and, therefore, unable to implement recommended mitigations. In addition, many Republican-leaning or “red” states are highly inadequately funded when evaluated by the methods used in our models (Baker, Di Carlo, Weber, et al., 2021). It may be that the funding inadequacy in these states influenced their decisions to offer in-person instruction, or that the same political preferences manifested themselves both in school spending adequacy and pandemic instructional models; in other words, the effects of school funding adequacy on district decisions during the pandemic may or may not be causal.
In addition, we note here several other limitations on our research. First, our fiscal measure is “current” spending, which does not include capital outlays. Capital expenditures are usually not included in cost-modeling studies using panel data because it is impractical to assign spending in one year to a capital project that may actually benefit a school or student in another year. Yet, revenues—including federal and state pandemic funding for schools—may have been used to build or improve school facilities in ways that may have facilitated in-person instruction; that spending would not necessarily be counted in our fiscal measure. Facilities, however, need to be maintained, which requires supplies and personnel; those expenditures are counted as part of current spending. In addition, given the nature of school capital projects, which usually require substantial time and planning, it is unlikely additional ESSER revenues used for capital spending during the FY20 or FY21 years had a large, immediate impact on districts’ ability to provide in-person instruction. Education finance analysts have also noted that the majority of ESSER spending has taken place after the end of FY21, which is after the period covered by our data (Gartner, 2023). In summary, while excluding capital expenditures may have some effect on our estimates, that effect is, in all likelihood, modest.
Second, our adequacy measure is an estimate and, therefore, measured with error. Consequently, the estimate of the coefficient on the NECM adequacy gap may be biased toward zero; in other words, our models may be underestimating the correlation between adequacy and the time spent in virtual instruction (and in-person instruction). While we believe that the effect we have estimated is both statistically significant and practically large, we also acknowledge our estimates are not precise; stakeholders should consider this if using our findings to inform their decisions or research.
Finally, and as stated above, we acknowledge that our measure of adequacy yields estimates of the cost of achieving an educational outcome—in the case of the NECM, average standardized test scores—and not the costs of providing a certain mode of instruction during the pandemic. Our goal in this paper was to test the theory that our measure of spending adequacy meaningfully predicts a district’s pandemic instructional model; our results show it does. That is not to say that a district that spent “adequately” would have been able to provide in-person instruction. Instead, we find that our measure of a school district’s adequacy, as opposed to a simple measure of district spending, is capturing variation that correlates with how instruction was delivered during the pandemic.
Despite these cautions, we believe the findings presented here are important. School spending adequacy has unique advantages as a predictor of school districts’ pandemic instructional models. Unlike measures of union strength, which are indirect measures of how unions may be able to influence districts’ COVID-19 policies, adequacy can be measured directly; and unlike political affiliation or race, adequacy can be affected by policy: governments can choose to more (or less) adequately fund schools. If the United States faces another crisis that necessitates changing how instruction is delivered, the education funding choices policymakers make now will very likely impact the ways in which school districts respond. In addition, even in the absence of another public health emergency, how schools responded to the pandemic is a barometer of their operational capacity—one we have shown correlates with their funding adequacy.
Until now, the research consensus has been that school spending had an inconsistently positive association with virtual instruction during the 2020–2021 school year. When spending is measured relative to cost, however, the opposite is true: more adequate spending is associated with less virtual instruction and more in-person schooling. Given these findings, future research into how and why school districts reacted to the pandemic should include some measure of districts’ spending adequacy, with the acknowledgement that simple measures of per pupil spending do not capture districts’ costs. Whether school spending adequacy can ever be extricated from other factors that may have affected pandemic schooling remains an open question; however, adequacy cannot simply be ignored in future analyses.
Appendix
Spending and Adequacy Models With and Without State Fixed Effects
Dependent variable: Pct. of time in virtual instruction.

| | Spending Model 1, With State FE | Spending Model 1, No State FE | NECM Adequacy Gap/Surplus Model 2, With State FE | NECM Adequacy Gap/Surplus Model 2, No State FE |
|---|---|---|---|---|
| Spending per Pupil ($1,000s) | 0.003*** (0.001) | 0.001 (0.001) | - | - |
| NECM Adequacy Gap/Surplus per Pupil ($1,000s) | - | - | -0.008*** (0.001) | -0.007*** (0.001) |
| Enrollment (natural log) | 0.045*** (0.003) | 0.026*** (0.004) | - | - |
| Pct. Enrolled Grades 9 to 12 | -0.005 (0.015) | -0.034* (0.019) | - | - |
| ELL Pct. | 0.391*** (0.055) | 0.948*** (0.066) | - | - |
| SAIPE Poverty Pct. | 0.529*** (0.062) | 0.425*** (0.060) | - | - |
| Covid Cases Per 1000 (county-level, interacted with state FE when included) | -0.003*** (0.001) | -0.002*** (0.001) | -0.020 (0.029) | -0.069*** (0.006) |
| Year (linear) | -0.196* (0.111) | -0.047 (0.046) | 0.000 (0.000) | 0.000 (0.000) |
| Constant | 0.423*** (0.100) | 0.321*** (0.022) | | |
| N | 33,543 | 33,543 | 33,543 | 33,543 |
| R-sq. | 0.513 | 0.245 | 0.461 | 0.110 |
Note. SEs clustered at the county level. NECM: National Education Cost Model; FE: Fixed-effect; ELL: English Language Learners; SAIPE: Small Area Income and Poverty Estimates.
***p < .01, **p < .05, *p < .1.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
Authors
MARK WEBER is a lecturer at Rutgers, The State University of New Jersey. Department of Education Theory, Policy, and Administration, Graduate School of Education, 10 Seminary Place, New Brunswick, NJ 08901 (email:
BRUCE D. BAKER is a professor at the University of Miami. Department of Teaching and Learning, School of Education and Human Development, Merrick Building 5202 University Dr., Coral Gables, Florida 33146 (email:
