Abstract
Graduating from college requires understanding major curricular requirements and making several complex, interdependent choices to fulfill them. These requirements can be complex, and students report that it is not always clear which classes they need to take. In this paper, we create measures to describe and quantify complexity in major requirements and compare complexity across disciplines and across universities. We find wide variation. Much of this variation is not explained by stable differences across departments and across campuses, which indicates that there may be room for simplification. Academic planners, the faculty and administrators tasked with creating and modifying curricular plans, should consider what kind of academic gauntlets they are creating, and curricular complexity should be part of planning conversations.
The path to a college degree is arduous, filled with complex institutional and bureaucratic obstacles (Dynarski et al., 2023; Scott-Clayton, 2015). This complexity is reflected in outcomes. Over half of first-time, degree-seeking college students do not earn a degree within 6 years of starting college (Hussar et al., 2020). Students who do successfully graduate take longer and complete considerably more classes, on average, than is strictly necessary (Shapiro et al., 2016).
One class of barriers is curricular planning: determining which classes are necessary to earn a given credential and fitting those classes into a workable academic plan. This rarely-researched-but-necessary task can be complex, as students are presented with multiple, and often overlapping, individual requirements that they need to satisfy. The complexity inherent in this curricular planning varies across majors and across institutions, with some majors presenting more byzantine requirements than others. Students report being confused and frustrated by major requirements (e.g., Hill et al., 2024; Moore & Shulock, 2014; Rosenbaum et al., 2006); a survey of community college leavers in Florida found that over 25% of those who left college stated that not knowing which class to take next was a factor in their decision to leave (Ortagus et al., 2021).
The potential for confusion is not lost on administrators and policymakers. Guided Pathways is one well-known reform to improve student success and efficiency by simplifying and clarifying the path to a degree, including revisions to curricular requirements within majors (Bailey et al., 2015). More than 400 colleges across the country—both community colleges and 4-year institutions—are involved in formal guided pathways initiatives (Community College Research Center (CCRC), 2021; York et al., 2019). A recent evaluation of 30 colleges found that adoption of guided pathways practices was correlated with significant increases in student early momentum metrics (Lahr et al., 2023).
We know that major requirements appear confusing, that students report that major requirements are complex, and that relatively blunt interventions that address this complexity are correlated with positive changes in student outcomes. However, very little research has identified and measured the specific types and prevalence of complexity in major requirements or the effects of reducing complexity in specific domains. This is one of the first papers to explicitly examine the extent of and the variation in complexity in curricular requirements in higher education.
In this paper, we make methodological and empirical contributions. We do so by creating measures to describe and quantify common forms of complexity in major requirements and by comparing the extent and types of complexity across disciplines and across schools. Specifically, we first create general and broadly applicable measures of task complexity in curricular requirements. We accomplish this by translating sets of major requirements into a common language (Boolean logic strings) and then computing three measures of complexity: (1) the number of unique categories of courses that must be considered to fulfill graduation requirements; (2) the average number of times each course category is referred to in the major requirements; and (3) the number of computations that must be performed in order to determine whether a given set of courses will lead to graduation. The first of these is of particular policy interest because it is straightforward to calculate.
We use these measurements to examine the complexity of curricular requirements for four large majors (Biology, Psychology, Economics, and English) at each of the 32 public 4-year colleges in California. 1 We find variation in complexity across majors and across campuses; more importantly, however, we find considerable variation within majors and within campuses. The large amount of variation across campuses within the same major suggests that departments with complex requirements have room to simplify without compromising intellectual or academic content.
Creating universal and broadly applicable measures of the complexity present in everyday tasks, such as deciding which classes to take to earn a specific degree, allows policy makers and administrators to better understand the relative complexity of various tasks and to have models for simplification, which could lead to meaningful and effective policy reforms. This study serves as a foundation for future exploration by introducing a novel methodological approach and by presenting the first empirical evidence of variation in measured complexity.
Guiding Frameworks
The American higher education system uniquely prioritizes curricular choice and exploration; within defined constraints, students have considerable freedom in choosing the classes they use to fulfill major and general education requirements (“weak framing” in the parlance of Basil Bernstein (Bernstein, 1973; Sadovnik, 1991)). Indeed, graduation requirements within majors allow for scores of unique paths to graduation (Lang et al., 2022). With this flexibility and freedom comes an increased onus on the student to determine the optimal, or even an acceptable, series of courses to take. Given imperfect computational abilities and limited access to information and advising, this freedom creates confusion and frustration for students (Bailey et al., 2015; Rosenbaum et al., 2006).
This insistence on great choice within constraints is rarely discussed and often taken for granted, even though higher education systems in other countries do not operate in the same way (Chaturapruek et al., 2021). In examining the prevalence and patterns of this type of curricular structure, we build on recent work that has explicitly examined curricular pathways (e.g., Chaturapruek et al., 2021; Kizilcec et al., 2023) as well as analyses of the market forces and bureaucratically-driven decisions that drive the development of curricular requirements (e.g., McClellan et al., 2023).
How Curricular Requirements are Built
Curricular requirements capture the scope and sequence of intended knowledge and skill generation within a specific discipline. The process for constructing and amending requirements within a major is not standard across schools and, reflecting the notable autonomy of individual units within a school, it is often not standardized across divisions within schools. Illustrating the nature and process of decision making in higher education—famously referred to as “organized anarchy” (Cohen et al., 1972, p. 1)—this work of creating curricular requirements is a decision “involving unclear goals, unclear technology, and fluid participants” (Cohen et al., 1972, p. 11). This all occurs within a context that might differ across institutions in mission, financial resources, and governance arrangements, as well as how much freedom is accorded individual departments in decision-making (Lattuca & Stark, 2009). Because of the challenges inherent in the great freedom afforded to institutions and departments, some state legislatures have stepped in to standardize course content or major requirements across schools (e.g., AB 1111 and AB 1440 in California).
Three classes of forces affect curricular decision making: learning-based factors, market-based factors, and institutional structures and constraints (McClellan et al., 2023). Learning-based factors reflect both the breadth of knowledge and skills that graduates of a given field should have mastered and the underlying structure of knowledge. Both the scope and sequence of assumed knowledge affect the structure of curricular requirements.
We provide examples from two fields to illustrate this point. In economics, Siegfried and colleagues’ influential 1991 paper recommends that major requirements have a highly structured set of required courses in the first 2 years (introductory and intermediate micro- and macroeconomics and quantitative methods courses), followed by a relatively flexible set of electives that provide both breadth and depth (a “Breadth Major” in our stylized depictions in Appendix A). In contrast, a 2008 taskforce of the American Psychological Association suggested that psychology major requirements should have binding temporal dependencies; they recommend tightly hierarchically organized structures with introductory courses serving as prerequisites for intermediary courses and intermediary courses serving as prerequisites for higher level courses (see a “Multiple Concentrations Major” or “Linear Major” in Appendix A; Stoloff et al., 2010).
The second class of factors that influence curricular requirements are market forces. These can be broadly categorized as student recruitment and graduate employability. Departmental resources, faculty lines, and status—in essence, departmental survival—rely on sufficient student enrollment. Departments with enrollment concerns and departments at schools with resource constraints especially need to attend to the interests and demands of prospective and enrolled students. This often means adding new classes or specializations within majors (McClellan et al., 2023).
Perceptions of the employability of graduates, and explicit or implied pressure from accreditors and professional associations, also affect the structure of requirements within a major. For example, fields that are largely seen as pre-professional (e.g., education and engineering) might be designed to prepare students with specific skills necessary to complete an internship or capstone course in the final year of a program (Grote et al., 2021). These curricular goals often result in students needing to make one large decision (e.g., selecting into a certain track) with relatively tightly structured curricula within tracks.
In addition to these forces, the actual process of making curricular decisions has important implications for curricular structures and complexity. In particular, the goals and roles of individual faculty play an important function. Many individual faculty have their own professional or personal goals (such as continuing to have sufficient enrollment to teach the course they have always taught) that might not always perfectly align with the goals of their department. Additionally, while faculty often have considerable decision-making power when it comes to curricular requirements, this work is often not directly compensated or incentivized, which means both that faculty rotate on and off curriculum committees and that faculty might have limited energy and motivation to devote to these departmental activities (Cohen et al., 1972; McClellan et al., 2023).
These varied, and at times conflicting, goals, together with the decision-making environment, have two important implications for work on curricular complexity. First, curricular requirements in the same field can vary significantly across schools (please see Appendix A for some stylized examples). Second, changes to curricular requirements often result in more complexity.
Studies have found significant differences in structure across schools within the same discipline. For example, Petkus et al. (2014) found large differences in key required classes in economics majors (e.g., 64.3% of included schools required calculus and 40.7% required econometrics). Similarly, Stoloff et al. (2010) found large variation in the structure of psychology majors (9% of included departments are classified as minimally structured while 20% are fully structured) and in the types of classes that are required for graduation (e.g., 43% of surveyed schools required a laboratory class).
Examinations of changes to curricular requirements show that changes are relatively common and that they are generally associated with increasing options and rules. One survey of department administrators in political science departments found that 70% of departments reported changing their curricular requirements within the past decade (McClellan et al., 2023). These changes were almost entirely in the direction of more offerings and more choice, such as increasing the number of required classes or electives offered or changing course sequences. Specifically, many departments started offering research methods or capstone classes, and more departments implemented high impact practices such as first year seminars and service learning (McClellan et al., 2023). Notably, ease-of-navigation was never stated as an explicit goal for curricular designers (McClellan et al., 2023).
Curricular Complexity
While relatively little empirical work has examined the task complexity inherent in curricular planning, researchers have anecdotally documented that the decision space is complex. Judith Scott-Clayton (2015) noted that curricular descriptions “provide little guidance on which courses should be taken when” (p. 105) and that “term after term this complex decision process must be repeated” (p. 105).
A handful of studies have explicitly examined this topic (e.g., Heileman et al., 2017, 2018; Slim et al., 2014). These studies, some of the first to treat a curriculum as a formal unit of analysis, defined curricular “structural complexity” as a measure of the organization of courses in a major. Using graph-theoretic techniques, they describe the overall complexity of a major by measuring how central each course in a major is to a student’s progress. Their focus is primarily on prerequisite course structures and their measures capture how long a student will necessarily be delayed by failing to take or pass a given course. Other researchers have extended this work to other settings (e.g., Grote et al., 2021 for community college-to-university transfer patterns) to examine the correlation between these measures of complexity and student success.
We build on these studies, some of the most closely related to our own work, in a few key ways. These past studies apply most cleanly to highly structured programs with tightly defined requirements (mostly engineering majors) and focus, generally, on the consequences of not proceeding past a specific course. With this focus, these studies examine a different aspect of complexity than we do. They focus on an aspect of network centrality, specifically how many classes a given class is connected to and how often that class serves as a bridge or a blocker between two other classes. These studies do not examine the complexity inherent in deciphering course requirements and determining which set of courses will lead to a successful graduation. They also do not account for student choice in selecting courses to fulfill requirements. We attempt to fill these gaps by devising measures of curricular complexity that address student choice and are appropriate for majors with significant flexibility.
Like Heileman et al. (2017, 2018), we treat the curriculum as the unit of analysis. We map measures of task complexity developed in other domains onto the forms of complexity present in major requirements. In this context, we define the student’s task to be selecting a set of courses that will successfully lead to graduation in a specific major. This task consists of a set of individual rules that are sometimes complex in isolation and are also sometimes interdependent.
Complexity in this task can take several forms. Our review of curricular requirements, which we detail below, reveals several common types of rules and connections between rules. For example, individual rules can be complex when they have sub-rules (“take four chemistry classes, at least two of which must be organic chemistry,” or “CHEM203 must be taken with CHEM204 to receive credit”) or when they have exceptions (“take four upper-division courses, unless two independent studies are performed, in which case take only three” or “take four upper-division courses other than ENG330”). Interdependencies between rules can also add complexity, such as when a specific class can be used to fulfill exactly one of several different requirements or when one requirement supersedes all other requirements (“in addition to all other requirements, at least two of all classes taken must be laboratory classes”). Such connections make it impossible to successfully satisfy one rule without cross-referencing other rules. We note that the form of complexity we measure does not capture the forms of complexity in past work (e.g., Heileman et al., 2017, 2018). Indeed, decreasing task complexity as we define it can increase network centrality.
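To make the logic of such rules concrete, consider a minimal sketch of the sub-rule example above (“take four chemistry classes, at least two of which must be organic chemistry”). The two category names, CHEM (non-organic chemistry) and ORG (organic chemistry), and the function itself are our own illustration, not part of any actual major’s requirements:

```python
def chem_rule_satisfied(CHEM, ORG):
    """Check the sub-rule example: four chemistry classes, at least two organic.

    CHEM and ORG are counts of classes taken in each (hypothetical) category.
    """
    return CHEM + ORG >= 4 and ORG >= 2
```

Even this single rule requires a sum and two comparisons; interdependencies between such rules compound the arithmetic further.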
Please see Appendix B for a comparatively complex example of a set of major requirements that illustrates the various forms that curricular complexity can take. We describe how we quantify these types of complexity in the methods section below.
Data and Methods
Background: Defining Task Complexity
Our methodological approach is based on research in other fields that has defined and measured complexity in multi-step tasks. We thus start the description of our methodological approach a bit unconventionally, with a brief overview of defining and measuring task complexity based on prior literature.
A complex system is a system “made up of a large number of parts that interact in a nonsimple way [such that] the whole is more than the sum of the parts . . . given the properties of the parts and the laws of their interaction” (Simon, 1957, p. 468). A number of distinct factors can affect the overall complexity of a system: the number of components/rules in a system, the nature and extent of the interdependence between these components, the architecture of the system (e.g., does the decision maker need to revisit previously considered rules?), how routine the task is for the operator, the number of alternative paths that lead to the same desired outcome, how the information is presented, and how dynamic the individual rules and their interdependencies are (Lehman et al., 2020; Rasmussen et al., 2015; Schwartz, 2015; Tversky & Kahneman, 1973; Wood, 1986). As Liu and Li (2012) note, “there is no universally-accepted definition for task complexity . . . task complexity itself is a markedly complex construct, thus defining task complexity itself is a markedly complex task” (p. 554).
In this paper, we adopt a structuralist viewpoint, borrowed from Liu and Li (2012), in which the complexity of a task is defined solely by its structure. While our goal is to measure the cognitive demands that a task puts on an individual (the “cost of implementing a rule” per Oprea [2020, p. 3914]), we proxy these cognitive demands using measures of the task itself. Like Lehman et al. (2020), we define task complexity as a function of the number of elements or rules that a task is composed of (which we refer to as components) and the relationships between these elements (which we refer to as connections). These two measures are among the most common used to measure task complexity and are two of the 10 dimensions of complexity that Liu and Li (2012) identify based on a review of the literature. In this methods section, and again in the discussion section, we discuss the measures of complexity that we do not include in this study.
Components are the formal sections that compose a task and are defined without respect to other components in the same task (Baccarini, 1996; Lehman et al., 2020; Williams, 1999; Wood, 1986). Tasks with a high number of components contain more detail and require more actions to successfully complete the task. Successful completion of tasks with high numbers of components requires “more organizational resources in the form of cognitive effort, attention, and coordination” (Lehman et al., 2020, p. 1442).
Connections describe when one component exhibits a functional dependency on other component(s) in the system. Components that are connected to other components may be affected by activities pertaining to another component (Campbell, 1988; Lehman et al., 2020; Wood, 1986). The extent of complexity driven by connections depends on “the number, strength, and dependencies of the connections between tasks or elements in a system . . . [complexity from connections] increases when the task elements or tasks are highly connected and the output of one element depends on the input of another” (Rasmussen et al., 2015, p. 232).
Our definitions of components and connections inform our specific approach to measuring complexity, which we describe below.
There are several measures of complexity that we do not include in this study. Some dimensions of structural task complexity we omit because they are not applicable in the context of curricular planning. These include variability (changes or unstable characteristics of task components) and novelty (novel, irregular and non-routine events or tasks; Liu & Li, 2012, p. 564). Other measures of complexity that could be important for this task are outside the scope of this paper. These include factors such as interchangeable classes and the presentation of requirements. We also do not include individual-specific drivers of complexity, such as familiarity, access to information, or preferences. As noted above, our measures of complexity focus on individual rules and the connections between rules. We do not attend to the implications of failing to take or pass a particular class (akin to the network centrality that Heileman et al. focus on). We discuss these opportunities for future research further in the discussion section.
While this work does not explicitly examine the relationship between measured complexity and student outcomes, past work indicates that such a relationship likely exists. Work across several domains (e.g., tax rules [Abeler & Jager, 2015], public health [Lehman et al., 2020], and nuclear power diagnostics [Ham et al., 2012]) has shown strong relationships between measured task complexity and outcomes. Specifically, task complexity is related to an individual’s ability to process information, an individual’s ability to make a strategic decision, and the eventual level of intrinsic motivation and task satisfaction (Abeler & Jager, 2015; Liu & Li, 2012, p. 554).
Our Measures of Task Complexity
It is on this foundation that we based our objective measures of curricular complexity. We had three guiding principles when designing our measures of task complexity in major requirements: that they can be calculated by evaluating only the major requirements and courses without reference to other information; that they can be applied to any major and compared across campuses, major types, and time; and that they can be calculated with reasonable precision for any major. This set of principles, in combination with an understanding of task complexity as defined above, led us to three concrete measurements. Our measurements are intended to account for both the number of components of the task as well as the complexity of understanding connections between components (Liu & Li, 2012). 2
We begin the measurement process by first creating a set of categories in which to sort the different available classes. Each category is made up of classes that have the exact same effect on graduation as other classes in the category. That is, courses in each category are interchangeable with regards to graduation: any class in the category could be swapped for any other class in the same category with no effect on graduation outcomes. The process of creating these categories is relatively straightforward, does not require much technical skill, and could be replicated broadly by others.
As an illustrative example, imagine that requirements to earn a degree in Biology were:
Take two core courses.
Take three electives.
BIO 310 may be counted as a core course or an elective, but not both.
There are three categories here: (1) courses that are core courses but not BIO 310 (we call this category CORE), (2) courses that are electives but not BIO 310 (ELEC), and (3) BIO 310 (BIO310). If a student takes a set of courses that successfully leads to graduation, any of their CORE courses could be swapped for any other CORE course without changing the graduation outcome, and similarly for the other categories. We refer to this measure as Number of Course Categories. We built this measure to be roughly analogous to the number of distinct components in a complex task (Lehman et al., 2020; Liu & Li, 2012).
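As a minimal sketch of this sorting step, the example’s three categories can be assigned as follows. The course codes and lists below are hypothetical illustrations, not actual requirements:

```python
# Hypothetical course lists for the example Biology requirements.
core_courses = {"BIO 201", "BIO 202", "BIO 310"}      # classes satisfying the core rule
elective_courses = {"BIO 305", "BIO 310", "BIO 401"}  # classes satisfying the elective rule

def categorize(course):
    """Sort a course into CORE, ELEC, or BIO310 per the example rules."""
    if course == "BIO 310":
        return "BIO310"  # its own category: it interacts with two rules
    if course in core_courses:
        return "CORE"
    if course in elective_courses:
        return "ELEC"
    return "OTHER"       # courses with no effect on graduation in this major
```

Any two courses that receive the same label are interchangeable with respect to graduation, which is exactly the defining property of a category.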
Our second and third measures are meant to capture the number of distinct components, the number of distinct connections, and the complexity of these connections (Lehman et al., 2020; Liu & Li, 2012). The second measure of complexity we use is Number of Mentions per Category. Once the categories have been determined, we count the number of times each category must be referenced when figuring out how to graduate. We then take the average across the full list of categories. Counting the number of references could be done in a simple colloquial way that would not require technical training: by reading through the list of requirements as written in the set of graduation requirements and counting a mention each time the category is referenced. In the case of our example Biology requirements above, “Take two core courses” relates to both the CORE and BIO310 categories and “Take three electives” relates to ELEC and BIO310. This equals one mention each for CORE and ELEC, and two mentions for BIO310, for an average of 1.33 mentions per category. Note that this calculation counts two references for BIO310 rather than the three that appear in the requirements as written above.
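The mentions-per-category average for the example can be sketched as follows; the rule-to-category mapping simply restates the example in the text:

```python
# Categories referenced by each written requirement in the Biology example:
#   "Take two core courses"  -> CORE and BIO310
#   "Take three electives"   -> ELEC and BIO310
rule_mentions = [["CORE", "BIO310"], ["ELEC", "BIO310"]]

# Tally how many times each category is referenced across all rules.
counts = {}
for rule in rule_mentions:
    for category in rule:
        counts[category] = counts.get(category, 0) + 1

# CORE: 1 mention, ELEC: 1, BIO310: 2 -> an average of 4/3, roughly 1.33
mentions_per_category = sum(counts.values()) / len(counts)
```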
In this paper we take a more disciplined—but more difficult to apply—approach to counting the number of mentions. We use a computer-understandable set of Boolean statements (described below) and count the number of times each category is referenced in these statements. We also use these Boolean statements for a third measure of complexity.
For our third measure of complexity, we convert each set of major requirements into a set of machine-interpretable Boolean statements that, when executed, determine whether a given set of courses would successfully complete the task and lead to graduation. We focus only on requirements within a major (that is, we ignore school-level distributive and general-education requirements). Our steps in creating the Boolean statements are detailed in Appendix C and can be applied to any majors for purposes of comparison.
We note that creating the Boolean statements is a time-consuming and somewhat complex task. To ensure that our measures were consistent and as efficient as possible we used a multiple coder process. After the first coder (a student assistant or one of the authors) created the codes, a second coder (one of the authors) checked the Boolean statements to ensure that they were accurate and as efficient as possible. When there were significant disagreements between the first two coders, a third coder was brought in.
Taking the same Biology requirements as above as an example, the first step in constructing the Boolean statements is to determine the categories for all available courses. As above, we have CORE, ELEC, and BIO310 for this set. Then, to determine if a given set of courses leads to graduation, we count the number of courses taken in each category, storing these as variables.
Finally, we write a set of Boolean statements that evaluate to True if the student graduates, and False otherwise. For our current example, the student would graduate if:

CORE + BIO310 ≥ 2
AND
ELEC + BIO310*(CORE ≥ 2) ≥ 3
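Under our reading of the example (and writing the operators in ASCII), the graduation check can be sketched as a function of the category counts; the function name is our own:

```python
def graduates(CORE, ELEC, BIO310):
    """True if the category counts satisfy the example Biology requirements.

    BIO310 always helps the core rule, but counts toward the elective rule
    only when the CORE courses alone already cover the core requirement
    (i.e., BIO 310 cannot be double-counted across the two rules).
    """
    core_ok = CORE + BIO310 >= 2
    elec_ok = ELEC + BIO310 * (CORE >= 2) >= 3
    return core_ok and elec_ok
```

For instance, a student with one core course, two electives, and BIO 310 does not graduate: BIO 310 is consumed by the core rule, leaving only two qualifying electives.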
To turn these sets of Boolean statements into a measure of complexity, which we call the Number of Operations, we process the written forms of the statements themselves. To count the Number of Operations, we count the number of times that two numbers must be compared to each other with >, <, or =; the number of arithmetic or logic operations using AND, OR, +, −, or *; and the number of sets of parentheses. Parentheses account for sub-calculations, as well as any use of “Min” or “Max” functions, which occur regularly in the coding of majors.
In our example, there are five operations in ELEC + BIO310*(CORE ≥ 2) ≥ 3: two “≥”s, a set of parentheses, a +, and a *. There are two operations in CORE + BIO310 ≥ 2 (+ and ≥), for seven total. Then, there is an eighth operation: the AND combining the two together. This number reflects how long it would take a computer to calculate whether a given student graduates using the set of Booleans. 3 This measure, similar to computational conceptions of complexity, represents the overall complexity from both components and connections.
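The counting procedure can be sketched as below, assuming ASCII operators; real major codings also include Min and Max functions, which this sketch would capture only through their parentheses:

```python
import re

def count_operations(boolean_string):
    """Count comparisons, AND/OR and arithmetic operators, and parenthesis pairs."""
    comparisons = len(re.findall(r"[<>=]+", boolean_string))  # >=, <=, >, <, =
    # Word boundaries keep, e.g., the "OR" inside "CORE" from being counted.
    operators = len(re.findall(r"\bAND\b|\bOR\b|[+\-*]", boolean_string))
    paren_pairs = boolean_string.count("(")                   # one count per pair
    return comparisons + operators + paren_pairs

expr = "CORE + BIO310 >= 2 AND ELEC + BIO310*(CORE >= 2) >= 3"
count_operations(expr)  # -> 8, matching the count in the text
```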
To align with past work on task complexity, our measures are meant to capture both the number of rules (components) and the degree of interconnectedness between these rules (connections), using an approach that can be calculated from the major requirements alone, that can be applied to any major regardless of location and time, and that can be calculated with reasonable precision.
Of course, given the complexity of this task, one could accomplish this goal in other ways. One possible avenue for creating metrics that represent the complexity of college major requirements is to use generative AI models like ChatGPT. In Appendix D, we describe our attempts at measuring task complexity in major requirements using generative AI. We find these attempts lacking and conclude that, until the technology improves considerably, generative AI is not capable of automating the creation of complexity metrics. We thus present our approach as one method in an effort to spur future work and conversation.
Applying Our Measures: Task Complexity in Major Requirements
We applied our three measures of complexity to the major requirements for four majors at each university in the California State University (CSU) and University of California (UC) systems. The CSU system comprises 23 campuses. 4 It is the largest 4-year public university system in the nation; together the CSU campuses enrolled 352,793 full-time equivalent undergraduate students in Fall 2023, 5 and conferred 109,919 Bachelor’s Degrees in 2021–2022. 6 On average, CSU campuses admit 56% of applicants. The interquartile range of combined SAT scores for entering students was 884–1,102. 7
The UC system comprises nine campuses with undergraduate enrollment (and one that enrolls only graduate students). In 2022 the system enrolled 230,407 undergraduates, and in 2022–2023 it conferred 62,311 Bachelor’s Degrees. 8 On average, UC campuses admit 36% of applicants, and the interquartile range of combined SAT scores for entering students is 1,078–1,340. We focus on these university systems because, as major university systems with large enrollments, the complexity of their requirements is interesting in its own right. Additionally, these large systems can serve as a reference point for other universities.
Seven of the nine UC campuses operate on the quarter system (three terms during the academic year plus a summer term) while two (UC Berkeley and UC Merced) operate on the semester system. At the time of our data collection, four CSU campuses operated on the quarter system (CSU East Bay, Cal Poly Pomona, CSU San Bernardino, and Cal Poly San Luis Obispo) while the rest operated on the semester system. We examine differences in curricular complexity between the CSU and UC sectors in the findings section and discuss potential implications in the discussion section.
We coded the major requirements for four majors at each campus: English, Biology, Psychology, and Economics (or their closest analogues at the given campus). We chose these majors because they are among the largest majors at each campus, exist on almost every campus, 9 and provide reasonable disciplinary diversity. We were able to create Booleans for all selected majors in our sample except for English at UC Santa Cruz, for which we were unable to find complete information.
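To make the Boolean coding concrete, the following sketch shows one way a set of major requirements can be expressed and evaluated. This is a hypothetical Python illustration, not the authors' actual coding scheme or data: the course names, thresholds, and requirement structure are invented. Each "at least n of" threshold comparison is roughly the kind of computational comparison that the Number of Operations measure is meant to count.

```python
# Hypothetical illustration (invented courses and structure): a major's
# requirements encoded as nested "at least n of" comparisons over course
# lists, evaluated against a student's transcript.

def at_least(n, courses, taken):
    """True if at least n of the listed courses appear in the transcript."""
    return sum(course in taken for course in courses) >= n

def meets_requirements(taken):
    # Invented requirement structure for an economics-style major:
    # all core courses, two electives, and one methods course.
    core = at_least(3, ["ECON 101", "ECON 102", "ECON 201"], taken)
    electives = at_least(2, ["ECON 310", "ECON 320", "ECON 330", "ECON 340"], taken)
    methods = at_least(1, ["STAT 200", "MATH 210"], taken)
    return core and electives and methods

transcript = {"ECON 101", "ECON 102", "ECON 201", "ECON 310", "ECON 330", "STAT 200"}
print(meets_requirements(transcript))  # True
```

A transcript missing any sub-requirement (say, the methods course) would evaluate to False, which is how a Boolean encoding makes "does this set of courses satisfy the major?" a mechanical check.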
We downloaded the major requirements either from the departmental webpage or from the campus catalog when the departmental webpage did not offer clear requirements. We coded major requirements as available on these websites during the 2016–2017 academic year. PDFs of major requirements saved from this time are available from the authors. Each set of requirements was coded by either one of the main authors or by a student assistant. Each set of requirements was checked twice, once by another author and once by a student assistant. Our calculations for each major are in Appendix E.
With the complexity calculations for each major at each campus, we analyze differences in these measures across majors and campuses. We first examine how related our three measures of complexity are using bivariate correlations. We then use simple graphical methods (density plots) to examine variation in each of our measures of complexity. To formalize these graphical findings and to examine how much of the overall variation is explained by disciplines and by campuses, we examine basic differences in means and use formal analysis of variance (ANOVA) techniques.
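The core quantity behind these ANOVA results, the share of total variation explained by a factor, can be sketched with a toy calculation. The Python snippet below uses invented numbers, not our data, and computes a one-way version of the decomposition (between-group sum of squares over total sum of squares, i.e., eta-squared); the analyses reported below use partial sums of squares with both major and campus factors, but the underlying logic is the same.

```python
# Toy sketch of ANOVA-style variance decomposition: the fraction of total
# variation in a complexity measure explained by group (e.g., major) means.
# Numbers are synthetic, for illustration only.

def variance_explained(values, groups):
    """Between-group sum of squares divided by total sum of squares."""
    grand_mean = sum(values) / len(values)
    total_ss = sum((v - grand_mean) ** 2 for v in values)
    group_means = {}
    for g in set(groups):
        members = [v for v, gg in zip(values, groups) if gg == g]
        group_means[g] = sum(members) / len(members)
    between_ss = sum((group_means[g] - grand_mean) ** 2 for g in groups)
    return between_ss / total_ss

# Invented counts of course categories for two majors at three campuses each.
categories = [18, 20, 19, 9, 10, 8]
majors = ["Bio", "Bio", "Bio", "Econ", "Econ", "Econ"]
print(round(variance_explained(categories, majors), 3))  # 0.974
```

In this toy example nearly all variation sits between the two majors; in our actual data, as shown below, much of the variation in complexity is explained by neither major nor campus.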
Comparing complexity across majors within the same campus allows for a descriptive look at how complexity might differ as a function of the perceived necessary content of the major in question. Comparing complexity within majors across campuses offers insight into how campus-specific norms and expectations might affect complexity. Complexity that is not explained by either major or campus effects gives insight into how much variation there is in the perceived depth and breadth of necessary content within a major. Considerable variation in the degree of complexity across campuses that is not explained by campus-specific effects might indicate that more complex majors could simplify.
Results
Complexity Across Campuses and Majors
Table 1 presents the bivariate correlations between our three measures of complexity in our sample of four majors at each of the CSU and UC campuses. Panel A of Table 1 shows that the Number of Operations is strongly related to both the Number of Categories and the Mentions per Category. However, Number of Categories and Mentions per Category are not strongly related. This indicates that Number of Operations picks up both forms of complexity captured by Number of Categories and by Mentions per Category, but that these two forms of complexity are roughly orthogonal to each other, suggesting that the two measures capture different aspects of complexity. When the UCs are examined separately from the CSUs, there is a positive correlation of .3 among the UCs between Mentions per Category and Number of Course Categories that is statistically significant at the 10% level.
Table 1. Correlation Matrix of Complexity Measures
Panel A: All Colleges
Note. Asterisks indicate statistical significance at the 10% (*), 5% (**), and 1% (***) level. Data are from authors’ calculations of the number of course categories, the average number of mentions per category, and the number of computational comparisons that need to be successfully completed to successfully earn Biology, Economics, English, and Psychology majors at all 23 California State University and nine undergraduate-serving University of California campuses. Major requirements were coded as available on these websites during the 2016–2017 academic year.
Number of course categories
Figure 1 shows the distribution of the Number of Course Categories measure for each of the four majors across campuses. There are two key points of comparison in this figure: differences in the means across majors and the amount of dispersion within majors. On average, majors in our sample have 13.4 course categories. Figure 1 shows that there are considerable differences across major types in terms of the mean number of course categories. Biology has the highest mean. Nearly every Biology major has more categories than the median of each other major type. The mean Biology major has 18.9 categories of courses, compared to 9.4 for Economics, 12.4 for English, and 13.4 for Psychology. Further, Biology has more extreme outliers on the high end, with several Biology majors requiring students to keep track of how many courses they have taken in more than 30 categories. Economics stands out as having relatively few categories. English and Psychology are fairly similar to each other.

Figure 1. Number of Course Categories.
Figure 1 also shows considerable heterogeneity within majors. Biology has a fairly tight distribution around 16, with some additional outliers on the high end. English and Psychology are more spread out, with concentrations around 10 and 11, respectively, and secondary concentrations around 15 and 18, respectively. Economics is the most similar across campuses, with a relatively tight distribution centered around 9, and few large outliers. The extent of variation within each major suggests that there may be room for the more complex majors to simplify by emulating simpler majors or that there are campus-specific differences in complexity.
We formalize these examinations of heterogeneity using ANOVA analyses. Table 2 shows the results of an ANOVA with the variation in Number of Course Categories explained by campus fixed effects and by major fixed effects. Panel A shows that across all schools, 37% of the variation in Course Categories is explained by differences between majors. This is similar when we examine the UC campuses (Panel B) and CSU campuses (Panel C) on their own. This likely reflects the disciplinary differences in the assumed structure of knowledge within a field.
Table 2. ANOVA of Complexity Measures on Major and Campus
Panel A: All Campuses
Note. Asterisks indicate statistical significance at the 10% (*), 5% (**), and 1% (***) level, from a joint F test. Major partial SS indicates the amount of variation explained by major (Biology, Psychology, English, or Economics). Campus partial SS indicates the amount of variation explained by campus. Data are from authors’ calculations of the number of course categories, the average number of mentions per category, and the number of computational comparisons that need to be successfully completed to successfully earn Biology, Economics, English, and Psychology majors at all 23 California State University and nine undergraduate-serving University of California campuses. Major requirements were coded as available on these websites during the 2016–2017 academic year.
Campus fixed effects explain 28% of the variation in Number of Course Categories across all campuses (and again this is similar when we examine the CSU and UC campuses separately). This variation is explained by overall campus-wide differences (campus effects that are shared across all majors on a campus). For example, across all majors, California Polytechnic San Luis Obispo tends to have major requirements with comparatively more categories (its Biology, Economics, and Psychology majors all have the second-highest number of categories). However, while there are significant campus effects, these effects do not align with differences across the CSU and UC systems. On average, UCs have 1.3 fewer categories across all majors, but the difference is not significant (results available upon request). The 35% of variation unexplained by either major or campus indicates that some comparatively complex majors likely have room to simplify.
Number of mentions per category
Figure 2 shows the distributions of the number of mentions per category between majors. Here, we see relatively little difference between different majors, with nearly identical-looking distributions for all four majors. The only differences between them are in the high-end outliers; English and Economics both extend a bit further to the right than Biology and Psychology. These distributions have standard deviations of about 0.4 each. Our ANOVA analysis (presented in Table 2, Panel A) indicates that 2% of the variation in mentions per course category is explained by major and that this is not statistically significant at the 10% level. It is similarly not significant in the UC and CSU campuses when analyzed on their own (Panels B and C).

Figure 2. Number of Mentions per Category.
While there is variation in mentions per course category across campuses, campuses do not appear to have shared mentions-per-category effects. The ANOVA in Table 2 Panel A shows that while 30% of the variation in mentions per category is explained by campus effects that are shared across majors, this is not statistically significant at the 10% level (and looks similar across the CSU and UC sectors). This suggests that the 30% of variance explained is due to the sheer number of campuses and that there are not meaningful differences across campuses. Additionally, there is no statistically significant difference between the UCs as a group and the CSUs as a group. Even adding the statistically insignificant campus effects to the insignificant 2% of variation explained by major, 68% of the variation remains unexplained by major or campus.
Number of operations
Figure 3 shows the distribution of the Number of Operations necessary to determine whether a given set of courses leads to graduation. Across all four majors, the average Number of Operations is 22.1. Figure 3 shows that for Number of Operations, like the number of course categories shown in Figure 1, Economics is simpler than the other majors, with a mean of 18.7. In this case, Psychology, English, and Biology have similar means of 23.6, 23.6, and 24, respectively, but Psychology has a unimodal distribution around that mean, while Biology and English are bimodal.

Figure 3. Number of Operations.
Again, there is considerable heterogeneity within majors across campuses. In Economics, the simplest majors require only five comparisons, while the most complex require almost 25. Table 2 Panel A shows that there are no significant differences between majors in the averages of the Number of Operations (differences between majors explain only 2% of the total variation). However, when we examine the CSUs and UCs separately, we see that differences between majors explain 18% of the total variation among the UC campuses, significant at the 10% level (shown in Panel B). Figure 3 indicates that the shape and spread of the distributions around the means are very different.
There are also no significant differences between campuses. Specifically, the ANOVA checks for a shared campus effect that applies to all of a campus’s majors. We see obvious variation within majors across departments in Figure 3, but the results of the ANOVA indicate that this variation is not shared between departments on the same campus. For example, UC Santa Barbara’s Psychology major has the fifth most Operations across campuses, while its Biology major has the 29th most. We do see institutional differences across sectors, though. UC campuses have on average 6.6 more operations than CSU campuses, and the difference is statistically significant at the 1% level.
Overall complexity
Taking all three figures together, we can generally characterize each major type. Economics is consistently the simplest across the three measures. Economics majors tend to have the fewest requirements and these requirements are relatively simple; reflecting this, Number of Categories and Number of Operations are both comparatively low. These values are reflective of what we see directly in the requirement descriptions: economics tends to have a “breadth” structure in which a few required core courses are followed by picking from a list of electives (see Appendix A for a visual representation). This matches the recommendations set forth by Siegfried et al. (1991).
English tends to be structured with comparatively few unique course categories, but in some schools these requirements interact in complex ways. For example, some schools require that students take courses focusing on specific historical eras or on certain literary topics. These requirements may overlap (e.g., “Love and Desire in Contemporary American Poetry” can be used to fulfill the “one course on literature and ethnicity, literature and gender, or literature and sexuality” requirement or the “one course focusing on literature written in English between 1900 and the present” requirement).
Biology has high Number of Categories measures, but they come together in relatively simple ways, with Number of Operations not as high in relative terms. This reflects a tendency in biology majors to require students to follow certain “tracks,” with few ways to satisfy any one decision; biology has many unique categories of classes, but each category is referred to in comparatively simple ways (e.g., “Take two of the following Genetics and Biotechnology classes”).
The change in the relative complexity of Biology and Psychology, comparing Figure 3 to Figure 1, indicates that psychology majors have, on average, fewer categories of classes but more sub-requirements (e.g., “Two courses are required from two subfields below [one of which is a seminar]. Two additional Psychology courses are required from two different subfields below”). Such interconnected requirements add complexity, which is picked up in our count of operations. This structure, with relatively few categories of classes but tight dependencies within them, matches the curricular guidelines set by the 2008 taskforce of the American Psychological Association.
The accordance between general impressions of the majors, the curricular guidelines established by disciplines, and the measurements we find, adds credence to the use of the measurements and suggests that they may be useful as ways of measuring the complexity of major requirements. These measures also provide more detailed information on where most of the between-field variation is. We find little difference between fields in the number of mentions per category and more variation in the number of different categories to consider and in the complexity of how these categories fit together.
The variation across fields gives a sense of the value of the measurements and how these fields can be characterized. However, we consider the most important finding to be that most of the variation is within-field and within-campus for all three measures. The high degree of within-field variation, and the fact that this variation is not explained by campus-specific effects, indicates that there is considerable room for the more complex majors to simplify across all three dimensions. There is no obvious and apparent reason why majors on the right tails must be more complex than the same major at a different campus.
Conclusion
Colleges are full of complex requirements and byzantine policies and procedures. Even ignoring the many academic and intellectual demands that students must meet to graduate, navigating one’s way to a degree is difficult. Research increasingly indicates that non-academic demands play a very important role in student success, particularly for traditionally underserved groups. A prime example of this often-hidden complexity is curricular planning: deciphering requirements to determine which sets of classes will successfully lead to graduation and working those classes into an academic plan. In this study we created broadly applicable measures of curricular task complexity.
Applying our measures of complexity to four majors at each of the 32 public 4-year colleges in California, we find considerable heterogeneity in complexity. While our measures of complexity are sometimes predicted by both major and campus fixed effects, we find considerable variation that is not explained by either factor. We take this as evidence that there might be room for simplification across fields and across campuses.
Such complexity is, by its very nature, hard to quantify. Creating succinct, widely applicable, and easy-to-understand measures of complexity is a difficult undertaking that will necessarily omit some real aspects of complexity—yet, it is important to do so. If we can identify—either through behavioral data or these close examinations of requirements and policies—the decision points that are most difficult for students, we can make schools easier to navigate and increase success rates while decreasing differences in performance across groups. Such improvements could occur both at the margin of choice and the margin of completion.
Students, particularly in their first few years of enrollment, are often considering multiple majors at the same time (Baker, 2018). Several logistical factors, unrelated to academics, labor market outcomes, or enjoyment, affect this choice (Baker et al., 2018; Hill et al., 2023). While the complexity inherent in deciphering major requirements is unlikely to move a student between very different majors—like economics and English—frustration related to creating a viable program of study could move students between similar majors—like economics and statistics. This may be particularly true for students with less information and fewer resources.
Conditional on choosing a major, the complexity of major requirements could affect student completion and efficiency. Past research has shown that “not knowing which class to take next” is a commonly cited factor explaining why community college students decide to leave college (Ortagus et al., 2020). In interviews, students who have finished degrees have expressed frustration that courses they took ended up not counting towards graduation (Rosenbaum et al., 2006).
Recent policy conversations show that policy makers believe that curricular requirements can be complex and confusing, that the complex nature of the major requirements leads to great cognitive demands in choosing classes, and that the resulting mistakes and frustration could harm performance, persistence, efficiency, and graduation. Recent policies with the goal of reducing complexity—such as Guided Pathways—have led to interventions such as suggesting curricular paths for students and defaulting students into course plans in their first year (Bailey et al., 2015).
Preliminary evidence suggests that such simplifications could have large positive effects on student outcomes. For example, evidence from a five-college district in Texas shows that over the course of adoption, excess credit accumulation decreased by 13% (Jenkins et al., 2020). However, we note that current interventions guide the process for students but do not measure or change the level of complexity of a set of major requirements. To better understand the extent to which such reforms could be helpful, as well as cases in which less-intensive intervention may be more valuable, we need to better understand the prevalence of and variation in different types of complexity.
Policy Implications
These findings from past research, combined with the evidence of considerable unexplained heterogeneity in measured complexity in this study, suggest that academic planners (the faculty and administrators tasked with creating and modifying curricular plans) should consider what kind of academic gauntlets they are creating and that curricular complexity should be a part of planning conversations. We assert that a quantitative measure of curricular choice complexity that can be independently calculated by different departments, such as the measures we have presented in this study, would allow for comparative understanding of complexity across campuses and majors and could better inform decision-makers about the implications of their choices.
Based on the prevalence of, forms of, and variations in complexity we present in this paper, we believe there are several potential actions that could be undertaken by policy makers or practitioners that are low-cost and easy to implement and that could support student efficiency and success. First, students could be presented with default curricular paths and academic schedules upon selecting a major. These paths could be easily modifiable based on student preferences and interests but could provide students with a successful example from which to start. The Program Pathways Mapper (programmapper.org) is an example of such a tool. Second, programs can use existing technology to provide students with the opportunity to self-audit their academic plans and identify potential enrollment mistakes. A number of colleges use Ellucian’s Degree Works software to notify students if they are taking a course that is not required for their declared program (and thus could be out of compliance with federal financial aid policies). However, rather than providing student-level solutions, such as providing students with tools and guidance to navigate complex and confusing requirements, schools and programs could focus on structural reforms by actually reducing the complexity of their requirements. Schools could use similar majors at other schools as guidance for creating simpler major requirements.
Limitations
There are, of course, important limitations to our approach. Our measures of complexity are simple in that they look only at the structure of major requirements, but the latter two measures require the difficult task of coding major requirements in Boolean format. The measurement of the number of course categories, on the other hand, is straightforward to calculate and so might have broader applicability. An administrator wishing to calculate this can go through the list of all courses relevant to the major and group together any courses that are truly interchangeable. Swapping one course for another in a given group should have no effect on graduation. The number of groups is the result. This measure does not capture all aspects of complexity, but it provides a relatively comprehensive view that is comparatively easy to calculate and easy to communicate to important stakeholders.
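The grouping step described above can be sketched computationally. The Python snippet below uses hypothetical requirement lists, not actual major requirements, and treats two courses as falling in the same category when they appear in exactly the same set of requirements. This signature-based grouping is a simplification of the careful judgment an administrator would apply (true interchangeability can also depend on how counts interact), but it conveys how the category count is formed.

```python
# Sketch of counting course categories: group courses by the set of
# requirements that mention them. Requirement names and course lists
# below are invented for illustration.

from collections import defaultdict

# Hypothetical requirements, each mapped to the courses that can satisfy it.
requirements = {
    "core": {"PSY 101", "PSY 102"},
    "methods": {"PSY 200", "STAT 210"},
    "electives": {"PSY 310", "PSY 320", "PSY 330"},
    "seminar": {"PSY 320", "PSY 330"},
}

# Each course's "signature" is the set of requirements that mention it.
signatures = defaultdict(set)
for req, courses in requirements.items():
    for course in courses:
        signatures[course].add(req)

# Courses share a category iff they appear in exactly the same requirements.
categories = {frozenset(reqs) for reqs in signatures.values()}
print(len(categories))  # 4
```

Here PSY 320 and PSY 330 form their own category because they count toward both the electives and seminar requirements, while PSY 310 counts toward electives alone, which is exactly the kind of overlap that drives the measure up.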
Our computational approach leaves out important aspects that might make requirements variably complex and that are likely to be important for student success. This includes how requirements are presented and how difficult it is to locate major requirements. In our own examinations, we found several departments that listed similar majors on the same page (such as different major tracks, or BA/BS variants) in such a way that one set of requirements could easily be confused with another. We also note that the presentation of curricular requirements is not always standardized across departments, and this could add complexity for students (Bonner, 1994; O’Donnell & Johnson, 2001; Zhao, 1992), as could unreliability induced by inaccurate and misleading information (Greitzer, 2005; Woods, 1988).
We also do not distinguish between interchangeable classes. This could be an important consideration as the number of alternatives (Lussier & Olshavsky, 1979; Payne, 1976; Schwartz, 2015) and number of solutions or paths (Bonner, 1994; Campbell, 1988) could add to complexity for students.
Our approach does not account for temporal dependencies such as pre-requisites or differential consequences of failing specific classes (such as those examined by Heileman et al., 2018). Finally, the temporal aspects of this decision, both the time pressure caused by registration windows and capacity constraints and the fact that the decision might take place across multiple sessions, could cause complexity for students (Greitzer, 2005; Payne et al., 1993). Any measurement of something as multifaceted as complexity is likely to be partial, but these are important omissions.
Importantly, the measures we use in this study are independent of the individuals performing the task and are based solely on the structure of task requirements. Our study does not measure students’ experiences of this complexity. We do not include what Oprea (2020) refers to as “non-algorithmic” drivers of complexity and we note that certain structures may be objectively simple but subjectively complex, or vice versa. Our measures of task complexity are meant to provide a proxy for the mental effort required to successfully implement a set of rules to complete a task (Oprea, 2020; Rasmussen, 2015), but measures such as time on task and mistake rates could provide a clearer measure of the person-specific subjective complexity of a task. As this study focuses on objective, stable measures of task complexity, such measures are beyond the scope of this paper but would be a natural extension to the work we present here.
As a final limitation, we also note that we treated curricular complexity as an isolated example of complexity, when in reality understanding major requirements and choosing a set of courses that will lead to graduation is part of a complex interdependent system. We note two examples of how curricular complexity could interact with other forms of complexity.
Students who wish to transfer institutions (from a community college to a 4-year university, in particular) need to navigate at least two sets of major requirements. Furthermore, community college students who plan to apply for transfer to more than one 4-year school need to navigate several sets of requirements: those for earning an associate’s degree, the classes required for transfer, and those required for earning a bachelor’s degree at their transfer destination. There are a number of policy initiatives in California that work to address this, including the Associate Degree for Transfer, which streamlined transfer from the California Community Colleges (CCCs) to the CSUs by ensuring that transfer requirements were common across all CSUs within a given major, and common course numbering initiatives across sectors. The focus of this paper clearly interacts with these other initiatives. Studies examining excess credit accumulation among 2- to 4-year transfer students (e.g., Fink et al., 2018) and studies examining the effects of these policies on student efficiency (e.g., Baker, 2016; Baker et al., 2023) underscore the importance of clear curricular pathways and focused advising.
Curricular complexity could also interact with academic calendars. Past work has shown that switching from semesters to quarters negatively impacts on-time graduation (Bostwick et al., 2022). Part of this effect could be due to reduced scheduling flexibility if a student fails to take a necessary course and might need to wait longer until it is offered again. The effects of curricular complexity could look different across contexts.
Even with these significant limitations, we believe that this work provides an important first step in examining task complexity in major requirements, while pointing toward future work on the actual student experience of complexity. The measures themselves allow for rich, targeted discussions within departments and at the university level. Providing decision-makers with objective, quantifiable measures may lead to more agreement on specific actions. These measures also enable further investigations. We could use them to ask, for example, whether students choose their major partially based on the complexity of its requirements. Similarly, it becomes possible to ask how complexity is related to the probability of leaving the major, persistence, transfer from a community college to a 4-year institution (e.g., Grote et al., 2021), or graduation. Because departments change major requirements with some frequency, the effects of complexity on important outcomes may be causally identified in a panel data setting.
In these examinations of the effects of curricular complexity on student outcomes and success, it will be particularly important to attend to issues of equity. People with less experience in a particular domain experience the greatest costs to information processing and are thus likely to spend less time searching out and evaluating information about courses, to exhibit inefficient searching, and to use simplifying heuristics when deciding which classes to take (Bettman & Park, 1980; Brucks, 1985; Onken et al., 1985). This means that students with relatively weak informational networks, first-generation students, and students from backgrounds that are structurally disadvantaged will have more difficulty navigating complex requirements with weak framing than more straightforward requirements with strong framing. These students could experience the greatest ill effects because of complex curricular requirements. Additionally, to the extent that complexity varies across different majors, differential complexity may also influence the share of students from disadvantaged backgrounds being guided towards different majors and thus different careers.
Of course, policy interest in this complexity goes beyond equity concerns. There is no shortage of attempts to change students’ incentives to graduate, but if complexity is a barrier, such attempts can only go so far. The curriculum environment cannot improve without an investigation of the task of curriculum selection itself (Lehman et al., 2020). As Lehman et al. (2020) point out, “failure to accomplish routines is more likely for routines associated with more complex rules” (p. 1442).
In this paper we present some of the first generalizable and broadly applicable measures of the complexity of major requirements across disciplines and across campuses. Our findings show that much of the existing complexity might be unnecessary; there is great variation within majors across campuses and within campuses across majors. This suggests that curricular simplification is an attainable goal, and that by closely examining the policies that we expect students to follow we might be able to make changes that affect student success.
Footnotes
Appendix A
Appendix B
Appendix C
Appendix D
Appendix E: Complexity Calculation Results
Psychology

| University | Categories | Mentions per Category | Operations |
|---|---|---|---|
| *CSU campuses* | | | |
| Cal Poly SLO | 18 | 1.57 | 26 |
| Cal Poly Pomona | 19 | 1.6 | 29 |
| Bakersfield | 11 | 2 | 16 |
| Chico | 8 | 1 | 10 |
| Monterey Bay | 11 | 1 | 20 |
| Northridge | 16 | 1.63 | 22 |
| Dominguez Hills | 10 | 1 | 17 |
| Fullerton | 9 | 1 | 19 |
| Los Angeles | 10 | 1.11 | 21 |
| Long Beach | 17 | 1 | 30 |
| East Bay | 9 | 1.56 | 28 |
| Fresno | 18 | 1 | 16 |
| Humboldt | 12 | 1 | 10 |
| Sacramento | 19 | 1.7 | 34 |
| San Bernardino | 14 | 1.63 | 22 |
| San Marcos | 8 | 1.67 | 17 |
| San Diego | 18 | 1.5 | 21 |
| San Francisco | 12 | 1 | 11 |
| San Jose | 13 | 1 | 20 |
| Sonoma | 17 | 1 | 18 |
| Stanislaus | 23 | 1.4 | 25 |
| Channel Islands | 5 | 1 | 6 |
| *UC campuses* | | | |
| Berkeley | 16 | 1.53 | 38 |
| Davis | 10 | 1.8 | 37 |
| Merced | 12 | 1.67 | 26 |
| Riverside | 12 | 1.33 | 30 |
| Irvine | 16 | 1.06 | 41 |
| Los Angeles | 13 | 2 | 52 |
| Santa Barbara | 14 | 1 | 12 |
| Santa Cruz | 14 | 1.31 | 39 |
| San Diego | 11 | 1 | 20 |
Note. Data from authors’ calculations of the number of course categories, the average number of mentions per category, and the number of computational comparisons that need to be successfully completed to successfully earn Biology, Economics, English, and Psychology majors at all 23 California State University and nine undergraduate-serving University of California campuses. Major requirements were coded as available on these websites during the 2016–2017 academic year. CSU = California State University. UC = University of California. Cal Poly SLO = Cal Poly San Luis Obispo.
Acknowledgements
We thank seminar participants at California State University, Fullerton, the University of Delaware, the University of Pennsylvania, and Stanford University for their thoughtful feedback. We thank Mitchell Stevens and Elizabeth Bruch for generous help in framing this paper.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Project funding was provided to Baker by the NAEd/Spencer Foundation Postdoctoral Fellowship and The University of California, Irvine’s School of Education.
Notes
Authors
RACHEL BAKER is an associate professor at The Graduate School of Education, University of Pennsylvania, 3700 Walnut Street, Philadelphia PA 19104; email:
NICK HUNTINGTON-KLEIN is an assistant professor in the department of economics at Seattle University, Savery Hall, 410 Spokane Ln, Seattle, WA 98105; email:
