Abstract
The “Assessment Movement” in higher education has generated some of the most wide-ranging and heated discussions that the academy has experienced in a while. On the one hand, accrediting agencies, prospective and current clientele, and the public-at-large have a clear vested interest in ensuring that colleges and universities actually deliver on the student learning outcomes that they promise. Anything less would be tantamount to a failure of institutional accountability, if not outright fraud. On the other hand, it is no secret that efforts to foster a “culture of assessment” among institutions of higher learning have frequently encountered resistance, particularly on the part of faculty unconvinced that the aspirations of the assessment movement are in fact achievable. One consequence of this tension is the emergence of an embryonic literature devoted to the study of processes that monitor, enhance, or deter the cultivation of a “culture of assessment” with sufficient buy-in among all institutional stakeholders, faculty included. Despite the wide-ranging host of research methods employed in this literature, a significant number of large issues remain unresolved, making it difficult to determine just how close to a consensual culture of assessment we have actually come. Because one critical lesson of extant research in this area is that “metrics matter,” we approach the subjective controversy over outcomes assessment through an application of Q methodology. Accordingly, we comb the vast “concourse” on assessment that has emerged among stakeholders recently to generate a 50-item Q sample representative of the diverse subjectivity at issue. Forty faculty and administrators from several different institutions completed the Q sort, which yielded two strong factors: the Anti-Assessment Stalwarts and the Defenders of the Faith. Suggestions are offered regarding strategies for reconciling these “dueling narratives” on outcomes assessment.
Outcomes-assessment practices in higher education are grotesque, unintentional parodies of both social science and “accountability.” No matter how much they purport to be about “standards” or “student needs,” they are in fact scams run by bloodless bureaucrats who do not understand the holistic nature of a good college education.

Those who are afraid of rubrics and assessment instruments remind me of Luddites who refuse to perceive reality. If we are to rely on our time-tested bold statements that “we are a quality institution,” without any evidence, then we deserve to be judged by outside constituencies.
Introduction
What has been labeled the “Assessment Movement” (Ewell, 2002, 2009) in higher education has generated some of the most wide-ranging and heated discussions that the academy has experienced in quite some time. Indeed, judging from the most audible voices in conversations surrounding outcomes assessment, the general impression conveyed by the tone and tenor of the debate bears less of a resemblance to a vigorous yet disciplined, diplomatic exchange of alternative intellectual views than to an all-out, profoundly polarized, and acrimonious “dialogue of the deaf” between deeply-entrenched and seemingly antithetical positions. Occupying one side of the ensuing stand-off are the advocates of outcomes assessment. For proponents of the assessment movement, it is absolutely essential to gauge the effectiveness of higher education and to hold institutions accountable (Carey, 2010; Glenn, 2010; Havens, 2013; Miller, 2012). On the opposite side of this proposition we find an equally committed cluster of higher education professionals, namely the skeptics or detractors of the outcomes assessment enterprise that has gathered momentum, particularly as a result of public demands for accountability and accrediting agency practices over the past decade. Central to the resistance by members of the skeptics camp are a range of concerns including, among others, complaints about the “political” nature and origins of the accountability movement, the lack of meaningful faculty input and ownership, and the persistence of troubling epistemological questions bearing on the validity of various measures commonly employed to provide indisputable evidence of real educational progress (Hazelkorn, 2013; Horn & Wilburn, 2013; Nugent, 2008).
Whether these caricatures of the pro and con camps on the value of outcomes assessment efforts are accurate is a question that the present research is designed in part to address. As we shall see, extant research aimed at ascertaining progress in the quest to develop institution-wide “cultures of assessment” by calibrating and monitoring over time attitudes and practices vis-à-vis outcomes assessment on the part of key stakeholders—accrediting agencies, academic administrators, teaching faculty, and students—is by no means of one piece in terms of methodologies and metrics, no less than substantive findings. Indeed, our review of the literature bearing on assessment and its effective implementation across the universe of higher education in the United States leads us to surmise that one of the biggest deterrents—if not the principal obstacle—encountered in the quest to cultivate campus-wide “cultures of assessment” stems from inadequately understood viewpoints held by key parties to the enterprise. Particularly important in this regard are the perspectives of faculty and administrators as institutions embark on the creation of such cultures in a manner able to satisfy external accrediting agencies while upholding or advancing the quality of the college’s core educational mission. If so, before reviewing the results of these diverse efforts and setting the stage for the methodological alternative we employ here, it may prove beneficial to pause briefly and place the “Assessment Movement” in an abbreviated historical context.
The State of Play in the Outcomes Assessment Enterprise
We can begin by noting that assessment has always been a critical component of the education-learning process. Initially, assessment was the exclusive province of teachers who designed courses, developed assignments, and then evaluated the extent to which students had mastered the material. Over time, others who had an interest in education and the learning process became involved. For example, in the early 20th century, groups like the Carnegie Foundation for the Advancement of Teaching—with an interest in objective and scientific evaluation of student learning—began developing standardized tests to evaluate specific areas of learning. These efforts were followed in the 1930s by the work of several universities, including the University of Chicago, to expand assessment efforts to measure multi-disciplinary learning and “general education.” This period saw the development of tests such as the Graduate Record Examination. This work was followed in the post–World War II era by what Shavelson (2007) calls “The Rise of the Test Providers.” As a direct result of the number of returning soldiers using the GI Bill to go to college, several companies, most notably Educational Testing Service (ETS), were established to assist colleges in the screening and assessment process. Due to the combined effects of these separate developments, the evolving culture of higher education in the United States of the mid-1960s was generally hospitable to a growing emphasis on assessment. More specifically, this acceptance was manifested in the ever-growing reliance on ETS-type instruments such as the SAT, GRE, and the like as widely-used and legitimate measures of educational outcomes.
The exact sequence and precise effects of the crucial events to follow are subject to different interpretations, but what is clear from the various accounts is that the assessment process began to change significantly during the 1970s. Prior to this decade, the outside forces that had pushed for greater assessment were interested primarily in enhancing the education-learning process. That began to change and the assessment movement took on an additional emphasis as a consequence of alterations in the political and economic environment that, in concert, accelerated the demand for college degrees by employers and prospective employees in search of certification. This in turn was accompanied by rapid rises in the cost of college, increases that far outpaced the capacity of federal and state governments to subsidize these costs. Soon, partisan differences on the role of such subsidies would be activated and this, combined with increased pressure from tuition-strapped families, contributed to elevated concern among political office-holders. The cumulative effect of these developments was a political climate far more hospitable to the exercise of added control of colleges and universities, all defended as part of a legitimate interest in holding higher education accountable.
Universities had always claimed that students were learning, but now the time had come, some argued, for universities to demonstrate that learning was actually taking place. A multitude of forces coalesced to produce this turn of events. Included among these is the emergence of an emphasis on applying private-sector management models to the public and education arenas, which meant a greater emphasis on evaluation and accountability (Zumeta, 1998, 2000). In addition, during this period there were substantial funding cutbacks for higher education tied to a more audible demand from state governments that universities be accountable for what they do with taxpayers’ dollars. Some states began to establish funding formulae based on objective indicators of performance by universities (Ewell, 1994, 2001). At the same time, spending cutbacks escalated the cost of higher education which, when coupled with tax revolts across the country, increased the public’s demand for greater accountability (Miller, 2012; Zumeta, 2000). This demand for greater accountability was taken by some, especially faculty, as unjustifiably contrary to the respect and deference given to universities in the past. At the same time, however, with the need for more money and the trend toward the application of business models in higher education, boards of directors at many universities had become dominated by members from the business world. As a result, boards came to function much less as buffers and protectors of institutions of higher learning from outside forces, and more as vehicles by which the demands from both the public and private sectors would be implemented. Accrediting agencies, though having been involved for decades, also began to take on a new focus by requiring institutions of higher education to develop measures of institutional effectiveness and to articulate clear student learning outcomes (Ewell, 2009; King, 2000; Zumeta, 1998).
Added to these institutional alterations in the environment of American higher education is a discernable increase in the demands for accountability from many critics of higher education. One in particular warrants citation. Margaret Spellings, Secretary of Education under President George W. Bush from 2005 to 2009, in The Commission on the Future of Higher Education report bearing her name (Spellings, 2006) focused on accountability from a self-proclaimed consumerist perspective. The Commission’s report called for the creation of a database so that the public could have access to information about individual colleges and universities. This information would serve as performance indicators to demonstrate a given institution’s ability to produce results so that prospective students and their parents could make better choices on where to spend their money. Similarly, higher education could not escape the impact of other social forces operating in the external environment such as the culture wars. Works such as Allan Bloom’s (1987) The Closing of the American Mind along with other books by then-Secretary of Education William Bennett (1989) provided thinly veiled political assaults on the perceived leftward leanings of most institutions of higher education as well as their faculty. This added considerably to the demand for greater accountability. Similar attacks have continued up to the present (Archibald & Feldman, 2011; Arum & Roksa, 2011).
Arum and Roksa’s Academically Adrift: Limited Learning on College Campuses warrants special attention due to the fact that it constitutes what is arguably the most thorough, careful, and thoughtful—not to mention, sobering—application of assessment per se to key educational outcomes at two dozen highly esteemed colleges and universities of varying sizes and locations within the United States. Spanning several years, Arum and Roksa’s research design enabled them to chart progress on several fronts within the institutions they examined—critical thinking, analytical reasoning, and written communication—over time while also issuing data-based judgments of institutional effectiveness at a global level across the schools they examined. Based primarily on data generated by student responses to the rubric-based Collegiate Learning Assessment (CLA), an instrument fostered and promoted by the Council for Aid to Education, the results were devastating: excluding dropouts, fully 45% of the participants failed to demonstrate any significant gains in the three critical skill areas cited above during the first 2 years of college. Equally depressing, this figure was reduced only marginally, to 36%, over 4 years of college. Not surprisingly, these findings gained wide circulation among mass-media outlets in the United States, adding a growing sense of urgency to the political incentives to shore up accountability. Finally, it bears noting that these growing concerns were gaining increased circulation at the same time that severely elevated student debt levels were garnering widespread attention. The spike in aggregate debt burdens incurred by college graduates stemmed in large measure from the fact that average tuition costs, at 4-year public universities circa 2014, had climbed to 225% of their 1984 levels (College Board, 2015, p. 16).
Unresolved Issues From Scholarly Scrutiny
In such an environment, it should come as no great surprise that the outcomes assessment movement is shrouded in controversy. And the controversy catalyzed by divergent perspectives on assessment held by differing constituencies appears to persist even when attention to the issue shifts from popular or policymaking venues to scholarly efforts to monitor meaningful progress in the cultivation of genuinely cooperative cultures of assessment. Repeated surveys of faculty buy-in to institutional assessment regimes, undertaken by Matt Fuller and colleagues, defy holistic narrative interpretation and, despite some evidence of declining resistance generally by faculty to outcomes assessment, the percentages of avid supporters among the ranks of teaching faculty fall short of substantial majorities (Fuller, 2011; Fuller, Henderson, & Bustamante, 2015; Fuller & Skidmore, 2014). Additional commentary and studies of both an impressionistic nature (Ewell, 2009; Gold, Rhoades, Smith, & Kuh, 2011; Hutchings, 2010; Katz, 2010; Kelly-Woessner, 2011; Lederman, 2010; Praslova, 2013) and more empirically oriented studies on samples of critically placed administrators and/or program participants (Farkas, Hinchliffe, & Houk, 2015; Hunt-Bull & Packey, 2007; Kuh & Ikenberry, 2009; Loughman & Thomson, 2006; MacDonald, Williams, Lazowski, Horst, & Barron, 2014; Marrs, 2009; Welsh & Metcalf, 2003a, 2003b) underscore a complementary conclusion in pointing to the sense of faculty buy-in as the pivotal variable in explaining why some efforts to construct cultures of assessment succeed and others fail. For his part, Fuller (2011) is convinced that research capable of identifying the roots of faculty support for and opposition to outcomes assessment holds the key to success in breeding cultures of assessment. 
Moreover, it is Fuller’s view that until large-sample surveys of the sort he has conducted can generate genuine narratives in which discrete survey responses assume the properties of a subjectively coherent account, scholarship of the sort needed will remain elusive. Meanwhile, the appearance of Astin and Antonio’s (2012) widely regarded, authoritative volume on such matters not only casts doubt on the degree of faculty buy-in within current efforts to cultivate cultures of assessment, it lays at the feet of faculty the clear onus of blame for this condition. Faculty, according to these authors, are inherently resistant to change in their work environments and are deemed guilty of responding to initiatives advanced under the rubric of assessment with a stylized series of “academic games” that sabotage reasonable cooperation with authorities seeking to institutionalize feedback processes fundamentally aimed at enhancing student learning outcomes.
The dearth of data-based accounts, coupled with the limitations of extant empirical studies, has not prevented other scholars (e.g., Kinzie, 2010; Miller, 2012) from advancing ambitious generalizations speaking to the overall state of play vis-à-vis assessment in the academic community as a whole. Both, for instance, claim that assessment has taken root on the vast majority of campuses across America and, while faculty still lag behind administrators in their enthusiasm for devising metrics of institutional effectiveness, in many if not most institutions the process of cultivating an authentic culture of assessment is well—and widely—underway. Miller’s (2012) conclusions, derived from an effort to track attitudes toward the Assessment Movement by examining the debates over the past 20 years about assessment in the scholarly journal Change, are worth citing in this regard:

In some ways, the assessment movement over the last 25 years is similar to what individuals experience as they move through Kübler-Ross’s stages of grief: denial, anger, bargaining, depression, and acceptance. . . . During the initial denial stage, faculty and staff could not understand why assessment was necessary, which led to anger that outside forces were trying to mandate it. However, demands for accountability continued to create pressure for colleges and universities to assess student learning, leading institutions to try bargaining with state officials and regional accreditation agencies. Unflattering national evaluations of American higher education . . . propelled many institutions into depression. But eventually, reluctantly, slowly, and unevenly, many institutions came to an acceptance of assessment and its role in higher education. (p. 3)
For our part, such a characterization seems premature at best. Indeed, our own interest in the state of play in the assessment game was catalyzed in the spring of 2014 when a chair of a political science department posted a negative comment about the assessment process on a department chairs’ online discussion list, apsanet.org, operated by the American Political Science Association. This particular post was followed by a flood of comparably hostile comments, and a few relatively supportive comments, from a wide swath of professors and chairs representing political science programs across a diverse selection of colleges and universities. The discovery of this commentary led us to search other material and discussions, particularly a series of articles about assessment in the Chronicle of Higher Education. Many of the articles in the online version of the Chronicle were followed by hundreds of comments, most of them critical of what was currently taking place under the aegis of the Assessment Movement. At the very least, the commentary taken as a whole cast a lengthy shadow of doubt on Miller’s contention that the higher education community had come to accept if not make peace with the rigors and requirements of assessment. Indeed, if the commentaries we encountered were to any degree representative, it would appear that many stakeholders now occupying the classroom trenches as practitioners in the assessment enterprise are not accurately labeled as embodying the acceptance stage at this point and are still moored, if not mired, in the anger stage. To be fair, Miller’s conclusions are framed in terms of compliance with the assessment mandates at the institutional rather than the individual level of administrative or faculty stakeholders. And she has couched her observations in carefully imprecise (e.g., “many institutions . . . ”) language that makes the truth-claims she advances difficult to refute.
Even so, her claims run precariously close to committing the ecological fallacy by, in effect, assuming that institution-wide compliance with accrediting agencies’ emphases on assessment can automatically be taken to imply supportive subjectivity on the part of the individuals comprising those institutions.
Q Methodology
This project exploits the advantages of Q methodology to examine the subjective structure of the discourse regarding assessment in higher education. Originated by William Stephenson (1935, 1953), Q methodology provides for the systematic study of subjectivity. McKeown and Thomas (2013) provide an overview of Q methodology: Q methodology encompasses a distinctive set of psychometric and operational principles that, conjoined with statistical applications of correlational and factor-analytic techniques, provides researchers with a systematic and rigorously quantitative procedure for examining the subjective components of human behavior. Within the context of Q methodology, “subjectivity” is regarded as a person’s communication of a point of view on any matter of personal or social importance. A corollary is the twofold premise that subjective viewpoints are “communicable” and advanced from a position of “self reference.” (Preface, p. ix)
Key to Q methodology is the concept of concourse, by which Stephenson meant all that could be said about a particular topic, which is, of course, theoretically infinite in nature. Concourse is rooted in self-reference, and this universe of statements is made up of statements of opinion. For example, only the delusional would dispute that Barack Obama is president of the United States in 2015, but everyone has an opinion about that fact, and the communication of such opinion—that is, shared communicability—is behavior. Similarly, expressions of opinion about the assessment movement in higher education, when shared, are behavior that can be studied scientifically, using the procedures and principles of Q methodology.
A sample of statements (Q sample) drawn from the concourse is presented to subjects who rank-order the statements through a process known as Q-sorting. Thus, each statement is placed along a continuum, relative to the other statements in the Q sample, reflecting the sorter’s point of view about that topic. Note the substantial difference between Q-sorting and responding to a Likert scale or a battery of survey items. In a Likert scale or survey, each item is independent of the other items. In Q-sorting, the items are compared against the others, with salience attributed to those statements at either end of the continuum. Those statements in the middle of the Q-sort have lesser importance to the sorter, and, indeed, the zero point in the continuum signifies a neutral feeling.
Once Q-sorts are collected, the data are subjected to correlation and factor analysis that reduces the data by grouping sorts that were done similarly. Thus, similarly done sorts are grouped together to form factors. These factors are operant representations of shared points of view and can be compared and contrasted with other factors to allow the researcher the ability to understand the subjective structure of the views concerning the topic under study.
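The pipeline just described—correlate the Q-sorts, extract factors, rotate to simple structure—can be sketched in a few lines of code. The sketch below is purely illustrative and is not the software used in this study; the sorter data are invented toy numbers, and the two-group structure is built in by construction.

```python
# Minimal Q-technique sketch, using only numpy. Toy data: 8 sorters
# ranking 20 statements, drawn from two invented latent viewpoints.
import numpy as np

rng = np.random.default_rng(0)
view_a = rng.integers(-5, 6, 20)  # hypothetical viewpoint A
view_b = rng.integers(-5, 6, 20)  # hypothetical viewpoint B
sorts = np.array(
    [view_a + rng.normal(0, 1, 20) for _ in range(4)] +
    [view_b + rng.normal(0, 1, 20) for _ in range(4)]
).T  # shape: statements x sorters

# Step 1: correlate every pair of Q-sorts. In Q technique the *persons*
# (sorters) are the variables, so the result is a sorter-by-sorter matrix.
corr = np.corrcoef(sorts.T)

# Step 2: extract factors by eigendecomposition of the correlation matrix
# (i.e., PCA); keep the two largest components as unrotated loadings.
vals, vecs = np.linalg.eigh(corr)
order = np.argsort(vals)[::-1]
loadings = vecs[:, order[:2]] * np.sqrt(vals[order[:2]])

# Step 3: varimax rotation toward simple structure, so that each sorter
# loads highly on one factor and near zero on the other.
def varimax(L, iters=100):
    R = np.eye(L.shape[1])
    for _ in range(iters):
        LR = L @ R
        u, _, vt = np.linalg.svd(L.T @ (LR**3 - LR * (LR**2).mean(0)))
        R = u @ vt
    return L @ R

rotated = varimax(loadings)
# With this seeded toy data, sorters 1-4 and 5-8 should load on
# different factors, mirroring two shared points of view.
print(np.round(rotated, 2))
```

Grouping sorters by their dominant rotated loading then yields the "factors" interpreted in the results section, with each factor representing a shared point of view.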
Applying Q Methodology: Concourse, Q Sample, and P-Set
Given the aforementioned limits of the few empirical investigations of academic attitudes on assessment and the Assessment Movement, this project seeks to redress the relative neglect of stakeholder subjectivity in this research and explore the meanings and viewpoints of professional academics on this subject. To tap these subjective understandings of assessment and the Assessment Movement, Q methodology was selected (Brown, 1980; McKeown & Thomas, 2013; Stephenson, 1953). Q methodology is particularly effective in dealing with subjective evaluations of various entities as well as uncovering various viewpoints on policy issues (Brown & Maxwell, 2007; Gargan & Brown, 1993). The first step was a careful examination of the concourse of communication (Stephenson, 1978) about the Assessment Movement. As indicated previously, this included a lengthy discussion on the American Political Science Association’s (APSA) department chairs’ online discussion list. This was followed by an examination of articles on assessment and the extensive commentary that often followed, particularly in the Chronicle of Higher Education. It also included extracting statements from the commentary on assessment cited earlier in this article. In addition, the occasional papers and research reports available from the National Institute for Learning Outcomes Assessment (NILOA) were reviewed, as well as material available from the Association for the Assessment of Learning in Higher Education (AALHE), the Association of American Colleges and Universities (AAC&U), and some regional assessment organizations (e.g., NEEAN, the New England Educational Assessment Network). This was supplemented by interviews with several persons who were actively involved in the assessment process, or were known to be critical of the process. After this extensive review, the commentary began to become redundant, indicating that the concourse on this subject had been adequately covered.
The result was over three hundred statements broadly representative of the wide variety of viewpoints on assessment and the Assessment Movement. These ranged from statements about the purpose and consequences of assessment, to concerns about the validity of the instruments used in assessment, the role of faculty, and the sources of the pressure for more assessment. The concourse did not yield nor were there any suggestions from the literature of a theoretical framework for the selection of the final Q sample. There was a definite tendency for some statements to be positive and supportive of assessment, some to be negative, and some to be ambivalent or neutral. There also were some that were very descriptive of the process, while others were hostile because of the pressure put on faculty to change, or because it added more work. Accordingly, an effort was made to ensure to the extent possible a sample of statements that reflected the diversity contained in the concourse. As a result, 50 statements were eventually selected to comprise the Q sample. The entire statement sample is contained in the Appendix; the following represent some illustrative examples of the range of subjectivity displayed within the concourse:

2. The assessment movement in higher education has been driven as much, if not more, by outside political forces determined to exercise greater control over education, than it has been by persons legitimately interested in advancing the quality of learning.

4. To faculty, it usually seems to be a burdensome, pointless extra, grafted onto an already heavy workload. However, an assessment process embedded in work routines that can be implemented in a way that minimizes extra work might be more acceptable.

14. Assessment of student learning is about inquiry and discovery. It is a systematic, intellectually stimulating way of asking questions about educational goals so that learning can be improved at the level of the student, the course, the program, or the institution.

20. Look at all of the careers made (VP of Assessment, Assessment Czar, whatever) by this industry. Observe all of the vendors hawking their “assessment software” and other “assessment snake oil remedies” for the assessment “problem.” Assessment, good or bad, will never go away. There is far too much money to be made and careers to be built by it.

26. When institutions narrow their educational vision to a discrete set of skills and outcomes that can be measured at the end of an undergraduate assembly line, they often do so at the expense of their own broader vision of what they try to cultivate in students. What we measure dictates what we teach and what we do not teach.

29. Designed appropriately, a well-organized sequence of outcomes assessments can provide information vital to tracking student learning over time, and potentially increasing institutional effectiveness.
The 50 statements comprising the final Q sample were provided to respondents, who were asked to rank the statements along an 11-point opinion continuum from +5 (those most characteristic of their beliefs about assessment) to −5 (those most uncharacteristic).
The Q sample with instructions initially was sent out to all of the individuals who had participated in the original debate on the political science department chairs’ discussion list. Added to this were other professional academics who had written or commented on articles in the Chronicle of Higher Education or were members of national organizations involved in the assessment process. In addition, each author invited as participants colleagues at their respective institutions who were involved in the assessment process either in some type of official capacity or as an individual who at least had been affected in some way by a current or developing assessment process. A total of 40 persons eventually responded and, as indicated in Table 1, this P-set reflects a diverse range of respondents in terms of age, gender, faculty status, administrative position, and involvement in the assessment process.
Table 1. Backgrounds and Factor Loadings.

Note. An asterisk indicates a department chair who nonetheless considers themselves primarily faculty rather than administration. Factor loadings of .36 or above are significant at p < .01.
Results: Dueling Narratives on Outcomes Assessment in Higher Education
The data were analyzed in customary Q technique fashion: All Q-sorts were correlated and the resulting 40 × 40 correlation matrix was factor analyzed, initially by both centroid factor analysis and principal components analysis (Schmolck & Atkinson, 2012). The decision to settle upon a simple two-factor solution produced by Principal Component Analysis (PCA) and a varimax rotation was not difficult: The two factors were defined by the purely significant loadings of 32 of the 40 participants, and all of the remaining eight respondents had significant loadings on both factors. Still, the final two factors were effectively orthogonal, being correlated at −.11. Factor loadings are presented in Table 1. Loadings of .36 or above (decimal points have been removed) are significant (p < .01). The decision to retain these two factors and use PCA with a varimax rotation (over, say, centroid factor analysis and judgmental rotation) was based on the clean, readily interpretable character of the factor structure produced. The eight respondents who had significant loadings on both factors are, in Q methodology terms, “confounded” or “mixed.” This means that they share some of the sentiments of both factors. Further interviews with these subjects might well shed light on why they are “confounded,” but it is common practice in Q methodology to focus attention on those views that are shown to be shared.
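The .36 significance criterion for loadings follows the standard convention in Q methodology (e.g., Brown, 1980): the standard error of a zero-order factor loading is 1/√N, where N is the number of statements in the Q sample, and a loading exceeding 2.58 standard errors is significant at p < .01. A quick check with the present 50-statement sample:

```python
# Significance threshold for Q factor loadings, per the standard
# Q-methodology convention: SE = 1/sqrt(N), with 2.58 SEs for p < .01.
import math

n_statements = 50
se = 1 / math.sqrt(n_statements)   # standard error of a zero loading
threshold_p01 = 2.58 * se          # two-tailed p < .01 criterion
print(round(threshold_p01, 2))     # -> 0.36
```

This reproduces the .36 cutoff applied to the loadings reported in Table 1.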
Factor A: Anti-Assessment Stalwarts
Factor A comprises 17 of the 40 sorters, all faculty members. Five of the sorters also serve as department chairs, but see their roles as primarily faculty members. All but two of the 17 sorters are male, and 15 teach either in the humanities or the social sciences. Those with high factor loadings on Factor A are steadfast in their critique of the assessment process and find it a burdensome, unnecessary intrusion into their academic life. They are hostile to assessment on a number of different fronts: They see assessment as having been forced upon them by entities outside of academia; as inconsistent with the mission of higher education (and, worse yet, as having a deleterious impact on it); as incapable of really measuring the value of the teacher-student dynamic; as failing to value faculty or to hold students accountable for their part in the learning process; and as supported by scant evidence that it has led to any meaningful, positive changes in the educational enterprise. In short, Factor A is not only skeptical of assessment, but downright cynical.
These themes are seen clearly by examining those statements with which Factor A most agreed. The following statement received the highest score and points to Factor A's belief that the assessment process is futile for measuring the learning involved in higher-level thinking. (The respective factor scores for A and B are in parentheses following the statements.)
36. The problem is that what is truly learned in college often does not come to fruition until years later; long after the “assessment process” has been completed. (
Factor A is also troubled by the perception that assessment is driven by forces external to higher education, either by those wishing to apply an economic model to college teaching or by those who want to hold faculty “accountable” in various ways to satisfy other constituencies.
The following statements show a concern among Factor A sorters for the role of external demands.
38. I do have a problem when assessment becomes just another hoop we have to jump through to please an outside constituency. More and more, that is what seems to be driving outcomes assessment. (

2. The assessment movement in higher education has been driven as much, if not more, by outside political forces determined to exercise greater control over education, than it has been by persons legitimately interested in advancing the quality of learning. (5, 0)

49. Although assessment is data driven, it is being driven by those who seek to know the cost and benefit of everything, but know nothing of the values of things taught and accomplished. (

6. Those who are afraid of rubrics and assessment instruments remind me of Luddites who refuse to perceive reality. If we are to rely on our time-tested bold statements that “we are a quality institution,” without any evidence, then we deserve to be judged by outside constituencies.

35. We and the accreditation agencies are on the same side—we are both about student learning. They want us to prove that we are doing what we claim we are doing. We want them to leave us alone—but they won’t until we devise valid and reliable measures that demonstrate that learning is taking place. (−
In addition, Factor A believes that the narrow focus of measuring learning outcomes is inconsistent with the mission of higher education. For these sorters, a college education is more than the sum of its constituent parts, and it is that larger picture that eludes the grasp of the simplistic application of quantitative measures. Faculty on Factor A thus seem to share sentiments with Laurie Fendrich (2007), whose lamentation over the extent to which assessment efforts have been captured and denigrated by faceless bureaucrats appears as an opening epigraph to this article. The following statements reinforce this theme:

26. When institutions narrow their educational vision to a discrete set of skills and outcomes that can be measured at the end of an undergraduate assembly line, they often do so at the expense of their own broader vision of what they try to cultivate in students. What we measure dictates what we teach and what we do not teach. (

32. I have yet to see an assessment protocol that truly measures what we claim to be doing. Where are the measures of the ability to solve major societal problems? The measures of leadership ability? The measures of the potential to become a good citizen?
From the viewpoint of Factor A, not only does assessment miss the integrative nature of higher education; when put into practice, assessment tools hamper the educational mission in at least two ways. First, as with “No Child Left Behind,” assessment mandates will lead professors to teach concepts and ideas whose mastery by students may be easily quantifiable but will not be measures of a good education. Second, the bureaucratic infrastructure that assessment has demanded and will continue to demand will divert scarce resources from the classroom, with attendant negative consequences for student learning:

10. There is an inevitable vicious circle here where much of what we teach cannot be measured so we establish outcomes that can be measured which forces us to teach what we really do not think is what we should be teaching in the first place. (

34. The assessment movement provides an ideological smokescreen acting as a distraction from the real problems of U.S. higher education that relate to issues of inequality, cost, and the out of control expansion of the number of administrators. (

20. Look at all of the careers made (VP of Assessment, Assessment Czar, whatever) by this industry. Observe all of the vendors hawking their “assessment software” and other “assessment snake oil remedies” for the assessment “problem.” Assessment, good or bad, will never go away. There is far too much money to be made and careers to be built by it. (

47. The assessment movement offers a fundamental change of our higher education system: learning is now non-negotiable and the claims for learning are clear. This is a profound change and stands to reverse the erosion of quality in higher education. (−
Factor A also does not believe that there has been reliable evidence produced that assessment practices have resulted in improved student learning. Steven Hales's (2013) provocative statement—“How can we be sure whether outcomes assessment really works as advertised or has the accuracy of a Soviet agricultural report?” (p. 2)—surely would resonate with Factor A. Statement 28 was given a high positive score by Factor A:

28. I’ve almost given up saying this, but good grief, people, how about some evidence! Has there been a single, carefully controlled study that shows assessment produces better-educated graduates? (
Statements in the Q sample that were critical of faculty or suspicious of their motives were, not surprisingly, rejected by Factor A. At the same time, items questioning why students were not held more accountable for their part in their own education resonated positively with this perspective.
42. What happened to the respect for faculty; the belief that they actually know what they are doing? (

41. Faculty resist assessment because they resist everything. They are the most immovable objects on the planet. (−

21. We need some type of assessment because too many professors and administrators are failing to hold students accountable, but are letting them slide through college without learning much. (−

30. No assessment vehicle I have ever encountered measures the extent to which students are often unwilling to do the work of getting an education. Refining teaching methods puts the onus on faculty, so does the assessment buzzword of the day: engagement. (
Factor A simply does not see assessment as a necessary endeavor, nor as one that would be helpful to them as educators. They systematically reject a series of statements that trumpet assessment as a way for faculty to think more deeply about their courses, or as a means to measure the quality and significance of what they do:

16. OK, I admit it: I like assessment. I like it because it encourages faculty members to think more carefully about what they do, how they do it, and why they do it that way. (−

14. Assessment of student learning is about inquiry and discovery. It is a systematic, intellectually stimulating way of asking questions about educational goals so that learning can be improved at the level of the student, the course, the program, or the institution. (−

1. The idea that we ought to be exempt from assessment, from demonstrating the value of our work, smacks of privilege, as though we think everyone ought to dutifully support us without asking us to be accountable to them. (−3, 1)

9. It’s not radical doubt about the role or effectiveness of grading as a measuring tool for learning outcomes that motivates assessment. It’s just the desire to provide a second-level check on the effectiveness of such tools. (−

50. What can be so wrong about asking someone to systematically and empirically demonstrate that they actually do accomplish their stated goals and objectives? (−
Finally, Factor A strongly rejects the idea that assessment is a form of scholarship, and that faculty should be held to account for having adequately performed assessment during tenure and promotion evaluations:

39. Assessment should be treated as a form of scholarship that is closely linked to teaching and learning, and it should play a role in the tenure and promotion processes. (−
In sum, Factor A thoroughly rejects any useful purpose for assessment and contends that the process is fatally flawed, promoted by external forces that do not understand higher education or, worse yet, actively seek to dramatically change the nature of academia to no good end. Clearly, Factor A participants want little, if anything, to do with assessment; however, they exist in an environment in which demands to participate in the process are unrelenting and expanding. This contradiction must generate a great deal of conflict for these faculty members as they seek to balance the opportunities and obligations at the core of their professional lives.
Factor B: Defenders of the Faith
Factor B comprises 15 respondents from all three major areas of academia: the humanities, social sciences, and the natural sciences. In contrast to Factor A, which is made up entirely of faculty members, seven sorters on Factor B are either full-time administrators or faculty with some administrative duties who consider themselves to perform dual roles. Also, Factor B is nearly evenly split along gender lines, with defining sorts provided by eight males and seven females. Factor B defends assessment and promotes the idea that outcomes-based education can be helpful as long as the college or university controls the process. Factor B rejects the view that outside forces are controlling and dictating assessment as a means to control higher education, and also rejects the idea that colleges should be accountable to outside constituencies. The accountability that Factor B believes assessment serves is to make faculty more conscious of the pedagogical decisions they make, in terms of the impact of those decisions on student learning, program development, and, ultimately, institutional effectiveness. Further, Factor B does not believe that involvement in assessment will bring dramatic changes to higher education; rather, it will help everyone do their jobs better.
Factor B sees assessment as a helpful process in discovering how effective the institution is in preparing students. They see assessment as an organic process that professors are already engaged in, and believe that a more systematic application through well-designed assessment tools will benefit all involved. The following statements were all scored positively by Factor B: (The respective factor scores for A and B are in parentheses following the statements.)
33. The point of assessment is to ask, “What do we want our graduates as a cohort to know and be able to do by the time they graduate?” Are we getting them there? If not, where is the curriculum not serving our goals for our students and what can we do to change that? (−1,

45. It’s not like teaching and assessing are some separate, episodic events, but rather they are, or should be, ongoing, interrelated activities focused on providing guidance for improvement. (−1,

18. Executed well, assessment encourages faculty members to articulate their course and assignment goals more clearly and to develop sound rubrics. That helps them think more broadly about overarching program goals, and how to measure students’ success in reaching those goals. (−1,

23. Neither the assessment tools of the professor nor of the external assessor are perfectly reliable. Despite that, both can carry valuable information, if their assessments are well designed. (0,

29. Designed appropriately, a well-organized sequence of outcomes assessments can provide information vital to tracking student learning over time, and potentially increasing institutional effectiveness. (0,
Factor B sees assessment as a means by which faculty can think more deeply about their courses and articulate more clearly, for students and themselves, what the learning objectives for a given course might be. It is a process of reflection and careful consideration that supporters believe will lead to more meaningful and productive pedagogical decisions being made by faculty.
48. It is incumbent on academics to decide for themselves how to assess whether their students are learning, less to satisfy external calls for accountability than because it is the right thing for academics, as professionals who care about their students, to do. (+2,

14. Assessment of student learning is about inquiry and discovery. It is a systematic, intellectually stimulating way of asking questions about educational goals so that learning can be improved at the level of the student, the course, the program, or the institution. (−4,

3. If assessment accomplishes nothing else than to force faculty to sit down and discuss what it is they are trying to do and whether or not they are accomplishing that, then it can be considered a success. (−2, 3)
Factor B seems sensitive to the critique made by Factor A types that education is more than the sum of various course objectives, but, unlike Factor A, does so in a way that still embraces assessment.
8. If we want to demonstrate the degree to which a college experience is more than just a collection of gains on disparate outcomes—whether these outcomes are somehow connected or entirely independent of each other—then we have to expand our approach to include process as well as product. (−2,
One of the major concerns of Factor A—the degree to which assessment is being driven by forces outside the institutions—is not seen as an issue for Factor B. These participants reject the idea that assessment is a vehicle to bully higher education, whether perpetrated by vote-seeking politicians or by corporate types who are attempting to apply a business model that is inappropriate for higher education.
22. Outcomes assessment is not really about gathering knowledge or improving quality, but to bully higher education. From that perspective, it’s working pretty well. (+2, −

34. The assessment movement provides an ideological smokescreen acting as a distraction from the real problems of U.S. higher education that relate to issues of inequality, cost, and the out of control expansion of the number of administrators. (+3, −

19. The history of the assessment movement is that it originates with public scrutiny over the cost of higher education. In a way, we have done this to ourselves. Rather than confront the cost issue, our accreditors and professional organizations decided to demonstrate that the cost was worth it, by proving how much students learn. (+1, −

49. Although assessment is data driven, it is being driven by those who seek to know the cost and benefit of everything, but know nothing of the values of things taught and accomplished. (+3, −

15. It is a wonder anyone learned anything in the days before we had a formal metric. Assessment is done not for students, but for administrators. Not for faculty, but to faculty. Not for program improvement, but for compliance monitoring. (+1, −
However, despite Factor B’s rejection of the nefarious motives of external agencies, they do give support to the idea that assessment needs to remain in the hands of those most knowledgeable, and presumably, the institution itself.
11. Many if not all of us would agree from our own experiences that assessment, when used properly, can move an educational process forward in positive ways. But what is appropriate and what is proper, and who will decide this, are the important questions. (+1,
Given their generally positive view of assessment, it should come as no surprise that Factor B types do not resist being involved in the process. And while they see assessment as necessary and proper, they do not believe that the entire landscape of higher education will be dramatically changed by its use.
40. I’ve given up fighting this thing. I just do as minimal a job as allowed and then hope even that time is not wasted. (−1, −

44. What goes on in the classroom on a daily basis does not “count.” What “counts” is “documented” learning, that is, the product-as-educational-widget. We would do well to push back as hard as we can so that the assessment movement does not gobble up and spit out higher education. (+2, −

13. It’s easy to imagine a scenario in which the educational structure that currently produces majors and minors in content areas is simply replaced by one that produces majors and minors in some newly chosen learning outcomes. (−1, −
Finally, Factor B is fundamentally at odds with Factor A’s view that assessment has produced an overload of administrators tasked with carrying out the process. However, Factor B does share Factor A’s view that faculty are neither intransigent nor predisposed to reject assessment simply because they do not want to be held accountable.
20. Look at all of the careers made (VP of Assessment, Assessment Czar, whatever) by this industry. Observe all of the vendors hawking their “assessment software” and other “assessment snake oil remedies” for the assessment “problem.” Assessment, good or bad, will never go away. There is far too much money to be made and careers to be built by it. (+3, −

41. Faculty resist assessment because they resist everything. They are the most immovable objects on the planet. (−5, −

21. We need some type of assessment because too many professors and administrators are failing to hold students accountable, but are letting them slide through college without learning much. (−3, −
Factor B offers a strong endorsement of assessment practices from an educational vantage point, believing that there is intrinsic merit to participation in assessment and that the educational product will benefit from a careful examination of what it is that faculty do in the classroom. Factor B does not see assessment as being driven by suspect forces outside the university, and firmly believes that faculty are not resisting assessment because they reject being scrutinized or are temperamentally predisposed to resist any encroachment on their authority. This suggests that Factor B believes faculty can and will “buy in” to assessment once they are sufficiently educated as to the benefits of this systematic approach. Factor A shares the view that faculty are not resistant by nature, nor that they are allowing students to pass their courses without demonstrating competency. However, as Wendy Weiner (2009) has written, “If the faculty does not own it (assessment), it is not going to happen” (p. 28). Given the viewpoints uncovered here, Factor A types are not at all ready to “buy in” to assessment, which at least raises the possibility that the benefits seen by supporters of assessment may never be realized.
Concluding Discussion: Is “All-In” Outcomes Assessment Attainable?
Before subjecting these findings to appraisal for their significance and implications, we are obliged to issue the customary disclaimers that tend to accompany Q-methodological studies. Foremost in this respect is the reminder that P-sets (or person samples) in Q studies are typically small and non-randomly composed compared with large-sample surveys. In the case at hand, 40 respondents (though well within the usual range for Q-based inquiries) cannot be, and are not, taken as grounds for estimating the larger distribution of opinion on outcomes assessment in contemporary higher education within the United States. However, the faculty-centric nature of Factor A is worth noting. All those who loaded on Factor A are faculty members or Department Chairs who see their principal role as that of a faculty member. Clearly, not all faculty members are Factor A types, as there are faculty members with significant loadings on Factor B. Administrators in this study loaded on Factor B, which again is not to say all administrators are Factor B types, but both of these patterns are suggestive. At the same time, it is worth noting that the aforecited empirical studies on opinions toward assessment include articles based on as few as three respondents (Marrs, 2009), 12 (Loughman & Thomson, 2006), and 45 (Kinzie, 2010). More compelling in this connection is the reminder that the “generalizations” sought from Q studies are focused on discovering how stakeholders think on a given matter rather than how many of a certain demographic identity subscribe to a given view (Thomas & Baas, 1992/1993). Seen in this light, the pair of perspectives presented above are perhaps best viewed as constituting a preliminary calibration of two views in their subjective, narrative character that may or may not have been anticipated prior to undertaking this study, though the former possibility does in retrospect seem more likely than the latter.
It is of course possible (though doubtful for reasons outlined above) that a second disclaimer, pertinent to Q-based research, is in order here: Specifically, it is possible that our sampling of the concourse of subjective communicability surrounding outcomes assessment was deficient somehow in turning up items of a more nuanced or ambivalent character.
Setting this possibility aside for the moment, it still seems doubtful that faculty aligning with Factor A are unaware that Factor B exists. By the same token, administrators with heavy duties in the assessment area are likely well aware of faculty colleagues who fit the Factor A profile to a T. However, there may be some surprise that other, more nuanced factors failed to emerge. Perhaps Factor B types have comforted themselves in the belief that the strident opposition to assessment voiced in Factor A is limited to a small core of faculty who, if adequately educated on the matter, would join their ranks, as suggested by the Astin and Antonio (2012) treatment of “games” played by faculty. Such a line of thought is plausible—indeed, reasonable—if many faculty grudgingly comply with assessment demands without voicing their opposition; the limits of the present analysis notwithstanding, the data here at least suggest that a Factor A view may be fairly prevalent among teaching faculty. But if the empirical documentation of “two cultures” rivaling C. P. Snow’s (1959) famed treatise on epistemological bifurcation in the Academy occasions no great surprise when attitudes toward assessment are examined, does this automatically portend “bad news” for the dueling narratives and for the inclusive community of higher education of which they are a part? Are we forced to follow the implication of Wendy Weiner’s (2009) judgment that plentiful and productive cultures of assessment will remain pipe dreams absent adequate faculty buy-in? On the one hand, such a reading seems inescapable if the foregoing findings are shared with a breadth commensurate to their depth. For readers with deep reservations about the assessment movement and its effects on faculty morale, the discovery of Factor A would seem to remove any doubts about the depth and breadth of such concerns among one’s colleagues.
Likewise, those holding more benign views toward assessment and its advocates would likely be gratified by the appearance of Factor B and the subjective validation it affords for like-minded professionals on a matter of considerable controversy. But if feelings of subjective validation accompany the discovery of kindred spirits on a contentious issue, what are the likely effects of encountering incontrovertible evidence of equally strong believers in the counter-attitudinal viewpoint? How, in other words, would Factor A be expected to react to the operant character of Factor B and vice versa?
Given their strong differences, the reflexive response to the latter question about the ontological status of the viewpoint alternative to one’s own, it seems safe to say, is likely to be one of affective consternation for both Factor A and Factor B. At a minimum, we might hypothesize that readers having affinities with Factor B would recognize, with dislike, the Anti-Assessment Stalwarts of Factor A as the veritable embodiment of the dug-in intransigence that Factor B blames for the failure to make adequate progress in addressing and aggressively tackling assessment tasks. Similarly, it seems logical to expect that readers inclined to agree with Factor A, while finding gratification born of confirmation in the existence of fellow believers, would inevitably take strong exception to the perspective and the proponents of Factor B, despite the fact that its existence was expected. Like Factor A in the eyes of Factor B, the opposing view is likely to elicit disdain, however implicit, for its simple, undeniable failure “to get it” in comprehending, let alone appreciating, the viewpoint it denigrates.
At the risk of appearing devoid of common sense, we are not ready to endorse this particular form of conventional wisdom. Indeed, we would like to advance as a possibility for serious scholarly consideration the counter proposition that, in circumstances defined subjectively and structurally by “dueling narratives”—a pervasive condition of our polarized politics and culture in the contemporary United States—expectations in the fullest sense are not dashed by the presence of the “other” party to the duel; rather, expectations require the presence of another that is susceptible to blame and demonization in order to keep one’s own view energized and viable. And if this is indeed the case, then the aggregate consequence of these dynamics is the persistence of the stand-off at the expense of substantive change.
At one level, what we are proposing here rests on a simple-yet-preliminary reiteration of the oft-heard adage that what we see is a function of where we sit. In other words, one’s viewpoint is often traceable to one’s vantage point, and we have drawn attention to the possible applicability of this postulate to the data at hand by identifying the differing composition of Factors A and B in terms of professional roles in relation to the assessment enterprise. Factor A, it will be recalled, was defined entirely by teaching faculty (including five department chairs who elected to describe their principal duties as faculty members rather than administrators). Factor B, in contrast, contained a far higher percentage of academic administrators among its ranks, thereby lending circumstantial evidence to the notion that viewpoint and vantage point are potentially indistinguishable in settings defined by subjective polarization.
The broader, more contentious, point we are proposing here—that is, that each side locked in a dueling-narratives dispute lacks sufficient incentives to alter its behavior, including its perspective on the opposition/enemy as the principal locus of conflict—can be illustrated with a brief reference to a comparable dynamic from contemporary American politics. It is no secret that partisan polarization has reached excessive proportions at the federal level in U.S. politics: Congressional Republicans have “succeeded,” in almost every instance, in blocking legislative initiatives originating from the Obama White House since the 2010 Midterm Elections produced a significant partisan majority for the GOP in the House, followed in 2014 by the same change in party control in the Senate. Symptomatic of the ensuing partisan acrimony, then-Speaker of the House John Boehner filed a suit against the President for an alleged violation by Obama of his oath of office. At the same time, more zealous critics of Obama within Boehner’s party launched a campaign for instigating impeachment proceedings against the president. In what at first blush appears blatantly paradoxical, both parties utilize and exploit such threats as dramatically effective occasions to raise money through public donations. Neither party, in other words, senses any meaningful incentive to diminish the degree of partisan polarization despite the fact that, in the longer term, such polarization is fundamentally at odds with responsible (and responsive) governing. Something similar, we are suggesting, may be involved in the imbroglio defined by the dueling narratives that now accompany the assessment movement.
Take, for example, the likely subtext for Factor A: that the targets of assessment (faculty in this case) naturally resent the power that this system gives to the assessors (administrators on or off campus) who then use the results to decide the fates of the examinees. Such an arrangement wreaks havoc with systems of faculty governance that at least aspire to the democratic principle of peer review insofar as the same “contingencies of reinforcement” (Skinner, 1969) do not apply equally to the assessors and the assessed. The recent 40th anniversary of Watergate returned to public light a powerful analogy from American political history. In this case, the parallel event occurred in the form of the Supreme Court’s unanimous decision in United States v. Nixon. For his part, the former president argued, in essence, that he was not bound by the same set of rules that applied to all other citizens and therefore did not have to turn over to the Special Prosecutor audiotapes of discussions within the Oval Office pertaining to the attempted cover-up of the Watergate break-in and a host of other illicit activities tied to the White House or the Committee to Re-Elect the President. To be sure, there are obvious differences in details, motives, and magnitude between Watergate, with its threat to democratic governance under a Constitutional Republic, and the feelings of faculty in the face of the Assessment Movement. At the same time, however, there are subjective parallels that bear consideration when efforts are made to understand the indignant subtext that underlies and animates Factor A. To keep that sense of indignation alive, Factor A “needs its Nixon,” so to speak, and as it happens, this is conveniently marshaled in the form of Factor B.
If such speculation holds water, are we forced to conclude that a genuinely consensual, “all-in” approach to assessment is an unlikely prospect on campuses containing prominent clusters of both Factor A and Factor B types? The answer, we believe, is that “it depends.” It depends, first of all, on decently clear communication between parties to the assessment process, and in the quest to make progress on this front, the example put forward by Gargan and Brown (1993) warrants consideration and emulation in the case at hand. Titled “What is to be Done?” the Gargan-Brown project invites local policymakers, along with those having a special interest in those policies, to generate off-the-cuff nominations of problems warranting attention and, separately, proposed solutions worthy of immediate agenda-item status for policymaking officials. The process, completed in the course of a single such meeting, serves as a practical and practicable demonstration of a course that could be taken to mitigate the effects of Marrs’s (2009) concern that the actual meaning of “assessment” is so ambiguous that it maximizes the chances for confusion and irrelevant affect in discussions of what is assumed to be the same phenomenon.
Finally, the prospects for an all-in culture of assessment are elevated to the extent that its development and application across campus deviate from a one-size-fits-all, top-down approach in favor of an aggressively decentralized strategy that seeks, to the extent possible, to neutralize the subtext of Factor A that is taken as a not-so-subtle sign of professional disrespect, whether intended or not. Toward this end, we would endorse a spirit of liberal experimentation that, ironically, aims for a return to basics. If, as common sense implies, higher education is ultimately “what we make of it,” then studies designed to explore aspects of a particular institution’s invisible tapestry (Baas & Thomas, 2011; Thomas & Ribich, 2007) or undergraduates’ understanding of the liberal arts (Thomas, 1999) at colleges bearing that title become relevant as important first steps in an inevitably long-lived, open-ended, and multi-faceted attempt to better understand how what we do is understood by those we serve. In this spirit, it bears reiteration that contained within, but perhaps obscured by, the dueling-narrative nature of Factors A and B were eight participants (one fifth of our P-set) who performed Q-sorts that loaded significantly on both factors. To these individuals, the previously described self-reinforcing dynamic of the dueling narratives would quite likely fall on deaf ears.
Perhaps this is the case in all or most such situations where dueling narratives render inaudible the voices of the ambivalent, leaving the impression that what remains is accurately described as a dialogue of the deaf. It is worth remembering, however, that those with ambivalent attitudes are neither deaf nor dumb, and on occasions such as these they may have voices well worth hearing. Hearing them is, of course, no guarantee of reconciliation. At the same time, it would likely advance the date when we will be poised to answer the all-in question about outcomes assessment in a more genuinely affirmative manner.
Appendix
Statements and Factor Scores.
| Statements | Factor A | Factor B |
|---|---|---|
| 1. The idea that we ought to be exempt from assessment, from demonstrating the value of our work, smacks of privilege, as though we think everyone ought to dutifully support us without asking us to be accountable to them. | −3 | 1 |
| 2. The assessment movement in higher education has been driven as much, if not more, by outside political forces determined to exercise greater control over education, than it has been by persons legitimately interested in advancing the quality of learning. | 5 | 0 |
| 3. If assessment accomplishes nothing else than to force faculty to sit down and discuss what it is they are trying to do and whether or not they are accomplishing that, then it can be considered a success. | −2 | 3 |
| 4. To faculty, it usually seems to be a burdensome, pointless extra, grafted onto an already heavy workload. However, an assessment process embedded in work routines that can be implemented in a way that minimizes extra work might be more acceptable. | 0 | 3 |
| 5. Assessment is a way of making things explicit, thereby compelling faculty/programs to be clear on what they want to achieve and helping students learn what they need to achieve. Administrators love this idea of course, because it promises to break the stranglehold of expertise on teaching. | −2 | −1 |
| 6. Those who are afraid of rubrics and assessment instruments remind me of Luddites who refuse to perceive reality. If we are to rely on our time-tested bold statements that “we are a quality institution,” without any evidence, then we deserve to be judged by outside constituencies. | −5 | −2 |
| 7. Assessment is a form of policy review. And as with any policies, we ought to be reviewing the value of what we’re doing with some meaningful rigor. | 0 | 1 |
| 8. If we want to demonstrate the degree to which a college experience is more than just a collection of gains on disparate outcomes—whether these outcomes are somehow connected or entirely independent of each other—then we have to expand our approach to include process as well as product. | −2 | 3 |
| 9. It’s not radical doubt about the role or effectiveness of grading as a measuring tool for learning outcomes that motivates assessment. It’s just the desire to provide a second-level check on the effectiveness of such tools. | −3 | 0 |
| 10. There is an inevitable vicious circle here where much of what we teach cannot be measured, so we establish outcomes that can be measured which forces us to teach what we really do not think is what we should be teaching in the first place. | 3 | −2 |
| 11. Many if not all of us would agree from our own experiences that assessment, when used properly, can move an educational process forward in positive ways. But what is appropriate and what is proper, and who will decide this, are the important questions. | 1 | 4 |
| 12. It is striking how quickly assessment can come to be seen as part of “the management culture” rather than as a process at the heart of faculty’s work and interactions with students. | 0 | 2 |
| 13. It’s easy to imagine a scenario in which the educational structure that currently produces majors and minors in content areas is simply replaced by one that produces majors and minors in some newly chosen learning outcomes. | −1 | −3 |
| 14. Assessment of student learning is about inquiry and discovery. It is a systematic, intellectually stimulating way of asking questions about educational goals so that learning can be improved at the level of the student, the course, the program, or the institution. | −4 | 4 |
| 15. It is a wonder anyone learned anything in the days before we had a formal metric. Assessment is done not for students, but for administrators. Not for faculty, but to faculty. Not for program improvement, but for compliance monitoring. | 1 | −3 |
| 16. OK, I admit it: I like assessment. I like it because it encourages faculty members to think more carefully about what they do, how they do it, and why they do it that way. | −4 | 2 |
| 17. Because the very act of learning occurs in a state of perpetual social interaction, taking stock of the degree to which we foster a robust learning process is at least as important as taking snapshots of learning outcomes if we hope to gather information that helps us improve. | 1 | 3 |
| 18. Executed well, assessment encourages faculty members to articulate their course and assignment goals more clearly and to develop sound rubrics. That helps them think more broadly about overarching program goals, and how to measure students’ success in reaching those goals. | −1 | 5 |
| 19. The history of the assessment movement is that it originates with public scrutiny over the cost of higher education. In a way, we have done this to ourselves. Rather than confront the cost issue, our accreditors and professional organizations decided to demonstrate that the cost was worth it, by proving how much students learn. | 1 | −4 |
| 20. Look at all of the careers made (VP of Assessment, Assessment Czar, whatever) by this industry. Observe all of the vendors hawking their “assessment software” and other “assessment snake oil remedies” for the assessment “problem.” Assessment, good or bad, will never go away. There is far too much money to be made and careers to be built by it. | 3 | −3 |
| 21. We need some type of assessment because too many professors and administrators are failing to hold students accountable, but are letting them slide through college without learning much. | −3 | −4 |
| 22. Outcomes assessment is not really about gathering knowledge or improving quality, but about bullying higher education. From that perspective, it’s working pretty well. | 2 | −5 |
| 23. Neither the assessment tools of the professor nor of the external assessor are perfectly reliable. Despite that, both can carry valuable information, if their assessments are well designed. | 0 | 4 |
| 24. We understand from other areas that the assumption that the simple presence of data invariably leads to improved outcomes and performance, and that those who are presented information under data-driven improvement schemes will know how best to make sense of it and transform their practice, is simply not true. | 2 | −1 |
| 25. We live in an age when parents and students are not content to accept our assurances that we are doing a good job educating students. This expectation is especially pertinent because of the increasingly high cost of education. | 0 | 1 |
| 26. When institutions narrow their educational vision to a discrete set of skills and outcomes that can be measured at the end of an undergraduate assembly line, they often do so at the expense of their own broader vision of what they try to cultivate in students. What we measure dictates what we teach and what we do not teach. | 4 | 0 |
| 27. The view that the status quo is the only correct model of teaching and learning is the kind of hubris that makes higher education appear haughty and conceited, rather than as a vehicle for growth and opportunity. | −2 | −2 |
| 28. I’ve almost given up saying this, but good grief, people, how about some evidence! Has there been a single, carefully controlled study that shows assessment produces better-educated graduates? | 4 | −1 |
| 29. Designed appropriately, a well-organized sequence of outcomes assessments can provide information vital to tracking student learning over time, and potentially increasing institutional effectiveness. | 0 | 3 |
| 30. No assessment vehicle I have ever encountered measures the extent to which students are often unwilling to do the work of getting an education. Refining teaching methods puts the onus on faculty, so does the assessment buzzword of the day: engagement. | 4 | −2 |
| 31. We do not need mandates or government pressure to explore ways to improve teaching and learning; our institution is interested in getting it right and has embraced the use of data to diagnose what is and is not working and then changing our practices. | −2 | −1 |
| 32. I have yet to see an assessment protocol that truly measures what we claim to be doing. Where are the measures of the ability to solve major societal problems? The measures of leadership ability? The measures of the potential to become a good citizen? | 4 | 0 |
| 33. The point of assessment is to ask, “What do we want our graduates as a cohort to know and be able to do by the time they graduate?” Are we getting them there? If not, where is the curriculum not serving our goals for our students and what can we do to change that? | −1 | 5 |
| 34. The assessment movement provides an ideological smokescreen acting as a distraction from the real problems of U.S. higher education that relate to issues of inequality, cost, and the out of control expansion of the number of administrators. | 3 | −4 |
| 35. We and the accreditation agencies are on the same side—we are both about student learning. They want us to prove that we are doing what we claim we are doing. We want them to leave us alone—but they won’t until we devise valid and reliable measures that demonstrate that learning is taking place. | −3 | 0 |
| 36. The problem is that what is truly learned in college often does not come to fruition until years later, long after the “assessment process” has been completed. | 5 | 2 |
| 37. I do not understand all the resistance to assessment. It’s just a bit more systematic than what we have been doing in the past. That cannot be all bad. | −4 | 1 |
| 38. I do have a problem when assessment becomes just another hoop we have to jump through to please an outside constituency. More and more, that is what seems to be driving outcomes assessment. | 5 | 0 |
| 39. Assessment should be treated as a form of scholarship that is closely linked to teaching and learning, and it should play a role in the tenure and promotion processes. | −4 | 2 |
| 40. I’ve given up fighting this thing. I just do as minimal a job as allowed and then hope even that time is not wasted. | −1 | −5 |
| 41. Faculty resist assessment because they resist everything. They are the most immovable objects on the planet. | −5 | −5 |
| 42. What happened to the respect for faculty; the belief that they actually know what they are doing? | 3 | −3 |
| 43. Have we not really skipped a step in this process? Is everyone really on the same page when it comes to the purpose of education? And do we not need to resolve this first before we attack assessment? | 1 | −1 |
| 44. What goes on in the classroom on a daily basis does not “count.” What “counts” is “documented” learning, that is, the product-as-educational-widget. We would do well to push back as hard as we can so that the assessment movement does not gobble up and spit out higher education. | 2 | −4 |
| 45. It’s not like teaching and assessing are some separate, episodic events, but rather they are, or should be, ongoing, interrelated activities focused on providing guidance for improvement. | −1 | 5 |
| 46. No Child Left Behind has shown us the effectiveness of assessment taken away from the school teachers. We now have a generation that is good at taking standardized tests, but cannot do basic arithmetic or write a coherent sentence. | 2 | 1 |
| 47. The assessment movement offers a fundamental change of our higher education system: learning is now non-negotiable and the claims for learning are clear. This is a profound change and stands to reverse the erosion of quality in higher education. | −5 | −2 |
| 48. It is incumbent on academics to decide for themselves how to assess whether their students are learning, less to satisfy external calls for accountability than because it is the right thing for academics, as professionals who care about their students, to do. | 2 | 4 |
| 49. Although assessment is data driven, it is being driven by those who seek to know the cost and benefit of everything, but know nothing of the values of things taught and accomplished. | 3 | −3 |
| 50. What can be so wrong about asking someone to systematically and empirically demonstrate that they actually do accomplish their stated goals and objectives? | −3 | 2 |
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research and/or authorship of this article.
