Interview with Thomas Schwandt
Mel: Tom, to begin, could you describe your journey into social inquiry and evaluation? For some people it's a relatively straight line. For others, it zigs and zags like a pinball. Could you talk a bit about that part of your history?
Tom: I studied with Egon Guba, who had these dual interests: one in qualitative inquiry, and then of course all the work he did in evaluation. So, in some sense I inherited that kind of framework. And I would say most of my work up until about, I don't know, the early 2000s, was primarily focused on issues in qualitative inquiry. I was motivated there largely by questions that had to do with putting qualitative inquiry on a firmer epistemological foundation. Around 2000 or so, more of my work shifted, with some intention, to evaluation. I did stuff in the late 1990s on evaluation issues, and the resulting book came out in 2002 (Schwandt, 2002); then came the 2015 book on evaluation foundations (Schwandt, 2015). It says, here are the issues you ought to think about. It doesn't necessarily take any position on those issues, but it lays them out. If you're going to do evaluation, here's what you need to think about. Then the latest one with Emily [Gates] (Schwandt & Gates, 2021). Those were all about evaluation, and that's where my interest has been for the last 20-some years. I stopped writing about qualitative inquiry because I thought it was getting pretty boring, and I also thought that it was getting off the rails. It was becoming more political than I felt comfortable with. I expressed that concern in a paper called Opposition Redirected (Schwandt, 2006), basically just to argue that it's time to turn this sort of political machine about the positioning of qualitative inquiry toward doing something good.
Mel: You started your response at the time you were working with Guba. Any highlights about the pathway that led you there, that began this journey you just described to qualitative and evaluation?
Tom: Well, I wrote two or three tributes to Guba when he passed away (Schwandt, 2008a, 2008b, 2013). Before I started working with Guba, I was interested initially in doing a PhD in philosophy, particularly philosophy of education. I sat down with a number of those professors and they said, "If you want a job, don't do this." So even though I took a bunch of courses in philosophy of social science, I decided to do something different. And Indiana had a program called Inquiry Methodology. I took all kinds of courses in psychometrics, design of experiments, statistics, field research, and all kinds of stuff. I was preparing myself to be a general methodologist in social science. When I graduated, there were no jobs. So that's when I went to Arthur Andersen and tried to use my skills there because they were developing a new unit in evaluation.
Mostly it was educational training evaluation, but they allowed me to do all kinds of other interesting things. I read a lot about auditing, philosophy of auditing and accounting standards, and I learned a great deal about evaluation in the corporate world. Egon and I used to stay in touch a lot, periodically with letters. It was when people actually wrote letters to each other. I distinctly remember a line when he wrote to me, quote, "You sound like you're extremely successful, but totally unsatisfied." And that pretty much summed up my experience at Andersen. If I had stayed there, I would've retired by 50. I'd have been a partner. It was probably not the smartest move I ever made financially, but intellectually and personally it was. So, I left there and bounced around for a while at the medical school at Illinois Chicago, taught a couple of courses in evaluation, taught a research course in internal medicine, clinical research. Then the offer came from Indiana to join the faculty there and become essentially Egon's replacement. Bless his soul, he kept his nose out of that search entirely. I was grateful for that. I was there I think for about 10 years. Then Illinois came knocking at the door for both me and Jennifer [Greene], and she and I discussed both of us coming. I think her situation at Cornell then wasn't ideal. I was not really happy at Indiana with the way things were going. Jennifer and I both ended up going to Illinois, which was nice. We spent 17 years or so there together, something like that.
Robin: Tom, because many of the people who read these interviews may not be familiar with your past work, I think it's helpful for readers to set the trajectory of your career within some sort of historical space, to have some landmarks in time for things like when your interest in philosophy of science developed, and the other topics that you studied. So, it would be great if you could reflect a bit on what was happening in the U.S., in the world that could locate you in historical space as your career started.
Tom: I had a couple of courses in philosophy as an undergraduate, but that wasn't my major. I had a major in English literature. But I went to seminary. And in seminary I studied two things that were particularly important to me coming out of that. One is I studied something called process philosophy and process theology, which is a way of looking, from a theological point of view, at how we make sense of the world and God. But it's also about how we make sense of our way of navigating the world as individuals. I also read a lot of stuff about hermeneutics. Now, of course, most of that at that time was biblical hermeneutics, but it's generally the same ideas. I began to become very interested in how a hermeneutical perspective would influence both qualitative research as well as evaluation.
My early experience at Andersen gave me a not very happy view of evaluation as anything other than the application of certain methods to get certain answers, as opposed to "where was the discussion of values and why are we doing this in the first place." So those are two influences right there. And then the third one would be Egon and the work that he was doing at the time. I was a student of his right when he was doing all that stuff [like the book] Naturalistic Inquiry (Lincoln & Guba, 1985), [which] came out while I was a student of his. Our conversations about that were influential. While I was there with Egon, I was taking courses in philosophy of social science, in philosophy of science, not a lot, just three or four in graduate school. While he was talking about [the early twentieth-century philosophy of science, logical] positivism and so on and so forth, I was studying that stuff and I thought, gee, there are some things that don't jibe here. So those are three big influences, historically, on where my ideas came from.
I don't know if this helps, but I think it's important for you and for readers of this to understand how I see my work. I would say something like this: Begin from the assumption that the field of evaluation theory and practice is highly pluralistic. Now, whether it should be that way, normatively, is another question. But let's just stay at the descriptive level: If you look at the practice of evaluation, it's highly pluralistic. It's not at all uniform in its aims and in its self-understanding, if you compare it to auditing or ministry or physical medicine. So, I believe that a pluralistic environment like that requires professionals, anybody, to engage the points of view of others in a way that admits your own point of view could be wrong. The fancy term for that is engaged fallibilistic pluralism.
It also means that we need to be more than just aware of the fact that there is this variety. We need to engage it. It's not just a question of saying, "Oh, there are some people who favor experimental methods in evaluation, some who prefer stakeholder approaches, some who think Indigenous evaluation approaches are the way to go. And so on and so forth." It's more than that. That's just being aware of the pluralism. I'm talking also about engaging the issues that inform this pluralism—that becomes a responsibility of someone who calls themselves a professional evaluator. So, my first concern is, what does it mean to be a professional evaluator? That means we have to develop reasoned responses to a whole set of issues. What does evaluation mean? Who is it for? What do you need to know to be a professional evaluator? What do I need to know about words like value, fact, evidence, ethics, the politics of both, the stance of the evaluator, and the political environment in which evaluation unfolds? These are key issues that you have to somehow take a stance on to call yourself a professional evaluator. And that's the kind of thing I tried to do in the book called Evaluation Foundations Revisited (Schwandt, 2015)—here are the issues that we need to engage. I didn't necessarily tell you how to engage them and what you ought to do, but this is the landscape of issues that you have to grapple with to be considered a professional evaluator. In some ways, that is sort of the first thing about how I'd characterize my work.
Second, I offered how I think about that. I wrote papers and another book or two about what I think it means to be a professional evaluator…about ethics, values, evidence, the authority and professional responsibility of an evaluator and so on and so forth. I hope to have done that in a way that was engaging the views of others. The book that Emily and I wrote (Schwandt & Gates, 2021) clearly is a book about how I or we think about some of those issues. You could categorize papers that I've written as answers, my own answers to some of those questions. When I’ve written about civic science or civic studies as a model, and when I've played around with critical hermeneutics as a way of thinking, those are ways I'm trying to wrestle with core evaluation issues and what I think about them.
Mel: I have a follow-up. Tom, I don't know if you ever had a conversation with Ernie House in which he talked about his background, especially the very different kinds of life experiences that he and his sister had. He says this spurred the interest in social justice that's been so important in his writing. He’ll also sometimes do that kind of analysis about others, describing how he sees some other notable evaluation figures’ backgrounds being reflected in their writing. Is there any kind of autobiographical version of that from Thomas Schwandt?
Tom: I think that if there is something in my biography that influenced me, it was probably the introduction of a kind of moral discourse issue that arose as I thought about process theology. I kept thinking over time about: Where is the discussion of what it is right to do in evaluation? And of course, that immediately leads to questions like, who does evaluation serve? What do we do about values? What about your values? What about the values of these people? What about the values embedded in the very idea of what we're evaluating? I don't think I would've gone further in examining that if I didn't get inspired through the study of hermeneutics, because one of the big issues in philosophical hermeneutics when I was reading it was the effort to turn everyday practice into a technical exercise so you could look up evidence of how to parent, evidence on how to be a good leader, evidence on how to be a good father. One of Gadamer's famous arguments was that that has nothing to do with it. These are practical exercises. These are exercises that involve your moral perspectives and your political perspectives and so on and so forth. And so yeah, those things came together early on in my career and sorting them out was something that I was interested in.
Mel: I'm going to see if we can get you to take one step backwards from that. What led you to be in the seminary and get juiced by reading about hermeneutics and moral philosophy?
Tom: The war in Vietnam. My father had died a short time before I got the notice that I was being drafted. My number was really [low] in the lottery.
So, my mother was there and, at that time, my 13-or-so-year-old brother. So, I went to my draft board, and I said, "well, I need an exemption because my father just died." And they said, "forget it. There's never been an exemption for something like that in the history of our draft board." Nice. Very reasonable guys. So, I went to downtown Chicago for my physical, and I didn't know what to do. This was 1968 or '69, I don't remember, but it was right before Nixon began illegally bombing Cambodia. So, I showed up for my physical and found guys from my high school, in my same birth month, who were there trying to get out of going to Vietnam.
I went through my physical. I had bad knees and that didn't matter, a knee operation and that didn't matter. So, I'm sitting in the room at the end, we're all done and all of us are sitting there in our underwear. We have to fill out this questionnaire, and one of the questions was "Are you now, or have you ever been, a member of the following organizations?" It listed about 25 radical organizations, from something as mild as SDS, Students for a Democratic Society, to the Communist Party. And I checked every one of them. Then you go through a bunch of other questions asking if you're mentally sound of mind and all that kind of stuff. You get to the bottom and you're supposed to sign. I didn't sign. Then you pass all these papers in. Everybody's filing out of the room, and they go, "Schwandt, stop. Come back here. You forgot to sign this." I said, "No, I didn't forget to sign it. I'm not signing it." The military fellow in charge said, "well, you have to sign it." I said, "I don't have to do anything. I'm a civilian." Eventually they let me leave.
I had no idea what to do. Later, I was talking to a friend of mine who asked, "did you ever talk to the American Friends Service Committee?" And I said, "well, that's a good idea." So, I went, and they made an appeal to my draft board, which was immediately rejected. And then one of my friends from college said, "well, I can see you really only have one alternative, and that's to enroll in a seminary to get a seminary deferment." I said, "oh, man." He was going to seminary, but he really wanted to do it. I had been raised in a family of Lutherans, and so we went to church. My dad was a church treasurer and sang in the choir and everything. It wasn't like I was unfamiliar. So, I wrote an application and I got accepted, and that's how I ended up there. I was a draft dodger.
Then I lived in Chicago. I went to what was called the Chicago Cluster of Theological Schools. I took courses at the University of Chicago Divinity School, Catholic Theological Union, Lutheran School of Theology at Chicago, and McCormick Seminary.
It was a heady time. I was not the only one in seminary who was dodging the draft. But this was the time when seminaries, so to speak, began to develop a social conscience: the women's liberation movement and so on. We were all assigned as first-year seminarians to some social effort. I worked in a church that was concerned about runaway kids as well as kids on drugs. I worked there and learned a lot about that. So, you talk about social conscience, social justice, that was one of my first introductions to that.
Val: Tom, you've interacted with an impressive cast of characters in evaluation in addition to Guba. What other early evaluation luminaries had an important role in shaping your ideas and your career?
Tom: Well, one thing to understand is that I never just read stuff in evaluation. I'm a multidisciplinary kind of guy. I had lots of conversations, and I regret having thrown away all my copies of all these letters with people like John K. Smith, Martyn Hammersley, and Cleo Cherryholmes. I intersected with and discussed a lot of things with Wilf Carr in the United Kingdom, action researchers in Norway. So, the way I think about evaluation was influenced by all those things. Of course, I read [Michael] Scriven's stuff, I read Ernie's [House] stuff, and it raises questions. And as I believe I said when I got the Lazarsfeld Award,
we may not stand on the shoulders of the giants, but we certainly rub shoulders with them and as a result are influenced by their ideas. I would say those two [Scriven and House] particularly. And then of course, I got to know Bob Stake much better when we went to Illinois. Bob was retired, but we did a couple of things together.
But then I met Peter [Dahler-Larsen]. I think we first met at an EES [European Evaluation Society] conference in Lausanne [Switzerland]. I think that's where we first just started having a drink and chatting. I ended up shortly thereafter spending six months or so in Denmark on sabbatical. We did a thing together on evaluation, and then we just kept talking. Because he's trained in a kind of continental tradition [in philosophy], the stuff that I was reading and thinking about was not unfamiliar to him. So, for example, we could talk about the Nicomachean Ethics, we could talk about Habermas, we could talk about the difference between critical theory in the United States and critical theory as Habermas understood it. We became good intellectual friends, and then of course personal friends. I'd stay in his house and meet his children.
And so that led to us thinking about our mutual interests. I had never really focused a lot on what I would call the social institutional aspect of evaluation, which was Peter's bailiwick. So, we had a way of talking about my interest in practice and his interest in the sort of social institutional framework, which he laid out probably the best in his book The Evaluation Society (Dahler-Larsen, 2012). In addition to Peter, I got to meet all kinds of people in Europe that were involved in evaluation.
Emily: Can I ask a quick follow up? Can you say a little bit about what the heart of the connection is for you and Peter—why did you care about evaluation? This is a question I've always wanted to know the answer to.
Tom: I think I was stimulated largely by what I saw when I was working at Andersen. It may be a function of the way Andersen worked: evaluation as a largely tick-box technical activity. And I thought, how could that be? Where are we discussing values? So, I wrote that paper, Recapturing Moral Discourse in Evaluation (Schwandt, 1989a). The book Evaluation Foundations Revisited (Schwandt, 2015) was my statement saying, I'm tired of reading about model number 31 in evaluation. How many more models, approaches, or frameworks do we need? There is a set of issues defining our practice that we have to study: reasoning, evidence and argument, the relationship between politics and evaluation, the idea of use, and so on and so forth.
All that was influenced by conversations with colleagues. I was part of a group at the Carnegie Foundation for the Advancement of Teaching. We received a grant when I was department chair to look at the issue of the preparation of professionals in the field of educational psychology, particularly practicing psychologists. I got to know Bill Sullivan very well. Bill wrote a number of very interesting books on professionalism from which I learned a lot. The subtitle of the [Evaluation Foundations Revisited] book (2015), Cultivating a Life of the Mind for Practice, comes from Bill. It's these intersections between people that often fueled ideas.
I think the other thing that was true of my work, and Mel [Mark] could probably be the best one to testify whether this is a true statement or not, is that I was never an ideological proponent of one view or the other. That's one of the reasons I left Indiana, because I was called the qualitative guy. I remember Mel and I were on a number of panels where I would say, "yeah, I get the whole point about the importance of genuine experiments as well as quasi-experiments." I mean, if you're investing $8 billion in a vaccine, it better work. And I'm still like that. It was also motivation for saying that there's a place for different ways of understanding how we are in the world and what we're trying to do.
In my classes I would say that there's no way you can get through the day without, on the one hand, being able to enumerate something or use mathematics, like, gee, are paper towels cheaper if I buy 24? At the same time, you also have to have the capacity to judge, if you accidentally bump into a stranger in the grocery store, whether it's appropriate to say, excuse me, or whether you should just ignore it. In other words, you need a way to interpret your navigational capacities in the world. That is not a quantitative skill. It's an interpretive one. And there's no way you can function without both of those.
Robin: You were talking about colleagues, and you've had such an impressive list of colleagues with whom you formed an intellectual community and who've influenced your work, Peter obviously being one of those people. But I remember when Jennifer retired, one of the things you talked about was how important it was that you and she came together. So, I'm wondering if you would talk a little bit about Jennifer and your relationship and her influence on your thinking and work, as well as any others who we might have not discussed already.
Tom: Well, Jennifer and I were friends for a long time before the move to Illinois. She and I actually applied for the same position at Cornell, but she was done and had a year or two of work experience and I was just finishing. But I had such a great time during the interview there with Charles [McClintock] and Bill [Trochim] and her, and we just stayed friends through this whole thing. And then I always got invited to Cornell. Cornell would have this thing for its students every year at AEA [the American Evaluation Association's annual conference], and I got invited to that and I think that's how I got to know Val, and that's how I got to know Leslie Goodyear and all that whole crew that went through their PhD program.
Anyway, Jennifer and I were friends all along. Way back when Egon organized some kind of alternative conference, he invited a bunch of us to give papers, and she gave a paper on qualitative stuff. We started talking about that. I've read almost everything she has ever written, and we had lots of interesting discussions about mixed methods. I have a very different view of that than she does. We also had lots of discussions about democratic evaluation. Of course, Ernie [House] is part of that conversation. So is Tineke Abma in the Netherlands. About Jennifer and me, she had a great statement when I retired, a year or two before her. I was lamenting the demise of our group there in evaluation and she said, look, we had a great run and it's over. And I thought, okay, that's smart.
And Stafford [Hood, founder of CREA, the Center for Culturally Responsive Evaluation and Assessment] and I go way back. We overlapped when I was lecturing at Northern Illinois, and then he had this idea of bringing what he had started in Arizona to Illinois, but to make it a Center. So we started talking about it and, because we knew each other personally, we knew how to work together. He started that [Center] and I supported that … [and] helped with early conferences …. If we had any disagreement, [it] was, I kept saying, we need to expand beyond education. That culturally responsive stuff is not just in education. It's in community psychology, nursing. The bigger the coalition we can form here, the greater our chances of getting some outside money. He was more content to stay within the educational arena … The danger was that it was just preaching to the choir of like-minded people getting together. [But] they needed a place where like-minded people could get together. That was an important thing.
Emily: Tom, something else I wonder about is the formation of the American Evaluation Association. There was discussion about whether it should be called such a thing. Also, the idea of professional associations seems related to the heart of a topic you’ve discussed, what it means to be a professional. Any backdrop on the formation of the field, so to speak, in terms of professional associations?
Tom: I was a member of both ERS [the Evaluation Research Society] and ENet [the Evaluation Network; the two predecessors of AEA]. I don't recall taking a stance on the issue of merger. Maybe I did, but I forgot. I believe it was a little contentious at the beginning. There were some rough moments, including [the consecutive AEA Presidential Addresses of] Yvonna Lincoln (Lincoln, 1990) and Lee Sechrest (Sechrest, 1992). Remember that? That was nasty. I wrote a paper about that (Schwandt, 1989b). This was in the middle of the quantitative-qualitative brouhaha. Tom Cook [and Chip Reichardt] edited a kind of conciliatory book about it (Cook & Reichardt, 1979). And then Jennifer [Greene] and Gary Henry took on co-editorship of NDE [New Directions for Evaluation] to try to show the ecumenical view. I believe Sharon Rallis [and Chip Reichardt were later] involved in trying to say something interesting about that (Reichardt & Rallis, 1994). That was part of what was happening in the seventies or, no, early eighties.
Emily: Anybody I'm teaching or engaging with who is generationally younger than me assumes that AEA has almost always been around. So, there's not a sense of the personal and political history of starting an association. I'm very curious about the global context for your work, your ideas, your arguments, because evaluation as an organized, compensated activity is really at different places geographically. That's another context we haven't talked about, which is really salient in a different way than in the U.S. right now.
Tom: Well, I can say something about each of those issues. The context: those of us who were students of Stake and [Donald] Campbell and Guba and so on, and that includes Mel and me and Jennifer, we were in a heady environment because there was a lot going on … early on AEA was a place where we talked about ideas. We really did. I mean questions like, what is evaluation and what is it we ought to be doing here. It was a heady environment with an academic focus. It simply was. And I gave a number of those speeches, and so did Mel. These were lectures on issues. So that's part of the context in which some of us, who were sort of the first generation of students of those, I don't know if you'd call them founders, but certainly something like that, were developing our work. I'm not so sure that there was a strong generation that followed us, but I'll leave it to others to make that judgment.
The professional international context is a different thing to me because that to me was a one-on-one issue. I got invited to do some work in Norway. I got invited to do some work in Denmark. I got invited to give some talks on action research in England through a conference that Elliot [Stern] organized. Those just grew as individual professional contacts. And when we talked about evaluation in Europe, there wasn't anything like the obsession it seems to me there was in the U.S. with, what's the difference between research and evaluation. They weren't obsessed with that, nor were they obsessed with the idea of forming an association. Why would we do that? We engage in critical research all the time, which is evaluation.
Robin: I want to return to some of your practice experiences and the synergy between those experiences and the development of your ideas over time. You mentioned Arthur Andersen and how that pushed you to think about evaluation as not being a tick-box activity. I'm wondering about other experiences.
Tom: I did some work evaluating a nursing curriculum in a medical setting, and a lot of contract work for a while with what was the North Central Regional Educational Laboratory, which is no longer in existence. Jerry Nowakowski was the director of that lab and we are good friends. We did a huge three-year case study evaluation in Wisconsin. It was a big deal and included me ending up doing videos for Wisconsin Public Television on the issues involved. The issues had to do with, and this was very early on, the introduction of technology for distance learning in reading, and the fact that the rural areas of Wisconsin were too far apart for them to hire reading specialists for all those districts. So how could they use some technology to bring that specialty to the schools, and how was that working? That was a very big project, and I did a couple of smaller ones for them also.
Then for almost nine years, I consulted with the United Nations Development Program [UNDP] on their external international evaluation panel. While I didn't do evaluations there, I read a bazillion of them and advised them on issues related to how they were doing their work. It was based on real simple issues: "I don't think you can talk like this because you don't have any evidence for that." Causality issues, issues having to do with how much context you have to provide in a given country to understand what's happening there. When you talk about evaluation and the effect of the small amount of money that UNDP put into issues like improving civic education or improving political discourse, I did a lot of advising there. I did some advising for the Rockefeller Foundation too. Jennifer and I had a lot of conversations about this as well. I think she felt like, and you'd have to ask her this, but I think she felt that I wasn't enough of a practitioner. And I said, "Well, to me, you're assuming a kind of distinction between theory and practice that I don't think is viable."
Robin: Can you say more about that?
Tom: Yeah, well you can read about that too, because I gave the inaugural lecture at the Eleanor Chelimsky forum at the [Eastern Evaluation Research Society], EERS (Schwandt, 2014). In that talk I explained how I thought about the distinction between theory and practice and how there are ways of thinking about it that are totally bankrupt and ways of thinking about it that are illuminative … It's much easier to think of the theory-practice relationship in terms of the relationship between case law and cases. Case law is a malleable, developing body of legal knowledge. It's not fixed in stone. The reason it's not fixed in stone is because certain cases help you think about that body of case law. And that in turn changes the case law. Over time it can change the way you interpret the cases and vice versa. It's this kind of, heaven help us, hermeneutic circle that is involved in that set of circumstances. And that goes back to why early on I tried to do all this stuff with practical hermeneutics, which fell on deaf ears because it is a difficult thing to understand and because nobody can spell hermeneutics. So that's part of the major problem there …
When we finally got around to the parts of our book (Schwandt & Gates, 2021), I'm saying, look, I’ve been looking for ways in which we can realize this relationship between the empirical stuff that is characteristic of evaluation practice and the moral-political discourse stuff. How do we model that? And I ended up—if you agree [Emily], and I guess you agreed because you let me publish it—I ended up saying, well, here's a model called civic science, and it has to do with saying what are the circumstances that we face? What do we know about that situation? In other words, what are the facts? What values are involved in that situation? And where do we go from here? What's the strategy? So, I thought, well, maybe with that framework we can talk about the kinds of issues that concern me.
But this is a long-standing issue in one aspect of social science. Martin Rein has written about this. Lindblom too. And I read all those guys too. A guy I learned a lot from about this was, again, a totally serendipitous thing. I got asked to be on the National Research Council committee that looked at the relationship between evidence and public policy. The guy who chaired that committee, his name is Ken Prewitt, at Columbia. He has been retired for a long time now but was Director of the Census under Clinton. He was the vice president or president of the Rockefeller Foundation. Brilliant guy, really interesting to talk to. And we spent a lot of time talking about the place of values in thinking about use of evidence in public policymaking. It took us a couple of months of meetings to even get the guys, and they were guys—there was one woman—to understand that we were talking about use, not method. They wanted to discuss the value of quasi-experimentation and experimentation. And I kept saying, guys, the committee is about use. They knew nothing. I don't think a single one of them ever read anything by Carol Weiss. It took us a couple of months to turn the ship to begin talking about use. Anyway, he [Ken Prewitt] was a big influence on me.
Mel: I recalled a criticism while you were talking about drawing on hermeneutics, about civic engagement, about cultivating the life of the mind, and so on. The criticism was that your work seemed to apply more clearly to evaluation that informs practitioners, such as in an educational setting, rather than policy makers, those people who are choosing to fund or not fund a new program and so on. Could you comment on that and talk about the fit of your work in that latter context as well?
Tom: That's a great question, and the assumption behind the question is true. I always began from the assumption that evaluation is a local phenomenon, meaning it's embedded in the kind of practices that we as teachers, social workers, clinicians engage in on a daily basis. And if evaluation isn't helping those people better understand how to evaluate their practices, then we're not doing so good. That doesn't mean I'm not concerned about the place of evaluation in large scale institutional reform, but that begins in the norms and practices and assumptions of the institutions where these practices exist.
Alright, I can give one example. I was in a meeting around the Sustainable Development Goals with policymakers, and you could just see that how they thought about evidence and values and facts isn't something they invented. They were echoing the norms and practices of the institution in which they were located. So, if you wanted to get at that, you’ve got to start talking to them about where do you think these things came from? Why do you think that? I've always been that way, it seems to me. And it was my conversation with Peter and his work. He's always been concerned about how institutional context, contexts of policymaking, in his case, a lot of testing and examinations in Denmark, influenced the way we think about what's going on there.
So, you're right about that. I never really tackled the question of how do we improve evaluation at the higher policy level, in part because, from my own point of view, policy is an incredibly blunt instrument. I mean, just look at what they're doing to SNAP [the Supplemental Nutrition Assistance Program] and TANF [Temporary Assistance for Needy Families].
We talked about that at length when we were writing the book about using evidence in public policy (National Research Council, 2012) … you can't channel the evidence just right, because that policy is going to be multiply interpreted, and so are the rules that are written for its implementation, and so on and so forth. It's a very complicated process, interpreted in different ways.
Mel: I'd like to turn for a minute to the 2018 EES meeting where you gave a plenary on post-normal evaluation. It was published the next year in Evaluation (Schwandt, 2019). I wonder if you could clarify what you mean by post-normal evaluation and anything you would add, expand, or modify, perhaps with an intensifier for the current scene, which seems even more post-normal, or post-post-normal as the term might be.
Tom: Well, keep in mind that you give speeches for particular reasons to particular audiences and that the EES audiences are largely composed of people who work in the state apparatus, so to speak, and consultants. There's not a huge number of academics who go to that conference. So, part of what I had in mind was to try to shake things up a little bit. I think there no doubt has been a long tendency, and you can read about it everywhere, to think of “this is what it means to do good social science work,” including evaluation. But it struck me as I was reading a number of other things about reflexivity and changes in institutions—and this was a little bit early before AI [artificial intelligence], and now all of this is true in some sense—that we weren't positioned … to deal with these developing issues in any significant way.
I think that AI—and Emily probably can talk to this better than I can—has come home to roost. If you were to inventory the number of M&E [monitoring and evaluation] specialists that the UNDP hires, they're out of a job because AI can do the work much faster. All they need to do is collect the data on their phone or laptop, right? Writing these country-level reports: once you write a program about what AI ought to be looking at in terms of background characteristics, context, what have you, and you add in these data and you ask the right questions, AI will write the report.
So, for a lot of these M&E people, I was trying to say something's going on here that I don't think we're well positioned for. And we're not nimble. I remember concluding that talk by saying, how nimble can the profession be to adapt to some of these issues? And given my background in working with UNDP for all those years, I thought, geez, we're screwed. That bureaucracy is not nimble at all. If anything, those big apparatuses that make up international evaluation, and the cooperation between the Independent Evaluation Office at UNDP and the World Bank's Independent Evaluation Group on capacity building or whatever they were talking about—it's a humongous, almost unmovable apparatus. So that's what I was trying to do there. It got some traction.
Emily: How do you understand the book that we [wrote] as offering guidance to practitioners?
Tom: Well, I think it's kind of two books in one, and it kind of reflects the way each of us thinks and works. I mean, if left to my own devices, that would've been a highly philosophical book, and the stuff wouldn't be there about, here are ways of thinking differently about method, here are ways in which this stuff can be incorporated into the way you normally work. So we wrote a book that tried to blend those two things together. I thought the review that Dollinger (2023) gave of the book was spot on in that we were trying to find a new way of thinking about and doing evaluation that took into consideration not only the value question, but the systems question.
Emily: After the book was out, Tom, we had a conversation about what makes a good book revelatory. I don't know if you want to comment a little bit.
Tom: Yeah, I mean I used to lecture about that concept. That's a very hermeneutic concept that you don't consume texts, you engage them. When I was teaching in the medical context, that wasn’t true. They just consumed whatever they were reading because they had to pass the test. You don't engage a book on medical diagnosis, you memorize it. It seems to me that at least in the last few years that I was teaching, students didn't understand the concept. Does anybody read books anymore? That's how I was trained: you engage the thing you're reading about, and you think about it, you comment on it. In the last few days, I read a few things that I wrote 20 years ago, 25 years ago, and I thought, no, I'm engaging that in a very different way than when I wrote it. The meaning of a text changes depending on where you stand, on your experience and your prior knowledge and so on and so forth. And that informed my pedagogy as a teacher, though I didn't talk about it a lot.
Emily: What you would say is, "So go think about that."
Tom: Yeah. I would say get back to me.
Emily: Yes, think about that and get back to me. And that's how you engage in it pedagogically.
Tom: I think in some ways I learned that from Egon. He wasn't a very good classroom teacher. He used to stand up there with his hands in his pockets, jiggling change, and reading his lectures. But individually, as a tutor, the guy was fantastic. I spent a lot of hours, accumulated a lot of PhD credit hours, in tutorials with him. I would give him a paper and he'd give it back after writing almost as much to me as I wrote to him. Then we would go back and forth and back and forth like that. I learned a great deal from him about thinking through issues and arguments. And I endeavored to be that way, in my own way, as a teacher.
Robin: I'm curious because throughout this discussion you've mentioned the importance of thinking about civil society, about the political apparatus and values, all these ideas that have threaded through your work, and the extent to which the moment we're in now changes the way you think about those kinds of ideas. The civil society space is closing. A profession that, at least from my perspective, doesn't seem to be nimbly responding like a profession that's under attack both in the academic quarter and the practice quarter. A development space that is really very different now than it was eight months ago. So, how are you thinking about the current moment?
Tom: That's a great question. Other than the fact that we're totally screwed. How do we fashion a kind of institutional, and I would say local, response to the circumstances? I don't think there is a uniform general response that the field can make. In your area, you and your collective community of practice can probably focus on something that you can deal with.
I think that's why these large institutional responses from UNDP and the World Bank and IOCE [the International Organization for Cooperation in Evaluation], I just don't think they're going to be effective. We have no evidence to believe all that Marxian argument; these grand theories don't do anything. This has got to bubble up from the bottom. That's the only way I can see it. And there are probably, I mean I just haven't kept up, but I'm sure there are pockets of practice that are flourishing. And the question would be, what role does a thing like an AEA or EES play in promoting those pockets of success? In my judgment, nothing. They're not playing any role. What good is an AEA to the average practitioner, other than you have a place to go for conferences?
Mel: Could I ask, if you imagined a document being developed, something like "Evaluation Project 2029," what you would encourage to be included in its contents? Imagine a change in administrations that would be more open to evaluation and would give consideration to its implications. What might be in the content of that document?
Tom: Well, I think one thing would be to take the theme that we ended up with in the book Using Science as Evidence in Public Policy, where we argued that our best hope is for evidence to influence politics. That would be an important thing to talk about, because we tend to want to say either it's political and it ignores the evidence, or it's the evidence and it ignores the political. No. Policymaking is a political process … . The best we can hope for is some mechanism for evidence-influenced politics.
Emily: There's the idea of evidence-influenced politics now, in the U.S. context, and there's the global conversation about transforming evaluation to support the transformation we need socio-ecologically. These are two, I think, mega-discourses that influence evaluation. How do they interact? What do you think about the latter one, then, in relation to the former?
Tom: Well, this is an intellectual history question, right? I mean, you'd have to go back and look at how we moved in social science from an unquestionably positivistic way of thinking to a way of saying there's some merit in that way of thinking but it isn't the solution. And then how did we move from that kind of more ecumenical point of view to saying, geez, we're not thinking enough about a systems perspective, which has largely been in this little niche over here in organizational development and so forth. How would that influence the way we think about what we're doing? And then now how do we think about—and again, this is an intellectual history question—how do we think about this increased focus on socioecological wellbeing, and how does that influence what we think we're doing as a profession and so on and so forth.
I don't know which comes first, but we need some clear thinking about that. If you think of that as a kind of intellectual history of the development of the value of social science, we need some people talking about that. There's nobody talking about that. What they're talking about is, how do we put these two things together? Where are the right tools for this? And we lack the kind of scholars in evaluation who are trying to unite some of these things, who say this is the kind of framework that might be useful for us to think about the practice. Alright? And that framework goes beyond the five or six things that are the chapter titles in the book I wrote about cultivating a life of the mind for practice.
They are new things. But social science has gone through these kinds of changes, and it's going through a kind of change now. Think about the rejection of grand theory in social theory and its replacement by something else. Evaluation, in my humble opinion, has been too disconnected from these kinds of developments. That's why I have read widely in social science, because evaluation's origins, whether you want to agree with me or not, are in social science. Its origins aren't in philosophy. Its origins aren't in anthropology. Its origins are in social science.
Emily: Why does there need to be an academic home for evaluation? Why does there need to be such a thing as an evaluation academic, an evaluation theorist, an evaluation scholar, however you label it?
Tom: There doesn't need to be one, but that's an institutional question and a political question. I mean, when I first started out lecturing in Norway and Denmark, they'd go, what do you mean you have an association for evaluation? What does that mean? I spent time with a famous Norwegian sociologist. He was of a critical bent and he said, well, that's exactly what I do. I look at social policy in Norway and I critique it—isn't that evaluation?—and here's what I think we ought to be doing instead. Whether we ought to have an evaluation society is a political, institutional question, because evaluation is ubiquitous. And the question is, is there any room for a professional practice of evaluation that helps us understand the everyday practice of it? It helps us improve the way we're going about things in an everyday way. I would say the same thing about auditing: is there a need for a society of auditors doing things that help the everyday auditor do a better job?
I'm on a nonprofit board. We're totally screwed with the Big Beautiful Bill. We're losing right now about $77,000 because of the cuts to vouchers for early childhood education. The state has taken away all the money that we use in grants to support families that need utility assistance or rental assistance. That could end up costing us about $300,000 over the next two years, and we had a budget of $2.1 million. I mean, we're a small nonprofit dedicated to eliminating second-generation poverty. You raise the cost of childcare for these families $5 a week, and they can't afford it without these vouchers. They live right on the line. Right on the line in terms of support. And we [the nonprofit] are trying to get them off that line and out of poverty. But what social service agency and professional group helps the people in the organization whose board I'm on do their work better? I think that's the function of those places.
And I think in some ways, and this is just my curmudgeonliness finally showing at full strength, that when we moved from an academic orientation at AEA, which meant that we invited interesting speakers to talk about issues of relevance to the field, toward a membership organization that sought to entertain people when they came, that's when things went down the tubes. We used to have some really interesting discussions with a lot of people … I don't know when we made that change, but it's been some time now.
Val: I've read your tributes to Egon Guba, and I think he'd be very pleased that you talk about his value to you and about your desire that your career reflect well on your teacher, as is certainly the case. Across your tributes it seemed that he helped open up your mind at a time of ferment in a number of different disciplines, sort of like a zeitgeist across the social sciences and other sciences. That gave rise to what we look back on as the paradigm wars, which shaped the beginning of the evaluation community as I recall it. At other times, I think that you're talking about the discipline still being siloed, and that evaluation could advance as a discipline by learning from and coalescing knowledge from across the disciplines. So I am wondering if you see any potential now for a similar zeitgeist that might lead us out of this post-truth crisis. Any suggestions or lessons from your earlier experience that might apply to our efforts at evaluation today?
Tom: That's a good question. I think that, first of all, these things die down by virtue of just becoming exhausting to talk about. But they also then arise because somebody, some group of people, are trying to advance a new way of thinking. I mean, that's the whole concept around multidisciplinary and interdisciplinary and transdisciplinary thinking that started in the late 1990s, or probably early 2000s. I remember being on an evaluation panel for the Keck Foundation because they spent a lot of money in trying to bring natural scientists in particular areas across disciplines together to solve problems. And we evaluated that. They were looking for particular methods that worked for bringing these groups together. It's very difficult because they speak different languages, they have different ways of understanding the sort of physical and biological worlds and so on and so forth. But it's some group that's going to lead these efforts for us to think differently. And I think that's what's happening in the background of evaluation now with certain people like Emily and the people she works with, Pablo [Vidueira] and others, who are kind of on the periphery. I'm not sure they have a full grasp of these issues around systems thinking and the value of complexity science to evaluation. I think they've read some of this stuff, but I would not say they really understand it. But nonetheless, it's part of the conversation. Elliot Stern and I have talked about this quite a bit. I think that's where this next transformation, or whatever you want to call it, it's not a paradigm, it's just a different way of thinking … about how we bring together some ideas that actually originated in ecology and climate science into evaluation. It's the same thing to me with the cultural competence argument, which arose not just in CREA. That was an issue that arose early on in nursing and in clinical medicine. Robin can probably testify to this: to what extent are we involving our patients in thinking about what's happening here, and do we really understand what cultural influences there are on their own behaviors when it comes to the way we're trying to think about treatment and so on and so forth. That's another thing that can be brought in. And it has been, in a very small way, brought into the field of evaluation. I think that's where these things come from. They come from a group of people trying to advocate for a perspective that we ought to think about. I don't know if they're paradigm shifts. I don't like that word anyway, because things don't really go away. They just kind of go to the background.
Val: Well, I think it's similar to what you said before about your own scholarship and how you hope that it represents the other influences that you took into consideration as you try to formulate your ideas.
Tom: That's true. I don't have a clear academic path. When I was chair of the department, it was, well, you have your bachelor's in psychology, you have a master's in psychology, and now you're going to do a PhD in psychology. How much more psychology do you need? That's not me. I started out studying chemistry and I would've kept going. But my father died when I was in college, and I had to take six weeks off while I was in the midst of organic chemistry. Of course that was a disaster. I never could catch up. So, I ended up with some minor in chemistry or something, but I turned myself to English literature.
Mel: I'd like to ask you to consider either or both of two things. First, would you think about evaluators, particularly early and emerging evaluators, who find themselves in positions that are perhaps congenial to evaluation. That might be in some U.S. states, maybe foundations, maybe local organizations. Think about the advice that you would have for them. Option B, or part two if you want to address both, would be to offer advice to individuals who are leaving evaluation at the moment because of changed opportunities. What is it they should know as evaluators that positions them better for going into some non-evaluation kinds of career trajectories? So either question or both. I hope both, but it's up to you.
Tom: Well, I think the second one goes back to my concern about a number of issues you need to address to be considered more than just an everyday evaluator, a professional evaluator. If you have truly studied issues around what evidence is and how something counts as evidence as opposed to data; the phenomenon of critical thinking and what it means to make a reasoned argument about value; and the role that values play in shaping one's practice, those things are transportable to any environment. So maybe that's a brief answer to the second part.
To the first part, I think there's some similar sense to that too. I would say that when you look at the people who might be doing evaluation in the nonprofit world, where I'm working with the small nonprofit, we're very small, not the Rockefeller Foundation, those things [I just mentioned] are really what's needed. You don't have to call it evaluation; what you can call it is, what is motivating our way of understanding what we're doing here and how we decide it's any good. These are common questions that you all are familiar with. Just because we produced these kinds of outputs, does that mean we're doing good? How do we really judge the outcomes of what we're doing? Where I'm on the board, we can produce lots of outputs. We can say we served 1,500 meals every other day. But how did that actually address this food insecurity issue in our community? I think, and this is something Elliot [Stern] and I have talked about quite a bit, that these are skills, understandings, knowledge that are transportable across settings, whether you call yourself an evaluator or not.
I don’t know if that gets at what you're after, but I think in some ways that's the way we ought to be thinking about things. You can send evaluators off in lots of ways to learn particular methodological techniques. That's absolutely true. But somewhere along the line, they've got to learn these ideas about evidence and facts and values and ethical perspective. They have to learn that stuff and it's transportable.
Even in simple things like, we had an issue on the [nonprofit] board about the behavior of one of the teachers. It called for a personnel evaluation. We didn't call it that, but that's exactly what we were doing. And the issue there is, what was the ethical thing to do? Alright, how do we evaluate the behavior of this teacher in terms of some sort of ethical norms or practice norms, and where do they come from, and can we defend that? I don't think we used the word evaluation, and no one referred to me as the evaluator in the conversation. It was just a discussion about stuff that had to do with our understanding of, are we doing the right thing and are we doing it well? I can't think of a situation in which we don't do that. It goes on all the time. I think evaluation is right in front of us all the time.
Mel: Tom, thanks so much for your time and your thoughts. I also want to add that this has been fun. I've enjoyed this immensely.