Abstract
Constructive Alignment (CA) is a pedagogical tool for designing student-centered instruction aligned to learning outcomes. Despite strong evidence that CA and student-centered instruction are superior to lecture-based pedagogy, the latter remains prevalent across higher education. This descriptive-explanatory case study (n=20) investigates how programs of reciprocal, non-evaluative peer observation can help faculty understand and use CA at the lesson level. Analysis of exit interviews and faculty-faculty dialogue reveals that participants are able to apply principles of CA at the lesson level; most report this is new learning. Two program features that support this learning are described.
Learning goals are a dime a dozen. Everyone’s got them. We always put them at the front [of the syllabus]. I mean sure, no problem. But it was really useful to sit down and really think: “what is the actual skill or thing or point, here, that I want them to get? And how does that actually connect up to the task that’s assigned?” Because I’ve oftentimes found myself thinking, “I’ll just have them read this, and then it’ll be fine.” And then I get annoyed and frustrated when I find that people don’t get it. And I think, “why didn’t they get it!? They just read about it!” So [the program] was useful in really thinking, on a micro level, “the student does this. So what are they actually getting out of the tasks that I’ve assigned?” (Keith, Linguistics instructor)
Introduction
When Keith shared this, he had just completed a professional learning (PL) program in which he and another professor took turns observing each other’s lessons and discussing the observed student learning. Woven into Keith’s account are two significant pedagogical “aha moments.” These—and a third described shortly—are at the heart of this study, which is concerned with how to elicit such learning among faculty. What are these aha moments? First, Keith explains that he is thinking differently about the utility of learning goals. I will call these “learning outcomes” (LOs): articulations of what we hope students will learn as a result of our instruction. Instead of just “put[ting] them at the front [of the syllabus],” Keith is now analyzing how they “actually connect up to the task that’s assigned.” In other words, he is seeing value in aligning learning activities to his LOs.
Second, Keith is now thinking about active learning: what “the student does” and what they are “actually getting out of the tasks.” He is recognizing that a learning activity can sound reasonable (“I’ll just have them read this”) and yet, “people don’t get it.” Keith is thinking of learning as something students construct, rather than something that follows unproblematically from the instructor’s intentions. These two truths, along with a third, constitute a pedagogical framework called Constructive Alignment (CA). More formally, these three principles of CA (expressed at the lesson level) are:
1. Learning is constructed through activities students carry out, not through what we do as teachers; thus active learning is the lifeblood of meaningful instruction.
2. We start by articulating an LO, and we ask ourselves, “if this is what I hope students will learn during the lesson, what do they need to do to get there?” This guides our design of teaching and learning activities aligned to the LO.
3. With our LO in hand, we also ask, “as learning unfolds, how will I know where students are, relative to where I hope they’re headed?” and this guides our design of formative assessment aligned to the LO (Biggs, 1996).
Like many frameworks where the first step is getting clear about LOs, CA can be used to design learning at the level of programs, courses, or individual teaching events (Hussey and Smith, 2008), and because this study focuses on the third of these, Biggs’ ideas are expressed above at that level (i.e. lesson-level LOs, formative assessment). CA is one of several frameworks for designing instruction (e.g. Formative Assessment, Backwards Design) that use a student-centered logic. That is, they begin by articulating LOs and then design learning activities and assessments aligned to those LOs (Hailikari et al., 2021). Student-centered instruction, and CA in particular, is a game changer for students. Using it to design instruction leads to deeper, more lasting learning, especially for minoritized or underserved students (Cohen, 1987; Habel, 2012; Hailikari et al., 2021; Larkin and Richardson, 2013; Leber et al., 2018; Smith and Baik, 2021; Wang et al., 2013).
In contrast to student-centered logic, content-centered logic understands instructional design as “an organization of the content of the teacher’s knowledge for transmission to the students” (Light and Calkins, 2008: 28). Lessons designed with this logic often prioritize the efficient delivery of disciplinary content, and thus rely on traditional lectures rather than opportunities for active learning. The problem motivating this study is that despite evidence of the effectiveness of CA and similar student-centered approaches, content-centered pedagogy remains prevalent in higher education (White et al., 2016). To address this problem, I present evidence that PL programs structured around peer observation can foster faculty understanding and use of CA at the lesson level.
Background and study aims
This study is situated in two bodies of scholarship. The first considers how PL can help faculty understand and use LOs and alignment (e.g. Bosman and Voglewede, 2019; Meij and Merx, 2018; Stevenson et al., 2005; Stewart and McCormack, 1997; White et al., 2016). These studies provide valuable evidence that such shifts in thinking and practice are possible. For example, in their study of an online curriculum-mapping tool, Meij and Merx (2018) found that the tool “contributed to discussion, development, and visibility of learning trajectories, and the awareness among both students and teachers of overarching curricular alignment” (p. 229).
A few of these studies suggest that shifts toward CA may be facilitated by PL that allows faculty to learn alongside their colleagues. White et al. (2016) studied a multi-faceted, institution-level PL initiative designed to help faculty implement active learning. Participants reported that one of the most influential drivers of change was learning from fellow faculty colleagues engaged in the PL. This finding was echoed in Bosman and Voglewede’s (2019) study of faculty communities of practice, in which participants met regularly, read the same literature, and observed each other’s instruction. The authors found that the observation element of the program, in particular, led to the adoption of active learning strategies.
This promising but limited evidence that collaborative PL may help faculty embrace CA points to the second body of scholarship informing this study: the research on peer observation of teaching. Peer observation is a widely used form of PL, and programs can differ in several ways, including the relationship between the two participants, who controls the information generated, and the purpose of the observation (Georgiou et al., 2018; Yiend et al., 2014). Evaluative programs are used to monitor teaching (i.e. a quality-assurance function), whereas the purpose of non-evaluative programs is to help instructors improve their practice through reflection and feedback (Gosling, 2009).
Non-evaluative programs can help instructors reflect on their teaching, whether in face-to-face or online courses (Bell, 2001; Engin and Priest, 2014; Hammersley-Fletcher and Orsmond, 2005; Harper and Nicolson, 2013; Jones and Gallen, 2016; Walker and Forbes, 2018). This study focuses on non-evaluative programs of reciprocal observation: both partners observe the other, and neither is assumed to have more pedagogical expertise (Georgiou et al., 2018). In such programs, observing and discussing lessons triggers reflection and positions instructors to try new pedagogical strategies (Hammersley-Fletcher and Orsmond, 2005; Hendry et al., 2021).
What is missing from the research described above are studies that consider reciprocal, non-evaluative peer observation as a tool for helping faculty embrace CA. That is the first gap in the literature the present study addresses. The second gap is the widely recognized need across PL scholarship for research focused on particular design features that facilitate intended faculty learning (Saroyan and Trigwell, 2015). This study fills the aforementioned gaps by addressing two research questions about programs of reciprocal, non-evaluative, CA-grounded peer observation. (RQ1) When engaged in such a program, how do faculty understand and/or use the three principles of lesson-level CA? (RQ2) What program design features seem to support faculty understanding and use of the three principles of lesson-level CA?
Peer-Assisted Reflections On Student Learning or “PAROSL”
Institutional context
PAROSL was launched in 2019 at a large research university in the US. During the COVID-19 pandemic, the program expanded from its original in-person format (supporting face-to-face instruction) to an online format (supporting synchronous online instruction). It now operates in either modality. The program is a collaborative effort among the campus’s several teaching-and-learning centers and is generally regarded positively by chairs, deans, and other academic leaders. Goals include helping faculty use CA and integrate more active learning into lessons, promoting inclusive teaching, and strengthening the teaching dossiers of junior faculty.
From theory to design
PAROSL was designed based on three theoretical insights from the PL literature. Faculty learning is enhanced when (1) faculty reflect on their practice (Kreber and Cranton, 2000; Mezirow, 1998; Schön, 1984), (2) faculty learn alongside colleagues in a community of practice (Cox, 2004; Stoll et al., 2006; Wenger, 1998), and (3) the learning is embedded in authentic contexts (Boud, 1999; Brown et al., 1989; Kolb, 2015).
With these insights and the goal of encouraging CA in mind, a program was designed in which faculty learn by engaging in peer-to-peer conversations (community of practice) that are about observed lessons (authentic context). These conversations are structured to support instructor reflection (reflection) and focus on student learning rather than teaching (the “constructive” part of CA). And they are organized around questions (more details below) that trace the logic of alignment (the “alignment” part of CA).
Questions that trace the logic of CA
Meetings are structured with the observer posing questions to which the instructor responds. These questions guide the instructor carefully through the logic of CA as it plays out at the lesson level. For example: “What do you want students to learn during this lesson and how will you know if they’ve learned it?”, “What will students be doing to move towards the intended learning? What will you be doing to support their learning?”, and “How will current learning (relative to intended learning) be made visible?”
The observer structures their observation notes using this same logic (e.g. What are students doing to move toward the intended learning? How is current learning (relative to intended learning) being made visible?). Notice the questions do not use the terms “learning outcomes,” “assessment,” or “evidence.” This choice was based on interviews conducted with campus stakeholders during program development, which made clear that some of our faculty associate these terms with a quality-assurance agenda. Thus, we opted for the more neutral language of “intended student learning” (for LOs) and “making learning visible” (for “assessment” and “evidence of learning”).
Best practice in non-evaluative peer observation
PAROSL also incorporates several more fine-grained design characteristics that build on what we know about non-evaluative peer observation in higher education. Each participant observes the other (de Lange and Wittek, 2022; Gosling, 2002). Observers do not assume an evaluative stance (e.g. Bell and Cooper, 2013; Sachs and Parsell, 2013), and their primary role is to support their partner’s reflection. Meetings are facilitated by a pedagogy expert who answers questions and ensures all meetings are scheduled and conducted. If instructors want to work within an “expert-peer-participant” triad (Georgiou et al., 2018; Harris et al., 2008), they can invite the facilitator into their conversations. Planning meetings precede observations, which are followed by debrief meetings (Jones and Gallen, 2016; Martin and Double, 1998). Each participant observes and is observed twice; after the first observation, they try out a new “teaching innovation” (i.e. strategy, technique), decided on with their partner’s help (Bell, 2001). Instructors own the forms filled out during the process and may choose to keep them entirely to themselves (Bennett and Barp, 2008; Jones and Gallen, 2016; McMahon et al., 2007). They use these forms to write their narrative summary at the end of the program (Harris et al., 2008). Participants select their own partners (Bennett and Barp, 2008) and join the program voluntarily (Harper and Nicolson, 2013). Two online training modules—one asynchronous (45 minutes) and one synchronous (90 minutes)—orient participants to the program and to CA at the start of the term.
Methods
This research involved 10 pairs of participants and used a descriptive-explanatory case study approach (Hamilton and Corbett-Whittier, 2013). For eight pairs, the researcher worked in a “participant observation” tradition (Guest et al., 2013), acting as both facilitator and researcher. (Note: Even though the researcher was observing, I do not refer to them as an “observer.” That term refers to faculty participants in that role.) The researcher spent extended time with these pairs (approximately 10 hours per pair, including the training and data collection). This meant they could build rapport, thus “reducing the problem of reactivity” (p. 81) and strengthening the validity of the research. A faculty developer from a campus teaching-and-learning center served as the facilitator for two pairs. Because the researcher was not present at these meetings, they reviewed associated audio recordings and transcripts.
Participants
At the time of writing, the fifth iteration of PAROSL is being implemented. This article focuses on the second and third iterations, which engaged faculty from Engineering, Finance, Language, Linguistics, Literature, Psychology, and Statistics. Course size ranged from under 25 students (10 courses) to over 100 (5 courses). Pairs were admitted in the order they applied, and they learned about the program through word of mouth or announcements from campus leaders.
Data and analysis
The following types of data were analyzed: fieldnotes from all meetings of the eight pairs facilitated by the researcher; audio recordings from all (~30-minute) planning and (~45-minute) debrief meetings of all pairs (~50 hours, total); and notes and audio recordings from (~30-minute) individual, semi-structured exit interviews with all participants (these occurred at the end of the process).
Data collection and preliminary data analysis were concurrent, with the researcher completing analytic memos immediately after collecting data. Once all data were collected, analysis became more concentrated, using a constant comparative approach (Stern, 2008). Changes to participants’ original speech include redacting identifying information, paraphrasing for clarity, and making minor adjustments for readability (e.g. removing “um”). I use the following conventions for quotations: [. . .] speech not directly relevant has been omitted for brevity; {} identifying information has been redacted; [un-italicized text] contextual information; [italicized text] what was said has been paraphrased for clarity or to protect privacy.
RQ1 findings
Below, I describe broad patterns I observed in faculty understanding and use of CA, and I include descriptions from faculty of their own learning. This section is organized into three subsections corresponding to the three principles of lesson-level CA on which this study focuses.
Learning is constructed by students
Several instructors described how, during PAROSL, they began to think about their lessons differently: in terms of the students’ experience and what they needed to do to move through the learning. Seo-Jun, a finance instructor, explains: “After participating in the program, I realized that my focus was mostly on my own delivery of the material rather than being on how the material was absorbed by the students.” Here, Keith describes a similar shift and how it came about:
Researcher: One of the aha moments that you just described to me was thinking really intentionally about what you want students to learn and then about the activities or questions—that kind of really intentional thinking. Can you remember, Keith, what prompted that realization or that shift in thinking? Was it something somebody said? Was it—
Keith: [interrupts] I think it was actually one of the questions on the form. Something like, “You’re going to be doing X. What are the students doing?” That question was just—I always plan out what I’m doing, but I hadn’t thought, literally, “what are the students doing during this thing? If there’s one student in charge, what is everyone else supposed to be doing?” [. . .] The question of, “what are the students supposed to be doing?” was extremely useful. Because it means you can’t really just dump this there and say, “I’m going to do this!” You really have to think, “Well, I’m going to say this, I’m going to talk about this. But then what’s supposed to happen? Everyone’s sitting there, what are they going to do?” And then you think, “Okay, so maybe there needs to be something where [. . .] I’m giving them an exercise and they’re supposed to respond in some way.”
For Miguel, an engineering instructor, a similar realization came after careful reflection on one of his own lessons. Here is Miguel’s response when asked whether he thought his innovation (which involved creating space during lecture for active learning) was a success: I was successful, in that I have some system where students can discuss. But it was not long enough to be effective. And the reason is, I was rushing things towards the last—almost last lecture. I wanted to get through, and you don’t have much [time] [. . .] And that actually made me realize that I should not really cram all my lectures with so much information, irrespective of what [the students] need. [. . .] I’m not the driver here. I’m the mediator. I’m not the person who is learning here. And I think that lecture made me recognize that. Two hundred students should have their time and voice heard. That 20 or 30 minutes is more important than the one hour and 30 minutes that I am going to speak and give them all the information. [. . .] So that was my take-home message: [. . .] Recognize this is their time, give them some time to reflect.
LOs orient teaching and learning activities
During PAROSL, several participants had realizations about how and why to align teaching and learning activities to LOs. For example, in her second planning meeting, Tracy, a psychology instructor, mentioned to her partner that after their previous meeting, she had found it necessary to overhaul one of her lessons. In order to probe Tracy’s thinking about the role of LOs in this “overhaul,” the researcher provided Tracy with a transcript of her quote during her exit interview, and then proceeded:
Researcher: I want to ask a couple of questions about that retooling or overhauling process [. . .] How did you know what changes to make or how were you thinking about retooling the lesson?
Tracy: I think the meeting that we had allowed me to think through and talk through my own logic. For example, we talked quite a bit about my wanting students to understand the connection between emotion and temperament. And it was the first time that I’d really clearly articulated that that’s one of the goals. [. . .] And so as soon as that happened, it was clear. The reorganization process sort of made itself known [. . .] The flow and the logic for the lecture shifted.
Tracy used LOs prospectively, to design upcoming activities. Other participants used them retrospectively, to reflect on observed lessons. We can see this in the following exchange between Sandra and Tatum (two literature instructors) after Tatum’s lesson, in which students learned about the role of social media in activism. Notice how Sandra helps Tatum look at the lesson through the lens of the stated LOs. This reflection allows the pair to see that part of the lesson may have been unnecessary, as it did not support the stated LOs:
Sandra: I really liked your goals, and I had them in front of me when you were teaching. And I guess the one part that maybe took time was the history [. . .] Maybe they didn’t need the history to know about Facebook [. . .] The only part that maybe you didn’t need was the history of [country] because every single one of these movements will have a long history [. . .]
Tatum: That’s really helpful. Yeah, I think I do this a lot, actually, with history.
Sandra: Oh, really?
Tatum: Now that I’m thinking about it: yeah. A lot of my material is really history heavy. Even when I’m teaching a novel, I’ll go through half an hour of history about whatever the context is. [. . .] I mean, I guess I’ve always just been like, “I’m going to situate this.”
Sandra: I can see it in a literature class. But here, really, it’s about the activism. There’s so many intricate parts, but your goal was really to teach them, “what does Facebook do that’s different or what are the new techniques?” [. . .] It didn’t seem in accord with the goal because your goal didn’t say, “Teach them about [country].” That wasn’t one of your goals. So [. . .] then it wasn’t fulfilling a goal—that part of the class, I think.
Engaging in this type of LO-focused reflection seemed to pay off for Sandra and Tatum. Toward the end of the program, they both spoke extensively about their realization that using LOs to design teaching and learning activities allowed them to focus their lessons and saved them time and stress. Sandra explains during her exit interview: Strangely, I haven’t always thought about what students should get out of the class. I always think it’s my responsibility to just give them tons of stuff. [. . .] Doing PAROSL, I really realized, “what do I want them to learn about [topic 1]? What do I want them to learn about [topic 2]?” And that took so much weight off of me, it took a huge burden off of me [. . .] Instead of “I’m going to have to sit up all night and learn every {}, and I’m going to be so tired, and when the students see me at 11:00 I’m going to be exhausted, but I’m going to tell them about all 50 {}, and I’m going to know everything about them and I’m going to try and memorize everything.” Instead of doing that—which is how I did it—you’re like, “oh, I can think creatively for an hour and come up with a lesson plan [. . .] that they could learn from.”
LOs orient formative assessment
Many participants seemed to experience “aha moments” around why and how to make students’ progress toward LOs visible during their lessons. In this exchange from their exit interview, Jesse (a language instructor) describes this new learning—in particular, how it led to more frequent formative assessment:
Jesse: I think the aspect that I liked the most about the program was being more cognizant about how students’ learning is made visible in every single point of the lecture or of the class. [. . .] That concept of “making it visible” was essential to me—I don’t know why. I never thought in that way about my classes [. . .]
Researcher: When you are more cognizant of making student learning visible, how does that change the way you design a lesson, for example?
Jesse: Because I integrate them more throughout the class in many small, different ways. So before, I would present this information and then they would go to the breakout room and speak about it. During that time, the learning is made visible, of course. But sometimes the lectures are very long [. . .] and until they had the opportunity to work in groups, I was not so conscious about whether they were actually understanding [. . .]
Researcher: Jesse, do you remember what it was that caused that realization for you? Or helped you start focusing on making learning visible? Was it something that [partner] said? Was it something about the program? How did you—
Jesse: [interrupts] It was actually when I was writing the PAROSL forms. There is a specific question on the forms. So that was the moment. Because I stopped. And I remember very well that I had the time to think about it, write about it, and then when I met with [partner], I was actually already using that vocabulary.
For Angela, a literature instructor, her “aha moment” about LO-aligned assessment was less about frequency and more about intentionality. (In the account below, Angela uses the term “assignment,” but her point is about “see[ing] that learning is taking place,” fundamentally an assessment move.)
Researcher: [In your narrative summary], you wrote about designing assignments that are more focused on student outcomes or intended learning. Can you give me some examples of what that looks like? How your approach to lesson design is different than it might’ve been before?
Angela: Before, I would design an assignment based almost on some philosophical approach that I had for that particular thing we’re working on [. . .] but it’s all very in my head. I don’t know how to explain it. It’s sort of dreamlike, and it doesn’t feel very polished [. . .] I have some general idea of what I want them to do, but I don’t go back and really fine tune and think, “when am I going to see that learning is taking place?” [. . .] I do it on the spot. By the seat of my pants [. . .] But I’ve realized there is another way, and you can be so extremely reflective about this. [. . .] This forces me to look at every assignment I have through a different lens.
In addition to frequency and intentionality, a third type of realization around the principle of LO-aligned assessment was about whether the learning of all students was made visible. Heather, Keith’s partner and linguistics colleague, explains this was an important “revelation” for both of them: Kind of a revelation that Keith and I both had was that we were always focused on thinking about—in these graduate courses—“what is the student who is onstage at that moment supposed to be doing and learning and how are we supposed to see their learning?” And we hadn’t given that much thought to [the other students]: “What are they supposed to be learning? How will we know if they are learning it?” So that’s something that we both kind of shifted our attention to because of PAROSL this quarter.
A final theme, mentioned by only a few participants—but perhaps on the minds of more—was the time it takes to design and conduct thoughtful, LO-aligned assessment. Bianca (a psychology instructor) describes this, in response to her partner Tracy’s question, “Can you sum up in a phrase or principle, what you’ve learned during our work together in PAROSL?”: The value of taking the plunge with being deeply intentional about student learning. [. . .] Because I do think that when I really slow down—when I can really slow down—the issues that you identified of making student learning visible and then being able to act on it. It’s not like I’ve never done that. But it really needs a lot more time than you see me—and I’ve seen me—give. Most of the time, there’s just nowhere near enough time allocated. And I haven’t been willing to let go of curriculum, of material, of coverage [. . .] to learn less, more.
Before moving on to RQ2, a quick note about the representativeness of the RQ1 findings. Thirteen participants reported PAROSL’s logic of lesson-level CA (described in the above subsections) was new to them, with several making comments like this, from Tatum: Thinking intentionally about, “okay, what are the objectives?” [. . .] is pretty standard. [. . .] It’s something easy to put on a syllabus, and then not necessarily bring into each individual class [. . .] So, I think that bringing it to that particular level is somewhat new for me. And I think the “making student learning visible” is also something that I’m still going to need to work on because I think that one can be a little slippery.
Bianca described similar learning: “I mean, the basic idea of intended student learning was something I knew of probably since the first time I taught [. . .] But the other pieces of that process were new to me.” And this is echoed by Heather: “I’m very familiar with it from a top-down, course-design point of view. But in terms of what’s going to happen in any given class session, I think that way of formulating it was new to me. It immediately made sense though.” Here, Sandra compares lesson-level to course-level CA: It wasn’t entirely new [. . .] Somewhere I was doing a class where we were doing learning outcomes [. . .] But the exact form it was very new: that one actually goes through the questions like that, methodically. I had never done that before. Perhaps I had also not seen outcomes on such a micro level. I had seen outcomes on the macro: what should the syllabus do, on the whole? I had not thought before of outcomes of one class.
The other seven participants explained that applying the principles of CA at the lesson level was not a new approach for them, but they valued the opportunity PAROSL provided to practice and develop this approach in a structured way.
RQ2 findings
In my investigation of RQ2, two program features emerged as potentially significant in fostering the faculty learning described above. Faculty repeatedly and explicitly identified both as drivers of their learning.
Going small: Lesson-level articulation of CA
That PAROSL focuses on the logic of an individual lesson was helpful for many participants. Several mentioned struggling with other LO-grounded frameworks that were pitched at the course level. Here, for example, Scott, a statistics instructor, contrasts PAROSL’s approach with a PL program on Backwards Design: But I just couldn’t wrap my mind around it because they kept saying, “What’s the idea you want to get across?” and then “Design the class backwards.” I just couldn’t get it to work. The way PAROSL did it was more about, “what do you want them to learn?” Not “the big idea,” but “what is it that you want them to learn?” which is really different for me. It could be just a basic, little thing you want them to learn that day. [. . .] Let’s say you’re like, “okay today I just want them to understand what a confidence interval is,” for example. Then you work towards that. And I think for instructors, it’s a little more streamlined and not so overwhelming. You could stay on focus with that idea.
Keith made a similar observation, noting that CA logic can be “really vague” if it is not embedded in “the smallest unit, a class”: [PAROSL] was focused on a very specific lesson. I was applying this reasoning structure to something very specific, so that was also a key thing, I think. If you say, “Hey, Keith! Think about what you want students to learn, think about how they’re going to learn it, think about” [trails off], then it’s really vague. It’s really hard. But I think that embedding this process in an actual—really, down to the smallest unit, a class—makes things much clearer as to what you’re supposed to do and how to do it.
Going carefully: Questions that structure conversations
In the findings for RQ1, Keith and Jesse describe how the questions on PAROSL forms triggered new learning. In the interest of brevity, I will not repeat those quotations here but simply note they reflect experiences of many faculty. Angela, for example, agreed with Jesse that thinking of assessment in terms of “making learning visible” was transformative for her: Having terms to describe that process, I’m someone who needs that. I like to call things by their name, and that brings clarity to me. And so, just having those labels or those names was incredibly, incredibly helpful [. . .] Thinking about this [. . .] in a very deliberate way and calling it, “making learning visible”—it changed everything for me. I don’t think I’m going to go back and discuss my teaching and my approaches in a different way. The language will stay with me, absolutely. [PAROSL has] done a wonderful job with the language in the questions. To me, that has been the success of the program.
Another literature instructor, Skye, remarked that the term “learning outcomes” can be off-putting to faculty and that “intended student learning” worked better for them: I liked the questions. I felt like somehow the wording you were using felt a little bit more approachable or un-annoying, as opposed to other types of language that have been given to me. Like, “What are the outcomes?” I think sometimes the jargon of it might be what’s annoying faculty or making them resist. They think, “I have my own jargon. I don’t want this jargon. Leave it out. So much jargon.” I don’t know if that’s part of it. But just to say, “what are you intending students to learn?” It feels like a more stripped-down way of saying [what] experts in teaching say in some other fancy, jargony way.
In addition to particular phrasing, participants noted that the sequence of questions—all anchored in student learning—was key. Here, Bianca describes how this worked for her:
Researcher: You said in your [narrative summary] that the PAROSL process encouraged or required you to be more intentional and explicit about intended student learning and making it visible. What aspects of the experience pushed you to be more intentional about that? Was it something Tracy said, for example? Was it something about the structure? How did that come about?
Bianca: It was just structured that way, right? [. . .] It starts right there: what’s going on with the context and then—immediately—what’s the intended student learning? And then everything else builds on that. So it was beautifully present in the structure from the get-go.
Discussion
Consistent with prior research (e.g. Harper and Nicolson, 2013), this study confirms that non-evaluative peer observation can be an effective approach to helping faculty think about and design instruction differently. In particular, the present study provides evidence that if such programs are grounded in CA, they can help faculty better understand and/or implement principles of CA at the lesson level. For 13 of 20 participants, learning was observed around the following principles: learning is constructed by students; LOs orient teaching and learning activities; and LOs orient formative assessment.
Prior scholarship (e.g. Gosling, 2002) has demonstrated the importance of a handful of basic design features for non-evaluative peer observation (e.g. pre- and post-observation meetings, prompts to support reflection rather than evaluation). Building on this work, this study suggests two more design features that are key in helping faculty embrace CA. First, faculty work through the logic of CA at the level of individual lessons, rather than courses or programs. Second, the questions that structure faculty-faculty conversations are tightly focused on student learning and use language that is accessible to faculty and free of charged buzzwords. Notice that these two features are not specific to programs of peer observation, so they may be useful in other PL formats (e.g. consultations, workshops) that help faculty use CA.
Although not explored in this study, time might be a third program feature necessary for lesson-level CA adoption. PL research suggests “intellectual and pedagogical change requires professional development activities to be of sufficient duration” (Desimone, 2009: 184). PAROSL requires about 13 hours over the term, and participants step through the logic of lesson-level CA a total of eight times during the program (planning and debrief meetings for each of two observations, for self and partner). Further research is needed to determine whether the depth of learning observed in this study would be maintained if the extent of engagement were reduced.
One limitation of this study is its small sample: although 40 faculty have engaged in the program with similar results, I report on only 20 here. Another is self-selection into the program; participants are likely non-representative (e.g. in their interest in PL and their willingness to open their classes to a colleague).
The sources of evidence in this study can be understood both as a limitation and a strength. A limitation is participants reported on changes in their thinking and practice, but this study included no direct evidence of those changes (e.g. longitudinal observations of teaching) or changes in student outcomes. Without such data, this study cannot speak to whether these self-reported changes continue to manifest beyond the program or improve student outcomes. A strength of the self-reported data in this study is pointed out by Webster-Wright (2009) in her work on authentic faculty learning: “measurement of activities and outcomes does not necessarily equate with learning” (p. 727), and an overreliance on such approaches means we “miss the opportunity to develop insights found when we listen to professionals describing how they learn” (p. 725). Listening to Keith’s, Sandra’s, and others’ meetings and exit interviews uncovered a level of detail and nuance about how to structure PL that is unavailable via surveys or analyses of student grades.
This study provides practical ways to draw instructors into the work of CA that allow them to see its intrinsic value. I will close by letting Tracy describe this value:
Tracy: I feel compelled to apply this system, even just within myself, to my future lectures, particularly the ones I’m a little more “meh” about. This has been [. . .] transformative for my teaching, genuinely.
Researcher: What do you mean “the system”?
Tracy: [. . .] Just really getting in—on a granular level—into my logic for why a lesson looks the way it does, and how I can be more explicit about student learning [. . .] It makes me feel good to think that I could be doing my job well, and that I have very definitive ways of working to improve lectures where I feel like maybe something’s not right.
Acknowledgements
The author is grateful to the brave and generous faculty who allowed their lessons and conversations to be part of this research. Special thanks to Adrienne Lavine for her keen editing and staunch support of the project. Collaboration with her and the following individuals made the work possible: Katie Dixie, Lisa Felipe, Jess Gregg, Noelle Griffin, Beth Goodhue, Rachel Kennison, David MacFadyen, Erin Sanders O’Leary, Christian Reyes, David Schaberg, Shanna Shaked, and Amelia Tobiason.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Andrew W. Mellon Foundation under grant number 41500604; and University of California Los Angeles funds in support of the university’s ongoing accreditation by the WASC Senior College and University Commission.
