Desistance Narrative

The desistance narrative argues that faculty desist from using research-based innovations; it is both dominant and harmful.

Author: Eleanor C Sayre
Published: April 6, 2022


Narratives that shape our field

What is physics education research (PER) supposed to do? What are the metrics of success? How should we conceptualize our joint mission? What epistemological commitments do we hold dear?

See more on epistemological commitments in a discussion of two different theories in Research: A Practical Handbook.

The stories we tell about ourselves – the narratives for our field – constitute our answers to these central questions of field identity. There are, of course, many different narratives possible for education research. Some of them are overlapping and commensurate; others are opposed.

What is the Desistance Narrative?

The Desistance Narrative is a dominant narrative in PER. A host of papers use it implicitly, and a few spell it out without naming it. It is a really common narrative from critics of curriculum propagation efforts and faculty development programs, both within the field and outside of it; critically, this narrative has also been picked up by funding agencies, who use it to shape solicitations.

Within this narrative, there are four key pieces, each with its own metrics of success. Education research is supposed to:

  1. Conduct research into student learning. Yeah, we’re awesome at this. Decades of work, thousands of researchers, multiple journals, etc.

  2. Develop curricula and assessments that help students learn. Yeah, we’re awesome at this too. See the PhysRev focused collection, PhysPort recommendations, and PhysPort teaching resources.

We do that work so that:

  3. Physics faculty use our curricula. In the desistance narrative, they might take up one of our best-practices guides and use our stuff, but generally they don’t. Or they do, but not with fidelity. Or they use it for a while, then stop. Or one person uses it, but it’s not widespread in the department.

  4. Students will learn more. Actually, we’re quite good at this too, as long as faculty generally use our stuff. Here’s a small selection of review articles and syntheses.

Desistance is failure

The central story of the desistance narrative is one of failure.

A dear friend put it like this:

Bluntly speaking, the curriculum reform strand of PER [Physics Education Research] has not had the impact that it wants. PER materials are hyped up, get used in a place, students struggle (for many reasons), and slowly the effort fades away as value systems crash up against each other. Best intentions are stymied by the systems in which people work. Efforts to get people to know about and learn about PER… have a track record of “creating” champions, but those champions too often work in isolation and aren’t going to rock the boat in their departments.

In short, while we’re awesome at doing research (1) and generating curricula (2), we’re not doing well at broad-scale change to student learning (4) because faculty aren’t taking it up (3) as much as we want.

Unpacking the desistance narrative

I want to unpack the desistance narrative as a narrative by naming some of its assumptions and offering some alternatives to those assumptions.

Faculty change and curricular change

In the desistance narrative, faculty change and curricular change are almost interchangeable: PER dissemination efforts cause curricular change in departments by convincing faculty to change; faculty change by adopting research-based instructional strategies (RBIS). We know that faculty have changed because they adopt/adapt the RBIS, and we know that the curriculum has changed because the RBIS is used. It’s one piece of evidence for two different things, and those two things are treated as if they were inseparable.

Centralizing RBIS as PER products feeds into fidelity-first arguments, devalues fundamental work in PER, increases the barriers to full participation in our field, and brings ammo to anti-PER faculty. Gross.

However, faculty are humans and curricula are activities. They’re different ontological categories. It could be true that humans have changed because they engage in specific new activities, but humans could also be different because they’re engaging in other new activities, or because they’ve taken up new ways to engage in old activities.

We could improve the desistance narrative by separating these two kinds of change. If we center faculty change, we might notice when/if faculty take ideas from (say) clickers & peer instruction and use them in small-enrollment upper-division classes. If we center curricular change, we might notice (say) how different faculty take up different grading schemes in physics classes, and how that affects drop-withdraw-fail (DFW) rates.

Centering intervention efforts

The desistance narrative is predicated, fundamentally, on a paradigm which centers specific intervention efforts: before intervention faculty do X; during intervention they do Y; afterwards they sustain Y (or not… usually not). The intervention could be a workshop for faculty; it could be departmental action teams or faculty learning communities; it could be workshops on particular RBISs or whatever.

Whatever the intervention is, the desistance narrative relies on pre-post testing of particular interventions. For example, we’re thinking about pre-post testing of a longstanding faculty development workshop when we say things like “this workshop has failed to create lasting change”.

Taking up an intervention-centric paradigm is easy within current funding structures. We can write grants, papers, and evaluation reports that show how much practices are different after an intervention. NSF has certainly bought into this narrative in the way it structures project evaluation expectations and solicitation language around education research and instructional change.

On the research side, though, these kinds of reports really struggle to isolate effects that occurred because of the intervention, and that means we struggle to understand the effects of the interventions themselves. To what extent did DFW rates fall because of a new TA training program, and to what extent because the enrollment demographics changed, because faculty taught in new ways, or because paying attention to students improves retention? Which elements of the TA program are essential, and which are only helpful?

Because the desistance narrative relies on centering interventions to measure their effect, giving up the centrality of interventions is hard in this narrative.

Alternatively, we could start centering other things, like faculty (lifelong learning) or departments (departmental growth and change). PhysPort has been doing some work on centering faculty, and we’re getting radically different data from groups who use the desistance narrative, especially around faculty practices and values. I think professional societies like APS and AAPT could be wonderful partners in this kind of switch, even though it’s a very different perspective on if/how PER is effective (and at what).

Sequelae of these two assumptions together

Change interventions fail because adopters/adapters desist.

Faculty/curricular change interventions fail because adopters/adapters desist. Because curricular change and faculty change are entangled, this paradigm doesn’t measure changes that faculty may have made to their beliefs or practices beyond this curricular change; because the curricular change is attached to a particular RBIS, any change away from that RBIS counts as desisting, and therefore as failure.

This narrative of failure is not honest to actual classroom practices! Faculty change their teaching all the time as they teach new classes, sub out assignments or units within old ones, or try new pedagogical approaches. Sometimes these teaching experiments look like “hey, I tried this RBIS for a semester, didn’t like it, and tried something else”. Within the desistance narrative, desistance = faculty/curricular change failure, even though faculty rarely go back to exactly what they did before and classes are rarely taught the same way by different faculty.

I would rather frame this behavior as “continued experimentation”, and I frame ongoing experimentation as a success in faculty/curricular change.

If we take up the perspective that faculty are constant curricular experimenters, it changes how we think about PER dissemination efforts. It’s unexcitingly normal for faculty to take up, change, and discard curricular pieces. We can continue to put a lot of pieces and ideas in front of them – they like that – but we should focus our change efforts on helping them build skill in bricolage and evaluation, not on taking up any particular RBIS. PEER has this idea built into our core values.

Measuring progress with pre/post testing is deficit thinking.

Basing our ideas of progress on pre/post intervention testing (and then declaring failure) is deficit thinking. We are ascribing deficits to faculty and departments; the deficit is not doing what we want. Because we’re not looking for their affirmative changes or evidence of growth (outside of our very narrow view), we can’t see their growth, and we ignore their expertise and assets.

For reasons analogous to why deficit thinking about students is bad, deficit thinking about faculty is bad. If we switched to an asset-based view of faculty, we could better understand and affect their practices. This is a subtle shift, but huge in implication. Let’s take up two implications: one for research on faculty & departmental change, and one for faculty development practice.

  • Research: an asset-based view of faculty better captures actual faculty practices, including changes they make and their reasoning for change. Better data -> better results.

  • Practice: a deficit-based view of faculty supports the idea that PER is all about forcing faculty to do things they don’t want (e.g. use specific RBIS, even when those RBIS aren’t well-tuned for the students enrolled). An asset-based view honors faculty expertise, which helps them feel better about enacting new practices. Feeling respected -> better practices.

I don’t know how to (cheaply) scale up programming using an asset-based paradigm and I don’t know how to (cheaply) evaluate PER efforts to change the field outside of an intervention-centric paradigm. But I would really like to co-think on possibilities.

Change efforts require a champion

Under the desistance narrative, change efforts are successful when there is a departmental champion, someone who can constantly push for (and support) a particular intervention.

Centralizing particular curricular changes as synecdoche for departmental change encourages us to look for particular faculty champions as synecdoche for faculty change. Champions can be effective, yes, but there are limitations to thinking about change as requiring a champion.

Relying on champions as a model for identifying & sustaining changes means that we miss other kinds of change for which champions are a poor explanation. It could be that each department has a single champion for hiring a new person or trying a new thing, but we could also frame the same departmental change as coming from a broad base of people advocating for diverse new hires, or from an ecology of many instructors, TAs, and support staff operating at different levels to enact new policies.

Focusing on the ecology of change rather than the synecdoche of change lets the system be more messy, and it helps us understand or explain how and why some changes persist and other changes continue to change. From a research perspective, this shift is super important; from a development perspective, it might help ideas persist even as the nominal champion moves on.

Also, single-champion narratives grow from and promote white supremacy and misogyny, so I would like us to think about how to support them less.

Furthermore, faculty development workshops for new faculty, such as the longstanding and long-reaching “New Faculty Workshop for Physics and Astronomy” (NFW, now the Faculty Teaching Institute, FTI), rely on junior faculty to be champions, which is fraught with risk for them. We could instead teach them to work within their departments rather than to champion changes: finding other would-be changers, partnering with CTLs, and bringing a bricolage & evaluation lens to campus initiatives. FTI is moving in this direction already, and it’s amazing.
