It’s interesting trying to find literature on program assessment—is this an area that’s under-researched?  Most literature seems to cover classroom evaluation.  A few keyword searches have mostly turned up K-12 case studies, though I will be reading through a few to see if anything can transfer.  Most promising are two articles published in Assessing Writing that study the writing program at Washington State University, as well as some starting questions/considerations I came across from the WPA council.

In the meantime, there is an excellent (and longish) chapter on program assessment in O’Neill, Moore and Huot.  This book has been almost overwhelmingly helpful, and I suspect it will be the most on-target research I will find this summer.  I like the way they differentiate between classroom and program assessment:

Program assessment differs from other types of writing assessments because the focus is not on individual student performance but on collective achievement.  So while a program assessment might include evaluation of student writing as a data-gathering method, it requires that the writing be considered in terms of what it says about student learning generally and how that learning is supported by curricula, instruction, and instructional materials. (109)

When we look at a writing program, they say, we are looking at the entire “learning context” and evaluating how all parts interact—what aspects are working, how are they working, and why? (109)

Now then…

As these authors advocated so strongly in their introduction for a contextualized, historically driven, and theoretically informed approach, it’s no surprise that they suggest first reflecting on purpose (how will this assessment be used?) and on the program itself.

Starting questions/considerations (from O’Neill, Moore and Huot 110-113)

  1. How will we use the results of this assessment?
  2. How do we define our program?  What elements of our program are we assessing?  (For instance, are we assessing FYC courses, or extending beyond into upper-level courses or developmental courses?)
  3. What is it that we want to know?  (What’s currently happening?  Is it what we expected to see?  What in the program seems to be working or not working?)
  4. What information do we already have?  (Do we already have student demographics that will prove useful?  Standardized teaching observation reports?)

Much of the third consideration above seems to be tied closely to program outcomes.  (I’m thinking it’s possible that our own program at UC, which has some pretty clear outcomes established for FYC and intermediate composition courses, has already done a lot of the legwork here.)

Coming up next: designing the assessment itself (“Matching Methods to Guiding Questions”)

Also, found a great website from the WPA: http://www.wpacouncil.org/assessment-gallery