I’ve been meaning to post for a while now (four months, to be exact!). Last April I attended the Research Network Forum at 4Cs as a work-in-progress presenter, in the hopes of developing a viable project out of my beginning forays into the assessment literature. Now that I have the time to pursue projects that aren’t directly related to my coursework, I’d like to briefly summarize some of the feedback I received and develop a tentative plan of action for a research project drawing on that literature.

The two project ideas that I came home with from discussions with my generous and knowledgeable session-mates were to pursue annotated bibliographies on particular subjects for the Journal of Writing Assessment, or (as my WPA table suggested) to develop a whitepaper that distills best practices and essential resources for specific contexts and exigencies facing WPAs (who are always in a time crunch!).

Some suggested contexts for such a whitepaper or annotated bibliography included: assessing portfolios, proficiency exams/placement, connection between writing assignments, basic writing, assessing multimodal assignments, rubrics, responding to student writing, professional writing courses, and special populations (e.g., non-traditional students).

I hope to make some headway on one of these project ideas in the coming months, before the start of term (late September for us, as UC is on the quarter system). Stay tuned for more, and in the interim, I’d welcome any other suggestions for contexts/topics.

I recently came across the following websites for calculating sample size:

http://www.surveysystem.com/sscalc.htm
http://www.raosoft.com/samplesize.html

The caveat is that I haven’t used either, but they may prove useful to anyone unsure of what a proper sample size is when assessing N students.
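
(For the curious: calculators like these are typically just applying a standard formula. Below is a minimal sketch in Python of one common version, Cochran’s sample-size formula with a finite population correction, assuming a 95% confidence level and maximum variability (p = 0.5); it’s meant as an illustration, not a substitute for the tools above.)

```python
import math

def sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
    """Cochran's sample-size formula with a finite population correction.
    Defaults assume a 95% confidence level (z = 1.96) and maximum
    variability (p = 0.5), the usual defaults in online calculators."""
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)  # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)                  # adjust for a population of N students
    return math.ceil(n)

# e.g., a program assessing 4,000 students at a 5% margin of error
print(sample_size(4000))  # 351
```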


One WPA-L member recently mentioned her institution’s portfolio assessment process: http://www.grinnell.edu/academic/writinglab/faculty/assessment/portfolio-project/project-description

Thought this might be interesting to share, as it shows us one face of a longitudinal study; this institution follows a set of 12 hand-picked students over four years, assessing an annual portfolio.  (Students choose writing they’ve done that year that meets certain criteria, write a cover letter, and reflect.)

I’d still like to blog in the coming days about the pros/cons of value-added studies.  (There’s much literature on the subject, so I won’t get too deep into it here, but Haswell argues that the value-added approach can be incorporated into writing assessment.)

In the meantime…

Scott Jaschik at Inside Higher Ed recently reported on a new study conducted by economists James Monks and Robert Schmidt that points to negative impacts of larger class sizes on student learning.

Says Jaschik:

The results in analyzing student evaluations showed a clear (negative) impact of increasing class size. “[T]he larger the section size, the lower the self-reported amount learned, the instructor rating, the course rating,” the paper by Monks and Schmidt says. The same is true, to a slightly lesser degree, for instructors who teach more students overall (across all of their sections).

Delving further into the evaluations of student experience, the authors find that increasing course size or number of students taught overall “has a negative and statistically significant impact on the amount of critical and analytical thinking required in the course, the clarity of presentations, the effectiveness of teaching methods, the daily preparedness of the instructor for class” and many other factors.

This is some exciting news!  Not that larger classes have a negative impact, of course, but that we now have some empirical evidence pointing to what we composition pedagogues already know to be true.

I’d like to look more closely at their methodology and see if anything can be reasonably/realistically incorporated into the assessment of a writing program.

The next model I’d like to examine comes from the WPA website; it’s useful in that, along with the WPA-NCTE whitepaper on assessment, it “illustrate[s] that good assessment reflects research-based principles rooted in the discipline, is locally determined, and is used to improve teaching and learning.”  Theory into practice for the win!

– – – – – – – – – – – – – – – – – – – – – – – –

A quick look at the University of Kentucky:

  • Large university (4,000 students enrolled in the writing program annually; about 100 TAs and contingent instructors)
  • Required FYC writing program

Assessment Background and Guiding Questions:

  • In 2004, a two-course FY sequence was changed to a single, 4-credit FY course (ENG 104).
  • Assessment was undertaken in 2006 to review the new course.  It was prompted by and guided by the three questions listed below.
  • Question 1: Support of student learning—”[T]o what extent are pedagogical practices in ENG 104 encouraging and enabling students to achieve the expected learning outcomes for the course?”
  • Question 2: Instructor consistency—How consistently are instructors incorporating the writing strategies and critical thinking skills from the course outcomes into their course design and classroom pedagogy?
  • Question 3: Transfer—How did skills and experiences from ENG 104 carry on in future writing situations for students?

Assessment Methods:

UK took a well-rounded approach to its assessment.  They were “interested in showing the many nuances that evaluating student writing entails, in including student and instructor perceptions of the course itself, and in more clearly articulating what we [UK] meant by the terms identified in the learning outcomes.”  To this end, they conceived of a three-phase (but non-hierarchical) assessment—which seems to closely align with the guiding questions that prompted and drove the assessment in the first place:

  • Focus area I: course design and pedagogy. How well do instructor assignments support critical thinking and effective writing?
  • Focus area II:  student and instructor perceptions of instruction scope and quality—how well do they see instruction supporting cognitive skills and critical thinking?
  • Focus area III: to what extent does student writing fulfill learning outcomes (demonstrate “effective writing strategies and critical thinking skills”)?

Step One: Surveys

The committee began its process by addressing focus area II through surveys: they used a modified version of an existing student survey (adapting it to more directly address assessment goals), and they created an instructor survey.  Both used a 5-point Likert scale, were anonymous, and included space for narrative responses.
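
(As a side note, once 5-point Likert responses like these come back, summarizing an item is straightforward. Here’s a minimal sketch in Python with made-up responses, not UK’s actual survey data.)

```python
from collections import Counter

# Made-up responses to one 5-point Likert item
# (1 = strongly disagree ... 5 = strongly agree)
responses = [4, 5, 3, 4, 2, 5, 4, 3, 5, 4]

counts = Counter(responses)
mean = sum(responses) / len(responses)
agree_rate = sum(1 for r in responses if r >= 4) / len(responses)

print(dict(sorted(counts.items())))  # {2: 1, 3: 2, 4: 4, 5: 3}
print(f"mean = {mean:.2f}, agree or strongly agree = {agree_rate:.0%}")  # mean = 3.90, 70%
```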

By surveying students and teachers first, the committee was able not only to work from existing material (the previous student survey) but also to “build community and lay the groundwork for the more difficult work that lay ahead—devising workable rubrics to assess instructor assignments and actual student writing.”

Step Two: A Rubric Addressing Student Writing

After surveying, the committee spent some time discussing their program’s “rhetorical values” for the FYC program, in order to begin assessing focus area III:

By far, these conversations proved to be our most contentious and ultimately the most productive for the creation of our scoring rubric—an analytical (as opposed to holistic) rubric that took into account dimensions of critical thinking skills and effective writing strategies by designating five specific traits that could be scored according to varying levels of student mastery.  Disenchanted with the rubrics that were available to us from outside sources even with modification, the committee sought another approach to the formation of a rubric that could be more responsive to local needs and dynamics.  We ultimately took our lead from Bob Broad’s What We Really Value and his notion of “dynamic criteria mapping” as the process by which we would identify the values that matter most to our UK first-year writing community, and thus would help us define the criteria for the scoring rubric.

There are many different approaches an assessment team can take in reading and scoring student essays.  Our last case study, TU, used a value-added approach; at UK, they developed an analytical rubric that addressed traits of ethos, structure, analysis, evidence, and conventions, scored on a 4-point scale.
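
To make the analytic-versus-holistic distinction concrete, here’s a minimal sketch (with invented scores, not UK’s data) of what trait-by-trait scoring across those five traits might look like:

```python
# Invented analytic-rubric scores for a single essay: each of the five
# traits gets its own 1-4 score, rather than one holistic score.
TRAITS = ["ethos", "structure", "analysis", "evidence", "conventions"]

essay_scores = {"ethos": 3, "structure": 2, "analysis": 3,
                "evidence": 4, "conventions": 3}

# Trait-level reporting is the point of an analytic rubric, but an
# overall figure can still be computed when a single number is needed.
overall = sum(essay_scores[t] for t in TRAITS) / len(TRAITS)
print(f"overall = {overall:.1f} / 4")  # overall = 3.0 / 4
```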

The reading process (again, to assess focus area III), which used said rubric, proceeded as follows:

  • The committee read 250 different 10-page research essays (all taken from the same assignment from across all sections).
  • They were particularly thorough in their norming: after reading 50 essays, 6 “anchor” essays were established, and 15 additional essays were referred to throughout the reading process to “recalibrate” readers.
  • Three of those 15 recalibration essays were also used to “blindly” check (i.e., check unannounced) inter-rater reliability during the scoring session.  (See the sketch below.)
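
I don’t know which reliability statistic UK used for those blind checks, but a simple spot-check is exact (and adjacent) agreement between two readers’ scores on the same essays. A minimal sketch, again with invented scores rather than UK’s data:

```python
# Invented paired scores from two readers on the same recalibration
# essays (1-4 scale), used to spot-check inter-rater reliability.
reader_a = [3, 2, 4, 3, 3, 2, 4, 1, 3, 2]
reader_b = [3, 3, 4, 3, 2, 2, 4, 1, 3, 3]

pairs = list(zip(reader_a, reader_b))
exact = sum(a == b for a, b in pairs) / len(pairs)
adjacent = sum(abs(a - b) <= 1 for a, b in pairs) / len(pairs)  # within one point

print(f"exact agreement: {exact:.0%}")        # exact agreement: 70%
print(f"adjacent agreement: {adjacent:.0%}")  # adjacent agreement: 100%
```
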
Step Three: A Rubric to Assess Instructor Assignments

Instructor assignments (focus area I) were assessed after the student writing had been assessed:

  • Used the same 5 criteria (ethos, structure, analysis, evidence, and conventions)
  • Instead of a 5-point Likert scale, the committee used a 3-point scale, “indicating whether each criteria was explicit, implicit, or absent in each assignment.”  (A tally sketch follows below.)
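
A quick sketch of how ratings on that 3-point scale might be tallied for a single criterion (again with invented ratings, not UK’s data):

```python
from collections import Counter

# Invented "explicit" / "implicit" / "absent" ratings for the
# "analysis" criterion across seven instructor assignments.
analysis_ratings = ["explicit", "implicit", "explicit", "absent",
                    "explicit", "implicit", "explicit"]

tally = Counter(analysis_ratings)
explicit_share = tally["explicit"] / len(analysis_ratings)

print(dict(tally))  # {'explicit': 4, 'implicit': 2, 'absent': 1}
print(f"analysis made explicit in {explicit_share:.0%} of assignments")  # 57%
```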

Beyond…

UK’s assessment allowed it to identify a concrete list of strengths, weaknesses, and general observations about its writing program.  These findings, they say, have influenced the program’s orientation for new instructors, its all-staff meetings, instructor training, professional development, and its focus when encouraging instructors.

– – – – – – – – – – – – – – – – – – – – – – – –

Here’s what stuck out to me about UK’s assessment:

  • UK’s assessment was driven by questions about student learning, was focused and goal-directed, and was guided by principles of good assessment.
  • UK’s assessment was locally designed and developed—the assessment committee “engaged in a yearlong series of conversations about what [they] valued in student writing, what was essential for a good assignment, and which of the various approaches for the direct assessment of student writing seemed most applicable to [their] situation.”  This aligns with the last case study I investigated, in which Temple University found that reading and scoring student writing in a program assessment process was only successful when qualities of “good writing” were locally defined and contextualized.
  • OM&H tell us that assessment can often use instruments and resources already in place.  UK did this when surveying students—they began with a student survey already in place, simply “tweaking the language and revising the questions to more specifically address the new curricular goals.”
  • UK found it easiest to start surveying, then move into the more difficult work of developing rubrics.
  • UK’s approach to scoring essays reminds me that the simple act of creating a rubric/score sheet for reading essays can be lengthy, involved, and contentious.  (See above for descriptions of the Focus area III rubric.)
  • UK consistently kept communication open and was perennially mindful of building a community around assessment.  They mention, in particular, building community as important because of their TA/contingent faculty.
  • Guiding principles were selective.  UK looked through WPA-NCTE assessment principles and listed four that were seen as most important for their own needs and purposes (assessment should build community; it should be both descriptive and informative; it should be localized, aware of teachers’ rhetorical values, and use a range of data [not just direct assessment] to appropriately contextualize the WPA; and it should be low-stakes and non-threatening for instructors and students).

Creative Commons License
Investigating Writing Program Assessment by Christina M. LaVecchia is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
Permissions beyond the scope of this license may be available at https://ucwpassessment.wordpress.com/contact/.