Sunday, March 20, 2011

Role of Assessment

What's the role of assessment in any theory of composition?

16 comments:

  1. No matter whether one aligns him/herself with the expressive, cognitive, historical, or social-epistemic theories of composition, assessment plays a major role in the writing curriculum and in how the writing instructor responds to student work. In Yancey’s essays, we are guided through a history of writing assessment since the inception of the CCCC. Thus far, we have seen four stages of assessment working on a global scale: 1) objective tests (usually multiple choice and indirect), 2) holistically-scored essays, 3) portfolios of writing, and 4) a combination of the previous methods with the addition of digital composition. Each change has indicated a shift in attitude about the purpose of writing.

    At the onset of the CCCC, the attitude about assessment in terms of composition was one that focused on student error. This is evidenced by standardized tests and placement tests that required students to identify proper grammar, usage, and vocabulary to indicate their level of mastery in writing. For students, these tests could have dire consequences, including taking (and paying for) remedial writing courses, even if the student was actually a good writer. During the 1970s we saw a shift in writing assessment, which moved away from indirect assessment of writing. This was in large part a result of compositionists’ formation of their own identities and theories. As expressionism and social-epistemic theories became more popular, assessments started to change as well. Writing instructors knew there had to be a more valid way of assessing student writing for high-stakes measures, and over the next forty years we saw changes in assessments that more closely aligned with these theories.

    Today, as Yancey points out, global writing assessments are "complicated, dynamic, and [ . . . ] in flux" (Writing Assessment, 7). Perhaps this is because literacy as we know it is shifting. In his book Changing Our Minds, Miles Myers (1996) historicizes how literacy has shaped teaching and assessment trends in the United States since the 1800s. New technologies and changing attitudes from society at large are often precursors to shifts in literacy. Right now, we are experiencing changes in both of these areas. Web 2.0 has revolutionized how we know writing, and the implementation of federally mandated high-stakes testing has the public angry. The emphasis on the effectiveness of instructors, especially at the K-12 level, over the past decade is probably part of the reason for the shift from assessments that focus on student errors to assessments that focus on how well a curriculum is working or how well a teacher performs, which of course has educators calling for writing assessments that are valid, not just cost-efficient.

    In our classrooms, we must ask ourselves: what is the role of writing assessment? Cognitivists may believe that the goal of a writing assessment is to check for appropriate use of form or correct use of grammar and usage. An expressionist may believe that writing assessments should show growth in the ability to think and articulate ideas clearly. A social-epistemologist may believe that writing assessments should reflect an understanding of oneself in relation to society. No matter which theory one subscribes to, it is important that the test developed is a direct measure of the type of writing being taught. If an assessment is not valid, it should not be used.

  2. What’s the role of assessment in any theory of composition?

    At first, I was thrown off by the phrasing of this question—does assessment play a role in every theory of composition? The readings for this week are focused on writing in educational settings, and I considered assessment to be limited to composition in educational settings—something that perhaps not all theories of composing are directly connected to, or limited to. However, reading closely allowed me to see that assessment does have a role in every theory of composing, regardless of the locations of composition considered in the theory.

    So then, what are the roles of assessment in composing theory? A few stick out to me: assessment as a social component of composition, assessment as an acknowledgment of key concepts, and assessment as a location of comp. theory discourse. Assessment acts as a social exchange that occurs as part of a writing process. For example, in a discussion of responding to student texts, Haswell negotiates the student writing assessment he’s in charge of as a classroom teacher. He finds that reading part of a paper with a student, and conversing about issues off the page can be more helpful for students than seeing flat comments on the page (1281). This social exchange where he and students converse about the writing is a method of assessment that acts as social interaction, in line with theories of composition that promote composition as a social practice.

    Another role that assessment sometimes plays in theories of composition is that of an acknowledgment of key concepts. In other words, the methods of assessment, and the things assessed, reveal what a theory of composition privileges. In the “Primer,” Yancey notes that the WPA outcomes “are statements about what students know and can do, in this case at the conclusion of a first-year writing program of any variety.” The WPA Outcomes Statement is a declaration of what components of writing might be integral to an intro-level college writing class. The addition of a fifth outcome in 2008 (on using digital technologies) demonstrates that outcomes should continue to reflect what is considered key to the composition theory underlying the location being assessed.

    Finally, I see assessment playing a role as a discussion starter (or ender). Assessment, if done correctly, should do real work. I’m thinking of “The Phenomenology of Error” and what gets noticed or ignored in assessment—are we assessing merely what is easy to assess (but what may not be important, or even noticeable in other contexts) or are we assessing something of value? And from there, we have to think about what holds value in composition, who is being affected by the assessment and their relationship to the work being assessed, who should be helped by the assessment, etc. Assessment seems a contact zone of sorts, where people in different positions (with different theories of composition) contemplate questions similar to the ones above, in order to negotiate [assessment] action.

  3. Assessment complicates composition theory in that it forces responders to student writing to examine how they practice their given theoretical framework. It is fairly simple for an instructor to assert that they value the content of a student’s writing. Yet, confronted with a stack of fifty papers, it is just as simple to revert back to scanning for surface errors, because they are easily detectable and warrant minimal response from the instructor. Responding to student writing is time consuming, arduous, and uncomfortable, because the “teacher-reader” must simultaneously assess the student’s writing, how the instructor’s pedagogy reflects within that writing, and then place a value on the intersection of the two. In other words, assessment is an ugly and complicated beast, as illustrated/complicated by the readings this week.
    Williams’s article, “The Phenomenology of Error,” portrays writing assessment as inherently difficult because of the varied perceptions of what constitutes an “error.” For Williams, it seems, this is the result of a grammatical system proposed by “authorities” who emphasize correctness instead of content. Extending this critique, Haswell explores shortcut responses to student writing – the ways in which these shortcuts are definitely easier for the “teacher-readers” responding to the writing (in that the shortcuts create a system that focuses on surface errors, which are most easily detectable or critiqued), yet fail to communicate effectively with students. Yancey examines these assessment strategies through an exploration of the historical/educational contexts from which they developed, illustrating that assessment strategies are slow to respond to changing perceptions of the nature of composition and composing practices.
    From this, it seems that assessment is closely linked to composition theory (in that a theory of assessment will determine how one practices composition theory in pedagogy). However, as Yancey illustrates (and this is something that Elizabeth touches on in her blog as well), the implementation of new assessment strategies based on changing perceptions of what should constitute assessment seems to be alarmingly slow. The WPA only added a fifth outcome concerning digital technologies three years ago, not to mention the fact that multiple choice still dominates the ACT, FCAT, and AP exams. I wonder then, is it the perception of assessment that is slow to change or is it the implementation? That is, does the legislator believe the FCAT is really effective at measuring whatever it is it claims to measure, or is it just the least costly tool of measurement?

  4. Assessment—both as a subject area for research and as a task undertaken by writing teachers, writing students and writing programs—provides (forces?) an opportunity to articulate values. What constitutes “good” writing? Who decides? What agendas and power structures are furthered by promoting any given assessment method or type of writing? What do we (as Yancey notes in “Historicizing Writing Assessment”) hope to learn, and what type of knowledge do we hope to produce, with any given mode of assessment?
    In “The Phenomenology of Error,” Joseph M. Williams finds himself amused and perplexed by emotional responses to writing errors and asks, “But if we do compare serious nonlinguistic gaffes to errors of usage, how can we not be puzzled over why so much heat is invested in condemning a violation whose consequence impinges not at all on our personal space?” (415). For Williams, accidentally knocking into a stranger on a street is much more jolting than reading a piece of writing that contains noticeable grammar and usage errors, and he is surprised that these errors seem to elicit responses suggesting that the writer owes the reader an apology. I would argue that this emotional response arises primarily because discourse does occupy a particular social space with understood if not clearly defined rules, and that to break these rules constitutes a type of trespassing. Though I think most of us can read Williams’s piece and laugh along with him at the indignation with which self-titled grammarians condemn errors they themselves commit, sometimes even in the exact sentence in which they name the rule, I’m not sure that this is so much because we have moved beyond indignation as because our ideas about what type of assessment is most appropriate (and therefore what types of writing and teaching are most appropriate) have changed.
    As “Historicizing Writing Assessment” notes, our writing assessments in the third wave no longer rely strictly upon experts and upon “correct form” but are more interested in complexity of thought and student reflection within a localized social space. Because the objective expert is not valued as highly in this newer model, trespassing becomes much more passive—not so much a failure to assert authority—but an unwillingness to engage in critical and reflective thought. Perhaps it is this valorization of effort and engagement on the part of teachers and students that makes Haswell’s piece so necessary and interesting. If there is no longer a “correct” answer or response but simply one that is more or less reflective and aware of a diverse set of contexts, how do we provide “good responses” in a way that does not exhaust us?

  5. This comment has been removed by the author.

  6. This comment has been removed by the author.

  7. When I think of writing assessment and the role that it plays in our theories of composition I can’t help but think of what is one of my favorite passages from Errors and Expectations. In this oft quoted passage, Shaughnessy writes:

    “I keep in my files a small folder of student papers that go back ten years in my teaching career. They are the first papers I ever read by severely unprepared freshman writers and I remember clearly the day I received them. The students who wrote the papers were then enrolled in the SEEK [Search for Education, Elevation, and Knowledge] Program at City College . . . . I remember sitting alone in the worn urban classroom where my students had just written their first essays and where I now began to read them, hoping to be able to assess quickly the sort of task that lay ahead of us that semester. But the writing was so stunningly unskilled that I could not begin to define the task nor even the sort of difficulties. I could only sit there, reading and re-reading the alien papers, wondering what had gone wrong and trying to understand what I at this eleventh hour of my students’ academic lives could do about it.”

    Here, Shaughnessy emphasizes her desire to assess these papers as a way of understanding the needs of her students and her task as a writing instructor. This is where Errors and Expectations starts: assessment. It ends, however, with the logic behind these errors. Or, in other words, a new frame of assessment that is no longer revealing what is “alien” within the Open Admissions university. Shaughnessy’s initial assessment of these students, as clearly seen here, is filtered through many other factors – among which race, class, and gender figure prominently. We see these other factors come to fruition in an early description of her students:

    [A]cademic winners and losers from the best and worst high schools in the country, the children of the lettered and the illiterate, the blue-collared, the white-collared, and the unemployed, some who could barely afford the subway fare to school and a few who came in the new cars their parents had given them as a reward for staying in New York to go to college; in short, the sons and daughters of New Yorkers, reflecting that city’s intense, troubled version of America.

    Her assessment of these students confronts the diversity of backgrounds that converged in American universities during Open Admissions. These two passages illustrate, in my opinion, the close interconnection between assessment, the understanding of students that develops through this practice, and the ways in which this practice is more accurately located within a broader social milieu rather than the isolation of the classroom.

    The theories of composition that emerge as we create knowledge about our own practices through assessment (Yancey) often reflect on the writing student in vague, undifferentiated terms. Here, students are often represented as a stock character. However, in a text such as Shaughnessy’s, which illuminates the Basic Writer but mentions few individuals by name, we know that behind the Basic Writer to which she applies her theory of composition is a Basic Writer – many, in fact. For example, one of Shaughnessy’s first students, Lottie Wilkins, surely came to play an important role in Shaughnessy’s work.

  8. With this in mind, there seem to be four primary roles of assessment for our theories of composition. First, assessment provides a critical framework (complete with the ideologies of a community) through which we come to know our students; this practice plays a central role in the interpersonal relationships that develop within the classroom. This role includes the emotional component of writing assessment discussed by Marian via Williams, as well as Logan’s observation via Haswell that the frames through which we come to know our students are not discrete from the frameworks of our practice of assessment. Second, our work in assessment forces us, both collectively and individually, to articulate what might otherwise remain silent assumptions buried as givens within our theories of composition (Haswell). Or, as Elizabeth suggests, work in assessment facilitates the “acknowledgement of key terms”: for example, criteria, genre and mode rules, disciplinary style, and standards. Third, assessment is not only a way that we come to know our students; it is also a way in which we create knowledge about our own practices (Yancey). And finally, assessment is one node within a network of practices that illuminates the ways in which the classroom is an inadequate means of locating the practices that shape our engagement with students. Instead, our work in assessment illustrates the importance of acknowledging the ways in which the classroom and broader contexts inform each other. Angie offers a great example of this in her reference to current scholarship on literacy. As “literacy as we know it” shifts, so too does our understanding of assessment. It is in this way that assessment, as a node within a broader network, is “complicated, dynamic and […] in flux” (Yancey).

  9. Assessment seems to have two key functions. One, it combines past theories of composition with new issues that will affect future theories of composition. Yancey's wave metaphor is useful here: "One way to historicize those changes is to think of them as occurring in overlapping waves, with one wave feeding into another but without completely displacing waves that came before" (1186). Just as the second-wave essay tests informed the third-wave portfolio assessment, past theories of composition often provide the institutional authority for new methods of writing assessment.

    The second focus of writing assessment is what it reveals about our own teaching practices. Yancey attributes this concern to the third wave. Writing assessment "reflects back to us that practice, the assumptions undergirding it, the discrepancy between what it is that we say we value and what we enact. It helps us understand, critique and enhance our own practice, in other words, because its location--in practice--and because it makes that practice visible and thus accessible to change" (1195). I know from the assessments I did at UD with Melissa that a teacher's rubric often indicates that content is valued over form, but his or her marginal comments to the student are often intensely focused on surface-level errors; thus, revision becomes an exercise in fixing commas, contractions, and colloquialisms but rarely in making meaningful changes to a piece of writing. Looking at Haswell's chronological list of shortcuts (1272-73), I was struck by how many of them likely result in just surface-level comments.

    Error has always seemed to be a concern. Williams describes error this way: "We were all locating error in very different places. For all of us, obviously enough, error is in the essay, on the page, because that is where it physically exists. But of course, to be in the essay, it first has to be in the student. But before that, it has to be listed in a book somewhere. And before that in the mind of the writer of the handbook. And finally, a form of the error has to be in the teacher who resonated--or not--to the error on the page on the basis of the error listed in the handbook" (417). Williams historicizes error as something that has been embedded in writing pedagogies. According to Yancey, error, during the first wave, determined which classrooms students entered. During the second wave, error entered the classroom but functioned as a tool that told teachers where to start and what to do next. During the current wave, error is seen as its own text, a way for students and teachers to make meaning.

    Assessment makes both theory and practice visible so that we can see where we've been, where we're going, and how our classroom practices can be improved to better enact the practices we really value.

  10. What's the role of assessment in any theory of composition?

    It seems to me as though assessment is fairly close to the backbone of a theory of composition from the perspective of the kind of people who currently do composition theory: that is, educators and those writing for educators. Evaluation - of current "fluency", of error, of progress, of retention, of attention - is central to an instructor's interaction with students... otherwise, what are we here for?

    What, exactly, we are assessing, though, is the evergreen point of contention for the matrix of people and contexts that overlap the writing classroom. What we are assessing provides the frame for understanding assessment and for evaluating what may or may not be error. As Yancey (1999) points out, assessment - what gets assessed and by what standards and why - is an evolving paradigm. A range of causes feed this evolution, not the least of which being the natural evolution of the use of language, both written and spoken. With this in mind it is untenable to stake out an unqualified "standard" for the uses and evaluation of English. At the same time, we need a standard to work with or else what are we even teaching?

    What is interesting to me with regard to assessment and error is something that Williams touches on and that we've discussed before in class: the phenomenon of teachers using vernacular language yet assessing based on an idealized form of the prestige language. I think this disconnect is also at work in a broader sense between spoken and written language for the general public. That is to say, we will self-consciously write differently than we usually speak, as though writing were a holier exercise of language than speaking. On the one hand, I wonder why we do not or cannot write the way we speak. Very few of us consistently use the prestige language of the academy, and yet we make ourselves understood perfectly well in general, even sometimes performing a convincing membership in the academy while speaking. On the other hand, having seen some people write the way they speak, the writing seems unintelligible even while the hearing is not. We seem to subconsciously round off the errors and forget the chaos of spoken language through hearing.

    All of this is to say that assessment in writing, like all education, is a constructed (and therefore unstable) exercise in discourse management. Our writing may have little relationship to our speech, but that doesn't matter for the purposes of assessment. What we do through assessment is reinforce the current standards of the discourse that the students are working in.

  11. I think, as was noted in one of our readings, that the role of assessment in most theories of composition, aside from those that deal explicitly with assessment, has been one of marginalization and afterthought or invisibility and implicitness. On one hand, no one wants to talk about assessment because the only thing worse than having to grade papers is having to talk about how one grades papers. On another hand, I think most theories of comp (at least the work of most theorists that we have read thus far this semester) look more at the work (processes, products, progressions) of students than at that of the teacher. Only when we consider the role of the teacher do we begin to conceive of assessment as an object of consideration. And the two perspectives -- one focused on the student's work and the other focused on the teacher's work -- are interdependent in terms of our understanding of both and each. Either way, it goes back to Berlin's ideology and the other elements of theory that are in place.

    For instance, by Yancey's telling (as many of you have explained here), the original model of assessment said that there is an objective (albeit indirect) standard of assessment, an objective means of judging students' ability to "write" well. And with so-called objectivity comes little room for discussion. So there is less need to have assessment components to theory.

    I also think that elements of difference and community create the need to reexamine our assessment models. As Marian points out, "agendas and power structures" are advanced any time we assess students' papers. And as there is more difference among students, teachers, and administrators, there is more conflict about what agendas or power structures should be served by the assessment. Which creates the need for more conversation.

    So maybe the role of assessment in comp theory is a growing one, or a more relevant one, made so by the other changes that we see.

  12. In light of the Williams and Dr. Yancey articles, I wanted to think a little about error in the context of a creative writing classroom. Dr. Yancey writes about the pitfalls we fall into when practice does not conform to theory. As an example, she mentions the first wave of writing assessment, when “considerable comment [was] provided on how important response is in helping students,” but the assessment method of objective testing included “no provision for response” (1198). This is similar to the amusing contradiction Williams points out in Barnet and Stubbs’s textbook, where the two authors mandate against negative sentence constructions and then immediately violate their own rule (419). Williams argues that we approach student writing with a “pre-reflexive experience of error” (420); that is, we go into assessment looking for error. In fact, when grading a student paper, I sometimes catch myself thinking ‘I need something to say,’ and I am therefore unconsciously scanning for flaws instead of treating the student-writer as writer.

    A similar disjunction between practice and theory occurs in my 3310 creative writing workshop. The goal of the course is to teach craft and technique, which means, more often than not, I am teaching rules. In the course of the semester, I often find myself tossing off this familiar line: “Oh, sure, you can break the rules once you know what you are doing.” I reflexively say this line because everything I have them reading breaks the rules, much like Barnet and Stubbs or E.B. White break them in the Williams examples. In fact, very few of the masterworks I have my students read conform to the “rules” as I’ve adumbrated them in class. I’m not sure how this applies to the composition classroom, or whether comp teachers have a similar feeling that the artificial set of rules to which we hold students is not the same set by which we judge ourselves, and certainly not our mentors and favorite authors/scholars. There seems to be an unspoken, unconscious agenda to the disjunction in my practice, one that reifies an undemocratic system of power in writing assessments. For one, I’m probably replicating the way I was taught to write fiction. Second, this issue reminds me of Fulkerson’s notion of modal confusion, whereby a teacher uses one classroom methodology to teach and a different set of value judgments for assessment.

  13. In many ways, assessment as it is enacted runs parallel to the classroom practice of writing instruction, which runs parallel—or perhaps tangential—to what might be called an over-arching conception or theory of composition. Put another way, the abstract conception of composition is often located outside of what writing teachers do in their classrooms, which is in turn informed by assessments that are also (largely) located outside the classroom. In my own experience as a writing teacher, I must balance the need to provide instruction that aligns with what I know theoretically about writing, with the need to prepare my students adequately for whatever assessments await them.
    For example, many of my students will take the AP English Language and Composition exam, an assessment that does not fit with my understanding of composition as a self-directed, recursive, and multi-step process because of its prompt-directed, impromptu, single-score structure (which is securely grounded in second-wave assessment, as described in Yancey's articles). To prepare my students for this assessment, I must teach at least in the direction of the test by providing my students opportunities to explore, understand, and practice the types of tasks they will be expected to complete during those four hours. This does not necessarily match my ideas about writing, but it is required of me by the administrative Powers That Be in my institutional context. To balance the artificiality of the AP assessment, I spend time with my students during the course of the year working on what I consider real writing: free-writing-esque Literary Lingerings inspired by texts selected for their relevance to my students' lives; responses to rich, thought-provoking literature; rhetorical analyses of texts focused on identifying authorial moves that the students themselves might incorporate into their own writing to make it more effective or powerful; and extended, multi-draft essay assignments designed to give students an opportunity to argue, extend, and support their own ideas. I respond to their writing in multiple ways—an often onerous and frustrating task, as Haswell so aptly describes—I meet with them in conferences, I give whole-class and small-group minilessons on errors and techniques that need to be eradicated or incorporated in their writing. This balancing act is informed by my overall understanding of composition as a theory, but it is also informed by the assessment looming outside my classroom.
    I imagine that many—most?—writing teachers juggle similar sets of concerns in their own classrooms. Sometimes it seems almost a luxury to have a true theory of composition that operates as an abstract understanding of the nature of the art and practice of writing; perhaps it would be easier to simply follow the rules of the course description, the standards, the test, the handbook, and let the chips fall where they may. Of course, this is unacceptable to those of us who see clearly the disjuncture between The Test and the reality of writing. I applaud the idea of portfolio assessments because they seem to be much closer to the real nature and goals of writing and writing instruction, but they are, as Yancey says, messy and expensive, so many states and institutions have not adopted them. So it continues to fall to the classroom teacher to balance the external assessment with the theory through her own instructional practices.

  14. One thing is for certain: “the role” of assessment is not going away. One of the reasons it is not going away is because teachers, compositionists, and administrators are constantly trying to answer the question: what is “good writing”? Assessment is necessary because it attempts to distinguish between good writing and, well, not good writing.

    One question that thinking about assessment raises is: what am I supposed to teach my students? I have sitting before me a fresh electronic “stack” of my students’ research assignments, and after reading this week’s readings, I wonder, how should I approach these? Is Haswell’s assertion correct, that “simplifying a complex task may take the challenge and fascination out of it and end up making it laborious in a new way” (1263)? He asks, “How can that formative act of response be made more workable, for teacher and for student?” (1263). As teachers, we are (at least I am) repeatedly confronted by this obstacle. However, I’m disheartened when I read that students “want lots and certain kinds of response, but have trouble doing much with what they ask for” (Haswell 1270). Perhaps, then, it’s not a matter of the length or amount of response. It’s apparent that I need to work with my students in helping them interpret and negotiate my response in a way that helps them improve as writers. It is also obvious that I need to be aware of how my responses might be construed, but I wonder, how can I best do this? I guess a shortcut is not an option.

    In addition to Haswell’s assertion, I was also intrigued by Williams’. He writes: “Real readers reading real texts don’t respond to error as grammarians want them to” (414). He explains, “Because errors seem to exist in so many places, we should not be surprised that we do not agree among ourselves about how to identify it, or that we do not respond to the same error uniformly” (417). This reminded me of the recent CCC roundtable we had. Granted, we weren’t identifying “errors,” but we were aiming to evaluate the example manuscript. Members of the group arrived at different evaluations, a result that reinforces the underlying idea Williams points to. And this is confusing for both new and experienced teachers. Williams offers a suggestion: “…if we could read those student essays unreflexively, if we could make the ordinary kind of contract with those texts that we make with other kinds of texts, then we could find many fewer errors” (420). Right, but what should be considered “errors”?

    Yancey offers the wave metaphor in “Historicizing Writing Assessment,” which provides a helpful tool for understanding assessment. I’m curious: twelve years later, what are those future waves? Are we still in the third wave? Or have we begun the fourth?

    In sum, the role of assessment in any theory of composition is inevitable; it’s contextual, but should also consider global factors; it’s driven by multiple, overlapping forces such as teachers, administrators, and students; and, as I’ve come to find out, it’s not always a reliable indicator of success or future achievement. And according to Yancey, “…assessment must be specific, purposeful, contextual, ethical” (1201). Taking all of this into consideration, I am forced to think about the validity of any assessment. I’m reminded of a discussion we had earlier in the semester about teachers purporting to want one thing (based on the assignment prompt) while actually evaluating something else. There was an issue of miscommunication between teacher and student. In “A Primer,” Yancey asks, “Who is authorized and who has the appropriate expertise to make the best judgments?” (3). And I wonder, am I?

    ReplyDelete
  15. What strikes me about assessment and its role in composition theory is that, based on my understanding, it is often seen as an end-goal, something that takes place after everything else. Yancey's texts certainly suggest otherwise, looking specifically at the third wave's idea of "consequential validity," or the notion that an assessment is valid to the degree that it helps a student learn. Any pedagogical composition theory, I would suggest, that doesn't seek to aid student understanding of composition is flawed, and therefore assessment must be a valid part of the theory because it is one of the few ways of gauging the theory's effectiveness.
    As evidenced in the Yancey pieces, assessment casts a wide net in composition classrooms--student assessment, teaching assessment, and program assessment. Assessment informs a student on how well they can compose, informs the teacher on how well they are teaching that student to compose, and informs the program whether its teachers can effectively teach the institution's idea of composition pedagogy. The creation of outcomes to help dictate assessment in the current age, or the use of portfolios as both a form of classroom grading as well as a means of assessment, are both a result of our current idea that student learning rather than simply student error should be central to our theories of composition.
    Haswell and Williams present interesting challenges to assessment, mainly whether teachers are any good at it. I will admit, by the end of Williams' piece, I could not accurately state all 100 of his errors through my first reading. Also, admittedly, I found a blurring of student and teacher perspectives on composition in Haswell's piece when he outlines concepts like, "Students place the most importance on vocabulary, teachers on substance." I would like to simply attribute it to my fledgling ideas of composition theory, but I think it also has to do with my fledgling experience with assessment and my ability to differentiate my student way of thinking from my teacher way of thinking about a text. To echo Jen, reading that students want as much teacher commentary as they can get, but don't know what to do with it, is very disheartening for me. Right now, my composition theory has as much to do with improvement as it does with instruction. But I am often confused about how I can create that improvement and whether or not my instruction is meeting those needs. Assessment is, in my mind, one of the best ways to figure this out, and the current push regarding outcomes is a great place for a new TA like myself to start.

    ReplyDelete
  16. As Marian articulated so well, assessment has to do with deciding what does and does not constitute “good writing.” As assessment has moved away from objective measures of error implemented outside the classroom and towards more holistic approaches tied more and more closely to the classroom and specific curricula (as detailed in Yancey’s “Historicizing” piece), I feel that there is a certain reluctance to even label student compositions “good” or “bad,” particularly bad. This is a productive reluctance in light of the history of writing assessment, but the necessity of assessment provides a backstop and forces compositionists to question what exactly we are trying to teach students. If we are to determine a viable curriculum, we must also have a way of determining whether or not (and/or to what degree) students have learned it. Assessment provides a framework and rationale for this task.

    But even this definition seems situated in the second or third wave and beyond. As Yancey points out, location of assessment is critical in determining its role. Standardized placement tests that occur before the composition class cannot be said to assess how well students have absorbed the lessons of a given curriculum. So, assessment tells us not only what we value in a theory of composition, but when and where we value it as well.

    I was particularly struck by Yancey’s remark in “Historicizing” about the third wave: that locating assessment in practice also allows us to reflect on that practice and, ideally, to improve it by bringing it closer to our stated values. In my own pedagogy, I can see clearly just how frequently my theory and my practice are in conflict. Haswell’s piece identifies the many complexities of response, which helps to explain why practicing what we preach is not a simple matter. When I respond to student writing, I bring not only what I know of composition theory, but also my own experience with writing and response throughout my education, much of which has been based on ideas that seem much closer to first or second wave assessment than to the third or beyond. Thinking carefully not only about a theory of composition, but also specifically about a theory of assessment, allows me to make visible those beliefs about “good writing” that may run counter to what I would like to be teaching or intend to teach, but remain ingrained in me from my own training. And by making them visible, I can hope to improve my response practice.

    ReplyDelete