Popham, Chapter 11 Pondertime (p. 267, #1, #2), Chapter 12 Pondertime (p. 303, #2, #4), Due February 22, 2012.

Chapter 11 Pondertime (p. 267, #1, #2)
1. Why is it difficult to generate discrimination indices for performance assessments consisting of only one or two fairly elaborate tasks?

Popham (2011) writes that “because educators have far less experience in using (and improving) performance assessments and portfolio assessments, there isn’t really a delightful set of improvement procedures available for those assessment strategies” (p. 265).

But let me see if I can tease out a reason why this might be so.  Discrimination indices are based on students getting an answer flat-out right or flat-out wrong, which is easy to determine on a selected-response test.  On a performance assessment consisting of only one or two fairly elaborate tasks, partial credit instead spreads student scores out along a continuum.  Calling a response “right” or “wrong” on either task then depends on where the cut score is set, which doesn’t really indicate exactly what a student knows or doesn’t know on the item, merely that some knew more and some knew less.
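To make that cut-score point concrete, here is a quick sketch (my own illustration, not something from Popham) of the classic upper-minus-lower discrimination index, D = p_upper − p_lower, applied to one partial-credit task.  The student data and function names are made up; the point is just how much D depends on where the “pass” line is drawn.

```python
# A rough, hypothetical illustration (not from Popham): the classic
# upper-minus-lower discrimination index applied to one partial-credit task.
def discrimination_index(scores, cut_score, group_fraction=0.27):
    """scores: list of (total_test_score, task_score) pairs, one per student.
    cut_score: task score at or above which we call the task 'passed'."""
    ranked = sorted(scores, key=lambda s: s[0], reverse=True)   # rank students by total score
    n = max(1, int(len(ranked) * group_fraction))               # size of the upper/lower groups
    upper, lower = ranked[:n], ranked[-n:]
    p_upper = sum(task >= cut_score for _, task in upper) / n   # proportion 'passing' in upper group
    p_lower = sum(task >= cut_score for _, task in lower) / n   # proportion 'passing' in lower group
    return p_upper - p_lower

# Made-up scores for ten students on a single task graded 0-6 with partial credit.
students = [(11, 6), (10, 5), (9, 5), (8, 4), (7, 4),
            (6, 4), (5, 3), (4, 3), (3, 3), (2, 2)]
print(discrimination_index(students, cut_score=5))  # 1.0 -- 'pass' means 5 or 6
print(discrimination_index(students, cut_score=3))  # 0.5 -- 'pass' means 3 or more
```

With only one or two elaborate tasks (and therefore tiny upper and lower groups), the index is both coarse and hyper-sensitive to that cut-score choice.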

Without a breakdown of scoring for each sub-task or criterion, it would be impossible to say which part of instruction was weakest, and I don’t think that is being given to us in this example.

And I have no idea where to start if the two elaborate tasks are being graded holistically…

2. If you found there was a conflict between judgmental and empirical evidence regarding the merits of a particular item, which form of evidence would you be inclined to believe?

In the case of a conflict between judgmental and empirical evidence, I would tend to go with judgmental evidence, since there is nothing really comparable to human feedback on an exact question.  However, now that I’ve said that, the number geek in me loves the idea of getting students unfamiliar with the instruction to also take my test, so that I have an approximation of discriminators from uninstructed groups.  It seems like that would take away the bias of colleagues who “want to help me out” and might therefore shy away from giving me truly objective feedback.

Chapter 12 Pondertime (p. 303, #2, #4)
2. What strategies do you believe would be most effective in encouraging more teachers to adopt the formative-assessment process in their own classrooms?

A couple of ideas spring to mind; let’s take a look at each one in turn.

First, provide a technology or trick that makes it easy to get real-time feedback from the class on how much they are understanding.  I think teachers try to do this all the time with the hopelessly incomplete and inaccurate question posed to a classroom full of scribbling or sometimes distracted kids:  “How are people doing? Are you getting this?”

But that’s just enabling a zeroth-order, simplest-possible interpretation of formative assessment.  The real strategy for encouraging more teachers is to make sure they understand formative assessment, and then wage an all-out education blitz (billboards?  formative-assessment trailers on all campaign ads?) making the case that it is a useful strategy.

I was most curious to see that the initial positive impetus for formative assessment happened right around ESEA/NCLB.  That may have tainted it, to be fair, since it is widely held that ESEA/NCLB is at best a large stick-without-a-carrot, and at worst a failed effort.

The re-authorization of ESEA/NCLB is by no means certain, but were it to be championed or improved dramatically, could we write our legislators and ask them to mention formative assessment in the re-authorization legislation?  Hmmm…

4. The chapter was concluded with some speculation about why it is that formative assessment is not used more widely.  If you had to choose the single most important impediment that prevents more teachers from employing formative assessment, what would this one impediment be?  Do you think it is possible to remove this impediment?

In my opinion the single biggest impediment to teachers doing formative assessment is inertia, or as Popham (2011) writes, “the inherent difficulty of getting people to change their ways” (p. 297).  It seems like teachers are bombarded these days with methods and workshops that claim to make learning more effective, and there is no planning-period time to actually improve or innovate on lessons.  Thus teachers are stuck in a cycle of wanting to improve lessons but facing large workloads of grading and keeping up, so that improving lessons takes a back seat.  So I guess I am actually saying that I think the biggest impediment is resistance to change, magnified by the utter paucity of planning-period time.  Removing this impediment would mean increasing planning-period time; in other words, give teachers more structured time to re-think lessons.

Recall that all teachers (me included) think they are doing a pretty OK job right now.  Convincing us otherwise, and showing that dramatic improvement is possible with a little change or a modest effort, is the key to overcoming the inertia impediment.

References

Popham, W. J. (2011). Classroom Assessment: What Teachers Need to Know (6th ed.). Boston, MA: Pearson Education.


[Figure: Definition of Formative Assessment]

[Figure: Graphical Depiction of Typical Learning Progression (Popham, p. 282)]

[Figure: The Four Levels of Formative Assessment (Popham, p. 287)]