EDU6978: Week 06: Due 2012-08-05

Reflection
Some really good reading and thinking this week on assessing not only formatively, but also assessing higher-order thinking.  Overall I think we were a little hand-wavy (at least I was) on exactly how we were going to do that.  What remains for me is to
1. be absolutely explicit about what you are assessing
2. connect your assessment content with the relevant standards
3. be clear about the rubrics you are using to assess, and realize that they can also be used in instruction.

That seems to be the most problematic for me.  You can create a project that seems interesting to you.  But then you have to make sure it is interesting to the state (i.e. it connects with Standards) and then that it connects with students (authenticity) and then that it connects with them in a deep way (assessing for learning).

Wow.

Schedule
image

Notes
(Verbatim from source unless italic)

Embedded Formative Assessment (Wiliam, 2011)

Chapter 5
Providing Feedback That Moves Learning Forward

It seems obvious that feedback to students about their work should help them learn, but it turns out that providing effective feedback is far more difficult than it appears. Much of the feedback that students get has little or no effect on their learning, and some kinds of feedback are actually counterproductive. This chapter reviews the research on feedback; why some kinds of feedback are, at best, useless and, at worst, actually lower performance; and how teachers can give their students feedback that moves learning forward.

Wiliam, Dylan (2011-05-01). Embedded Formative Assessment (Kindle Locations 2206-2209). Ingram Distribution. Kindle Edition.

  • The Quality of Feedback
    • The students receiving the constructive feedback learned twice as fast as the control-group students—in other words, they learned in one week what the other students took two weeks to learn.  (Elawar & Corno, 1985).
    • Example of research on feedback types: students were observed who were given only scores, only comments, or both scores and comments (Butler, 1988).
    • Those given only scores made no progress from the first lesson to the second—their work was no better.
    • The students given only comments scored, on average, 30 percent higher on the work done in the second lesson than that done in the first (although, of course, they did not know this because they had not been given scores), and all these students indicated that they wanted to carry on doing similar work.
    • What do you suppose happened for the students given scores + comments?
    • Many think that scores + comments is the best of both worlds.
    • This study (and others like it, to follow) shows that if teachers are providing careful diagnostic comments and then putting a score or a grade on the work, they are wasting their time.
    • Another study (Butler, 1987) used four feedback groups:
      • comments
      • grades
      • written praise
      • no feedback at all
    • Result:  only those getting comments had improved.  Grades and praise were comparable in effect to no feedback at all.
    • Follow-up questionnaire.  Specifically, the questionnaire was designed to elicit whether the students attributed their expenditure of effort and their success to ego-related factors or to task-related factors…
    • As noted, the provision of grades and written praise had no effect on achievement; their only effect was to increase the sense of ego-involvement. This, as anyone involved in guidance and counseling work in schools knows, is bad news.
    • It is the quality rather than the quantity of praise that is important, and in particular, teacher praise is far more effective if it is infrequent, credible, contingent, specific, and genuine (Brophy, 1981).
    • The timing of feedback is also crucial.  Not too early!
    • Example of computer usage versus pencil and paper.  Students using pencil and paper had more “mindfulness” and thus learned more.
    • Students given the scaffolded response learned more and retained their learning longer than those given full solutions (Day & Cordón, 1993).
    • Example of feedback that doesn’t give answers, but instead asks the student to take another look at the problem, and then promises to “be back in a few minutes.”  But I would probably say that and never come back…
    • Example of art critique that lists, rubric-style, what needs to happen, and then merely gives a check or “x”.
    • However, from their observations, the researchers indicated that whether the feedback was given orally or in writing was much less important than the fact that group 2 was given time, in class, to use the feedback to improve their work.  [Argument for a flipped classroom!!]
    • Some types of feedback actually lower performance.  (Kluger & DeNisi, 1996).
    • This was concluded by looking at a bunch of studies.  Of course, these studies varied in their quality, and to be sure that poor-quality studies were not being included, Kluger and DeNisi established a number of criteria for inclusion in their review.
    • Only 4% of studies were deemed useful, even after double-checking.
    • Just as surprisingly, in 50 of the 131 accepted studies, providing feedback actually lowered performance.
    • When the feedback tells an individual that he has already surpassed the goal, one of four things can happen.
      • Make a tougher goal
      • Slack off
      • Goal is worthless
      • Reject feedback
    • When, as is more common, the feedback indicates that current performance falls short of the goal, there are again four responses.
      • Change the goal
      • Abandon the goal
      • Reject the feedback.
      • Change one’s behavior

    image

    • Only the two italicized responses are likely to improve performance. The other six, at best, do nothing and, at worst, lower performance, sometimes to a considerable degree.
    • The research reviewed by Kluger and DeNisi (1996) also shows that it is very difficult, if not impossible, to predict which of these responses will occur.
    • They suggest that, instead, research on feedback should focus less on the actual impact on performance and more on the kinds of responses that are triggered in the individual as a result of the feedback.
    • Dweck and her colleagues found that there were three strong themes running through the students’ responses (Dweck, 2000) to the following questions:
      • When you get an A, why is that?
      • If you got an F, why might that be?
    • The first was whether the success or failure was due to factors relating to the individual or due to outside factors (in other words, how the attribution was personalized).
      • Internal attribution
      • External attribution
    • The second theme was whether success was seen as being due to factors that were likely to be long lasting or transient (in other words, the permanence or stability of the factor).
      • stable factor
      • unstable factor
    • The third was the specificity of the attribution: whether success or failure is seen as being due to factors that affect performance in all areas or just the area in question.
      • specific successes/failures
      • overgeneralized successes/failures
    • Boys and girls
      • Boys attribute successes to stable causes (e.g., ability) and failures to unstable causes (e.g., lack of effort or bad luck)
      • Girls attribute successes to unstable causes (e.g., effort) and failures to stable causes (e.g., lack of ability)
    • The best learners consistently attribute both success and failure to internal, unstable causes. They believe: “It’s up to me” (internal) and “I can do something about it” (unstable).
    • image
    • Examples from sports, such as Michael Jordan, Tom Brady, and Mike Piazza.
    • Each of these three individuals received feedback that they weren’t good enough, but each decided in the face of that feedback to improve rather than give up and do something else. The determination to do better was crucial in each of these cases.
    • Of course, whether a student sees feedback as relating to something that is permanent or transient depends on the student’s attitude.
    • Therefore, what we need to do is ensure that the feedback we give students supports a view of ability as incremental rather than fixed: by working, you’re getting smarter.
  • A Recipe for Future Action
    • All this suggests that providing effective feedback is very difficult.
    • In other words, the school functions rather like an oil refinery—its job is to sort the students into different layers. Those involved in athletics programs cannot afford to do this.
    • They [coaches] see their job not as just identifying talent, but also nurturing it, and even producing it, often getting out of athletes more than the athletes themselves believed they could achieve.
    • Coaches do this through the provision of feedback that moves learning forward.
    • Feedback functions formatively only if the information fed back to the learner is used by the learner in improving performance.
    • Example:  tell the fast-pitch softballer to get her ERA down, but she needs to know how.
    • So the coach says to the pitcher, “I know what’s going wrong. It’s your rising fastball. It’s not rising.” Again, accurate but not helpful.
    • The secret of effective feedback is that saying what’s wrong isn’t enough; to be effective, feedback must provide a recipe for future action.
    • Feedback comes from engineering.  Example:  thermostat.
    • For engineers, feedback about the discrepancy between the current state and the desired state is useless unless there is also a mechanism within the feedback loop to bring the current state closer to the desired state.
    • This skill of being able to break down a long learning journey—from where the student is right now to where she needs to be—into a series of small steps takes years for even the most capable coaches to develop.
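    • [The thermostat analogy above can be sketched as a few lines of Python. This is my own hypothetical illustration, not from Wiliam: the point is that the error signal (desired minus current state) is useless by itself; the loop must include a mechanism that acts on it to close the gap.]

```python
# Minimal feedback loop in the engineering sense: the error signal
# (desired minus current) only matters because a mechanism (here a
# heater/cooler limited to 1 degree per step) acts on it.

def run_thermostat(current, desired, steps=10):
    """Simulate a thermostat; return the temperature at each step."""
    history = [current]
    for _ in range(steps):
        error = desired - current                # the feedback signal
        adjustment = max(min(error, 1.0), -1.0)  # mechanism: bounded correction
        current += adjustment
        history.append(current)
    return history

print(run_thermostat(15.0, 20.0))  # climbs one degree per step, then holds at 20.0
```

    • [This is also the structure of Wiliam's coaching example: telling the pitcher her fastball isn't rising is only the error signal; the "recipe for future action" is the adjustment mechanism.]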
  • Grading
    • From the research discussed previously, it should be clear that the grading practices prevalent in most US middle schools and high schools are actually lowering student achievement.
    • Example:  who should get the higher grade?  Why?
      • Lesley gets A, A, A, A, C, C, C, C
      • Chris gets C, C, C, C, A, A, A, A
    • The fact is that our current grading practices don’t do the one thing they are meant to do, which is to provide an accurate indication of student achievement. (Clymer & Wiliam, 2006/2007, p. 36)
    • The key to doing this [providing accurate information] is a principle outlined by Alfie Kohn (1994): “Never grade students while they are still learning” (p. 41).
    • If grades stop learning, students should be given them as infrequently as possible.
    • Many administrators realize this but continue to mandate grades because they believe that parents want them, and surveys of parents often show support for grades, but this is hardly an informed choice.
    • …as Paul Dressel remarked over half a century ago, “A grade can be regarded only as an inadequate report of an inaccurate judgment by a biased and variable judge of the extent to which a student has attained an undefined level of mastery of an unknown proportion of an indefinite material” (Dressel, 1957, p. 6).
    • We need classroom assessment systems that are designed primarily to support learning and deal in data that are recorded at a level that is useful for teachers, students, and parents in determining where students are in their learning.
    • Example:  swimming coach grades each aspect of swimming to diagnose training needed.
    • [Clymer] 
      • For each marking period, the key learning outcomes are identified.
      • For each of the ten areas of interest, sources of evidence are identified.
      • Spreadsheet is then used to do conditional formatting on the individual scores and on the composite scores.
    • When students want to know what they need to do to get an A, they are told the areas in which they still need to demonstrate competence.
    • At the end of the marking period, the students take a test, which is used to confirm the evidence collected up to that point.
    • If a student shows mastery of something at the beginning of the marking period but then fails to do so later, his grade can go down.
    • Students became more engaged in monitoring their own learning [using this system]; frequently asked for clarification, both from the teacher and from their peers; and regarded the teacher more as a coach than a judge.
    • This system avoids the ratchet effect (the assumption that a grade can never go down), which systems based on resubmission frequently suffer from.
    • Deana Holen table

image

    • Another way to provide similar incentives is to allocate 50 percent of the available points to the first submission and 50 percent to the improvement shown in the work as a result of responding to the feedback.
    • Joe Rubin example: put only one of two grades on an assignment: A or “not yet”.
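    • [The two grading ideas above can be sketched in Python. This is a hypothetical illustration; the outcome names, the 0–4 scale, and the scoring formula are my assumptions, not the actual Clymer and Wiliam spreadsheet.]

```python
# Two grading ideas from this section, sketched hypothetically.

def latest_evidence_grade(evidence):
    """Clymer-style grading: evidence maps each learning outcome to a
    list of scores over time (0-4).  Only the most recent score per
    outcome counts, so a grade can go down if early mastery is not
    confirmed later -- no ratchet effect."""
    latest = {outcome: scores[-1] for outcome, scores in evidence.items()}
    return sum(latest.values()) / len(latest)

def split_score(first, revised, max_points=100):
    """Allocate 50% of the points to the first submission and 50% to the
    improvement shown after responding to feedback (taken here as a
    fraction of the improvement that was still possible)."""
    improvement = max(revised - first, 0)   # no penalty for no gain
    room = max_points - first               # how much improvement was possible
    improvement_frac = improvement / room if room else 1.0
    return 0.5 * first + 0.5 * improvement_frac * max_points

# Early mastery of fractions (4) not confirmed later (2) pulls the grade down.
print(latest_evidence_grade({"fractions": [4, 4, 2], "decimals": [3, 4]}))  # 3.0

# First draft 60/100, revised to 90: captured 75% of the possible gain.
print(split_score(first=60, revised=90))  # 67.5
```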
  • Practical Techniques
    • If I had to reduce all of the research on feedback into one simple overarching idea, at least for academic subjects in school, it would be this: feedback should cause thinking.
    • As soon as students compare themselves with someone else, their mental energy becomes focused on protecting their own sense of well-being rather than learning anything new.
    • Or try the “−, =, +” technique, which means “worse than, consistent with, better than” prior work.  The reference point thus slides with the student, whether high-achieving or lower-achieving.
    • To be effective, feedback needs to direct attention to what’s next rather than focusing on how well or badly the student did on the work, and this rarely happens in the typical classroom.
    • If, however, we embrace the idea of feedback as a recipe for future action, then it is easy to see how to make feedback work constructively: don’t provide students with feedback unless you allow time, in class, to work on using the feedback to improve their work. Then feedback is not an evaluation of how well or how badly one’s work was done but a matter of “what’s next?”
      • Example:  the “three questions” method of feedback, everyone has to answer 3 questions on their returned work.
    • The first fundamental principle of effective classroom feedback is that feedback should be more work for the recipient than the donor.
      • Kerrigan and Shakespeare example:  figure out in groups which feedback goes with which essay.
    • A second principle of effective feedback is that it should be focused.
      • Wiliam himself learned this in giving feedback to intern teachers.
    • A third principle is that the feedback should relate to the learning goals that have been shared with the students.
    • [Math teaching is *NOT* different.]  As noted previously, however, what is important is not the form that the feedback takes but the effect it has on students.
    • Putting a check or a cross next to each of the solutions leaves nothing for the student to do, except maybe correct those that are incorrect. An alternative would be to say to the student, “Five of these are wrong. You find them; you fix them.”
    • The important point is that the feedback is focused, is more work for the recipient than the donor, and causes thinking rather than an emotional reaction.
Conclusion

The word feedback was first used in engineering to describe a situation in which information about the current state of a system was used to change the future state of the system, but this has been forgotten, and any information about how students performed in the past is routinely regarded as useful. It is not. In this chapter, we have seen that in almost two out of every five carefully designed scientific studies, information given to people about their performance lowered their subsequent performance. We have also seen that when we give students feedback, there are eight things that can happen, and six of them are bad (table 5.2, page 115).

Some ways to give effective feedback have been described in this chapter, but every teacher will be able to come up with many more, provided that the key lessons from the research on feedback are heeded. If we are to harness the power of feedback to increase student learning, then we need to ensure that feedback causes a cognitive rather than an emotional reaction—in other words, feedback should cause thinking. It should be focused; it should relate to the learning goals that have been shared with the students; and it should be more work for the recipient than the donor. Indeed, the whole purpose of feedback should be to increase the extent to which students are owners of their own learning, which is the focus of the next two chapters.

Blooming Butterfly (Learning Today, 2009 October 22)

image

Cognitive Complexity Comparison (source?)

image

Item Examples (source?)

image

Blooming Orange (Learning Today, 2009 November 9)

image

Brookhart’s Chart (Brookhart, 2010)

image
image
image

How to Assess Higher Order Thinking (Brookhart, 2010)

Constructing an assessment always involves these basic principles:

  • Specify clearly and exactly what it is you want to assess.
  • Design tasks or test items that require students to demonstrate this knowledge or skill.
  • Decide what you will take as evidence of the degree to which students have shown this knowledge or skill.

This general three-part process applies to all assessment, including assessment of higher-order thinking. Assessing higher-order thinking almost always involves three additional principles:

  • Present something for students to think about, usually in the form of introductory text, visuals, scenarios, resource material, or problems of some sort.
  • Use novel material—material that is new to the student, not covered in class and thus subject to recall.
  • Distinguish between level of difficulty (easy versus hard) and level of thinking (lower-order thinking or recall versus higher-order thinking), and control for each separately.

This chapter discussed three general assessment principles, three specific principles for assessing higher-order thinking, and ways to interpret or score the student work from such assessments. I think of the material in this chapter as "the basics." These principles underlie all the assessment examples in the rest of the book. As you read the more specific examples in the following chapters, think of how each one works out these basic principles in the specific instance. This should help you develop the skills to apply these principles when you write your own assessments

Brookhart’s Interview with ASCD (ASCD, 2010)

Why did you write this book?
    • What HOT is (isn’t)
    • How to write test questions/formative assessment
   
What kinds of HOT are there?
    • Different categories are good.
    • 5 different ways of thinking
    • Analyze/Evaluate/Create (Synthesize)
    • Logic/Reasoning
    • Judgments, Critical Thinking
    • Problem Solving
    • Creativity

HOT is a 21st Century Skill?  Not new is it?
    • No; Plato and Socrates are still admired.
    • What’s new is what you learn today may be updated tomorrow

Are most teachers addressing HOT?
    • I think so, yes.
    • You need to address HOT throughout
    • Still recall…
    • A story, the teachers were all asked to bring an assessment
        ○ All were names and dates; students had to do thinking, but it was just recall.
        ○ Remembering obscure facts is not HOT.
       
Can you assess HOT with multiple-choice tests?
    • You can but you sometimes need more creative testing plans
    • You need interpretive material to do it.
    • If you are asking the same question as in class…
    • Same passage, different question isn’t interpretive.
    • Line graphs on a test aren’t the same.
   
Is a harder test question more HOT?
    • No, that is not what we are saying.
    • There are some really difficult recall questions
    • and some really easy HOT questions.
    • You need to control both in your assessments.
Do some teachers shy away from bringing in HOT?
    • The teachers who struggle really struggle
    • The difficulty is not the same…
    • Many teachers perpetuate this thinking.
    • Somewhere we stop asking higher-cognitive questions.
    • Pre-digested textbook chapters
   
Your chapter on creativity takes a different spin?
    • Assessing creativity is a pet peeve
    • Teachers have rubrics that are cute or niche or interesting
    • Creativity is not so easy to assess.
    • Creativity is putting things together in new ways.
    • Or seeing something others miss, or
    • Making something not made before.
    • Define what they are
   
Should students be in on the questions about what you are looking for?
    • Students shouldn’t have to play guessing games.
    • They shouldn’t be in the dark.
    • Come up with something original
    • Students need to help the world change.

References

ASCD. (2010). Talks with an Author: Susan Brookhart. Retrieved July 31, 2012 from http://www.ascd.org/Publications/Books/ASCD-Talks-With-an-Author.aspx

Brookhart, S. M. (2010). How to Assess Higher Order Thinking in Your Classroom. ASCD. Chapter 1. Retrieved July 31, 2012 from http://www.ascd.org/publications/books/109111/chapters/General_Principles_for_Assessing_Higher-Order_Thinking.aspx

Learning Today. (2009, October 22). Bloom’s Taxonomy for Elementary Teachers: The Blooming Butterfly. Retrieved July 30, 2012 from http://blog.learningtoday.com/blog/bid/22740/Bloom-s-Taxonomy-Poster-for-Elementary-Teachers

Learning Today. (2009, November 9). Blooming Orange: Bloom’s Taxonomy Helpful Verbs Poster. Retrieved July 31, 2012 from http://blog.learningtoday.com/blog/bid/23376/Blooming-Orange-Bloom-s-Taxonomy-Helpful-Verbs-Poster

Wiliam, D. (2011). Embedded Formative Assessment. Bloomington, Indiana: Solution Tree Press.
