Tag Archives: EDU6978

EDU6978: Week 07: Due 2012-08-12

Only one more week of this course, and then our projects are due on 8/24!
Time is flying and classes will be starting soon enough after that…

Common Core
I had a mini-breakthrough this week with the Common Core State Standards in Math (CCSI, 2009).  It came while I was developing some flashcards (via quizlet.com) to drill myself on the abbreviations of the Domains used in the Standards.  For example, when I see CC.9-12.G.MG.3, I want to be able to quickly say “Common Core, High School, Geometry, Modeling with Geometry, Standard #3.”  If I want to be really insane, I could eventually learn that Standard #3 in that Cluster is:

CC.9-12.G.MG.3 Apply geometric concepts in modeling situations. Apply geometric methods to solve design problems (e.g., designing an object or structure to satisfy physical constraints or minimize cost; working with typographic grid systems based on ratios).*

The next breakthrough was doing an internet search on that exact string, “CC.9-12.G.MG.3”, and finding some really cool resources for lessons on that topic.  It suddenly dawned on me that this is the power of having shared standards.  The other thing that dawned on me is that by drilling through 495 standards, I was getting more familiar with what topics are there and where the emphasis lies.  That was a total bonus.
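As a side note, the codes are regular enough that you could even decode them mechanically. Here is a toy Python sketch of the idea; the lookup tables cover only the example above (the actual Standards define many more grade bands, Domains, and Clusters):

```python
# Toy decoder for Common Core math standard codes like "CC.9-12.G.MG.3".
# The tables below are deliberately incomplete -- just enough for the example.

GRADE_BANDS = {"9-12": "High School"}
DOMAINS = {"G": "Geometry"}
CLUSTERS = {"MG": "Modeling with Geometry"}

def expand(code):
    """Turn a dotted standard code into a readable description."""
    initiative, grade, domain, cluster, number = code.split(".")
    assert initiative == "CC"  # only Common Core codes in this sketch
    return ", ".join([
        "Common Core",
        GRADE_BANDS[grade],
        DOMAINS[domain],
        CLUSTERS[cluster],
        "Standard #" + number,
    ])

print(expand("CC.9-12.G.MG.3"))
# Common Core, High School, Geometry, Modeling with Geometry, Standard #3
```

That is essentially what the flashcard drill trains you to do in your head.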

Of course, this wild romp through the Standards means I am late in getting my PBL Final Project to my classmates.  (Please accept my apologies Cohort 10 A!)

Our discussion question this week was whether, in our view, the new Common Core or Next Generation Science Standards were moving toward or away from STEM.  Most folks felt that NGSS was definitely moving toward STEM, since the Crosscutting Concepts mention engineering and technology.


(Verbatim from source unless italic)

Embedded Formative Assessment (Wiliam, 2011)

Chapter 6
Activating Students As Instructional Resources for One Another


Even though there is a substantial body of research that demonstrates the extraordinary power of collaborative and cooperative learning, it is rarely deployed effectively in classrooms. This chapter explores the role that learners can play in improving the learning of their peers and concludes with a number of specific classroom techniques that can be used to put these principles into practice.

Wiliam, Dylan (2011-05-01). Embedded Formative Assessment (Kindle Locations 2685-2688). Ingram Distribution. Kindle Edition. 

Wiliam (2011) describes some great techniques for getting students to act as instructional resources for each other. The author also makes a compelling argument for why this is necessary by citing his personal experience with both boys and girls who readily admitted to having "pretended that they understood something [the teacher had said in a 1-1 conversation] when in fact they didn’t."

After cautioning that collaborative work is often not structured to demand both group and individual accountability at the same time, Wiliam describes some practical techniques for fostering true collaboration.

C3B4ME: a strategy in which the teacher reminds the students that he/she is not the only teacher in the classroom.

Peer Evaluation of Homework: a good trick for generating formative feedback from student to student, with a side effect of getting students to do their homework both more consistently and more legibly. (Huge pet peeve of mine!)

Homework Help Board: provides a means of connecting those who need help with those who might be able to provide it.

Two Stars and a Wish: encourages giving both positive and constructive feedback between peers on tasks and assignments.

End-of-Topic Questions: Uses groups to break through the "I don’t want to look silly" barrier, and also helps in literacy skills if questions need to be presented in written format.

Error Classification: when errors can be grouped easily, this allows strong students to be paired with weaker students very readily.

What Did We Learn Today? The group gets together and forms a consensus on what was clear and what wasn’t at the end of the day.

Student Reporter: One student is selected each day to summarize the day, or answer any remaining questions.

Preflight Checklist: having a checklist that students must go through before work is submitted is a great way to get higher-quality work and to build accountability in students.

I-You-We Checklist: good for assessing how group dynamics are working and contributing to the learning process.

Reporter at Random: Ahh, this is the POGIL-style collaborative model, where each member of the group has a particular role, but in this case you don’t pick a reporter until one is needed, so students don’t tune out when they aren’t the reporter.

Group-Based Test Prep: By asking each member to prepare for a section of material that is on the test and then present it to the group, you build in some review skills, help peers give feedback on learning and find some good questions for the test.

If You’ve Learned It, Help Someone Who Hasn’t: Wiliam saves the best for last, since this is a criticism often leveled at collaborative learning. Namely, that the bright kids are held back and the kids who struggle aren’t helped. We are reminded that an efficient group pairs those who know with those who don’t and in the process both are well served.

This was a good chapter with a lot of practical techniques that I think I will try.


In this chapter, we have seen that activating students as learning resources for one another produces tangible and substantial increases in students’ learning. Every teacher I have ever met has acknowledged that you never really understand something until you try to teach it to someone else. And yet, despite this knowledge, we often fail to harness the power of peer tutoring and other forms of collaborative learning in our classrooms. This chapter has presented a number of classroom techniques that can be used with students of almost any age and that can readily be incorporated into practice. Many of these techniques focus specifically on peer assessment, which, provided it is geared toward improvement rather than evaluation, can be especially powerful—students tend to be much more direct with each other than any teacher would dare to be. However, it is important to realize that peer assessment is also beneficial for the individual who gives help. When students provide feedback to each other, they are forced to internalize the learning intentions and success criteria but in the context of someone else’s work, which is much less emotionally charged. Activating students as learning resources for one another can, therefore, be seen as a stepping-stone to students becoming owners of their own learning—the subject of the next chapter.

Wiliam, Dylan (2011-05-01). Embedded Formative Assessment (Kindle Locations 2898-2908). Ingram Distribution. Kindle Edition.


[NOTE:  I like to keep PDFs with my own annotations in Mendeley which is a client that roams your PDFs, supports deep search, and keeps track of bibliographic information for each file.  You might want to check it out.]

They are listed in References, but here are some commonly accepted abbreviations of each, with the most current dates for the governing documents.

CCSS-Math (2009):  Common Core State Standards-Math
CCSS-ELA (2010):  Common Core State Standards-English Language Arts
EdTech-WA (2008):  Washington State Educational Technology Standards
NGSS (May 2012):  Next Generation Science Standards (May 2012 Draft)
ITEA-STL (2007):  Or maybe just STL, Standards for Technological Literacy

[No real significant Engineering Standards??]

Commentaries on Standards

I like to think of booklets such as “A Framework for K-12 Science Education” (NRC, 2012) as guides to help you interpret or apply the standards.  That source is listed below, but in addition we had some others in our Optional Readings for this week.

This is just to give you a flavor for what is out there.  I didn’t have time to digest all of these.  The titles are fairly descriptive; when I have time, I will put my experiences with these in the comments.


Common Core State Standards Initiative [CCSI]. (2009). Common Core State Standards for Mathematics. National Governors Association. Retrieved June 24, 2012 from http://corestandards.org/assets/CCSSI_Math%20Standards.pdf

Common Core State Standards Initiative [CCSI]. (2010). Common Core State Standards for English Language Arts & Literacy in History/Social Studies, Science, and Technical Subjects. National Governors Association. Retrieved August 7, 2012 from http://corestandards.org/assets/CCSSI_ELA%20Standards.pdf

International Technology Education Association [ITEA]. (2007).  Standards for Technological Literacy:  Content for the Study of Technology.  (3rd ed.).  Reston, VA:  ITEA.  Retrieved August 9, 2012 from http://www.iteea.org/TAA/PDFs/xstnd.pdf

National Research Council [NRC]. (2012). A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas. Washington, DC: The National Academies Press.  Retrieved August 11, 2012 from http://www.nap.edu/catalog.php?record_id=13165

Next Generation Science Standards [NGSS].  (2012, May) [DRAFT]  Next Generation Science Standards.  Retrieved August 7, 2012 from http://www.cascience.org/csta/pdf/NGSS_Draft_May2012.pdf

Talbert, G. (2008). Washington State K-12 Educational Technology Learning Standards December 2008. Olympia, WA:  Office of the Superintendent of Public Instruction.  Retrieved August 8, 2012 from http://www.k12.wa.us/EdTech/Standards/pubdocs/K-12-EdTech-Standards_12-2008b.pdf

Wiliam, D. (2011). Embedded Formative Assessment. Bloomington, Indiana: Solution Tree Press.

EDU6978: Week 06: Due 2012-08-05

Some really good reading and thinking this week on assessing not only formatively, but also assessing higher-order thinking.  Overall I think we were a little bit hand-wavy (at least I was) on exactly how we were going to do that.  Remaining for me is to:
1. be absolutely explicit about what you are assessing,
2. connect your assessment content with the relevant standards, and
3. be clear about the rubrics you are using to assess, and realize that they can be used in instruction.

That seems to be the most problematic for me.  You can create a project that seems interesting to you.  But then you have to make sure it is interesting to the state (i.e., it connects with Standards), then that it connects with students (authenticity), and then that it connects with them in a deep way (assessing for learning).



(Verbatim from source unless italic)

Embedded Formative Assessment (Wiliam, 2011)

Chapter 5
Providing Feedback That Moves Learning Forward

It seems obvious that feedback to students about their work should help them learn, but it turns out that providing effective feedback is far more difficult than it appears. Much of the feedback that students get has little or no effect on their learning, and some kinds of feedback are actually counterproductive. This chapter reviews the research on feedback; why some kinds of feedback are, at best, useless and, at worst, actually lower performance; and how teachers can give their students feedback that moves learning forward.

Wiliam, Dylan (2011-05-01). Embedded Formative Assessment (Kindle Locations 2206-2209). Ingram Distribution. Kindle Edition.

  • The Quality of Feedback
    • The students receiving the constructive feedback learned twice as fast as the control-group students—in other words, they learned in one week what the other students took two weeks to learn.  (Elawar & Corno, 1985).
    • Example of research on feedback types: students were observed who were given only scores, only comments, or both scores and comments (Butler, 1988).
    • Those given only scores made no progress from the first lesson to the second—their work was no better.
    • The students given only comments scored, on average, 30 percent higher on the work done in the second lesson than that done in the first (although, of course, they did not know this because they had not been given scores), and all these students indicated that they wanted to carry on doing similar work.
    • What do you suppose happened for the students given scores + comments?
    • Many think that scores + comments is the best of both worlds.
    • This study (and others like it, to follow) shows that if teachers are providing careful diagnostic comments and then putting a score or a grade on the work, they are wasting their time.
    • Another study (Butler, 1987).  Here are the feedback groups:
      • comments
      • grades
      • written praise
      • no feedback at all
    • Result:  only those getting comments had improved.  Grades and praise were comparable in effect to no feedback at all.
    • Follow-up questionnaire.  Specifically, the questionnaire was designed to elicit whether the students attributed their expenditure of effort and their success to ego-related factors or to task-related factors…
    • As noted, the provision of grades and written praise had no effect on achievement; their only effect was to increase the sense of ego-involvement. This, as anyone involved in guidance and counseling work in schools knows, is bad news.
    • It is the quality rather than the quantity of praise that is important, and in particular, teacher praise is far more effective if it is infrequent, credible, contingent, specific, and genuine (Brophy, 1981).
    • The timing of feedback is also crucial.  Not too early!
    • Example of computer usage versus pencil and paper.  Students using pencil and paper had more “mindfulness” and thus learned more.
    • Students given the scaffolded response learned more and retained their learning longer than those given full solutions (Day & Cordón, 1993).
    • Example of feedback that doesn’t give answers, but instead asks the student to take another look at the problem, and then promises to “be back in a few minutes.”  But I would probably say that and never come back…
    • Example of art critique that lists, rubric-style, what needs to happen, and then merely gives a check or “x”.
    • However, from their observations, the researchers indicated that whether the feedback was given orally or in writing was much less important than the fact that group 2 was given time, in class, to use the feedback to improve their work.  [Argument for a flipped classroom!!]
    • Some types of feedback actually lower performance.  (Kluger & DeNisi, 1996).
    • This was concluded by looking at a bunch of studies.  Of course, these studies varied in their quality, and to be sure that poor-quality studies were not being included, Kluger and DeNisi established a number of criteria for inclusion in their review.
    • Only 4% of studies were deemed useful, even after double-checking.
    • Just as surprisingly, in 50 of the 131 accepted studies, providing feedback actually lowered performance.
    • When the feedback tells an individual that he has already surpassed the goal, one of four things can happen.
      • Make a tougher goal
      • Slack off
      • Goal is worthless
      • Reject feedback
    • When, as is more common, the feedback indicates that current performance falls short of the goal, there are again four responses.
      • Change the goal
      • Abandon the goal
      • Reject the feedback.
      • Change one’s behavior


    • Only the two italicized responses are likely to improve performance. The other six, at best, do nothing and, at worst, lower performance, sometimes to a considerable degree.
    • The research reviewed by Kluger and DeNisi (1996) also shows that it is very difficult, if not impossible, to predict which of these responses will occur.
    • They suggest that, instead, research on feedback should focus less on the actual impact on performance and more on the kinds of responses that are triggered in the individual as a result of the feedback.
    • Dweck and her colleagues found that there were three strong themes running through the students’ responses (Dweck, 2000) to the following questions:
      • When you get an A, why is that?
      • If you got an F, why might that be?
    • The first was whether the success or failure was due to factors relating to the individual or due to outside factors (in other words, how the attribution was personalized).
      • Internal attribution
      • External attribution
    • The second theme was whether success was seen as being due to factors that were likely to be long lasting or transient (in other words, the permanence or stability of the factor).
      • stable factor
      • unstable factor
    • The third was the specificity of the attribution: whether success or failure is seen as being due to factors that affect performance in all areas or just the area in question.
      • overgeneralized successes/failures
    • Boys and girls
      • Boys attribute success to stable causes (e.g. ability), and failures to unstable causes (lack of effort, bad luck)
      • Girls attribute successes to unstable causes (effort) and failures to stable causes (such as lack of ability)
    • The best learners consistently attribute both success and failure to internal, unstable causes. They believe: “It’s up to me” (internal) and “I can do something about it” (unstable).
    • Examples from sports, such as Michael Jordan, Tom Brady, and Mike Piazza.
    • Each of these three individuals received feedback that they weren’t good enough, but each decided in the face of that feedback to improve rather than give up and do something else. The determination to do better was crucial in each of these cases.
    • Of course, whether a student sees feedback as relating to something that is permanent or transient depends on the student’s attitude.
    • Therefore, what we need to do is ensure that the feedback we give students supports a view of ability as incremental rather than fixed: by working, you’re getting smarter.
  • A Recipe for Future Action
    • All this suggests that providing effective feedback is very difficult.
    • In other words, the school functions rather like an oil refinery—its job is to sort the students into different layers. Those involved in athletics programs cannot afford to do this.
    • They [coaches] see their job not as just identifying talent, but also nurturing it, and even producing it, often getting out of athletes more than the athletes themselves believed they could achieve.
    • Coaches do this through the provision of feedback that moves learning forward.
    • Feedback functions formatively only if the information fed back to the learner is used by the learner in improving performance.
    • Example:  tell the fast-pitch softballer to get her ERA down, but she needs to know how.
    • So the coach says to the pitcher, “I know what’s going wrong. It’s your rising fastball. It’s not rising.” Again, accurate but not helpful.
    • The secret of effective feedback is that saying what’s wrong isn’t enough; to be effective, feedback must provide a recipe for future action.
    • Feedback comes from engineering.  Example:  thermostat.
    • For engineers, feedback about the discrepancy between the current state and the desired state is useless unless there is also a mechanism within the feedback loop to bring the current state closer to the desired state.
    • This skill of being able to break down a long learning journey—from where the student is right now to where she needs to be—into a series of small steps takes years for even the most capable coaches to develop.
  • Grading
    • From the research discussed previously, it should be clear that the grading practices prevalent in most US middle schools and high schools are actually lowering student achievement.
    • Example:  who should get the higher grade?  Why?
      • Lesley gets A, A, A, A, C, C, C, C
      • Chris gets C, C, C, C, A, A, A, A
    • The fact is that our current grading practices don’t do the one thing they are meant to do, which is to provide an accurate indication of student achievement. (Clymer & Wiliam, 2006/2007, p. 36)
    • The key to doing this [providing accurate information] is a principle outlined by Alfie Kohn (1994): “Never grade students while they are still learning” (p. 41).
    • If grades stop learning, students should be given them as infrequently as possible.
    • Many administrators realize this but continue to mandate grades because they believe that parents want them, and surveys of parents often show support for grades, but this is hardly an informed choice.
    • …as Paul Dressel remarked over half a century ago, “A grade can be regarded only as an inadequate report of an inaccurate judgment by a biased and variable judge of the extent to which a student has attained an undefined level of mastery of an unknown proportion of an indefinite material” (Dressel, 1957, p. 6).
    • We need classroom assessment systems that are designed primarily to support learning and deal in data that are recorded at a level that is useful for teachers, students, and parents in determining where students are in their learning.
    • Example:  swimming coach grades each aspect of swimming to diagnose training needed.
    • [Clymer] 
      • For each marking period, the key learning outcomes are identified.
      • For each of the ten areas of interest, sources of evidence are identified.
      • Spreadsheet is then used to do conditional formatting on the individual scores and on the composite scores.
    • When a student needs to know what they need to do to get an A, they are told the areas where they need to demonstrate competence.
    • At the end of the marking period, the students take a test, which is used to confirm the evidence collected up to that point.
    • If a student shows mastery of something at the beginning of the marking period but then fails to do so later, his grade can go down.
    • Students became more engaged in monitoring their own learning [using this system]; frequently asked for clarification, both from the teacher and from their peers; and regarded the teacher more as a coach than a judge.
    • This system avoids the ratchet effect (a grade can never go down) that a system based on resubmission frequently experiences.
    • Deana Holen table


    • Another way to provide similar incentives is to allocate 50 percent of the available points to the first submission and 50 percent to the improvement shown in the work as a result of responding to the feedback.
    • Joe Rubin example:  only put one of two grades on an assignment, A or “not yet”.
  • Practical Techniques
    • If I had to reduce all of the research on feedback into one simple overarching idea, at least for academic subjects in school, it would be this: feedback should cause thinking.
    • As soon as students compare themselves with someone else, their mental energy becomes focused on protecting their own sense of well-being rather than learning anything new.
    • Or try the “-, =, +” scheme, which means “worse than, consistent with, better than” prior work, and thus moves with the student, both high achieving and lower achieving.
    • To be effective, feedback needs to direct attention to what’s next rather than focusing on how well or badly the student did on the work, and this rarely happens in the typical classroom.
    • If, however, we embrace the idea of feedback as a recipe for future action, then it is easy to see how to make feedback work constructively: don’t provide students with feedback unless you allow time, in class, to work on using the feedback to improve their work. Then feedback is not an evaluation of how well or how badly one’s work was done but a matter of “what’s next?”
      • Example:  the “three questions” method of feedback, everyone has to answer 3 questions on their returned work.
    • The first fundamental principle of effective classroom feedback is that feedback should be more work for the recipient than the donor.
      • Kerrigan and Shakespeare example:  figure out in groups which feedback goes with which essay.
    • A second principle of effective feedback is that it should be focused.
      • Wiliam himself learned this in giving feedback to intern teachers.
    • A third principle is that the feedback should relate to the learning goals that have been shared with the students.
    • [Math teaching is *NOT* different.]  As noted previously, however, what is important is not the form that the feedback takes but the effect it has on students.
    • Putting a check or a cross next to each of the solutions leaves nothing for the student to do, except maybe correct those that are incorrect. An alternative would be to say to the student, “Five of these are wrong. You find them; you fix them.”
    • The important point is that the feedback is focused, is more work for the recipient than the donor, and causes thinking rather than an emotional reaction.
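Wiliam’s thermostat example is worth lingering on: the feedback loop only works because it contains a mechanism (the heater) that moves the current state toward the desired state, which is exactly what a “recipe for future action” supplies for a learner. A minimal Python sketch of that loop; the setpoint and heating/cooling rates are invented for illustration:

```python
def thermostat(setpoint=20.0, temp=10.0, steps=50):
    """Bang-bang feedback loop: measure, compare, act."""
    history = []
    for _ in range(steps):
        if temp < setpoint:   # feedback: compare current state to desired state
            temp += 1.0       # the mechanism that closes the gap (heater on)
        else:
            temp -= 0.5       # room cools while the heater is off
        history.append(temp)
    return history

final = thermostat()[-1]
print(abs(final - 20.0) <= 1.0)  # True: the loop settles near the setpoint
```

Telling the system “you are 10 degrees too cold” is the discrepancy; the `temp += 1.0` branch is the recipe. Feedback to students that lacks the second half is, as Wiliam says, accurate but not helpful.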

The word feedback was first used in engineering to describe a situation in which information about the current state of a system was used to change the future state of the system, but this has been forgotten, and any information about how students performed in the past is routinely regarded as useful. It is not. In this chapter, we have seen that in almost two out of every five carefully designed scientific studies, information given to people about their performance lowered their subsequent performance. We have also seen that when we give students feedback, there are eight things that can happen, and six of them are bad (table 5.2, page 115).

Some ways to give effective feedback have been described in this chapter, but every teacher will be able to come up with many more, provided that the key lessons from the research on feedback are heeded. If we are to harness the power of feedback to increase student learning, then we need to ensure that feedback causes a cognitive rather than an emotional reaction—in other words, feedback should cause thinking. It should be focused; it should relate to the learning goals that have been shared with the students; and it should be more work for the recipient than the donor. Indeed, the whole purpose of feedback should be to increase the extent to which students are owners of their own learning, which is the focus of the next two chapters.

Blooming Butterfly (Learning Today, 2009 October 22)


Cognitive Complexity Comparison (source?)


Item Examples (source?)


Blooming Orange (Learning Today, 2009 November 9)


Brookhart’s Chart (Brookhart, 2010)


How to Assess Higher Order Thinking (Brookhart, 2010)

Constructing an assessment always involves these basic principles:

  • Specify clearly and exactly what it is you want to assess.
  • Design tasks or test items that require students to demonstrate this knowledge or skill.
  • Decide what you will take as evidence of the degree to which students have shown this knowledge or skill.

This general three-part process applies to all assessment, including assessment of higher-order thinking. Assessing higher-order thinking almost always involves three additional principles:

  • Present something for students to think about, usually in the form of introductory text, visuals, scenarios, resource material, or problems of some sort.
  • Use novel material—material that is new to the student, not covered in class and thus subject to recall.
  • Distinguish between level of difficulty (easy versus hard) and level of thinking (lower-order thinking or recall versus higher-order thinking), and control for each separately.

This chapter discussed three general assessment principles, three specific principles for assessing higher-order thinking, and ways to interpret or score the student work from such assessments. I think of the material in this chapter as "the basics." These principles underlie all the assessment examples in the rest of the book. As you read the more specific examples in the following chapters, think of how each one works out these basic principles in the specific instance. This should help you develop the skills to apply these principles when you write your own assessments.

Brookhart’s Interview with ASCD (ASCD, 2010)

Why did you write this book?
    • What HOT is (isn’t)
    • How to write test questions/formative assessment
What kinds of HOT are there?
    • Different categories are good.
    • 5 different ways of thinking
    • Analyze/Evaluate/Create (Synthesize)
    • Logic/Reasoning
    • Judgments, Critical Thinking
    • Problem Solving
    • Creativity

Is HOT a 21st Century Skill?  It’s not new, is it?
    • No; Plato and Socrates are still admired.
    • What’s new is that what you learn today may be updated tomorrow

Are most teachers addressing HOT?
    • I think so, yes.
    • You need to address HOT throughout
    • Still recall…
    • A story: the teachers were all asked to bring an assessment
        ○ All names and dates (students had to do thinking), but just recall
        ○ Remembering obscure facts is not HOT.
Can you assess HOT with multiple-choice tests?
    • You can but you sometimes need more creative testing plans
    • You need interpretive material to do it.
    • If you are doing the same question as in class
    • Same passage, different question isn’t interpretive.
    • Line graphs on a test aren’t the same.
Is a harder test question more HOT?
    • No that is not what we are saying.
    • There are some really difficult recall questions
    • And some really easy HOT questions.
    • You need to control both in your assessments.
Do some teachers shy away from bringing in HOT?
    • The teachers who struggle really struggle
    • The difficulty is not the same…
    • Teachers who perpetuate this thinking are many
    • Somewhere we stop asking higher cognitive questions
    • Pre-digested text book chapters
Your chapter on creativity takes a different spin?
    • Assessing creativity is a pet peeve
    • Teachers have rubrics that are cute or niche or interesting
    • Creativity is not so easy to assess.
    • Creativity is putting things together in new ways.
    • Or seeing something others miss, or
    • Making something not made before.
    • Define what they are
Should students be in on the questions about what you are looking for?
    • Students shouldn’t have to play guessing games.
    • They shouldn’t be in the dark.
    • Come up with something original
    • Students need to help the world change.


ASCD. (2010). Talks with an Author: Susan Brookhart. Retrieved July 31, 2012 from http://www.ascd.org/Publications/Books/ASCD-Talks-With-an-Author.aspx

Brookhart, S. M. (2010). How to Assess Higher Order Thinking in Your Classroom. ASCD. Chapter 1. Retrieved July 31, 2012 from http://www.ascd.org/publications/books/109111/chapters/General_Principles_for_Assessing_Higher-Order_Thinking.aspx

Learning Today. (2009, October 22). Bloom’s Taxonomy for Elementary Teachers:  The Blooming Butterfly. Retrieved July 30, 2012 from http://blog.learningtoday.com/blog/bid/22740/Bloom-s-Taxonomy-Poster-for-Elementary-Teachers

Learning Today. (2009, November 9). Blooming Orange: Bloom’s Taxonomy Helpful Verbs Poster. Retrieved July 31, 2012 from http://blog.learningtoday.com/blog/bid/23376/Blooming-Orange-Bloom-s-Taxonomy-Helpful-Verbs-Poster

Wiliam, D. (2011). Embedded Formative Assessment. Bloomington, Indiana: Solution Tree Press.

EDU6978: Week 05: Due 2012-07-29

This was a full week of diving into project-based learning (PBL): both the essentials of a good project, and scouring the web for samples of different types of projects.  The general sentiment in the discussion forums is that the work that lies ahead of us, as we aspire to inspire students through PBL, is difficult but worthwhile.  For fun I have signed up for the first course on PBLU.org, entitled “How to Launch a Project”.

We also talked a bit about getting students to act as assessors of their peers, and the impact that would have on their own learning and quality of work.  This is an essential step for embedded formative assessment; it takes the teacher out of the hot seat and puts sometimes even stronger critics in that position, namely a student’s peers!

In my student teaching assignment we did a lot of individual projects with students, but not a lot of collaborative ones.  And we certainly did not check off the 8 essential ingredients for good PBL.  It was not hard to sell me on the value of PBL; it is just difficult for me to envision how it is done with a group of 25-45 students at one time.

I plan to do project(s) related to irrigation and water resource management, including power generation in canals, since those topics are authentic to this area, and seem to be very rich in STEM content.

Notes (Verbatim from source unless italic)

Embedded Formative Assessment (Wiliam, 2011)

Chapter 7
Activating Students as Owners of Their Own Learning

In the introduction to his book Guitar, Dan Morgan (1965) wrote, “No one can teach you to play the guitar” (p. 1). This was rather puzzling, since the subtitle of the book is The Book That Teaches You Everything You Need to Know About Playing the Guitar. However, Morgan clarified by adding, “But they can help you learn.” This is pretty obvious really. Whether learning to play a musical instrument, a sport, or a whole range of other human endeavors, we intuitively grasp that teachers do not create learning; only learners create learning. And yet our classrooms seem to be based on the opposite principle—that if they try really hard, teachers can do the learning for the learners. This is only exacerbated by accountability regimes that mandate sanctions for teachers, for schools, and for districts, but not for students.

This chapter reviews the research evidence on the impact of getting students more involved in their learning and shows that activating students as owners of their own learning can produce extraordinary improvements in their achievement. The chapter concludes with a number of practical techniques for classroom implementation.

  • Student Self-Assessment
    • …students can assess themselves quite accurately for summative purposes…but only when the stakes are low.  [But] the topic of this chapter…is whether students can develop sufficient insights into their own learning to improve it.
    • …Twenty-five elementary school teachers…met for two hours each week, during which they were trained in the use of a structured approach to student self-assessment
      • Prescriptive component.  The prescriptive component took the form of a series of hierarchically organized activities, from which the teacher selected on the basis of diagnostic assessments of the students.
      • Exploratory component.  For the exploratory component, each day at a set time, students organized and carried out individual plans of work, choosing tasks from a range offered by the teacher.
    • Two weeks: students chose structured tasks…asked to assess their own performance
    • Four weeks:  constructed their own mathematical problems…required to identify any problems they had had
    • Four weeks:  given additional sets of learning objectives…had to devise problems, but were not given examples.
    • Ten weeks:  Finally, in the last ten weeks, students were allowed to set their own learning objectives, to construct relevant mathematical problems, to select appropriate apparatus, and to identify suitable self-assessments.
    • The study compared the 354 study-group students to a control group of 313 students.  The study group improved average scores by 15 points, versus 7.8 points for the control group.  How, exactly, attention to student self-assessment improves learning is not yet clear, but the most important element appears to be the notion of self-regulation.
  • Self-Regulated Learning
    • The basic idea of self-regulated learning is that the learner is able to coordinate cognitive resources, emotions, and actions in the service of his learning goals (Boekaerts, 2006).
      • Cognitive aspects? (Winne, 1996)
      • Motivation or volition? (Corno, 2001)
    • Metacognition
      • John Flavell (1976), widely credited with inventing the term, defined metacognition as follows:   “Metacognition” refers to one’s knowledge concerning one’s own cognitive processes and products or anything related to them, e.g., the learning-relevant properties of information and data.
      • The research shows clearly that “the most effective learners are self-regulating” (Butler & Winne, 1995, p. 245) and, more importantly, that training students in metacognition raises their performance (for example, Lodico, Ghatala, Levin, Pressley, & Bell, 1983) and allows them to generalize what they have learned to novel situations (Hacker, Dunlosky, & Graesser, 1998).
    • Motivation
      • If individuals undertake only those things that are inherently interesting or enjoyable, then they are unlikely to learn to read, write, or play a musical instrument.
      • Motivation is discussed in one of two ways:  the student has it inherently, or the teacher must supply it.
      • There is another way to think about motivation—not as a cause but as a consequence of achievement.
      • Examples of Flow from Mihaly Csikszentmihalyi
        • Dancer
        • Rock climber
        • Mother and small daughter
        • A chess player
      • Csikszentmihalyi described this sense of being completely absorbed in an activity “flow.” This sense of flow can arise because of one’s intrinsic interest in a task, as with the mother reading to her daughter, but can also arise through a match between one’s capability and the challenge of the task. When the level of challenge is low and the level of capability is high, the result is often boredom. When the level of challenge is high and the level of capability is low, the result is generally anxiety. When both are low, the result is apathy. However, when both capability and challenge are high, the result is “flow”.
      • This way of thinking about motivation is radical because it does not locate “the problem” in the teacher or the learner but in the match between challenge and capability.
      • However, it will not be enough that an activity is absorbing if the cost of engaging in the task is seen by the student as being too high, whether this is in terms of the opportunity cost that attempting a task might take or negative consequences such as the risk to one’s self-image if unsuccessful (Eccles et al., 1983).
      • …but when the goals seem out of reach, students may give up on increasing competence and instead avoid harm, by either focusing on lower-level goals they know they can reach or avoiding failing altogether by disengaging from the task…
      • It is also worth noting that while students’ motivation and their belief in their ability to carry their plans through to successful completion—what Albert Bandura (1997) termed self-efficacy—tend to decline as students go through school, what the teacher does can make a real difference.
    • Integrating Motivational and Cognitive Perspectives
      • This discussion may appear to have brought us a long distance from classroom formative assessment, but fulfilling the potential of formative assessment requires that we recognize that assessment is a two-edged sword. Assessment can improve instruction, but it can also impact the learner’s willingness, desire, and capacity to learn (Harlen & Deakin Crick, 2002).
      • When students are invited to participate in a learning activity, they use three sources of information to decide what they are going to do:
        • Their perceptions of the task and its context…
        • Their knowledge about the task and what it will take to be successful.
        • Their motivational beliefs, including their interest and whether they think they know enough to succeed.
      • The student then weighs the information and begins to channel energy along one of two pathways, focusing on either growth or well-being.
      • We cannot possibly anticipate all the factors that a student may take into account in deciding whether to pursue growth rather than well-being, but there are a number of things that can be done to tip the scales in the right direction:
        • Share learning goals…
        • Promote the belief that ability is incremental…
        • Make it more difficult for students to compare themselves with others in terms of achievement.
        • Provide feedback that contains a recipe for future action…
        • Use every opportunity to transfer executive control of learning…to the student
      • And if you figure out a way to do all that, please let me know. The fact that we know what needs to be done is not the same as doing it.
  • Practical Techniques
    • There is no doubt that activating students as owners of their own learning produces substantial increases in learning, but it is not a quick fix.
    • As we will see, self-assessment can be uncomfortable for both student and teacher, but the benefits are great, and once teachers get used to involving the students in their own learning, it is almost impossible to go back.
    • …following are some techniques that are specifically designed to encourage students to reflect upon their own learning.
    • Traffic Lights
      • Green indicates confidence that the intended learning has been achieved. Yellow indicates either ambivalence about the extent to which the intended learning has been achieved or that the objectives have been partially met. Red indicates that the student believes that he or she has not learned what was intended.
      • Traffic lights and test preparation.  Students flag what they know as green, and what they don’t as red/yellow.  Students might be more honest since they are the consumers of the assessment.
    • Red/Green Disks
      • At the beginning of the period, the green side faced up, but, as the lesson progressed, if students wanted to signal that they thought the teacher was going too fast, they flipped the disk over to red.
      • Example of where students help other students flip to red…
      • Example of the teacher who was brave enough to stop when a student said “Sir, this isn’t working, is it?”
    • Colored Cups
      • In her classroom, each student is given one of each of the colored cups, and the lesson begins with the green cup showing. If the student wants to signal that the teacher is going too fast, then the student shows the yellow cup, and if a student wants to ask a question, then the red cup is displayed.
      • This technique neatly encapsulates two key components of effective formative assessment—engagement and contingency.
    • Learning Portfolios
      • Many schools encourage students to keep portfolios of their work, but too often, these are maintained in the same way as an artist’s portfolio—to display the latest and best.
      • For an incremental view of ability, a “learning portfolio” is far more useful. When better work is done, it is added to the portfolio rather than replacing earlier work to allow students to review their learning journeys. 
        • Development trajectory
        • Incremental improvement
      • Students can start developing such learning portfolios at a very young age.
    • Learning Logs
      • One technique that teachers have found useful as a way of getting students to reflect on their learning is to ask students to complete a learning log at the end of a lesson.
      • Prompts
        • Today I learned…
        • I was surprised by…
        • The useful thing I will take from this lesson is…
        • I was interested in…
        • What I liked most about this lesson was…
        • One thing I’m not sure about is…
        • The main thing I want to find out more about is…
        • After this session, I feel…
        • I might have gotten more from this lesson if…
      • Getting students to choose which three of these statements they respond to seems to encourage a more thoughtful approach to the process of reflecting on their learning.

Teachers have a crucial role to play in designing the situations in which learning takes place, but only learners create learning. Therefore, it is not surprising that the better learners are able to manage their learning, the better they learn. All students can improve how they manage learning processes and become owners of their own learning. However, this is not an easy process. Reflecting critically on one’s own learning is emotionally charged, which is why developing such skills takes time, especially with students who are accustomed to failure.

This chapter has provided research evidence along with a number of practical techniques that teachers have used to increase the engagement of their students and their own responsiveness to their students’ needs. In the epilogue, the main themes of this book are reviewed, concluding with a few words of advice for teachers in taking the ideas presented in this book into their own classrooms.

[Video] Self and peer assessment (Wiliam, n.d.)


Here’s my favorite quote from this video:

When students have given feedback to others about a piece of work, their own subsequent attempts at that task are better, because they know what quality work looks like.

What Project Based Learning Isn’t (Robin, 2011)

Most thought-provoking bit of this video:


"The opposite of project based learning is project-oriented learning (not straight lecturing)." 

Project-oriented learning is like the cart (learning) before the horse (project).

Project-Based Learning [Video] (Larmer, 2012)


Part of the above video is another video.  I thought it was weird to see a video within a video, so I found a link to the original video.  That is below.


The Main Course, Not Dessert (Larmer & Mergendoller, 2010a)

This was a longer discussion of the 8 characteristics of a good project-based learning experience.  This paper also discusses some necessary environmental success factors that can help a school implement good project-based learning.

8 Essentials for Project-Based Learning (Larmer & Mergendoller, 2010b)

This paper is a discussion of a case study project at High Tech High School in San Diego and uses that case study to elucidate the 8 essential characteristics for project-based learning.

Project-Based Learning:  Engaging Students in Science (Lippy, 2006)

  • What is Project-Based Learning?
  • High Quality PBL
  • Why Do Project-Based Learning?
  • Our Project-Based Learning Story
  • Science at North Mason High School
    • An Integrated Context
    • Level 1 Courses
    • Level 2 and 3 Courses
  • How We Use Our Unique Context and Community
    • The Aquatic World:  PBL in the Real World
    • Scientific Content and Process (60 percent of term grade)
    • Hood Canal Institute (40 percent of term grade)
  • A PBL Unit in the Aquatic World
  • Service-Learning Expands Project-Based Learning
  • Examples of Hood Canal Institute Projects
    • Habitat Box Restoration and Monitoring
    • Sediment Transport Study
    • Benthic Macroinvertebrate Monitoring
    • Water Quality Monitoring
    • Native Plant Propagation
    • Environmental Explorations
    • Belfair Creek Project
    • Sign Psychology
    • Bird Silhouettes
  • PBL In Your Classroom
    • Look at yourself and your role in the classroom
    • Look at your students
    • Where do you find time in your curriculum?
    • Examine the constraints and assets of bringing PBL to your classroom
    • Once you have thought about all these things, start planning…and planning…and planning!
  • Challenges To Be Ready For
  • Don’t Take My Word For It
  • Appendix A:  Integrated Science A:  Unit:  Human Body Systems
  • Appendix B:  Integrated Science A:  Unit:  Human Body Systems (Rubric)
  • Appendix C:  Hood Canal Institute:  Project Placement Application
  • Appendix D:  Curriculum Map:  Unit:  Nitrogen Cycle – Matter Cycles
  • Appendix E:  Cycle-O-Rama:  A Model of the Nitrogen Cycle
  • Appendix F:  Cycle-O-Rama:  Rubric
  • Appendix G:  Hood Canal Institute:  Final Reflections
  • Appendix H:  Resource List
  • About the Author
  • North Mason High School’s Science Team

Teaching Students to Think:  Project-Based Learning (David, 2008)

I found David’s article to be more of a cautionary tale, almost to the point of fear-mongering, about project-based learning and the challenges of doing it effectively.  Here are two quotes:

To use project-based learning effectively, teachers must fully understand the concepts embedded in their projects and be able to model thinking and problem-solving strategies effectively (Blumenfeld et al., 1991). Worthwhile projects require challenging questions that can support collaboration, as well as methods of measuring the intended learning outcomes. Without carefully designed tasks, skilled teachers, and school conditions that support projects, project-based learning can devolve into a string of activities with no clear purpose or outcome.

And similarly

These studies suggest that project-based learning, when fully realized, can improve student learning. However, the research also underscores how difficult it is to implement project-based learning well. Together these findings suggest caution in embracing this practice unless the conditions for success are in place, including strong school support, access to well-developed projects, and a collaborative culture for teachers and students.

Caution is good, but small steps in the right direction are never wrong, are they?

Resource.  Click to read this blog:  various authors, various topics, no specific topic assigned.

Differentiated Instruction Through Project-Based Learning (McCarthy, 2011)


How do I differentiate effectively in a PBL Unit?
Need to knows (Edmodo for PBL, pose questions)
[Look up these books on Amazon]
Left:  how teachers design their units and lessons.
Right:  how students enter into PBL
8 Essentials for PBL
There is a plethora of strategies to help support learning.

Significant Content:  What skills do students bring to the table?  How do we scaffold content and skills?  How do you keep students engaged through struggle?  Crossroad

Project Teaching and Learning Guide:  a PBL tool, e.g. the Missions Project in CA. Major Products, Knowledge and Skills Needed by Students, Scaffolding/Materials/Lessons to be Provided.
21st Century Skills:  reminds us that differentiated instruction is for all students, those who struggle and those who are advanced.  Have a variety of sources of evidence to make high quality assessment of the learning for all types of students.
Formative assessment is critical, day-by-day, for getting a good handle on where students are at.

In-Depth Inquiry:  Help students dig deeply.  RAFTS is an interesting tool.  Students have to pick the Format they would use.  And you need to have clear criteria, so students know what is expected of them.

Driving Questions:  The thesis statement of the article, or something that needs to be answered by the end of the project.  Students need an authentic audience (to drive accountability).  Students need good driving questions or guiding questions.

Need to Know:  Interests are the differentiators here.
The entry events are a key here, be creative!

Voice and Choice:  Want to establish opportunity for students to express their preferences.
There are only 3 types of intelligences, which we all have capacity for, but we each give one of them some precedence.

[Check out the Articles from Education Leadership, ASCD.]


Revision and Reflection:  we have many examples of pure experiences of differentiation.
This peer assessment can actually save time.

Public Audience:  key is clear criteria.
It is necessary to do ongoing formative assessment.  Track progress before summative assessments.
“Need to knows” provide a great way to synchronize class work output with teacher expectations.  Students let the teacher know in feedback when they don’t understand:  thumbs up, thumbs side, thumbs down.
But how do you grade?  That’s the elephant in the room.
Differentiated instruction in PBL is a journey.  We remove scaffolds along the way.  As far as final assessment, there must be a standard for all students.  And there can be different approaches to the assessment.
Teaming is a great way to do differentiation.  You may regroup kids throughout the unit.  Learning teams are short-term teams with which you can do mini-workshops.  Initial reports are from students themselves; then the teacher can make tweaks and regroup thoughtfully so each member has a cognitive role or responsibility.
Post questions on Edmodo, for BIE.

PBL Starter Kit:  To-the-point Advice, Tools and Tips for Your First Project (Larmer, 2009)

  • What is Project Based Learning?


  • PBL Misconceptions
    • Not just the dessert
    • Not just a string of activities
    • Not just “making something” or “hands-on” or “activity learning”
  • PBL’s Effectiveness:  What Experience and Research Tell Us




  • The Role of the Teacher in PBL
    • Guide on the side, not sage on the stage.
    • It feels unnatural and challenging at first, but it gets better; you will never go back!

Real Life.  Real Knowledge.  Real Fun!  Project Based Learning (Pacific Education Institute, 2012)

This blog describes the experiences of a PEI rep at a recent PBL World conference, put on by the Buck Institute for Education.  In particular, a new web site offering free PBL training to educators was announced at this conference:  PBLU.org, short for PBL University.  Here’s a quote from the blog:

On Friday, the Buck Institute for Education (BIE) announced the launch of PBLU – Project Based Learning University!  In partnership with the National Environmental Education Foundation and BIE, the Pacific Education Institute created the Schoolyard Habitat Project for PBLU teachers who are interested in connecting their students to a real-world environmental project.

K-12 teachers can sign up at PBLU.org by choosing a project (such as the Schoolyard Habitat Project!) and then “sign up and sign in” to take five related classes – all for free!  The first round of classes starts on July 30th, with a second round scheduled for October of 2012.  Each 2-week class provides insight into project planning and implementation and is designed for a time commitment of 5 to 6 hours.

PBLU looks very interesting.



Common Craft. (2010).  Project-Based Learning Explained.  Commissioned by Buck Institute for Education.  [Video].  Retrieved July 29, 2012 from http://youtu.be/LMCZvGesRz8

Larmer, J. (2011). Project Based Learning. Buck Institute for Education. Retrieved July 24, 2012 from http://www.youtube.com/watch?v=Pou61mRWzlE&feature=youtu.be

Larmer, J. (2009). PBL Starter Kit: To-the-point Advice, Tools and Tips for Your First Project. Novato, CA: Buck Institute for Education.  pp 1-8.  Retrieved July 7, 2012 from


Larmer, J. & Mergendoller, J.R. (2010a). The Main Course, Not Dessert: How Are Students Reaching 21st Century Goals? With 21st Century Project Based Learning. Buck Institute for Education. Retrieved July 24, 2012 from http://www.bie.org/images/uploads/useful_stuff/Main_Course.pdf

Larmer, J. & Mergendoller, J.R. (2010b). 8 Essentials for Project-Based Learning. Educational Leadership. 68(1). Pp 52-55.

Lippy, K. (2006). Project-Based Learning: Engaging Students in Science. Small Schools Project. Retrieved July 29, 2012 from http://edvintranet.viadesto.com/media/EDocs/designseries_PBLscience.pdf

McCarthy, J. (2011)  Differentiated Instruction Through Project-Based Learning.  Buck Institute for Education.  [Video].  Retrieved July 29, 2012 from http://youtu.be/Grd_ozJQE_E

Pacific Education Institute. (2012). Real Life. Real Knowledge. Real Fun! Project Based Learning. Retrieved July 24, 2012 from http://pacificeducationinstitute.com/2012/06/29/real-life-real-knowledge-real-fun-project-based-learning/

Robin, J. (2011). What Project Based Learning Isn’t. HighTechHigh. [Video]. Retrieved July 24, 2012 from http://howtovideos.hightechhigh.org/video/265/What+Project+Based+Learning+Isn%27t

Wiliam, D. (2011). Embedded Formative Assessment. Bloomington, Indiana: Solution Tree Press.

Wiliam, D. (n.d.). Self and Peer Assessment. [Video]. Retrieved July 29, 2012 from http://www.journeytoexcellence.org.uk/videos/expertspeakers/selfandpeerassessmentdylanwiliam.asp

EDU6978: Week 04: Due 2012-07-22

The part of this week’s reading that really resonated with me was the article on “weed out” classes in college.  I see STEM pipeline leaks at the college level as a mix of social justice issues (getting minority and underrepresented students into college) and education reform at the college (and community college) level to keep those students in school and help them graduate with STEM qualifications.

Notes (Verbatim from source unless italic)

Science, Technology, Engineering & Math (SETDA, 2008)

Executive Summary

The students in kindergarten this year will graduate in 2020. It is our responsibility to ensure that our children are prepared to lead our country in the 21st Century and compete in the global marketplace. In order to do that, we need to provide our children with an education that includes a solid foundation in science, technology, engineering, and mathematics (STEM). We also need to encourage the students of today to pursue careers in STEM-related fields. The opportunity cost for not addressing this challenge is too high for our country to ignore. In this paper, SETDA discusses the importance of STEM education, the current state of STEM education, and barriers to implementing STEM education and recommends what stakeholders and policymakers can do to support STEM education.

  • What is STEM Education?  key words:  emphasis, integrate, entire curriculum
  • Why STEM Education is Important?  key words:  prepared, demand, projections, compete, growing
  • Current State of STEM Education
    • The initial force behind STEM education initiatives was to develop future engineers and scientists through the implementation of specialty or magnet high schools focusing on science, technology, engineering, and mathematics.
    • While this approach works for students enrolled in these high schools, the majority of kids in most school districts in the country do NOT have STEM school options.
  • STEM Education Initiatives
  • Barriers to STEM Education
    • What Hinders Districts from Offering High-Quality STEM Education Programs in ALL Schools?
      • Curriculum and credit issues
      • Lack of funding
      • Lack of qualified teachers
      • Inadequate policies to recruit and retain STEM-Educated Teachers
    • What Hinders our Teachers?
      • Difficult to retain teachers with STEM background
      • STEM-trained professionals often don’t pursue teaching because of low compensation
      • STEM teachers have difficulty advancing professionally
      • Lack of adequate preparation for teachers by higher education
      • Classroom time constraints
    • What Hinders our Kids?
      • Societal and cultural beliefs that mathematics, science, engineering, and technology are not for everyone.  Parents, teachers, and the community say to kids:
        • “I’m not good in science”
        • “I don’t have the engineering gene”
        • “I’m doing fine without mathematics skills”
        • “I didn’t need the Internet when I was in school”
      • Kids don’t see relevance of STEM education
      • Difficult to attract and keep kids in STEM careers
  • Key Recommendations
    • Where Do We Want to Go?  key concepts:  integrated curriculum, ALL children, start in Kindergarten
    • How Are We Going to Get There?  need strategic plan involving community, parents, school districts and states
      • Obtain societal support for STEM education
        • Not only do our students need a strong foundation in STEM in order to be successful in the workforce, as educated citizens, our students need a solid background in these areas so that they can make informed decisions in all parts of their lives – from the kind of car they drive and its impact on their budget, to the type of energy sources available for heating their homes, to the technology needed to stay connected with friends and family.
      • Expose students to STEM careers (examples)
      • Provide on-going and sustainable STEM professional development
        • Online Professional Development
        • Online Courseware
      • Provide STEM pre-service teacher training
        • Cincinnati Initiative for Teacher Education (CITE)
        • The U.S. Department of Energy, Office of Science:  Office of Workforce Development for Teachers and Scientists
      • Recruit and retain STEM teachers
        • Teachers Learning in Networked Communities (TLINC)
        • The UTeach Program
          • As a result, greater numbers of graduates with degrees in STEM fields are choosing teaching careers. Of those who graduated from the UTeach program and started teaching four years ago, approximately 82% are still teaching.


        • California Mathematics and Science Teacher Corps at California State University, Dominguez Hills
          • This program was created to provide training and credentials to retired and laid-off aerospace workers interested in becoming elementary or secondary mathematics and science teachers.
        • George Washington University, Washington DC Teacher Preparation Program – QUEST

In conclusion, education stakeholders have a responsibility to ensure that all students have access to high quality instruction in the STEM areas. STEM is a critical component of transforming our educational system and ensuring our students are prepared to thrive in the 21st century global economy. SETDA will continue to add resources and programs to:

Gauging the STEM Effect (Petrinjak, 2012)

Nearly two-thirds of educators say science, technology, engineering, and mathematics (STEM) education in the form of programs, courses, or certifications has been introduced in their states, according to a recent NSTA poll. However, few (10.5%) report receiving more time for teaching science as a result or having a dedicated STEM lab space (18%). Half of the participants indicated their schools offer engineering courses, and nearly 48% said computer science is not considered part of STEM education.

Approximately 41% said STEM professional development was regularly offered to teachers in their state, while slightly more than 16% did not know if any was offered. In addition, only 12% said teachers in their state or school were certified in STEM.

  • Article is basically some selected quotes from the poll grouped into sections entitled “Lacking Time, Resources” and “NCLB Hurdles” and “Integrated Approaches”
  • Roughly half of the quotes reveal that we have a long way to go yet in STEM education.  A couple of quotes particularly spoke to me (included as images in the original post).

Gender Math Gap is Cultural, Not Biological (Welsh, 2011)

  • This article summarizes a research paper, which I discuss below.
  • A couple of direct quotes from the authors of the paper
    • "This is not a matter of biology: None of our findings suggest that an innate biological difference between the sexes is the primary reason for a gender gap in math performance," study researcher Janet Mertz, of the University of Wisconsin-Madison, said in a statement. The study suggests that "the math-gender gap, where it occurs, is due to sociocultural factors that differ among countries, and that these factors can be changed."
    • "The girls living in some Middle Eastern countries, such as Bahrain and Oman, had, in fact, not scored very well, but their boys had scored even worse, a result found to be unrelated to either Muslim culture or schooling in single-gender classrooms," study researcher Jonathan Kane, of the University of Wisconsin-Whitewater, said in a statement.
    • "We found that boys — as well as girls — tend to do better in math when raised in countries where females have better equality," Kane said. "It makes sense that when women are well-educated and earn a good income, the math scores of their children of both genders benefit."

Debunking Myths About Gender and Mathematics Performance (Kane & Mertz, 2012)

    In summary, we conclude that gender equity and other sociocultural factors, not national income, school type, or religion per se, are the primary determinants of mathematics performance at all levels for both boys and girls. Our findings are consistent with the gender stratified hypothesis, but not with the greater male variability, gap due to inequity, single-gender classroom, or Muslim culture hypotheses. At the individual level, this conclusion suggests that well-educated women who earn a good income are much better positioned than are poorly educated women who earn little or no money to ensure that the educational needs of their children of either gender with regard to learning mathematics are well met. It is fully consistent with socioeconomic status of the home environment being a primary determinant for success of children in school. At the national level, the United States ranked only thirty-first in mean mathematics performance out of the sixty-five countries that participated in the 2009 PISA. Eliminating gender discrimination in pay and employment opportunities could be part of a win-win formula for producing an adequate supply of future workers with high-level competence in mathematics. Wealthy countries that fail to provide gender equity in employment are at risk of producing too few citizens of either gender with the skills necessary to compete successfully in a knowledge-based economy driven by science and technology.

Evidence Persists of STEM Achievement Gap for Girls (Robelen, 2012)

  • That’s right. Name your [AP] subject. Chemistry? Check. Biology? Check. Computer science. Statistics. Calculus. And on and on. In all 10 courses, the finding is the same: Boys on average outperform girls.
  • The latest [NAEP] data, for 2011, show a 5-point gap for 8th graders on NAEP’s 0-500 scale. (That was the only grade level tested in 2011.) Two years prior, science data for 2009 show average scores for girls trailing boys at all three grade levels tested. But what’s striking here is that the gap appears to widen as students get older, from 2 points in 4th grade to 6 points by 12th.
  • On PISA, boys—on average—outperform girls in math across the 34-member nations of the Organization for Economic Cooperation and Development. In science, there is no measurable difference. (This is based on the most recent data, from 2009, for the Program for International Student Assessment, an exam for 15-year-olds.)
    • But guess what? Not only did the U.S. data show a gender gap in BOTH math and science (with females behind). In each case, the PISA report said, the U.S. gap was among the largest of any country tested.
  • To make matters a little more confusing, another global database suggests that girls may have an achievement edge in both math and science when looking across nations. These data are for the most recent round, in 2007, of TIMSS, the Trends in International Mathematics and Science Study
  • To be sure, global comparisons are complicated. It’s not just a matter of what goes on in the education system, but how that fits into the larger social and cultural context. And maybe that’s exactly the point when it comes to an issue like STEM education, where many experts and advocates believe the United States still needs to see a change in attitudes, from how students view themselves to the messages they receive from educators, their parents, and society at large.

“Weed Out” Classes Are Killing STEM Achievement (Koebler, 2012)

  • Not enough American students are showing interest in studying for degrees in science, technology, engineering and math, but what experts are more shocked by is the fact that colleges are throwing out the students who are interested.
  • But in a country where more scientists are desperately needed, that culture needs to change, says Freeman Hrabowski, president of the University of Maryland Baltimore County. Hrabowski was named as one of TIME Magazine’s 100 most influential people earlier this week for the University’s success in graduating minority students in STEM.
  • "A lot of people will say [unprepared freshmen] is the problem of the high schools," says David Seybert, dean of the Bayer School for Natural and Environmental Sciences at Duquesne University. "But we have to be part of that solution."



State Educational Technology Directors Association. (2008). Science, Technology, Engineering & Math. SETDA. Retrieved July 16, 2012 from http://www.setda.org/c/document_library/get_file?folderId=270&name=DLFE-257.pdf

Petrinjak, L. (2012, May 8). Gauging the STEM Effect. NSTA WebNews. Retrieved July 16, 2012 from http://www.nsta.org/publications/news/story.aspx?id=59372

Welsh, J. (2011, December 12). ‘Gender math gap’ is cultural, not biological. MSNBC.msn.com. Retrieved July 17, 2012 from http://www.msnbc.msn.com/id/45646131/ns/technology_and_science-science/#.T_0GXCtYvdI

Kane, J. M., & Mertz, J. E. (2012). Debunking Myths about Gender and Mathematics Performance. Notices of the American Mathematical Society. 59(1). pp 10-21.  Retrieved July 18, 2012 from http://www.ams.org/notices/201201/rtx120100010p.pdf

Robelen, E. (2012, June 11). Evidence Persists of STEM Achievement Gap for Girls. Education Week. Retrieved July 17, 2012 from http://blogs.edweek.org/edweek/curriculum/2012/06/evidence_persists_of_stem_achi.html

Koebler, J. (2012, April 19). Experts: ‘Weed Out’ Classes Are Killing STEM Achievement. US News and World Report. Retrieved July 17, 2012 from http://www.usnews.com/news/blogs/stem-education/2012/04/19/experts-weed-out-classes-are-killing-stem-achievement

EDU6978: Week 03: Due 2012-07-15


During my internship year of teaching, my primary way of eliciting evidence of student learning was through discussion in class.  The format of the class was a short review of topics related to the Math SAT, then a sample Math section of the SAT, then a review of the questions covered.  I could provide evidence of student learning if I saw students doing better on the geometry, algebra, numeracy, or chart-and-graph questions that we worked on in class.

With the information of students either getting problems right during sample tests or being able to describe where they went wrong, I was able to suggest other homework or exercises for students to complete.

Since there was no grade in the class, the students were all preparing to take the SAT on June 3rd; that was the final summative assessment, if you will.  What I think I could have improved on was gathering concrete data showing that students were getting more answers correct in the various subtopics of the Math SAT.  I have spoken with my mentor teacher at length about a more effective use of homework, but I think it is fair to say that homework is not something students are motivated to do at my school, and the value of pressing the issue was unclear.

I have to admit, too, that I was far more focused on the presentation of material, the simulation of the testing environment, and getting the students to see questions than on whether the students were improving their performance on the test.  An embedded formative assessment model would enable me to tailor my lesson delivery based on students' ability to complete lessons or problems.  I would also enable students to help each other learn and push them to take charge of their own learning.

I was extremely happy to see students engaging each other with explanations of problems they got right, i.e., helping other students who didn't get the answers.  However, I didn't harness that information to feed back into the instructional part of the course.  Also, for all the sample test questions we took and talked about, I never kept a running tally of who was getting which questions right or wrong.  That would have enabled me to tailor some instruction appropriately.
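A running tally like the one described above would not need to be elaborate. Here is a minimal sketch, in Python, with hypothetical student and subtopic names (nothing here is from the course itself):

```python
from collections import defaultdict

# Tally of [correct, attempted] per Math SAT subtopic, and per student,
# so instruction can be tailored to both the class and individuals.
by_topic = defaultdict(lambda: [0, 0])
by_student = defaultdict(lambda: [0, 0])

def record(student, topic, correct):
    """Record one student response for one subtopic."""
    for tally in (by_topic[topic], by_student[student]):
        tally[1] += 1
        if correct:
            tally[0] += 1

# Hypothetical responses from one practice section.
record("Ana", "geometry", True)
record("Ben", "geometry", False)
record("Ana", "algebra", True)

for topic, (right, total) in sorted(by_topic.items()):
    print(f"{topic}: {right}/{total} correct")
```

Even a spreadsheet would do; the point is that once the data is recorded, it can drive which subtopics to reteach with the whole class and which students need individual follow-up.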


STEM Education Quality Framework (Dayton STEM Network, 2011)

  • Really liked this summary.


Framework in some detail


Links to STEM Model Projects and Programs

To be evaluated against the Dayton Regional STEM Education Quality Framework.


Embedded Formative Assessment (Wiliam, 2011)

Chapter 4 Outline (Verbatim from author unless italic)

Eliciting Evidence of Learners’ Achievement

We discovered in chapter 3 the importance of being clear about what we want students to learn. Once we’ve accomplished that, we need to ascertain where the students are in their learning. In many classrooms, the process of eliciting such evidence is done mainly on the fly—teachers almost always plan the instructional activities in which they will engage their students, but they rarely plan in detail how they are going to find out where the students are in their learning. This chapter emphasizes the importance of planning this process and provides guidelines on what makes a good question, as well as some alternatives to questions. The chapter also offers some practical guidance on how to use questions effectively to adjust instruction to meet students’ needs.

Wiliam, Dylan (2011-05-01). Embedded Formative Assessment (Kindle Locations 1487-1492). Ingram Distribution. Kindle Edition.

  • Finding Out What Students Know
    • TIMSS example questions that “seem” the same many times aren’t.


    • Opinions vary on why these two sample questions are so different in success rate.
    • In answering the second question, 39 percent of the students chose B. Since this question was answered correctly by 46 percent of the students, it was answered incorrectly by 54 percent, but 39 percent chose the same incorrect answer.
    • This suggests that students’ errors on this item are not random but systematic.
    • Students seem to be misapplying a rule from their first experiences with fractions.
    • Smallest fraction=largest denominator (OK).  Largest fraction=smallest denominator (NOT OK).
    • In other words, there is strong evidence that many students who got the first question right got it right for the wrong reason.
    • Teachers need to ask questions better aimed at uncovering misconceptions.
  • Where Do Students’ Ideas Come From?
    • “upside-down triangle”


    • However, some students describe this shape as an upside-down triangle even though they know that the orientation does not matter for the naming of the shape, because they are using vernacular, rather than mathematical, language.
    • In the world outside the mathematics classroom, however, the word square is often used to describe orientation rather than shape…
    • What seems like a misconception is often, and perhaps usually, a perfectly good conception in the wrong place.
    • When a child says, “I spended all my money,” this could be regarded as a misconception, but it makes more sense to regard this as overuse of a general rule.
    • Some people have argued that these unintended conceptions are the result of poor teaching.
    • The key insight here is that children are active in the construction of their own knowledge.  correlation <> causation, trees and wind.
    • The second point is that even if we wanted to, we are unable to control the students’ environments to the extent necessary for unintended conceptions not to arise.
    • 2.3 * 10 <> 2.30.  But 7 * 10 =70!  We could make such a “misconception” less likely to arise by introducing decimals before teaching multiplying single-digit numbers by ten, but that would be ridiculous.
    • Thus, it is essential that teachers explore students’ thinking before assuming that students have understood something.
    • Pair of equations


    • When asked what a and b are, many students respond that the equations can’t be solved.  Skills versus beliefs.
    • The point here is that had the sixteen in the second equation been any other number at all, provided they had the necessary arithmetical skills, students would have solved these equations, and the teacher would, in all likelihood, assume that the class’s learning was on track.
    • Questions that give us insight into student learning are not easy to generate and often do not look like traditional test questions. Indeed, to some teachers, they appear unfair.  Example


    • The question is perceived as unfair because students “know” that in answering test questions, you have to do some work, so it must be possible to simplify this expression; otherwise, the teacher wouldn’t have asked the question—after all, you don’t get points in a test for doing nothing.
    • Example, “which is larger”


    • The fact that this item is seen as a trick question shows how deeply ingrained into our practice is the idea that assessment should allow us to sort, rank, and grade students, rather than inform the teacher what needs to be done next.
    • Example, what is between water molecules?  “water”.
    • Questions that provide a window into students’ thinking are not easy to generate, but they are crucially important if we are to improve the quality of students’ learning.
    • So the important issue is this: does the teacher find out whether students have understood something when they are still in the class, when there is time to do something about it, or does the teacher only discover this once he looks at the students’ notebooks?
    • As noted previously, questions that give us this window into students’ thinking are hard to generate, and teacher collaboration will help to build a stock of good questions.
  • Practical Techniques
    • Teacher-led classroom discussion is one of the most universal instructional practices.
    • So although many people assume that American teachers talk too much, they actually talk less than teachers in countries with higher performance. It would appear that how much students learn depends more on the quality than the quantity of talk.
    • Less than 10 percent of the questions that were asked by teachers in these [Brown & Wragg, 1993] classrooms actually caused any new learning.
    • I suggest there are only two good reasons to ask questions in class: to cause thinking and to provide information for the teacher about what to do next.  Example, triangle with 2 right angles.
    • The other reason to ask questions is to collect information to inform teaching…
    • …American classrooms were characterized by relatively low degrees of student engagement.
    • Student Engagement
      • Malcolm Gladwell, Outliers, ages of professional hockey players
      • Other sports, as well.
      • In almost any classroom, some students nearly dislocate their shoulders in their eagerness to show the teacher that they have an answer to the question that the teacher has just asked.
      • High-engagement classrooms (students working together and using language as a tool) show higher achievement.
      • …[Students] who are participating are getting smarter, while those avoiding engagement are forgoing the opportunities to increase their ability.
      • This is why many teachers now employ a rule of “no hands up except to ask a question” in their classrooms (Leahy, Lyon, Thompson, & Wiliam, 2005). The teacher poses a question and then picks on a student at random.
      • Some teachers claim to be able to choose students at random without any help, but most teachers realize that when they are in a hurry to wrap up a discussion so that the class can move on, they are often drawn to one of the usual suspects for a good answer.  Popsicle sticks.
      • The major advantage of Popsicle sticks can also be a disadvantage. It is essential to replace the sticks to ensure that students who have recently answered know they need to stay on task, but then the teacher cannot guarantee that all students will get a turn to answer.
      • Most teachers realize that being called upon at random will be a shock for students unused to participation in classrooms. However, moving to random selection can also be unpopular with students who participate regularly.
      • For other students, random questioning is unwelcome because they are unable to control when they are asked questions.
      • Lemov (2010).  Cold-calling and No Opt Outs.
      • Often, students will say, “I don’t know,” not because they do not know, but because they cannot be bothered to think.
      • “phone a friend”, “ask the audience”, “fifty-fifty”…All these strategies derive their power from the fact that classroom participation is not optional…
    • Wait Time
      • However, the amount of time between the student’s answer and the teacher’s evaluation of that answer is just as, if not more, important.
    • Alternatives to Questions
      • Asking questions may not be the best way to generate good classroom discussions.  Try statements.
      • The quality of discussion is usually enhanced further when students are given the opportunity to discuss their responses in pairs or small groups before responding (a technique often called “think-pair-share”).
    • Evaluative and Interpretive Listening
      • [John Wooden] was once asked why other coaches were not as successful, and he said, “They don’t listen.”
      • When teachers listen to student responses, many focus more on the correctness [Evaluative listening] of the answers than what they can learn about the student’s understanding (Even & Tirosh, 1995; Heid, Blume, Zbiek, & Edwards, 1999).
      • However, when teachers realize that there is often information about how to teach something better in what students say—and thus how to adjust the instruction to better meet students’ needs—they listen interpretively.
    • Question Shells
      • There are a number of general structures that can help frame questions in ways that are more likely to reveal students’ thinking.  Example: “Why is ___ an example of ___?”


      • Another technique is to present students with a contrast and then ask them to explain the contrast, as shown in table 4.2.


    • Hot-Seat Questioning
      • In hot-seat questioning, the teacher asks a student a question and then a series of follow-up questions to probe the student’s ideas in depth.
      • If teachers are to harness the power of high-quality questioning to inform their instructional decisions, they need to use all-student response systems routinely.
    • All-Student Response Systems
      • The problem with such techniques [“thinking thumbs” or “fist to five”] is that they are self-reports, and, as we know from literally thousands of research studies, self-reports are unreliable.
      • However, a very small change [asking cognitive not affective questions] can transform useless self-reports into a very powerful tool.
        • Example students that signal correct or incorrect know they will be followed up with.
        • Mercury phosphate equation example.
        • Example about the length of a line on a grid.
        • Example about incorrect classification of levers.
      • In each of these four examples, the teacher was able to ensure both student engagement—after all, it is very easy to tell if a student has not voted—and high-quality evidence to help decide what to do next.
    • ABCD Cards
      • Each student has a set of cards with letters.  Example.


      • Most students should recognize that A and B represent one-fourth, and hopefully, most students will also realize that in diagram C, the fraction shaded is not one-fourth.
      • The use of multiple correct answers allows the teacher to incorporate items that support differentiation, by including some responses that all students should be able to identify correctly but also others that only the ablest students will be able to answer correctly. Such differentiation also helps keep the highest-achieving students challenged and, therefore, engaged.
      • Example Heysel Stadium disaster 1985.  …she was pleased to see that the class now had a much more complex view of the tragedy…
      • One elementary school teacher takes the idea of cards one step further with what she calls “letter corners.”
      • ABCD cards can also be used to bridge two lessons.  Teacher verified that content covered that day was understood and content for the next day wasn’t understood yet.
      • A major difficulty with ABCD cards is that they generally require teachers to have planned questions carefully ahead of time, and so they are less useful for spontaneous discussion….
    • Mini Whiteboards
      • Whiteboards are powerful tools in that the teacher can quickly frame a question and get an answer from the whole class….
      • One teacher wanted to use whiteboards, but there was insufficient money in the school’s budget to acquire these, so instead, she placed sheets of letter-sized white card stock inside page protectors to provide a low-cost alternative.
    • Exit Passes
      • When questions require longer responses, teachers can use the exit passes….
      • Exit pass questions work best when there is a natural break in the instruction; the teacher then has time to read through the students’ responses and decide what to do next.
      • All these techniques create student engagement while providing the teacher with evidence about the extent of each student’s learning so that the teacher is able to adjust the instruction to better meet the students’ learning needs.
    • Discussion Questions and Diagnostic Questions
      • Example


      • Students who say A or B have some understanding but not of the value of the index of a sequence.
      • This question can lead to a valuable discussion in the mathematics classroom, since it allows the teacher to challenge the idea that mathematics is a right-or-wrong subject.
      • The teacher learns little just by seeing which of these alternatives students choose. She has to hear the reasons for the choices. That is why this question is a valuable discussion question, but it is not a good diagnostic question.
      • Example


      • In some ways, this is a sneaky question, because there are two correct—and four incorrect—responses.
      • The crucial feature of such diagnostic questions is based on a fundamental asymmetry in teaching; in general, it is better to assume that students do not know something when they do than it is to assume they do know something when they don’t. What makes a question useful as a diagnostic question, therefore, is that it must be very unlikely that the student gets the correct answer for the wrong reason.
      • Science Example


      • A teacher who has been focusing on Archimedes’ principle hopes that the students choose B, but there are valid reasons for choosing alternatives.
      • Example:  physics/forces.


      • A seems reasonable, but is naïve.
      • B has also an element of truth.
      • C was what the teacher had been hoping for, but is a little imprecise.
      • D is similar to B
      • E is clearly mystical and wrong.
      • The first and last responses, therefore, are obviously incorrect and related to well-known naïve conceptions that students have about the physical world.
      • Physics is an unnatural way of thinking about the world—if it were natural, it wouldn’t be so hard for students to learn.
      • This question is not really asking, “What’s happening here?” It is asking, “Can you think about this situation like a physicist?”
      • History question example.  World War II started in 1937, 1938, 1939, 1940, or 1941?
      • Again, the point is that the teacher learns little about the quality of student thinking from hearing which answer a student chooses; the teacher needs to hear reasons for the choice, and that means hearing from every student in the classroom.
      • Example:  Diagnostic question in history.
      • It is the quality of the distractors that is crucial here—it is only because the distractors are so plausible that the teacher can reasonably conclude that the students who choose D have done so for the right reason.
      • Example:  Thesis statement question.
      • Example:  Modern foreign language question.
      • Diagnostic questions can be used in a variety of ways.
        • They can be used as “range-finding” questions to find out what students already know about a topic before beginning the instruction.
      • [D]iagnostic questions are most useful in the middle of an instructional sequence to check whether students have understood something before moving on.  [Hinge-point questions.]
      • Writing a hinge-point question is a craft.  Two guidelines/principles:
        • First, it should take no longer than two minutes, and ideally less than one minute, for all students to respond to the question
        • Second, it must be possible for the teacher to view and interpret the responses from the class in thirty seconds (and ideally half that time).
      • Example NAEP flights from Newton to Oldtown.
      • [S]tudents who think there are a hundred minutes in an hour and those who know there are just sixty get the same answer.
      • Earlier in this chapter, I proposed that there are two good reasons to ask questions in classrooms: to cause thinking and to provide the teacher with information that assists instructional decision making.
        • Or, to put it another way, ideally it would be impossible for students to get the right answer for the wrong reason
        • One way to improve hinge-point questions is to involve groups of teachers.  See also Larry Cuban on this point.
        • No question will ever be perfect, but by constantly seeking to understand the meaning behind students’ responses to our questions, we can continue to refine and polish our questions and prompts
        • A second requirement of questions to assist instructional decision making—much less important than the first but still useful to bear in mind when designing questions—is that the incorrect answers should be interpretable. That is, if students choose a particular incorrect response, the teacher knows (or at least has a pretty good guess) why they have done so.
      • Multiple-choice questions are often criticized because they assess only low-order skills such as factual recall or application of a standard algorithm, although as the previous examples show, they can, if carefully designed, address higher-order skills.
      • In the classroom, these are much less important considerations, and multiple-choice questions have one great feature in their favor: the number of possible student responses is limited.
      • Sometimes it makes sense to administer the question as a series of simple questions.
      • Example lines of symmetry.  How many does each shape have?


      • She doesn’t try to remember how each student responds. Instead, she focuses on just two things: 1. Are there any items that a significant proportion of the class answers incorrectly and will need to be retaught with the whole class? 2. Which two or three students would benefit from individualized instruction?
      • She realizes that the incorrect answers may not necessarily indicate poor mathematical understanding.
      • In this episode, the teacher administered, graded, and took remedial action regarding a whole-class quiz in a matter of minutes, without giving herself a pile of grading to do. She does not have a grade for each student to put in a gradebook, but this is a small price to pay for such an agile piece of teaching.
      • However, high-quality questions seem to work across different schools, districts, states, cultures, and even languages [portability]. Indeed, sharing high-quality questions may be the most significant thing we can do to improve the quality of student learning.
  • Conclusion

Years ago, David Ausubel (1968) argued that the most important factor influencing learning is what the learner already knows and that the job of the teacher is to ascertain this and to teach accordingly. Students’ conceptions are not random aberrations but the results of sophisticated and creative attempts to make sense of their experiences. Within a typical classroom, there is clearly not enough time for the teacher to treat each student as an individual, but with careful planning and the thoughtful application of the techniques presented in this chapter, the teacher can make the classroom a much more engaging place for students and one in which the teacher is able to make rapid and effective instructional adjustments to meet the learning needs of all students. Once the teacher knows where learners are in their learning, she is in a position to provide feedback to the learners about what to do next—and this is the subject of the next chapter.



The Dayton Regional STEM Center. (2011).  STEM Education Quality Framework.  Retrieved July 10, 2012 from http://www.washingtonstem.org/images_load/STEM%20Ed%20Quality%20Framework_FINAL%202.pdf

Wiliam, D. (2011). Embedded Formative Assessment. Bloomington, Indiana: Solution Tree Press.

EDU6978: Week 02: Due 2012-07-08

I spent the past year as a STEM Specialist in my teaching internship, but this week I took my first really critical look at the integration of the teaching of the components:  science, technology, engineering, and mathematics.  Better late than never!  I was also exposed to the first (and, I would argue, underlying) concept of embedded formative assessment, namely the sharing of learning goals and expectations in a way that students can use.

The class all agreed with Wiliam (2011) that the purpose of sharing learning expectations is so that students know where they have come from, where they are, and where they are going.  I pointed out that this knowledge is critical for the other strategies of formative assessment.  As I reflect on my practice from the past year, I believe I started every class session with a sense of where we were in the book, where we had come from and where we were going, but I don’t think I communicated the specifics of what we aimed to accomplish with the content of the course.  Although I had very little opportunity or practice with formative assessment, I am looking forward to practicing it at my next job.

When the whole class was asked to reflect on the “faces of STEM” (Lippy & Henrikson, 2012), it was clear that no one had interned in a location that had exemplary integration of technology and engineering with science and math.  Lantz (2009) has several ideas why that is, namely that there are no real established standards, endorsements, training, or accountability for looking at those subjects in a unified way.  A report I found

Nevertheless, there is much reporting that project-based learning (Edutopia, 2010) shows great promise for helping students in math and science as well as technology and engineering.  It was certainly my experience during my intern teaching that projects are much more authentic to the students, and therefore show more promise for generating enthusiasm for subjects, than lecture and testing.


Edutopia:  An Introduction to Project Based Learning (Edutopia, 2010)

Seattle Physics Teacher, Scott McComb. Aviation High School.

Linda Darling-Hammond:  Broad tasks that have real problems that students can solve.

Students create something that demonstrates what they have learned.

Seymour Papert:  “get rid of curriculum, learn this where you need it.”

Project Learning

  • In-Depth Investigations of Subject Matter: 
  • Outside Experts That Supplement Teacher Knowledge

Benefits of Project Learning

  • Increased Academic Achievement
  • Increased Application and Retention of Information
  • Critical Thinking
  • Communication
  • Collaboration

Mike Bonfitz (FAA):  for 9th graders to pull this off is amazing.

Science, Technology, Engineering, and Mathematics (STEM) Education:  What Form?  What Function? (Lantz, 2009)

Outline (Verbatim from author unless italic)

  • STEM education offers students one of the best opportunities to make sense of the world holistically, rather than in bits and pieces
  • [STEM education] is actually trans-disciplinary in that it offers a multi-faceted whole with greater complexities and new spheres of understanding that ensure the integration of disciplines.
  • The four recommendations [from Rising Above the Gathering Storm, 2005] were:
    • Increase America’s talent pool by vastly improving K-12 mathematics and science education
    • Sustain and strengthen our nation’s commitment to long-term basic research
    • Develop, recruit and retain top students, scientists, and engineers from both the United States and abroad
    • Ensure that the United States is the premier place in the world for innovation
  • Have we seen far reaching innovations in curriculum and program design and in the structure of schools that would add to this STEM movement?…”No.”
  • American high schools still remain highly departmentalized, stratified, and continue to teach subjects in isolation, with little to no attempts to draw connections among the STEM disciplines.
  • Teachers at [elementary and middle school] levels are ill prepared to teach the STEM disciplines of science and mathematics, as revealed by the low numbers of highly qualified teachers.
    • No STEM standards
    • No STEM teacher certification
    • Goals need better delineation
    • Discipline needs to be better defined
  • The work of the committee [for Rising Above the Gathering Storm] is most laudable; however, it still falls far short of providing an operational definition of world-class standards and concomitant curriculum.
  • Although the function of STEM education seems to be converging slowly (in definition and consensus), the form (how it looks in the classroom) has not been proposed.
  • Standards to be used to develop trans-disciplinary STEM exist
    • National Science Education Standards (NRC, 1996)
    • National Council of Teachers of Mathematics Standards (NCTM, 1989 and 2000)
    • National Education Technology Standards for Students (ISTE, 1998, 2007)
    • Standards for Technological Literacy (ITEA, 2007)
  • Barriers to STEM Education (misconceptions) [a good list]


  • One of the misconceptions identified as a barrier to STEM education was “STEM education consists only of the two bookends—science and mathematics”
  • The engineering component of STEM education puts emphasis on the process and design of solutions, instead of the solutions themselves. [I personally think that definition is too narrow.  How can you care about a solution’s process and design without caring about the solution itself?  That makes no sense.]
  • The technology component allows for a deeper understanding of the three other components of STEM education.
  • None of these curricula (below) fit our definition of trans-disciplinary
    • Engineering by Design, from the Center to Advance the Teaching of Technology and Science (CATTS)
    • Engineering is Elementary (EiE), from the National Center for Technological Literacy (NCTL)
    • Invention, Innovation, and Inquiry, from the International Technology Education Association (ITEA)
  • What philosophical and theoretical elements should be used to guide the design and development of such a curriculum?
    • Standards driven
    • Understanding by Design (UbD)
    • Inquiry-based teaching and learning
    • Problem-Based Learning
    • Performance-based teaching and learning
    • 5E (Engagement, Exploration, Explanation, Elaboration and Evaluation) Teaching, Learning and Assessing Cycle
    • Digital curriculum integrated with digital teaching technologies
    • Formative and summative assessments with both task and non-task specific rubrics.
  • Consequently, STEM education curricula should be driven by engaging engineering problems, projects, and challenges, which are embedded within and as culminating activities in the instructional materials.

The Different Faces of STEM (Henrikson & Lippy, 2012)


Embedded Formative Assessment (Wiliam, 2011)

Chapter 3 Outline (Verbatim from author unless italic)

Clarifying, Sharing, and Understanding Learning Intentions and Success Criteria

It seems obvious that students might find it helpful to know what they are going to be learning, and yet, consistently sharing learning intentions with students is a relatively recent phenomenon in most classrooms. This chapter reviews some of the research evidence on the effects of ensuring learners understand what they are meant to be doing and explains why it is helpful to distinguish between learning intentions, the context of the learning, and success criteria. The chapter also provides a number of techniques that teachers can use to share learning intentions and success criteria with their students.

  • Why Learning Intentions Are Important
    • “imagine oneself on a ship sailing across an unknown sea, to an unknown destination. An adult would be desperate to know where he is going. But a child only knows he is going to school…” (White, 1971, p. 340)
    • Not all students have the same idea as their teachers about what they are meant to be doing in the classroom.
    • If I show a piece of writing to a group of third graders and ask them why I think it’s a good piece of writing, some will respond with contributions like, “It’s got lots of descriptive adjectives,” “It’s got strong verbs,” or “It uses lots of different transition words.”
    • A number of research studies have highlighted the importance of students understanding what they are meant to be doing.
    • To illustrate this, I often ask teachers to write 4x and 4½. I then ask them what the mathematical operation is between the 4 and the x, which most realize is multiplication. I then ask what the operation is between the 4 and the ½, which is, of course, addition. I then ask whether any of them had previously noticed this inconsistency in mathematical notation—that when numbers are next to each other, sometimes it means multiply, sometimes it means add, and sometimes it means something completely different, as when we write a two-digit number like 43. Most teachers have never noticed this inconsistency, which presumably is how they were able to be successful at school.
    • Study of science classrooms
      • Seven of the classrooms were seventh grade; three were eighth grade; and two were ninth grade.
      • ThinkerTools curriculum, 7 modules
      • Each module incorporated a series of evaluation activities.
        • Six classes did discussion-based evaluation
        • Other six classes did reflective assessment, using criteria, peer ratings 
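Wiliam’s 4x versus 4½ example above can be written out explicitly. The same act of placing symbols side by side carries three different meanings, which is exactly the inconsistency he says most teachers never notice:

```latex
4x            = 4 \times x      \quad \text{(juxtaposition means multiply)}
4\tfrac{1}{2} = 4 + \tfrac{1}{2} \quad \text{(juxtaposition means add)}
43            = 4 \times 10 + 3  \quad \text{(juxtaposition encodes place value)}
```

A student who has not been told which convention applies has to infer it from context, which is Wiliam’s point about how opaque our learning intentions can be.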

[to be continued]



Edutopia. (2009).  An Introduction to Project-Based Learning.  [Video file].  Retrieved July 5, 2012 from http://www.youtube.com/watch?v=dFySmS9_y_0

Lantz, H. B., Jr. (2009). Science, Technology, Engineering, and Mathematics (STEM) Education: What Form? What Function? Retrieved July 3, 2012 from http://stem.afterschoolnetwork.org/sites/default/files/stemeducation-whatformfunctionarticle.pdf

Wiliam, D. (2011). Embedded Formative Assessment. Bloomington, Indiana: Solution Tree Press.

EDU6978: Week 01: Due 2012-07-01


Since NRC (1999) describes formative assessment as a key component in classroom environments that produce effective learning, and since Wiliam (2011) describes the five key strategies of formative assessment, let’s see how I did in the SAT Math Review course that I conducted this past year. Note that since the rest of Wiliam (2011) consists of deep dives on each strategy, my self-assessment below lacks an in-depth rubric.

Strategy, grade, and comments:

  • Learning Intentions and Criteria for Success Are Clear (A-): I think it was implicitly clear from the nature of the course what our learning intentions were, i.e. “to cover math that is going to be on the SAT.”  Criteria for success were also clear, i.e. “students need to answer questions of similar difficulty in order to get a good score on the SAT.”  I ding myself here only because I believe I never made that explicit, or probed to see whether it was explicit to the students.
  • Discussions, Activities and Learning Tasks Elicit Evidence of Learning (C): There was little variety in the activities used in the class.  We mostly did lecture and group discussion, with some pairwise discussion.  I ding myself further since the activities we did weren’t individually eliciting evidence of learning.  Since each student alone knew their answer and whether it was correct or incorrect, I couldn’t adjust the tempo or flow of the class relative to widespread mis/understanding in the class.  Note that NRC (1999) derives from their key findings the teaching implication that teachers need to cover concepts and examples in depth.  However, due to lack of time, I was not able to do this consistently.
  • Feedback That Moves Learning Forward (B+): When covering questions and answers in class, feedback was immediate and showed students where conceptual understanding or process was flawed.  However, I am at a loss to prove that learning was moved forward.  For example, if a student got a problem wrong, what steps could they take to do better next time?  That was unclear.
  • Activating Learners as Instructional Resources for One Another (A): The class was small, and the students were familiar with one another.  Thus student participation in discussions was good.  Students were regularly called upon to describe their thinking to other students who were having trouble.
  • Activating Learners as the Owners of Their Own Learning (C+): Students were owners of their own learning, since they all knew the SAT was coming, and they knew that this class was not graded.  However, were they activated in that role?  It was suggested that the regular homework was not stressed enough as a tool to help students take control of their learning.  Homework also had answers available, so students could believe they were owning their learning, but perhaps that gave them an opportunity to not be completely honest with themselves.

I believe I am not alone among my classmates in feeling that implementing formative assessment equitably across a whole class in the time allotted will be a challenge.  Take students who may not have made uniform progress in prior grades relative to the Math Practices (CCSSI, 2009), and getting the whole class to a uniform place for assessment and instruction may be difficult.  Compound that with Meyer’s (2010) recommendation that math and science classes need to practice and model patient problem solving on real-world problems, and it sounds like the main challenge for me will be finding the time.  However, for the sake of real learning, skipping formative assessment is not an option.


How People Learn (NRC, 1999)

In chapter 2 of the first reading (National Research Council [NRC], 1999), three key findings are elucidated as having “strong implications for how we teach” (p. 10).

Three Core Learning Principles (NRC, p. 10)


No reason to think this has changed from 1999 to the present. Learners come to grips with the world long before they darken the door of a classroom. Even in the classroom, unless the learning supplants their hard-won intuition from the real world, true learning will not occur.


An expert doesn’t just know more; they also have frameworks that help them reconstruct or re-derive relationships and laws they may have forgotten, and a schema for interpreting new data and seeing new patterns.


That’s what we are talking about: a metacognitive approach to instruction is assessment for learning, is formative assessment, where the student has all the information and tools to take control of their own learning.

Three Implications for Teaching (NRC, p. 15)

  1. Teachers must draw out and work with the preexisting understandings that their students bring with them.
    1. The child is not an empty vessel; the teacher must reveal students’ initial conceptions.
    2. Assessments must be frequent, and provide feedback, so that student thinking is modified, refined, and deepened.
    3. Schools of education should teach teachers
      1. how to recognize common preconceptions
      2. how to uncover less common preconceptions
      3. how to challenge and replace preconceptions
  2. Teachers must teach some subject matter in depth, providing many examples in which the same concept is at work and providing a firm foundation of factual knowledge.
    1. teach in-depth, even at the cost of coverage, but get coverage by coordinating across school years
    2. teachers must know the subject: progress of inquiry, terms of discourse, organization of information, growth of student thinking
    3. teachers need to balance the tradeoff between assessing depth and assessing objectively
  3. The teaching of metacognitive skills should be integrated into the curriculum in a variety of subject areas. Foster a student’s internal dialogue.
    1. Teachers should integrate metacognitive instruction and discipline-based learning.
    2. Schools of education should help develop strong metacognitive strategies for instruction and classroom use.

Bringing Order to Chaos (NRC, p. 18)

Proliferation of teaching strategies can be bewildering. The truth is you need them *all* because you are trying to uncover preconceptions, and there is no universal best teaching practice. Teachers should teach basic skills in a way that students can learn and then apply to problems.


Four Ways of Designing Classroom Environments (NRC, p. 19)

  1. Schools and classrooms must be learner centered
    1. cultural differences affect student learning.
    2. student theories of intelligence (fixed/malleable) can affect effort and motivation (Dweck, 1989).
  2. To provide a knowledge-centered classroom environment, attention must be given to what is taught (information, subject matter), why it is taught (understanding) and what competency or mastery looks like.
    1. Expertise is well-organized knowledge that supports understanding.
    2. Learning with understanding is not memorizing or disconnected facts.
    3. Knowledge-centered environments are not just about activities; they are about doing with understanding.
  3. Formative assessments—ongoing assessments designed to make students’ thinking visible to both teachers and students—are essential. They permit the teacher to grasp the students’ preconceptions, understand where the students are in the “developmental corridor” from informal to formal thinking, and design instruction accordingly. In the assessment-centered classroom environment, formative assessments help both teachers and students monitor progress.
    1. Think and do, not memorize and regurgitate.
  4. Learning is influenced in fundamental ways by the context in which it takes place. A community centered approach requires the development of norms for the classroom and school, as well as connections to the outside world, that support core learning values.
    1. Teachers should build a culture where it is ok to take risks, make mistakes, obtain feedback, and revise.
    2. Teachers need to build a community where students can learn together, and help each other.
    3. Teachers need to create communities that can encourage themselves and their peers.
    4. Schools need to connect learning to other aspects of students’ lives.

Applying the Design Framework to Adult Learning (NRC, p. 23)

Even adults could benefit from using the principles in How People Learn; for example, professional development programs for teachers typically:

  1. Are not learner centered
  2. Are not knowledge centered
  3. Are not assessment centered
  4. Are not community centered

Dan Meyer: TEDxNYED (Meyer, 2010)

Five signs that you are doing math reasoning wrong in the classroom:


Goal: Patient problem solving. Why are students impatient?

  1. TV situation comedies (problems resolve in 22 minutes).
  2. Books have problems that are just re-hashes of sample problems.
  3. Books have illustrations/pictures that overlay too much information at once.
  4. Formulation of the problem is key, but we just give problems to students.

Goal: Real world problems in our curriculum.

  1. Take out the extraneous information, make students decide.
  2. Take a video of the problem, “bait the hook”
  3. Get students on a level playing field.
  4. Conversations about error have been great (why does theory not match experiment).

Things you can do to improve your math reasoning in the classroom:


Standards for Math Practice (CCSSI, 2009)

  1. Make sense of problems and persevere in solving them.
  2. Reason abstractly and quantitatively.
  3. Construct viable arguments and critique the reasoning of others.
  4. Model with mathematics.
  5. Use appropriate tools strategically.
  6. Attend to precision.
  7. Look for and make use of structure.
  8. Look for and express regularity in repeated reasoning.

Connecting the Standards for Mathematical Practice to the Standards for Mathematical Content

The Standards for Mathematical Practice describe ways in which developing student practitioners of the discipline of mathematics increasingly ought to engage with the subject matter as they grow in mathematical maturity and expertise throughout the elementary, middle and high school years. Designers of curricula, assessments, and professional development should all attend to the need to connect the mathematical practices to mathematical content in mathematics instruction.

The Standards for Mathematical Content are a balanced combination of procedure and understanding. Expectations that begin with the word “understand” are often especially good opportunities to connect the practices to the content. Students who lack understanding of a topic may rely on procedures too heavily. Without a flexible base from which to work, they may be less likely to consider analogous problems, represent problems coherently, justify conclusions, apply the mathematics to practical situations, use technology mindfully to work with the mathematics, explain the mathematics accurately to other students, step back for an overview, or deviate from a known procedure to find a shortcut. In short, a lack of understanding effectively prevents a student from engaging in the mathematical practices.

In this respect, those content standards which set an expectation of understanding are potential “points of intersection” between the Standards for Mathematical Content and the Standards for Mathematical Practice. These points of intersection are intended to be weighted toward central and generative concepts in the school mathematics curriculum that most merit the time, resources, innovative energies, and focus necessary to qualitatively improve the curriculum, instruction, assessment, professional development, and student achievement in mathematics. (p. 8)

What a lot of jargon, bordering on gibberish!  I would translate thus: “look for the word understand in the standards and use the practices to help measure the extent of understanding for that content.”

Framework for K-12 Science Education (NRC, 2012)

On pages 50-53 of this source, the differences between science and engineering are described across these 8 practices, under the heading “Distinguishing practices in science from those in engineering.”

  1. Asking Questions and Defining Problems
  2. Developing and Using Models
  3. Planning and Carrying Out Investigations
  4. Analyzing and Interpreting Data
  5. Using Mathematics and Computational Thinking
  6. Constructing Explanations and Designing Solutions
  7. Engaging in Argument from Evidence
  8. Obtaining, Evaluating and Communicating Information

Anyone who has seen the TV show “Big Bang Theory” might be familiar with the animosity between three of the main characters and Howard, the lowly engineer who merely implements solutions or solves experimental problems and doesn’t do original science.  Of all these distinguishing practices, I think that attitude is the crux of the matter.  I actually don’t think these are distinguishing practices at all: they could be worded identically, except that for the engineer each would add “and then builds something that average people can use,” while for the scientist it would not.

My question is:  where are the standards for engineering education, the engineering standards?  It seems as though there have been some working groups and drafts proposed, but I couldn’t find a definitive engineering standards link.

Of course, that raises the question of what the technology education standards are.  Those standards do exist, and have existed since 2000.  I found the Technology Education standards downloadable here:  http://www.iteea.org/TAA/Publications/TAA_Publications.html

Embedded Formative Assessment (Wiliam, 2011)

  • Chapter 1 Outline (verbatim from author unless italic)
    • Why Educational Achievement Matters
      • Wiliam basically rehashes the arguments that dropouts are worth less to society (as measured by lifetime earnings) and the college-educated are worth much more.  My question:  is that all adjusted for the debt that the college-educated accrue?
      • “Higher levels of education mean lower health care costs, lower criminal justice costs, and increased economic growth.”
    • The Increasing Importance of Educational Achievement
      • Hourly pay has changed depending on level of educational achievement
      • Dropouts earn less.
      • Higher levels of education are associated with better health…
      • People with more education live longer… (the converse is not necessarily true!)
      • Educational achievement matters for society (taxes, health care costs, criminal justice costs)
      • Higher levels of education are needed in the workplace.  In fact, …young people today are substantially more skilled than their parents and grandparents.
      • Scores on standardized tests…have been rising steadily
      • Only 6 percent of children in 1947 performed as well as the average child of 2001 on the Similarities test.
      • However, the quality of teaching in public schools is, on average, higher than in private schools in the United States…[emphasis mine]
      • …[T]he scores of students attending private schools in the United States were higher than those attending public schools.
      • This, however, does not show that private schools are better than public schools, because the students who go to private schools are not the same as those who go to public schools.
      • Schools have improved dramatically, but the changes in the workplace have been even more extraordinary.
      • In the 1960s and ‘70s, the average workingman (or woman) needed neither to read nor to write during the course of a workday; there were many jobs available to those with limited numeracy and literacy skills, but now those jobs are disappearing.
      • But the speed of the down escalator has been increasing—technology and outsourcing are removing jobs from the economy—and if we cannot increase the rate at which our schools are improving, then, quite simply, we will go backward.
      • Changes in technology actually affect white-collar jobs more than they affect blue-collar jobs.
      • Routine data entry work and jobs as call-center operators were among the first to be outsourced, but now high-skilled jobs are outsourced, too.
      • The one really competitive skill is the skill of being able to learn. It is the skill of being able not to give the right answer to questions about what you were taught in school, but to make the right response to situations that are outside the scope of what you were taught in school. We need to produce people who know how to act when they’re faced with situations for which they were not specifically prepared. (Papert, 1998).
      • Getting every student’s achievement up to 400 (the OECD’s definition of minimal proficiency) would be worth $70 trillion, and matching the performance of the world’s highest-performing countries (such as Finland) would be worth over $100 trillion.  But what would it cost to get there?  Social Cost?  Political Cost?
    • Why is Raising Student Achievement So Hard?
      • …[T]he depressing reality is that the net effect of the vast majority of these measures [to raise standards] on student achievement has been close to, if not actually, zero.
      • [Structural changes:  size]  However, the promise of such smaller high schools was not always realized.
      • In other cases, the potential benefits of small high schools were not realized because the creation of small high schools was assumed to be an end in itself, rather than a change in structure that would make other needed reforms easier to achieve.  [like smaller student-to-teacher ratios = engagement = relationships, e.g. looping]
      • Other countries are going in the opposite direction. …[B]ut as yet, there is no evidence that this has led to improvement.
      • [Structural changes:  governance].    As the characteristics of successful charter schools become better understood, it will, no doubt, be possible to ensure that charter schools are more successful, but at the moment, the creation of charter schools cannot be guaranteed to increase student achievement (Carnoy, Jacobsen, Mishel, & Rothstein, 2005).
      • In England, a number of low-performing schools have been reconstituted as “academies”, [but] a comparison with similarly low-performing schools that were not reconstituted as academies shows that they improve at the same rate (Machin & Wilson, 2009).
      • For-profit schools in Sweden also don’t show a significant improvement.
      • Specialist schools in England (similar to US magnet schools) get more money per student and thus get better scores, but so do public schools given extra funding.
      • [Curriculum reform].  Trying to change students’ classroom experience through changes in curriculum is very difficult. A bad curriculum well taught is invariably a better experience for students than a good curriculum badly taught: pedagogy trumps curriculum. Or more precisely, pedagogy is curriculum, because what matters is how things are taught, rather than what is taught.  [emphasis mine]
      • Three levels of curriculum:  intended, implemented and achieved.  The greatest impact on learning is the daily lived experiences of students in classrooms, and that is determined much more by how teachers teach than by what they teach.
      • [Textbooks]  Reviews of random-allocation trials of programs for reading in the early grades and for programs in elementary, middle, and high school math concluded that there was little evidence that changes in textbooks alone had much impact on student achievement.
      • [Scale] Challenges of scale ensure that small pilots almost never perform comparably when rolled out more widely.
      • [Technology] While there is no shortage of boosters for the potential of computers to transform education, reliable evidence of their impact on student achievement is rather harder to find.
      • [Computers] With the exception of Cognitive Tutor® Algebra I for 9th graders, there is little evidence that computers as teaching tools really work.  See the What Works Clearinghouse page:  http://ies.ed.gov/ncee/wwc/interventionreport.aspx?sid=87
      • [Interactive White Boards]  These tools are prohibitively expensive, and need a lot of training to use effectively.
      • Teachers’ aides also don’t help student achievement.
    • Three Generations of School Effectiveness Research
      • The fad of “school effectiveness” was a result of economic thinking applied to schools.
      • You can’t emulate the characteristics of an effective school to reproduce that school’s effectiveness elsewhere.  Unless you:
        • First, get rid of the boys.
        • Second, become a parochial school.
        • Third, and most important, move your school into a nice, leafy, suburban area.
      • But seriously, all we learned from school effectiveness studies is that differences in school scores are the result of differences in students.
      • This, in turn, means that only 8 percent of the variability in student achievement is attributable to the school, so that 92 percent of the variability in achievement is not attributable to the school.
      • It turns out that as long as you go to school (and that’s important), then it doesn’t matter very much which school you go to, but it matters very much which classrooms you’re in.  [emphasis mine]
      • It turns out that these substantial differences between how much students learn in different classes have little to do with class size, how the teacher groups the students for instruction, or even the presence of between-class grouping practices (for example, tracking). The most critical difference is simply the quality of the teacher.
    • The Impact of Teacher Quality
      • It was never considered that teacher quality could differ so markedly.
      • Teachers have long been treated as a commodity: identical, fungible, low value.
      • Performance-related pay for teachers, when calculated from test scores, is fundamentally flawed because teachers hand students off each year; who should get the credit?
      • Paying bonuses to teachers will not improve student test scores.
      • Which surprises economists, by the way.
      • [You must read Sanders & Rivers (1996), it shows that certain teachers had amazing impact relative to other teachers]
      • More recent studies (for example, Rockoff, 2004; Rivkin, Hanushek, & Kain, 2005) have confirmed the link between teacher quality and student progress on standardized tests, and it appears that the correlation between teacher quality and student progress is at least 0.2, and may be even larger (Nye, Konstantopoulos, & Hedges, 2004).
      • In other words, the most effective teachers generate learning in their students at four times the rate of the least effective teachers.
      • Excellence in classrooms is infectious; the standard deviation of scores decreases as scores increase.
      • In their work in kindergarten and first-grade classrooms, Bridget Hamre and Robert Pianta (2005) found that in the classrooms of the teachers whose students made the most progress in reading, students from disadvantaged backgrounds made as much progress as those from advantaged backgrounds, and those with behavioral difficulties progressed as well as those without.
      • This last finding is particularly important because it shows that Basil Bernstein was wrong—education can compensate for society provided it is of high quality.
      • Equitable outcomes will only be secured by ensuring that the lowest-achieving students get the best teachers, which, in the short term, means taking them away from the high-achieving students; this is politically challenging to say the least.  [emphasis mine]
      • Rather than thinking about narrowing the gap, we should set a goal of proficiency for all, excellence for many, with all student groups fairly represented in the excellent.
    • How Do We Increase Teacher Quality
      • Two options
        • The first is to attempt to replace existing teachers with better ones, which includes both deselecting serving teachers and improving the quality of entrants to the profession.
        • The second is to improve the quality of teachers already in the profession.
      • Deselection is the only option that we really haven’t tried yet, and thus must be the way we should go forward.
      • [Deselection] may not work because, to be effective, you have to be able to replace the teachers you deselect with better ones, and that depends on whether there are better potential teachers not currently teaching.
      • The third problem with deselection is that it is very slow.
      • We [the US versus Finland] can’t afford to turn away anyone who might be a good teacher, so we need to have better ways of identifying in advance who will be good teachers, and it turns out to be surprisingly difficult, because many of the things that people assume make a good teacher don’t.
      • It has been well known for quite a while that teachers’ college grades have no consistent relationship with how much their students will learn, and some have gone so far as to claim that the only teacher variable that consistently predicts how much students will learn is teacher IQ (Hanushek & Rivkin, 2006).
      • Some progress has been made in determining what kinds of teacher knowledge do contribute to student progress.
        • The MKT (mathematical knowledge for teaching) for elementary school teachers.
        • This suggests that subject knowledge accounts for less than 10 percent of the variability in teacher quality.
      • A study of over 13,000 teachers, involving almost 1 million items of data on over 300,000 students in the Los Angeles Unified School District (LAUSD), found that student progress was
        • unrelated to their teachers’ scores on licensure examinations,
        • nor were teachers with advanced degrees more effective (Buddin & Zamarro, 2009).
        • Most surprisingly, there was no relationship between the scores achieved by LAUSD teachers on the Reading Instruction Competence Assessment (which all elementary school teachers are required to pass) and their students’ scores in reading.
        • So did we discourage some people from teaching who would, on the contrary, have made excellent teachers?
      • [Extended analogy to finding a good quarterback in the NFL via New Yorker article from Malcolm Gladwell]
      • Although efforts continue to try to predict who will do well and who will not within the NFL, Gladwell suggests that there is increasing acceptance that the only way to find out whether someone will do well in the NFL is to try him out in the NFL.
      • The same appears to be true for teaching.
      • And it will also take a long time!
      • If we are serious about securing our economic future, we have to help improve the quality of those teachers already working in our schools…
    • Conclusion (complete/verbatim)
      • Improving educational outcomes is a vital economic necessity, and the only way that this can be achieved is by increasing the quality of the teaching force. Identifying the least effective teachers and deselecting them has a role to play, as does trying to increase the quality of those entering the profession, but as the data and the research studies examined in this chapter have shown, the effects of these measures will be small and will take a long time to materialize. In short, if we rely on these measures to raise student achievement, the benefits will be too small and will arrive too late to maintain the United States’ status as one of the world’s leading economies. Our future economic prosperity, therefore, depends on investing in those teachers already working in our schools.
  • Chapter 2 Outline (verbatim from author unless italic)
    • The Case for Formative Assessment
      • minute-by-minute and day-by-day formative assessment is likely to have the biggest impact on student outcomes
    • Professional Development
      • Having a master's degree as an elementary teacher makes no difference in student scores
      • It is clear that the value added by a teacher increases particularly quickly in the first five years of teaching…


      • Note that there is no requirement for teachers to improve their practice or even to learn anything. The only requirement is to endure 180 hours of professional development.
      • Teachers need professional development because the job of teaching is so difficult, so complex, that one lifetime is not enough to master it.
      • André Previn quit one day because he “wasn’t scared anymore.”  Teachers would never say that.
      • Even the best teachers fail.
      • No teacher is so good—or so bad—that he or she cannot improve.  That is why we need professional development.
      • However, there is consensus that the “one shot deals”—sessions ranging from one to five days held during the summer—are of limited effectiveness, even though they are the most common model.
      • Learning Styles
        • However, there is little agreement among psychologists about what the learning styles are, let alone how they should be defined.
        • Although a number of studies have tried to show that taking students’ individual learning styles into account improves learning, evidence remains elusive.
        • “Students need to learn both how to make the best of their own learning style and also how to use a variety of styles, and to understand the dangers of taking a limited view of their own capabilities.”  (Adey, Fairbrother, Wiliam, Johnson & Jones, 1999, p 36)
      • Educational Neuroscience
        • Another potential area for teacher professional development—one that has received a lot of publicity in recent years—is concerned with applying what we are learning about the brain to the design of effective teaching.
        • Some of the earliest attempts to relate brain physiology to educational matters were related to the respective roles of the left and right sides of the brain in various kinds of tasks in education and training despite clear evidence that the conclusions being drawn were unwarranted (see, for example, Hines, 1987).
      • Content Area Knowledge
        • After all, surely the more teachers know about their subjects, the more their students will learn
        • Subject matter expertise does not make teachers more successful.
        • What we do know is that attempts to increase student achievement by increasing teachers’ subject knowledge have shown very disappointing results.
        • An evaluation of professional development designed to improve second-grade teachers’ reading instruction found that an eight-day content focused workshop…[had] no impact on the students’ reading test scores.
        • Although the professional development had been specifically designed to be relevant to the curricula that the teachers were using…there was no impact on student achievement….
        • …the relationship between teachers’ knowledge of the subjects and their students’ progress is weak…
        • However, there is a body of literature that shows a large impact on student achievement across different subjects, across different age groups, and across different countries, and that is the research on formative assessment.
    • The Origins of Formative Assessment
      • 1967, Michael Scriven, Formative evaluation
      • 1969, Benjamin Bloom, “By formative evaluation we mean evaluation by brief tests used by teachers and students as aids in the learning process.”
      • “Evaluation which is directly related to the teaching-learning process as it unfolds can have highly beneficial effects…”  (Bloom, 1969, p.50)
      • CGI=Cognitively Guided Instruction, integrating assessment with instruction
      • In the original CGI project, a group of twenty-one elementary school teachers participated, over a period of four years, in a series of workshops in which the teachers were shown extracts of videotapes selected to illustrate critical aspects of children’s thinking. The teachers were then prompted to reflect on what they had seen, by, for example, being challenged to relate the way a child had solved one problem to how she had solved or might solve other problems (Fennema et al., 1996). Throughout the project, the teachers were encouraged to make use of the evidence they had collected about the achievement of their students to adjust their instruction to better meet their students’ learning needs. Students taught by CGI teachers did better in number fact knowledge, understanding, problem solving, and confidence (Carpenter, Fennema, Peterson, Chiang, & Loef, 1989), and four years after the end of the program, the participating teachers were still implementing the principles of the program (Franke, Carpenter, Levi, & Fennema, 2001).
      • The power of using assessment to adapt instruction is vividly illustrated in a study of the implementation of the measurement and planning system (MAPS), in which twenty-nine teachers, each with an aide and a site manager, assessed the readiness for learning of 428 kindergarten students.
      • [Fuchs & Fuchs 1986] found that regular assessment (two to five times per week) with follow-up action produced a substantial increase in student learning.
      • Over the next two years, two further research reviews, one by Gary Natriello (1987) and the other by Terence Crooks (1988), provided clear evidence that classroom assessment had a substantial—and usually negative—impact on student learning.
      • In 1998, Paul Black and I sought to update the reviews of Natriello and Crooks.
      • We concluded that the research suggested that attention to the use of assessment to inform instruction, particularly at the classroom level, in many cases effectively doubled the speed of student learning.
      • “Despite the existence of some marginal and even negative results, the range of conditions and contexts under which studies have shown that gains can be achieved must indicate that the principles that underlie achievement of substantial improvements in learning are robust. Significant gains can be achieved by many different routes, and initiatives here are not likely to fail through neglect of delicate and subtle features.” (Black & Wiliam, 1998a, pp. 61–62)
      • Black & Wiliam did more research in England in 2003.
      • Most of the teachers’ plans contained reference to two or three important areas in their teaching in which they were seeking to increase their use of formative assessment, generally followed by details of techniques that would be used to make this happen.
      • …the teachers could be observed implementing some of the ideas they had discussed in the workshops and could discuss…
      • Nevertheless, in this study, using scores on externally scored standardized tests, the students with which the teachers used formative assessment techniques made almost twice as much progress over the year (Wiliam, Lee, Harrison, & Black, 2004).
    • What, Exactly, Is Formative Assessment?
      • Paul Black and I defined formative assessment “as encompassing all those activities undertaken by teachers, and/or by their students, which provide information to be used as feedback to modify the teaching and learning activities in which they are engaged” (Black & Wiliam, 1998a, p. 7).
      • What is notable about these definitions is that, however implicitly, formative assessment is regarded as a process.
      • The difficulty with trying to make the term formative assessment apply to a thing (the assessment itself) is that it just does not work.
      • The assessment that the teacher used—an AP calculus examination—was designed entirely for summative purposes. AP exams are designed by the College Board to confer college-level credit so that students passing the exam at a suitable level are exempt from introductory courses in college. However, this teacher used the assessment instrument formatively—what Black and I have called “formative use of summative tests.”
      • Some people (for example, Popham, 2006; Shepard, 2008) have called for the term formative assessment not to be used at all, unless instruction is improved.
        • 1. The provision of effective feedback to students

          2. The active involvement of students in their own learning

          3. The adjustment of teaching to take into account the results of assessment

          4. The recognition of the profound influence assessment has on the motivation and self-esteem of students, both of which are crucial influences on learning

          5. The need for students to be able to assess themselves and understand how to improve

      • Instead, they suggested that it would be better to use the phrase assessment for learning, which had first been used by Harry Black (1986) and was brought to a wider audience by Mary James at the 1992 annual meeting of the ASCD in New Orleans.
      • “If formative assessment tells users who is and who is not meeting state standards, assessment FOR learning tells them what progress each student is making toward meeting each standard while the learning is happening—when there’s still time to be helpful.” (Stiggins, 2005, pp. 1–2)
      • The problem, as Randy Bennett (2009) points out, is that it is an oversimplification to say that formative assessment is only a matter of process or only a matter of instrumentation.
      • The original, literal meaning of the word formative suggests that formative assessments should shape instruction—our formative experiences are those that have shaped our current selves—and so we need a definition that can accommodate all the ways in which assessment can shape instruction.
      • Scenarios
        1. Summer math teacher PD used to improve teaching methods so that students do better on ratio and proportion questions on math standardized tests.
        2. Algebra I teachers look at performance on certain problems on state-wide tests and plan to improve delivery.
        3. Interim tests (every 6-10 weeks) determine which students must attend Saturday sessions.
        4. A quiz is given and examined diagnostically to plan future sessions of a class.
        5. Exit passes on bias in historical sources are used to determine whether class can move on or not.
        6. Students are given cards about literary devices and asked to show the appropriate card when a sample sentence is read.
        7. AP calculus teacher has students graph a function on personal whiteboards.
      • In each of these seven examples, evidence of student achievement was elicited, interpreted, and used to make a decision about what to do next.
      • Each can be considered an example of formative assessment.  (Even though time frames may differ.)
      • [Definition] 
      • An assessment functions formatively to the extent that evidence about student achievement is elicited, interpreted, and used by teachers, learners, or their peers to make decisions about the next steps in instruction that are likely to be better, or better founded, than the decisions they would have made in the absence of that evidence.
      • Points about this definition.
        1. formative is used to describe the function that the evidence from the assessment actually serves
        2. Assessment decisions can be made by teachers, learners, or peers.
        3. The third point is that the focus is on decisions instead of on the intentions of those involved…
        4. The probabilistic formulation (that the decisions are likely to be better) reflects the fact that even the best-designed interventions will not always result in better learning for all students.
        5. Here, the term instruction refers to the combination of teaching and learning, to any activity that is intended to create learning (defined as an increase, brought about by experience, in the capacities of an individual to act in valued ways).
        6. The formative assessment might not change the course of action but instead simply show that the proposed course of action was right.
      • The emphasis on decisions as being at the heart of formative assessment also assists with the design of the assessment process.
      • However, if the formative assessments are designed without any clear decision in mind, then there is a good chance that the information from the assessment will be useless.
      • The alternative is to design the assessments backward from the decisions.  (decision-pull versus data-push).
    • Strategies of Formative Assessment
      • The discussion thus far has established that any assessment can be formative and that assessment functions formatively when it improves the instructional decisions that are made by teachers, learners, or their peers.
      • All teaching really boils down to three key processes and three kinds of individuals involved. The processes are: finding out where learners are in their learning, finding out where they are going, and finding out how to get there. The roles are: teacher, learner, and peer.


      • Key strategies of formative assessment
        1. Clarifying, sharing, and understanding learning intentions and criteria for success.
        2. Engineering effective classroom discussions, activities, and learning tasks that elicit evidence of learning.
        3. Providing feedback that moves learning forward.
        4. Activating learners as instructional resources for one another.
        5. Activating learners as the owners of their own learning.
      • The big idea is that evidence about learning is used to adjust instruction to better meet student needs—in other words, teaching is adaptive to the learner’s needs.
    • Assessment:  The Bridge Between Teaching and Learning
      • Assessment occupies such a central position in good teaching because we cannot predict what students will learn, no matter how we design our teaching.
      • [Denvir experiment (Denvir & Brown, 1986a) with student Jy]
        • Knowledge gaps elucidated.
        • Instruction planned and delivered to address gaps.
        • Surprisingly, on the posttest, Jy could not demonstrate mastery of any of the skills that she had been specifically taught…
        • The skills that Jy acquired were consistent with the hierarchies that Denvir had identified—they just weren’t the skills her teacher had taught, and the same was found to be true for other students in the study (Denvir & Brown, 1986b).
      • This is why assessment is the central process in instruction. Students do not learn what we teach. If they did, we would not need to keep gradebooks. We could, instead, simply record what we have taught.
      • The truth is that we often mix up teaching and learning…
      • After all, what sense does it make to talk about a lesson for which the quality of teaching was high but the quality of learning was low?
      • In some languages, the distinction between teaching and learning is impossible to make…
      • To say that learning is more important than teaching is a bit like saying that traveling is more important than driving.
      • Every action that a teacher takes, provided it is intended to result in student learning, is teaching, but the teacher cannot do the learning for the learner; teaching is all the teacher can do.
      • The influence has shifted from “what am I going to teach and what are the pupils going to do?” towards “how am I going to teach this and what are the pupils going to learn?” (Black, Harrison, Lee, Marshall, & Wiliam, 2004, p. 19)
      • Two extremes
        • [Teacher does all the work.]  That is why I often say to teachers, “If your students are going home at the end of the day less tired than you are, the division of labor in your classroom requires some attention.”
        • [Teacher merely facilitates.]  Presumably, the teachers are just hanging around, hoping that some learning will occur.
        • Teaching is difficult because neither of these extremes is acceptable. When the pressure is on, most of us behave as if lecturing works, but deep down, we know it’s ineffective. But leaving the students to discover everything for themselves is equally inappropriate. For this reason, I describe teaching as the engineering of effective learning environments. And sometimes, a teacher does her best teaching before the students arrive in the classroom.
      • Many teachers have had the experience of creating an effective group discussion task in which the students engage completely in a really tricky challenge that they must resolve.  The only problem is there is nothing for the teacher to do.
      • The teacher’s job is not to transmit knowledge, nor to facilitate learning. It is to engineer effective learning environments for the students. The key features of effective learning environments are that they create student engagement and allow teachers, learners, and their peers to ensure that the learning is proceeding in the intended direction. The only way we can do this is through assessment. That is why assessment is, indeed, the bridge between teaching and learning.
    • Conclusion
      • In this chapter, we learned that the regular use of minute-by-minute and day-by-day classroom formative assessment can substantially improve student achievement. Although many different definitions of formative assessment have been proposed, the essential idea is simple. Teaching is a contingent activity. We cannot predict what students will learn as a result of any particular sequence of instruction. Formative assessment involves getting the best possible evidence about what students have learned and then using this information to decide what to do next.
      • There are five key strategies of formative assessment. The next five chapters probe each of these five strategies in more depth, offering details of research studies that provide evidence of their importance and a number of practical techniques that can be used to implement the strategies in classrooms.



Bennett. (2009).  As cited in Wiliam (2011).

Carnoy, Jacobsen, Mishel, & Rothstein. (2005).

Common Core State Standards Initiative. (2009).  Common Core State Standards for Mathematics.  National Governors Association.  Retrieved June 24, 2012 from http://corestandards.org/assets/CCSSI_Math%20Standards.pdf

Machin & Wilson. (2009).

Meyer, D. (2010, March 6).  TEDxNYED:  Math Class Needs a Makeover.  Retrieved June 24, 2012 from http://www.youtube.com/watch?v=BlvKWEvKSi8

National Research Council. (1999). How People Learn: Bridging Research and Practice. Washington, DC: The National Academies Press.

National Research Council. (2012). A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas. Washington, DC: The National Academies Press.

Papert. (1998).

Stiggins. (2005).  As cited in Wiliam (2011).

Wiliam, D. (2011).  Embedded Formative Assessment.  Bloomington, Indiana:  Solution Tree Press.
