
EDU6978: Week 02: Due 2012-07-08

Reflection
I spent the past year as a STEM Specialist in my teaching internship, but this week I took my first really critical look at integrating the teaching of the components: science, technology, engineering, and mathematics.  Better late than never!  I was also exposed to the first (and, I argue, underlying) concept of embedded formative assessment, namely sharing learning goals and expectations in a way that students can use.

The class all agreed with Wiliam (2011) that the purpose of sharing learning expectations is so that students know where they have come from, where they are, and where they are going.  I pointed out that this knowledge is critical for the other strategies of formative assessment.  As I reflect on my practice from the past year, I believe I started every class session with a sense of where we were in the book, where we had come from, and where we were going, but I don't think I communicated the specifics of what we aimed to accomplish with the content of the course.  Although I had very little opportunity to practice formative assessment, I am looking forward to doing so at my next job.

When the whole class was asked to reflect on the “faces of STEM” (Henrikson & Lippy, 2012), it was clear that no one had interned in a location with exemplary integration of technology and engineering with science and math.  Lantz (2009) has several ideas why that is: there are no real established standards, endorsements, training, or accountability for looking at those subjects in a unified way.  A report I found

Nevertheless, there is much reporting that project-based learning (Edutopia, 2010) shows great promise for helping students in math and science as well as technology and engineering.  It was certainly my experience during my intern teaching that projects are much more authentic to the students, and therefore show more promise for generating enthusiasm for subjects than lecture and testing.

Schedule
[image]
Notes

Edutopia:  An Introduction to Project Based Learning (Edutopia, 2010)

Seattle Physics Teacher, Scott McComb. Aviation High School.

Linda Darling-Hammond:  Broad tasks that have real problems that students can solve.

Students create something that demonstrates what they have learned.

Seymour Papert:  “get rid of curriculum, learn this where you need it.”

Project Learning

  • In-Depth Investigations of Subject Matter
  • Outside Experts That Supplement Teacher Knowledge

Benefits of Project Learning

  • Increased Academic Achievement
  • Increased Application and Retention of Information
  • Critical Thinking
  • Communication
  • Collaboration

Mike Bonfitz (FAA):  for 9th graders to pull this off is amazing.

Science, Technology, Engineering, and Mathematics (STEM) Education:  What Form?  What Function? (Lantz, 2009)

Outline (Verbatim from author unless italic)

  • STEM education offers students one of the best opportunities to make sense of the world holistically, rather than in bits and pieces
  • [STEM education] is actually trans-disciplinary in that it offers a multi-faceted whole with greater complexities and new spheres of understanding that ensure the integration of disciplines.
  • The four recommendations [from Rising Above the Gathering Storm, 2005] were:
    • Increase America’s talent pool by vastly improving K-12 mathematics and science education
    • Sustain and strengthen our nation’s commitment to long-term basic research
    • Develop, recruit and retain top students, scientists, and engineers from both the United States and abroad
    • Ensure that the United States is the premier place in the world for innovation
  • Have we seen far reaching innovations in curriculum and program design and in the structure of schools that would add to this STEM movement?…”No.”
  • American high schools still remain highly departmentalized, stratified, and continue to teach subjects in isolation, with little to no attempts to draw connections among the STEM disciplines.
  • Teachers at [elementary and middle school] levels are ill prepared to teach the STEM disciplines of science and mathematics, as revealed by the low numbers of highly qualified teachers.
    • No STEM standards
    • No STEM teacher certification
    • Goals need better delineation
    • Discipline needs to be better defined
  • The work of the committee [for Rising Above the Gathering Storm] is most laudable; however, it still falls far short of providing an operational definition of world-class standards and concomitant curriculum.
  • Although the function of STEM education seems to be converging slowly (in definition and consensus), the form (how it looks in the classroom) has not been proposed.
  • Standards to be used to develop trans-disciplinary STEM exist
    • National Science Education Standards (NRC, 1996)
    • National Council of Teachers of Mathematics Standards (NCTM, 1989 and 2000)
    • National Education Technology Standards for Students (ISTE, 1998, 2007)
    • Standards for Technological Literacy (ITEA, 2007)
  • Barriers to STEM Education (misconceptions) [a good list]

[image]

  • One of the misconceptions identified as a barrier to STEM education was “STEM education consists only of the two bookends—science and mathematics”
  • The engineering component of STEM education puts emphasis on the process and design of solutions, instead of the solutions themselves. [I personally think that definition is too narrow.  How can you deal with solutions process and design without caring about the solution?  That makes no sense.]
  • The technology component allows for a deeper understanding of the three other components of STEM education.
  • None of these curricula (below) fit our definition of trans-disciplinary
    • Engineering by Design, from Center for Advancement of Teaching Technology and Science (CATTS)
    • Engineering is Elementary (EiE), from the National Center for Technological Literacy (NCTL)
    • Invention, Innovation, and Inquiry, from the International Technology Education Association (ITEA)
  • What philosophical and theoretical elements should be used to guide the design and development of such a curriculum?
    • Standards driven
    • Understanding by Design (UbD)
    • Inquiry-based teaching and learning
    • Problem-Based Learning
    • Performance-based teaching and learning
    • 5E (Engagement, Exploration, Explanation, Elaboration and Evaluation) Teaching, Learning and Assessing Cycle
    • Digital curriculum integrated with digital teaching technologies
    • Formative and summative assessments with both task and non-task specific rubrics.
  • Consequently, STEM education curricula should be driven by engaging engineering problems, projects, and challenges, which are embedded within and as culminating activities in the instructional materials.

The Different Faces of STEM (Henrikson & Lippy, 2012)

[image]

Embedded Formative Assessment (Wiliam, 2011)

Chapter 3 Outline (Verbatim from author unless italic)

Clarifying, Sharing, and Understanding Learning Intentions and Success Criteria

It seems obvious that students might find it helpful to know what they are going to be learning, and yet, consistently sharing learning intentions with students is a relatively recent phenomenon in most classrooms. This chapter reviews some of the research evidence on the effects of ensuring learners understand what they are meant to be doing and explains why it is helpful to distinguish between learning intentions, the context of the learning, and success criteria. The chapter also provides a number of techniques that teachers can use to share learning intentions and success criteria with their students.

  • Why Learning Intentions Are Important
    • “imagine oneself on a ship sailing across an unknown sea, to an unknown destination. An adult would be desperate to know where he is going. But a child only knows he is going to school…” (White, 1971, p. 340)
    • Not all students have the same idea as their teachers about what they are meant to be doing in the classroom.
    • If I show a piece of writing to a group of third graders and ask them why I think it’s a good piece of writing, some will respond with contributions like, “It’s got lots of descriptive adjectives,” “It’s got strong verbs,” or “It uses lots of different transition words.”
    • A number of research studies have highlighted the importance of students understanding what they are meant to be doing.
    • To illustrate this, I often ask teachers to write 4x and 4½. I then ask them what the mathematical operation is between the 4 and the x, which most realize is multiplication. I then ask what the operation is between the 4 and the ½, which is, of course, addition. I then ask whether any of them had previously noticed this inconsistency in mathematical notation—that when numbers are next to each other, sometimes it means multiply, sometimes it means add, and sometimes it means something completely different, as when we write a two-digit number like 43. Most teachers have never noticed this inconsistency, which presumably is how they were able to be successful at school.
    • Study of science classrooms
      • Seven of the classrooms were seventh grade; three were eighth grade; and two were ninth grade.
      • ThinkerTools curriculum, 7 modules
      • Each module incorporated a series of evaluation activities.
        • Six classes did discussion-based evaluation
        • Other six classes did reflective assessment, using criteria, peer ratings 

[to be continued]

Rubric
[image]

References

Edutopia. (2010).  An Introduction to Project-Based Learning.  [Video file].  Retrieved July 5, 2012 from http://www.youtube.com/watch?v=dFySmS9_y_0

Lantz, H. B., Jr. (2009). STEM Education: What Form? What Function? Retrieved July 3, 2012 from http://stem.afterschoolnetwork.org/sites/default/files/stemeducation-whatformfunctionarticle.pdf

Wiliam, D. (2011). Embedded Formative Assessment. Bloomington, Indiana: Solution Tree Press.


EDU6978: Week 01: Due 2012-07-01

Reflection
[image]

Since NRC (1999) describes formative assessment as a key component of classroom environments that produce effective learning, and since Wiliam (2011) describes the five key strategies of formative assessment, let's see how I did in the SAT Math Review course that I conducted this past year. Note that since the rest of Wiliam (2011) comprises deep dives on each strategy, my self-assessment below lacks an in-depth rubric.

Strategy (Grade): Comments

  • Learning Intentions and Criteria for Success Are Clear (A-):  I think it was implicitly clear from the nature of the course what our learning intentions were, i.e. “to cover math that is going to be on the SAT.”  Criteria for success were also clear, i.e. “students need to answer questions of similar difficulty in order to get a good score on the SAT.”  I ding myself here only because I believe that I never made that explicit, or probed to see if it was explicit to the students.
  • Discussions, Activities, and Learning Tasks Elicit Evidence of Learning (C):  There was little variety in the activities used in the class. We mostly did lecture and group discussion, with some pairwise discussion.  I ding myself further because the activities we did weren't eliciting evidence of learning from individual students.  Since each student alone knew their answer and whether it was correct or incorrect, I couldn't adjust the tempo or flow of the class in response to widespread misunderstanding.  Note that NRC (1999) derives a teaching implication from their key findings that teachers need to cover concepts and examples in depth; due to lack of time, I was not able to do this consistently.
  • Feedback That Moves Learning Forward (B+):  When covering questions and answers in class, feedback was immediate and showed students where conceptual understanding or process was flawed.  However, I am at a loss to prove that learning was moved forward.  For example, if a student got a problem wrong, it was unclear what steps they could take to do better next time.
  • Activating Learners as Instructional Resources for One Another (A):  The class was small, and the students were familiar with one another.  Thus student participation in discussions was good.  Students were regularly called upon to describe their thinking to other students who were having trouble.
  • Activating Learners as the Owners of Their Own Learning (C+):  Students were owners of their own learning, since they all knew the SAT was coming, and they knew that this class was not graded.  However, were they activated in that role?  It was suggested that the regular homework was not stressed enough as a tool to help students take control of their learning.  Homework also had answers available, so students could think they were owning their learning, but perhaps that gave them an opportunity not to be completely honest with themselves.

Judging from my classmates' comments, I am not alone in expecting to be challenged to implement formative assessment equitably across a whole class in the time allotted.  Take students who may not have made uniform progress in prior grades relative to the Math Practices (CCSSI, 2009): getting the whole class to a uniform place for assessment and instruction may be difficult.  Compound that with Meyer's (2010) recommendation that math and science classes practice and model patient problem solving on real-world problems, and it sounds like the main challenge for me will be finding the time.  However, for the sake of real learning, skipping formative assessment is not an option.

Schedule
[image]
Notes

How People Learn (NRC, 1999)

In chapter 2 of the first reading (National Research Council [NRC], 1999), three key findings are elucidated as having “strong implications for how we teach” (p. 10).

Three Core Learning Principles (NRC, p. 10)

[image]

No reason to think this has changed from 1999 to the present. Learners come to grips with the world long before they darken the door of a classroom. Even in the classroom, unless the learning supplants their hard-won real-world intuition, true learning will not occur.

[image]

An expert doesn’t just know more; they also have frameworks that help them reconstruct or re-derive relationships and laws they may have forgotten, and a schema to interpret new data and see new patterns.

[image]

That’s what we are talking about: a metacognitive approach to instruction is assessment for learning, i.e. formative assessment, where the student has all the information and tools to take control of their own learning.

Three Implications for Teaching (NRC, p. 15)

  1. Teachers must draw out and work with the preexisting understandings that their students bring with them.
    1. The child is not an empty vessel; the teacher must reveal students’ initial conceptions.
    2. Assessments must be frequent, and provide feedback, so that student thinking is modified, refined, and deepened.
    3. Schools of education should teach teachers
      1. how to recognize common preconceptions
      2. how to uncover less common preconceptions
      3. how to challenge and replace preconceptions
  2. Teachers must teach some subject matter in depth, providing many examples in which the same concept is at work and providing a firm foundation of factual knowledge.
    1. teach in-depth, even at the cost of coverage, but get coverage by coordinating across school years
    2. teachers must know the subject: progress of inquiry, terms of discourse, organization of information, growth of student thinking
    3. teachers need to balance the tradeoff between assessing depth and assessing objectively
  3. The teaching of metacognitive skills should be integrated into the curriculum in a variety of subject areas. Foster a student’s internal dialogue.
    1. Teachers should integrate metacognitive instruction and discipline-based learning.
    2. Schools of education should help develop strong metacognitive strategies for instruction and classroom use.

Bringing Order to Chaos (NRC, p. 18)

Proliferation of teaching strategies can be bewildering. The truth is you need them *all* because you are trying to uncover preconceptions, and there is no universal best teaching practice. Teachers should teach basic skills in a way that students can learn and then apply to problems.

[image]

Four Ways of Designing Classroom Environments (NRC, p. 19)

  1. Schools and classrooms must be learner centered
    1. cultural differences affect student learning.
    2. student theories of intelligence (fixed/malleable) can affect effort and motivation (Dweck, 1989).
  2. To provide a knowledge-centered classroom environment, attention must be given to what is taught (information, subject matter), why it is taught (understanding) and what competency or mastery looks like.
    1. Expertise is well-organized knowledge that supports understanding.
    2. Learning with understanding is not memorizing or disconnected facts.
    3. Knowledge-centered environments are not just about activities; they are about doing with understanding.
  3. Formative assessments—ongoing assessments designed to make students’ thinking visible to both teachers and students—are essential. They permit the teacher to grasp the students’ preconceptions, understand where the students are in the “developmental corridor” from informal to formal thinking, and design instruction accordingly. In the assessment-centered classroom environment, formative assessments help both teachers and students monitor progress.
    1. Think and do, not memorize and regurgitate.
  4. Learning is influenced in fundamental ways by the context in which it takes place. A community centered approach requires the development of norms for the classroom and school, as well as connections to the outside world, that support core learning values.
    1. Teachers should build a culture where it is ok to take risks, make mistakes, obtain feedback, and revise.
    2. Teachers need to build a community where students can learn together, and help each other.
    3. Teachers need to create communities that can encourage themselves and their peers.
    4. Schools need to connect learning to other aspects of students’ lives.

Applying the Design Framework to Adult Learning (NRC, p. 23)

Even adults could benefit from using the principles in How People Learn, e.g. professional development programs for teachers:

  1. Are not learner centered
  2. Are not knowledge centered
  3. Are not assessment centered
  4. Are not community centered

Dan Meyer: TEDxNYED (Meyer, 2010)

Five signs that you are doing math reasoning wrong in the classroom:

[image]

Goal: Patient problem solving. Why are students impatient?

  1. TV situation comedies (problems resolve in 22 minutes).
  2. Books have problems that are just re-hashes of sample problems.
  3. Books have illustrations/pictures that overlay too much information at once.
  4. Formulation of the problem is key, but we just give problems to students.

Goal: Real world problems in our curriculum.

  1. Take out the extraneous information, make students decide.
  2. Take a video of the problem, “bait the hook”
  3. Get students on a level playing field.
  4. Conversations about error have been great (why does theory not match experiment).

Things you can do to improve your math reasoning in the classroom:

[image]

Standards for Math Practice (CCSSI, 2009)

  1. Make sense of problems and persevere in solving them.
  2. Reason abstractly and quantitatively.
  3. Construct viable arguments and critique the reasoning of others.
  4. Model with mathematics.
  5. Use appropriate tools strategically.
  6. Attend to precision.
  7. Look for and make use of structure.
  8. Look for and express regularity in repeated reasoning.

Connecting the Standards for Mathematical Practice to the Standards for
Mathematical Content
The Standards for Mathematical Practice describe ways in which developing student
practitioners of the discipline of mathematics increasingly ought to engage with
the subject matter as they grow in mathematical maturity and expertise throughout
the elementary, middle and high school years. Designers of curricula, assessments,
and professional development should all attend to the need to connect the
mathematical practices to mathematical content in mathematics instruction.

The Standards for Mathematical Content are a balanced combination of procedure
and understanding. Expectations that begin with the word “understand” are often
especially good opportunities to connect the practices to the content. Students
who lack understanding of a topic may rely on procedures too heavily. Without
a flexible base from which to work, they may be less likely to consider analogous
problems, represent problems coherently, justify conclusions, apply the mathematics
to practical situations, use technology mindfully to work with the mathematics,
explain the mathematics accurately to other students, step back for an overview, or
deviate from a known procedure to find a shortcut. In short, a lack of understanding
effectively prevents a student from engaging in the mathematical practices.

In this respect, those content standards which set an expectation of understanding
are potential “points of intersection” between the Standards for Mathematical
Content and the Standards for Mathematical Practice. These points of intersection
are intended to be weighted toward central and generative concepts in the
school mathematics curriculum that most merit the time, resources, innovative
energies, and focus necessary to qualitatively improve the curriculum, instruction,
assessment, professional development, and student achievement in mathematics. (p. 8)

What a lot of jargon, bordering on gibberish!  I would translate thus: “look for the word understand in the standards and use the practices to help measure extent of understanding for that content.”

Framework for K-12 Science Education (NRC, 2012)

On pages 50-53 of this source, the differences between Science and Engineering are described in 8 categories, entitled “Distinguishing practices in science from those in engineering.”

  1. Asking Questions and Defining Problems
  2. Developing and Using Models
  3. Planning and Carrying Out Investigations
  4. Analyzing and Interpreting Data
  5. Using Mathematics and Computational Thinking
  6. Constructing Explanations and Designing Solutions
  7. Engaging in Argument from Evidence
  8. Obtaining, Evaluating and Communicating Information

Anyone who has seen the TV show “Big Bang Theory” might be familiar with the animosity between three of the main characters and Howard, the lowly engineer, who merely implements solutions or solves experimental problems and doesn’t do original science.  Of all these distinguishing practices, I think that is the defining one, the crux of the matter.  In fact, I don’t think these are distinguishing practices at all: they are worded identically, except that for the engineer they add “and then builds something that average people can use,” and for the scientist they don’t.

My question is:  where are the standards for engineering education, the engineering standards?  It seems as though there have been some working groups and drafts proposed, but I couldn’t find a link to definitive engineering standards.

Of course, that raises the question of what the technology education standards are.  Those standards do exist, and have existed since 2000.  I found the Technology Education standards downloadable here:  http://www.iteea.org/TAA/Publications/TAA_Publications.html

Embedded Formative Assessment (Wiliam, 2011)

  • Chapter 1 Outline (verbatim from author unless italic)
    • Why Educational Achievement Matters
      • Wiliam basically rehashes the arguments that dropouts are worth less to society (as measured by lifetime earnings) and college-educated are worth much more.  My question:  is that all adjusted for the debt that the college educated accrue?
      • “Higher levels of education mean lower health care costs, lower criminal justice costs, and increased economic growth.”
    • The Increasing Importance of Educational Achievement
      • Hourly pay has changed depending on level of educational achievement
      • Dropouts earn less.
      • Higher levels of education are associated with better health…
      • People with more education live longer… (the converse is not necessarily true!)
      • Educational achievement matters for society (taxes, health care costs, criminal justice costs)
      • Higher levels of education are needed in the workplace.  In fact, …young people today are substantially more skilled than their parents and grandparents.
      • Scores on standardized tests…have been rising steadily
      • Only 6 percent of children in 1947 performed as well as the average child of 2001 on the Similarities test.
      • However, the quality of teaching in public schools is, on average, higher than in private schools in the United States…[emphasis mine]
      • …[T]he scores of students attending private schools in the United States were higher than those attending public schools.
      • This, however, does not show that private schools are better than public schools, because the students who go to private schools are not the same as those who go to public schools.
      • Schools have improved dramatically, but the changes in the workplace have been even more extraordinary.
      • In the 1960s and ‘70s, the average workingman (or woman) needed neither to read nor to write during the course of a workday; there were many jobs available to those with limited numeracy and literacy skills, but now those jobs are disappearing.
      • But the speed of the down escalator has been increasing—technology and outsourcing are removing jobs from the economy—and if we cannot increase the rate at which our schools are improving, then, quite simply, we will go backward.
      • Changes in technology actually affect white-collar jobs more than they affect blue-collar jobs.
      • Routine data entry work and jobs as call-center operators were among the first to be outsourced, but now high-skilled jobs are outsourced, too.
      • The one really competitive skill is the skill of being able to learn. It is the skill of being able not to give the right answer to questions about what you were taught in school, but to make the right response to situations that are outside the scope of what you were taught in school. We need to produce people who know how to act when they’re faced with situations for which they were not specifically prepared. (Papert, 1998).
      • Getting every student’s achievement up to 400 (the OECD’s definition of minimal proficiency) would be worth $70 trillion, and matching the performance of the world’s highest-performing countries (such as Finland) would be worth over $100 trillion.  But what would it cost to get there?  Social Cost?  Political Cost?
    • Why is Raising Student Achievement So Hard?
      • …[T]he depressing reality is that the net effect of the vast majority of these measures [to raise standards] on student achievement has been close to, if not actually, zero.
      • [Structural changes:  size]  However, the promise of such smaller high schools was not always realized.
      • In other cases, the potential benefits of small high schools were not realized because the creation of small high schools was assumed to be an end in itself, rather than a change in structure that would make other needed reforms easier to achieve.  [like a smaller student-to-teacher ratio, which enables engagement and relationships, e.g. looping]
      • Other countries are going in the opposite direction. …[B]ut as yet, there is no evidence that this has led to improvement.
      • [Structural changes:  governance].    As the characteristics of successful charter schools become better understood, it will, no doubt, be possible to ensure that charter schools are more successful, but at the moment, the creation of charter schools cannot be guaranteed to increase student achievement (Carnoy, Jacobsen, Mishel, & Rothstein, 2005).
      • In England, a number of low-performing schools have been reconstituted as “academies”, [but] a comparison with similarly low-performing schools that were not reconstituted as academies shows that they improve at the same rate (Machin & Wilson, 2009).
      • For-profit schools in Sweden also don’t show a significant improvement.
      • Specialist schools in England (similar to US magnet schools) get more money per student, and thus get better scores, but so do public schools.
      • [Curriculum reform].  Trying to change students’ classroom experience through changes in curriculum is very difficult. A bad curriculum well taught is invariably a better experience for students than a good curriculum badly taught: pedagogy trumps curriculum. Or more precisely, pedagogy is curriculum, because what matters is how things are taught, rather than what is taught.  [emphasis mine]
      • Three levels of curriculum:  intended, implemented and achieved.  The greatest impact on learning is the daily lived experiences of students in classrooms, and that is determined much more by how teachers teach than by what they teach.
      • [Textbooks]  Reviews of random-allocation trials of programs for reading in the early grades and for programs in elementary, middle, and high school math concluded that there was little evidence that changes in textbooks alone had much impact on student achievement.
      • [Scale] Challenges of scale ensure that small pilots almost never perform comparably when rolled out more widely.
      • [Technology]  While there is no shortage of boosters for the potential of computers to transform education, reliable evidence of their impact on student achievement is rather harder to find.
      • [Computers] With the exception of Cognitive Tutor ® Algebra I for 9th graders, there is little evidence that computers as teaching tools really work.  See What Works Clearinghouse page:  http://ies.ed.gov/ncee/wwc/interventionreport.aspx?sid=87
      • [Interactive White Boards]  These tools are prohibitively expensive, and need a lot of training to use effectively.
      • Teacher’s aides also don’t help student achievement.
    • Three Generations of School Effectiveness Research
      • The fad of “school effectiveness” was a result of economic thinking applied to schools.
      • You can’t emulate characteristics of an effective school to reproduce that school’s effectiveness elsewhere.  Unless you:
        • First, get rid of the boys.
        • Second, become a parochial school.
        • Third, and most important, move your school into a nice, leafy, suburban area.
      • But seriously, all we learned from school effectiveness studies is that differences in school scores are the result of differences in students.
      • This, in turn, means that only 8 percent of the variability in student achievement is attributable to the school, so that 92 percent of the variability in achievement is not attributable to the school.
      • It turns out that as long as you go to school (and that’s important), then it doesn’t matter very much which school you go to, but it matters very much which classrooms you’re in.  [emphasis mine]
      • It turns out that these substantial differences between how much students learn in different classes have little to do with class size, how the teacher groups the students for instruction, or even the presence of between-class grouping practices (for example, tracking). The most critical difference is simply the quality of the teacher.
    • The Impact of Teacher Quality
      • It was never considered that teacher quality could differ so markedly.
      • Teachers have long been treated as a commodity, identical, fungible, low value.
      • Performance-related pay for teachers, when calculated from test scores, is fundamentally flawed because teachers hand students off each year; who should get the credit?
      • Paying bonuses to teachers will not improve student test scores.
      • Which surprises economists, by the way.
      • [You must read Sanders & Rivers (1996), it shows that certain teachers had amazing impact relative to other teachers]
      • More recent studies (for example, Rockoff, 2004; Rivkin, Hanushek, & Kain, 2005) have confirmed the link between teacher quality and student progress on standardized tests, and it appears that the correlation between teacher quality and student progress is at least 0.2, and may be even larger (Nye, Konstantopoulos, & Hedges, 2004).
      • In other words, the most effective teachers generate learning in their students at four times the rate of the least effective teachers.
      • Excellence in classrooms is infectious, standard deviation of scores decreases as scores increase.
      • In their work in kindergarten and first-grade classrooms, Bridget Hamre and Robert Pianta (2005) found that in the classrooms of the teachers whose students made the most progress in reading, students from disadvantaged backgrounds made as much progress as those from advantaged backgrounds, and those with behavioral difficulties progressed as well as those without.
      • This last finding is particularly important because it shows that Basil Bernstein was wrong—education can compensate for society provided it is of high quality.
      • Equitable outcomes will only be secured by ensuring that the lowest-achieving students get the best teachers, which, in the short term, means taking them away from the high-achieving students; this is politically challenging to say the least.  [emphasis mine]
      • Rather than thinking about narrowing the gap, we should set a goal of proficiency for all, excellence for many, with all student groups fairly represented in the excellent.
    • How Do We Increase Teacher Quality?
      • Two options
        • The first is to attempt to replace existing teachers with better ones, which includes both deselecting serving teachers and improving the quality of entrants to the profession.
        • The second is to improve the quality of teachers already in the profession.
      • Deselection is the only option that we really haven’t tried yet, and thus must be the way we should go forward.
      • [Deselection] may not work because, to be effective, you have to be able to replace the teachers you deselect with better ones, and that depends on whether there are better potential teachers not currently teaching.
      • The third problem with deselection is that it is very slow.
      • We [the US versus Finland] can’t afford to turn away anyone who might be a good teacher, so we need to have better ways of identifying in advance who will be good teachers, and it turns out to be surprisingly difficult, because many of the things that people assume make a good teacher don’t.
      • It has been well known for quite a while that teachers’ college grades have no consistent relationship with how much their students will learn, and some have gone so far as to claim that the only teacher variable that consistently predicts how much students will learn is teacher IQ (Hanushek & Rivkin, 2006).
      • Some progress has been made in determining what kinds of teacher knowledge do contribute to student progress.
        • The MKT (mathematical knowledge for teaching) for elementary school teachers.
        • This suggests that subject knowledge accounts for less than 10 percent of the variability in teacher quality.
      • A study of over 13,000 teachers, involving almost 1 million items of data on over 300,000 students in the Los Angeles Unified School District (LAUSD), found that student progress was
        • unrelated to their teachers’ scores on licensure examinations,
        • nor were teachers with advanced degrees more effective (Buddin & Zamarro, 2009).
        • Most surprisingly, there was no relationship between the scores achieved by LAUSD teachers on the Reading Instruction Competence Assessment (which all elementary school teachers are required to pass) and their students’ scores in reading.
        • So did we discourage some people from teaching who would, on the contrary, have made excellent teachers?
      • [Extended analogy to finding a good quarterback in the NFL via New Yorker article from Malcolm Gladwell]
      • Although efforts continue to try to predict who will do well and who will not within the NFL, Gladwell suggests that there is increasing acceptance that the only way to find out whether someone will do well in the NFL is to try him out in the NFL.
      • The same appears to be true for teaching.
      • And it will also take a long time!
      • If we are serious about securing our economic future, we have to help improve the quality of those teachers already working in our schools…
    • Conclusion (complete/verbatim)
      • Improving educational outcomes is a vital economic necessity, and the only way that this can be achieved is by increasing the quality of the teaching force. Identifying the least effective teachers and deselecting them has a role to play, as does trying to increase the quality of those entering the profession, but as the data and the research studies examined in this chapter have shown, the effects of these measures will be small and will take a long time to materialize. In short, if we rely on these measures to raise student achievement, the benefits will be too small and will arrive too late to maintain the United States’ status as one of the world’s leading economies. Our future economic prosperity, therefore, depends on investing in those teachers already working in our schools.
  • Chapter 2 Outline (verbatim from author unless italic)
    • The Case for Formative Assessment
      • minute-by-minute and day-by-day formative assessment is likely to have the biggest impact on student outcomes
    • Professional Development
      • Having a master’s degree as an elementary teacher makes no difference in student scores
      • It is clear that the value added by a teacher increases particularly quickly in the first five years of teaching…

image

      • Note that there is no requirement for teachers to improve their practice or even to learn anything. The only requirement is to endure 180 hours of professional development.
      • Teachers need professional development because the job of teaching is so difficult, so complex, that one lifetime is not enough to master it.
      • André Previn quit one day because he “wasn’t scared anymore.”  Teachers would never say that.
      • Even the best teachers fail.
      • No teacher is so good—or so bad—that he or she cannot improve.  That is why we need professional development.
      • However, there is consensus that the “one shot deals”—sessions ranging from one to five days held during the summer—are of limited effectiveness, even though they are the most common model.
      • Learning Styles
        • However, there is little agreement among psychologists about what the learning styles are, let alone how they should be defined.
        • Although a number of studies have tried to show that taking students’ individual learning styles into account improves learning, evidence remains elusive.
        • “Students need to learn both how to make the best of their own learning style and also how to use a variety of styles, and to understand the dangers of taking a limited view of their own capabilities.”  (Adey, Fairbrother, Wiliam, Johnson & Jones, 1999, p 36)
      • Educational Neuroscience
        • Another potential area for teacher professional development—one that has received a lot of publicity in recent years—is concerned with applying what we are learning about the brain to the design of effective teaching.
        • Some of the earliest attempts to relate brain physiology to educational matters were related to the respective roles of the left and right sides of the brain in various kinds of tasks in education and training despite clear evidence that the conclusions being drawn were unwarranted (see, for example, Hines, 1987).
      • Content Area Knowledge
        • After all, surely the more teachers know about their subjects, the more their students will learn
        • Subject matter expertise does not make teachers more successful.
        • What we do know is that attempts to increase student achievement by increasing teachers’ subject knowledge have shown very disappointing results.
        • An evaluation of professional development designed to improve second-grade teachers’ reading instruction found that an eight-day content focused workshop…[had] no impact on the students’ reading test scores.
        • Although the professional development had been specifically designed to be relevant to the curricula that the teachers were using…there was no impact on student achievement….
        • …the relationship between teachers’ knowledge of the subjects and their students’ progress is weak…
        • However, there is a body of literature that shows a large impact on student achievement across different subjects, across different age groups, and across different countries, and that is the research on formative assessment.
    • The Origins of Formative Assessment
      • 1967, Michael Scriven, Formative evaluation
      • 1969, Benjamin Bloom, “By formative evaluation we mean evaluation by brief tests used by teachers and students as aids in the learning process.”
      • “Evaluation which is directly related to the teaching-learning process as it unfolds can have highly beneficial effects…”  (Bloom, 1969, p.50)
      • CGI=Cognitively Guided Instruction, integrating assessment with instruction
      • In the original CGI project, a group of twenty-one elementary school teachers participated, over a period of four years, in a series of workshops in which the teachers were shown extracts of videotapes selected to illustrate critical aspects of children’s thinking. The teachers were then prompted to reflect on what they had seen, by, for example, being challenged to relate the way a child had solved one problem to how she had solved or might solve other problems (Fennema et al., 1996). Throughout the project, the teachers were encouraged to make use of the evidence they had collected about the achievement of their students to adjust their instruction to better meet their students’ learning needs. Students taught by CGI teachers did better in number fact knowledge, understanding, problem solving, and confidence (Carpenter, Fennema, Peterson, Chiang, & Loef, 1989), and four years after the end of the program, the participating teachers were still implementing the principles of the program (Franke, Carpenter, Levi, & Fennema, 2001).
      • The power of using assessment to adapt instruction is vividly illustrated in a study of the implementation of the measurement and planning system (MAPS), in which twenty-nine teachers, each with an aide and a site manager, assessed the readiness for learning of 428 kindergarten students.
      • [Fuchs & Fuchs 1986] found that regular assessment (two to five times per week) with follow-up action produced a substantial increase in student learning.
      • Over the next two years, two further research reviews, one by Gary Natriello (1987) and the other by Terence Crooks (1988), provided clear evidence that classroom assessment had a substantial—and usually negative—impact on student learning.
      • In 1998, Paul Black and I sought to update the reviews of Natriello and Crooks.
      • We concluded that the research suggested that attention to the use of assessment to inform instruction, particularly at the classroom level, in many cases effectively doubled the speed of student learning.
      • “Despite the existence of some marginal and even negative results, the range of conditions and contexts under which studies have shown that gains can be achieved must indicate that the principles that underlie achievement of substantial improvements in learning are robust. Significant gains can be achieved by many different routes, and initiatives here are not likely to fail through neglect of delicate and subtle features.” (Black & Wiliam, 1998a, pp. 61–62)
      • Black & Wiliam did more research in England in 2003.
      • Most of the teachers’ plans contained reference to two or three important areas in their teaching in which they were seeking to increase their use of formative assessment, generally followed by details of techniques that would be used to make this happen.
      • …the teachers could be observed implementing some of the ideas they had discussed in the workshops and could discuss…
      • Nevertheless, in this study, using scores on externally scored standardized tests, the students with which the teachers used formative assessment techniques made almost twice as much progress over the year (Wiliam, Lee, Harrison, & Black, 2004).
    • What, Exactly, Is Formative Assessment?
      • Paul Black and I defined formative assessment “as encompassing all those activities undertaken by teachers, and/or by their students, which provide information to be used as feedback to modify the teaching and learning activities in which they are engaged” (Black & Wiliam, 1998a, p. 7).
      • What is notable about these definitions is that, however implicitly, formative assessment is regarded as a process.
      • The difficulty with trying to make the term formative assessment apply to a thing (the assessment itself) is that it just does not work.
      • The assessment that the teacher used—an AP calculus examination—was designed entirely for summative purposes. AP exams are designed by the College Board to confer college-level credit so that students passing the exam at a suitable level are exempt from introductory courses in college. However, this teacher used the assessment instrument formatively—what Black and I have called “formative use of summative tests.”
      • Some people (for example, Popham, 2006; Shepard, 2008) have called for the term formative assessment not to be used at all, unless instruction is improved.
        • 1. The provision of effective feedback to students

          2. The active involvement of students in their own learning

          3. The adjustment of teaching to take into account the results of assessment

          4. The recognition of the profound influence assessment has on the motivation and self-esteem of students, both of which are crucial influences on learning

          5. The need for students to be able to assess themselves and understand how to improve

      • Instead, they suggested that it would be better to use the phrase assessment for learning, which had first been used by Harry Black (1986) and was brought to a wider audience by Mary James at the 1992 annual meeting of the ASCD in New Orleans.
      • “If formative assessment tells users who is and who is not meeting state standards, assessment FOR learning tells them what progress each student is making toward meeting each standard while the learning is happening—when there’s still time to be helpful.” (Stiggins, 2005, pp. 1–2)
      • The problem, as Randy Bennett (2009) points out, is that it is an oversimplification to say that formative assessment is only a matter of process or only a matter of instrumentation.
      • The original, literal meaning of the word formative suggests that formative assessments should shape instruction—our formative experiences are those that have shaped our current selves—and so we need a definition that can accommodate all the ways in which assessment can shape instruction.
      • Scenarios
        1. Summer math teacher PD used to improve teaching methods so that students do better on ratio and proportion questions on math standardized tests.
        2. Algebra I teachers look at performance on certain problems on state-wide tests and plan to improve delivery.
        3. Interim tests (every 6-10 weeks) determine which students must attend Saturday sessions.
        4. A quiz is given and examined diagnostically to plan future sessions of a class.
        5. Exit passes on bias in historical sources are used to determine whether class can move on or not.
        6. Students are given cards about literary devices and asked to show the appropriate card when a sample sentence is read.
        7. AP calculus teacher has students graph a function on personal whiteboards.
      • In each of these seven examples, evidence of student achievement was elicited, interpreted, and used to make a decision about what to do next.
      • Each can be considered an example of formative assessment.  (Even though time frames may differ.)
      • [Definition] 
      • An assessment functions formatively to the extent that evidence about student achievement is elicited, interpreted, and used by teachers, learners, or their peers to make decisions about the next steps in instruction that are likely to be better, or better founded, than the decisions they would have made in the absence of that evidence.
      • Points about this definition.
        1. formative is used to describe the function that the evidence from the assessment actually serves
        2. Assessment decisions can be made by teachers, learners, or peers.
        3. The third point is that the focus is on decisions instead of on the intentions of those involved…
        4. The probabilistic formulation (that the decisions are likely to be better) reflects the fact that even the best-designed interventions will not always result in better learning for all students.
        5. Here, the term instruction refers to the combination of teaching and learning, to any activity that is intended to create learning (defined as an increase, brought about by experience, in the capacities of an individual to act in valued ways).
        6. The formative assessment might not change the course of action but instead simply show that the proposed course of action was right.
      • The emphasis on decisions as being at the heart of formative assessment also assists with the design of the assessment process.
      • However, if the formative assessments are designed without any clear decision in mind, then there is a good chance that the information from the assessment will be useless.
      • The alternative is to design the assessments backward from the decisions.  (decision-pull versus data-push).
    • Strategies of Formative Assessment
      • The discussion thus far has established that any assessment can be formative and that assessment functions formatively when it improves the instructional decisions that are made by teachers, learners, or their peers.
      • All teaching really boils down to three key processes and three kinds of individuals involved. The processes are: finding out where learners are in their learning, finding out where they are going, and finding out how to get there. The roles are: teacher, learner, and peer.

image

      • Key strategies of formative assessment
        1. Clarifying, sharing, and understanding learning intentions and criteria for success.
        2. Engineering effective classroom discussions, activities, and learning tasks that elicit evidence of learning.
        3. Providing feedback that moves learning forward.
        4. Activating learners as instructional resources for one another.
        5. Activating learners as the owners of their own learning.
      • The big idea is that evidence about learning is used to adjust instruction to better meet student needs—in other words, teaching is adaptive to the learner’s needs.
    • Assessment:  The Bridge Between Teaching and Learning
      • Assessment occupies such a central position in good teaching because we cannot predict what students will learn, no matter how we design our teaching.
      • [Denvir experiment (Denvir & Brown, 1986a) with student Jy]
        • Knowledge gaps elucidated.
        • Instruction planned and delivered to address gaps.
        • Surprisingly, on the posttest, Jy could not demonstrate mastery of any of the skills that she had been specifically taught…
        • The skills that Jy acquired were consistent with the hierarchies that Denvir had identified—they just weren’t the skills her teacher had taught, and the same was found to be true for other students in the study (Denvir & Brown, 1986b).
      • This is why assessment is the central process in instruction. Students do not learn what we teach. If they did, we would not need to keep gradebooks. We could, instead, simply record what we have taught.
      • The truth is that we often mix up teaching and learning,
      • After all, what sense does it make to talk about a lesson for which the quality of teaching was high but the quality of learning was low?
      • In some languages, the distinction between teaching and learning is impossible to make…
      • To say that learning is more important than teaching is a bit like saying that traveling is more important than driving.
      • Every action that a teacher takes, provided it is intended to result in student learning, is teaching, but the teacher cannot do the learning for the learner; teaching is all the teacher can do.
      • The influence has shifted from “what am I going to teach and what are the pupils going to do?” towards “how am I going to teach this and what are the pupils going to learn?” (Black, Harrison, Lee, Marshall, & Wiliam, 2004, p. 19)
      • Two extremes
        • [Teacher does all the work.]  That is why I often say to teachers, “If your students are going home at the end of the day less tired than you are, the division of labor in your classroom requires some attention.”
        • [Teacher merely facilitates.]  Presumably, the teachers are just hanging around, hoping that some learning will occur.
        • Teaching is difficult because neither of these extremes is acceptable. When the pressure is on, most of us behave as if lecturing works, but deep down, we know it’s ineffective. But leaving the students to discover everything for themselves is equally inappropriate. For this reason, I describe teaching as the engineering of effective learning environments. And sometimes, a teacher does her best teaching before the students arrive in the classroom.
      • Many teachers have had the experience of creating an effective group discussion task in which the students engage completely in a really tricky challenge that they must resolve.  The only problem is there is nothing for the teacher to do.
      • The teacher’s job is not to transmit knowledge, nor to facilitate learning. It is to engineer effective learning environments for the students. The key features of effective learning environments are that they create student engagement and allow teachers, learners, and their peers to ensure that the learning is proceeding in the intended direction. The only way we can do this is through assessment. That is why assessment is, indeed, the bridge between teaching and learning.
    • Conclusion
      • In this chapter, we learned that the regular use of minute-by-minute and day-by-day classroom formative assessment can substantially improve student achievement. Although many different definitions of formative assessment have been proposed, the essential idea is simple. Teaching is a contingent activity. We cannot predict what students will learn as a result of any particular sequence of instruction. Formative assessment involves getting the best possible evidence about what students have learned and then using this information to decide what to do next.
      • There are five key strategies of formative assessment. The next five chapters probe each of these five strategies in more depth, offering details of research studies that provide evidence of their importance and a number of practical techniques that can be used to implement the strategies in classrooms.

Rubric
image

References

Bennett. (2009).  As cited in Wiliam (2011).

Carnoy, Jacobsen, Mishel, & Rothstein. (2005).

Common Core State Standards Initiative. (2009).  Common Core State Standards for Mathematics.  National Governors Association.  Retrieved June 24, 2012 from http://corestandards.org/assets/CCSSI_Math%20Standards.pdf

Machin & Wilson. (2009).

Meyer, D. (2010, March 6).  TEDxNYED:  Math Class Needs a Makeover.  Retrieved June 24, 2012, from http://www.youtube.com/watch?v=BlvKWEvKSi8

National Research Council. (1999). How People Learn: Bridging Research and Practice. Washington, DC: The National Academies Press.

National Research Council. (2012). A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas. Washington, DC: The National Academies Press.

Papert (1998).

Stiggins (2005).  As cited in Wiliam (2011).

Wiliam, D. (2011).  Embedded Formative Assessment.  Bloomington, Indiana:  Solution Tree Press.

Of the DOE’s 2011 “Investing in Innovation (i3)” Finalists, 4 Have a STEM Focus.

I don’t know if you’ve been following the “Investing in Innovation” grant applicants or process.  Here’s a quick rundown.

This year’s competition required applicants to submit proposals focused on one of 5 absolute priorities, including two new priorities aimed at promoting science, technology, engineering and mathematics (STEM) education and increasing achievement and high school graduation rates in rural schools.

To learn more about i3’s potential 2011 grantees visit: http://www2.ed.gov/programs/innovation/index.html

I went there and downloaded this Excel file:

Detailed list of the 2011 Highest-Rated i3 Applicants
The list of highest-rated applicants is organized by grant type (i.e., Scale-Up, Validation, and Development) and then by Absolute Priority within grant type. The list is not organized by rank order. (MS Excel)

Then I filtered for applicants listing STEM as their Absolute Priority.

i3 2011 Highest-Rated Applications

| # | Competition | Absolute Priority (AP) | Applicant Name | Project Title | Applicant City | Applicant State | Applicant Type | Students Served (Est.) | Funding Requested | Score | Abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Scale up | AP2: STEM | Old Dominion University Research Foundation | Technology-facilitated Scale Up of Proven Model of Math Instruction in High Need Schools | Norfolk | VA | Nonprofit w/ consortium | 135,000 | $24,995,690 | 86.33 | Link |
| 3 | Validation | AP2: STEM | National Math and Science Initiative | Partnership to Increase Student Achievement and College Readiness in STEM Education | Dallas | TX | Nonprofit w/ consortium | 90,900 | $14,996,367 | 96.17 | Link |
| 10 | Development | AP2: STEM | New York City Board of Education | InnovateNYC | New York | NY | LEA | 10,000 | $2,959,100 | 99.00 | Link |
| 11 | Development | AP2: STEM | New York Hall of Science | SciGames: A Technology-enhanced Model for Bridging Informal and Formal Science Learning | Queens | NY | Nonprofit w/LEA | 2,000 | $2,995,642 | 98.50 | Link |

I then clicked each of those links (far right) to see the abstract of each application.  My comments follow each abstract.

 

Technology-facilitated Scale Up of Proven Model of Math Instruction in High Need Schools

Old Dominion University Research Foundation

Norfolk, VIRGINIA

Indicated Organization Type: Nonprofit w/ consortium of schools
Indicated Grant Type: Scale-up
Private Match Waiver Requested: No
Award Length Requested: 5 Years
Federal Funding Requested: $24,995,690.00
Absolute Priority Area: AP2: Promoting Science, Technology, Engineering, and Mathematics (STEM) Education
Competitive Preference Priority Areas: CPP8: Unique Learning Needs, CPP10: Technology

Project Description

The project will provide students in high need middle school with increased access to rigorous and engaging coursework in STEM via scaled-up implementation of a proven cooperative learning model in mathematics instruction, STAD-Math. This project also structures an innovative, high-quality, multi-tiered approach to professional development that employs school based math coaching, an on-line platform, and teacher-made videos of their own practices in a multi-tiered community of learners design. The use of technology will play a key role in enabling professional development to be provided in rural and urban areas in a highly cost-effective way. Expected outcomes are statistically significant improvements in math achievement among students in STAD-Math classrooms relative to controls by the third year of implementation, including closing achievement gaps for limited English proficient students, and students with disabilities. The project will serve 135,000 students in 185 high need middle schools across the U.S. over 5 years. Official partners include Old Dominion University, Norfolk Public Schools, Halifax County (VA) Public Schools, Judd (TX) ISD, United School District 428 (KS), Johns Hopkins University, and the Success for All Foundation.

This is the largest grant which is still alive in the competition. It is also the grant which claims to be able to serve the largest number of students.

This application seems to hinge on something called STAD-Math, which I believe stands for “Student Teams Achievement Division (STAD) instruction, a type of cooperative learning strategy.”

Need to do more research on STAD-Math.

The National Math and Science Initiative’s Partnership to Increase Student Achievement and College-Readiness in STEM Education

National Math and Science Initiative

Dallas, TEXAS

Indicated Organization Type: Nonprofit w/ LEA
Indicated Grant Type: Validation
Private Match Waiver Requested: No
Award Length Requested: 5 Years
Federal Funding Requested: $14,996,367.00
Absolute Priority Area: AP2: Promoting Science, Technology, Engineering, and Mathematics (STEM) Education
Competitive Preference Priority Areas: CPP7: College Access and Success, CPP9: Productivity

Project Description

The National Math and Science Initiative (NMSI) seeks an i3 Validation Grant to scale and replicate the successful Advanced Placement Training and Incentive Program (APTIP) to reach approximately 90,900 students in 40 school districts in Colorado and Indiana. APTIP improves student achievement, especially for high-need students and those traditionally underrepresented in STEM subjects, by significantly enhancing the high school curriculum and increasing the number and diversity of students taking College Board Advanced Placement (AP) courses and passing AP exams in math, science, and English. The proposal’s objective is to increase the number of students scoring 3 or higher on (passing) AP exams in math, science, and English in order to increase student achievement and college readiness in STEM subjects. APTIP accomplishes this objective by: making rigorous STEM courses more accessible to high-need students and those traditionally underrepresented in STEM; establishing an expectation that these students can succeed at that level; and supporting students and teachers who aim for those high standards. Independent research confirms, based on past APTIP replication, that expected outcomes of APTIP are: (1) significantly increased numbers and diversity of students taking and passing AP math, science, and English exams, including high-need students and those traditionally underrepresented in STEM, and (2) increased college enrollment and persistence, especially for high-need students and those traditionally underrepresented in STEM.

NMSI’s proposal, The National Math & Science Initiative’s Partnership to Increase Student Achievement and College-Readiness in STEM Education, addresses Absolute Priority 2: Promoting STEM Education, and Competitive Preference Priorities 7 (Supporting College Access and Success) and 9 (Improving Productivity). Additional Official Partners are the Colorado Legacy Foundation, The University of Notre Dame, the American Institutes for Research, and the 40 LEAs listed in Appendix A.

This application hinges on AP tests as a metric and means for getting more students into STEM fields.

InnovateNYC

New York City Board of Education

New York, NEW YORK

Indicated Organization Type

LEA

Indicated Grant Type

Development

Private Match Waiver Requested

No

Award Length Requested

3 Years

Federal Funding Requested

$2,959,100.00

Absolute Priority Area

AP2: Promoting Science, Technology, Engineering, and Mathematics (STEM) Education

Competitive Preference Priority Areas

CPP7: College Access and Success, CPP10: Technology

Project Description

The lack of innovation in education is not due to a lack of creativity, but to the misalignment of student and educator need with the market supply of innovations. The New York City Department of Education (NYCDOE) seeks funding to develop and evaluate its InnovateNYC Education Innovation Ecosystem: a network of NYC schools, partner districts, solutions developers, and investors who coordinate their needs, solutions, and resources to better align innovative solutions (existing and yet to be developed) to the learning challenges impeding student achievement. Through InnovateNYC, the NYCDOE will better articulate student and educator need, provide clear metrics for how solutions will be evaluated, and recruit partner schools to co-design and pilot promising solutions. In doing so, we will leverage the size and diversity of our school system to assess and aggregate true demand for innovations, package compelling incentives for investment in specific solutions, and direct the market to generate innovations that address specific student learning challenges in STEM education.
Partners:

  • College Board: identify critical STEM learning challenges using NAEP grades 4-8 math and science results
  • Ashoka Changemakers, STARTL, EdTech Entrepreneurs Lab: publish learning challenges to the community of developers and solicit solutions
  • NY Hall of Science, IDEO: evaluate and select submissions that merit piloting
  • Research Alliance for NYC Schools, Center on Reinventing Public Education: evaluate and assess solution and ecosystem Academic Return on Investment (AROI)

Funders provide incentive grants for development, testing, and evaluation. The NYCDOE pre-commits to purchase licenses to high-AROI solutions.

I find this project description virtually unintelligible.

SciGames: A Technology-enhanced Model for Bridging Informal and Formal Science Learning

New York Hall of Science

Queens, NEW YORK

Indicated Organization Type

Nonprofit w/ LEA

Indicated Grant Type

Development

Private Match Waiver Requested

No

Award Length Requested

5 Years

Federal Funding Requested

$2,995,642.00

Absolute Priority Area

AP2: Promoting Science, Technology, Engineering, and Mathematics (STEM) Education

Competitive Preference Priority Areas

CPP10: Technology

Project Description

The New York Hall of Science proposes an Investing in Innovation development grant to address Absolute Priority 2, Promoting STEM Education. We focus on the goal of increasing the number of individuals from groups traditionally underrepresented in STEM, including minorities, providing them with access to rigorous and engaging coursework in STEM that will prepare them for college and/or careers in STEM. We aim to do this by developing, implementing, and evaluating a new system of technologies (Competitive Preference Priority 10), SciGames, designed to bridge formal classroom and informal playground science learning environments. Research tells us that informal science environments’ low-stakes quality can have a positive impact on students’ science affect, but they are much less effective for science learning than inquiry-based classroom instruction, which, however, has been shown to have limited positive impact on these students’ affect. We draw from recent studies on guided play and gaming to hypothesize that an informal game with science intrinsically integrated into the gameplay, paired with formal classroom inquiry, could support student improvement in both affect and learning, both being necessary to enter the pipeline to STEM careers. We propose to develop and test SciGames, a suite of technologies that turns students’ playground play into a game. The game requires students to learn and use target physics concepts, and the technology logs physics data during students’ playground gameplay. Student data is incorporated into a digital app, which is designed to support deeper inquiry into the core science concepts back in the classroom. We will develop and test three SciGames that address three core 8th-grade physics concepts about force and motion. We will conduct an implementation study in Years 1 and 2 with 2,000 New York City students from underrepresented groups and their 30 teachers. In Years 3 and 4, we will conduct an impact study with 6,000 students and 80 teachers.

This application hinges on the use of games for science learning.

From: U.S. Department of Education [mailto:ed.gov@public.govdelivery.com]
Sent: Thursday, November 10, 2011 6:52 AM
To: Weisenfeld, John
Subject: Twenty-three i3 Applicants Named as 2011 Grantees Pending Private Match

News from the Department of Education

Twenty-three Investing in Innovation Applicants Named as 2011 Grantees Pending Private Match

The U.S. Department of Education today announced the 23 highest-rated Investing in Innovation (i3) applicants as potential grantees for the 2011 grant fund of $150 million. The finalists, selected from nearly 600 applicants, must now secure private matching funds equivalent to at least 5% of Scale-up, 10% of Validation, or 15% of Development awards by December 9, 2011, in order to receive their grants.

“Investing in these vital innovations across the country has the potential to dramatically enhance learning and accelerate student performance and to do so cost-effectively” said U.S. Secretary of Education Arne Duncan. “This round of i3 grantees is poised to have real impact in areas of critical need including STEM education and rural communities, on projects ranging from early childhood interventions to school turnaround models that will prepare more students for college and career.”

This year’s competition required applicants to submit proposals focused on one of five absolute priorities, including two new priorities aimed at promoting science, technology, engineering, and mathematics (STEM) education and increasing achievement and high school graduation rates in rural schools. The remaining three priorities focused on supporting effective teachers and principals, implementing high standards and high-quality assessments, and turning around persistently low-performing schools.

Competitive preference was also given to applicants that demonstrated support for improving early learning outcomes, increasing college access and success, addressing the unique needs of students with disabilities and limited English proficient students, or improving productivity or technology.

“With just 25% of the funding available in round one, i3’s 2011 competition attracted hundreds of innovators from schools, districts and non-profits across the country, addressing many of the most persistent challenges in education,” said Jim Shelton, assistant deputy secretary for the Office of Innovation and Improvement. “In just a few short years, i3 has the potential to provide educators with a rich catalogue of practical solutions that they can confidently use to help advance student achievement at every level – not just increase proficiency.”

This year’s applicants included school districts, groups of districts, and nonprofits in partnership with districts or a consortium of schools, competing for funding in one of the program’s three grant levels.

  • “Scale-up” grants of up to $25 million to support innovation projects with the strongest evidence and track records of success;
  • “Validation” grants of up to $15 million to fund innovations with proven effectiveness supported by moderate levels of evidence; and
  • “Development” grants of up to $3 million to support promising but relatively untested innovation projects with high potential for positive impact.

The 23 highest-rated applications include 1 Scale-up, 5 Validation, and 17 Development grants.

Despite reduced funding, the Department anticipates awarding nearly half as many grants in 2011 as the 49 awarded in 2010, given extensive representation of “Development” projects among the highest-rated applications. Awards will be made in mid to late December.

The President’s fiscal year 2012 budget proposes continued funding for education innovation with a request for $300 million to support a third round of i3 grants.

A complete list of the 2011 highest-rated applicants follows.

Aspire Public Schools
Baltimore City Public Schools
Berea College
Boston Public Schools
The College Board
Del Norte Unified School District
Fresno County Office of Education
Kentucky Valley Educational Cooperative
KnowledgeWorks
The Metropolitan Education Commission
National Math and Science Initiative
New York City Board of Education
New York Hall of Science
New Visions For Public Schools, Inc
North Carolina New Schools Project
Oakland Unified School District
Old Dominion University Research Foundation
Ounce of Prevention Fund
Success for All Foundation
Temple University
Texas Tech University
University of Minnesota
University of Alaska Statewide Office of K-12 Outreach

To learn more about i3’s potential 2011 grantees visit: http://www2.ed.gov/programs/innovation/index.html.
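The private-match requirement in the email above is straightforward arithmetic; here is my own quick illustration (not part of the announcement) applying the Development-tier rate to the two Development requests quoted earlier in this post:

```python
# Minimum private match required to receive a 2011 i3 grant, per the
# announcement: 5% of Scale-up, 10% of Validation, 15% of Development awards.
MATCH_RATES = {"Scale-up": 0.05, "Validation": 0.10, "Development": 0.15}

def minimum_match(grant_type: str, award: float) -> float:
    """Return the minimum private match, in dollars, for a given award."""
    return MATCH_RATES[grant_type] * award

# The two Development requests quoted earlier in this post.
for name, requested in [("InnovateNYC", 2_959_100.00),
                        ("SciGames", 2_995_642.00)]:
    print(f"{name}: ${minimum_match('Development', requested):,.2f}")
```

At the 15% Development rate, InnovateNYC would need to raise $443,865.00 and the New York Hall of Science $449,346.30 by the December 9 deadline.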


STEM Workgroup on the OSPI website.

There’s a STEM Workgroup which meets monthly from 10am to 4pm.

From their website this is their plan:

The STEM Workgroup will:

  • Develop a plan with shared vision, goals, and measurable objectives
  • Ensure that a K – STEM careers pathway is established, including:
    • recruiting, preparing, hiring, retraining, and supporting teachers and instructors
    • creating pathways to boost student success
    • closing the achievement gap, and
    • preparing every student to be college and career ready

The workgroup was created by ESSB 6444, Sec. 501(1)(c), signed 5/4/2010.

It has a deadline:

ESSB 6444 wording:

“The working group shall develop a comprehensive plan and a report with recommendations, including a timeline for specific actions to be taken, which is due to the governor and the appropriate committees of the legislature by December 1, 2010.”

Web site wording:

“The workgroup will develop a report with recommendations, including a timeline for specific actions to be taken. The report is due to the Governor and the appropriate committees of the legislature by December 1, 2010.”

Here are the members of the workgroup, taken from the web page above.

OSPI’s STEM supervisor, Dennis Milliken, chairs the workgroup. The workgroup also includes at least one representative from the State Board of Education, the Professional Educator Standards Board, the State Board of Community and Technical Colleges, the Higher Education Coordinating Board, the Achievement Gap Oversight and Accountability Committee, and others with appropriate expertise.

  • Jonelle Adams, Executive Director, Washington Alliance for Better Schools (STEM Workgroup Facilitator)
  • SusanEllen Bacon, PhD, Associate Dean of Professional Development Continuing Education, Seattle University
  • Rudi Bertschi, Principal Researcher, OSPI/Center for the Improvement of Student Learning
  • Greta Bornemann, Director, Mathematics, OSPI
  • Bruce Cannard, Principal, Edison Elementary School, Kennewick School District
  • James Dorsey, Director, Washington MESA
  • Jeff Estes, Manager, Science and Engineering Education, Organizational Development, Pacific Northwest National Laboratory
  • Jane Field, M.A., Labor Market and Economic Analysis, Washington State Employment Security
  • Peter D. Finch, Ed.D., Assistant Superintendent for Teaching and Learning, West Valley School District 208
  • Dave Gering, Executive Director, Seattle Manufacturing Industrial Council
  • Susan Jung, Principal, Central Kitsap Junior High, Central Kitsap School District
  • Catherine Kernan, President, Mukilteo Education Association
  • Carolyn Landel, Education First Consulting
  • Kevin Laverty, President, Washington State School Directors Association
  • John Lederer, Associate Director, Washington Higher Education Coordinating Board
  • Kathleen Lopp, Assistant Superintendent, Career and College Readiness, OSPI
  • Dennis Milliken, Supervisor, STEM Education, OSPI (STEM Workgroup Leader)
  • Trish Millines Dziko, Executive Director/CEO, Technology Access Foundation
  • Bill Moore, Coordinator, Assessment, Learning, Teaching, State Board for Community and Technical Colleges
  • Mea Moore, Director of Educator Pathways, Professional Educator Standards Board
  • Rebecca Porter, Career Counselor, Bothell High School, Northshore School District
  • Wes Pruitt, Policy Analyst/Legislative Liaison, Workforce Training and Education Coordinating Board
  • Representative Sharon Tomiko Santos, Washington State Legislature
  • James Sullivan, Teacher, Sci-Ma-Tech, Brier Terrace Middle School, Edmonds School District
  • Kathe Taylor, Policy Director, Washington State Board of Education
  • Gilda Wheeler, Program Supervisor, Environmental and Sustainability, OSPI