Teaching for Understanding (Part 4: Ongoing Assessment)

In earlier posts in this series, I introduced the teaching for understanding framework and some of its elements (here and here). I encourage you to go through those posts before reading on, as each element of the framework builds on the preceding ones.

Here, I will discuss the fifth and final element of the framework viz. ongoing assessment.

Features of ongoing assessment

The research teams that first implemented the teaching for understanding framework attempted to define criteria at the start of the learning unit which they could use to assess any performance of understanding (Wiske, 1998). As the research progressed, they realised that teachers did not always find it advantageous to propose and fix the assessment criteria at the beginning - in fact, often, criteria organically emerged as the teachers began examining the first drafts of their students' work.

Furthermore, it proved insightful for both teachers and students to develop the criteria for performances of understanding together and then publicise these criteria during the "guided inquiry" stage (covered in the previous post). Thereafter, these criteria served as anchors to guide the evaluation, feedback and peer-review processes (Wiske, 1998). Publicly posting the assessment criteria also disrupted the culture of secrecy and reduced the subjectivity that usually surrounds assessments.

Ongoing assessment during the "messing about" stage tended to be informal and conducted by the teacher since students were still getting their feet wet. Ongoing assessment became more formal and student-driven during the "guided inquiry" stage and continued to evolve into a tool that children used for self-assessment and peer-assessment in the "culminating performances" stage.

The types of ongoing assessment are manifold: staging an exhibition; writing a paper, play, poem or song; conducting independent research and preparing a report; maintaining a course journal; carrying out a group project... the list goes on! The only rule is that ongoing assessment be clearly tied to the understanding goals, as this makes it possible to connect the results of the assessment back to the learning, identify gaps (if any) and provide actionable feedback to students and parents.

In the essay titled The horse before the cart: Assessing for understanding, the author writes about a set of students who were designing the floor plans for a hypothetical community centre (Simmons, 1994). In the early stages, the teacher asked each group to share its progress and was surprised to see many groups using the concept and formula for perimeter instead of area. This gave the teacher a valuable opportunity to course-correct early in the project. Without ongoing assessment, this would not have been possible, and student misconceptions would likely have carried forward through the project.

Thus, conducting ongoing assessments at different stages serves the purpose of formative feedback and continuous improvement of instruction and student work. Well-thought-out rubrics aid this process - this is elaborated upon in the next section.

The power of rubrics

“More than what educators say, more than what they write in curriculum guides, evaluation practices tell both students and teachers what counts. How these practices are employed, what they address and what they neglect, and the form in which they occur speak forcefully to students about what adults believe is important.” (Eisner, 1991).

Defining the standards for 'good work' in a classroom is an integral part of ongoing assessments and the teaching for understanding framework. If students know what they are working towards, then that helps them gauge their own understanding along with where they stand in relation to the standards set for good performance (Simmons, 1994). Rubrics go a long way in making these standards clear and explicit.

A rubric is a scoring tool used to evaluate a product/performance based on a list of criteria that describe the characteristics of those products/performances at varying levels of accomplishment (Wolf & Stevens, 2007). An example of a rubric used by the Fédération internationale de natation to judge springboard diving is shown below.

Judges draw on their extensive professional knowledge to evaluate a dive based on five criteria (starting, take-off, approach, flight, entry) by assigning a level of accomplishment (complete failure, unsatisfactory, deficient, satisfactory, good, very good) to each criterion.
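To make the structure of such a rubric concrete, here is a minimal sketch in code. The criterion names and accomplishment levels come from the description above; the mapping of levels to numeric marks on a 0-10 scale is a hypothetical illustration, not FINA's actual scoring scheme.

```python
# A rubric as data: five criteria, six accomplishment levels.
CRITERIA = ["starting", "take-off", "approach", "flight", "entry"]
LEVELS = ["complete failure", "unsatisfactory", "deficient",
          "satisfactory", "good", "very good"]

def judge(assessment):
    """Average mark on a 0-10 scale from per-criterion level ratings.

    Each level is mapped linearly onto [0, 10] by its position in LEVELS
    (a hypothetical mapping chosen for illustration).
    """
    assert set(assessment) == set(CRITERIA), "every criterion must be rated"
    marks = [LEVELS.index(assessment[c]) / (len(LEVELS) - 1) * 10
             for c in CRITERIA]
    return sum(marks) / len(marks)

dive = {"starting": "good", "take-off": "very good",
        "approach": "satisfactory", "flight": "good", "entry": "deficient"}
print(round(judge(dive), 1))  # -> 7.2
```

The point of the sketch is simply that a rubric separates *what* is being judged (the criteria) from *how well* it was done (the levels), which is what makes the judgement transparent and comparable across judges.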

Three steps are involved in creating a strong rubric (Wolf & Stevens, 2007):
  • first, identify the performance criteria - usually 3-6 in number - that define the performance. Criteria can be weighted if we wish to treat some as more important than others in determining the overall performance. Involving students in generating the criteria can help them deepen and internalise their understanding of what a quality performance looks like.
  • second, set the performance rating levels. Rubrics usually have 3-6 rating levels. If the purpose is to make summative decisions, it is better to have fewer levels, as this increases reliability and efficiency in scoring the performance. If the goal is formative feedback, however, a higher number of levels gives the student more specific information, though it takes the teacher longer to assess the performance.
  • third, create performance level descriptions. The challenge here is to provide enough information to guide the creation and scoring of the product/performance, but not so much that it overwhelms the teacher or the student. See the sample below that contains performance level descriptions for evaluating a speech using the performance criteria (delivery, content, language) with a rating scale (below proficient, proficient, beyond proficient).
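The three steps above can be sketched as a small data structure. The criterion names (delivery, content, language) and rating scale (below proficient, proficient, beyond proficient) come from the speech example; the weights, level descriptions and the linear level-to-score mapping are hypothetical choices made for illustration.

```python
# Step 2: a shared rating scale for all criteria.
LEVELS = ["below proficient", "proficient", "beyond proficient"]

# Steps 1 and 3: weighted criteria, each with a description per level.
# (Weights and descriptions here are invented for the sketch.)
rubric = {
    "delivery": {"weight": 0.3, "descriptions": {
        "below proficient": "Hard to hear; little eye contact.",
        "proficient": "Clear voice; some eye contact.",
        "beyond proficient": "Engaging pace, volume and eye contact."}},
    "content": {"weight": 0.5, "descriptions": {
        "below proficient": "Main idea unclear or unsupported.",
        "proficient": "Clear main idea with some support.",
        "beyond proficient": "Compelling idea, well supported throughout."}},
    "language": {"weight": 0.2, "descriptions": {
        "below proficient": "Frequent errors obscure meaning.",
        "proficient": "Mostly accurate word choice and grammar.",
        "beyond proficient": "Precise, varied and vivid language."}},
}

def score(ratings):
    """Weighted score in [0, 1]: each level maps to 0, 0.5 or 1."""
    return sum(
        rubric[c]["weight"] * (LEVELS.index(level) / (len(LEVELS) - 1))
        for c, level in ratings.items()
    )

ratings = {"delivery": "proficient",
           "content": "beyond proficient",
           "language": "below proficient"}
print(round(score(ratings), 2))  # 0.3*0.5 + 0.5*1.0 + 0.2*0.0 = 0.65
```

Writing the rubric down as data like this also makes the trade-off in step two tangible: adding levels means writing a description for every criterion-level pair, which is exactly the extra work the teacher takes on in exchange for more specific feedback.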

By this point, it should be evident that rubrics provide numerous benefits: (i) they make learning targets clear for the teacher and the student; (ii) they guide instruction; (iii) they make the assessment process transparent; and (iv) they give students a tool for self-assessment and peer feedback.

In that case, why are rubrics not employed more frequently in schools?

For starters, creating strong rubrics takes time - especially writing the performance level descriptions. In many school systems, teachers have homework, tests and assignments to correct and vast syllabi to cover, and are pressed for time. Taking time out to build a rubric for a product/performance is unlikely to happen.

Designing a strong rubric requires professional and domain/content knowledge. A poorly designed rubric can act as a straitjacket and prevent creations other than those envisioned by the rubric-maker from unfolding (Wolf & Stevens, 2007). In some countries/systems, teachers do not possess the depth of knowledge required to design (and thereafter use) such rubrics in a productive manner.

Lastly, the need. Rubrics are most useful for non-traditional assessments; however, many school systems still favour pen-and-paper written examinations and other traditional means of evaluation that do not merit the design and use of a rubric.

Winding down...

In the next and final post of the series, I plan to illustrate how I used some elements of the teaching for understanding framework in designing the course that I took as part of the ASSET Day Scholar Programme.


Eisner, E. (1991). The Enlightened Eye: Qualitative Inquiry and the Enhancement of Educational Practice. New York: Macmillan.

Simmons, R. (1994). The horse before the cart: Assessing for understanding. Educational Leadership, 51(5), pp. 22-23.

Wiske, M.S. (1998). What is Teaching for Understanding? In M.S. Wiske (Ed.) Teaching for Understanding: Linking Research with Practice, pp. 61-86. San Francisco, CA: Jossey-Bass.

Wolf, K. & Stevens, E. (2007). The role of rubrics in advancing and assessing student learning. The Journal of Effective Teaching, 7(1), pp. 3-14.

~  o ~ x ~ o ~

I read the articles quoted in this essay as part of the course EDU T543 Applying Cognitive Science Research to Learning and Teaching at the Harvard Graduate School of Education. The course was intended for those who wanted to develop thoughtful instructional designs for learning. These designs could be in the form of traditional lesson plans or in forms for a variety of other contexts, formal or informal, including massive open online courses (MOOCs), online learning, computer programs and so on. Many of the course examples were drawn from a K-12 context, but the principles apply broadly to life-long learning.



