Atul Gawande writes that doctors should "count something".
"A doctor should be a scientist in his or her world. In the simplest terms, this means that we should count something. If you count something interesting to you, I tell you: you will find something interesting." (Gawande, commencement address to Harvard Medical School, June 9, 2005).
In education we should heed Gawande's call to "count something," not for the purpose of accountability, but for discovery and to improve performance. For this to work, the effort has to come from an honest place of curiosity, not serve as a tool to cast blame or to identify and remove "bad actors" in the system. We must also focus on counting behaviors or conditions that are antecedent to student learning, and on parts of the system that we can actually control and improve.
Counting can be an effective way to improve performance. For example, if you want to lose weight, one of the most effective things you can do is count calories (or bites). If you want to get faster at running, you count how many miles you run and how fast. If you want to get stronger, you count how many push-ups you do or how many days you work out. The positive relationship between counting and change can be observed in many contexts and at varying scales.
How might the concept of counting something apply to improving our educational systems?
Let me share an example of how this can work at the district level. An urban district I know had recently seen a significant increase in graduation rates. In fact, this district was exceeding the state average for graduation rate and was far below average in terms of dropout rate. However, when it came to other academic outcomes, the district's students performed poorly. The district consistently had low state assessment scores, low college admissions scores, and high college remediation rates (among the highest in the state). District leaders were frustrated because their efforts to improve graduation rates didn't seem to have much effect on other measures of learning.
To figure out where they needed to improve as a system, the district leaders carefully mapped their strategy for success. The leadership determined that an increase in classroom rigor, paired with high expectations for all students, was an important step toward improving student learning. In this district's case, an increase in classroom rigor was seen as an antecedent condition to improved learning outcomes for students.
What the leaders in this particular district did next is what makes them a little different. Instead of creating a plan to improve classroom rigor, implementing the plan, and waiting to see whether student learning increased, they started by asking "how can we measure classroom rigor?" The leaders wanted to measure what was happening in classrooms in terms of rigor rather than wait a year or more to see whether their efforts were paying off in student learning. They wanted to measure rigor because they were curious: about where it seemed most prevalent, whether their professional development improved it, and whether it really had an impact on learning outcomes. So the district started counting when and how often they observed rigor. The very act of counting allowed them to see the district in a new way. It inspired them to ask new questions, to challenge their assumptions about how change occurs, and to find new ways to support the development of more rigorous classrooms.
It doesn't matter how the school system is designed (e.g. traditional, high tech, project-based); the leaders within the system should accept the challenge to count something. By counting something we feed our curiosity, focus on what we want to improve, and learn what actually works. Most importantly, by counting something we have a much better shot at improving the quality of the experience for students.
Here are some things you might count:
- Lessons observed at grade-level measured by an outside observer (e.g. colleague, coach, administrator).
- Proportion of students engaged in lesson measured by an outside observer.
- Amount of time spent in various classroom configurations (e.g. whole group, small group, individual work).
- Proportion of students who missed more than 5 days in the past month.
- Proportion of employees who report feeling supported.
- Number of times students come unprepared to class.
- Number of interruptions during class, categorized by type (e.g. student interruption, announcement, student leaving or arriving, etc.).
- Levels of anxiety and stress among students (see interesting NYT article on this subject).
- Level of rigor of assignments.
- Student-reported satisfaction (or other survey measures).
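A spreadsheet is enough to start counting items like these, but even a few lines of code can turn raw tallies into proportions you can track over time. Here is a minimal sketch in Python; the interruption categories and the observation data are invented purely for illustration:

```python
# Hypothetical example: tallying interruptions observed during class.
# The entries below are made up; in practice each would come from an
# observer's log for a given lesson.
from collections import Counter

interruptions = [
    "student interruption", "announcement", "student leaving",
    "student interruption", "announcement", "student interruption",
]

# Count each interruption type and compute its share of the total.
counts = Counter(interruptions)
total = sum(counts.values())

for kind, n in counts.most_common():
    print(f"{kind}: {n} ({n / total:.0%} of interruptions)")
```

The point is not the tool but the habit: once counts like these exist, a team can ask whether the proportions shift after a change in practice.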