Impact of Nutritious School Lunch on Student Achievement

A recent article in The Atlantic summarizes a new study that purportedly shows a link between healthy lunches and student achievement.  The study, conducted by economists from the University of California, Berkeley and Case Western Reserve University, found that students at schools that contract with a healthy school lunch vendor score higher on California state achievement tests, with larger test score increases for students who are eligible for free or reduced-price lunches.  The study found no evidence that the healthier lunches contribute to a decrease in obesity. 


The authors argue that offering healthier school lunches is a cost-effective way to increase student achievement, and that in terms of return on investment it compares favorably with other common interventions.  This study is an important reminder that our schools and districts are systems and that all parts of the system are important.  Food service, maintenance, and other elements of operations must be included in school improvement efforts and see themselves as notable members of the team that contributes to student outcomes.

I have previously argued that measuring tray waste is essential in a school system focused on systemic improvement (for efficiency and nutritional reasons).  The report covered in The Atlantic connects the nutritional quality of the lunches to student outcomes.  Schools and districts should now use this report to justify experiments using the measures I previously described.  They should be asking questions like these: when we increase the nutritional quality of the lunches we serve, does tray waste increase or decrease?  Does participation increase or decrease?  Does student satisfaction increase or decrease?  It isn't just a matter of contracting with a healthier school lunch provider; we still need to look at the whole system and see what impact the change has on other important factors, such as student participation. 
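The before-and-after questions above reduce to a simple comparison once the measurements are in hand. A minimal sketch in Python (all figures are invented examples, not data from the study):

```python
# Daily tray waste (lbs) before and after a hypothetical menu change.
# These numbers are illustrative only.
before = [1.2, 0.9, 1.4, 1.1, 1.3]  # old menu
after = [1.0, 1.1, 0.8, 0.9, 1.0]   # healthier menu

def mean(xs):
    return sum(xs) / len(xs)

change = mean(after) - mean(before)
pct_change = change / mean(before) * 100
print(f"Average daily tray waste changed by {change:+.2f} lbs ({pct_change:+.1f}%)")
```

The same arithmetic works for participation counts or satisfaction survey scores; the point is to record a baseline before making the change.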

Smarter-Balanced Results - New Hampshire (2016)

In New Hampshire it is incredibly difficult for the public to access state assessment results in a user-friendly format.  The NHDOE invests significant sums into tools that are opaque and difficult to use.  The way the data are presented makes them impossible for the public to use in any meaningful way and nearly impossible for school leaders or boards of education to use.  So, I created a number of dashboards intended to make it easier to "see" how districts performed and to make comparisons across districts.  The first dashboard is a map of NH with the percent proficient color-coded.  The user can filter by subject, grade, and a range of free and reduced lunch eligibility.  
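The filtering behind a dashboard like this is straightforward. A sketch using pandas (the column names and sample rows are hypothetical; the real state files are formatted differently):

```python
import pandas as pd

# Hypothetical district-level results for illustration only.
df = pd.DataFrame({
    "district": ["Concord", "Keene", "Berlin", "Nashua"],
    "subject": ["Math", "Math", "ELA", "Math"],
    "grade": [4, 4, 4, 4],
    "pct_frl": [32, 41, 68, 45],         # percent free/reduced lunch eligible
    "pct_proficient": [52, 47, 33, 44],
})

# The dashboard's three filters: subject, grade, and a free/reduced-lunch range.
subset = df[(df.subject == "Math") & (df.grade == 4)
            & df.pct_frl.between(30, 50)]
print(subset.sort_values("pct_proficient", ascending=False))
```

Restricting the free/reduced-lunch range is what makes the comparisons fair: districts are compared against demographically similar districts rather than the whole state.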

How can schools reduce food waste? Does it matter?

A report by the Natural Resources Defense Council found that roughly 40% of the food in America is thrown away, at a cost of about $165 billion per year.  Last year John Oliver critiqued the American obsession with food and our propensity for waste.  


Earlier this year I argued that measuring food waste is important for schools: it engages key employees from operations in reducing waste and increasing efficiency, and it ultimately improves student achievement by ensuring students have access to desirable, nutritious food and by targeting the money saved directly at student need.  

This week Mike Lepene, Richmond Middle School Principal, posted on Twitter a link to a project from the Environmental Research and Education Foundation aimed at reducing cafeteria waste.  The project asks schools to volunteer to fill out a short survey (about one hour to complete) and to measure cafeteria waste several times a year.  The aims:

  • Educating students/school staff regarding more sustainable waste management strategies,
  • Reducing unnecessary food waste,
  • Reducing food costs to schools, and
  • Developing ways to better manage institutional food waste.

This is a wonderful project with ambitious aims.  The data collected could easily be used to focus attention on an area of waste in the system.  What's more, it seems like a great way to engage students in a problem-centered conversation.  Students could lead the measurement, the identification of strategies to reduce waste, and the review of evaluation data.  A project like this could be used to introduce students to the iterative, design-thinking process of problem solving.  
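The bookkeeping for a periodic waste audit is light enough that students could run it themselves. A hypothetical sketch (the dates, categories, and weights below are invented for illustration):

```python
# Each audit records the date, students served, and pounds of waste by category.
# All figures are invented examples.
audits = [
    {"date": "2016-10-05", "students": 240, "food": 38.0, "packaging": 12.5, "liquid": 9.0},
    {"date": "2017-01-18", "students": 233, "food": 31.5, "packaging": 11.0, "liquid": 8.5},
]

summaries = []
for a in audits:
    total_lbs = a["food"] + a["packaging"] + a["liquid"]
    oz_per_student = total_lbs / a["students"] * 16  # normalize by enrollment
    summaries.append((a["date"], total_lbs, oz_per_student))
    print(f"{a['date']}: {total_lbs:.1f} lbs total, {oz_per_student:.1f} oz per student")
```

Reporting waste per student, rather than raw totals, lets a school compare audits even when lunch participation changes between measurements.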

If we are interested in improving our systems then we should be paying attention to where we are wasteful. 


Better PowerPoint Slides - Coherence

I recently gave a short training session on how to use slides more effectively to encourage learning.  PowerPoint (or similar software) is prevalent in college courses, but slightly less so at the K-12 level.  The most recent numbers I could find for K-12 came from a 2009 survey of teachers, which reported that 63% of teachers said they “sometimes” or “always” used software to make presentations.  It’s hard to say whether that figure reflects current use, but when I asked teachers during my recent training session they said it seemed reasonable given what they had seen among colleagues.

It is even less clear from published research whether using PowerPoint leads to increased learning.  At best, research has pointed to mixed results: in some studies there appears to be a learning advantage from using PowerPoint, while in others the results suggest it might actually hinder learning. 

One way to approach this problem is simply to ban PowerPoint from classrooms.  This argument has been made for classrooms specifically, but also for meetings (see this article).  However, this approach ignores research showing that learning increases when visuals are used effectively.  

The multimedia principle tells us that people learn more and are better able to apply what they have learned when they are instructed with both words and pictures than when they are instructed with words or pictures alone (Mayer 2009).  This makes sense when we consider the dual coding assumption.  According to Paivio (1971), there are two ways people learn new material: verbal associations and visual imagery. The theory, called dual coding, proposes that both visual and verbal information are used to represent information in memory. However, visual and verbal information are processed through distinct channels in the human mind, creating separate representations.  Either or both codes can be used when recalling information.  When asked to recall a stimulus, such as an animal, the person can retrieve the word representing the animal (e.g. “cat”), the image of the animal, or both simultaneously.  Dual coding postulates that storing a stimulus in two different ways increases the probability that it will be remembered.  If this is true, why isn’t a visual tool like PowerPoint more effective?

The mixed research on PowerPoint is probably explained by the way it is applied in the learning environment.  In many cases PowerPoint is used to display large amounts of text with no picture, or with a small (and not always relevant) picture.  Pollock et al. (2002) argue that one of the biggest issues with PowerPoint may be the combination of narration (voice) and on-screen text at the same time, which increases cognitive load. 

So, how do we maximize the learning potential that PowerPoint offers us (by using “words and pictures” to increase learning)?  Richard Mayer (2009) identified twelve principles that can be applied to the use of multimedia in the classroom to improve learning.  In this post I will cover the first five: coherence, signaling, redundancy, spatial contiguity, and temporal contiguity. 

Our first goal in developing a high-quality multimedia experience for students is to reduce extraneous processing demands (that is, to avoid cognitive overload).  Applying these five of Mayer's principles helps us achieve this goal.  The principles:

Coherence Principle – People learn better when extraneous material is excluded rather than included.  For example, students learning about viruses performed better when the lesson did not include interesting but irrelevant facts about viruses. 

Signaling Principle – People learn better when cues that highlight the organization of the essential material are added.  Cues such as arrows, highlighting, and flashing improved learning.

Redundancy Principle – People learn better from graphics and narration than from graphics, narration, and printed text.  When printed text and voice are both present, the “words” channel is overwhelmed and learning is reduced.  If the text is short it may increase learning. 

Spatial Contiguity Principle – People learn better when corresponding words and pictures are placed near each other rather than far from each other on the page or screen. 

Temporal Contiguity Principle – People learn better when corresponding words and pictures are presented at the same time rather than in succession.  For example, don’t describe the steps in a process then show the process.  Describe and show the process simultaneously. 

One principle we should apply to our development of multimedia (including slides) is coherence.  Below is a slide from a presentation given to the San Francisco Unified School District (SFUSD) Board of Education on September 29, 2015.  The presentation was designed to present the district's 2015 Smarter-Balanced Assessment Consortium results.  The slide is difficult to read because of the clutter.  There are two types of charts: stacked bar and line.  The stacked bar chart differentiates between two score levels (“met” standard and “exceeded” standard), while the line chart displays “met” and “exceeded” combined as a single number.  The bars are labeled with the combined proportion of “met” and “exceeded”, but the line chart has no data labels.  The bars are stacked, but there is no way to tell the value of “met” versus “exceeded”.  The y-axis has values, but it is very difficult to discern the value of the portions of the stacked bars or of the line.  Below the chart is a table showing the combined “met” and “exceeded” values for both the bar chart and the line chart.  The chart is confusing because it lacks coherence: there is too much information, which distracts from the important comparison. 

The second example below is the slide redesigned with the aim of improving coherence.  First, I removed unnecessary information: the stacked bars, the table of data, and the y-axis labels.  Second, I used a single type of graph to compare SFUSD to the national pilot study data.  Third, I removed extraneous lines.  The redesigned slide is aimed at making it easier for the audience to quickly see what is being compared and analyze the results. 
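The same decluttering moves can be expressed in code. A sketch with matplotlib (the numbers are placeholders, not SFUSD's actual results): plot only the combined "met or exceeded" series, label the points directly, and strip the extra ink.

```python
import matplotlib
matplotlib.use("Agg")  # render to a file without a display
import matplotlib.pyplot as plt

grades = [3, 4, 5, 6, 7, 8]
district = [48, 51, 46, 50, 53, 49]   # placeholder percent met/exceeded
pilot = [41, 44, 40, 43, 45, 42]      # placeholder national pilot results

fig, ax = plt.subplots()
ax.plot(grades, district, marker="o", label="District")
ax.plot(grades, pilot, marker="o", label="National pilot")

# Coherence: remove extraneous ink so the comparison stands out.
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.set_yticks([])                     # drop the y-axis...
for x, y in zip(grades, district):    # ...and label the points directly
    ax.annotate(f"{y}%", (x, y), textcoords="offset points", xytext=(0, 8))
ax.set_xlabel("Grade")
ax.legend(frameon=False)
fig.savefig("coherent_slide.png")
```

Direct labels replace both the y-axis and the data table from the original slide, so the audience reads each value exactly once.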

Whether we are using slides to enhance learning in a classroom full of students or to present to a school board, by applying the principle of coherence to all our slides (not just charts) we vastly improve communication and increase learning.  

Mayer, R. E. (2009). Multimedia learning (2nd ed.). New York: Cambridge University Press.

Paivio, A. (1971). Imagery and verbal processes. New York: Holt, Rinehart, and Winston.

Pollock, E., Chandler, P., & Sweller, J. (2002). Assimilating complex information. Learning and Instruction, 12(1), 61-86.

Balanced Scorecard - Background Information

A few months ago I wrote a blog post arguing that the Balanced Scorecard (a performance management system) is a great complement to improvement science.  I argued that the Balanced Scorecard, when implemented correctly, (1) describes the organization and (2) is used for action.  However, the post only listed the key attributes and did not describe them in detail.  Over the next few blog posts I will describe the balanced scorecard in more detail. 

The concept of the balanced scorecard (BSC) was first introduced in a widely cited Harvard Business Review article, “The Balanced Scorecard—Measures That Drive Performance” (1992).  In this article Robert Kaplan and David Norton argued that financial results alone do not accurately describe the health of an organization.  Financial measures are lagging indicators and, as such, are not effective at identifying the drivers or activities that affect financial results.  Companies used too many indicators and focused on a narrow slice of their organizations, which meant they lacked tools to understand the organization's trajectory.  Leaders were often overwhelmed by too much information that arrived too late for them to make mid-course corrections.   Kaplan and Norton argued that measurement is an effective way to focus organizations, but that most measurement systems ignored the intangible parts of the system that drive the most value.  Companies had been collecting data for a long time, but Kaplan and Norton made two important contributions that vastly improved the effectiveness of performance management.  First, they developed a consistent framework for data collection, arguing that organizations should collect data from four perspectives.

  • The financial perspective. Measures in this perspective should focus on whether the organization is generating financial value for shareholders.
  • The customer perspective.  Measures in this perspective should focus on whether customers perceive that their needs are being met.   
  • Internal processes perspective.  Measures in this perspective should focus on the behaviors that the organization must be great at to be successful.   
  • Learning and growth perspective.  Measures in this perspective should focus on the organization's efforts to improve. 

The data collected for performance management should not be a random set of measures, but should be carefully selected to represent each of the four perspectives. 
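To make the four perspectives concrete, here is a hypothetical sketch of a balanced set of measures for a school district (the measures and targets are invented for illustration, not a recommended scorecard):

```python
# One illustrative measure per perspective; a real BSC stays similarly lean.
# All measures and targets below are invented examples.
scorecard = {
    "financial": {"measure": "cost per pupil vs. budget", "target": 1.00},
    "customer": {"measure": "families satisfied with school", "target": 0.85},
    "internal_processes": {"measure": "lessons meeting instructional framework", "target": 0.90},
    "learning_and_growth": {"measure": "staff completing improvement training", "target": 0.80},
}

# A balanced scorecard draws a measure from every perspective, not just one.
assert set(scorecard) == {"financial", "customer",
                          "internal_processes", "learning_and_growth"}
for perspective, m in scorecard.items():
    print(f'{perspective}: {m["measure"]} (target {m["target"]:.0%})')
```

The structure enforces the balance: a scorecard missing one of the four keys is, by this definition, incomplete.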

Second, the BSC is designed to communicate and monitor strategy.  Organizations that develop a BSC must be clear about the cause-effect relationships that make up organizational strategy.  In fact, prior to selecting measures and managing with the BSC a strategy map (a visual tool that displays the cause-effect strategy – example below) must be developed.  Once the strategy is clearly mapped a balanced set of measures is selected to monitor strategy execution. 

The purpose of the BSC is to track a broader range of measures that indicate how the organization is doing with respect to targets (e.g. financial measures) and whether the organization is achieving necessary growth in areas that will drive future performance (e.g. is the organization meeting customer targets). The BSC enables leaders to monitor and adjust the implementation of organizational strategy more regularly than just once per year.  The use of the BSC in nonprofit and government sectors requires that the customer perspective be the focus, since value creation for customers (as opposed to shareholders or owners) is key.  Thus, the financial perspective is seen as supporting the mission and vision of the organization rather than as the central measure of success.

In education the BSC is almost always implemented as an accountability tool.  School districts publish annual scorecards with measures of student achievement (e.g. percent of students proficient on state tests), attendance (e.g. proportion of students absent), or parent engagement (e.g. proportion of parents attending conferences).  School districts set targets and color-code the scorecards based on whether the annual targets were met.  In several future blog posts I will explain why this is the least effective way to use the BSC (and may not even qualify as a BSC).  
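The color-coding in these accountability scorecards boils down to a threshold check, which is part of why it is so limited. A minimal sketch (the 5% tolerance band and the figures are invented, not a standard):

```python
def status(actual, target, tolerance=0.05):
    """Green if the target is met, yellow if close, red otherwise.
    The 5% tolerance band is an invented example, not a standard."""
    if actual >= target:
        return "green"
    if actual >= target * (1 - tolerance):
        return "yellow"
    return "red"

# Hypothetical annual results vs. targets.
print(status(actual=0.62, target=0.60))  # met the target
print(status(actual=0.58, target=0.60))  # within the 5% band
print(status(actual=0.45, target=0.60))  # well short of the target
```

Notice that the check says nothing about cause and effect: a wall of green and red cells reports outcomes without mapping the strategy that is supposed to produce them.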

How Does Achievement Vary with Income in Your State?

The New York Times published a piece at the end of April that revealed the variance in learning outcomes for students across the United States.  The findings reported in the article were based on data and articles published by the Stanford Center for Education Policy Analysis.  Here is the key quote from the third paragraph: "Children in the school districts with the highest concentrations of poverty score an average of more than four grade levels below children in the richest districts."  The next paragraph pointed to an even more troubling finding: "Even more sobering, the analysis shows that the largest gaps between white children and their minority classmates emerge in some of the wealthiest communities."  The researchers from Stanford further found that even in districts where students of different races came from similar economic backgrounds, white students tended to perform better.  One of the authors, Sean Reardon, noted that in these cases white children might unconsciously be tracked into more rigorous courses or given more challenging work.

The NY Times article came with some great data displays.  I have downloaded the data from Stanford and created a dashboard that allows users to select a single state and compare achievement (measured in grade-level equivalency) with various demographic factors of the district (e.g. median income, households in poverty). One of the things I learned by doing this is that not all states look the same.  For example, the correlation of income with achievement in Missouri is much smaller than in Connecticut or New York.  
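The state-to-state comparison in the dashboard reduces to a correlation computed within each state. A pandas sketch with invented sample rows (the real Stanford files use different column names and contain far more districts):

```python
import pandas as pd

# Invented district rows for illustration; not the actual Stanford data.
df = pd.DataFrame({
    "state": ["MO", "MO", "MO", "CT", "CT", "CT"],
    "median_income": [45, 52, 60, 55, 80, 110],    # $ thousands
    "grade_equiv": [-0.4, 0.1, 0.2, -0.9, 0.4, 1.3],
})

# Correlation of income with achievement, computed within each state.
corr_by_state = {state: g["median_income"].corr(g["grade_equiv"])
                 for state, g in df.groupby("state")}
for state, r in sorted(corr_by_state.items()):
    print(f"{state}: r = {r:.2f}")
```

Grouping before correlating is the key step: a single nationwide correlation would hide exactly the state-level differences the dashboard is meant to surface.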

Does Your District Need A New Strategic Plan?

Developing a new strategic plan can be a time consuming and expensive process for a school district.  So, how do you know whether it is time to invest in the development of a new plan?

Here is a very simple assessment I share with district leaders and board members who are considering whether to take the time to develop a new plan.  The purpose of the assessment is not to make the final decision for the organization about whether a new strategic plan is needed.  Rather, the assessment should be used to generate a conversation about how the current plan is used.  By having a conversation about the utility of the current strategic plan, the board and district leaders should be better able to decide if they are ready to invest in a new plan.  

Education Data System Designers Should Behave More Like Museum Curators

Years ago I was part of a team of school district employees charged with selecting the new data system we would use for analyzing student achievement results.  We made our final selection: a system we felt balanced an intuitive interface with the capacity to handle almost any kind of data or analysis (save deeper statistical analysis).  We had negotiated a contract and begun developing our implementation timeline when a developer on our team piped up, "It feels a little like we are buying a Cadillac when all we really need is a Honda Civic."  The developer asked for a weekend to work on an alternative to show to teachers.  He rolled out a simple, if slightly unattractive, solution that teachers seemed to really appreciate.  We ended up not signing the contract with the big data systems provider and saved the district about $350,000.  

So, what made the Civic the better system?  The locally developed system limited choice: a user had only about six options in the original version.  The designer had curated a collection of views, reports, and drill-downs, making choices about what he thought users would find most interesting and most valuable for teachers (who are incredibly time constrained).  He treated the data system he designed the way a museum curator treats a collection.  The curator does not hang every painting they have.  Instead, they choose paintings that represent the period, artist, or mood they are trying to convey for the exhibit.  And when museums develop audio guides, they restrict even further what patrons learn deeply about.  

Museum designers must make choices because they are space constrained.  Unfortunately, the designers who create data dashboards for educators don't see their users as constrained, so they give them everything.  In designing the data systems we want school leaders and teachers to use, we should behave more like museum guides or curators and lead our users to the places where they are most likely to find insights.  


Edward Segel - How to Tell Stories with Data

Edward Segel, an expert in data visualization, gave this great talk about four years ago; it is available on Vimeo.  Segel is now the vice president of product development at the cutting-edge insurance company Oscar.  Segel's talk is about interactivity and data visualization.  Early in the talk he says there are four main reasons to create interactive visualizations: (1) for analysis, (2) to personalize (e.g. to determine where you fit), (3) for social purposes, and (4) for storytelling. 

Inside education we tend to focus on interactivity for analysis.  We use dashboards and tools (e.g. Excel) to analyze the data available to us.  Outside of education, data scientists are increasingly using interactive visualizations to tell stories about education (e.g. the recent NPR story about disparities in education funding). 

Education leaders need to increase their emphasis on telling stories with data.  While we may not be trained to do this, we need to understand that if we don't start, others will...and we may not like the way they organize the data to tell stories. 

I encourage you to watch Segel's lecture and consider how you might harness the data that you have to make better interactive visualizations that will tell important stories about your school.  Segel's lecture can be a little academic at times, but the ideas and principles that he talks about will be helpful.