People and programs 03/13/2014


How do you evaluate the impact community food programs have on participants' health? It's a tricky endeavour. We chatted with Research and Evaluation Manager Trace MacKay about the successes she's had and challenges she faces in evaluating the impacts of food access, food skills and education programs at The Local, The Table and The Stop.

Everyone wants to see impacts on health. How can you prove a particular program has had a particular impact on someone's health?

Any program or intervention that claims to have health impacts will require measuring some change over time. Before making those claims, it's important to make sure that change is achievable with the target participants, and that the duration of the program is appropriate to influence that change. Make sure impacting participants' health is really an objective of your program before you claim more than it can deliver.

Health is a complex concept. It means different things to different people. It can encompass the economic, the social, the physical and the mental — there's no one standard definition of what “health” is. Impacts on health are complex and long-term, and I don't believe there's any one silver-bullet intervention that can improve health or prevent chronic disease.

I'm a firm believer in the social determinants of health. I believe healthy behaviour change in individuals or groups can be impacted by lots of different variables, many of which are often outside the control of our program interventions. I ask myself lots of questions: Did our community kitchen program fail because some participants are not reporting that they're cooking more healthy meals at home? Well, the participant may have demonstrated increased awareness, knowledge and confidence around making healthy food purchases and cooking healthy meals, but they may not have their own kitchen space, a sharp knife or easy access to affordable healthy ingredients in their neighbourhood. Or maybe they were already cooking every meal at home prior to the program, but they live alone and joined the program to make new friends and reduce their social isolation.

Health impact is an area where we need to let go of attribution. Any one program or organization can only contribute to the improved health of individuals or groups.


Then how do you know if a healthy eating-focused program is having an impact?

We recognize that there's a continuum of short- to medium-term outcomes for many of our programs. We try to capture changes in awareness, knowledge, confidence and behaviour/habits. If these early changes are known influencers (or determinants) of health, then we can claim that a program has contributed to healthy changes in program participants. Indicators that are already linked to long-term health — fruit and vegetable consumption, for example, or cooking and sharing healthy meals at home — can be used as proxy health indicators for healthy eating programs. Impacts, by definition, are very long-term changes. While the short- to medium-term outcomes of a program may contribute to overall impacts, you can't expect to measure impacts resulting from a single program or suite of programs in a short amount of time.

When planning your programs and your evaluation strategy, you need a strong theory of change, one that provides an evidence base for why your program would be expected to produce certain outcomes. You can use reported findings from peer-reviewed research, grey literature, evaluation reports from other organizations and your own track record to help inform program planning, your expected number of contact hours and reasonable targets for the level of change you can expect in your participants. This will help identify what evaluation questions need to be asked to capture the changes you expect to see. Because other barriers may exist for your program participants, asking those who report no change "why not?" can help you separate problems with your program from external influences that may have prevented the changes you expected to see.

Can you talk about the different types of evaluation tools you use?

We use pre- and post-evaluation tools for programs with regular attendance that are delivered over a specified period of time. Pre-surveys give us baseline data, help us define who our participants are and where they’re coming from, and help identify barriers participants face with regard to healthy food choices and eating habits. The assets, needs and interests identified in pre-surveys can also be used to shape program content.  
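As a rough illustration of what this can look like in practice, here is a minimal sketch (Python with pandas) of a paired pre/post comparison once survey responses are in a spreadsheet; the file name, column names and 1–5 confidence scale are hypothetical, not a description of the organization's actual survey tools.

    # Minimal sketch: paired pre/post comparison of a self-reported score.
    # Assumes a hypothetical CSV with one row per participant and columns:
    #   participant_id, confidence_pre, confidence_post   (self-rated, 1-5)
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("survey_prepost.csv")

    # Keep only participants who completed both the pre- and post-survey.
    paired = df.dropna(subset=["confidence_pre", "confidence_post"])
    change = paired["confidence_post"] - paired["confidence_pre"]

    print(f"participants with both surveys: {len(paired)}")
    print(f"average change in confidence:   {change.mean():+.2f} points")
    print(f"share reporting an increase:    {(change > 0).mean():.0%}")

    # A paired, non-parametric check of whether the shift is larger than
    # chance alone would suggest (survey scales are ordinal, so a
    # signed-rank test is a reasonable default over a t-test).
    stat, p = stats.wilcoxon(paired["confidence_pre"], paired["confidence_post"])
    print(f"Wilcoxon signed-rank p-value:   {p:.3f}")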

Don’t have a pre-test? Running a drop-in program? Don’t panic! You can still measure changes in your participants. You just need to capture how long your participants have been attending the program and ask a range of questions that cover the continuum from awareness change to behaviour change. You may find that participants who have just started a program have learned something new about reading food labels, but may not yet have the confidence to make healthier food choices when grocery shopping, whereas longer-term participants may be making healthier food choices with confidence and cooking more healthy meals at home. One important thing to remember: frame your questions in the context of your program, for example, “Since you’ve been coming to X program…” or “Because of what you’ve learned in X program…”.
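To make the drop-in approach concrete, a sketch along the same lines (again Python, with hypothetical column names and attendance cut-offs) might group responses by how long participants have been coming and compare answers across the awareness-to-behaviour continuum.

    # Minimal sketch: cross-sectional summary of drop-in survey responses,
    # grouped by how long each participant has been attending.
    # Assumes hypothetical yes/no questions coded as 1/0.
    import pandas as pd

    df = pd.read_csv("dropin_survey.csv")

    # Bucket self-reported months of attendance into tenure groups.
    df["tenure"] = pd.cut(
        df["months_attending"],
        bins=[0, 3, 12, float("inf")],
        labels=["new (<3 mo)", "regular (3-12 mo)", "long-term (12+ mo)"],
    )

    # Questions spanning the continuum from awareness to behaviour change.
    questions = [
        "learned_about_food_labels",    # awareness / knowledge
        "confident_choosing_healthy",   # confidence
        "cooking_more_meals_at_home",   # behaviour
    ]

    # Share of "yes" answers per tenure group, for each question.
    summary = df.groupby("tenure", observed=True)[questions].mean()
    print(summary.round(2))

If longer-term participants consistently answer "yes" further along the continuum than newer participants, that pattern supports the claim that the program is contributing to the change, even without a formal pre-test.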

Because we rely on self-reported changes, we try to capture as much detail as possible about how the program in question has contributed to the changes being reported. If you ask someone if they've noticed any changes in their physical or mental health since they started attending your program, you're going to want to follow up with a question that digs deeper and asks the participant to describe the change and how they noticed and/or measured it.

Open-ended, self-reported qualitative questions can be a great way to capture unexpected outcomes in your programs. When we recognize a good story, we never hesitate to go deeper. While we do our best to keep our Annual Program Survey interview responses anonymous, when we’re speaking to a participant and their story of change is remarkable, we like to follow up to get the whole story. The most significant change story is a great qualitative evaluation tool.  

How reliable are self-reported changes in evaluating health outcomes in program participants?

Unless the specific objective of your program is to change health indicators that can be directly measured with individual participants (i.e. biometrics like cholesterol levels, blood pressure, waist circumference, etc.), you will be relying on participants self-reporting health changes. There is no shame or irrelevance in self-reporting — even Statistics Canada relies on telephone or online surveys for their longitudinal national health data. And direct measurement doesn't solve the attribution problem any better. The gold standard in research — a comparison group that doesn't receive the intervention, with characteristics balanced against your program population, and a demonstration of statistical significance between the outcomes of the two groups — often doesn't fit well with real-life community-based programs and interventions. You need to find tools that are proven, reliable, easy to administer, respectful and minimally invasive to your participants.

If you do decide to use biometrics in your evaluation, give participants the opportunity to choose what they want to measure. For our new FoodFit program, which combines healthy eating with physical activity components, participants were asked at the start of the program to identify what changes they'd like to see, both as individuals and as a group. We then built biometric measures into the program evaluation, and the capacity to capture them as needed. Using a participatory approach rather than a prescriptive approach can be useful in creating buy-in and increasing motivation of program participants.

What about measuring change in kids? 

We’re piloting some new evaluation tools in our After School Programs this year that are more activity-based. We decided to move away from written tests because kids get enough tests at school. You can get really creative with kids: you can make games or gameshows that demonstrate understanding of taught concepts; you can observe changes in habits like hand-washing, kids helping others with measuring or mixing, or working well as a group; you can ask for drawings of the part of the program kids liked best; you can have kids vote on the healthiest plate in a lesson. The key is to make it fun and meaningful. To capture the changes in the children at home, you can develop an evaluation tool for their parent(s) or caregivers.  

Whatever the target age of the participants in the programs we're evaluating, we try to keep the following in mind: 1) Make it as easy, fun and meaningful as possible; 2) Be respectful; and 3) Use and share your results!

Related links

+ Log in to The Pod Knowledge Exchange for our nutrition learning module: Nudging People Towards Healthier Eating