The McKnight Foundation invests money in efforts to make changes in the world. These investments are made in organizations whose work aligns with our program goals. We fulfill our mission through other organizations and people. This is easy to forget, but it is fundamental for a foundation to remember. There are two simple reasons we give these organizations support: one, our missions and goals overlap, and two, we are legally set up to give them money. Our grantmaking is the basis for our credibility, our authority, and our relationships with all of these partners.
Today I want to share with you a basic overview of McKnight’s working evaluation framework model — which began with the key notion that the only evaluation system that is worthwhile is one that is actually used.
To measure McKnight’s ongoing effectiveness, we wanted to create an evaluation framework that was practical, actionable, and sustainable: practical, because we’d have to incorporate it into our daily lives; actionable, because we wanted it to be useful and to obtain information that we knew how to use; and sustainable, because our previous attempts and what we had seen from other foundations often required heroic amounts of staff time and resources that we didn’t have.
The evaluation framework is based on our grantmaking, but begins with what we call the evaluative relationship, the relationship our program staff have with grantees and other partners. All of our grantees exist in a very complex environment. We’re just one of many forces and influences. Our relationship with grantees becomes fundamental to how successful the work is. If the relationship is good, we’ll have a clear and better understanding of how our missions align and how the work is going. If the relationship is bad, we’ll have less certainty and trust in the information we receive. We work with program staff to remind them that with each and every grant we’re looking for goal alignment and how we can help that organization do its work better. Our grantees’ capacity and ability to be effective is tied to our effectiveness. We’re dependent on the groups that we support.
The second piece of the evaluation framework involves a strategic discussion; we also call it a cumulative program evaluation. Because we receive so many inquiries, it’s easy for us to get stuck in a responsive mode where we fulfill our goals by choosing grantees based on what’s coming in. The underlying problem with this process is that our goals are addressed passively and reactively rather than through active thinking.
Although many program staff routinely think about and analyze the field, looking for new ideas and opportunities, there has been nothing in our organization to help staff have that conversation. One of the things that we’ve started over the past couple of years is an annual strategic discussion where we look at all of the work that a program area has done, including grants, convenings, who’s coming in, and who’s not. We try to understand what we’re doing to achieve our goals and then look at what’s missing. This may sound very simple, but it’s the kind of reflection that won’t happen unless you deliberately take the time for it.
The third piece is simple metrics. External measurements are really useful for gauging what’s going on in the world outside, so that we can measure our progress against it. Measurements can give us an idea of the positive or negative trends in our fields of work. They help inform a conversation between the foundation, our staff, and the people we are working with in the field, so we can better understand what is going on, make adjustments, and continue our shared work together. The key is how useful a measure is to our work and to the field. Research shows that organizations with a few simple metrics they can actually use and talk about perform better than those with mountains of data they don’t know what to do with.
The last piece is external audits. The McKnight Foundation regularly participates in the Center for Effective Philanthropy grantee perception survey, which is conducted every three years. We take the findings from these surveys seriously, and they have prompted many internal changes.
Another example is that each of our programs is now on a five- or six-year external evaluation cycle. We hire an external firm to evaluate the programmatic impacts and programmatic frame for our various program areas. Over the last couple of years, evaluations have been conducted for our arts, housing, and neuroscience programs. We are currently evaluating our funding in Southeast Asia, and in the next couple of years we plan to evaluate our region and communities program.
When seen all together, the components I’ve outlined form a practical and comprehensive evaluation system. Where useful, we have created more elaborate evaluation systems for some specific program strategies. This more general framework isn’t meant to replace those, but rather to provide a consistent starting place and overall logic for how we incorporate evaluation into our work, across all programs.
Of course, all these things help us understand what we’re trying to do and accomplish together. But, as I said in the opening, the most important and fundamental aspect of any evaluation system is its utility — i.e., can we arrive at key metrics and an overarching evaluation system that will actually get used?
For us at McKnight, central to doing this is to commit the needed human and program resources, and make them a routine and intentional part of our institutional life and our shared work. Our approach starts with existing activities in program areas, and makes thoughtful, reasonable use of internal and external resources. Ongoing, McKnight’s staff and board make course corrections and refinements where needed, to ensure we’re accomplishing our goals and adding value to our work and relationships.
Such ongoing review will likely result in parallel refinements to our overarching evaluation framework over time. But even as our strategies and tactics shift, our core intent will not change. To make the most of the private resources we administer for public good as a family foundation, McKnight’s staff and board believe we have a responsibility to set goals, to measure our results and impact over time, and to hold ourselves accountable for our work.
Neal Cuthbert has been vice president of program at McKnight since 2005, providing leadership and management for McKnight’s program-related activities. He joined McKnight in 1991 as the Foundation's first arts program officer and was named program director in 2000. Previously, Cuthbert was director and publisher of the monthly art and culture journal Artpaper, and a planner at the Metropolitan Council in both its arts and housing programs. He is an exhibited visual artist and has been published as a critical writer and commentator both locally and nationally. Cuthbert holds a Bachelor of Arts degree in art history from the University of Michigan.
Opinions in the For Discussion columns are the authors' alone and do not necessarily reflect the views of Minnesota Compass. Compass welcomes a range of views about issues pertaining to quality of life in Minnesota.
See the video of Neal Cuthbert at Fact-based fundraising 2010