This blog post was originally published on Harvard’s Hauser Center site.
On March 7, 2010, Sherine Jayawickrama began on these pages a discussion of Charity Navigator’s proposed new rating system. As many readers know, CN is the largest US agency rating nonprofit organizations. Since its launch in 2002, CN has relied upon, and reported, the financial information contained in the federal Form 990.
It is no secret that over the years this rating system came under considerable censure. Critics charged that purely fiscal measures were flawed for a variety of reasons, and that CN’s ratings could be having the perverse effect of steering investment away from organizations that were actually effective but which, because of their particular circumstances (considerations not readily apparent in the 990 data), had overhead or fundraising costs higher than CN thought prudent.
We have listened to these criticisms. That is why we have announced that Charity Navigator is moving to a triad rating system, one that will retain fiscal measures (which may well be revised), but also account for an organization’s transparency and accountability and, most importantly, its effectiveness.
Commentator Steve Lawry has countered that he does not believe that a “simple system for rating outcomes” is achievable. Here he joins the numerous naysayers who have, since the inception of the outcomes movement over a decade ago, argued that the work of specific charities, whole classes of charities, or the entire charitable sector itself is too complex to be held to any standard of accountability as regards results and effectiveness. Mr. Lawry also states that “Many good charities strive mightily to measure outcomes for their own management purposes.” We believe that he is wrong on both counts.
While no one will argue that charities often work in complex situations, circumstances shaped by and vulnerable to any number of variables, the essential question, “Have you made any discernible, meaningful, and positive difference?” should not be beyond a charity’s capability to answer. Unfortunately, however, far too many nonprofits claim precisely that it is. In fact, instead of the “many good charities” Mr. Lawry cites as striving “mightily” to measure outcomes for their own management purposes, our own investigation led to the inescapable conclusion that fewer than 10% of nonprofits are using outcomes at all, either as a standard by which to measure their effectiveness or as any sort of management tool. Moreover, rather than the evidence that would be available were Mr. Lawry’s assertion accurate, we instead find an overwhelming collection of excuses for why nonprofits are, in fact, not applying outcomes to either their work or themselves: “It is too hard,” “It is too costly,” and “We don’t know how” are among the most often cited.
In the end, Mr. Lawry criticizes the effectiveness component of CN’s new initiative as an impossible task in a multivariate world. We reject that position entirely. No one is asking for a scientifically provable claim of proportionate credit for an incrementally improved situation. What the sector needs, and what donors are increasingly demanding, is some sort of reasonable evidence that an organization succeeded in what it claims to have done. There is a distinct difference between the two, and to hide behind the immensity of measuring the big picture as an excuse for neglecting to measure the small picture is, in our opinion, an abdication of the responsibility the sector has for granting donors (governmental, institutional, or individual) more than the small satisfaction of the bromide “We tried and our intentions were honorable.” Ours is not a “pretense,” as Mr. Lawry put it, that we can credibly score outcomes, but rather a faith that we can report on those outcomes that charities are establishing and achieving themselves.
Moving on, we find Dan Pallotta claiming, in the first comment on Mr. Lawry’s entry, that “No rating system can possibly capture the underlying complexity [of nonprofits’ activities and operating environments], and worse, a rating system enables the public addiction to simplicity.” But “simplicity” is not the goal; trust is: specifically, donors’ trust that the information they are given about their social investment is accurate and verifiable. There are two points to be made here. The first is that, while everyone recognizes that donors give for a variety of (often emotional) reasons, they nonetheless very often seek and appreciate some guidance. Within virtually any field of nonprofit endeavor, there are usually a number of organizations at work. Which among them, donors often want to know, is the preferable investment? The second is the question, “How can I trust what I am being told?”
Responding to these questions is the underlying goal not only of CN’s new initiative but of its entire history. This is why CN has remained so fiercely independent, even when an easier fiscal path might have been found in a different arrangement between ourselves and the charities we rate. In a universe of self-serving information, we are trying to be a truly objective source that can answer at least some of the questions thinking donors have. We view neither the questions at hand nor our audience as “simple.”
Finally, Messrs. Mitchell and Schmitz weighed in on the conversation with a balanced assessment of both the inherent flaws in a rating system based solely upon fiscal measures and the complexities of measuring outcome accountability. They wrote, “Nonprofit organizations need to take more responsibility for demonstrating results to stakeholders. If a nonprofit is really accomplishing something, it should be able to show it - and to the extent that it can show it, the nonprofit can be understood to be effective.”
This is precisely the position that CN is taking as it crafts the tools it will use to report on charities’ effectiveness. While there are several measures that might be considered, there are a few basic questions that a charity ought to be able to answer:
1. Is it using outcomes in the design, management, and measurement of its efforts?
2. Are those targets that it sets “reasonable” outcomes? In other words, are they, at minimum, meaningful, sustainable, and verifiable?
3. Is the organization achieving those outcomes?
If an organization cannot or will not answer these questions, what does that say to its donors, both current and potential? Similarly, if an organization cannot or will not reply to inquiries regarding the transparency and accountability of its management and decision-making, then what does that say?
These are the questions Charity Navigator will be posing, and reporting on. We recognize that it is a substantial undertaking we have set for ourselves. We recognize that any system initially devised will require continuous improvement as we go. But we are also firm in four beliefs:
1. That charitable donations should not be merely “giving” but rather social investments;
2. That an informed donor is the best social investor;
3. That effective organizations represent the wisest and most efficacious social investments; and
4. That we owe it to our constituents to provide the best information we possibly can to help guide their giving decisions.
In fact, we hope that charities that provide this kind of information to donors will find it easier to attract funding than charities that don’t. We think this is what donors are, and will be, looking for, and nonprofits that respond appropriately will have an advantage over those that don’t.
In the end, we disagree with Mr. Lawry; we do not think that the challenge is so big that we ought not to try to meet it. And we disagree with Mr. Pallotta: the audience is not simple, and neither is the reporting system we intend to launch. While the national assessment apparatus he envisions might be a good idea someday, given current realities that day is far off indeed. Meanwhile, donors and the sector need answers now.
We understand the cautions offered by Mitchell and Schmitz, and we are grateful for their thoughtful description of the terrain before us. But we are also determined to follow the course we have set out for ourselves. Mr. Pallotta cites the figure of $300 billion given to charity each year. When other sources of revenue (government and fee-for-service) are added, the total jumps to roughly $1.5 trillion. If his figure is accurate, we believe all would agree that this is too substantial an amount to be “given hopefully” rather than “invested wisely.” We intend to do our part to see to it that the latter becomes the norm.
Ken Berger is the President and CEO of Charity Navigator. Dr. Penna is an independent outcomes consultant, author of the forthcoming Outcomes Toolbox, and an advisor to Charity Navigator.