Tuesday, March 8, 2016

‘Oops: we made the non-profit impact revolution go wrong’



Originally published in Alliance magazine by Caroline Fiennes and Ken Berger 

The non-profit ‘impact revolution’ – over a decade’s work to increase the impact of non-profits – has gone in the wrong direction. As veterans and cheerleaders of the revolution, we are both part of that. Here we outline the problems, confess our faults, and offer suggestions for a new way forward. 

Non-profits and their interventions vary in how good they are. The revolution was based on the premise that it would be a great idea to identify the good ones and get people to fund or implement those at the expense of the weaker ones. In other words, we would create a more rational non-profit sector in which funds are allocated based on impact. But the ‘whole impact thing’ went wrong because we asked the non-profits themselves to assess their own impact. 

There are two major problems with asking non-profits to measure their own impact

Incentives 

The current ‘system’ asks non-profits to produce research into the impact of their work, and to present that to funders who judge their work on that research. Non-profits’ ostensibly independent causal research serves as their marketing material: their ability to continue operating relies on its persuasiveness and its ability to demonstrate good results. 

This incentive affects the questions that non-profits even ask. In a well-designed randomized controlled trial, two American universities made a genuine offer to 1,419 microfinance institutions (MFIs) to rigorously evaluate their work. Half of the offers referenced a real study by prominent researchers indicating that microfinance is effective; the other half referenced another real study, by the same researchers using a similar design, which indicated that microfinance has no effect. MFIs receiving offers suggesting that microfinance works were twice as likely to agree to be evaluated. 

Who can blame them?

Non-profits are also incentivized to publish only research that flatters: to bury uncomplimentary research completely, or to share only the most flattering subsets of the data. We both did it when we ran non-profits. At the time, we’d never heard of ‘publication bias’ – which is exactly what this is – but were simply responding rationally to an appallingly designed incentive. This problem persists even if charity-funded research is done elsewhere: London’s respected Great Ormond Street Hospital undertook research for the now-collapsed charity Kids Company, later saying, incredibly, that ‘there are no plans to publish as the data did not confirm the hypothesis’. 

The danger of having protagonists evaluate themselves is clear from other fields. Drug companies – which make billions if their products look good – publish only half the clinical trials they run, and the trials they do publish are four times more likely to show their products in a favourable light than an unfavourable one. And in the overwhelming majority of industry-sponsored trials that compare two drugs, both drugs are made by the sponsoring company – so the company wins either way, and the trial investigates a choice few clinicians ever actually make.

Such incentives infect monitoring too. A scandal recently broke in the UK about abuses of young offenders in privately run prisons, apparently because the contracting companies provide the data on ‘incidences’ (eg fights) on which they’re judged. Thus they have an incentive to fiddle them, and allegedly do.

Spelt out this way, the perverse incentives are clear: the current system incentivizes non-profits to produce skewed and unreliable research.

Resources: skills and money 

Second, operating non-profits aren’t specialized in producing research: their skills are in running day centres or distributing anti-malarial bed nets or providing other services. Reliably identifying the effect of a social intervention (our definition of good impact research) requires knowing about sample size calculations and sampling techniques that avoid ‘confounding factors’ – factors that look like causes but aren’t – and statistical knowledge regarding reliability and validity. It requires enough money to have a sample adequate to distinguish causes from chance, and in some cases to track beneficiaries for a long time.  Consequently, much non-profit impact research is poor. One example is the Arts Alliance’s library of evidence by charities using the arts in criminal justice. About two years ago, it had 86 studies. When the government looked for evidence above a minimum quality standard, it could use only four of them. 
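
To make the sample-size point concrete, here is a minimal sketch in Python – our own illustration, not a method from the article or any study cited above – of the standard power calculation behind ‘a sample adequate to distinguish causes from chance’. The effect sizes, significance level and power are hypothetical planning values.

```python
# Minimal sketch: participants needed per group in a two-arm comparison of
# means, using the normal approximation. All inputs are hypothetical.
import math
from scipy.stats import norm

def sample_size_per_group(effect_size_d: float,
                          alpha: float = 0.05,
                          power: float = 0.80) -> int:
    """Participants per group to detect an effect of size d (Cohen's d)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # power requirement
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size_d ** 2)

# A 'medium' effect (d = 0.5) needs about 63 people per group;
# a 'small' effect (d = 0.2) needs about 393 per group.
for d in (0.5, 0.2):
    print(f"d = {d}: about {sample_size_per_group(d)} participants per group")
```

Samples of that size, tracked carefully enough to avoid confounding, are exactly the kind of resource commitment most operating non-profits cannot make.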

The material we’re rehearsing here is well known in medical and social science research circles. If we’d all learned from them ages ago, we’d have avoided this muddle. 

Moreover, non-profits’ impact research clearly isn’t treated as a serious research endeavour. If it were, there would be training for the non-profits that produce it and the funders that consume it, guidelines for reporting it clearly, and quality-control mechanisms akin to peer review. There aren’t.

Non-profits should use research rather than produce it

Given that most operating non-profits have neither the incentives nor the skills nor the funds to produce good impact research, they shouldn’t do it themselves. Rather than produce research, they should use research by others. 

So what research should non-profits do? First, non-profits should talk to their intended beneficiaries about what they need, what they’re getting and how it can be improved. And heed what they hear. 
Second, they can mine their data intelligently, as some already do. Most non-profits are oversubscribed, and historical data may show which types of beneficiary respond best to their intervention; they can use that insight to target their work where it will have the greatest effect.
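
As a purely illustrative sketch of what ‘mining their data intelligently’ can look like – the table and column names below are invented, not drawn from any real non-profit – one can group historical service records by beneficiary characteristics and compare how often a positive outcome was recorded:

```python
# Hypothetical case-management export: each row is one beneficiary.
import pandas as pd

records = pd.DataFrame({
    "age_band":    ["16-18", "16-18", "19-24", "19-24", "25+", "25+"],
    "referral":    ["self", "court", "self", "court", "self", "court"],
    "outcome_met": [1, 0, 1, 1, 0, 1],   # 1 = positive outcome recorded
})

# Outcome rate and group size for each beneficiary profile,
# with the best-responding profiles listed first.
response = (
    records.groupby(["age_band", "referral"])["outcome_met"]
           .agg(["mean", "count"])
           .sort_values("mean", ascending=False)
)
print(response)
```

With real data the groups would, of course, need to be large enough for the differences to mean anything – the same sample-size caveat as above.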

Put another way, if you are an operating non-profit, your impact budget or impact/data/M&E people probably shouldn’t design or run impact evaluations. There are two better options: one is to use existing high-quality, low-cost tools that provide guidance on how to improve. The other is to find relevant research and interpret and apply it to your situation and context. A good move here is to use systematic reviews, which synthesize all the existing evidence on a particular topic.    

For sure, this model of non-profits using research rather than producing it requires a change of practice by funders. It requires them to accept as ‘evidence’ relevant research generated elsewhere and/or metrics and outcome measures they might not have chosen. In fact, this will be much more reliable than spuriously precise claims of ‘impact’ which normally don’t withstand scrutiny. 

What if there isn’t decent relevant research?

Most non-profit sectors have more unanswered questions than the available research resources can address. So let’s prioritize them. A central tenet of clinical research is to ‘ask an important question and answer it reliably’. Much non-profit impact research does neither. Adopting a sector-wide research agenda could improve research quality as well as avoid duplication: at present, each of the many (say) domestic violence refuges has to ‘measure its impact’, even though their work is very similar. 

Organizations are increasingly using big data and continuous learning from a growing set of non-profits’ data to expand knowledge of what works. As more non-profits use standardized measures, these organizations can make increasingly accurate predictions of the likelihood of changed lives, and prescribe in more detail the evidence-based practices that a non-profit can use. 
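
As a hedged illustration of that idea – not a description of any particular organization’s system; the data and column names are invented – pooled records built on the same standardized measures could feed a simple model that estimates each participant’s likelihood of a positive outcome:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Invented pooled dataset: the same standardized intake measures collected
# across several non-profits, plus whether a positive outcome was recorded.
pooled = pd.DataFrame({
    "baseline_score": [12, 30, 22, 8, 27, 15, 33, 18],
    "sessions":       [4, 10, 6, 2, 9, 5, 12, 7],
    "outcome_met":    [0, 1, 1, 0, 1, 0, 1, 1],
})

features = pooled[["baseline_score", "sessions"]]
model = LogisticRegression().fit(features, pooled["outcome_met"])

# Estimated probability of a positive outcome for a new participant.
new_participant = pd.DataFrame({"baseline_score": [20], "sessions": [6]})
print(model.predict_proba(new_participant)[0, 1])
```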

In summary


Non-profits and donors should use research into effectiveness to inform their decisions; but encouraging every non-profit to produce that research and to build its own unique performance management system was a terrible idea. A much better future lies in shifting the responsibility for finding research, and for building tools to learn and adapt, to independent specialists. In hindsight, this should have been obvious ages ago. In our humble and now rather better-informed opinion, our sector’s effectiveness could be transformed by finding and using reliable evidence in new ways. The impact revolution should change course. 

Caroline Fiennes is founder of Giving Evidence. Email caroline.fiennes@giving-evidence.com
Ken Berger is managing director of Algorhythm. Email ken@algorhythm.io

Wednesday, February 3, 2016

The Occupy Charity Problem: Big Money in Few Hands





An 11-minute podcast that describes a little-known and rarely discussed reality in the nonprofit sector - the tremendous concentration of resources among a relatively small number of organizations. The implications of this "Occupy Charity" problem are also considered.

https://soundcloud.com/tinyspark/big-money-in-few-hands

Thursday, January 28, 2016

Winning the Battle for the Soul of the Social Sector

This is a 50-minute presentation on my more recent thinking on this subject, followed by 20 minutes of Q&A. Thanks to my work at Algorhythm, I now have a deeper understanding of what is required to win this battle! Check it out.

The presentation was conducted at the Maxwell School of Citizenship and Public Affairs at Syracuse University.






Tuesday, October 13, 2015

The Democratization of Social Impact Measurement: Why I Joined Algorhythm




I spent roughly thirty years helping to manage human service and health care organizations dedicated to serving those most in need. I then spent almost seven years at Charity Navigator. As a result, I was lifted out of the trenches of direct service and exposed to the intoxicatingly “thin air” of thought leaders, consultants and academics who dwell at the 50,000-foot level of the nonprofit and social sector. The ideas and principles of many of those individuals are brilliant and exciting. However, more often than not, their ideas are either 20 to 30 years ahead of where most of the sector is today or just simply wrong (nice in theory but not in practice). 

Nonetheless, there was one fundamental concept that some of them promoted that made complete sense to me - the need to have nonprofits pay attention to data and measure what they do to be certain they are meeting their mission. For thirty years in the trenches I collected plenty of data, but it was mostly just counting stuff and rarely indicative of meaningful change in the lives of people being served. Therefore, about six months into my job at Charity Navigator I announced to the world (on my blog site) that we were going to change the way we rated charities over time to focus on outcomes. 

Over the years that followed, I became an increasingly outspoken advocate for managing and measuring what matters most to the good works of nonprofits and social enterprises. However, I also became increasingly aware of a fundamental problem, which I called the Occupy Charity problem: roughly 1% of nonprofits in the USA (registered here but serving every country in the world) take in about 86% of the $2 trillion that comes into the sector each year. In fact, it is a global problem, and there is a similar situation in most countries. 

I observed that the leaders of the 1% tend to dominate the conversations around all things having to do with the sector in general. Not surprisingly, the consultants and institutes that developed models of performance management and measurement have predominantly been geared to them as well. After all, that’s where the bulk of the money is! As a result, a typical response to my speeches about performance management and measurement by the leaders of small and mid-sized nonprofits around the country was, “How will we ever afford to do that stuff?”

That was a very good question. My answers were very limited, and over time became even more so, until 2013. That was the year I began talking to Peter York about his new company, Algorhythm. He described a low-cost, scalable tool he was developing to help the other 99% take advantage of Big Data, machine learning and other cutting-edge technologies. He also mentioned how the tool gave front-line staff the ability to know, even before a program begins, its likelihood of success, as well as things they could do proactively to make the program more effective. He noted that, through aggregation of data from many small nonprofits, they could learn together and get even better at delivering high-quality services. Amazingly, it could all be accomplished at a tenth to a twentieth of the cost of traditional tools and systems.

So when I left Charity Navigator and was considering what to do next in my career, the offer to join Algorhythm was a no-brainer! I had met with nonprofits and experts on measurement from around the world, and there was - and is - no one else I am aware of with a tool like Algorhythm’s. I came to this realization two years ago, while still at Charity Navigator, and have been promoting them ever since with absolutely no financial “skin” in the game. Yes, that has changed now that I work at Algorhythm and could arguably be biased. However, working here has only deepened my appreciation for the immense value these tools can bring to organizations that are willing to consider them. 

Below is a list of some of the outstanding things that the Algorhythm iLearning System can help a nonprofit or social enterprise to do:

  1. Identify all pathways to success for their beneficiaries.
  2. Provide on-demand insights to the frontline staff.
  3. Provide big-picture strategic insights to leadership.
  4. Empower and engage beneficiaries in the learning and improvement process.
  5. Connect everyone to an evolving learning network.
  6. Transform data for reporting into data for meaningful improvement.


Given all this, I believe that Algorhythm has “cracked the code” for the 99% of small and mid-sized charities that have been left out of the social impact revolution. The wait is over for a system that can provide meaningful information on what matters most to every nonprofit or social enterprise’s mission. No longer will these organizations have to face the increasing demands of funders or investors for outcome data without a viable affordable option to meet that need. No longer will front line staff be faced with yet another meaningless reporting requirement that adds no value to their work. No longer will beneficiaries of services be voiceless and disengaged from the program design and improvement process. 


I hope that funders, investors, experts, as well as leaders of nonprofits and social enterprises will begin to stand up and take notice of this one-of-a-kind accomplishment. We have heard about the wonders that Big Data and machine learning are doing in the traditional for-profit world. It’s now time to finally have our turn and create the most effective and high-performing organizations imaginable. As a result, we will be able to help many more communities and people in need in measurable ways. The world can be a much better place as a consequence. Please join us. The future is now. 

Monday, August 17, 2015

My New Job

Ken Berger Joins Algorhythm
Former President & CEO of Charity Navigator to Further Focus Nonprofits and Funders on Managing and Measuring their Efficacy



PHILADELPHIA, Aug. 17, 2015 - Algorhythm, a technology company dedicated to fostering greater social impact through data-driven decision making, announced today it has appointed Ken Berger as its new managing director effective August 17th. Mr. Berger joins Algorhythm from Charity Navigator, where he was the president and CEO for the past seven years.
Algorhythm helps nonprofits manage and measure their performance. Berger will play a crucial role in expanding these efforts by increasing the company’s reach to new groups of nonprofits, social businesses, funders and investors.


"Ken and I have worked together for years, in collaborative working groups as well as presenting before nonprofits and funders,” said Algorhythm’s Founder and CEO, Peter York.   “We have always shared the same vision and values for where the sector needs to go. This is an exciting opportunity for us to work together toward the common goal of helping nonprofits become better at what they do, using tailored and affordable measurement and analytic tools. Algorhythm’s tools have unique and powerful capabilities that help nonprofits make clear, concise, evidence-based decisions. The tools also assist those that support them (foundations and individual donors) to increase their impact."
At Charity Navigator – the largest charity rating agency in the world – Berger led the organization’s effort to move its rating system away from a primary emphasis on overhead, toward measuring how charities report on the results of their work, especially outcomes. He also spoke frequently before the media and within the philanthropic sector on matters of concern to the sector.


“While I was at Charity Navigator, many nonprofit leaders would ask me, ‘How can we possibly build the kind of performance management systems that are required to do a better job and satisfy our funders?’ When I learned about the work of Algorhythm, I came to the conclusion that they were the only ones that had truly ‘cracked the code’ and filled that need for nonprofits of all sizes. They accomplished this by developing powerful and affordable tools to help nonprofits manage and measure what matters most to meet their mission. Joining forces with them is the logical next step for me, as we continue our work to transform the nonprofit sector.”


Berger also intends to continue writing and speaking about issues that are of concern to nonprofits and funders (individual, foundation, corporate and government) alike. He also expects to keep up with his blog, Ken’s Commentary, and carry on with speaking frequently before the philanthropic community about what he calls “The Battle for the Soul of the Social Sector”.


Before his work at Charity Navigator, Mr. Berger spent nearly 30 years in various leadership positions of human service and health care nonprofit organizations dedicated to serving the underserved.


Mr. Berger holds a Master’s Degree in Developmental/Clinical Psychology from Antioch University and a Master of Business Administration from Rutgers University.


About Algorhythm (https://algorhythm.io/)
Algorhythm was founded in 2013 with a mission to provide data-driven decision making for social impact. Algorhythm offers its Impact Learning (iLearning) Systems to provide forward-looking analytics that can help nonprofits learn and adapt over time to improve overall performance as well as increase their measurable outcomes. Algorhythm can be contacted by email at info@algorhythm.io or by phone at 267-225-8066.