Wednesday, July 6, 2016

Transition Back to the Trenches



Eight years ago I left the world of nonprofit direct service to run Charity Navigator (CN). Thanks to that experience, I met some of the deepest thinkers and change makers around the globe. I had a unique platform from which to speak out on important nonprofit and social sector issues, on both the national and international stage. I also helped to influence where millions, if not billions, of charitable dollars go each year. And I learned about the vital importance of nonprofits becoming adaptive, learning organizations that use data to make certain they are achieving meaningful results.

Then I formed Greater Good Associates to try my hand at consulting for nonprofits and social enterprises, as well as working as an Interim Executive Director. I also joined Algorhythm, where I got to continue much of what I did at CN. Plus, I was able to see in detail how nonprofits of almost any size can learn from data. I came to deeply appreciate that Algorhythm offers a quicker, better and cheaper way than any other for nonprofits to finally move beyond measuring outcomes just to satisfy funders, and to get evidence-based guidance on specific practices that make programs, and organizations as a whole, as effective as possible. However, Algorhythm is at a stage in its development where I am most useful as a Partner and advisor rather than an employee.

This leads to my latest transition, back to my roots: serving children in need and their families. Back into the "trenches" of direct service. This is where I worked for over 25 years before CN.

The agency I am now working at is called CTC Academy. CTC Academy provides educational and therapeutic services for children with multiple developmental and physical disabilities. A multi-sensory approach is utilized in a nurturing and caring environment to enrich the lives of students and maximize their potential while lending support to their families. I serve as Executive Director, and I am honored to have this amazing opportunity to help lead this special, caring place.

I don't know whether there will be time or opportunities to continue to write and speak out as I have over the past eight years away from direct service. I hope I can, because I believe the conversations among thought leaders, and the decisions being made by policy makers, are too often divorced from day-to-day nonprofit realities. We need to ground these conversations and decisions in what's really happening in the trenches! There aren't many of us with that experience and perspective participating in those dialogues. So I will try my best to allocate time to pause, reflect and share on Ken's Commentary and elsewhere.

Meanwhile, thank you all for reading and listening over the past eight years. I hope there are many more years to come!

All the best, 
Ken

Tuesday, March 8, 2016

‘Oops: we made the non-profit impact revolution go wrong’



Originally published in Alliance magazine by Caroline Fiennes and Ken Berger 

The non-profit ‘impact revolution’ – over a decade’s work to increase the impact of non-profits – has gone in the wrong direction. As veterans and cheerleaders of the revolution, we are both part of that. Here we outline the problems, confess our faults, and offer suggestions for a new way forward. 

Non-profits and their interventions vary in how good they are. The revolution was based on the premise that it would be a great idea to identify the good ones and get people to fund or implement those at the expense of the weaker ones. In other words, we would create a more rational non-profit sector in which funds are allocated based on impact. But the ‘whole impact thing’ went wrong because we asked the non-profits themselves to assess their own impact. 

There are two major problems with asking non-profits to measure their own impact.

Incentives 

The current ‘system’ asks non-profits to produce research into the impact of their work, and to present that to funders who judge their work on that research. Non-profits’ ostensibly independent causal research serves as their marketing material: their ability to continue operating relies on its persuasiveness and its ability to demonstrate good results. 

This incentive affects the questions that non-profits even ask. In a well-designed randomized controlled trial, two American universities made a genuine offer to 1,419 microfinance institutions (MFIs) to rigorously evaluate their work. Half of the offers referenced a real study by prominent researchers indicating that microfinance is effective; the other half referenced another real study, by the same researchers using a similar design, which indicated that microfinance has no effect. MFIs receiving offers suggesting that microfinance works were twice as likely to agree to be evaluated. 

Who can blame them?

Non-profits are also incentivized to publish only research that flatters: to either bury uncomplimentary research completely or share only the most flattering subsets of the data. We both did it when we ran non-profits. At the time, we’d never heard of ‘publication bias’, which is what this is; we were simply responding rationally to an appallingly designed incentive. This problem persists even if charity-funded research is done elsewhere: London’s respected Great Ormond Street Hospital undertook research for the now-collapsed charity Kids Company, later saying, incredibly, that ‘there are no plans to publish as the data did not confirm the hypothesis’.

The dangers of having protagonists evaluate themselves are clear from other fields. Drug companies – which make billions if their products look good – publish only half the clinical trials they run. The trials they do publish are four times more likely to show their products favourably than unfavourably. And in the overwhelming majority of industry-sponsored trials that compare two drugs, both drugs are made by the sponsoring company – so the company wins either way, and the trial investigates a choice few clinicians ever actually make.

Such incentives infect monitoring too. A scandal recently broke in the UK about abuses of young offenders in privately run prisons, apparently because the contracting companies provide the data on ‘incidents’ (e.g. fights) on which they’re judged. Thus they have an incentive to fiddle the figures, and allegedly do.

Spelt out this way, the perverse incentives are clear: the current system incentivizes non-profits to produce skewed and unreliable research.

Resources: skills and money 

Second, operating non-profits aren’t specialized in producing research: their skills are in running day centres or distributing anti-malarial bed nets or providing other services. Reliably identifying the effect of a social intervention (our definition of good impact research) requires knowing about sample size calculations and sampling techniques that avoid ‘confounding factors’ – factors that look like causes but aren’t – and statistical knowledge regarding reliability and validity. It requires enough money to have a sample adequate to distinguish causes from chance, and in some cases to track beneficiaries for a long time.  Consequently, much non-profit impact research is poor. One example is the Arts Alliance’s library of evidence by charities using the arts in criminal justice. About two years ago, it had 86 studies. When the government looked for evidence above a minimum quality standard, it could use only four of them. 

The material we’re rehearsing here is well known in medical and social science research circles. If we’d all learned from them ages ago, we’d have avoided this muddle. 

Moreover, non-profits’ impact research clearly isn’t a serious attempt at research. If it were, there would be training for the non-profit producers and funder consumers of it, guidelines for reporting it clearly, and quality control mechanisms akin to peer review. There aren’t.

Non-profits should use research rather than produce it

Given that most operating non-profits have neither the incentives nor the skills nor the funds to produce good impact research, they shouldn’t do it themselves. Rather than produce research, they should use research by others. 

So what research should non-profits do? First, non-profits should talk to their intended beneficiaries about what they need, what they’re getting and how it can be improved. And heed what they hear.

Second, they can mine their data intelligently, as some already do. Most non-profits are oversubscribed, and historical data may show which types of beneficiary respond best to their intervention, which they can use to target their work to maximize its effect.
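To make that concrete: for illustration only (the article names no particular tools or data formats), mining historical data can be as simple as tabulating success rates by beneficiary type and ranking them. The beneficiary categories and outcomes below are entirely hypothetical, a minimal sketch of the idea:

```python
from collections import defaultdict

# Hypothetical historical program records: beneficiary type and
# whether the intended outcome was achieved (1) or not (0).
records = [
    ("youth_14_17", 1), ("youth_14_17", 1), ("youth_14_17", 0),
    ("youth_18_21", 0), ("youth_18_21", 1), ("youth_18_21", 0),
    ("adult_22_plus", 1), ("adult_22_plus", 1), ("adult_22_plus", 1),
]

def success_rates(records):
    """Return each beneficiary type's historical success rate."""
    totals = defaultdict(lambda: [0, 0])  # type -> [successes, count]
    for btype, outcome in records:
        totals[btype][0] += outcome
        totals[btype][1] += 1
    return {t: s / n for t, (s, n) in totals.items()}

rates = success_rates(records)
# Rank beneficiary types by how well they have responded so far.
ranking = sorted(rates, key=rates.get, reverse=True)
```

An oversubscribed non-profit could use a ranking like this to prioritize the beneficiaries its intervention helps most, without commissioning any external evaluation.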

Put another way, if you are an operating non-profit, your impact budget or impact/data/M&E people probably shouldn’t design or run impact evaluations. There are two better options: one is to use existing high-quality, low-cost tools that provide guidance on how to improve. The other is to find relevant research and interpret and apply it to your situation and context. A good move here is to use systematic reviews, which synthesize all the existing evidence on a particular topic.    

For sure, this model of non-profits using research rather than producing it requires a change of practice by funders. It requires them to accept as ‘evidence’ relevant research generated elsewhere and/or metrics and outcome measures they might not have chosen. In fact, this will be much more reliable than spuriously precise claims of ‘impact’ which normally don’t withstand scrutiny. 

What if there isn’t decent relevant research?

Most non-profit sectors have more unanswered questions than the available research resources can address. So let’s prioritize them. A central tenet of clinical research is to ‘ask an important question and answer it reliably’. Much non-profit impact research does neither. Adopting a sector-wide research agenda could improve research quality as well as avoid duplication: currently, each of the many (say) domestic violence refuges has to ‘measure its impact’, though their work is very similar.

Organizations are increasingly using big data and continuous learning from a growing set of non-profits’ data to expand knowledge on what works. As more non-profits use standardized measures, they can make increasingly accurate predictions of the likelihood of changed lives, and prescribe in more detail the evidence-based practices that a non-profit can use. 

In summary


Non-profits and donors should use research into effectiveness to inform their decisions; but encouraging every non-profit to produce that research and to build their own unique performance management system was a terrible idea. A much better future lies in moving responsibility for finding research and building tools to learn and adapt to independent specialists. In hindsight, this should have been obvious ages ago. In our humble and now rather better-informed opinion, our sector’s effectiveness could be transformed by finding and using reliable evidence in new ways. The impact revolution should change course. 

Caroline Fiennes is founder of Giving Evidence. Email caroline.fiennes@giving-evidence.com
Ken Berger is managing director of Algorhythm. Email ken@algorhythm.io

Wednesday, February 3, 2016

The Occupy Charity Problem: Big Money in Few Hands





An 11 minute podcast that describes a little known or discussed reality in the nonprofit sector - the tremendous concentration of resources among a relatively small number of organizations. The implications of this "Occupy Charity" problem are also considered.

https://soundcloud.com/tinyspark/big-money-in-few-hands

Thursday, January 28, 2016

Winning the Battle for the Soul of the Social Sector

This is a 50-minute presentation, followed by 20 minutes of Q&A, on my more recent thinking on this subject. Thanks to my work at Algorhythm, I now have a deeper understanding of what is required to win this battle! Check it out.

The presentation was conducted at the Maxwell School of Citizenship and Public Affairs at Syracuse University.






Tuesday, October 13, 2015

The Democratization of Social Impact Measurement: Why I Joined Algorhythm




I spent roughly thirty years helping to manage human service and health care organizations dedicated to serving those most in need. I then spent almost seven years at Charity Navigator. As a result, I was lifted out of the trenches of direct service and exposed to the intoxicatingly “thin air” of thought leaders, consultants and academics who dwell at the 50,000-foot level of the nonprofit and social sector. The ideas and principles of many of those individuals are brilliant and exciting. However, more often than not, their ideas are either 20 to 30 years ahead of where most of the sector is today or simply wrong (nice in theory but not in practice).

Nonetheless, there was one fundamental concept that some of them promoted that made complete sense to me - the need to have nonprofits pay attention to data and measure what they do to be certain they are meeting their mission. For thirty years in the trenches I collected plenty of data, but it was mostly just counting stuff and rarely indicative of meaningful change in the lives of people being served. Therefore, about six months into my job at Charity Navigator I announced to the world (on my blog site) that we were going to change the way we rated charities over time to focus on outcomes. 

Over the years that followed I became an increasingly outspoken advocate for managing and measuring what matters most to achieving nonprofits’ and social enterprises’ good works. However, I also became increasingly aware of a fundamental problem, which I called the Occupy Charity problem: roughly 1% of nonprofits in the USA (registered here but serving every country in the world) take in about 86% of the $2 trillion that comes into the sector each year. In fact, it is a global problem, and there is a similar situation in most countries.

I observed that the leaders of the 1% tend to dominate the conversations around all things having to do with the sector in general. Not surprisingly, the consultants and institutes that developed models of performance management and measurement have predominantly been geared to them as well. After all, that’s where the bulk of the money is! As a result, a typical response to my speeches about performance management and measurement by the leaders of small and mid-sized nonprofits around the country was, “How will we ever afford to do that stuff?”

That was a very good question. My answers were very limited, and over time even less so, until 2013. That was the year I began talking to Peter York about his new company, Algorhythm. He described a low-cost, scalable tool he was developing to help the other 99% take advantage of Big Data, machine learning and other cutting-edge technologies. He also mentioned how the tool gave front-line staff the ability to know, even before a program begins, the likelihood of success, as well as things they could do proactively to make the program more effective. He noted that, through aggregation of data from many small nonprofits, they could learn together and get even better at delivering high-quality services. Amazingly, it could all be accomplished at a tenth to a twentieth of the cost of traditional tools and systems.

So when I left Charity Navigator and was considering what to do next in my career, the offer to join Algorhythm was a no-brainer! I had met with nonprofits and experts on measurement from around the world, and there was, and is, no one else I am aware of with a tool like Algorhythm’s. I came to this realization two years ago, while still at Charity Navigator, and have been promoting them ever since with absolutely no financial “skin” in the game. Yes, that has changed now that I work at Algorhythm and could arguably be biased. However, working here has only deepened my appreciation for the immense value these tools can bring to organizations that are willing to consider them.

Below is a list of some of the outstanding things that the Algorhythm iLearning System can help a nonprofit or social enterprise to do:

  1. Identify all pathways to success for their beneficiaries.
  2. Provide on-demand insights to the frontline staff.
  3. Provide big-picture strategic insights to leadership.
  4. Empower and engage beneficiaries in the learning and improvement process.
  5. Connect everyone to an evolving learning network.
  6. Transform data for reporting into data for meaningful improvement.


Given all this, I believe that Algorhythm has “cracked the code” for the 99% of small and mid-sized charities that have been left out of the social impact revolution. The wait is over for a system that can provide meaningful information on what matters most to every nonprofit or social enterprise’s mission. No longer will these organizations have to face the increasing demands of funders or investors for outcome data without a viable affordable option to meet that need. No longer will front line staff be faced with yet another meaningless reporting requirement that adds no value to their work. No longer will beneficiaries of services be voiceless and disengaged from the program design and improvement process. 


I hope that funders, investors, experts, as well as leaders of nonprofits and social enterprises will begin to stand up and take notice of this one of a kind accomplishment. We have heard about the wonders that Big Data and machine learning are doing in the traditional for profit world. It’s now time to finally have our turn and create the most effective and high performing organizations imaginable. As a result, we will be able to help many more communities and people in need in measurable ways. The world can be a much better place as a consequence. Please join us. The future is now.