by Jason Saul and Matt Groch, Apr 09, 2014
In a recent SSIR blog post, “Cracking the Code on Social Impact,” one of us (Jason Saul) introduced our Universal Outcomes Taxonomy, which serves as a foundation for benchmarking across the social sector. While the taxonomy is a big step forward, a critical question remains: How can we benchmark social impact programs if they have no outcomes data?
The fact is that few nonprofits have quality outcomes data today, and they likely never will; capacity, cost, time, and consistency are all factors that make it impractical to expect them to produce quality data. To overcome this challenge, we must flip the measurement paradigm from empirical, longitudinal, retrospective data to real-time, predictive, algorithm-based data. In a word, we need to create “synthetic” data.
Other sectors have been successful in using algorithm-based data to predict future behavior or outcomes. For example, when someone applies for a loan, the bank uses his or her credit score to predict the likelihood of repayment. When a student applies to a college program, the admissions committee uses a formula that considers standardized tests, high school transcripts, and other factors to predict the likelihood that the student will be successful in the program. Both of these widely used and well-regarded decision-making tools rely on “synthetic” data.
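To make the analogy concrete, here is a minimal sketch of how such algorithm-based “synthetic” data works: a handful of observable factors, each given a weight, combine into a single predictive score. All factor names and weights below are invented for illustration; they are not real credit-bureau coefficients and are not drawn from the Impact Genome Project.

```python
# Hypothetical sketch: predicting an outcome from weighted factors, in the
# spirit of a credit score. Factor names and weights are illustrative only.

def predict_repayment_likelihood(applicant):
    """Return a 0-1 likelihood score from weighted applicant factors."""
    weights = {
        "payment_history": 0.35,  # hypothetical weights -- chosen only to
        "debt_ratio": 0.30,       # show the weighted-sum structure of a
        "credit_age": 0.15,       # scoring model, not real coefficients
        "credit_mix": 0.10,
        "new_credit": 0.10,
    }
    # Each factor is assumed to be pre-normalized to the range [0, 1].
    return sum(weights[f] * applicant[f] for f in weights)

applicant = {
    "payment_history": 0.9,
    "debt_ratio": 0.6,
    "credit_age": 0.5,
    "credit_mix": 0.7,
    "new_credit": 0.8,
}
score = predict_repayment_likelihood(applicant)  # 0.72 for this applicant
```

The point of the sketch is that no longitudinal tracking of this particular applicant is required: the score is computed in real time from factors that research has already linked to the outcome.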
The basis of developing this “synthetic” data is a comprehensive mapping of the factors involved in predicting a specific outcome. In 1990, the Department of Energy and the National Institutes of Health launched the Human Genome Project to predict health outcomes. In 2000, Pandora created the Music Genome Project to quantify music and predict songs that are likely to produce the outcome of a heightened listening experience. And now, in 2014, we’re announcing the launch of the Impact Genome Project, a massive effort to systematically codify and quantify the factors that research has shown drive outcomes across the entire social sector.
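One minimal way to picture such a codification is a mapping from each outcome to the factors believed to drive it, so that a program can be assessed by which drivers it addresses. The outcome and factor names below are our own illustrative placeholders, not actual entries in the Impact Genome Project.

```python
# Hypothetical "genome"-style codification: each outcome maps to the factors
# that (in this sketch) are assumed to drive it. All names are illustrative.

impact_genome = {
    "high_school_graduation": [
        "consistent_attendance",
        "adult_mentorship",
        "on_grade_reading_level",
    ],
    "workforce_readiness": [
        "credential_attainment",
        "soft_skills_training",
        "employer_partnerships",
    ],
}

def factor_coverage(program_factors, outcome):
    """Fraction of an outcome's listed drivers that a program addresses."""
    drivers = impact_genome[outcome]
    return sum(1 for f in drivers if f in program_factors) / len(drivers)

# A program offering mentorship and attendance support covers 2 of the
# 3 listed drivers of high school graduation.
coverage = factor_coverage(
    {"adult_mentorship", "consistent_attendance"},
    "high_school_graduation",
)
```

A structure like this is what would let programs without their own outcomes data be benchmarked: the evidence linking factors to outcomes is compiled once, centrally, and then applied to any program that reports which factors it exhibits.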
Read the full feature here.