Note: The authors’ views expressed in this article do not reflect the views of the United States Agency for International Development or the United States Government.
In my last post I discussed how the U.S. Government (USG) is funding civil society organizations (CSOs) abroad to help build their capacity to use new media in the pursuit of improved democracy and governance. Essentially, this initiative rests on the assumption that an increased ability to engage in new media equals increased effectiveness in democracy promotion. Without empirical evidence to test that assumption, however, new media development interventions remain open to criticism and failure. In this post I’ll outline why research focused on this small niche of USG-funded organizations matters to more than just Washington bureaucrats.
Within the fields of both civil society and digital activism, one of the most debated questions is whether increased engagement in new media has a positive or negative influence on actors working toward increased democracy. On one hand, new media represent invaluable tools for organizing and disseminating information; on the other, they open a window for repression and can foster a detached sense of progress. In short, it has yet to be determined whether the ICT revolution is one of liberation technology or repression technology. A main reason this debate continues is the lack of research, particularly research along methodological lines of hybridity (a problem succinctly outlined in this post by Mary Joyce). Hybridity here refers to identifying objects of analysis in which online and offline activity interact – a way to measure not only the digital footprint of activism but also its real-world implications. A key challenge of hybridity analysis is finding ways to scale research beyond qualitative case studies in a practical, cost-effective manner while still maintaining the richness of data required to measure offline activities.
With this challenge in mind, the small subset of CSOs receiving USG funding to support their democracy efforts in new media represents a unique sample from which to draw data about the broader spectrum of digital activists. Foremost, an organization receiving USG funds is generally bound to conduct regular, systematic monitoring of inputs, outputs, and outcomes, coupled with at least one evaluation of population-level impacts. A common yet dispiriting pattern in development project reporting is field staff writing lengthy reports that are read once, stuffed in a drawer, and never see daylight again. The limited shelf life of these reports is understandable: they contain data specific to one project working in one country within a relatively narrow focus. A method for aggregating such individual reports and making them useful for cross-country comparison was exemplified by the President’s Emergency Plan for AIDS Relief (PEPFAR), a surge of billions of dollars to combat HIV/AIDS around the world initiated by President Bush and carried on by President Obama. PEPFAR instituted a rigorous reporting format along standardized indicators as a requirement for any organization receiving its funds. The aggregate data from thousands of organizations across dozens of countries comes together in an annual report. This report allows PEPFAR to show demonstrable evidence of success to Congress (thus ensuring continued funding), guides more effective programming, and adds a trove of data to the field of HIV/AIDS research.
A comparable standardized reporting system for CSOs receiving USG funding for new media promotion would yield similar benefits, helping identify the conditions under which the combination of new media and democracy promotion can flourish and where it is destined to be fruitless or too risky an endeavor. A mandatory reporting system would also go a long way toward solving one of the problems of scaled hybridity analysis: the collection of rich offline data would fall not on a researcher traveling to each organization, but on trained staff within the CSO who already submit reports on a regular basis.
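To make the mechanics of this concrete, here is a minimal sketch of how standardized indicator submissions could be rolled up for cross-country comparison. Every field name and value below is hypothetical – the post does not propose specific indicators – but the sketch illustrates the core point: aggregation only works when all organizations report against the same indicator names.

```python
from collections import defaultdict

# Hypothetical indicator submissions from CSO field reports.
# Field names are illustrative, not drawn from any actual USG format.
reports = [
    {"country": "A", "indicator": "trainings_held", "value": 12},
    {"country": "A", "indicator": "participants_trained", "value": 240},
    {"country": "B", "indicator": "trainings_held", "value": 7},
    {"country": "B", "indicator": "participants_trained", "value": 95},
]

def aggregate(reports):
    """Roll individual project reports up into per-indicator totals,
    enabling the kind of cross-country comparison PEPFAR pioneered."""
    totals = defaultdict(int)
    for r in reports:
        totals[r["indicator"]] += r["value"]
    return dict(totals)

print(aggregate(reports))
# → {'trainings_held': 19, 'participants_trained': 335}
```

The design choice worth noting is that the hard work lives in the shared schema, not the code: if one CSO reports "workshops" and another "trainings_held", no amount of aggregation logic can compare them.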
A drawback to this method is that, as with all research, the usefulness of the data collected depends on the validity of the indicators and the quality of the measurements. In the field of digital activism, neither has yet reached shared consensus. One possible starting point is the U.S. Institute of Peace (USIP) report Blogs and Bullets, which outlines five levels of analysis for finding a comparable scale of measurement of impact across organizations and countries. With considerable fleshing out, it could serve as a useful framework for building standardized indicators that accurately capture hybridity.
Another distinct hurdle is that, unlike success in battling HIV/AIDS, organizations working in democracy promotion may be wary of sharing a comprehensive record of their achievements, or even of making public their acceptance of USG funds. Anonymity and limited public release of certain data are possible solutions, but caution would have to take precedence.
One more factor to consider is that standardized reporting across a sector is expensive for a development agency. It takes training, time, and collaboration that require additional staff and funds from project budgets already stretched thin. PEPFAR can do it because it’s one of the largest development initiatives ever undertaken. USG funding to support democracy activists abroad in the use of new media is a relatively minuscule sliver of foreign aid, but as I wrote in my last post, it has the potential to grow exponentially. If that prediction proves true, it will be critically important to have data that can answer the simple question: Is it a good idea? Developing a standardized hybridity analysis would benefit not only the USG but any international donor supporting democracy through new media. The results of such an analysis would not only help answer whether foreign-funded democracy initiatives through new media are a good idea, but also shed new light on the continuing cyber-optimist versus cyber-pessimist debate.
In forthcoming posts I will continue to explore methods of evaluating the effectiveness of digital media used by civil society actors.
Interesting post, Travis. I agree that having comparable data on US-funded projects could be very interesting, but I think that the challenges you identify – choosing meaningful indicators and building a uniform reporting system – are substantial. Maybe a first step (and next post?) could be to suggest what types of indicators should be reported.
Also, I’d suggest that it has been demonstrated that digital technology is useful for both digital activism and digital repression. The duality is likely to continue into the future, and our analysis should presume this continuing contention rather than seek to identify a winner.
Thanks Mary. Yes, developing the indicators and reporting system is the next big step. This Fall I’m going to be collaborating with some M&E folks who are implementing these new media projects in the field to learn what methods they’re using and see what could be applied universally. Will keep you posted.