Why More Information is Less
The more information we gain about digital activism, the less we seem to know. We have an ever-increasing volume of case studies, from the Philippines to Spain and from Moldova and Ukraine to Iran. Every day blogs offer cogent advice on how to use YouTube, Facebook, or the latest geo-location application for activism and advocacy. Yet the argument about digital activism’s value grows more heated instead of more tempered. The gulf widens between the optimists, who think that digital technology offers the potential to transform global power dynamics, and the pessimists, who think the technology is equally likely to empower dictators or fundamentalists.
We know more and more about digital activism, yet this information isn’t additive. The totality is not greater than the sum of its parts. It is not creating a unified body of knowledge or consensus about the nature of digital activism.
Apple to Apples: the Need for Comparability
How can more information about digital activism not lead to understanding of digital activism? Because this information is not comparable. Which case study is more telling about the value of digital technology for activism: an optimistic one like Spain’s pre-election mobilization or a pessimistic one like Iran’s post-election mobilization? Should an organization with limited time and resources use a blog, a Facebook group, or an active Twitter account? Without a common framework, comparing Iran’s case to Spain’s or a blog to Twitter is like comparing apples to oranges: whoever is making the comparison can do so according to their own tastes and biases.
The goal of comparability is to determine which tactics are most effective in a given context. Yet, without a common frame of reference it is impossible to compare relative value. This means more argument, more debate, and less of the consensus-based knowledge that can build a field.
Creating Comparability: a Framework for Digital Tactics
In order to compare digital activism cases we need to break campaigns down into their component digital tactics and then compare those tactics on equal terms. Such a framework would need to accurately quantify all the technical aspects of each case and also account for the non-technical contextual factors that have shaped the success of activism campaigns since time immemorial: the centralization or decentralization of government power, the freedom of the mainstream media, the existence of autonomous civil society organizations. Since my experience is in the technical context, the framework below focuses on the digital side of digital activism. I’ll look to activism experts like the International Center on Nonviolent Conflict and others in building out a framework for non-technical factors.
Here is the framework I propose for creating comparability of digital activism cases. In order to compare seemingly diverse tactics I have based my analysis on a common unit that they all share: the transmission of information. The factors below then describe how different digital activism tactics transmit information in different ways. I’ll present the factors, explain them, and then give some visual examples of how they can be applied.
Framework for Comparing Digital Activism Cases
1) Vector: To what extent is information being created or consumed?
2) Frequency: How often is information being created/consumed?
3) Participation: How many people are taking part as creators or consumers?
4) Duration: For how long did this activity last?
Vector: Though in the broadcast era most people were mere recipients of information, in the digital era we, the “former audience,” can now act as both consumers and creators of information. Information can have one of two vectors (directions): create (send) or consume (receive). Because most digital applications (email, Facebook, YouTube, etc.) allow the user to both consume and create, any digital activism tactic will fall somewhere on a continuum between 100% consumption and 100% creation. Though there are different methods for measuring this balance, I would quantify it by the create/consume balance among participants. For example, the Obama campaign used its email list mostly to send information to supporters and rarely expected a reply, so the participant pool was approximately 99% consumers and 1% creators. (The visual examples below will clarify this.)
Frequency: In addition to the direction of a piece of information, the frequency with which the content is created is also important. Any digital activism tactic will thus fall somewhere on a continuum between frequent communication and infrequent communication. For example, the most popular Twitter feeds exhibit frequent content creation, with several tweets sent in a single day.
Participation: How many people are participating in the digital campaign? That is the question which this factor seeks to answer, with answers ranging along a continuum from broad to narrow. Though we usually equate broad participation with campaign effectiveness, internal decision-making using a wiki or Skype is most effective with only a few participants.
Duration: The final factor in measuring a digital activism tactic is duration: how long is the tactic used? The continuum here is short to long, on a scale to be developed according to common tactical durations in a representative pool of cases. In many cases of successful digital mobilization (the Spain case, for example) information transmission has high participation and frequency but short duration – only a day or two.
As an aside, though these are the key factors I was able to develop, this list is open to amendment. There are other digital factors, like tactical diversity, that could also have a bearing on the effectiveness of a digital activism tactic.
Examples: Applying the Framework
From a practical perspective, this framework can be used by researchers to code case studies in order to create comparable data sets from currently incomparable qualitative case studies. Digital activism campaigns can be broken into component tactics (spreading info by SMS, posting video to YouTube, writing blog posts), and then each tactic can be scored 1 to 5 according to its place on the continuum for each of the four factors.
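To make the coding step concrete, here is a minimal sketch in Python of how a researcher might record tactic scores. The class name, field names, and example scores are my own illustrative assumptions, not part of the framework itself.

```python
from dataclasses import dataclass

@dataclass
class TacticScore:
    """One digital tactic scored 1-5 on each of the four framework factors."""
    name: str
    vector: int         # 1 = pure consumption, 5 = pure creation
    frequency: int      # 1 = infrequent, 5 = very frequent
    participation: int  # 1 = narrow, 5 = broad
    duration: int       # 1 = short, 5 = long

    def __post_init__(self):
        # Reject scores outside the 1-5 continuum
        for factor in ("vector", "frequency", "participation", "duration"):
            value = getattr(self, factor)
            if not 1 <= value <= 5:
                raise ValueError(f"{factor} must be between 1 and 5, got {value}")

# Hypothetical codings of two tactics from the Obama campaign example
obama_email = TacticScore("Obama email list", vector=1, frequency=5,
                          participation=5, duration=5)
hq_blog = TacticScore("HQ blog", vector=4, frequency=4,
                      participation=1, duration=5)
```

Once cases are coded this way, tactics from entirely different campaigns become rows in a single comparable data set.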
However, the benefits of using this type of framework to compare seemingly disparate digital tactics on equal terms can perhaps best be understood visually. In the graphics below I have plotted tactics as dots along an x/y axis of frequency and vector. Participation is measured by the diameter of the dot (a bigger dot means more participation). Though ideally duration would be shown through animation, as Gapminder does, for the graphics below I have marked duration by color, from hot red (short duration) to cool blue (long duration).
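The visual encoding described above amounts to a simple mapping from factor scores to plotting attributes. The sketch below, which assumes 1-5 scores and an arbitrary size scaling of my own choosing, shows one way to compute a tactic's position, dot size, and red-to-blue color; the resulting values could then be passed to any scatter-plot library.

```python
def plot_attributes(tactic):
    """Map a tactic's four factor scores (1-5) to the visual encoding
    described above: x = vector, y = frequency, dot size = participation,
    color = duration (red for short, blue for long)."""
    x = tactic["vector"]
    y = tactic["frequency"]
    size = tactic["participation"] * 100  # arbitrary scaling for dot area
    # Interpolate from red (short duration) to blue (long duration)
    t = (tactic["duration"] - 1) / 4
    color = (1 - t, 0.0, t)  # RGB triple
    return x, y, size, color

# Hypothetical coding of the Obama email list
obama_email = {"vector": 1, "frequency": 5, "participation": 5, "duration": 5}
x, y, size, color = plot_attributes(obama_email)
# x, y, size, and color could be fed to e.g. matplotlib's
# plt.scatter(x, y, s=size, c=[color])
```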
Tools Analysis: Email Lists
Though there are more uses of this framework, I have given examples of two types of analysis: of a particular tool across multiple cases and of a single case that included multiple tool types.
The diagram to the left is of the first type, showing how a single tool (in this case, email) can be used differently in different campaigns. It is all too common to talk about “Facebook activism” or “Twitter activism,” when in reality there are many tactics that derive from a single tool, each with different implications for efficacy. One of the benefits of this framework is that it is possible to map and compare multiple uses of the same tool, allowing us to parse the complexities of a given tool’s value for digital activism.
The diagram at left shows three examples of how to use an email list. The Obama campaign (1) had a large, high-frequency, and long-duration list in which the average user was a receiver. The MAP Strategy Group listserv (2) has had a much shorter duration, is much smaller, and is more interactive, but nearly meets the daily frequency of the Obama list. The final example, another small listserv I started, called DigiActive Big Ideas (3), is almost totally inactive because its messages are so infrequent.
Case Analysis: Obama New Media
This framework can also be used to map the different digital tactics used in a single campaign case study. I’m using the Obama campaign again because I am most familiar with that case (I worked for the New Media department). This diagram does not present all the digital tactics used by the campaign and also does not apply any value to this particular constellation of tactics. Though a variety of tactics was useful (and feasible) for the Obama campaign, it might be both unfeasible and unnecessary for others.
In looking at the diagram, one can see that content produced by the campaign and broadcast to supporters sits on the right side of the diagram, since the average participant was a consumer. Email occupies the upper-right corner with its high participation (millions of supporters on the list), high frequency (daily), and long duration (since the early days of the campaign). Below that is the short-duration orange dot that represents the Neighbor to Neighbor online tool, introduced later in the campaign to allow supporters to access voter call lists. In this case, participants were exclusively consuming information, since only the campaign had access to voter phone number databases. Below that is the blue dot of video, some of which was embedded in emails and some of which was posted directly to YouTube. The blue color indicates that this was also a long-duration tactic, and the size indicates that there were many participants, though the participants were almost exclusively consuming (viewing) the videos rather than creating response videos. Also, because video production is a more labor-intensive process, videos were produced less frequently than email.
Blogging at the Obama campaign is another interesting case. As with the first email diagram, we can see from the diagram above that blogging was used in different ways by the campaign. The HQ blog was written by Obama campaign staffers at a high frequency of a few posts a day, but with low participation, since only a handful of staffers wrote the posts (though all supporters could comment). Users could also create their own blogs on the campaign web site. In this case, all users were creating content, either by posting or by commenting on other posts.
In the bottom-left corner, the medium-duration green dot represents a functionality created by the campaign that allowed supporters who volunteered to host a house party to create a dedicated web page for their event, including a map to the event location, the date and time, and a way for attendees to RSVP. The placement of this dot in the lower-left corner indicates that this content was created exclusively by supporters, yet events were held infrequently.
I hope these examples have demonstrated the value of the framework. However, they may also have revealed weaknesses. I am eager to hear feedback from researchers and practitioners on whether this framework is useful and to learn about other coding frameworks for creating comparable digital activism data sets.