Making Digital Activism Data

This summer I’ve been leading a team of coders to create the next version of the Global Digital Activism Data Set. Every Monday I meet with the coders (from left: Frank, Luis, Mary, Shin) and we review a selection of the past week’s cases to ensure that we are interpreting the cases in the same way and to discuss instances in which we are not. Version 2.0 of the GDADS should be up on this site later this fall.

The Activism Knowledge Gap

Earlier this week I met Nathan Matias, who will be starting his PhD at the MIT Media Lab this fall. He pointed me to a great blog post he wrote about the many intelligent people discussing how little we know about changing the world and how fuzzy so much of the social change endeavor still is. This is a concern that I share and one that I am trying to address through the Digital Activism Research Project. From Nathan’s post:

Mark Simpkins, who does socially responsible design in the UK:

Sometimes you wake up and realise that you want to change the world….  The next step is always one of the hardest, what to do next, how do you go about changing the world?… When do I create a pledge? When do I contact my MP? When do I take to the streets?…. You examine it and start to break it down, possibly into steps.

This is not about creating a platform to ‘help you make change’…. its a discussion about taking part in the community that can make change happen.

Tom Steinberg of MySociety:

The knowledge-sharing disconnect between the academic and activist/practitioner communities is really, truly terrible, everywhere except data-driven voter-targeting.

Greenpeace is part of the environmental movement. Oxfam is an international development charity. Human Rights Watch is part of the human rights movement. Obama for America is a political campaign…. But what primary movement or sector is mySociety part of? Or Avaaz? Or Kiva? Or Wikileaks?

When I ask myself these questions, no obvious words or names race quickly or clearly to mind. There is a gap – or at best quite a bit of fuzziness – where the labels should go.

From Nathan himself:

We still have much to learn about the basics of creating change…. Mark suggests that we need services that help us decide how to create the change we care about. One response is to curate marketplaces or collect case studies to list the options and strategies for change.

[But] as attractive as it seems, change doesn’t simply come from picking the right tool or tactic…In addition to knowing what we could do and having the confident experience to try, we need to know what works and what doesn’t. This is an area where academics can help.

To this discussion I’ll also add a Personal Democracy Forum talk by Taren Stinebrickner-Kauffman, Executive Director of SumOfUs. Taren points out how little campaigners know about their effectiveness, and how useful an empirical and evidence-based approach to campaigning could be.

Who else is talking about these issues?


Image: Flickr/Martin Deutsch


Cross-posted from The Digital Activism Research Project

Three Types of Hybridity in the Boston Bombing Investigation

Readers of this blog know that I like to write about hybridity, which I define as the mix of online and offline action in the context of digital activism.  In reality, there are at least three kinds of hybridity that describe the intersection of digital and analog culture: spatial hybridity, organizational hybridity, and systemic hybridity.

Spatial Hybridity

The type of hybridity I refer to most often on this blog is spatial hybridity, the switch from digital space to physical space and back again.  For example, the Million Hoodie March last year was spatially hybrid because Facebook was used to mobilize an offline march.

This type of hybridity is extremely common in digital activism and may, in fact, be universal, since the people who engage in digital activism always exist in physical space, even when they are typing away at their computers.  Also, institutions of power, such as governments, still exist in physical space, so digital action must jump the bits-to-atoms barrier if it is to have an impact.

Organizational Hybridity

The second kind of hybridity is organizational hybridity, which has to do with the behavior of organizations.  The analysis of organizational hybridity is most associated with Andrew Chadwick of the University of London and relates to the convergence of repertoires of contention (tactics) within single organizations.

In a 2005 paper, Chadwick wrote that political “parties, interest groups and new social movements’ organizational features and policy impacts may be converging” and that the Internet makes it particularly easy for organizations to mix their tactics. “How do we make sense of MoveOn?” writes Chadwick in the article. “Is it an interest group, a new social movement, or simply the progressive wing of the Democratic Party?” He answers his own question: “In combining the mobilization strategies typically associated with parties, interest groups and new social movements, MoveOn is a hybrid political organization.”

Systemic Hybridity

Chadwick’s forthcoming book, The Hybrid Media System: Politics and Power, looks at hybridity between people and organizations that use new media and old media.  Instead of individual organizations, he is looking at systems of organizations.  The Amazon blurb about his book notes that “the new media system is increasingly defined by organizations, groups, and individuals who are best able to blend old and new within… a hybrid system.”

Hybridity and the Boston Bombing Investigation

A network and a newspaper collaborated… to identify two innocent men.

All three types of hybridity have been on display in the investigation into the Boston bombing.

New Digital Activism Data!

Version 1.0 of the Global Digital Activism Data Set is now available.

Last month my other initiative, the Digital Activism Research Project, released version 1.0 of the Global Digital Activism Data Set (GDADS), a collection of digital activism cases from around the world, created as an open resource to scholars.  I am finally getting around to posting the announcement here, which seems only fair as GDADS began at the Meta-Activism Project.

The release includes the following resources.  Some are available via email so we can track distribution. All requests will be answered promptly and all materials have a Creative Commons license.

1) Documentation: User’s Manual and Codebook 
Description: Contains project history, data description, methodology notes, variable definitions.
Format: Portable Document Format (.pdf)
Access: Download Link

2) Coded Case Studies Spreadsheet
Description: Contains 1,180 cases of digital activism from 151 countries and dependent territories, ranging from 1982 through 2012, coded according to 57 variables.
Format: Excel (.xlsx)
Access: email request to Mary at mjoyce AT uw DOT edu

3) Case Study Sources Spreadsheet
Description: Contains links and citations to the source materials for 1,346 cases of digital activism initially collected for the GDADS project.
Format: Excel (.xlsx)
Access: email request to Mary at mjoyce AT uw DOT edu

If you have any additional questions about the project, please contact Mary Joyce at mjoyce AT uw DOT edu.

Academic Ethics in a Networked Age

Now I am a grad student and, even though I am only in my first quarter, we are being encouraged to write for academic publication. This is a great idea except:

  1. Submission to peer-reviewed journals causes publication delays of many months, even when the article is accepted.
  2. Academic journals often forbid publication of the article elsewhere, creating artificial scarcity (artificial since there is no longer a technical constraint on limitless free copies).
  3. Even when published, access is limited to elites (those with academic affiliation at well-funded institutions) since subscriptions to journal databases are prohibitively expensive.

To some, these may simply be inefficiencies, but I see an ethical dimension. The current system of academic advancement encourages the benefits of scholarship to be narrowly distributed, though technology now allows broad distribution. The scholar is encouraged to use her mental products for her own career advancement by seeking to have them published in a peer-reviewed journal which, as mentioned above, delays and limits broad access.

To rely on an inefficient technology to disseminate ideas with public value is unethical.

This system of academic publishing allows the scholar a benefit in remuneration (employment) and prestige (official recognition of the value of the mental product within the academic community), but prevents broad access to her findings and thus limits their social impact.

At least in the hard sciences and social sciences, this system of delay and limited access creates a contradiction by the following logic:

  1. If a research finding has public value, then delay in making that finding broadly and quickly public is unethical.
  2. If the subject does not have public value, it is not an apt focus of research.
  3. Ergo, if a subject is an apt focus of research, it has public value, and…
  4. not to make it broadly and quickly public is unethical.

It may seem odd that I am equating publication in a peer-reviewed academic journal with not making research findings public. This is a result of changes in what public knowledge means in the digital age of instant mass self-publication.

Even a few decades ago, mass self-publication was not possible, so an academic had to rely on the slow, closed system of academic publishing to make their work public in any way. Now the Internet has turned what was once an efficiency (print, closed databases) into an inefficiency (Clay Shirky pointed this out at a talk at MIT last year). Because the findings of the hard and social sciences have public benefit, relying on an inefficient technology to disseminate those ideas is unethical.

The fact that the careers of academics are tied to this unethical and inefficient system is unfortunate. The job-for-life tenure system relies on the publication of books and peer-reviewed journal articles. This makes it difficult for academics to act ethically, because doing so puts their financial well-being and that of their families (not to mention their opportunity for advancement) in jeopardy. Still, I think many academics would acknowledge the truth of the argument I have sketched above.

So it is up to certain academics to decide to buck the system and not embargo their ideas. In some instances there might not be a trade-off. One may be able to share basic research findings publicly while still submitting another version of those findings for publication in a peer-reviewed journal. In other instances, however, there will be a trade-off. Publishing findings publicly will mean that the findings will not be accepted for academic publication.

I personally commit not to submit for academic publication any research finding without also sharing it publicly at the same time. I have yet to apply this personal commitment to any piece of work, so I am not sure how this will work in practice, but I am willing to make this commitment publicly in order to help myself to abide by it.

There is a project greater than publish-or-perish, greater than tenure, greater than poster sessions and conference papers. This is the human project and it is the only measure of professional success that matters.

Converting Online Commitment to Offline Action in Cairo

If You Flash It, They Will Mob

A Thursday flash mob in Cairo’s Ramsis Station has been drawing some press attention, as reporters seem determined to figure out what the purpose of the event was. As usual, reporters try their hardest to emphasize the pointlessness and essential frivolity of any kind of digitally-organized gathering.

The point of this post is not to decide whether or not the flash mob constituted street art or some other political protest. It is to try, once again, to complicate our understanding of what constitutes success and failure in digital organizing.

How Big Data Entered American Politics

One of the major stories this election cycle has been “big data”: campaigns combining voter files, consumer records, and response data collected by their own volunteers to individually target voters. This practice is at once exciting because it allows campaigns greater precision than ever before in how they interact with individual voters, yet it also raises privacy concerns as citizens are often unaware of the amount of personal data available to third parties or how it is being used.

Right or wrong, big data is now a part of our political process. But how did it enter American politics in the first place? This history is recounted in Rasmus Kleis Nielsen’s new book, Ground Wars: Personalized Communication in Political Campaigns, which looks at how campaigns conduct field operations (door-to-door canvassing and phone-banking). Though these activities take place offline, computers are never far away, for it’s the analysis of digitized data that directs volunteers which doors to knock and which phones to call.

The Republicans Strike First

While consumer data has been used since the 1970s to calculate credit scores and since the 1980s for direct marketing, political campaigns didn’t get into the digital data game until 1995, when the Republicans created the Voter Vault, a shared and continuously-updated voter file hosted on a server available to Republican state parties and campaigns.

This was quite an improvement over the previous data system. “In the absence of a shared voter file,” writes Nielsen, “every new campaign would have to start from scratch, building their own voter files by collecting public records on registered voters, buying commercial data to enhance it, and making identification calls.” After each campaign, “the entire painstakingly constructed database typically simply disappeared”.

The Democrats Slowly Respond

As soon as the Republicans had a shared voter file, the Democrats had to have one too, though their effort to create one was far bumpier. It was not until 2002 that the Democratic National Committee (DNC), under chairman Terry McAuliffe, finally invested in its own system. The result, Demzilla, was distinctly underwhelming. In a 2003 article in Roll Call, one anonymous Democratic consultant complained that “the quality of data is far from a level that would make it immediately useful…. [and] the system is overly cumbersome.” It was hard to use and not worth the effort.

Also, many of the state parties did not even donate their data to the project, afraid that the system would be used more for the 2004 presidential race than for their local campaigns. (This perception was not helped by the fact that Demzilla was part of Project 5104, McAuliffe’s campaign to win at least 51% of the presidential vote in 2004.) Notes the Roll Call article, “Demzilla is an idea [that] on paper makes a lot of sense… the problem is that [the DNC] took the idea and let the technology run ahead of the relationships….”

Howard Dean to the Rescue

Howard Dean did not win the presidency in 2004, but he developed enough of a grassroots following that he was able to take the DNC chairmanship in 2005 against the wishes of party leaders. His two big projects were the 50 State Strategy, which put DNC-salaried organizers in every state to help the local parties, and hitting reset (almost like an Etch-a-Sketch…) on the party’s voter database.

This time the party’s electoral and technological projects complemented rather than undermined one another. While Project 5104 had sown distrust in the state parties, the 50 State Strategy rebuilt their trust, making the parties more likely to contribute their data. As a result, the new database, VoteBuilder, grew quickly, achieving the same level of data participation in one year that Demzilla had achieved in three. Demzilla was abandoned and VoteBuilder rose from its ashes. A single online interface, known as the VAN (Voter Activation Network), was added in 2007. Also, while Demzilla allowed 300 data points to be added for each voter, VoteBuilder allowed 900, a recognition that more data was both available and useful for voter targeting.

Business to Politics: Mitt Romney Shows the Way

This realization about the usefulness of data was largely due to the example set by Mitt Romney. In 2001 Romney ran for governor of Massachusetts and his consultant, Tom Gage, used data to “supersegment” voters as never before. Using sophisticated statistical techniques, Gage created predictive models that determined the probability of voters’ future behavior based on information about their past political and consumer habits. (Gage’s term eventually lost ground to “microtargeting,” possibly because of the divisive connotation of “segmenting.”)
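To make the mechanics concrete, here is a toy sketch of the kind of predictive scoring described above. Every coefficient, feature, and voter in it is invented for illustration; nothing here comes from Gage’s actual models or any real campaign data. The idea is simply that past political and consumer traits are combined into a weighted score, passed through a logistic function to yield a probability, and used to rank voters for contact:

```python
import math

# Hypothetical coefficients for a toy turnout model -- illustrative only,
# not taken from any real campaign's data.
COEFFS = {
    "intercept": -1.5,
    "voted_last_midterm": 1.8,   # past political behavior
    "magazine_subscriber": 0.4,  # consumer-data proxy
    "years_registered": 0.05,
}

def turnout_probability(voter):
    """Logistic model: convert a weighted sum of traits into a probability."""
    score = COEFFS["intercept"]
    score += COEFFS["voted_last_midterm"] * voter.get("voted_last_midterm", 0)
    score += COEFFS["magazine_subscriber"] * voter.get("magazine_subscriber", 0)
    score += COEFFS["years_registered"] * voter.get("years_registered", 0)
    return 1 / (1 + math.exp(-score))

voters = [
    {"id": "A", "voted_last_midterm": 1, "magazine_subscriber": 1, "years_registered": 10},
    {"id": "B", "voted_last_midterm": 0, "magazine_subscriber": 0, "years_registered": 1},
]

# Campaigns rank voters by predicted probability to decide whom to contact.
ranked = sorted(voters, key=turnout_probability, reverse=True)
for v in ranked:
    print(v["id"], round(turnout_probability(v), 2))
```

A real microtargeting operation would fit the coefficients from millions of voter-file records rather than writing them by hand, but the scoring step at the end, ranking doors to knock and phones to call, works the same way.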

In his book, Nielsen quotes Gage as saying that the businessmen who were Romney’s advisers “were flabbergasted when they learned that such techniques, mainstays in many parts of corporate America, were not already widespread in politics.” However, Romney himself likely played a role as well in the centrality of data in his campaign.

In a recent article in the New Yorker, Louis Menand points out that at Boston Consulting Group and Bain Capital, Romney’s employers from 1975 to 1999, “data crunching seems to have been the main engine of analysis.” Menand draws the link between Romney’s business background in management consulting and his pioneering role in bringing big data into American political campaigning. “Virtually everyone agrees that Romney was extremely good at” data crunching, writes Menand, “and he runs his political campaign in the same way.”

Obama Steals the Data Crown

Though the Republicans clearly got a head start on big data, Nielsen notes that in 2008, many observers believed that the Democrats took the lead. VoteBuilder and the VAN “were built by experienced vendors… and were subject to repeated field testing… before they faced the ultimate test during the general election of 2008” notes Nielsen. “That year, many of my interviewees argue, the Democratic Party for the first time went one better than the Republican party in the targeting and data arms race.”

Coming Soon…. 2012: Clash of the Data Titans

Digital Activism Research: Learning a Lot About a Little

Kony tweets: a gorgeous N of 1 (source: Gilad Lotan)

We now know a tremendous amount about the Kony 2012 campaign and excellent analysis keeps rolling in: on the Ushahidi blog, Patrick Meier has posted a variety of responses from Ugandans and Ethan Zuckerman has posted a gorgeous visualization from Gilad Lotan of the first 5000 Kony tweets (see left). At a panel at SXSW yesterday, I learned that Invisible Children plans to release their own data on the campaign.

As a digital activism researcher this makes me happy, because we need more empirical qualitative and quantitative analysis of digital activism, and most of the analysis I have read is of this type: nuanced, data-driven, analytically sophisticated. At the same time, it is just one case. We are learning a lot about a little.

This reminds me of 2009, when there was so much attention paid to the use of digital technology in the Iranian post-election protests. Excellent research was conducted by the Web Ecology Project, The Center for International Media Assistance, and The United States Institute of Peace. This happened again in 2011 when in-depth survey data on citizen media use during the Egyptian Revolution was collected by The Engine Room and analyzed by Zeynep Tufekci. The problem with intense but uneven data collection is that there is little basis for comparison. In academic terms, we are left with an N of 1.

I am not criticizing the intense analysis of the Kony case, or any of the other cases. I am pushing for an awareness that knowing a lot about a few cases has limited value because there is a great danger of making baseless extrapolations about how the lessons learned in Iran, Egypt, or the Kony case apply to other digital activism cases. What does our knowledge about media choices in Egypt tell us about media choices in Syria? What will Kony tell us about the next viral video? We don’t know.

The Global Digital Activism Data Set is collecting and comparatively analyzing digital activism cases, but our data is mostly qualitative and narrative. We don’t have network analyses like Gilad’s. For every digital activism case for which we have detailed information, there are thousands for which we know little or nothing. Even as we laud the empirical analysis of individual digital activism cases, we must work for the funding, tools, and academic interest that will allow the Gilad Lotans of the world to conduct their analyses not only on single digital activism cases, but on hundreds.

Author’s Response: “Digitally Enabled Social Change: Activism in the Internet Age”

Note: This post is a response to Book Review: “Digitally Enabled Social Change: Activism in the Internet Age”

by Jennifer Earl (Professor of Sociology, University of California, Santa Barbara)

I appreciate the chance that Mary has given us to reply to her critical review of our book, Digitally Enabled Social Change (2011, MIT Press). Given the tone of Mary’s review, I think it is helpful to first step back and notice that there are many things on which Mary and I agree. Indeed, Mary ends her review with a laundry list of things she liked about our book, some of which are quite important themes. For example, she agrees with our arguments about the changing infrastructure of movements—which may seem like a simple point to her but is one that in some ways upends four decades of social movement scholarship. She also agrees with our argument about the likelihood of episodic activism, which again may seem minor to her but would represent a fundamental break in our academic understanding of social movements across two centuries. But, at an even bigger level, and perhaps most importantly, Mary and I both think digital activism is important and that people (activists and scholars alike) should pay more attention to it.

Where Mary and I diverge is in how you forward an agenda about raising the profile of digital activism. That divergence in large part owes to our expected audiences—Katrina and my audience is academic; we are trying to make a case to social movements researchers, who as a group have been exceedingly skeptical of digital activism. It has been an uphill battle to get social movement scholars to consider the possibility that digital activism has different dynamics and that studying those dynamics is important. Mary’s is a technology-rich audience where utopian visions of technology are as common as skepticism. Our primary audience uses email; hers tweets. Our primary audience is obsessed with the quality of research methods, theories of causality, and academic rules of evidence. Hers is obsessed with cutting edge technologies. So, it is understandable that despite common orientations to digital activism overall, we end up with very different means of forwarding that agenda. With this as background, let’s turn to Mary’s main concerns:

Why study online petitions, boycotts, and email and letter-writing campaigns?

Mary takes issue with our empirical focus on these tactics for a variety of reasons (indeed, if you read carefully, this is her biggest beef with our work), and certainly, if you don’t want to read about these kinds of online tools, you might take issue with us too. But, instead of critiquing our book based on the book you wish we had written, let’s discuss the merits of the book that was written. We focus on these tactics for several reasons.

First, as Mary points out, these are online incarnations of offline tactics. Although Mary takes this on its face as a negative, we think it actually gives us a lot of helpful research leverage. From a research methods perspective, it allows us to very precisely isolate the impact of action taking place online versus offline because we know how these specific tactics have worked offline in the past and can use that as a baseline. Also, by limiting the only source of variation to whether the action is taking place online or off, we immediately eliminate a ton of other causal explanations for what we find. For instance, if we had chosen other online protest forms, like the Google bomb that Mary mentions, critical social movement scholars would have been able to assume that the differences between on and offline activism we find owe to the exoticism of the tactic, not to its online elements.

Second, Mary argues that the tactics we study are least likely to showcase novel action. We agree—this is a chief reason we chose them! For an academic audience, choosing the hardest target and then still proving your point is a huge bonus, not a criticism: that we find important differences between online and offline tactics in places where you might expect those differences to not exist or to be minimal is our argument. Indeed, choosing a venue where you are most likely to be wrong and then testing your theory is a hallmark of good social science—despite what many people think, social scientists should try to avoid “cooking the books” in their favor through their selection of cases.

Third, we chose these tactics because they seem to be everywhere online. Mary asserts (without data, much as she accuses us of doing) that these tactics are not the most common online forms of action. Perhaps these are not the most common tactics in the complex world of the Obama campaign or in a training session for experienced digital activists, but in the everyday world where my aunt and her friends are looking to participate online, online petitions, boycotts, letter-writing and emailing campaigns are where it is at. And, while this book doesn’t present data on the frequency of these tactics versus other kinds of tactics (you got us there, Mary), I have recently finished a 5-year study of 20 social movement arenas and can tell you that those data conclusively show that the tactics Katrina and I are studying in this book are the most frequent online tactics. You can check out some early results from that study in the December 2010 issue of Mobilization. Later papers from this study will confirm what Mary thinks must be wrong: even in 2010, online petitions, boycotts, and email and letter-writing campaigns are very popular online.

Fourth, Mary objects to the lack of focus on social media and related social media tactics, arguing that the tactics we study are stale in digital terms. But, as I just mentioned, later work of mine shows that these are not stale—they are quite popular even today. And, my more recent data collection shows that the social media tactics Mary thinks are so prevalent still make up a very small share of the overall online protest universe. While social media may be the shiniest thing out today, it’s not the only or the most empirically frequent form of online protest. Moreover, while we are looking at data from 2004, we don’t believe that the theoretical principles we are trying to illustrate with that data are much different in 2011 than in 2004. Indeed, my current work is testing precisely that hypothesis. I also think there is another audience issue at work here: academics understand that writing a well-researched book and getting it through the academic publishing process takes a few years. Mary’s audience is now, new, next. But just because something took place yesterday, doesn’t mean it’s not instructive about today and tomorrow.

Finally, Mary suggests our tactics are not representative of the online universe. Here I could not strenuously disagree more and hope that readers will judge this for themselves. The methods that Katrina and I use are unique in that they actually give us a better chance of charting a representative population of online petitions, boycotts, and email and letter-writing campaigns. Check out the methods and decide for yourself. As for whether they are representative of the most common forms online, see my response above—new work shows, yes, they are. While Mary’s anecdotes about the popularity of other forms are interesting, they aren’t accurate in painting a larger view. Some of Mary’s examples certainly have gotten a lot of news coverage—but they are outliers in both their public notoriety and their novelty.

Are you claiming that some kinds of online protest are “better” than others?

No. I think this is a place where Mary misreads us. For whatever part of that Katrina and I are responsible for as authors, let me set the record straight. In a nutshell, we argue that theoretical processes that have been developed over the past 50 years to explain activism are only good guides for the theoretical processes driving some kinds of online activism today, and those theoretical expectations don’t lead us in the right direction for other kinds of online activism. Try this analogy: 50 years of research has shown that engines are based on combustion. We are saying that for some kinds of online protest, the engine is all electric (proverbially speaking). We are not claiming that an electric engine is better; we are saying it operates differently from a classic combustion engine. More precisely, we are saying that when people take advantage of and leverage two unique features of Internet-enabled technologies—low costs and coordinated action without co-presence—the theoretical explanations for participation and organizing change. When they don’t leverage these unique features, the engine stays the same. We are not claiming one is better, just that they are different. For Mary, those theoretical differences might not matter much. For social movement scholars, they are critically important.

Why didn’t you study which online protests were effective?

From her review, it appears Mary wanted a book that empirically examined which kinds of online protest are more or less effective. We don’t advertise such a book, nor did we write one. I think for a variety of methodological reasons, it will be a long time before someone writes the book that Mary wants in this regard, or at least writes one that, from a research methods point of view, I would also want to read. The study of offline protest faces the same problem: it is very difficult to prove, from a social scientific perspective, which offline actions or movements are effective. I gave a paper on this very topic in Berlin this summer and would be happy to share it with people who email me.

Other Quibbles

Mary had other quibbles with the book. She thought we should have mathematically tested whether organizers’ time exactly conformed to a power law. We didn’t see that as necessary because even something that looks close to a power law—which we do show—is a very radical departure from what would be anticipated by social movement scholars. If it is off by a hair, it doesn’t really matter to the arc of our argument because it’s still in the ballpark. She wishes we didn’t use the terminology of e-tactics, preferring digital protest. I am hoping much of both of our audiences can get past such semantic differences in style.
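As an editorial aside for readers unfamiliar with power-law fitting, here is a toy sketch of the “is it in the ballpark?” check described above. The participation numbers are invented, and a rigorous test would use maximum-likelihood methods rather than a simple regression, but the basic intuition is this: a power law appears as a straight line on log-log axes, so one can fit log(hours) against log(rank) and look at the slope and fit quality:

```python
import math

# Hypothetical participation data (not from Earl and Kimport's study):
# hours contributed by each organizer, sorted by rank. A power law would
# mean hours ~ c * rank^(-alpha) for some alpha > 0.
hours = [100, 50, 33, 25, 20, 17, 14, 13, 11, 10]

# On log-log axes a power law is a straight line, so a quick check is an
# ordinary least-squares fit of log(hours) against log(rank).
xs = [math.log(rank) for rank in range(1, len(hours) + 1)]
ys = [math.log(h) for h in hours]

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

# r^2 near 1 means the log-log relationship is close to linear, i.e. the
# data is "in the ballpark" of a power law with exponent -slope.
ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
ss_tot = sum((y - mean_y) ** 2 for y in ys)
r_squared = 1 - ss_res / ss_tot

print(f"estimated exponent: {-slope:.2f}, r^2: {r_squared:.3f}")
```

On data like this, a slope near -1 with a high r² is exactly the “off by a hair but still in the ballpark” situation the authors describe: the precise exponent matters less than the shape of the distribution.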

So, where does this leave us?

As I said in the opening, Mary and I actually have pretty similar agendas: we both think people should pay more attention to digital activism. In my case, I want social movement scholars to dig more deeply into our theoretical approaches so we can figure out when and how protesting online differs—at a theoretical process level—from protesting offline. I also want Internet scholars to have to seriously engage with the literatures that have been developed around relevant offline areas of social life instead of engaging in drive-by theorizing that doesn’t connect with different areas’ rich research traditions. I think Katrina and my book gets scholars off to a good start on both of these endeavors. I hope readers will judge for themselves and I am confident that most will enjoy the book much more than Mary did.

Book Review: “Digitally Enabled Social Change: Activism in the Internet Age”

Note: This post is followed by a response from the authors.

Broad-based empirical studies are sorely needed to understand the effect of digital technology on contentious politics, a field where battling anecdotes predominate. Jennifer Earl and Katrina Kimport’s new book, Digitally Enabled Social Change: Activism in the Internet Age, attempts to fill this need. However, the book has several serious weaknesses in its methodology, theoretical goals, and conclusions.

The authors clearly wish to find disruptive new phenomena: evidence that digital activism challenges traditional social movement theory. Yet they choose to study digital tactics that are merely digital incarnations of analog ones, focusing on outdated tactics and applications while ignoring those more relevant to current practices of digital activism. This desire to reach a particular theoretical conclusion leads the authors to draw intellectually appealing yet practically unsound conclusions about how to evaluate digital tactics.

Outdated Tactics

In 2004, Jennifer Earl of the University of California at Santa Barbara and Katrina Kimport of the University of California at San Francisco created a data set composed of four types of digital tactics: petitions, letter-writing campaigns, email campaigns, and boycotts. The tactics, identified through a robust and innovative use of Google search, were split between “warehouse” sites (362 tactics) and non-warehouse sites, like personal blogs (748 tactics).

Already some problems are apparent. First, no social media platforms are included in their data set. Based even on a casual review of news headlines, social media seems extremely salient in “activism in the Internet age,” the avowed topic of the book. Social media is not discussed at length until the very last page of the conclusion, and then only in vague and platitudinous terms: “social networking sites like Facebook will encourage new uses and dynamics of online protest,” “networks could even be created around specific actions,” etc. (p 204). The disconnect is perhaps most visible in the index, which includes one entry and four sub-entries on “fax campaigns,” but not a single entry of any kind for “social networking” or “social media”.

Earl and Kimport do briefly acknowledge this shortcoming at the beginning of their book, noting that their “data are drawn from the period just preceding the rise of many dominant social networking Web sites” (p 27). However, this statement is problematic. First of all, it is not really true. First-wave social networks like Friendster (2002) and MySpace (2003) already existed when they collected their data. Other social media platforms more important for activism, like Facebook (2004), YouTube (2005), and Twitter (2006), were founded soon after, and long before the publication of their book.

To draw conclusions about activism in the Internet age without reference to social media seems almost as negligent as drawing conclusions about weight gain without reference to carbohydrates. During the seven years between data collection and publication could they not have collected new data to reflect current practice? And, if such revision was impractical, could they not have couched their findings with a strong disclaimer? Publishing outdated material is a particular danger in this rapidly changing field, but downplaying or ignoring valid concerns about old data is more problematic.

What about the tactics they did choose: petitions, letter-writing campaigns, email campaigns, and boycotts? Earl and Kimport state that “most examples of relatively inexpensive activism online take the form of the e-tactics we examine in this book” (p 73), yet they offer no evidence for this claim. Did they do an earlier survey of a broad range of digital tactics and select the most popular to focus on? There is no evidence of this. Though anecdotal evidence supports the continuing popularity of email in digital campaigns, it’s hard to imagine that digital boycotts are as pervasive. After failures of high-profile e-petition sites like Number 10, the tactical value of e-petitions has also been questioned.1 In fact, Earl and Kimport acknowledge that “seven of the fifteen [warehouse sites] we studied are gone from the Web” (p 195), a fact that does not bode well for the current validity of their seven-year-old tactical data.

This focus on the pre-social media, Web 1.0 era of the Internet is also evident in their terminology: e-tactics, e-movements, e-mobilization. These terms are a throwback to the late nineties, when they grew out of the word e-mail, short for “electronic mail.” Back then it was assumed that the principal difference between digital and paper media was that the former required electricity to be created and disseminated. We now have a more sophisticated understanding of how digital media differs from paper media, differences that hinge on the use of code (Lessig’s “perfect copies, freely made”) and network effects. (It is for this reason that I use the prefix “digital” instead of “e-” in this paper.)

Bottom Line: The four tactics selected for the study ignore the most popular current platforms, and have questionable representative salience.

Theory Before Evidence

Though Earl and Kimport’s book is founded on an empirical study, theirs is a deductive rather than inductive logic. They propose interesting theories, yet their evidence does not really fit them. In other cases, evidence they ignore challenges the validity of their theoretical claims.

These are the main theoretical arguments of the book:

  • That tactics that maximally leverage digital affordances will lead to theory 2.0 changes in pre-digital social movement theory, while tactics that only minimally leverage those affordances will lead to supersize changes, in which the mechanics of pre-digital social movement theory remain intact, only at a larger scale.
  • That leveraged affordances should be the principal lens through which to evaluate digital tactics.
  • That new types of online organizing structures exhibit a power-law function, in which the most engaged organizer is twice as active as the second most engaged, and so on.

Supersize / Theory 2.0

Earl and Kimport’s contention that the use of digital technology in activism will require the updating of pre-digital social movement theories is a good one. However, the evidence they provide does not support this contention. This is likely a result of the types of tactics they decided to study. Since they clearly have an interest in showing how digital tactics require the re-working of pre-digital theories, it is peculiar that they choose to focus on tactics that are merely digital forms of pre-digital tactics, a fact they acknowledge when they write that “each of these e-tactical forms has an offline progenitor and a long offline legacy” (p 202). These seem like the tactics least likely to reveal paradigm shifts. In fact, I often refer to the e-petition as the consummate example of a supersize tactic in that it achieves the exact same function as an offline petition – collecting signatures – only it allows more signatures to be collected more quickly and from a wider geographic area.

To find evidence of model change Earl and Kimport could have chosen tactics, like the Google-bomb, that have no pre-digital form. For example, in one high-profile instance of Google bombing, gay sex columnist Dan Savage led a campaign to redefine the name of conservative politician Rick Santorum as a “frothy mix” associated with anal sex, a tactic meant to publicly shame Santorum for his homophobia. An analog equivalent of this tactic – changing the definition of a word in every dictionary – simply doesn’t exist. They could also have studied tactics that don’t quite line up with their pre-digital precursors. For example, though DDoS attacks have been referred to as digital sit-ins, they are really more like ordering a million pizzas to arrive at MasterCard headquarters. These “false cognate” digital tactics could also reveal areas where old theory needs to be updated. The reason why Earl and Kimport chose supersize tactics in their effort to demonstrate theory 2.0 effects is never explained, as they never reveal their process for selecting the tactics in the first place.

Leveraged Affordances

Their second theoretical point, that leveraging the affordances of digital technology (fully utilizing the capacities of digital tools) is “critical to understanding Web activism” (p 177), is certainly true, but it is not as central as they make it out to be. In their analysis they equate maximal leveraging of affordances with “skill” and the altering of past social movement theory, while they equate a lack of maximal leveraging with the junk food metaphor “supersizing” (p 177). They are drawing an implicit hierarchy of tactics here, with the tactics that maximally leverage digital affordances on top.

Earl and Kimport acknowledge that both types of tactics are likely to coexist in practice, which is clearly the case. Yet the reason they give for this integration is flawed:

Reality is likely to always be a mix of supersize effects and theory 2.0 effects because some people don’t notice key affordances, others don’t want to or can’t leverage them even if they do notice them, and still others notice and leverage these affordances quite skillfully (p 177).

In this analysis they set up a duality where skill is equated with leveraged affordances and ignorance, refusal, and lack of capacity are equated with a failure to leverage. However, they miss one reason why someone might choose not to maximally leverage digital affordances: they have noticed the affordance and understand it, but skillfully realize that a digital tactic will not be effective in their particular context. That is, they make a skillful decision not to maximally leverage digital affordances.

There are many examples of the phenomenon of skilled minimal leveraging. The case of the Stop Stock-Outs campaign, which took place in 2009 in the southern African countries of Kenya, Malawi, Uganda, Zambia, and Zimbabwe, is one such example.2 The key digital tactic of the campaign was a Pill Check Week in which volunteers visited public health facilities and then submitted reports of out-of-stock essential medicines via FrontlineSMS from their mobile phones. Those messages were used to create a stock-out map using the software Ushahidi, creating a compelling visual which, in combination with other tactics, successfully drew the attention of local governments and international media.

By Earl and Kimport’s logic the campaign should have maximally leveraged digital affordances by using mostly (or exclusively) online tactics, since these tactics maximally leverage the digital affordances of low-cost collective action without copresence. According to this pure leveraged affordance analysis, the organizers should have called the facilities to check pill levels rather than sending volunteers out into the field to collect reports. A volunteer could call a facility in a few minutes for a few cents but it likely took hours to visit the facilities in person, plus the cost of transportation. However, if volunteers had only called facilities, they would have been forced to receive their information from facility staff who may not have been motivated to be truthful about the shortages at their facility, assuming any of the overworked staff even had the inclination to pick up at all. By using the higher-cost offline tactic of visiting the facilities, the volunteers could interview a range of patients, circumventing the necessity of getting information from facility staff and making their final reports multi-sourced and more reliable. In this way organizers increased their effectiveness by choosing a mixed offline-digital tactic rather than a purely digital tactic that maximally leveraged digital affordances. Though less theoretically elegant than a pure affordances framework, an analysis of the effects of digital tactics reveals that no single measure can determine the value of a tactic, especially where information about effects is absent.

Power-Law Dynamics

The final theoretical argument, which Earl and Kimport introduce at the end of their book, is that new digital organizations are likely to follow a power-law function because of the ease of digital organizing. They quote Clay Shirky’s definition of a power law: data “in which the nth position has 1/nth of the first position’s rank,” such that the 2nd position has a quantity 1/2 that of the 1st position, and so on. They argue:

Innovative uses of the Web can make organizing inexpensive enough that it can begin to follow power-law dynamics in some situations. When that happens, one person will bear the majority of the costs, the active organizer has to bear substantially fewer costs, and so on down the line so that quickly there are no organizing costs left to bear at all…. Our power-law explanation of organizing is certainly consistent with the findings that we cited above of online protests being led by single individuals, pairs, or drastically small teams (p 152 and 153).

However, “consistent” is quite different from mathematically verified. Saying that, in the thirty-eight digital organizations they interviewed, a lead organizer did the lion’s share of the work is far from demonstrating a power law. One could easily imagine that the division of labor followed this pattern without following a power law.

In fact, they do not even attempt to show quantitative evidence for their claim. It would not have been difficult to collect data during these interviews about the number of hours per week each member of the organization worked, to see how closely this data matched a power-law graph. However, there is no evidence that they attempted to prove or disprove their claim, even based on their own limited sample. Perhaps they thought that the theory was so attractive that it did not require evidence to support it.
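The check the review calls for would not be hard to run. As a minimal sketch, assuming hypothetical hours-per-week figures (invented for illustration, not drawn from Earl and Kimport’s interviews), one could fit the slope of log(hours) against log(rank): under Shirky’s definition, the nth most active organizer contributes 1/n of the top organizer’s hours, so a true power law yields a slope near -1.

```python
import math

def loglog_slope(hours):
    """Least-squares slope of log(hours) against log(rank).

    Under Shirky's power law -- the nth most active organizer
    contributes 1/n of the top organizer's hours -- the slope
    is approximately -1.
    """
    hours = sorted(hours, reverse=True)
    xs = [math.log(rank) for rank in range(1, len(hours) + 1)]
    ys = [math.log(h) for h in hours]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Hypothetical weekly hours for a six-person team that follows
# the power law exactly: 40, 20, 13.3, 10, 8, 6.7 hours.
power_law_team = [40 / rank for rank in range(1, 7)]

# A team where each organizer works half as much as the one
# above: the leader still does the lion's share of the work,
# but the division of labor is not a power law.
halving_team = [40 * 0.5 ** (rank - 1) for rank in range(1, 7)]

print(round(loglog_slope(power_law_team), 2))   # -1.0
print(round(loglog_slope(halving_team), 2))     # about -1.89
```

The second team illustrates the review’s point: a lead organizer can dominate the workload while the data still depart sharply from a power law, which is why “consistent with” is not the same as verified.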

But Does it Work?

Why do Earl and Kimport’s theoretical conclusions seem so detached from the real-life practice of digital organizing and activism? It is likely because they blithely eschew the key evaluative question of any organizer: “does it work?” Earl and Kimport do not consider the effectiveness of any of the tactics in their study. “Ours is not a study of the effectiveness of e-tactics,” they write, “so although we are aware of many successful online campaigns, including efforts in our data set, we cannot empirically address” claims challenging the effectiveness of digital tactics (p 94). Their focus on claims over evidence may be a direct result of the fact that their otherwise methodologically strong study ignores the effects of the digital tactics it examines.

Bottom Line: The authors present arguments in favor of theories of value that are unsupported – or substantially contradicted – by evidence.


The book is not without interesting ideas. The idea that digital tactics will force a re-working of some elements of social movement theory is spot-on, and leveraged affordances are certainly one valuable lens for evaluating a digital tactic. Changes in organizational structure, made possible by digital tools, are also important, though without reference to the effectiveness of these new types of organizations, their ultimate impact remains in question. Earl and Kimport are also right to note that online privacy norms may change expectations about what it means to act in public, and I agree that the quick-start organizations created by new activists will likely lead to more episodic activism. It also seems that the ease of online participation may not require prior feelings of collective identity to motivate participation, another interesting challenge to existing theory.

These positive points do not save the book, though. Its disregard for effects, its choice of outdated tactics as the focus of study, and its attention to theory over evidence lay the whole field of digital contention open to the charges of disconnected abstraction, cyber-utopianism, and techno-fetishism that threaten this young field’s legitimacy.
