In the largest study yet of Chinese internet censorship (PDF), scholars at Harvard University have learned that China’s censorship program targets incitements to collective action, not criticism of the government, as previously supposed. Notes the abstract:
Contrary to previous understandings, posts with negative, even vitriolic, criticism of the state, its leaders, and its policies are not more likely to be censored. Instead, we show that the censorship program is aimed at curtailing collective action by silencing comments that represent, reinforce, or spur social mobilization, regardless of content. Censorship is oriented toward attempting to forestall collective activities that are occurring now or may occur in the future — and, as such, seem to clearly expose government intent….
The scholars – Gary King, Jennifer Pan, and Margaret Roberts – used a scraper to collect content from 1,382 Chinese social media services in early 2011. They then used computer-assisted text analytic methods to compare content that was censored (removed from the Internet) to content that wasn’t. As the scholars point out, China’s censorship program, though “designed to limit freedom of speech of Chinese people, paradoxically also exposes an extraordinarily rich source of information about the Chinese government’s interests, intentions, and goals.”
China’s censorship ecology is formidable and at times surprising. Some of the paper’s best background observations:
Unlike in the U.S., where social media is centralized through a few providers, in China it is fractured across hundreds of local sites, with each individual site employing up to 1,000 censors. Additionally, approximately 20,000–50,000 Internet police and an estimated 250,000–300,000 “50 cent party members” (wumao dang) are employed by the central government.
….The vast majority of censorship activity [on high-sensitivity topics] occurs within 24 hours of the original posting, although a few deletions occur longer than five days later. This is a stunning organizational accomplishment, requiring large-scale military-like precision: The many leaders at different levels of government first need to come to a decision (by agreement, direct order, or compromise) about what to censor in each situation; they need to communicate it to tens of thousands of individuals; and then they must all complete execution of the plan within about 24 hours.
….Overall, approximately 13% of all social media posts [in our study of high, moderate, and low-sensitivity topics] were censored.
….An oddly “inappropriate” behavior of the censors: They offer freedom to the Chinese people to criticize every political leader except for the censors, every policy except the one they implement, and every program except the one they run.
Prior to this study, what the authors call the state critique theory of censorship dominated. This theory posits that “the goal of the Chinese leadership is supress [sic] dissent, and to prune human expression that finds fault with elements of the Chinese state, its policies, or its leaders.” The theory is supported by evidence, presented by Rebecca MacKinnon and others, of particularly sensitive words, like “democracy” or “Bo Xilai,” being blocked or immediately removed from Chinese weibo microblog services once posted.
The second theory of censorship is that of collective action potential: “collective expressions — many people communicating on social media on the same subject — regarding actual collective actions, such as protests.” Whether or not the speech is critical of the government is irrelevant. In fact, “the government censors views that are both supportive and critical of the state” if they are related to collective action. An example of this kind of apolitical censoring of speech about collective action is the rather strange anecdote from 2011 of the government censoring the word “to stroll” after the word was used to organize protests inspired by the Arab Spring.
While these two theories were debated by experts or thought to be jointly valid, the authors argue that they have found a winner:
State critique theory is incorrect and the theory of collective action potential is correct…. censorship is primarily aimed at restricting the spread of information that may lead to collective action, regardless of whether or not the expression is in direct opposition to the state….
Thus, observations like MacKinnon’s of individual words being censored can be reinterpreted. The government was not reacting to the critical meaning of the word, but to the volume of use of the word. If one person says “stroll” in China, it is not censored. If one million people do, it is. Thus, the government does not have a problem with people talking about democracy or freedom, except when they believe that it is likely to lead to collective action.
According to the theory of the paper, this is why these and similar words are either added to a list of machine-blocked keywords or are manually censored by human reviewers. The implication is that the Chinese government is using online collective expression as a predictor of offline collective action.
The study itself focuses on content that is censored (removed) by human reviewers. This is because content containing machine-censored words would not be posted in the first place. The methodology of the paper compares published content on the 1,382 sites to the sub-set of that content that is later removed. As such, the scholars required that the content be first published on the public net so it could be collected by their scraper.
By using automated data collection, they actually had an advantage over the censors. As the authors state, “the reason we are able to accomplish this is because our data collection methods are highly automated whereas Chinese censorship is a massive effort accomplished in large part by hand.”
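The detect-by-disappearance approach the scholars describe — capture a post at publish time, then re-check it later to see whether it survived — can be sketched as a small polling loop. This is a minimal illustration under my own assumptions, not the authors' code; in particular, the `fetch` callable (returning `None` for a removed post) and the re-check schedule are invented:

```python
import time

def detect_censorship(posts, fetch, recheck_delays=(3600, 86400)):
    """Classify posts as censored if they disappear after initial capture.

    posts: dict mapping post_id -> text captured at publish time
    fetch: callable(post_id) -> current text, or None if removed
    recheck_delays: seconds to wait before each re-check pass
    """
    censored = {}
    for delay in recheck_delays:
        time.sleep(delay)  # wait, then re-check posts not yet marked censored
        for post_id in posts:
            if post_id not in censored and fetch(post_id) is None:
                # the original text survives in our archive even though
                # the live copy is gone — exactly what the study compares
                censored[post_id] = posts[post_id]
    return censored
```

Because the archive keeps the text captured at publish time, the researcher ends up with both populations the paper needs: everything posted, and the subset later removed.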
What first clued them in to the fact that content had little bearing on what was and was not censored was a “surprisingly low correlation between our ex ante measure of political sensitivity and censorship.” At the beginning of their study they selected 85 topics on which to collect content, divided into three levels of political sensitivity: “High” (such as Ai Weiwei), “Medium” (such as the one-child policy), and “Low” (such as a popular online video game). They defined each topic by keywords and then collected all posts on those topics from the selected platforms for six months.
When they began analyzing their data, they found that “censorship behavior in the Low and Medium categories was essentially the same (16% and 17% respectively) and only marginally lower than the High category (24%).” That is, a post about the one-child policy had about the same chance of being censored as a post about an online game.
In another instance, a health scare (a run on iodized salt to protect against radiation following the Japanese earthquake), which incited apolitical collective action, was also highly censored, while supposedly political news about education and a rise in food prices was not.
A diagram of high- and low-censored topics is at left and shows the surprising lack of correlation between a topic’s political sensitivity and its likelihood of being censored. Political topics appear in both histograms, but it is the topics that involve protest or crowd formation offline (hence the salt run’s inclusion) that are most censored.
Some topics that one might think would cause offline collective action, like the rise in food prices, were not highly censored. According to the analysis of the researchers, this was because the topic did not fall into one of three types of content which have collective action potential.
Is this the checklist used by Chinese censors? It is likely something similar.
- Current Inciter of Collective Action: The discussant calls for offline collective action (“we should…”).
- Past Inciter of Collective Action: The discussant previously called for offline collective action on another subject (past offender).
- Past Subject of Collective Action: The topic itself was previously the subject of offline collective action, particularly nationalism.
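The checklist above reads as a simple disjunction: a post has collective action potential if any of the three conditions holds. A toy classifier along those lines — the post schema, the “we should” trigger phrase as a stand-in for incitement detection, and the lookup sets are all invented for illustration, not taken from the paper:

```python
def has_collective_action_potential(post, past_inciters, protest_topics):
    """Return True if a post matches any of the three checklist categories.

    post: dict with 'author', 'topic', and 'text' keys (illustrative schema)
    past_inciters: set of authors who previously called for offline action
    protest_topics: set of topics previously tied to offline protest
    """
    calls_for_action = "we should" in post["text"].lower()  # category 1 (crude proxy)
    past_offender = post["author"] in past_inciters          # category 2
    protest_subject = post["topic"] in protest_topics        # category 3
    return calls_for_action or past_offender or protest_subject
```

Note that the third condition fires on topic alone, which is how a post that neither incites action nor criticizes anyone can still be swept up.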
A post could thus be categorized as having collective action potential without actually containing an incitement to collective action. For example, the translated post below supports the government’s position in the case of Ran Jianxin, a local legislator who died in police custody.
According to news from the Badong county propaganda department website, when Ran Jianxin was party secretary in Lichuan, he exploited his position for personal gain in land requisition, building demolition, capital construction projects, etc. He accepted bribes, and is suspected of other criminal acts.
The post does not incite collective action or criticize the government, but it references Ran Jianxin, who was the subject of past protests. This makes the post an example of the third type of collective action content, and it was thus censored.
At the end of the paper, the authors provide a juicy treat: their censorship analysis software is predictive of actions taken by the Chinese government. This is because censorship policies are determined and implemented in advance of public government actions. If you can find an up-tick in censorship activity (not explained by chance or other factors), it is likely to presage public government action on that topic.
For example, the authors found that censorship of Ai Weiwei’s name increased in the days ahead of his April 3rd arrest (the gray area in the diagram at left), and discussion of Wang Lijun, who exposed the corruption of Bo Xilai, was censored in advance of Wang’s demotion on February 2nd.
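An up-tick of this kind could be flagged with a simple rolling-baseline check. The sketch below is my own illustration, not the paper's method — the window size and threshold are arbitrary choices — and it marks days where the censored-post count jumps well above the recent mean:

```python
from statistics import mean, stdev

def censorship_spikes(daily_counts, window=7, threshold=3.0):
    """Flag day indices where censored-post volume jumps above the
    rolling baseline — a possible precursor of government action."""
    spikes = []
    for i in range(window, len(daily_counts)):
        base = daily_counts[i - window:i]          # trailing baseline window
        mu, sigma = mean(base), stdev(base)
        # flag days more than `threshold` standard deviations above the mean
        if daily_counts[i] > mu + threshold * max(sigma, 1e-9):
            spikes.append(i)
    return spikes
```

On a series like the Ai Weiwei counts described above, such a detector would fire in the quiet-baseline days just before the volume surges, which is exactly the window the authors found predictive.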
The paper concludes with the wise dictum that “with respect to speech, the Chinese people are individually free but collectively in chains.” It is the collective nature of speech, rather than its content, that merits censorship in the eyes of the Chinese government.
This also supports the hypothesis that the Chinese government uses social media as a barometer of public opinion, thus allowing it to respond to certain public demands while remaining an autocracy. This policy of freedom of speech (so long as it is individual) ironically allows China to maintain its legitimacy and improve governance as measured by responsiveness to citizens’ needs. As the article’s authors point out:
Dimitrov (2008) argues that regimes collapse when its people stop bringing grievances to the state, since it is an indicator that the state is no longer regarded as legitimate. By extension, this suggests that allowing criticism, as we found the Chinese leadership does, may legitimize the state and help the regime maintain power.
Personally, this makes me never want to use human coders again (except to train a machine to code). It seems like the machine-readable content and mind-boggling scale of the subject matter of digital activism require the adoption of methods best suited to this new medium.
Thanks to Jay Ulfelder and Patrick Meier for alerting me to this article.