Are social media bots a threat to democracy?

October 12, 2017

Ryan de Laureal analyzes the push to get Silicon Valley to clamp down on promoters of "fake news"--and argues that the solution isn't censorship, but transparency.

TECH GIANTS Facebook, Google and Twitter have found themselves under fire as the latest details have emerged about the use of fake Russian social media accounts and political ads in last year's presidential election.

The furor over various attempts to manipulate public opinion by spreading "fake news" during the 2016 campaign began almost immediately after Donald Trump's shocking victory, resulting in a storm of criticism toward companies like Facebook, which were accused of failing to crack down on the abuse of their platforms by Russian fakesters.

The issue was revived again on September 6, when Facebook announced it had discovered about $100,000 worth of political ads purchased between June 2015 and May 2017 by accounts with potential links to the Russian government, many of them fake accounts posing as American users.

In the following weeks, the company handed over thousands of these ads to Congressional investigators and announced steps to limit the impact of such content in the future.


After Facebook, Twitter and Google were the next to be caught up in the investigation.

In Twitter's case, a primary focus was on the use of so-called "bots"--automated accounts that can be programmed to post and share content, and can often be made to appear indistinguishable from real users. Hundreds of such bots were apparently used by Russian actors to spread propaganda during the election.
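To get a sense of how low the technical bar is, here is a minimal sketch of such an account: a script that posts canned messages on a schedule. It assumes the third-party tweepy library and developer credentials issued by Twitter, and is illustrative only, not a reconstruction of any actual account used in 2016.

```python
# Minimal sketch of an automated posting account ("bot").
# Assumes the tweepy library and placeholder Twitter API credentials;
# illustrative only, not any specific account used during the election.
import time
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

TALKING_POINTS = [
    "Example talking point #1",
    "Example talking point #2",
]

# Post one canned message every hour. From the outside, the account
# looks like any other user tweeting -- no label marks it as automated.
while True:
    for message in TALKING_POINTS:
        api.update_status(message)
        time.sleep(3600)
```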

The alarm being raised by Democrats about this Russian influence campaign should be looked at skeptically. Rather than being a smoking gun, the ad spending uncovered thus far by Facebook raises serious doubts about how extensive and impactful this campaign really was.

To begin with, $100,000 is an almost laughably minuscule amount of money compared to what presidential campaigns typically spend on political propaganda. While it is possible that more Russian ad spending may come to light, the fact remains that the Trump and Clinton campaigns each spent hundreds of millions of dollars in the 2016 presidential race, large portions of which were dedicated to advertising.

Even taking into account an extra $100,000 bump for the Trump campaign courtesy of Russian actors, Clinton still outspent Trump by over $200 million, and even Green Party candidate Jill Stein outspent the Russians 50 times over.

If the claim that Russian propaganda activity cost the Democrats the election is taken seriously, it reveals either superhuman ability on the part of the Russians or total ineptitude on the part of the Democrats, who failed to defeat Trump despite burning through buckets of money in their attempt to do so.


THAT ISN'T to say that there aren't genuine concerns raised by the issue of Twitter bots and fake accounts.

Though certain bot functions--such as liking posts, following users en masse and sending direct messages--technically violate Twitter's terms of service, the company still encourages the use of automated accounts, and there is a proliferation of services available online that allow for abuse of the platform even by those who are not tech-savvy.

Certain products allow customers to create and control thousands of bot accounts in an instant, and Twitter's low standards for account verification (little more than an e-mail address is needed to create an account) have made bots desirable tools for anybody wishing to influence public opinion, Russians or not. That is precisely why the current concern among Democrats about their use falls short.

Allowing anonymous users to create thousands of fake accounts at the click of a button and use them to impersonate real people and spread lies certainly is something that should be of public concern. This is especially true when--as was the case with many of the pro-Trump Russian bots and fake accounts active during the campaign--they are used to incite xenophobia and bolster society's racist, far-right fringe.

But thus far, the Democrats' only apparent concern is the use of bots and fake accounts by the Russians--even though bots have become a fairly regular feature of U.S. political campaigns over the past few years, with Republicans and Democrats alike investing in automated Twitter traffic to spread their campaign propaganda, alongside more traditional advertising routes.

An analysis of selected Twitter traffic during the 2016 election by the Oxford Internet Institute's Computational Propaganda Project found that over 10 percent of users tweeting election-related hashtags were potential bots--and it's likely that most of them weren't Russian.

Though bot activity was more heavily concentrated among accounts tweeting pro-Trump content, bots were also used to spread pro-Clinton content in 2016.
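The project's full methodology is more involved, but the basic approach to flagging "potential bots" can be illustrated with a simple heuristic: flag any account that posts far more often than a human plausibly could. The 50-tweets-per-day threshold below is an assumption for the sketch, not necessarily the figure the Oxford researchers used.

```python
# Toy heuristic for flagging "potential bots" in a stream of hashtagged
# tweets: any account posting 50 or more times in a single day is flagged.
# The threshold is an assumption for illustration, not the Oxford
# project's actual cutoff.
from collections import Counter

HIGH_AUTOMATION_THRESHOLD = 50  # tweets per account per day

def flag_potential_bots(tweets):
    """tweets: iterable of (username, date) pairs for election-hashtag tweets."""
    daily_counts = Counter((user, date) for user, date in tweets)
    return {user for (user, date), count in daily_counts.items()
            if count >= HIGH_AUTOMATION_THRESHOLD}

# An account tweeting #Election2016 sixty times in a day gets flagged;
# a typical user tweeting a handful of times does not.
```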


DESPITE THE often narrow, jingoistic focus of the current frenzy over "fake news," and its obvious use for Democrats as a political tool in their ongoing Russia inquiry, there are clear problems posed by bots and other forms of modern technological propaganda that should be taken seriously.

Addressing these problems must go hand in hand with the fight to defend the Internet as a free and open form of communication.

The ease and anonymity with which platforms such as Twitter can be abused make them attractive venues for wealthy and powerful actors--from dictatorial regimes to corporate interest groups--to manipulate public opinion, sow confusion and quell dissent.

In addition to their use by multiple players in the 2016 U.S. election, bots have been used extensively by the widely despised Institutional Revolutionary Party of Mexico to manufacture fake support for its candidates and silence criticism online. Pro-government bots have also been used by repressive regimes in Syria and Turkey to spread propaganda in support of the Assad and Erdoğan dictatorships.

A number of solutions have been proposed by the government and by companies such as Twitter and Facebook in response to the current Russia scandal, including tighter regulation and greater transparency in online political advertising, more aggressive enforcement of terms of service rules by social media companies, and greater collaboration between Silicon Valley and the national security state.

While some proposals, such as greater transparency around online ads and automated accounts, could be welcomed, many of these are quite dangerous. Of particular concern are measures that would hand greater control over the Internet, or censorship powers over online speech, to the state, to corporations, or to both.

One example of these dangers can be found in the debate over the Stop Enabling Sex Traffickers Act, or SESTA, which is currently gaining traction in the Senate.

While the bill has the ostensible purpose of cracking down on sex trafficking, it has been criticized by Internet advocacy groups for its proposal to limit the application of Section 230 of the 1996 Communications Decency Act.

Section 230 has been described as "the law that built the modern Internet" by the Electronic Frontier Foundation (EFF):

Section 230 says that for purposes of enforcing certain laws affecting speech online, an intermediary--such as a company, website, or organization that provides a platform for others to share speech and content--cannot be held legally responsible for any content created by others. The law thus protects intermediaries against a range of laws that might otherwise be used to hold them liable for what others say and do on their platforms.

It's thanks to Section 230 that social media exists in the way that we know it today. The proposal to limit it, which could make companies like Facebook or Twitter open to lawsuits for illegal content posted by users, means that any organization providing an online platform for speech would be incentivized to more heavily police and censor content.


THIS DEMAND to be more vigilant in finding and removing malicious content is essentially the one that many have been making of Facebook and Twitter in the current Russian hacking scandal.

But what counts as malicious is subjective--whether it is human moderators on the other end screening ads and content and deciding what gets approved, and even more so when the moderators themselves are bots.

The enormous amount of advertising bought and sold on platforms like Facebook makes it impossible for humans to review and approve every ad purchase. An attractive alternative for tech companies is bots of their own: automated systems programmed to review content and flag it as potentially troublesome.

If Internet companies censor content more heavily, it won't be humans doing the moderating. Instead, as the EFF argues, increased sanctioning of online platforms will actually lead to greater automation of this function.

A number of programs like this already exist, such as Google's recently released Perspective, an application programming interface (API) designed to fight online trolls by automatically moderating comment threads and flagging posts based on their "toxicity."

The danger posed to free speech by programs like Perspective isn't hard to see. After its rollout, users experimenting with it discovered a discriminatory streak in the kinds of statements flagged as toxic.

A statement such as "I am a man" is flagged as 20 percent likely to be seen as toxic, while "I am a Black man" is flagged at 80 percent. "I am a woman" is 41 percent, and "I am a gay Black woman" is flagged as 87 percent.
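These scores can be checked directly against the service. The sketch below queries Perspective's comments:analyze endpoint for a toxicity score; the request format follows Google's published interface, but the API key is a placeholder and the exact fields should be verified against current documentation.

```python
# Query Google's Perspective API for a toxicity score. The endpoint and
# request shape follow the published comments:analyze interface; the key
# below is a placeholder and must be replaced with real credentials.
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def toxicity_score(text):
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=body).json()
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

for sentence in ["I am a man", "I am a Black man",
                 "I am a woman", "I am a gay Black woman"]:
    print(sentence, "->", toxicity_score(sentence))
```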

The problem is that algorithms can't understand things like human intent. They can search posts for key words--like "Black" or "Jew"--that might be used by racist online trolls, but they have trouble distinguishing actually racist posts containing these words from ones that aren't racist.
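A deliberately crude keyword filter makes the problem concrete: it flags any post containing a watched word, regardless of intent, so a first-person statement of identity scores the same as a slur-laden attack, while abuse that avoids the keywords sails through. The word list and example posts below are invented for illustration.

```python
# A toy keyword filter, purely to illustrate why word-matching fails:
# it cannot tell self-description from abuse, and it misses abuse that
# avoids the watched words entirely.
WATCHED_WORDS = {"black", "jew", "muslim", "gay"}

def naive_flag(post):
    words = {word.strip(".,!?").lower() for word in post.split()}
    return bool(words & WATCHED_WORDS)

print(naive_flag("I am a Black man"))             # True  -- a false positive
print(naive_flag("Go back where you came from"))  # False -- abuse missed
```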


SPAMBOTS AND fake news are legitimate problems. They muddy the waters of online speech and are ripe for abuse. Governments, including the U.S., use them for psychological operations to spread false ideas and silence dissent, and they can also be used by hackers to spread malware.

But sanctioning Internet companies for the actions of their users and giving them more power to censor content online is a route that could chill the Internet as a venue for free speech.

There are better ways to handle bots. If Twitter were simply to disclose which of its user accounts were automated--in the same way it has been proposed that Facebook create greater transparency around ads by disclosing who bought them--it would go a long way toward eliminating the ability to disguise bots as real users.

There are trickier Internet questions out there, such as how to handle the epidemic of online harassment. But when it comes to bots, increased transparency may not be the best solution for Silicon Valley's profit margins, but it would make for a better Internet for the rest of us.
