Every month more evidence piles up, suggesting that online comment threads and forums are being hijacked by people who aren't what they seem.
The anonymity of the web gives companies and governments golden opportunities to run astroturf operations: fake grassroots campaigns that create the impression that large numbers of people are demanding or opposing particular policies. This deception is most likely to occur where the interests of companies or governments come into conflict with the interests of the public. For example, there's a long history of tobacco companies creating astroturf groups to fight attempts to regulate them.
After I wrote about online astroturfing in December, I was contacted by a whistleblower. He was part of a commercial team employed to infest internet forums and comment threads on behalf of corporate clients, promoting their causes and arguing with anyone who opposed them.
Like the other members of the team, he posed as a disinterested member of the public. Or, to be more accurate, as a crowd of disinterested members of the public: he used 70 personas, both to avoid detection and to create the impression there was widespread support for his pro-corporate arguments. I'll reveal more about what he told me when I've finished the investigation I'm working on.
It now seems that these operations are more widespread, more sophisticated and more automated than most of us had guessed. Emails obtained by political hackers from a US cyber-security firm called HBGary Federal suggest that a remarkable technological armoury is being deployed to drown out the voices of real people.
As the Daily Kos has reported, the emails show that:
• Companies now use "persona management software", which multiplies the efforts of each astroturfer, creating the impression that there's major support for what a corporation or government is trying to do.
• This software creates all the online furniture a real person would possess: a name, email accounts, web pages and social media accounts. In other words, it automatically generates what look like authentic profiles, making it hard to tell the difference between a virtual robot and a real commentator.
• Fake accounts can be kept updated by automatically reposting or linking to content generated elsewhere, reinforcing the impression that the account holders are real and active.
• Human astroturfers can then be assigned these "pre-aged" accounts to create a back story, suggesting that they've been busy linking and retweeting for months. No one would suspect that they came onto the scene for the first time a moment ago, for the sole purpose of attacking an article on climate science or arguing against new controls on salt in junk food.
• With some clever use of social media, astroturfers can, in the security firm's words, "make it appear as if a persona was actually at a conference and introduce himself/herself to key individuals as part of the exercise … There are a variety of social media tricks we can use to add a level of realness to fictitious personas."
Perhaps the most disturbing revelation is this. The US Air Force has been tendering for companies to supply it with persona management software, which will perform the following tasks:
a. Create "10 personas per user, replete with background, history, supporting details, and cyber presences that are technically, culturally and geographically consistent … Personas must be able to appear to originate in nearly any part of the world and can interact through conventional online services and social media platforms."
b. Automatically provide its astroturfers with "randomly selected IP addresses through which they can access the internet" (an IP address is the numerical label that identifies a computer or network connection on the internet); these are to be changed every day, "hiding the existence of the operation". The software should also mix the astroturfers' web traffic with "traffic from multitudes of users from outside the organisation. This traffic blending provides excellent cover and powerful deniability."
c. Create "static IP addresses" for each persona, enabling different astroturfers "to look like the same person over time". It should also allow "organisations that frequent same site/service often to easily switch IP addresses to look like ordinary users as opposed to one organisation."
Software like this has the potential to destroy the internet as a forum for constructive debate. It jeopardises the notion of online democracy. Comment threads on issues with major commercial implications are already being wrecked by what look like armies of organised trolls – as you can sometimes see on guardian.co.uk.
The internet is a wonderful gift, but it's also a bonanza for corporate lobbyists, viral marketers and government spin doctors, who can operate in cyberspace without regulation, accountability or fear of detection. So let me repeat the question I've put in previous articles, and which has yet to be satisfactorily answered: what should we do to fight these tactics?