A pro-China propaganda campaign that’s been bashing the US on social media has been creating fake followers with the help of AI-generated images.
Since June, the campaign has been posting English-language videos critical of the Trump administration on Facebook, Twitter and YouTube, according to the research company Graphika, which has been tracking the group’s activities.
Graphika dubs the campaign “Spamouflage Dragon.” Like other propaganda operations, the pro-China group uses fake accounts to share and comment on its content to help it gain wider circulation. However, Graphika noticed something odd about the profile photos belonging to these fake accounts: In some cases, the headshots appear to be the work of an AI program designed to create artificial human faces.
At first glance, the profile photos all look legitimate. But on closer examination, Graphika spotted strange commonalities in the images, such as blurred backgrounds and eye positions that lined up at the same spot from photo to photo. These odd features indicate the photos were likely the product of a “generative adversarial network,” or GAN, a machine learning technique adept at creating seemingly real but ultimately fake human faces.
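For readers curious how that “aligned eyes” tell could be checked in practice, the sketch below is not Graphika’s actual tooling; it simply uses the open-source face_recognition library to measure how tightly eye positions cluster across a folder of same-sized profile photos. The folder name and the pixel threshold are purely illustrative assumptions.

```python
# Rough sketch of the "aligned eyes" heuristic described above -- not
# Graphika's tooling. Assumes the `face_recognition` library (dlib-based)
# and a folder of same-sized profile photos named avatars/*.jpg.
import glob
import face_recognition
import numpy as np

def eye_centers(path):
    """Return the (x, y) centers of both eyes in an image, or None if no face found."""
    image = face_recognition.load_image_file(path)
    landmarks = face_recognition.face_landmarks(image)
    if not landmarks:
        return None
    face = landmarks[0]
    left = np.mean(face["left_eye"], axis=0)
    right = np.mean(face["right_eye"], axis=0)
    return left, right

centers = [c for c in (eye_centers(p) for p in glob.glob("avatars/*.jpg")) if c]
lefts = np.array([c[0] for c in centers])
rights = np.array([c[1] for c in centers])

# GAN-generated headshots tend to place the eyes at nearly identical pixel
# coordinates, while real photos scatter far more widely.
spread = np.std(lefts, axis=0).mean() + np.std(rights, axis=0).mean()
print(f"average eye-position spread: {spread:.1f} px")
if spread < 5:  # illustrative threshold, not a calibrated cutoff
    print("eye positions are suspiciously aligned -- possible GAN faces")
```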
A GAN generates a synthetic face by studying existing images of real people and learning how to recombine their facial features into a new image. However, the results aren’t always perfect. The AI program often has trouble rendering earrings and other objects around a fake person’s face, and the backgrounds tend to come out as a vague blur.
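To illustrate the adversarial setup in the simplest possible terms, here is a minimal toy GAN written in PyTorch. It is far smaller than the models (such as Nvidia’s StyleGAN) typically used to produce photorealistic faces, and every layer size and hyperparameter in it is an illustrative assumption, but the generator-versus-discriminator training loop is the same basic idea.

```python
# Minimal toy GAN in PyTorch. Sizes, learning rates, and the flattened
# 64x64 "image" format are illustrative assumptions.
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM = 64 * 64, 100

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),          # fake image in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),             # probability the input is real
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to tell real photos from generated ones.
    noise = torch.randn(batch, NOISE_DIM)
    fake_images = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fake_images), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator.
    noise = torch.randn(batch, NOISE_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

# Stand-in for a real dataset of face photos: random tensors of the right shape.
for step in range(3):
    real_batch = torch.rand(16, IMG_DIM) * 2 - 1
    print(train_step(real_batch))
```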
Nevertheless, the technology has sparked fears that bad actors will exploit AI-created media to pump out disinformation over social media. A reverse image search can often reveal whether a user’s profile photo is legitimate or has been repurposed from somewhere else, but that check is useless against a freshly generated photo that exists nowhere else on the web.
In the case of Spamouflage Dragon, the pro-China group used the AI-generated photos to create fake followers on Twitter and YouTube. However, the campaign itself was pretty shoddy, according to Graphika. “The videos were clumsily made, marked by language errors and awkward automated voice-overs,” the research company said in its report.
The computer-assisted text-to-voice recordings were so bad that some videos pronounced the US as “us.” Other language errors included headlines and subtitles such as “Public blamed Trump sinaction” and “very good at be mischievous.”
As a result, the videos failed to receive any engagement from real social media users. The campaign ran from June to early August, posting videos critical of President Trump’s ban on TikTok and his approach to COVID-19. However, the social media companies have since taken down the group’s videos and the affiliated user accounts.
Whether the Chinese government was behind the campaign remains unclear. However, US intelligence officials warned last week that foreign governments, including China, Russia and Iran, will try to sway US public opinion to influence the upcoming presidential election.
Graphika says this isn’t the first time it has encountered a propaganda campaign incorporating AI-generated photos into its scheme. But the company warns: “Given the ease with which threat actors can now use publicly available services to generate fake profile pictures, this tactic is likely to become increasingly prevalent.”