Drones fly in formation over Ruyi Lake square in celebration of China's Army Day on August 1, 2023 in Zhengzhou, Henan Province of China. Ma Jian / VCG via Getty Images

How China could use generative AI to manipulate the globe on Taiwan

Generative AI could allow China to radically improve and scale up disinformation efforts, just in time for the Taiwanese presidential election, a new report argues.

Chinese researchers are already investigating how to use generative AI tools—similar to ChatGPT—to manipulate audiences around the world and will likely use such tools to shape perceptions about Taiwan, according to researchers from RAND.

“Given the [People’s Liberation Army] and the Chinese Communist Party's prior intentions, their prior actions…we think is logically the next target for China would be the Taiwanese [2024 Presidential] elections,” Nathan Beauchamp-Mustafaga, a policy researcher who focuses on Asian security issues at RAND, told reporters Thursday. 

Chinese research into the use of technology to alter or manipulate foreign public opinion in key target locations goes back to at least 2005, when PLA researchers first “espoused a desire to create what they sometimes call ‘synthetic information’: specifically, creating inauthentic content using some amount of original information that is intended to be spread online for malign purposes,” RAND researchers write in their newly released paper. 

China has been playing at a disadvantage when it comes to weaponized disinformation—behind more adept players like Russia—due to the Chinese government’s obsession with censorship and blocking foreign media channels. But generative AI tools promise to change that. “Generative AI [large language models] such as ChatGPT offer to bridge this cultural gap for the party-state at scale. However, generative AI’s reliance on massive amounts of training data will be a key focus for the PLA, and PLA information warfare researchers have even complained about the lack of internal data-sharing,” the paper says.

The paper points to efforts by Chinese AI researcher Li Bicheng, who was tasked with finding military applications for AI within China. Li’s “special importance within the PLA is evident in the fact that he co-authored his 2019 article with a researcher at Base 311, right after the unit was accused by Taiwan of election interference via social media,” the paper notes. 

Li has specifically led research into bots for social media influence, and has noted that at present, such bots are easy to detect and not persuasive, because they aren’t good with language and can’t answer simple biographical questions about themselves. Li is now working on “improving the outputs of a language model for better using emotion in text generation and thus generating more-convincing synthetic text,” according to the paper. 

How might generative AI change or influence public opinion around Taiwan? Generative AI tools could help the PLA create large numbers of false personas that seem to hold a particular view or opinion, thus creating the impression that certain opinions or views have popular support when they do not—a phenomenon sometimes called “astroturfing.” Moreover, generative AI could allow for the rapid production of false news articles, research papers, and other pages, creating a false sense of truth itself.

“People think often about deep fakes in terms of images or videos, but this is going to allow you to more credibly deep fake effect, so if you want to try to create what you want to present as a fact that is truly a falsehood, you can now very easily, or will be very easily, be able to do if you're using these tools,” Heather Williams, associate director for the international security and defense policy program at RAND, told reporters. “They might be hyped projections about future disorder or disaster that's going to occur if the world as we know it continues, or conspiracies about small groups who might be controlling events or controlling civil affairs…Those falsehoods will be able to be more credible now because of what some of these third-generation social media manipulation tools allow a bad actor to do.”

While the RAND researchers said they don’t have specific information indicating Taiwan will be a major target, U.S. officials have warned that a Chinese invasion of Taiwan could come by 2027. And Taiwan’s presidential elections in 2024 could have a major impact on Chinese actions toward Taiwan in the years ahead. 

Importantly, even as China explores the use of generative AI for large-scale media manipulation, social media platforms in the United States aren’t well positioned to guard against it. One of the most popular networks, TikTok, is Chinese in origin. X, formerly Twitter, is less able today to find large-scale state-backed media manipulation than it was a few years ago, since X owner Elon Musk fired the company's data team in charge of watching for disinformation. Not surprisingly, a recent EU report found that Russian disinformation on the site had jumped as a result.  

Said Williams: “These are very concerning trends that are happening in the social media space. You know, if you let social media providers run themselves, we have now a decade, two decades, of history of what happens: bad things. It goes to the trolls.”