Elections and Foreign Interference: Lessons from India’s Elections

In 2023, Meta exposed Chinese information operations targeting India, the U.S. and Tibet, run by sophisticated networks posing as journalists and activists. These operations intensified ahead of the 2024 elections, aiming to polarise voters and undermine democracy; in Taiwan, swift fact-checking blunted the disinformation. In India, PRC-based accounts exploited societal fissures and spread fake news about defence deals, targeting India's relations with key allies. Despite Meta's takedowns, new inauthentic accounts kept emerging, showing that foreign disinformation remains a significant threat. Democracies must share experiences to combat such non-traditional security threats effectively.

Analysis

By Dr. Sriparna Pathak, Dr. Nishit Kumar, Divyanshu Jindal and Nikhil Parashar

In November 2023, Meta warned that Chinese information operations were targeting India, the U.S. and Tibet. One of the networks that Meta took down in 2023 was aimed at the U.S., while the other, smaller but more sophisticated, targeted India and Tibet. The latter consisted of 13 Facebook accounts and seven groups, with accounts posing as journalists, lawyers and human rights activists. Some also operated accounts under the same names and profile pictures on X (formerly known as Twitter). Meta said about 1,400 accounts had joined one of the groups before the network was taken down.

A month later, in December 2023, Meta reported that it had been taking down Chinese Facebook accounts that were likely designed to polarise Indian and U.S. voters ahead of the 2024 elections. In mid-2023, a small portion of this network's accounts changed their names and profile pictures from posing as Americans to posing as users based in India, at which point they suddenly began liking and commenting on posts by the other China-origin networks focusing on India and Tibet.

One of the first elections of 2024 was Taiwan's. China-sponsored disinformation, channelled through everything from content farms to bots to influencers, tried to divide the Taiwanese electorate and even cast doubt on the credibility of the election results. The response in Taiwan, however, was swift: fact-checking groups debunked rumours, the Central Election Commission held a news conference to push back on claims of electoral discrepancies, and influencers put out YouTube explainers on how votes are tallied. Doublethink Lab, one of the vibrant civil society groups pushing back against China's state-sponsored disinformation, stated that the stream of disinformation from China was a deliberate attempt to undermine Taiwanese democracy. Taiwan's efforts should thus be a lesson for other democracies in tackling disinformation from China, particularly during elections.

In research conducted by Doublethink Lab with partners from India, it was found that China used multiple narratives to shape perceptions in India, ranging from claims of shrinking human rights under the ruling government to claims of poor economic performance. The work of foreign actors manipulating information begins well before elections do: several inauthentic accounts targeting India were found to have been created in mid-2023, meaning the preparatory stage began at least a year before India voted. Based on the patterns of disinformation observed around India's elections, the same preparatory work for the U.S. elections would likewise have begun well in advance, in 2023 if not earlier.

In May 2024, Meta stated in its Adversarial Threat Report that it had removed 27 Facebook accounts, 13 pages, five groups and nine Instagram accounts for violating its policy against coordinated inauthentic behaviour. Despite Meta's efforts, research from Doublethink Lab revealed that newer accounts engaging in coordinated inauthentic behaviour kept emerging. Given India's sensitivity to Chinese manipulation of the information space, many of these accounts are blocked in India, yet they remain active in countries and regions with large Indian diasporas. Because the diaspora maintains links with family and friends in India, manipulated information still finds its way into the country. While Chinese-language content was spread widely during India's elections, content was also found in English, Hindi, Bengali, Tamil and Urdu, all languages used in India. Researchers also found coordination between the narratives emerging from Chinese handles and those of bot accounts operating in India. All of these accounts sought to leverage existing fissures in Indian society and to influence the electoral choices of their audience.
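To make "coordinated inauthentic behaviour" concrete, the sketch below shows one simple heuristic that open-source researchers commonly use: flagging identical text posted by several distinct accounts within a short time window. It is a minimal Python illustration; the account names, posts and thresholds are hypothetical, and it does not represent Meta's or Doublethink Lab's actual detection methods.

```python
# Illustrative sketch only: flag identical posts made by several distinct
# accounts within a tight time window. All accounts and posts are hypothetical.
from collections import defaultdict
from datetime import datetime, timedelta

posts = [
    # (account, text, timestamp) -- hypothetical sample data
    ("acct_a", "Defence deal is a scam", datetime(2024, 4, 1, 10, 0)),
    ("acct_b", "Defence deal is a scam", datetime(2024, 4, 1, 10, 3)),
    ("acct_c", "Defence deal is a scam", datetime(2024, 4, 1, 10, 7)),
    ("acct_d", "Unrelated cricket news", datetime(2024, 4, 1, 11, 0)),
]

WINDOW = timedelta(minutes=15)   # identical posts inside this window look coordinated
MIN_ACCOUNTS = 3                 # require several distinct accounts, not one reposter

def flag_coordination(posts):
    """Group identical texts; flag those posted by many accounts in a tight window."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text].append((account, ts))
    flagged = []
    for text, hits in by_text.items():
        accounts = {account for account, _ in hits}
        times = sorted(ts for _, ts in hits)
        if len(accounts) >= MIN_ACCOUNTS and times[-1] - times[0] <= WINDOW:
            flagged.append((text, sorted(accounts)))
    return flagged

for text, accounts in flag_coordination(posts):
    print(f"Possible coordination: {accounts} posted {text!r} within 15 minutes")
```

Real detection pipelines weigh many more signals (account creation dates, shared infrastructure, posting cadence), but duplicate-text clustering of this kind is a common first filter.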

Narratives were also spread on foreign policy issues, with the presumed objective of dividing allies. An example was the narrative that India's BrahMos missile sale to the Philippines had been all for nought because the missiles were supposedly prone to explosion during testing. While the fact remains that none of the defence equipment India has exported has ever malfunctioned, fake videos portraying India as having cheated the Philippines were widely shared, particularly on TikTok, which remains in use in the Philippines. Other, similar narratives were aimed at pre-emptively harming future cooperation, such as the possible sale of the BrahMos to Vietnam. The study also found narratives aimed at dividing the India-Taiwan and India-U.S. partnerships. Given that the ruling government in India prioritises its relations with the U.S., Taiwan and partners in Southeast Asia, attempts to divide through disinformation were observed to be particularly strong.

Overall, PRC-based accounts churned out disinformation and conspiracy theories, amplified existing PRC propaganda, engaged in coordinated inauthentic behaviour, used suspicious accounts that generated extraordinarily high interaction counts, sought to negatively impact domestic society, and tried to tear apart relations between partners and allies. All of these attempts pre-identified existing fissures in Indian society and foreign policy. The same methodology has already been put to use against the U.S., and as newer fissures emerge during the democratic processes of debates, election manifestoes and actual voting, further foreign information manipulation can take place.
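The "extraordinarily high interactions" signal mentioned above can also be made concrete with a simple outlier test. The sketch below is a minimal Python illustration with hypothetical account names and figures: it flags accounts whose average interaction counts sit far outside the population, using a robust median-based score that, unlike a plain mean-based z-score, is not skewed by the outlier itself.

```python
# Hedged illustration: flag accounts with anomalously high interaction counts
# using a modified z-score (median absolute deviation). Figures are hypothetical,
# not data from the study.
import statistics

avg_interactions = {
    "acct_1": 40, "acct_2": 55, "acct_3": 38, "acct_4": 61,
    "acct_5": 47, "acct_6": 52, "suspect_7": 4200,
}

values = list(avg_interactions.values())
median = statistics.median(values)
mad = statistics.median(abs(v - median) for v in values)  # median absolute deviation

THRESHOLD = 3.5  # conventional cut-off for the modified z-score
for account, count in avg_interactions.items():
    modified_z = 0.6745 * (count - median) / mad  # robust to the outlier itself
    if modified_z > THRESHOLD:
        print(f"{account}: {count} avg interactions (modified z = {modified_z:.1f})")
```

On this sample the test flags only suspect_7; a mean-based z-score would miss it, because the outlier inflates the mean and standard deviation it is measured against.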

The process of information manipulation proceeds in stages: mapping the target audience, political segmentation, identifying the target audience's adversaries, finding echo chambers and identifying data voids, developing competing narratives, and leveraging conspiracy-theory narratives, among a long list of other techniques. Each of these stages was witnessed in India's and Taiwan's elections, and together they can serve as a checklist when auditing a suspected campaign, as sketched below.
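A minimal sketch of that checklist idea, assuming a simple stage model: the stage names follow the list above, while the campaign record and its indicators are hypothetical, meant only to show how analysts might track how far along a campaign is.

```python
# Hypothetical checklist model of the manipulation stages described above.
STAGES = [
    "map target audience",
    "political segmentation",
    "identify target-audience adversaries",
    "find echo chambers / identify data voids",
    "develop competing narratives",
    "leverage conspiracy-theory narratives",
]

# Hypothetical observations about one campaign: which stages show evidence.
observed = {
    "map target audience": True,
    "political segmentation": True,
    "identify target-audience adversaries": True,
    "find echo chambers / identify data voids": True,
    "develop competing narratives": True,
    "leverage conspiracy-theory narratives": False,
}

def campaign_progress(observed):
    """Return the stages with observed evidence, in playbook order."""
    return [stage for stage in STAGES if observed.get(stage)]

done = campaign_progress(observed)
print(f"Campaign exhibits {len(done)}/{len(STAGES)} stages:")
for stage in done:
    print(" -", stage)
```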

Given that India's elections have already taken place, a few lessons can be drawn from the exercise so that other democracies can understand the challenges that lie ahead. For India, Taiwan's electoral experience was pertinent, and India strengthened its fact-checking machinery accordingly. To combat this newer arena of non-traditional security threats, democracies need to share experiences and align their interests against malicious actors while preserving the essence of their own respective democracies.

Disclaimer: This paper is the authors' individual scholastic contribution and does not necessarily reflect the organisation's viewpoint.

About the authors:

Dr. Sriparna Pathak is an Associate Professor of China Studies at O.P. Jindal Global University in India.

Dr. Nishit Kumar is a New Delhi-based independent scholar.

Mr. Divyanshu Jindal is a New Delhi-based international relations analyst.

Mr. Nikhil Parashar is the operations head at ThinkFi, India.