Four days before the results of the Lok Sabha elections were to be announced, OpenAI made a stunning announcement, claiming it had thwarted clandestine efforts to manipulate Indian elections using its AI models.
A threat intelligence report by OpenAI claims that a for-hire Israeli firm “began generating comments that focused on India, criticised the ruling BJP party and praised the opposition Congress party.”
The activity focused on the Indian polls was flagged in May, the report says, adding that the “network was operated by STOIC, a political campaign management firm in Israel”.
The OpenAI report describes campaigns that used AI in covert operations to sway public opinion or influence political outcomes.
“While we observed these threat actors using our models for a range of IO (influence operations), they all attempted to deceive people about who they were or what they were trying to achieve,” the report states.
The report goes on to say that content for the covert operations was created and edited using a number of identities run out of Israel, with videos posted on YouTube, X, Facebook, Instagram, and other websites.
“In early May, it (the network) began targeting audiences in India with English-language content,” the report claims.
Founded in December 2015, OpenAI is a research organisation focused on artificial intelligence.
Reacting to the report, Union Minister of State for Electronics and Information Technology Rajeev Chandrasekhar said, “It is absolutely clear and obvious that BJP was and is the target of influence operations, misinformation and foreign interference, being done by and/or on behalf of some Indian political parties.”
He called it a “dangerous threat” to the country’s democracy.
“It is clear: vested interests in India and outside are clearly driving this and needs to be deeply scrutinised/investigated and exposed,” Rajeev Chandrasekhar said.
“My view at this point is that these platforms could have released this much earlier, and not so late when elections are ending,” he added.
Separately, Meta said in a report that it had removed multiple Instagram profiles, pages, and groups directed at the Sikh community in India, Australia, Canada, the UK, New Zealand, Pakistan, and Nigeria. The report states that the network originated in China.
“The operation used compromised and fake accounts to pose as Sikhs, post content and manage Pages and Groups,” the report claims.
It further claims the network created a “fictitious activist movement called Operation K” that called for pro-Sikh protests.
“We found and removed this activity early, before it was able to build an audience among authentic communities,” the report mentions.
The network posted news content primarily in English and Hindi, using images “likely manipulated by photo editing tools or generated by artificial intelligence”.
Content that Meta highlighted included posts about floods in the Punjab region, the global Sikh community, the Khalistan movement, the killing in Canada of Hardeep Singh Nijjar, whom India has designated a terrorist, and criticism of the Indian government.



