3-hr deadline for SM platforms to remove flagged AI content
New Delhi: The Centre has set a three-hour deadline for social media platforms to take down AI-generated deepfake content flagged by it or the courts.
In its latest order, the Union government has mandated social media platforms to prominently label AI-generated content and urged them to embed synthetic content with identifiers. The order said social media platforms cannot allow removal or suppression of AI labels or metadata once applied.
The Centre has notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, that formally define AI-generated and synthetic content. The new rules will come into effect from February 20 this year.
The amendments define “audio, visual or audio-visual information” and “synthetically-generated information”, covering AI-created or altered content that appears real or authentic. Notably, routine editing, accessibility improvements, and good-faith educational or design work are not included in the definition.
The main changes include treating synthetic content as ‘information’, so that AI-generated content is treated on par with other information when determining unlawful acts under the IT rules.
The gazette notification issued by the Ministry of Electronics and Information Technology (MeitY) states that social media platforms must act on government or court orders within three hours, down from the earlier 36 hours. User grievance redressal timelines have also been shortened.
As per the rules, labelling of AI content is mandatory: platforms that aid the creation or sharing of synthetic content must ensure such content is clearly and prominently labelled and, where technically feasible, embedded with permanent metadata or identifiers.
Pressing for a ban on illegal AI content, the order said platforms must deploy automated tools to prevent AI content that is illegal, deceptive, sexually exploitative, or non-consensual, or that relates to false documents, child abuse material, explosives, or impersonation.
Earlier, Union Minister for IT Ashwini Vaishnaw had said, “In Parliament as well as many other fora, people have demanded that something should be done about the deepfakes which are harming society, people using somebody’s, some prominent person’s image and creating deepfakes which are then affecting their personal lives, privacy, as well as the various misconceptions in society.
“So the step we’ve taken is to make sure that users get to know whether something is synthetic or real. Once users know, they can take a call in a democracy. But it’s important that users know what is real. That distinction will be led through mandatory data labelling.”
The IT Ministry had said: “Recent incidents of deepfake audio, videos and synthetic media going viral on social platforms have demonstrated the potential of generative AI to create convincing falsehoods — depicting individuals in acts or statements they never made. Such content can be weaponised to spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud.”