India’s tougher AI social media rules spark censorship fears

India is tightening the rules on how artificial intelligence is used on social media.

The new regulations will take effect on February 20, the final day of a major AI summit in New Delhi. The goal, the government says, is to fight the growing wave of AI-generated misinformation online.

With more than a billion internet users, India has become one of the biggest battlegrounds for fake content created using AI.

Faster takedowns

Under the new rules, platforms like Instagram, Facebook, and X will now have just three hours to comply with government takedown orders.

Previously, they had 36 hours, making this a dramatic tightening of the deadline.

The government says this will help stop harmful posts from spreading quickly. The rules also require platforms to clearly and permanently label AI-generated or manipulated content, with labels that cannot be removed.

Authorities have also expanded the rules to cover any content “created, generated, modified or altered” by computer tools — not just obvious deepfakes.

Automation and concern

Last year, the government launched an online portal called Sahyog — which means “cooperate” in Hindi — to automate takedown requests sent to social media companies.

Critics say the new three-hour window is so tight that platforms won’t have time for proper human review. Instead, they may rely heavily on automated systems.

The Internet Freedom Foundation warned that platforms could turn into “rapid-fire censors” just to avoid penalties.

Digital rights activists argue that such speed makes fair appeals nearly impossible. Many users, they say, may not even know why their content was removed.

Free speech fears

India, under Prime Minister Narendra Modi, has already faced criticism from rights groups over alleged limits on free speech — something the government denies.

Critics worry the new AI rules could make that worse.

They say the definition of “synthetic content” is broad and could include satire, parody, or political commentary that uses realistic AI tools.

The US-based Center for the Study of Organized Hate warned that the law might push platforms to over-monitor content, leading to “collateral censorship” — where harmless posts are removed just to be safe.

Why the government says it’s necessary

Supporters of the move argue that AI tools have made it easier to spread hate, fake documents, and sexualized images — including of women and children.

Earlier this year, outrage erupted after users employed Grok, the AI chatbot developed by Elon Musk's company, to manipulate images of real people into inappropriate content.

Officials and some analysts say platforms have not acted responsibly enough on their own.

So now, the government is stepping in.

The tension is clear: India is trying to fight dangerous AI-driven misinformation, but critics warn the cure could end up hurting digital freedom.