Celebrating Safer Internet Day 2026 by Strengthening Digital Safety for Children


AI now influences what children see, learn, and share online, making digital safety a core responsibility of platforms and policymakers.

As digital platforms and AI-powered technologies become an integral part of children’s everyday lives, ensuring their online safety has never been more critical. On Safer Internet Day 2026, the focus turns to creating a digital world where innovation goes hand in hand with protection for the youngest users. In conversation with Hans India, Anand Vishwakarma, Executive Director, ChildFund India, shares insights on the responsibilities of technology platforms, the need for stronger collaboration, and the urgent steps required to make AI-driven spaces safer for children.

1. As AI-driven platforms increasingly shape children’s online experiences, what responsibility do technology companies and digital platforms have in ensuring children’s safety online?

Technology companies should act as the first line of defence for children. AI-powered features such as algorithmic recommendations, autoplay content, and smart friend suggestions play a significant role in shaping what children see and who they interact with online. As children spend increasing amounts of time on digital platforms, companies must adopt child-safe defaults, ensure greater transparency in how algorithms function, and strengthen mechanisms for faster detection and removal of abusive content. Ethical AI design and proactive risk assessments are critical to preventing unintended exposure to harmful interactions and content. These companies should also run campaigns to make parents aware of privacy settings and parental controls that can prevent exposure to inappropriate content.

2. How can collaboration between technology companies, civil society organisations, and regulators strengthen online safety for children in India, particularly in the context of emerging AI technologies?

Protecting children online requires a multi-stakeholder approach. Collaboration between technology platforms, regulators, and civil society organisations can help ensure that AI systems support child protection rather than deepen the invisibility of abuse. Shared reporting mechanisms, clearer accountability frameworks, and co-regulation models can improve responsiveness, while civil society organisations play a key role in bringing ground-level evidence to inform policy and platform design.

There is also an increasing need for the government and regulators to frame strategies, frameworks, and policies, monitor the situation closely, and create child-friendly platforms to fast-track child abuse cases, working in close partnership with technology companies throughout.

3. Despite high digital access, why does online sexual exploitation of children remain so underreported in India?

There is a significant gap between experience and reporting. As per the OSEAC Report, while 18% of parents acknowledged that their child had faced online sexual exploitation, fewer than half (45%) reported the incident to authorities. Fear of stigma, shame, social isolation, and lack of trust in law enforcement were consistently cited as barriers to reporting. Children often hesitate to disclose such experiences due to embarrassment or fear of being blamed, suggesting that the actual prevalence is likely much higher than reported figures. Many parents also choose informal responses, such as blocking perpetrators or restricting internet use, further obscuring the scale of the problem.

4. What digital behaviours are putting Indian children most at risk of online sexual exploitation today?

Digital access among children is widespread and increasingly frequent, with many spending extended periods online, often without adequate supervision. Popular platforms such as YouTube, WhatsApp, and Instagram, along with online gaming platforms with live chat features, present well-documented grooming risks. A concerning number of children engage in risky online behaviours, including befriending strangers, meeting online contacts offline, or sharing intimate personal information. These vulnerabilities are further intensified by low levels of parental monitoring and limited exposure to structured digital safety awareness programmes.

5. What safeguards do you think are critical as AI becomes more embedded in children’s digital lives, from social media and gaming to learning platforms?

There is a clear need for stronger platform accountability, particularly as AI tools can accelerate grooming and sextortion. Significant gaps in parental capacity persist, with many caregivers unfamiliar with basic digital safety tools, privacy settings, and reporting mechanisms. Mandatory digital safety education in schools is a critical safeguard, alongside improved reporting systems, trained law enforcement, and closer collaboration with technology companies to ensure that AI systems are designed with child protection at their core.

6. On Safer Internet Day, what gaps do you think exist in how India is protecting children online, and what needs urgent attention?

While legal frameworks such as POCSO and the IT Act are in place, enforcement varies across geographies due to capacity differences, limited cybercrime training, fragmented inter-agency coordination, and challenges in digital evidence collection. At the same time, underreporting means authorities see only the tip of the iceberg. The absence of child-friendly and anonymous reporting mechanisms continues to discourage timely reporting and increases the risk of re-victimisation. This calls for immediate action through mandatory digital safety education starting at schools and homes, stronger collaboration with technology platforms, community-level “Digital Safety Champions,” and enhanced capacity building for law enforcement agencies.

As India marks Safer Internet Day 2026, the message is clear: protecting children online can no longer be reactive or fragmented. As AI becomes more deeply embedded in digital platforms, safety must be designed into systems from the start. Ensuring safer digital spaces for children will require sustained collaboration between technology companies, policymakers, educators, parents, and civil society, because the future of the internet must be built with children at its centre.
