Sukumar Sah
NEW DELHI:
Even as the who’s who of the tech world gathers in New Delhi for the India-AI Impact Summit 2026 (February 16-20), digital security concerns dominate the agenda amid a dramatic rise in AI-enabled cyber threats. According to officials and analysts, the rapid integration of artificial intelligence into cybercrime tools has led to a steep uptick in attacks, with India recording millions of malware incidents and emerging as one of the world’s most targeted digital ecosystems.
Data collated by the Indian Cyber Crime Coordination Centre (I4C) of the Ministry of Home Affairs shows that reported cybercrime cases have risen sharply in recent years: 4.52 lakh in 2021, 9.66 lakh in 2022, and 15.56 lakh in 2023. Between January and April 2024 alone, the National Cyber Crime Reporting Portal recorded over 7.4 lakh complaints, an average of nearly 7,000 reports per day, highlighting the expanding scale of digital threats.
The financial impact has been equally significant. I4C estimates that over Rs 10,300 crore was siphoned off by cybercriminals between 2021 and late 2023.
Taken together, these figures point to a steady escalation in both the volume and sophistication of cybercrime, providing a crucial backdrop to policy discussions at the AI Summit. A key plenary on combating AI-enabled cybercrime, deepfakes, dark web threats and data breaches is set to focus specifically on how governance frameworks must evolve to counter emerging risks without stifling innovation.
The summit has drawn thousands of delegates from more than 100 countries, including leading technology executives from Google, OpenAI, Meta and Nvidia, alongside ministers, regulators and heads of government. French President Emmanuel Macron is scheduled to attend from February 17 to 19 to deepen bilateral cooperation on AI research and governance.
At the heart of the summit’s policy thrust is the recognition that AI governance must extend beyond technological optimism to incorporate robust regulatory and ethical safeguards. Officials say India’s approach will be structured around seven thematic “chakras,” or working groups, designed to translate AI principles into regulatory, economic and security roadmaps.
These themes include data governance, responsible AI deployment, inclusive growth, AI safety and international cooperation — reflecting an effort to balance rapid innovation with protections for citizens, markets and critical infrastructure.
A major concern informing these discussions is what officials describe as the “industrialisation” of cybercrime. According to Rajesh Kumar, Chief Executive Officer of I4C, cyber attacks in 2024 and 2025 have increasingly relied on AI and automation.
Modern cybercrime, he says, has evolved from isolated hacking attempts into organised, assembly-line operations in which malicious software adapts in real time, social engineering techniques are amplified, and deepfake technologies are weaponised to impersonate officials, executives or even family members.
The integration of AI has significantly lowered entry barriers for criminals. Automated phishing kits, ransomware-as-a-service platforms and botnets are now available on illicit marketplaces, allowing individuals with minimal technical expertise to mount sophisticated attacks. Analysts estimate that a substantial share of current cyber incidents now involve some form of AI-driven automation, underscoring the magnitude of the challenge.
For policymakers, the surge in AI-enabled threats serves as both a warning and a catalyst for reform. Government delegates at the summit are expected to advocate a risk-based regulatory framework that imposes stricter oversight on high-impact AI applications — particularly in sectors such as finance, healthcare and national security — while permitting lower-risk innovation within regulatory sandboxes.
Data governance is likely to be another central pillar of discussions. With India’s Digital Personal Data Protection Act now in force, debates will focus on anonymisation standards, cross-border data flows, algorithmic accountability and audit mechanisms. The objective is to ensure that AI systems trained on large datasets adhere to privacy norms and ethical standards without undermining innovation and economic growth.
Capacity-building and public awareness will also feature prominently. Officials acknowledge that law enforcement agencies and businesses must adopt AI-driven detection systems, invest in cyber-skills training and strengthen inter-agency coordination. Public-private partnerships are expected to play a crucial role in sharing threat intelligence and accelerating defensive capabilities.
Given that cybercriminal networks operate across borders, policymakers are expected to explore harmonised legal frameworks, joint investigations and real-time information-sharing mechanisms. India’s broader ambition is to position itself as a rule-shaper in global AI governance, particularly from the perspective of emerging economies.
The stage is thus set for a new phase in AI policy — one that acknowledges the technology’s transformative potential while confronting its growing risks. As digital adoption deepens and cyber threats become increasingly automated and industrial in scale, the challenge before policymakers is clear: to ensure that the promise of artificial intelligence is realised within guardrails that safeguard citizens, economic stability and democratic values.


