An OECD paper, Trends in AI Incidents and Hazards Reported by the Media, finds that, as global AI adoption accelerates, reported incidents and hazards rose from 92 to 324 per month between 2022 and 2025. Using the OECD.AI Incidents and Hazards Monitor (AIM), the study analyzes media coverage across 14 thematic clusters—including synthetic media, child safety, and cyberattacks—to identify how public and policy attention to AI risks is evolving.
While the total volume of reports has surged, their share of overall AI-related news has slightly declined, from 3.2% in 2022 to around 2.5% in 2025. The analysis distinguishes three coverage patterns: increasing (e.g., child safety, fraud), intermittent (e.g., election interference, LLM-related spikes), and decreasing (e.g., autonomous vehicles, privacy).
Strategic Trends in Global and Indian AI Risk Coverage
The AIM data identifies several critical foundational pillars that define the current global AI risk landscape:
Proliferation of Synthetic Media: Reports have grown 2.5-fold since 2022 and now account for 14% of recorded incidents. A notable peak in November 2023 was driven by deepfake videos targeting Indian celebrities, an incident covered by 853 news outlets.
Emerging Threats to Child Safety: The share of reports related to child safety doubled by 2025, frequently highlighting AI-generated child sexual abuse material (CSAM) and inappropriate content targeting children in India and globally.
Cyberattacks and Financial Fraud: AI-enabled incidents linked to financial manipulation, phishing, and scams increased by a factor of 2.7, reaching nearly 10% of total reports by late 2025.
Event-Driven Spikes in LLM and Election Risks: Coverage of Large Language Model (LLM) risks, such as hallucinations and misuse, surged eightfold following the release of ChatGPT. Election interference reports peaked in February 2025, significantly influenced by AI-generated deepfakes targeting Indian politicians during the 2024 global “super year” of elections.
Societal and Labour Market Disruption: Reports tied to AI-driven automation and layoffs have risen steadily. In India, YouTube’s AI algorithms were notably reported to have amplified hate speech against specific communities and women, exacerbating societal divisions.
What are “AI Incidents” and “AI Hazards” according to the OECD framework?
An AI incident is an event where the development, use, or malfunction of one or more AI systems directly or indirectly leads to realized harm, such as physical injury, disruption of critical infrastructure, or human rights violations. In contrast, an AI hazard is an event or circumstance that could plausibly lead to an incident but has not yet resulted in actual harm. By tracking both, the OECD and GPAI provide a consistent reporting framework for developers and policymakers to monitor and mitigate emerging risks across global jurisdictions.
Policy Relevance
The OECD findings mark a transition from speculative AI risk management to data-driven, evidence-based global oversight, with several implications for India:
Safeguarding Democratic Integrity: The spike in election interference reports provides a mandate for MeitY to strengthen regulations against deepfakes and AI-driven misinformation during national and regional polls.
Countering Algorithmic Hate: Reports of AI-driven hate speech amplification in India necessitate a shift in the Digital India mission toward mandating algorithmic transparency for social media platforms.
Protecting Vulnerable Groups: The doubling of child safety incidents underscores the need for the Ministry of Women and Child Development to develop indigenous safety benchmarks to protect Indian youth from radicalization and CSAM.
Regulating Biometric Surveillance: As AI-powered facial recognition is increasingly deployed in India, global trends of decreasing media attention toward privacy must not lead to a regulatory “blind spot” in domestic surveillance oversight.
Re-skilling the Labour Force: Steady growth in labour market disruption reports highlights the role of the Ministry of Labour in designing proactive re-skilling programs for sectors vulnerable to automation.
Follow the full report here: OECD: Trends in AI incidents and hazards reported by the media