When AI Becomes a Weapon: The IMF Warns on Systemic Cyber Risk
The IMF's report shifts the frame: cyber risk in the AI era is no longer an operational issue. It's a macro-financial risk.

On May 7, 2026, the International Monetary Fund published a note that should be on the desk of every CISO, CRO, and compliance officer in the European financial sector.
The title is direct: "Financial Stability Risks Mount as Artificial Intelligence Fuels Cyberattacks." The thesis is even more direct: artificial intelligence is transforming cyber risk from an operational problem into a systemic risk for global financial stability.
This isn't a generic alarm. It's a shift in coordinates.
1. What the IMF Report Says
AI dramatically reduces the cost and time needed to identify and exploit vulnerabilities in computer systems. The structural advantage shifts to attackers: discovering and exploiting a vulnerability can happen faster than a patch can be applied.
In a financial system built on shared digital infrastructure — software, cloud services, payment networks — this creates an unprecedented condition: correlated vulnerabilities that can strike many institutions simultaneously.
The IMF warns that extreme losses from cyber incidents could trigger liquidity strains, solvency concerns, and market disruption. This is no longer about damage to individual organizations: these are macro-financial shocks.
2. Three Risks That Change the Nature of the Problem
Systemic risks. When discovery and exploitation scale at machine speed, the implications extend far beyond the single institution hit. Contagion propagates through the digital interconnection of the financial system.
Cross-sectoral risks. The financial sector shares digital foundations with energy, telecommunications, and public services. An AI-powered attack on shared infrastructure can propagate across different sectors that depend on the same technological base.
Risk concentration. Dependence on a small number of software platforms, cloud providers, and AI models amplifies the impact of any exploited vulnerability. A weakness in a widely-used system becomes a systemic vulnerability.
3. The Other Side of the Coin
In March 2026, we analyzed how AI needs crypto-assets to build verifiable trust infrastructure: onchain identity, autonomous micropayments, proof of personhood against digital manipulation.
The IMF report shows us the other side of that same coin. The same advanced models that can build trust infrastructure, in malicious hands, become threat multipliers. These aren't two separate phenomena: they're two faces of the same technological transformation underway.
The AI-blockchain convergence that represents an opportunity for compliance and autonomous agent governance is also the terrain where the next generation of cyber attacks will play out.
4. The Regulatory Response: NIS2 and DORA Already Operative
The good news is that the European regulatory framework has — at least partially — anticipated this evolution.
▶ DORA
The Digital Operational Resilience Act, in force since January 2025, imposes stringent requirements on banks, insurers, CASPs, and other regulated financial entities:
- documented ICT risk management
- incident reporting to competent authorities within 4 hours for major incidents
- Threat-Led Penetration Testing (TLPT) based on real threat scenarios, including AI-powered ones
- technology supply chain risk management
▶ NIS2
Transposed into Italian law by Legislative Decree 138/2024, NIS2 extends cybersecurity obligations to a much broader perimeter (digital service providers, cloud providers, critical infrastructure) with:
- technical and organizational measures proportionate to risk
- supply chain security management
- business continuity
- notification of significant incidents within 24 hours
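The two regimes impose different clocks, and in practice an incident can trigger both at once. A minimal sketch of how the notification windows cited above (4 hours under DORA for major incidents, 24 hours under NIS2 for significant incidents) translate into concrete deadlines; the regime labels and function are illustrative, and the legal texts and implementing RTS remain the authoritative source:

```python
from datetime import datetime, timedelta

# Notification windows as cited in this article; verify against the
# applicable legal texts before relying on them operationally.
NOTIFICATION_WINDOWS = {
    "DORA_major_incident": timedelta(hours=4),
    "NIS2_significant_incident": timedelta(hours=24),
}

def notification_deadline(regime: str, classified_at: datetime) -> datetime:
    """Latest time the first notification is due after classification."""
    return classified_at + NOTIFICATION_WINDOWS[regime]

# An incident classified at 09:30 must be notified under DORA by 13:30
# the same day, and under NIS2 by 09:30 the following day.
classified = datetime(2026, 5, 7, 9, 30)
print(notification_deadline("DORA_major_incident", classified))
print(notification_deadline("NIS2_significant_incident", classified))
```

The point of running both clocks from the same classification timestamp is that incident-response playbooks need a single trigger event feeding every applicable regime, not separate per-regulation processes.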
The IMF recommends that authorities focus on building resilience through supervision and coordination — not treating these developments as purely technical or operational issues. This is exactly the approach NIS2 and DORA impose: cybersecurity as a governance issue, not just an IT one.
5. What to Do Now
AI-powered systemic risk is not a future threat. Offensive capabilities already exist; their diffusion is only a matter of time.
- NIS2/DORA gap analysis: verify control coverage against current regulatory requirements, with particular attention to AI risk management in the ICT supply chain
- Update the threat model: include AI-assisted attack scenarios in risk assessments and TLPTs
- Business continuity: test operational continuity plans against systemic incident scenarios, not just isolated incidents
- ICT supply chain governance: map dependencies on cloud providers, software vendors, and AI models; every concentration is a risk amplification point
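The dependency-mapping exercise above can start very simply: list each critical service and the provider it runs on, then score how concentrated that map is. A minimal sketch using a Herfindahl-style index, where the services and provider names are entirely hypothetical:

```python
from collections import Counter

# Hypothetical dependency map: critical service -> provider it runs on.
# In a real exercise this comes from the asset and vendor registers.
dependencies = {
    "payments": "cloud_a",
    "core_banking": "cloud_a",
    "risk_engine": "cloud_a",
    "crm": "cloud_b",
    "fraud_model": "ai_vendor_x",
}

def concentration_index(deps: dict[str, str]) -> float:
    """Sum of squared provider shares: near 1/n when diversified, 1.0 when a single provider carries everything."""
    counts = Counter(deps.values())
    total = sum(counts.values())
    return sum((c / total) ** 2 for c in counts.values())

# Three of five critical services sit on one cloud provider.
print(round(concentration_index(dependencies), 2))  # → 0.44
```

A score drifting toward 1.0 is exactly the condition the IMF flags: a single exploited weakness in the dominant provider would hit most of the institution's critical services at once.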
The IMF's recommendation — international coordination and active supervision — takes time. Internal preparedness cannot wait.
Is your organization ready for the new AI-powered cyber risk scenarios?
Tomato Blue supports companies and institutions in NIS2 and DORA gap analysis and in building an AI risk governance framework.
Let's talk →