The Chicken or the Egg: Securitization of AI or AI-fication of Security?
What was once portrayed only in dystopian movies about killer robots seems increasingly realistic on today’s battlefields: disruptive technologies, such as artificial intelligence (AI), have started to alter the nature of warfare significantly. As Henna Virkkunen, EU Commissioner for Digital and Frontier Technologies and Commission Vice-President, put it at the adoption of the EU defence package in November 2025: “The war in Ukraine clearly demonstrates how fast defence technologies evolve and how frontier technologies provide rapid tactical change on the battlefield. The EU needs a fundamental change of mindset at all levels.” Agility, speed, collaboration, and risk-taking, she argued, must become the new normal in European defence capability development.
Long cast as a normative power whose legal competences stopped short of hard security and defence, the EU increasingly presents itself as a geopolitical actor pursuing strategic autonomy and technological sovereignty. While these concepts remain vaguely defined, they have become tightly bound to defence readiness and military innovation. Russia’s war against Ukraine, mounting transatlantic frictions beyond the NATO context, and growing pressure for Europe to become more independent have spurred these ambitions and highlighted the need for the EU to step up its commitments in security and defence. For its upcoming Multiannual Financial Framework (MFF) 2028-2034, for example, the EU already foresees a long-term budget of around 131 billion EUR, five times more than in its previous budget, for building a “European Defence Union” that helps to “boost EU defence readiness against rising global tensions” and enhance its capabilities in cybersecurity, dual-use infrastructure and defence.
In this current discourse, AI becomes increasingly securitized. With AI stylised as an existential threat that fundamentally changes the nature of warfare, urgent investment in military AI-powered capabilities, such as drones, missile defence and decision-support systems, is presented as essential for Europe to defend itself in future conflicts. The EU increasingly justifies its expanding role in the military domain by framing it as a strategic necessity. The window of opportunity to build up these capabilities, potentially decisive for Europe’s geopolitical fate, is considered to be “very narrow, as strategic competitors and rivals are heavily investing in these areas”. Thus, when Virkkunen called for at least 10 percent of EU funding to be directed toward disruptive technologies, such as AI, the subtext was clear: this is about power as much as it is about economic growth.
Where earlier discourses surrounding AI emphasised its potential as an economic cash cow, the current narrative foregrounds geopolitical competition, strategic military importance and, ultimately, Europe’s survival in future conflicts. Calls to step up investment in defence-related technologies are not presented as innovation policy choices to boost the European economy, but as strategic imperatives. AI becomes something Europe must develop, not (only) because it promises economic growth, but because adversaries are doing so too, and future wars are imagined to be inevitably algorithmic, autonomous, and AI-powered. The result is a powerful securitization logic that legitimizes increased EU activity and funding in an area that traditionally lay outside its formal competences.
While traditional arms production was inherently organized top-down by states, requiring specific regulation for production, close monitoring of quality and quantity, and export checks, current developments seem to erode that model. Today’s military innovations are increasingly fuelled by private investors. A new generation of defence-tech start-ups is emerging, often backed by venture capital rather than defence ministries, developing AI-enabled military technologies at commercial speed. Instead of waiting for governments to express demands, procure and finance projects, private investors are funding prototypes, research, and testing in anticipation that states will later adopt these systems. The logic mirrors the civilian tech sector: move fast, scale quickly, and shape demand rather than respond to it. AI, with its inherently dual-use nature, makes this possible because many core technologies, such as generative AI and image processing tools, are widely accessible, relatively cheap to develop, and easily repurposed for military use. This bottom-up dynamic promises speed and adaptability that traditional procurement regimes struggle to match, but it also introduces new vulnerabilities: instead of strategic and security coherence, profit and the appetite for ever more innovation become the primary drivers.
In this context, since Russia’s full-scale invasion, Ukraine has become a real-life test lab for AI-powered military systems, having been labelled the “drone capital of the Western world” and the “new Silicon Valley of defence tech”. In the war in Ukraine, roughly 80 percent of targets are currently reported to be destroyed by drones, many of them developed and adapted rapidly using commercially available components and software. When military advantage depends more on technological adaptability, iteration speed, and private investment in defence innovation, we are witnessing a surge in the AI-fication of security and defence. Technologies that originated as civilian products were monetised at scale first and militarised later. This raises concerns about misaligned priorities between private profitability and strategic security objectives, unregulated diffusion of military technology, and the expansion of a defence-industrial ecosystem that is only partially accountable to public authorities.
Just like with the chicken and the egg, it is difficult to say what came first – the securitization of AI or the AI-fication of security. But also just like with the chicken and the egg, the two are not competing explanations; they condition one another. Securitizing political narratives about strategic necessity justify increased investment in military AI, while the rapid emergence of AI-driven military technologies reinforces the perception that Europe must act quickly or fall behind. This creates a self-accelerating spiral, reflecting what has often been called the “AI-race to the bottom”: as AI becomes more central to defence, calls for regulatory restraint appear increasingly unrealistic, especially vis-à-vis the ambitions and recklessness of potential rivals.
As tech start-ups fuelled by private capital reshape military innovation, and are able to test their systems unregulated in real time on real battlefields, states ultimately have to adapt their policies to accommodate them. The result is a security landscape where urgency replaces deliberation and, most likely, regulation. For the EU, this poses a profound governance challenge: strategic autonomy cannot simply mean keeping pace with rivals in an AI arms race, nor can technological sovereignty be reduced to subsidising defence-tech ecosystems.
Sina Hoch is a PhD candidate at the University of Amsterdam (UvA) in the NWO Vici–funded RegulAite project. Her research focuses on the EU’s role as a global actor in AI governance across different international fora, including the Council of Europe and NATO. Her latest article is “EU Influence In Global AI Governance and Its Limits” (RegulAite Working Paper 01/2025).
