By Gavin Roberts | 6th January 2026
The tech industry is currently awash in speculation about whether we’re in an ‘AI bubble’ moment and when, or if, it might burst. Analysts debate valuations, economists examine energy consumption data, and pundits forecast timelines. But these discussions miss a critical opportunity. Rather than endlessly debating if and when, we should be examining what comes after.
So, let’s skip ahead. Let’s imagine it’s 2027, and the AI bubble has already burst. The hype cycle has completed its inevitable correction. The infrastructure build-out has hit the wall of physical and economic reality. What does the world look like, particularly for an industry that bet everything on AI: cybersecurity?
What if?
The infrastructure reality we face in this post-burst landscape is sobering. Data centre electricity consumption had been projected to double by 2030, reaching around 945 terawatt-hours globally, according to Goldman Sachs. U.S. data centres alone were consuming 183 TWh in 2024, representing more than 4% of the country’s total electricity, per the Electric Power Research Institute. The build-out was staggering in its ambition and, as it turned out, its unsustainability.
When the bubble burst, it revealed what industry insiders had been quietly calling “stranded assets.” In Ireland alone, €5.8 billion worth of data centre projects sat with land and permits but no grid capacity to power them.
Across the United States, Europe, and Asia, the story repeated itself: ambitious facilities that couldn’t get connected to the grid, couldn’t meet new environmental regulations, or simply couldn’t justify their operational costs once the venture capital frenzy subsided.
The cybersecurity industry, which had become deeply intertwined with AI infrastructure during the boom years, now faces an existential reckoning.
During the bubble, security experts predicted that by 2025, security operations would move into “full-scale machine-versus-machine warfare” with AI systems engaging in real-time combat against adversarial AI. The industry embraced this vision wholeheartedly. AI dominance within cybersecurity became so pervasive that it represented a point of no return, or so everyone believed.
The dependency stack looked like this:
AI-Powered Threat Detection required continuous compute for real-time analysis of network traffic, user behaviour, and anomaly detection across vast attack surfaces.
Behavioural Analytics needed massive data processing at scale to establish baselines and identify deviations that might indicate compromise.
Automated Response Systems depended on low-latency AI inference to execute defensive actions faster than human operators could type commands.
Cloud Security Posture Management was built entirely on hyperscale assumptions, constantly scanning multi-cloud environments with AI models.
Every major security vendor had pivoted their product lines to emphasize AI capabilities. “AI-powered” became the prefix for every feature in every sales deck.
Organizations decommissioned traditional security tools, viewing them as legacy constraints on their AI-enabled future.
A generation of security professionals trained exclusively on AI-enhanced platforms, never learning to hunt threats manually or write custom detection rules from scratch.
Then the bubble burst. And the dependency became a liability.
The collapse didn’t happen overnight, but when it came, it was comprehensive.
Several factors converged:
Power Constraints Became Insurmountable. Wholesale electricity costs had risen as much as 267% in areas near data centres compared to five years earlier, according to utility company reports. Grid interconnection delays pushed delivery timelines out 5-7 years, while data centres needed to be operational within 18-36 months to satisfy investors. The math stopped working. Power constraints forced the industry to behave less like a real estate business and more like an energy-access business, but by then, the commitments were too deep.
Regulatory Reality Hit Hard. New regulations, like Germany’s mandate for 100% renewable energy for data centres by 2027 with requirements for waste heat reuse, turned facilities built during the boom into instant liabilities. These weren’t facilities that failed due to lack of demand but operations rendered non-compliant, and therefore obsolete, overnight.
The Cost Redistribution Crisis. When regulators like FERC proposed that data centres pay 100% of grid upgrade costs under a participant funding model, the economics that had been barely viable became completely untenable. Venture-backed startups that had promised “AI security for everyone” suddenly faced infrastructure bills that dwarfed their revenue projections.
Concentration Risk Materialized. Four companies—AWS, Google, Meta, and Microsoft—had come to control 42% of U.S. data centre capacity during the boom. When the hyperscalers simultaneously pulled back on expansion plans and raised prices to cover their own exposure, the entire ecosystem felt the shock at once. There was no geographic or corporate diversification to cushion the fall.
In this post-burst world, cybersecurity operations faced several immediate crises:
The Capability Gap. Organizations that had fully transitioned to AI-powered security suddenly discovered that their legacy tools had been decommissioned, their licenses had lapsed, and their staff had forgotten how to use them. The organizational gap between those who understood data and those who secured infrastructure created a critical blind spot. When AI-powered security operations centres went dark or became prohibitively expensive, companies found themselves functionally blind to threats.
The Knowledge Drain. A generation of security analysts who had been trained exclusively on AI-enhanced platforms now struggled with basic tasks. Writing custom Sigma rules, manually investigating log files, and conducting hypothesis-driven threat hunting required skills that had atrophied or never been developed. The industry had outsourced so much cognitive work to AI that the humans in the loop had become passengers rather than pilots.
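To make concrete the kind of skill that atrophied, here is a minimal hand-written Sigma rule of the sort analysts once authored routinely. The log source and field values are illustrative assumptions for a common Windows detection scenario, not drawn from any specific environment or vendor platform.

```yaml
title: Suspicious Encoded PowerShell Command Line
status: experimental
description: Illustrative example - detects PowerShell launched with an encoded command, a common obfuscation technique
logsource:
    category: process_creation
    product: windows
detection:
    selection:
        Image|endswith: '\powershell.exe'
        CommandLine|contains:
            - '-enc'
            - '-EncodedCommand'
    condition: selection
falsepositives:
    - Legitimate administrative scripts that use encoded commands
level: medium
```

Nothing here requires AI infrastructure: the rule is a declarative pattern match that any SIEM with a Sigma converter can evaluate, which is precisely why the skill mattered.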
The Adversary Advantage. Ironically, threat actors adapted faster than defenders. Attackers had always been more agile, and in the post-burst environment, they pivoted to exploit the exact dependencies that had made AI security appealing. They launched attacks timed to coincide with known service degradations, targeted the remaining AI infrastructure with resource exhaustion attacks, and poisoned training data for the security models that organizations desperately tried to keep running on shoestring budgets.
Geographic Fragmentation. Power availability created power-rich and power-poor regions, and cybersecurity capabilities became geographically determined rather than universally available, as infrastructure analysts had warned. Organizations headquartered in regions with reliable power access maintained some AI security capabilities, while those in constrained markets faced a stark choice: relocate critical infrastructure or accept degraded security posture.
Yet the post-burst landscape isn’t apocalyptic. It’s actually forcing a necessary maturation. The industry is being compelled to act like adults rather than adolescents drunk on limitless possibilities.
Hybrid Models Emerge. The most resilient organizations are those that maintained dual capabilities throughout the boom. They kept their traditional SIEM platforms running alongside AI enhancements. They ensured their staff could still write detection rules manually and conduct investigations without automated assistance. These organizations stumbled but didn’t fall.
Edge Computing Returns. Security processing moves back to endpoints and local infrastructure. The pendulum swings from “everything in the cloud” to “process what you can locally, elevate what you must.” This distributed approach proves more resilient when centralized AI services become unavailable or unaffordable.
Right-Sized AI. The emergence of more efficient AI models during the boom years now proves critical. Models that can run inference on modest hardware become the standard rather than the exception. Security vendors that can deliver meaningful capabilities without massive infrastructure requirements find their moment.
Skills Renaissance. Organizations scramble to train (or retrain) their security teams in fundamentals. Universities that had pivoted entirely to “AI for Cybersecurity” curricula now rush to reintroduce courses on manual analysis, system internals, and traditional forensics. The pendulum swings back toward understanding how systems actually work rather than just how to prompt AI to defend them.
Living through the aftermath forces the industry to confront uncomfortable truths:
AI Was Often Unnecessary. Much of what was marketed as “AI-powered security” was actually traditional heuristics with a neural network wrapper. Organizations discover that 70% of their AI security features can be replaced with well-tuned rule-based systems that require a fraction of the infrastructure.
Dependency Is Vulnerability. The rush to consolidate security operations onto AI platforms created single points of failure. Resilience requires redundancy, including redundancy in methodology. The best security programs maintain multiple approaches to the same problem rather than betting everything on the most advanced technology.
Marketing Drove Architecture. Many organizations adopted AI security not because it solved a specific problem but because it was what boards expected to hear and what vendors were selling. In the post-burst world, security architecture decisions are driven by actual threat models and operational requirements rather than buzzwords.
Power Is the Real Constraint. All the venture capital and engineering talent in the world couldn’t overcome the physical reality of power generation and distribution. The industry learned that infrastructure has real constraints and that those constraints must shape technology adoption rather than being ignored in pursuit of competitive advantage.
The post-burst world requires a different approach to cybersecurity architecture:
Design for Degradation. Security systems must have clear fallback modes when AI services become unavailable. This means maintaining traditional detection capabilities, ensuring manual investigation processes remain viable, and regularly testing operations in “AI-unavailable” scenarios.
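A degradation-aware design can be sketched in a few lines. Everything below is a hypothetical illustration, assuming a hosted AI scoring endpoint and a crude local rule set; the names (`score_with_ai`, `SUSPICIOUS_PATTERNS`) are not a real vendor API.

```python
# Sketch of a security check with an explicit fallback mode: prefer the
# AI scoring service, but degrade to static rules when it is unreachable.

SUSPICIOUS_PATTERNS = ("base64 -d", "curl http://", "nc -e", "/etc/shadow")

def score_with_ai(event: str) -> float:
    """Placeholder for a call to a hosted AI inference service."""
    raise ConnectionError("AI inference endpoint unreachable")

def rule_based_score(event: str) -> float:
    """Fallback: pattern matching that needs no AI infrastructure."""
    hits = sum(1 for p in SUSPICIOUS_PATTERNS if p in event)
    return min(1.0, hits * 0.5)

def assess(event: str) -> tuple[float, str]:
    """Try the AI path first; fall back to rules on any service failure."""
    try:
        return score_with_ai(event), "ai"
    except (ConnectionError, TimeoutError):
        return rule_based_score(event), "rules-fallback"

score, mode = assess("wget http://x; cat /etc/shadow | nc -e /bin/sh")
```

The point of the sketch is the shape, not the rules: the fallback path exists, is exercised automatically, and can be tested in “AI-unavailable” drills exactly as the text recommends.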
Energy-Aware Security. Every security tool and process now includes energy consumption in its evaluation criteria. The question isn’t just “does this detect threats effectively” but “what does it cost in power consumption, and is that sustainable?”
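The power side of that evaluation is back-of-envelope arithmetic. The figures below are illustrative assumptions, not benchmarks.

```python
# Back-of-envelope sketch: fold power cost into a tool's evaluation.

def annual_energy_cost(avg_draw_kw: float, price_per_kwh: float) -> float:
    """Continuous average draw (kW) x hours in a year x tariff."""
    return avg_draw_kw * 24 * 365 * price_per_kwh

# e.g. an inference cluster drawing an average of 12 kW at $0.15/kWh
cost = annual_energy_cost(12.0, 0.15)
```

Comparing that figure against the cost of a rule-based alternative drawing a few hundred watts is the kind of question the post-burst procurement process asks by default.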
Documented Dependencies. Organizations maintain clear maps of which security functions absolutely require AI infrastructure versus which are AI-enhanced but can function without it. This documentation becomes as critical as disaster recovery plans.
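Such a dependency map can start as something as simple as a classified inventory. The function names below are hypothetical examples chosen to mirror the categories discussed earlier.

```python
# Minimal sketch of a dependency map: classify each security function
# by whether it hard-requires AI infrastructure, merely benefits from
# it, or is independent of it.

DEPENDENCY_MAP = {
    "behavioural-anomaly-detection": "ai-required",
    "automated-response-triage":     "ai-required",
    "signature-based-ids":           "ai-enhanced",   # still works without AI
    "log-retention-and-search":      "independent",
    "manual-threat-hunting":         "independent",
}

def functions_at_risk(dep_map: dict[str, str]) -> list[str]:
    """Return the functions that go dark if AI infrastructure is lost."""
    return sorted(f for f, dep in dep_map.items() if dep == "ai-required")

at_risk = functions_at_risk(DEPENDENCY_MAP)
```

The value is less in the code than in the exercise: forcing each function into one of the three buckets surfaces the gaps that disaster recovery plans otherwise miss.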
Local Capability Priority. Cloud services are valuable, but critical security functions must be executable locally. Organizations invest in on-premises infrastructure for core security operations, treating cloud AI as an enhancement rather than a foundation.
Continuous Skill Development. Security teams maintain proficiency in both AI-assisted and manual methods. Training programs ensure that analysts can function effectively whether or not AI assistance is available.
The AI bubble’s burst didn’t end innovation in cybersecurity. It ended the illusion that AI alone could solve security problems and that infrastructure constraints could be ignored indefinitely. In this post-burst world, the industry is humbler, and healthier for it.
Organizations that survived the correction are those that treated AI as a powerful tool in a diverse toolkit rather than a silver bullet. They’re the ones that maintained fundamental skills, invested in efficient architectures, and built resilience through redundancy rather than consolidation.
The adversaries haven’t disappeared. Threats continue to evolve. But defenders in this post-burst world approach security with a hard-won wisdom: technology enables security, but it doesn’t guarantee it. Infrastructure has real constraints that must be respected. And the most advanced capability means nothing if it becomes unavailable exactly when you need it most.
We’re now in an era where cybersecurity is defined not by who has the most advanced AI, but by who has the most resilient, sustainable, and diversified security posture. The bubble’s burst forced that evolution. And perhaps that was exactly what the industry needed.
The morning after is always difficult. But it’s also when we often see things most clearly.

Gavin is CTO at Topsec, bringing over 25 years of expertise in Financial IT, Secure Payments, and Email Security. He has dedicated his focus to cybersecurity and email protection, where he continues to drive innovation and deliver secure solutions.