Speaking at FIA Expo on 17 November, Jen Easterly, the former director of the Cybersecurity and Infrastructure Security Agency (CISA), said the speed of technological change has reduced the timeline for addressing long-standing weaknesses across critical infrastructure, financial markets and enterprise IT. Easterly argued that AI is both the most powerful defensive tool available to US institutions and the fastest-moving threat.
“AI is the most powerful technology of our lifetime. It will change pretty much everything,” Easterly said. She added that better AI will let defenders spot attacks earlier, find flaws faster and even automate responses. Its biggest impact, she said, may be finally rewriting the old code that underpins US critical infrastructure – a job long considered too costly and risky.
“We don’t actually have a cybersecurity problem; we have a software quality problem,” she said. “Much of the critical infrastructure we depend on was built on flawed, defective code. AI finally gives us the ability to transform that at scale.”
But she cautioned that adversaries are already using AI to accelerate and expand cyberattacks. Phishing emails are now nearly flawless, malware is harder to detect, and recent threat-intelligence disclosures show state-linked actors experimenting with AI models to chain together stages of an intrusion. “Attackers are going to use AI as well,” she said. “The offence-defence dynamic does not go away.”
With frontier models advancing quickly, Easterly said the US can no longer rely on voluntary guardrails set by tech companies themselves – especially when the incentives of private AI developers differ dramatically from those of the nation-state entities that once controlled other strategic technologies.
“These will be the most powerful weapons of our lifetime,” she said. “Nuclear weapons were built and safeguarded by governments disincentivised to use them. AI is being built by private companies that answer to investors. Self-policing isn’t enough.”
Easterly urged the US government to adopt a harmonised federal approach to AI oversight, warning that the current patchwork of rules risks creating severe burdens for regulated firms while offering little real protection.
She cited the clash between CISA and the Securities and Exchange Commission in 2022, when both agencies pursued competing cyber incident-reporting rules with different requirements and timelines. The result, she said, was “regulatory chaos” that forced companies into box-checking exercises rather than real risk reduction.
“The problem isn’t regulation; the problem is dumb, sloppy regulation,” she said. Any US AI framework, she argued, should follow principles similar to the EU AI Act and emerging state-level proposals such as California’s SB-53, which organise oversight based on risk tiers. She also called for a software liability regime to shift responsibility back to vendors whose insecure products underpin critical systems.
“Instead of blaming victims for not patching their software, or blaming the intern for downloading the malicious file or getting phished into giving up their password, I really think we need to demand more from our vendors,” Easterly said. “We need to demand accountability from vendors and ensure technology is secure by design.”
Easterly emphasised that organisations should no longer expect to prevent every attack, particularly as the world becomes more digitised. With 5.7 billion global social-media users, billions of connected devices and millions of data transactions every minute, the attack surface is too large for perfect protection.
“Disruption will happen,” she said, noting that it can come not only from cyberattacks but also from technology failures, weather events that disrupt businesses, physical attacks and new infectious diseases.
“Anybody in business today needs to recognise that disruption will occur and be building systems and data, and training people, so that they are prepared for that,” she said.
She warned that companies must avoid the “failure of imagination” that contributed to the unpreparedness exposed by the 9/11 attacks. The rise of AI, she said, demands scenario planning not only for the probable but also for the extreme “worst case scenarios”, especially in sectors like finance, where outages can trigger cascading economic effects.
Beyond AI, Easterly pointed to quantum computing as the next looming cybersecurity challenge. While today’s machines cannot yet break modern encryption, she said, adversaries are stockpiling encrypted data now with the expectation they will decrypt it later once cryptographically relevant quantum computers arrive.
To explain the stakes, Easterly compared a classical computer solving a Rubik’s Cube one turn at a time to a quantum machine exploring millions of possible solution paths simultaneously. “Think about solving the cube in a trillionth of a second,” she said.
With the US National Institute of Standards and Technology publishing the first finalised quantum-safe algorithms in August 2024, Easterly said businesses must begin the transition immediately, as it is a resource-intensive process that can take years.
“Any critical infrastructure entity that is not already looking at that and swapping out the encryption with quantum-safe algorithms needs to get on it right now,” she said. “Any business that is not already on the road to doing the transition is behind the tower.”
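To make that swap concrete, the sketch below shows what a quantum-safe key exchange can look like in code. It is an illustrative example rather than anything presented at the talk: it assumes the open-source liboqs-python bindings from the Open Quantum Safe project and NIST’s ML-KEM-768 key-encapsulation standard (FIPS 203, the lattice scheme formerly known as Kyber).

```python
# Minimal sketch of post-quantum key encapsulation with ML-KEM (FIPS 203),
# assuming the liboqs-python bindings from the Open Quantum Safe project.
# The algorithm name and API are illustrative of recent liboqs releases.
import oqs

ALG = "ML-KEM-768"  # NIST-standardised lattice-based KEM (formerly Kyber768)

with oqs.KeyEncapsulation(ALG) as receiver:
    # Receiver generates a post-quantum key pair and shares the public key.
    public_key = receiver.generate_keypair()

    with oqs.KeyEncapsulation(ALG) as sender:
        # Sender encapsulates a fresh shared secret against that public key.
        ciphertext, secret_at_sender = sender.encap_secret(public_key)

    # Receiver decapsulates the ciphertext to recover the same secret,
    # which can then drive ordinary symmetric encryption of traffic.
    secret_at_receiver = receiver.decap_secret(ciphertext)

assert secret_at_sender == secret_at_receiver
```

In practice, early adopters typically run such a scheme in hybrid mode alongside a classical elliptic-curve exchange, which speaks to the “harvest now, decrypt later” threat Easterly described: traffic recorded today remains protected even once cryptographically relevant quantum computers arrive.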
Easterly’s message was blunt: the convergence of AI, quantum computing and deeply entrenched software vulnerabilities is reshaping global cyber risk far faster than existing governance structures can respond. Without action, she warned, the US risks facing simultaneous, cascading failures driven by adversaries who are innovating just as quickly.
“There will be mass disruption,” she said. “Our job now is resilience – planning for the worst case, not hoping it will not happen.”