🚨 87% — that’s the number worth pausing on.
A University of Illinois study found that GPT-4 could exploit 87% of the 15 one-day vulnerabilities it was tested on. Tasks that once took skilled security researchers days of reverse engineering, the model completed in hours, sometimes minutes.
Now connect this to agents.
When an LLM like GPT-4 is plugged into an agent loop (reading the CVE, generating exploit code, running tests, adjusting when it fails), you essentially have a junior hacker working 24/7. No elite expertise required: the AI automates most of the heavy lifting.
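For the technically curious, that loop can be sketched in a few lines of Python. Everything here is a stand-in of my own (the function names and the toy success condition are illustrative, not from the study), but it shows the generate-test-refine pattern that makes the automation work:

```python
# Hypothetical sketch of an agentic generate-test-refine loop.
# All callables are toy stand-ins, not a real exploit pipeline.

def agent_loop(advisory, draft, test, max_attempts=5):
    """Draft an attempt, test it, and retry with feedback on failure."""
    feedback = None
    attempt = None
    for _ in range(max_attempts):
        attempt = draft(advisory, feedback)   # LLM call in a real agent
        ok, feedback = test(attempt)          # run it, capture failure info
        if ok:
            return attempt, True
    return attempt, False

# Toy stand-ins: the "test" succeeds once the draft includes a hinted token.
def toy_draft(advisory, feedback):
    return advisory if feedback is None else advisory + " " + feedback

def toy_test(attempt):
    return ("patched-path" in attempt, "patched-path")

result, ok = agent_loop("CVE-XXXX advisory text", toy_draft, toy_test)
```

The point isn't the toy logic; it's that the retry-with-feedback structure needs no human in the loop.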
The implication: the defender’s window between disclosure and exploitation is collapsing toward zero. The next step is clear: defenders will also need agents on their side, to analyze, patch, and respond at machine speed.
This is just one insight from Radware’s new Internet of Agents report. I’m sharing it here as a datapoint worth thinking about.
👉 Do you see defensive AI agents becoming part of your future security stack — or are you still relying mainly on human processes today?