Oct 2, 2025
I enjoy posts like this deep dive from Joshua Rogers on “AI Security Engineers”: amidst so much noise, they show the value agents are adding at the frontier. Josh finds the tools generally useful, and gives a good teardown in the post. I’m not quite convinced the tools are ready for prime time; there are a few too many obvious gotchas outlined here (e.g. monorepo support, vulnerability to prompt injection). I have to admit, though, that I’m cheering for this class of tooling from the sidelines. This type of cheap, ubiquitous, and performant vulnerability-style scanning, which seems like it could be in our adjacent future, would be a major boon for the industry. Interesting as well that the technical approach these agents take seems to be to use existing tooling, crank up the level of permissiveness to generate a large result set containing lots of false positives (but likely some hits as well), then use the LLM to sort through those results and filter signal from noise.
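That “permissive scan, then LLM triage” pattern can be sketched in a few lines. Everything below is illustrative, not from Josh’s post: the finding shape, the fake scanner output, and the heuristic standing in for the LLM call are all invented for the sketch.

```python
# Hypothetical sketch of the pattern: run an existing scanner with noisy,
# low-confidence rules enabled, then have a model triage the raw findings.
from dataclasses import dataclass


@dataclass
class Finding:
    rule_id: str
    file: str
    snippet: str


def permissive_scan() -> list[Finding]:
    # In practice this would shell out to a SAST tool with permissive
    # settings; here we fake a result set with one real hit and two
    # plausible false positives.
    return [
        Finding("sql-injection", "app/db.py",
                'cur.execute("SELECT * FROM t WHERE id=" + uid)'),
        Finding("sql-injection", "app/db.py",
                'cur.execute("SELECT * FROM t WHERE id=%s", (uid,))'),
        Finding("hardcoded-secret", "tests/fixtures.py",
                'PASSWORD = "dummy"'),
    ]


def llm_triage(finding: Finding) -> bool:
    # Placeholder for an LLM call that labels a finding as a likely true
    # positive. A crude heuristic stands in for the model here.
    if finding.rule_id == "sql-injection":
        # String-concatenated query: likely real; parameterized: noise.
        return "+" in finding.snippet
    if finding.rule_id == "hardcoded-secret":
        # Secrets in test fixtures are usually noise.
        return not finding.file.startswith("tests/")
    return False


def scan_and_filter() -> list[Finding]:
    # The whole pipeline: cast a wide net, then filter signal from noise.
    return [f for f in permissive_scan() if llm_triage(f)]
```

The economics hinge on the second stage: the scanner’s false-positive rate can be high as long as the per-finding triage cost is low.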
Hacking with AI SASTs: An overview of 'AI Security Engineers' / 'LLM Security Scanners' for Penetration Testers and Security Teams