This blog post has been reported on and distorted by a lot of tech news sites using it to wax delusional about AI’s future role in vulnerability detection.
But they all gloss over the critical bit: in fairly ideal circumstances where the AI was being directed to the vuln, it had only an 8% success rate, and a whopping 28% false positive rate!
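To make concrete why those two numbers together are so damning, here's a back-of-the-envelope precision calculation. The scenario is entirely hypothetical (1,000 scanned code paths, a 1% base rate of real vulnerabilities); only the 8% and 28% figures come from the post:

```python
# Hypothetical scenario: 1,000 scanned code paths, 1% truly vulnerable.
paths = 1_000
base_rate = 0.01   # assumed fraction of paths with a real vuln (not from the post)
tpr = 0.08         # 8% success rate, from the blog post
fpr = 0.28         # 28% false positive rate, from the blog post

vulnerable = paths * base_rate   # 10 real vulns in the codebase
clean = paths - vulnerable       # 990 clean paths

true_flags = vulnerable * tpr    # real vulns the model flags
false_flags = clean * fpr        # clean paths the model flags anyway

precision = true_flags / (true_flags + false_flags)
print(f"{true_flags + false_flags:.0f} flags, precision = {precision:.1%}")
```

Under those assumptions you get roughly 278 flagged paths, of which less than one is expected to be a real vulnerability: a triager would wade through hundreds of reports per genuine bug.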
This line annoys me… LLMs excel at making signal-shaped noise, so separating out an absurd number of false positives (and investigating false negatives further) is very difficult. It probably requires that you have some sort of actually reliable verifier, and if you have that, why bother with LLMs in the first place instead of just using that verifier directly?
Trying to take anything positive from this:
Maybe someone with the skills to verify a flagged code path no longer has to roam the codebase for candidates? So while they still do the tedious work of verification, the mundane task of finding candidates is now automated?
Not sure if this is a real-world use case…
As the other comments have pointed out, an automated search for this category of bugs (done without LLMs) would do the same job much faster, with far less compute, and without any bullshit or hallucinations in the way. The LLM isn’t actually a value add compared to existing tools.