I’ve been an AI Security Researcher for over a decade. And I just attended my first Black Hat. Here are my thoughts:
This year was all about AI and security, especially putting AI into security tools. The question I posed to every one of the many vendors I spoke with: who is modeling threats to the AI you claim can be trusted with literally securing the enterprise?
I talked to I don’t know how many vendors promising either AI or Agentic solutions who had NO IDEA how their AI worked. Or how any AI works, for that matter. You are all very focused on APIs, and it’s evident you spent exactly zero hours learning the AI components of your systems. Oof. Do better.
What’s more, so many of you were promising that your “AI Agents” can reduce SOC hours and analyst fatigue, or take on other sensitive security data analysis.
Not ONE of you could tell me what security was in place for these agents you’re planning to deploy as highly privileged identities in someone else’s enterprise data.
Not. ONE. Zero. Zip. Nada. None.
Protip: “We are a security company so of course our Agents are secure” is not gonna cut it anymore.
You want our business? We want specs.
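For what it’s worth, here is one sketch of the kind of spec a buyer could reasonably ask to see: a deny-by-default allowlist that gates every agent tool call and audit-logs the decision. Everything below (AgentPolicy, run_tool, the tool names) is hypothetical and only illustrates the shape of the control, not any vendor’s product.

```python
# Illustrative sketch only: a deny-by-default gate for agent tool calls.
# All names here are made up for the example, not any real product's API.
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")


@dataclass
class AgentPolicy:
    # Explicit allowlist of (tool, scope) pairs; anything not listed is denied.
    allowed: set[tuple[str, str]] = field(default_factory=set)

    def permits(self, tool: str, scope: str) -> bool:
        return (tool, scope) in self.allowed


def run_tool(policy: AgentPolicy, tool: str, scope: str, payload: str) -> str:
    """Gate every tool call through the policy and audit-log the decision."""
    if not policy.permits(tool, scope):
        log.warning("DENIED tool=%s scope=%s", tool, scope)
        raise PermissionError(f"agent not authorized for {tool}:{scope}")
    log.info("ALLOWED tool=%s scope=%s", tool, scope)
    # ... dispatch to the actual tool implementation here ...
    return f"ran {tool} on {scope}"


# Example: a SOC triage agent may read alerts and enrich IOCs,
# but nothing in this policy lets it touch firewall rules.
policy = AgentPolicy(allowed={("read_alerts", "soc"), ("enrich_ioc", "threat-intel")})
print(run_tool(policy, "read_alerts", "soc", "alert-1234"))
```

If a vendor can’t describe something at least this concrete for scoping, logging, and revocation of their agents’ privileges, that tells you plenty.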
Finally, a soft-skills note: If you see someone who looks like me, maybe don’t assume they’re non-technical? Saying “do you have some interest in this field” when a girl hacker approaches your booth is maybe not the look.
Assumptions like that are the reason I got bored with breaking your physical systems when I was on the red team—your preconceived projections made walking right in too easy. Just saying.
Back to AI Agents and security: If you’re a security leader, it’s time to hold these vendors’ feet to the technical fire. If they aren’t prepared to give solid answers, they don’t deserve the chance to embarrass you.
Stay frosty.