Seems like a space that is really heating up. I recall most of the foundational labs announcing some kind of agentic security product last year (OpenAI's Aardvark, Claude Code's security reviewer, etc.).
I think with the advent of the AI gold rush, this is exactly the mentality that has proliferated throughout new AI startups.
Just ship anything and everything as fast as possible, because all that matters is growth at all costs. Security is hard; it takes time, diligence, and effort, and investors aren't going to be looking at the metric of "days without a security incident" when flinging cash into your dumpster fire.
Do people even care about security anymore? I'll bet many consumers wouldn't think twice about giving full access to this thing (or any other flavor-of-the-month AI agent product).
I really hope this is performative instead of something that the Anthropic folks deeply believe.
"Broadly" safe, "broadly" ethical. They're giving away the entire game here. Why even spew this AI-generated champions-of-morality crap if you're already playing CYA?
What does it mean to be good, wise, and virtuous? Whatever Anthropic wants I guess. Delusional. Egomaniacal. Everything in between.