OpenClaw proves agentic AI works. It also proves your security model doesn't. 180,000 developers just made that your problem.


OpenClaw, the open-source AI assistant formerly known as Clawdbot and then Moltbot, crossed 180,000 GitHub stars and drew 2 million visitors in a single week, according to creator Peter Steinberger.

Security researchers scanning the internet found over 1,800 exposed instances leaking API keys, chat histories, and account credentials. The project has been rebranded twice in recent weeks due to trademark disputes.

The grassroots agentic AI movement is also the biggest unmanaged attack surface that most security tools can't see.

Enterprise security teams didn't deploy this tool, and their firewalls, EDR, and SIEM can't see it. When agents run on BYOD hardware, the security stack goes blind. That's the gap.
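To make the visibility gap concrete, here is a minimal, hypothetical sketch of the kind of check a security team could run on an endpoint: probe local TCP ports and flag any listener not on an approved allowlist. The port numbers and allowlist are illustrative assumptions, not details from OpenClaw or any real policy; the point is that an unmanaged agent listening locally is invisible unless something explicitly looks for it.

```python
# Hypothetical sketch: flag locally listening TCP ports that are not on an
# approved allowlist -- the kind of unmanaged service (e.g., a personal AI
# agent gateway) that perimeter tooling never registered.
import socket

# Illustrative allowlist only; a real policy would come from asset inventory.
APPROVED_PORTS = {22, 443}

def find_unapproved_listeners(ports_to_probe, host="127.0.0.1", timeout=0.2):
    """Return probed ports that accept a TCP connection but are not approved."""
    unapproved = []
    for port in ports_to_probe:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when something is listening on the port.
            if s.connect_ex((host, port)) == 0 and port not in APPROVED_PORTS:
                unapproved.append(port)
    return unapproved
```

Even a crude scan like this surfaces more than a network-perimeter view does, because the agent's traffic to upstream AI APIs looks like ordinary outbound HTTPS.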

Why traditional perimeters can't see agentic AI threats

Most enterprise defenses treat agentic AI as just another development tool requiring standard access controls. OpenClaw shows that assumption is architecturally wrong.

Agents operate within authorized permissions, pull context from ...


Copyright of this story belongs to VentureBeat.