VectorCertain LLC identified critical security vulnerabilities in the OpenClaw AI agent platform months before industry giants like Cisco, Wiz, and OpenAI took action, and offered a free governance solution; the offer went unanswered while the ecosystem suffered significant security breaches. The company's analysis revealed systemic governance failures that later manifested in high-profile exposures, including 1.5 million API authentication tokens and thousands of private conversations containing plaintext credentials.
In late January 2026, VectorCertain deployed its multi-model consensus technology to analyze all 3,434 open pull requests in the OpenClaw repository, identifying 341 malicious skills in the ClawHub ecosystem and documenting 42,900+ exposed internet-facing instances. The company built and tested a SecureAgent governance integration for OpenClaw's tools and offered creator Peter Steinberger a no-cost license, receiving no response. "Instead of merely documenting issues, we developed, tested, and offered the solution for free," said Joseph P. Conroy, Founder and CEO of VectorCertain.
Cisco's subsequent research validated VectorCertain's findings; its blog post "Personal AI Agents like OpenClaw Are a Security Nightmare" identified similar vulnerabilities. Cisco found that 83 percent of organizations planned to deploy agentic AI but only 29 percent felt ready to secure them, and that more than 25 percent of analyzed agent skills contained vulnerabilities. Meanwhile, Wiz researcher Gal Nagli discovered that Moltbook, the social network where OpenClaw agents interact, had left its entire production database accessible, exposing credentials and private conversations, as detailed in the Wiz blog post "Hacking Moltbook: AI Social Network Reveals 1.5M API Keys".
The industry response has been reactive rather than preventive. OpenAI acquired Promptfoo, a testing tool described in their announcement "OpenAI to Acquire Promptfoo", while Meta acquired Moltbook despite its security flaws. VectorCertain's approach differs fundamentally, focusing on pre-execution governance that validates every action before execution rather than testing that discovers vulnerabilities after deployment. "Testing discovers that an agent could delete a production database. Pre-execution governance prevents the agent from deleting the production database," Conroy explained.
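The distinction Conroy draws can be made concrete with a minimal sketch of a pre-execution gate, where a policy check runs before any tool call rather than after deployment. The names here (AgentAction, DENY_RULES, governed_execute) are hypothetical illustrations of the general pattern, not VectorCertain's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentAction:
    tool: str       # e.g. "shell", "sql" (illustrative tool names)
    payload: str    # the command or query the agent wants to run

# Deny rules: predicates that flag a proposed action as unsafe.
DENY_RULES: list[Callable[[AgentAction], bool]] = [
    lambda a: a.tool == "sql" and "drop table" in a.payload.lower(),
    lambda a: a.tool == "shell" and "rm -rf" in a.payload,
]

def governed_execute(action: AgentAction,
                     executor: Callable[[AgentAction], str]) -> str:
    """Run the action only if no deny rule fires; otherwise block it
    before it ever executes, rather than discovering the damage later."""
    if any(rule(action) for rule in DENY_RULES):
        return f"BLOCKED: {action.tool} action rejected by pre-execution policy"
    return executor(action)

# Stand-in executor; a real system would dispatch to actual tools.
echo = lambda a: f"EXECUTED: {a.payload}"

print(governed_execute(AgentAction("sql", "DROP TABLE users;"), echo))
print(governed_execute(AgentAction("sql", "SELECT 1;"), echo))
```

The point of the sketch is the ordering: the policy check sits between the agent's intent and the tool call, so the dangerous action is never issued, whereas post-hoc testing can only report that it could have been.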
The Moltbook exposure exemplifies the consequences of ungoverned AI agents. Wiz found a Supabase API key exposed in client-side JavaScript, granting unauthenticated access to the entire database because Row Level Security was never configured. This breach allowed access to every API authentication token and private conversation, some containing plaintext OpenAI API keys. Matt Schlicht, Moltbook's co-founder, stated that his OpenClaw agent built the entire platform without him writing code, illustrating the governance gap that opens when AI agents create production infrastructure with no validation layer.
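The failure mode can be simulated in a few lines: a database key shipped to the browser is safe only if row-level security (RLS) constrains what each caller can read; with no policy configured, any unauthenticated request reads every row. The table, rows, and query function below are illustrative stand-ins, not Moltbook's actual schema or Supabase's API.

```python
# Toy table of per-agent secrets (illustrative data only).
rows = [
    {"owner": "agent_a", "secret": "sk-aaa"},
    {"owner": "agent_b", "secret": "sk-bbb"},
]

def query(table, caller_id=None, rls_enabled=True):
    """With RLS on, a caller sees only rows they own;
    with RLS off, every caller sees the whole table."""
    if not rls_enabled:
        return table  # the Moltbook case: full table exposed to anyone
    return [r for r in table if r["owner"] == caller_id]

# Unauthenticated caller, RLS never configured: the entire database leaks.
print(len(query(rows, caller_id=None, rls_enabled=False)))  # 2
# Same unauthenticated caller with RLS enforced: nothing is returned.
print(len(query(rows, caller_id=None, rls_enabled=True)))   # 0
```

In real Supabase deployments the client-side "anon" key is designed to be public precisely because RLS policies are expected to do this filtering server-side; omitting the policies turns the public key into a master key.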
VectorCertain's governance architecture includes a four-gate system that validates actions with sub-millisecond consensus and has executed 1,000,000 agent process steps under governance without error. The company holds 55+ provisional patents covering pre-execution governance and multi-model consensus for agent action validation. Its approach addresses what Conroy calls a governance deficit rather than a testing deficit, a distinction that grows more pressing as traffic from AI agents to U.S. retail sites has surged 4,700 percent year-over-year.
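The general idea behind multi-model consensus validation can be sketched as several independent validators each voting on a proposed action, with execution allowed only on unanimous approval. This is a generic illustration of the concept, not VectorCertain's patented four-gate mechanism; the validator names below are invented for the example.

```python
from typing import Callable

Validator = Callable[[str], bool]

def consensus_approve(action: str, validators: list[Validator]) -> bool:
    """Unanimous consensus gate: a single dissenting validator
    blocks the action from executing."""
    return all(v(action) for v in validators)

# Stand-in validators; in a real system each might be a separate
# model or policy engine judging the action independently.
no_secrets  = lambda a: "api_key" not in a.lower()   # blocks credential use
no_deletes  = lambda a: "delete" not in a.lower()    # blocks destructive ops
bounded_len = lambda a: len(a) < 500                 # blocks oversized payloads

gates = [no_secrets, no_deletes, bounded_len]

print(consensus_approve("read user profile", gates))   # True
print(consensus_approve("DELETE /prod/db", gates))     # False
```

Requiring unanimity trades some throughput for safety: any one validator catching a problem is enough to stop the action, which matches the preventive posture the article describes.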
The broader industry is now scrambling to address AI agent security: Microsoft has launched Agent 365, Nvidia is preparing NemoClaw, and NIST has launched an AI Agent Standards Initiative detailed at its announcement page. The EU AI Act's high-risk enforcement deadline is approaching, carrying significant penalties for noncompliance. These developments validate VectorCertain's preventive governance thesis while underscoring how reactive the industry's response has been to a crisis for which a documented solution existed months before public exposure.



