The Macro: Pentesting Is Broken and Everyone Knows It
I am going to say something that will annoy every CISO reading this: your annual pentest is mostly theater.
Here is how it works. Once a year, maybe twice if you are diligent, you hire a penetration testing firm. They spend two to four weeks poking at your application. They produce a PDF report that is 80 pages long, half of which is boilerplate. Your engineering team fixes the critical findings. They deprioritize the mediums. They ignore the lows. Then you ship 200 more pull requests over the next six months, each one potentially introducing new vulnerabilities, and nobody tests any of it until the next annual engagement.
The market for penetration testing services is roughly $3 billion and growing. The big names are Cobalt, Synack, and the professional services arms of firms like CrowdStrike and Rapid7. Bug bounty platforms like HackerOne and Bugcrowd take a different approach, offering continuous coverage through crowds of freelance hackers, but the quality is inconsistent and most programs collect a pile of low-severity XSS reports that nobody acts on.
Static analysis tools like Snyk, Semgrep, and SonarQube catch known patterns but produce false-positive rates high enough that developers stop paying attention. DAST tools like Burp Suite and OWASP ZAP are powerful but require skilled operators. The gap in the market is clear: nobody offers continuous, automated pentesting that actually exploits vulnerabilities to prove they are real, rather than just flagging potential issues and hoping someone investigates.
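To make the flag-versus-proof distinction concrete, here is a toy sketch (not how any of these tools actually work internally): a static analyzer would flag the string interpolation in the query below as a possible SQL injection, while an exploitation step injects a tautology and checks whether extra rows actually leak. The function names and the in-memory database are illustrative assumptions, not anyone's real API.

```python
import sqlite3

def find_user(db, username):
    # Vulnerable: user input interpolated directly into SQL.
    # A static analyzer flags this pattern, but it cannot tell you
    # whether the surrounding code makes it exploitable.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return db.execute(query).fetchall()

def prove_exploitable(db):
    # Exploitation-style verification: inject a tautology and see
    # whether rows leak that the benign query does not return.
    baseline = find_user(db, "nobody")
    injected = find_user(db, "nobody' OR '1'='1")
    return len(injected) > len(baseline)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, name TEXT)")
db.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

print(prove_exploitable(db))  # a working injection turns the flag into a confirmed finding
```

If the injected query returns nothing extra, the finding is filtered out instead of landing in a report, which is the property that separates exploit-verified testing from pattern matching.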
The timing for AI-powered security testing is right. Code is being generated faster than ever thanks to AI coding assistants. More code means more attack surface. More attack surface means more vulnerabilities. The security industry has not scaled its detection capabilities to match the rate of code production, and manual pentesting certainly has not.
The Micro: The Number One US Hacking Team Goes Commercial
Veria Labs is built by three founders who were all members of the top-ranked competitive capture-the-flag hacking team in the United States. That is not a resume bullet point I see often, and it matters here more than in almost any other startup.
Cayden Liao specializes in Web3 security and competitive hacking. Jayden Sarveshkumar focuses on web exploitation. Stephen Xu was an offensive security researcher at TikTok before this. All three are from the same CTF team. They are based in San Francisco with a team of three, out of Y Combinator’s Fall 2025 batch.
The product connects to your Git repository and CI/CD pipeline. On initial setup and with every pull request, AI agents analyze the code for vulnerabilities. But here is the part that matters: they do not just flag issues. They generate proof-of-concept exploits and test them against your staging environment. If the exploit works, you know the vulnerability is real. If it does not, it gets filtered out. No false positives, no “potential” findings that waste engineering time.
The four-step workflow is clean. Connect your repo. The AI finds vulnerabilities. It exploits them to prove they are real. It suggests patches. That last step is important because most security tools tell you what is wrong but leave the fix to your engineers, who may or may not understand the vulnerability well enough to fix it correctly.
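Veria's internals are not public, so the following is only a sketch of the four-step shape described above, with stand-in stubs for the AI stages. Every name here (the `Finding` class, the `analyze`/`exploit`/`patch` callables) is a hypothetical assumption for illustration.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    exploit_confirmed: bool = False
    suggested_patch: str = ""

def run_pipeline(changed_files, analyze, exploit, patch):
    # Step 1 (connect): the pull request's changed files are the input.
    # Step 2 (find): an analysis pass proposes candidate vulnerabilities.
    candidates = [Finding(t) for f in changed_files for t in analyze(f)]
    confirmed = []
    for finding in candidates:
        # Step 3 (exploit): only findings with a working proof of concept survive.
        if exploit(finding):
            finding.exploit_confirmed = True
            # Step 4 (patch): attach a suggested fix to each confirmed finding.
            finding.suggested_patch = patch(finding)
            confirmed.append(finding)
    return confirmed  # unverified candidates are filtered out, not reported

# Toy stubs standing in for the AI stages (all hypothetical).
analyze = lambda f: ["SQLi in search endpoint"] if f == "search.py" else []
exploit = lambda finding: True  # pretend the PoC ran against staging and worked
patch = lambda finding: "use parameterized queries"

report = run_pipeline(["search.py", "README.md"], analyze, exploit, patch)
print([f.title for f in report])  # → ['SQLi in search endpoint']
```

The design point the sketch captures is that the filter sits between detection and reporting: a candidate never reaches an engineer unless the exploit step succeeded, and every reported finding arrives with a suggested fix attached.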
Veria claims to have prevented over one billion dollars in potential hacks, which is a big number but not unbelievable given that the average cost of a data breach is $4.45 million and they reportedly found vulnerabilities in high-profile targets including Claude, Gemini, and the MCP protocol. Finding security issues in AI systems is a nice piece of credibility when selling to companies that are building with AI.
Compared to Snyk, which does static analysis, Veria is more like a human pentester in agent form. Compared to HackerOne, it is deterministic and continuous rather than crowdsourced and sporadic. Compared to Cobalt, it is automated rather than manually staffed. The closest competitor in approach is probably Aptori or Pentera, both of which do automated security testing, but neither positions itself as doing full exploitation with proof-of-concept generation at the pull request level.
The target market is fintech, healthcare, and crypto, which are the three verticals where a security breach has the most immediate financial and regulatory consequences. Smart targeting for a company that needs to demonstrate high-stakes value quickly.
The Verdict
I think Veria Labs has the most credible founding team I have seen in security AI. Competitive hacking is real-world offensive security compressed into time-limited competitions. If these founders can find vulnerabilities under pressure against world-class opponents, they can find vulnerabilities in your SaaS application.
At 30 days, I want to see the false positive rate compared to static analysis tools. If Veria produces zero false positives because every finding includes a working exploit, that alone is worth the price of admission for any engineering team drowning in Snyk alerts.

At 60 days, I want to understand the coverage depth. Can the AI find complex vulnerability chains that require multiple steps to exploit, or is it limited to known patterns? The competitive hacking background suggests the former, but the product needs to prove it at scale.

At 90 days, the question is pricing and positioning. If they price like a pentest replacement at $50,000 to $100,000 per year, they are selling to CISOs. If they price like a developer tool at $500 per month, they are selling to engineering teams. The second market is larger and faster to close, but the first market has bigger contracts. Where they land on that spectrum will determine the trajectory.