The Macro: The Security Problem That Shipped Before Anyone Noticed
AI agents are already inside enterprise infrastructure. Not hypothetically, not in pilot programs, not on a roadmap. Right now. Engineers are connecting Cursor, Claude Code, and Copilot to live databases and customer records using MCP, the protocol Anthropic introduced to let language models talk to external tools. It took off faster than any governance structure could follow.
MCP, or Model Context Protocol, is genuinely elegant as a technical idea. It gives AI agents a standardized way to call tools, read data, and take actions. It also means any developer with a thirty-second setup window can point a coding assistant at a CRM full of customer PII, and nobody who manages risk at that company will know it happened.
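Under the hood, that "standardized way to call tools" is JSON-RPC 2.0. A minimal sketch of what one MCP tool invocation looks like on the wire — the `query_crm` tool name and its arguments are hypothetical, invented here to illustrate the shape:

```python
import json

# Shape of an MCP tool invocation. MCP is JSON-RPC 2.0 under the hood;
# the method and field names follow the public MCP spec, but the tool
# name and arguments below are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_crm",  # a hypothetical tool an MCP server might expose
        "arguments": {"customer_id": "C-1042"},
    },
}
print(json.dumps(request, indent=2))
```

Every agent-to-tool action flows through messages like this one, which is exactly why a control plane can sit at this layer and watch them.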
This is the actual problem. Not the AI part. The nobody-knows-it-happened part.
The AI security category is filling up fast. You have players attacking the LLM layer directly, focusing on prompt injection and model-level guardrails. You have others building around data loss prevention, or trying to retrofit existing IAM frameworks onto AI workloads. What’s less crowded is the control plane layer specifically, the piece that sits between the agent and the tool call, watching what flows through MCP connections and deciding what gets blocked.
The market numbers being thrown around for AI broadly are staggering. Multiple research firms are projecting the global AI market somewhere between $2 trillion and $3.6 trillion by the early 2030s, according to Precedence Research and MarketsandMarkets. U.S. private AI investment hit $109.1 billion in 2024 alone, per the Stanford AI Index. None of that money matters if companies can't govern what the agents it funds are actually doing.
That gap is where Golf is trying to live.
The Micro: One Platform, Sitting Between Your Agents and Everything They Can Touch
Golf describes itself as an enterprise MCP control plane. The pitch is simple enough to say in one sentence: every AI agent and MCP connection in your organization, visible and governable, without touching the LLM itself or changing how developers work.
The product is built around three capabilities. Discovery first, because before you can govern anything you have to know it exists. Golf surfaces every agent, every MCP server, every data connection running in an environment, including the ones nobody officially provisioned. That framing is doing a lot of work in the product’s positioning, and it’s probably the most credible hook. The shadow IT problem with AI tooling is real and documented enough that most enterprise security teams will nod at it immediately.
Enforcement is the second piece. Granular policies per tool, per team, per data source. Block PII exposure in real time. Prevent credential leaks. The site claims sub-millisecond latency on policy enforcement, which matters, because any security layer that slows down developer tools will get routed around within a week.
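To make the idea concrete, here is a toy sketch of what per-team, per-tool enforcement with PII redaction could look like at the connection layer. Everything in it — the policy table, the tool name, the SSN rule — is an assumption for illustration, not Golf's actual implementation:

```python
import re

# Hypothetical policy table: (team, tool) -> decision. Default is deny.
# These names are invented; Golf's real policy model is not public here.
POLICIES = {
    ("data-eng", "query_crm"): "allow",
    ("interns", "query_crm"): "deny",
}

# Toy PII rule: redact anything shaped like a US SSN.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def enforce(team: str, tool: str, payload: str):
    """Return (decision, payload) for one intercepted tool call."""
    if POLICIES.get((team, tool), "deny") == "deny":
        return "blocked", None
    # Allowed calls still get scrubbed before data reaches the agent.
    return "allowed", SSN_PATTERN.sub("[REDACTED]", payload)

print(enforce("interns", "query_crm", "full customer export"))
print(enforce("data-eng", "query_crm", "SSN 123-45-6789 on file"))
```

The design point is that the check runs on the intercepted MCP message, not on the model, which is why latency budgets in the sub-millisecond range are plausible for simple table lookups and regex scans.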
Third is audit trails. Logs of what every agent did, what data it touched, what actions it took. This is the compliance surface, the thing that lets a security team answer questions after an incident instead of just shrugging.
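For illustration, the kind of structured record an audit layer might emit per tool call — the field names here are assumptions, not Golf's schema:

```python
import datetime
import json

# Hypothetical audit record for one intercepted tool call.
# Fields are illustrative only; Golf's actual log format is not public here.
record = {
    "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "agent": "cursor@alice",        # which agent/client made the call
    "tool": "query_crm",            # which MCP tool it invoked
    "decision": "blocked",          # what the policy layer did
    "reason": "policy: interns may not call query_crm",
}
print(json.dumps(record))
```

One structured line per tool call is what turns "we think an agent touched that table" into an answer a compliance team can actually give.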
The approach is notable for what it deliberately avoids. Golf does not try to control the LLM. It doesn’t intercept model outputs or try to detect hallucinations. It controls the connection layer, the MCP handshake, the tool calls. That’s a smart constraint. It means the product doesn’t need to solve AI alignment to be useful.
It got solid traction on launch day, which makes sense given how acutely the target buyer feels this problem. A LinkedIn post from the team noted they’d been talking to enterprises about MCP adoption since March.
Founder information wasn’t verifiable through my research, so I’m leaving that alone.
The Verdict
The problem Golf is solving is real. I believe that without qualification. The scenario on their website, where an engineer connects Cursor to a production database in thirty seconds and nobody in security knows, is not a thought experiment. It is happening.
What I’d want to know at thirty days is whether discovery actually works in practice. Finding known MCP connections is one thing. Finding the rogue ones, the ones an engineer set up through a personal account or a non-standard client, is genuinely hard. If discovery has gaps, the whole value proposition gets shaky.
At sixty days, the enforcement question becomes whether policies are expressive enough to handle real enterprise complexity without becoming so burdensome that developers find workarounds. Security tools fail at the adoption layer more often than the technical one.
The competitive risk I’d watch is the MCP protocol owners themselves. Anthropic, the hyperscalers, the IDE vendors, any of them could build governance tooling directly into the protocol layer. That’s a real ceiling.
Still, this feels like the right problem at the right moment. Enterprises are past the point of debating whether to adopt AI agents. They’re trying to figure out how to not lose control in the process. Golf is betting that control plane tooling becomes as standard as IAM, and that bet is not crazy. I’d want to see the discovery claims stress-tested before I bought it fully, but the direction is right.