The Macro: Fraud Teams Are Drowning in Alerts They Cannot Investigate
Financial fraud detection has a bottleneck, and it is not detection. Modern fraud systems are decent at flagging suspicious transactions. The problem is what happens after the flag. Each flagged transaction needs to be investigated. An analyst reviews the account history, checks for patterns, examines associated entities, and makes a determination. This takes time. At scale, it takes armies of analysts.
Most fraud teams run at roughly 10x the alert volume they can actually investigate. The result is triage by severity. Only the biggest or most obvious cases get a full investigation. Everything else either gets auto-declined, which creates false positives that hurt legitimate customers, or gets auto-approved, which lets fraud through.
The existing tools are not solving this. Sardine, Socure, and Persona handle identity verification. Seon and LexisNexis provide risk signals. But the investigation layer, the part where an analyst connects the dots and determines what actually happened, is still overwhelmingly manual. And the gap between detection and action is where fraud teams lose.
The Micro: Closing the Loop From Investigation to Production
Nicholas Aldridge and Joseph McAllister cofounded MouseCat. Nicholas was a Principal Engineer at AWS AI and is a core maintainer of the Model Context Protocol (MCP). Joseph previously built ML infrastructure and fraud detection systems at Coinbase. They are a two-person team from YC Winter 2026 with Jared Friedman.
The backgrounds are complementary. One cofounder understands AI infrastructure at the deepest level. The other built fraud systems at one of the most fraud-targeted companies in fintech. Together, they are building an AI investigator that runs end-to-end fraud investigations with explainable decisions.
MouseCat handles KYB fraud investigations, automated rule development, and ATO and payments fraud modeling. The platform extracts features from unstructured data, backtests rules, generates synthetic labels for fraud detection models, and integrates with data warehouses like Databricks and Snowflake. On-premises deployment is available for companies with strict data requirements.
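MouseCat's internals are not public, but "backtesting rules" has a concrete meaning worth pinning down: replay a candidate rule over historical labeled transactions and measure what it would have caught versus what it would have flagged wrongly. A minimal sketch, with hypothetical field names and thresholds:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str
    account_age_days: int
    is_fraud: bool  # historical label from past investigations

# Hypothetical candidate rule: flag large transfers from young accounts.
def candidate_rule(t: Transaction) -> bool:
    return t.amount > 5_000 and t.account_age_days < 30

def backtest(rule, history):
    """Replay a rule over labeled history; report precision and recall."""
    flagged = [t for t in history if rule(t)]
    true_positives = sum(t.is_fraud for t in flagged)
    fraud_total = sum(t.is_fraud for t in history)
    precision = true_positives / len(flagged) if flagged else 0.0
    recall = true_positives / fraud_total if fraud_total else 0.0
    return precision, recall

history = [
    Transaction(8_000, "US", 5, True),
    Transaction(6_500, "GB", 12, True),
    Transaction(9_000, "US", 400, True),   # fraud the rule misses
    Transaction(7_200, "DE", 10, False),   # false positive
    Transaction(120, "US", 3, False),
]
precision, recall = backtest(candidate_rule, history)
print(f"precision={precision:.2f} recall={recall:.2f}")
```

The trade-off in the output is the whole game: precision too low and you recreate the false-positive problem, recall too low and fraud slips through.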
The key differentiator is closing the loop from investigation to production. Most tools stop at recommendations. MouseCat converts investigation findings into production-ready rules and models. When the AI investigates a new fraud pattern, it does not just write a report. It generates the detection logic to catch similar patterns going forward.
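To make "converts investigation findings into production-ready rules" concrete: if an investigation emits its conclusions as structured conditions rather than prose, those conditions can be compiled directly into a deployable predicate. This is an illustrative sketch, not MouseCat's actual pipeline; the finding schema and field names are invented:

```python
from typing import Callable

# Hypothetical structured output of an AI investigation: the features
# that characterized this fraud pattern, with the observed thresholds.
finding = {
    "pattern": "rapid_drain_after_signup",
    "conditions": [
        ("account_age_days", "<", 7),
        ("outbound_total_24h", ">", 2_000.0),
    ],
}

OPS = {"<": lambda a, b: a < b, ">": lambda a, b: a > b}

def compile_rule(finding: dict) -> Callable[[dict], bool]:
    """Turn investigation conditions into a deployable detection predicate."""
    conds = finding["conditions"]
    return lambda txn: all(OPS[op](txn[field], value) for field, op, value in conds)

rule = compile_rule(finding)
print(rule({"account_age_days": 2, "outbound_total_24h": 3_500.0}))   # new case matching the pattern
print(rule({"account_age_days": 90, "outbound_total_24h": 3_500.0}))  # established account, no match
```

The point of the structure is auditability: each fired rule traces back to the investigation that produced its conditions, which is what makes the decisions explainable.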
The Verdict
MouseCat is building the piece of the fraud stack that nobody has automated well. Detection is commoditized. Investigation is where the value is, and it is still mostly manual. The founding team has the exact right combination of AI infrastructure expertise and fraud domain knowledge.
The risk is enterprise sales cycles. Banks and fintech companies are slow to adopt new fraud tools. The compliance requirements are stringent. The integration with existing systems is complex. MouseCat needs to prove ROI fast enough to justify the long sales process.
In 30 days, I want to see investigation accuracy. How often does MouseCat’s determination match a senior analyst’s? In 60 days, the question is rule quality. Are the auto-generated rules actually catching new fraud in production? In 90 days, I want to know about the alert volume reduction. If MouseCat can process 10x more alerts than a human team, every fraud-heavy fintech company needs it.