April 16, 2027 edition

overshoot

AI infra for real-time vision applications

Overshoot Makes It Easy to Build Apps That See the Real World in Real Time

The Macro: Computer Vision Is Ready, the Developer Tools Are Not

Computer vision models can now identify objects, read text, understand scenes, and track movement with impressive accuracy. The models are good. What is not good is the infrastructure for building production applications on top of them.

If you want to build an application that watches a parking lot and counts cars in real time, you need to handle video ingestion, model inference, latency optimization, edge deployment, and result streaming. Each of these is a separate engineering challenge. And you need them all working together, in real time, at scale. The gap between “I have a vision model that works on test images” and “I have a production application processing live video” is enormous.
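To make that gap concrete, here is a minimal sketch of the glue code a developer ends up writing for the parking-lot example. Everything here is hypothetical: `read_frame`, `run_model`, and `publish_result` are stand-ins for real video decoding, a real detector, and a real result stream, and the latency budget is an arbitrary illustration, not anything Overshoot publishes.

```python
import time
from collections import deque

def read_frame(source):
    """Hypothetical stand-in for video ingestion (e.g., decoding an RTSP stream)."""
    # Synthesize dummy frames; real code would pull and decode live video.
    for i in range(source["num_frames"]):
        yield {"id": i, "pixels": b"\x00" * 16}

def run_model(frame):
    """Hypothetical stand-in for model inference (e.g., a car detector)."""
    # Pretend every third frame contains a car.
    return {"frame_id": frame["id"], "cars": 1 if frame["id"] % 3 == 0 else 0}

def publish_result(result, sink):
    """Hypothetical stand-in for streaming results to the application."""
    sink.append(result)

def process_stream(source, sink, latency_budget_ms=50.0):
    """Run ingestion -> inference -> publish per frame, tracking latency."""
    latencies = deque()
    for frame in read_frame(source):
        start = time.perf_counter()
        result = run_model(frame)
        publish_result(result, sink)
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        latencies.append(elapsed_ms)
        if elapsed_ms > latency_budget_ms:
            # In production you would drop frames, batch, or scale out here.
            pass
    return sum(latencies) / len(latencies) if latencies else 0.0

results = []
avg_ms = process_stream({"num_frames": 9}, results)
total_cars = sum(r["cars"] for r in results)
```

Even this toy version hints at the real problems: every stage adds latency, the loop must keep up with the camera's frame rate, and none of it yet handles reconnects, model updates, or more than one stream. That operational surface is what an infrastructure layer would absorb.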

The existing options are not great. You can use cloud vision APIs, but latency makes them impractical for real-time use cases. You can deploy models on edge devices, but managing a fleet of edge deployments is its own nightmare. You can build everything from scratch, but that takes months and a specialized team.

Roboflow handles some of this for training and deploying vision models. Voxel51 provides dataset management for computer vision. But nobody has built a complete infrastructure layer specifically for real-time vision applications. That is the gap Overshoot is targeting.

The Micro: Brothers Who Built Vision Systems at Uber and Intel

Zakaria and Younes El Hjouji are brothers who cofounded Overshoot. Zakaria spent seven years building low-latency pricing algorithms at Uber and writing GPU kernels at Meta AI. He studied at the London School of Economics and MIT, won three AI hackathons, and previously built and sold a software product. Younes was a founding engineer at Cosmonio, a computer vision startup later acquired by Intel, where he went on to build computer vision frameworks. Between them, they cover the full stack, from GPU-level optimization to production deployment.

Overshoot makes it easy for developers to build and run real-time vision applications. The platform handles the infrastructure so developers can focus on the application logic. Video agents that watch your home, security systems that detect threats, robotics vision, gaming applications. All of these require the same underlying infrastructure, and Overshoot provides it.

They are a five-person team based in San Francisco, part of YC Winter 2026 with partner Jon Xu. The team is notably larger than most companies at this stage, which suggests they have been building for a while.

The Verdict

Overshoot is building infrastructure for a wave of applications that has not arrived yet but is clearly coming. Real-time vision applications will be everywhere. Home security, retail analytics, autonomous systems, consumer AR products. Every one of them needs the infrastructure Overshoot is building.

The risk is timing. If the wave of vision applications takes longer to materialize than expected, Overshoot could burn through resources waiting for the market. The good news is that specific use cases like physical security are already here and growing.

The competitive pressure comes from the cloud providers offering vision APIs and from Roboflow, which keeps expanding its platform. But real-time inference at scale requires fundamentally different infrastructure than batch processing, and Overshoot’s focus on low-latency, real-time applications gives it a technical edge.

In 30 days, I want to see the number of applications built on the platform. In 60 days, the question is latency benchmarks. How fast can Overshoot process a video frame from ingestion to result? In 90 days, I want to know about the developer experience. Are developers building real applications, or just experimenting? The infrastructure play only works if developers actually ship products on top of it.