
Efference: Stereo cameras and 3D perception for robotics

Efference Is Building Eyes for Robots, and LiDAR Should Be Worried

AI · Robotics · Computer Vision · Hardware

The Macro: Robots Need to See, and the Current Options Are Expensive and Fragile

Every robot that interacts with the physical world needs depth perception. Self-driving cars need it. Warehouse robots need it. Humanoid robots need it. Drones need it. And right now, the dominant solution is LiDAR: spinning laser arrays that create 3D point clouds of the environment. LiDAR works. It works well. It also costs anywhere from $500 to $75,000 per unit depending on the application, adds mechanical complexity, draws significant power, and creates a dependency on specialized hardware suppliers who have their own supply chain constraints.

The robotics industry has known for years that LiDAR is a bottleneck. Tesla famously bet against it, going camera-only for autonomous driving and taking years of criticism before the approach started producing competitive results. The lesson from Tesla’s experience is not that cameras are easy. It is that cameras are cheap and ubiquitous and improving fast, while LiDAR is expensive and improving slowly. If you can solve the software problem, the hardware economics are overwhelmingly in your favor.

The stereo vision approach sits between monocular cameras and LiDAR. Two cameras, slightly offset, can triangulate depth the same way human eyes do. Intel’s RealSense cameras and Stereolabs’ ZED cameras have offered stereo depth for years, but the software that interprets the data has been the weak link. Traditional stereo matching algorithms struggle with textureless surfaces, repetitive patterns, and varying lighting. The images are there. The interpretation has been the bottleneck.
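
The geometry behind that triangulation fits in a few lines. For a rectified stereo pair with baseline B and focal length f, a feature that shifts d pixels between the two images sits at depth Z = fB/d. Here is a minimal sketch (the focal length and baseline are hypothetical values, not Efference's hardware):

```python
import numpy as np

# Pinhole stereo geometry: depth is inversely proportional to disparity.
# Z = f * B / d, where f is the focal length (pixels), B is the baseline
# (meters), and d is the horizontal offset of a feature between the views.

FOCAL_LENGTH_PX = 700.0  # hypothetical focal length after rectification
BASELINE_M = 0.12        # hypothetical 12 cm between camera centers

def depth_from_disparity(disparity_px: np.ndarray) -> np.ndarray:
    """Convert a disparity map (pixels) to a depth map (meters)."""
    # Zero disparity means a point at infinity or a failed match;
    # mask it out to avoid division by zero.
    depth = np.full_like(disparity_px, np.inf, dtype=np.float64)
    valid = disparity_px > 0
    depth[valid] = FOCAL_LENGTH_PX * BASELINE_M / disparity_px[valid]
    return depth

# At this geometry, a 40 px shift puts the point about 2.1 m away.
print(depth_from_disparity(np.array([40.0, 10.0, 0.0])))
```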

That bottleneck is exactly where modern machine learning excels. Learned stereo matching has gotten dramatically better in the past three years. Models like RAFT-Stereo, CREStereo, and UniMatch have pushed accuracy to the point where stereo depth maps rival LiDAR in many scenarios. The gap is closing fast, and the cost difference between a $30 camera module and a $3,000 LiDAR unit makes the economics almost absurd if the software can deliver.
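
For a sense of what those learned models replace, here is the classical baseline they are benchmarked against: OpenCV's semi-global block matching. This is a sketch with placeholder paths and hand-tuned parameters; RAFT-Stereo and its successors swap the matching step for a neural network but produce the same artifact, a per-pixel disparity map that converts to depth as above.

```python
import cv2

# Classical semi-global block matching: the hand-engineered step that
# learned models like RAFT-Stereo replace. Inputs are a rectified
# grayscale stereo pair; paths and tuning values here are placeholders.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,    # search range in pixels; must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,          # penalty for small disparity changes (smoothness)
    P2=32 * 5 * 5,         # larger penalty for big disparity jumps
    uniquenessRatio=10,    # reject ambiguous matches on textureless surfaces
    speckleWindowSize=100, # filter small blobs of inconsistent disparity
    speckleRange=2,
)

# OpenCV returns disparity as fixed-point int16, scaled by 16.
disparity_px = matcher.compute(left, right).astype("float32") / 16.0
```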

The market is enormous. Autonomous vehicles alone represent a multi-billion dollar sensor opportunity. Add humanoid robots (Figure, 1X, Apptronik, Agility), warehouse automation (Locus, 6 River Systems), agricultural robots, inspection drones, and you are looking at a market where every unit shipped needs some form of 3D perception. Whoever cracks affordable, reliable depth sensing for robotics will have a customer base that grows with every robot deployed.

The Micro: A Princeton Neuroscientist Who Studied How Brains Actually See

Efference builds stereo cameras and 3D perception software for robotics. The approach combines stereo triangulation with learned matching models that run in real time. The company treats vision as a software problem rather than a hardware problem, which means its system works with both proprietary cameras and existing hardware like Intel RealSense and ZED cameras. It sells the H1 stereo camera and the M1 monocular camera with embedded AI processing.

Gianluca Bencomo is the sole founder, running a one-person company, which in hardware is either brave or reckless. His background is the interesting part. He was a PhD candidate in machine learning and neuroscience at Princeton, now on leave to build Efference. Before that, he did research at Harvard Medical School on visual decision-making, and his undergraduate work in biochemistry included experience at NASA’s Jet Propulsion Laboratory. The company operates under Remnant Robotics, Inc. and came through Y Combinator’s Fall 2025 batch.

The neuroscience angle is not just a biographical detail. The name “Efference” itself comes from neuroscience. An efference copy is the signal your brain sends to predict the sensory consequences of your own movements. It is the reason you cannot tickle yourself and the reason you do not get dizzy every time you move your eyes. Bencomo studied how biological visual systems work, and now he is building artificial ones. That is about as direct a founder-market connection as you can get.
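
The mechanism is simple enough to caricature in code: a copy of the motor command feeds a forward model that predicts the expected sensory input, which gets subtracted from what actually arrives. A purely pedagogical toy, nothing to do with Efference's products:

```python
# Toy forward model of an efference copy: the brain predicts the sensory
# consequences of its own motor commands and subtracts them, so only
# externally caused signals reach perception. (A 1-D caricature only.)

def forward_model(motor_command: float) -> float:
    """Predict the sensory shift our own movement will cause."""
    return motor_command  # a perfect predictor, for illustration

def perceive(observed_shift: float, motor_command: float) -> float:
    """Subtract the predicted consequence of our own action."""
    return observed_shift - forward_model(motor_command)

# Moving the eyes shifts the whole image, but perception stays stable:
print(perceive(observed_shift=5.0, motor_command=5.0))  # 0.0, no motion felt
# The same shift caused by the outside world is fully perceived:
print(perceive(observed_shift=5.0, motor_command=0.0))  # 5.0, motion felt
```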

The cost reduction claim is aggressive: 10 to 100x cheaper than traditional sensor stacks. If even the lower end of that range holds up in production environments, the implications are significant. A humanoid robot that currently needs $5,000 in sensors could get by with $500. A delivery drone that needs $2,000 in perception hardware could use $200 worth of cameras. At those price points, entire categories of robots that are currently too expensive to deploy become economically viable.

The competitive field includes Stereolabs (ZED cameras), Intel (RealSense, though largely abandoned), Luxonis (OAK cameras with on-device AI), and a wave of startups working on neural radiance fields and other novel 3D reconstruction approaches. Ouster, which absorbed Velodyne in a 2023 merger, dominates LiDAR. The question for Efference is whether stereo vision can match LiDAR reliability in the messy conditions that robots actually operate in: rain, dust, direct sunlight, featureless walls, transparent surfaces.

The Verdict

I think Efference is making a bet that the industry agrees with in principle but has struggled to execute in practice. Camera-based depth perception is cheaper, lighter, and more scalable than LiDAR. The software just has to be good enough. The founder’s background in visual neuroscience is genuinely relevant, not just interesting cocktail party material.

The one-person team is a concern. Hardware companies are hard to build alone. There are supply chain problems, manufacturing quality control issues, customer integration work, and certification requirements that eat time at a rate that software companies never experience. At 30 days, I would want to see whether the pre-order pipeline is converting and what the customer profiles look like. At 60 days, the question is whether the cameras perform reliably in real deployments or whether edge cases are eating the team alive. At 90 days, Bencomo either needs to hire or partner, because a one-person hardware company cannot scale production and support customers and improve the software simultaneously.

If the product works as claimed, the market will come to Efference. Every robotics company on the planet is looking for cheaper perception. That is not a speculative demand curve. That is an engineering constraint that every hardware team lives with daily.