April 28, 2026 edition


Automating clinical trials with AI agents

Phases Wants AI to Run Your Clinical Trial, Starting With Finding the Patients

AI · Healthcare · Clinical Trials · B2B

The Macro: Clinical Trials Are Drowning in Manual Work

I am going to give you a number that should make your jaw drop: $312 million. That is how much a pharmaceutical company can lose per delayed clinical trial. Not per year. Per trial. The costs compound from extended timelines, additional site fees, regulatory holding patterns, and the opportunity cost of a drug sitting in a pipeline instead of generating revenue on the market.

Now here is the part that makes the number even more frustrating. The single biggest reason clinical trials run late is patient recruitment. Not regulatory delays. Not manufacturing problems. Not adverse events. The mundane, labor-intensive process of finding enough qualified patients to participate in the study. Roughly 80% of clinical trials fail to meet their enrollment timelines. Some never reach full enrollment at all and have to be restructured or abandoned.

The recruitment process at a typical research site looks like this. A coordinator reviews a patient’s medical records to determine eligibility. This involves reading through pages of clinical history, lab results, medication lists, and prior diagnoses to check against the trial’s inclusion and exclusion criteria. If the patient looks eligible on paper, the coordinator schedules a screening call. That call can last thirty minutes to an hour. The coordinator asks detailed questions about symptoms, medical history, current medications, and willingness to participate. If the patient passes the screening call, they get scheduled for an in-person visit. If they do not pass, that is an hour of labor with zero output.

Multiply this by hundreds or thousands of potential patients per trial, across dozens of research sites, and you begin to understand why the process is so expensive and so slow. Research coordinators are skilled professionals who are perpetually overworked and understaffed. Their time is the binding constraint.
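To make the labor math concrete, here is a back-of-the-envelope sketch. The 30-to-60-minute screening call comes from the process described above; the per-record review time, candidate count, and pass rate are illustrative assumptions, not figures from Phases or any research site.

```python
# Back-of-the-envelope coordinator labor for one trial's recruitment funnel.
# SCREENING_CALL_HOURS is the midpoint of the 30-60 minute range described
# above; the other numbers are illustrative assumptions.

RECORD_REVIEW_HOURS = 0.5    # assumed time to review one patient's records
SCREENING_CALL_HOURS = 0.75  # midpoint of the 30-60 minute call range

def coordinator_hours(candidates: int, pass_review_rate: float) -> float:
    """Hours to review every candidate's records, then call those who pass."""
    review_hours = candidates * RECORD_REVIEW_HOURS
    call_hours = candidates * pass_review_rate * SCREENING_CALL_HOURS
    return review_hours + call_hours

# Assumed: 1,000 candidates per site, 30% pass the record review.
per_site = coordinator_hours(1000, 0.30)
print(f"{per_site:.0f} coordinator-hours per site")   # 725 hours
print(f"{per_site * 30:.0f} hours across 30 sites")
```

Even with conservative assumptions, a single site burns hundreds of coordinator-hours before a single patient is enrolled, which is the constraint Polly is built to relax.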

The existing solutions in this space are a mix of patient matching platforms, electronic health record integrations, and recruitment advertising services. Companies like TriNetX and Flatiron Health (acquired by Roche) provide data platforms for matching patients to trials. Medable offers decentralized trial technology. Science 37 (now Excelya) tried to virtualize clinical trials entirely. These are all valuable tools, but none of them address the core labor bottleneck: the actual work of reviewing records and conducting screening conversations.

The Micro: Palantir, Meta, and J&J Alumni Building a Clinical Trial Agent

Phases was founded by three people who collectively span the exact disciplines needed to build this product. James Wall is the CEO and previously designed AI data science agents for Fortune 100 companies and pharma giants including Johnson & Johnson. Anton Zaliznyi is the Chief AI Officer, an ex-Palantir software engineer and University of Toronto AI researcher who published deep learning research. Jonathan van Wersch is the CTO, who worked at Meta on machine learning integration and at Improbable scaling engineering teams. They came through Y Combinator’s Summer 2025 batch, and the team is still just the three of them.

Their product is an AI agent named Polly. I will admit that naming your clinical trial AI after a common human name is either charming or slightly unsettling depending on your perspective, but the product itself is impressive. Polly does three things that currently consume the majority of research coordinator time.

First, it reviews medical records to determine patient eligibility against trial criteria. This is the tedious, detail-intensive work that requires reading through entire patient histories and cross-referencing them against complex inclusion and exclusion criteria. Polly automates it.
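As a sketch of what structured eligibility checking involves, criteria can be modeled as predicates over a patient record. To be clear, this is a generic illustration, not Phases' implementation; the trial criteria, field names, and thresholds below are invented, and Polly's actual hard problem is extracting these facts from free-text medical records, which this toy version skips entirely.

```python
from dataclasses import dataclass, field

# Toy illustration of inclusion/exclusion checking. A real system (and
# presumably Polly) must first extract these fields from free-text clinical
# histories; this sketch assumes that extraction has already happened.

@dataclass
class Patient:
    age: int
    diagnoses: set[str] = field(default_factory=set)
    medications: set[str] = field(default_factory=set)
    egfr: float = 90.0  # kidney function, mL/min/1.73 m^2

# Hypothetical criteria for a hypothetical type 2 diabetes trial.
INCLUSION = [
    ("age 18-75", lambda p: 18 <= p.age <= 75),
    ("type 2 diabetes diagnosis", lambda p: "T2DM" in p.diagnoses),
]
EXCLUSION = [
    ("on insulin", lambda p: "insulin" in p.medications),
    ("eGFR < 45", lambda p: p.egfr < 45),
]

def screen(p: Patient) -> tuple[bool, list[str]]:
    """Return (eligible, reasons): any failed inclusion or met exclusion disqualifies."""
    reasons = [f"missing: {name}" for name, ok in INCLUSION if not ok(p)]
    reasons += [f"excluded: {name}" for name, met in EXCLUSION if met(p)]
    return (not reasons, reasons)

eligible, why = screen(Patient(age=62, diagnoses={"T2DM"}, medications={"metformin"}))
print(eligible, why)  # True []
```

Note that the exclusion side of this logic is exactly where the verdict section's accuracy concern bites: a false negative on an exclusion criterion enrolls a patient who should never have been enrolled.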

Second, it conducts voice-based screening interviews with potential participants. Not text-based chatbot interactions. Actual voice conversations where the AI asks screening questions, processes the responses, and determines eligibility. This is where the conversational AI expertise of the founding team becomes relevant. A screening call that takes a human coordinator thirty to sixty minutes can be conducted by an AI agent at any time of day, in multiple languages, without scheduling constraints or fatigue.

Third, it schedules qualified patients for their in-person visits, handling the logistics that are currently managed through a combination of phone tag and calendar management.

The vision extends beyond recruitment. Phases aims to “run the entire clinical trial back-office with agents.” That means expanding from patient recruitment into regulatory document management, site monitoring, adverse event tracking, and data management. The recruitment piece is the wedge. The back-office is the platform.

The platform integrates with existing research site tools, which is a smart architectural decision. Research sites are not going to rip out their existing EHR systems, CTMS platforms, or EDC tools to adopt a new product. Phases needs to work alongside whatever is already in place.

The Verdict

I think Phases is attacking one of the clearest inefficiencies in healthcare. The clinical trial recruitment bottleneck is well-documented, economically significant, and stubbornly resistant to previous technology solutions because the previous solutions addressed data access rather than labor reduction. Polly directly replaces coordinator hours, which is the actual binding constraint.

The founding team is almost unreasonably well-matched to this problem. You need someone who understands pharma workflows (Wall with J&J experience), someone who can build production AI systems (Zaliznyi from Palantir), and someone who can scale the engineering (van Wersch from Meta and Improbable). It is hard to construct a more relevant three-person team on paper.

The risk is regulatory. Healthcare AI products face scrutiny that developer tools and consumer apps do not. If Polly makes a screening error and an ineligible patient enrolls in a trial, the consequences range from bad to catastrophic. The accuracy bar is not 95%. It is effectively 100% for exclusion criteria, and the regulatory framework for AI-conducted medical screening interviews is still evolving.

At thirty days, I want to see how many research sites are using Polly and what the error rate looks like on eligibility determinations. At sixty days, whether the voice screening interviews are producing results that coordinators trust enough to skip their own verification step. At ninety days, the question is whether Phases can move from recruitment into the broader clinical trial back-office, or whether the regulatory complexity of each additional function slows the expansion. The $312 million per delayed trial number is real. If Phases can shave even weeks off that timeline, the value proposition sells itself.