The Macro: The AI Agent Orchestration Problem Is Real and Getting Worse
Every company I talk to right now has the same problem. They have AI tools. They have agents. They have automations scattered across six different platforms. And they still have people doing repetitive work because nobody has figured out how to reliably route tasks between humans and AI systems.
The workflow automation market is not new. Zapier has been connecting apps since 2011. Make (formerly Integromat) handles more complex multi-step flows. n8n is the open-source option. Retool and Superblocks give you internal tools. These products are all good at what they do. But they were built for a world where automations are deterministic. Trigger X, do Y. If condition, then action. They were not designed for a world where some steps should go to an AI agent that might need to reason about the task, and other steps need human review because the stakes are too high for a model to handle alone.
The agent framework space is moving fast but in a different direction. LangChain, CrewAI, AutoGen, and a dozen others help developers build multi-agent systems. But these are developer tools. They assume you have engineers who can write code, define agent behaviors, and maintain the pipelines. Most companies are not there. Most companies have ops teams, support teams, and marketing teams who are doing repetitive work and have zero ability to build AI agent pipelines themselves.
The gap is clear. There is a class of work that is too nuanced for Zapier, too repetitive for humans, and too enterprise for a LangChain notebook. Someone needs to build the orchestration layer that sits in between and makes routing decisions in real time: this task goes to an AI agent, this one goes to a person, this one goes to an AI agent with a human review step at the end.
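Trace's actual routing logic is not public, and nothing below describes it. Purely as illustration of the three outcomes described above, here is a minimal, hypothetical router over made-up task attributes (`stakes`, `routineness`) with hardcoded thresholds:

```python
from dataclasses import dataclass
from enum import Enum

class Route(Enum):
    AI_AGENT = "ai_agent"            # fully automated
    HUMAN = "human"                  # a person handles it
    AI_WITH_REVIEW = "ai_with_review"  # agent does the work, human signs off

@dataclass
class Task:
    description: str
    stakes: float       # 0.0 (trivial) to 1.0 (business-critical) -- invented attribute
    routineness: float  # 0.0 (novel) to 1.0 (seen many times)     -- invented attribute

def route(task: Task) -> Route:
    """Toy policy: routine low-stakes work goes straight to an agent,
    high-stakes work goes to a person, and everything in between gets
    an agent pass with a human review step at the end."""
    if task.routineness > 0.8 and task.stakes < 0.3:
        return Route.AI_AGENT
    if task.stakes > 0.7:
        return Route.HUMAN
    return Route.AI_WITH_REVIEW

print(route(Task("Tag a support ticket", stakes=0.1, routineness=0.9)))
print(route(Task("Approve an enterprise refund", stakes=0.9, routineness=0.5)))
```

The interesting product question is not this if/else, which any workflow tool can express, but where the thresholds and attributes come from: in a static tool they are hand-configured rules, while a learning system would derive them from observed outcomes, which is presumably the job of the context engine discussed below.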
The Micro: Two Founders Building the Operating System for Human-AI Work
Trace was founded by Tim Cherkasov (CEO) and Artur Romanov (CTO). They came through Y Combinator's Summer 2025 batch as a three-person team based in San Francisco, and have since raised $3 million in funding with over 30 companies using the platform.
The core idea is a workflow orchestration platform that integrates with the tools companies already use. Slack, Jira, Notion. Trace watches the activity across these systems, identifies patterns of repetitive work, and surfaces automation opportunities. When it finds them, it routes tasks to AI agents for the predictable work and to humans for the parts that need judgment.
What makes this different from a Zapier alternative is the context engine. Trace claims to build a company-wide understanding of how teams work by analyzing internal systems. It learns which tasks are routine, which ones have high stakes, and which ones require specific expertise. That context is what allows the routing decisions to be intelligent rather than rule-based.
The non-technical user angle is important. Trace is designed so that ops teams and managers can build workflows without writing code. This is the correct bet for a workflow product in 2026. The bottleneck at most companies is not engineering capacity to build automations. It is the ability for non-engineers to define and modify the workflows themselves without filing a Jira ticket and waiting three sprints.
Thirty companies in production is a meaningful number for an enterprise-adjacent tool less than a year old. It suggests the product is actually usable, not just demo-able. Enterprise workflow tools that get to 30 customers fast tend to have high retention because the switching cost increases as the system learns more about the organization.
The Verdict
I think Trace is positioned in exactly the right gap. The space between deterministic automation and full AI agent autonomy is where most companies actually live. The work is not clean enough for simple rules but not dangerous enough to justify keeping humans on every step. A smart router that makes the assignment decision in real time is the product this moment demands.
The competitive risk is real. Zapier just launched AI features. Make is adding agent capabilities. The big workflow platforms are all moving toward exactly this use case. Trace has a head start but the incumbents have distribution, brand recognition, and existing customer relationships.
In 30 days I want to see the accuracy of the routing decisions. What percentage of tasks sent to AI agents actually get completed correctly without human intervention? That metric is the whole product.
In 60 days the question is whether the context engine is learning. Does Trace get better at routing decisions after a month of observing a company’s workflows? If the system is static, it is just another automation builder with a nice UI. If it is genuinely learning, it becomes harder to replace every week.
In 90 days I want to understand the pricing and expansion model. Workflow tools that charge per run create a perverse incentive: the more work a customer automates, the more they pay, so efficiency gets penalized. The right model rewards adoption. Getting this wrong is a common and avoidable mistake in the category.
The founding team, the early traction, and the market timing all look strong. If Trace can stay ahead of the incumbents adding AI features and keep the product simple enough for non-technical users, this is a company that could define how organizations divide work between people and machines.