The Macro: Operations Work Is the Dark Matter of Every Company
Here is what nobody puts on their LinkedIn but everyone does at work: operations tasks. Updating the CRM after a call. Generating the weekly report from three different data sources. Chasing an invoice that is 30 days overdue. Creating a Jira ticket from a Slack conversation. These tasks are not hard. They are not interesting. But they consume hours every week, and when they do not get done, things fall apart.
The tools exist to do all of this manually. Notion, Jira, HubSpot, Stripe, Gmail, Google Workspace. The problem is not that the tools are bad. The problem is that the work of moving information between tools, formatting it correctly, and following up on the output is tedious enough that people procrastinate on it. And when they finally do it, they are doing it at the expense of whatever their actual job is supposed to be.
Automation platforms have tried to solve this for years. Zapier connects apps with triggers and actions. Make (formerly Integromat) offers more complex workflows. Tray.io targets enterprise automation. But all of these require someone to build the automation first. You need to define the trigger, map the fields, handle the edge cases. For many ops tasks, the setup time exceeds the time saved, especially for one-off or infrequent tasks.
The AI agent wave is changing this equation. Instead of pre-building an automation, you just tell an agent what you want done in natural language, and it figures out the steps. The question is whether these agents are reliable enough to handle real ops work, where errors have consequences, or whether they are just a faster way to create slightly wrong outputs.
The Micro: Pearl Learns Your Preferences and Promotes Tasks to Workflows
Bubble Lab lets companies deploy Pearl, an AI agent that lives in Slack. You ask Pearl to do something, like “pull last week’s revenue from Stripe and summarize it in Notion,” and it does it. The product connects to over 25 integrations including Gmail, Jira, Notion, GitHub, Postgres, Stripe, and more.
The founding team is Selina Li, CEO and cofounder, and Zach Zhong, CTO. Selina previously cofounded gymii.ai. Zach comes from a CS background at Cornell Tech and UCSD with a self-described obsession with automating everything. The company went through Y Combinator’s W26 batch and is also part of the NVIDIA Inception Program.
What makes Bubble Lab different from a generic AI chatbot connected to APIs is the workflow promotion feature. When you ask Pearl to do something once, it runs it as a flexible AI agent task. But when that task becomes repeatable, Bubble Lab automatically converts it into a deterministic workflow. This is a smart architectural decision. AI agents are flexible but unpredictable. Deterministic workflows are rigid but reliable. By starting with an agent and graduating to a workflow, you get the best of both.
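The promotion pattern can be sketched in a few lines. This is a hypothetical illustration of the general agent-to-workflow idea, not Bubble Lab's actual implementation; all names (`TaskRouter`, `promote_after`, the agent callable) are invented for the example:

```python
from collections import Counter

class TaskRouter:
    """Route requests to a flexible agent, and promote requests that
    repeat often enough into fixed, deterministic workflows.

    Illustrative sketch only -- not Pearl's real architecture.
    """

    def __init__(self, promote_after=3):
        self.counts = Counter()   # how often each request signature has run
        self.workflows = {}       # signature -> frozen list of steps
        self.promote_after = promote_after

    def handle(self, signature, agent_run):
        # If a deterministic workflow exists, replay it: same steps every time.
        if signature in self.workflows:
            return ("workflow", self.workflows[signature])

        # Otherwise let the agent plan and execute the steps this one time.
        steps = agent_run(signature)
        self.counts[signature] += 1

        # Once the same request has repeated enough, freeze the agent's
        # plan into a workflow so future runs are predictable.
        if self.counts[signature] >= self.promote_after:
            self.workflows[signature] = steps
        return ("agent", steps)
```

The design choice the sketch captures is the trade the article describes: the agent path is flexible but re-plans every time, while the promoted path gives up flexibility for repeatability once a request has proven it is routine.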
The long-term memory feature is also worth noting. Pearl claims to learn your preferences, your tone, and your shortcuts over time. If this actually works, it means the agent gets more useful the longer you use it, which creates switching costs. Once Pearl knows that “weekly report” means “pull data from these three sources, format it this way, and post it to this channel,” rebuilding that context in a competing product is a real barrier.
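The switching-cost mechanic is easy to see in a toy form. Again a hypothetical sketch, with invented names (`PreferenceMemory`, `learn`, `resolve`), assuming a simple shorthand-expansion store rather than anything Pearl actually does:

```python
class PreferenceMemory:
    """Map a user's shorthand phrases to the full instructions they
    have come to mean. Illustrative only, not Pearl's implementation."""

    def __init__(self):
        self.shortcuts = {}   # phrase -> expanded instruction

    def learn(self, phrase, expansion):
        self.shortcuts[phrase] = expansion

    def resolve(self, request):
        # Expand learned shorthand; fall back to the literal request.
        return self.shortcuts.get(request, request)
```

Even in this trivial form, the lock-in is visible: the value lives in the accumulated `shortcuts` mapping, and a competing product starts with that dictionary empty.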
The security credentials are solid for an early-stage company: SOC 2 Type 1 certification, Tier 2 CASA verification, and end-to-end encryption. For a product that connects to your CRM, email, and billing systems, security is not optional. Plenty of companies will not even evaluate an operations tool without SOC 2.
Deployment, the company claims, takes under a minute: you install Pearl in Slack and start asking for things. This is a fundamentally different adoption model than Zapier or Make, where you spend hours configuring your first workflow before you see any value. If Pearl can deliver useful output within minutes of installation, the time-to-value is exceptional.
The competitive space is getting crowded. Dust.tt is building similar AI agent infrastructure. Bardeen automates browser-based workflows. Lindy.ai offers AI assistants for various business functions. But the Slack-native approach is a specific bet on where knowledge workers actually live. If your team communicates in Slack, having the ops agent right there in the conversation reduces friction to essentially zero.
The Verdict
I think the workflow promotion concept is the most interesting idea here. Most AI agent products stop at “it can do things when you ask.” The leap to “it notices patterns and creates reliable automations from them” is where the real value compounds.
At 30 days, I would want to see error rates on agent-executed tasks. If Pearl updates the wrong CRM field or generates an inaccurate report, the trust breaks fast. Ops work tolerates slowness but not inaccuracy.
At 60 days, the question is how many integrations are deep versus shallow. Connecting to 25 apps is easy. Being able to do complex, multi-step operations within those apps is hard. Can Pearl handle “create a Jira epic with three subtasks based on this Slack thread and assign them to the right people based on the GitHub commit history”? That is the bar.
At 90 days, I would be looking at expansion within accounts. Does the first team that adopts Pearl drive adoption to other teams? Slack-native tools spread virally within organizations because people see them working in shared channels.
The best ops tools are the ones you forget are running. If Pearl reaches that level of reliability, it is going to be hard to rip out.