As labs scale automation, teams often face a tradeoff between systems that are easy to adopt at the bench and those that can scale across complex, multi-instrument workflows. In this webinar, Opentrons hosts HighRes to introduce a new model that bridges that gap: the industry’s first agent-to-agent laboratory workflow.
James Branigan (SVP, Software) and Crystal McKinnon (Principal Product Manager, Cellario OS) from HighRes will showcase how Opentrons Flex robotics and AI-driven workflow creation integrate with HighRes' Cellario OS™ orchestration software and FlexPod® automation platform. Together, these technologies enable scientists to move from natural-language experiment intent to reliable, physical execution and scale seamlessly as workflows grow in complexity. The session includes a walkthrough of a semi-automated qPCR workflow and a discussion of how open, interoperable systems support long-term automation strategies.
Moderated by Mike Carderelli
For decades, lab automation has been about building better robots. The next chapter is about making those robots smarter and making them work together. A recent webinar hosted jointly by Opentrons and HighRes offered a glimpse of what that future looks like today, including a live demonstration of an AI-driven, multi-instrument workflow executed almost entirely through natural language.
Here's what was covered, and why it matters.
Opentrons has shipped over 10,000 lab robots globally, serving more than 40 countries. More than 90% of top pharma and biopharma organizations now have an Opentrons robot in their labs, with over a thousand new robots shipping each year. That's roughly three installations every single day.
That scale reflects something important: the hardware problem of lab automation is largely solved, at least at the bench level. The new challenge is intelligence. How do you take a lab full of capable instruments and make them act as a unified, responsive system?
That's the question both companies are now focused on answering.
Opentrons frames the autonomous lab as an infrastructure problem with three interdependent layers:
Intent translation — converting scientific goals into executable robot protocols. This is the domain of large language models, and it's where Opentrons AI (launched in 2024) plays a central role. Scientists can describe what they want in plain language, upload reagent kit PDFs or CSV files, and have the system generate a complete, executable Python protocol for their liquid handler.
Perception™ — giving robots the ability to see and respond to their environment. Opentrons is developing vision language models and has already demonstrated computer vision capabilities at SLAS, allowing robots to detect errors like failed plate seals or imminent collisions before they become costly problems. This work is being developed in collaboration with NVIDIA and is live in production at Recursion Pharmaceuticals.
Execution — translating intent and perception into reliable robotic action through vision language action models (VLAs). This is the cutting edge of the roadmap: getting from a scientific question to physical robot movement with minimal human intervention in between.
HighRes SVP of Software James Branigan made a point worth dwelling on: AI is not just an add-on to automation; it's changing the fundamental pace of science itself.
As AI tools allow researchers to generate hypotheses and design experiments faster than ever before, the bottleneck shifts. It's no longer scientific thinking that limits the speed of drug discovery. It becomes the lab's ability to actually run those experiments. Organizations that have built a reliable, flexible automation foundation will be able to compete at the new pace. Those that haven't will fall progressively further behind.
This means the foundation has to be right. If robots fail frequently, if workflows break down, if data is unreliable — closed-loop experimentation is impossible. That's why both companies are investing heavily in uptime, error detection, and automated recovery before layering in more AI capability.
The centerpiece of the webinar was a live demonstration of an AI-assisted qPCR workflow — a real multi-instrument experiment designed, executed, and analyzed almost entirely through conversational prompts.
Here's what the workflow looked like from start to finish:
Step 1. Protocol design via natural language. Using HighRes' Cellario Lab Assistant™, a scientist types a plain-language description of a serial dilution protocol. The system asks a few clarifying questions — pipette type, volumes, starting column — then generates a complete Python-based Opentrons protocol, communicating directly with the Opentrons MCP server. No manual protocol builder. No scripting required.
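To make what that generated protocol encodes concrete, here is a minimal sketch (plain Python, not the actual Opentrons API or the assistant's real output) of the arithmetic behind a 1:2 serial dilution: each well receives a fixed transfer from its neighbor, dividing the concentration by the dilution factor at every step.

```python
def serial_dilution_concs(start_conc, dilution_factor, n_wells):
    """Concentration in each well of a serial dilution series.

    start_conc: concentration in the first well (e.g. ng/uL)
    dilution_factor: fold-dilution per step (2 for a 1:2 series)
    n_wells: number of wells in the series
    """
    concs = [start_conc]
    for _ in range(n_wells - 1):
        concs.append(concs[-1] / dilution_factor)
    return concs

# A 1:2 series starting at 100 ng/uL across six wells:
print(serial_dilution_concs(100.0, 2, 6))
# -> [100.0, 50.0, 25.0, 12.5, 6.25, 3.125]
```

The clarifying questions the assistant asks (pipette type, volumes, starting column) map directly onto parameters like these; the generated Opentrons protocol then turns the series into physical transfer and mix steps.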
Step 2. Workflow creation and order placement. Still in the lab assistant, the scientist asks what qPCR workflows are available, gets a summary, and places an order, again in natural language. The system commits the workflow to the lab, timestamps it, and puts it in the task queue.
Step 3. Guided execution in the lab. On a lab iPad, the operator opens the pending order and is walked through each step via digital SOPs. When manual actions are needed (loading labware, moving a plate), the system provides step-by-step instructions. Automated steps (the Opentrons Flex running the serial dilution, the Precise Drop II™ dispensing master mix, the QTower opening its door to receive the plate) happen automatically, without the operator needing to touch each instrument's individual software.
Step 4. Data analysis via conversation. Back at the computer, the scientist asks the lab assistant to pull up the completed run and generate a heat map of the qPCR output. The assistant interprets the request, writes Python code on the fly, runs it, and returns a visualization. No spreadsheet exports. No manual analysis setup.
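The core of code like the assistant writes here is reshaping flat instrument output into a plate-shaped matrix before plotting. A minimal sketch, assuming results arrive as (well, Cq) pairs such as ("A1", 21.3) — the actual export format will vary by instrument:

```python
def cq_matrix(results, rows=8, cols=12):
    """Arrange flat (well, Cq) qPCR results into a plate-shaped grid,
    the layout a heat-map plot expects. Wells are named 'A1'..'H12'."""
    grid = [[None] * cols for _ in range(rows)]
    for well, cq in results:
        r = ord(well[0].upper()) - ord("A")  # row letter -> 0-based index
        c = int(well[1:]) - 1                # column number -> 0-based index
        grid[r][c] = cq
    return grid

grid = cq_matrix([("A1", 21.3), ("A2", 22.4), ("H12", 35.0)])
# grid[0][0] == 21.3; grid[7][11] == 35.0; unmeasured wells stay None
```

A plotting library then renders the grid directly; the point is that none of this reshaping requires a manual spreadsheet step.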
The whole experience, from typing a prompt to reviewing analyzed results, represented a genuinely new way of working in the lab.
A few things stand out about this demonstration as distinct from earlier visions of the automated lab.
First, it's explicitly designed for hybrid environments. The demo didn't involve a fully integrated robotic system. A human carried plates between instruments. Some steps were manual. The orchestration layer handled the coordination, data capture, and device triggering regardless — because that's the reality of most labs today.
Second, the data continuity is built in. Throughout the workflow, the system captured what was intended, what actually happened, and what the instruments returned. That package of contextual data gets automatically routed to downstream systems (e.g., ELNs, LIMS, analytics platforms) without manual file handling.
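A hedged sketch of what that contextual package might look like as a single routable record — the field names here are illustrative, not Cellario's actual schema:

```python
import json

def build_run_record(order_id, intended, actual, instrument_output):
    """Bundle intent, execution log, and instrument results into one
    JSON document that downstream systems (ELN, LIMS, analytics) can
    ingest without manual file handling. Field names are illustrative."""
    return json.dumps({
        "order_id": order_id,
        "intended_steps": intended,              # what the workflow specified
        "executed_steps": actual,                # what actually happened
        "instrument_output": instrument_output,  # e.g. raw qPCR readings
    })

record = build_run_record(
    "qpcr-0042",
    intended=["serial_dilution", "dispense_master_mix", "qpcr_run"],
    actual=["serial_dilution", "dispense_master_mix", "qpcr_run"],
    instrument_output={"A1": 21.3},
)
```

Keeping intent, execution, and results in one document is what makes the downstream hand-off automatic: any consuming system gets the full context, not just a results file.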
Third, the AI integration goes agent-to-agent. Cellario Lab Assistant doesn't just talk to the scientist. It talks directly to the Opentrons AI via MCP server, enabling one AI system to generate protocols on behalf of another. This is a meaningful step toward the kind of autonomous, coordinated experimentation that's been discussed in theory for years.
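MCP itself is a JSON-RPC 2.0 protocol in which one agent invokes another's tools by name. A minimal sketch of the envelope such an agent-to-agent call travels in — the tool name and arguments below are hypothetical, not the real Opentrons MCP interface:

```python
import json

def build_tool_call(request_id, tool_name, arguments):
    """Wrap an MCP tool invocation in its JSON-RPC 2.0 envelope.
    'tools/call' is the standard MCP method for invoking a server tool."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical call: Cellario Lab Assistant asks the Opentrons MCP server
# to generate a protocol (tool and argument names are illustrative).
msg = build_tool_call(1, "generate_protocol", {
    "description": "1:2 serial dilution, 8-channel pipette, start column 1",
})
```

Because the envelope is standardized, any MCP-speaking agent can call any MCP server's tools; that's what makes the agent-to-agent handoff between two vendors' AI systems possible without bespoke integration.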
The presenters were refreshingly candid about limitations and open questions:
Consistency. Because the system uses large language models, identical prompts can occasionally produce different outputs. The teams have done significant work to push results toward consistency, but LLM variability remains a real consideration in production environments.
GXP compliance. Cellario OS captures all the protocol steps, metadata, and data associations that regulated environments require, but the platform is not yet formally GXP certified. It can meaningfully support validation workflows, but customers in regulated environments should discuss specifics with the HighRes team.
Guard rails in brownfield environments. For labs where workflows are already well-defined, the recommended approach mirrors good software development practice with AI: give the system smaller, more explicit tasks; be prescriptive about what you want; don't leave room for ambiguity. The better the prompting, the better the output.
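One concrete form such guard rails can take is validating generated output against an explicit allow-list before anything executes. A minimal sketch, with a hypothetical step schema (not Cellario's or Opentrons' actual format):

```python
ALLOWED_ACTIONS = {"transfer", "mix", "delay"}

def validate_steps(steps):
    """Check AI-generated protocol steps against an allow-list and basic
    sanity rules, returning a list of problems (empty list = pass).
    The step schema here is illustrative."""
    problems = []
    for i, step in enumerate(steps):
        if step.get("action") not in ALLOWED_ACTIONS:
            problems.append(f"step {i}: unknown action {step.get('action')!r}")
        vol = step.get("volume_ul", 0)
        if not isinstance(vol, (int, float)) or vol < 0 or vol > 1000:
            problems.append(f"step {i}: volume out of range")
    return problems

# A well-formed plan passes; an out-of-scope action is caught before it runs:
assert validate_steps([{"action": "transfer", "volume_ul": 50}]) == []
```

The narrower the allowed vocabulary, the less room LLM variability has to cause harm — the same principle as the prescriptive prompting the presenters recommend, enforced mechanically.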
Device support. Cellario® currently supports around 500 instruments via its driver catalog, with the full library still being integrated. Custom or one-off instruments can be supported via a driver development kit.
Both companies are actively seeking partners to advance their AI initiatives on protocol generation, vision models, and the emerging VLA layer. For labs considering whether this approach makes sense before a fully integrated system is in place: the answer from both teams was yes. The value of orchestrating semi-automated and manual workflows, capturing consistent data, and reducing the software burden on operators doesn't require a fully robotic floor to be worth pursuing.
The "lab of the future" demonstrated in this webinar isn't a concept video. It's running software, on real instruments, producing real data. The gap between where the industry is today and fully autonomous closed-loop experimentation is narrowing fast, and the labs building their foundation now are the ones that will be positioned to move at the speed AI is about to make possible.
Interested in learning more? Book a demo!