
From Robots to Results: Orchestration for the Modern Drug Discovery Lab

Discover how orchestration transforms modern drug discovery labs by centralizing automation, enhancing efficiency, and enabling seamless integration of workflows and data for groundbreaking results.

Across drug discovery organizations, a quiet but profound shift is underway. Automation is no longer confined to individual labs or dedicated project teams. Instead, it is increasingly centralized into shared facilities that function as internal service providers for the broader enterprise.

High-throughput screening groups, sample management teams, tissue culture labs, and assay development teams now sit at the core of these centralized environments. They support diverse internal customers, run highly complex workflows across multiple automation systems, and are expected to deliver high-quality, complete datasets at scale.

This evolution is transforming automation from a collection of robotic systems into something much larger: an operational fleet that must be coordinated, scheduled, monitored, and optimized, often across an entire organization.

Orchestration – true fleet management for drug discovery – is the architectural response to this change.

The Rise of the Centralized Automation Facility

Today’s core automation labs are asked to do far more than run instruments. They must act as both operational hubs and service platforms, balancing demand from multiple teams while maintaining efficiency and data integrity.

A central assay-ready cell plate facility, for example, may simultaneously support screening teams, biology groups, and external partners. A multifunctional screening lab might run workflows that span liquid handlers, incubators, detection systems, standalone devices, and manual intervention steps – all within the same experimental flow – while remaining capable of servicing a variety of assays.

What makes this environment uniquely challenging is that different stakeholders care about fundamentally different aspects of the same experiment.

  • Scientists requesting work focus on what they want to do – the assay type, experimental design, plate formats, parameters, and outcomes.
  • Lab operators care deeply about how the experiment is executed – sequencing of steps, resource constraints, system dependencies, and error handling.
  • Automation specialists are responsible for ensuring systems are ready for scientific execution, translating experimental intent into executable workflows, maintaining protocols, and managing system readiness and runtime visibility.
  • Data scientists and other data consumers care about the completeness, continuity, and context of the results.
  • Lab managers and facility leaders need visibility into who did what, when, and where, and how effectively resources are being used across the entire operation.

Meeting all these needs simultaneously requires more than scheduling tools or instrument control software. It requires orchestration.

 

 

Video 1. Natural Language workflow creation with Cellario Lab Assistant

 

The Hidden Challenge: Preparing Systems for Science

Behind every successful automated experiment is an often-overlooked role – the automation specialist.

These professionals act as the translators between scientific intent and machine execution. When a scientist submits an experimental request, it rarely arrives in a form that automation systems can immediately run. Instead, automation specialists must interpret assay designs, SOPs, and manufacturer documentation to construct executable workflows that align with the capabilities and constraints of the automation fleet.

This preparation work is both technically demanding and time intensive. It involves defining the requirements and step sequences, configuring system parameters, validating resource compatibility, and ensuring protocols can execute reliably across multiple devices.

Beyond workflow creation, automation specialists are also responsible for maintaining operational readiness. They must monitor system states, respond to errors, manage recovery actions, and ensure experiments continue running smoothly when disruptions occur.

As facilities scale, this role becomes a critical bottleneck, and the burden on the team escalates. The time required to translate scientific requests into executable automation workflows can limit throughput, slow onboarding of new assays, and reduce the agility of centralized labs.

Orchestration platforms help address this challenge by accelerating and standardizing workflow definitions, embedding execution intelligence into the system, and providing real-time visibility into run status and system health.

 

Where the Operational Friction Emerges

As facilities scale, several systemic challenges consistently appear.

One of the most persistent is coordinating workload across multiple systems. Modern workflows rarely run on a single platform; they often involve a chain of devices operating in sequence or in parallel, with dependencies that span hours or even days. Managing this manually quickly becomes unsustainable.

Facilities also struggle to expose their capabilities clearly to internal customers. Scientists frequently do not know what resources are available, what parameter ranges are supported, or how long workflows will take. As a result, request intake often becomes informal, inconsistent, and difficult to translate into executable plans.

The dynamics of a real lab present another major obstacle. Automated workflows are not static reservations; they require continuous adjustment as system availability changes, delays occur, or new high-priority requests arrive. Traditional calendar-based scheduling simply cannot account for this level of complexity.

Finally, transparency remains a critical gap. Stakeholders want real-time insight into experiment progress, system state, and data generation. Without a unified operational view, information becomes fragmented across systems and teams.

 

How Orchestration Changes the Model

Orchestration platforms such as Cellario OS™ address these challenges by shifting the focus from individual devices to the flow of experimental intent through execution and data delivery.

At a technical level, this transformation is driven by a structured flow of information.

It begins with an experimental definition. Instead of submitting free-form requests, scientists describe their work through structured workflows. These workflows capture not just the steps involved, but the parameters, constraints, and resource requirements that define how the experiment should be executed.
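To make the idea of a structured experimental definition concrete, here is a minimal Python sketch of how such a request might be modeled. The class and field names are hypothetical illustrations, not Cellario OS's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One executable action, e.g. a dispense or a plate read."""
    name: str
    device_class: str                 # kind of resource required, e.g. "liquid_handler"
    parameters: dict = field(default_factory=dict)

@dataclass
class Workflow:
    """A structured experimental request: steps plus global constraints."""
    assay: str
    plate_format: int                 # e.g. 96, 384, 1536
    steps: list[Step] = field(default_factory=list)
    constraints: dict = field(default_factory=dict)   # e.g. {"max_dwell_min": 30}

# A hypothetical request, capturing parameters and resource requirements up front.
request = Workflow(
    assay="kinase_inhibition",
    plate_format=384,
    steps=[
        Step("dispense_compounds", "liquid_handler", {"volume_ul": 2.5}),
        Step("incubate", "incubator", {"temp_c": 37, "duration_min": 60}),
        Step("read_luminescence", "plate_reader", {"integration_ms": 500}),
    ],
    constraints={"max_dwell_min": 30},
)
```

The point is that every field a scheduler or data consumer later needs is declared at request time, rather than buried in a free-form email.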

This information feeds into a guided reservation layer, where requests are translated into operational plans. The system evaluates capability matches across available resources, identifies suitable devices or platforms, and guides the user to make reservations that embed the experimental parameters within them. This effectively bridges the gap between scientific intent and operational execution. For modular automation systems, this includes coordinating docked resources; for multi-device workflows, it means aligning sequential system usage without bottlenecks.
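The capability-matching step described above can be sketched as a simple filter over a fleet inventory. This is an illustrative simplification (device names and fields are invented), not how Cellario OS implements reservations:

```python
def match_devices(steps, fleet, plate_format):
    """For each workflow step, list the fleet devices whose device class and
    supported plate formats satisfy it.  Returns {step_name: [device_id, ...]}."""
    plan = {}
    for step in steps:
        plan[step["name"]] = [
            d["id"] for d in fleet
            if d["device_class"] == step["device_class"]
            and plate_format in d["plate_formats"]
        ]
    return plan

# Hypothetical fleet inventory and request.
fleet = [
    {"id": "LH-1", "device_class": "liquid_handler", "plate_formats": [96, 384]},
    {"id": "LH-2", "device_class": "liquid_handler", "plate_formats": [1536]},
    {"id": "READER-1", "device_class": "plate_reader", "plate_formats": [96, 384, 1536]},
]
steps = [
    {"name": "dispense", "device_class": "liquid_handler"},
    {"name": "read", "device_class": "plate_reader"},
]
plan = match_devices(steps, fleet, plate_format=384)
# plan["dispense"] -> ["LH-1"]; plan["read"] -> ["READER-1"]
```

A real reservation layer would also weigh availability windows, queue depth, and sequential dependencies; the sketch shows only the capability-match core.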

From there, orchestration moves into execution coordination. The platform highlights each step to be run, guiding the user through what comes next and drawing attention to critical actions such as manual interventions, standalone device operation, or system tear-down.

Figure 1. Calendar view in Cellario OS

As execution begins, real-time telemetry flows back into the orchestration layer. Device status updates, run progress, and data outputs are monitored continuously, creating a live operational model of the facility. This enables automated status tracking, proactive issue detection, and clear visibility for both operators and stakeholders.
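The "live operational model" built from telemetry can be pictured as folding a stream of device events into a current-state view of the facility. The event shapes and device names below are invented for illustration:

```python
# Hypothetical telemetry events as they might arrive from devices.
events = [
    {"device": "LH-1", "state": "running", "run_id": "R42", "progress": 0.6},
    {"device": "INC-2", "state": "idle"},
    {"device": "READER-1", "state": "error", "run_id": "R42", "code": "E_DOOR_OPEN"},
]

def live_model(events):
    """Fold a telemetry stream into a current-state view plus an alert list."""
    facility = {}
    alerts = []
    for e in events:
        facility[e["device"]] = e          # last event wins per device
        if e["state"] == "error":
            alerts.append((e["device"], e.get("code")))
    return facility, alerts

facility, alerts = live_model(events)
# alerts -> [("READER-1", "E_DOOR_OPEN")]
```

Proactive issue detection then amounts to acting on `alerts` before an operator has to go looking across individual instrument UIs.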


Figure 2. Runtime queue for all requests flowing through Cellario OS

Throughout the process, contextual metadata is captured alongside the experimental data itself. This ensures that when results are delivered, they include not only measurements but also the full execution context – who performed each step, which systems were used, when operations occurred, and under what conditions – a complete dataset for the experiment.
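Capturing that execution context can be as simple as stamping a provenance record on every step as it runs. A minimal sketch, with hypothetical field names (this is not Cellario OS's actual data model):

```python
import datetime

def record_step(run_log, step, device_id, operator, **conditions):
    """Append one execution event, stamping who/where/when alongside the what."""
    run_log.append({
        "step": step,
        "device": device_id,
        "operator": operator,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "conditions": conditions,
    })

run_log = []
record_step(run_log, "incubate", "INC-2", "j.doe", temp_c=37, duration_min=60)
record_step(run_log, "read_luminescence", "READER-1", "j.doe", integration_ms=500)
```

Delivered alongside the measurements, a log like this is what turns raw readouts into a reproducible, auditable record of the experiment.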

The result is a closed information loop that connects experimental intent, operational execution, and data delivery into a single continuous flow.

 

Enabling Both Operators and Scientists

One of the most powerful aspects of orchestration is its ability to serve very different user roles simultaneously.

This multi-layered visibility – tailored views for scientists, operators, specialists, and managers – enables facilities to move from reactive operations to proactive optimization.

Figure 3. System Utilization Data in Cellario OS

 

Beyond Integrated Systems

Importantly, orchestration is not limited to large integrated robotic systems. Modern facilities depend on a mix of technologies, including standalone instruments, third-party automation platforms, and manual processes.

True orchestration is technology-agnostic: it must unify all these elements into a single operational framework. When it does, even traditionally disconnected devices become part of a coordinated automation fleet.

In a modern lab, value comes from integration, not uniformity.

 

The Next Evolution: Natural Language and Digital Twins

The near future of lab orchestration is becoming even more intuitive and accessible.

With emerging capabilities such as Cellario Lab Assistant, users can now interact with automation environments through natural language: querying experiment status, requesting schedules, investigating system issues, or even creating new experimental workflows – all conversationally rather than by navigating complex interfaces.

For automation specialists, natural language capabilities have the potential to significantly reduce the effort required to prepare systems for execution. Instead of manually translating SOPs or experimental descriptions into workflow configurations, they can increasingly generate, refine, and validate workflows through conversational interaction (Video 1).

This allows specialists to focus less on repetitive configuration tasks and more on optimizing execution strategies, improving protocol robustness, and expanding facility capabilities. These tools can bridge the first 80% of the work in minutes, reducing workflow and protocol creation from weeks to days, or from days to hours.

At the same time, facilities are beginning to be represented as digital twins through tools such as Cellario Labs: real-time visual models that reflect system state, workflow progress, and operational health. These representations enable faster troubleshooting, clearer situational awareness, and more effective planning.

Again, addressing the specific challenges for the automation specialists running the fleet, digital twins provide an immediate operational advantage by making system states, workflow progress, and error conditions visible in a unified environment. Rather than diagnosing issues across multiple disconnected interfaces, they can quickly identify bottlenecks, trace failures, and guide recovery actions within a single contextual view of the facility.

Both products are currently in beta at HighRes but are already demonstrating incredible power with our early access partners. Together, these advances are transforming orchestration from a backend control system into an interactive operational partner.

 

A New Role for Central Automation Facilities

Operational Excellence in the Intelligent Lab

Figure 4. Operational Excellence in the Intelligent Lab

As automation continues to centralize, core labs are evolving beyond their traditional role as equipment operators. They are becoming strategic infrastructure, enabling scientific teams to access sophisticated capabilities without needing to manage operational complexity.

Orchestration transforms:

  • Order intake into structured workflow objects
  • Devices into schedulable resources
  • Experiments into traceable digital threads
  • Data into contextualized knowledge

Access and capability are being exposed across the organization in ways that enable operational excellence and scientific possibilities not previously considered achievable.

Orchestration is what makes this possible. By connecting workflows, systems, and data into a unified operational model, orchestration allows centralized facilities to scale efficiently while maintaining transparency, flexibility, and quality.

In doing so, it turns automation fleets into engines of discovery, delivering not just capacity, but clarity, coordination, and confidence across the entire R&D organization.
