
SLAS Solutions Spotlight: A New Era in Scientific Software

Discover how HighRes unveiled a new era of scientific software at SLAS2026, enabling intelligent lab automation and seamless orchestration with Cellario OS.

At SLAS this year, HighRes introduced a new era in scientific software.

For most of our history at HighRes, we’ve been known for building systems that work. HighRes delivers reliable, scalable automation platforms that power some of the most advanced laboratories in the world. That hasn’t changed. Our mission remains the same: to accelerate the rate at which life science organizations impact human health.

What is changing is how that mission comes to life.
The shift isn’t incremental. It’s foundational. And it’s driven by a simple realization: automation, as powerful as it has been, still reaches too few scientists.

If you want to see how we’re thinking about this transformation, you can watch the full overview here:

Video 1. HighRes' Solutions Spotlight talk from SLAS2026, "A New Era In Scientific Software," which showcases HighRes' current direction: a company focused on removing friction from science, enabling laboratories to operate as cohesive, responsive systems, and giving every scientist the freedom to push beyond yesterday’s limits.

What follows is a closer look at the ideas behind that moment, and what it means for the future of lab automation.

From Automation to Intelligent Automation

For years, lab automation has been defined by capability. Can it run the workflow? Can it scale throughput? Can it deliver reproducible results?

But as labs push toward autonomy, those questions are no longer enough.

To achieve the kind of reliability required for autonomous labs (four nines of availability and beyond), you need a different class of system. One built not just on automation, but on intelligence. Systems that can perceive their environment, understand context, and adapt in real time.
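To make "four nines" concrete: availability targets translate directly into a downtime budget. A quick calculation (the tier labels and targets here are standard reliability benchmarks, not HighRes specifications):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def downtime_minutes_per_year(availability: float) -> float:
    """Minutes of downtime allowed per year at a given availability level."""
    return MINUTES_PER_YEAR * (1.0 - availability)

# From typical IT targets up through "four nines":
for label, availability in [("two nines (99%)", 0.99),
                            ("three nines (99.9%)", 0.999),
                            ("four nines (99.99%)", 0.9999)]:
    print(f"{label}: {downtime_minutes_per_year(availability):.1f} min/year")
```

At four nines, the entire system gets less than an hour of downtime per year, which is why intelligent fault detection and recovery matter so much.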

This is where perception and physical AI come into play. Robots that can see. Systems that can detect issues before they become failures. Automation that doesn’t just execute but responds.

That shift from automation to intelligent automation is what enables both reliability and scale.

The Accessibility Problem We Can No Longer Ignore

Even with all the progress in automation, there’s a hard truth: most scientists still don’t use it directly.

There’s a breaking point where complexity outweighs value. A workflow fails, a system becomes difficult to troubleshoot, or the expertise required becomes too specialized. And when that happens, people revert to manual processes.

That’s not a technology problem. It’s an accessibility problem.

The next phase of lab automation isn’t about making systems more powerful; it’s about making them usable. It’s about putting those capabilities directly into the hands of the people who need them most.

That means reducing the learning curve. Simplifying interaction. And ultimately, removing the translation layer between scientific intent and execution.

Rethinking the Lab as a Connected System

When you look at how labs operate today, the fragmentation is clear.

Planning happens in one place. Execution happens somewhere else. Analysis is often disconnected entirely. Different teams own different pieces, and each system requires its own expertise.

The result is a workflow that’s technically functional, but operationally inefficient.

What we’re working toward is something fundamentally different: a unified lab environment where planning, execution, and analysis are connected end-to-end.

In this model, experiments don’t just run; they exist within a continuous loop. Data is captured automatically, with full context, and made immediately available for interpretation. The system becomes aware of the entire lifecycle of an experiment, not just individual steps.

That’s what enables a true closed-loop lab.
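The loop described above can be sketched in a few lines. This is a purely illustrative skeleton, assuming stub functions in place of real execution and analysis; none of the names are Cellario APIs:

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    design: dict                                  # planned parameters
    results: dict = field(default_factory=dict)   # captured with full context

def execute_and_capture(design: dict) -> dict:
    # Stub standing in for real execution: run the workflow, capture data.
    return {"signal": design["dose"] * 2.0}

def propose_next_design(history: list) -> dict:
    # Stub analysis step: interpretation feeds directly back into planning.
    return {"dose": history[-1].design["dose"] + 1}

def closed_loop(initial_design: dict, rounds: int) -> list:
    """Plan -> execute -> capture -> analyze -> plan, as one continuous loop."""
    history, design = [], initial_design
    for _ in range(rounds):
        experiment = Experiment(design=design)
        experiment.results = execute_and_capture(experiment.design)
        history.append(experiment)
        design = propose_next_design(history)
    return history

runs = closed_loop({"dose": 1}, rounds=3)
```

The point of the sketch is the shape, not the stubs: planning, execution, capture, and analysis live in one loop rather than in disconnected tools.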

The Building Blocks of the Intelligent Lab

To make that vision real, we think about the lab in a few core layers.

At the foundation is the execution engine, what we call Cellario OS™. This is the operating system for the lab, responsible for orchestrating instruments, workflows, and resources in a way that is deterministic, reliable, and always running. It lives at the edge because downtime isn’t an option.

On top of that sits intelligent automation, systems that can handle the unpredictability of real lab environments. The lab is dynamic. Things move, change, and occasionally fail. Automation has to be able to respond to that reality.

Then there’s integration. This is the part most people underestimate. Labs are made up of hundreds of devices, each with its own interface, behavior, and quirks. Making them work together is one of the hardest problems in automation.

Over the past 20 years, we’ve built a deep body of knowledge around that challenge. Every integration, every edge case, every recovery scenario. We’ve taken that knowledge and embedded it into our software in the form of Cellario Atlas™, a structured, accessible representation of how labs actually work. It becomes the system’s built-in expertise.
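To give a feel for what a "structured, accessible representation" of integration knowledge might look like, here is a hypothetical sketch. The device names, error codes, and schema below are invented for illustration; the post does not describe the actual Cellario Atlas format:

```python
# Purely illustrative: integration knowledge (device quirks, recovery steps)
# represented as structured data a system can query at runtime.
DEVICE_KNOWLEDGE = {
    "plate-sealer": {
        "known_errors": {
            "E17": {
                "meaning": "seal roll near empty",
                "recovery": ["pause queue", "alert operator", "resume after reload"],
            },
        },
        "quirks": ["requires 30 s warm-up before first seal"],
    },
}

def recovery_steps(device: str, error_code: str) -> list:
    """Look up the structured recovery procedure for a device error."""
    entry = DEVICE_KNOWLEDGE.get(device, {}).get("known_errors", {}).get(error_code)
    return entry["recovery"] if entry else ["escalate to specialist"]
```

Encoded this way, the knowledge becomes something the system itself can act on, rather than expertise locked in an integrator’s head.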

And finally, there’s the user experience layer, which supports the translation from scientific intent to execution.

Making Complex Operations Feel Simple

One of the biggest barriers in lab automation is interaction. Running a successful experiment requires more than executing a script. It requires managing a full procedure: preparing reagents, configuring instruments, ensuring conditions are correct, and capturing results in a reproducible way.

Traditionally, that complexity has been distributed across multiple tools and interfaces. What we’re doing instead is bringing it together.

By digitizing SOPs and embedding them into workflows, we ensure that every step is executed correctly and consistently. By simplifying interfaces, and increasingly moving toward natural language, we allow scientists to interact with systems in a way that feels intuitive.

Instead of navigating multiple software environments, users operate within a single framework. A single place to see what’s happening, what needs attention, and what comes next.

Seeing the Lab Through a Digital Twin

As labs become more complex, understanding the state of the system becomes harder.

That’s where the digital twin plays a critical role.

By creating a visual representation of the lab that mirrors the real world, we remove the need for interpretation. Scientists don’t have to translate between abstract data and physical reality; they can see the state of their environment directly.

What’s running? What’s idle? Where are the bottlenecks?

It’s all visible, in context, in a way that aligns with how people naturally understand their workspace.

Data as the Backbone of Intelligence

None of this works without data. And not just any data, but contextualized data.

Every action in the lab generates information. But without context (what was run, how it was run, under what conditions), that data has limited value.

In the model we’re building, every piece of data is captured automatically, tied to execution, and made accessible through a unified system. This creates a consistent, reliable foundation for analysis.

It also enables something much bigger: the ability to feed that data into AI models that can drive prediction, optimization, and ultimately, more intelligent experimentation.
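One way to picture contextualized data is a record that never travels without its provenance. The schema below is a hypothetical illustration, not a Cellario data model; every field name is an assumption:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ContextualReading:
    # Illustrative schema: the measurement itself plus the context
    # that gives it meaning.
    value: float
    unit: str
    workflow_id: str        # which workflow produced it
    step: str               # which step within that workflow
    instrument: str         # which device captured it
    captured_at: datetime   # when, in UTC

reading = ContextualReading(
    value=0.42, unit="AU",
    workflow_id="wf-2026-0001", step="absorbance-read",
    instrument="plate-reader-2",
    captured_at=datetime.now(timezone.utc),
)
```

Because execution context is attached at capture time, downstream analysis (or an AI model) never has to reconstruct where a number came from.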

From Intent to Execution in Minutes

One of the most tangible examples of this shift is how workflows are created and executed.

In the past, designing a workflow required deep knowledge of instruments, software, and protocols. It was a multi-step process involving multiple stakeholders.

Now, it can start with something much simpler: intent.

A scientist describes what they want to do. The system understands the available instruments, the required steps, and the constraints of the environment. It generates a workflow automatically.

From there, execution is guided step by step. The system ensures everything is ready, controls the instruments, captures the data, and makes it immediately available for analysis.

What used to take hours, or days, can now happen in minutes.
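A toy sketch of the intent-to-workflow idea: a stated intent is matched against a registry of capabilities the system knows about, producing an ordered list of executable steps. Simple keyword matching stands in here for the natural-language layer described above, and every name in this sketch is invented for illustration:

```python
# Hypothetical capability registry: what the system knows how to do,
# and the concrete steps each capability expands into.
CAPABILITY_REGISTRY = {
    "dilute": ["prepare diluent", "perform serial dilution"],
    "incubate": ["move plate to incubator", "incubate plate"],
    "read": ["move plate to reader", "run absorbance read"],
}

def generate_workflow(intent: str) -> list:
    """Translate a stated intent into steps the system knows how to run."""
    steps = []
    # Note: step ordering follows the registry, a simplification a real
    # planner would replace with ordering inferred from the intent itself.
    for keyword, actions in CAPABILITY_REGISTRY.items():
        if keyword in intent.lower():
            steps.extend(actions)
    return steps

workflow = generate_workflow("Dilute the samples, then read absorbance")
```

The value of the pattern is that the scientist states the "what" and the system supplies the "how": instrument selection, step expansion, and constraint checking happen behind the scenes.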

A New Era, Unveiled at SLAS

At SLAS, this wasn’t just a concept. It was something we demonstrated in practice.

The video above walks through that experience, from the foundational ideas behind intelligent automation to a real-world example of how workflows can be generated, executed, and analyzed within a single system.

It’s a glimpse of what becomes possible when reliability, accessibility, and orchestration come together.

If you’re interested in going deeper, I’d encourage you to explore Cellario OS and how it’s being used to orchestrate modern laboratories.

Because ultimately, this isn’t just about new software.

It’s about changing how science gets done.
