
Metrics 1.1 release notes

Visualize your worst and best performing assays in this Metrics release.

Written by Thomas Beuls
Updated over 3 months ago

This minor release delivers one of our users' most requested features: a ranked list of the best and worst performing assays across your lab.

We’re excited to make it available to you today.

With the release of Metrics 1.1, you’ll find three new widgets on your Assay Quality dashboard, giving you deeper insight into assay performance and data consistency across your lab.


Assay Automation

The Assay Automation widget expands your understanding of how automated your analyses are. It groups the following four metrics (a short sketch after this list shows how they relate):

  • AI Call Rate: This shows the call rate achieved by the AI immediately after data upload, before any user input. Since the AI performs best on clean, high-quality data, this metric serves as a strong proxy for overall data quality in the lab. Lower call rates may signal issues requiring manual review.

  • Manual Overrides: This reflects the percentage of data that your team changes from one class to another—providing a sense of how often human intervention is required.

  • Manual Completions: A subset of overrides, this tracks how often your team changes uncalled data to a definitive class (e.g., from uncalled to X:Y or X:X).

  • Manual Corrections: Also a subset of overrides, this tracks edits where one called class is changed to another, or back to uncalled (e.g., from X:Y to X:X, or X:Y to uncalled).
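To make these definitions concrete, here is a minimal sketch in Python of how the four rates relate to one another. The `Datapoint` shape, field names, and `automation_metrics` function are illustrative assumptions, not the product's actual data model; the point is that completions and corrections together account for every override.

```python
from dataclasses import dataclass

UNCALLED = "uncalled"

@dataclass
class Datapoint:
    ai_call: str     # class the AI assigned on upload, or "uncalled"
    final_call: str  # class after any manual edits, or "uncalled"

def automation_metrics(points: list[Datapoint]) -> dict[str, float]:
    # Assumes a non-empty list of datapoints for a given period.
    n = len(points)
    ai_called = sum(p.ai_call != UNCALLED for p in points)        # AI made a call on upload
    overrides = [p for p in points if p.final_call != p.ai_call]  # any manual change
    # Completions: uncalled -> a definitive class (e.g. uncalled -> X:Y).
    completions = sum(p.ai_call == UNCALLED for p in overrides)
    # Corrections: one called class -> another, or back to uncalled.
    corrections = len(overrides) - completions
    return {
        "ai_call_rate": ai_called / n,
        "manual_override_rate": len(overrides) / n,
        "manual_completion_rate": completions / n,
        "manual_correction_rate": corrections / n,
    }
```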

Worst and best performing assays

These two new widgets highlight which assays are delivering consistent results—and which may require closer attention. For each assay, you’ll see:

  • Frequency and datapoint volume

  • Call rate trends over time

  • Performance indicators (green/red icons) that show whether the metric is improving or declining compared to previous periods

This gives your team immediate insight into assay stability and reliability over the selected time frame. High-performing assays can be trusted in critical workflows, while underperforming ones can be flagged for manual review on any given run.
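As a rough illustration of how such a ranking and trend indicator could be derived, here is a short Python sketch. The assay names, call-rate figures, and `trend_icon` helper are invented for the example and do not reflect how Metrics computes these widgets internally.

```python
def trend_icon(current: float, previous: float) -> str:
    """Green when the call rate improved versus the previous period, red when it declined."""
    return "green" if current > previous else ("red" if current < previous else "flat")

# assay -> (current-period call rate, previous-period call rate); made-up numbers
call_rates = {
    "Assay A": (0.97, 0.94),
    "Assay B": (0.71, 0.83),
    "Assay C": (0.88, 0.88),
}

# Best performers first; the worst performers sit at the tail of the list.
ranked = sorted(call_rates, key=lambda a: call_rates[a][0], reverse=True)
for assay in ranked:
    current, previous = call_rates[assay]
    print(f"{assay}: call rate {current:.0%} ({trend_icon(current, previous)})")
```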
