Potency Testing on the Floor: How to Build a “Release-Support” Workflow That QA Will Actually Approve

The goal: faster decisions without creating a “shadow lab”

Most operations teams want in-process potency data for one reason: time. Waiting on a third-party COA or even an internal central lab can add hours (or days) to decisions like blend adjustments, cut points, or rework. But QA teams hesitate for equally valid reasons: uncontrolled sampling, unqualified methods, weak data integrity, and results that quietly become “COA-equivalent.”

A workable middle path is to build an in-process potency testing workflow as a release-support system—meaning:

  • It is not a replacement for official release testing.
  • It is explicitly bounded to pre-approved use cases.
  • It produces decision-support results with known uncertainty.
  • It is run under a QA-approved control strategy (sampling, training, documentation, and trending).

When you do this well, QA gets what it needs—traceability, consistency, documented suitability—while operations gets what it wants: rapid, defensible process decisions.

Recommended gear: Shimadzu Hemp/Cannabinoid Analyzer – HPLC https://www.urthandfyre.com/equipment-listings/hemp-cannabinoid-analyzer---hplc-high-performance-liquid-chromatography

Why “release-support” is the framing that unlocks QA approval

If you call it “potency testing on the floor,” it can sound like you’re building a rogue QC lab next to production. Instead, define it as:

  • A controlled in-process measurement
  • Used for process monitoring and adjustment within pre-established limits
  • With mandatory confirmation by the official release method/lab when required

This aligns with how regulated industries think about measurement systems: fitness for intended use. Even if you’re not operating under full 21 CFR Part 11 controls, the field is moving toward broader data integrity expectations (auditability, traceability, controlled access, and review), consistent with ALCOA/ALCOA+ principles (attributable, legible, contemporaneous, original, accurate, plus complete, consistent, enduring, available).

Step 1 — Define the intended use (and what the workflow is NOT)

Write a one-page Intended Use & Limitations document that QA can sign.

Define what it is

Your in-process potency testing workflow is:

  • A decision-support tool for operations
  • A way to reduce cycle time by enabling faster adjustments
  • A controlled way to reduce out-of-spec risk by catching drift early

Define what it is not

Be explicit:

  • Not an official batch release test
  • Not a replacement for the contracted/central lab COA
  • Not valid for labeling claims without approved correlation/bridging

Practical throughput expectations

For many facilities, a realistic target is:

  • 2–10 in-process samples per shift (depending on number of lines and how many decisions you’re trying to support)
  • Same-shift decisions for blend corrections, hold/release-to-next-step, and rework triage

If your team tries to run 50+ samples/day without a commensurate plan for system checks, documentation, and review, QA will (rightfully) shut it down.

Step 2 — Build a sampling plan that doesn’t sabotage your data

Most “bad potency data” isn’t a chromatography problem—it’s a sampling and homogenization problem.

Minimum sampling plan elements QA will expect

Document:

  • Sampling points (where in the process and why)
  • Sample size (mass/volume) and container type
  • Number of increments (single grab vs composite)
  • Homogenization procedure (and acceptance check)
  • Hold times and storage (temperature, light protection)
  • Who is authorized to sample

Chain-of-custody-lite (without turning into full evidence handling)

You can implement “just enough” traceability:

  • Pre-printed labels with:
      • Batch/lot ID
      • Process step
      • Timestamp
      • Sampler initials
      • Unique sample ID (barcode helps)
  • A simple log (paper or electronic) that records:
      • Sample ID
      • Where it came from
      • Who sampled it
      • Who prepared it
      • Who ran it
The objective is attributable and reconstructable data—so an internal auditor can follow the chain from sample pull to result to decision.
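
If the log is electronic, even a small script can enforce the minimum fields. Below is a minimal sketch assuming a CSV-backed log; the field names, IDs, and file path are illustrative, not a standard.

```python
# Minimal chain-of-custody-lite log entry, assuming a CSV-backed
# electronic log. Field names and file path are illustrative.
import csv
import os
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class SampleRecord:
    sample_id: str      # unique sample ID (barcode value, if used)
    batch_lot_id: str
    process_step: str
    sampled_by: str     # sampler initials
    prepared_by: str = ""
    run_by: str = ""
    sampled_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_record(record: SampleRecord, path: str = "sample_log.csv") -> None:
    """Append one record to the log, writing a header row for a new file."""
    row = asdict(record)
    write_header = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(row))
        if write_header:
            writer.writeheader()
        writer.writerow(row)

# Example: log a sample pulled after blending (IDs are illustrative)
append_record(SampleRecord("S-0001", "LOT-2301", "post-blend", "JD"))
```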

Homogenization: the #1 pitfall

Common failure modes:

  • Different operators shake/mix differently
  • Sticky matrices cling to container walls
  • “Representative” samples are pulled from the top of a tote or tank

Controls that help:

  • Use defined mixing times (timer, not “about a minute”)
  • Use a consistent tool (vortex, stir bar, rotor-stator—appropriate to matrix)
  • Require a visual check (no layering/settling) plus a periodic check like duplicate preps

If your workflow is for blend adjustment decisions, consider composite sampling or stratified sampling logic (multiple locations) rather than a single grab, especially where segregation risk is high.
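
To make the periodic duplicate-prep check objective, many teams compute the relative percent difference (RPD) between the two preps and flag disagreement. A minimal sketch, assuming potency results in percent; the 10% flag limit is a placeholder to set with QA.

```python
# Duplicate-prep agreement via relative percent difference (RPD).
# The 10% flag limit is a placeholder; set your own limit with QA.
def relative_percent_difference(result_a: float, result_b: float) -> float:
    """RPD = |a - b| / mean(a, b) * 100, in percent."""
    mean = (result_a + result_b) / 2.0
    if mean == 0:
        raise ValueError("Mean of duplicates is zero; check inputs.")
    return abs(result_a - result_b) / mean * 100.0

def duplicates_agree(result_a: float, result_b: float,
                     limit_pct: float = 10.0) -> bool:
    return relative_percent_difference(result_a, result_b) <= limit_pct

# Example: two preps of the same composite sample (% potency)
print(duplicates_agree(74.2, 76.9))  # RPD ~ 3.6% -> True
```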

Step 3 — Establish daily “fit-for-use” checks (system suitability + calibration verification)

QA will approve in-process testing faster when you treat the analyzer like a controlled measurement system.

Daily start-up checklist (15–30 minutes of discipline that prevents weeks of arguments)

At minimum, document and perform:

  • Mobile phase / solvent checks (correct prep, expiry, labeling)
  • Column condition and method version confirmation
  • Leak check / pressure check
  • Blank injection check (carryover/contamination)

System Suitability Testing (SST): your QA credibility builder

In chromatography, SST is how you demonstrate the system is performing adequately before running samples. USP’s general chromatography chapter emphasizes suitability checks via replicate injections and performance metrics such as precision and resolution (USP <621> concept reference: https://ftp.uspbpep.com/v29240/usp29nf24s0_c621s12.html).

Your SST can be pragmatic and still meaningful:

  • Replicate injections of a standard
  • Track:
      • %RSD of peak area
      • Retention time stability
      • Resolution (where critical pairs exist)
      • Peak shape/tailing (where relevant)

Set action limits (investigate) and reject limits (do not use for decisions). Keep them stable; don’t “move the goalposts” to pass.
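
As a sketch of how those metrics and limits can be computed from replicate injections (the action/reject limits and example values below are placeholders, not recommendations):

```python
# SST sketch: %RSD of peak area and retention-time spread across
# replicate standard injections. Limits are placeholders; fix your
# own with QA and keep them stable.
from statistics import mean, stdev

def percent_rsd(values: list[float]) -> float:
    return stdev(values) / mean(values) * 100.0

def sst_verdict(peak_areas: list[float], retention_times: list[float],
                rsd_action: float = 2.0, rsd_reject: float = 5.0,
                rt_window: float = 0.1) -> str:
    rsd = percent_rsd(peak_areas)
    rt_spread = max(retention_times) - min(retention_times)
    if rsd > rsd_reject or rt_spread > rt_window:
        return "REJECT: do not use results for decisions"
    if rsd > rsd_action:
        return "ACTION: investigate before relying on results"
    return "PASS"

# Six replicate injections of the same standard
areas = [10512, 10498, 10533, 10507, 10489, 10520]
rts = [4.21, 4.22, 4.21, 4.22, 4.21, 4.22]
print(sst_verdict(areas, rts))  # PASS
```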

Calibration checks (verification, not constant recalibration)

Avoid recalibrating every time something looks off. Instead:

  • Run a calibration verification (known standard) at a defined frequency (daily or per batch/shift)
  • If verification fails, trigger:
      • Troubleshooting
      • Corrective action
      • Documented decision on whether previous results are impacted
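
The verification math itself is simple percent recovery against a known standard. A minimal sketch; the 95–105% acceptance window is a placeholder to set with QA.

```python
# Calibration verification sketch: percent recovery of a known standard.
# The 95-105% acceptance window is a placeholder; set yours with QA.
def recovery_pct(measured: float, nominal: float) -> float:
    return measured / nominal * 100.0

def verification_passes(measured: float, nominal: float,
                        low: float = 95.0, high: float = 105.0) -> bool:
    return low <= recovery_pct(measured, nominal) <= high

# Example: 25.0 mg/mL nominal standard measured at 24.6 mg/mL
print(verification_passes(24.6, 25.0))  # recovery 98.4% -> True
```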

Also document your reference standards strategy. The field is increasingly emphasizing certified reference materials (CRMs) and traceability (ISO/IEC 17025 thinking) to improve comparability and reduce “lab-to-lab potency spread.” A useful overview of CRM value and traceability expectations is here: https://www.sigmaaldrich.com/US/en/technical-documents/technical-article/analytical-chemistry/calibration-qualification-and-validation/cannabinoid-certified-reference-materials-improved-testing-accuracy-traceability

Step 4 — Define the only allowed use cases (and hard-stop everything else)

QA approval becomes easier when the workflow is bounded to pre-approved decisions.

Good “release-support” use cases

These are common, defensible, and operationally valuable:

  1. Blend adjustment
      • Example: adjust the blend ratio to hit the target potency range before packaging/fill (a worked mass-balance sketch follows this list).
  2. Cut point decisions
      • Example: decide when to switch collection containers or divert material based on potency trend.
  3. Rework triage
      • Example: decide whether a lot should be held for reprocessing or sent forward.
  4. Process drift detection
      • Example: detect gradual potency loss or unexpected conversion trends that indicate thermal or time-at-condition issues.
  5. Incoming or intermediate verification (screening)
      • Example: screen intermediate material before committing it to a long downstream run.
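
For use case 1, the underlying math is a simple two-lot mass balance. The sketch below assumes potency combines linearly by mass; the lot masses and potencies are illustrative.

```python
# Blend-adjustment sketch: mass of a high-potency lot needed to bring
# a fixed mass of a low-potency lot to a target potency. Assumes a
# simple two-lot mass balance with linear blending.
def high_lot_mass_needed(low_mass_kg: float, low_pct: float,
                         high_pct: float, target_pct: float) -> float:
    if not low_pct < target_pct < high_pct:
        raise ValueError("Target must sit between the two lot potencies.")
    # m_high * high_pct + m_low * low_pct = (m_high + m_low) * target_pct
    return low_mass_kg * (target_pct - low_pct) / (high_pct - target_pct)

# Example: 40 kg at 68% blended up to a 75% target using an 88% lot
print(round(high_lot_mass_needed(40.0, 68.0, 88.0, 75.0), 1))  # 21.5 kg
```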

Use cases that should be prohibited (unless you do a much bigger validation package)

  • Treating in-process results as label claim
  • Using results as an official COA substitute
  • Making release decisions without defined correlation to the official release method

Make this explicit in SOP language: “In-process potency results are decision-support and must not be reported externally as certificate values.”

Step 5 — Put documentation in the workflow (so it survives internal audits)

QA isn’t asking for paperwork for its own sake. QA is asking for governance.

The core controlled documents

At minimum:

  • SOP: Sampling & labeling
  • SOP: Sample preparation & homogenization
  • SOP: Instrument operation & shutdown
  • SOP: Daily SST & calibration verification
  • SOP: Data review & result reporting (including who can approve what)
  • Form/Log: Sample chain-of-custody-lite
  • Form/Log: Daily suitability + verification results
  • Deviation template: what to do when checks fail

Data integrity “Part 11-lite” controls

Even without full Part 11, implement the behaviors auditors look for:

  • Unique user accounts (no shared logins)
  • Role-based access (operators vs reviewers)
  • Locked methods (version control)
  • Controlled exports (PDF print to controlled folder)
  • Backup and retention policy
  • Second-person review for:
      • Failed SST
      • Unusual results
      • Any result used for a high-impact decision (rework/hold)

If your organization is under stronger regulatory expectations, anchor your approach to FDA’s Part 11 “scope and application” thinking (risk-based enforcement discretion, legacy systems, and ensuring trust in electronic records): https://www.fda.gov/media/75414/download

Step 6 — Trend precision and accuracy (or you will lose trust)

A release-support program lives or dies by whether teams trust it over time.

Trending that matters

At minimum, trend:

  • SST pass rates
  • Calibration verification recovery (%)
  • Duplicate prep agreement (same sample, two preps)
  • Control sample (e.g., in-house check standard) over time
  • Drift by operator, shift, and matrix type

This helps you catch:

  • Operator technique drift
  • Column aging
  • Matrix effects
  • Silent failures (results still “look reasonable” but are biased)
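
A minimal trending sketch, assuming a daily in-house check standard charted against Shewhart-style individuals limits (the baseline data and 3-sigma rule are conventional illustrations, not requirements):

```python
# Trending sketch: individuals control chart for a daily check standard.
# Points outside baseline mean +/- 3 sigma flag silent drift or bias.
from statistics import mean, stdev

def control_limits(baseline: list[float]) -> tuple[float, float]:
    center, spread = mean(baseline), stdev(baseline)
    return center - 3 * spread, center + 3 * spread

def flag_points(series: list[float], baseline: list[float]) -> list[int]:
    lcl, ucl = control_limits(baseline)
    return [i for i, x in enumerate(series) if not lcl <= x <= ucl]

baseline = [74.8, 75.1, 74.9, 75.2, 75.0, 74.7, 75.3]  # qualification runs
daily = [75.0, 74.9, 75.2, 73.4, 75.1]                 # ongoing checks
print(flag_points(daily, baseline))  # [3] -> investigate the 4th check
```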

What “good enough” looks like

You don’t have to build a full analytical validation dossier equivalent to a regulatory submission. But you should borrow the logic from recognized validation frameworks (e.g., specificity/selectivity, accuracy, precision, range, robustness) and right-size it for intended use (ICH Q2(R2) overview: https://www.ema.europa.eu/en/ich-q2r2-validation-analytical-procedures-scientific-guideline).

Step 7 — Bridge to the official release method (so your numbers don’t fight)

One of the fastest ways to kill an in-process program is when:

  • In-process says 78%
  • Official COA says 71%
  • No one can explain why

Practical bridging approach

  • Run a comparability study: same lots, paired samples, same time window
  • Track bias and variance by matrix and potency range
  • Decide (with QA) whether in-process results:
      • need a correction factor (careful)
      • need matrix-specific prep changes
      • or must be used only for trend/relative decisions
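
A minimal sketch of the comparability math, assuming paired same-lot results and Bland-Altman-style limits of agreement (the data are illustrative):

```python
# Bridging sketch: paired comparability between in-process and official
# COA results (same lots, same time window). Mean bias plus Bland-Altman
# style 95% limits of agreement; data are illustrative.
from statistics import mean, stdev

def bias_and_agreement(in_process: list[float], coa: list[float]):
    diffs = [a - b for a, b in zip(in_process, coa)]
    bias, sd = mean(diffs), stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

in_process = [78.1, 74.5, 80.2, 76.8, 72.9]
coa        = [76.9, 73.8, 78.5, 75.9, 72.1]
bias, (low, high) = bias_and_agreement(in_process, coa)
print(f"bias {bias:+.2f}%, 95% agreement {low:+.2f} to {high:+.2f}")
```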

Also track what the broader industry is trying to solve: method standardization and more consistent performance targets. AOAC’s Standard Method Performance Requirements (SMPRs) are one example of consensus performance expectations used to evaluate methods (AOAC SMPR example PDF: https://www.aoac.org/wp-content/uploads/2020/11/SMPR202017_001.pdf). Regardless of matrix specifics, the direction is clear: more harmonization, more reference materials, and more scrutiny of comparability.

The hidden failure modes (and how to prevent them)

1) Inconsistent homogenization

  • Symptom: high variability between duplicate preps
  • Fix: defined mixing tools/times, matrix-specific prep, periodic duplicate checks

2) Poor labeling and sample traceability

  • Symptom: “mystery samples,” results no one can map to a decision
  • Fix: unique sample IDs, barcode labels, chain-of-custody-lite logs

3) No trending of precision/accuracy

  • Symptom: month-to-month disagreements and culture wars
  • Fix: control charts, weekly QA review, CAPA triggers

4) Treating in-process results as COA-equivalent

  • Symptom: label decisions or customer claims based on floor results
  • Fix: hard language in SOPs; train, audit, and enforce

5) No defined decision limits

  • Symptom: operators “chase the number” and over-adjust
  • Fix: decision bands (e.g., green/yellow/red), required confirmation rules
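
To make fix #5 concrete, a decision band can be a few lines of logic. This sketch assumes symmetric bands around a target; the half-widths are placeholders to set with QA per use case.

```python
# Decision-band sketch: classify an in-process result against target
# bands instead of chasing the exact number. Band widths are placeholders.
def decision_band(result_pct: float, target_pct: float,
                  green_half_width: float = 2.0,
                  yellow_half_width: float = 4.0) -> str:
    delta = abs(result_pct - target_pct)
    if delta <= green_half_width:
        return "GREEN: proceed, no adjustment"
    if delta <= yellow_half_width:
        return "YELLOW: adjust per SOP and retest"
    return "RED: hold and confirm with the official method"

print(decision_band(76.5, 75.0))  # GREEN
print(decision_band(70.2, 75.0))  # RED
```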

Where the right analyzer fits: throughput, risk, and onboarding

A controlled workflow still needs equipment that matches your environment.

What QA and Ops should look for in an analyzer

  • Method maturity and vendor support
  • Repeatable chromatography (stable pump, autosampler reliability)
  • Documentation (method packages, serviceability)
  • Compatibility with your training and maintenance capacity
  • Data handling that supports review and retention

If you’re building an in-process potency testing workflow and want a turnkey package designed around cannabinoid quantitation, Urth & Fyre currently has a listing for a Shimadzu Hemp/Cannabinoid Analyzer – HPLC (High-Performance Liquid Chromatography):

https://www.urthandfyre.com/equipment-listings/hemp-cannabinoid-analyzer---hplc-high-performance-liquid-chromatography

Position it correctly internally: this is not “ops buying a lab instrument.” It’s QA and operations jointly implementing a controlled measurement system for defined decisions.

Implementation timeline (a realistic path to QA approval)

Week 0–1: Design

  • Intended Use & Limitations (QA sign-off)
  • Draft sampling plan + labeling + chain-of-custody-lite
  • Define use cases and decision bands

Week 2–4: Build + train

  • Install/qualify instrument basics (IQ/OQ-lite appropriate to your system)
  • Train authorized samplers/operators
  • Run initial SST and verification routines

Week 5–8: Bridging + governance

  • Comparability study vs official release lab
  • Set action/reject limits
  • Launch trending dashboard and weekly QA review

Ongoing: Sustain

  • Preventive maintenance schedule
  • Periodic competency assessments
  • Quarterly internal audit of logs, deviations, and trend reviews

Urth & Fyre’s role: equipment matching, onboarding, and service connectivity

Urth & Fyre is positioned to support release-support workflows in three practical ways:

  • Right-sizing the analyzer to your throughput and risk: We help buyers avoid overbuying (complexity) or underbuying (downtime and weak comparability).
  • Onboarding and training support: Getting to consistent sampling, prep, and review behaviors is the make-or-break step.
  • Connecting calibration/service partners: A workflow that depends on an instrument needs a plan for uptime, qualification, and reference standards.

The takeaway: make it audit-friendly, and it becomes fast

QA will approve in-process potency testing when it looks like a controlled system:

  • Defined intended use and hard boundaries
  • Sampling discipline that protects representativeness
  • Daily suitability and verification that demonstrates fitness for use
  • Trending that builds long-term trust
  • Documentation that survives internal audits

If you’re ready to build or upgrade an in-process potency testing workflow, explore available analyzers and consulting support at https://www.urthandfyre.com—starting with the Shimadzu Hemp/Cannabinoid Analyzer listing here:

https://www.urthandfyre.com/equipment-listings/hemp-cannabinoid-analyzer---hplc-high-performance-liquid-chromatography
