Korpra
Test Automation · Parallel Test · LabVIEW Test Executive

Multi-Up Parallel Test System Development

How a configurable LabVIEW test executive on a SQL Server backbone compressed a 3-day manual test into hours of automated parallel operation — and the architectural decisions that made it scale across distributed stations.

Published May 2026

3 days

Original manual test time per device

Hours

After multi-up automation

10

Devices in parallel per station

1

SQL Server source of truth

The Challenge: A 3-Day Manual Test Was the Bottleneck

The customer made a highly configurable, technically complex product. Every unit needed an extensive functional test before shipping — multiple measurement modes, signal stabilization periods, calibration steps, and pass/fail decisions that depended on previous results. Done manually, the test took over three days per device. The test was good. The test was thorough. The test was strangling production.

Every approach to fixing it had a trade-off. Cut the test? The product team — correctly — refused; the test catches real defects that customers would otherwise see. Hire more operators? Linear scaling at premium cost, with the same human variability that was already a quality concern. Buy a faster automated tester from a vendor? No off-the-shelf option matched the product's specific test requirements, and a custom build from one of the big test integrators came back at a price that made internal modification look attractive. The customer wanted the same test, executed faster, by software, in parallel — a configurable platform their internal team could maintain and extend over the product's lifecycle.

The Real Goal

Build a system that runs the existing test, unchanged in rigor, but in parallel across multiple devices simultaneously — and architect it so adding new product variants over the next decade doesn't require a software release every time.

The Decision: Parallel Execution + Configurable Architecture

Two architectural commitments shaped everything that followed.

Parallel execution — every test station would test multiple DUTs at once, each on its own fixture and own measurement front-end. The application would treat each DUT position as an independent state machine running the same sequence, synchronizing only when the test logic genuinely required it (e.g., shared resource access). The throughput multiplier was the obvious motivation; the less obvious one was that idle time on one DUT — waiting for a soak or a stabilization — became productive time on the others.

Configurable architecture — every test sequence, parameter, limit, and report layout would live in a database. The LabVIEW application would be a generic engine that loaded sequences from the database and executed them. This is a configurable test system architecture in its most useful form: the application doesn't know anything about specific products; the database does. Adding a new model is a database operation, not a release.
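As a sketch of that division of responsibility, the following uses Python with sqlite3 standing in for SQL Server; the table, column, and step-type names are illustrative, not the production schema:

```python
# Minimal sketch of a database-driven test executive. sqlite3 stands in
# for SQL Server; all table/column/step-type names are illustrative.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE sequence_steps (
        model      TEXT,
        step_order INTEGER,
        step_type  TEXT,    -- e.g. 'measure_v', 'soak'
        param      REAL,    -- step parameter (channel, seconds, ...)
        lo_limit   REAL,
        hi_limit   REAL
    );
    INSERT INTO sequence_steps VALUES
        ('MODEL-A', 1, 'measure_v', 1, 4.75, 5.25),
        ('MODEL-A', 2, 'soak',      2, NULL, NULL);
""")

# Stand-in hardware layer: real code would talk to per-DUT instruments.
DISPATCH = {
    "measure_v": lambda ch: 5.01,   # pretend DMM reading on channel ch
    "soak":      lambda s: 0.0,     # pretend wait; nothing to limit-check
}

def run_sequence(model: str) -> bool:
    """Generic engine: knows step *types*, not products. The
    product-specific sequence comes entirely from the database."""
    rows = db.execute(
        "SELECT step_type, param, lo_limit, hi_limit FROM sequence_steps "
        "WHERE model = ? ORDER BY step_order", (model,)).fetchall()
    for step_type, param, lo, hi in rows:
        value = DISPATCH[step_type](param)      # hardware layer call
        if lo is not None and not (lo <= value <= hi):
            return False                        # fail: out of limits
    return True

print(run_sequence("MODEL-A"))  # → True
```

Adding a MODEL-B is then a handful of INSERT statements against the sequence table; the engine code never changes.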

Why Configurable Mattered More Than Parallel

Throughput got management's attention, but architectural longevity is what the maintenance team thanks us for five years later. A multi-up station that's hard-coded to one product is a worse system on day one — and a much worse system on year five — than a single-DUT station built on a configurable architecture. Parallel execution multiplies throughput. Configurable architecture multiplies usefulness across time.

System Architecture: Application, Test Executive, Database

The LabVIEW application has three logical layers, each with a clear responsibility. Test logic lives in none of them; it lives in the database below.

Layer 1: Operator UI Layer — Multi-DUT dashboard • Load/start/abort • Pass-fail at a glance

Layer 2: Test Executive Engine — Per-DUT state machines • Step dispatcher • Result aggregator

Layer 3: Hardware Layer — Per-DUT measurement front-ends • Shared resource arbitration

Beneath all three: SQL Server Database — Sequences • Parameters • Limits • Results • Station personalities

Per-DUT state machines were the central design pattern. Each fixture position spawned its own state machine instance at start-of-test. The state machine queried the database for the sequence belonging to the model number scanned at that position, then walked through the steps — dispatching to the hardware layer for measurements, comparing against limits from the database, writing results back to the database, and signaling completion. Ten DUT positions meant ten state machine instances running in parallel, each operating on its own sequence, each completely independent of the others except where a shared physical resource forced synchronization.

Distributed Stations with Model-Specific Personalities

The customer didn't want one giant test cell — they wanted multiple smaller stations spread across the manufacturing floor, each capable of running any model the company built. That meant the same application binary had to run on every station while behaving slightly differently on each. Station personality was the mechanism.

What's in a Station Personality

Calibration constants for that station's instruments, GPIB / Ethernet addresses for the DUT interfaces, position-to-fixture mapping, the station's traceability ID, and a list of which models that particular station was qualified to test. The application loads the personality at startup, keyed off the station's hostname, and configures itself accordingly. Move a station to a new bench, rebuild it from a fresh image, or stand up an entirely new station — the application doesn't change. Only the database row keyed to the new hostname changes.
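A minimal sketch of that startup lookup, assuming an illustrative station_personality table (sqlite3 again stands in for SQL Server):

```python
# Sketch of personality loading keyed off hostname. Table and field
# names are illustrative, not the production schema.
import socket
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE station_personality (
        hostname   TEXT PRIMARY KEY,
        station_id TEXT,           -- traceability ID
        cal_json   TEXT,           -- calibration constants
        addr_json  TEXT,           -- GPIB / Ethernet addresses per position
        models_csv TEXT            -- models this station is qualified for
    );
""")
db.execute("INSERT INTO station_personality VALUES (?,?,?,?,?)",
           (socket.gethostname(), "ST-07", "{}", "{}", "MODEL-A,MODEL-B"))

row = db.execute(
    "SELECT station_id, models_csv FROM station_personality "
    "WHERE hostname = ?", (socket.gethostname(),)).fetchone()
if row is None:
    raise SystemExit("Station not registered; refuse to run")

station_id, qualified = row[0], row[1].split(",")
print(station_id, qualified)  # same binary, per-station behavior from the DB
```

Standing up a new station is then the INSERT for its hostname, nothing more.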

This pattern is the bridge from a single multi-up station to a scalable test system architecture — one application, many stations, central data, no per-station code branches. When the customer added their fourth and fifth stations a year after the original deployment, the additions took weeks rather than the months the first station had taken. That's the operational dividend a database-backed personality model pays back.

Multi-Model Support: Ten Different Products at Once

Because every DUT position queries the database for its own sequence at start-of-test, positions don't have to run the same model. The operator scans a part number into position one — the state machine looks up the sequence for that model. Position two scans a different model — different sequence. Position three is a third model with totally different limits — fine. The test executive doesn't know or care which products are on which fixtures; it asks the database, gets a sequence, and runs it.

For a customer running a high-mix, low-to-medium-volume product line, this was a meaningful operational change. Prior to multi-up, scheduling test capacity was a non-trivial planning exercise — load up serial testers with whichever model needed throughput that day. After multi-up, scheduling collapsed to *whatever shows up at the test station gets tested*. The work order arrived; the operator scanned the parts; the system handled the model variation transparently.

Decoupled Test Steps: The Real Long-Term Win

The most consequential architectural choice — and the one easiest to underestimate at design time — was keeping test steps and the application in different lifecycles. Test steps live in the database. The application is a generic engine that runs them. Spec changes happen by updating the sequence row in the database; the application binary doesn't move. Adding a new step type (say, a new measurement mode the original product didn't need) does require an application update — but it's a one-time addition, after which any sequence in the database can use that step.

This pattern reframes test-station maintenance entirely. Instead of every spec change generating a code change, regression test cycle, and release, most spec changes are operational, not engineering. Test engineers update the database; QA validates the new sequence the way they'd validate any test plan; production picks up the change at the next station refresh. The software team is involved only when fundamentally new capability is needed. For a system intended to live ten years across many product revisions, this is the difference between a sustainable platform and a maintenance treadmill.
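The one-time cost of a new step type can be sketched as a registry inside the engine; the step names and return values here are invented for illustration:

```python
# Sketch of the step-type registry behind "new step type = one-time
# application update": once registered, any database sequence can use it.
STEP_TYPES = {}

def step_type(name):
    """Decorator registering an engine-level step implementation."""
    def register(fn):
        STEP_TYPES[name] = fn
        return fn
    return register

@step_type("measure_v")
def measure_v(param):
    return 5.01  # stand-in for a real DMM read

# Years later, a new measurement mode is added once, in code...
@step_type("measure_ripple")
def measure_ripple(param):
    return 0.02  # stand-in

# ...and from then on any sequence row can reference it as data:
sequence = [("measure_v", 1), ("measure_ripple", 1)]  # would come from the DB
print([STEP_TYPES[name](p) for name, p in sequence])  # → [5.01, 0.02]
```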

Reporting Engine: On-Demand and Post-Test

Test results streamed to the database in real time — every measurement, pass/fail decision, timestamp, station ID, and operator ID logged centrally. The reporting engine then served two distinct audiences. Operators and supervisors got immediate post-test reports for the unit just completed — pass/fail summary, out-of-spec measurements highlighted, audit trail. Engineering and quality got on-demand reports across arbitrary date ranges, model groups, station subsets, or failure modes — useful for trend analysis, supplier conversations, and yield investigation.

Crucially, the report templates also lived in the database. Adding a new customer who wanted their own report layout was a template registration, not a code change. The same generic application served whatever reporting style the customer or internal team needed.
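A toy version of template-as-data, using Python's string.Template in place of the real report engine; the template names and result fields are hypothetical:

```python
# Sketch of database-resident report templates: a new layout is a row
# insert, not a code change. string.Template stands in for the real engine.
from string import Template

# In production these would be rows in the database, keyed by layout name.
TEMPLATES = {
    "default":    Template("Unit $serial: $verdict ($n_fail out-of-spec)"),
    "customer_x": Template("[$station] $serial -> $verdict"),
}

result = {"serial": "SN1234", "verdict": "PASS", "n_fail": 0, "station": "ST-07"}

def render(layout: str) -> str:
    """Generic renderer: the layout choice is data, not code."""
    return TEMPLATES[layout].substitute(result)

print(render("default"))     # → Unit SN1234: PASS (0 out-of-spec)
print(render("customer_x"))  # → [ST-07] SN1234 -> PASS
```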

Results: 3 Days to Hours, And the System Kept Earning

The headline outcome was the throughput multiplier — what had been a 3+ day manual test became a parallel automated test that completed in hours. That alone justified the investment. The longer-term outcome was the architecture's resilience.

The Compounding Wins

Operators were freed from physical test execution — the multi-up station ran itself once loaded, and operators became loaders, monitors, and exception-handlers. Test variability dropped — the same automated sequence executed identically every time, eliminating the human variation that had been a quality concern under manual operation. New product variants were added through database updates rather than software releases. New stations stood up in weeks rather than months. The original investment in configurable architecture kept paying off year after year.

When the Multi-Up + Configurable Pattern Wins

Not every test problem wants this architecture. The pattern wins when:

  1. Per-DUT cycle time is long enough to amortize fixture and measurement-resource cost — typically minutes, not seconds

  2. Test resources can be duplicated economically — a per-DUT DMM is reasonable; a per-DUT environmental chamber usually isn't

  3. The product family has variants — different models, revisions, customer-specific configurations

  4. The deployment is intended to last — three to ten years of operation, not a one-shot validation campaign

  5. The customer wants to own the test sequences — not engage the integrator every time the spec changes

When all five hold, multi-up + configurable is the right answer. When some don't — for example, sub-minute cycle times with no model variation, or one-off test-stand needs — simpler architectures often beat it on cost-per-tested-unit. The architecture should match the problem.

Frequently Asked Questions

Common questions we get from test engineering managers and validation managers evaluating a multi-up parallel system.

What does multi-up testing actually mean?

Multi-up testing means running more than one device under test (DUT) simultaneously on the same test station, controlled by a single application. Instead of a station that tests one device, then waits for the operator to load the next, a multi-up station tests two, four, eight, or more devices in parallel — each on its own fixture, each progressing through the same test sequence independently. The operator loads all positions, presses start once, and the system handles every DUT in parallel. For products with long test cycles (signal stabilization, thermal soak, burn-in, multi-step calibration), multi-up testing is often the difference between meeting production demand and falling behind it.

How do you test multiple devices in parallel without electrical or measurement crosstalk?

Three architectural decisions handle this. First, every DUT gets its own dedicated measurement front-end — separate DMM, separate scope channel, separate DAQ module — sized for the test, not shared across DUTs. Second, the wiring topology keeps high-current and high-voltage runs physically separated from low-level signal runs (twisted-pair, shielding, star grounding at the fixture). Third, the LabVIEW application synchronizes only what genuinely has to be synchronized; everything else runs as independent parallel state machines, one per DUT. Done right, the only resource ever shared across DUTs is the database connection — every DUT writes its results without ever waiting on another.

What is a test executive architecture, and why does it matter for multi-up?

A test executive is a small core engine that loads test definitions from somewhere external (database, file, recipe) and executes them — instead of having the test sequence baked into the application source code. The application becomes a generic step-runner; the test sequence becomes data. This matters dramatically for multi-up because each DUT typically has a unique test plan based on model number, revision, or customer-specific configuration. With a test executive, adding a new model is a database insert, not a software release. NI TestStand is the off-the-shelf example; many production systems implement a custom test executive in LabVIEW for tighter integration with their measurement layer or because the customer wants to own the source.

Why decouple test steps from the main application?

Coupling test logic to application code is the most common reason a test station starts well and decays over five years. Every spec change becomes a code change. Every code change risks regressions in unrelated tests. Every new model means a release, regression test, deployment, and re-validation of the entire system. Decoupling — moving test steps, parameters, limits, and sequences into a database or external recipe file — turns spec changes into data changes. Engineers update the database; the application doesn't move. Over a multi-year deployment that's the difference between a system that's still earning its keep at year five and one that gets replaced because nobody trusts it anymore.

When is multi-up parallel testing the right choice, and when isn't it?

Multi-up wins when (a) per-DUT test cycle time is long relative to load/unload time, (b) test resources are inexpensive enough to duplicate, and (c) production volume justifies the throughput multiplier. It's the right answer for most EOL functional testing of medium-cycle products — anything that takes more than a couple of minutes per unit and runs at meaningful volume. Multi-up is the wrong answer when test cycle time is dominated by a shared, expensive resource (e.g., a single environmental chamber where you have to test devices serially) or when DUT-to-DUT crosstalk is unavoidable for physical reasons. For high-volume products with sub-30-second cycles, dedicated single-DUT stations on a paced line often beat multi-up on cost-per-tested-unit.

Can a multi-up system support multiple product variants?

Yes — and the configurable test system architecture pattern makes it almost free. Each DUT position queries the database at start-of-test for the test sequence, parameters, and pass/fail limits associated with its model number. Positions one through ten can be running ten different models simultaneously if the operator scans them in that way. The application doesn't know or care which model is on which fixture; the test executive looks up the recipe for each DUT and runs it. This is exactly how you support a high-mix, low-to-medium-volume product line without paying for ten separate test stations.

What does a 'station personality' mean, and why does each station get one?

Station personality is configuration that's specific to a physical station — calibration constants for that station's instruments, GPIB or Ethernet addresses for its DUT interfaces, position-to-fixture mapping, station ID for traceability. The same application binary runs everywhere, but each station loads its personality at startup from the database keyed off its hostname or station ID. This means deploying a new station is: install the application, register the station in the database with its personality, walk away. It also means a station can be moved or rebuilt without touching the application — a critical operational property for high-volume manufacturing environments.

How long does a multi-up test station take to build?

For a green-field design with reasonably well-understood requirements: 4–8 months from kickoff to production handoff. The architecture phase (1–2 months) carries most of the technical risk — defining the test sequence model, the database schema, the synchronization rules between DUT positions, the operator workflow. Build (2–4 months) adds the measurement code, the test executive, the database, the operator UI, the reporting engine. Site integration and validation (1–2 months) covers fixture integration, calibration, first-article correlation, and operator training. Subsequent multi-up stations on the same architecture take a fraction of the time because the framework is already built — sometimes weeks rather than months.

Need to Compress a Long Manual Test Into Parallel Automation?

Korpra has been delivering multi-up and EOL functional test systems on a configurable LabVIEW test executive architecture since 2016 — for automotive, aerospace, medical-device, electronics, and industrial manufacturers across the East Coast. We handle architecture, hardware specification, software, and integration with your existing PLC, MES, and SQL infrastructure. Call 585-678-1649 · Request a quote → · See our LabVIEW consulting services → · Browse other test automation projects →

Interested in a similar system?

Let's talk about your requirements.

Request For Quote
