Reaching best performance: How to balance parallel jobs in Silverstack v9


In data management, speed is critical. Many tasks must run quickly and therefore often in parallel; however, more parallelism isn’t always better. At some point, it can actually slow things down.

To achieve optimal performance, you therefore need to find the right balance of concurrent operations. Running tasks strictly one after another can waste available resources like transfer bandwidth or CPU/GPU power. On the other hand, running too many things at once can overwhelm your system with management overhead and context switching, slowing everything down. The ideal balance therefore depends heavily on your hardware setup:

  • Slow source, fast destination: If you are working with slow camera cards/readers but a fast destination volume, offloading multiple cards in parallel helps you make better use of the destination’s speed. 
  • Fast source, slower destination: If the camera media is very fast but the destination is the bottleneck, offloading multiple cards at once can actually hurt performance and increase turnaround times.

That’s why Silverstack allows you to control the degree of parallel operations by setting limits to each involved volume. The internal job machinery ensures your limits are respected and new tasks are started as soon as capacities become available. Let’s look at a few examples! 

New volume and job views in Silverstack v9

Before diving in, let’s quickly clarify the terminology around Silverstack’s job machinery. A workflow schedules one or multiple jobs for a set of input files (e.g., offload and transcode of card A001). Each job is made up of tasks, which generally break the job’s operation down to the level of individual files. Depending on their type and configuration, tasks access volumes for reading and/or writing (e.g., the backup task reads from the source card and writes to the destination volume).

Job execution in Silverstack is controlled at the task level. That’s why each volume provides performance settings, allowing you to fine-tune how tasks interact with it:

  • Max Reading: Defines the maximum number of tasks that are allowed to read from this volume simultaneously
  • Max Writing: Defines the maximum number of tasks that are allowed to write to this volume simultaneously
  • Read/Write: Defines whether reading and writing tasks can access the volume at the same time, or if only one type is allowed at once
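The interplay of these three settings can be sketched as a simple model. This is a hypothetical Python illustration of the behavior described above, not Silverstack’s actual implementation or API:

```python
from dataclasses import dataclass

@dataclass
class Volume:
    """Illustrative model of a volume's performance settings."""
    max_reading: int = 1      # "Max Reading"
    max_writing: int = 1      # "Max Writing"
    exclusive: bool = False   # "Read/Write: Force Exclusive Reading or Writing"
    active_readers: int = 0
    active_writers: int = 0

    def can_read(self) -> bool:
        """A new reading task may start only if the reading limit has
        headroom and, in exclusive mode, no writer is active."""
        if self.active_readers >= self.max_reading:
            return False
        if self.exclusive and self.active_writers > 0:
            return False
        return True

    def can_write(self) -> bool:
        """Mirror image of can_read() for writing tasks."""
        if self.active_writers >= self.max_writing:
            return False
        if self.exclusive and self.active_readers > 0:
            return False
        return True
```

For example, a volume with `exclusive=True` and one active writer would refuse a new reading task even if `max_reading` still has headroom.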

The image below summarizes how to access the performance settings: Switch to the “Jobs” tab in Silverstack’s status bar (1), select the volume you want to adjust from the left sidebar (2), then edit its settings in the detail view on the right (3). Both sidebars can be shown or hidden via the toolbar to keep your workspace organized. To monitor the operations on a task level, select one or multiple workflows and click on the “Tasks” button in the toolbar (4).

Fig. 1: Navigating to the performance settings in Silverstack

Please note: Changes to these settings are not applied immediately. In Silverstack v9.0.5, changes take effect only when new tasks are scheduled or when a running task changes its state (e.g., finished, cancelled). Starting with Silverstack v9.1, this behavior will change: lifting limits takes effect immediately and triggers new tasks, whereas tightened limits do not affect tasks that are already running.

Since these settings should be configured for each destination, the workflow configuration window in Silverstack v9 highlights unconfigured volumes and provides quick access to presets with common configuration options (see image below).

Fig. 3: Workflow configuration window with quick access to volume settings

Offloading multiple cards in parallel

To enable concurrent offloads, destinations must be allowed to receive data from more than one task at a time; for example, one task from the backup job of card A and one task from card B. To do so, set “Max Writing” to the desired concurrency level (in this case, “2” or higher). If a backup job uses multiple destinations, each destination volume must accept this number of writing tasks; otherwise, the lowest value will limit the operation.
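As a quick sketch of that last rule (a hypothetical helper, not part of Silverstack): with multiple destinations, the most restrictive “Max Writing” value determines how many cards can be offloaded in parallel.

```python
# Hypothetical helper: a backup job with several destinations can offload
# at most as many cards in parallel as its most restrictive destination allows.
def max_parallel_offloads(destination_max_writing: list[int]) -> int:
    return min(destination_max_writing)

# RAID allows 2 concurrent writers, but the shuttle only 1:
assert max_parallel_offloads([2, 1]) == 1
# Raising the shuttle's "Max Writing" to 2 enables parallel offloads:
assert max_parallel_offloads([2, 2]) == 2
```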

Let’s look at the execution timing graphs from an offload of two cards to destinations RAID and Shuttle when using different volume performance settings. In the first visualization below, the destinations allow “Max Writing: 2”, and the cards are offloaded in parallel. 

Fig. 4: Offloading two camera cards to two destinations with “Max Writing: 2”

However, if one or both destinations allow only a single concurrent writing task, card B must wait until card A’s offload is complete, as shown in the next chart.

Fig. 5: Offloading two camera cards to two destinations with “Max Writing: 1”

So far, we’ve focused on the “Max Writing” parameter. But what happens when you change “Max Reading”? It defines the degree of concurrency when reading from a specific volume and is primarily intended to control parallel execution from destination drives (e.g., should a cascading backup from the RAID run in parallel to a transcoding job?). Be cautious with this setting for source cards.

Trying to read multiple files in parallel from the same camera card usually results in very poor throughput: it generates process overhead and can overwhelm the card reader, which may cause cards to unmount or reads to fail. Additionally, each file read from the source can block a writing task on the destination, which may prevent multiple cards from being offloaded in parallel, as illustrated in the following example:

⚠️ Fig. 6: Reading multiple files in parallel from the same source often creates issues ⚠️

In most scenarios, reading files one after the other from each camera card is the most reliable and fastest setting. We therefore recommend leaving the “Max Reading” of the source cards at “1”, which is the default for new source cards in Silverstack.

Please note: The default value for new camera cards can be changed in “Settings” > “Copy&Jobs”.

Cascading backups and other subsequent jobs

A cascading backup happens when one of the destinations of the initial offload is used as the source for a subsequent backup. A typical scenario for a cascading backup is when your RAIDs are significantly faster than your shuttle drive(s). In that case, you might want to offload to the RAIDs at their full speed first, then back up from the RAID to the shuttle drives at the slower shuttle speed. That way, your camera card can be released more quickly, but your RAID has to serve both as the destination for the offload (writing) and as a source for the cascading backup (reading). Do you want these operations to run in parallel?

The option “Read/Write: Force Exclusive Reading or Writing” waits for all writing tasks from the initial offload to finish before starting the cascading backup. The two jobs will run one after the other. On the other hand, “Allow Parallel Reading and Writing” results in interleaved job execution: both reading and writing accesses happen at the same time. Hence, the cascading backup can start as soon as one file of the initial offload is available on the RAID (one offload task has finished).
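The gating logic for the cascading backup can be sketched as follows. This is an illustrative function under the assumptions described above, not Silverstack code:

```python
def cascade_can_start(allow_parallel_rw: bool,
                      active_offload_writers: int,
                      files_available_on_raid: int) -> bool:
    """Hypothetical gate deciding whether the first task of a cascading
    backup may start reading from the RAID."""
    if files_available_on_raid == 0:
        return False                    # nothing has been copied to the RAID yet
    if allow_parallel_rw:
        return True                     # interleaved: start once a single file exists
    return active_offload_writers == 0  # exclusive: wait for the whole offload
```

With “Allow Parallel Reading and Writing”, the cascade starts as soon as one file is available; with “Force Exclusive Reading or Writing”, it waits until no writing task from the initial offload remains.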

Fig. 7: Sequential vs. interleaved cascading backup in Silverstack

The same mechanism specifies whether transcodes (or other subsequent jobs) are allowed to start running from the RAID while an offload is in progress on it.

HDDs typically perform best when either reading or writing exclusively. In contrast, SSDs and RAIDs handle parallel reading and writing better. However, even for SSDs or RAIDs you might want to consider the “Exclusive Reading or Writing” option: When connecting high-performance drives to your computer, the Thunderbolt bus can become the bottleneck. In such cases, this option avoids frequent switching of the data flow direction on the bus, leading to better overall throughput.

Limits for verification, transcoding, and upload tasks

For even greater control over workload execution and an even deeper understanding of the topic, you should also consider the following background information: 

  • Transcoding tasks read from a source and write to a destination, but the writing part is not taken into account when the limits are evaluated (since the writing of a transcoding result usually doesn’t create a significant workload). That said, transcoding has an internal limit of only one task at a time to ensure optimal transcoding speed.
  • There are different types of backup tasks. If you use the backup option “Verification included in copy job”, the resulting “Backup (copy+verify)” task wraps the copy and verification operations together: The task starts by copying the file (reading from the source and writing to the destination). When the file is fully copied, the task reaches 50% progress and internally switches to verification (reading only, on all drives). Within the job execution machinery, however, this type of backup task costs the source drive one reading count and each destination one writing count for its entire duration. For more granular control over the execution of copy and verify tasks, use the option “Separate Verification Job”, which creates separate “copy only” (reading from source, writing to destination) and “verify” (reading only) tasks.
  • Uploading, dynamic metadata extraction, and verification tasks cost the source drive one reading task count.
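The accounting rules above can be summarized in a small lookup table. This is an illustrative summary of the costs described in this section, not an official data structure:

```python
# Hypothetical summary: how each task type counts against volume limits.
# "source_reads"/"dest_writes" are the counts held while the task runs.
TASK_COSTS = {
    "backup (copy+verify)":        {"source_reads": 1, "dest_writes": 1},  # held for the whole task
    "copy only":                   {"source_reads": 1, "dest_writes": 1},
    "verify":                      {"source_reads": 1, "dest_writes": 0},
    "transcode":                   {"source_reads": 1, "dest_writes": 0},  # writing not counted
    "upload":                      {"source_reads": 1, "dest_writes": 0},
    "dynamic metadata extraction": {"source_reads": 1, "dest_writes": 0},
}
```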

This quick sidetrack reveals that workload execution not only depends on the volume limits you set, but also on the intrinsic limits inherent to a specific task type (e.g., transcoding only one task at a time). So, if both those limits are respected, how does Silverstack decide which task to start next? This is where another important concept comes into play: Job priority.

By setting priorities, you can tell Silverstack to process certain tasks before others; for example, the cascading copy tasks before the dynamic metadata extraction task. You can set job priorities in the job details. From Silverstack v9.1 onwards, this setting will also be integrated into the workflow configuration. If multiple scheduled tasks fulfill all the criteria we have discussed and share the same priority, it’s first-come, first-served: the oldest scheduled task starts first.
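Put together, the selection rule among eligible tasks can be sketched like this (a hypothetical model assuming higher numbers mean higher priority; Silverstack’s internal ordering may differ):

```python
from dataclasses import dataclass

@dataclass
class ScheduledTask:
    name: str
    priority: int        # assumed convention: higher value = processed first
    scheduled_at: float  # timestamp when the task was scheduled

def next_task(eligible: list[ScheduledTask]) -> ScheduledTask:
    """Among tasks whose volume and type limits are all satisfied:
    pick the highest priority; break ties first-come, first-served."""
    return min(eligible, key=lambda t: (-t.priority, t.scheduled_at))
```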

Conclusion: Get to know your hardware

Finding the optimal concurrency depends on your individual hardware setup and your personal priorities on what needs to get done first. Silverstack v9 gives you granular control to tailor workflow execution to your requirements. By configuring each volume individually, it is now easier to combine fast and slow drives while maintaining high throughput.

Experiment with the actual hardware during prep to test your configuration in action. Keep in mind, though, that hardware characteristics can change over time: When SSDs heat up, their internal controllers throttle performance. And when they are flooded with data, their fast write caches can fill up, reducing write speed.

For this reason, it’s best to adjust parallel task execution in small increments, monitoring both reliability and overall throughput over a longer period.

Posted in: Product know-how
Posted on: October 2, 2025

About the author
Franz is a product manager for media management products. His experience in the film industry is versatile and paired with a solid background in IT. He’s passionate about smooth workflows and eager to make the user experience even more consistent and self-explanatory.
