HDR Preview on Set with Livegrade – Part 2

This article is the second part of a series about HDR production and its implications on the film set. Overall, the series covers typical use cases, presents best practices, and offers insights for setting up all required devices and systems. In this second article, we will talk about all camera and monitor settings that are relevant in the context of HDR viewing.

We will also illustrate how to set up and configure Livegrade and the processing devices such as LUT boxes.


Matching the Settings

In the first part of the series, we identified the HDR monitor as the most significant change when bringing “HDR monitoring” to the film set. While other devices might already be capable of transmitting or processing “HDR” signals, the monitor changes the typical settings quite a bit.

To keep things simple, we will focus on a minimal setup for now – consisting of a camera with a log-signal output, a LUT box (controlled by Livegrade) applying the creative grade and converting the image to an HDR encoding, and lastly an HDR monitor receiving and displaying the modified signal.

Camera Settings

The camera needs to be configured to provide a “log” signal (such as Log-C, S-Log3, etc.). Such a log output configuration is already the standard for most setups where a grade is applied on set. The log signal is typically designed by the camera manufacturer so that a wide range of brightness detail, from the darkest shadows to the brightest highlights, can be encoded in the HD-SDI signal. Having access to that range is essential for applying a grade for SDR and HDR. With the log output at the camera, we are already prepared for HDR monitoring, and no extra “HDR settings” are required here.

Monitor Settings

HDR monitors typically have several settings concerning HDR. The basic settings influencing how a monitor will display a signal consist of two “color management” parameters:

  • The color space (defining the signal’s “gamut” or the range of encoded colors), and
  • the transfer function (defining the encoding of brightness levels).

These parameters let the user choose from a list of possible values, which represent different color management standards (such as “P3”, “PQ”, or “sRGB”). The selected standard tells the monitor how to interpret the input signal correctly. According to the configured settings, the monitor itself knows (if calibrated properly) how to convert the incoming signal to the correct visual image on its screen.

Additional secondary parameters can influence this further, but if the monitor gets wrong information about the kind of signal coming in, the image on the screen will not provide a clean starting point from a color management perspective. Working with that image would be like color grading while wearing tinted sunglasses.

The goal of color management is to set up monitoring so that other systems can reproduce the image with the same appearance. Only then can we assure that the creative intent from set is carried over authentically throughout the production.

Color Space

The typical color space used for HDR is called “Rec.2020”. Unfortunately, the names and terms used in the movie industry are often a bit ambiguous. In the case of Rec.2020, the entire standard actually defines a broad range of aspects of an image. But in the context of color management, we usually only refer to the color space from that standard when choosing “Rec.2020”.

Compared to the color space defined by Rec.709, the Rec.2020 color space is much wider. In other words: More saturated colors can be encoded with Rec.2020 than with Rec.709.

Today’s HDR displays can display many more colors than specified by Rec.709 (which was defined for the CRT monitors of its time). Although today’s typical display technology cannot yet show all colors of Rec.2020, a wider color space extends the palette of colors available in a scene. Especially in HDR, where highlights no longer burn out into white, only a wider color space can reproduce a scene’s saturated lights (such as LED lights or neon signs) correctly.

Another possibility in some setups is the use of P3 for HDR. P3 is wider than Rec.709 but smaller than the Rec.2020 color space.
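
To give a rough sense of these size differences, here is a small sketch comparing the gamut triangles of Rec.709, P3, and Rec.2020 in the CIE xy plane. The chromaticity coordinates of the primaries are taken from the respective standards; note that xy areas are not perceptually uniform, so the ratios are only a rough approximation:

```python
def triangle_area(primaries):
    """Shoelace formula: area of a gamut triangle in the CIE xy plane."""
    (x1, y1), (x2, y2), (x3, y3) = primaries
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# xy chromaticities of the red, green, and blue primaries (from the standards):
GAMUTS = {
    "Rec.709":  [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)],
    "P3 (D65)": [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)],
    "Rec.2020": [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)],
}

reference = triangle_area(GAMUTS["Rec.709"])
for name, primaries in GAMUTS.items():
    ratio = triangle_area(primaries) / reference
    print(f"{name}: {ratio:.2f}x the xy area of Rec.709")
```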

Encoding the Levels

The central part of the HDR configuration is the setting that specifies the encoding of brightness levels across the available code value range of the signal.

There are two primary standards to choose from: “PQ” (also called “ST-2084”) and “HLG”. Thankfully, each one has found its main use case – so in most productions, it should be easy to predict which will be used:

  • HLG is the standard for broadcast and live TV productions. It has the advantage of offering a degree of backward compatibility with SDR displays (illustrated in the sketch after this list).
  • PQ or ST-2084 is mainly used in feature film and series production. It has a higher fidelity than HLG, but cannot be displayed correctly on an SDR device.
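
To make that backward compatibility a bit more tangible, here is a minimal sketch of the HLG OETF as specified in ITU-R BT.2100 (the constants come from the standard; the sample values are just for illustration). The lower part of the curve stays close to a conventional camera gamma, which is why an SDR display that knows nothing about HLG still renders a plausible picture:

```python
import math

# Constants of the HLG OETF as defined in ITU-R BT.2100:
A = 0.17883277
B = 1 - 4 * A                   # 0.28466892
C = 0.5 - A * math.log(4 * A)   # 0.55991073

def hlg_oetf(e):
    """Scene-linear signal e in [0, 1] -> non-linear HLG signal in [0, 1]."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)  # square-root segment, close to a camera gamma
    return A * math.log(12 * e - B) + C  # logarithmic segment for highlights

for e in (0.01, 0.05, 0.18, 0.5, 1.0):
    print(f"scene-linear {e:4.2f} -> HLG signal {hlg_oetf(e):.3f}")
```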

PQ has been developed by Dolby as part of their Dolby Vision technology. Dolby Vision is an adaptive technology that requires scene-based metadata to be transferred with the image. That makes the entire Dolby Vision system less applicable for live viewing applications where the image can change unpredictably. Hence, Dolby Vision is mainly used for delivering images to end users, for example via internet streaming.

The non-adaptive encoding part of Dolby Vision is standardized separately as “PQ” and can be used to view live camera signals, for example, on the film set. PQ has two main characteristics and one additional qualifier parameter:

PQ encodes the absolute luminance levels of an image (measured in the “nit” unit) as code values.

This means that the signal’s code values can be directly interpreted by the monitor as nit values to be emitted by its display. For example, the nit level for the 10-bit code value “519” is 100 nits, and the nit level for the code value “847” is 2000 nits. SDR encoding standards set up no such direct association, for example because the black (and possibly white) levels of different displays differ.
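
To illustrate, here is a small sketch of the PQ inverse EOTF from SMPTE ST-2084 that computes such code values. The constants come from the standard; the full-range 10-bit quantization (and its rounding) is an assumption for this example, so the exact integers can differ by a count or two from the values mentioned above:

```python
# Constants of the PQ inverse EOTF as defined in SMPTE ST-2084:
M1 = 2610 / 16384         # 0.1593017578125
M2 = 2523 / 4096 * 128    # 78.84375
C1 = 3424 / 4096          # 0.8359375
C2 = 2413 / 4096 * 32     # 18.8515625
C3 = 2392 / 4096 * 32     # 18.6875

def pq_inverse_eotf(nits):
    """Absolute luminance (0..10000 nits) -> normalized PQ signal (0..1)."""
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

for nits in (100, 1000, 2000, 10000):
    code = round(pq_inverse_eotf(nits) * 1023)  # assumed: full-range 10-bit
    print(f"{nits:5d} nits -> 10-bit code value {code}")
```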

The encoding that PQ defines is optimized for the human eye.

The encoding distributes the range of code values in a signal over the luminance range so that in all zones, from the darkest blacks to the brightest highlights, the signal provides the same “luminance resolution” for the human viewer. PQ encoding is therefore an optimal way to pack a large dynamic range into a limited number of code values (e.g., the 1024 different values of a 10-bit signal) with uniform perceived quality of brightness. This is especially practical for HDR on the film set: there, we use signal formats such as SDI, limited to 10-bit or 12-bit color depth, as opposed to the 16-bit, float-based encodings in the files used in post-production.

Passing on material encoded in PQ requires stating the maximum nit level that was used for viewing.

HDR displays differ in their maximum nit levels, and this will remain the case in the future, when even brighter displays become available. HDR images encoded in PQ get viewed on different monitors during production – including the one used for final mastering. Agreeing on one maximum nit level – even if a brighter HDR display happens to be available in one place or another – ensures consistency and thus simplifies the creative process.

There are also combined names for color space and encoding. A prominent example is “Rec.2100”, which implies Rec.2020 as the color space and either HLG or PQ as the encoding.

These are the most important settings that are relevant for setting up Livegrade correctly. We will see how that works just after one more differentiation.

Color Space vs. YCbCr Color Matrix

In the signal configuration of some more recent devices, there is an additional setting concerning the color matrix used to convert between an RGB image and a YCbCr signal. This setting lets you choose between familiar parameter values, for instance, “Rec.709” and “Rec.2020” – and although these sound identical to the color space names mentioned above, this setting works independently and affects something completely different. While the color space setting tells the monitor how to interpret code values as colors, the color matrix configures the conversion between YCbCr and RGB. And while choosing the wrong color space setting leads to a visibly over- or under-saturated image, a wrong color matrix setting is barely noticeable – typically, only minor differences in red tones can be spotted when inspected carefully. So don’t let yourself get confused between these two settings, and communicate them unambiguously.
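
For the curious, here is a minimal sketch of the (non-constant-luminance) RGB-to-YCbCr conversion, built from the luma coefficients defined in Rec.709 and Rec.2020. It hints at why a mismatched matrix is such a subtle error: the two coefficient sets are quite close to each other:

```python
# Luma coefficients (Kr, Kg, Kb) as defined by the two standards:
LUMA_COEFFS = {
    "Rec.709":  (0.2126, 0.7152, 0.0722),
    "Rec.2020": (0.2627, 0.6780, 0.0593),
}

def rgb_to_ycbcr(r, g, b, standard):
    """Non-linear R'G'B' in 0..1 -> Y' in 0..1, Cb/Cr in -0.5..0.5."""
    kr, kg, kb = LUMA_COEFFS[standard]
    y = kr * r + kg * g + kb * b
    cb = (b - y) / (2 * (1 - kb))
    cr = (r - y) / (2 * (1 - kr))
    return y, cb, cr

# The same saturated red encodes to slightly different Y'CbCr values:
for std in LUMA_COEFFS:
    y, cb, cr = rgb_to_ycbcr(1.0, 0.0, 0.0, std)
    print(f"{std}: Y'={y:.4f}  Cb={cb:.4f}  Cr={cr:.4f}")
```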

The color matrix might be a worthy topic for a separate blog post. For now, let’s keep it simple and stick to one recommendation: use the same setting in every device, and choose Rec.709 for the conversion matrix when the setup also includes devices that don’t have a configurable color matrix for YCbCr-RGB conversion.

Setting up Livegrade for Controlling the LUT box

Our minimalist example setup is still missing the LUT box that sits between the camera and the monitor. The LUT in the LUT box is responsible for transforming the camera signal into a signal that’s suitable for the monitor (as well as making space for a creative grade on the way). This conversion consists of several transformations:

  • Transforming the camera signal to a working color space and encoding. In typical CDL+LUT workflows nothing happens here, as the grading happens directly on the camera signal (where the working color space and encoding are the same as the input color space and encoding).
  • Applying the actual look (such as CDL or curves) in the working color space. In ACES, the working color space is usually ACEScct (and thus not the same as the camera color space and encoding).
  • Transforming (including tone mapping) the graded signal from the working color space and encoding into the output/monitor color space and encoding.

Whenever you change the grade – no matter if you change the input or output transforms or simply touch a CDL wheel – Livegrade bakes all these transforms together into one LUT suitable for the LUT box.
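
As a conceptual sketch (not Livegrade’s actual implementation), the following snippet shows how such a chain of transforms can be sampled into a single 3D LUT. The input and output transforms are placeholders for the real conversions (e.g., Log-C to ACEScct, and ACEScct to Rec.2020/PQ), and the CDL saturation step is omitted for brevity:

```python
import itertools

def cdl(value, slope, offset, power):
    """ASC CDL per channel: clamp(in * slope + offset) ** power
    (the saturation step is omitted for brevity)."""
    v = max(0.0, min(1.0, value * slope + offset))
    return v ** power

def bake_lut(input_transform, output_transform, slope, offset, power, size=33):
    """Sample the whole pipeline on a size^3 grid of RGB code values."""
    steps = [i / (size - 1) for i in range(size)]
    table = []
    for b, g, r in itertools.product(steps, repeat=3):  # red varies fastest
        rgb = input_transform((r, g, b))                # camera -> working space
        graded = tuple(cdl(c, s, o, p)
                       for c, s, o, p in zip(rgb, slope, offset, power))
        table.append(output_transform(graded))          # working -> monitor space
    return table

# Identity functions stand in for the real color space conversions:
def identity(rgb):
    return rgb

lut = bake_lut(identity, identity,
               slope=(1.1, 1.0, 0.9), offset=(0.0,) * 3, power=(1.0,) * 3)
print(len(lut))  # 35937 entries for a 33x33x33 cube
```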

For HDR (the same way as for SDR), you need to choose all parameters correctly for the pipeline that has been decided upon. We need the previously discussed parameters:

  • the output-color space and encoding of the camera and
  • the color space and encoding expected by the monitor.

With this information at hand, let’s go!

The Color Pipeline

When Livegrade is used to provide that color transformation to the LUT box, we need to set up the right color pipeline in the software.

The first decision to be made is whether Livegrade should:

A) take care of the pipeline and let the user configure it with parameters, or whether
B) the user brings ready-made LUT(s) that already implement a pipeline.

In the first case (A), choosing an ACES grading mode (for example, “ACES CDL”) or the Colorfront Film grading mode is the way to go, with the following configuration steps:

  • Choose a working color space (e.g., ACEScct),
  • Choose the IDT (input device transform) according to the input signal from the camera (for instance, “SLog3” for a Sony Venice set to SLog3, or “Log-C” for an ARRI camera set to Log-C), and
  • Choose the ODT according to the setting of the monitor (for instance, Rec.2020 color space with ST-2084 encoding).

For the second case (B), the “CDL and LUT” or “CDL Advanced” grading modes are required. This can seem easier (because there’s less to configure), but it often carries more challenges:

Variant B with presets: Use one of the provided HDR preset LUTs in Livegrade:

  • The grading working color space is the camera color space (for instance, if you choose an ARRI HDR LUT preset, it is assumed that your CDL is applied in Log-C beforehand).
  • The LUT is chosen according to the monitor setting (for instance, the “ARRI Log-C -> Rec.2100 PQ (1000 nits max) HDR” LUT preset for a monitor set to Rec.2100 and PQ settings).

Variant B with custom LUTs: You (or a colorist, facility, etc.) provide “the LUTs” for HDR.

In that case, we need to know a few things about the LUT:

  • What is the output of the LUT (color space and encoding, max nit level)?
  • What does the LUT expect as the input (the camera output color space or a different working color space)?

The LUT file’s name might give a first indication, but asking is often the only way to find out the details. The LUT file itself (such as a .cube file) usually contains no information about the input and output color spaces and encodings.

Let’s look at an example:

The colorist provided a custom LUT going from ACEScct (that’s a working color space) to Rec.2020/ST-2084. If you have an Alexa camera with Log-C output on set, one more conversion LUT is required before the CDL, going from Log-C to ACEScct. Without it, you will not be able to accurately reproduce the pipeline that the colorist prepared in the grading software. In Livegrade’s “CDL Advanced” grading mode you can add LUT nodes not only after the CDL controls, but also before them. You can use this grading mode with two LUT nodes for such situations, as sketched below.
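
Schematically, the resulting node order looks like this (the function names are illustrative, not Livegrade’s API):

```python
def apply_pipeline(camera_rgb, pre_lut, cdl, post_lut):
    """Node order for the example above: conversion LUT, then CDL, then
    the colorist's output LUT."""
    working = pre_lut(camera_rgb)   # LUT node before the CDL: Log-C -> ACEScct
    graded = cdl(working)           # creative grade, applied in ACEScct
    return post_lut(graded)         # custom LUT: ACEScct -> Rec.2020/ST-2084
```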

Summary

In any case, the camera and monitor configuration needs to match the configuration of the pipeline in Livegrade.

When you use a pipeline such as ACES (with a LUT box) or the Colorfront Film engine (with an FS-HDR), you need to choose the pipeline’s input and output parameters according to your camera and monitor settings.

With a LUT-based pipeline, the naming or documentation for the used LUT(s) must be very specific to avoid ambiguity when recreating a pipeline that someone else set up in another software.

No matter which variant you choose, the basics are now done:

  • The LUT box does a proper end-to-end conversion from camera color space and encoding to monitor color space and encoding.
  • The look or CDL is applied in the working color space and is ready to be creatively adjusted.
  • All is baked together by Livegrade and loaded onto the LUT box whenever something is changed in the user interface.

The next article will dive deeper into grading for HDR, talk about dual monitoring on set, and show how to set that up with Livegrade. In addition, we’ll discuss a few pitfalls and give some tips on how to avoid them.


About the Author
Patrick is head of products for Pomfort’s on-set applications. He combines a technical background in software engineering with practical experience in digital film productions – and a preference for things that actually work.