5 Types of Clip Metadata on Set

With today’s digital film cameras, a lot of data is produced and recorded on set. Most of the time the actual image and sound data is accompanied by various kinds of metadata. Some of this metadata is broadly available, such as the running timecode overlaid in video monitors. Other metadata is not presented so prominently – especially when the benefits of carrying this information downstream are less obvious on set.

In this article we will have a look at different types of metadata, discuss how they can be managed on set, and show how they affect the work in post. The different sections of this article discuss metadata used for identifying and further describing clips, technical metadata about camera state, the difference between static and dynamic metadata, and metadata about the VFX-related context of a clip. We outline how different camera manufacturers save metadata differently and what on-set software can do to help with that.

Let’s start off with a definition: “Metadata” in the context of a film production is data that provides information about the recorded media. This metadata can for example be used to identify recorded clips, carry technical information about the state of the camera, and may be captured for use in later visual effects work. Metadata comes from different sources and can be beneficial for various purposes on the film set as well as later in post production.

Metadata for Identifying Clips

A typical movie production can record a huge number of clips per day. The first task metadata is involved with is identifying these clips in subsequent (post-)production steps. Cameras save the recorded clips with different file names – which already represents the first piece of metadata for a clip.

Camera manufacturers today tend to use a common file naming scheme consisting of a camera character (“A”, “B”, …), a card number, and a clip number on that card. In addition, the filename may contain further information such as the date of shooting or a camera identifier. Although this scheme is becoming more and more widely adopted, there are still cameras that name their clips with not much more than a sequence of numbers, which can lead to duplicate filenames over the course of a production.
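Such a naming scheme can be taken apart programmatically. The following sketch parses an ARRI-style clip name such as “A003C012_230514_R2EC”; the exact pattern (camera character, card number, clip number, date, camera identifier) is an assumption based on common naming conventions, and other cameras will differ.

```python
import re

# Illustrative pattern for an ARRI-style clip name, e.g. "A003C012_230514_R2EC".
# Field layout is an assumption based on common naming schemes, not a spec.
CLIP_NAME = re.compile(
    r"(?P<camera>[A-Z])"    # camera character, e.g. "A"
    r"(?P<card>\d{3})"      # card / reel number
    r"C(?P<clip>\d{3})"     # clip number on that card
    r"_(?P<date>\d{6})"     # shooting date (YYMMDD)
    r"_(?P<camera_id>\w+)"  # camera identifier
)

def parse_clip_name(name: str) -> dict:
    """Split a clip filename into its metadata components."""
    m = CLIP_NAME.match(name)
    if m is None:
        raise ValueError(f"unrecognized clip name: {name}")
    return m.groupdict()
```

A name like “A003C012_230514_R2EC” would yield camera “A”, card “003”, and clip “012” – exactly the identifying metadata discussed above.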

Although the filename itself doesn’t contain any reference to the script (such as scene numbers), productions typically try to preserve the filename from the camera as a (hopefully) unique identifier for that clip within the entire production. The names of the folders in which the files are located on a camera card may also carry useful information.

During editorial, clips are usually identified by scene, (shot,) and take names. So in addition to the filename, this information needs to show up in the editorial system in order to put the clip into the context of the script. Often this information is captured by the sound recordist together with the audio files, but it can also be entered at various stages during production. Duration and start and end timecodes are also considered very basic metadata; both help to identify the clip and to specify the cuts in editing.

Metadata Used for Describing Clips

Some descriptive metadata is captured already on set. While the director may mark one take or another as a “circled” take for potentially best use in editing, the DIT may also add notes during a basic first quality check (QC) – for example, that the action in a shot is not properly in focus, or that a microphone boom is visible in the image. All notes made for continuity purposes can also be called metadata, as they further describe the recorded media.

Technical Metadata

Typically, cameras used in digital cinematography store a lot of parameters in the clip file. Some of it is technically crucial for displaying and playing back the media properly: information about the file container and codec used, where in the file the compressed or encoded data for a frame is located, and the frame rate, which specifies how fast frames need to advance during playback.

Example of Clip Metadata in Silverstack

But the metadata from the camera can also contain additional information about the state of the camera at the moment of recording. One example would be information about the exposure set by the user, such as exposure time, ASA / ISO, active ND filters, and white balance parameters. If available via the lens mount, the state of the lens can be stored in the metadata, with information such as aperture, focal length, focus distance, and lens model.

It is helpful to review this information with capable software already on set, for example for reference: in order to maintain the same look across scenes, information from previous shots about the lens used (for its characteristics), the aperture (for the depth of field), and the set white balance (when using the same light sources) helps to achieve consistency.

Some of these parameters are automatically collected by the camera if available, but some technical information is worth capturing manually (e.g. the aperture of a lens without electronic connectors, or the use of an external filter).

While descriptive metadata may often apply to all cameras of a multi-camera shot, technical metadata always applies only to the camera it was recorded on.

Static vs. Dynamic Metadata

So far we discussed metadata per clip, which is often called static metadata (as it applies to the entire clip). In contrast, dynamic metadata means metadata that can change throughout a clip.

The most basic, implicit dynamic metadata is the running timecode within a clip. Each frame has a different value, which helps identify each single frame of a clip, e.g. in editorial. But as so often, timecode tends to be a legacy technology with lots of drawbacks: Timecode is unique only within 24 hours, so additional properties such as card names (also known as reel / tape information) are needed to uniquely identify a frame within a production. Shooting beyond midnight sometimes becomes a problem, as the order of shooting is no longer clear when shooting in free-run timecode. High-speed clips can also pose problems, as different clips might share overlapping timecode ranges.
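The 24-hour limitation becomes obvious when timecode is converted to an absolute frame count. The following minimal sketch handles only non-drop-frame timecode at an integer frame rate (a simplifying assumption; drop-frame NTSC timecode works differently):

```python
def timecode_to_frames(tc: str, fps: int) -> int:
    """Convert a non-drop-frame timecode "HH:MM:SS:FF" to an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

# At 24 fps, timecode wraps after 24 h * 3600 s * 24 fps = 2,073,600 frames,
# so the timecode value alone cannot uniquely identify a frame in a production:
frames_per_day = timecode_to_frames("24:00:00:00", 24)
```

This is why card or reel names have to accompany the timecode to make a frame reference unique across a whole production.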

But in addition to timecode there are other metadata properties that can change throughout a clip. While settings such as ASA / ISO are not likely to change during a clip, lens information such as focus distance or focal length (for zoom lenses) often changes during a clip. Being able to review the changes of dynamic metadata throughout a clip helps for reference (e.g. looking up focus data to place actors correctly on set for a pickup scene), as well as for visual effects (VFX).
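Conceptually, dynamic metadata is a series of per-frame samples attached to a clip. The sketch below models this with an illustrative structure – the field names and units are assumptions for the example, not any camera’s actual schema:

```python
from dataclasses import dataclass

# Hypothetical per-frame lens metadata sample; field names and units are
# illustrative, not taken from a real camera format.
@dataclass
class LensSample:
    frame: int
    focus_distance_m: float   # focus distance in meters
    focal_length_mm: float    # focal length in millimeters (varies on zooms)

def focus_at(samples: list[LensSample], frame: int) -> float:
    """Return the focus distance of the last sample at or before `frame`."""
    candidates = [s for s in samples if s.frame <= frame]
    if not candidates:
        raise ValueError("no sample at or before this frame")
    return max(candidates, key=lambda s: s.frame).focus_distance_m
```

A lookup like this is what makes dynamic metadata useful for reference: given a frame of interest, you can read back the focus distance that was set at that moment.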

Example of Dynamic Metadata in LiveGrade Pro

There are two ways of inspecting dynamic camera metadata on set: One is from the recorded source clips, and the other is directly from the live signal via HD-SDI. As an example: Our media management software Silverstack extracts metadata from source media for a broad range of cameras, while our on-set grading software LiveGrade Pro can extract metadata from the HD-SDI signal for a range of capable cameras.

Metadata About the On-Set Context

Slate information such as the episode and scene of the current shot could already be considered static contextual information, but sometimes there is much more (dynamic) context to be captured in order to simplify post-production – especially when it comes to VFX.

Location metadata such as GPS coordinates helps to quickly and globally identify the location where a clip was shot. If the position of the camera on a set, or the relative position of cameras to each other or to objects on set, is important, it can be captured by simply measuring distances manually or electronically. Tilt and roll sensors built into cameras, or external sensors attached to them, further refine the exact position in space. If available, dynamic position data from a crane or camera dolly also adds to the positioning information.

All this information can later help VFX artists aid 3D trackers. When the calculation of the 3D position of a camera leads to ambiguous results (e.g. because tracked objects or features are temporarily occluded), positioning metadata can help rule out false solutions and thus speed up the tracking process.

Challenges With Metadata From Different Formats

Manufacturers of cameras used in digital cinematography are usually very good at coming up with file formats specific to their cameras’ capabilities. For example, our on-set software Silverstack for media management in productions implements advanced support for more than 30 different camera formats. Most of these formats store the metadata in different syntax, encode it differently, and put it in different places within or beside the media file. The semantics also often differ: While one camera manufacturer stores the duration of exposure as a “mirror angle” (despite the fact that no mirror is built into such cameras any more), another manufacturer might store the exposure as a value in milliseconds. The two values can only be converted into each other with the help of another metadata parameter: the sensor frame rate that was set during shooting.
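The angle-to-milliseconds conversion mentioned above is simple arithmetic once the sensor frame rate is known: the exposure lasts for the fraction of the frame interval given by angle / 360°. A minimal sketch:

```python
def exposure_ms_from_angle(shutter_angle_deg: float, sensor_fps: float) -> float:
    """Convert a shutter ("mirror") angle to an exposure time in milliseconds.

    The exposure covers angle/360 of the frame interval, so the sensor
    frame rate set during shooting is required for the conversion.
    """
    frame_duration_ms = 1000.0 / sensor_fps
    return frame_duration_ms * (shutter_angle_deg / 360.0)

def angle_from_exposure_ms(exposure_ms: float, sensor_fps: float) -> float:
    """Inverse conversion: exposure time in milliseconds back to a shutter angle."""
    frame_duration_ms = 1000.0 / sensor_fps
    return 360.0 * exposure_ms / frame_duration_ms
```

For example, the classic 180° shutter at 24 fps corresponds to an exposure of 1/48 s, i.e. roughly 20.8 ms – which is why neither representation alone is enough without the frame rate.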

The capability of extracting such metadata properly – for use in reports, for reference, and in further steps in post production – is part of the constant “homework” that independent software vendors are busy with in order to cover the wide range of relevant camera systems. Silverstack, for example, not only captures a wide range of metadata, but also makes this metadata available in its library database even when the actual source footage is not attached.

Clever Media Management for Film Productions

Check out how Silverstack can improve your daily workflows!



About the Author
Patrick is a product manager for Pomfort's on-set applications. For his work at Pomfort he combines a technical background in software engineering with practical experience in digital film productions – and a preference for things that actually work.