The Foundry’s Camera Tracker for HitFilm makes it quick and easy to composite effects or other elements into video footage that was filmed with a moving camera. While keeping the basic process incredibly simple, Camera Tracker also includes powerful features that ensure high-quality results even with difficult-to-track footage.

Camera tracking, also called matchmoving, is usually among the first steps in the post-production process, because compositing elements into a scene convincingly relies on the camera tracking data. The basic workflow can be broken into three main steps:

  1. Tracking Features – Identifying unique points of detail in the scene, then locating those features in each frame to determine how they move through the frame.
  2. Solving – Analyzing the tracked features and comparing their relative movement to determine where the camera was positioned in relation to each of the tracked features. By triangulating the movement of multiple points, the camera position can be solved with great accuracy.
  3. Creating a Scene – Using the solve data to generate a scene, composed of a moving 3D camera, the relative positions of each tracked feature, and the original video clip that was tracked.
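To make step 1 concrete, here is a minimal Python sketch (not HitFilm code) of what feature tracking does: it locates a distinctive patch of pixels from one frame inside the next frame, here using a simple sum-of-absolute-differences search. Real trackers use far more robust methods, but the idea is the same.

```python
# Illustrative only: track one feature between two frames by finding the
# patch of pixels with the smallest sum of absolute differences (SAD).

def patch(frame, x, y, size):
    """Extract a size x size block of pixels with top-left corner (x, y)."""
    return [row[x:x + size] for row in frame[y:y + size]]

def sad(a, b):
    """Sum of absolute differences between two equal-sized patches."""
    return sum(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def track_feature(frame_a, frame_b, x, y, size=3, radius=3):
    """Find where the feature at (x, y) in frame_a appears in frame_b.

    Searches a (2 * radius + 1)^2 neighbourhood around the original
    position and returns the location with the lowest SAD score.
    """
    template = patch(frame_a, x, y, size)
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            nx, ny = x + dx, y + dy
            if nx < 0 or ny < 0 or ny + size > len(frame_b) or nx + size > len(frame_b[0]):
                continue
            score = sad(template, patch(frame_b, nx, ny, size))
            if best is None or score < best[0]:
                best = (score, nx, ny)
    return best[1], best[2]

# Synthetic 12x12 frames: a bright 3x3 'corner' at (4, 4) moves to (5, 6).
frame_a = [[0] * 12 for _ in range(12)]
frame_b = [[0] * 12 for _ in range(12)]
for j in range(3):
    for i in range(3):
        frame_a[4 + j][4 + i] = 255
        frame_b[6 + j][5 + i] = 255

print(track_feature(frame_a, frame_b, 4, 4))  # (5, 6)
```

Solving (step 2) then triangulates many such 2D tracks across frames to recover the camera's position.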

Using Camera Tracker

The basic process of using Camera Tracker to create a 3D camera solve is shown in this video, or you can follow the steps detailed below.

  1. Add the CameraTracker effect to the layer that needs to be tracked.
  2. Double-click the effect on the timeline to open its controls in the Controls panel.
  3. Click the Track Features button. This may take some time, while the software auto-selects features and tracks them through the video.
  4. Click the Solve Camera button. Camera tracker will evaluate the movement of each feature to determine where the camera was when the scene was filmed.
  5. Click the Create Scene button. A new point will be created, with its position animated to match the camera movement. A new camera will be created and parented to the animated point.
  6. The final step, which is not essential, but is useful in most cases, is to set the ground plane. Select two or more points that are positioned on the ground.
  7. In the bottom left corner of the Viewer, open the Camera tracker menu. Select Ground Plane > Set To Selected.
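The scene that step 5 creates behaves like any parented layer: the camera inherits the animated point's movement, with its world position equal to the point's keyframed position plus the camera's local offset. A minimal sketch of that relationship (not HitFilm code; the keyframe values and camera offset are invented for illustration):

```python
# Illustrative only: a point with a keyframed position, and a camera
# parented to it. Keyframe values and the camera offset are invented.

keyframes = {0: (0.0, 0.0, 0.0), 10: (5.0, 0.0, 0.0)}  # frame -> position

def point_position(frame):
    """Position of the animated point, linearly interpolated between keyframes."""
    frames = sorted(keyframes)
    if frame <= frames[0]:
        return keyframes[frames[0]]
    if frame >= frames[-1]:
        return keyframes[frames[-1]]
    for f0, f1 in zip(frames, frames[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            a, b = keyframes[f0], keyframes[f1]
            return tuple(p + t * (q - p) for p, q in zip(a, b))

camera_offset = (0.0, 2.0, -10.0)  # camera's local offset from its parent point

def camera_world_position(frame):
    """A parented layer's world position: parent position plus local offset."""
    return tuple(p + o for p, o in zip(point_position(frame), camera_offset))

print(camera_world_position(5))  # (2.5, 2.0, -10.0)
```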

This simple process creates a scene containing your video and a matchmoved camera, ready to add new elements.
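Setting the ground plane amounts to telling the scene which direction is "up". With three selected points, for example, that direction is the normal of the plane they define. A conceptual sketch (not HitFilm code):

```python
# Illustrative only: the 'up' direction implied by three points on the
# ground is the unit normal of the plane through them.

def subtract(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def ground_normal(p0, p1, p2):
    """Unit normal of the plane through three selected tracker points."""
    return normalize(cross(subtract(p1, p0), subtract(p2, p0)))

# Three points on the floor (y = 0) give a vertical normal; the sign
# depends on the winding order of the points.
print(ground_normal((0, 0, 0), (1, 0, 0), (0, 0, 1)))  # (0.0, -1.0, 0.0)
```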

Advanced Features

For many situations, the basic steps above are all that will be required. But frequently the tracking results can be improved by customizing the settings used to track the scene. By getting familiar with these features and settings, you can get the best results possible for any scene you are working on.

  • Matte Source:
    • None: No matte is applied.
    • Src Alpha: Uses the alpha of the source layer.
    • Src Inverted Alpha: Uses the inverted alpha of the source layer.
    • Matte Layer Luminance: Uses the luminance of the matte layer.
    • Matte Layer Inverted Luminance: Uses the inverted luminance of the matte layer.
    • Matte Layer Alpha: Uses the alpha of the matte layer.
    • Matte Layer Inverted Alpha: Uses the inverted alpha of the matte layer.
  • Analysis Range: Select the range of frames within the source layer that will be tracked.
    • Source Clip Range: Tracks the entire duration of the source layer.
    • Specified Range: Allows you to specify a limited range within the layer, and only tracks the selected frames. When this option is selected, two new controls will appear below.
      • Analysis Start: Select the frame where tracking will begin.
      • Analysis Stop: Select the frame where tracking will end.
  • Display: Choose what information the Viewer displays about the tracked features.
    • Tracks: Shows only the tracks, without any additional information. All tracks are shown as orange before solving. After solving, solved tracks are colored green, unsolved tracks are orange, and rejected tracks are red.
    • Track Quality: The reliability of the tracks is indicated through color coding. Reliable tracks are green, questionable tracks are yellow, and unreliable tracks are red.
    • Point Quality: The quality of the 3D points generated by the solve is displayed through color coding. Tracks with the lowest probability of error are green. Tracks with the highest probability of error are red. This option is only available after the camera is solved.
  • Allow Line Selection: Enabling this checkbox makes it easier to select multiple points in the Viewer. When disabled, you must click directly on a track's X marker to select it. When enabled, you can also click anywhere along the track's path line to select it.
  • Preview Features: Enabling this option allows you to view the features before tracking has begun.
  • View Keyframed Points Only: Select this checkbox to view only keyframed points. Keyframed points are the core of the tracking data, and are used to fill in data for the other tracks.
  • Track Features: Click this button to begin the tracking process.
  • Solve Camera: After tracking is completed, click this button to solve the camera’s position and movement, based on the tracking data.
  • Create Scene: After the camera is solved, clicking this button generates a 3D scene, with a moving 3D camera and your source video layer.
  • Toggle Render Mode: This control toggles between showing the features placed over the source video, and showing the 3D point cloud created by the tracker.
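As a rough illustration of the Matte Source options listed above, the sketch below derives a per-pixel matte weight from luminance or alpha. This is not HitFilm code: the Rec. 709 luma weights and the convention that higher values mean "ignore this area" are assumptions made for the example.

```python
# Illustrative only: per-pixel exclusion weight for each matte source.
# 1.0 = ignore this pixel during tracking, 0.0 = use it. Both the Rec. 709
# luma weights and this convention are assumptions for the example.

def luminance(r, g, b):
    """Rec. 709 luma of an RGB pixel, each channel in the 0.0-1.0 range."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def matte_value(pixel, source):
    r, g, b, a = pixel
    if source == "Matte Layer Luminance":
        return luminance(r, g, b)
    if source == "Matte Layer Inverted Luminance":
        return 1.0 - luminance(r, g, b)
    if source == "Matte Layer Alpha":
        return a
    if source == "Matte Layer Inverted Alpha":
        return 1.0 - a
    return 0.0  # "None": nothing is excluded

# A half-transparent pure green pixel under each source:
pixel = (0.0, 1.0, 0.0, 0.5)
print(matte_value(pixel, "Matte Layer Luminance"))  # 0.7152
print(matte_value(pixel, "Matte Layer Alpha"))      # 0.5
```

The Src Alpha variants work the same way, but read the alpha of the source layer itself rather than of a separate matte layer.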

Tracking

  • Number of Features: Set the number of features used to track the movement in the scene. More features can improve the accuracy of the solve, but will extend the processing time needed to track and solve the scene. As a rule, most layers should use a minimum of 100 features to ensure a reliable solve.
  • Detection Threshold: Lowering the detection threshold selects more prominent points within the layer, while increasing the detection threshold spreads the features more evenly across the layer.
    • TIP: If your layer contains large areas that are relatively featureless, use a low detection threshold to improve the results.
  • Feature Separation: Controls the distribution of features across the layer. Higher values space features more evenly, while lower values allow features to group together near areas of more prominent contrast.
    • TIP: Increase feature separation when using a low number of features. When you raise the number of features, reduce the feature separation.
  • Track Threshold: Adjusts the tolerance to change within the video. Lowering the threshold can generate longer tracks, but they may be less accurate. When lowering the threshold, use Preview Features to confirm that the tracks remain accurate.
  • Track Smoothness: When working with more complex scenes, increase the track smoothness value to discard tracks that accumulate error over time.
  • Track Consistency: Sets the acceptable level of consistency before a track is discarded and replaced with a new feature in a different location. Higher values allow less inconsistency, but may take longer to process.
  • Track Validation: Select the type of camera motion that Camera Tracker should expect while tracking the scene.
    • None: Do not validate tracks based on any particular camera movement.
    • Free Camera: Compensates for both translational and rotational movement in the camera.
    • Rotating Camera: Compensates for rotational movement only; use this if your scene was shot from a tripod.
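The interplay between Number of Features, Detection Threshold, and Feature Separation can be pictured as a strongest-first selection: discard candidates below the threshold, then pick features in order of strength while skipping any that fall too close to one already chosen. A simplified sketch (not HitFilm's actual algorithm):

```python
# Illustrative only: choose features strongest-first, discarding weak
# candidates and any that sit too close to a feature already chosen.

def select_features(candidates, count, threshold, separation):
    """candidates: list of (score, x, y). Returns up to `count` positions."""
    chosen = []
    for score, x, y in sorted(candidates, reverse=True):
        if score < threshold or len(chosen) >= count:
            break
        too_close = any((x - cx) ** 2 + (y - cy) ** 2 < separation ** 2
                        for cx, cy in chosen)
        if not too_close:
            chosen.append((x, y))
    return chosen

candidates = [(0.9, 0, 0), (0.8, 1, 0), (0.7, 10, 10), (0.2, 20, 20)]
# A separation of 5 pixels rejects (1, 0) as too close to (0, 0); the 0.3
# threshold rejects the weak candidate at (20, 20).
print(select_features(candidates, count=10, threshold=0.3, separation=5))  # [(0, 0), (10, 10)]
```

Raising `separation` here mimics a higher Feature Separation value: features spread out instead of clustering around the strongest areas of contrast.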

Using Mattes

Mattes block specific areas of the video from being used in tracking. This helps reduce processing time and prevents unusable tracking data. By keyframing mattes to cover areas of the frame containing moving objects, you allow Camera Tracker to ignore those areas so it can focus on tracking stable objects, which will give superior results.

Mattes must be created on a separate layer. In most cases a plane works best, but other layer types can work as well. Use the following steps to create a basic matte.

  1. On the timeline, open the New menu and create a new Plane layer.
  2. Open the transform controls for the plane, and reduce its Opacity to 30%. This allows you to see through the plane and observe the details of the video layer.
  3. On the Viewer, select the Freehand Mask tool. Then, on the timeline, select the plane.
  4. Draw a mask loosely around the moving object in your video. It does not need to be precise, but try to keep the space outside of the object to a minimum.
  5. Keyframe the mask’s path, position, rotation, or scale as needed, to follow the movement of the object.
  6. If necessary, repeat steps 3-5 for any additional moving objects in the frame.
  7. Right-click the plane on the timeline, and select Make Composite Shot. In the dialog that opens, rename the composite shot to “Matte”, and select Move With Layer to move the masks with the layer into the new comp. This step bakes the masks into the layer, so they are calculated into the layer’s shape.
  8. Switch back to the main composite shot timeline, where the tracking is being performed, and open the Camera Tracker controls.
  9. For the Matte Source property, select Matte Layer Alpha.
  10. For the Matte Layer property, select the “Matte” layer that contains the masks.

You can now proceed with the tracking, and the areas inside the matte will be ignored.
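Conceptually, ignoring the matted areas is simple: any candidate feature whose position the matte covers is skipped before tracking begins. An illustrative sketch (not HitFilm code), treating the matte as a grid where non-zero means "ignored":

```python
# Illustrative only: drop candidate features wherever the matte is non-zero.

def filter_features(features, matte):
    """Keep only features whose position the matte does not cover."""
    return [(x, y) for x, y in features if matte[y][x] == 0]

# A 4x4 matte masking out the top-left 2x2 block (a moving object, say).
matte = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
features = [(0, 0), (1, 1), (3, 0), (2, 3)]
print(filter_features(features, matte))  # [(3, 0), (2, 3)]
```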

