The Camera Tracker effect is a HitFilm Pro exclusive.

Foundry’s Camera Tracker for HitFilm Pro allows anyone to easily and quickly composite effects or other elements into video footage that was filmed with a moving camera. While keeping the basic process incredibly simple, Camera Tracker also includes powerful features that ensure high quality results even with difficult-to-track footage.

Camera tracking, also called matchmoving, is usually among the first steps in the post-production process, because compositing elements into a scene convincingly relies on the camera tracking data. The basic workflow can be broken into three main steps:

  1. Tracking Features – Identifying unique points of detail in the scene, then locating those features in each frame to determine how they move through the frame.
  2. Solving – Analyzing the tracked features and comparing their relative movement to determine where the camera was positioned in relation to each of the tracked features. By triangulating the movement of multiple points, the camera position can be solved with great accuracy.
  3. Creating a Scene – Using the solve data to generate a scene composed of a moving 3D camera, the relative positions of each tracked feature, and the original video clip being tracked.
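The three stages above can be sketched as a simple data flow. This is an illustrative sketch only, using pre-made 2D detections in place of real image analysis; the function names are hypothetical, not HitFilm's actual API.

```python
# Illustrative sketch (hypothetical names, not HitFilm's API): the three
# matchmoving stages as a simple data flow, using pre-made 2D detections
# in place of real image analysis.

def track_features(detections_per_frame):
    """Stage 1 -- group per-frame detections into tracks keyed by feature id."""
    tracks = {}
    for frame, detections in enumerate(detections_per_frame):
        for feature_id, position in detections.items():
            tracks.setdefault(feature_id, []).append((frame, position))
    return tracks

def solve_camera(tracks):
    """Stage 2 (stub) -- a real solver triangulates the camera's position per
    frame from the relative 2D motion of many tracks; here we only report
    the coverage such a solve would be built from."""
    return {"n_tracks": len(tracks),
            "n_frames": 1 + max(f for t in tracks.values() for f, _ in t)}

def create_scene(solve):
    """Stage 3 -- package the solve as a scene: an animated camera plus points."""
    return {"camera_keyframes": solve["n_frames"], "points": solve["n_tracks"]}

detections = [{"a": (10, 20), "b": (5, 5)},   # frame 0
              {"a": (11, 21), "b": (6, 5)}]   # frame 1
print(create_scene(solve_camera(track_features(detections))))
# {'camera_keyframes': 2, 'points': 2}
```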

Using Camera Tracker

The basic process of using Camera Tracker to create a 3D camera solve is detailed in the steps below.

  1. Add the CameraTracker effect to the layer that needs to be tracked.
  2. Double-click the effect on the timeline to open its controls in the Controls panel.
  3. Click the Track Features button. This may take some time, while the software auto-selects features and tracks them through the video.
  4. Click the Solve Camera button. Camera tracker will evaluate the movement of each feature to determine where the camera was when the scene was filmed.
  5. Click the Create Scene button. A new point will be created, with its position animated to match the camera movement. A new camera will be created and parented to the animated point.
  6. The final step is not essential, but is useful in most cases: setting the ground plane. Select two or more points that are positioned on the ground.
  7. In the bottom left corner of the Viewer, open the Camera tracker menu. Select Ground Plane > Set To Selected.

This simple process creates a scene containing your video and a matchmoved camera, ready to add new elements.

Advanced Features

For many situations, the basic steps above are all that will be required. But frequently the tracking results can be improved by customizing the settings used to track the scene. By getting familiar with these features and settings, you can get the best results possible for any scene you are working on.

  • Matte Source:
    • None: No matte is applied.
    • Src Alpha: Uses the alpha of the source layer.
    • Src Inverted Alpha: Uses the inverted alpha of the source layer.
    • Matte Layer Luminance: Uses the luminance of the matte layer.
    • Matte Layer Inverted Luminance: Uses the inverted luminance of the matte layer.
    • Matte Layer Alpha: Uses the alpha of the matte layer.
    • Matte Layer Inverted Alpha: Uses the inverted alpha of the matte layer.
  • Analysis Range: Select the range of frames within the source layer that will be tracked.
    • Source Clip Range: Tracks the entire duration of the source layer.
    • Specified Range: Allows you to specify a limited range within the layer, and only tracks the selected frames. When this option is selected, two new controls will appear below.
      • Analysis Start: Select the frame where tracking will begin.
      • Analysis Stop: Select the frame where tracking will end.
  • Display: Choose what information the Viewer displays about the tracked features.
    • Tracks: Shows only the tracks, without any additional information. All tracks are shown as orange before solving. After solving, solved tracks are colored green, unsolved tracks are orange, and rejected tracks are red.
    • Track Quality: The reliability of the tracks is indicated through color coding. Reliable tracks are green, questionable tracks are yellow, and unreliable tracks are red.
    • Point Quality: The quality of the 3D points generated by the solve are displayed through color coding. Tracks with the lowest probability of error are green. Tracks with the highest probability of error are red. This option is only available after the camera is solved.
  • Allow Line Selection: Enabling this checkbox makes it easier to select multiple points in the viewer. When disabled, you must click on the track X to select it. When enabled, you can click the line of the track path to select it.
  • Preview Features: Enabling this option allows you to view the features before tracking has begun.
  • View Keyframed Points Only: Select this checkbox to view only keyframed points. Keyframed points are the core of the tracking data, and are used to fill in data for the other tracks.
  • Track Features: Click this button to begin the tracking process.
  • Solve Camera: After tracking is completed, click this button to solve the camera’s position and movement, based on the tracking data.
  • Create Scene: After the camera is solved, clicking this button generates a 3D scene, with a moving 3D camera and your source video layer.
  • Toggle Render Mode: This control toggles between showing the features placed over the source video, and showing the 3D point cloud created by the tracker.

Tracking

  • Number of Features: Set the number of features used to track the movement in the scene. More features can improve the accuracy of the solve, but will extend the processing time needed to track and solve the scene. As a rule, most layers should use a minimum of 100 features to ensure a reliable solve.
  • Detection Threshold: Lowering the detection threshold selects more prominent points within the layer, while increasing the detection threshold spreads the features more evenly across the layer.
    • TIP: If your layer contains large areas that are relatively featureless, use a low detection threshold to improve the results.
  • Feature Separation: Controls the distribution of features across the layer. Higher values space features more evenly, while lower values allow features to group together near areas of more prominent contrast.
    • TIP: Increase feature separation when using a low number of features. When you raise the number of features, reduce the feature separation.
  • Track Threshold: Adjusts the tolerance to change within the video. Lowering the threshold can generate longer tracks, but they may be less accurate. Use Preview Features to check the accuracy of tracks when lowering the threshold.
  • Track Smoothness: When working with more complex scenes, increase the track smoothness value to discard tracks that accumulate error over time.
  • Track Consistency: Sets the acceptable level of consistency before a track is discarded and replaced with a new feature in a different location. Higher values allow less inconsistency, but may take longer to process.
  • Track Validation: Select the type of camera motion that Camera Tracker should expect while tracking the scene.
    • None: Do not validate tracks based on any particular camera movement.
    • Free Camera: Compensates for both translational and rotational movement in the camera.
    • Rotating Camera: Compensates for a camera that only rotates; use this option if your scene was shot from a tripod.
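As an illustration of how Number of Features and Feature Separation interact, the sketch below greedily keeps the strongest candidate features while enforcing a minimum pixel distance between them. This is an assumed behaviour for illustration only, not HitFilm's actual algorithm.

```python
# Illustrative sketch of feature selection with a separation constraint:
# keep the strongest candidates, but reject any candidate closer than
# `separation` pixels to an already chosen feature (assumed behaviour,
# not HitFilm's code).
import math

def select_features(candidates, number_of_features, separation):
    """candidates: list of (strength, x, y); strongest are considered first."""
    chosen = []
    for strength, x, y in sorted(candidates, reverse=True):
        if all(math.hypot(x - cx, y - cy) >= separation for _, cx, cy in chosen):
            chosen.append((strength, x, y))
        if len(chosen) == number_of_features:
            break
    return chosen

candidates = [(0.9, 10, 10), (0.8, 12, 10), (0.7, 100, 100), (0.6, 50, 50)]
# With a separation of 5 px, the weaker point at (12, 10) is dropped
# because it is only 2 px from the strongest point at (10, 10).
print(len(select_features(candidates, 3, 5)))  # 3
```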

Using Mattes

Mattes block specific areas of the video from being used in tracking. This helps reduce processing time, and prevent unusable tracking data. By keyframing mattes to cover areas of the frame containing moving objects, you allow Camera Tracker to ignore those areas, so it can focus on tracking stable objects that will give superior results.

Mattes must be created on a separate layer. In most cases a plane works best, but other layer types can work as well. Use the following steps to create a basic matte.

  1. On the timeline, open the New menu and create a new Plane layer.
  2. Open the transform controls for the plane, and reduce its Opacity to 30%. This allows you to see through the plane and observe the details of the video layer.
  3. On the Viewer, select the Freehand Mask tool. Then, on the timeline, select the plane.
  4. Draw a mask loosely around the moving object in your video. It does not need to be precise, but try to keep the space outside of the object to a minimum.
  5. Keyframe the mask’s path, position, rotation, or scale as needed, to follow the movement of the object.
  6. If necessary, repeat steps 3-5 for any additional moving objects in the frame.
  7. Right-click the plane on the timeline, and select Make Composite Shot. In the dialog that opens, rename the composite shot to “Matte”, and select Move With Layer to move the masks with the layer into the new comp. This step bakes the masks into the layer, so they are calculated into the layer’s shape.
  8. Switch back to the main composite shot timeline, where the tracking is being performed, and open the Camera Tracker controls.
  9. For the Matte Source property, select Matte Layer Alpha.
  10. For the Matte Layer property, select the “Matte” layer that contains the masks.

You can now proceed with the tracking, and the areas inside the matte will be ignored.
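As a rough illustration of what the Matte Source options compute, the sketch below maps each option onto a per-pixel matte value, where 0 means the pixel is ignored during tracking. The function and the Rec. 601 luminance weights are assumptions for illustration; HitFilm performs this internally.

```python
# Rough sketch of what each Matte Source option could compute per pixel
# (assumed behaviour; HitFilm performs this internally). Pixels are
# (r, g, b, a) tuples with components in the range 0..1.

def matte_value(option, src_pixel, matte_pixel=(0.0, 0.0, 0.0, 0.0)):
    r, g, b, a = matte_pixel
    luminance = 0.299 * r + 0.587 * g + 0.114 * b  # Rec. 601 weights (assumed)
    return {
        "None": 1.0,                                   # nothing is masked out
        "Src Alpha": src_pixel[3],
        "Src Inverted Alpha": 1.0 - src_pixel[3],
        "Matte Layer Luminance": luminance,
        "Matte Layer Inverted Luminance": 1.0 - luminance,
        "Matte Layer Alpha": a,
        "Matte Layer Inverted Alpha": 1.0 - a,
    }[option]

# A matte value of 0 means "ignore this pixel while tracking", so a fully
# opaque matte plane excludes its area via Matte Layer Inverted Alpha:
print(matte_value("Matte Layer Inverted Alpha", (0, 0, 0, 0), (0, 0, 0, 1.0)))  # 0.0
```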

Solve

When you’re happy with the features that you’ve tracked, you can proceed to solving the camera. CameraTracker uses the tracking information to internally calculate the camera position and add positional information to the feature points in the Viewer. The first step is to adjust the parameters for your camera solve.

  • Focal Length Type: Select from this menu the focal length type used when you filmed the scene. If you do not know the correct focal length, CameraTracker can calculate the correct value for you, but this will extend the processing time required. Keeping track of the focal lengths used so you can enter them before tracking gives the fastest results.
    • Known: If you know the focal length of the lens used to film the scene, select this option, then enter the value below as the Focal Length.
    • Approximate Varying: Select this option if a zoom lens was used, and you do not know the exact focal length used but do have an approximation of the correct value. Enter that approximate value below, as the Focal Length.
    • Approximate Constant: Select this option when no zoom is present, but you only have an approximate value for the focal length used. Enter that approximate value below, as the Focal Length.
    • Unknown Varying: Select this option if the focal length is unknown and changes through the scene.
    • Unknown Constant: This is the default option. Use this option when the zoom does not change in the scene, and you do not know the focal length used.
  • Units: Select the units used to define the focal length and Film Back Size.
  • Focal Length: Specify the focal length value used to film the scene. When an Approximate Focal Length Type is selected, CameraTracker will attempt to refine the focal length during the solve.
    • NOTE: Digital cameras often record the focal length used for each shot, in the metadata for the video file.
  • Film Back Size X: Defines the physical width of the image sensor in the camera used to capture the scene.
  • Film Back Size Y: Defines the physical height of the image sensor in the camera used to capture the scene.
  • Camera Motion: You can assist CameraTracker with the solve by selecting the motion type of the camera while it was on set filming. Selecting a Motion Type limits the number of calculations that CameraTracker needs to run, thereby expediting the tracking process.
    • Rotation Only: This option is designed for cameras mounted on a tripod, where the camera’s position is stationary and any movement consists entirely of rotation on the X, Y or Z axes.
      • NOTE: When Rotation Only is selected, make sure that Tracking > Track Validation is also set to Rotating Camera.
    • Free Camera: Select this option for any scene where a tripod is not in use, and the camera is translating and rotating.
  • Keyframe Separation: Specifies the gap between the keyframes which define the camera’s movement. Higher values create fewer keyframes, useful for slow moving cameras. Low values generate more keyframes, which can more effectively handle fast camera motion. Generally this property is used for troubleshooting a difficult solve, and can initially be left at its default.
  • Set Reference Frame: Select a frame with a high number of tracks, spread widely through the frame at varying depths, to be used as the first keyframe from which the solve will be calculated. Generally this property is used for troubleshooting a difficult solve, and can initially be left at its default.
  • Keyframe Accuracy: The higher the accuracy, the longer the solve will take to process, but the more precise the results will be. Higher accuracy also helps when tracking longer sequences. Generally this property is used for troubleshooting a difficult solve, and can initially be left at its default.
  • Three Frame Initialization: Enable this option to help prevent distorted solves. Trying different reference frames with this option can be effective for correcting complex solves. Generally this property is used for troubleshooting a difficult solve, and can initially be left at its default.
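The Focal Length and Film Back Size values are related through the standard pinhole-camera field-of-view formula, which is why the solver benefits from knowing both. A small sketch of that relationship (general photography math, not HitFilm-specific):

```python
# The focal length and film back (sensor) width together determine the
# horizontal field of view: fov = 2 * atan(film_back_x / (2 * focal_length)).
# Standard pinhole-camera math, not HitFilm-specific code.
import math

def horizontal_fov_degrees(focal_length_mm, film_back_x_mm):
    return math.degrees(2 * math.atan(film_back_x_mm / (2 * focal_length_mm)))

# A 36 mm wide (full-frame) sensor paired with a 35 mm lens:
print(round(horizontal_fov_degrees(35.0, 36.0), 1))  # 54.4
```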

Once the parameters are set, click the Solve Camera button to create the camera solve. The solve happens much more quickly than the tracking in most cases, but the speed will vary depending on the length of the sequence, the number of tracks, and the complexity of the camera movement. Once the solve is completed, a solve report is displayed. If you are happy with the result, you can proceed to creating a scene.

Lens Distortion

Lens Distortion is used to select the type of lens used on set. Select the lens type from the drop down menu.

  • No Lens Distortion: Disables all lens distortion controls and treats the footage as having no distortion.
  • Known Lens: Allows you to specify the lens distortion manually for the camera solve.
  • Refine Known Lens: Uses your selected distortion settings as an approximation, and attempts to refine them further in the camera solve.
  • Unknown Lens: Calculates the lens distortion automatically from the sequence and then refines the distortion in the camera solve.

Lens Parameters

  • Lens Type: Select the type of lens used.
    • Spherical: Select this option for standard spherical lenses.
    • Anamorphic: Anamorphic lenses squeeze the footage when filming, requiring you to stretch it later to create a wider frame; select this option so the distortion is calculated accordingly.
  • Radial Distortion 1 (d1): Defines the first radial distortion term. This is proportional to r², where r is the distance from the distortion centre.
  • Radial Distortion 2 (d2): Defines the second radial distortion term. This is proportional to r⁴, where r is the distance from the distortion centre.
  • Distortion Centre: Defines the values for the centre of the radial distortion on the X and Y axes.
  • Anamorphic Squeeze (Asq): Defines the anamorphic squeeze value. If you select an anamorphic lens, the Distortion Centre on the X axis is scaled by this amount.
  • Asymmetric Distortion (Ax, Ay): Defines distortion for anamorphic lenses. Enter values to define asymmetric distortion to correct for slight misalignments between multiple elements in the lens.
  • Undistort: Check this to view your footage undistorted.
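The d1 and d2 parameters describe a two-term radial polynomial, the common form of radial lens distortion: a point at radius r from the distortion centre is scaled by (1 + d1·r² + d2·r⁴). The sketch below illustrates that general model; HitFilm's exact implementation may differ.

```python
# Sketch of the general two-term radial distortion model that d1 and d2
# describe (a common Brown-style form; HitFilm's exact implementation may
# differ). A point at radius r from the distortion centre is scaled by
# (1 + d1*r^2 + d2*r^4).

def distort(x, y, d1, d2, centre=(0.0, 0.0)):
    dx, dy = x - centre[0], y - centre[1]
    r2 = dx * dx + dy * dy                 # r squared
    scale = 1.0 + d1 * r2 + d2 * r2 * r2
    return centre[0] + dx * scale, centre[1] + dy * scale

# With both terms at zero, points are unchanged:
print(distort(0.5, 0.0, 0.0, 0.0))               # (0.5, 0.0)
# A positive d1 pushes points away from the centre:
print(round(distort(0.5, 0.0, 0.1, 0.0)[0], 4))  # 0.5125
```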

Refine

The refine tools can assist you when you are troubleshooting your tracks. In conjunction with the Value Graph, they provide detailed statistics on track and solve data across the entire composition, helping you to locate problematic frames within the scene.

Track Statistics

Track statistics can be used after you have tracked your composite shot. To access values for these statistics on the timeline, click the Value Graph button at the top right of the timeline panel.

  • Track-Points: Shows the total number of tracking points present on each frame throughout the scene. Large variations from your specified Number of Features in the Tracking settings indicate possible tracking issues.
  • Track-Lifetime-Min: Shows the shortest lifetime, in frames, of any track point on the current frame. Very short tracks are likely to be poor quality, and are good candidates for removal.
  • Track-Lifetime-Avg: Shows the average lifetime of all tracks present on the current frame. Significant drops in this number indicate a high number of tracks being reseeded. Try using matting on that frame, to exclude problematic areas from the tracking, then retrack to improve the results.
  • Track-Distance-Avg: Shows the average distance the track points have moved from the previous frame to the current frame. Large variations in the average distance could indicate tracking problems.
  • Track-Distance-Max: Shows the maximum distance any track has moved between the previous frame and the current frame. Compare these values to the Track-Distance-Avg. Notable spikes in the Max value which differ from the average may indicate tracking errors.
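The track statistics above can be illustrated by computing them from toy track data. The data layout below is an assumption for illustration, not HitFilm's internal format.

```python
# Illustrative computation of the Track-Lifetime and Track-Distance
# statistics from toy track data (assumed layout, not HitFilm's internals).
import math

# Each track: feature id -> {frame index: (x, y)}
tracks = {
    "a": {0: (0, 0), 1: (3, 4), 2: (6, 8)},
    "b": {1: (10, 10), 2: (10, 11)},
}

def stats_for_frame(tracks, frame):
    present = {k: v for k, v in tracks.items() if frame in v}
    lifetimes = [len(v) for v in present.values()]
    # Distance moved since the previous frame, for tracks seen on both frames:
    distances = [math.dist(v[frame], v[frame - 1])
                 for v in present.values() if frame - 1 in v]
    return {
        "Track-Points": len(present),
        "Track-Lifetime-Min": min(lifetimes),
        "Track-Lifetime-Avg": sum(lifetimes) / len(lifetimes),
        "Track-Distance-Avg": sum(distances) / len(distances),
        "Track-Distance-Max": max(distances),
    }

print(stats_for_frame(tracks, 2))
```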

Solve Statistics

Solve statistics can only be used after you have solved your camera. They can be accessed in the same way as Track statistics, by clicking the Value Graph button at the top right of the timeline panel.

  • Solve Keyframe: Shows frames used as a keyframe by the solve process: 1 for keyframes, 0 for those not used as a keyframe.
  • Solve-Overall Error: Shows the total solve error. Lower values indicate a more reliable solve.
  • Solve-Margin of Error: Indicates frames where outlying points may have caused possible solve errors. Higher values represent a greater margin of error.
  • Solve-Per Frame Error-Avg: Shows the general quality measure of the solve over time. It is calculated as the average of all the solve errors for the points present on the current frame.
  • Solve-Per Frame Error-Max: Shows the maximum value of all the solve errors for the points present on the current frame. Spikes in the maximum distance measurement which are not reflected in the average measurement could indicate possible outliers which need cleaning up.
  • Solve-Per Track Error-Max: Shows the maximum value of all the solve errors for the points present on the current frame, averaged over the course of the track lifetimes. High values could indicate feature points which are generally bad over their lifetime.

Track and Solve Thresholds

These controls set minimum and maximum thresholds for the tracks and the solve. Any tracks that fall outside these thresholds are automatically rejected and displayed in red. You can easily delete them by clicking Delete Rejected.

  • Threshold-Track-Lifetime-Min: Sets the shortest acceptable lifetime, in frames, of track points. For example, if you set the threshold to 3, any tracks that only exist for one or two frames are automatically rejected. This allows you to easily reject any very short tracks, which are likely to be of poor quality.
  • Threshold-Track-Distance-Max: Sets the maximum distance, in pixels, which a track point can move from one frame to the next without being rejected. For example, if you set this to 80, CameraTracker rejects any tracks where the track point moves 81 pixels or more between two adjacent frames.
  • Threshold-Solve-Frame Error-Max: Sets the maximum acceptable value of all the solve errors for the points present on any one frame. This allows you to reject tracks that may not be generally bad over their lifetime but do go wrong at some point in time.
  • Threshold-Solve-Track Error-Max: Sets the maximum acceptable value of all the solve errors for the points present on any one frame, averaged over the course of the track lifetimes. This allows you to reject tracks that are generally bad over their lifetime.
  • Delete Rejected: Click this button to delete all tracks which fall outside of the thresholds you have specified above.
  • Delete Unsolved: Click this button to remove all feature tracks which do not have a valid 3D point after the camera solve is created.
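The rejection logic these thresholds describe can be sketched as follows, using the examples from the text (a minimum lifetime of 3 frames and a maximum per-frame movement of 80 pixels). Illustrative code, not HitFilm's implementation.

```python
# Sketch of the threshold-based rejection described above, using the
# examples from the text: a lifetime minimum of 3 frames and a maximum
# per-frame movement of 80 pixels (illustrative, not HitFilm's code).
import math

def rejected(track, lifetime_min=3, distance_max=80.0):
    """track: list of (x, y) positions on consecutive frames."""
    if len(track) < lifetime_min:
        return True  # too short-lived to be trustworthy
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        if math.hypot(x1 - x0, y1 - y0) > distance_max:
            return True  # jumped too far between adjacent frames
    return False

print(rejected([(0, 0), (1, 1)]))              # True  (only 2 frames)
print(rejected([(0, 0), (100, 0), (101, 0)]))  # True  (moved 100 px in one frame)
print(rejected([(0, 0), (1, 0), (2, 0)]))      # False
```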
