Matchmoving & camera tracking

by Monika Klamecka, 4 November 2013


Transcript of Matchmoving & camera tracking

Picture 1 - The virtual camera is lined up with the virtual set and characters at the beginning of the shot...
Picture 2 - ... and at the end...
Picture 3 - Creating an animated CG camera that matches all properties of the real-life camera: lens, focal length, position, and movement relative to the subject.

Matchmoving is a cinematic technique that allows animators to insert computer-generated objects into live-action footage with the correct position, scale, orientation and motion relative to the photographed objects in the shot. The technique tracks the movement of the real camera so that a corresponding virtual camera move can be duplicated in camera-tracking software. Animated objects composited into the live-action footage then appear seamless and in “perfectly matched perspective” (Wikipedia, 2013). The result of matchmoving is a realistic illusion: viewers believe the CG element is an actual component of the live footage.

Erica Hornung, a well-known digital artist, explains the matchmoving technique in more depth in her book “The Art and Technique of Matchmoving”: the matchmove artist gathers all the information from the real-life set (where the film is being made) and recreates it as a virtual camera (using the focal length of the lens, the height, the tilt and the position) in relation to the objects in the CG environment. “Then, when the CG world is created, it is “photographed,” or rendered, with the virtual CG twin of the real-life camera: the same lens, the same position, the same movement if there was any” (Hornung, 2010, pages xiii-xiv).
Matchmoving is quite a new technique. The US military first developed the idea of using tracking as part of its missile guidance systems, and in the late 1970s it experimented with head-mounted displays that combined magnetic head trackers to understand where the pilot was looking (Anon, 1999). The matchmoving technique was then introduced to visual effects (VFX) by Tom Brigham and J.P. Lewis in 1985, who implemented a Fast Fourier Transform (FFT) tracker at the New York Institute of Technology graphics lab for a series of TV commercials for National Geographic that showed a coin travelling in an arc as if it were a rising sun (Seymour, 2005).
Industrial Light & Magic had an early 2D tracking software system called MM2, first tested on “Hook” and “Death Becomes Her”. MM2 was the beginning of 2D tracking software that let users keyframe position changes by hand. This process later formed the basis of 3D tracking systems, where the virtual camera was adjusted frame by frame to align the CG element with the live footage. It was used in early movies such as “Terminator 2” (the T-1000 liquid-metal man) and for the dinosaurs in “Jurassic Park” (Carney, n.d.).

Most of the time matchmoving happens early in production. This allows animators and technical directors to plan where to place their CG characters, explosions and other special effects; they also need a camera from which to look at and render the objects (Dobbert, 2005).
The virtual camera created by the matchmover is based on data collected while surveying the shot and locations (measuring still objects, buildings, etc.). The image plane (the movie screen) moves with the virtual camera and is attached to it at all times. The image shown on it is the exact live-action footage that was shot on set, scanned so it can later be worked on in the three-dimensional world. The matchmover therefore sees exactly what the camera operator saw during the shoot. The virtual set will fit the plate correctly once all the virtual camera's properties match those of the real camera at the time the footage was shot. The footage is then ready to be processed in matchmoving software (Hornung, 2010).

The most important information that must be given to the software is as follows (Hornung, 2010):
• Certain spots on the plate are important.
• They are important because they match certain points in the virtual set.
• Those points in the virtual set are located at specific coordinates.
Camera tracking
To mark those important features on the plate, a 2D tracking process must be applied. The 2D tracking software specifies where these features are on every frame of the sequence in flat, 2D space. Once 2D tracking is finished, the corresponding 3D locations (called locators) of those 2D points in the virtual set need to be determined. The matchmove software then creates a 3D camera solution based on the correlation between the 2D tracking points and the 3D locators.
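Dedicated matchmove packages implement this 2D tracking internally, but the idea can be sketched with OpenCV's corner detector and pyramidal Lucas-Kanade optical flow; the file names and parameters below are placeholders, not values from the project described here.

```python
import cv2

# Illustrative 2D tracking sketch: follow corner features across an
# image sequence with pyramidal Lucas-Kanade optical flow.
prev = cv2.imread("plate.0001.jpg", cv2.IMREAD_GRAYSCALE)

# Detect trackable features (corners) on the first frame.
pts = cv2.goodFeaturesToTrack(prev, maxCorners=300, qualityLevel=0.01,
                              minDistance=10)

tracks = [[p.ravel()] for p in pts]  # one 2D track per feature

for i in range(2, 101):  # frames 2..100 of the sequence
    frame = cv2.imread(f"plate.{i:04d}.jpg", cv2.IMREAD_GRAYSCALE)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, frame, pts, None)
    for track, p, ok in zip(tracks, new_pts, status.ravel()):
        if ok:  # only extend tracks whose feature was found on this frame
            track.append(p.ravel())
    pts, prev = new_pts, frame
```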
The camera and plate are locked to each other and only move together; rotating the camera, for example, rotates the plate as well. For every 2D point on the plate and every correlated 3D locator in the computer-generated scene, “the matchmoving software “stretches a string” from the camera through the virtual location and to the plate, manoeuvring the virtual camera until all the strings line up correctly” (Hornung, 2010, pages 6-8).
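This “string-stretching” is, in effect, a search for the camera pose that minimises reprojection error. As a rough illustration, for a single frame with known 2D-3D correspondences and known camera intrinsics the same problem can be posed with OpenCV's solvePnP; every number below is invented for the sketch.

```python
import cv2
import numpy as np

# Illustrative 3D locators (surveyed set points) and their 2D track points.
locators_3d = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0],
                        [0, 1, 0], [0.5, 0.5, 1], [1, 0.5, 1]], dtype=np.float64)
tracks_2d = np.array([[320, 240], [480, 238], [478, 120],
                      [322, 122], [400, 60], [470, 58]], dtype=np.float64)

# Intrinsics of the virtual camera (focal length, principal point),
# assumed known from set data or lens metadata.
K = np.array([[800, 0, 320],
              [0, 800, 240],
              [0, 0, 1]], dtype=np.float64)

# Solve for the camera pose that lines the "strings" up.
ok, rvec, tvec = cv2.solvePnP(locators_3d, tracks_2d, K, None)

# Reproject the locators to measure how well the strings line up.
proj, _ = cv2.projectPoints(locators_3d, rvec, tvec, K, None)
error = np.linalg.norm(proj.reshape(-1, 2) - tracks_2d, axis=1).mean()
print(f"rotation {rvec.ravel()}, translation {tvec.ravel()}, "
      f"mean reprojection error {error:.2f}px")
```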
Video: 3D Tracking & Matchmoving (De Guzman, 2011)
Video: Compositing CG Robot Character into Live Action (RumbleStudio3D, 2011)
Matchmoving & Camera Tracking
Monika Klamecka, 09004293
By doing this, the CG objects created in the virtual world will have the same properties and the same relationship to the moving camera as the live actors had to the live camera, and will therefore be seamlessly integrated into the live plate (the live-action film from set, digitised for use in the computer-generated environment) for the final shot.
A matchmover must understand how real-world cameras work and how they make the images we see on screen. A good matchmover should be familiar with lenses and cameras and be able to spot mistakes in the data from set and account for them. A real camera does one thing when it films a scene: it captures the three-dimensional world as a two-dimensional image, gathering light from the 3D world around us and recording it in a 2D form such as a piece of film or a digital image. A matchmover's job is the exact opposite: a matchmover must take a two-dimensional image and create a three-dimensional world (Dobbert, 2005).
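What the camera does, collapsing 3D onto 2D, is essentially a pinhole projection. A minimal sketch, with an assumed focal length and principal point:

```python
def project(point_3d, focal_length, principal_point):
    """Pinhole camera: collapse a 3D point (camera space) to a 2D pixel."""
    x, y, z = point_3d
    u = focal_length * x / z + principal_point[0]
    v = focal_length * y / z + principal_point[1]
    return u, v

# A point 5 units in front of the camera lands here on the image plane:
print(project((1.0, 0.5, 5.0), focal_length=800, principal_point=(320, 240)))
# The matchmover's job is the inverse: recover camera pose (and depth)
# from many such 2D observations.
```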
The image plane is attached to the virtual camera and shows the actual footage shot on set. Like the real-life camera, the virtual camera can be manipulated around the image plane.
Whilst looking through the virtual camera, both the plate and the virtual set are seen. This is where the camera solution can be adjusted to achieve a perfect fit.
(Hornung, 2010, p.6)
2D tracking: 2D track points on the plate (white); 3D locators on the virtual set (blue).
“Strings”: 3D locators in the virtual set; 2D track points associated with 3D locators (Hornung, 2010, p. 7).
To summarise, matchmoving and camera tracking are crucial when it comes to creating realistic shots. If compositing a CG object into a scene is done correctly, the viewer will not notice it, as it will appear seamless. The technique has improved over the past years and now allows realistic-looking movies to be created on a very low budget while achieving stunning effects. Nowadays there are no limits in the matchmoving industry; the only limit is the imagination.
Design
Developmental work for matchmoving & camera tracking in Nuke
Before experimenting with camera tracking and matchmoving, live-action footage needs to be shot, and then converted into an image sequence before importing it into Nuke. This can easily be done in After Effects, where the footage can be rendered as a JPEG sequence.
The footage used in this experiment had already been converted into an image sequence. It was then imported into the Nuke project and connected to the Viewer node so it could be displayed in the workspace.
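For reference, the same import step can be scripted with Nuke's Python API (run inside Nuke); the file path and frame range are placeholders:

```python
import nuke

# Read the JPEG image sequence rendered from After Effects.
read = nuke.nodes.Read(file="footage/plate.####.jpg", first=1, last=100)

# Connect the Read node to the Viewer so the footage shows in the workspace.
nuke.connectViewer(0, read)
```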
Screenshots: the footage before and after lens distortion; the Nuke script at this stage; the final roto; importing the image sequence into Nuke; connecting the Viewer node to display the footage in the workspace.
The CameraTracker node must be added to track the live-action footage. To ensure the camera tracker works from undistorted footage, the values from the lens distortion node should be used within the camera tracker; they can easily be copied over.
The camera tracker needs to find points from which it can track. To preview those points, 'preview features' in the tracking menu must be turned on, which displays all the points the virtual camera can track from. (A scripted sketch of this setup follows below.)
At this stage the camera tracker is picking up points on the moving people, because it assumes that everything in the footage is still apart from the camera. To ensure the tracker does not consider the moving objects, they have to be rotoscoped out of the scene. To do so, a Roto node needs to be plugged in and roto shapes created; this only needs to be done on one frame, as Nuke uses autokeying.
To make sure the camera tracker no longer considers the moving people, the mask alpha channel must be enabled in the camera tracker menu, as the roto shape was drawn on the alpha channel.
Tracking points are ignoring the elements that were rotoscoped out of the scene
It is recommended to increase the number of tracking points (number of features) to help the camera track better. In this case the number of features was increased to 300, where the default is 150 tracking points.
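A rough Nuke Python sketch of the tracker setup described in the last few steps; nuke.createNode and setValue are standard API calls, but the knob names used here are assumptions and may differ between NukeX versions:

```python
import nuke

# Sketch of the CameraTracker setup (requires NukeX).
# NOTE: the knob names below are assumptions; check your Nuke version.
tracker = nuke.createNode("CameraTracker")

# Feed the tracker the roto'd plate; the Roto node (drawn interactively,
# downstream of the Read node) holds the moving people on its alpha.
roto = nuke.nodes.Roto()
tracker.setInput(0, roto)
tracker["maskType"].setValue("Source Alpha")   # assumed knob/value name

# Raise the feature count from the default 150 to 300 for a denser track.
tracker["numFeatures"].setValue(300)           # assumed knob name

# Show the candidate features in the Viewer before tracking.
tracker["previewFeatures"].setValue(True)      # assumed knob name
```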
The next step is tracking the set of features that has been detected within the image sequence.
Once the camera track is done it finds a number of tracks. However, some of the tracks found are not consistent and need to be cleaned up.
removing deviants
The tracks can also be refined automatically; for example, the refine menu lets you decide the minimum and maximum length of the tracks to keep. In this case the minimum track length was changed to 6 and the rejected tracks were deleted. Now the camera can be solved.
Once the camera is solved, green points appear marking the valid features the camera can track from. The red points are bad: the camera is unable to track from them, so those rejected tracks must be deleted.
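Conceptually, this refine step just filters tracks by length and by how well the solved camera explains them. A plain-Python sketch with a made-up Track type and illustrative thresholds:

```python
from dataclasses import dataclass

@dataclass
class Track:
    points: list      # one 2D position per frame the feature was seen
    error: float      # mean reprojection error after the solve, in pixels

def refine(tracks, min_length=6, max_error=1.0):
    """Drop short tracks and tracks the solved camera cannot explain."""
    return [t for t in tracks if len(t.points) >= min_length
            and t.error <= max_error]

tracks = [Track(points=[(0, 0)] * 12, error=0.3),   # kept: long, low error
          Track(points=[(5, 5)] * 3,  error=0.2),   # rejected: too short
          Track(points=[(9, 9)] * 20, error=4.8)]   # rejected: high error
print(len(refine(tracks)))  # -> 1
```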
green - good tracking points
red - bad tracking points
low, acceptable solve error
Now a 3D camera can be created
3D camera viewed in 3D space
It is now time to create the camera point cloud, a computer-generated virtual set made up of 3D locators.
The next step is to adjust the actual scene, i.e. setting the ground plane.
To create the ground plane, points on the actual floor in the scene must be selected and the ground plane set to them. As a result the ground plane is created (pink points).
set ground plane in 3D view
While creating the scene, the dimensions taken on site while surveying the shot come in very useful. That is why it is important to measure the shot, the location and any other still objects, so this information can later be used when creating the virtual scene. The measurements are input through the scale distance, and Nuke scales the scene up or down accordingly.
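Fixing the scale of a solved scene from one surveyed measurement comes down to a single ratio. A small numpy sketch, with invented locator positions and an assumed 3.2 m on-set measurement:

```python
import numpy as np

# Two solved 3D locators that correspond to objects measured on set.
locator_a = np.array([0.12, 0.00, 0.45])
locator_b = np.array([0.88, 0.02, 0.41])

# The real, surveyed distance between those two objects (e.g. 3.2 metres).
surveyed_distance = 3.2

# The camera solve is only correct up to scale; this ratio fixes it.
scale = surveyed_distance / np.linalg.norm(locator_b - locator_a)

# Applying `scale` to every locator and to the camera translation puts
# the whole virtual set into real-world units.
print(f"scale factor: {scale:.3f}")
```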
Scene view, looking through the 3D camera. Any object put into the scene now will appear seamless and in perfectly matched perspective.
This is where the distance parameters should be inserted.
To test how good the matchmove would be, a simple 3D cube was inserted into the scene.
The cube node also appears in the node graph.
To ensure the 3D object will render out, a color input needs to be plugged into it, for example color bars, a texture, a checkerboard or a color wheel.
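Scripted in Nuke Python, the test geometry might be wired up as follows; the node classes are standard Nuke nodes, but the input indices are assumptions and should be checked in the Node Graph:

```python
import nuke

# Build the test geometry: a cube textured with a checkerboard, placed in
# the tracked scene and rendered through the solved 3D camera.
checker = nuke.nodes.CheckerBoard2()      # color input for the cube
cube = nuke.nodes.Cube()
cube.setInput(0, checker)                 # assumed: input 0 is the img input

scene = nuke.nodes.Scene()
scene.setInput(0, cube)

render = nuke.nodes.ScanlineRender()
render.setInput(1, scene)                 # assumed input indices; verify in
# render.setInput(2, solved_camera)       # the Node Graph for your version
```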
The 3D cube appears seamlessly in
the scene while scrolling through
the keyframes.
Final Nuke script using color bars as the color input for the 3D cube.
To experiment a bit more, the color bars node was replaced with a texture.

To investigate how well the matchmove works, the 3D cube was replaced with a 3D rig of an ant, imported into Nuke as an .obj file.
Final Nuke script.
A Write node was added to render the sequence.
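In Nuke Python the render step might look like this; the output path is a placeholder and the Write node still needs to be connected to the end of the comp tree:

```python
import nuke

# Write node renders the comp to an image sequence on disk.
write = nuke.nodes.Write(file="renders/comp.####.jpg", file_type="jpeg")
# write.setInput(0, final_comp_node)  # connect to the end of the comp tree

# Render frames 1-100 of the sequence.
nuke.execute(write, 1, 100)
```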

The image sequence was then imported into After Effects and rendered out as a movie file.
Next, to help the camera track better, the footage needs to be undistorted. To do so, the lens distortion node is plugged into the image sequence. The camera tracker in Nuke assumes that everything is still and nothing moves except the camera. It also assumes that straight lines in the image are perfectly straight, although in reality lens distortion bends them. To remove the distortion, 'line analysis' can be used: some straight features in the image are selected and then analysed.
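Lens distortion is commonly modelled with radial polynomial terms (the Brown-Conrady style model). A numpy sketch with invented coefficients, showing why an off-centre straight line bows before undistortion, which is exactly what line analysis measures:

```python
import numpy as np

def distort(points, k1, k2):
    """Apply radial lens distortion to normalised image points.

    Uses the common polynomial model: r' = r * (1 + k1*r^2 + k2*r^4).
    Straight lines away from the image centre bow under this mapping;
    'line analysis' fits k1 and k2 so the bowing can be undone.
    """
    r2 = np.sum(points**2, axis=1, keepdims=True)
    return points * (1 + k1 * r2 + k2 * r2**2)

# A perfectly straight vertical line of points, off-centre...
line = np.column_stack([np.full(5, 0.8), np.linspace(-0.5, 0.5, 5)])
# ...bows once barrel distortion is applied (illustrative coefficients):
print(distort(line, k1=-0.15, k2=0.02))
```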
Straight points in the image; lines created by selecting straight points.
Final design with a CG cube composited into the live-action footage.

Final design with a CG ant composited into the live-action footage.