To illustrate the importance of an efficient workflow, consider the manufacture of an automobile. During the design and engineering process, the most critical decisions are made: What will this car need to do? Who is it for? How will it be used?
After answering these questions, the designers and engineers get to work, the design and mechanics of the car begin to take shape, and eventually the car is ready for production. When in production, it is common that different tasks will be tackled by different teams.
It is essential that, even while they work separately, the teams remain in constant communication. Any change at one end, no matter how insignificant it may seem, may have a dramatic effect on another team’s production.
Furthermore, if the pieces are assembled in the wrong order, the next piece may not fit and all of the work will have to be disassembled and reworked. It is very easy for the production to fail if each team does not understand the production as a whole piece or cannot communicate outside of its own circle.
A 3D production is no different. If you build and animate a character while the initial designs are still being worked out, chances are that all of the model, rigging, and animation will have to be redone. If a texture map is painted before the model is completed, for example, the features will probably not match up.
Even within each aspect of a production, there is an order that each individual must follow to accomplish a task efficiently. Although 3D production software, Maya included, attempts to provide solutions to these order-of-operations issues, it is best to avoid them altogether by learning how everything fits together before you begin production.
It is crucial that every single element in a 3D production—be it modeling, rigging, animation, or lighting—not attempt to outdo or become more prevalent than the others. Rather, all elements of production should make the best effort to support each other.
The most successful productions are those for which the technology aids in clearly communicating a story. The figure below shows a block diagram of a general production pipeline used in many movie studios today.
While individual studios handle everything a little bit differently, all of them follow this same general path. In general, a good workflow should begin with a story. It becomes more evident every day that, no matter how blessed our culture is with amazing technology, realizing its potential is impossible without great ideas for how to use it.
Once a story is defined, the design of every element must be developed. It is very common that the design and story feed back on one another, exposing new possibilities and pushing the visual design further. This whole process falls under the preproduction phase of a production pipeline.
The next major phase of the production pipeline is the modeling phase. While a modeling department might constantly be delivering and updating models throughout a production, it is imperative that the hero objects, the main characters of a scene, be at least roughed out and delivered to the rigging department so that they can be set up for the animators.
A lot happens during the animation phase. In addition to the keyframed animation techniques you’ll learn about, effects such as fire, smoke, and water can be added to the scene through the use of particles. Finally, the objects are textured, the scene is lit, and the animation is rendered to create the final sequence of images.
Depending on the type of production, the last phase of the workflow involves some kind of postproduction process. In film and television, this might include using compositing software to combine live-action footage shots with the elements rendered from Maya.
In a video game production, postproduction might involve the programming of the game within a game engine. As we begin to examine these different processes of a production pipeline, we want to stress that these are very general assessments of production workflows.
Every company has its own structure that defines how the pipeline tasks are arranged and the specifications that each team must adhere to so that every single piece can fit into the complete, final output. As an individual, you will adopt your own personal style of working with Maya, just as you would for any other tool.
So, as you carve your way, just remember that the features and processes demonstrated are not the only way to accomplish any task. Workflows are being evaluated and improved every day. As a result, new tools become available in the software. Never forget your place as a user!
Think outside of the box. If a specific process does not fit your production, write your own tool. That is the real power of this software. Let’s look at some specific workflows within the production process to learn more about them.
Preproduction
Preproduction is not necessarily a workflow particular to Maya, but it is one of the most important phases of any 3D production workflow. Even in a world of advanced technology, a 3D project should begin with pencil and paper. Storyboarding, conceptual sketches, and character design are truly essential to any successful 3D project.
For Star Wars: Episode I, George Lucas and his team of computer graphics artists at JAK Films pioneered the idea of using computers to previsualize scenes in the movie once the storyboards were complete.
The resulting animatics enabled Lucas to use basic geometry, animation, and lighting to set up and experiment with the shots that the storyboard artists had conceived. It also provided the perfect channel for communication with the visual effects artists at Industrial Light & Magic.
These artists not only received the drawn storyboards, but also had an idea of exactly how the objects and the camera moved through the scenes. Since then, previsualization has become popular in the film industry as a method for designing complicated shots involving effects.
The movie Sin City was approved for production by a major studio based on an impressive 3D animatic. Even if you are a solo artist, “previz” can help you quickly figure out whether your shot, or sequence of shots, is working.
Modeling
Before you can begin animating, you need to have created objects to animate. Modeling is the process of building the characters, props, and environments in the scene. These objects are constructed from 3D geometrical surfaces so that they can be rotated around and viewed from all angles.
One of the biggest advantages of using 3D over more traditional 2D techniques for animation is that a 3D object, a character for example, needs to be built only once, whereas a character created in 2D needs to be re-created for every frame of the animation.
In terms of workflow, the most important thing for you to know before you begin is how much of the model will be seen in the animation and how close the camera will get to the model. It is important that you figure out in the preproduction process how a model will be used in the project.
For example, spending a week to model a detailed cell phone that will only be seen sticking out of someone’s pocket from a distance is a waste of time. If you plan ahead in preproduction, you can avoid spending time on irrelevant details and focus on the important ones.
Once you have determined the types of scenes and camera shots that will be needed for a model, you then need to consider what will happen with the model downstream. In the case of a character, its geometry must be modeled so that it will deform, or bend, properly when it is animated.
In most cases, the geometry must be complete before the texture coordinates, known as UV coordinates, are laid out for texture mapping. The model’s complexity will also have a direct effect on the amount of time it takes to render.
Maya offers three different modeling toolsets: NURBS (Non-Uniform Rational B-Splines), polygonal, and subdivision surface modeling. (NURBS is a type of geometry in Maya where the surfaces are defined mathematically.)
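Maya evaluates NURBS geometry for you, but it can help to see the underlying idea once. The sketch below is pure Python, not Maya code, and the clamped knot vector and control points in the usage are our own illustrative choices: it evaluates a point on a cubic B-spline curve with de Boor's algorithm. A NURBS surface extends the same blending to two parameter directions and adds rational weights.

```python
def de_boor(t, degree, knots, ctrl_pts):
    """Evaluate a B-spline curve at parameter t using de Boor's algorithm."""
    # Find the knot span k such that knots[k] <= t < knots[k + 1].
    k = degree
    while k + 1 < len(knots) - degree - 1 and knots[k + 1] <= t:
        k += 1
    # Copy the degree + 1 control points that influence this span.
    d = [list(ctrl_pts[j]) for j in range(k - degree, k + 1)]
    # Repeatedly blend neighboring points toward the final curve point.
    for r in range(1, degree + 1):
        for j in range(degree, r - 1, -1):
            i = j + k - degree
            denom = knots[i + degree - r + 1] - knots[i]
            alpha = (t - knots[i]) / denom if denom else 0.0
            d[j] = [(1 - alpha) * a + alpha * b for a, b in zip(d[j - 1], d[j])]
    return d[degree]

# With a clamped knot vector and no interior knots, a cubic B-spline
# reduces to a Bezier curve and interpolates its end control points.
point = de_boor(0.5, 3, [0, 0, 0, 0, 1, 1, 1, 1],
                [[0.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, 0.0]])
```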
Character Setup
Character setup, also called rigging, is the process of preparing a character so that it can be animated. Typically, you begin by creating a skeleton that matches the scale and features of a model. For example, the character’s hip joint needs to be placed at the hip, the knee joint at the knee, the ankle joint at the ankle, and so on.
Control objects are created and connected to the skeleton. These controls enable the animator to make a 3D character perform much as a puppeteer makes a puppet perform. If a skeleton is properly rigged, it can be handed off to an animator with little technical knowledge, and that animator will still be able to intuitively pose and animate the character.
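The way a skeleton hierarchy carries rotations down the chain (rotate the hip, and the knee and ankle follow) is called forward kinematics. Here is a minimal 2D sketch in plain Python; the bone lengths and angles are hypothetical, and a real rig works in 3D with full transform matrices.

```python
import math

def fk_chain(bone_lengths, joint_angles):
    """Forward kinematics for a 2D joint chain: each joint rotates in its
    parent's space, so rotating the root carries every child joint along."""
    positions = [(0.0, 0.0)]            # root joint (the hip) at the origin
    x, y, total_angle = 0.0, 0.0, 0.0
    for length, angle in zip(bone_lengths, joint_angles):
        total_angle += angle            # each child inherits its parent's rotation
        x += length * math.cos(total_angle)
        y += length * math.sin(total_angle)
        positions.append((x, y))
    return positions

# Rotating only the first joint by 90 degrees swings the whole leg:
# the knee and ankle move even though their own angles are unchanged.
leg = fk_chain([1.0, 1.0], [math.pi / 2, 0.0])
```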
Apart from the animation controls of a character rig, a large portion of the rigging process involves setting up how the model will bend or deform with the rig. For characters that are intended for realistic purposes, many combinations of deformation utilities must be used to maintain the volume of the mesh as it bends and reacts to the animated behaviors.
Occasionally, physically based muscle systems are created and dynamically simulated to add an extra element of realism to the final product.
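A common baseline for binding a mesh to a skeleton is linear blend skinning (smooth binding): each vertex is transformed by every joint that influences it, and the results are averaged by the skin weights. The sketch below works on one 2D vertex with hand-made affine matrices, all of them hypothetical values; the tendency of this simple blend to lose volume at bends is one reason riggers layer on additional deformers and muscle systems.

```python
def blend_skin(vertex, joint_transforms, weights):
    """Linear blend skinning for one 2D vertex. Each transform is a 2x3
    affine matrix [[a, b, tx], [c, d, ty]]; weights should sum to 1."""
    x = sum(w * (m[0][0] * vertex[0] + m[0][1] * vertex[1] + m[0][2])
            for m, w in zip(joint_transforms, weights))
    y = sum(w * (m[1][0] * vertex[0] + m[1][1] * vertex[1] + m[1][2])
            for m, w in zip(joint_transforms, weights))
    return (x, y)

# A vertex weighted half to a stationary joint and half to a joint that
# translated by one unit ends up halfway between the two results.
identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
shifted  = [[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
skinned = blend_skin((0.0, 0.0), [identity, shifted], [0.5, 0.5])
```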
Animation
The animation process is what will finally let your character zip through time and space. Effective animation is achieved through orderly keyframing—the process of recording an object’s position, rotation, scale, shape, and so on at a specific time. The most efficient animation workflow uses what is known as a block-and-refine technique.
The first pass at the animation is focused on timing: what poses the character is in at specific frames. At this point, usually no keyframe interpolation is involved—that is, no movement exists to transition between the keyframes, and the character merely pops into position, holds, and then pops into the next pose.
During the next few passes, you block out some of the secondary poses that fall between those you set in the previous pass. Once all of the significant poses have been set, interpolation is enabled and the process of refining the motion between these poses begins.
Because this process can take a long time, much patience is required as you do pass after pass, adding more and more detail to the motion.
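The difference between the blocking pass and the refining pass comes down to keyframe interpolation. The toy channel evaluator below (plain Python, with hypothetical key values) shows both behaviors: stepped keys hold each pose and then pop to the next one, while linear interpolation produces motion between the poses.

```python
import bisect

def evaluate(keys, frame, mode="stepped"):
    """Evaluate a keyframed channel. keys is a sorted list of (frame, value).
    'stepped' holds each pose until the next key (the blocking pass);
    'linear' interpolates between keys (the refining pass)."""
    frames = [f for f, _ in keys]
    i = max(bisect.bisect_right(frames, frame) - 1, 0)
    if mode == "stepped" or i == len(keys) - 1:
        return keys[i][1]                 # hold the current pose
    (f0, v0), (f1, v1) = keys[i], keys[i + 1]
    t = (frame - f0) / (f1 - f0)
    return v0 + t * (v1 - v0)             # blend toward the next pose

# Halfway between two keys, a stepped channel still holds the first pose,
# while a linear channel is already halfway to the second.
keys = [(0, 0.0), (10, 10.0)]
held = evaluate(keys, 5, "stepped")
moving = evaluate(keys, 5, "linear")
```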
Shading and Texturing
Shading and texturing is the process that adds realistic or stylized surface elements to your models; otherwise, all 3D elements would render out to be the same textureless, flat color. For every surface or group of surfaces, a material is created that determines the surface’s characteristics—what color it is, or how transparent, shiny, bumpy, or reflective it is.
3D artists usually say that the material determines how the object shades. Bitmapped images, such as those that you might create in an external image editing or illustration program such as Adobe Photoshop or Pixologic’s ZBrush application, can be used to control the various shading characteristics.
In most cases, much of the finer detail of an object’s surface can be added via these texture maps. The wrinkles in skin or the panels on an airplane’s wing are usually added with texture maps.
Lighting and Rendering
The final piece of the 3D production process is the lighting and rendering of the scene. Lights are added and used in Maya just as they might be in the real world. In filmmaking, lights are used not just to illuminate the scene, but also to create or enhance a mood.
This is done through the placement, intensity, and color of the lights used, as well as any go-betweens (gobos) that may indicate elements offstage. To see the results of your lighting, you must first render the scene. Rendering is the process of creating an image from all of this 3D data.
For a single frame, the rendering engine draws each pixel by finding an object in front of the camera and drawing it based on the direction of the surface, the surface characteristics, and the lighting information. The goal of the rendering process is to produce, in a reasonable amount of time, an image that is free of unwanted artifacts.
Once the specific calculations have been determined and the file is set up, the rest of the process is accomplished by the computer and can take anywhere from a few seconds to hours—or even days—to render, depending on the complexity of the scene.
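The per-pixel procedure just described can be sketched for the simplest possible scene: one sphere, one directional light, and Lambert (diffuse-only) shading. All of the scene values here are hypothetical, and a production renderer does vastly more work, but the find-the-surface-then-shade-it structure is the same.

```python
import math

def shade_pixel(ray_origin, ray_dir, center, radius, light_dir, albedo):
    """One pixel of a toy render: intersect a camera ray with a sphere,
    then shade by how directly the surface normal faces the light."""
    # Ray-sphere intersection: solve |o + t*d - c|^2 = r^2 for t.
    oc = [o - c for o, c in zip(ray_origin, center)]
    b = 2.0 * sum(d * e for d, e in zip(ray_dir, oc))
    c_term = sum(e * e for e in oc) - radius * radius
    disc = b * b - 4.0 * c_term          # ray_dir assumed unit length
    if disc < 0:
        return (0.0, 0.0, 0.0)           # ray misses: background color
    t = (-b - math.sqrt(disc)) / 2.0     # nearest hit in front of the camera
    hit = [o + t * d for o, d in zip(ray_origin, ray_dir)]
    normal = [(h - c) / radius for h, c in zip(hit, center)]
    # Lambert term: a surface facing the light is bright; facing away, dark.
    lambert = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(a * lambert for a in albedo)

# A ray straight down the camera axis hits the sphere head-on, so the
# surface faces the light fully and returns the material color unchanged.
color = shade_pixel((0.0, 0.0, 0.0), (0.0, 0.0, -1.0),
                    (0.0, 0.0, -3.0), 1.0, (0.0, 0.0, 1.0), (0.8, 0.2, 0.2))
```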
After every frame of the animation has been rendered, the footage will usually be brought into another software package so that the 3D elements can be combined with other elements shot on video or film. It is also common for the 3D elements in the scene to be rendered separately.
For example, each character might be rendered as a separate element and then integrated at the compositing stage. In some cases, the various surface characteristics are rendered in a separate pass for each object.
This gives the compositor absolute control over the entire image. With this method, the reflectivity, shininess, and color can be changed easily without the artist having to go back and correct it in 3D and re-render the image.
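Rendering elements separately works because compositing software stacks them with the standard "over" operation on premultiplied RGBA pixels. A minimal single-pixel sketch, with hypothetical color values:

```python
def over(fg, bg):
    """Composite a premultiplied RGBA foreground pixel over a background
    pixel: the background shows through wherever the foreground is
    transparent, so rendered elements stack without re-rendering."""
    alpha = fg[3]
    return tuple(f + (1.0 - alpha) * b for f, b in zip(fg, bg))

# A half-transparent red element over an opaque blue background
# yields an opaque blend of the two.
pixel = over((0.5, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0))
```

Because each element arrives as its own image, the compositor can regrade or recolor one layer and re-composite in seconds rather than sending the 3D scene back for another render.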
Now that you have an idea of the different workflows involved in the 3D production process, you should be able to map out a good plan for how to attack your animation in the most efficient and least frustrating way possible.
There is often some back-and-forth between the different processes to get things right. For example, the rig built for a character may need to be reworked if the first attempt was not sufficient for the shot. This is inevitable.
However, you don’t want to take a model all the way through the pipeline only to realize that the design is wrong. There are always ways to patch things, but in most cases you’ll be starting from scratch. So, always understand how your pipeline works.