3D Video Cards
Since the late 1990s, 3D acceleration—once limited to exotic add-on cards designed for hardcore gameplayers—has become commonplace in the PC world. Although business software has yet to embrace 3D imaging, full-motion graphics are used in sports, first-person shooters, team combat, driving, and many other types of PC gaming.
Because even low-cost integrated chipsets offer some 3D support and 3D video cards are now in their sixth generation of development, virtually any user of a recent-model computer has the ability to enjoy 3D lighting, perspective, texture, and shading effects in her favorite games.
The latest 3D sports games provide lighting and camera angles so realistic that a casual observer could almost mistake the computer-generated game for an actual broadcast, and the latest 3D accelerator chips enable fast PCs to compete with high-performance dedicated game machines, such as Sony's PlayStation 2, Nintendo's GameCube, and Microsoft's Xbox, for the mind and wallet of the hard-core gameplayer.
How 3D Video Cards Work
To construct an animated 3D sequence, a computer can mathematically generate the in-between frames from keyframes. A keyframe identifies the position of an object at a specific point in the sequence. A bouncing ball, for example, can have three keyframes: up, down, and up. Using these keyframes as reference points, the computer creates all the interim images between the top and bottom of the bounce. This creates the effect of a smoothly bouncing ball.
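To make the in-betweening idea concrete, here is a rough sketch in Python that linearly interpolates the ball's height between keyframes. The frame numbers and heights are invented for the example; a real animation system would use more sophisticated interpolation.

```python
# Hypothetical keyframes for a bouncing ball: (frame number, height)
keyframes = [(0, 100.0), (15, 0.0), (30, 100.0)]  # up, down, up

def height_at(frame):
    """Linearly interpolate the ball's height for any frame between keyframes."""
    for (f0, h0), (f1, h1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)   # 0.0 at the first keyframe, 1.0 at the next
            return h0 + t * (h1 - h0)      # interim value between the keyframes
    raise ValueError("frame outside animated range")

# The computer generates every interim image automatically:
for frame in range(31):
    print(frame, round(height_at(frame), 1))
```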
After it has created the basic sequence, the system can then refine the appearance of the images by filling them in with color. The most primitive and least effective fill method is called flat shading, in which a shape is simply filled with a solid color. Gouraud shading, a slightly more effective technique, involves the assignment of colors to specific points on a shape. The points are then joined using a smooth gradient between the colors.
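The difference between the two fill methods can be sketched in a few lines of Python. The colors and the single scan line used here are purely illustrative.

```python
def flat_shade(width, color):
    # Flat shading: every pixel on the span gets the same solid color.
    return [color] * width

def gouraud_shade(width, color_left, color_right):
    # Gouraud shading: colors assigned to the endpoints are blended
    # smoothly across the span, pixel by pixel.
    span = []
    for x in range(width):
        t = x / (width - 1)
        span.append(tuple(round(a + t * (b - a))
                          for a, b in zip(color_left, color_right)))
    return span

print(flat_shade(5, (255, 0, 0)))                   # solid red
print(gouraud_shade(5, (255, 0, 0), (0, 0, 255)))   # red fading smoothly to blue
```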
A more processor-intensive, and much more effective, type of fill is called texture mapping. The 3D application includes patterns—or textures—in the form of small bitmaps that it tiles onto the shapes in the image, just as you can tile a small bitmap to form the wallpaper for your Windows desktop.
The primary difference is that the 3D application can modify the appearance of each tile by applying perspective and shading to achieve 3D effects. When lighting effects that simulate fog, glare, directional shadows, and others are added, the 3D animation comes very close indeed to matching reality.
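The tiling itself is simple to illustrate. The following sketch repeats a made-up 2x2 bitmap across a larger surface by wrapping the coordinates, just as wallpaper tiles repeat; the perspective and shading corrections a real 3D application would also apply are omitted.

```python
# A hypothetical 2x2 texture tile (single brightness values for simplicity).
tile = [[10, 200],
        [200, 10]]
TILE_W, TILE_H = 2, 2

def texel(u, v):
    # Wrap the coordinates so the small bitmap repeats across the surface.
    return tile[v % TILE_H][u % TILE_W]

# Fill an 8x4 surface by tiling the texture.
surface = [[texel(x, y) for x in range(8)] for y in range(4)]
for row in surface:
    print(row)
```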
Until the late 1990s, 3D applications had to rely on support from software routines to convert these abstractions into live images. This placed a heavy burden on the system processor in the PC, which had a significant impact on the performance not only of the visual display, but also of any other applications the computer might be running.
Starting in the period from 1996 to 1997, chipsets on most video adapters began to take on many of the tasks involved in rendering 3D images, greatly lessening the load on the system processor and boosting overall system performance.
Both video games and 3D animation programs take advantage of this hardware capability to render smooth, photorealistic images at high speeds and in real time. Fortunately, users with less-demanding 3D performance requirements often can purchase low-end products based on the previous generation of 3D accelerator chips.
These cards typically provide more-than-adequate performance for 2D business applications. Most current mid-range and high-end 3D accelerators also support dual-display and TV-out capabilities, enabling you to work and play at the same time.
3D technology has added an entirely new vocabulary to the world of video display adapters. Before purchasing a 3D accelerator adapter, you should familiarize yourself with some of the terms and concepts involved in the 3D image generation process.
The basic function of 3D software is to convert image abstractions into the fully realized images that are then displayed on the monitor. The image abstractions typically consist of the following elements:
-
Vertices. Locations of objects in three-dimensional space, described in terms of their x, y, and z coordinates on three axes representing height, width, and depth.
-
Primitives. The simple geometric objects the application uses to create more complex constructions, described in terms of the relative locations of their vertices. This serves not only to specify the location of the object in the 2D image, but also to provide perspective because the three axes can define any location in three-dimensional space.
-
Textures. Two-dimensional bitmap images or surfaces designed to be mapped onto primitives. The software enhances the 3D effect by modifying the appearance of the textures, depending on the location and attitude of the primitive. This process is called perspective correction.
Some applications use another process, called MIP mapping, which uses different versions of the same texture that contain varying amounts of detail, depending on how close the object is to the viewer in the three-dimensional space. Another technique, called depth cueing, reduces the color and intensity of an object's fill as the object moves farther away from the viewer.
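Both ideas can be sketched under simplified assumptions: pick a MIP level from the object's distance, and fade the fill color as the distance grows. The level thresholds and fade rate below are invented for the example.

```python
# Hypothetical MIP chain: index 0 is the most detailed version of the texture.
mip_levels = ["256x256", "128x128", "64x64", "32x32"]

def choose_mip(distance):
    # Farther objects use a smaller, less detailed copy of the texture.
    level = min(int(distance // 50), len(mip_levels) - 1)
    return mip_levels[level]

def depth_cue(color, distance, max_distance=300.0):
    # Depth cueing: reduce color intensity as the object recedes from the viewer.
    fade = max(0.0, 1.0 - distance / max_distance)
    return tuple(round(c * fade) for c in color)

for d in (10, 120, 260):
    print(d, choose_mip(d), depth_cue((200, 160, 80), d))
```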
Using these elements, the abstract image descriptions must then be rendered, meaning they are converted to visible form. Rendering depends on two standardized functions that convert the abstractions into the completed image that is displayed onscreen. The standard functions performed in rendering are
-
Geometry. The sizing, orienting, and moving of primitives in space and the calculation of the effects produced by the virtual light sources that illuminate the image
-
Rasterization. The converting of primitives into pixels on the video display by filling the shapes with properly illuminated shading, textures, or a combination of the two
A modern video adapter that includes a chipset capable of 3D video acceleration has special built-in hardware that can perform the rasterization process much more quickly than if it were done by software (using the system processor) alone. Most chipsets with 3D acceleration perform the following rasterization functions right on the adapter:
-
Scan conversion. The determination of which onscreen pixels fall into the space delineated by each primitive (a simple version of this test is sketched after this list)
-
Shading. The process of filling pixels with smoothly flowing color using the flat or Gouraud shading technique
-
Texture mapping. The process of filling pixels with images derived from a 2D sample picture or surface image
-
Visible surface determination. The identification of which pixels in a scene are obscured by other objects closer to the viewer in three-dimensional space
-
Animation. The process of switching rapidly and cleanly to successive frames of motion sequences
-
Antialiasing. The process of adjusting color boundaries to smooth edges on rendered objects
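As a simple illustration of the scan conversion step listed above, the following sketch uses an edge test to decide which pixels of a small grid fall inside one triangle primitive; the vertex positions are arbitrary.

```python
# Arbitrary 2D screen-space vertices of one triangle primitive.
v0, v1, v2 = (1, 1), (8, 2), (4, 7)

def edge(a, b, p):
    # Signed-area test: positive when point p lies to the left of edge a->b.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def inside(p):
    # A pixel belongs to the primitive when it is on the same side of all three edges.
    e0, e1, e2 = edge(v0, v1, p), edge(v1, v2, p), edge(v2, v0, p)
    return (e0 >= 0 and e1 >= 0 and e2 >= 0) or (e0 <= 0 and e1 <= 0 and e2 <= 0)

# Scan-convert a 10x9 pixel grid: '#' marks pixels covered by the triangle.
for y in range(9):
    print("".join("#" if inside((x, y)) else "." for x in range(10)))
```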
Common 3D Techniques
Virtually all 3D cards use the following techniques:
-
Fogging. Fogging simulates haze or fog in the background of a game screen and helps conceal the sudden appearance of newly rendered objects (buildings, enemies, and so on).
-
Gouraud shading. Interpolates colors to make circles and spheres look more rounded and smooth.
-
Alpha blending. One of the first 3D techniques, alpha blending creates translucent objects onscreen, making it a perfect choice for rendering explosions, smoke, water, and glass. Alpha blending also can be used to simulate textures, but it is less realistic than environment-based bump mapping. (The blending calculation is sketched below.)
Because these techniques are so common, data sheets for advanced cards frequently don't mention them, even though the features are present.
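For reference, the alpha-blending calculation mentioned above can be sketched as a weighted average of a translucent source pixel and the pixel behind it; the colors and alpha value here are arbitrary.

```python
def alpha_blend(src, dst, alpha):
    """Blend a translucent source pixel over a destination pixel.

    alpha = 1.0 is fully opaque; alpha = 0.0 is fully transparent.
    """
    return tuple(round(alpha * s + (1.0 - alpha) * d) for s, d in zip(src, dst))

smoke = (200, 200, 200)      # light gray smoke
background = (20, 60, 140)   # blue sky behind it
print(alpha_blend(smoke, background, 0.35))  # the sky shows through the smoke
```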
Advanced 3D Techniques
The following are some of the latest techniques that leading 3D accelerator cards use. Not every card uses every technique.
Stencil Buffering
Stencil buffering is a technique useful for games such as flight simulators, in which a static graphic element—such as a cockpit windshield frame, similar to the HUD (heads-up display) used by real-life fighter pilots—is placed in front of dynamically changing graphics (such as scenery, other aircraft, sky detail, and so on). In this example, the area of the screen occupied by the cockpit windshield frame is not re-rendered. Only the area seen through the "glass" is re-rendered, saving time and improving frame rates for animation.
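The principle can be sketched with a simple 1-bit mask standing in for the stencil buffer: pixels covered by the static frame are skipped, and only the pixels seen through the "glass" are redrawn each frame. The mask and scenery values below are invented for the example.

```python
# Hypothetical stencil mask: 1 = pixel covered by the static cockpit frame, 0 = "glass".
stencil = [
    [1, 1, 1, 1, 1, 1],
    [1, 0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 1],
    [1, 1, 1, 1, 1, 1],
]

def render_frame(framebuffer, scenery_pixel):
    # Re-render only the pixels the stencil leaves exposed.
    for y, row in enumerate(stencil):
        for x, masked in enumerate(row):
            if not masked:
                framebuffer[y][x] = scenery_pixel(x, y)

framebuffer = [["frame"] * 6 for _ in range(4)]
render_frame(framebuffer, lambda x, y: f"sky{x}{y}")
for row in framebuffer:
    print(row)
```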
Z-Buffering
A closely related technique is Z-buffering, which originally was devised for computer-aided drafting (CAD) applications. The Z-buffer portion of video memory holds depth information about the pixels in a scene. As the scene is rendered, the Z-values (depth information) for new pixels are compared to the values stored in the Z-buffer to determine which pixels are in "front" of others and should be rendered.
Pixels that are "behind" other pixels are not rendered. This method increases speed and can be used along with stencil buffering to create volumetric shadows and other complex 3D objects.
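A minimal sketch of the per-pixel depth test, assuming smaller Z values mean closer to the viewer, looks like this:

```python
import math

WIDTH, HEIGHT = 4, 3
# Initialize the Z-buffer to "infinitely far away" and the frame to a background color.
zbuffer = [[math.inf] * WIDTH for _ in range(HEIGHT)]
frame = [["bg"] * WIDTH for _ in range(HEIGHT)]

def plot(x, y, z, color):
    # Draw the pixel only if it is in front of whatever is already there.
    if z < zbuffer[y][x]:
        zbuffer[y][x] = z
        frame[y][x] = color

plot(1, 1, 10.0, "far wall")    # drawn: nothing was there yet
plot(1, 1, 2.5, "near crate")   # drawn: closer than the wall
plot(1, 1, 7.0, "mid object")   # skipped: hidden behind the crate
print(frame[1][1], zbuffer[1][1])   # -> near crate 2.5
```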
Bump Mapping and Displacement Mapping
Environment-based bump mapping introduces special lighting and texturing effects to simulate the rough texture of rippling water, bricks, and other complex surfaces. It combines three separate texture maps (for colors, for height and depth, and for environment—including lighting, fog, and cloud effects).
This creates enhanced realism for scenery in games and could also be used to enhance terrain and planetary mapping, architecture, and landscape-design applications. This represents a significant step beyond alpha blending. However, a feature called displacement mapping produces even more accurate results.
Special grayscale maps called displacement maps have long been used for producing accurate maps of the globe. Microsoft DirectX 9 supports the use of grayscale hardware displacement maps as a source for accurate 3D rendering. The Matrox Parhelia, the ATI Radeon 9500, 9700, and 9800 series, and the GeForce FX all support displacement mapping.
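The height-and-depth portion of bump mapping can be roughly sketched as follows: a grayscale height map perturbs the surface normal, and a directional light then brightens or darkens each pixel accordingly. The height values and light direction are invented, and the color and environment maps that a full implementation would combine are omitted.

```python
import math

# Hypothetical grayscale height map (0 = low, 255 = high) for a bumpy surface.
height = [
    [0,   0,   0,   0],
    [0, 255, 255,   0],
    [0, 255, 255,   0],
    [0,   0,   0,   0],
]
W = H = 4
light = (-0.5, -0.5, 1.0)  # arbitrary directional light

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def bump_brightness(x, y, bump_scale=0.01):
    # Approximate the surface slope from neighboring height samples,
    # perturb the normal, and light it with a simple diffuse (Lambert) term.
    dx = (height[y][min(x + 1, W - 1)] - height[y][max(x - 1, 0)]) * bump_scale
    dy = (height[min(y + 1, H - 1)][x] - height[max(y - 1, 0)][x]) * bump_scale
    normal = normalize((-dx, -dy, 1.0))
    return max(0.0, sum(n * c for n, c in zip(normal, normalize(light))))

for y in range(H):
    print([round(bump_brightness(x, y), 2) for x in range(W)])
```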
Texture Mapping
To improve the quality of texture maps, several filtering techniques have been developed, including MIP mapping, bilinear filtering, trilinear filtering, and anisotropic filtering. These techniques and several others are explained here:
-
Bilinear filtering. Improves the image quality of small textures placed on large polygons. The stretching of the texture that takes place can create blockiness, but bilinear filtering applies a blur to conceal this visual defect. (A simple version of the calculation is sketched after this list.)
-
MIP mapping. Improves the image quality of polygons that appear to recede into the distance by mixing low-res and high-res versions of the same texture; a form of antialiasing.
-
Trilinear filtering. Combines bilinear filtering and MIP mapping, calculating the most realistic colors necessary for the pixels in each polygon by comparing the values in two MIP maps. This method is superior to either MIP mapping or bilinear filtering alone.
-
Anisotropic filtering. Used by some video card makers to render surfaces viewed at oblique angles, such as those containing text, more realistically.
-
T-buffer. This technology eliminates aliasing (errors in onscreen images due to an undersampled original) in computer graphics, such as the "jaggies" seen in onscreen diagonal lines; motion stuttering; and inaccurate rendition of shadows, reflections, and object blur.
-
Integrated transform and lighting. The 3D display process includes transforming an object from one frame to the next and handling the lighting changes that result from those transformations. A chipset with integrated transform and lighting (T&L) performs these calculations on the card itself rather than relying on the system processor.
-
Full-screen antialiasing. This technology reduces the jaggies visible at any resolution by adjusting color boundaries to provide gradual, rather than abrupt, color changes.
-
Vertex skinning. Also referred to as vertex blending, this technique blends the connection between two angles, such as the joints in an animated character's arms or legs.
-
Keyframe interpolation. Also referred to as vertex morphing, this technique animates the transitions between two facial expressions, allowing realistic expressions when skeletal animation can't be used or isn't practical.
-
Programmable vertex and pixel shading. Both NVIDIA and ATI have embraced various methods of programmable vertex and pixel shading in their recent chipsets.
-
Floating-point calculations. Microsoft DirectX 9 supports floating-point data for more vivid and accurate color and polygon rendition. The ATI Radeon 9500 and 9700 and the NVIDIA GeForce FX are the first 3D accelerator chips to have full DirectX 9 support, with the GeForce FX providing additional precision. The Matrox Parhelia supports floating-point data but not all DirectX 9 features.
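As an example of these filtering calculations, the following sketch implements bilinear filtering as a weighted average of the four texels nearest the sample point; the tiny 2x2 texture is made up for the illustration.

```python
# A tiny hypothetical texture (brightness values) being stretched over a large polygon.
texture = [
    [10,  50],
    [90, 200],
]

def bilinear_sample(u, v):
    """Sample the texture at fractional coordinates (u, v) in the range 0.0-1.0."""
    x = u * (len(texture[0]) - 1)
    y = v * (len(texture) - 1)
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(texture[0]) - 1)
    y1 = min(y0 + 1, len(texture) - 1)
    fx, fy = x - x0, y - y0
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bottom = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bottom * fy   # blend of the four nearest texels

# Stretching the 2x2 texture across many pixels yields smooth, not blocky, values.
print([round(bilinear_sample(u / 4, 0.5)) for u in range(5)])
```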
Hardware Versus Software Acceleration
Compared to software-only rendering, hardware-accelerated rendering provides faster animation. Although software rendering can produce more accurate and better-looking images, it is too slow for real-time use. Using special drivers, 3D adapters can take over the intensive calculations needed to render a 3D image, work that was formerly performed by software running on the system processor.
This is particularly useful if you are creating your own 3D images and animation, but it is also a great enhancement to the many modern games that rely extensively on 3D effects. Note that motherboard-integrated video solutions, such as Intel's 810 and 815 series, typically have significantly lower 3D performance because they use the CPU for more of the 3D rendering than 3D video adapter chipsets do.
To achieve greater performance, many of the latest 3D accelerators run their accelerator chips at very high speeds, and some even allow overclocking of the default RAMDAC frequencies. Just as CPUs at high speeds produce a lot of heat, so do high-speed video accelerators.
Both the chipset and the memory are heat sources, so most mid-range and high-end 3D accelerator cards feature a fan to cool the chipset. Also, some high-end 3D accelerators such as the Gainward GeForce 4 Ti 4200 Golden Sample (based on the NVIDIA GeForce 4 Ti4200 chipset) use finned passive heatsinks to cool the memory chips and make overclocking the video card easier.
Software Optimization
It's important to realize that the presence of an advanced 3D-rendering feature on any given video card is meaningless unless game and application software designers optimize their software to take advantage of the feature. Although various 3D standards exist (OpenGL, Glide, and Direct 3D), video card makers provide drivers that enable their cards to work with the leading standards.
Because some cards do play better with certain games, you should read the reviews in publications such as Maximum PC to see how your favorite graphics card performs with them. It's important to note that, even though the latest video cards based on recent ATI and NVIDIA chips support DirectX 8.0, 8.1, and 9.0, many games still support only DirectX 7.
As with previous 3D features, it takes time for the latest hardware features to be supported by game vendors. Some video cards allow you to perform additional optimization by adjusting settings for OpenGL, Direct 3D, RAMDAC, and bus clock speeds, as well as other options.