3D Animation Workshop: Lesson 5: Lights, Camera, Render!
Lesson 5 - Lights, Camera, Render! - Part 1
Recall that deep question of philosophy which ponders whether a tree, falling in a deserted forest, makes a sound. Just so, our 3-D models can't be seen unless there is someone there to see them. In this case, however, that "someone" is not a person, but a hypothetical camera, and the process of seeing is called RENDERING. We have taken the rendering process for granted thus far in these tutorials, suggesting for simplicity that we just create 3-D models and look at them. But the models themselves are just the data necessary to produce a rendered image, and the rendering process is the other half of the story.
To strip the rendering process to its bare essentials, a hypothetical camera is placed in the same 3-D coordinate space that contains our models. It therefore has a location in (x,y,z) coordinates. It is a single point that represents the spot from which an "eye" looks at the scene, and so it is often called the viewpoint. Like a real eye and a real camera, it must have an orientation: it must be looking in a certain direction. And it must also have a field of view, an angle projecting out from the viewpoint. Objects that fall within this angle can be seen, and those falling outside it cannot. This is all exactly like a real camera and like our own eyes, although a camera with a zoom lens can expand and contract its field of view without moving the camera itself. So can our hypothetical camera in 3-D coordinate space.
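To make these three ingredients concrete, here is a minimal sketch of such a camera in Python. The class and attribute names (Camera, fov_degrees, and so on) are illustrative only, not taken from any particular rendering package:

```python
import math

# A minimal sketch of the hypothetical camera described above.
# All names here are illustrative assumptions, not a real API.
class Camera:
    def __init__(self, position, look_at, fov_degrees):
        self.position = position              # viewpoint: an (x, y, z) location
        self.look_at = look_at                # a point that fixes the orientation
        self.fov = math.radians(fov_degrees)  # field of view, an angle

    def zoom(self, factor):
        # A zoom lens narrows or widens the field of view
        # without moving the viewpoint itself.
        self.fov /= factor

cam = Camera(position=(0.0, 2.0, -10.0),
             look_at=(0.0, 0.0, 0.0),
             fov_degrees=60.0)
cam.zoom(2.0)  # zooming in: the 60-degree field of view shrinks to 30 degrees
```

Note that zooming changes only the angle; the viewpoint and orientation stay put, just as with a real zoom lens.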
In the rendering process, the camera "takes a picture" of the objects in the scene as seen from the camera's viewpoint, given its direction and field of view. The rendering is achieved mathematically, by tracing lines from the vertices of all the polygons in all the objects in the scene back to the viewpoint of the camera. This enables the "rendering engine," as the software that produces the rendering is often called, to reconstruct the polygons as they would be projected on a flat surface, just as light is focused through the center of a camera lens onto the film plane. A major aspect of this process is determining which polygons (surfaces) on the objects are obstructed by other polygons from the camera's viewpoint. Surfaces that are behind other surfaces obviously should not be rendered.
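The line-tracing idea above reduces to simple arithmetic in the easiest configuration: put the viewpoint at the origin looking down the z axis, with the flat projection surface a distance d in front of it. This sketch (an assumption for illustration, not any engine's actual code) projects a vertex by similar triangles:

```python
# A sketch of projecting a vertex onto a flat surface by tracing a line
# from the vertex back to the viewpoint. Assumed setup for simplicity:
# the viewpoint sits at the origin looking down the +z axis, and the
# projection surface is the plane z = d. Names are illustrative.
def project(vertex, d=1.0):
    x, y, z = vertex
    if z <= 0:
        return None  # at or behind the viewpoint: not visible
    # Similar triangles: the line from (x, y, z) back to the origin
    # crosses the plane z = d at (d*x/z, d*y/z).
    return (d * x / z, d * y / z)

# Two vertices on the same line of sight project to the same point;
# the nearer one obstructs the farther one.
print(project((2.0, 1.0, 4.0)))  # (0.5, 0.25)
print(project((4.0, 2.0, 8.0)))  # (0.5, 0.25) -- same spot, twice as far away
```

The second vertex landing on the same projected point as the first is exactly the obstruction problem the paragraph mentions: the engine must keep the nearer surface and discard the farther one.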
Once the rendering engine has determined which polygons are visible and how they should be projected on the rendering surface, called the "viewing plane," they must be drawn as pixels to produce a bitmap. Each pixel in a bitmap must be assigned a color. How does the rendering engine assign a color?
Here we approach one of the most fascinating aspects of 3-D graphics. 3-D graphics applications model reality in two distinct ways. The most obvious is in geometry. A 3-D object is no mere flat representation, but rather like a sculpture that can be viewed from different directions to reveal its full three-dimensional substance. But 3-D graphics also models the way that light interacts with objects and with our eyes. Objects don't have just a color the way they do in painting and 2-D computer graphics. They have surface qualities that reveal themselves under the particular lighting in the scene. If there is no lighting, the object is rendered black regardless of what color it would appear in light.
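One of the simplest rules an engine could use to turn surface qualities plus lighting into a pixel color is Lambertian diffuse shading: brightness depends on the angle between the surface and the direction of the light. This is only a sketch of that one rule, with illustrative names, not a full lighting model:

```python
# A sketch of Lambertian diffuse shading, one simple way a rendering
# engine could combine a surface color with scene lighting to produce
# a pixel color. All names are illustrative assumptions.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(surface_rgb, normal, light_dir, light_intensity=1.0):
    # normal and light_dir are assumed to be unit vectors.
    # Brightness follows the angle between the surface normal and the
    # direction to the light; surfaces facing away from it get zero.
    brightness = max(0.0, dot(normal, light_dir)) * light_intensity
    return tuple(c * brightness for c in surface_rgb)

red = (1.0, 0.0, 0.0)
print(shade(red, normal=(0, 0, 1), light_dir=(0, 0, 1)))   # fully lit red
print(shade(red, normal=(0, 0, 1), light_dir=(0, 0, -1)))  # facing away: black
```

Note how the second call bears out the point above: with no light reaching the surface, the "red" object renders black regardless of its assigned color.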
Comments are welcome. Brought to you by webreference.com
Created: Mar. 25, 1997
Revised: Apr. 22, 1997
URL: https://webreference.com/3d/lesson5/