Part 1: Basic 3D Modelling
A: Ray tracing vs polygon scan conversion
B: Polygon mesh management & hardware PSC quirks
Ray tracing works by firing one or more rays from the eye point through each pixel. The colour assigned to a ray is the colour of the first object that it hits, determined by the object's surface properties at the ray-object intersection point and the illumination at that point. The colour of a pixel is some average of the colours of all the rays fired through it. The power of ray tracing lies in the fact that secondary rays are fired from the ray-object intersection point to determine its exact illumination (and hence colour). This spawning of secondary rays allows reflection, refraction, and shadowing to be handled with ease.
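The core loop can be sketched in a few lines of Python. This is a toy illustration of mine, not anything from a real renderer: one sphere, one light, and a single secondary ray direction fired from the intersection point toward the light for simple diffuse shading (with only one object there is nothing to occlude it, but in a full tracer the same secondary ray would be tested against the whole scene to decide whether the point is in shadow).

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalise(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def sphere_hit(origin, direction, centre, radius):
    """Smallest positive t where origin + t*direction meets the sphere,
    or None on a miss (direction assumed to be unit length)."""
    oc = [o - c for o, c in zip(origin, centre)]
    b = 2 * dot(direction, oc)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def trace(origin, direction, sphere, light):
    """Primary ray: the pixel takes the colour of the first object hit,
    shaded using a secondary ray direction fired toward the light."""
    centre, radius, colour = sphere
    t = sphere_hit(origin, direction, centre, radius)
    if t is None:
        return (0.0, 0.0, 0.0)          # background colour
    hit = [o + t * d for o, d in zip(origin, direction)]
    normal = normalise([h - c for h, c in zip(hit, centre)])
    to_light = normalise([l - h for l, h in zip(light, hit)])
    brightness = max(0.0, dot(normal, to_light))   # Lambertian shading
    return tuple(brightness * ch for ch in colour)

# One ray fired from the eye through a "pixel" straight ahead:
eye = [0.0, 0.0, 0.0]
ray = [0.0, 0.0, -1.0]
red_ball = ([0.0, 0.0, -5.0], 1.0, (1.0, 0.0, 0.0))
print(trace(eye, ray, red_ball, light=[0.0, 10.0, 0.0]))
```

A full ray tracer spawns further secondary rays at the hit point (reflected and refracted rays as well as shadow rays), which is exactly where its cost and its power both come from.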
Ray tracing's big disadvantage is that it is slow. It takes minutes, or hours, to render a reasonably detailed scene. Until recently, ray tracing had never been implemented in hardware. A Cambridge company, Advanced Rendering Technologies, is trying to do just that, but they will probably still not get ray tracing speeds up to those achievable with polygon scan conversion.
Ray tracing is used where realism is vital. Example application areas are high quality architectural visualisations, and movie or television special effects.
The advantage of polygon scan conversion is that it is fast. Polygon scan conversion algorithms are used in computer games, flight simulators, and other applications where interactivity is important. To give a human the illusion of interacting with a 3D model in real time, you need to present animation running at 10 frames per second or faster. Polygon scan conversion can do this. The fastest hardware implementations of PSC algorithms can now process millions of polygons per second.
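The essence of PSC can be shown in a short sketch (again a toy of mine: real scan conversion walks polygon edges incrementally with an active edge table, whereas this brute-force version simply tests every pixel against every triangle, with a depth buffer deciding which surface is nearest).

```python
def edge(ax, ay, bx, by, px, py):
    """Signed area test: which side of edge (a, b) the point p lies on."""
    return (px - ax) * (by - ay) - (py - ay) * (bx - ax)

def rasterise(triangles, width, height):
    """Minimal depth-buffered scan conversion: for each pixel inside a
    triangle's 2D projection, keep the colour of the nearest surface.
    Each triangle is (vertices, constant depth z, colour)."""
    depth = [[float("inf")] * width for _ in range(height)]
    frame = [[None] * width for _ in range(height)]
    for verts, z, colour in triangles:
        (ax, ay), (bx, by), (cx, cy) = verts
        area = edge(ax, ay, bx, by, cx, cy)   # winding of this triangle
        for y in range(height):
            for x in range(width):
                w0 = edge(ax, ay, bx, by, x, y)
                w1 = edge(bx, by, cx, cy, x, y)
                w2 = edge(cx, cy, ax, ay, x, y)
                inside = (w0 >= 0 and w1 >= 0 and w2 >= 0) if area > 0 \
                    else (w0 <= 0 and w1 <= 0 and w2 <= 0)
                if inside and z < depth[y][x]:
                    depth[y][x] = z           # nearer surface wins
                    frame[y][x] = colour
    return frame

# A far triangle partly hidden by a nearer one:
tris = [([(0, 0), (8, 0), (0, 8)], 2.0, "far"),
        ([(0, 0), (3, 0), (0, 3)], 1.0, "near")]
frame = rasterise(tris, 4, 4)
```

Note that no rays are fired anywhere: each polygon is processed independently, which is what makes the approach so easy to pipeline in hardware, and also why effects that need knowledge of the rest of the scene (reflection, refraction, shadows) are awkward.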
One problem with polygon scan conversion is that it can only support simplistic lighting models, so images do not necessarily look realistic. For example: transparency can be supported, but refraction requires the use of an advanced and time-consuming technique called "refraction mapping"; reflections can be supported -- at the expense of duplicating all of the polygons on the "other side" of the reflecting surface; shadows can be produced, although by a method more complicated than that used in ray tracing. The other limitation is that it only has a single primitive: the polygon, which means that everything is made up of flat surfaces. This is especially unrealistic when modelling natural objects such as humans or animals. An image generated using a polygon scan conversion algorithm, even one which makes heavy use of texture mapping, will tend to look computer generated.
The image at left was generated using PSC. Texture mapping has been used to make the back and side walls more interesting. All the objects are reflected in the floor. This reflection is achieved by duplicating all of the geometry, upside-down, under the floor, and making the floor partially transparent. The close-up at right shows the reflection of the red ball, along with a circular "shadow" of the ball. This shadow is, in fact, a polygonal approximation to a circle drawn on the floor polygon and bears no relationship to the lights whatsoever.
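The floor-reflection trick described above is nothing more than a geometric transform. A sketch (the function name and the assumption that the floor is the plane y = floor_y are mine, for illustration):

```python
def mirror_about_floor(vertices, floor_y=0.0):
    """Duplicate geometry upside-down below the floor plane y = floor_y.
    Rendering these extra polygons under a partially transparent floor
    gives the PSC "reflection" effect, at the cost of doubling the
    polygon count."""
    return [(x, 2 * floor_y - y, z) for (x, y, z) in vertices]

# A vertex 3 units above a floor at y = 0 appears 3 units below it:
print(mirror_about_floor([(1.0, 3.0, 2.0)]))
```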
Environment mapping is another clever idea which makes PSC images look more realistic. In environment mapping we have a texture map of the environment which can be thought of as wrapping completely around the entire scene (you could think of it as six textures on the six inside faces of a big box). The environment map itself is not drawn, but if any polygon is reflective then the normal to the polygon is found at each pixel (this normal is needed for Gouraud shading anyway) and from this the appropriate point (and therefore colour) on the environment map can be located. This example shows a silvered SGI O2 computer reflecting an environment map of the interior of a cafe.
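The lookup works by reflecting the view direction about the per-pixel normal and asking which face of the "big box" that reflected direction points at. A minimal sketch (the "+x"/"-x" face labels are my own convention; a real implementation would also compute texture coordinates within the chosen face):

```python
def reflect(view, normal):
    """Reflect the (unit) view direction about the (unit) surface normal:
    r = v - 2(v . n)n."""
    d = 2 * sum(v * n for v, n in zip(view, normal))
    return [v - d * n for v, n in zip(view, normal)]

def cube_face(direction):
    """Pick which of the six inside faces of the environment box the
    reflected direction points at: the face of its dominant axis."""
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "+x" if x > 0 else "-x"
    if ay >= az:
        return "+y" if y > 0 else "-y"
    return "+z" if z > 0 else "-z"

# Looking straight at a surface facing us bounces the ray straight back:
print(cube_face(reflect([0.0, 0.0, -1.0], [0.0, 0.0, 1.0])))
```

The key point is that this costs one reflection and one texture lookup per pixel, rather than tracing a real secondary ray into the scene, which is why it fits the PSC pipeline so comfortably.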
PSC is, of course, widely used in interactive games. Here we see an incautious opponent about to drive into our player's sights. The graphics are not particularly sophisticated: there are very few polygons in the scene, but the scene is made more interesting, visually, by using texture mapping. When playing the game people tend to worry more about winning (or, in some cases, not losing too badly) than about the quality of the graphics. Graphical quality is arguably more useful in selling the game to the player than in actual game play.
Line drawing is generally faster than PSC unless the PSC is being done by specialist hardware. Line drawing of 3D models is used in Computer Aided Design (CAD) and in 3D model design. The software which people use to design 3D models for ray tracing tends to use both line drawing and PSC in its user interface. It is interesting to note that, when R&A was first written (1976), the authors had only line drawing algorithms with which to illustrate their 3D models. The only figure in the entire book which does not use exclusively line drawing is Fig. 6-52, which has screen shots of a prototype PSC system.
Visualisation generally does not require realistic looking images. In science we are usually visualising complex three dimensional structures, such as protein molecules, which have no "realistic" visual analogue. In medicine we generally prefer an image that helps in diagnosis over one which looks beautiful. PSC is therefore normally used in visualisation (although some data require voxel rendering -- see Part 4D).
Simulation uses PSC because it can generate images at interactive speeds. At the high (and very expensive) end a great deal of computer power is used. In the most expensive flight simulators (those with full hydraulic suspension and other fancy stuff) the graphics kit can cost £1,000,000 out of a total cost of about ten times that figure.
3D games (for example Quake, Unreal, Descent) use PSC because it gives interactive speeds. A lot of other "3D" games (for example SimCity, Civilisation, Diablo) use pre-drawn sprites (small images) which they simply copy to the appropriate position on the screen. This essentially reduces the problem to an image compositing operation, requiring much less processor time. The sprites can be hand drawn by an artist or created in a 3D modelling package and rendered to sprites in the company's design office. Donkey Kong Country, for example, uses sprites which were ray traced from 3D models.
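The sprite approach really is as simple as it sounds: rendering reduces to copying pre-drawn pixel blocks onto the screen. A toy sketch (names and the use of 0 as the transparent colour are my own choices for illustration):

```python
def blit(screen, sprite, x0, y0, transparent=0):
    """Copy a sprite (a 2D grid of pixel values) onto the screen at
    (x0, y0), skipping transparent pixels. For a pre-drawn-sprite game
    this compositing step is essentially the whole of "3D" rendering."""
    for dy, row in enumerate(sprite):
        for dx, pixel in enumerate(row):
            if pixel != transparent:
                screen[y0 + dy][x0 + dx] = pixel
    return screen

# Composite a 2x2 sprite (with one transparent pixel) onto a 3x3 screen:
screen = [[0] * 3 for _ in range(3)]
blit(screen, [[1, 0], [2, 3]], 1, 1)
```

No per-frame 3D mathematics is involved at all, which is why this was viable on hardware far too slow for real-time PSC.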
You may have noticed that the previous sentence is the first mention of ray tracing in this section. It transpires that the principal use of ray tracing, in the commercial world, is in producing special effects for movies and television.
The first movie to use 3D computer graphics was Star Wars [1977]. You may remember that there were some line drawn computer graphics toward the end of the movie. All of the spaceship shots, and all of the other fancy effects, were done using models, mattes (hand-painted backdrops), and hand-painting on the actual film. Computer graphics technology has progressed incredibly since then. The recent re-release of the Star Wars trilogy included a number of computer graphic enhancements, all of which were composited into the original movie.
A recent example of computer graphics in a movie is the (rather bloodthirsty) Starship Troopers [1997]. Most of the giant insects in the movie are completely computer generated. The spaceships are a combination of computer graphic models and real models. The largest of these real models was 18' (6m) long: so it is obviously still worthwhile spending a lot of time and energy on the real thing.
Special effects are not necessarily computer generated. Compare King Kong [1933] with Godzilla [1998]. The plots have not changed that much, but the special effects have improved enormously: changing from hand animation (and a man in a monkey suit) to swish computer generated imagery. Not every special effect you see in a modern movie is computer generated. In Starship Troopers, for example, the explosions are real. They were set off by a pyrotechnics expert against a dark background (probably the night sky), filmed, and later composited into the movie. In Titanic [1997] the scenes with actors in the water were shot in the warm Gulf of Mexico. In order that they look as if they were shot in the freezing North Atlantic, cold breaths had to be composited in later. These were filmed in a cold room over the course of one day by a special effects studio. Film makers obviously need to balance quality, ease of production, and cost. They will use whatever technology gives them the best trade off. This is increasingly computer graphics, but computer graphics is still not useful for everything by quite a long way.
A recent development is the completely computer generated movie. Toy Story [1995] was the world's first feature length computer generated movie. Two more were released last year (A Bug's Life [1998] and Antz [1998]). Toy Story 2 [1999] will be released in the UK on 4 February 2000.
Finally, at SIGGRAPH 98 I had the chance to hear about the software that some real special effects companies are using:
Exercises