Part 1: Basic 3D Modelling
A: Ray tracing vs polygon scan conversion
B: Polygon mesh management & hardware PSC quirks
Ray tracing works by firing one or more rays from the eye point through each pixel. The colour assigned to a ray is the colour of the first object that it hits, determined by the object's surface properties at the ray-object intersection point and the illumination at that point. The colour of a pixel is some average of the colours of all the rays fired through it. The power of ray tracing lies in the fact that secondary rays are fired from the ray-object intersection point to determine its exact illumination (and hence colour). This spawning of secondary rays allows reflection, refraction, and shadowing to be handled with ease.
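The process just described can be sketched in a few dozen lines. The following is a minimal illustrative ray tracer over spheres only, with grey-scale colours and a fixed reflection coefficient; the scene format and function names are this sketch's own, not any particular renderer's:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalise(v):
    length = math.sqrt(dot(v, v))
    return [x / length for x in v]

def intersect_sphere(origin, direction, centre, radius):
    """Nearest positive ray parameter t along a unit direction, or None."""
    oc = [o - c for o, c in zip(origin, centre)]
    b = 2 * dot(direction, oc)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4 * c                    # a = 1 for a unit direction
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 1e-6 else None

def trace(origin, direction, spheres, light, depth=2):
    """Colour of the first sphere hit, using one shadow ray and a recursive
    reflected ray -- the 'secondary rays' described in the text."""
    hit = None
    for centre, radius, albedo in spheres:
        t = intersect_sphere(origin, direction, centre, radius)
        if t is not None and (hit is None or t < hit[0]):
            hit = (t, centre, radius, albedo)
    if hit is None:
        return 0.0                          # ray escapes: background colour
    t, centre, radius, albedo = hit
    p = [o + t * d for o, d in zip(origin, direction)]      # hit point
    n = [(x - c) / radius for x, c in zip(p, centre)]       # surface normal
    to_light = normalise([l - x for l, x in zip(light, p)])
    # shadow ray: the point is lit only if nothing blocks the light
    shadowed = any(intersect_sphere(p, to_light, c2, r2) is not None
                   for c2, r2, _ in spheres)
    colour = albedo * (0.0 if shadowed else max(0.0, dot(n, to_light)))
    if depth > 0:                           # reflected secondary ray
        r = [d - 2 * dot(direction, n) * ni for d, ni in zip(direction, n)]
        colour += 0.3 * trace(p, normalise(r), spheres, light, depth - 1)
    return colour
```

A real ray tracer adds more primitives, refracted rays, distance checks on the shadow ray, and per-pixel supersampling, but the skeleton -- intersect, shade, spawn secondaries -- is exactly this.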
Ray tracing's big disadvantage is that it is slow: it takes minutes, or hours, to render a reasonably detailed scene. Until recently, ray tracing had never been implemented in hardware. A Cambridge company, Advanced Rendering Technologies, has now done this. The quality of the images that they can produce is extraordinarily high compared with polygon scan conversion, and this is their main selling point. However, ray tracing is so computationally intensive that it is not possible to produce images at the same speed as hardware-assisted polygon scan conversion. Other researchers are trying to do this by using multiple (dozens of) processors, but ray tracing will always be slower than polygon scan conversion.
Ray tracing is therefore used only where the desired visual effects cannot be obtained using polygon scan conversion. This means that it is, in practice, used by a minority of movie and television special effects companies, advertising companies, and enthusiastic amateurs.
The advantage of polygon scan conversion is that it is fast. Polygon scan conversion algorithms are used in computer games, flight simulators, and other applications where interactivity is important. To give a human the illusion that they are interacting with a 3D model in real time, you need to present the human with animation running at 10 frames per second or faster. Research at the University of North Carolina has experimentally shown that 15 frames per second is a minimum for immersive virtual reality applications. Polygon scan conversion is capable of providing this sort of speed. The NVIDIA GeForce4 graphics processing unit (GPU) can process 136 million vertices per second in its geometry processor and 4.8 billion antialiased samples per second in its pixel processor. The GPU is capable of over 1.2 trillion operations per second.
One problem with polygon scan conversion is that it can support only simplistic lighting models, so images do not necessarily look realistic. For example: transparency can be supported, but refraction requires an advanced and time-consuming technique called "refraction mapping"; reflections can be supported -- at the expense of duplicating all of the polygons on the "other side" of the reflecting surface; shadows can be produced, but by methods considerably more convoluted than ray tracing's. Where ray tracing is a clean and simple algorithm, polygon scan conversion uses a variety of tricks of the trade to get the desired results. The other limitation of PSC is that it has only a single primitive: the polygon, which means that everything is made up of flat surfaces. This is especially unrealistic when modelling natural objects such as humans or animals, unless you use polygons that are no bigger than a pixel -- which is what happens these days. An image generated using a polygon scan conversion algorithm, even one which makes heavy use of texture mapping, will tend to look computer generated.
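To give a flavour of the "simplistic lighting model" in question: classic PSC hardware evaluates a diffuse (Lambertian) term only at the polygon's vertices and linearly interpolates the resulting intensities across the surface (Gouraud shading). A minimal sketch, with illustrative names and values:

```python
import math

def diffuse_at_vertex(vertex, normal, light_pos, albedo=1.0):
    """Lambertian intensity at one vertex: albedo * max(0, N . L)."""
    to_light = [l - v for l, v in zip(light_pos, vertex)]
    length = math.sqrt(sum(x * x for x in to_light))
    to_light = [x / length for x in to_light]
    lambert = max(0.0, sum(n * t for n, t in zip(normal, to_light)))
    return albedo * lambert

def gouraud_interpolate(i0, i1, steps):
    """Intensities along a scanline between two already-lit vertices."""
    return [i0 + (i1 - i0) * k / (steps - 1) for k in range(steps)]
```

Everything between the vertices is a linear blend: no per-pixel shadowing, no refraction, no interreflection -- hence the catalogue of tricks described above.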
The image at left was generated using PSC. Texture mapping has
been used to make the back and side walls more interesting. All the
objects are reflected in the floor. This reflection is achieved by
duplicating all of the geometry, upside-down, under the floor, and
making the floor partially transparent. The close-up at right shows
the reflection of the red ball, along with a circular "shadow" of the
ball. This shadow is, in fact, a polygonal approximation to a circle
drawn on the floor polygon and bears no relationship to the lights
whatsoever.
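The reflection trick in this image reduces to two simple operations: mirror every vertex about the floor plane, then composite a partially transparent floor polygon over the mirrored copy. A sketch, assuming a horizontal floor at a given height (names are illustrative):

```python
def mirror_about_floor(vertices, floor_y=0.0):
    """Reflect (x, y, z) vertices in the horizontal plane y = floor_y."""
    return [(x, 2.0 * floor_y - y, z) for (x, y, z) in vertices]

def blend_over(floor_colour, reflection_colour, floor_alpha=0.7):
    """Composite the semi-transparent floor over the mirrored geometry."""
    return tuple(floor_alpha * f + (1.0 - floor_alpha) * r
                 for f, r in zip(floor_colour, reflection_colour))
```

The renderer draws the mirrored geometry first, then blends the floor over it, which is why the scene's polygon count doubles wherever such a "mirror" appears.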
Environment mapping is another clever idea which makes PSC
images look more realistic. In environment mapping we have a texture
map of the environment which can be thought of as wrapping completely
around the entire scene (you could think of it as six textures on the
six inside faces of a big box). The environment map itself is not
drawn, but if any polygon is reflective then the normal to the polygon
is found at each pixel (this normal is needed for Gouraud shading
anyway) and from this the appropriate point (and therefore colour) on
the environment map can be located. You may note that finding the
correct point on the environment map is actually a very simple (and
easily optimised) piece of ray tracing. This example shows a silvered
SGI O2 computer reflecting an environment map of the interior of a
cafe.
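That "very simple piece of ray tracing" can be sketched as follows: reflect the view direction about the per-pixel normal, then use the reflected vector's dominant axis to select one of the six cube faces and a (u, v) position on it. The face names and (u, v) orientation conventions below are arbitrary choices for illustration, not the layout used by any particular graphics API:

```python
def reflect(view, normal):
    """Reflect the view direction about a unit surface normal."""
    d = sum(v * n for v, n in zip(view, normal))
    return tuple(v - 2 * d * n for v, n in zip(view, normal))

def cube_map_lookup(direction):
    """Map a direction to (face, u, v) with u, v in [0, 1]."""
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:        # dominant x: left or right face
        face, u, v, m = ('+x' if x > 0 else '-x'), -z * (1 if x > 0 else -1), y, ax
    elif ay >= az:                   # dominant y: top or bottom face
        face, u, v, m = ('+y' if y > 0 else '-y'), x, -z * (1 if y > 0 else -1), ay
    else:                            # dominant z: front or back face
        face, u, v, m = ('+z' if z > 0 else '-z'), x * (1 if z > 0 else -1), y, az
    return face, (u / m + 1) / 2, (v / m + 1) / 2
```

The colour at that (u, v) on the chosen face texture is then used as the reflected colour, with no actual intersection test against scene geometry -- which is why environment mapping is so much cheaper than true ray-traced reflection.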
PSC is, of course, widely used in interactive games. Here we
see an incautious opponent about to drive into our player's
sights. The graphics are not particularly sophisticated: there are
very few polygons in the scene, but the scene is made more
interesting, visually, by using texture mapping. When playing the game
people tend to worry more about winning (or, in some cases, not losing
too badly) than about the quality of the graphics. Graphical quality
is arguably more useful in selling the game to the player than in
actual game play. Modern games have considerably more complexity in
their models than this ten year old example. The game industry is what
is currently driving the development of graphics card technology.
Line drawing has historically been faster than PSC. However,
modern graphics cards can handle both lines and polygons at about the
same speed. Line drawing of 3D models is used in Computer Aided Design
(CAD) and in 3D model design. The software which people use to design
3D models tends to use line drawing in its user interface, with
PSC providing preview images of the model. It is interesting to
note that, when R&A was first written (1976), the authors
had only line drawing algorithms with which to illustrate their 3D
models. The only figure in the entire book which does not use
exclusively line drawing is Fig. 6-52, which has screen shots of a
prototype PSC system.
Visualisation generally does not require realistic looking images. In science we are usually visualising complex three dimensional structures, such as protein molecules, which have no "realistic" visual analogue. In medicine we generally prefer an image that helps in diagnosis over one which looks beautiful. PSC is therefore normally used in visualisation (although some data require voxel rendering -- see Part 4D).
Simulation uses PSC because it can generate images at interactive speeds. At the high (and very expensive) end a great deal of computer power is used. Ten years ago, the most expensive flight simulators (those with full hydraulic suspension and other fancy stuff) cost about £10M, of which £1,000,000 went on the graphics kit. Similar rendering power is available today on a graphics card which costs a couple of hundred pounds and fits in a PC.
3D games (for example Quake, Unreal, Descent) use PSC because it gives interactive speeds. A lot of other "3D" games (for example SimCity, Civilisation, Diablo) use pre-drawn sprites (small images) which they simply copy to the appropriate position on the screen. This essentially reduces the problem to an image compositing operation, requiring much less processor time. The sprites can be hand drawn by an artist or created in a 3D modelling package and rendered to sprites in the company's design office. Donkey Kong Country, for example, was the first game to use sprites which were ray traced from 3D models.
You may have noticed that the previous sentence is the first mention of ray tracing in this section. It transpires that the principal uses of ray tracing, in the commercial world, are in producing a small quantity of super-realistic images for advertising and in producing a small proportion of the special effects for film and television. Most special effects are done using sophisticated PSC algorithms.
The first movie to use 3D computer graphics was Star Wars [1977]. You may remember that there were some line drawn computer graphics toward the end of the movie. All of the spaceship shots, and all of the other fancy effects, were done using models, mattes (hand-painted backdrops), and hand-painting on the actual film. Computer graphics technology has progressed incredibly since then. The recent re-release of the Star Wars trilogy included a number of computer graphic enhancements, all of which were composited into the original movie.
A more recent example of computer graphics in a movie is the (rather bloodthirsty) Starship Troopers [1997]. Most of the giant insects in the movie are completely computer generated. The spaceships are a combination of computer graphic models and real models. The largest of these real models was 18' (6m) long: so it is obviously still worthwhile spending a lot of time and energy on the real thing.
Special effects are not necessarily computer generated. Compare King Kong [1933] with Godzilla [1998]. The plots have not changed that much, but the special effects have improved enormously: changing from hand animation (and a man in a monkey suit) to swish computer generated imagery. Not every special effect you see in a modern movie is computer generated. In Starship Troopers, for example, the explosions are real. They were set off by a pyrotechnics expert against a dark background (probably the night sky), filmed, and later composited into the movie. In Titanic [1997] the scenes with actors in the water were shot in the warm Gulf of Mexico. In order that they look as if they were shot in the freezing North Atlantic, cold breaths had to be composited in later. These were filmed in a cold room over the course of one day by a special effects studio. Film makers obviously need to balance quality, ease of production, and cost. They will use whatever technology gives them the best trade off. This is increasingly computer graphics, but computer graphics is still not useful for everything by quite a long way.
A recent development is the completely computer generated movie. Toy Story [1995] was the world's first feature length computer generated movie. Two more were released in 1998 (A Bug's Life [1998] and Antz [1998]). Toy Story 2 [1999], Dinosaur [2000], Shrek [2001], Monsters Inc [2001], and the graphically less sophisticated Ice Age [2002] have followed. More are in the pipeline. Note the subject matter of these movies (toys, bugs, dinosaurs, monsters). It is still very difficult to model humans realistically, and much research is undertaken in the field of realistic human modelling.
At SIGGRAPH 98 I had the chance to hear about the software that some real special effects companies were using. Two of these companies use ray tracing and two are pretty happy using PSC.
Exercises