Advanced Graphics Study Guide

Advanced Graphics, Dr Neil Dodgson, University of Cambridge Computer Laboratory
Part II course, 2000

Part 1: Basic 3D Modelling
A: Ray tracing vs polygon scan conversion
B: Polygon mesh management & hardware PSC quirks
on to part 2...

1A) Ray tracing versus polygon scan conversion

These are the two standard methods of producing images of three-dimensional solid objects. They were covered in some detail in the Part IB course. If you want to revise them then check out FvDFH sections 14.4, 15.10 and 15.4 or F&vD sections 16.6 and 15.5. Line drawing is also used for representing three-dimensional objects in some applications. It is briefly covered later on.

Ray tracing

[Image: a basic ray traced model showing reflection and refraction.]
Ray tracing has the tremendous advantage that it can produce realistic-looking images. The technique allows a wide variety of lighting effects to be implemented. It also permits a range of primitive shapes which is limited only by the ability of the programmer to write an algorithm to intersect a ray with the shape.

Ray tracing works by firing one or more rays from the eye point through each pixel. The colour assigned to a ray is the colour of the first object that it hits, determined by the object's surface properties at the ray-object intersection point and the illumination at that point. The colour of a pixel is some average of the colours of all the rays fired through it. The power of ray tracing lies in the fact that secondary rays are fired from the ray-object intersection point to determine its exact illumination (and hence colour). This spawning of secondary rays allows reflection, refraction, and shadowing to be handled with ease.
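The core of this loop can be sketched in a few lines of Python. This is a minimal illustration rather than anything from the course: it assumes spheres are the only primitive, assumes the ray direction is normalised, and returns a flat colour with no shading. A full ray tracer would, at the hit point, spawn the secondary rays described above.

```python
import math

def intersect_sphere(origin, direction, centre, radius):
    """Distance t along the ray origin + t*direction to the nearest
    hit with the sphere, or None if the ray misses (direction unit-length)."""
    oc = [o - c for o, c in zip(origin, centre)]
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                        # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None          # hit must be in front of the eye

def trace(origin, direction, spheres):
    """Colour of the first object the ray hits, or background black.
    Each sphere is a (centre, radius, colour) triple."""
    nearest, colour = float("inf"), (0, 0, 0)
    for centre, radius, col in spheres:
        t = intersect_sphere(origin, direction, centre, radius)
        if t is not None and t < nearest:  # keep the closest intersection
            nearest, colour = t, col
    return colour
```

Firing one such ray per pixel, eye through pixel centre, gives the basic image; averaging several rays per pixel gives anti-aliasing.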

Ray tracing's big disadvantage is that it is slow. It takes minutes, or hours, to render a reasonably detailed scene. Until recently, ray tracing had never been implemented in hardware. A Cambridge company, Advanced Rendering Technologies, is trying to do just that, but they will probably still not get ray tracing speeds up to those achievable with polygon scan conversion.

Ray tracing is only used where the visual effects cannot be obtained using polygon scan conversion. This means that it is, in practice, used by a minority of movie and television special effects companies and enthusiastic amateurs. Hardware ray tracing may break through into the polygon scan conversion market, but early indications are not promising.


[Image: a ray traced model of a kitchen design.] This kitchen was rendered using the ray tracing program rayshade.
[Image: a close up of the kitchen sink.] These close-ups of the kitchen scene show some of the power of ray tracing. The kitchen sink reflects the wall tiles. The benchtop in front of the kitchen sink has a specular highlight on its curved front edge.
[Image: a close up of the washing machine.] The washing machine door is a perfectly curved object (impossible to achieve with polygons). The inner curve is part of a cone; the outer curve is a cylinder. You can see the floor tiles reflected in the door. Both the washing machine door and the sink basin were made using CSG techniques (see Part 4C).
[Image: a close up of the stove and grill.] The grill on the stove casts interesting shadows (there are two lights in the scene). This sort of thing is much easier to do with ray tracing than with polygon scan conversion.

Polygon scan conversion

[Image: a scan converted model of a city - courtesy of Jon Sewell.]
This term encompasses a range of algorithms where polygons are rendered, normally one at a time, into a frame buffer. The term scan comes from the fact that an image on a CRT is made up of scan lines. Examples of polygon scan conversion algorithms are the painter's algorithm, the z-buffer, and the A-buffer (FvDFH chapter 15 or F&vD chapter 15). In this course we will generally assume that polygon scan conversion (PSC) refers to the z-buffer algorithm or one of its derivatives.
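The z-buffer idea can be sketched in Python. This is an illustrative simplification, not a real rasteriser: here each polygon has already been reduced to a list of (x, y, depth, colour) fragments, whereas a real implementation interpolates depth across the polygon while scan converting it.

```python
def zbuffer_render(width, height, polygons):
    """polygons: list of fragment lists; a fragment is (x, y, depth, colour).
    Keeps, at every pixel, the fragment nearest the viewer (smallest depth)."""
    depth = [[float("inf")] * width for _ in range(height)]
    frame = [[None] * width for _ in range(height)]
    for fragments in polygons:            # polygons rendered one at a time
        for x, y, z, colour in fragments:
            if z < depth[y][x]:           # nearer than anything drawn so far?
                depth[y][x] = z
                frame[y][x] = colour
    return frame
```

Note that, unlike the painter's algorithm, the result is independent of the order in which the polygons are processed, because the depth comparison is made per pixel.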

The advantage of polygon scan conversion is that it is fast. Polygon scan conversion algorithms are used in computer games, flight simulators, and other applications where interactivity is important. To give a human the illusion that they are interacting with a 3D model in real time, you need to present the human with animation running at 10 frames per second or faster. Recent research at the University of North Carolina has shown experimentally that 15 frames per second is a minimum for immersive virtual reality applications. Polygon scan conversion is capable of providing this sort of speed. The fastest hardware implementations of PSC algorithms can process hundreds of millions of polygons per second.

One problem with polygon scan conversion is that it can support only simplistic lighting models, so images do not necessarily look realistic. For example: transparency can be supported, but refraction requires the use of an advanced and time-consuming technique called "refraction mapping"; reflections can be supported, at the expense of duplicating all of the polygons on the "other side" of the reflecting surface; shadows can be produced, but by a method more complicated than that used in ray tracing. Where ray tracing is a clean and simple algorithm, polygon scan conversion uses a variety of tricks of the trade to get the desired results. The other limitation of PSC is that it has only a single primitive: the polygon, which means that everything is made up of flat surfaces. This is especially unrealistic when modelling natural objects such as humans or animals. An image generated using a polygon scan conversion algorithm, even one which makes heavy use of texture mapping, will tend to look computer generated.
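The "simplistic lighting model" in question is a purely local one: the intensity at a surface point depends only on the surface normal, the light direction, and the eye direction, with no secondary rays at all. A sketch of such a model (Phong-style diffuse plus specular terms; the coefficients kd, ks, and shininess are illustrative values, not from the course):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalise(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def local_illumination(normal, to_light, to_eye, kd=0.7, ks=0.3, shininess=32):
    """Diffuse + specular intensity at a surface point: the purely local
    calculation a PSC pipeline evaluates, with no shadow or reflection rays."""
    n, l, e = normalise(normal), normalise(to_light), normalise(to_eye)
    diffuse = kd * max(0.0, dot(n, l))
    # reflect the light direction about the normal for the specular term
    r = tuple(2.0 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    specular = ks * max(0.0, dot(r, e)) ** shininess
    return diffuse + specular
```

Everything this function cannot express (shadowing by other objects, inter-object reflection, refraction) is what the tricks of the trade above are compensating for.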


[Image: an SGI O2 drawn without any texture maps.]
Texture mapping is a simple way of making a PSC (or RT) scene look better without introducing lots of polygons. The image above shows a scene without any texture maps; the equivalent scene with texture maps is shown below. Obviously this scene was designed to be viewed with the texture maps turned on. This example shows that texture mapping can make very simple geometry look interesting to a human observer.
[Image: an SGI O2 drawn with texture maps.]

[Image: some floating objects in a simulated environment.]
[Image: a close up of the red ball.]
The image at left was generated using PSC. Texture mapping has been used to make the back and side walls more interesting. All the objects are reflected in the floor. This reflection is achieved by duplicating all of the geometry, upside-down, under the floor, and making the floor partially transparent. The close-up at right shows the reflection of the red ball, along with a circular "shadow" of the ball. This shadow is, in fact, a polygonal approximation to a circle drawn on the floor polygon and bears no relationship to the lights whatsoever.
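The floor-reflection trick amounts to nothing more than reflecting every vertex in the plane of the floor. A sketch, assuming a horizontal floor at height floor_y and vertices as (x, y, z) triples:

```python
def mirror_in_floor(vertices, floor_y=0.0):
    """Duplicate geometry reflected in the horizontal plane y = floor_y.
    Rendering these copies, then drawing a partially transparent floor
    polygon over them, gives the illusion of a reflective floor."""
    return [(x, 2.0 * floor_y - y, z) for x, y, z in vertices]
```

The duplicated geometry doubles the polygon count for the scene, which is exactly the cost noted earlier for handling reflections in PSC.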

[Image: an example of environment mapping.]
[Image: an example of environment mapping.]
Environment mapping is another clever idea which makes PSC images look more realistic. In environment mapping we have a texture map of the environment which can be thought of as wrapping completely around the entire scene (you could think of it as six textures on the six inside faces of a big box). The environment map itself is not drawn, but if any polygon is reflective then the normal to the polygon is found at each pixel (this normal is needed for Gouraud shading anyway) and from this the appropriate point (and therefore colour) on the environment map can be located. You may note that finding the correct point on the environment map is actually a very simple (and easily optimised) piece of ray tracing. This example shows a silvered SGI O2 computer reflecting an environment map of the interior of a cafe.
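That "very simple piece of ray tracing" can be sketched as: reflect the eye ray about the surface normal, then use the dominant axis of the reflected direction to choose one of the six faces of the environment-map box. A minimal illustration (unit-length normal assumed; a real implementation would also compute texture coordinates within the chosen face):

```python
def reflect(incident, normal):
    """Mirror the incident direction about the (unit) surface normal."""
    d = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2.0 * d * n for i, n in zip(incident, normal))

def cube_map_face(direction):
    """Which of the six inside faces of the environment-map 'box' the
    direction hits: the face of the largest-magnitude component."""
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "+x" if x > 0 else "-x"
    if ay >= az:
        return "+y" if y > 0 else "-y"
    return "+z" if z > 0 else "-z"
```

No intersection tests against scene geometry are needed, which is why this fragment of ray tracing is cheap enough to live inside a PSC pipeline.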
[Image: a shot from a tank game.]
PSC is, of course, widely used in interactive games. Here we see an incautious opponent about to drive into our player's sights. The graphics are not particularly sophisticated: there are very few polygons in the scene, but the scene is made more interesting, visually, by using texture mapping. When playing the game people tend to worry more about winning (or, in some cases, not losing too badly) than about the quality of the graphics. Graphical quality is arguably more useful in selling the game to the player than in actual game play.

Line drawing

An alternative to the above methods is to draw the 3D model as a wire frame outline. This is obviously unrealistic, but is useful in particular applications. The wire frame outline can be either see through or hidden lines can be removed (FvDFH section 15.3 or F&vD section 14.2.6). In general, the lines that are drawn will be the edges of the polygons which would be drawn by a PSC algorithm.
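Collecting those lines from a polygon mesh is mostly a matter of not drawing shared edges twice. A sketch, with polygons given as lists of vertex indices (an assumed representation, not one specified in the course):

```python
def wireframe_edges(polygons):
    """Each mesh edge exactly once, for line drawing: an edge shared by
    two polygons must not be drawn twice."""
    edges = set()
    for poly in polygons:
        for i in range(len(poly)):
            a, b = poly[i], poly[(i + 1) % len(poly)]  # wrap to close the polygon
            edges.add((min(a, b), max(a, b)))          # order-independent key
    return edges
```

For example, two triangles sharing an edge yield five lines rather than six.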

Line drawing is generally faster than PSC unless the PSC is being done by specialist hardware. Line drawing of 3D models is used in Computer Aided Design (CAD) and in 3D model design. The software which people use to design 3D models tends to use line drawing in its user interface, with PSC providing preview images of the model. It is interesting to note that, when R&A was first written (1976), the authors had only line drawing algorithms with which to illustrate their 3D models. The only figure in the entire book which does not use exclusively line drawing is Fig. 6-52, which has screen shots of a prototype PSC system.

Out in the real world...

3D graphics finds applications in three main areas:

Visualisation generally does not require realistic looking images. In science we are usually visualising complex three dimensional structures, such as protein molecules, which have no "realistic" visual analogue. In medicine we generally prefer an image that helps in diagnosis over one which looks beautiful. PSC is therefore normally used in visualisation (although some data require voxel rendering -- see Part 4D).

Simulation uses PSC because it can generate images at interactive speeds. At the high (and very expensive) end a great deal of computer power is used. In the most expensive flight simulators (those with full hydraulic suspension and other fancy stuff) the graphics kit can cost 1,000,000 out of a total cost of about ten times that figure.

3D games (for example Quake, Unreal, Descent) use PSC because it gives interactive speeds. A lot of other "3D" games (for example SimCity, Civilisation, Diablo) use pre-drawn sprites (small images) which they simply copy to the appropriate position on the screen. This essentially reduces the problem to an image compositing operation, requiring much less processor time. The sprites can be hand drawn by an artist or created in a 3D modelling package and rendered to sprites in the company's design office. Donkey Kong Country, for example, uses sprites which were ray traced from 3D models.

You may have noticed that the previous sentence is the first mention of ray tracing in this section. It transpires that the principal use of ray tracing, in the commercial world, is in producing a small amount of special effects for movies and television. Many special effects are done using sophisticated PSC algorithms.

The first movie to use 3D computer graphics was Star Wars [1977]. You may remember that there were some line drawn computer graphics toward the end of the movie. All of the spaceship shots, and all of the other fancy effects, were done using models, mattes (hand-painted backdrops), and hand-painting on the actual film. Computer graphics technology has progressed incredibly since then. The recent re-release of the Star Wars trilogy included a number of computer graphic enhancements, all of which were composited into the original movie.

A recent example of computer graphics in a movie is the (rather bloodthirsty) Starship Troopers [1997]. Most of the giant insects in the movie are completely computer generated. The spaceships are a combination of computer graphic models and real models. The largest of these real models was 18' (6m) long: so it is obviously still worthwhile spending a lot of time and energy on the real thing.

Special effects are not necessarily computer generated. Compare King Kong [1933] with Godzilla [1998]. The plots have not changed that much, but the special effects have improved enormously: changing from hand animation (and a man in a monkey suit) to swish computer generated imagery. Not every special effect you see in a modern movie is computer generated. In Starship Troopers, for example, the explosions are real. They were set off by a pyrotechnics expert against a dark background (probably the night sky), filmed, and later composited into the movie. In Titanic [1997] the scenes with actors in the water were shot in the warm Gulf of Mexico. In order that they look as if they were shot in the freezing North Atlantic, cold breaths had to be composited in later. These were filmed in a cold room over the course of one day by a special effects studio. Film makers obviously need to balance quality, ease of production, and cost. They will use whatever technology gives them the best trade off. This is increasingly computer graphics, but computer graphics is still not useful for everything by quite a long way.

A recent development is the completely computer generated movie. Toy Story [1995] was the world's first feature length computer generated movie. Two more were released in 1998 (A Bug's Life [1998] and Antz [1998]). Toy Story 2 [1999] was released in the UK in February this year. Doubtless more will follow.

PSC or RT for SFX?

While RT gives a better range of lighting effects than PSC, we can often get acceptable results with PSC through the use of techniques such as environment mapping and the use of lots and lots and lots of tiny polygons. The special effects industry still dithers over whether to jump in and use RT. Many special effects are done using PSC, with maybe a bit of RT for special things (giving a hybrid RT/PSC algorithm). Toy Story, for example, used Pixar's proprietary PSC algorithm. It still took between 1 and 3 hours to render each frame (although you must remember that these frames have a resolution of 1526 by 922 pixels) and over 800,000 CPU hours were absorbed in the making of the movie (roughly a CPU century). More expensive algorithms can be used in less time if you are rendering for television (I estimate that about one sixth of the pixels are needed compared to a movie) or if you are only rendering a small part of a big image for compositing into live action.
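The "roughly a CPU century" figure is easy to check:

```python
HOURS_PER_YEAR = 24 * 365.25           # about 8766 hours in a year
cpu_years = 800_000 / HOURS_PER_YEAR   # the quoted CPU hours for Toy Story
# cpu_years comes out at roughly 91, i.e. close to a century of CPU time
```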

At SIGGRAPH 98 I had the chance to hear about the software that some real special effects companies were using. Two of these companies use RT and two are pretty happy using PSC.

Everything is ray traced using CGI-Studio.
Digital Domain
Use ray tracing in commercial software, except when the commercial software cannot do what they want. Used MentalRay on Fifth Element [1997]; used Alias models (NURBS) passed to Lightwave (polygons) for one advertisement; used MentalRay plus Renderman for another advertisement.
Rhythm + Hues
Use a proprietary renderer, which is about 10 years old. It has been rewritten many times. They make only limited use of ray tracing.
Station X
Use Lightwave plus an internally developed renderer which is a hybrid between ray tracing and z-buffer.
Exercises

  1. Compare and contrast the capabilities and uses of ray tracing and polygon scan conversion.
  2. In what circumstances is line drawing more useful than either ray tracing or polygon scan conversion?
  3. (a) When is realism critical? Give 5 examples of applications where different levels of visual realism are necessary. Choose ray tracing or polygon scan conversion for each application and explain why you made your choice. (b) How would you determine what level of visual realism is `necessary' for a given application?
  4. "The quality of the special effects cannot compensate for a bad script." Discuss with reference to movies that you have seen.


Neil Dodgson | Advanced Graphics | Computer Laboratory

Source file: p1a.html
Page last updated on Thu Sep 14 17:23:58 BST 2000 by Neil Dodgson