Computer Laboratory

Advanced Graphics (Lent 2017)

The supervision work for this course consists of a mixture of written and practical exercises.
Please make sure that you attempt the practical exercise first, and feel free to let me know if you get stuck somewhere.

Please hand in your work by 17:00 (5pm) on the day before the supervision!

These exercises are heavily based on the official course material.


Supervision 1

Practical exercise

Please complete all the practical exercises from the course site and submit the nine screenshots to my email address.

Written questions

  1. Explain the difference between uniform, in and out variables in GLSL (a syntax refresher follows after this list).
  2. Explain how the ambient, diffuse and specular terms contribute to the classical Phong lighting equation. How is Gooch shading different?
  3. What is the purpose of having world, viewing and screen space coordinates?
  4. What kind of transformation matrix do you need to use to find the world coordinates of a normal vector?
  5. What is a pixel shader and how is it different from a fragment shader?
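
As a syntax refresher for question 1, here is a minimal GLSL vertex/fragment shader pair (assuming #version 330-style syntax; the names mvp, vertexPos, normal and interpNormal are made up for illustration) showing where each qualifier typically appears:

// --- vertex shader ---
uniform mat4 mvp;      // set by the application; constant for the whole draw call
in vec3 vertexPos;     // per-vertex attribute read from a vertex buffer
in vec3 normal;        // another per-vertex attribute
out vec3 interpNormal; // interpolated across each triangle on its way to the fragment shader

void main()
{
    interpNormal = normal;
    gl_Position = mvp * vec4(vertexPos, 1.0);
}

// --- fragment shader ---
in vec3 interpNormal;  // receives the vertex shader's out variable, interpolated
out vec4 fragColor;    // written to the framebuffer

void main()
{
    fragColor = vec4(normalize(interpNormal) * 0.5 + 0.5, 1.0);
}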

Supervision 2

Written questions

Please work through all the exercises from the course site and submit your answers as a single PDF.

Practical exercise

Write a simple ray-tracer which renders a single sphere on the GPU. I recommend putting all the code in the fragment shader (including the definition of the sphere and the intersection function). Shadertoy is an excellent site for prototyping.

What would it take to render multiple objects? How would you change your code to implement an SDF (signed distance field) approach? A hint follows after the starting-point snippet below.

If needed, consider the following code snippet as a starting point:

// structure for storing intersection data
struct Intersection
{
    float dist;
    vec3 position;
    vec3 normal;
};

// check whether a given ray intersects with the provided sphere
// return dist = -1 if there is no intersection
Intersection intersectWithSphere(vec3 origin, vec3 dir, vec3 centre, float radius)
{
    // TODO: complete intersection code
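    // Hint (one possible approach): substitute the ray p = origin + t*dir into the
    // sphere equation |p - centre|^2 = radius^2. Writing oc = origin - centre and
    // assuming dir is normalised, this gives the quadratic
    //     t^2 + 2*dot(oc, dir)*t + dot(oc, oc) - radius^2 = 0,
    // whose smallest positive root is the distance to the nearest intersection.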
}

// getting ray direction from fragment co-ordinates (0..1)
vec3 getRayDir(vec2 fragCoord)
{
    // TODO: consider accounting for the aspect ratio of the display. This is assuming a square screen
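    // (One option: scale the x component below by iResolution.x / iResolution.y;
    // on Shadertoy, iResolution holds the viewport size in pixels.)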
    return normalize(vec3(fragCoord.x - 0.5, fragCoord.y - 0.5, -1));
}

// main method
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec3 rayDir = getRayDir(fragCoord / iResolution.xy);
    vec3 rayOrigin = vec3(0, 0, 0);

    vec3 backgroundCol = vec3(0.3, 0.1, 0.4);
    vec3 sphereCol = vec3(1.0, 0.75, 0.2);
    vec3 lightDir = normalize(vec3(-1.0, -1.0, -1.0));

    // find intersection
    Intersection sphereIntersection = intersectWithSphere(rayOrigin, rayDir, vec3(-2.0, 0.0, -10.0), 2.2);

    // TODO: figure out colour from intersection data
    vec3 col = vec3(0, 0, 0);
    fragColor = vec4(col, 1.0);
}
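
As a hint for the SDF question above, the signed distance from a point to a sphere is a one-liner; the helper below is a sketch (sdSphere is a conventional name, not something the snippet requires):

// signed distance from point p to a sphere (negative inside the sphere)
float sdSphere(vec3 p, vec3 centre, float radius)
{
    return length(p - centre) - radius;
}

Rather than solving for an exact intersection, a raymarcher repeatedly steps along the ray by the distance this function returns until it gets sufficiently close to the surface; supporting multiple objects then amounts to taking the minimum of their distance functions.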

Supervision 3

Please have a go at the exercise sheet found on the course website.


Supervision 4

Models of visual perception

  1. In the fovea, the average S-cone density (the density of cones reacting to short-wavelength light) is estimated to be 1 cone per 10 minutes of visual angle. Using the Nyquist frequency, derive the highest-frequency signal that is still perceivable with the S-cones (you can express the frequency of the signal in cpd -- cycles per degree). A hint follows after this list.
  2. The average separation of L and M cones is about 30 seconds of visual angle. As in exercise 1, derive the highest-frequency signal that is still perceivable using the combination of the L and M cones.
  3. In CAD software, the user needs to be able to distinguish very fine details. Based on the results from above, or otherwise, find a colour that would be easily visible against a dark background.
  4. How does JPEG exploit the multi-resolution visual model?
  5. In video compression the spatial resolution of the chroma (colour) channel is often reduced. Why is this preferable to subsampling the entire image?
  6. Past Paper Year 2016 Paper 8 Question 1 (except a/iv)
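
Hint for questions 1 and 2: a density of one sample every x minutes of arc corresponds to a sampling frequency of 60/x samples per degree (1 degree = 60 minutes = 3600 seconds of arc), and by the Nyquist criterion the highest signal frequency that can still be reconstructed is half the sampling frequency, i.e. 30/x cpd.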

High Dynamic Range

  1. Describe the three main intents/steps of tone-mapping.
    • Derive the contrast ratio (C = L_max / L_min) for a rendered image where
      L_max = 1500 cd/m^2
      L_min = 0.1 cd/m^2
    • How does the contrast ratio change if we add F = 3.14 cd/m^2 to the luminance values?
  2. Practical exercise. Try to write your own primitive monochrome tone mapper in Java.

    As a starting point, I recommend downloading this zip archive, which provides you with an example HDR file and a loader class.

    The original loader class is also available here: https://github.com/aicp7/HDR_file_readin

    • Load in the image file and display the raw luminance values (these should be normalised to 0..1).
    • Apply gamma correction. Remember, the display's luminance reproduction can be approximated as V^2.2, where V is the normalised pixel value you write out, so you need to apply the inverse mapping to the stored luminance.
    • Compute a 255-bucket global histogram of the luminance values and apply histogram equalization (see the hint below).
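
    As a hint for the last two bullets (one standard formulation; the names h, cdf, cdf_min and N are introduced here, not taken from the loader class): gamma correction amounts to computing V = L^(1/2.2) per pixel, and for histogram equalization, if h(i) is the count in bucket i and cdf(i) is the running sum of h up to bucket i, then a pixel in bucket i maps to

        L' = (cdf(i) - cdf_min) / (N - cdf_min)

    where N is the total number of pixels and cdf_min is the smallest non-zero value of the cumulative histogram.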

    When finished, please submit your final code as an executable jar file (including .java and .class files) alongside any relevant screenshots.