Supra-threshold Contrast Perception in Augmented Reality

Dongyeon Kim (1), Maliha Ashraf (1), Alexandre Chapiro (2), and Rafał K. Mantiuk (1).

(1) University of Cambridge; (2) Reality Labs, Meta

Presented at SIGGRAPH Asia 2025, Conference Proceedings

Supra-threshold contrast perception in optical see-through augmented reality
Light from the environment severely reduces the physical contrast of an AR display. This is shown by the solid lines in the plot (colors indicating different reference contrast levels), which drop as the luminance of the background environment increases while an AR image of 100 cd/m² luminance is displayed. Yet the perceived contrast modeled in this work, shown as dotted lines, is higher than the loss in physical contrast would suggest. We show that this effect can be explained by supra-threshold models of contrast perception. The images were generated with Stable Diffusion.
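
The drop in physical contrast shown by the solid lines follows directly from the additive mixing of display and environment light. The snippet below is a minimal sketch (not the paper's code) of the Michelson contrast of an additive see-through display; the function name and luminance values are illustrative placeholders.

```python
# Minimal sketch (not the paper's code): physical Michelson contrast of an
# optical see-through AR display under additive environment light.
# Assumption: the environment adds the same luminance to the bright and dark
# parts of the pattern, so the mean luminance rises while the luminance
# difference stays fixed.
import numpy as np

def effective_contrast(c_display, L_display, L_background):
    """Michelson contrast after mixing with additive background light.

    c_display    -- contrast of the pattern produced by the display alone
    L_display    -- mean luminance of the AR image in cd/m^2 (e.g., 100)
    L_background -- luminance of the see-through environment in cd/m^2
    """
    return c_display * L_display / (L_display + L_background)

if __name__ == "__main__":
    L_display = 100.0                            # cd/m^2, as in the figure
    L_bg = np.array([1.0, 10.0, 100.0, 1000.0])  # environment luminance levels
    for c in (0.1, 0.3, 0.9):                    # reference contrast levels
        print(c, effective_contrast(c, L_display, L_bg))
```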

Abstract

When an image is seen on an optical see-through augmented reality (AR) display, the light from the display is mixed with the background light from the environment. This can severely limit the available contrast in AR, which is often orders of magnitude below that of traditional displays. Yet, the presented images appear sharper and show more detail than the reduction in physical contrast would indicate. In this work, we hypothesize two effects that are likely responsible for the enhanced perceived contrast in AR: background discounting, which allows observers focused on the display plane to partially discount the light from the environment; and supra-threshold contrast perception, which explains the differences in contrast perception across luminance levels. In a series of controlled experiments on an AR high-dynamic-range multi-focal haploscope testbed, we found no statistical evidence supporting the effect of background discounting on contrast perception. Instead, the increase in visibility in AR is better explained by models of supra-threshold contrast perception. Our findings can be generalized to take an image as input, and the resulting model can guide the design of better algorithms and hardware for display systems affected by additive light, such as AR.
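
As a hedged illustration of the second effect, the sketch below implements one classic supra-threshold matching rule, Kulikowski's subtracted-threshold model, in which two contrasts appear equally strong when each exceeds its own detection threshold by the same amount. This is only an example of the model family discussed above, not necessarily one of the models evaluated in the paper, and the threshold values are made-up placeholders (in practice they would come from a contrast sensitivity function such as castleCSF).

```python
# Hedged illustration (not the paper's model): Kulikowski-style contrast
# matching, a classic supra-threshold rule where apparent contrast is the
# physical contrast minus the detection threshold.  A test contrast matches
# a reference contrast when  C_test - T_test = C_ref - T_ref.
def matching_contrast(c_ref, t_ref, t_test):
    """Physical contrast needed in the test condition to look as strong as
    c_ref does in the reference condition (clamped at the test threshold)."""
    return max(c_ref - t_ref + t_test, t_test)

# Example with placeholder thresholds: added environment light raises the
# adapting luminance and lowers the detection threshold, so less physical
# contrast is needed for an equal match, i.e. the image looks stronger than
# the physical contrast reduction alone would suggest.
print(matching_contrast(c_ref=0.30, t_ref=0.02, t_test=0.005))
```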

Short video (5 min)


SIGGRAPH Asia 2025 Presentation (To be uploaded)


Materials

  • Paper:
    Supra-threshold Contrast Perception in Augmented Reality.
    Dongyeon Kim, Maliha Ashraf, Alexandre Chapiro, Rafał K. Mantiuk.
    In SIGGRAPH Asia 2025 Conference Proceedings
    [DOI] [paper] [supplementary doc]
  • Code [GitHub]

Results

  • Comparison of supra-threshold contrast models [link]
    A detailed report comparing the five supra-threshold contrast models considered in this work.

Related projects

  • AR-DAVID - Augmented Reality Display Artifact Video Dataset - a dataset of AR display artifacts
  • ColorVideoVDP - A visual difference predictor for image, video and display distortions
  • FovVideoVDP - Foveated Video Visual Difference Predictor
  • Suprathreshold Contrast Matching - Suprathreshold Contrast Matching between Different Luminance Levels
  • castleCSF - A Contrast Sensitivity Function of Color, Area, Spatio-Temporal frequency, Luminance and Eccentricity - models contrast sensitivity in ColorVideoVDP
  • ASAP - Active Sampling for Pairwise Comparisons - used to efficiently collect AR-DAVID dataset
  • pwcmp - Bayesian pairwise comparison scaling - used to scale AR-DAVID subjective responses
  • Reproducing Reality with a High-Dynamic-Range Multi-Focal Stereo Display - the display used in the experiments