Impact of correct and simulated focus cues on perceived realism

Joseph March(1), Anantha Krishnan(2), Simon J. Watt(2), Marek Wernikowski(1), Hongyun Gao(1), Ali Ozgur Yontem(1), and Rafał K. Mantiuk(1)

(1)University of Cambridge, (2)Bangor University

To be presented at SIGGRAPH Asia 2022, Technical Papers

In scenes containing complex objects at different depths, we study the impact of focus cues (defocus blur, chromatic aberration, etc.), both near-physically correct and simulated, on the perceived realism of content viewed on a multi-focal high-dynamic-range stereoscopic display.

Abstract

The natural accommodation of the human eye to different distances results in focus cues, which contribute to depth perception and appearance. Since focus cues are very difficult to reproduce in an electronic display, it is desirable to know how much they contribute to realistic image appearance. In this work we quantify the potential benefit of focus cues in terms of increased realism compared to regular stereo image presentation. As a secondary goal, we evaluate whether three depth-of-field rendering techniques, which reproduce defocus blur at three different degrees of accuracy, can reintroduce the benefits of focus cues. Our findings confirm the importance of focus cues for realistic image appearance, and also show that they cannot easily be substituted by depth-of-field rendering.
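For intuition, defocus blur, the most prominent focus cue, is often approximated with a thin-lens model: the angular diameter of the blur circle grows with the pupil aperture and with the dioptric difference between an object's distance and the distance the eye is focused at. The sketch below is a minimal illustration of this relationship, not the rendering method used in the paper; the pupil diameter and function name are our own assumptions.

    import numpy as np

    # Minimal sketch of the thin-lens defocus model (not the paper's renderer).
    # Angular blur-circle diameter ~ pupil diameter x dioptric defocus,
    # under the small-angle approximation.

    def defocus_blur_angle(depth_m, focus_m, pupil_mm=4.0):
        """Return the blur-circle diameter in radians for each scene depth.

        depth_m  -- array of scene depths in metres (e.g. from a depth map)
        focus_m  -- distance the eye is accommodated to, in metres
        pupil_mm -- assumed pupil diameter in millimetres (hypothetical default)
        """
        defocus_dioptres = np.abs(1.0 / np.asarray(depth_m) - 1.0 / focus_m)
        return (pupil_mm * 1e-3) * defocus_dioptres

    # Example: with the eye focused at 0.5 m, objects at 0.25 m and 2 m
    # are blurred, while the in-focus object has zero blur.
    print(defocus_blur_angle([0.25, 0.5, 2.0], focus_m=0.5))

Depth-of-field rendering techniques differ mainly in how faithfully they turn such a per-pixel blur estimate into an image; the paper compares three techniques of increasing accuracy against the optically correct blur produced by the multi-focal display.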

Video

SIGGRAPH Asia '22 talk (12 mins)

Materials

  • Paper:
    Impact of correct and simulated focus cues on perceived realism
    [paper PDF]
  • Supplementary materials [PDF]
  • Scaled per-condition data [ZIP]

Citation

Joseph March, Anantha Krishnan, Simon J. Watt, Marek Wernikowski, Hongyun Gao, Ali Ozgur Yontem, and Rafał K. Mantiuk. 2022. Impact of correct and simulated focus cues on perceived realism.
In SIGGRAPH Asia 2022 Conference Papers (SA '22). Association for Computing Machinery, New York, NY, USA, Article 22, 1–9. https://doi.org/10.1145/3550469.3555405

Contact

Please contact Joseph March or Rafał K. Mantiuk with any questions regarding the project.

Acknowledgements

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the European Research Council (ERC) Consolidator Grant agreement No 725253 (EyeCode) and under the Marie Skłodowska-Curie grant agreement No 765911 (RealVision).