- Rafał Mantiuk
- Reader in Graphics and Displays
- Department of Computer Science and Technology
- The Computer Laboratory
- Rainbow Research Group
- University of Cambridge
- Office address
- University of Cambridge
- Computer Laboratory
- William Gates Building
- 15 JJ Thomson Avenue
- Cambridge CB3 0FD
- United Kingdom
- office: +44 1223 763831
- rafal [dot] mantiuk [at] cl [dot] cam [dot] ac [dot] uk
Important: if you have never sent or received e-mail from me, please include the text "n0t5pam" somewhere in the subject line, for example "[n0t5pam] Your subject". This is to avoid the SPAM filter.
If you are contacting me about internship, PhD studentship, or a PostDoc position, please check the "Jobs" section first.
Applied visual perception; high dynamic range imaging; display algorithms; machine learning for image synthesis; tone-mapping; video coding for new display technologies; image and video quality metrics; visibility metrics; virtual reality and low-level perception; computational photography; computational displays; novel display technologies; colour; perception in computer graphics; novel image and video representations (beyond 2D); psychophysics; modelling visual perception with machine learning.
- Senior Lecturer, University of Cambridge, Computer Laboratory, UK (from 2015)
- Lecturer / Senior Lecturer, Bangor University, School of Computer Science, UK (2009-2015)
- Postdoctoral Fellow, University of British Columbia, Canada (2008-2009)
- Postdoc, Max Planck Institute for Informatics, Germany (2007-2008)
- Internship, Sharp Laboratories of America, Camas WA, USA (2006)
- PhD (summa cum laude, Computer Science), Max Planck Institute for Informatics, Germany (2006)
- MSc (Computer Science), Technical University of Szczecin, Poland (2003)
The perceived contrast in VR/AR or stereoscopic displays is enhanced by showing a slightly modified image to each eye. The effect exploits the binocular fusion mechanism, which is biased toward the eye that sees the higher contrast.
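The basic idea can be illustrated with a toy contrast split; the boost factor, the split around the mean, and the function name below are purely illustrative, not the model used in the DiCE paper:

```python
import numpy as np

def dichoptic_pair(img, boost=1.2):
    # Illustrative sketch only: redistribute contrast around the mean
    # so one eye sees boosted contrast and the other attenuated
    # contrast. Binocular fusion is biased toward the higher-contrast
    # view, so the fused percept appears to have enhanced contrast.
    # The actual DiCE method derives the per-eye images from a
    # perceptual model rather than a fixed gain.
    mean = img.mean()
    left = np.clip(mean + boost * (img - mean), 0.0, 1.0)
    right = np.clip(mean + (img - mean) / boost, 0.0, 1.0)
    return left, right

# Mid-range test image so the gain does not clip.
img = 0.25 + 0.5 * np.random.rand(8, 8)
left, right = dichoptic_pair(img)
print(left.std() > right.std())  # True: left eye gets higher contrast
```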
A method for scaling together the results of rating and pairwise comparison experiments into a unified quality scale with meaningful units of Just-Objectionable-Differences (JODs). The method can be used to combine existing datasets or to design experiments in which both protocols are combined.
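For intuition, pairwise comparison data alone can be scaled by maximum likelihood under a Thurstonian observer model; the sketch below is a minimal version of that idea (the comparison matrix, noise model, and variable names are illustrative, and the actual method additionally incorporates rating data):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# C[i, j] = number of times condition i was preferred over condition j
# (illustrative data, not from the paper).
C = np.array([[0, 8, 10],
              [2, 0,  7],
              [0, 3,  0]], dtype=float)

def neg_log_likelihood(q):
    # Probability that i beats j under a Gaussian observer model:
    # the larger the quality difference, the more consistent the votes.
    p = norm.cdf(q[:, None] - q[None, :])
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(C * np.log(p))

# Fix the first score at 0 to remove the translation ambiguity.
res = minimize(lambda x: neg_log_likelihood(np.concatenate(([0.0], x))),
               x0=np.zeros(C.shape[0] - 1))
scores = np.concatenate(([0.0], res.x))
print(scores)  # relative quality scores on a JOD-like scale
```

The recovered scores respect the vote counts: the condition that wins most comparisons ends up highest on the scale.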
We make generative CNNs produce temporally coherent video by adding regularization terms to the loss function. The regularization encourages equivariance to geometric transformations: a transformation applied to the input frame should affect the output frame in the same way.
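The transformation-equivariance term can be sketched as follows; the function names and the choice of a circular shift as the geometric transformation are illustrative assumptions, not the exact formulation in the paper:

```python
import numpy as np

def translate(img, dx):
    # Circularly shift an image horizontally by dx pixels.
    return np.roll(img, dx, axis=1)

def transform_regularizer(net, frame, dx=2):
    # Penalize the difference between "transform then process" and
    # "process then transform": the network output should move with
    # the input. A network minimizing this term treats a shifted
    # frame consistently, which reduces temporal flicker in video.
    out_of_shifted = net(translate(frame, dx))
    shifted_out = translate(net(frame), dx)
    return np.mean((out_of_shifted - shifted_out) ** 2)

# A toy shift-equivariant "network": a fixed 3-tap box blur.
def blur(img):
    return (np.roll(img, 1, axis=1) + img + np.roll(img, -1, axis=1)) / 3

frame = np.random.rand(16, 16)
print(transform_regularizer(blur, frame))  # ~0 for an equivariant net
```

In training, this scalar would be added to the main reconstruction loss, so networks that break equivariance (and thus flicker between frames) are penalized.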
See more projects.
- DiCE: Dichoptic Contrast Enhancement for VR and Stereo Displays
Fangcheng Zhong, George Alex Koulieris, George Drettakis, Martin S. Banks, Mathieu Chambe, Fredo Durand, Rafał K. Mantiuk.
In: ACM Transactions on Graphics (Proc. of SIGGRAPH Asia 2019), in print, 2019
(project page) (PDF)
- From pairwise comparisons and rating to a unified quality scale
María Pérez-Ortiz, Aliaksei Mikhailiuk, Emin Zerman, Vedad Hulusic, Giuseppe Valenzise and Rafał K. Mantiuk.
In: IEEE Transactions on Image Processing, in print, 2019
(doi) (project page) (PDF)
- Selecting texture resolution using a task-specific visibility metric
Krzysztof Wolski, Daniele Giunchi, Shinichi Kinuwaki, Piotr Didyk, Karol Myszkowski, Anthony Steed and Rafał K. Mantiuk.
In: Computer Graphics Forum / Proc. of Pacific Graphics, in print, 2019
(project page) (dataset) (PDF)
- Visibility Metric for Visually Lossless Image Compression
Nanyang Ye, Maria Pérez-Ortiz, and Rafał K. Mantiuk.
In: Proc. of Picture Coding Symposium, in print, 2019
- Closed Form Transmittance in Heterogeneous Media Using Cosine Noise
Martin Balint and Rafał K. Mantiuk.
In: The 23rd Central European Seminar on Computer Graphics (CESCG 2019), 2019
[Best Paper Award]
(project page) (PDF)
See all papers.
Awards and grants
- The IEEE Virtual Reality 2019 Best Journal Paper - for the paper Temporal Resolution Multiplexing: Exploiting the limitations of spatio-temporal vision for more efficient VR rendering
- Electronic Imaging / Human Vision and Electronic Imaging 2019 Best Paper Award for the paper A visual model for predicting chromatic banding artifacts
- ERC Consolidator Grant (2017) - Perceptual encoding of high fidelity light fields
- MSCA Innovative Training Network (2018) - RealVision: Hyper-realistic Visual Experience
- EPSRC research grant (2017) - A spatio-chromatic colour appearance model for retargeting high-dynamic-range image appearance across viewing conditions
- HPC Wales Research and Innovation grant (2013/14) - Video retargeting for delivery to mobile and future display technologies
- Royal Society Research Grant (2013) - Limiting factors of perceptual image fidelity
- EPSRC grant EP/I006575/1 (2011) - Quantifying image quality in computer graphics
- Heinz Billing Award 2006
My contribution to the organization of research networks and conferences can be found here.