Computer Laboratory


A benchmark of light field view interpolation

Dingcheng Yue, Muhammad Shahzeb Khan Gul, Michel Bätz, Joachim Keinert, Rafał K. Mantiuk

We prepare a synthetic light field dataset rendered with Blender and a real-world dataset captured with a camera rig. We address three challenges that most light field view interpolation algorithms face. We choose five state-of-the-art methods based on their intermediate representations, and evaluate each method in terms of nine metrics.


Light field view interpolation provides a way to reduce the prohibitive size of a dense light field. This paper examines state-of-the-art light field view interpolation methods with a comprehensive benchmark on challenging scenarios specific to interpolation tasks. Each method is analyzed in terms of its strengths and weaknesses in handling the different challenges. We find that large disparities in a scene are the main source of difficulty for light field view interpolation methods. We also find that basic backward warping based on depth estimated from optical flow performs comparably to the usually more complex learning-based methods.
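To illustrate the backward-warping baseline mentioned above, the following is a minimal sketch, not the implementation evaluated in the benchmark. It assumes two horizontally displaced views, a per-pixel horizontal disparity map (e.g. derived from optical flow), and the convention that a point at column x in the left view appears at x + d in the right view; the function name and parameters are illustrative only.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def backward_warp(src, disparity, alpha):
    """Synthesize an intermediate view by backward warping (illustrative sketch).

    src:       (H, W) grayscale left view
    disparity: (H, W) horizontal disparity map, e.g. estimated from optical flow
    alpha:     fractional position of the target view along the baseline
               (0 = source view, 1 = other view)
    """
    h, w = src.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # For each target pixel, fetch the source pixel displaced by the scaled
    # disparity; bilinear sampling (order=1) handles sub-pixel positions.
    src_x = xs + alpha * disparity
    return map_coordinates(src, [ys, src_x], order=1, mode="nearest")
```

Backward warping avoids the holes of forward warping because every target pixel is assigned a value, though occluded regions still receive incorrect content, which is one reason large disparities remain challenging.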


This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 765911 (RealVision).