Teaser figure: A subset of scenes from our collected Lab (first row) and Fieldwork (second row) datasets. Our datasets include both controlled laboratory and in-the-wild scenes, each with reference video sequences. We selected a diverse range of objects composed of various materials, such as wood, marble, metal, and glass.

Abstract

Neural view synthesis (NVS) is one of the most successful techniques for synthesizing free-viewpoint videos, capable of achieving high fidelity from only a sparse set of captured images. This success has led to many variants of the technique, each evaluated on a set of test views, typically using image quality metrics such as PSNR, SSIM, or LPIPS. There has been little research, however, on how NVS methods perform with respect to perceived video quality. We present the first study on the perceptual evaluation of NVS and NeRF variants. For this study, we collected two datasets of scenes captured both in a controlled lab environment and in the wild. In contrast to existing datasets, these scenes come with reference video sequences, allowing us to test for temporal artifacts and subtle distortions that are easily overlooked when viewing only static images. We measured the quality of videos synthesized by several NVS methods in a well-controlled perceptual quality assessment experiment, as well as with many existing state-of-the-art image/video quality metrics. We present a detailed analysis of the results and recommendations for dataset and metric selection for NVS evaluation.

Evaluated NVS Methods

We tested ten representative NVS methods (including two generalizable NeRF variants) that encompass a diverse range of models, featuring both explicit and implicit geometric representations, distinct rendering models, and both generalizable and per-scene optimization strategies.

Reconstruction figure: Examples of reconstructions by various NVS methods on selected scenes from the Fieldwork dataset (first two rows) and the Lab dataset (third row). Please refer to the supplementary material for more visual results.

Subjective Evaluation

To obtain precise subjective quality scores for the videos synthesized by the aforementioned NVS methods, we conducted a controlled quality assessment experiment with human participants. We employed pairwise comparisons for the subjective evaluation, scaled the comparison results, and expressed the subjective scores in Just-Objectionable-Difference (JOD) units using the Thurstone Case V observer model.
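
To make the scaling step concrete, the sketch below shows how classical Thurstone Case V scaling turns a matrix of pairwise win counts into relative quality scores. The function name, the example win counts, and the rescaling so that one unit corresponds to a 75% preference (a JOD-style convention) are our own illustrative assumptions; the analysis in the paper fits the observer model to the full comparison data rather than using this simple closed-form approximation.

    # Minimal Thurstone Case V scaling sketch (illustrative, not the paper's code).
    # wins[i, j] = number of times method i was preferred over method j.
    import numpy as np
    from scipy.stats import norm

    def thurstone_case_v(wins):
        totals = wins + wins.T
        with np.errstate(divide="ignore", invalid="ignore"):
            p = np.where(totals > 0, wins / totals, 0.5)  # preference proportions
        p = np.clip(p, 0.01, 0.99)        # avoid infinite z-scores for unanimous pairs
        z = norm.ppf(p)                   # inverse normal CDF of each proportion
        scores = z.mean(axis=1)           # Case V: average z-score per condition
        scores = scores / norm.ppf(0.75)  # assumption: 1 unit ~ 75% preference (JOD-style)
        return scores - scores[0]         # report relative to the first condition

    # Hypothetical win counts for three NVS methods (rows/columns = methods).
    wins = np.array([[ 0.0, 12.0, 20.0],
                     [18.0,  0.0, 25.0],
                     [10.0,  5.0,  0.0]])
    print(thurstone_case_v(wins))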

Perceptual results figure: Perceptual preference for different methods, averaged across our collected Lab and Fieldwork datasets as well as the LLFF dataset. The bars indicate preference in JOD units, relative to the original NeRF method, which is at 0 JOD. Negative values indicate that, on average, the method produced less preferable results than NeRF. The error bars show 95% confidence intervals.

Assessing Quality Metrics for Neural View Synthesis

Our collected datasets with video references, together with the perceptual quality results for the reconstructed videos, allow us to test how well existing image/video quality metrics can measure perceived quality. To test the reliability of popular metrics, we followed the standard protocol for evaluating quality metrics and computed rank-order (Spearman) correlations between metric predictions and perceptual JOD values. For each dataset and each NVS method, we averaged subjective scores and quality metric predictions across all scenes and then computed a single correlation per dataset per metric. This serves two purposes: (a) it mitigates the effects of measurement noise, and (b) it improves the predictions of quality metrics.
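
As a concrete illustration of this protocol (not the paper's actual code), the sketch below averages hypothetical per-scene JOD scores and metric predictions across scenes, computes a Spearman correlation for one dataset and one metric, and bootstraps to obtain a distribution of the coefficient, similar in spirit to the figure below. All array names, shapes, the random data, and the choice of resampling scenes are illustrative assumptions.

    # Sketch of the correlation protocol (illustrative data, not the paper's code).
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)

    # Hypothetical inputs for one dataset: rows = scenes, columns = NVS methods.
    jod = rng.normal(size=(8, 10))                      # subjective scores (JOD)
    metric = jod + rng.normal(scale=0.5, size=(8, 10))  # one metric's predictions

    # Average across scenes, then compute a single rank-order correlation.
    rho, _ = spearmanr(jod.mean(axis=0), metric.mean(axis=0))
    print(f"Spearman rho: {rho:.3f}")

    # Bootstrap (here: resampling scenes) to get a distribution of coefficients.
    boot = []
    for _ in range(1000):
        idx = rng.integers(0, jod.shape[0], size=jod.shape[0])
        r, _ = spearmanr(jod[idx].mean(axis=0), metric[idx].mean(axis=0))
        boot.append(r)
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"95% bootstrap interval: [{lo:.3f}, {hi:.3f}]")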

Correlation figure: Bootstrapped distributions of correlation coefficients for all metrics, computed on (a) Lab, (b) Fieldwork, and (c) LLFF.

Materials

  • Paper: Perceptual Quality Assessment of NeRF and Neural View Synthesis Methods for Front-Facing Views. Computer Graphics Forum, 2024. [link]
  • Supplementary: [link]
  • Code: Coming soon
  • Dataset: [link]

Acknowledgement

This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 765911 (RealVision) and from the Royal Society, award IES\R2\202141. This project is also supported by a UKRI Future Leaders Fellowship [grant number G104084]. We sincerely thank Prof. Damiano Marchi and Dr. Simone Farina for granting us access to capture scenes inside the Natural History Museum of the University of Pisa.