LIGHT SAMPLING FIELD AND BRDF REPRESENTATION FOR PHYSICALLY-BASED NEURAL RENDERING

Abstract

Physically-based rendering (PBR) is key to the immersive effects widely used in industry to showcase detailed, realistic scenes built from computer graphics assets. A well-known caveat is that producing such renderings is computationally heavy and relies on complex capture devices. Inspired by the quality and efficiency of recent volumetric neural rendering, we aim to develop a physically-based neural shader that eliminates device dependency and significantly boosts performance. However, no lighting or material model in existing neural rendering approaches can accurately represent the comprehensive lighting models and BRDF properties required by the PBR process. This paper therefore proposes a novel lighting representation that models direct and indirect light locally through a light sampling strategy in a learned light sampling field. We also propose BRDF models that separately represent surface and subsurface scattering to support complex objects such as translucent materials (e.g., skin, jade). We then implement the proposed representations in an end-to-end physically-based neural face skin shader, which takes a standard face asset (i.e., geometry, albedo map, and normal map) and an HDRI for illumination as inputs and generates a photo-realistic rendering as output. Extensive experiments showcase the quality and efficiency of our PBR face skin shader, indicating the effectiveness of the proposed lighting and material representations.

1. INTRODUCTION

Physically-based rendering (PBR) provides a shading and rendering methodology that accurately represents how light interacts with objects in virtual 3D scenes. Whether in a real-time rendering system or in film production, employing a PBR process facilitates the creation of images that look as if they exist in the real world, yielding a more immersive experience. Industrial PBR pipelines take the guesswork out of authoring surface attributes such as transparency, since their algorithms are based on physically accurate formulae and produce materials that resemble their real-world counterparts. This process, however, relies on onerous artist tuning and high computational power over a long production cycle. In recent years, academia has shown incredible success with differentiable neural rendering across tasks such as view synthesis (Mildenhall et al., 2020), inverse rendering (Zhang et al., 2021a), and geometry inference (Liu et al., 2019). Driven by the efficiency of neural rendering, a natural next step is to marry neural rendering with PBR pipelines. However, none of the existing neural rendering representations supports the accuracy, expressiveness, and quality mandated by the industrial PBR process. A PBR workflow models both specular reflection, i.e., light reflected off the surface, and diffusion or subsurface scattering, i.e., light absorbed or scattered internally. Pioneering differentiable neural shaders such as SoftRas (Liu et al., 2019) adopted the Lambertian model as their BRDF representation, which captures only diffuse effects and results in low-quality renderings. NeRF (Mildenhall et al., 2020) proposed a novel radiance field representation for realistic view synthesis under an emit-absorb light transport assumption, without explicitly modeling BRDFs or lighting, and hence is limited to a fixed static scene with no scope for relighting.
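To make the diffuse-vs-specular distinction concrete, the following is a minimal illustrative sketch, not any method from this paper: a Lambertian-only BRDF evaluation compared against one that adds a simple Blinn-Phong specular lobe (all function names and parameter values here are hypothetical choices for illustration).

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def lambertian(albedo, n, l):
    # Diffuse-only shading: radiance depends solely on albedo
    # and the cosine between the normal and the light direction.
    return albedo * max(np.dot(n, l), 0.0)

def diffuse_plus_specular(albedo, n, l, v, ks=0.5, shininess=32):
    # Adds a Blinn-Phong specular lobe on top of the diffuse term,
    # a simple stand-in for the surface-reflection component that a
    # full PBR workflow must model alongside diffuse scattering.
    h = normalize(l + v)  # half vector between light and view
    diff = albedo * max(np.dot(n, l), 0.0)
    spec = ks * max(np.dot(n, h), 0.0) ** shininess
    return diff + spec

n = np.array([0.0, 0.0, 1.0])             # surface normal
l = normalize(np.array([0.0, 1.0, 1.0]))  # light direction
v = np.array([0.0, 0.0, 1.0])             # view direction

diffuse_only = lambertian(0.8, n, l)
with_specular = diffuse_plus_specular(0.8, n, l, v)
```

A Lambertian-only shader can never produce the view-dependent highlight that `with_specular` captures, which is one reason early differentiable shaders built on it look flat.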
In follow-up work, NeRV (Srinivasan et al., 2020) took one more step by explicitly modeling directional light, albedo, and visibility maps to make a fixed scene relightable. Indirect illumination was approximated by ray tracing under the assumption of a single bounce of incoming light.
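For reference, this one-bounce assumption can be stated against the standard rendering equation; the notation below is generic and not NeRV's exact formulation. Outgoing radiance at a surface point $\mathbf{x}$ in direction $\omega_o$ is

$$L_o(\mathbf{x}, \omega_o) = \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, (\omega_i \cdot \mathbf{n})\, \mathrm{d}\omega_i,$$

and under a one-bounce assumption the incident radiance splits as

$$L_i(\mathbf{x}, \omega_i) = V(\mathbf{x}, \omega_i)\, L_{\mathrm{env}}(\omega_i) + L^{(1)}_{\mathrm{ind}}(\mathbf{x}, \omega_i),$$

where $V$ is a visibility term, $L_{\mathrm{env}}$ is the direct environment illumination, and $L^{(1)}_{\mathrm{ind}}$ is radiance arriving from secondary surfaces that are themselves lit only directly, so the recursion is truncated after one bounce.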

