SQA3D: SITUATED QUESTION ANSWERING IN 3D SCENES



xiaojian.ma@ucla.edu, yongzl19@mails.tsinghua.edu.cn {zlzheng,liqing,sczhu,syhuang}@bigai.ai, yitaol@pku.edu.cn

[Figure 1 example] Situation description s_txt: "Sitting at the edge of the bed and facing the couch." Question q: "Can I go straight to the coffee table in front of me?"

ABSTRACT

We propose a new task to benchmark scene understanding of embodied agents: Situated Question Answering in 3D Scenes (SQA3D). Given a scene context (e.g., a 3D scan), SQA3D requires the tested agent to first understand its situation (position, orientation, etc.) in the 3D scene as described by text, then reason about its surrounding environment and answer a question under that situation. Based on 650 scenes from ScanNet, we provide a dataset centered around 6.8k unique situations, along with 20.4k descriptions and 33.4k diverse reasoning questions for these situations. These questions examine a wide spectrum of reasoning capabilities for an intelligent agent, ranging from spatial relation comprehension to commonsense understanding, navigation, and multi-hop reasoning. SQA3D poses a significant challenge to current multi-modal models, especially those for 3D reasoning. We evaluate various state-of-the-art approaches and find that the best one only achieves an overall score of 47.20%, while amateur human participants can reach 90.06%. We believe SQA3D could facilitate future embodied AI research with stronger situation understanding and reasoning capabilities. Code and data are released at sqa3d.github.io.

1. INTRODUCTION

In recent years, the endeavor of building intelligent embodied agents has delivered fruitful achievements. Robots now can navigate (Anderson et al., 2018) and manipulate objects (Liang et al., 2019; Savva et al., 2019; Shridhar et al., 2022; Ahn et al., 2022) following natural language commands



Figure 1: Task illustration of Situated Question Answering in 3D Scenes (SQA3D). Given scene context S (e.g., 3D scan, egocentric video, bird's-eye view picture), SQA3D requires an agent to first comprehend and localize its situation (position, orientation, etc.) in the 3D scene from a textual description s_txt, then answer a question q under that situation. Note that understanding the situation and correctly imagining the corresponding egocentric view is necessary to accomplish our task. We provide more example questions in Figure 2.
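The task interface described above (scene context S, situation description s_txt, question q, answer) can be sketched as a simple data record and evaluation loop. This is an illustrative sketch only: the field names, the example answer, and the exact-match accuracy metric are assumptions for exposition, not the dataset's actual schema or protocol.

```python
# Hypothetical sketch of one SQA3D sample; field names are illustrative,
# not the released dataset's actual schema.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SQA3DSample:
    scene_id: str    # scene context S, e.g. a ScanNet scan identifier
    situation: str   # textual situation description s_txt
    question: str    # question q, answered under that situation
    answer: str      # ground-truth answer (assumed here for illustration)

sample = SQA3DSample(
    scene_id="scene0000_00",  # hypothetical ScanNet-style scene id
    situation="Sitting at the edge of the bed and facing the couch.",
    question="Can I go straight to the coffee table in front of me?",
    answer="yes",  # placeholder; the figure does not state the answer
)

def accuracy(predict: Callable[[str, str, str], str],
             samples: List[SQA3DSample]) -> float:
    """Exact-match accuracy over (scene, situation, question) triples."""
    correct = sum(predict(s.scene_id, s.situation, s.question) == s.answer
                  for s in samples)
    return correct / len(samples)
```

An agent is then any function mapping the (scene, situation, question) triple to an answer string; the key difficulty the paper highlights is that answering correctly requires first grounding s_txt to a pose in the scene.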

