Last month, Senior Robotics Engineer Blake Buchanan traveled to Seoul, Korea, to present at the 2025 Conference on Robot Learning (CoRL), an international conference focused on the intersection of robotics and machine learning. During the conference, Buchanan showcased his research, “GraphEQA: Using 3D Semantic Scene Graphs for Real-Time Embodied Question Answering,” in both a spotlight presentation and a poster session.
Partnering with researchers and scientists from Carnegie Mellon University, Agility Robotics, Hello Robot, and the Bosch Center for AI, Buchanan discussed how GraphEQA uses real-time 3D metric-semantic scene graphs (3DSGs) and task-relevant images as multi-modal memory for grounding Vision-Language Models (VLMs) to navigate and answer questions about indoor environments.
The team’s research addresses embodied question answering (EQA), in which an agent must explore an unseen environment and build a semantic understanding of it in order to answer a situated question with confidence — that is, to accurately respond to questions that depend on an object’s specific surroundings. This problem remains challenging in robotics due to the difficulties of obtaining useful semantic representations, updating those representations online, and leveraging prior world knowledge for efficient planning and exploration.
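To make the idea of a hierarchical scene graph serving as memory for a VLM more concrete, here is a minimal, hypothetical sketch. The class names, fields, and the floor/room/object hierarchy are illustrative assumptions for this article, not the paper's actual data structures: the point is only that the agent incrementally updates a hierarchy as it explores, then serializes it into compact text a VLM can condition on.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Illustrative node types: floors contain rooms, rooms contain objects.
# (Names and fields are assumptions for this sketch, not GraphEQA's API.)

@dataclass
class ObjectNode:
    label: str
    position: Tuple[float, float, float]  # (x, y, z) in metres

@dataclass
class RoomNode:
    name: str
    objects: List[ObjectNode] = field(default_factory=list)

@dataclass
class FloorNode:
    name: str
    rooms: List[RoomNode] = field(default_factory=list)

@dataclass
class SceneGraph:
    floors: List[FloorNode] = field(default_factory=list)

    def add_observation(self, floor: str, room: str, obj: ObjectNode) -> None:
        """Incrementally grow the hierarchy as the agent explores online."""
        f = next((x for x in self.floors if x.name == floor), None)
        if f is None:
            f = FloorNode(floor)
            self.floors.append(f)
        r = next((x for x in f.rooms if x.name == room), None)
        if r is None:
            r = RoomNode(room)
            f.rooms.append(r)
        r.objects.append(obj)

    def to_prompt(self) -> str:
        """Serialize the hierarchy into compact text for a VLM prompt."""
        lines = []
        for f in self.floors:
            lines.append(f"floor: {f.name}")
            for r in f.rooms:
                objs = ", ".join(o.label for o in r.objects) or "unexplored"
                lines.append(f"  room: {r.name} -> {objs}")
        return "\n".join(lines)

sg = SceneGraph()
sg.add_observation("ground", "kitchen", ObjectNode("mug", (1.2, 0.4, 0.9)))
sg.add_observation("ground", "kitchen", ObjectNode("stove", (2.0, 0.1, 0.8)))
sg.add_observation("ground", "hallway", ObjectNode("coat rack", (0.3, 1.5, 1.6)))
print(sg.to_prompt())
```

The hierarchy is what makes such a structure useful for planning: a planner can reason first over rooms ("which room is most likely to contain the answer?") before descending to individual objects, rather than searching a flat object list.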

In the paper’s abstract, the team explains:
“In GraphEQA, we employ a hierarchical planning approach that exploits the hierarchical nature of 3DSGs for structured planning and semantics-guided exploration. We evaluate GraphEQA in simulation on two benchmark datasets, HM-EQA and OpenEQA, and demonstrate that it outperforms key baselines by completing EQA tasks with higher success rates and fewer planning steps. We further demonstrate GraphEQA in multiple real-world home and office environments.”
In addition to Buchanan, contributors to the project include Saumya Saxena and Oliver Kroemer (Carnegie Mellon University), Chris Paxton (Agility Robotics), Peiqi Liu (Hello Robot), Bingqing Chen, Narunas Vaskevicius, and Luigi Palmieri (Bosch Center for AI), and Jonathan Francis (CMU, Bosch Center for AI). The full paper can be found online here. Watch Buchanan’s presentation below, or see the full presentation on CoRL’s YouTube channel.



