Efficient Navigation Among Movable Obstacles using a Mobile Manipulator via Hierarchical Policy Learning

¹School of Computing, KAIST    ²Robotics Program, KAIST
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2025
Teaser Image

Our Hierarchical Reinforcement Learning (HRL) framework enables a mobile manipulator to navigate efficiently by dynamically pushing unforeseen obstacles, integrating real-time obstacle property estimation with strategic, coordinated whole-body motions.

Abstract

We propose a hierarchical reinforcement learning (HRL) framework for efficient Navigation Among Movable Obstacles (NAMO) using a mobile manipulator. Our approach combines interaction-based obstacle property estimation with structured pushing strategies, enabling the dynamic manipulation of unforeseen obstacles while adhering to a pre-planned global path. The high-level policy generates pushing commands that account for environmental constraints and path-tracking objectives, while the low-level policy executes these commands precisely and stably through coordinated whole-body movements. Comprehensive simulation-based experiments demonstrate improvements on NAMO tasks over baselines, including higher success rates, shorter traversed paths, and reduced goal-reaching times. Additionally, ablation studies assess the efficacy of each component, and a qualitative analysis further validates the accuracy and reliability of the real-time obstacle property estimation.
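The two-level control loop described in the abstract can be pictured roughly as follows. This is a minimal, hypothetical sketch for intuition only: the class names (`PushCommand`, `HighLevelPolicy`, `LowLevelPolicy`), the heuristic decision rules standing in for the learned policies, and the observation/action layouts are illustrative assumptions, not the paper's actual interfaces.

```python
# Hypothetical sketch of a hierarchical NAMO control loop: a high-level
# policy issues pushing commands, a low-level policy turns them into
# whole-body motion. Heuristics stand in for the learned policies.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class PushCommand:
    """High-level output: where and how to push the obstacle."""
    contact_point: Tuple[float, float]   # contact location on the obstacle (m)
    push_direction: Tuple[float, float]  # unit vector in the world frame
    push_force: float                    # commanded force magnitude (N)


class HighLevelPolicy:
    """Chooses a pushing command that clears the obstacle off the
    pre-planned global path while keeping path-tracking error small."""

    def act(self, obstacle_pose, estimated_mass, path_error) -> PushCommand:
        # Heuristic stand-in: push so as to reduce lateral path error,
        # scaling force with the (interaction-estimated) obstacle mass.
        dx = -path_error
        force = min(50.0, 10.0 * estimated_mass)
        return PushCommand(
            contact_point=obstacle_pose,
            push_direction=(dx / max(abs(dx), 1e-6), 0.0),
            push_force=force,
        )


class LowLevelPolicy:
    """Tracks the pushing command with coordinated base + arm motion."""

    def act(self, cmd: PushCommand) -> List[float]:
        # Stand-in whole-body action: [base_vx, base_vy, base_wz, *joint_vels]
        base = [0.2 * cmd.push_direction[0], 0.2 * cmd.push_direction[1], 0.0]
        joints = [0.01 * cmd.push_force] * 6  # scale arm effort with force
        return base + joints


high, low = HighLevelPolicy(), LowLevelPolicy()
cmd = high.act(obstacle_pose=(1.0, 0.5), estimated_mass=3.0, path_error=0.4)
action = low.act(cmd)  # 9-dim whole-body velocity command
```

The key design point reflected here is the division of labor: the high-level policy reasons about *what* interaction to make (contact, direction, force) given path and obstacle-property estimates, while the low-level policy handles *how* to realize it with the full base-plus-arm kinematic chain.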

Video Presentation

Coming Soon

Poster

Coming Soon

BibTeX

@article{yang2025efficient,
  title={Efficient Navigation Among Movable Obstacles using a Mobile Manipulator via Hierarchical Policy Learning},
  author={Yang, Taegeun and Hwang, Jiwoo and Jeong, Jeil and Yoon, Minsung and Yoon, Sung-Eui},
  journal={arXiv preprint arXiv:2506.15380},
  year={2025}
}