We propose a hierarchical reinforcement learning (HRL) framework for efficient Navigation Among Movable Obstacles (NAMO) using a mobile manipulator. Our approach combines interaction-based obstacle property estimation with structured pushing strategies, enabling dynamic manipulation of unforeseen obstacles while adhering to a pre-planned global path. The high-level policy generates pushing commands that account for environmental constraints and path-tracking objectives, while the low-level policy precisely and stably executes these commands through coordinated whole-body motion. Comprehensive simulation experiments demonstrate improvements on NAMO tasks over baselines, including higher success rates, shorter traversed paths, and reduced goal-reaching times. Additionally, ablation studies assess the contribution of each component, and a qualitative analysis further validates the accuracy and reliability of the real-time obstacle property estimation.
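To make the two-level structure concrete, here is a minimal, purely illustrative sketch of a hierarchical pushing pipeline: a high-level policy that maps an obstacle's estimated properties and the next path waypoint to a pushing command, and a low-level policy that turns that command into base velocity setpoints. All names, interfaces, and the toy heuristics are assumptions for illustration only, not the authors' released implementation (the learned policies in the paper replace both hand-written functions).

```python
import math
from dataclasses import dataclass

# Hypothetical interfaces; the paper's learned HRL policies would replace
# both hand-written functions below.

@dataclass
class PushCommand:
    contact_angle: float   # direction to push the obstacle (rad)
    push_distance: float   # how far to push it (m)

def high_level_policy(obstacle_pose, obstacle_mass_est, path_waypoint):
    """Toy high-level policy: push the obstacle perpendicular to the
    direction of the pre-planned path; push lighter obstacles farther."""
    path_dir = math.atan2(path_waypoint[1] - obstacle_pose[1],
                          path_waypoint[0] - obstacle_pose[0])
    return PushCommand(
        contact_angle=path_dir + math.pi / 2,
        push_distance=min(1.0, 5.0 / max(obstacle_mass_est, 1e-3)))

def low_level_policy(cmd: PushCommand, dt: float = 0.1, speed: float = 0.5):
    """Toy low-level policy: execute the command as a sequence of
    constant base-velocity setpoints (stand-in for whole-body control)."""
    steps = int(cmd.push_distance / (speed * dt))
    vx = speed * math.cos(cmd.contact_angle)
    vy = speed * math.sin(cmd.contact_angle)
    return [(vx, vy) for _ in range(steps)]

# Example rollout: a 10 kg obstacle sitting on the path ahead.
cmd = high_level_policy((1.0, 0.0), obstacle_mass_est=10.0,
                        path_waypoint=(2.0, 0.0))
vels = low_level_policy(cmd)
```

In the actual framework, the estimated obstacle properties come from interaction-based estimation rather than being given, and the low-level command execution coordinates the base and arm jointly.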
Coming Soon
@article{yang2025efficient,
  title={Efficient Navigation Among Movable Obstacles using a Mobile Manipulator via Hierarchical Policy Learning},
  author={Yang, Taegeun and Hwang, Jiwoo and Jeong, Jeil and Yoon, Minsung and Yoon, Sung-Eui},
  journal={arXiv preprint arXiv:2506.15380},
  year={2025}
}