A simulated-crowd guided augmentation training framework for realistic crowd-aware robot navigation

Authors

  • Dang Hoang Minh, Academy of Military Science and Technology
  • Truong Xuan Tung, Le Quy Don Technical University
  • Vu Duc Truong, Le Quy Don Technical University
  • Do Viet Binh, Academy of Military Science and Technology

DOI:

https://doi.org/10.54939/1859-1043.j.mst.CSCE9.2025.92-100

Keywords:

Deep reinforcement learning; Robot navigation; Synthetic simulation; Real-world trajectory data.

Abstract

Navigating autonomous robots through dense human crowds remains a complex challenge due to the dynamic and socially aware nature of pedestrian behavior. While deep reinforcement learning (DRL) has enabled promising advances in crowd-aware navigation, most existing methods are trained exclusively on synthetic simulations, limiting their generalization to real-world environments. In this work, we propose Synthetic Crowd Generation with Augmentation (SCGA), an augmented training framework that bridges the gap between simulated learning and real-world pedestrian dynamics. SCGA incorporates motion features extracted from a real-world trajectory dataset into the simulation-based DRL training loop. By dynamically enriching synthetic environments with realistic motion patterns, SCGA allows the navigation policy to learn from real human behavior without the need for costly and risky real-world training. We validate our approach through extensive experiments on crowd navigation tasks. Results show that models trained with SCGA exhibit improved safety and performance in both simulated and real-world-inspired settings. Our framework not only enhances the training process but also offers a reusable environment for developing and benchmarking future navigation models. This work presents a practical and effective pathway for improving the real-world applicability of DRL-based navigation systems.
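The core idea in the abstract, i.e. enriching a synthetic crowd simulation with motion patterns drawn from real pedestrian trajectory data during DRL training, can be illustrated with a minimal sketch. All class names, the dataset format, and the mixing ratio below are assumptions for illustration, not the authors' implementation; the synthetic agents here use a simple goal-seeking motion as a stand-in for a full collision-avoidance model such as ORCA [12].

```python
import random

# Hypothetical sketch (not the paper's code): mix synthetic pedestrians with
# agents that replay recorded real-world trajectories, so the navigation
# policy trains against realistic motion patterns.

# Stand-in for trajectories loaded from a real dataset (e.g. ETH/UCY-style):
# each is a list of (x, y) waypoints, one per simulation step.
REAL_TRAJECTORIES = [
    [(0.0, 0.0), (0.5, 0.1), (1.0, 0.3), (1.5, 0.6)],
    [(2.0, 2.0), (1.6, 1.8), (1.2, 1.5), (0.8, 1.1)],
]

class SyntheticPedestrian:
    """Moves toward a fixed goal at constant speed (stand-in for ORCA)."""
    def __init__(self, pos, goal, speed=0.25):
        self.pos, self.goal, self.speed = pos, goal, speed

    def step(self):
        dx, dy = self.goal[0] - self.pos[0], self.goal[1] - self.pos[1]
        dist = (dx * dx + dy * dy) ** 0.5
        if dist > 1e-6:
            s = min(self.speed, dist) / dist
            self.pos = (self.pos[0] + dx * s, self.pos[1] + dy * s)

class ReplayPedestrian:
    """Replays a recorded real-world trajectory, one waypoint per step."""
    def __init__(self, waypoints):
        self.waypoints, self.t = waypoints, 0
        self.pos = waypoints[0]

    def step(self):
        self.t = min(self.t + 1, len(self.waypoints) - 1)
        self.pos = self.waypoints[self.t]

def make_augmented_crowd(n_synthetic, real_ratio, rng):
    """Build a crowd for one training episode: synthetic agents plus a
    proportion of agents replaying real trajectories (the augmentation)."""
    crowd = [SyntheticPedestrian(pos=(rng.uniform(-3, 3), rng.uniform(-3, 3)),
                                 goal=(rng.uniform(-3, 3), rng.uniform(-3, 3)))
             for _ in range(n_synthetic)]
    n_real = int(real_ratio * n_synthetic)
    crowd += [ReplayPedestrian(rng.choice(REAL_TRAJECTORIES))
              for _ in range(n_real)]
    return crowd

crowd = make_augmented_crowd(n_synthetic=4, real_ratio=0.5,
                             rng=random.Random(0))
for _ in range(3):          # advance every pedestrian a few steps
    for ped in crowd:
        ped.step()
print(len(crowd))           # 4 synthetic + 2 replayed agents -> 6
```

In a full training loop, the robot's DRL policy would observe the positions of both agent types at each step, so the learned behavior reflects real human motion without any real-world training.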

References

[1]. C. Chen, Y. Liu, S. Kreiss, and A. Alahi, “Crowd-robot interaction: Crowd-aware robot navigation with attention-based deep reinforcement learning”, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 6015–6022, (2019).

[2]. S. Liu, D. Sun, M. Tomizuka, and C. Liu, “Decentralized Structural-RNN for robot crowd navigation with deep reinforcement learning”, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 6166–6172, (2021).

[3]. S. Liu, J. Wang, C. Liu, and D. Sun, “Intention-aware robot crowd navigation with attention-based interaction graph”, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), (2023).

[4]. W. Wang, R. Wang, L. Mao, and B.-C. Min, “NaviSTAR: Socially aware robot navigation with hybrid spatio-temporal graph transformer and preference learning”, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), (2023).

[5]. B. Xue et al., “Crowd-aware socially compliant robot navigation via deep reinforcement learning”, International Journal of Social Robotics, vol. 16, no. 1, pp. 197–209, (2024).

[6]. W. Zhao, J. P. Queralta, and T. Westerlund, “Sim-to-real transfer in deep reinforcement learning for robotics: A survey”, Proceedings of the IEEE Symposium Series on Computational Intelligence (SSCI), pp. 737–744, (2020).

[7]. H. Karnan, D. Everett, M. C. Yip, and G. S. Sukhatme, “Socially compliant navigation dataset (SCAND): A large-scale dataset of demonstrations for social navigation”, arXiv preprint, arXiv:2203.15041, (2022).

[8]. A. Ess, B. Leibe, and L. Van Gool, “Depth and appearance for mobile scene analysis”, Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 1–8, (2007).

[9]. A. Lerner, Y. Chrysanthou, and D. Lischinski, “Crowds by example”, Computer Graphics Forum, vol. 26, no. 3, pp. 655–664, (2007).

[10]. P. Kothari, S. Kreiss, and A. Alahi, “Human trajectory forecasting in crowds: A deep learning perspective”, IEEE Transactions on Intelligent Transportation Systems, pp. 1–15, (2021).

[11]. M. H. Dang, V. B. Do, T. C. Tan, L. A. Nguyen, and X. T. Truong, “A synthetic crowd generation framework for socially aware robot navigation”, Proceedings of ICISN 2023 – Intelligent Systems and Networks, Lecture Notes in Networks and Systems, vol. 752, pp. 811–821, (2023).

[12]. J. van den Berg, S. J. Guy, M. Lin, and D. Manocha, “Reciprocal n-body collision avoidance”, Springer Tracts in Advanced Robotics, vol. 70, pp. 3–19, (2011).

[13]. M. Moussaïd, N. Perozo, S. Garnier, D. Helbing, and G. Theraulaz, “The walking behaviour of pedestrian social groups and its impact on crowd dynamics”, PLoS ONE, vol. 5, no. 4, e10047, (2010).

Published

2025-12-31

How to Cite

[1]
M. Dang, Truong Xuan Tung, Vu Duc Truong, and Do Viet Binh, “A simulated-crowd guided augmentation training framework for realistic crowd-aware robot navigation”, JMST’s CSCE, no. CSCE9, pp. 92–100, Dec. 2025.

Section

Articles