Collision avoidance from monocular vision trained with novel view synthesis

Valentin Tordjman--Levavasseur, Stéphane Caron. April 2025.

Abstract

Collision avoidance can be checked in explicit environment models such as elevation maps or occupancy grids, yet integrating such models with a locomotion policy requires accurate state estimation. In this work, we consider the question of collision avoidance from an implicit environment model. We use monocular RGB images as inputs and train a collision-avoidance policy from photorealistic images generated by 2D Gaussian splatting. We evaluate the resulting pipeline in real-world experiments under velocity commands that put the robot on an intercept course with obstacles. Our results suggest that RGB images can be enough to make collision-avoidance decisions, both in the room where training data was collected and in out-of-distribution environments.
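To give a concrete picture of the inference side of such a pipeline, here is a minimal PyTorch sketch of a policy that maps a monocular RGB frame and a commanded velocity to a corrected velocity. The architecture, class name, input resolution, and action space below are illustrative assumptions, not the paper's actual network; see the linked PDF and source code for the real implementation.

# Hypothetical sketch of monocular collision avoidance at inference time.
# The real network, preprocessing, and action space are in the paper.
import torch
import torch.nn as nn

class CollisionAvoidancePolicy(nn.Module):
    """Map an RGB image and a velocity command to a corrected velocity."""

    def __init__(self, num_actions: int = 2):
        super().__init__()
        # Small convolutional encoder for the monocular RGB input.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fuse image features with the commanded velocity (v, omega).
        self.head = nn.Sequential(
            nn.Linear(32 + num_actions, 64), nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, image: torch.Tensor, command: torch.Tensor) -> torch.Tensor:
        features = self.encoder(image)
        return self.head(torch.cat([features, command], dim=-1))

policy = CollisionAvoidancePolicy()
image = torch.rand(1, 3, 120, 160)      # normalized RGB frame (assumed size)
command = torch.tensor([[0.5, 0.0]])    # forward command on an intercept course
safe_velocity = policy(image, command)  # corrected command for the locomotion policy
print(safe_velocity.shape)              # torch.Size([1, 2])

At training time, the image batches would come from novel views rendered by the 2D Gaussian splatting model rather than from the onboard camera, which is what lets the policy see many more intercept trajectories than were physically recorded.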

Content

Paper (PDF)
Source code (GitHub)
Video (YouTube)

BibTeX

@unpublished{tordjmanlevavasseur:hal-05005146,
    title = {{Collision avoidance from monocular vision trained with novel view synthesis}},
    author = {Tordjman{-}{-}Levavasseur, Valentin and Caron, St{\'e}phane},
    url = {https://hal.science/hal-05005146},
    note = {working paper or preprint},
    year = {2025},
    month = mar,
}

Discussion

Feel free to post a comment by e-mail. Your e-mail address will not be disclosed.
