Underwater Visual Localization Using Machine Learning and LSTM: Datasets

Authors:

(1) Luyuan Peng, Acoustic Research Laboratory, National University of Singapore;

(2) Hari Vishnu, Acoustic Research Laboratory, National University of Singapore;

(3) Mandar Chitre, Acoustic Research Laboratory, National University of Singapore;

(4) Yuen Min Too, Acoustic Research Laboratory, National University of Singapore;

(5) Bharath Kalyan, Acoustic Research Laboratory, National University of Singapore;

(6) Rajat Mishra, Acoustic Research Laboratory, National University of Singapore.

I Introduction

II Method

III Datasets

IV Experiments, Acknowledgment, and References

III. DATASETS

To train and test our model, we used one dataset collected from an underwater robotics simulator [8] (Fig. 3) and two datasets collected from a tank (Fig. 2). For the simulator dataset, we operated the ROV in simulation to inspect a vertical pipe, following a spiral motion around it. The total spatial extent covered by the ROV during the inspection is about 2×4×2 m, and we collected 14,400 image-pose pairs.
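The paper does not specify how the image-pose pairs are stored. Below is a minimal loading sketch, assuming each dataset is a directory of images accompanied by a poses.csv file listing the image name, position (x, y, z), and an orientation quaternion; the file names and CSV layout are illustrative assumptions, not the authors' actual format.

```python
# Sketch: loading image-pose pairs for training a pose-regression model.
# The directory layout and CSV columns below are assumed, not taken from
# the paper.
import csv
from pathlib import Path

import torch
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class ImagePoseDataset(Dataset):
    """Yields (image, pose) pairs, where pose = [x, y, z, qx, qy, qz, qw]."""

    def __init__(self, root: str, pose_csv: str = "poses.csv"):
        self.root = Path(root)
        self.transform = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ])
        with open(self.root / pose_csv) as f:
            # Assumed row format: image_file, x, y, z, qx, qy, qz, qw
            self.rows = list(csv.reader(f))

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, idx):
        name, *pose = self.rows[idx]
        image = self.transform(Image.open(self.root / name).convert("RGB"))
        return image, torch.tensor([float(v) for v in pose])
```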

In the first tank dataset, we operated the ROV along a lawnmower path consisting of translations only (Fig. 2), with minimal rotation, and collected 3,437 data samples. In the second tank dataset, the ROV primarily performed rotation maneuvers at 5 selected points, and we collected 4,977 data samples. We augmented the left-camera data by also using the right-camera data: since the geometry of the stereo camera placement is fixed and known, each right-camera image can be assigned a pose derived from the left-camera pose, providing more training data. This augmentation worked well and yielded better performance. The total spatial extent covered in the tank datasets was 0.4×0.6×0.2 m.
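The paper does not detail how the right-camera pose labels are derived. The following is a minimal sketch of the pose composition such a stereo augmentation implies, assuming a pure-translation extrinsic between the two cameras; the baseline value and function names are placeholders, not the actual rig calibration.

```python
# Sketch: labeling right-camera images by composing the left-camera pose
# with the fixed left-to-right stereo extrinsic. Baseline is a placeholder.
import numpy as np


def pose_to_matrix(xyz, R):
    """Build a 4x4 homogeneous transform from a translation and rotation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = xyz
    return T


# Assumed extrinsic: right camera offset along the left camera's x-axis
# by the stereo baseline, with no relative rotation.
BASELINE_M = 0.06  # placeholder value, in metres
T_LEFT_RIGHT = pose_to_matrix([BASELINE_M, 0.0, 0.0], np.eye(3))


def right_camera_pose(T_world_left: np.ndarray) -> np.ndarray:
    """World pose of the right camera, given the left camera's world pose.

    With T_LEFT_RIGHT mapping points from the right-camera frame into the
    left-camera frame, the composition below yields the right camera's
    world pose.
    """
    return T_world_left @ T_LEFT_RIGHT
```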

