Adaptive Visual Information Gathering for Autonomous Exploration of Underwater Environments


dc.contributor.author Guerrero-Font, E.
dc.contributor.author Bonin-Font, F.
dc.contributor.author Oliver-Codina, G.
dc.date.accessioned 2025-07-09T08:16:53Z
dc.date.available 2025-07-09T08:16:53Z
dc.identifier.citation Guerrero-Font, E., Bonin-Font, F., i Oliver-Codina, G. (2021). Adaptive Visual Information Gathering for Autonomous Exploration of Underwater Environments. IEEE Access, 9(1), 136487-136506. https://doi.org/10.1109/ACCESS.2021.3117343 ca
dc.identifier.uri http://hdl.handle.net/11201/170671
dc.description.abstract [eng] This work presents the development and field testing of a novel adaptive visual information gathering (AVIG) framework for autonomous exploration of benthic environments using AUVs. The objective is to dynamically adapt the robot's exploration using the visual information gathered online. This framework is based on a novel decision-time adaptive replanning (DAR) behavior that works together with a sparse Gaussian process (SGP) for environmental modeling and a convolutional neural network (CNN) for semantic image segmentation. The framework is executed in mission time. The SGP uses semantic data obtained from stereo images to probabilistically model the spatial distribution of certain species of seagrass that colonize the sea bottom, forming widespread meadows. The uncertainty of the probabilistic model provides a measure of sampling informativeness to the DAR behavior. The DAR behavior has been designed to execute successive informative paths, without stopping, considering the newest information obtained from the SGP. We solve the informative path planning (IPP) problem by means of a novel depth-first (DF) version of Monte Carlo tree search (MCTS). The DF-MCTS method has been designed to explore the state space in a depth-first fashion, provide solution paths of a given length in an anytime manner, and reward smooth paths for field realization with non-holonomic robots. The complete framework has been integrated into a ROS environment as a high-level layer of the AUV software architecture. A set of simulations and field tests shows the effectiveness of the framework for gathering data in P. oceanica environments. en
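The abstract describes a depth-first search over candidate paths that maximizes model uncertainty (informativeness) while rewarding smooth trajectories and returning a best-so-far answer at any time. As a rough illustration of that idea only, the following minimal Python sketch searches a precomputed uncertainty grid for a fixed-length path, penalizing turns; all names, the grid representation, and the turn-penalty scheme are illustrative assumptions, not the paper's actual DF-MCTS algorithm.

```python
def best_informative_path(uncertainty, start, length, turn_penalty=0.1):
    """Depth-first search for the most informative fixed-length path.

    uncertainty : 2D list of per-cell model uncertainty (higher = more informative)
    start       : (row, col) starting cell
    length      : number of cells in the path, including the start
    turn_penalty: reward deducted whenever the heading changes (smoothness term)
    """
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    rows, cols = len(uncertainty), len(uncertainty[0])
    # Best-so-far solution: available at any time, in the anytime spirit
    # of the DF search described in the abstract.
    best = {"reward": float("-inf"), "path": None}

    def dfs(cell, prev_move, path, reward):
        if len(path) == length:
            if reward > best["reward"]:
                best["reward"], best["path"] = reward, list(path)
            return
        for move in moves:
            nr, nc = cell[0] + move[0], cell[1] + move[1]
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in path:
                # Penalize heading changes to favor smooth paths,
                # which are easier to track with a non-holonomic AUV.
                penalty = turn_penalty if prev_move and move != prev_move else 0.0
                path.append((nr, nc))
                dfs((nr, nc), move, path, reward + uncertainty[nr][nc] - penalty)
                path.pop()

    dfs(start, None, [start], uncertainty[start[0]][start[1]])
    return best["path"], best["reward"]
```

In the paper, the informativeness values would come from the SGP's predictive uncertainty rather than a static grid, and the search is a Monte Carlo tree search rather than this exhaustive enumeration; the sketch only conveys the depth-first, anytime, smoothness-rewarding structure.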
dc.format application/pdf en
dc.format.extent 136487-136506
dc.publisher IEEE
dc.relation info:eu-repo/grantAgreement/AEI//DPI2017-86372-C3-3-R/[ES]
dc.relation info:eu-repo/grantAgreement/CAIB//FPI/2031/2017/[ES]
dc.relation.ispartof IEEE Access, 2021, vol. 9, num.1, p. 136487-136506
dc.rights Attribution 4.0 International
dc.rights.uri https://creativecommons.org/licenses/by/4.0/
dc.subject.classification 004 - Computer science en
dc.subject.classification 57 - Biology en
dc.subject.classification 62 - Engineering. Technology en
dc.subject.other 004 - Computer Science and Technology. Computing. Data processing en
dc.subject.other 57 - Biological sciences in general en
dc.subject.other 62 - Engineering. Technology in general en
dc.title Adaptive Visual Information Gathering for Autonomous Exploration of Underwater Environments en
dc.type info:eu-repo/semantics/article
dc.type info:eu-repo/semantics/publishedVersion
dc.type Article
dc.date.updated 2025-07-09T08:16:53Z
dc.rights.accessRights info:eu-repo/semantics/openAccess
dc.identifier.doi https://doi.org/10.1109/ACCESS.2021.3117343

