Indoor autonomous robot navigation by using visually grounded semantic instructions

Navigation is becoming an increasingly substantial part of autonomy in robotics. As robot capabilities are researched and developed towards increasingly complex tasks, navigation has become a topic that many are exploring. There have been numerous developments in navigation, from using Simultaneous Localization and Mapping (SLAM) to build maps, to exploring other forms of maps such as topological, semantic, and even abstract maps. This project explores a different kind of navigation that is essentially map-less, which is especially useful in foreign environments of which the robot has no prior knowledge. More specifically, this project focuses on defining the navigation goal as a mere string: either a room name or a unit number. Without a map as a reference, the robot must rely on extracting text information from its surroundings to identify its goal. Although the focus of this project is on goal-oriented navigation using text from the environment, the idea being explored could be applied to many different functions and even combined with other existing work. This study is similar to previous work on extracting directional instructions for the robot to follow from signs, with the exception that it focuses on finding the goal by verifying different potential end points, which in this case are doors, in the environment.
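The core idea in the abstract, matching a goal given only as a string (a room name or unit number) against text read from candidate doors, can be sketched as follows. This is a minimal illustration, not the project's actual implementation: the OCR step is stubbed out (the `doors` dictionary stands in for text a real text-spotting model would extract from door camera images), and the normalization and fuzzy-matching choices are assumptions.

```python
# Hypothetical sketch of goal verification from environment text:
# the goal is a plain string, and each candidate door's extracted text
# is fuzzy-matched against it to tolerate OCR noise and formatting.
from difflib import SequenceMatcher


def normalize(text: str) -> str:
    """Lowercase and strip non-alphanumerics so 'Rm #04-12' ~ 'rm 0412'."""
    return "".join(ch for ch in text.lower() if ch.isalnum())


def matches_goal(extracted: str, goal: str, threshold: float = 0.8) -> bool:
    """Return True if OCR output is sufficiently similar to the goal string."""
    ratio = SequenceMatcher(None, normalize(extracted), normalize(goal)).ratio()
    return ratio >= threshold


def find_goal_door(candidate_texts, goal):
    """Return the id of the first candidate door whose text matches the goal."""
    for door_id, text in candidate_texts.items():
        if matches_goal(text, goal):
            return door_id
    return None


# Example: text that a (stubbed) OCR stage read near three doors.
doors = {"door_a": "Server Room", "door_b": "#04-12", "door_c": "Pantry"}
print(find_goal_door(doors, "04-12"))  # matches door_b
```

In a full pipeline, `find_goal_door` would be called as the robot approaches each door, and the threshold would trade off false matches against OCR errors.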


Bibliographic Details
Main Author: Tan, Mei Yu
Other Authors: Soong Boon Hee
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University 2022
Subjects: Engineering::Electrical and electronic engineering::Control and instrumentation::Robotics
Online Access: https://hdl.handle.net/10356/158164
Organisations: School of Electrical and Electronic Engineering; A*STAR Institute of Materials Research and Engineering
Supervisor Contact: EBHSOONG@ntu.edu.sg
Degree: Bachelor of Engineering (Electrical and Electronic Engineering)
Call Number: B3204-211
Citation: Tan, M. Y. (2022). Indoor autonomous robot navigation by using visually grounded semantic instructions. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/158164