Autonomous navigation of mobile robots using visual servoing

Bibliographic Details
Main Author: Lim, Zhi Xuan
Other Authors: Soong Boon Hee
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University 2020
Subjects: Engineering::Electrical and electronic engineering::Control and instrumentation::Robotics
Online Access: https://hdl.handle.net/10356/139682
description The technological revolution has allowed robots to play a larger role than before, owing to their immense potential for bringing convenience to people's lives. This convenience is especially valuable to people who are unwell or immobile, so personal services that cater to their needs can make medical care for patients more holistic. Unfortunately, current autonomous navigation does not take the orientation of the object of interest into account when determining the target location, so the robot is very likely to end up not facing the frontal pose of the object of interest, which is inconvenient. This motivates exploring visual servoing for autonomous navigation. This project aims to develop an autonomous robot that can approach two respective targets, an empty chair and an occupied chair, according to their desired poses. This makes the robot suitable for applications such as food or medicine delivery, in which it moves to the target person and hands over the items; even when the person is not in his or her seat, the robot can still move towards the target. Point cloud processing and a deep-learning detector, OpenPose, are used to determine the poses of the empty chair and the occupied chair respectively. With point cloud processing, an algorithm is developed to segment the two planes of the chair, the backrest and the seat, and hence identify the pose of an empty chair. For the occupied chair, an algorithm identifies the pose of the person from the three-dimensional coordinates of the body parts. Lastly, these data drive the path-planning algorithms that move the robot independently to a position in front of the object of interest.
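The empty-chair case described above (segment the backrest plane from a point cloud, then read the chair's facing direction off the plane normal) can be sketched with a small RANSAC plane fit. This is an illustrative numpy sketch of the general idea, not code from the project: the function names, threshold values, and synthetic backrest data are all assumptions.

```python
import numpy as np

def fit_plane_ransac(points, n_iters=200, threshold=0.01, rng=None):
    """Fit a plane to 3-D points with a simple RANSAC loop.

    Returns (normal, d) for the plane n.x + d = 0 with the most inliers.
    """
    rng = np.random.default_rng(rng)
    best_inliers, best_model = 0, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (collinear) sample, skip
            continue
        n = n / norm
        d = -n.dot(sample[0])
        inliers = int((np.abs(points @ n + d) < threshold).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (n, d)
    return best_model

def chair_facing_from_backrest(normal):
    """Project the backrest normal onto the ground plane (z = 0) to get the
    horizontal direction the chair faces, up to sign. Assumes the backrest
    is a near-vertical plane, so the projection is well defined."""
    horiz = np.array([normal[0], normal[1], 0.0])
    return horiz / np.linalg.norm(horiz)

if __name__ == "__main__":
    # Synthetic backrest: a noisy vertical plane at x = 0.5 (normal along x).
    rng = np.random.default_rng(0)
    y = rng.uniform(-0.25, 0.25, 500)
    z = rng.uniform(0.4, 0.9, 500)
    x = 0.5 + rng.normal(0, 0.003, 500)
    backrest = np.column_stack([x, y, z])
    n, d = fit_plane_ransac(backrest, rng=0)
    facing = chair_facing_from_backrest(n)
    print(facing)   # close to [1, 0, 0] (up to sign): the chair faces along x
```

In a real pipeline the seat would be segmented the same way, and the seat/backrest pair would disambiguate front from back; here the sign ambiguity of the single plane normal is left unresolved.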
Results show that the robot is able to determine the position and orientation of both an empty and an occupied chair and to navigate autonomously to the front of either one. Further improvements are suggested, such as reducing the time the robot takes to reach its target pose and adding other sensors so that the robot can move in more complex environments.
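For the occupied-chair case, the description above derives the person's pose from the three-dimensional coordinates of body parts. A minimal sketch of one way this could work is shown below: take two shoulder keypoints (e.g. OpenPose keypoints lifted to 3-D with a depth camera), form the facing direction perpendicular to the shoulder line in the ground plane, and place a navigation goal a fixed standoff in front of the person, oriented back towards them. The coordinate conventions (z up, person's own left/right), function names, and standoff distance are assumptions for illustration, not the thesis's actual method.

```python
import math

def person_facing_2d(l_shoulder, r_shoulder):
    """Horizontal facing direction of a person from the 3-D positions of the
    left and right shoulders. The facing vector is the left-to-right shoulder
    vector rotated +90 degrees about the z (up) axis."""
    dx = r_shoulder[0] - l_shoulder[0]
    dy = r_shoulder[1] - l_shoulder[1]
    # Rotate (dx, dy) by +90 degrees about z: (x, y) -> (-y, x)
    fx, fy = -dy, dx
    norm = math.hypot(fx, fy)
    return fx / norm, fy / norm

def goal_pose_in_front(l_shoulder, r_shoulder, standoff=0.8):
    """Navigation goal `standoff` metres in front of the person,
    with the robot's heading pointing back towards the person."""
    fx, fy = person_facing_2d(l_shoulder, r_shoulder)
    cx = (l_shoulder[0] + r_shoulder[0]) / 2
    cy = (l_shoulder[1] + r_shoulder[1]) / 2
    gx, gy = cx + standoff * fx, cy + standoff * fy
    heading = math.atan2(-fy, -fx)   # face back towards the person
    return gx, gy, heading
```

For example, a person seated at the origin facing +x (left shoulder on the +y side) yields a goal roughly at (0.8, 0) with a heading of ±pi, i.e. the robot stops in front of the person and turns to face them.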
School: School of Electrical and Electronic Engineering
Research Centre: Institute for Infocomm Research, Agency for Science, Technology and Research
Other Contributor: Wan Kong Wah
Supervisor Email: ebhsoong@ntu.edu.sg
Degree: Bachelor of Engineering (Electrical and Electronic Engineering)
Date Deposited: 2020-05-21
Call Number: B1190-191
Fulltext Format: application/pdf