System integration of a vision-based robot system for the food industry


Bibliographic Details
Main Author: CHIA, JING CHENG
Other Authors: Chen I-Ming
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University 2022
Subjects:
Online Access:https://hdl.handle.net/10356/158861
Description
Summary: As technology becomes more advanced, so too does the field of robotics and automation. Many robots can be found assembling vehicle parts in automotive factories. These robots consist mainly of mechanical arms programmed to weld and fasten parts of the cars. Nowadays, the definition of robotics has evolved and expanded to include the development, innovation, and use of robots for surveillance in harsh environments, robots that assist in many aspects of healthcare, and even autonomous vehicles deployed in many places in Singapore for a future intelligent traffic system. This is especially true with the development of Artificial Intelligence in the robotics industry, which makes high-level autonomy of robots possible in complicated environments. Deep learning approaches are widely utilised in the robotics field, for example in object detection, robot navigation, natural language processing, and point cloud registration. The purpose of this Final Year Project is to integrate a point cloud registration method into a vision-based food assembly robot. The main objective is to match two point clouds collected from two depth cameras into a single point cloud with higher-quality depth information, a wider perspective, and fewer blind-spot areas of missing data. Recent deep point cloud matching methods mostly focus on standard point cloud data with high overlap ratios but are rarely deployed in practical applications. Therefore, this project focuses on comparing different point cloud registration approaches on both standard data and real-world data. To compare the quality of data collected from different camera tilt angles, the author designed a tilt module for the depth cameras.
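The core step described in the summary, merging two depth-camera point clouds into one, reduces (when point correspondences are known) to estimating a rigid rotation and translation between the two camera frames. The sketch below is illustrative only, not the method developed in the thesis: it uses the classic Kabsch algorithm via SVD, with synthetic data standing in for the two cameras, and a simulated 30-degree tilt standing in for the tilt module's angle.

```python
import numpy as np

def rigid_align(source, target):
    """Estimate the rigid transform (R, t) mapping `source` points onto
    `target` points, given known correspondences (Kabsch algorithm)."""
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    # 3x3 cross-covariance of the centred clouds
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Toy data: "camera 2" sees a rotated and translated copy of camera 1's cloud.
rng = np.random.default_rng(0)
cam1 = rng.random((100, 3))
theta = np.deg2rad(30)  # hypothetical 30-degree tilt between cameras
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
cam2 = cam1 @ R_true.T + t_true

R, t = rigid_align(cam1, cam2)
merged = cam1 @ R.T + t  # camera 1's cloud expressed in camera 2's frame
print(np.allclose(merged, cam2, atol=1e-8))  # True
```

Real registration pipelines (including the deep methods the project compares) must additionally find the correspondences themselves under partial overlap and sensor noise; this closed-form alignment is only the final step once matches are established.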