Binocular vision-guided manipulation by robotic arm

Bibliographic Details
Author: Fang, Yuhui
Other Authors: Xie Ming
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/173393
Description
Summary: Visual signals are paramount in conferring human-like intelligence on robots, vehicles, and machines. Binocular vision, which underpins human comprehension of a dynamic world, is equally crucial for intelligent robots and machines to extract knowledge from visual signals. However, stereovision matching presents a notable challenge for these systems. This thesis introduces a new approach to tackle the challenge, emphasizing a robust matching solution that combines top-down image sampling, hybrid feature extraction, and a Restricted Coulomb Energy (RCE) neural network for incremental learning and robust recognition. Furthermore, it explores the analogy between the human eye and a pan-tilt-zoom (PTZ) camera, raising the question of whether simpler, easily calibratable formulas exist for computing depth and displacement. The thesis presents a new result in 3D projection for human-like binocular vision systems: formulas that carry out forward and inverse transformations between 2D digital images and a 3D analogue scene. These formulas are accurate, easily computable, tunable on the fly, and suitable for implementation in a neural system. Experimental results affirm their efficacy, offering a promising avenue for simplified, calibration-friendly 3D projection in binocular vision systems.
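The abstract only names the components of the matching pipeline, so the following is a minimal, illustrative sketch of an RCE-style prototype (hypersphere) classifier of the general kind mentioned, not the thesis's actual network. The class name RCEClassifier, its parameters, and the radius-update details are assumptions made for illustration.

    import numpy as np

    class RCEClassifier:
        """Illustrative RCE-style prototype classifier: each prototype is a
        (centre, radius, label) hypersphere. Learning is incremental: a sample not
        covered by its own class spawns a new prototype, and any wrong-class
        prototype that covers the sample has its radius shrunk."""

        def __init__(self, max_radius=1.0):
            self.max_radius = max_radius
            self.prototypes = []  # list of (centre, radius, label)

        def partial_fit(self, x, label):
            x = np.asarray(x, dtype=float)
            covered = False
            for i, (c, r, l) in enumerate(self.prototypes):
                d = float(np.linalg.norm(x - c))
                if d <= r:
                    if l == label:
                        covered = True
                    else:
                        # Conflict: shrink the rival prototype so it no longer covers x.
                        self.prototypes[i] = (c, d * 0.99, l)
            if not covered:
                # New prototype, with radius limited by the nearest rival-class centre.
                rivals = [float(np.linalg.norm(x - c)) for c, _, l in self.prototypes if l != label]
                self.prototypes.append((x, min([self.max_radius] + rivals), label))

        def predict(self, x):
            x = np.asarray(x, dtype=float)
            hits = [(float(np.linalg.norm(x - c)), l)
                    for c, r, l in self.prototypes if np.linalg.norm(x - c) <= r]
            return min(hits)[1] if hits else None  # nearest covering prototype, or None

Because training only ever adds prototypes or shrinks radii, new object classes can be absorbed without retraining from scratch, which is the property the abstract refers to as incremental learning.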
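Likewise, the record does not reproduce the thesis's projection formulas. The sketch below shows only the textbook rectified-stereo relations (depth Z = f·B/d and pinhole back-projection) to illustrate the kind of forward and inverse 2D-to-3D mapping being discussed; the function names and numeric values are illustrative assumptions.

    # Textbook rectified-stereo relations (not the thesis's own formulas, which this
    # record does not reproduce): depth Z = f * B / d, and pinhole back-projection.

    def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
        """Depth in metres of a point seen with horizontal disparity d (pixels)."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a point in front of the cameras")
        return focal_length_px * baseline_m / disparity_px

    def back_project(u_px, v_px, depth_m, focal_length_px, cx_px, cy_px):
        """Inverse transform: pixel (u, v) plus depth -> camera-frame point (X, Y, Z)."""
        x = (u_px - cx_px) * depth_m / focal_length_px
        y = (v_px - cy_px) * depth_m / focal_length_px
        return x, y, depth_m

    # Example: f = 800 px, baseline = 6.5 cm, disparity = 13 px  ->  Z = 4.0 m
    print(depth_from_disparity(13.0, 800.0, 0.065))
    print(back_project(420.0, 260.0, 4.0, 800.0, 320.0, 240.0))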