Cognitive System of a Virtual Robot Based on Perception, Memory, and Hypothesis Models for Calligraphy Writing Task


Bibliographic Details
Main Authors: Wei-Yen Wang, Min-Jie Hsu, Yi-Hsing Chien, Chen-Chien Hsu, Hsin-Han Chiang, Li-An Yu
Format: Article
Language: English
Published: IEEE 2022-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9938979/
Description
Summary: In this paper, we propose a robotic cognitive system that can teach itself to perform a specific task by accumulating experiences through bottom-up thinking, and that can make decisions by itself through top-down thinking based on those experiences. That is, the cognitive system has a self-learning ability: it accumulates experiences that make it progressively smarter. In essence, the cognitive system comprises a perception model, a memory model, and a hypothesis model. The perception model converts image information into perception codes. The memory model stores past and present experiences and supplies them to the perception model and the hypothesis model. The hypothesis model, which generates the next decision from the experiences provided by the memory model, is the most important part of the proposed cognitive system. To validate the performance of the proposed system, we use simulated Chinese calligraphy writing tasks performed by a virtual robot to evaluate the abilities of the cognitive system. To generate the coordinates of the writing brush, we had the virtual robot practice Chinese calligraphy through bottom-up thinking to construct writing patterns. The illustrative examples in this paper show that the virtual robot can learn to write Chinese calligraphy through top-down thinking based on its own experiences.
ISSN: 2169-3536
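The perceive-recall-decide-store cycle described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; all class and method names (PerceptionModel, MemoryModel, HypothesisModel, and the toy stroke encoding) are illustrative assumptions about how the three models might interact.

```python
# Hypothetical sketch of the three-model cognitive loop from the abstract.
# All names and data representations are assumptions, not the paper's API.

class PerceptionModel:
    """Converts raw image information into a discrete perception code."""
    def encode(self, image):
        # Placeholder: a real system would extract stroke features here.
        return tuple(image)

class MemoryModel:
    """Stores past and present experiences for the other two models."""
    def __init__(self):
        self.experiences = []
    def store(self, code, decision):
        self.experiences.append((code, decision))
    def recall(self, code):
        # Return all decisions previously associated with this code.
        return [d for c, d in self.experiences if c == code]

class HypothesisModel:
    """Generates the next decision from recalled experiences."""
    def decide(self, recalled, default_decision):
        # Top-down thinking: reuse the most recent matching experience;
        # otherwise fall back to the default (bottom-up practice).
        return recalled[-1] if recalled else default_decision

# One cycle for a toy brush-coordinate task.
perception, memory, hypothesis = PerceptionModel(), MemoryModel(), HypothesisModel()
code = perception.encode([0, 1, 1, 0])       # perceive a stroke image
decision = hypothesis.decide(memory.recall(code), default_decision=(10, 20))
memory.store(code, decision)                 # accumulate the experience
```

On a second encounter with the same perception code, the hypothesis model would reuse the stored decision rather than the default, which is the sense in which accumulated experience makes the system "smarter" over repeated practice.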