Machines Imitating Human Thinking Using Bayesian Learning and Bootstrap

In the field of cognitive science, much research has been conducted on the diverse applications of artificial intelligence (AI). One important area of study is machines that imitate human thinking. Although there are various approaches to developing thinking machines, in this paper we assume that human thinking is not always optimal: humans are sometimes driven by emotion to make decisions that are not optimal. Recently, deep learning has come to dominate most machine learning tasks in AI, and in the area of optimal decision-making many traditional machine learning methods are rapidly being replaced by it, so we can expect faster growth of AI technologies such as AlphaGo for optimal decision-making. Humans, however, sometimes think and act not optimally but emotionally. In this paper, we propose a method for building thinking machines that imitate humans, using Bayesian decision theory and Bayesian learning. Bayesian statistics involves a learning process based on prior and posterior distributions. The prior represents an initial belief about a specific domain; it is updated to the posterior through the likelihood of the observed data, and the posterior is the updated belief given those observations. When new data are observed, the current posterior is used as the prior for the next update. Bayesian learning of this kind also yields an optimal decision, so by itself it is not well suited to modeling thinking machines. We therefore study a new Bayesian approach to developing thinking machines based on Bayesian decision theory: instead of using a single optimal value expected from the posterior, we generate random values from the last updated posterior and use them for thinking machines that imitate human thinking.

Bibliographic Details
Main Author: Sunghae Jun (Department of Big Data and Statistics, Cheongju University, Chungbuk 28503, Korea)
Format: Article
Language: English
Published: MDPI AG, 2021-02-01
Series: Symmetry, Vol. 13, No. 3, Article 389
ISSN: 2073-8994
DOI: 10.3390/sym13030389
Subjects: Bayesian learning; artificial intelligence; thinking machines; prior and posterior; Bayesian bootstrap
Online Access: https://www.mdpi.com/2073-8994/13/3/389
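
The abstract describes sequential Bayesian updating (prior updated to posterior via the likelihood, with the posterior reused as the next prior) and then drawing random values from the last posterior rather than committing to a single optimal value. Below is a minimal Python sketch of that idea, assuming a Beta-Binomial conjugate model; the model choice, the batch data, and all names are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=2021)

# Illustrative assumption: the machine observes binary outcomes (1 = success)
# and keeps a belief about the success probability theta using a Beta-Binomial
# conjugate model. Beta(1, 1) is a uniform initial belief (the prior).
alpha, beta = 1.0, 1.0

# Hypothetical observed data arriving in batches.
batches = [
    [1, 0, 1, 1],
    [0, 0, 1],
    [1, 1, 1, 0, 1],
]

for data in batches:
    successes = sum(data)
    # The likelihood of the new batch updates the prior to the posterior;
    # that posterior then serves as the prior for the next batch.
    alpha += successes
    beta += len(data) - successes

# Optimal decision: a single point estimate from the final posterior.
optimal_theta = alpha / (alpha + beta)  # posterior mean

# The approach described in the abstract: instead of the single optimal value,
# generate random values from the last updated posterior, so the machine's
# decisions vary around the belief rather than always being optimal.
humanlike_thetas = rng.beta(alpha, beta, size=10)

print(f"final posterior: Beta({alpha:.0f}, {beta:.0f})")
print(f"optimal value (posterior mean): {optimal_theta:.3f}")
print("sampled 'human-like' values:", np.round(humanlike_thetas, 3))
```

Drawing several values from the final posterior, rather than reporting only its mean, mirrors the contrast the abstract draws between optimal decision-making and the non-optimal, emotion-driven behavior the thinking machine is meant to imitate.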