Artificial Intelligence, Value Alignment and Rationality

The problem of value alignment in AI studies is becoming increasingly acute. This article addresses basic questions concerning the system of human values that corresponds to what we would like digital minds to be capable of. It has been suggested that, as long as humans cannot agree...


Bibliographic Details
Main Authors: Bekenova Zhumagul, Müürsepp Peeter, Nurysheva Gulzhikhan, Turarbekova Laura
Format: Article
Language: English
Published: Sciendo, 2022-05-01
Series: TalTech Journal of European Studies
Subjects: artificial intelligence; rationality; responsibility; value; value alignment
Online Access: https://doi.org/10.2478/bjes-2022-0004
Collection: DOAJ (Directory of Open Access Journals)
Description: The problem of value alignment in AI studies is becoming increasingly acute. This article addresses basic questions concerning the system of human values that corresponds to what we would like digital minds to be capable of. It has been suggested that, as long as humans cannot agree on a universal system of values in the positive sense, we might at least be able to agree on what has to be avoided. The article argues that while we may follow this suggestion, we still need to keep the positive approach in focus as well. A holistic solution to the value alignment problem is not in sight, and there may never be a final one. Currently, we face an era of endless adjustment of digital minds to biological ones. The biggest challenge is to keep humans in control of this adjustment; here the responsibility lies with humans. Human minds might not be able to fix the capacity of digital minds. Philosophical analysis shows that the key concept in dealing with this issue is value plurality. It may well be that we have to redefine our understanding of rationality in order to deal successfully with the value alignment problem. The article discusses an option for elaborating on the traditional understanding of rationality in the context of AI studies.
ISSN: 2674-4619
Published in: TalTech Journal of European Studies, Vol. 12, No. 1 (2022), pp. 79–98. DOI: https://doi.org/10.2478/bjes-2022-0004

Author Affiliations:
Bekenova Zhumagul: Department of Philosophy and Political Science, Al-Farabi Kazakh National University, Al-Farabi 71, Almaty 050040, Kazakhstan
Müürsepp Peeter: Department of Philosophy and Political Science, Al-Farabi Kazakh National University; Department of Law, Tallinn University of Technology, Akadeemia tee 3, Tallinn 12618, Estonia
Nurysheva Gulzhikhan: Department of Philosophy and Political Science, Al-Farabi Kazakh National University, Al-Farabi 71, Almaty 050040, Kazakhstan
Turarbekova Laura: Department of Philosophy and Political Science, Al-Farabi Kazakh National University, Al-Farabi 71, Almaty 050040, Kazakhstan