Argument-based inductive logics, with coverage of compromised perception
Formal deductive logic, used to express and reason over declarative, axiomatizable content, captures, we now know, essentially all of what is known in mathematics and physics, and captures as well the details of the proofs by which such knowledge has been secured. This is certainly impressive, but deductive logic alone cannot enable rational adjudication of arguments that are at variance (however much additional information is added).
Main Authors: | Selmer Bringsjord, Michael Giancola, Naveen Sundar Govindarajulu, John Slowik, James Oswald, Paul Bello, Micah Clark |
---|---|
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A., 2024-01-01 |
Series: | Frontiers in Artificial Intelligence |
Subjects: | inductive logic; compromised perception; argument and automated reasoning; Monty Hall dilemma; cognitive robotics; AI |
Online Access: | https://www.frontiersin.org/articles/10.3389/frai.2023.1144569/full |
_version_ | 1797362383114469376 |
---|---|
author | Selmer Bringsjord; Michael Giancola; Naveen Sundar Govindarajulu; John Slowik; James Oswald; Paul Bello; Micah Clark |
author_facet | Selmer Bringsjord; Michael Giancola; Naveen Sundar Govindarajulu; John Slowik; James Oswald; Paul Bello; Micah Clark |
author_sort | Selmer Bringsjord |
collection | DOAJ |
description | Formal deductive logic, used to express and reason over declarative, axiomatizable content, captures, we now know, essentially all of what is known in mathematics and physics, and captures as well the details of the proofs by which such knowledge has been secured. This is certainly impressive, but deductive logic alone cannot enable rational adjudication of arguments that are at variance (however much additional information is added). After affirming a fundamental directive, according to which argumentation should be the basis for human-centric AI, we introduce and employ both a deductive and—crucially—an inductive cognitive calculus. The former cognitive calculus, DCEC, is the deductive one and is used with our automated deductive reasoner ShadowProver; the latter, IDCEC, is inductive, is used with the automated inductive reasoner ShadowAdjudicator, and is based on human-used concepts of likelihood (and in some dialects of IDCEC, probability). We explain that ShadowAdjudicator centers around the concept of competing and nuanced arguments adjudicated non-monotonically through time. We make things clearer and more concrete by way of three case studies, in which our two automated reasoners are employed. Case Study 1 involves the famous Monty Hall Problem. Case Study 2 makes vivid the efficacy of our calculi and automated reasoners in simulations that involve a cognitive robot (PERI.2). In Case Study 3, as we explain, the simulation employs the cognitive architecture ARCADIA, which is designed to computationally model human-level cognition in ways that take perception and attention seriously. We also discuss a type of argument rarely analyzed in logic-based AI: arguments intended to persuade by leveraging human deficiencies. We end by sharing thoughts about the future of research and associated engineering of the type that we have displayed. |
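As an illustrative aside (not part of the catalog record itself): the abstract's Case Study 1 concerns the Monty Hall Problem, where the rationally adjudicated conclusion is that switching doors wins with probability 2/3 versus 1/3 for staying. That asymmetry, which the paper's reasoners are tasked with adjudicating formally, can be checked empirically with a short Monte Carlo sketch; all names below are illustrative, not drawn from the paper's systems.

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One round: a car is hidden behind one of three doors; the player
    picks door 0; the host opens a goat door; the player may switch."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = 0
    # Host opens a door that holds a goat and is not the player's pick.
    opened = next(d for d in doors if d != pick and d != car)
    if switch:
        # Switch to the one remaining unopened, unpicked door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

def win_rate(switch: bool, trials: int = 100_000) -> float:
    """Empirical win probability over many independent rounds."""
    return sum(monty_hall_trial(switch) for _ in range(trials)) / trials

if __name__ == "__main__":
    print(f"stay:   {win_rate(False):.3f}")  # close to 1/3
    print(f"switch: {win_rate(True):.3f}")   # close to 2/3
```

With 100,000 trials the two estimates land near 0.333 and 0.667, matching the probabilistic argument that the paper's inductive reasoner must rank above the intuitive (but mistaken) 50/50 argument.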
first_indexed | 2024-03-08T16:07:15Z |
format | Article |
id | doaj.art-3f07fd3f5c6b4f08889cc81d6263e7bf |
institution | Directory Open Access Journal |
issn | 2624-8212 |
language | English |
last_indexed | 2024-03-08T16:07:15Z |
publishDate | 2024-01-01 |
publisher | Frontiers Media S.A. |
record_format | Article |
series | Frontiers in Artificial Intelligence |
spelling | doaj.art-3f07fd3f5c6b4f08889cc81d6263e7bf | 2024-01-08T05:03:21Z | eng | Frontiers Media S.A. | Frontiers in Artificial Intelligence | 2624-8212 | 2024-01-01 | Vol. 6 | 10.3389/frai.2023.1144569 | 1144569 | Argument-based inductive logics, with coverage of compromised perception | Selmer Bringsjord, Michael Giancola, Naveen Sundar Govindarajulu, John Slowik, James Oswald: Rensselaer AI & Reasoning (RAIR) Lab, Department of Computer Science, Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, United States; Paul Bello: Naval Research Laboratory, Washington, DC, United States; Micah Clark: College of Information Sciences and Technology, Pennsylvania State University, State College, PA, United States | (abstract: see description field above) | https://www.frontiersin.org/articles/10.3389/frai.2023.1144569/full | inductive logic; compromised perception; argument and automated reasoning; Monty Hall dilemma; cognitive robotics; AI |
spellingShingle | Selmer Bringsjord; Michael Giancola; Naveen Sundar Govindarajulu; John Slowik; James Oswald; Paul Bello; Micah Clark | Argument-based inductive logics, with coverage of compromised perception | Frontiers in Artificial Intelligence | inductive logic; compromised perception; argument and automated reasoning; Monty Hall dilemma; cognitive robotics; AI |
title | Argument-based inductive logics, with coverage of compromised perception |
title_full | Argument-based inductive logics, with coverage of compromised perception |
title_fullStr | Argument-based inductive logics, with coverage of compromised perception |
title_full_unstemmed | Argument-based inductive logics, with coverage of compromised perception |
title_short | Argument-based inductive logics, with coverage of compromised perception |
title_sort | argument based inductive logics with coverage of compromised perception |
topic | inductive logic; compromised perception; argument and automated reasoning; Monty Hall dilemma; cognitive robotics; AI |
url | https://www.frontiersin.org/articles/10.3389/frai.2023.1144569/full |
work_keys_str_mv | AT selmerbringsjord argumentbasedinductivelogicswithcoverageofcompromisedperception AT michaelgiancola argumentbasedinductivelogicswithcoverageofcompromisedperception AT naveensundargovindarajulu argumentbasedinductivelogicswithcoverageofcompromisedperception AT johnslowik argumentbasedinductivelogicswithcoverageofcompromisedperception AT jamesoswald argumentbasedinductivelogicswithcoverageofcompromisedperception AT paulbello argumentbasedinductivelogicswithcoverageofcompromisedperception AT micahclark argumentbasedinductivelogicswithcoverageofcompromisedperception |