Content-filtering AI systems–limitations, challenges and regulatory approaches
Online service providers, and even governments, have increasingly relied on Artificial Intelligence (‘AI’) to regulate content on the internet. In some jurisdictions, the law has incentivised, if not obligated, service providers to adopt measures to detect, track, and remove objectionable content su...
Main Authors: | Marsoof, Althaf; Luco, Andrés; Tan, Harry; Joty, Shafiq |
---|---|
Other Authors: | School of Computer Science and Engineering |
Format: | Journal Article |
Language: | English |
Published: | 2023 |
Subjects: | Business::Law; Engineering::Computer science and engineering; Content Moderation; AI and Automation |
Online Access: | https://hdl.handle.net/10356/170526 |
_version_ | 1826117456008052736 |
---|---|
author | Marsoof, Althaf Luco, Andrés Tan, Harry Joty, Shafiq |
author2 | School of Computer Science and Engineering |
author_facet | School of Computer Science and Engineering Marsoof, Althaf Luco, Andrés Tan, Harry Joty, Shafiq |
author_sort | Marsoof, Althaf |
collection | NTU |
description | Online service providers, and even governments, have increasingly relied on Artificial Intelligence (‘AI’) to regulate content on the internet. In some jurisdictions, the law has incentivised, if not obligated, service providers to adopt measures to detect, track, and remove objectionable content such as terrorist propaganda. Consequently, service providers are being pushed to use AI to moderate online content. However, content-filtering AI systems are subject to limitations that affect their accuracy and transparency. These limitations open the possibility for legitimate content to be removed and objectionable content to remain online. Such an outcome could endanger human well-being and the exercise of our human rights. In view of these challenges, we argue that the design and use of content-filtering AI systems should be regulated. AI ethics principles such as transparency, explainability, fairness, and human-centricity should guide such regulatory efforts. |
first_indexed | 2024-10-01T04:27:52Z |
format | Journal Article |
id | ntu-10356/170526 |
institution | Nanyang Technological University |
language | English |
last_indexed | 2024-10-01T04:27:52Z |
publishDate | 2023 |
record_format | dspace |
spelling | ntu-10356/1705262023-09-18T06:44:21Z Content-filtering AI systems–limitations, challenges and regulatory approaches Marsoof, Althaf Luco, Andrés Tan, Harry Joty, Shafiq School of Computer Science and Engineering Nanyang Business School School of Humanities Business::Law Engineering::Computer science and engineering Content Moderation AI and Automation Online service providers, and even governments, have increasingly relied on Artificial Intelligence (‘AI’) to regulate content on the internet. In some jurisdictions, the law has incentivised, if not obligated, service providers to adopt measures to detect, track, and remove objectionable content such as terrorist propaganda. Consequently, service providers are being pushed to use AI to moderate online content. However, content-filtering AI systems are subject to limitations that affect their accuracy and transparency. These limitations open the possibility for legitimate content to be removed and objectionable content to remain online. Such an outcome could endanger human well-being and the exercise of our human rights. In view of these challenges, we argue that the design and use of content-filtering AI systems should be regulated. AI ethics principles such as transparency, explainability, fairness, and human-centricity should guide such regulatory efforts. Nanyang Technological University We thank Micron Technology and the NTU Institute of Science and Technology for Humanity, an interdisciplinary research institute at Singapore’s Nanyang Technological University (‘NTU’), for funding the research that underpins this paper. 2023-09-18T05:50:57Z 2023-09-18T05:50:57Z 2023 Journal Article Marsoof, A., Luco, A., Tan, H. & Joty, S. (2023). Content-filtering AI systems–limitations, challenges and regulatory approaches. Information and Communications Technology Law, 32(1), 64-101. 
https://dx.doi.org/10.1080/13600834.2022.2078395 1360-0834 https://hdl.handle.net/10356/170526 10.1080/13600834.2022.2078395 2-s2.0-85130732105 1 32 64 101 en Information and Communications Technology Law © 2022 Informa UK Limited, trading as Taylor & Francis Group. All rights reserved. |
spellingShingle | Business::Law Engineering::Computer science and engineering Content Moderation AI and Automation Marsoof, Althaf Luco, Andrés Tan, Harry Joty, Shafiq Content-filtering AI systems–limitations, challenges and regulatory approaches |
title | Content-filtering AI systems–limitations, challenges and regulatory approaches |
title_full | Content-filtering AI systems–limitations, challenges and regulatory approaches |
title_fullStr | Content-filtering AI systems–limitations, challenges and regulatory approaches |
title_full_unstemmed | Content-filtering AI systems–limitations, challenges and regulatory approaches |
title_short | Content-filtering AI systems–limitations, challenges and regulatory approaches |
title_sort | content filtering ai systems limitations challenges and regulatory approaches |
topic | Business::Law Engineering::Computer science and engineering Content Moderation AI and Automation |
url | https://hdl.handle.net/10356/170526 |
work_keys_str_mv | AT marsoofalthaf contentfilteringaisystemslimitationschallengesandregulatoryapproaches AT lucoandres contentfilteringaisystemslimitationschallengesandregulatoryapproaches AT tanharry contentfilteringaisystemslimitationschallengesandregulatoryapproaches AT jotyshafiq contentfilteringaisystemslimitationschallengesandregulatoryapproaches |