Three pathways for standardisation and ethical disclosure by default under the European Union Artificial Intelligence Act

Bibliographic Details
Main Authors: Laux, J, Wachter, S, Mittelstadt, B
Format: Internet publication
Language: English
Published: SSRN 2023
Description
Summary: Under its proposed Artificial Intelligence Act (‘AIA’), the European Union seeks to develop harmonised standards involving abstract normative concepts such as transparency, fairness, and accountability. Applying such concepts inevitably requires answering “hard normative questions”, i.e., endorsing specific interpretations or theoretical approaches, or specifying acceptable or preferred trade-offs between competing interests. In light of these normative challenges, we argue that there are three possible pathways for future standardisation under the AIA.

First, European standard-setting organisations (‘SSOs’) could answer hard normative questions themselves. This approach would raise concerns about democratic legitimacy: standardisation is a largely technical discourse which tends to exclude non-expert stakeholders and the public at large.

Second, instead of passing their own normative judgments, SSOs could track the normative consensus they find available, deferring to norms posited in documents with a higher pedigree of democratic legitimacy, such as national and international laws. By analysing the standard-setting history of one major SSO, we show that such consensus tracking has historically been its pathway of choice. If standardisation under the AIA took the same route, we demonstrate how this would create a false sense of safety, as the process is not infallible. Trends identified through this analysis suggest that future standardisation work on AI by European SSOs is unlikely to produce detailed, specific, and coherent normative thresholds and requirements for the development and use of AI in the European Union. Consensus tracking would, moreover, push the need to solve unavoidable normative problems further down the line: instead of regulators, AI developers and/or users could define what, for example, fairness requires within a concrete implementation of AI. Through the institutional design of its AIA, the European Commission would essentially have kicked the ‘AI Ethics’ can down the road.

So as not to miss the opportunity to address hard normative questions around AI, we therefore suggest a third pathway which aims to avoid the pitfalls of the previous two: SSOs should create standards which require “ethical disclosure by default.” These standards would specify minimum technical testing, documentation, and public reporting requirements to shift ethical decision-making to local stakeholders and to limit provider discretion in answering hard normative questions in the development of AI products and services. Compliance would require AI providers to furnish relevant third parties with a standardised set of test results and documentation, enabling local decision-makers to set normative requirements in a procedurally consistent way. Our proposed pathway is about putting the right information in the hands of the people with the legitimacy to make complex normative decisions at a local, context-sensitive level.