Is the Artificial Intelligence Act Trustworthy?

Scholars argue that the European Union’s proposed legislation will be inadequate to develop trust in AI.

Would you be comfortable paying for a public service that deployed artificial intelligence (AI) you did not trust? The European Union’s (EU) proposed AI Act assumes the answer is “no.” In adopting the Act, the EU aims to promote technology uptake by addressing risks associated with the deployment of AI.

By regulating AI’s risks, the proposed law seeks to increase trust in this digital technology, which could boost the consumption of goods and services and ultimately foster greater innovation. But in a recent paper, professors Johann Laux, Sandra Wachter, and Brent Mittelstadt of the University of Oxford critique the Act’s approach, arguing that its risk-based framework will be inadequate to foster the necessary trust in AI, especially for government use. The authors recommend that regulators instead adopt a more participatory approach to public accountability for AI use by the public sector.

Through the proposed AI Act, the EU would establish standards based on whether particular uses of AI are expected to pose unacceptable, high, or low risks, as judged against legal, ethical, and social norms. Based on these categorizations, the Act would prohibit some uses of the technology outright and permit others subject to differing degrees of restraint.

Laux, Wachter, and Mittelstadt note that, by classifying AI risks in tiers, the EU seeks to increase trust in AI systems by making certain risks more acceptable. They emphasize, however, that other factors can increase public trust in AI technology, especially when it is used by government entities. These other factors include promoting government efficiency in delivering services and improving how institutions perform their duty to protect citizens from risks caused by AI systems.

Moreover, the public sector’s use of AI poses other distinct challenges, according to Laux, Wachter, and Mittelstadt. When the government deploys AI in public services, for example, people have little choice but to be exposed to it. In addition, governments often fund those services with tax revenue, which taxpayers are obliged to pay. Going forward, the authors argue, government actors should provide more clarity around their use of AI to foster public trust in the technology and to bolster their own authority and legitimacy.

Creating trust in AI systems, however, is a challenging endeavor. After conducting a systematic review of 71 academic articles on AI and trust, Laux, Wachter, and Mittelstadt conclude that “trust and trustworthy AI” are affected by heterogeneous factors. Furthermore, the authors explain that the underpinnings of public trust in technology vary widely according to conditions in different markets and sectors, making it difficult to regulate AI effectively through a single, uniform set of rules.

To Laux, Wachter, and Mittelstadt, the literature reveals that algorithmic accountability takes many different forms. These include screening for biases, creating rights to challenge automated decisions and to obtain redress, promoting transparency, and providing explanations of AI-based decisions. They also suggest that accountability requires a participatory framework “that puts humans in the decision loop” to oversee the technology and ensure control of personal data.

But Laux, Wachter, and Mittelstadt also note that, when it comes to public sector use of AI, equally important are institutional trust, which they define as the legitimacy of governmental institutions, and interpersonal trust, which they define as the implicit trust placed in the individual stakeholders involved in designing and deploying AI systems.

The latter factor, interpersonal trust, is vital because it also bears on the type of intermediary bodies that regulators could place in the technology’s decision-making loop. On this issue, Laux, Wachter, and Mittelstadt criticize the proposed EU AI Act’s technocratic approach to assessing risk. Under the proposed law, that task would fall to AI developers, who would ultimately determine the trustworthiness of AI.

Regulators’ current reliance on expert judgment conflicts with the more participatory approach that Laux, Wachter, and Mittelstadt recommend. They argue that a participatory approach to AI risk assessment would be more likely to establish citizens’ trust in AI systems. Putting laypeople on a board charged with AI risk assessment, for example, could be crucial for enhancing trust in the intermediary institutions responsible for this task, the authors contend.

Laux, Wachter, and Mittelstadt argue that developing proper regulations will be central to ensuring that these intermediary institutions deliver accountable work. Such accountability, they add, should help prevent any “capture” of auditors by industry interests.

Laux, Wachter, and Mittelstadt warn that the AI Act’s approach of establishing trustworthiness through risk assessment is too simplistic because trust has different dimensions and varies across communities, sectors, and circumstances. They conclude that building trust in public sector use of AI will require the European Parliament to develop a framework that goes beyond the minimum requirements of the EU’s AI Act.