
UN advisory body evaluates opportunities for the international regulation of artificial intelligence.
Use of artificial intelligence (AI) has grown rapidly in recent years, with 72 percent of companies now using AI in at least one business function—up from 20 percent in 2017. But the adoption of such technology comes with risks, and leading figures in the technology sphere have called for greater efforts to mitigate them.
The United Nations Secretary-General’s High-level Advisory Body on AI (HLAB) recently released its final report evaluating how AI may be governed at the global level for the good of humanity. Formed to advise the U.N. on AI governance, the HLAB is made up of 39 members drawn from the ranks of industry, government, and academia.
The HLAB argues that AI has the potential to be used for good, citing biomedical research and energy as examples of sectors that might benefit from its further development. The adoption and development of AI, it asserts, may also boost the productivity of small businesses, fueling economic growth.
But the HLAB points out that there has been unequal distribution of AI development and usage, contending that there is a need for regulation that ensures that “AI is deployed for the common good, and that its opportunities are distributed equitably.”
The HLAB’s survey of experts also reveals concern that expanding use of AI may lead to harms in areas such as information inequality and national security. When weighing potential regulatory options, the HLAB suggests, it is preferable to take a “vulnerability-based approach” that categorizes AI-related risks according to who might be affected. Explaining that such an approach allows the perspectives of vulnerable groups to be included in evaluations of AI regulation, the HLAB stresses that regulatory agendas must be inclusive in their coverage.
Although AI is global in nature, current regulations related to AI are usually enacted at the national or regional level, the HLAB notes. The development of AI frequently draws on resources from international sources, and the deployment of AI products is similarly worldwide. This global character, the HLAB argues, necessitates a global approach to AI regulation, since national-level rules cannot satisfactorily mitigate the “downstream impacts” of AI.
Although the HLAB recognizes that there have been large-scale initiatives to improve AI regulation, it assesses these efforts as not “truly global in reach.” Lack of global coordination, it suggests, may lead to a lack of coherence and compatibility between varying AI governance regimes. Furthermore, this lack of a global approach may be worsened by the exclusion of certain countries from existing regional initiatives on AI regulation, including those organized by groupings such as the Association of Southeast Asian Nations. Additional effort is required to ensure that no countries lag behind in accessing the benefits of AI, it contends.
Other gaps identified by the HLAB are in the areas of coordination and implementation. Pointing out that most initiatives are specific to certain domains of AI, the HLAB suggests that the nature of AI requires a “transversal approach” that looks at the technology as a whole.
The legitimacy stemming from the U.N.’s inclusive nature, the HLAB argues, makes it well positioned to close gaps by facilitating the governance of AI at the international level.
First, the HLAB contends that the fast pace of development in the AI sector makes it necessary to develop a “common understanding of its capabilities, opportunities, risks, and uncertainties” that would inform the development of regulation. This need would be best addressed, the HLAB suggests, by creating a scientific panel of experts on AI, similar to the Intergovernmental Panel on Climate Change. Reports written by the panel would then provide independent advice to inform an AI governance agenda, through channels such as the HLAB’s proposed biannual policy dialogue on AI governance.
The HLAB also recommends a greater emphasis on harmonizing AI standards. Its proposed AI Standards Exchange, a forum for stakeholders in the AI space to collaborate, would be tasked with reducing inconsistencies between AI standards in different jurisdictions to improve interoperability between differing national-level regulations.
Finally, in recognition of the importance of data to the development of AI, the HLAB recommends the establishment of common international standards for AI training data. This framework, the HLAB envisions, would not just regulate the management of data used for AI training, but also provide guidance on how diversity can be promoted through the use of such data. Such a framework would allow AI to be regulated as a whole, rather than through rules focused on discrete aspects of the technology.
AI has many benefits to offer, concludes the HLAB, but harnessing them will require a more deliberate global approach. To that end, the HLAB has sought to make recommendations that will be inclusive, and serve as a springboard for further collaboration among stakeholders.