GDPR and Artificial Intelligence

Europe’s General Data Protection Regulation could impact AI startups that depend heavily on data.

According to some estimates, developments in artificial intelligence (AI) could boost global GDP by 14 percent by 2030, or $15.7 trillion in absolute terms. Seeking to capture gains from this economic growth, governments worldwide have been competing to support AI development and adoption.

But that growth may be affected by the way governments regulate AI and the large volumes of digitally stored data on which AI depends. In 2018, the European Union (EU) introduced what has been described as “the toughest privacy and security law in the world,” the General Data Protection Regulation (GDPR). The GDPR enshrines a series of data protection principles and regulates entities that “process the personal data of EU citizens or residents” or “offer goods or services to such people,” regardless of whether such entities are located within the EU. To encourage compliance, the GDPR allows each EU member state’s data protection authority—the “independent public authorities that supervise” GDPR application—to fine violators the greater of 20 million euros “or 4 percent of the firm’s worldwide annual revenue from the preceding financial year.”

Some commentators assert that various GDPR provisions are affecting the development of AI startups, and technology firms more generally, within countries in the EU. For instance, the GDPR’s Article 22 covers “automated individual decision-making, including profiling.” Some scholars assert that this provision could lead AI companies to limit activities such as offering customers loans or to implement additional and expensive human review of AI-powered decisions.

Others, such as Kalliopi Spyridaki, chief privacy strategist at SAS Institute Inc., argue that although the GDPR may sometimes limit or complicate how AI technologies use data, the GDPR could also “help create the trust that is necessary for AI acceptance by consumers and governments.”

This week’s Saturday Seminar rounds up selected research and commentary on the GDPR and how it may affect the future of AI.

  • In a recent policy brief from the School of Transnational Governance, Maciej Kuziemski of the University of Sussex and Przemyslaw Palka of Yale Law School propose three ways that policymakers can best regulate AI. First, Kuziemski and Palka argue that policymakers should encourage “compliance-centered innovation” in AI. Next, they suggest “empowerment of civil society through AI,” and they specifically advocate encouraging citizen engagement and preventing market dominance. Finally, they “recommend the creation of permanent groups facilitating dialogue between different regulatory agencies and policy-making bodies” to create tailored regulations for different uses and types of AI.
  • As AI and machine learning evolve, regulators seek to protect the public without stifling innovation. Because these technologies rely on ever-growing volumes of data, laws such as the GDPR could limit AI development. In a recent paper, Joel Thayer of Phillips Lytle LLP and Bijan Madhani of the Computer & Communications Industry Association consider whether compliance with the GDPR is even possible for companies developing and using machine learning and AI. They argue that the GDPR articulates four rights that could pose a significant challenge to AI development: the right against automated decision-making, the right to erasure, the right to data portability, and the right to explanation.
  • AI startups rely on data to train the algorithms that power their offerings. In a recent article, Boston University’s James Bessen and Lydia Reichensperger and New York University’s Stephen Impink and Robert Seamans survey the impacts of the GDPR on AI startups both in and out of Europe. Bessen and his coauthors find that AI startups have reallocated resources to comply with the GDPR, which could harm their potential for eventual success. In addition, they report that, in response to the GDPR, a majority of respondents have deleted data, which could in turn slow their ability to innovate.
  • Although the GDPR and its companion Directive on Data Protection in Criminal Matters “clearly give the right to the data subject not to be subjected to a fully automated decision, including profiling, the exceptions to this right hollow it out to the extent that the exceptions themselves become a rule,” Maastricht University’s Maja Brkan argues. Brkan suggests that these weaknesses become even more apparent where “the member states or the Union itself might provide for further exceptions to allow for a broader use of automated decision-making.” Brkan further argues that “data subjects should have the right to familiarize themselves with the reasons why a particular decision was taken” to protect themselves using the GDPR, but the Directive on Data Protection in Criminal Matters “does not provide for such a right, which puts into question the compatibility of its provision on automated decision-making with the EU Charter of Fundamental Rights.”
  • Focusing on the GDPR’s Article 22 and the right to an explanation, the University Carlo Cattaneo’s Elena Falletti argues that, to be appropriate, the measures called for by this provision require human intervention—that is, “someone who has the necessary authority, ability, and competence to modify or revise the decision disputed by the user.” Falletti also addresses the idea that in striving to provide transparency, explanations of technical subject matter such as AI “may not be sufficient if the information received is not comprehensible to the recipient.” Falletti asserts that rather than explaining how an algorithm works, firms should provide comprehensible information and describe the relative weight placed on the different inputs behind a decision.