Should Pandora’s Brain Be Regulated?

Attempts at regulating machines with human-like intelligence should be of pressing concern.

The creation of human-like intelligence in a non-biological being would be the greatest achievement in human history.

Many neuroscientists, computer engineers, and physicists say that just such an achievement will happen. Although their estimates of when this will occur vary, many experts make the case that it is a question of decades, not centuries. In the run-up to this development—the implications of which can hardly be imagined—what role should, or could, regulatory regimes play?

The technical pathway to creating a machine with general intelligence—that is, a machine that possesses the cognitive capabilities of human beings, such as reasoning, attention, and working memory—has been described by many scientists since the concept of general intelligence was first theorized by Charles Spearman more than 100 years ago.

One way to understand general intelligence is to contrast human intelligence with that of every other living thing on Earth: humans alone have begun to deliberately modify both their own “hardware” and their own “software.”

Put in terms of computer science, humans have already found ways to modify some of their own hardware, through dental implants, hearing aids, and cardiac pacemakers. The rudiments of cognitive hardware modification are beginning to appear in the form of a few drugs and certain implantable devices in the brain. This ability to modify cognitive hardware may eventually lead to “smart” limb prostheses and brain chips that encode retrievable memory, both concepts of great interest to national security agencies as well as medical scientists.

Human beings have also evolved to have the unique ability to modify their software in light of their experience and to transmit it in signs and symbols. Humans’ nearest evolutionary relatives, the higher primates, also show some ability to learn and adapt, but not to store memory in symbols.

For a long time now, Google has had more memory in its servers than any human brain holds, but Google’s software still does not match humans’ overall cognitive performance.

Some computer scientists say that a system like Google’s will eventually achieve human-like intelligence through the “brute force” of its memory capacity, but they are in a shrinking minority. So far, lacking the advantage of millions of years of evolution, no artificial intelligence (AI) can modify its own software. But a subfield of AI, called machine learning, is now emerging that could well lead to such an advanced AI.

The basic idea of machine learning is that, unlike older computers, a modern device can identify patterns in massive amounts of data and adapt to that data more and more efficiently. This is pretty much what human brains routinely do. For an AI to take the further step of modifying its own software, it would have to have access to its own basic list of instructions, or source code.
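To make that pattern-finding loop concrete, here is a deliberately simple sketch in Python, offered purely as an illustration rather than as a description of any real system: the data, the straight-line model, and the learning rate are all assumptions invented for the example. The program starts with arbitrary parameters and nudges them a little on every pass over noisy data generated from a hidden rule, fitting the pattern better with each iteration.

import random

# Hypothetical data generated from a hidden pattern, y = 3x + 2, plus a little noise.
random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(200)]
data = [(x, 3 * x + 2 + random.gauss(0, 0.1)) for x in xs]

w, b = 0.0, 0.0        # the model's adjustable parameters: its modifiable "software"
learning_rate = 0.1

for epoch in range(100):
    for x, y in data:
        error = (w * x + b) - y          # how far the current guess is from the data
        w -= learning_rate * error * x   # nudge each parameter to shrink that error
        b -= learning_rate * error

print(f"learned pattern: y ≈ {w:.2f}x + {b:.2f}")   # converges toward y = 3x + 2

Nothing in this toy example rewrites its own source code, of course; it only adjusts two numbers inside a fixed program, which is precisely the step no existing AI has taken.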

Self-driving cars are one well-publicized example of machine learning in action, although they are still not advanced enough to interpret all of the clues in settings more complicated than a fairly featureless interstate highway.

A little reflection on the idea of machine learning illuminates why, as soon as a machine reaches human intelligence and can rewrite its own source code, it could write software that no human being could write, thus instantly superseding its creators. A machine with artificial general intelligence could also design new hardware to extend the potential of its software.

In 1993, the mathematician Vernor Vinge coined the term “singularity” to refer to the moment when a machine achieves superhuman intelligence. That moment has almost certainly not yet arrived, though it is hard to say for sure, partly because of disagreement about what Vinge’s term actually means and how humans could recognize the moment if and when it comes.

Participants in discussions about the singularity are divided into techno-optimists and techno-pessimists over what artificial general intelligence would mean for human beings.

Techno-optimists like Ray Kurzweil argue that humans’ greatest challenges, like climate change, could be solved for the common good of the species. Techno-pessimists like Nick Bostrom view AI as an existential threat. Even if the techno-pessimists are wrong, it seems foolish not to take the risk seriously.

This debate leads me to a regulatory gap: There is a great deal of regulation concerning biological experiments that could inadvertently create a “smart” laboratory animal—like putting human-sourced neurons into a non-human primate embryo—but none concerning engineering developments that could lead to the singularity.

Even if the worst fears of the techno-pessimists never materialize, the existence of even a single AI with general intelligence would at the very least challenge the ways human beings have seen their place in creation. And if there is a single machine with artificial general intelligence, then even if the techno-optimists are right, there might well soon be many such machines.

At a certain point, even if these primary fears are never realized, a large number of machines with general intelligence could present another threat to human beings. The only thing that is obvious, at this level of speculation, is that nothing is obvious.

What is certain is that all over the world thousands of engineers are making progress in improving AI almost daily.

Some neuroscientists refuse to concede that human-like intelligence can arise in a non-biological entity. They may be right. But there is also reason to believe that general intelligence is far more likely to be created in a non-biological machine than to emerge in a laboratory animal, even leaving aside the research oversight imposed on animal experiments by agencies like the U.S. Food and Drug Administration. In terms of gross architecture, animals’ skulls are not shaped to accommodate a human brain. Inside-the-brain experiments have found that simply introducing human neural cells into mice may not make them “smarter,” although modifying certain genes does improve their performance on tasks like maze running. Still, those improvements fall within a mouse range, not a human one.

What, then, is to be done from a regulatory standpoint about the prospects of an “emergent” AI with general intelligence? (I use the term “emergent” to allow for the fact that there will still be arguments about whether or not it has taken place even after it has.)

Perhaps some industry standards analogous to the pre-review of clinical trials for new drugs should be crafted, so that industry players would be required to assess the implications of critical steps before taking them. Should some agency like the U.S. Consumer Product Safety Commission be empowered to verify that such standards are being followed? By the time the singularity has been achieved, a recall may be beside the point. At that point, in the words of the Borg in Star Trek, “resistance is futile.”

Jonathan D. Moreno

Jonathan D. Moreno is the David and Lyn Silfen University Professor of Ethics at the University of Pennsylvania Perelman School of Medicine.

This essay is part of a 12-part series, entitled What Tomorrow Holds for U.S. Health Care.