Four ethical priorities for neurotechnologies and AI

Image caption: A man with a spinal-cord injury prepares for a virtual cycle race in which competitors steer avatars using brain signals.

Authors: Rafael Yuste, Sara Goering, Blaise Agüera y Arcas, Guoqiang Bi, Jose M. Carmena, Adrian Carter, Joseph J. Fins, Phoebe Friesen, Jack Gallant, Jane E. Huggins, Judy Illes, Philipp Kellmeyer, Eran Klein, Adam Marblestone, Christine Mitchell, Erik Parens, Michelle Pham, Alan Rubel, Norihiro Sadato, Laura Specker Sullivan, Mina Teicher, David Wasserman, Anna Wexler, Meredith Whittaker, Jonathan Wolpaw

Publication: Nature

Date: November 8, 2017

DOI: 10.1038/551159a

Introduction

Consider the following scenario. A paralysed man participates in a clinical trial of a brain–computer interface (BCI). A computer connected to a chip in his brain is trained to interpret the neural activity resulting from his mental rehearsals of an action. The computer generates commands that move a robotic arm. One day, the man feels frustrated with the experimental team. Later, his robotic hand crushes a cup after taking it from one of the research assistants, and hurts the assistant. Apologizing for what he says must have been a malfunction of the device, he wonders whether his frustration with the team played a part.

This scenario is hypothetical. But it illustrates some of the challenges that society might be heading towards.

Current BCI technology is mainly focused on therapeutic outcomes, such as helping people with spinal-cord injuries. It already enables users to perform relatively simple motor tasks — moving a computer cursor or controlling a motorized wheelchair, for example. Moreover, researchers can already interpret a person's neural activity from functional magnetic resonance imaging scans at a rudimentary level [1] — that the individual is thinking of a person, say, rather than a car.

It might take years or even decades until BCI and other neurotechnologies are part of our daily lives. But technological developments mean that we are on a path to a world in which it will be possible to decode people's mental processes and directly manipulate the brain mechanisms underlying their intentions, emotions and decisions; where individuals could communicate with others simply by thinking; and where powerful computational systems linked directly to people's brains aid their interactions with the world such that their mental and physical abilities are greatly enhanced.

Such advances could revolutionize the treatment of many conditions, from brain injury and paralysis to epilepsy and schizophrenia, and transform human experience for the better. But the technology could also exacerbate social inequalities and offer corporations, hackers, governments or anyone else new ways to exploit and manipulate people. And it could profoundly alter some core human characteristics: private mental life, individual agency and an understanding of individuals as entities bound by their bodies.

It is crucial to consider the possible ramifications now.

The Morningside Group comprises neuroscientists, neurotechnologists, clinicians, ethicists and machine-intelligence engineers. It includes representatives from Google and Kernel (a neurotechnology start-up in Los Angeles, California); from international brain projects; and from academic and research institutions in the United States, Canada, Europe, Israel, China, Japan and Australia. We gathered at a workshop sponsored by the US National Science Foundation at Columbia University, New York, in May 2017 to discuss the ethics of neurotechnologies and machine intelligence.

We believe that existing ethics guidelines are insufficient for this realm [2]. These include the Declaration of Helsinki, a statement of ethical principles first established in 1964 for medical research involving human subjects (go.nature.com/2z262ag); the Belmont Report, a 1979 statement crafted by the US National Commission for the Protection of Human Subjects of Biomedical and Behavioural Research (go.nature.com/2hrezmb); and the Asilomar artificial intelligence (AI) statement of cautionary principles, published early this year and signed by business leaders and AI researchers, among others (go.nature.com/2ihnqac).

To begin to address this deficit, here we lay out recommendations relating to four areas of concern: privacy and consent; agency and identity; augmentation; and bias. Different nations and people of varying religions, ethnicities and socio-economic backgrounds will have differing needs and outlooks. As such, governments must create their own deliberative bodies to mediate open debate involving representatives from all sectors of society, and to determine how to translate these guidelines into policy, including specific laws and regulations.

Read the full publication.