Unsettling research advances bring neuroethics to the fore.

[Image caption: Optical stimulation of light-responsive neurons in engineered mice can be used to create false memories.]

The false mouse memories made the ethicists uneasy. By stimulating certain neurons in the hippocampus, Susumu Tonegawa and his colleagues caused mice to recall receiving foot shocks in a setting in which none had occurred. Tonegawa, a neuroscientist at the Massachusetts Institute of Technology in Cambridge, says that he has no plans ever to implant false memories into humans — the study, published last month, was designed only to offer insight into memory formation. But the experiment has nonetheless alarmed some neuroethicists. “That was a bell-ringer, the idea that you can manipulate the brain to control the mind,” says James Giordano, chief of neuroethics studies at Georgetown University in Washington DC. He says that the study is one of many raising ethical concerns, and more are sure to come as an ambitious, multi-year US effort to parse the human brain gets under way.
The BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative will develop technologies to understand how the brain’s billions of neurons work together to produce thought, emotion, movement and memory. But, along with the discoveries, it could force scientists and society to grapple with a laundry list of ethical issues: the responsible use of cognitive-enhancement devices, the protection of personal neural data, the prediction of untreatable neurodegenerative diseases and the assessment of criminal responsibility through brain scanning. On 20 August, US President Barack Obama’s commission on bioethics will hold a meeting in Philadelphia, Pennsylvania, to begin to craft a set of ethics standards to guide the BRAIN project. There is already one major mechanism for ethical oversight in US research: institutional review boards, which must approve any studies involving human subjects. But many ethicists say that as neuroscience discoveries creep beyond laboratory walls into the marketplace and the courtroom, more comprehensive oversight is needed. “The long-term consequences of more brain knowledge — whether it’s good for an ethnic group or threatens your personal identity — there’s sort of no one in charge of that,” says Arthur Caplan, director of medical ethics at New York University’s Langone Medical Center.
Tonegawa’s study adds to the growing evidence that memories are surprisingly pliable. In the past few years, researchers have shown that drugs can erase fearful memories or disrupt alcoholic cravings in rodents. Some scientists have even shown that they can introduce rudimentary forms of learning during sleep in humans. Giordano says that dystopian fears of complete human mind control are overblown. But more limited manipulations may not be far off: the US Defense Advanced Research Projects Agency (DARPA), one of three government partners in the BRAIN Initiative, is working towards ‘memory prosthetic’ devices to help soldiers with brain injuries to regain lost cognitive skills. Deep brain stimulation (DBS), in which implants deliver simple electrical pulses, is another area that concerns neuroethicists. The devices have been used since the 1990s to treat motor disorders such as Parkinson’s disease, and are now being tested in patients with psychiatric conditions such as obsessive–compulsive disorder and major depression. Giordano says that applying DBS technology more widely requires ethical care. “We’re dealing with things affecting thought, emotion, behaviour — what people hold valuable as the essence of the self,” he says.
Neuroethicists are noticing challenges beyond the medical system, too, particularly in the courtroom. Judy Illes, a neurology researcher at the University of British Columbia in Vancouver, Canada, and co-founder of the International Neuroethics Society, says that brain imaging could affect the criminal-justice system by changing definitions of personal responsibility. Patterns of brain activity have already been used in some courtrooms to assess the mental fitness of the accused. Some ethicists worry that an advanced ability to map human brain function might be used to measure an individual’s propensity for violent or aberrant behaviour — or even, one day, to predict it. At next week’s meeting, the presidential commission will hear from each of the US agencies involved in the BRAIN Initiative — DARPA, the National Institutes of Health and the National Science Foundation — about preliminary scientific plans and anticipated ethical issues. Lisa Lee, the commission’s executive director, says that the group plans to discuss broad ethical concerns for human and animal participants in neuroscience research, and also the societal implications of discoveries that could arise from the BRAIN Initiative. Although no specific timeline has been set, the commission typically holds three to four meetings over a period of up to 18 months, culminating in recommendations to the President.
As neuroethicists wade into the issues, they may look to the precedent set by the Human Genome Project’s Ethical, Legal and Social Implications (ELSI) research programme, which has provided about US$300 million in study support over 23 years. The programme raised the profile of genetic privacy issues and laid the foundations for the Genetic Information Nondiscrimination Act of 2008, which prohibits discrimination by employers and health insurers on the basis of genetic information. Thomas Murray, one of the architects of ELSI and president emeritus of the Hastings Center, a bioethics research institute in Garrison, New York, is among the speakers invited to the commission meeting. He considers the BRAIN Initiative a timely opportunity to develop an ELSI programme for neuroscience. “There will be wonderful questions about human responsibility, human agency,” he says. “It’s never too soon to begin.”
September 3, 2013
Originally published by Nature.