Gene-therapy trials must proceed with caution

Jesse Gelsinger was 18 and healthy when he died in 1999 during a gene-therapy experiment. He had a condition called ornithine transcarbamylase deficiency (OTC), but it was under control through a combination of diet and medication. Like others with the disorder, Gelsinger lacked a functional enzyme involved in breaking down ammonia, a waste product of protein metabolism that is toxic at high levels. The gene therapy that he received used a viral vector to introduce a normal copy of the gene for the enzyme.

Gene therapy remains an obvious route to treat OTC. Simply adding the missing gene has been shown to repair metabolism in mice. But the memory of what happened to Gelsinger has slowed progress in gene therapy for any condition.

That memory was firmly on the agenda at a meeting of the US National Institutes of Health’s Recombinant DNA Advisory Committee (RAC) last week. The RAC evaluates proposals to use modified DNA in human trials, and presenting to it were Cary Harding, a medical geneticist at Oregon Health and Science University in Portland, and Sam Wadsworth, chief scientific officer at Dimension Therapeutics in Cambridge, Massachusetts. The duo were proposing the first new trial of gene therapy for OTC.

Harding and the researchers at Dimension argue that the technology and our understanding of physiology have advanced enough since 1999 to try it again in people. Gelsinger died after his body overreacted to the vector used to introduce the OTC gene. Dimension’s therapy uses a different viral vector, called AAV8, which has been tested numerous times in people with other conditions, with few adverse effects.

Such assurances were not enough for the RAC, and particularly not for its bioethicists and historians. Dawn Wooley, a virologist at Wright State University in Dayton, Ohio, pointed out that an RAC panel raised concerns about Gelsinger’s trial in 1995, but decided to let the test go ahead. “We can’t let it happen again, we cannot,” she says.

Perhaps the greatest indication of how Gelsinger’s death haunts the RAC came when one member suggested that the researchers explain in the consent form to be sent to prospective participants that someone had died in a similar study and attracted media attention.

There are some scientific reasons to be careful. AAV8 can cause mild liver toxicity in healthy people, and the steroids used to treat that could lead to complications in people with OTC. With so little known about these effects, the RAC members suggested that the researchers lower the dose to one that is more likely to be safe, even if it is potentially not effective.

After some discussion, the RAC voted unanimously to approve the trial. However, that came with a long list of conditions, including that the treatment first be tested in a second animal species. The researchers disagree with most of the conditions, believing that more expensive animal trials will add nothing. They feel that they are being held to a different standard from most trials.

Dimension still plans to submit an application to the US Food and Drug Administration (FDA) later this year to start a clinical trial. It is unclear how heavily the RAC’s recommendations factor into FDA decisions, but Wadsworth says that the company will conduct its trials overseas if necessary. “These patients have been waiting a long time,” he says.

He is right. Therapies can be tested in non-human animals only for so long — at some point, volunteers such as Gelsinger must step forward. Yet the echoes of a trial done 17 years ago cannot be easily silenced. In fact, Gelsinger’s name came up several times at the RAC meeting. Researchers from the University of Pennsylvania in Philadelphia had even mentioned him earlier that morning, when proposing the first human trial of CRISPR gene-editing technology as a treatment for cancer. The RAC approved that proposal, but its implication was clear: take care. Avoidable failures could stymie CRISPR research for decades. History must not repeat itself.

Nature 534, 590 (30 June 2016) doi:10.1038/534590a


Mothers will do anything to protect their children, but mongooses go a step further

Mongooses risk their own survival to protect their unborn pups through a remarkable ability to adapt their own bodies, according to new research published in Frontiers in Ecology and Evolution.

Pregnancy can take a physical toll that, according to some theories, may increase the mother’s levels of toxic metabolites that cause oxidative damage.

Increased oxidative damage can cause complications during pregnancy, but these results show how some mammals have evolved to specifically minimize such damage, albeit only temporarily.

“We think mother mongooses shield their offspring by reducing their own levels of oxidative damage during breeding,” explained Dr. Emma Vitikainen of the Centre for Ecology and Conservation at the University of Exeter, lead author of the study. “However, she could be trading her own long-term well-being for the short-term benefit of protecting the growing pups.”

Vitikainen and her colleagues followed groups of wild banded mongooses over five years, measuring markers of oxidative damage as well as the animals’ health and survival. Oxidative damage is a normal byproduct of metabolism throughout an animal’s lifespan, but the team found that pregnant mongooses showed lower-than-expected levels of damage, contradicting current theories that damage increases during pregnancy.

The mongooses with the least evidence of oxidative damage were also the most successful at reproducing. They had the largest litters of pups, and these pups had higher chances of surviving to independence. Not all mongooses showed the same protective capabilities, however: mongooses with more oxidative damage produced pups with lower survival rates, while also being in poorer health themselves.

This shielding effect may be partially explained by changes in the composition of the mothers’ blood, but the details are not yet fully understood. “Our study shows that mothers might be adjusting their physiology,” said Vitikainen. “It would be quite a remarkable adaptation.”

Vitikainen also found that this effect was only temporary and that oxidative damage returned to normal levels after pregnancy. This suggests that the protective mechanisms during pregnancy may be unsustainable and that they have long-term, potentially harmful, consequences for the mother’s survival.

“An important subject for future research is to determine whether the changes that happen in pregnant mothers are there to benefit the mother, child, or both,” continued Vitikainen. “If there are negative consequences for mothers, we’d like to understand how they could be mitigated.”


Why women earn less: Just two factors explain post-PhD pay gap

Women earn nearly one-third less than men within a year of completing a PhD in a science, technology, engineering or mathematics (STEM) field, suggests an analysis of roughly 1,200 US graduates.

Much of the pay gap, the study found, came down to a tendency for women to graduate in less-lucrative academic fields — such as biology and chemistry, which are known to lead to lower post-PhD earnings than comparatively industry-friendly fields, such as engineering and mathematics.

But after controlling for differences in academic field, the researchers found that women still lagged men by 11% in first-year earnings. That difference, they say, was explained entirely by the finding that married women with children earned less than men. Married men with children, on the other hand, saw no disadvantage in earnings.
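The logic of controlling for field can be illustrated with a toy calculation: compare earnings within each field rather than across the whole sample, so that differences in field composition cannot drive the gap. This is a simplified stratified comparison, not the regression analysis the study actually used, and all salary figures below are invented for illustration:

```python
# Toy illustration of 'controlling for field': the raw gap mixes two
# effects (pay differs by field, and fields differ in gender mix);
# comparing within fields removes the composition effect.
records = [
    # (field, gender, salary) -- invented numbers, arbitrary units
    ("engineering", "M", 100), ("engineering", "F", 95),
    ("engineering", "M", 104),
    ("biology", "M", 62), ("biology", "F", 58),
    ("biology", "F", 60), ("biology", "F", 57),
]

def mean(xs):
    return sum(xs) / len(xs)

def raw_gap(rows):
    """Overall male mean minus female mean, pooled across all fields."""
    m = [s for f, g, s in rows if g == "M"]
    w = [s for f, g, s in rows if g == "F"]
    return mean(m) - mean(w)

def within_field_gap(rows):
    """Average of the male-female gap computed inside each field."""
    fields = {f for f, g, s in rows}
    gaps = []
    for field in fields:
        m = [s for f, g, s in rows if f == field and g == "M"]
        w = [s for f, g, s in rows if f == field and g == "F"]
        gaps.append(mean(m) - mean(w))
    return mean(gaps)
```

In this invented sample the raw gap is large mostly because more of the men are in the higher-paying field; the within-field gap, the analogue of controlling for field, is much smaller.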

Many studies have reported similar gender pay gaps and have identified similar contributing factors — but few have systematically broken down the relative contributions of different variables, says Bruce Weinberg, an economist at the Ohio State University in Columbus who led the study, published in the May issue of American Economic Review. “I was quite surprised that we could explain the wage gap using just field of study and family structure,” he says.

An unmarried, childless woman earned — on average — the same annual salary after receiving her doctorate as a man with a PhD in the same field, the researchers found. The study examined the employment and earnings of 867 men and 370 women who graduated between 2007 and 2010 from four universities.

Weinberg says that the data cannot identify or tease apart factors that might explain why married women with children earn less — among the possibilities, whether employers assign different responsibilities and salaries to these women, or whether the women spend less time or energy on their careers. But, he says, “our data suggest that these positions, as they are currently structured and operate, are not fully family-friendly for women”.

The findings support earlier research that suggests that parental and household responsibilities often affect women disproportionately, particularly in environments without adequate work–life and family policies, says Heather Metcalf, director of research and analysis for the US Association for Women in Science (AWIS) in Alexandria, Virginia.

The analysis is part of the UMETRICS project, based at the University of Michigan in Ann Arbor, which links anonymized census data on employment and income to student information from a consortium of universities, mainly in the midwestern United States.

Mary Ann Mason, a law professor at the University of California, Berkeley, says that the work is a “good, careful study”, albeit limited in that it cannot yet provide information on what happened in later postdoc years. Research by Mason and others suggests that women who have young children within 5–10 years after earning their PhDs are less likely to have tenure-track jobs or to hold tenured faculty positions than men or women without children, for instance.

An important missing piece, says economist Shulamit Kahn at Boston University in Massachusetts, is whether the women and men in the study worked equal numbers of hours. Kahn’s research suggests that, outside of academia, female scientists tend to work slightly fewer hours than do their male counterparts. (That paper did not examine scientists’ family status).

Weinberg says that the team is working to expand and extend the project, first by securing participation from more universities. He hopes, eventually, to be able to track doctoral recipients over the first 5–10 years of their post-PhD careers.

Nature doi:10.1038/nature.2016.19950


Few studies focus on threatened mammalian species that are ‘ugly’

Many Australian mammal species of conservation significance have attracted little research effort, little recognition and little funding, new research shows. Overlooked, non-charismatic species such as fruit bats and tree rats may be those most in need of scientific and management attention.

Investigators looked at research publications concerning 331 Australian terrestrial mammal species that broadly fall into categories they labeled the ‘good’ (native monotremes and marsupials such as kangaroos, echidnas, and koalas), the ‘bad’ (introduced and invasive species such as rabbits and foxes), and the ‘ugly’ (native bats and rodents).

Studies on the ‘good’ were mostly on their physiology and anatomy, with less ecological focus. The ‘bad’ have been the subject of ecological research and methods and technique studies for population control. Despite making up 45% of the 331 species studied, the ‘ugly’ have attracted little study.

“We know so little about the biology of many of these species. For many, we have catalogued their existence through genetics or taxonomic studies, but when it comes to understanding what they eat, their habitat needs, or how we could improve their chances of survival, we are very much still in the dark,” said Dr. Patricia Fleming, lead author of the Mammal Review article. “These smaller animals make up an important part of functioning ecosystems, a role that needs greater recognition through funding and research effort.”


NIH to retire all research chimpanzees

Fifty animals held in “reserve” by the US government will be sent to sanctuaries.

The US National Institutes of Health (NIH) once maintained a colony of roughly 350 research chimpanzees. Two years after retiring most of them, the agency is ceasing its chimp programme altogether, Nature has learned.

In a 16 November e-mail to the agency’s administrators, NIH director Francis Collins announced that the 50 NIH-owned animals that remain available for research will be sent to sanctuaries. The agency will also develop a plan for phasing out NIH support for the remaining chimps that are supported by, but not owned by, the NIH.

“I think this is the natural next step of what has been a very thoughtful five-year process of trying to come to terms with the benefits and risks of trying to perform research with these very special animals,” Collins said in an interview with Nature. “We reached a point where in that five years the need for research has essentially shrunk to zero.”

Many advocates of animal research are unhappy with the plan. “Given NIH’s primary mission to protect public health, it seems surprising,” says Frankie Trull, president of the Foundation for Biomedical Research in Washington DC.

The NIH retired about 310 chimpanzees in 2013, in line with recommendations from an internal advisory panel that drew on advice from the then-US Institute of Medicine (now the US National Academy of Medicine). The agency maintained a colony of 50 “reserve” animals that could be used only in cases where the research met a very high bar, such as public-health emergencies.

The US Fish and Wildlife Service (FWS) added a second, separate bar to chimp research in June, when it gave research chimps endangered-species protection. This prevents scientists from stressing chimps unless the FWS determines that the work would benefit wild chimpanzees. Researchers have, however, been able to continue some non-invasive behavioural research with the NIH chimps and others.

Trull sees the NIH’s latest move as inconsistent with the logic that drove it to keep a group of reserve chimps. “I don’t understand the decision of ‘we’re going to take that resource away forever’,” she says.

But Stephen Ross, an animal behaviour specialist who served on the 2013 advisory panel, says that from the beginning, the NIH considered retirement of the reserve animals to be an option if researchers were not using them. “It’s clear that chimpanzees are not a needed resource in the biomedical research world,” says Ross, who works at the Lincoln Park Zoo in Chicago, Illinois.

According to Collins, the NIH has received only one application since 2013 to use chimps in research; that application was later withdrawn. The FWS has not received any requests for research exemptions since the endangered-species protection took effect earlier this year.

The NIH’s first priority, Collins writes, will be to transfer 20 NIH-owned chimps from the Southwest National Primate Research Center in San Antonio, Texas, to Chimp Haven, a government-funded sanctuary in Keithville, Louisiana. Next will be the 139 chimps at a facility in Bastrop, Texas, which is owned by the University of Texas MD Anderson Cancer Center.

He told Nature that the NIH will also address the fate of 82 agency-supported chimps that are housed at the Southwest National Primate Research Center, although it has not made any decisions yet. NIH currently pays for these animals’ maintenance, but does not own them.

Collins has asked the NIH’s Division of Program Coordination, Planning and Strategic Initiatives to develop plans for this process. Few sanctuaries can accept research chimps, and many of the ones that can — including Chimp Haven — are nearly full. Collins says that Chimp Haven has, however, immediately opened 25 spots for retired NIH chimps.

The NIH decision essentially ends chimpanzee research at MD Anderson’s Keeling Center for Comparative Medicine and Research in Bastrop, says the centre’s director, Christian Abee, because its chimps are owned by the government. The centre performs only behavioural and observational research.

“If these chimpanzees are moved to Chimp Haven, these facilities will be empty, while Chimp Haven will have to build more facilities,” says Abee, who notes that the NIH helped to pay for the construction of the Bastrop centre — and that MD Anderson has committed to spend US$500,000 on renovations to the centre, in part to accommodate some of the NIH’s 50 reserve chimps.

“This decision demonstrates a fundamental lack of understanding of the quality of care and the quality of life provided chimpanzees at the Keeling Center.”

Collins says that a 2013 law passed by Congress requires NIH to move chimps to federal sanctuaries, of which Chimp Haven is the only accredited facility. “Bastrop does have many positives and I’m sympathetic with their question,” he says. “But right at the moment we’re bound by the law.”

That law presents major problems for the NIH, because Chimp Haven is nearly out of space. If the agency decides that more retirement facilities are needed, it will need to find the money to pay for them. Collins says that the NIH is still discussing how to house its retired chimps, especially since the need for sanctuary space will eventually shrink as the animals die.

Animal-rights activists are thrilled by Collins’s move to retire the NIH chimp colony. “Experimenting on chimpanzees is ethically, scientifically and legally indefensible and we are relieved and happy that NIH is fulfilling its promise to finally end this dark legacy,” says Justin Goodman, director of laboratory investigations at the US organization People for the Ethical Treatment of Animals (PETA). “We will continue to encourage the same considerations be made for all primates in laboratories.”

The group has been pushing the NIH to end primate research. On 20 October, it delivered letters to about 100 people in Collins’s home neighbourhood, asking them to approach him to protest a set of experiments at the NIH that involve removing baby monkeys from their mothers to study stress in infants.

Allyson Bennett, a developmental psychobiologist at the University of Wisconsin–Madison, questions the decision to move the chimps from research facilities to sanctuaries, which are not subject to the same strict oversight and welfare standards that govern NIH-supported centres. She adds that moving the animals to new facilities may create more stress for them.

And researchers who use chimps for conservation work — which may be allowable under the FWS rule — are furious about the NIH decision. Peter Walsh, a disease ecologist at the University of Cambridge, UK, has been leading an effort to develop an Ebola vaccine for wild chimps. He says that the FWS endangered-species protection has jeopardized his work at the University of Louisiana at Lafayette’s New Iberia Research Center, which houses chimps that are neither owned nor supported by the NIH.

The NIH decision further narrows the possibilities for conservation research, Walsh adds, in part because Gabon is the only country other than the United States that allows such work.

“There really is no other place to do conservation-related trials but the US biomed facilities,” he says. “A lot of wild chimps died in order to capture infants for originally stocking NIH’s own captive populations, and populations they have long supported financially. Now, the first time that NIH has ever been asked to give anything back to wild chimps, they cut and run.”

Nature doi:10.1038/nature.2015.18817


Illegal trade of Indian star tortoises is a far graver issue than previously thought

Patterned with star-like figures on their shells, Indian star tortoises can be found in private homes across Asia, where they are commonly kept as pets. One can also see them in religious temples, praised as the living incarnation of the Hindu god Vishnu. How did they get there? Suspecting a large-scale illegal international trade that could pose a grave threat to the survival of the Indian star tortoise, a team of researchers led by Dr. Neil D’Cruze of the Wildlife Conservation Research Unit, University of Oxford, and World Animal Protection, London, spent 17 months investigating the case, focusing on India and Thailand. Their study is published in the open-access journal Nature Conservation.

The study established that at least 55,000 Indian star tortoises are poached over the span of a year from a single trade hub in India. Helped by a number of herpetologists and wildlife-enforcement officials, the researchers uncovered evidence of how sophisticated criminal gangs exploit “legal loopholes”, taking advantage of rural communities and urban consumers in India and other Asian countries.

“We were shocked at the sheer scale of the illegal trade in tortoises and the cruelty inflicted upon them,” comments Dr. Neil D’Cruze. “Over 15 years ago wildlife experts warned that the domestic trade in Indian star tortoises needed to be contained before it could become established as an organised international criminal operation.”

“Unfortunately, it seems that our worst nightmare has come true — sophisticated criminal gangs are exploiting both impoverished rural communities and urban consumers alike,” he also added. “Neither group is fully aware how their actions are threatening the welfare and conservation of these tortoises.”

Although listed as of “Least Concern” on the IUCN Red List when last formally assessed, back in 2000, the Indian star tortoise faces increasing illegal poaching and trade that could put the species at serious risk of extinction. Other dangers of such unregulated activities include the introduction of invasive species and diseases.

Having spent a year among a rural hunter-gatherer community, the researchers documented the collection of at least 55,000 juvenile wild Indian star tortoises between January and December 2014. This is between three and six times more than the last such estimate, made about ten years ago.

Collectors tend to poach juvenile tortoises, but it is not rare for them to also catch adults. Based on the individual’s age and health, the tortoises are later sold to vendors at a price of between 50 and 300 Indian Rupees (INR), or between 1 and 5 USD, per animal. “Therefore, we conservatively estimate (assuming no mortalities) that the collector engagement in this illegal operation has a collective annual value of up to 16,500,000 INR (263,000 USD) for their impoverished communities,” comment the researchers.

Consumers seek the Indian star tortoise either as an exotic pet or for spiritual purposes. With their star-like radiating yellow patterns splashed with black on their shells, these tortoises are not only attractive animals but were also found to be considered a good omen among locals in the Indian state of Gujarat. During their survey, the researchers found over a hundred hatchlings in a single urban household. Their owner claimed that none was kept with commercial intent, although some of the tortoises were meant for close friends and relatives.

On the other hand, there was a case where the researchers came across a Shiva temple hosting a total of eleven Indian star tortoises. Temple representatives there confirmed that the tortoise is believed to represent an incarnation of the Hindu god Vishnu, one of the three central gods in the religion, recognised as the preserver and protector of the universe.

In India, vendors do not display the reptiles in public, but make them available on special request. If paid in advance, a vendor can also supply a larger quantity of animals, at a price ranging from 1,000 to 3,000 INR (15 to 50 USD) per animal. The researchers managed to see seven captive tortoises in private, including six juveniles and one adult, all in visibly poor health. Disturbingly, to reach these vendors, the collected tortoises are usually wrapped in cloths and packed into suitcases. Covered by a ‘mask’ of legal produce such as fruit and vegetables, they are transported to the ‘trade hubs’. They are also smuggled abroad to satisfy consumer demand among the growing middle classes in countries such as Thailand and China.

“Despite being protected in India since the 1970s, legal ‘loopholes’ in other Asian countries such as Thailand and China appear to undermine India’s enforcement efforts,” explains Mr. Gajender Sharma, India director at World Animal Protection. “They are smuggled out of the country in confined spaces; it’s clear there is little or no concern for the welfare of these reptiles.”

“World Animal Protection is concerned about the suffering that these tortoises endure,” he further notes. “We are dealing with an organised international criminal operation which requires an equally organised international approach to combat it.”

As a result of their study, the authors conclude that more research into the illegal trafficking of Indian star tortoises, its effects, and the consumer demand that drives it is urgently needed in order to assess and tackle the issue.


Is the eco-tourism boom putting wildlife in a new kind of danger?

Many tourists today are drawn to the idea of vacationing in far-flung places around the globe where their dollars can make a positive impact on local people and local wildlife. But researchers writing in Trends in Ecology & Evolution on October 9th say that all of those interactions between wild animals and friendly ecotourists eager to snap their pictures may inadvertently put animals at greater risk of being eaten.

It’s clear that the ecotourism business is booming. “Recent data showed that protected areas around the globe receive 8 billion visitors per year; that’s like each human on Earth visited a protected area once a year, and then some!” said Daniel Blumstein of the University of California, Los Angeles. “This massive amount of nature-based and eco-tourism can be added to the long list of drivers of human-induced rapid environmental change.”

Blumstein says the new report sets out “a new way of thinking about possible long-term effects of nature-based tourism and encourages scientists and reserve managers to take into account these deleterious impacts to assess the sustainability of a type of tourism, which typically aims to enhance, not deplete, biodiversity.”

The basic premise of the report is this: human presence changes the way animals act, and those changes might spill over into other parts of their lives. Changes in behavior and activity may put animals at risk in ways that aren’t immediately obvious.

“When animals interact in ‘benign’ ways with humans, they may let down their guard,” Blumstein said. As animals get used to feeling comfortable with humans nearby, they may become bolder in other situations, he says. “If this boldness transfers to real predators, then they will suffer higher mortality when they encounter real predators.”

Eco-tourism is, in this regard, somewhat similar to domestication or urbanization. In all three cases, regular interactions between people and animals may lead to habituation, a kind of taming. Evidence has shown that domesticated silver foxes become more docile and less fearful, a process that results from evolutionary changes but also from regular interactions with humans. Domesticated fish are less responsive to simulated predatory attacks. Fox squirrels and birds that live in urbanized areas are bolder: it takes more to make them flee.

The presence of humans can also discourage natural predators, creating a kind of safe haven for smaller animals that might make them bolder, too. When humans are around, for example, vervet monkeys have fewer run-ins with predatory leopards. In Grand Teton National Park, elk and pronghorn in areas with more tourists spend less time at alert and more time feeding.

The question is to what extent do these more relaxed and bolder behaviors around humans transfer to other situations, leaving animals at greater risk in the presence of their natural predators? And what happens if a poacher comes around?

“We know that humans are able to drive rapid phenotypic change in other species,” the researchers write. “If individuals selectively habituate to humans, particularly tourists, and if invasive tourism practices enhance this habituation, we might be selecting for or creating traits or syndromes that have unintended consequences, such as increased predation risk. Even a small human-induced perturbation could affect the behavior or population biology of a species and influence the species’ function in its community.”

Blumstein says they hope to stimulate more research on the interactions of humans with wildlife. It will “now be essential to develop a more comprehensive understanding of how different species and species in different situations respond to human visitation and under what precise conditions human exposure might put them at risk,” he says.


Biomedical researchers lax in checking for imposter cell lines

More than half of biomedical researchers say that they do not bother to verify the identity of their cell lines, a survey suggests — even though scientists have been warned for years that many studies are undermined because the cells they use are contaminated or mislabelled.

Of the 446 survey respondents, 52% said that they did not bother to authenticate their cells (which involves checking their species, tissue type and sex), although more performed checks for obvious signs of contamination. And of those who did make sure their cells were not mislabelled, slightly under half failed to use the gold-standard DNA-based testing method. The survey was carried out by a task force of researchers concerned about cell authentication, and co-ordinated by the non-profit Global Biological Standards Institute (GBSI) in Washington DC.

Misidentified or contaminated cell lines waste research dollars and hamper the reproducibility of research findings. But many scientists told the GBSI that cost and time constraints deterred them from testing. One-fifth said that they were unaware of the issue, and one-quarter seemed complacent, either seeing no need for testing or believing that they were “careful”, according to the survey results, which are published in the journal BioTechniques. Most of the respondents — half of whom were senior or mid-career scientists — had great faith in their own abilities, with almost three-quarters rating themselves as “either expert or above average cell-culturists”.

Little has changed in cell-line authentication and cell-culture practices in the past decade, says Leonard Freedman, the GBSI’s president, and co-author of the study. (A 2004 survey suggested that one-third of laboratories ran checks on cell lines.) “While support for change is strengthening, the scientific community has still not embraced cell authentication as an expected part of the research process,” says Freedman.

Freedman says that the problem of cell identity is a “microcosm of the bigger problems contributing to data irreproducibility”. He wants journals to refuse to publish papers unless authors describe how they authenticated the cell lines used in their work.

In 2013, Nature journals asked authors to give the source of their cell line and whether it had been authenticated, but an analysis earlier this year of a sample of about 60 submitted papers found that only 10% had authenticated their cell lines.

In response, Nature and associated research journals introduced a policy in May requiring authors to check the cell lines used against a database of almost 500 known misidentified cell lines (provided by the International Cell Line Authentication Committee), and to provide details about the source and testing of the cells.

Ultimately, the problem demands that the entire biomedical community embrace the need to systematically authenticate cell lines, says Freedman. “We must commit sufficient time, resources, and expertise to adequately train and educate scientists in best practices for cell-culture experiments,” he says.

Nature doi:10.1038/nature.2015.18544  Nature  Original web page at Nature


Poorly designed animal experiments in the spotlight

Preclinical research to test drugs in animals suffers from a “substantial” risk of bias because of poor study design, even when it is published in the most-acclaimed journals or done at top-tier institutions, an analysis of thousands of papers suggests. “You can’t rely on where the work was done or where it was published.”

Scientists can take basic steps to avoid possible biases in such experiments, says Malcolm Macleod, a stroke researcher and trial-design expert at the University of Edinburgh, UK. These include randomizing the assignment of animals to the trial’s treatment or control arm; calculating how large the sample needs to be to produce a statistically robust result; ‘blinding’ investigators as to which animals were assigned to which treatment until the end of the study; and producing a conflict-of-interest (COI) statement.

But many published papers make no mention of these methods, according to an analysis that Macleod conducted with Emily Sena, at the University of Edinburgh, and other colleagues. Looking at 2,671 papers from 1992 to 2011 that reported trials in animals, the team found randomization reported in 25%, blinding in 30%, sample-size calculations in fewer than 1% and COI statements in 12%. The papers were not selected at random; they had been included in meta-analyses of experimental disease treatments. Later studies reported randomization, blinding and COI statements at higher rates than did earlier ones, but rates never reached above 45%. “We could clearly be doing a lot better,” Macleod says.

The most-cited scientific journals don’t necessarily publish papers with more robust methods, Macleod adds. In fact, in 2011, the median journal impact factor was lower for studies that reported randomization than for publications that didn’t.

The researchers also looked at papers submitted by leading UK institutions to a national research-quality audit. They found that work done at the University of Oxford, the University of Cambridge, University College London, Imperial College London and the University of Edinburgh reported randomization only 14% of the time, and blinding only 17% of the time where it would have been appropriate. Of more than 1,000 publications, only one reported all four bias-reducing measures.

“Although sobering, the findings of this paper are not a surprise, as they add to the existing body of evidence on the need for more rigorous assessments of the experimental design and methodology used in animal research. This is another wake-up call for the scientific community,” said Vicky Robinson, chief executive of the London-based National Centre for the Replacement, Refinement and Reduction of Animals in Research, in a statement distributed by the UK Science Media Centre.

A separate analysis, also published today, shows that animal-based research on the cancer drug sunitinib is plagued with poor study design. A team at McGill University in Montreal, Canada, analysed the design of 158 published preclinical experiments, finding that none reported blinding or sample-size calculations and only 58 reported randomization. The researchers reported that publications were skewed towards those that reported positive effects — so much so that the team believes that published studies overestimate the effect of sunitinib on cancer by 45%.

Jonathan Kimmelman, the biomedical ethicist who led the work, says that journal editors, referees, institutions and researchers must all take responsibility for the poor quality of reporting and the consequent risk of bias. “There’s plenty of blame to go round,” he says.

But journals have made efforts in the past few years to address the problem. In 2010, researchers published the ARRIVE guidelines for reporting animal research, which many journals have since endorsed, including Cell, Nature and Science Translational Medicine. And Philip Campbell, editor-in-chief of Nature, notes that since 2013 the journal has asked authors of life-sciences articles to include details about experimental and analytical design in their papers and, during peer review, to complete a checklist focusing on often poorly reported methods such as sample size, randomization and blinding.

Nature doi:10.1038/nature.2015.18559


Way for eagles and wind turbines to coexist

Collisions with wind turbines kill about 100 golden eagles a year in some locations, but a new study that maps both potential wind-power sites and the nesting patterns of the birds reveals ‘sweet spots’, where the potential for wind power is greatest and the threat to nesting eagles is lower.

Brad Fedy, a professor in the Faculty of Environment at the University of Waterloo, and Jason Tack, a PhD student at Colorado State University, took nesting data from a variety of areas across Wyoming, created models using a suite of environmental variables, and referenced the results against areas with potential for wind development. The results of their research appear in PLOS ONE.

Increased mortality threatens the future of long-lived species. And when a large bird such as a golden eagle is killed by wind development, the turbine is stopped, causing temporary slowdowns, and the operator can face fines.

“We can’t endanger animals and their habitats in making renewable energy projects happen,” said Professor Fedy, a researcher in Waterloo’s Department of Environment and Resource Studies. “Our work shows that it’s possible to guide development of sustainable energy projects, while having the least impact on wildlife populations.”

Golden eagles are large-ranging predators of conservation concern in the United States. With the right data, stakeholders can use the modelling techniques the researchers employed to reconcile other sustainable energy projects with ecological concerns.

“Golden eagles aren’t the only species affected by these energy projects, but they grab people’s imaginations,” said Professor Fedy. “We hope that our research better informs collaboration between the renewable energy industry and land management agencies.”

An estimated 75 to 110 golden eagles die each year at a wind-power generation operation in Altamont, California. This figure represents about one eagle for every 8 megawatts of energy produced.

Professor Fedy’s map predictions cannot replace on-the-ground monitoring of the risk that wind turbines pose to wildlife populations, though they provide industry and managers with a useful framework for a first assessment of potential development.  Science Daily  Original web page at Science Daily


Where commerce, conservation clash: Bushmeat trade grows with economy in 13-year study

Conservation laws also likely drove increased hunting on Bioko Island in Central Africa.

The bushmeat market in the city of Malabo is bustling–more so today than it was nearly two decades ago, when Gail Hearn, PhD, began what is now one of the region’s longest continuously running studies of commercial hunting activity. At the peak of recorded activity in 2010, on any given day more than 30 freshly killed primates, such as Bioko red-eared monkeys and drills, were brought to market and sold to shoppers seeking these high-priced delicacies.

Hearn’s team has now published the comprehensive results of 13 years of daily monitoring of bushmeat market activity in the journal PLOS ONE. The researchers recorded more than 197,000 animal carcasses for sale during that time and analyzed market patterns in relation to political, economic and legal factors in the country of Equatorial Guinea in central Africa.

Among their notable findings: Bushmeat sales, a proxy for the level of wildlife hunting, increased steadily over the course of the study period, in tandem with increasing economic prosperity. Bushmeat hunting also rose in response to unenforced environmental conservation laws intended to limit the practice. The study and its findings are noteworthy both for the history of the long-running project and the conservation implications of the results.

The Bioko Biodiversity Protection Program (BBPP), a joint venture of Drexel University in Philadelphia and the National University of Equatorial Guinea (UNGE), is a comprehensive program for research, education and biodiversity protection on Bioko Island. Bioko, a volcanic island in the Gulf of Guinea, is located off the coast of Cameroon in central Africa and is part of the nation of Equatorial Guinea. Bioko’s tropical coastal and montane forests form a relatively understudied biodiversity hotspot, a critical site for numerous species of threatened and endangered monkeys, many of which are at risk because of commercial hunting.

Hearn, now an emeritus professor at Drexel University, established BBPP in 1998 and led it until her retirement in 2014. BBPP is now led by Mary Katherine Gonder, PhD, an associate professor in Drexel’s College of Arts and Sciences. Since the start of BBPP, the steady daily monitoring of the commercial bushmeat trade in the capital city of Malabo has formed one of the strongest sources of tangible knowledge about threats to monkeys and other species in the island’s forests.

While Hearn, Gonder and their BBPP colleagues have shared selected data from their bushmeat market surveys with government officials and others over the years, their study’s publication this week is the first time that the full set of data from many years of monitoring–from October 1997 through September 2010–has been made publicly available.

“Every number represents an animal,” said Drew Cronin, PhD, lead author of the new study and a postdoctoral fellow in Gonder’s lab, who earned his doctorate at Drexel under Hearn. The count of carcasses in the bushmeat market creates an objective, comprehensible representation of the losses to Bioko’s forest ecosystem for those with the power to intervene to protect species at risk from overhunting. Their count included carcasses of over 35,200 monkeys; nearly 59,000 wild ungulates; over 4,100 birds; and over 80,900 rodents.

The dynamics in how many animals were sold in the bushmeat market over time, and under what economic, political and legal conditions, are as important as the raw numbers. These trends can inform management and enforcement efforts, both in Bioko and in other places where economic and legal considerations influence the trade and overconsumption of wildlife.

One major trend that Cronin and colleagues found is that bushmeat hunting and availability increased in parallel with economic growth during the 13-year monitoring period. Concurrent with that growth, the dominant method used to capture animals for bushmeat shifted from trapping to shotguns, contributing to more hunting of endangered monkeys, which are predominantly killed by shotgun, in the later years of the study.

Cronin notes that the relationship between economic growth and bushmeat sales on Bioko reflects the nature of bushmeat consumption there. “This bushmeat trade is being largely driven by urban consumers in Malabo who don’t need to eat wildlife to survive,” he said. “There has been a considerable amount of economic development on Bioko, which has resulted in readily accessible alternative protein sources, such as chicken, fish and pork, throughout much of the island, but especially in Malabo. Despite this, most of the valuable bushmeat is being brought to the city and sold.”

Another major finding is that the legal protections Equatorial Guinea enacted in 2007 to limit hunting and sales of primates, the species most highly threatened by the bushmeat trade on the island, were not upheld–and even backfired to the point where bushmeat hunting actually increased. Legal protections of species are necessary to limit hunting, Cronin noted, but they are not sufficient without strong governmental support and enforcement of those laws.  Science Daily


Human embryos are at the centre of a debate over the ethics of gene editing

In the wake of the first ever report that scientists have edited the genomes of human embryos, experts cannot agree on whether the work was ethical. They also disagree over how close the methods are to being an option for treating disease. The work in question was led by Junjiu Huang, a gene-function researcher at Sun Yat-sen University in Guangzhou, China. His team used a technique called CRISPR/Cas9 to cut and replace DNA in non-viable embryos that could not result in a live birth because they were created from eggs that had been fertilized by two sperm. They published a paper in Protein & Cell, which was reported by Nature’s news team on 22 April, confirming rumours that had been circulating for months that scientists were applying such gene-editing techniques to human embryos.

In March, the rumours prompted calls for a moratorium on such research: work in human embryos is contentious because, in principle, any genetic changes will be passed to future generations, a scenario known as germline modification. Some feel that Huang’s group has already crossed an ethical line. “No researcher has the moral warrant to flout the globally widespread policy agreement against altering the human germline,” Marcy Darnovsky, executive director of the non-profit Centre for Genetics and Society in Berkeley, California, wrote in a statement. But whether the experiments carried out by Huang count as germline modification is not straightforward, because the embryos could not have led to a live birth. “It’s no worse than what happens in IVF all the time, which is that non-viable embryos are discarded,” says John Harris, a bioethicist at the University of Manchester, UK. “I don’t see any justification for a moratorium on research,” he adds. Huang says that he chose non-viable embryos to avoid ethical concerns.

Others say that modifying germline cells could be acceptable if it is solely for the purposes of research. George Daley, a stem-cell biologist at Harvard Medical School in Boston, Massachusetts, points out that using CRISPR/Cas9 and other gene-editing tools in human embryos, eggs and sperm could answer plenty of basic scientific questions that have nothing to do with clinical applications. Moreover, a moratorium may be an unrealistic goal. Modifying human embryos is legal in China and in many US states, although the US National Institutes of Health (NIH) forbids the use of federal funds for such research. Asked whether Huang’s study would have been allowed under its rules, the NIH says that it “would likely conclude it could not fund such research” and is watching the technology to see whether its rules need to be modified.

Another point of contention is that the gene editing by Huang and his colleagues had a low success rate. The CRISPR/Cas9 system was supposed to cut and replace only a gene responsible for a blood disorder. But his team reported that the genome had acquired mutations in many other places too — which could introduce further health problems in a viable embryo. Furthermore, the technique failed entirely in most of the embryos they experimented on. Edward Lanphier, president of Sangamo BioSciences in Richmond, California, who co-authored a 19 March Comment piece in Nature calling for a halt on such research, says that these technical challenges point to the immaturity of the field and thus support arguments for a moratorium on all human germline-modification research. “I think the paper itself actually provides all of the data that we kind of pointed to,” he says. But George Church, a geneticist at Harvard Medical School, disagrees that the technology is that immature. He says that many of the problems with the latest work, such as the off-target mutations in DNA, could have been avoided or lessened had the researchers used the most up-to-date CRISPR/Cas9 methods.

Even if there are side effects, it may still be ethical to allow the technique to become available in clinics, says Harris. To justify banning gene editing for safety reasons, he says, one would not only need to have a reason to think that it will be harmful, but also that this harm would be worse than the genetic disease itself. “It’s not as if the alternative is safe,” he says. “People with genetic diseases are going to go on reproducing.” He likens the concerns to avoiding a surgery because of fear of complications. Hank Greely, a bioethicist at Stanford University in California, notes that there will be different degrees of safety concerns. A situation in which embryos have difficulty implanting and developing in the uterus, for instance, might engender different ethical questions from one in which there is a significant chance that a modification would result in a disability. Greely is not surprised by the disagreements. “I do think it points out and increases the urgency towards getting a conversation going about this,” he says. “A consensus is probably too much to hope for.”

Nature doi:10.1038/nature.2015.17410


Scientific publishing: The inside track

Members of the US National Academy of Sciences have long enjoyed a privileged path to publication in the body’s prominent house journal. Meet the scientists who use it most heavily.

The building for the National Academy of Sciences was completed in 1924 as a “home of science in America”. The academy’s house journal was established a decade earlier, in part, as a home for members’ papers. In April, the US National Academy of Sciences elected 105 new members to its ranks. Academy membership is one of the most prestigious honours for a scientist, and it comes with a tangible perk: members can submit up to four papers per year to the body’s high-profile journal, the venerable Proceedings of the National Academy of Sciences (PNAS), through the ‘contributed’ publication track. This unusual process allows authors to choose who will review their paper and how to respond to those reviewers’ comments.

For many academy members, this privileged path is central to the appeal of PNAS. But to some scientists, it gives the journal the appearance of an old boys’ club. “Sound anachronistic? It is,” wrote biochemist Steve Caplan of the University of Nebraska, Omaha, in a 2011 blogpost that suggested the contributed track could be used as a “dumping ground” for some papers. Editors at the journal have strived to dispel that perception.

With PNAS currently celebrating its centenary, the news team at Nature decided to examine the contributed track, both to assess its scientific impact and to see which members use it most heavily and why. After analysing a decade’s worth of PNAS papers, we found that only a small number of scientists have used the track at close to the maximum allowable rate. The group includes some of the biggest names in science, and six are past or current members of the journal’s editorial board. These scientists say that the main motivator for using the contributed track is an intense frustration with the peer-review process at other high-profile journals, which they argue has become excessive and laborious.

Our analysis also suggests that the efforts by PNAS to prevent abuse of the contributed track and to boost the quality of papers published by this route are bearing fruit. Although contributed PNAS papers attract fewer citations than those handled through the journal’s standard review process, the gap has narrowed in recent years. “We have worked really hard at this,” says Alan Fersht, a biophysicist at the University of Cambridge, UK, one of PNAS’s associate editors and a heavy user of the contributed track.

An inside track to publication for academy members rests deep in PNAS’s DNA. The journal was established in 1914 with the explicit goal of publishing members’ “more important contributions to research” in addition to “work that appears to a member to be of particular importance”. That remit led to the creation of two publishing tracks: contributed and ‘communicated’ papers (manuscripts sent by non-members to colleagues in the academy, who would shepherd them through review). These two tracks were the only ways to get a paper into PNAS until 1995, when biochemist Nicholas Cozzarelli of the University of California, Berkeley, took over as editor-in-chief and introduced ‘direct submissions’, which are handled more like papers at other journals. Direct submissions must pass an initial screen by a member of the editorial board, after which they are assigned to an independent editor — either an academy member or a guest editor — who organizes peer review.

Starting in 1972, the journal placed limits on the number of contributed papers that an academy member could submit, and the current annual cap of four was imposed in 1996. Then in 2010, PNAS abolished the communicated track, which was already declining in popularity. Today, more than three-quarters of the papers published in the journal are direct submissions. These papers are much less likely to be accepted than those contributed by academy members. Only 18% of direct submissions were published in 2013, whereas more than 98% of contributed papers were published, according to figures on the journal’s website. (The one caveat is that PNAS has no data on how many papers intended for the contributed track receive negative reviews and never get submitted.)

Despite the impressive acceptance rate for contributed papers, the data collected show that many eligible scientists choose not to submit papers through this track. Of the more than 3,100 academy members who could have used the contributed track between 2004 and 2013, fewer than 1,400 scientists did so. (This might in part reflect where researchers from different fields prefer to publish their work; the academy draws its members from all disciplines, including researchers from fields such as astronomy and mathematics, who rarely send their papers to PNAS.)

Most members who used the contributed track did so sparingly: the majority published on average fewer than one contributed paper per year. Only a small group consistently used the track at close to the allowable maximum: from 2004 to 2013, 13 scientists each contributed more than 30 of their own papers. This roster includes some of the best-known people in contemporary science.

Nature, July 22, 2014


How Australia’s Outback got one million feral camels: Camels culled on large scale

A new study has shed light on how the wild camels thriving in Australia’s remote outback, an estimated one million strong, have become reviled as pests and culled on a large scale. Camels played a significant role in the establishment of Australia’s modern infrastructure, but rapidly lost their economic value in the early part of the 20th century and were either shot or released into the outback.

Sarah Crowley, of the Environment and Sustainability Institute at the University of Exeter’s Penryn Campus, explored the history of the camel in Australia, from its historic role helping to create the country’s infrastructure through to its current status as unwelcome “invader.”

The deserts of the Australian outback are a notoriously inhospitable environment where few species can survive. But the dromedary camel (Camelus dromedarius) prospers where others perish, eating 80% of native plant species and obtaining much of its water through ingesting this vegetation. Yet for numerous Australians, particularly ranchers, conservation managers, and increasingly local and national governments, camels are perceived as pests and extreme measures — including shooting them with rifles from helicopters — are being taken to reduce their population.

In her article, published in the journal Anthrozoös, Crowley proposes that today’s Australian camels exemplify the idea of “animals out of place” and discusses how they have come to inhabit this precarious position. She said: “Reports estimate there are upwards of a million free-ranging camels in Australia and predict that this number could double every eight years. As their population burgeons, camels encroach more frequently upon human settlements and agricultural lands, raising their media profile and increasing local animosity toward them.”

The camel was first brought to Australia in the 1800s when the country was in the midst of a flurry of colonial activity. The animals were recognized by pioneers as the most appropriate mode of transport for the challenging environment because they require significantly less water, feed on a wider variety of vegetation, and are capable of carrying heavier loads than horses and donkeys. Camels therefore played a significant role in the establishment of Australia’s modern infrastructure, including the laying of the Darwin-Adelaide Overland Telegraph Line and the construction of the Transnational Railway. Once this infrastructure was in place, however, and motorized transport became increasingly widespread, camels were no longer indispensable. In the early part of the 20th century they rapidly lost their economic value and their displaced handlers either shot their wards or released them into the outback where, quite discreetly, they thrived. It was not until the 1980s that surveys hinted at the true extent of their numbers, and only in 2001 that reports of damage caused by camels were brought to the general populace.

Camels are not the most dainty of creatures. Dromedaries are on average six feet tall at the shoulder, rendering cattle fencing no particular obstacle to their movement. By some accounts, camels may not even see small fences and consequently walk straight through them. Groups of camels arriving on agricultural properties and settlements in Australia, normally in times of severe drought, can also cause significant damage in their search for water.

In 2009, a large-scale culling operation began. There were objections from animal welfare groups and some landowners who were concerned that the method of culling from helicopters, leaving the bodies to waste, is inhumane. Most objectors, however, were primarily concerned that culling is economically wasteful and felt that the camels should be mustered for slaughter or export.

There are also concerns regarding the global environment, as camels may contribute to the desertification of the Australian landscape. Like ruminants, they also produce methane, adding to Australia’s carbon emissions. Crowley does not question the accuracy or significance of this, but points out that the environmental impact of even 1,000,000 feral camels pales in comparison to that of the 28,500,000 cattle currently residing in the country. Still, when dust storms gathered over Sydney in 2009, media reports implied that the camel was the culprit.

Camels have in recent times been referred to in Australia as “humped pests,” “a plague,” a “real danger” and “menacing,” and their actions described as “ravaging” and “marauding.” Crowley added: “These terms show how camels have suddenly been attributed agency — their crossing of acceptable human boundaries is somehow deemed purposeful and rebellious. These accusations lie in stark contrast to the praise laid upon those dromedaries who assisted colonists in the exploration and establishment of modern Australia, and highlight how temporal changes in culture — specifically, shifting economic and environmental values — have affected human interpretations of the presence, purpose, and even behavior of Australian camels.”

Science Daily, May 13, 2014


Canadian grizzly bears face expanded hunt

Data on grizzly bears in British Columbia are not reliable enough to justify higher hunting quotas, researchers argue.

As the Canadian province of British Columbia prepares to open its annual grizzly-bear hunting season, conservation scientists are protesting the provincial government’s decision to expand the number of animals that can be killed. British Columbia officials estimate that there are 15,000 grizzlies (Ursus arctos horribilis) in the province, making up roughly one-quarter of the North American population. Although some sub-populations are declining and the species is listed as of “special concern” by some environmental bodies, it is not listed under Canada’s Species at Risk Act, which would afford the bears government protection. Citing the recovery of some sub-populations, the government has opened up previously closed areas to hunting and increased the number of hunting tags for bear kills from about 1,700 to 1,800. But some researchers say that the original limits for the bear hunt were set too high for sustainable management, and the revised quota could exacerbate that problem. “Wildlife management wraps itself in science and presents itself as being scientific, but really, when you examine it, it isn’t true,” says Paul Paquet, a biologist at the Raincoast Conservation Foundation in Sidney and the University of Victoria, Canada, and a co-author of a letter in Science this week making the complaint.

The allowance is much higher than the actual kill rate — about 300 grizzlies are taken by hunters each year in the province, mainly as trophies — but Paquet and other conservation scientists argue that it is still possible that grizzly bears are dying at a rate that is too high for sub-populations to support. “They’re going in the wrong direction,” says Kyle Artelle, a conservation ecologist at Simon Fraser University in Burnaby, Canada, and a co-author of the letter. Last year, Artelle and his colleagues reported that it is common for more bears to die than the government’s stated “maximum allowable mortality rate” of 6% of the population per year. In more than half of British Columbia’s 42 huntable regions, the number of deaths from ‘unnatural causes’, such as road accidents and hunting, exceeded that target for at least one three-year period between 2001 and 2011. The researchers conclude that reducing the risk of such ‘overkills’ to a low level would require an 81% reduction in the target. “Because these are long-lived, slow-reproducing populations, they don’t necessarily recover from overkill,” says Paquet.

Garth Mowat, a biologist with British Columbia’s ministry of forests, lands and natural-resource operations, counters that the 6% target was never meant to be a hard cap. “We choose a conservative number because we know we’re going to go over it occasionally,” he says. “I think [the quotas] are as good as we can do with the data we have, and based on all that, the hunt is sustainable.”

Artelle disagrees that a 6% allowable mortality figure is conservative. He points out that other studies have come up with estimates of 0–5% for British Columbia. And although a December 2013 study by Mowat and his colleagues concluded that there are about 13,000–14,000 grizzlies in the province, Paquet says that the number could be as low as 8,000 or higher than 15,000. The data behind such estimates, which come from sources ranging from aerial surveys to traps that snag the hair of passing bears, are often sparse or outdated, he says. “In many cases [the population estimate] will be based on assumptions that are maybe 10 years old. None of this is easy, obviously. But we need to take account of the uncertainties,” he says.

The Convention on International Trade in Endangered Species of Wild Fauna and Flora has banned the import of products from grizzly hunts in British Columbia to Europe, citing the province’s failure to implement a grizzly bear strategy it proposed in 2003, which called for better population assessments, among other things. “In the United States, there’s recourse to courts,” says Paquet, who notes that there are frequent legal battles over US hunting and the country’s Endangered Species Act. “In Canada there’s essentially no appeal.”

Nature doi:10.1038/nature.2014.14914

April 1, 2014  Original web page at Nature



Charismatic mammals can help guide conservation

Formula combines flagship species with lesser-known groups to measure value of hotspots. Does highlighting the plight of charismatic species help conservation efforts as a whole? Lions, elephants and other charismatic species are not by themselves good indicators of biodiversity hotspots. But a new analysis suggests that studies of tourist-pleasing big mammals can be part of a cocktail of indicators that produce useful maps for conservation planning. Scientists at conservation organizations often focus their research on large, interesting animals that the public — and donors — love, such as pandas, tigers and gorillas. One rationale is that because many of these ‘charismatic megafauna’ thrive only in large, rich, biodiverse areas, their distribution can act as a proxy for the diversity of whole ecosystems, from microbes up, which is extremely difficult to measure. Conservationists have argued that actions intended to preserve one iconic animal can have an ‘umbrella effect’ and save less-glamorous species that thrive in its shadow. However, some studies have cast serious doubt on the reality of the umbrella effect. A 1998 review by Daniel Simberloff, a biologist now at the University of Tennessee in Knoxville, noted that “whether many other species will really fall under the umbrella is a matter of faith rather than research”. And a report in 2000 found that maps of the ranges of the ‘Big Five’ African mammals popular with tourists — lions (Panthera leo), leopards (Panthera pardus), elephants (Loxodonta africana), African buffalo (Syncerus caffer) and rhinos (Diceros bicornis and Ceratotherium simum) — were “not significantly better for representing the diversity of mammals and birds than choosing areas at random”. So Enrico Di Minin and Atte Moilanen, population biologists at the University of Helsinki, decided to construct a formula that would combine the ranges of the Big Five with other information to make truly useful maps.
Their analysis appeared on 9 December in the Journal of Applied Ecology.

The duo focused on KwaZulu-Natal, a South African province long known to be a biodiversity hotspot, where the Big Five roam among forests, thickets, bushveld and grasslands. The researchers made thousands of maps at 200 × 200 metre resolution using 662 biodiversity measures, each describing the distribution of a habitat type or of a species. They considered species that conservationists care about most: the endangered, the rare and especially the endemic, meaning the plants and animals that live in KwaZulu-Natal and nowhere else. Di Minin and Moilanen found that the distributions of the Big Five, on their own, did not do a great job of predicting where one might find high biodiversity for other species. In particular, the areas with lots of the charismatic mammals were not necessarily the same places that were rich in invertebrates, reptiles, amphibians or plants. But the researchers also created maps that overlapped several layers of data, showing the distribution of the Big Five as well as those of key birds, reptiles and amphibians. Moreover, they added a layer of information concerning the diversity of habitat types within each unit of surface area they considered. They found that, for a given amount of land, areas that included as much of this diversity as possible also included a high percentage of the area’s plant and invertebrate diversity.
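
The layered-map idea can be illustrated with a toy prioritization: treat each grid cell as a set of ‘features’ (the species and habitat types present) and pick the cells that add the most new features for a fixed land budget. This greedy sketch is a deliberately simplified stand-in for the authors’ analysis, and every cell and feature name below is invented.

```python
# Greedy complementarity-based site selection: a toy version of choosing
# areas that capture the most biodiversity 'layers' per unit of land.
# Cells and features are hypothetical.

def prioritize(cells, budget):
    """cells: dict of cell_id -> set of features; budget: cells to pick."""
    chosen, covered = [], set()
    remaining = dict(cells)
    for _ in range(budget):
        if not remaining:
            break
        # Choose the cell that adds the most not-yet-covered features.
        best = max(remaining, key=lambda c: len(remaining[c] - covered))
        chosen.append(best)
        covered |= remaining.pop(best)
    return chosen, covered

cells = {
    "A": {"lion", "elephant", "grassland"},
    "B": {"lion", "frog", "orchid"},
    "C": {"beetle", "orchid"},
    "D": {"elephant", "grassland"},
}
print(prioritize(cells, budget=2))
```

Note that cell D, whose features are already covered by cell A, is passed over even though it holds two charismatic-megafauna proxies: complementarity across layers, not charisma alone, drives the selection.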

Thus, even in places — and there are many — where data about plants and invertebrates are lacking, information on charismatic megafauna, supplemented by information on additional animal groups and habitat types, may be a reasonable surrogate for all the rest of biodiversity, from bugs to trees to moulds to microbes. The “more layers” approach to measurements of biodiversity seems to work in every land- and seascape, says ecological modeller Hugh Possingham of the University of Queensland in Brisbane, Australia. “There are now many surrogacy studies like this one. If you add more layers you get a better result. If you have got more data, use it,” he says. But why use charismatic megafauna at all, if these species are so bad at predicting where less-alluring biodiversity is found? Di Minin says that a map is more useful when it explicitly includes the economically important large animals. “A big proportion of the tourists visiting South Africa are attracted by the big guys. These guys are generating a lot of cash,” he says. The important question, he adds, is “how can we use them to protect more biodiversity?”

January 21, 2014



Medics should plan ahead for incidental findings

US bioethics commission weighs in on debate over how scientists and companies should handle inadvertent discoveries in diagnostic tests. Doctors, researchers and companies should expect to find information they were not looking for in genetic analyses, imaging scans and other tests, concludes a report from the US Presidential Commission for the Study of Bioethical Issues. Moreover, medics and investigators should discuss with patients and research volunteers how these potentially serious findings will be handled before the tests are carried out. The advice given by the report echoes previous recommendations in specific fields. Still, researchers say it is a useful summary of basic overarching principles for grappling with ‘incidental findings’ that occur when a test ordered for one purpose uncovers information about another, unrelated health risk. “They get to the heart of what needs to be done, and there is a need to codify this,” says James Evans, a geneticist at the University of North Carolina in Chapel Hill, of the panel’s recommendations. Although incidental findings have always occurred in medicine and research, the rise of increasingly powerful tests that can reveal large amounts of information about a person’s risk factors has posed urgent ethical questions about how to deal with such data. Amy Gutmann, who leads the presidential commission, says new technology and other factors “make the likelihood of discovering incidental findings in the clinic, in research and in the commercial direct-to-consumer context a growing certainty”.

“This is an issue that affects everybody,” adds Gutmann, who is also president of the University of Pennsylvania in Philadelphia. Guidelines issued by the American College of Medical Genetics and Genomics (ACMG) in March recommended that doctors who order genetic sequencing for their patients for any reason should also specifically look for unrelated mutations in dozens of named genes, and should tell patients about medical risks associated with these genes. The new report endorses the idea of professional societies developing such guidance, but also says that patients should not have to learn of incidental findings if they do not want to — an important point for those bioethicists who disagreed with the ACMG’s recommendation that doctors could override patient consent in some circumstances. “It is absolutely critical that they endorsed the notion that patients and research participants have a far-reaching right to choose not to get results,” says Ellen Wright Clayton, a paediatrician and lawyer at the Center for Biomedical Ethics and Society at Vanderbilt University in Nashville, Tennessee. The report also says that researchers do not have a duty to seek out health-related findings in people who enrol in their research studies, because that could prove too costly and because researchers may not be adequately trained to find and interpret such results. Yet the analysis concludes that researchers should plan for how they will handle incidental findings — both those that can be anticipated and those that are unexpected — that do arise.

And it advises federal agencies to “continue to evaluate regulatory oversight” of direct-to-consumer testing companies, seemingly supporting actions such as that taken last month by the US Food and Drug Administration, which sent a warning letter to genetic-testing company 23andMe in Mountain View, California. The agency said that 23andMe has not provided information showing that its service is safe and effective. But by attempting to provide advice for such a wide range of settings, the presidential commission may have missed an opportunity to provide more detailed and helpful guidance, says ethicist Alex John London at Carnegie Mellon University in Pittsburgh, Pennsylvania. “Although there are many valuable recommendations in the report, it is not clear to me that such a highly general treatment will significantly advance the state of the debate,” he says. For instance, the report says that researchers may exclude research participants who don’t want to receive incidental findings from their studies, but also that there may be circumstances in which they should override a research participant’s preference not to receive such findings. “If there are cases where the researcher’s duty to look after the welfare of participants is so strong that it overrides the participant’s decision not to learn about incidental findings, then what we need are policies that provide some guidance about how to identify such cases and then to disclose such policies to participants before testing,” London says.

January 7, 2014



Just thinking about science triggers moral behavior

Psychologists find deep connection between scientific method and morality. Public opinion towards science has made headlines over the past several years for a variety of reasons — mostly negative. High-profile cases of academic dishonesty and disputes over funding have left many questioning the integrity and societal value of basic science, while accusations of politically motivated research fly from left and right. There is little doubt that science is value-laden. Allegiances to theories and ideologies can skew the kinds of hypotheses tested and the methods used to test them. These, however, are errors in the application of the method, not the method itself. In other words, it’s possible that public opinion towards science more generally might be relatively unaffected by the misdeeds and biases of individual scientists. In fact, given the undeniable benefits that scientific progress has yielded, associations with the process of scientific inquiry may be quite positive. Researchers at the University of California, Santa Barbara set out to test this possibility. They hypothesized that there is a deep-seated perception of science as a moral pursuit — its emphasis on truth-seeking, impartiality and rationality privileges collective well-being above all else. Their new study, published in the journal PLOS ONE, argues that the association between science and morality is so ingrained that merely thinking about it can trigger more moral behavior.

The researchers conducted four separate studies to test this. The first sought to establish a simple correlation between the degree to which individuals believed in science and their likelihood of enforcing moral norms when presented with a hypothetical violation. Participants read a vignette of a date-rape and were asked to rate the “wrongness” of the offense before answering a questionnaire measuring their belief in science. Indeed, those reporting greater belief in science condemned the act more harshly. Of course, a simple correlation is susceptible to multiple alternative explanations. To rule out these possibilities, Studies 2-4 used experimental manipulations to test whether inducing thoughts about science could influence both reported, as well as actual, moral behavior. All made use of a technique called “priming” in which participants are exposed to words relevant to a particular category in order to increase its cognitive accessibility. In other words, showing you words like “logical,” “hypothesis,” “laboratory” and “theory” should make you think about science and any effect the presentation of these words has on subsequent behavior can be attributed to the associations you have with that category.

Participants first completed a word scramble task during which they either had to unscramble some of these science-related words or words that had nothing to do with science. They then either read the date-rape vignette and answered the same questions regarding the severity of that transgression (Study 2), reported the degree to which they intended to perform a variety of altruistic actions over the next month (Study 3), or engaged in a behavioral economics task known as the dictator game (Study 4). In the dictator game the participant is given a sum of money (in this case $5) and told to divide that sum however they please between themselves and an anonymous other participant. The amount that participants give to the other is taken to be an index of their altruistic motivation. Across all these different measures, the researchers found consistent results. Simply being primed with science-related thoughts increased a) adherence to moral norms, b) real-life future altruistic intentions, and c) altruistic behavior towards an anonymous other. The conceptual association between science and morality appears strong. Though this finding replicates across different measures and methods, there’s one variable that might limit the generalizability of the effect. There is some evidence suggesting that attitudes towards science vary across political parties, with conservatives having become less trusting of science over the past several decades. Though the researchers did include measures of religiosity in their studies, which did not affect the relationship between science and morality, ideally they would have also controlled for political affiliation. It’s not a stretch to imagine that undergraduate students at the University of California, Santa Barbara disproportionately represent liberals. If so, the relationship between science and morality found here might be stronger in self-described liberals.

That said, there’s also reason to believe that the general public, liberal or conservative, can draw a distinction between the scientific process and its practitioners. In the same way that people might mistrust politicians but still see nobility in the general organizing principles of our political structure, we could hold charitable views of science independent of how it might be conducted. These results might seem encouraging, particularly to fans of science. But one possible cost of assigning moral weight to science is the degree to which it distorts the way we respond to research conclusions. When faced with a finding that contradicts a cherished belief (e.g. a new study suggesting that humans have, or have not, contributed to global warming), we are more likely to question the integrity of the practitioner. If science is fundamentally moral, then how could it have arrived at such an offensive conclusion? Blame the messenger. How can we correct this thought process? A greater emphasis on, and better understanding of, the method might do the trick. It’s significantly harder to deny the import of challenging findings when you have the tools necessary to evaluate the process by which scientists arrived at their results. That new study on global warming is tougher to dismiss when you know (and care enough to check) that the methods used are sound, regardless of what you think the authors’ motivations might be. In the absence of such knowledge, the virtue assigned to “science” might also be a motivational force for ideological distortion, the precise opposite of impartial truth-seeking.

September 17, 2013



US behavioural research studies skew positive

Unconscious biases may drive researchers to overestimate their findings. US behavioural researchers have been handed a dubious distinction — they are more likely than their colleagues in other parts of the world to exaggerate findings, according to a study published today. The research highlights the importance of unconscious biases that might affect research integrity, says Brian Martinson, a social scientist at the HealthPartners Institute for Education and Research in Minneapolis, Minnesota, who was not involved with the study. “The take-home here is that the ‘bad guy/good guy’ narrative — the idea that we only need to worry about the monsters out there who are making up data — is naive,” Martinson says. The study, published in Proceedings of the National Academy of Sciences, was conducted by John Ioannidis, a physician at Stanford University in California, and Daniele Fanelli, an evolutionary biologist at the University of Edinburgh, UK. The pair examined 82 meta-analyses in genetics and psychiatry that collectively combined results from 1,174 individual studies. The researchers compared meta-analyses of studies based on non-behavioural parameters, such as physiological measurements, to those based on behavioural parameters, such as progression of dementia or depression.

The researchers then determined how well the strength of an observed result or effect reported in a given study agreed with that of the meta-analysis in which the study was included. They found that, worldwide, behavioural studies were more likely than non-behavioural studies to report ‘extreme effects’ — findings that deviated from the overall effects reported by the meta-analyses. And US-based behavioural researchers were more likely than behavioural researchers elsewhere to report extreme effects that deviated in favour of their starting hypotheses. “We might call this a ‘US effect,’” Fanelli says. “Researchers in the United States tend to report, on average, slightly stronger results than researchers based elsewhere.” This ‘US effect’ did not occur in non-behavioural research, and studies with both behavioural and non-behavioural components exhibited slightly less of the effect than purely behavioural research. Fanelli and Ioannidis interpret this finding to mean that US researchers are more likely to report strong effects, and that this tendency is more likely to show up in behavioural research, because researchers in these fields have more flexibility to make different methodological choices that produce more diverse results.

The study looked at a larger volume of research than has been examined in previous studies on bias in behavioural research, says Brian Nosek, a psychologist at the University of Virginia in Charlottesville. However, he and other researchers say that this study shows only a correlation, so it does not prove that being a behavioural researcher or working in the United States causes the more extreme results. Behavioural studies may report more extreme outcomes because they examine more diverse conditions, researchers argue. “One cannot straightforwardly conclude that the predictors are causes of the outcomes,” Nosek says. “To do an experimental test, we would need random assignment to biological or behavioural research and to US or non-US locations.” Fanelli says that the new paper shows that behavioural research outcomes are more variable than in another field — genetics — which has tighter methodological standards. A key question raised by this study, Fanelli says, is why such differences lead more often towards favourable extreme results in the United States. “Whatever methodological choices are made, those made by researchers in the United States tend to yield subtly stronger supports for whatever hypothesis they test,” Fanelli says.

Fanelli and Ioannidis do not explain why that might be. They found that the ‘small-study effect’, in which overall results are biased towards positive, extreme findings because negative findings from small studies are not published, did not explain their results. “It has to be because of methodological choices made before the study is submitted,” Fanelli says, possibly under pressure from the ‘publish or perish’ mentality that takes hold when career progress depends on high-profile publications. Zubin Master, a bioethicist at Albany Medical College in New York, finds this explanation credible. “The current economic climate may further add to the pressure on researchers to publish in high-profile journals in order to enhance their chances of securing research funds,” he says. But how to verify that possibility is a bigger question. “The value of this study is not to say that this phenomenon is hugely worse in the United States, or in this field of science compared to that one,” Martinson says. “But the fact that you can show it raises the question of what it means.”

September 17, 2013



US brain project puts focus on ethics

Unsettling research advances bring neuroethics to the fore. Optical stimulation of light-responsive neurons in engineered mice can be used to create false memories. The false mouse memories made the ethicists uneasy. By stimulating certain neurons in the hippocampus, Susumu Tonegawa and his colleagues caused mice to recall receiving foot shocks in a setting in which none had occurred. Tonegawa, a neuroscientist at the Massachusetts Institute of Technology in Cambridge, says that he has no plans to ever implant false memories into humans — the study, published last month, was designed just to offer insight into memory formation. But the experiment has nonetheless alarmed some neuroethicists. “That was a bell-ringer, the idea that you can manipulate the brain to control the mind,” says James Giordano, chief of neuroethics studies at Georgetown University in Washington DC. He says that the study is one of many raising ethical concerns, and more are sure to come as an ambitious, multi-year US effort to parse the human brain gets under way.

The BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative will develop technologies to understand how the brain’s billions of neurons work together to produce thought, emotion, movement and memory. But, along with the discoveries, it could force scientists and society to grapple with a laundry list of ethical issues: the responsible use of cognitive-enhancement devices, the protection of personal neural data, the prediction of untreatable neurodegenerative diseases and the assessment of criminal responsibility through brain scanning. On 20 August, US President Barack Obama’s commission on bioethics will hold a meeting in Philadelphia, Pennsylvania, to begin to craft a set of ethics standards to guide the BRAIN project. There is already one major mechanism for ethical oversight in US research: institutional review boards, which must approve any studies involving human subjects. But many ethicists say that as neuroscience discoveries creep beyond laboratory walls into the marketplace and the courtroom, more comprehensive oversight is needed. “The long-term consequences of more brain knowledge — whether it’s good for an ethnic group or threatens your personal identity — there’s sort of no one in charge of that,” says Arthur Caplan, director of medical ethics at New York University’s Langone Medical Center.

Tonegawa’s study adds to the growing evidence that memories are surprisingly pliable. In the past few years, researchers have shown that drugs can erase fearful memories or disrupt alcoholic cravings in rodents. Some scientists have even shown that they can introduce rudimentary forms of learning during sleep in humans. Giordano says that dystopian fears of complete human mind control are overblown. But more limited manipulations may not be far off: the US Defense Advanced Research Projects Agency (DARPA), one of three government partners in the BRAIN Initiative, is working towards ‘memory prosthetic’ devices to help soldiers with brain injuries to regain lost cognitive skills. Deep brain stimulation (DBS), in which implants deliver simple electrical pulses, is another area that concerns neuroethicists. The devices have been used since the 1990s to treat motor disorders such as Parkinson’s disease, and are now being tested in patients with psychiatric conditions such as obsessive–compulsive disorder and major depression. Giordano says that applying DBS technology more widely requires ethical care. “We’re dealing with things affecting thought, emotion, behaviour — what people hold valuable as the essence of the self,” he says.

Neuroethicists are noticing challenges beyond the medical system, too, particularly in the courtroom. Judy Illes, a neurology researcher at the University of British Columbia in Vancouver, Canada, and co-founder of the International Neuroethics Society, says that brain imaging could affect the criminal-justice system by changing definitions of personal responsibility. Patterns of brain activity have already been used in some courtrooms to assess the mental fitness of the accused. Some ethicists worry that an advanced ability to map human brain function might be used to measure an individual’s propensity for violent or aberrant behaviour — or even, one day, to predict it. At next week’s meeting, the presidential commission will hear from each of the US agencies involved in the BRAIN Initiative — DARPA, the National Institutes of Health and the National Science Foundation — about preliminary scientific plans and anticipated ethical issues. Lisa Lee, the commission’s executive director, says that the group plans to discuss broad ethical concerns for human and animal participants in neuroscience research, and also the societal implications of discoveries that could arise from the BRAIN Initiative. Although no specific timeline has been set, the commission typically holds three to four meetings over a period of up to 18 months, culminating in recommendations to the President.

As neuroethicists wade into the issues, they may look to the precedent set by the Human Genome Project’s Ethical, Legal and Social Implications (ELSI) research programme, which has provided about US$300 million in study support over 23 years. The programme raised the profile of genetic privacy issues and laid the foundations for the Genetic Information Nondiscrimination Act of 2008, which prohibits discrimination by employers and health insurers on the basis of genetic information. Thomas Murray, one of the architects of ELSI and president emeritus of the Hastings Center, a bioethics research institute in Garrison, New York, is among the speakers invited to the commission meeting. He considers the BRAIN Initiative a timely opportunity to develop an ELSI programme for neuroscience. “There will be wonderful questions about human responsibility, human agency,” he says. “It’s never too soon to begin.”

September 3, 2013



Many animal studies of neurological disease appear to overstate the significance of their results

A statistical analysis of more than 4,000 data sets from animal studies of neurological diseases has found that almost 40% of studies reported statistically significant results — nearly twice as many as would be expected on the basis of the number of animal subjects. The results suggest that the published work — some of which was used to justify human clinical trials — is biased towards reporting positive results. This bias could partly explain why a therapy that does well in preclinical studies so rarely succeeds in human patients, says John Ioannidis, a physician who studies research methodology at Stanford University in California, and who is a co-author on the study published today in PLoS Biology. “The results are too good to be true,” he says. Ioannidis’s team is not the first to find fault with animal studies: others have highlighted how small sample sizes and unblinded studies can skew results. Another key factor is the tendency of researchers to publish only positive results, leaving negative findings buried in lab notebooks. Creative analyses are also likely culprits, says Ioannidis, such as selecting the statistical technique that gives the best result.

These problems can affect patient care in hospitals, cautions Matthias Briel, an epidemiologist at University Hospital in Basel, Switzerland. Preclinical studies influence clinical guidelines when human data are lacking, he says. “A lot of clinical researchers are not aware that animal studies are not as well planned as clinical trials,” he adds. Ioannidis and his colleagues mined a database of meta-analyses — analyses of data from multiple studies — of neurological disease research on animals. They focused on 160 meta-analyses of Alzheimer’s disease, Parkinson’s disease and spinal-cord injury, among others. The researchers then estimated the expected number of statistically significant findings, using the largest study as a reference. Studies with the largest sample sizes are considered the most precise, and the assumption was that these would best approximate the effectiveness of a given intervention. Of the 4,445 studies, 919 were expected to be significant. But nearly twice as many — 1,719 — reported significant findings. Among the groups most likely to report an inflated number of significant findings were studies with the smallest sample sizes, and those with a corresponding author who reported a financial conflict of interest. The study does not mean that animal studies are meaningless, says Ioannidis, but rather that they should be better controlled and reported. He and his co-authors advocate a registry for animal studies, akin to clinical-trial registries, as a way to publicize negative findings and detailed research protocols. Briel would also like to see standards for clinical research applied to preclinical studies. Clinical trials are often blinded, use predetermined sample sizes and analysis methods, and are stopped only at pre-specified points for an interim analysis. Animal studies, by contrast, rarely follow this schema, says Briel. “These quality-control methods should be introduced more often into preclinical research,” he says.
“Preclinical researchers have to realize that their experiments can already have implications for clinical decisions.”
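
The excess-significance logic used in the study can be sketched as follows: estimate each study’s statistical power to detect the effect seen in the largest (most precise) study, sum those powers to get the number of significant results one would expect, and compare with the number observed. All sample sizes and effect sizes below are invented for illustration.

```python
# Sketch of an excess-significance check with made-up numbers: compare the
# expected count of significant studies (sum of per-study power, with the
# largest study's effect as the benchmark) against the observed count.

from math import sqrt
from statistics import NormalDist

def power(n_per_group, true_effect, alpha=0.05):
    """Approximate two-sample power for a standardized mean difference."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    se = sqrt(2 / n_per_group)        # standard error of the effect estimate
    return 1 - NormalDist().cdf(z_alpha - true_effect / se)

# Hypothetical meta-analysis: six animal studies, benchmark effect 0.4
sample_sizes = [8, 10, 10, 12, 15, 40]     # animals per group
expected = sum(power(n, true_effect=0.4) for n in sample_sizes)
observed = 4                               # studies reporting p < 0.05
print(f"expected {expected:.1f} significant results, observed {observed}")
```

With these invented numbers roughly one significant result is expected, so four observed positives would hint at the kind of imbalance the paper reports across its real data set (919 expected versus 1,719 observed).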

August 6, 2013



Italian stem-cell trial based on flawed data

Scientists raise serious concerns about a patent that forms the basis of a controversial stem-cell therapy. Davide Vannoni, a psychologist turned medical entrepreneur, has polarized Italian society in the past year with a bid to get his special brand of stem-cell therapy authorized. He has gained fervent public support with his claims to cure fatal illnesses — and equally fervent opposition from many scientists who say that his treatment is unproven. Now those scientists want the Italian government to pull out of a €3-million (US$3.9-million) clinical trial of the therapy that it promised to support in May, after bowing to patient pressure. They allege that Vannoni’s method of preparing stem cells is based on flawed data. And Nature’s own investigation suggests that images used in the 2010 patent application, on which Vannoni says his method is based, are duplicated from previous, unrelated papers. The trial is “a waste of money and gives false hope to desperate families”, says Paolo Bianco, a stem-cell researcher at the University of Rome and one of the scientists who says that Vannoni’s 2010 application to the US patent office does not stand up to scrutiny.

“I am not surprised to learn this,” says Luca Pani, director-general of the Italian Medicines Agency (AIFA), which suspended operations at the Brescia-based laboratories of Vannoni’s Stamina Foundation in May 2012, after inspectors concluded that the labs would not be able to guarantee contamination-free preparations of stem cells. Inspectors were not shown systematic methodologies or protocols. “We saw such chaos there, I knew that a formal method wouldn’t exist,” he says. But questions raised over the patent that underpins the methodology needed for the trial could be political dynamite. Well over 100 people with conditions ranging from Parkinson’s to motor neuron disease to coma — nearly half of them children — have already signed up to participate in the government-sponsored trial, despite there being no published evidence that the therapy could be effective. The Stamina Foundation has been given permission to treat more than 80 people on compassionate grounds since 2007, and Vannoni — who has not published follow-up data on patients — says that hundreds more were lined up waiting for treatment when the lab’s operations were suspended. Supporters held angry demonstrations up and down Italy earlier this year. The therapy involves extracting cells from patients’ bone marrow, manipulating them in vitro, and then injecting them back into the same patients. Vannoni has repeatedly avoided revealing details of his method beyond those available in his patent application, which he has referred to as complete.

Nature has independently confirmed that a key micrograph in that patent application, depicting two nerve cells that had apparently differentiated from bone-marrow stromal cells, is not original. Elena Schegelskaya, a molecular biologist at the Kharkov National Medical University and a co-author of a 2003 paper in which the image first appeared, told Nature that the micrograph under scrutiny had originated with her team. Like Vannoni’s patent, Schegelskaya’s paper looked at coaxing bone-marrow cells to differentiate into nerve cells. But whereas Vannoni’s patent says that the transformation involved incubating cultured bone-marrow cells for two hours in an 18-micromolar solution of retinoic acid dissolved in ethanol, Schegelskaya’s paper used a retinoic-acid solution at one-tenth of that concentration and incubated the cells for several days. So the identical figures represent very different experimental conditions.

Schegelskaya also points out that a monochrome micrograph in the patent is identical to a colour micrograph that she had published in the Ukrainian Neurosurgical Journal in 2006. Bone-marrow stromal cells are known to differentiate into only bone, fat or cartilage cells, yet the idea that they can be forced to turn into other cell types is the basis of the claimed therapeutic potential in Vannoni’s patent. “In fact no-one has ever been able to convincingly show that bone-marrow cells can be converted into nerve cells,” says Elena Cattaneo, a stem-cell researcher who studies Huntington’s disease at the University of Milan, Italy. Vannoni, along with other proponents of bone-marrow stromal-cell therapy, says that the cells in his treatment may work by homing in on damaged tissue and reducing inflammation or promoting blood-vessel growth there. Last year, the US Patent Office issued a ‘pre-final’ rejection of Vannoni’s patent — one that allows resubmission, although Vannoni has not yet resubmitted. The rejection noted that the application included insufficient details on methodology, that differentiation was unlikely to occur during the very short incubation time described, and that the appearance of nerve-like cells in the culture is likely to “reflect cytotoxic changes”.

The government-sponsored Italian clinical trial was intended to begin on 1 July, but is now being delayed because Vannoni has three times postponed commitments to reveal his method to the government-appointed committee that will prepare the trial’s details. That committee includes representatives from AIFA and the Italian health ministry, as well as various scientific experts. It is led by the Superior Health Institute in Rome, Italy’s leading biomedical-research body. The institute’s president, Fabrizio Oleari, declined to comment. Nature has made repeated attempts since Wednesday 26 June to contact Vannoni by e-mail and telephone for comment, but has received no response. The Stamina Foundation also did not respond to e-mails. Irving Weissman, director of the Stanford Institute for Stem Cell Biology and Regenerative Medicine in California, says that the Italian government would be unwise to support a trial with so little evidence of efficacy.

23 July, 2013

Original web page at Nature


25 generations of cloned mice with normal lifespans created

Using the technique that created Dolly the sheep, researchers from the RIKEN Center for Developmental Biology in Kobe, Japan, have identified a way to produce healthy mouse clones that live a normal lifespan and can be sequentially cloned indefinitely. Their study was published recently in the journal Cell Stem Cell. In an experiment that started in 2005, the team led by Dr. Teruhiko Wakayama used a technique called somatic cell nuclear transfer (SCNT) to produce 581 clones of one original ‘donor’ mouse, through 25 consecutive rounds of cloning. SCNT is a widely used cloning technique whereby a cell nucleus containing the genetic information of the individual to be cloned is inserted into a living egg that has had its own nucleus removed. It has been used successfully in laboratory animals as well as farm animals. Until now, however, scientists had not been able to overcome the limitations of SCNT, which resulted in low success rates and restricted the number of times mammals could be recloned: attempts at recloning cats, pigs and mice more than two to six times had all failed.

“One possible explanation for this limit on the number of recloning attempts is an accumulation of genetic or epigenetic abnormalities over successive generations,” explains Dr. Wakayama. To prevent possible epigenetic changes, or modifications to DNA function that do not involve a change in the DNA itself, Wakayama and his team added trichostatin, a histone deacetylase inhibitor, to the cell culture medium. Using this technique, they increased cloning efficiency by up to sixfold. By improving each step of the SCNT procedure, they were able to reclone the mice 25 times without seeing a reduction in the success rate. The 581 healthy mice obtained in this way were all fertile; they gave birth to healthy pups and lived a normal lifespan of about two years, similar to normally conceived mice. “Our results show that there were no accumulations of epigenetic or genetic abnormalities in the mice, even after repeated cloning,” conclude the authors. Dr. Wakayama adds: “This technique could be very useful for the large-scale production of superior-quality animals, for farming or conservation purposes.” Dr. Wakayama’s work made the news in 2008 when his team used SCNT to create clones from the bodies of mice that had been frozen for 16 years.

Science Daily
April 2, 2013

Original web page at Science Daily


Stem cells in Texas: Cowboy culture

By offering unproven therapies, a Texas biotechnology firm has sparked a bitter debate about how stem cells should be regulated. Ann McFarlane is losing faith. In the first half of 2012, the Houston resident received four infusions of adult stem cells grown from her own fat. McFarlane has multiple sclerosis (MS), and had heard that others with the inflammatory disease had experienced improvements in mobility and balance after treatment. The infusions — which have cost her about US$32,000 so far — didn’t help, but she knew that there were no guarantees. It is McFarlane’s experience with Celltex Therapeutics, the company that administered the cells, that bothers her. She was told that she had been enrolled in a study to test the cells’ efficacy, but received almost no information about it. And although it wasn’t exactly a secret that the treatment had not been approved by the US Food and Drug Administration (FDA), Celltex, based in Houston, Texas, assured its clients that it was within its rights to provide it. But Celltex was forced to halt treatments in October, and in November a legal battle broke out over who owned the cells still being stored by the company. For weeks, McFarlane was uncertain whether her cells were being grown and stored properly. Although Celltex has told its customers that it has settled the dispute, McFarlane has her doubts. “I am not confident that the cells are viable and safe,” she says. “I probably will not feel comfortable using these cells.”

For the past decade, people such as McFarlane have searched far and wide for clinics offering to deliver on the promise of adult stem cells. Unlike embryonic stem cells, their use does not require the controversial destruction of an embryo. Yet although adult stem cells are claimed to ameliorate a wide range of disorders, they have not yet been shown to do so conclusively in clinical trials in the United States. Relying on customer testimonials and company promises, patients have travelled to clinics in places such as China, Costa Rica, Mexico and Japan to receive the cells from unregulated, often unaccredited, laboratories, driving a boom in stem-cell tourism. According to Leigh Turner, a bioethicist at the University of Minnesota in Minneapolis, at least ten clinics offer such treatments in the United States. Turner and others have questioned the quality of the cells that these firms provide, and several outlets have been forced to stop providing treatments. Celltex has been one of the most visible. Established in 2011, it offered therapies for conditions as varied as arthritis, back pain, MS and Parkinson’s disease. It produced its cells in a 1,400-square-metre, state-of-the-art facility in Sugar Land, Texas, that was registered with the FDA and — the company claimed — had strict quality control. Although the company flouted federal regulations, which deem such cell therapies to be biological drugs, it adhered to state rules, which had recently been tailored to bolster the stem-cell industry. But even with the support of Texas governor Rick Perry — Celltex’s first patient — the company had to stop offering treatments in the United States.

March 5, 2013

Original web page at Nature


Unchecked antibiotic use in animals may affect global human health

The increasing production and use of antibiotics, about half of which is used in animal production, is mirrored by the growing number of antibiotic resistance genes, or ARGs, effectively reducing antibiotics’ ability to fend off diseases — in animals and humans. A study in the current issue of the Proceedings of the National Academy of Sciences shows that China — the world’s largest producer and consumer of antibiotics — and many other countries don’t monitor the powerful medicine’s usage or impact on the environment. On Chinese commercial pig farms, researchers found 149 unique ARGs, some at levels 192 to 28,000 times higher than the control samples, said James Tiedje, Michigan State University Distinguished Professor of microbiology and molecular genetics and of plant, soil and microbial sciences, and one of the co-authors. “Our research took place in China, but it reflects what’s happening in many places around the world,” said Tiedje, part of the research team led by Yong-Guan Zhu of the Chinese Academy of Sciences. “The World Organization for Animal Health and the U.S. Food and Drug Administration have been advocating for improved regulation of veterinary antibiotic use because those genes don’t stay local.”

Antibiotics in China are weakly regulated, and the country uses four times more antibiotics for veterinary purposes than the United States does. Because the medicine is poorly absorbed by animals, much of it ends up in manure — an estimated 700 million tons annually from China alone. This manure is traditionally spread as fertilizer, sold as compost or ends up downstream in rivers or groundwater, taking ARGs with it. Along with hitching rides in fertilizer, ARGs are also spread via international trade, immigration and recreational travel. Daily exposure to antibiotics, such as those in animal feed, allows microbes carrying ARGs to thrive. In some cases, these ARGs become highly mobile, meaning that they can be transferred to other bacteria that can cause illness in humans. This is a big concern because the infections they cause can’t be treated with antibiotics. ARGs can reach the general population through food crops, drinking water and interactions with farm workers. Because of this undesirable cycle, ARGs pose a potential global risk to human health and should be classified as pollutants, said Tiedje, an MSU AgBioResearch scientist. “It is urgent that we protect the effectiveness of our current antibiotics because discovering new ones is extremely difficult,” Zhu said. “Multidrug resistance is a global problem and must be addressed in a comprehensive manner, and one area that needs to be addressed is more judicious use and management of wastes that contain ARGs.”

Science Daily
March 5, 2013

Original web page at Science Daily


Dozens of chimpanzees retired from research may have to continue to live in lab-like conditions

It is not easy to find living space for a great ape at short notice, let alone more than 100 of them. Yet that is precisely the problem that administrators at the US National Institutes of Health (NIH) are scrambling to solve, as the biomedical agency takes its most visible and decisive step away from invasive research on chimpanzees. Scrutiny of the NIH’s chimp research enterprise has been intensifying since the release last December of an Institute of Medicine report, which declared most of the invasive chimp studies to be scientifically unnecessary (see Nature 480, 424–425; 2011). The agency, based in Bethesda, Maryland, immediately put a moratorium on new grant applications for work involving chimps. In January 2013, a working group will recommend which of the grants already in progress should continue to be funded. The group will also advise on how many research-eligible chimps the agency should maintain for current and future use, and where they should be housed.

On 21 September, NIH director Francis Collins declared the 110 agency-owned chimps at the New Iberia Research Center, which is part of the University of Louisiana at Lafayette, “permanently ineligible” for research. The move followed the centre’s decision a month earlier not to reapply for a key contract that has supported the NIH chimps housed there for decades. The existing NIH contract with the facility expires in August 2013, leaving the agency little time to avert a housing crisis for animals that can live for up to 60 years in captivity. The problem presented by the New Iberia chimps is just the first manifestation of a bigger conundrum. The federal government owns or supports 670 chimpanzees, many of which were bred between 1986 and 1995, when it was hoped — incorrectly, as it turned out — that they would be a useful model for HIV/AIDS. Although some have been used in virology studies and in the development of monoclonal antibodies, their use by federal researchers now looks set to dwindle.

Critics say that the housing problem should have been addressed long ago. “This is emblematic of the NIH’s failure to plan,” says Eric Kleiman, a research consultant for the Animal Welfare Institute in Washington DC. “The writing has been on the wall for how many years now.” Collins announced the withdrawal of the 110 chimps from research in personal calls to British primatologist and chimp-welfare activist Jane Goodall, and Wayne Pacelle, director of the Humane Society of the United States in Washington DC. They were pleased with the news, but not with the NIH’s plans for housing many of the chimps. Collins said he would move 10–20 animals to fill available space at Chimp Haven in Keithville, Louisiana, the only federally supported chimpanzee sanctuary. The rest would go to the Texas Biomedical Research Institute in San Antonio, where each social group of four to six chimps would be housed in an indoor–outdoor enclosure about the size of a squash court, with extra space for elevated perches. Chimp advocates say that Chimp Haven, a forested 80-hectare refuge that is currently home to 124 chimps, could accommodate the animals in more appropriate conditions than the research institute. “There is no comparison between a place like Chimp Haven and Texas Biomed,” says Kleiman. “Chimp Haven is chimpanzee-centred. Texas Biomed is a lab. It’s caging.”

“In a perfect world, we would absolutely like to move all of the chimps directly to Chimp Haven,” says Kathy Hudson, NIH deputy director for science, outreach and policy. “We are working collaboratively with Chimp Haven to try to figure out what are the options for being able to do that.” Managers at Chimp Haven say that they could house all 110 animals if they received US$2.55 million to pay for shovel-ready construction projects that could be completed in four months. But the NIH faces a ticking clock and a number of roadblocks. Perhaps the most daunting is the language of the 12-year-old federal law that established Chimp Haven. Although it obliges the government to provide ‘lifetime’ care for retired research chimpanzees, it also caps at $30 million the money that the NIH’s parent agency, the Department of Health and Human Services, can spend in doing so. Chimp Haven, which began receiving government funds in 2002, is expected to hit the $30 million cap during 2013. Hudson says that the agency is looking at all alternatives, including finding space in other private sanctuaries and asking the New Iberia centre to keep some animals for the short term. If expanded to full capacity, Chimp Haven says that it could eventually house around 430 chimps.

November 13, 2012

Original web page at Nature


Faked affiliation of stem cell researcher not caught for years

For years, Japanese researcher Hisashi Moriguchi claimed an affiliation with Harvard Medical School that did not exist. One of the many oddities in the case of Moriguchi, who admitted over the weekend to lying about a startling stem-cell experiment, is how he managed for years to claim an affiliation with Harvard Medical School in Boston that did not exist. Even a Harvard stem cell scientist who reviewed a meeting abstract Moriguchi provided — the abstract that eventually led to the crumbling of his claims — did not pick up on the deception. Moriguchi’s stem cell experiment is “something that’s completely eye-catching,” agrees Kevin Eggan, a stem cell biologist at Harvard University and chief scientific officer of The New York Stem Cell Foundation, which held its annual translational research meeting last week. Along with several others, Eggan reviewed the abstract supplied by Moriguchi, who listed his primary affiliation as the University of Tokyo and a secondary affiliation as Harvard Medical School. Like all other would-be presenters at the conference, Moriguchi submitted the abstract to conference organizers several months in advance. In it, according to a copy of the abstract supplied by co-author Chifumi Sato of Tokyo Medical and Dental University, he described transplanting induced pluripotent stem (iPS) cells into cardiac patients. iPS cells are mature cells reprogrammed to behave like those from an early embryo. The transplant would have been the first ever of this type of cell into people. The abstract caught Eggan’s attention, but didn’t raise red flags. “We did note it and we were wondering about it,” he said in an interview last week. “We were really curious to see what this poster was going to look like.”

Because Harvard was listed as a secondary, not primary, affiliation, Eggan thought nothing of it. “There are countless people like that,” he says, and a secondary affiliation often means little. Eggan himself has a secondary affiliation at Columbia University, which makes it easier for him to access different parts of campus, such as the library system. Had Moriguchi claimed to be based at Harvard, and not in Tokyo, “that would have been pretty fishy,” Eggan says. And although he considers himself plugged in to the stem cell world, in retrospect Eggan is not surprised that he hadn’t heard about this groundbreaking experiment. “We of course don’t have a lot of ability to survey what’s happening in Japan,” he says. Moriguchi also listed a Harvard affiliation on papers published by Scientific Reports (a Nature publication) and Hepatology. As is common, neither journal verifies affiliations. (Science also does not double-check author affiliations.) Journals, including Science, say they assume that faked affiliations would be detected by co-authors during the writing or editing process. “We do require that all authors sign an author agreement,” says Gregory Bologna, senior director of the American Association for the Study of Liver Diseases, which publishes Hepatology. The journal’s author agreement requires that authors adhere to standards from the International Committee of Medical Journal Editors (ICMJE). The ICMJE notes that authorship credit should be based, among other things, on “final approval of the version to be published”. But it appears that Moriguchi’s co-authors either did not see final proofs or did not read them closely enough.

One co-author, Raymond Chung, is medical director of the liver transplant program at Massachusetts General Hospital (MGH) in Boston and a faculty member at Harvard Medical School. Chung was not taking calls from the media, but an MGH spokeswoman, Sue McGreevey, wrote in an e-mail message that Chung did not see the articles when they were in process: “Although Dr. Chung assisted Dr. Moriguchi with the initial preparation and editing of these papers, he was not sent any of the proofs for review. Dr. Moriguchi was the corresponding author, and Dr. Chung had no notification of changes Dr. Moriguchi may have made to the original manuscripts or proofs.” Chung and Moriguchi were co-authors on nine papers dating back to 2002, including seven published in Hepatology. Six of those were correspondence from 2010, and a seventh was an examination of hepatitis C treatment published in 2003. Not only did Moriguchi list on all seven papers a Harvard affiliation that Harvard says did not exist — he also listed himself as a member of Chung’s own department, the gastrointestinal unit at MGH.

October 30, 2012

Original web page at ScienceNow


The Tamiflu story continued: Full reports from clinical trials should be made publicly available, experts argue

The full clinical study reports of drugs that have been authorized for use in patients should be made publicly available in order to allow independent re-analysis of the benefits and risks of such drugs, according to leading international experts who base their assertions on their experience with Tamiflu (oseltamivir). Tamiflu is classed by the World Health Organization as an essential drug, and many countries have stockpiled the anti-influenza drug at great expense to taxpayers. But a recent Cochrane review of Tamiflu has shown that even more than ten thousand pages of regulatory evidence were not sufficient to clarify major discrepancies regarding the effects and mode of action of the drug. Writing in this week’s PLoS Medicine, Peter Doshi from Johns Hopkins University School of Medicine in Baltimore, USA, Tom Jefferson from the Cochrane Collaboration in Rome, Italy, and Chris Del Mar from Bond University in the Gold Coast, Australia, say that there are strong ethical arguments for ensuring that all clinical study reports are publicly accessible. In the course of trying to get hold of the regulatory evidence, the authors received several explanations from Roche as to why it would not share its data. By publishing that correspondence and commenting on it, the authors make the case that the results of experiments on humans should be made available, all the more so given the international public-health importance of the drug.

They argue: “It is the public who take and pay for approved drugs, and therefore the public should have access to complete information about those drugs. We should also not lose sight of the fact that clinical trials are experiments conducted on humans that carry an assumption of contributing to medical knowledge. Non-disclosure of complete trial results undermines the philanthropy of human participants and sets back the pursuit of knowledge.” However, according to the authors, industry and regulators have historically treated clinical study reports as confidential documents, impeding additional scrutiny by independent researchers. Using the example of Tamiflu, about which drug companies, drug regulators and public-health bodies such as the World Health Organization and the US Centers for Disease Control and Prevention have made discrepant claims regarding its clinical effects, the authors argue that critical analysis by an independent group such as a Cochrane review group is essential. By recounting the details of an extended correspondence with Tamiflu’s manufacturer Roche, the authors argue that the company provided no convincing reasons for refusing to provide access to its clinical study reports.

The authors challenge industry either to provide open access to clinical study reports or to publicly defend their current position of keeping randomized-controlled-trial data secret. They say: “we hope the debate may soon shift from one of whether to release regulatory data to the specifics of doing so. But until these policies go into effect — and perhaps even after they do — most drugs on the market will remain those approved in an era in which regulators protected industry’s data.” In a Perspective article accompanying the analysis in PLoS Medicine, four drug regulators (representing the European Medicines Agency, the French Agence Française de Sécurité Sanitaire des Produits de Santé, the UK’s Medicines and Healthcare products Regulatory Agency, and the Medicines Evaluation Board in The Netherlands) respond. They say: “We consider it neither desirable nor realistic to maintain the status quo of limited availability of regulatory trials data,” and suggest what they call a “three-pronged approach”, which includes establishing rules of engagement that follow the principle of maximum transparency while respecting the need to guarantee data privacy and to avert the potential for misuse. However, they also lay out arguments for why trial data should not be open to all: personal data protection, non-financial competing interests and the risks of competition.
They conclude: “We welcome debate on these issues, and remain confident that satisfactory solutions can be found to make complete trial data available in a way that will be in the best interest of public health.”

Science Daily
May 1, 2012

Original web page at Science Daily


Case for full publication of controversial flu studies was unbalanced, board member says

A closed meeting convened last month by the US government to decide the fate of two controversial unpublished papers on the H5N1 avian influenza virus was stacked in favour of their full publication, a participant now says. Michael Osterholm, who heads the University of Minnesota’s Center for Infectious Disease Research and Policy in Minneapolis, is a member of the National Science Advisory Board for Biosecurity (NSABB), which was tasked with evaluating the research. In a letter addressed to Amy Patterson, associate director for science policy at the National Institutes of Health in Bethesda, Maryland, and sent to other members of the NSABB, Osterholm writes that the meeting agenda and presenters were “designed to produce the outcome that occurred”. The letter was leaked to Nature by an anonymous source. The two-day meeting, held at the end of March, was meant to put an end to the swirling controversy around research papers describing an H5N1 avian influenza virus able to pass between mammals, in this case ferrets, which are a model for human flu transmission. When the NSABB was first asked by the US government to review the papers for publication in the fall of 2011, it recommended that the papers be published in redacted form, stripped of detail that would allow people to recreate the viruses.

The recommendation was a controversial compromise, pitting the ideals of open, international science communication against concerns that the work could be misused by bioterrorists or result in the accidental release of a potentially devastating pathogen. In February, after the World Health Organization convened a brief meeting in Geneva, Switzerland, that favoured full publication, the US government asked the NSABB to reconvene and reconsider its position in light of modifications made to the papers and new information presented by the researchers. At that meeting, the NSABB revised its position, voting unanimously to publish one of the papers, a manuscript submitted to Nature by Yoshihiro Kawaoka of the University of Wisconsin, Madison, and the University of Tokyo. But the board voted only 12-to-6 in favour of publishing the other paper, submitted to Science by Ron Fouchier of Erasmus Medical Center in Rotterdam, the Netherlands. Several members of the board, including Osterholm, felt that the modifications made to the paper did not allay their concerns.

In the letter, dated 12 April, Osterholm writes that the March meeting was stacked heavily in favour of experts doing such research on flu viruses, who had an interest in the outcome of the decision, and did not give voice to disinterested scientists with relevant expertise. He says that he had recommended several such individuals, but they were not invited. And although arguments were made at the meeting that the data in these papers would support surveillance efforts, Osterholm notes that no experts working on the front lines of influenza surveillance were present. A recent analysis by Nature suggested that surveillance efforts would require other kinds of support before they could use such data. A security briefing that Osterholm says influenced many of the NSABB members was “one of them [sic] most incomplete and, dare I say, useless classified security briefings I’ve ever attended.” Osterholm has worked on biosecurity issues for the Department of Health and Human Services for 20 years. Before the meeting, assessments of the NSABB’s proposed redaction had deemed it too difficult to implement. The NSABB had recommended that only brief announcements of the findings be published, and that only qualified, vetted experts gain access to the full data and methods.

Officials essentially took that option off the table, meaning that NSABB members would have to vote either for full publication or for no publication at all. Osterholm writes that removing this option merely “kicked the can down the road”, and says that he heard at the meeting that Fouchier has found “one additional mutation that now confers H5N1 transmissibility between mammals without ferret passage.” Publishing this finding could raise the same issues: “If we believe redaction of the current manuscript is problematic in terms of international agreements, I think the next mutation paper will prove to be the straw that breaks the camel’s back.” Osterholm, when contacted by phone, would say little about the letter, which he did not make public. He asserted that “this type of review must be based on an expert, scientific, risk-benefit basis. And it should involve disinterested experts from a variety of fields.” He also noted: “I have been and continue to be a supporter of this kind of research.” At the time of publication, neither Patterson nor other officials at the NIH had responded.

May 1, 2012

Original web page at Nature


Puzzling over links between monkey research and human health

Studies in monkeys are unlikely to provide reliable evidence for links between social status and heart disease in humans, according to the first ever systematic review of the relevant research. The study, published in PLoS ONE, concludes that although such studies are cited frequently in human-health research, the evidence is often “cherry-picked”, and generalisation of the findings from monkeys to human societies does not appear to be warranted. Psychosocial factors such as stress, social instability and work dynamics are often believed to play an important role in the emergence of disease, with the negative effects associated with high stress levels deriving from disturbances and sudden change. In evaluating these effects on humans, the scientific community often relies on primate models, because it is easier to induce changes in the animals’ environment and because of monkeys’ biological closeness to us. Such studies have historically provided one foundation for the suggestion that factors such as stress or position in a social hierarchy may lead to some people suffering more ill health than others.

Researchers from the London School of Hygiene & Tropical Medicine and the University of Bristol undertook an extensive search of relevant studies and found 14 which offered evidence on coronary artery disease (CAD) and social status and/or psychosocial stress within the natural social hierarchies of primates. They conclude: “Overall, non-human primate studies present only limited evidence for an association between social status and CAD. Despite this, there is selective citation of individual monkey studies in reviews and commentaries relating to human disease aetiology. Such generalisation of data from monkey studies to human societies does not appear warranted.” Lead author Mark Petticrew, Professor of Public Health Evaluation at the London School of Hygiene & Tropical Medicine, says that without first assessing the validity of primate studies, there is little point in using them to build theories about the causes of human ill health. He says: “Before we can apply results from primates to our society, we need to make sure that the evidence coming from these studies is reliable. Systematic reviews of animal studies are still uncommon but they are essential for assessing the consistency and strength of their findings. It is unscientific to selectively refer to the same small handful of positive findings and discard all the others that do not fit the hypothesis.”

The researchers also warn against generalising results from primate data to human societies, and point out that many primatologists themselves have drawn attention to the limitations of such conclusions: findings are not necessarily comparable between similar species of monkey, and sometimes not even within the same species. Many confounding factors can introduce bias, such as the environment in which the primates were raised, the laboratory settings, and other potentially traumatic experiences like relocation from the wild to a laboratory. The study suggests that if findings correlating social hierarchy and heart disease in monkeys cannot be generalised even to other monkeys, it makes still less sense to extend them to human health outcomes.

Science Daily
April 17, 2012

Original web page at Science Daily