Categories
News

US government gives research chimps endangered-species protection

The decision will prohibit most research on captive animals. Wild and captive chimpanzees will now be treated equally under US law. Chimpanzee research in the United States may be nearly over. On 12 June, the US Fish and Wildlife Service (FWS) announced that it is categorizing captive chimpanzees as an endangered species subject to legal protections. The new rule will bar most invasive research on chimpanzees. Exceptions will be granted for work that would “benefit the species in the wild” or aid the chimpanzee’s propagation or survival, including work to improve chimp habitat and the management of wild populations.

The FWS proposed the rule in 2013 to close a loophole that exempted captive chimps from the Endangered Species Act protections that had already been given to their wild counterparts. Under the law, it is illegal to import or export an endangered animal, or to “harm, harass, kill [or] injure” one.

The new regulation will extend these limits to more than 700 chimps in US research laboratories, as well as animals in zoos or entertainment venues such as circuses. The FWS rule also makes it illegal to sell chimpanzee blood, cell lines or tissue across state lines without a permit.

The government’s decision to list captive chimps as endangered drew swift criticism from some science groups. “Practically speaking, [given] the process to get exceptions [for invasive research], I don’t expect chimps will be a viable option,” says Matt Bailey, executive vice president of the National Association for Biomedical Research in Washington DC.

Bailey’s group argues that medical research with chimpanzees benefits both humans and chimps, given that the two species are affected by many of the same diseases, and notes that captive research chimps have been bred for that purpose — making the connection to wild populations tenuous.

Unsurprisingly, animal-rights groups welcomed the government’s action. “This FWS decision continues momentum — adding another barrier to unnecessary and non-productive research purportedly to benefit humans,” Theodora Capaldo, president of the New England Anti-Vivisection Society in Boston, Massachusetts, said in a statement. “We stand on ethically and scientifically firmer ground as we move closer toward ending atrocities under the guise of ‘necessary’ research.”

Biomedical research on chimpanzees has already decreased drastically since the US National Institutes of Health (NIH) announced in 2013 that it would retire more than 300 captive chimpanzees at its facilities. The agency has retained only 50 chimps, which remain available for invasive research that satisfies the ethical criteria it has set out.

The NIH says that there are no active biomedical research projects with these animals. Going forward, the agency says, it  “will work with the Fish and Wildlife Service to comply with any implications of this ruling for future use of NIH-owned chimpanzees.”

The government action comes as chimpanzees are embroiled in another legal matter. The New York Supreme Court is currently considering whether two chimpanzees at Stony Brook University are “persons” illegally detained by the university. The court heard arguments last month on the matter, and a decision is expected this summer.

Nature doi:10.1038/nature.2015.17755

http://www.nature.com/news/index.html  Nature

http://www.nature.com/news/us-government-gives-research-chimps-endangered-species-protection-1.17755  Original web page at Nature

Categories
News

Serengeti Park disappearing

A huge wildebeest herd migrates across the open, parched plains. Dust swirls up from the many hooves pounding the ground and forms a haze over the landscape. The setting sun gives the scene a golden tinge. Serengeti National Park is the symbol of Africa’s abundant wildlife. The herd heads towards one of the life-giving water holes, where wildebeest congregate along with zebras, gazelles, antelopes, elephants and the other species of the abundant animal population that makes the Serengeti unique.

This iconic image of the national park is far from guaranteed in perpetuity. Serengeti means “endless plains” in the Maasai language, but the national park is pressured from many quarters and is changing quickly. A large EU-funded research project is taking stock of what is happening in the Serengeti. “The Serengeti is in many ways an iconic image of Africa, and we’ll use the Serengeti as our case study in this research project,” says Eivin Røskaft, a Professor of Biology at the Norwegian University of Science and Technology (NTNU), who is heading the entire EU project.

Serengeti National Park in Tanzania extends into Kenya towards the Mau Forest, the largest virgin montane forest in Africa. The Mara River, which originates in the Mau Forest, is the lifeblood of the entire ecosystem of the national park. Together the Mara River, the Mau Forest and the Serengeti constitute one of the world’s most complex ecosystems, and they support a great diversity of fauna and birdlife, as well as special and rare plants. The region also offers crucial resources: water, food for animals and humans, wood for fuel and construction, land that can be cultivated — and nature experiences.

“Everything we harvest from nature is a service that the ecosystem provides us. These resources are deteriorating little by little, and what we see in the Serengeti is that the pressures on the ecosystem can become so large that they are no longer sustainable. In the worst case, the Serengeti could disappear completely in a few decades,” says Røskaft. What makes the Serengeti especially vulnerable is that residents of the area live close to the park and are completely dependent on nature. Because of this, the consequences of changes in natural resources are immediately apparent. The area lends itself well to research because scientists can extract clear data.

Moreover, much research has already been conducted in the Serengeti, so a large amount of material already exists on which to build, which enables scientists to see longitudinal changes. Researchers are studying three different drivers of the pressure on the Serengeti-Mara ecosystem. One of the drivers is climate change. In recent years, the climate has become warmer, the dry season longer and the rains more powerful, resulting in soil erosion and washouts. The wet season has also shifted. All these factors create challenges for vegetation, animals and humans in the area.

Population growth is another driver. In 1961 Tanzania had 8,000,000 inhabitants. Today the number is 50,000,000, and it is estimated to double in the next 20 years. Population growth increases the need for food, and both legal hunting and poaching of wildlife have escalated in the Serengeti-Mara. Livestock numbers have also increased, intensifying the pressure on pasturelands. Cooking primarily takes place on open fires, and the growing population means a growing demand for firewood and building materials. The unique Mau Forest is now in the process of being cut down. Because of the expanding population, people are pushing farther and farther into the national park to extract natural resources.

Infrastructure development is the third driver. People in the area have gained greater access to electricity, and most Tanzanians are more prosperous than before. The road network is being expanded, including a road planned to run through the national park that would cross the wildebeest migration route. This has generated great debate both nationally and internationally. Besides studying the three main drivers of change in the Serengeti, says Røskaft, the researchers will also examine how these factors are affecting the wildebeest population and the life cycles of wildebeest, zebra, impala and African wild dogs.

The scientists are also mapping changes in disease prevalence. Malaria and sleeping sickness spread via insects (malaria mosquitoes and tsetse flies), and climate change can alter the pattern of disease transmission. Road construction enables people to move more and over longer distances, potentially leading to an increased spread of AIDS. Researchers are also mapping demographic changes in population growth, fertility and mortality.

Norway is in the driver’s seat. Nearly 100 scientists are involved in the research project, entitled “Linking biodiversity, ecosystem functions and services in the Serengeti-Mara region, East Africa.” The researchers come from 13 different research institutions in Kenya, Tanzania, Denmark, Norway, Germany, Scotland and the Netherlands. Norway, through NTNU and Eivin Røskaft, leads the project. Thirteen people from NTNU are involved so far: five professors, three researchers, two technical staff and three doctoral students. In addition, more master’s students and doctoral candidates will join the research team, and several of NTNU’s students and PhD candidates on the project come from Tanzania.

In answer to the question of what he considers the most important aspect of this project, Røskaft responds that “the most important thing is to strengthen expertise in Tanzania. My top goal and motivation are to build up good and safe professional expertise with Tanzanians. It’s rewarding to see people gain confidence and faith in the job that they do.” Major international players like WWF and IUCN can and will influence what happens in the Serengeti. Many international stakeholders have mounted massive protests against road construction, for example. “Especially for countries like Kenya and Tanzania, which have such abundant natural resources, it is important for locals to have the necessary expertise to make their own assessments based on sound professional judgment,” says Røskaft.

http://www.sciencedaily.com/  Science Daily

http://www.sciencedaily.com/releases/2015/05/150521210608.htm Original web page at Science Daily

Categories
News

Global decline of large herbivores may lead to an ‘empty landscape’

The decline of the world’s large herbivores, especially in Africa and parts of Asia, is raising the specter of an “empty landscape” in some of the most diverse ecosystems on the planet, according to a newly published study.

Many populations of animals such as rhinoceroses, zebras, camels, elephants and tapirs are diminishing or threatened with extinction in grasslands, savannahs, deserts and forests, scientists say. An international team of wildlife ecologists led by William Ripple, Oregon State University distinguished professor in the College of Forestry, conducted a comprehensive analysis of data on the world’s largest herbivores (more than 100 kilograms, or 220 pounds, on average), including endangerment status, key threats and ecological consequences of population decline. They published their observations in Science Advances, the open-access online journal of Science magazine.

The authors focused on 74 large herbivore species — animals that subsist on vegetation — and concluded that “without radical intervention, large herbivores (and many smaller ones) will continue to disappear from numerous regions with enormous ecological, social, and economic costs.” Ripple initiated the study after conducting a global analysis of large-carnivore decline, which goes hand-in-hand, he said, with the loss of their herbivore prey. “I expected that habitat change would be the main factor causing the endangerment of large herbivores,” Ripple said. “But surprisingly, the results show that the two main factors in herbivore declines are hunting by humans and habitat change. They are twin threats.”

The scientists refer to an analysis of the decline of animals in tropical forests published in the journal BioScience in 1992. The author, Kent H. Redford, then a post-doctoral researcher at the University of Florida, first used the term “empty forest.” While soaring trees and other vegetation may exist, he wrote, the loss of forest fauna posed a long-term threat to those ecosystems. Ripple and his colleagues went a step further. “Our analysis shows that it goes well beyond forest landscapes,” he said, “to savannahs and grasslands and deserts. So we coin a new term, the empty landscape.” As a group, terrestrial herbivores encompass about 4,000 known species and live in many types of ecosystems on every continent except Antarctica.

The highest numbers of threatened large herbivores live in developing countries, especially Southeast Asia, India and Africa, the scientists report. Only one endangered large herbivore lives in Europe (the European bison), and none are in North America, which, the authors add, has “already lost most of its large mammals” through prehistoric hunting and habitat changes. The authors note that 25 of the largest wild herbivores now occupy an average of only 19 percent of their historical ranges. Competition from livestock production, which has tripled globally since 1980, has reduced herbivore access to land, forage and water and raised disease transmission risks, they add.

Meanwhile, herbivore hunting occurs for two major purposes, the authors note: meat consumption and the global trade in animal parts. An estimated 1 billion humans subsist on wild meat, they write. “The market for medicinal uses can be very strong for some body parts, such as rhino horn,” said Ripple. “Horn sells for more by weight than gold, diamonds or cocaine.” Africa’s western black rhinoceros was declared extinct in 2011.

Co-author Taal Levi, an assistant professor in Oregon State’s Department of Fisheries and Wildlife, said the causes of the decline of some large herbivores “are difficult to remedy in a world with increasing human populations and consumption.” “But it’s inconceivable that we allow demand for horns and tusks to drive the extirpation of large herbivores from otherwise suitable habitat,” Levi said. “We need to intensify the reduction of demand for such items.” The loss of large herbivores suggests that other parts of wild ecosystems will diminish, the authors write. The likely consequences include: reduction in food for large carnivores such as lions and tigers; diminished seed dispersal for plants; more frequent and intense wildfires; slower cycling of nutrients from vegetation to the soil; changes in habitat for smaller animals including fish, birds and amphibians. “We hope this report increases appreciation for the importance of large herbivores in these ecosystems,” said Ripple. “And we hope that policymakers take action to conserve these species.”

To understand the consequences of large herbivore decline, the authors call for a coordinated research effort focusing on threatened species in developing countries. In addition, solutions to the decline of large herbivores need to involve local people. “It is essential that local people be involved in and benefit from the management of protected areas,” they write. “Local community participation in the management of protected areas is highly correlated with protected area policy compliance.”

http://www.sciencedaily.com/  Science Daily

http://www.sciencedaily.com/releases/2015/05/150501151606.htm  Original web page  at Science Daily

Categories
News

Bibliometrics: The Leiden Manifesto for research metrics

Use these ten principles to guide research evaluation, urge Diana Hicks, Paul Wouters and colleagues. Data are increasingly used to govern science. Research evaluations that were once bespoke and performed by peers are now routine and reliant on metrics. The problem is that evaluation is now led by the data rather than by judgement. Metrics have proliferated: usually well intentioned, not always well informed, often ill applied. We risk damaging the system with the very tools designed to improve it, as evaluation is increasingly implemented by organizations without knowledge of, or advice on, good practice and interpretation.

Before 2000, there was the Science Citation Index on CD-ROM from the Institute for Scientific Information (ISI), used by experts for specialist analyses. In 2002, Thomson Reuters launched an integrated web platform, making the Web of Science database widely accessible. Competing citation indices were created: Elsevier’s Scopus (released in 2004) and Google Scholar (beta version released in 2004). Web-based tools to easily compare institutional research productivity and impact were introduced, such as InCites (using the Web of Science) and SciVal (using Scopus), as well as software to analyse individual citation profiles using Google Scholar (Publish or Perish, released in 2007). In 2005, Jorge Hirsch, a physicist at the University of California, San Diego, proposed the h-index, popularizing citation counting for individual researchers. Interest in the journal impact factor grew steadily after 1995.

Lately, metrics related to social usage and online comment have gained momentum — F1000Prime was established in 2002, Mendeley in 2008, and Altmetric.com (supported by Macmillan Science and Education, which owns Nature Publishing Group) in 2011. As scientometricians, social scientists and research administrators, we have watched with increasing alarm the pervasive misapplication of indicators to the evaluation of scientific performance. The following are just a few of numerous examples. Across the world, universities have become obsessed with their position in global rankings (such as the Shanghai Ranking and Times Higher Education’s list), even when such lists are based on what are, in our view, inaccurate data and arbitrary indicators.

Some recruiters request h-index values for candidates. Several universities base promotion decisions on threshold h-index values and on the number of articles in ‘high-impact’ journals. Researchers’ CVs have become opportunities to boast about these scores, notably in biomedicine. Everywhere, supervisors ask PhD students to publish in high-impact journals and acquire external funding before they are ready. In Scandinavia and China, some universities allocate research funding or bonuses on the basis of a number: for example, by calculating individual impact scores to allocate ‘performance resources’ or by giving researchers a bonus for a publication in a journal with an impact factor higher than 15.

In many cases, researchers and evaluators still exert balanced judgement. Yet the abuse of research metrics has become too widespread to ignore. We therefore present the Leiden Manifesto, named after the conference at which it crystallized (see http://sti2014.cwts.nl). Its ten principles are not news to scientometricians, although none of us would be able to recite them in their entirety because codification has been lacking until now. Luminaries in the field, such as Eugene Garfield (founder of the ISI), are on record stating some of these principles. But they are not in the room when evaluators report back to university administrators who are not expert in the relevant methodology. Scientists searching for literature with which to contest an evaluation find the material scattered in what are, to them, obscure journals to which they lack access. We offer this distillation of best practice in metrics-based research assessment so that researchers can hold evaluators to account, and evaluators can hold their indicators to account.

1) Quantitative evaluation should support qualitative, expert assessment. Quantitative metrics can challenge bias tendencies in peer review and facilitate deliberation. This should strengthen peer review, because making judgements about colleagues is difficult without a range of relevant information. However, assessors must not be tempted to cede decision-making to the numbers. Indicators must not substitute for informed judgement. Everyone retains responsibility for their assessments.

2) Measure performance against the research missions of the institution, group or researcher. Programme goals should be stated at the start, and the indicators used to evaluate performance should relate clearly to those goals. The choice of indicators, and the ways in which they are used, should take into account the wider socio-economic and cultural contexts. Scientists have diverse research missions. Research that advances the frontiers of academic knowledge differs from research that is focused on delivering solutions to societal problems. Review may be based on merits relevant to policy, industry or the public rather than on academic ideas of excellence. No single evaluation model applies to all contexts.

3) Protect excellence in locally relevant research. In many parts of the world, research excellence is equated with English-language publication. Spanish law, for example, states the desirability of Spanish scholars publishing in high-impact journals. The impact factor is calculated for journals indexed in the US-based and still mostly English-language Web of Science. These biases are particularly problematic in the social sciences and humanities, in which research is more regionally and nationally engaged. Many other fields have a national or regional dimension — for instance, HIV epidemiology in sub-Saharan Africa.

This pluralism and societal relevance tends to be suppressed to create papers of interest to the gatekeepers of high impact: English-language journals. The Spanish sociologists that are highly cited in the Web of Science have worked on abstract models or study US data. Lost is the specificity of sociologists in high-impact Spanish-language papers: topics such as local labour law, family health care for the elderly or immigrant employment. Metrics built on high-quality non-English literature would serve to identify and reward excellence in locally relevant research.

4) Keep data collection and analytical processes open, transparent and simple. The construction of the databases required for evaluation should follow clearly stated rules, set before the research has been completed. This was common practice among the academic and commercial groups that built bibliometric evaluation methodology over several decades. Those groups referenced protocols published in the peer-reviewed literature. This transparency enabled scrutiny. For example, in 2010, public debate on the technical properties of an important indicator used by one of our groups (the Centre for Science and Technology Studies at Leiden University in the Netherlands) led to a revision in the calculation of this indicator. Recent commercial entrants should be held to the same standards; no one should accept a black-box evaluation machine.

Simplicity is a virtue in an indicator because it enhances transparency. But simplistic metrics can distort the record (see principle 7). Evaluators must strive for balance — simple indicators true to the complexity of the research process.

5) Allow those evaluated to verify data and analysis. To ensure data quality, all researchers included in bibliometric studies should be able to check that their outputs have been correctly identified. Everyone directing and managing evaluation processes should assure data accuracy, through self-verification or third-party audit. Universities could implement this in their research information systems and it should be a guiding principle in the selection of providers of these systems. Accurate, high-quality data take time and money to collate and process. Budget for it.

6) Account for variation by field in publication and citation practices. Best practice is to select a suite of possible indicators and allow fields to choose among them. A few years ago, a European group of historians received a relatively low rating in a national peer-review assessment because they wrote books rather than articles in journals indexed by the Web of Science. The historians had the misfortune to be part of a psychology department. Historians and social scientists require books and national-language literature to be included in their publication counts; computer scientists require conference papers be counted.

Citation rates vary by field: top-ranked journals in mathematics have impact factors of around 3; top-ranked journals in cell biology have impact factors of about 30. Normalized indicators are required, and the most robust normalization method is based on percentiles: each paper is weighted on the basis of the percentile to which it belongs in the citation distribution of its field (the top 1%, 10% or 20%, for example). A single highly cited publication slightly improves the position of a university in a ranking that is based on percentile indicators, but may propel the university from the middle to the top of a ranking built on citation averages.
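
To make the percentile approach concrete, here is a minimal Python sketch (not from the manifesto itself); the field names, citation counts and percentile bands are invented purely for illustration.

from bisect import bisect_left

def percentile_rank(citations, field_counts):
    # Fraction of papers in the reference field with fewer citations than this paper.
    ranked = sorted(field_counts)
    return bisect_left(ranked, citations) / len(ranked)

def percentile_band(citations, field_counts):
    # Place a paper in a percentile band within the citation distribution of its own field.
    p = percentile_rank(citations, field_counts)
    if p >= 0.99:
        return "top 1%"
    if p >= 0.90:
        return "top 10%"
    if p >= 0.80:
        return "top 20%"
    return "below top 20%"

# Hypothetical distributions: the same raw count means different things in
# low-citation mathematics and high-citation cell biology.
maths_field = [0, 1, 1, 2, 3, 3, 4, 5, 8, 40]
cell_biology_field = [2, 5, 9, 12, 18, 25, 30, 41, 60, 300]

print(percentile_band(8, maths_field))          # top 20% within mathematics
print(percentile_band(8, cell_biology_field))   # below top 20% within cell biology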

7) Base assessment of individual researchers on a qualitative judgement of their portfolio. The older you are, the higher your h-index, even in the absence of new papers. The h-index varies by field: life scientists top out at 200; physicists at 100 and social scientists at 20–30. It is database dependent: there are researchers in computer science who have an h-index of around 10 in the Web of Science but of 20–30 in Google Scholar. Reading and judging a researcher’s work is much more appropriate than relying on one number. Even when comparing large numbers of researchers, an approach that considers more information about an individual’s expertise, experience, activities and influence is best.
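
For readers unfamiliar with the number itself, the h-index is the largest h such that h of a researcher’s papers have at least h citations each. A minimal sketch, with made-up citation counts chosen only to illustrate the database dependence described above:

def h_index(citation_counts):
    # Largest h such that h papers have at least h citations each.
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# The same hypothetical researcher, as seen by two databases that index
# different literatures, ends up with different h values.
counts_database_a = [25, 18, 12, 9, 7, 4, 3, 1, 0, 0]
counts_database_b = [60, 40, 30, 22, 15, 12, 9, 8, 6, 5]

print(h_index(counts_database_a))  # 5
print(h_index(counts_database_b))  # 8

Because citation counts only accumulate, the value can never decrease, which is why it keeps rising with age even in the absence of new papers.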

8) Avoid misplaced concreteness and false precision. Science and technology indicators are prone to conceptual ambiguity and uncertainty and require strong assumptions that are not universally accepted. The meaning of citation counts, for example, has long been debated. Thus, best practice uses multiple indicators to provide a more robust and pluralistic picture. If uncertainty and error can be quantified, for instance using error bars, this information should accompany published indicator values. If this is not possible, indicator producers should at least avoid false precision. For example, the journal impact factor is published to three decimal places to avoid ties. However, given the conceptual ambiguity and random variability of citation counts, it makes no sense to distinguish between journals on the basis of very small impact factor differences. Avoid false precision: only one decimal is warranted.

9) Recognize the systemic effects of assessment and indicators. Indicators change the system through the incentives they establish. These effects should be anticipated. This means that a suite of indicators is always preferable — a single one will invite gaming and goal displacement (in which the measurement becomes the goal). For example, in the 1990s, Australia funded university research using a formula based largely on the number of papers published by an institution. Universities could calculate the ‘value’ of a paper in a refereed journal; in 2000, it was Aus$800 (around US$480) in research funding. Predictably, the number of papers published by Australian researchers went up, but they were in less-cited journals, suggesting that article quality fell.

10) Scrutinize indicators regularly and update them. Research missions and the goals of assessment shift and the research system itself co-evolves. Once-useful metrics become inadequate; new ones emerge. Indicator systems have to be reviewed and perhaps modified. Realizing the effects of its simplistic formula, Australia in 2010 introduced its more complex Excellence in Research for Australia initiative, which emphasizes quality.

Abiding by these ten principles, research evaluation can play an important part in the development of science and its interactions with society. Research metrics can provide crucial information that would be difficult to gather or understand by means of individual expertise. But this quantitative information must not be allowed to morph from an instrument into the goal. The best decisions are taken by combining robust statistics with sensitivity to the aim and nature of the research that is evaluated. Both quantitative and qualitative evidence are needed; each is objective in its own way. Decision-making about science must be based on high-quality processes that are informed by the highest quality data.

Nature 520, 429–431 (23 April 2015) doi:10.1038/520429a

http://www.nature.com/news/index.html  Nature

http://www.nature.com/news/bibliometrics-the-leiden-manifesto-for-research-metrics-1.17351  Original web page at Nature

Categories
News

Canadians baulk at reforms to health-research agency

Alain Beaudet says that funding reforms at Canada’s biomedical research agency are designed to increase collaboration. The biggest overhaul in the 15-year history of the Canadian Institutes of Health Research (CIHR) was meant to rescue biomedical researchers from the endless grant applications and Byzantine peer-review processes that had become a feature of the cash-strapped agency. “The research community was complaining bitterly,” says Alain Beaudet, president of the CIHR in Ottawa. “They begged me to make changes.” But now that reality is kicking in, many researchers worry that the changes — which modify how grants are awarded, restructure advisory boards and reallocate the money funnelled through the 13 virtual institutes that comprise the CIHR — will marginalize some fields and hurt early-career researchers. Beaudet says that the plans have been in place for some time, but many researchers — particularly those on the institutes’ scientific advisory boards — complain that the CIHR has failed to communicate the changes adequately, and that the number of simultaneous reforms is overwhelming.

“We’re a little bit stunned,” says Gillian Einstein, a cognitive neuroscientist at the University of Toronto and chair of the board that advises the CIHR’s Institute of Gender Health. “I’m not sure the groundwork was laid so we’d understand what was happening.” Each institute has its own advisory board with up to 12 members, and receives a dedicated allotment of about Can$8.5 million (US$6.7 million) from the CIHR’s Can$1‑billion annual research budget. In the 2016 budget, these outlays will be cut in half, with the savings going into a common fund. To access this new funding source, institutes will have to work together to design cross-disciplinary initiatives that have extra support from a funding partner such as a charity, institution or company. Beaudet says that the CIHR will be responsible for finding many of these partners.

The CIHR also plans to eliminate most of the scientific advisory boards, leaving only three or four panels, which will advise several institutes each. An internal panel is still evaluating the plan, which would not take effect before April 2016. Nearly all of the advisory boards are protesting the changes. “If you’re doing well and have some vision, and someone took half your toolset away, I’d say the rug was pulled out,” says Anthony Jevnikar, a nephrologist at Western University in London, Ontario, who chairs the advisory board for the Institute of Infection and Immunity.

Feathers are also being ruffled by changes to the CIHR’s system for awarding grants for proposals submitted by researchers. In July, the agency plans to hand out the first set of awards under a pilot system that divides about half of its research budget between two mechanisms. One of these, the Foundation Scheme, gives seven years of guaranteed funding to established researchers and five years to early-career investigators. Grant recipients can use the money for any project, but are barred from receiving other CIHR funding. The second mechanism, the Project Scheme, awards smaller grants for specified work over a shorter period.

But researchers who have been reviewing the first set of applications under the new system see potential problems, particularly for early-career researchers, who often have difficulty showing enough preliminary data to justify specific projects or enough of a track record to win an open-ended grant. New investigators submitted about 40% of the 1,366 grant applications for the Foundation Scheme’s pilot round, but they were involved with less than 20% of the 467 applications that made it through the first phase of peer review. “Young researchers are left out in the cold,” says Jim Woodgett, a molecular biologist at Mount Sinai Hospital in Toronto.

Some institutes also feel imperilled by the changes. Researchers supported by the Institute of Aboriginal Peoples’ Health (IAPH) say that they have few funding options outside the CIHR, and would not find it easy to interest external partners in providing support so that they could receive money through the cross-disciplinary common fund. Their field is relatively new and they are under-represented among public-health researchers, so they feel disadvantaged if they have to compete against other institutes for money and for spots on an advisory board that will also oversee other institutes. “We’re losing our distinctive voice,” says Frederic Wien, a sociologist at Dalhousie University in Halifax who studies aboriginal health.

Such concerns are exactly why the reforms are taking place, says Beaudet: “There were not enough collaborations between institutes.” For instance, he says, the other 12 institutes assumed that they did not need to worry about aboriginal peoples’ health, because the IAPH would cover all relevant research. The other institutes’ inattention to indigenous peoples’ health is a huge problem, Beaudet adds. Wien says that the CIHR has not been responsive to complaints over the past several years. He and others are also concerned that the agency might eliminate some institutes altogether. The 13 divisions have existed since the CIHR was founded, but Beaudet says that, by law, external and internal panels must review the institutes every five years; it has always been possible that some could be eliminated.

Nature 520, 272–273 (16 April 2015) doi:10.1038/520272

http://www.nature.com/news/index.html  Nature

http://www.nature.com/news/canadians-baulk-at-reforms-to-health-research-agency-1.17319  Original web page at Nature

Categories
News

US societies push back against NIH reproducibility guidelines

‘Premature’ rules for preclinical research need more flexibility and greater community involvement, say scientific society leaders. Many journals have introduced checklists to comply with NIH guidelines for reproducibility in research. But some societies are not happy with the “one-size-fits-all” approach. Guidelines from the US National Institutes of Health (NIH) that aim to improve the reproducibility of preclinical research are premature, burdensome and may be ineffective, an influential group of scientific societies says. The NIH introduced its guidelines last November in the wake of reports that only a fraction of landmark biomedical findings could be reproduced. The guidance includes aims such as making crucial data sets available on request, and providing thorough descriptions of cell lines, animals, antibodies and other reagents used in experiments. Authors must also state whether they have taken steps such as blinding and randomization. Journals including Nature have adopted the guidelines in the form of checklists to ensure good reporting standards in experiments.

But last week, the president of the Federation of American Societies for Experimental Biology (FASEB), Joseph Haywood, called for a step back from “one size fits all” guidelines in an e-mail to NIH principal deputy director Lawrence Tabak. FASEB, based in Bethesda, Maryland, represents 27 societies that together have some 120,000 members and publish 62 peer-reviewed journals. Haywood argues that the guidelines could make preparing, submitting and reviewing research papers more burdensome, and that the checklists and other requirements imposed by journals could make it more difficult to recruit high-quality peer reviewers. There can be a good case not to blind or randomize some experiments or to have smaller sample sizes, especially when studies are labelled as ‘exploratory’ and will not be used as evidence to launch clinical trials, says Jonathan Kimmelman, a biomedical ethicist who studies clinical-research requirements at McGill University in Montreal, Canada. The NIH guidelines do not dictate how experiments should be designed, however: they just stipulate that experimental methods be explicitly stated, he notes.

FASEB members are worried that they might be pressured to report details that would not affect the experiment just for the sake of reporting, says Yvette Seger, FASEB’s director of science policy, adding: “Would people be so beholden to a checklist that they would forget what kind of science they were looking at?” Much of the concern, she says, is because it is unclear exactly how the guidelines will be implemented and how much flexibility will be allowed. There is also uncertainty around what kinds of experiment count as preclinical research. FASEB leaders think that action should be taken to improve reproducibility, and are grateful that the NIH is tackling the issue, but there should have been more discussion with the scientific community, Seger says. The NIH released the guidelines after a workshop last June with journal editors from more than 30 titles (including Science, Nature, and Cell). “We would have loved to have been at the table,” she says.

More than 80 journals, including at least three run by FASEB member societies, have endorsed the NIH guidelines. But many journals have not. The American Chemical Society (which publishes titles such as ACS Chemical Biology) and the Royal Society Journals (which include Biology Letters) have decided not to endorse them, saying they have their own procedures for ensuring reproducibility. The Biophysical Journal, published by the Cell Press group, published an editorial last week saying that while it agreed with the guidelines’ intent, it felt they were “not pertinent or applicable to the types of science” the journal published. The Proceedings of the National Academy of Sciences is still deliberating about whether to sign up. The NIH’s Tabak says he is encouraging journals to sign the guidelines, and that he has already seen improvements in the scientific literature because of them, such as authors explicitly stating that an animal study did not have statistical power to assess a hypothesis. The journals that came together to draft a set of general principles for improving transparency and reproducibility should be commended, he says. But he adds that developing guidelines is an iterative process that requires continual refinement. “Journals that sent us the letters should be commended also. They are paying attention, they are thinking about it. This is a good thing.”

Neuroscientist Ulrich Dirnagl at the Charité Medical University in Berlin, who is editor-in-chief of the Journal of Cerebral Blood Flow and Metabolism, thinks that the part journals have to play is relatively minor. “The cleanup that is needed must come from scientists themselves, nudged by the funders,” he says. The crucial factor is that researchers take on the spirit of the guidelines, says Glenn Begley, chief scientific officer at pharmaceutical firm TetraLogic in Malvern, Pennsylvania, who co-authored a 2012 Nature commentary calling for more rigorous standards in biomedical research. “If ‘improved rigour’ merely becomes a box-checking exercise, rather than a fundamental embrace of good scientific method that is inculcated into every experiment, we will have failed,” he says.

Nature doi:10.1038/nature.2015.17354

http://www.nature.com/news/index.html  Nature

http://www.nature.com/news/us-societies-push-back-against-nih-reproducibility-guidelines-1.17354  Original web page at Nature

Categories
News

To save an entire species, all you need is $1.3 million a year

The international team of researchers includes scientists from the Max-Planck Odense Center at the University of Southern Denmark, Imperial College London, Australia’s University of Queensland, the American Bird Conservancy, the IUCN SSC Conservation Breeding Specialist Group, the International Species Information System, the World Association of Zoos and Aquariums, and Burak Güneralp, research assistant professor at Texas A&M. The team’s work is published in the current issue of Current Biology. The researchers developed a “conservation opportunity index” using measurable indicators to quantify the possibility of achieving successful conservation of a species, both in its natural habitat and by establishing insurance populations in zoos. They computed the cost of, and opportunities for, conserving 841 species of mammals, reptiles, birds and amphibians listed by the Alliance for Zero Extinction (AZE) as restricted to single sites and categorized as Endangered or Critically Endangered on the IUCN Red List.

The total cost: only $1.3 billion per year to safeguard all 841 species, truly a bargain basement price by any standard, the researchers note. Of this, a little over $1.1 billion per year would go towards conserving the species in their natural habitats and the rest for complementary management in zoos. “Although the cost seems high, safeguarding these species is essential if we want to reduce the extinction rate by 2020,” said Prof. Hugh Possingham from the University of Queensland. “When compared to global government spending on other sectors (such as U.S. defense spending, which is more than 500 times greater), an investment in protecting high biodiversity value sites is minor.” “AZE sites are arguably the most irreplaceable category of important biodiversity conservation sites,” notes assistant professor Dalia A. Conde, lead author on the paper at the Max-Planck Odense Center at the University of Southern Denmark. “Conservation opportunity evaluations like ours show the urgency of implementing management actions before it is too late. It is imperative to rationally determine actions for species that we found to have the lowest chances of successful habitat and zoo conservation actions.”
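
As a rough, back-of-the-envelope reconciliation of these totals with the headline figure (a sketch only: it assumes the cost is spread evenly across the 841 species, which the study itself does not claim):

total_budget = 1.3e9     # US$ per year for all 841 species (habitat plus zoo programmes)
habitat_budget = 1.1e9   # US$ per year for in-situ conservation alone
n_species = 841

print(habitat_budget / n_species)  # ~1.31 million US$, roughly the "$1.3 million a year" per species of the headline
print(total_budget / n_species)    # ~1.55 million US$ per species once zoo programmes are included
print(total_budget * 500)          # ~650 billion US$, the order of the defence-spending comparison quoted above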

“Habitat loss and fragmentation caused by human activities including expansion of urban areas is a major factor putting at risk many of the species in the AZE list,” adds Güneralp, co-author of the study. There are about 17,000 species that are now threatened with extinction, and there have been five mass extinctions — including the one that killed the dinosaurs. Because of habitat loss and fragmentation, many scientists believe that we are now living during the sixth mass extinction period. While the study indicated that 39 percent of the species scored high for conservation opportunities, it also showed that at least 15 AZE species are in imminent danger of extinction given their low conservation opportunity index. This low index is due to one or more factors, such as a high probability of the habitat becoming urbanized, political instability at the site, and/or high costs of habitat protection and management.

Additionally, the opportunity for establishing an insurance population in zoos for these 15 species is low, due either to high costs or to a lack of breeding expertise for the species. “Our exercise gives us hope for saving many highly endangered species from extinction, but actions need to be taken immediately and, for species restricted to one location, an integrative conservation approach is needed,” says Prof. John E. Fa of Imperial College. The paper stresses the importance of integrating protection of the places these particular species inhabit with complementary zoo insurance population programs. According to Onnie Byers, Chair of the IUCN SSC Conservation Breeding Specialist Group, “The question is not one of protecting a species in the wild or in zoos. The One Plan approach — effective integration of planning, and the optimal use of limited resources, across the spectrum of management from wild to zoo — is essential if we are to have a hope of achieving the Aichi Biodiversity Targets.”

Nate Flesness, scientific director of the International Species Information System, stresses that “we want to thank the more than 800 zoos in 87 countries which contribute animal and collection data to the International Species Information System, where the assembled global data enables strategic conservation studies like this.” Markus Gusset of the World Association of Zoos and Aquariums added that “Actions that range from habitat protection to the establishment of insurance populations in zoos will be needed if we want to increase the chances of species’ survival.”

http://www.sciencedaily.com/  Science Daily

http://www.sciencedaily.com/releases/2015/03/150316160425.htm Original web page at Science Daily

Categories
News

World’s whaling slaughter tallied

The first global estimate of the number of whales killed by industrial harvesting last century reveals that nearly 3 million cetaceans were wiped out in what may have been the largest cull of any animal — in terms of total biomass — in human history. The devastation wrought on whales by twentieth-century hunting is well documented. By some estimates, sperm whales have been driven down to one-third of their pre-whaling population, and blue whales have been depleted by up to 90%. Although some populations, such as minke whales, have largely recovered, others — including the North Atlantic right whale and the Antarctic blue whale — now hover on the brink of extinction. But researchers had hesitated to put a number on the global scale of the slaughter. That was largely because they did not trust some of the information in the databases of the International Whaling Commission, the body that compiles countries’ catches and that manages whaling and whale conservation, says Robert Rocha, director of science at the New Bedford Whaling Museum in Massachusetts.

Rocha, together with fellow researchers Phillip Clapham and Yulia Ivashchenko of the National Marine Fisheries Service in Seattle, Washington, has now done the maths, in a paper published last week in Marine Fisheries Review (R. C. Rocha Jr, P. J. Clapham and Y. V. Ivashchenko Mar. Fish. Rev. 76, 37–48; 2014). “When we started adding it all up, it was astonishing,” Rocha says. The researchers estimate that, between 1900 and 1999, 2.9 million whales were killed by the whaling industry: 276,442 in the North Atlantic, 563,696 in the North Pacific and 2,053,956 in the Southern Hemisphere.
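
For reference, the regional figures sum to 276,442 + 563,696 + 2,053,956 = 2,894,094, consistent with the rounded total of 2.9 million.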

Other famous examples of animal hunting may have killed greater numbers of creatures — such as hunting in North America that devastated bison and wiped out passenger pigeons. But in terms of sheer biomass, twentieth-century whaling beat them all, Rocha estimates. “The total number of whales we killed is a really important number. It does make a difference to what we do now: it tells us the number of whales the oceans might be able to support,” says Stephen Palumbi, a marine ecologist at Stanford University in California. He thinks that 2.9 million whale deaths is a “believable” figure.

http://www.nature.com/news/index.html  Nature

http://www.nature.com/news/world-s-whaling-slaughter-tallied-1.17080  Original web page at Nature

Categories
News

Online debate erupts to ask: is science broken?

A debate this week on ways to improve the practice of science quickly spread to social media. The event at University College London, called ‘Is science broken? If so, how can we fix it?’, included claims that some dubious laboratory practices, such as tweaking statistical analyses to make results seem significant, are widespread. One suggested solution — requiring scientists to register their experimental design and planned analysis with a journal before running any tests — received general support. Chris Chambers, a neuroscientist at Cardiff University, UK, opened the discussion with a call for widespread preregistration, a step that would essentially require scientists to stick to their original protocols throughout the experiments and present all results, even those that failed to reach significance. In 2013, Chambers helped to implement a Registered Reports format at the journal Cortex, where he is now the Registered Reports editor. He adds that at least 13 other journals, including Experimental Psychology and AIMS Neuroscience, have adopted the preregistration model, and Royal Society Open Science is scheduled to do so later this year.

Scientists who wish to publish in Cortex first submit a detailed plan for the experiment and the statistical analysis. If the proposal passes peer review, the journal promises to publish the results, even if they do not show an effect, provided that the experiment actually follows the original plan. Chambers says that the traditional publishing model generally encourages scientists to chase exciting results instead of focusing on sound experimental practice. “Journal-based preregistration seeks to solve these problems by making publishing decisions before the results are known,” he says. “I want to look back on this period in 20 years and be able to say that we put in place a publishing model that rewarded transparency and reproducibility,” he says. Some panellists, including Sam Schwarzkopf, an experimental psychologist at University College London (UCL), wondered whether a strict preregistration system might impose too many limits on scientific analysis. In an online comment about the event, Schwarzkopf said that he is “wary” of any system that prevents scientists from thoroughly examining their results. “I think except for the simplest designs there will always be things you can think of only when you see the data,” he wrote.

Schwarzkopf posted his comment on a blog written by Aiden Horner, a UCL cognitive neuroscientist who attended the event. In an interview, Horner said that it remains to be seen whether preregistration will actually make experimental results easier to verify and replicate. “The only way to collect that data is to introduce preregistration as an option and assess its impact,” he says. Overall, he says he left the debate feeling optimistic that scientists could find a way to address its core problems. “New problems emerge in science, but they’re dealt with over time.”

Nature 519, 393 (26 March 2015) doi:10.1038/519393f

http://www.nature.com/news/index.html  Nature

http://www.nature.com/news/online-debate-erupts-to-ask-is-science-broken-1.17156  Original web page at Nature

Categories
News

Experts question China’s panda survey

The number of giant pandas living in the wild has risen by a sixth over the past decade, according to a long-anticipated survey unveiled by China’s State Forestry Administration on 28 February. But experts say it is unclear if the results can be compared to previous national counts. The argument is bound to re-ignite the debate over whether the iconic bear should still be categorised as an endangered species. At a press conference in Beijing, deputy forestry minister Chen Fengxue said thousands of people combing 4.36 million hectares of forests in Sichuan, Shaanxi and Gansu provinces from 2011-2014 had found evidence of 1,864 pandas living in the wild. The last survey, conducted in 1998-2002, reported 1,596 pandas. The rise, said Chen, is a result of conservation policies which encourage forest protection and restoration. The survey shows that about two-thirds of pandas reside in the country’s 67 nature reserves — including 27 new reserves in the past decade. Overall, the area in which wild pandas live has increased by 11.8% to 2.58 million hectares. But the new study — China’s fourth national panda survey since the 1970s — searched an area around 72% larger than the previous count, making it hard to compare the two figures.
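
For reference, 1,864 divided by 1,596 is roughly 1.17, consistent with the rise of about a sixth quoted above.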

“We really need to know how panda populations have changed in the same area that was sampled last time,” says David Garshelis, a conservation biologist at Minnesota Department of Natural Resources in Grand Rapids and co-chair of the Bear Specialist Group at the International Union for Conservation of Nature (IUCN). Garshelis also wants to know the survey’s margin of error which — as in the 1998-2002 study — was not given. Chinese officials were not immediately available for comment. The survey has important implications for pandas’ conservation status, says Garshelis, who spoke to Nature before the results had been announced, but on the understanding that the survey would report a number around 1,800. If the IUCN does agree the panda population is increasing, there would be a five-year waiting period to assess the stability of the situation before the bear’s conservation status was downgraded to “vulnerable”, he says. IUCN only categorises a species as “endangered” if its total mature population numbers less than 2,500 and that population is declining (or if the largest single population cluster numbers below 250).
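
As a sketch of the listing logic just described (the thresholds are the ones quoted above; the input values are purely illustrative, and note that the criteria refer to mature individuals, whereas the survey’s 1,864 figure counts all wild pandas):

def meets_endangered_criteria(mature_population, population_declining, largest_subpopulation):
    # Two routes to "endangered" status, as summarised in the article: a small and
    # declining mature population, or a very small largest population cluster.
    small_and_declining = mature_population < 2500 and population_declining
    tiny_largest_cluster = largest_subpopulation < 250
    return small_and_declining or tiny_largest_cluster

# Illustrative values only, not the survey's results.
print(meets_endangered_criteria(2000, True, 400))   # True: small population that is still declining
print(meets_endangered_criteria(2000, False, 400))  # False: same size but stable, so a downgrade could be considered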

Critics have often questioned the integrity of China’s panda surveys, arguing that they are highly influenced by the political motives of government officials and conservation groups alike. “It’s a fine balancing act, so officials can claim the credit for rising panda populations but the number is not so high that it diminishes conservation funds,” says a Beijing-based researcher who works closely with the forestry ministry and asked not to be named.

There are also concerns about the methods used to estimate panda numbers. The survey combined two techniques, since the bears’ reclusive nature and mountainous habitat make direct counting impossible. The first method, genotyping DNA extracted from mucus in faecal samples, can give higher numbers than the more traditional approach: studying the length of intact bamboo fragments left in excrement, in order to identify individuals by the size of their bites. In areas where panda populations are relatively dense, the bite-size method can “grossly underestimate” the number of animals, says Wei Fuwen, deputy director of the Chinese Academy of Sciences (CAS) Institute of Zoology in Beijing and chair of the scientific committee of the fourth panda survey. A DNA-based study in 2006 showed, for example, that there were 66 different bears in the Wanglang nature reserve in Sichuan province, more than double the 27 found in the reserve by bamboo inspection in the 1998-2002 survey. In more sparsely populated habitats, the two counting methods are likely to give more-similar results, Wei says. The new survey relied mainly on inspection of bamboo in faeces; only 1,308 droppings were subjected to DNA analyses, representing 336 individuals. Wei says the DNA method, when used, did give a higher count, but neither he nor the forestry ministry would reveal the difference.

One thing that even the fiercest critics agree on is that the panda population is far from safe, even if it is actually increasing. “While earlier threats of poaching and deforestation have largely become a thing of the past, panda habitats are increasingly fragmented by roads, railways, dams and mines,” says Ouyang Zhiyun, deputy director of CAS’ Research Centre for Eco-Environmental Sciences in Beijing. The survey shows that, while their total habitable area has increased in the past decade, pandas now dwell in 30 isolated populations separated by insurmountable physical barriers — up from 15 in the last count. Twenty-two of these populations each have less than 30 individuals, and are at high risk of extinction. “This could have devastating consequences,” says Fan Zhiyong, director of the conservation group WWF’s China species programme. “There is an urgent need to stop further habitat fragmentation and to construct ecological corridors to connect isolated panda populations.”

Nature doi:10.1038/nature.2015.17020

http://www.nature.com/news/index.html Nature

http://www.nature.com/news/experts-question-china-s-panda-survey-1.17020 Original web page at Nature

Categories
News

Clinical-trial specialist could be next FDA chief

When Peter Pitts met cardiologist Robert Califf, he was struck by two thoughts: “Red hair, red moustache,” says Pitts, who was then an associate commissioner at the US Food and Drug Administration (FDA). “He really stood out in a crowd.” But when Califf started to talk, Pitts was impressed by how everyone else fell silent to listen. “He doesn’t say things unless he has something to say,” says Pitts. “He has a reputation for not wasting people’s time.” For the past week, people have had plenty to say about Califf. When FDA commissioner Margaret Hamburg announced on 5 February that she would resign in March, it rekindled familiar rumours that Califf — appointed a deputy commissioner at the agency late last month — was being groomed to take her place. Califf, a soft-spoken Southerner with a passion for improving clinical trials, will take a leave of absence from Duke University in Durham, North Carolina, to step into the deputy commissioner post at the end of February. “He is a big get for the FDA,” says Ellen Sigal, founder and chair of Friends of Cancer Research, a think tank and advocacy group in Washington DC. “People are very excited about him coming.” Sigal and others are enthused about Califf’s wide experience in running clinical trials at Duke, where he is currently vice chancellor of clinical and translational research.

His earliest foray into large clinical trials was in the 1980s, at the behest of friend and fellow cardiologist Eric Topol, now at the Scripps Research Institute in La Jolla, California. The two met on medical fellowships at the University of California, San Francisco, where they played football in Kezar Stadium — or, as they affectionately called it, dog-shit field. (“If you stepped in the wrong place …” says Topol.) Califf and Topol’s collaboration led to a landmark study, codenamed GUSTO, of ‘clot-busting’ drugs given to heart-attack patients to break up blood clots. The study swelled to 41,000 patients from more than a thousand hospitals — with patients enrolled from every continent except Antarctica, says Topol. The trial showed that a clot buster made by Genentech of South San Francisco, California, reduced the risk of death following heart attack, and led to a change in treatment guidelines. Califf went on to found and lead the Duke Clinical Research Institute, which now employs more than a thousand people who manage clinical trials.

That clinical-trial expertise will be a particular boon, says Sigal, because the FDA will increasingly be asked to adapt to the growing era of ‘precision medicine’ — treatments tailored to an individual patient’s specific characteristics, such as genetic makeup. Such therapies often call for a different approach to clinical trials, such as tailoring trials for smaller numbers of patients, and matching patients’ genetic profiles to treatments. “I think this is one of the few times in history where one could think about making major changes in how clinical trials are done,” says Califf, who points to the development of electronic medical records as one new tool that could be used to streamline clinical trials. Califf’s work has also put him in close contact with the pharmaceutical industry. For some, those ties are likely to be too close for comfort. Public Citizen, a consumer activist group in Washington DC, has charged that the FDA under Hamburg — who launched an FDA initiative to speed particularly promising drugs through the review process — is too “cozy” with industry. But that industry experience has given him a good understanding of how medical technologies are developed, says Califf. “I have spent a lot of time at the interface of industry and academia and the FDA,” he says. “I feel like that’s a good background for this kind of work.” If Califf were appointed as the new FDA chief, he would have to be approved by Congress before he could take up the post. With elections looming and lawmakers heavily divided, it is hard to predict how they would vote. But Pitts, who is now president of the Center for Medicine in the Public Interest, a non-profit organization in New York, says that many in the FDA are eager to have Califf on board. “Let me put it this way,” he says. “Rob is one of the few guys I would go back to the agency for.” Nature doi:10.1038/nature.2015.16876

http://www.nature.com/news/index.html  Nature

http://www.nature.com/news/clinical-trial-specialist-could-be-next-fda-chief-1.16876  Original web page at Nature

Categories
News

US women progress to PhD at same rate as men

The received wisdom that women are more likely than men to drop out of academia at every stage of a scientific career is now false, psychologists say — at least in the United States. An analysis of survey data finds that, since the 1990s, men and women in the country have converted their bachelor’s degrees into science, mathematics and engineering PhDs at roughly equal rates. The study adds to encouraging signs that fewer women are dropping out of science careers than used to be the case. Working with Jonathan Wai, a psychologist at Duke University in Durham, North Carolina, the study’s lead author, David Miller of Northwestern University in Evanston, Illinois, used national surveys of college graduates and doctoral recipients to track cohorts of US science students. More than 4% of the men who earned bachelor’s degrees in the physical sciences, engineering, mathematics and computer science in the 1970s went on to earn a PhD a decade later, the researchers found — compared with less than 3% of the women. But for the classes of the 1990s, both men’s and women’s conversion rates were around 3%. In the life sciences, the study found similar convergence, but with less statistical confidence that the gap has closed.

Previous studies had underestimated this trend, Miller says, because they directly compared women’s share of US bachelor’s degrees with their share of doctorates in the same field a decade later. This can be misleading because many US doctorates are now earned by researchers who transferred from different fields, or who obtained their bachelor’s degrees outside the United States. But some people are not sure that the findings are wholly positive: the data suggest that, in many fields, the gender gap has closed mainly because US men are becoming less likely to progress to PhDs. “I don’t see a basis for saying things are getting better for women — just worse for men”, says Curt Rice, a linguist at the University of Tromsø in Norway who is head of Norway’s Committee on Gender Balance and Diversity in Research. But Miller says the data are ambiguous: they could suggest that women have made progress despite falling PhD completion rates. “These numbers show how unattractive PhDs are becoming for both men and women,” argues Simone Buitendijk, vice-rector of Leiden University in the Netherlands. Men still hold three-quarters of US PhDs in engineering, mathematics and the physical sciences. But that is largely because many more men than women applied to study them at undergraduate level. “It’s much earlier in life that women think they shouldn’t go into engineering or mathematics,” comments Shulamit Kahn, a labour economist at Boston University in Massachusetts who studies gender differences in academic science but was not involved in the study. “We have to focus on how they perceive these subjects at high school.” Some progress has been made at other stages of the pipeline.
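The difference between these two ways of measuring the pipeline is easy to miss, so here is a toy calculation — a minimal sketch with invented numbers, not the study’s data — showing how an influx of field-switchers and overseas-trained doctorate recipients can depress women’s share of PhDs even when women and men within a cohort convert their degrees at identical rates.

```python
# Illustrative sketch (invented numbers, not the study's data) contrasting the
# "share comparison" Miller criticizes with the cohort conversion rates his
# study tracks.

# Hypothetical cohort of US bachelor's graduates in one field
bachelors = {"women": 20_000, "men": 40_000}

# Hypothetical PhDs earned a decade later by members of that same cohort
phds_from_cohort = {"women": 600, "men": 1_200}

# Hypothetical PhDs in the same field earned by people from outside the cohort
# (field-switchers and holders of non-US bachelor's degrees), mostly men here
phds_from_elsewhere = {"women": 200, "men": 1_000}

# Method 1: compare women's share of bachelor's degrees with their share of
# PhDs a decade later (the approach the study says can mislead)
share_bachelors = bachelors["women"] / sum(bachelors.values())
total_phds = {k: phds_from_cohort[k] + phds_from_elsewhere[k] for k in bachelors}
share_phds = total_phds["women"] / sum(total_phds.values())
print(f"Women: {share_bachelors:.0%} of bachelor's degrees, {share_phds:.0%} of PhDs")
# Output suggests heavy female attrition...

# Method 2: track each cohort's actual bachelor's-to-PhD conversion rate
for group in bachelors:
    rate = phds_from_cohort[group] / bachelors[group]
    print(f"{group}: {rate:.1%} of bachelor's graduates earned a PhD")
# ...yet within the cohort both groups convert at the same 3% rate.
```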

Women still make up less than one-quarter of professors, even though they earn almost half the doctorates in the United States, according to the US National Science Foundation (NSF). But those figures hide a mixed story. Kahn’s research has cast doubt on the existence of a leaky pipeline for the physical sciences post-PhD. In these fields, women are now just as likely as men to have a tenure-track appointment within five years of their PhD, she says. Instead, it is in economics, social sciences, life sciences and psychology that women drop out at greater rates after obtaining their PhD, Kahn says. “It is the very fields in which women are best represented that one sees attrition between baccalaureate and assistant professorship,” says Stephen Ceci, a psychologist at Cornell University in Ithaca, New York. Last year, he, Kahn and their colleagues wrote a review of the field, flagging up the difference in gender attrition rates between the physical and life sciences. “We find women are flocking to other health fields outside academia, such as becoming doctors, dentists or vets,” Kahn adds. Broad statistics may also be hiding persistent disadvantages for women, especially because not all PhDs and jobs are equally prestigious, says Rice. “The research is important, but they have an insufficiently nuanced conception of the pipeline,” he says. But even here there are hopeful signs, Miller says.

In a separate, unpublished analysis of an NSF survey of earned doctorates, he found that, between 2006 and 2012, women’s representation among PhD earners at elite institutions was slightly higher than among all US universities — although those figures include researchers from other countries, so don’t necessarily apply to the domestic bachelor’s-to-PhD pipeline. Still, Miller agrees that gender bias could be masked by broad statistics. In recent years, for example, studies have found that elite labs tend to hire more men than women, and that professors offer lower salaries, and are less willing mentors, to students with female names on their CVs. “Gender equality in numbers doesn’t necessarily mean gender equality in opportunities,” he says.

Nature doi:10.1038/nature.2015.16939

http://www.nature.com/news/index.html  Nature

http://www.nature.com/news/us-women-progress-to-phd-at-same-rate-as-men-1.16939  Original web page at Nature

Categories
News

Bigger is not better when it comes to lab size

To publish the most papers, labs should ideally have 10 to 15 members, according to a much-discussed study in PeerJ PrePrints. Adding more and more graduate students and postdocs beyond that number does not guarantee a continued rise in high-impact papers, the study found, partly because the extra workers tend to be much less productive than the principal investigator (PI). Mark Pallen, who heads a microbiology lab at the University of Warwick, UK, tweeted “Nice that PIs matter!”, whereas Jessica Chong, a geneticist and postdoc at the University of Washington in Seattle, offered a more sceptical take on Twitter. The study, by Adam Eyre-Walker, a geneticist at the University of Sussex, UK, and his colleagues, focused on 398 PIs in the biological sciences in the United Kingdom, and compared the size of their groups with the number of publications over a five-year period. On average, the PIs in the study had about six other people in the lab, but that number ranged from 0 to 30. The authors calculated that the PI accounted for slightly more than 10 papers over 5 years, and each extra lab member increased productivity by just less than 2 papers.

A closer look suggested that each postdoc added about 3.5 papers and each PhD student contributed just over 1 paper. Or as Pallen summed up on Twitter: “PIs 5 X more productive than other research group members; each postdoc worth 3 PhD students.” However, when citations and the impact factor of the papers were added to the analysis, the importance of extra people in the lab seemed to dwindle once labs reached 10 to 15 members. The authors conclude that universities and other institutions should consider these trends when making spending decisions. “It might be more productive to invest in new permanent members of faculty rather than additional postdocs and PhD students,” they wrote.
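Taken at face value, the figures quoted above describe a simple linear model of group output. The sketch below — an illustrative approximation built only from those rounded coefficients, not the authors’ fitted model or data — shows how they translate into expected papers over five years for labs of different make-ups.

```python
# Back-of-the-envelope version of the linear relationship described above:
# roughly 10 papers over 5 years attributable to the PI, about 3.5 per postdoc
# and just over 1 per PhD student. Coefficients are the rounded figures quoted
# in the text, not the study's fitted values.

PI_PAPERS = 10.0       # papers over five years attributable to the PI
POSTDOC_PAPERS = 3.5   # additional papers per postdoc over the same period
PHD_PAPERS = 1.0       # additional papers per PhD student over the same period

def expected_papers(n_postdocs: int, n_phd_students: int) -> float:
    """Expected five-year paper count for one lab under this toy linear model."""
    return PI_PAPERS + POSTDOC_PAPERS * n_postdocs + PHD_PAPERS * n_phd_students

# An average-sized group (about six members) versus a much larger one
print(expected_papers(n_postdocs=2, n_phd_students=4))    # 21.0
print(expected_papers(n_postdocs=6, n_phd_students=12))   # 43.0
# Raw output keeps climbing with size; the citation analysis, however, suggests
# the gains in impact flatten out beyond roughly 10 to 15 members.
```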

In an interview with Nature, Pallen said that some of the paper’s conclusions make sense. “Postdocs are clearly more productive than PhD students in most areas of biology, and it is therefore a good idea to get project grant funding as soon as possible in one’s academic career” to afford those postdocs, he says. But Pallen also says that he is “dubious” about the suggestion that 10 to 15 members is the ideal lab size for impact. Eyre-Walker says he is not sure why impact might peak for labs of this size, but both he and Pallen note that the analysis did not include enough large groups to draw firm conclusions about ideal lab size.

Some commenters on Twitter questioned the statistical methods used in the study. Ben Moore, who studies computational biology at the University of Edinburgh, UK, called for “more cautious interpretation” and noted that the line correlating lab size and publication number only very roughly fits the data. Eyre-Walker acknowledges that some of the statistical critiques have merit, and that there is a lot of variability in productivity between labs, even those of similar sizes. But Eyre-Walker (who has one postdoc and two PhD students in his own lab) stands by the main conclusion: PIs are the main drivers of productivity in the lab. “People sometimes characterize PIs as indulgently putting their name on every paper they can, whether or not they had input,” he says. “But in the UK, groups are generally small, and I suspect a PI genuinely has a major input into the research that happens in their lab.”

Nature 518, 141 (12 February 2015) doi:10.1038/518141f

http://www.nature.com/news/index.html Nature

http://www.nature.com/news/bigger-is-not-better-when-it-comes-to-lab-size-1.16866 Original web page at Nature

Categories
News

US precision-medicine proposal sparks questions

During his State of the Union address to Congress on 20 January, President Barack Obama announced a programme called the Precision Medicine Initiative. “I want the country that eliminated polio and mapped the human genome to lead a new era of medicine —  one that delivers the right treatment at the right time,” he said. The White House is remaining tight-lipped about the details of the programme, declining to answer questions from Nature — as is the US National Institutes of Health (NIH), which is expected to be a key partner in the effort. But Kay Holcombe, senior vice-president for science policy at the Biotechnology Industry Organization (BIO) in Washington DC, says that her conversations with the NIH suggest that the agency will seek to match genome information with many other types of data, such as health records and blood-test results. The agency seems to have been planning the effort for some time, listing ‘precision medicine’ as one of its four priorities in its 2015 budget proposal; another was ‘big data’. Other government agencies are also expected to participate, as may some private companies. There is no word on how much the initiative will cost, but details are likely to trickle out as Obama prepares his budget request for fiscal year 2016, which is due to be released on 2 February.

A major question is whether the plan will run alongside or merge with a similar proposal being discussed by members of the US House of Representatives’ Energy & Commerce Committee. The committee’s 21st Century Cures plan seeks to speed up the translation of research advances into treatments, and personalized medicine is one potential element of the effort. Lawmakers are expected to release a first draft of that proposal later this month. Both the White House effort and the House plan would be extremely expensive, but they might not be as difficult to carry out as they first seem. Rather than recruiting all of their participants anew, Holcombe says that both initiatives could collect data and recruit participants from ongoing longitudinal studies. These include the Million Veteran Program at the US Department of Veterans Affairs, which seeks to understand how genes affect health, and the NIH’s 67-year-old Framingham Heart Study at Boston University in Massachusetts, which aims to identify risk factors for heart disease.

If the federal programme takes the form of a public–private partnership, then private insurance companies and health systems could contribute data as well. David Ledbetter, chief scientific officer at Geisinger Health System in Danville, Pennsylvania, says that his company might be willing to join such an effort. Geisinger aims to recruit up to 200,000 of its 3 million customers to have their exomes — parts of the genome that code for proteins — sequenced and integrated with their health records. The company now has completed sequences from about 20,000 patients, and it is preparing to provide each of these people with an analysis of his or her health risks.  “My personal attitude is always to try to collaborate rather than duplicate and compete in an inefficient way,” Ledbetter says. Still, standardizing data collection and patient recruitment across the country will be extremely difficult, especially if ongoing studies are rolled into the effort. Such complexities sank the NIH’s 100,000-person National Children’s Study, which the agency cancelled last month after 14 years of delays. Informed consent and data security will present additional challenges; the roll-out of the UK National Health Service’s care.data project, which would make health information from most patients in England available for research, has been delayed for several months for this reason.

Nevertheless, with personalized medicine in vogue, studies are likely to continue to grow in both number and magnitude and in both the public and private sectors. The Precision Medicine Initiative could once again pit NIH director Francis Collins, who headed the Human Genome Project, against his old private-sector rival, Craig Venter. Last March, Venter launched a company called Human Longevity in San Diego, California, with the goal of sequencing one million human genomes by 2020. The effort is gaining steam: on 14 January, Venter announced that his company would be sequencing tens of thousands of genomes for Genentech, a biotechnology company based in South San Francisco, California, that is searching for new drug targets.

Nature doi:10.1038/nature.2015.16774

http://www.nature.com/news/index.html  Nature

http://www.nature.com/news/us-precision-medicine-proposal-sparks-questions-1.16774  Original web page at Nature

Categories
News

Wolf cull will not save threatened Canadian caribou

Boreal caribou populations have declined as industrial activity in Canada’s boreal forest has increased. Since 2005, the Canadian government has shot nearly 1,000 wolves to protect a herd of threatened boreal caribou in the forests of Alberta, Canada. But a recent study suggests that this approach has limited benefit. It is enough to keep the population of caribou from shrinking further, but it will not allow the animals — a geographically distinct population of Rangifer tarandus, which in Europe is known as the reindeer — to increase their number, finds the November analysis published in the Canadian Journal of Zoology. Such an increase would require placing new limits on industrial development in Alberta, a conclusion that adds fuel to an ongoing debate about the ecological consequences of human activity in the boreal forest.

Caribou have been listed as threatened since 2002, mainly because much of their boreal forest habitat has been sliced into small fragments by a web of roads, pipelines, clear-cut swathes and well pads. Moose and other deer species do well in these open areas, and their populations have boomed — supporting an increasing population of wolves, which have learned to use the roads and pipelines to access caribou hiding in the deep woods. Dave Hervieux, a biologist at the Alberta Environment and Sustainable Resource Development ministry, led a team that studied the government wolf-control programme in the Little Smoky range in western Alberta. The ministry has shot or poisoned 980 wolves there since 2005 to protect a herd of fewer than 100 caribou. From 2000 to 2012, researchers flew helicopter surveys of caribou cows and calves each March, using signals from special collars to help to locate animals. After the wolf-control programme began, the number of calves that survived to about 9 months old rose from an average of 2 calves per 100 cows to an average of 19 calves per 100 cows. The Little Smoky caribou population, which had been crashing, stabilized — but did not grow. Yet Hervieux says that ending the wolf kills is not an option. “If predator management stopped, the caribou would be done,” even if industrial activity halted, he argues. He adds that it might take 30 years for the boreal forest to regrow to the point at which caribou would no longer be at a disadvantage.

The situation presents an ethical quandary, says Dan MacNulty, a wolf biologist at Utah State University in Logan. “What is better, killing off these wolves or watching these caribou blink out because of our appetite for cheap oil and gas?” he asks. Conservationists in Alberta reluctantly supported the wolf kill programme when it began, says Carolyn Campbell, a conservation specialist at the Alberta Wilderness Association in Calgary. But her group later withdrew its support because drilling and logging continued, posing other threats to caribou. “It is a completely unethical approach to just declare a war on wolves when they are a symptom and not a root cause,” Campbell says. Despite classifying the caribou as threatened, the Canadian government released a strategy to help the population to recover only after lawsuits by environmental groups and tribes in 2012. Many of the required provincial and territorial range plans, which would spell out actions to protect caribou, have yet to appear. That includes Alberta’s plan. Government officials, conservationists and industry officials all say that it is unclear what the document might include or when it would be released.

In the meantime, several timber companies have the right to remove trees in the Little Smoky range; Nature contacted several, and one, the Alberta Newsprint Company, responded. Spokesperson Gary Smith says that the company has observed a moratorium on new logging in the range since summer 2013, but last year cut down trees in three previously approved patches as permitted by law.  And whereas sales of oil leases in the Little Smoky range have halted, they continue in other highly fragmented caribou ranges. The two government agencies that regulate oil and gas activity in the province — Alberta Energy, which sells leases, and the Alberta Energy Regulator, which approves specific energy-development activities by leaseholders — each say that the other agency made the key decision to allow energy development in a critical habitat for a threatened species. This culture of development, plus the dire straits that many boreal caribou populations are in, has prompted study co-author Mark Hebblewhite, a biologist at the University of Montana in Missoula, to consider whether saving the animals will require a radically different approach. “Pick two herds. Fence them. Remove predators non-lethally and just farm caribou,” he says. “That is how bad it is.”

Nature doi:10.1038/nature.2015.16734

http://www.nature.com/news/index.html  Nature

http://www.nature.com/news/wolf-cull-will-not-save-threatened-canadian-caribou-1.16734  Original web page at Nature

Categories
News

US lawmakers seek to revamp biomedical research

Days after the White House announced a new precision-medicine initiative, an influential group of US lawmakers has released its own wish list for biomedical research. On 27 January, several members of the House of Representatives released the first draft of their long-anticipated proposal to speed the translation of research into medicine. The effort, known as 21st Century Cures, seeks to streamline research and development at the US National Institutes of Health (NIH) and US Food and Drug Administration (FDA). Biomedical research advocates welcome the proposal. But they worry that winning the additional funding it would require will be difficult, given the tight US budget climate. The 393-page document is the result of a rare bipartisan cooperation between a group of House members led by Fred Upton, the Michigan Republican who leads the Energy and Commerce Committee, and Diana DeGette, a Democrat from Colorado. Over the past nine months, the lawmakers have met with officials from the NIH, the FDA, patient advocacy groups and pharmaceutical companies as they shaped their plan. The legislation has not been formally introduced, and further changes are likely. “Everything is on the table as we hope to trigger a thoughtful discussion toward a more polished product,” Upton said in a statement. Notably, he and DeGette appear to have diverged in recent months; she did not endorse the latest version of the bill. “We look forward to receiving feedback on the issues identified in [Upton’s] draft document and other suggestions,” DeGette said in a statement. “I know that with continued engagement, we can reach a bipartisan consensus to help advance biomedical research and cures.” Like the committee’s discussions, the draft bill is far-ranging. It would increase funding for the NIH’s National Center for Advancing Translational Sciences (NCATS), particularly an initiative that repurposes old drugs for different diseases. The plan also seeks more cash for the NIH’s Common Fund, which supports initiatives that do not fit into any one NIH institute. The House proposal would also expand the NIH’s authority to fund “high-risk high-reward” research, create new programmes to support young scientists, and reduce the amount of paperwork in the NIH’s grant process. The bill also includes a number of reforms to FDA programmes, such as making it easier for patients in dire need to obtain experimental therapies, and including patient feedback in the agency’s regulatory approval process. In a move that seems likely to stir controversy, the House proposal would grant longer market-exclusivity periods to pharmaceutical and device companies making treatments determined to be greatly needed. The lawmakers also include provisions to combat antibiotic resistance, including new research and surveillance programmes and incentives for companies to develop new antibiotics. But there are some notable omissions — including expected provisions on precision medicine and travel rules for NIH employees. Early reactions from research-advocacy organizations are largely positive. Margaret Anderson, executive director of FasterCures in Washington DC, is particularly interested in the committee’s proposal to create public-private consortia to coordinate and fund biomedical research. Such cooperation is key, she says: the most meaningful reforms would address “how you get the NIH and FDA to link arms. 
We want to see as much of that as possible.” Carrie Wolinetz, associate vice-president for federal relations at the Association of American Universities in Washington DC, is also positive about the effort and says that Congress was clearly responsive to the concerns raised at hearings. “But the devil is all in the details in these initiatives,” she says. “We’ll have to wait and see if there is any funding.”

Nature doi:10.1038/nature.2015.16807

http://www.nature.com/news/index.html  Nature

http://www.nature.com/news/us-lawmakers-seek-to-revamp-biomedical-research-1.16807  Original web page at Nature

Categories
News

The focus on bibliometrics makes papers less useful

How do we recognize a good scientist? There is an entire industry — bibliometrics — that would have us believe that it is easy: count journal articles, sort them according to the impact factors of the journals, and count all the citations. Science managers and politicians seem especially fond of such ways to assess ‘scientific quality’. But many scientists also accept them, and use them in hiring and funding decisions. They are drawn to the alleged objectivity of bibliometrics. Indeed, one sometimes hears that scientists should be especially ready to apply scientific methods to their own output. However, scientists will also be aware that no good science can be built on bad data, and we are in a unique position to judge the quality of the raw data of bibliometrics, because we generate them through our citation behaviour. The underlying assumption of bibliometrics is that, by citing, scientists are engaging in an ongoing poll to elect the best-quality academic papers. But we know the real reasons that we cite. Chiefly, it is to refer to results from other people, our own earlier work or a method; to give credit to partial results towards the same goal; to back up some terminology; to provide background reading for less familiar ideas; and sometimes to criticize. There are less honourable reasons, too: to boost a friend’s citation statistics; to satisfy a potential big-shot referee; and to give the impression that there is a community interested in the topic by stuffing the introduction with irrelevant citations to everybody, often recycled from earlier papers. None of these citations — for good reasons or bad — express the opinion that the paper in question is a remarkable scientific achievement. Consequently, highly cited papers often contain popular (but otherwise unimpressive) concepts or methods. If you have a favourite well-cited paper, it is a sobering experience to check 20 random citations. They typically contain little appreciation for the quality of the work. To be sure, selection for an academic job guided mainly by citation statistics or papers in high-impact journals will get better results than flipping a coin. But it is blind to the difference between someone who creatively develops a research agenda — and is likely to be doing that in ten years — and someone who grinds out papers in a narrow, fashionable subfield. Many negative effects of bibliometrics come not from using it, but from the anticipation that it will be used. When we believe that we will be judged by silly criteria, we will adapt and behave in silly ways. A good example is the distortion in the journal landscape — and with it the changes in the style of papers — that arose when the journal impact factor began to be taken seriously as a proxy for reputation. For example, when Physical Review Letters (PRL) split from Physical Review, it was intended to allow speedier publication of short announcements, which had previously been sent as unrefereed letters to the editor of Physical Review.

It is easier to reach high impact with this format, so the ‘reputation’ shifted from the standard journal to the letters section. Although there is no reason that shorter papers should be scientifically better than long ones, many authors now happily mutilate their work to stay under PRL’s page limit, rendering papers less readable and less useful. Another example is the way Nature became the top journal for experimental physicists. Life scientists are more numerous and use more citations than physicists, so the impact factors of Science and Nature, which cover all sciences, easily beat that of any non-review physics journal. Despite the higher impact factor, there is no reason why a paper written for a broad audience should be scientifically more valuable than one with an in-depth technical discussion. In fact, in pitching for such an audience, authors often leave out the tricky parts, keep technical terms out of their titles, and overstate their conclusions in broad terms. What can we do? Simply, individual scientists must resist the trend of making bibliometrics a central plank in their decision-making processes. And we must make this public, perhaps by stating in job adverts that papers will be judged by scientific merit and not by journal impact factor. Once a hiring decision is made, we should resist the temptation to justify it by quoting the candidate’s bibliometrics to administrators. This reinforces the damaging idea that hiring decisions could be made by administrators in the first place, and makes it harder to justify decisions that do not follow the metrics the next time round. As the tyranny of bibliometrics tightens its grip, it is having a disastrous effect on the model of science presented to young researchers. For example, a master’s student of mine moved to a renowned research institute for his PhD. Like many institutes, this one boasts of its performance in terms of publications in high-impact journals. So my student was told: “If you cannot write up your research in a form suitable for Nature or Science or Physical Review Letters, don’t bother to even do it.” Such advice, driven by the appeal of metrics to funders, is common but horribly misguided. If we raise scientists to be driven by such extrinsic motivation alone, then why should they not follow the logic to its natural conclusion, and run off to become well-paid bankers instead?

Nature 517, 245 (15 January 2015) doi:10.1038/517245a

http://www.nature.com/news/index.html  Nature

http://www.nature.com/news/the-focus-on-bibliometrics-makes-papers-less-useful-1.16706  Original web page at Nature

Categories
News

US budget deal gives small increases to research

The spending measure, passed by the House on 11 December and by the Senate on 13 December, includes an additional $5.2 billion in aid and research funds for the Ebola epidemic in West Africa. US President Barack Obama is expected to sign the bill into law, finalizing the budget for US agencies through 30 September 2015. Overall, the bill would increase spending on research and development by 1.7% above the 2014 level — in lockstep with the rate of inflation. But the share of money going to basic research would decline by 0.3% in real dollars, according to Matt Hourihan, director of the research and development budget and policy programme at the American Association for the Advancement of Science in Washington DC. “The problem is it continues the slow bleed,” says Michael Lubell, the director of public affairs for the American Physical Society in Washington DC. With fiscally conservative Republicans set to control both houses of Congress next year, the budget picture is not likely to improve, he adds — although Obama’s ability to veto legislation should ward off large funding cuts. The Ebola bolus includes $25 million for the Food and Drug Administration for purposes such as expediting drug and vaccine approval. The Department of State, which includes the Agency for International Development, would receive an additional $2.5 billion for Ebola programmes. The Centers for Disease Control and Prevention would get $1.78 billion for its Ebola work in the United States and Africa, and the National Institute of Allergy and Infectious Diseases (NIAID) would receive $238 million for research that includes testing experimental Ebola vaccines.

Nature doi:10.1038/nature.2014.16553

http://www.nature.com/news/index.html Nature

http://www.nature.com/news/us-budget-deal-gives-small-increases-to-research-1.16553  Original web page at Nature

Categories
News

Japanese scientist resigns as ‘STAP’ stem-cell method fails

A RIKEN team announced on 19 December that it was unable to reproduce Obokata’s controversial results. Haruko Obokata, the stem-cell biologist whose papers caused a sensation earlier this year before being retracted, has resigned from the RIKEN Center for Developmental Biology in Kobe, Japan. Her emotional resignation letter was posted on RIKEN’s website on 19 December alongside results of the organization’s own investigation, which failed to confirm her claims of a simple method to create pluripotent stem cells. Such cells are scientifically valuable because they can develop into most other cell types, from brain to muscle. But they are difficult to make. Obokata’s method — known as stimulus-triggered acquisition of pluripotency, or STAP — was published in Nature in January 2014. However, the results immediately came under suspicion, and the papers were retracted in July. A few weeks later, one of the papers’ co-authors, Yoshiki Sasai, took his own life. Obokata wrote that she could not “find words enough to apologize… for troubling so many people at RIKEN and other places”. In an accompanying statement, RIKEN president Ryoji Noyori wrote that Obokata had been subject to extreme stress over the affair, and that in accepting her resignation he hoped to save her “further mental burden”.

Nature doi:10.1038/nature.2014.16631

http://www.nature.com/news/index.html  Nature

http://www.nature.com/news/japanese-scientist-resigns-as-stap-stem-cell-method-fails-1.16631  Original web page at Nature

Categories
News

Peer review — reviewed

Top medical journals filter out poor papers but often reject future citation champions. Most scientists have horror stories to tell about how a journal brutally rejected their landmark paper. Now researchers have taken a more rigorous approach to evaluating peer review, by tracking the fate of more than 1,000 papers that were submitted ten years ago to the Annals of Internal Medicine, the British Medical Journal and The Lancet. Using subsequent citations as a proxy for quality, the team found that the journals were good at weeding out dross and publishing solid research. But they failed — quite spectacularly — to pick up the papers that went on to garner the most citations. “The shocking thing to me was that the top 14 papers had all been rejected, one of them twice,” says Kyle Siler, a sociologist at the University of Toronto in Canada, who led the study. The work was published on 22 December in the Proceedings of the National Academy of Sciences. Siler and his team tapped into a database of manuscripts and reviewer reports held by the University of California, San Francisco, that had been used in previous studies of the peer-review process. They found that out of 1,008 submitted manuscripts, just 62 were published in one of the three journals. Of the rejected papers, 757 were eventually published elsewhere, and the remaining 189 either underwent radical transformation or disappeared without a trace. By giving reviewers’ reports a score representing their level of enthusiasm, the researchers found that papers that received better appraisals generally got more citations. “The gatekeepers did a good job on the whole,” says Siler. But the team also found that 772 of the manuscripts were ‘desk rejected’ by at least one of the journals — meaning they were not even sent out for peer review — and that 12 out of the 15 most-cited papers suffered this fate. “This raises the question: are they scared of unconventional research?” says Siler. Given the time and resources involved in peer review, he suggests, top journals that accept just a small percentage of the papers they receive can afford to be risk averse. “The market dynamics that are at work right now tend to a certain blandness,” agrees Michèle Lamont, a sociologist at Harvard University in Cambridge, Massachusetts, whose book How Professors Think explores how academics assess the quality of others’ work. “And although editors may be well informed about who to turn to for reviews, they don’t necessarily have a good nose for what is truly creative.” Fiona Godlee, editor-in-chief of the British Medical Journal, says that these desk rejections were not necessarily mistakes. A paper that reports an excellent biotechnology study, for example, might be rejected simply because it falls outside the journal’s clinical focus. “The decision-making is very much about relevance to readers,” she says. “And I fear chasing citations as a way forward.” Siler acknowledges that using citations as a proxy for quality poses some problems. A recent survey by Nature found that the world’s most-cited scientific papers tended to be about widely used methods rather than paradigm-shifting breakthroughs. One alternative approach would be to assess the quality of the published papers by conducting a fresh round of peer review on them, and perhaps even find out whether they were replicated or translated into the clinic successfully, suggests Daniele Fanelli, an evolutionary biologist currently at the University of Montreal in Canada who studies publication bias. 
“But that’s a lot of work,” he says ruefully. For now, peer review is clearly here to stay. “Many people think the system is full of weaknesses,” says Lamont. “It’s not perfect, but it’s the best we have.” Nature doi:10.1038/nature.2014.16629

http://www.nature.com/news/index.html  Nature

http://www.nature.com/news/peer-review-reviewed-1.16629  Original web page at Nature

Categories
News

Study points to press releases as sources of hype

Scientists, press officers and journalists online are pointing fingers in light of a paper that traces the origins of exaggerated claims in health news. Researchers love to blame the news media when reports about science are misleading or even wrong. But a December study making the rounds online suggests that much of the hype and misinformation about health-related research in the news has its roots in university press releases — which are almost always approved in advance by the researchers themselves. “Academics should be accountable for the wild exaggerations in press releases of their studies,” tweeted Catherine Collins, a dietitian who works for the National Health Service in London. But some say that others are to blame, a case made on Twitter by Steve Usdin, editor and co-host of BioCentury This Week, a US public-affairs show covering the biopharma industry. The study, published in the British Medical Journal (BMJ), examined 462 press releases produced by the leading 20 UK research institutions in 2011. Overall, 40% of those releases contained health advice that was more explicit than anything found in the actual article. One-third emphasized possible causes and effects when the paper merely reported correlations. And 36% of releases about studies of cells or animals over-inflated the relevance to humans. Those exaggerations seemed to spread to the media. The study found that when news releases took liberties with the science, 58% of the resulting news stories overstated health advice; 81% highlighted causes and effects and 86% overplayed human relevance. By comparison, when news reports were based on straightforward, unembellished press releases, only 10–18% ended up stretching the truth. The authors conclude that “improving the accuracy of academic press releases could represent a key opportunity for reducing misleading health related news”. University press officers who are under pressure to get news coverage for their institution’s research may be motivated to overhype, notes Mark Henderson, a former newspaper science editor and now head of communications at the Wellcome Trust, a major UK research-funding agency in London. But press officers are not the only ones to blame, he writes on the Wellcome Trust blog. “Scientists often deserve their share of responsibility as well. Some — though by no means all — are often only too keen to make inflated or extrapolated claims in pursuit of a little credit or media limelight.” And, he adds, some journalists can find it difficult to break from the pack and not cover a seemingly sensational finding, especially when the press release makes that finding sound true. Meanwhile, one institution has already tried to be more careful. An August press release from a different BMJ journal contained this disclaimer: “This is an observational study so no definitive conclusions can be drawn about cause and effect.” BMJ releases have long been a target of criticism from Gary Schwitzer, a journalism researcher at the University of Minnesota School of Public Health in Minneapolis, who runs the Health News Watchdog blog. He announced earlier this week that his initiative, Health News Review, has received funding to review health-care news releases, among other things. Schwitzer discussed the latest study on his blog and said in an interview that it opens up an opportunity to reexamine everything that can go wrong at every level of the news food chain. 
Investigators and university public-relations offices can stretch the facts, he says, but he emphasizes that the media is far from blameless. “Many reporters don’t get beyond the news release. And many more don’t get beyond the abstract. That’s simply not good enough.”

Speaking to Nature, Emma Dickinson, the press officer for the BMJ itself, says that the new study “highlights the need for shared responsibility — by scientists, press officers, journals and journalists — for the reporting of new research.” In a comment on Schwitzer’s blog earlier this summer, Dickinson apologized for a release that Schwitzer had criticized, writing: “But we do make great efforts to get these things right by working with authors and editors.” Susan Dynarski, an education and economics researcher at the University of Michigan in Ann Arbor, also offered researchers a take-home message on Twitter.

Nature doi:10.1038/nature.2014.16551

http://www.nature.com/news/index.html  Nature

http://www.nature.com/news/study-points-to-press-releases-as-sources-of-hype-1.16551  Original web page at Nature

Categories
News

Publishing: The peer-review scam

Most journal editors know how much effort it takes to persuade busy researchers to review a paper. That is why the editor of The Journal of Enzyme Inhibition and Medicinal Chemistry was puzzled by the reviews for manuscripts by one author — Hyung-In Moon, a medicinal-plant researcher then at Dongguk University in Gyeongju, South Korea. The reviews themselves were not remarkable: mostly favourable, with some suggestions about how to improve the papers. What was unusual was how quickly they were completed — often within 24 hours. The turnaround was a little too fast, and Claudiu Supuran, the journal’s editor-in-chief, started to become suspicious. In 2012, he confronted Moon, who readily admitted that the reviews had come in so quickly because he had written many of them himself. The deception had not been hard to set up. Supuran’s journal and several others published by Informa Healthcare in London invite authors to suggest potential reviewers for their papers. So Moon provided names, sometimes of real scientists and sometimes pseudonyms, often with bogus e-mail addresses that would go directly to him or his colleagues. His confession led to the retraction of 28 papers by several Informa journals, and the resignation of an editor. Moon’s was not an isolated case. In the past 2 years, journals have been forced to retract more than 110 papers in at least 6 instances of peer-review rigging. What all these cases had in common was that researchers exploited vulnerabilities in the publishers’ computerized systems to dupe editors into accepting manuscripts, often by doing their own reviews. The cases involved publishing behemoths Elsevier, Springer, Taylor & Francis, SAGE and Wiley, as well as Informa, and they exploited security flaws that — in at least one of the systems — could make researchers vulnerable to even more serious identity theft. “For a piece of software that’s used by hundreds of thousands of academics worldwide, it really is appalling,” says Mark Dingemanse, a linguist at the Max Planck Institute for Psycholinguistics in Nijmegen, the Netherlands, who has used some of these programs to publish and review papers. But even the most secure software could be compromised. That is why some observers argue for changes to the way that editors assign papers to reviewers, particularly to end the use of reviewers suggested by a manuscript’s authors. Even Moon, who accepts the sole blame for nominating himself and his friends to review his papers, argues that editors should police the system against people like him. “Of course authors will ask for their friends,” he said in August 2012, “but editors are supposed to check they are not from the same institution or co-authors on previous papers.” Moon’s case is by no means the most spectacular instance of peer-review rigging in recent years. That honour goes to a case that came to light in May 2013, when Ali Nayfeh, then editor-in-chief of the Journal of Vibration and Control, received some troubling news. An author who had submitted a paper to the journal told Nayfeh that he had received e-mails about it from two people claiming to be reviewers. Reviewers do not normally have direct contact with authors, and — strangely — the e-mails came from generic-looking Gmail accounts rather than from the professional institutional accounts that many academics use. Read more: http://www.nature.com/news/publishing-the-peer-review-scam-1.16400

http://www.nature.com/news/index.html  Nature

http://www.nature.com/news/publishing-the-peer-review-scam-1.16400  Original web page at Nature

Categories
News

Key Galapagos research station in trouble

For more than half a century, the Charles Darwin Foundation (CDF) has supported a thriving research station in Ecuador’s Galapagos Islands. Scientists at the station have helped to bring the iconic Galapagos tortoise back from the brink of extinction and to eradicate invasive goats from Isabela, the largest island in the Galapagos archipelago. But that long legacy is being threatened by a spat with the local government, which could force the Charles Darwin Research Station to close. In July, officials on Santa Cruz island ordered the CDF to shut its lucrative gift shop in the town of Puerto Ayora, citing complaints from restaurateurs and shop owners who said that the store was siphoning away their business. That has deprived the foundation of at least US$8,000 per week in income; total losses could reach $200,000 if the shop remains closed for the rest of the year, the foundation says. “The closure of the store basically ruined our 2014 budget,” says CDF president Dennis Geist, a volcanologist who has studied Galapagos sites for 30 years. “We have no endowment. We don’t even have any reserve funds. The closing of the Darwin station is a very realistic possibility right now.” On 24 November, the CDF’s governing body met in Quito, Ecuador. Its voting members, who include employees of the Ecuadorian federal government, agreed to form a working group “to strategically secure the operation of the research station”. But the financial troubles are already affecting operations at the station, which employs around 65 people and works with more than 100 international scientific collaborators. Although the gift shop provides just 10% of the foundation’s revenue, its closure has had cascading effects, says CDF executive director Swen Lorenz. “We have already lost a significant donation from someone who said that if the government of Ecuador doesn’t support us having a souvenir shop, then he won’t support us with a donation,” he says. “We’re two and a half months late with salary, projects haven’t been running, and we’ve had one staff member leave.” Alex Hearn, director of conservation science at the Turtle Island Restoration Network, an environmental advocacy group based in Olema, California, says that the closure of Darwin station would be a major blow. Nearly every scientist who has worked in the Galapagos has dealt either directly or indirectly with the CDF, says Hearn, who coordinated fisheries research at the station from 2002 to 2008. He still works closely with scientists there on fisheries and shark research. “I don’t have to jump on a plane every time I need some data,” he says. “I know the research can be done, and done well.” Nature 515, 479 (27 November 2014) doi:10.1038/515479a

http://www.nature.com/news/index.html  Nature

http://www.nature.com/news/key-galapagos-research-station-in-trouble-1.16410  Original web page at Nature

Categories
News

US government cracks down on clinical-trials reporting

Hiding negative results and harmful side effects that occur in clinical trials would become harder in the United States under regulations proposed on 19 November by the US National Institutes of Health (NIH) and the Food and Drug Administration (FDA). One proposal would require companies seeking the FDA’s approval of a new drug or therapy to post all clinical-trial results to the government website ClinicalTrials.gov, even if the treatment being tested is never approved; current law mandates this only for drugs that are approved. Companies and researchers that do not comply with the deadlines set out in the proposal could face fines of US$10,000 per day. The second proposal would require that any NIH-funded research on interventions, not just drugs, be registered and reported on ClinicalTrials.gov. The rule would apply to surgical techniques and behavioural interventions such as anti-smoking programmes. And for the first time, federally funded researchers would be required to post the results of their phase I clinical trials. Noncompliant institutions could have their NIH funding withdrawn. The regulations are intended to close a loophole in a 2007 law known as the FDA Amendments Act (FDAAA), which requires sponsors of FDA-approved drugs to post the results of their clinical trials on ClinicalTrials.gov. A 2013 report found that only about half of trial results posted on the site are ever published in peer-reviewed journals. “When a lot of dollars and time and volunteers are potentially putting themselves in a risk situation, we need to be sure the results of that are finding their way into view of the public,” NIH director Francis Collins said at a press conference announcing the proposed regulations. “This shows a much broader understanding of what a clinical trial is than in earlier legislation,” says Kay Dickersin, director of the Center for Clinical Trials at Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland. But Dickersin is concerned about a number of loopholes that remain in the regulations. Industry and privately funded studies are not required to post phase I results. And trial sponsors are required to report only summaries of people’s reactions to a drug, not each person’s results. Researchers have found that analysing data from individuals can yield vastly different information about adverse events than summaries alone. But Jennifer Miller, a bioethicist at Duke University in Durham, North Carolina, says that the regulations are addressing the wrong question altogether. Miller’s unpublished analysis comparing the number of trials registered with the FDA with those reported on ClinicalTrials.gov suggests that results for most trials are not reported — even for drugs that are approved. “If you were going to expand or enhance FDAAA, you would think there would be considerations around monitoring and enforcement of the existing law,” she says. Nature doi:10.1038/nature.2014.16390

http://www.nature.com/news/index.html  Nature

http://www.nature.com/news/us-government-cracks-down-on-clinical-trials-reporting1.16390 Original web page at Nature

Categories
News

Fear and caring are what’s at the core of divisive wolf debate

Whether or not to hunt wolves can’t be reduced to divides as simple as men vs. women, hunters vs. anti-hunters, Democrats vs. Republicans or city vs. rural. What’s truly fueling the divisive debate is fear of wolves or the urge to care for Canis lupus. The social dynamics at play and potential options for establishing common ground between sides can be found in the current issue of the journal PLOS ONE. “People who are for or against this issue are often cast into traditional lots, such as gender, political party or where they live,” said Meredith Gore, associate professor of fisheries and wildlife and co-lead author of the study. “This issue, however, isn’t playing out like this. Concerns about hunting wolves to reduce conflict are split more by social geography and less by physical geography.” It’s definitely an us-versus-them debate, she added. However, it took the concept of social identity theory to better reveal the true “us” and “them.” Applying principles from social psychology revealed how the two groups were interacting and offered some potential solutions to get the vying groups to work together. The team’s findings are comparable, in part, to civil uprisings in the Middle East. The region is far removed from the United States, in terms of geography. Americans, however, tend to identify with a distant, threatened identity group, said Gore, an MSU AgBioResearch scientist. “The concept of how our identity drives our activism is quite interesting,” said Gore, who co-led the research with Michelle Lute, a former MSU fisheries and wildlife graduate student who’s now at Indiana University. “Our findings challenge traditional assumptions about regional differences and suggest a strong role for social identity in why people support or oppose wildlife management practices.” The majority of the nearly 670 surveys were collected from Michigan stakeholders interested in wolf-hunting as a management response to wolf conflicts. However, a small percentage of the data was gathered from participants in 21 states. While the study focused on gray wolves in Michigan, its results have implications for other states’ policies on wolves as well as other large carnivores such as brown bears, polar bears, mountain lions and other predators, Gore added. Noting that there’s sharp polarization in debates about wolf management is not new. However, providing empirical evidence of its existence is new and meaningful because it provides a framework for improving engagement between the fighting factions. For example, communications may be better directed toward each identity group’s concerns of fear and care for wolves. These missives could be more effective than messages simply directed toward pro-hunters or anti-hunters. Identity-specific communications may also help build trust between agencies and stakeholders. “These types of communications may not only build trust, but they can also contribute to a sense of procedural justice,” Gore said. “This, in turn, may increase support for decision-makers and processes regardless of the outcome.” Also, framing and discussing the issue in terms of care and fear, rather than traditional qualifiers, may help usher in greater agreement about management strategies.

http://www.sciencedaily.com/  Science Daily

http://www.sciencedaily.com/releases/2014/12/141202144833.htm  Original web page at Science Daily

Categories
News

European Commission scraps chief scientific adviser post

Former president José Manuel Barroso had pledged in late 2009 to create the post. It was not filled until two years later, when Anne Glover, a molecular and cell biologist who was then CSA of Scotland, was appointed. Glover’s term of office as CSA for Europe ended last month, along with that of the rest of the outgoing commission, following European Union (EU) elections earlier this year. Glover will remain at the commission until the end of January. But the new commission — led by president Jean-Claude Juncker, who succeeded Barroso on 1 November — is shaking things up. On 12 November, Glover informed colleagues at science academies by email that the position of CSA would disappear. Various research leaders protested the move, which they interpreted as downgrading the value of science advice at the highest levels of the commission. They argued that the position should instead have been reinforced, in particular by allocating it more resources. The commission has not yet said how it plans to replace the position. “President Juncker believes in independent scientific advice. He has not yet decided how to institutionalise this independent scientific advice,” says Lucia Caudet, a commission spokeswoman. What influence the CSA position has had in the commission, and on its policies, is not clear, as much of its advice is confidential. Glover’s office declined an interview request, but a talk titled “1,000 days in the life of a Chief Scientific Adviser”, which she gave in Auckland, New Zealand, on 28-29 August, provides a candid account of her time at the commission. For example, Glover describes frustration at dealing with in-house politics, cites a lack of sufficient staff and resources, and says she was sometimes excluded from essential information. She adds that she was surprised by the appetite for scientific advice in Brussels, and that EU policies were more technical than national ones, which drag science into a “political battlefield”. However, earlier this year Glover also said that the commission’s decisions were often driven by political imperatives, and that evidence was marshalled to support policies rather than being used to inform the best choice of policies. CSAs are largely an Anglo-Saxon tradition. Few countries have them, and, as Glover noted in her talk, there is a diversity of models for providing top-level advice to government. Paul Nurse, president of the Royal Society in London, urged the European Commission to choose one quickly. “If the commission has a plausible plan for ensuring that scientific evidence will be taken seriously they need to start sharing it with people soon,” he told the UK Science Media Centre. “Otherwise they will encourage those who portray the commission as out of touch and not willing to listen to informed advice.”

Nature doi:10.1038/nature.2014.16348

http://www.nature.com/news/index.html  Nature

http://www.nature.com/news/european-commission-scraps-chief-scientific-adviser-post-1.16348  Original web page at Nature

Categories
News

China opens translational medicine centre in Shanghai

Over the past decade, China has poured money into research, especially in the biomedical sciences. But as the nation’s health-care costs have risen in the past few years, critics have argued that the investment has not paid off. A group of researchers and government officials now hopes to improve those returns with the official opening this month of the National Centre for Translational Medicine in Shanghai. The 1-billion-renminbi (US$163-million) centre, slated to complete construction in 2017, is the first of five institutions meant to bridge the gap between basic research and clinical application by putting researchers, doctors and patients under one roof. China’s biologists have made impressive progress in fields such as genome sequencing and protein-structure analysis, but have produced little in terms of drugs or other medical products. “Bloggers and others are always complaining that China is just burning money,” says Xiao-Fan Wang, a cancer researcher at Duke University in Durham, North Carolina, and one of the 21 people on the centre’s international consulting committee. Some even ask why China, which has made progress in industry largely by copying other countries, has not succeeded by following the same strategy in the biomedical sciences, says Wang. Wang says that doctors in China are overworked, often racing through whole days of back-to-back patient visits and procedures. And yet, because most Chinese hospitals are part of universities, the doctors must publish to get promoted. In this competitive environment, they often refuse to share data, but are rarely able to do thorough research themselves. “You can’t expect them to do eight hours of surgery and then jump into a lab coat,” says Wang. The Shanghai centre will change that, he says, by giving clinicians time to focus on research questions. Saijuan Chen, the centre’s director and a geneticist at Shanghai Jiaotong University, says that the centre began interviews this month to recruit some 50 principal investigators and 12 scientists to direct research in their disciplines. She says that the institution’s focus will be on developing treatments for heart disease, stroke, metabolic diseases and cancer. The centre’s international consulting committee will help to recruit top talent in these fields. Tak Mak, a committee member and an immunologist at the University of Toronto in Canada, says that the strategy shows a commitment to hiring on the basis of expertise rather than connections. It is “an effort to break out of the old system — which is probably the system in more places than you’d care to know”, he says. One challenge will be attracting Chinese biomedical researchers who are working abroad. Wang says that China has been able to lure basic-research scientists back to the country with competitive pay. But clinicians in China receive a fraction of the salaries they can get in the United States.

Managing the centre may also be difficult, says committee member Sujuan Ba, chief operating officer of the US National Foundation for Cancer Research in Bethesda, Maryland. The centre’s governing council comprises members from ten government agencies and institutions. “That shows the wide scope of support from China, but it is going to be a huge challenge for the leadership team to balance and to manage each council member’s requisites,” she says. “It is very important for the centre to steer away from bureaucratic red tape and stay focused on its mission and long-term vision of conducting high-impact translational research.” The 54,000-square-metre site will have 300 beds for patients and study volunteers. It will also run a biobank that will collect hundreds of thousands of patient samples, and an ‘omics’ centre that will conduct high-throughput genome analyses and gather data on proteins and metabolic products. Haematologist Zhu Chen of Shanghai Jiao Tong University, who is China’s former health minister and chairman of the centre’s scientific advisory board, hopes that the centre will emulate the spirit of St Jude Children’s Research Hospital in Memphis, Tennessee, with its close relationship between clinicians and basic researchers. He also emphasizes the need to make therapeutic trials free: in China, participants are often charged for treatment. Zhu and Saijuan Chen, who are married, led one of the few successful translational research projects in China — treatment of a form of leukaemia using retinoic acid and arsenic trioxide. The Shanghai centre will eventually have four sister institutions: a geriatrics centre at the People’s Liberation Army general hospital in Beijing, a centre for rare and refractory diseases at the Peking Union Medical College in Beijing, a molecular-medicine research centre at the Fourth Military Medical University in Xi’an and a regenerative-medicine centre at the West China Hospital in Chengdu. “These centres are at the historic moment to make a huge impact for China’s drug development,” says Ba. “We should be able to see the signs of success in five years, if not earlier.”

Nature 514, 547 (30 October 2014) doi:10.1038/514547a See Editorial page 535

http://www.nature.com/news/index.html  Nature

http://www.nature.com/news/china-opens-translational-medicine-centre-in-shanghai-1.16238  Original web page at Nature

Categories
News

Divide and rule: Raven politics

Thomas Bugnyar and his team have been studying the behavior of approximately 300 wild ravens in the Northern Austrian Alps for years. They observed that ravens slowly build alliances through affiliative interactions such as grooming and playing. However, they also observed that these affiliative interactions were regularly interrupted by a third individual. Although in about 50% of cases these interventions were successful and broke up the two affiliating ravens, intervening is potentially risky: the two affiliating ravens may team up and chase away the intervening individual. Interestingly, the researchers found that these interventions did not occur at random. Specifically, ravens that already have an alliance tend to interrupt the affiliative interactions of those individuals that are in the process of establishing one. “Because of their already established power, allied ravens can afford such risky strategies,” explains lead author Jorg Massen: “They specifically target those ravens that are about to establish a new alliance, and might thereby prevent them from becoming future competitors through a divide and rule strategy.” Massen furthermore underlines that at the time of intervention the birds that are trying to establish an alliance are not yet a threat to the already allied ravens. “It thus seems that the ravens keep track of the relationships of others and have a keen understanding of when to intervene in affiliative interactions and when not; i.e. not when these are just loose flirts, but also not when the alliance is already established and it is already too late,” says Jorg Massen. This is the first time that such a sophisticated political maneuver has been described in animals other than humans.

http://www.sciencedaily.com/  Science Daily

http://www.sciencedaily.com/releases/2014/10/141031120901.htm Original web page at Science Daily

Categories
News

Investments boost neurotechnology career prospects

The past few years have seen some extraordinary activity in the neuroscience field. High-profile advances, from the Allen Brain Atlas to the Brainbow mouse, have injected an air of excitement into the study of the brain—an atmosphere that has been amplified by big funding initiatives in the United States and abroad. For budding neuroscientists, these are heady days—at least if you’ve got a knack for technology development, programming, and engineering. But it will take more than raw skill to land a job. By Jeffrey M. Perkel. Mark Cembrowski was a graduate student in applied mathematics with a taste for neurobiology at Northwestern University when he discovered a way to marry his two interests. Two of his math professors were collaborating with physiologist Joshua Singer, also at Northwestern, who was keen to model the biology of a retinal cell called the AII amacrine interneuron. “Josh wanted someone to come in and build a model of single AII cells to try and understand how the AII works as an input/output device,” Cembrowski explains. So, he joined Singer’s team. But models are only as good as their input data, and very quickly, Cembrowski says, he realized he needed more of it. Specifically, patch-clamp electrophysiology data. And he was going to have to collect it himself. Patch clamping isn’t easy even for seasoned neuroscientists, let alone an applied mathematician who’d never set foot in a biology lab. “I was the worst of the worst,” he concedes. “I broke a lot of things getting started.” Still, he persevered, and in 2012 published his first electrophysiology paper. “My whole perspective on this just flipped 180 degrees,” he says. “When I found the confidence and the ability to do these experimental techniques, I felt like I was on top of the world.” As it turns out, researchers like Cembrowski are atop the neuroscience world, too, where research opportunities increasingly blend traditional neurobiology with technology development. That marriage of disciplines underlies President Obama’s recently announced Brain Research through Advancing Innovative Neurotechnologies (BRAIN) initiative. Seeded with $110 million from the U.S. National Institutes of Health (NIH), the National Science Foundation, and the Defense Advanced Research Projects Agency, the initiative has a heavy focus on technology development, says Tom Insel, director of the National Institute of Mental Health (NIMH), one of four NIH institutes that together contributed $40 million to the pot. “This is a unique investment,” Insel says. “It’s not to expand all of neuroscience, but it’s to invest in the area of tool development specifically, which is sometimes difficult to do with R01 grant funding.”

In particular, he says, the initiative will support a new breed of neuroscientist, one trained not as a classical brain researcher but as a physicist or mathematician, computer scientist or engineer—researchers who may never have received NIH funding before. “One of the measures of success for me with the BRAIN Initiative is, when I see the pay plan of who’s going to be funded, I’m hoping that I will not recognize most of the names,” he says. One name that won’t be on the list is Cembrowski, who is still in training. Upon graduating with a Ph.D. in applied mathematics, he joined Nelson Spruston’s lab at the Howard Hughes Medical Institute’s Janelia Research Campus, a private research institute with a heavy focus on neurobiology. There, he pivoted again and again, from electrophysiology to RNA-sequencing data analysis, to anatomy and histology, and thence to behavioral analysis. “No technique is an island,” he explains. “There’s always other techniques that one can adopt as a way of validating and extending what you’ve done previously.” “This is a guy who just knows no boundaries,” Spruston says. “He’s going to go out and learn what he needs to learn to answer the questions that he wants to answer. And this is to me the phenotype of the successful neuroscientist these days.” So how can one develop that phenotype? Certainly, a solid technical background doesn’t hurt. Popular flavors du jour include connectomics, functional magnetic resonance imaging (fMRI), and optogenetics. But it’s not the acquisition of techniques per se that matters, most say, so much as the willingness to try new things, coupled with sufficient neurobiology expertise to understand what questions to ask.

http://www.sciencemag.org/ Science Magazine

http://sciencecareers.sciencemag.org/career_magazine/previous_issues/articles/2014_10_31/science.opms.r1400148  Original web page at Science Magazine

Categories
News

Review rewards

How many manuscripts is it reasonable for a scientist to peer review in a year? Many researchers would estimate two or three dozen; Malcolm Jobling, a fish biologist at the University of Tromsø in Norway, says that he has racked up more than 125 already this year. How do we know? A welcome movement is under way to publicly register and recognize the hitherto invisible efforts of referees. Jobling’s staggering total is revealed at Publons, a New Zealand-based start-up firm that encourages researchers to post their peer-review histories online (for an interview, see Nature http://doi.org/wbp; 2014). Publons is not the only attempt to recognize and reward academics for their refereeing activity. As Nature noted last year (see Nature 493, 5; 2013), publishers are increasing their efforts to reward assiduous reviewers. The Nature journals give a free subscription to anyone who has refereed three or more papers in a year for them, and allow peer reviewers to download a statement of work. Similarly, science publisher Elsevier this year launched a system to formally recognize its peer reviewers, and to give rewards to ‘outstanding reviewers’ — those who have reviewed the most papers. Unlike Publons, which hopes to establish a cross-publisher profile, the activities of individual publishers are restricted to their own platforms. But publishers are taking part in broader talks to establish standards to publicly record peer-review service in a researcher’s ORCID (Open Researcher and Contributor ID) profile. Those discussions, under the auspices of the Consortia Advancing Standards in Research Administration (CASRAI), an international non-profit group, are also looking at ways to record other types of peer review — including reviews of grant applications, conference abstracts, service as a journal editor and institutional benchmarking (for example, being on the panel of a national research audit such as the UK Research Excellence Framework). Researchers could use their reviewer records to highlight their expertise for employers and government agencies. If enough information can be publicly revealed, it could shed more light on the average number and type of review undertaken by scientists, who increasingly complain that they are overwhelmed with peer-review requests. The final direction of the drive to publicly record and reward peer review is far from clear. Publons — among others — hopes that there will be more cases of open, signed reviews (which will make it easy to recognize a referee’s contribution). Yet the majority of pre-publication reviews remain private: many researchers are uncomfortable about being publicly revealed as the author of a critical review because of the fear of subtle reprisals in other areas of their career. Unless this culture shifts, efforts will stay focused on allotting credit for reviews whose text and author remain secret.

Recording the number of reviews is only the start. A well-considered review that substantially improves a paper can take days — whereas a sloppy reviewer could dash off assessments of many papers in a few hours. So the next challenge in publicly recognizing peer review will be to find a way to assess quality. Many journal editors already have an informal idea of their ‘good’ and ‘bad’ reviewers, which in some cases can be quantified by response time. But these judgements are not usually shared with colleagues, and may differ from one editor to another. Lutz Prechelt, an informatics researcher at the Free University of Berlin who is advising Elsevier on its programme, has suggested that both authors and editors could be asked to mark the helpfulness and timeliness of a review. But it will be important to ensure that the benefits of this system are not drowned by the bureaucracy involved. Efforts to publicly recognize peer review are still in their infancy. But as attempts to acknowledge and reward a crucial role, they should be applauded.

Nature 514, 274 (16 October 2014) doi:10.1038/514274

http://www.nature.com/news/index.html  Nature

http://www.nature.com/news/review-rewards-1.16138  Original web page at Nature