Categories
News

Newly identified eye disease in dogs can be easily treated

Sinisa Grozdanic, assistant professor of veterinary medicine at Iowa State University, has identified and named a previously unknown eye disease. The disease, Immune-Mediated Retinopathy (IMR), causes loss of function in retinal cells and, in some cases, blindness in dogs. IMR closely resembles a known condition, Sudden Acquired Retinal Degeneration Syndrome (SARDS). Both diseases occur when the dog produces autoantibodies that attack the retinal cells. The antibodies mistake retinal cells for cancerous tumors or tissues that need to be destroyed. In attacking the retinal cells, the autoantibodies cause them to lose function, and the dog loses some or all of its vision. The difference Grozdanic identified is that in SARDS the autoantibodies that attack the retinal cells are produced in the eye itself, whereas in IMR they are produced elsewhere in the body and travel to the eyes in the blood. This is a critical step toward treating the disease, according to Grozdanic, because the source of the problem is better understood.

“The whole purpose is to start to understand the disease better,” he said. “The more we understand these diseases, the more proficient we will be at developing new treatments.” Grozdanic says the evidence shows that approximately 2,000 cases of SARDS occur every year, and some of those cases may now be identified as IMR and treated differently. Treatment for IMR can have a relatively high success rate. “In approximately 60 percent of the Immune-Mediated Retinopathy cases, we have been able to treat it,” he said. “In some cases very successfully, in some cases moderately successfully.” Since IMR has only recently been identified, there are no statistics on how many dogs it affects. Grozdanic has also developed a test to differentiate the two types of retinopathy. He shines colored lights into the dog’s eyes to see whether the pupils constrict. If the pupils constrict poorly under red light but normally under blue light, the dog most likely has one of the two immune-mediated diseases; electrical testing of the retina then separates them. SARDS-affected eyes show almost no electrical activity, while IMR-affected eyes retain some: their retinal cells are not destroyed but have only lost function. These are the retinal cells that Grozdanic believes can function again now that the origin of the problem is known. In his work with canine IMR patients over the past few years, Grozdanic has restored sight in several dogs.
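The two-step logic described above, a pupil light reflex screen followed by an electrical-activity test, can be sketched as a small decision function. This is an illustrative sketch only: the function name, thresholds, and numeric scales are invented, not taken from Grozdanic's actual protocol.

```python
# Hypothetical sketch of the two-step diagnostic logic described above.
# Thresholds and scales are invented for illustration.

def classify_retinopathy(red_response, blue_response, erg_activity):
    """Classify a pupil-light-reflex / electroretinogram result.

    red_response, blue_response: pupil constriction, 0.0 (none) to 1.0 (normal)
    erg_activity: retinal electrical activity, 0.0 (none) to 1.0 (normal)
    """
    # Step 1: poor constriction to red light with normal constriction to
    # blue light points to an immune-mediated retinopathy.
    if red_response < 0.3 and blue_response > 0.7:
        # Step 2: electrical activity separates the two conditions.
        if erg_activity < 0.05:
            return "SARDS"    # retinal cells destroyed, almost no activity
        return "IMR"          # cells dysfunctional but intact
    return "other/normal"     # reflexes normal: look for another cause

print(classify_retinopathy(0.1, 0.9, 0.0))   # → SARDS
print(classify_retinopathy(0.1, 0.9, 0.4))   # → IMR
```

The point of the sketch is the ordering: the cheap light test screens for immune-mediated disease, and the electrical test distinguishes destroyed cells (SARDS) from dysfunctional but intact ones (IMR).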

Science Daily
March 17, 2008


Long-term retinal implant study offers hope for treating blindness

USC and Second Sight Medical Products Inc., leading developers of retinal prostheses for treating blindness, announced today that they have completed enrollment in the first phase of a U.S. FDA-approved clinical study of the Argus II Retinal Prosthesis System. They also announced that enrollment at key European sites is underway as studies continue in Mexico. “We are pleased that Second Sight, along with our fantastic clinical partners, was able to fully enroll the US trial in a timely manner,” said Robert Greenberg, MD, PhD, President and CEO of Second Sight and a leader in the field of retinal prostheses for more than 15 years. “Although it is too early to comment on the clinical data, each device continues to function as expected, and all participants are using their systems at home daily.”

The Argus II is the second generation of an electronic retinal implant designed for the treatment of blindness due to Retinitis Pigmentosa (RP), a group of inherited eye diseases that affect the retina. The Argus II implant consists of an array of 60 electrodes that are attached to the retina. These electrodes conduct information acquired from an external camera to the retina to provide a rudimentary form of sight to implanted subjects. The development of this technology was largely supported by the National Eye Institute (NEI) of the National Institutes of Health (NIH), and the Department of Energy’s Office of Science (DOE) Artificial Retina Project, which is helping to advance the implant’s design and construction. The unique resources and expertise at DOE national laboratories–particularly in engineering, microfabrication, material science, and microelectronic technologies–are enabling the development of much smaller, higher resolution devices. Ten subjects were recruited for the Phase I trial at four leading ophthalmic centers throughout the US, including the Doheny Eye Institute at the University of Southern California (USC), Wilmer Eye Institute at Johns Hopkins University (Baltimore), the University of California at San Francisco, and the Retina Foundation of the Southwest (Dallas).

Science Daily
March 4, 2008


Pioneering eagle eye surgery removes cataract, restores vision, after injury

Surgeons from the University of Glasgow’s Small Animal Hospital have restored the sight of a golden eagle. The bird was found on the island of Mull by staff from the Wings Over Mull bird sanctuary, who brought it to the University of Glasgow. It is believed the shock of an earlier injury caused a cataract to develop, and the 14lb bird of prey was taken to the Small Animal Hospital, where the tricky surgery was carried out. It is the first time a procedure to remove a cataract caused by trauma has been carried out on a golden eagle. Putting birds under general anaesthetic is considered very risky, as the shock often kills them, but it was decided that without sight the bird’s future was bleak. Ophthalmologist George Peplinski carried out the surgery on the bird’s right eye. However, a second cataract operation on the other eye was ruled out. He said: “With such a small chance of any improvement I don’t think it was justified. We worked on the eye that we know is the healthiest and we are best just leaving it there and not risking a prolonged anaesthetic and a prolonged recovery.” The bird, named Electra, is now a permanent resident at the Wings Over Mull sanctuary, as with its reduced eyesight it could not survive in the wild.

Science Daily
March 4, 2008


How well do dogs see at night?

A lot better than we do, says Paul Miller, clinical professor of comparative ophthalmology at the University of Wisconsin-Madison. “Dogs have evolved to see well in both bright and dim light, whereas humans do best in bright light. No one is quite sure how much better a dog sees in dim light, but I would suspect that dogs are not quite as good as cats,” which can see in light that’s six times dimmer than our lower limit. Dogs, he says, “can probably see in light five times dimmer than a human can see in.” Dogs have many adaptations for low-light vision, Miller says. A larger pupil lets in more light. The center of the retina has more of the light-sensitive cells (rods), which work better in dim light than the color-detecting cones. The light-sensitive compounds in the retina respond to lower light levels. And the lens is located closer to the retina, making the image on the retina brighter. But the canine’s biggest advantage is called the tapetum. This mirror-like structure in the back of the eye reflects light, giving the retina a second chance to register light that has entered the eye. “Although the tapetum improves vision in dim light, it also scatters some light, degrading the dog’s vision from the 20/20 that you and I normally see to about 20/80,” Miller says. The tapetum also causes dog eyes to glow at night.
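The relative sensitivities quoted above amount to simple arithmetic: taking the dimmest light a human can use as 1.0, a species that sees in light n times dimmer has a threshold of 1/n. A minimal sketch, with the unit and function name invented for illustration:

```python
# Illustrative arithmetic only: turns the relative sensitivities quoted
# above into minimum usable light levels, with the human threshold as 1.0.

HUMAN_THRESHOLD = 1.0  # arbitrary unit: dimmest light a human can see in

def min_usable_light(sensitivity_factor):
    """Dimmest light an animal can see in, given how many times dimmer
    than the human limit it can operate (e.g. 5 for a dog, 6 for a cat)."""
    return HUMAN_THRESHOLD / sensitivity_factor

dog = min_usable_light(5)   # dogs manage with a fifth of the light we need
cat = min_usable_light(6)   # cats do slightly better still
print(dog, cat)
```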

Science Daily
November 27, 2007


Physics provides new insights on cataract formation

Using the tools and techniques of soft condensed matter physics, a research team in Switzerland has demonstrated that a finely tuned balance of attractions between proteins keeps the lens of the eye transparent, and that even a small change in this balance can cause proteins to aggregate and de-mix. This leads to cataract formation, the world’s leading cause of blindness. This work could shed light on other protein aggregation diseases (such as Alzheimer’s disease), and may one day lead to methods for stabilizing protein interactions and thus preventing these problematic aggregations from occurring. The eye lens is made up of densely packed crystallin proteins, arranged in such a way that light in the visible wavelength range can pass through. But for a variety of reasons including UV radiation exposure and age, the proteins sometimes change their behavior and clump together. As a result, light is scattered once it enters the lens, resulting in cloudy vision or blindness. There is currently no known way to reverse the protein aggregation process once it has begun. Nearly 5 million people every year undergo cataract surgery in which their lenses are removed and replaced with artificial ones.

Previous research has shown that the interactions between the three major crystallin proteins that make up the concentrated eye lens protein solution are key to cataract formation. A team of scientists from the University of Fribourg, EPFL and the Rochester Institute of Technology (USA) studied the interactions between two of these proteins, at concentrations similar to those found in the eye lens, using a combination of neutron scattering experiments and molecular dynamics computer simulations. They found that a finely tuned combination of attraction and repulsion between the two proteins resulted in an arrangement that was transparent to visible light. “By combining experiments and simulations it became possible to quantify that there had to be a weak attraction between the proteins in order for the eye lens to be transparent,” explains EPFL postdoctoral researcher Giuseppe Foffi, a member of the Institut Romand de Recherche Numerique en Physique des Materiaux (IRRMA). “Our results indicate that cataracts may form if this balance of attractions is disrupted, and this opens a new direction for research into cataract formation.”

“Lots of studies have been done on individual proteins in the lens,” adds University of Fribourg physicist and lead author Anna Stradner, “But none on their mixtures at concentrations typically found in the eye. We modeled these proteins as colloidal particles, and found there was a very narrow window in which the protein solution remained stable, and this was a necessary condition for lens transparency.” In addition to unveiling important new information about the interactions of the proteins in the eye lens, this benchmark study provides a framework for further study into the molecular properties and interactions of proteins. The results suggest that these properties could perhaps be manipulated to prevent aggregation or reverse the aggregation process once it has begun.

Science Daily
November 27, 2007


Dawn of animal vision discovered

By peering deep into evolutionary history, scientists at the University of California, Santa Barbara have discovered the origins of photosensitivity in animals. The scientists studied the aquatic animal Hydra, a member of the phylum Cnidaria, animals that have existed for hundreds of millions of years. The authors are the first scientists to look at light-receptive genes in cnidarians, an ancient group that includes corals, jellyfish, and sea anemones. “Not only are we the first to analyze these vision genes (opsins) in these early animals, but because we don’t find them in earlier evolving animals like sponges, we can put a date on the evolution of light sensitivity in animals,” said David C. Plachetzki, first author and a graduate student at UC Santa Barbara. The research was conducted with a National Science Foundation dissertation improvement grant. “We now have a time frame for the evolution of animal light sensitivity. We know its precursors existed roughly 600 million years ago,” said Plachetzki.

Senior author Todd H. Oakley, assistant professor of biology at UCSB, explained that there are only a handful of cases where scientists have documented the very specific mutational events that have given rise to new features during evolution. Oakley said that anti-evolutionists often argue that mutations, which are essential for evolution, can only eliminate traits and cannot produce new features. He goes on to say, “Our paper shows that such claims are simply wrong. We show very clearly that specific mutational changes in a particular duplicated gene (opsin) allowed the new genes to interact with different proteins in new ways. Today, these different interactions underlie the genetic machinery of vision, which is different in various animal groups.” Hydras are predators, and the authors speculate that they use light sensitivity in order to find prey. Hydra use opsin proteins all over their bodies, but they are concentrated in the mouth area, near the tip of the animal. Hydras have no eyes or light-receptive organs, but they have the genetic pathways to be able to sense light. Source: PLoS One

Science Daily
October 30, 2007


Glaucoma surgery in the blink of an eye

One of a small number of surgeons in the world who currently perform a complicated form of glaucoma surgery, Prof. Assia has developed a novel laser device that promises to revolutionize treatment of the disease. The laser, called the OTS134 for now, is expected to give most practicing eye surgeons the ability to master complex glaucoma surgery very quickly. Glaucoma, nicknamed the silent sight thief, is the second leading cause of blindness in the West. “Glaucoma is a serious problem that starts to cause nerve damage without people realizing that anything is happening to their eyesight, often until it is too late,” says Prof. Assia, who is also the director of Ophthalmology at Meir Hospital in Israel, which treats thousands of glaucoma patients each year. The most common surgical treatment in use today perforates the wall of the eye, often resulting in collapse of the eyeball, infection, cataract formation and other complications. A more effective and elegant approach, a specialty of Prof. Assia’s, involves penetration of the eye wall to a depth of only about 95 percent, leaving a razor-thin layer intact. The difference between success and failure may amount to just a few microns.

This highly-specialized non-penetrating surgery, requiring years of rigorous training and great skill, is performed by only a small number of surgeons at leading international ophthalmology centers. But a small observation led Prof. Assia to think about a method that could make the procedure accessible to eye surgeons without the long and involved training. “Several years ago I served as a consultant for a company that produces CO2 lasers, which are used for different kinds of cosmetic and skin surgery. Because it is a relatively strong type of laser, it was not a likely candidate for use on something as delicate as the eye. However, one of the CO2 laser’s unique characteristics is that it does not function when it comes in contact with liquid. It occurred to me that this would be a perfect fit for non-penetrating surgery, because the moment the CO2 laser came in contact with the intra-ocular liquid, it would automatically shut off,” he recalls.

Working in partnership with the Israeli-based company IOPtima, Prof. Assia has already carried out a series of successful human trials. A larger worldwide study will take place this year before the company launches the OTS134 — as it plans to do in the United States — by the middle of 2008. Glaucoma affects 3 million Americans, with onset typically around the age of 40. It is a disease brought on by a seemingly harmless increase of pressure in the eyeball. When this pressure builds up over time, the aging body cannot correct it effectively. Glaucoma eventually damages the optic nerve, with extreme tunnel vision and complete blindness ensuing. “There are drug treatments that can reduce the intra-ocular pressure, but that means life-long treatment involving two or three kinds of eye drops three times a day,” says Prof. Assia. “We find that a large number of patients don’t comply with this treatment, especially because the harmful effects of not taking the medicine properly are not immediately felt.” Although he counsels that surgical approaches to glaucoma also carry risks, the OTS134 is a promising tool for more widespread treatment with fewer complications.

Science Daily Health & Medicine
October 2, 2007


Nanoparticle offers promise for treating glaucoma

A unique nanoparticle made in a laboratory at the University of Central Florida is proving promising as a drug delivery device for treating glaucoma, an eye disease that can cause blindness and affects millions of people worldwide. “The nanoparticle can safely get past the blood-brain barrier, making it an effective non-toxic tool for drug delivery,” said Sudipta Seal, an engineering professor with appointments in UCF’s Advanced Materials Processing and Analysis Center and the Nanoscience Technology Center. Seal and his colleagues from North Dakota State University note in their paper that while barely 1-3 percent of existing glaucoma medicines penetrate into the eye, earlier experiments with nanoparticles have shown not only high penetration rates but also little patient discomfort. The minuscule size of the nanoparticles makes them less abrasive than some of the complex polymers now used in most eye drops.

Seal and his team created a specialized cerium oxide nanoparticle and bound it with a compound that has been shown to block the activity of an enzyme (hCAII) believed to play a central role in causing glaucoma. The disease involves abnormally high pressure of the fluid inside the eye, which, if left untreated, can result in damage to the optic nerve and vision loss. High pressure occurs, in part, because of a buildup of carbon dioxide inside the eye, and the compound blocks the enzyme that produces carbon dioxide. Seal and a team of collaborators including Sanku Mallik, of North Dakota State University, developed the research on using nanoparticles as a delivery mechanism for the compound after supervising a student summer project at UCF. Duke University undergraduate Serge Reshetnikov spent a summer studying nanoscience on UCF’s Orlando campus as part of a Research Experience for Undergraduates (REU) project funded by the National Science Foundation. Reshetnikov started looking into the possibilities of using nanoparticles as drug delivery tools. Subsequent research with his advisors led to the specific application for glaucoma. In their paper on the research, which was also supported by the National Science Foundation, Seal and Mallik note the results are “very promising” and that their nanoparticle configuration offers seemingly limitless possibilities as a non-toxic drug delivery tool.

Science Daily
July 10, 2007


New eye research to end in tears

University of Western Sydney researcher, Associate Professor Tom Millar has approached the problem of dry eyes from a new perspective. He re-examined the structure and function of natural tears to find new clues for creating longer lasting artificial tears. Tears protect and lubricate the cornea and conjunctiva of the eye and help provide a clear medium through which we see. Dry eyes occur when tears evaporate or break-up too quickly. Anyone can experience dry eyes, but the problem is more common when you stare at computer screens, wear contact lenses or after you turn 65. Hot dry conditions in summer, winter heating and taking antihistamines can also aggravate the condition. Associate Professor Millar, from the School of Natural Sciences, says the interaction between the liquid tear and air holds the key to slowing the ‘break-up time’ of tears. “At the surface of all liquids, including tears, molecules are spread very thinly,” he says.

“A good example of what’s happening at the micro level can be seen when you put a small drop of oil into a bowl of water. The oil spreads over the entire surface, so a little bit goes a long way. “When we looked closely at the thin surface layer of molecules on tears – the ‘tear film’ – we found proteins previously thought to be confined to the aqueous portion of the tear,” he says. Further study by Associate Professor Millar revealed, for the first time, that proteins at the surface also play an unexpected role in slowing down the break-up rate of tears. “Proteins on the tear film interact and behave very differently. They lower the surface tension and make tears more stable,” he says. Previously it was believed that lipids – released from small holes inside the eyelids – formed an oily barrier, which protected the tears from evaporating too quickly. Associate Professor Millar’s discovery has opened a whole new avenue of research and is the culmination of 14 years of blood, sweat and, literally, tears. Over the years, he has collected samples of his own tears to extract compounds needed for experiments. Already Associate Professor Millar’s research, and tears, has helped to develop a synthetic polymer, which has doubled the tear break-up time in animal trials. “The ultimate goal is to create effective eye drops which work with your natural tears to give lasting relief from dry eyes,” Associate Professor Millar says.

Science Daily
June 12, 2007


First trial of gene therapy to restore human sight

The first clinical trial using gene therapy to treat a vision disorder has begun, involving 12 patients with an inherited condition that causes childhood blindness. The treatment, which is taking place in London, UK, hopes to restore vision in patients who have a genetic defect that causes degeneration of the retina. Robin Ali at Moorfields Eye Hospital in London and colleagues are treating adults and children with Leber’s congenital amaurosis (LCA), caused by an abnormality in the RPE65 gene. This gene is important in recycling retinol, a molecule that helps the retina detect light. People with LCA usually lose vision from infancy. Ali’s team are inserting healthy copies of RPE65 into cells in the retina, using a viral vector. Previously, dogs with LCA have had their vision restored in this way, allowing them to walk through a maze for the first time without difficulty.

Leonard Seymour, who leads the Gene Delivery Group at the University of Oxford in the UK, and is not involved in the current trial, says the retina is a good place for gene therapy because it can be accessed by injection to overcome the problem of delivery. “The retina is also good because it is relatively immune-privileged, meaning that the vector (in this case a virus) should not be neutralised immediately upon administration,” he says. The team have been developing the therapy for almost 15 years, and Ali says testing it for the first time in patients represents a huge step towards establishing gene therapy for the treatment of many different eye conditions. “The results from this first human trial are likely to provide an important basis for many more gene therapy protocols in the future, as well as potentially leading to an effective treatment for a rare but debilitating disease,” he says. Although some patients in the trial have already had the procedure, the researchers say it will be many months before they know whether or not the treatment has been successful.

New Scientist
May 15, 2007


Bypassing bad eyes

[Image: an illustration of what a prosthetic visual device might look like: two small digital video cameras mounted on a set of glasses, connected to a signal processor that transmits visual impulses wirelessly to a surgically implanted stimulator.]

The trick to restoring vision in people blinded by injury or disease may be to bypass the eyes entirely. By establishing a connection between a video device and the part of the brain that receives visual stimuli, researchers have shown that the brain can interpret electronic signals in the same way it interprets light waves. For years, scientists have tried with limited success to provide sight to the blind via prosthetic devices. One approach is to stimulate the remaining healthy neurons in the retina, the light-sensitive lining of the inside of the eyeball, with miniature electrodes that mimic the effects of incoming light. But retinal tissue is so fragile that it is easily damaged. Another tactic involves inserting microelectrodes into the primary visual cortex, the main part of the brain responsible for processing visual signals, and stimulating visual nerve cells with electrical impulses. So far, however, no one has been able to achieve more than simple behavioral responses in test animals because of the complexity of that cerebral area and the nature of visual signals.

A team from Harvard Medical School has tried a new approach. Reporting online this week in Proceedings of the National Academy of Sciences, the researchers describe how they got lab animals to track preselected artificial visual signals with their eyes–just as though they were watching lights flashing on a real video screen–by precisely inserting two minute electrodes into the lateral geniculate nucleus (LGN) of the thalamus. This part of the brain acts like a relay station for visual information. All signals from the eyes run through the LGN to the visual cortex. The experiment used sighted monkeys so the researchers could compare their responses to real images with artificial signals, says neuroscientist and co-author John Pezaris. Although the team only used two electrodes as proof of concept, he says, the monkeys followed both the real and artificial signals in exactly the same way.

If further animal research is successful, Pezaris says his team hopes to move on to work with human volunteers, using implants containing more and more electrodes to transmit visual signals of increasing complexity. Eventually, the procedure could lead to a full-fledged artificial vision system: twin digital video cameras, worn as a pair of glasses, that transmit signals wirelessly to an implanted neural stimulator, which in turn connects to microelectrodes implanted in the brain. The research represents “a fundamental advance toward a visual prosthesis,” says neurobiologist Nicholas Hatsopoulos of the University of Chicago in Illinois. For one thing, he says, the thalamus is easier to stimulate than the retina and is much less prone to tissue damage. Furthermore, current neurosurgical procedures, such as those used to treat diseases such as Parkinson’s, “could be modified relatively easily to stimulate the thalamus.”

ScienceNow
May 15, 2007


Implants buried deep inside the brain may provide the best hope yet for vision-restoring bionic eyes

Most visual prosthetics rely on implants behind the retina. These stimulate surrounding nerve tissue to generate points of light, called phosphenes, in the mind’s eye. Such prosthetics require a detailed map of where phosphenes appear in response to electrical stimulation. Once this map is complete, digital images, captured by a camera, can be converted to electrical pulses that produce multiple points of light, allowing a blind person to “see” simple shapes. In patients with severe eye trauma, however, there may not be enough surviving retinal neurons to stimulate. Or a patient’s retinas may simply have degenerated over time. An alternative is to place implants directly in the brain, within the visual cortex. But this is a large and complexly folded part of the brain, making access and mapping of the visual field a serious challenge.
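The pipeline described above, from camera image to electrical pulses at mapped phosphene locations, can be sketched in a few lines. The phosphene map, the tiny "camera image", and the threshold are all invented for illustration; in a real prosthesis the map is measured per patient by recording where each electrode's stimulation evokes a point of light.

```python
# Toy sketch of the image-to-stimulation pipeline described above.
# The phosphene map and the 3x3 "camera image" are invented.

# electrode id -> (x, y) position of the phosphene it evokes in the visual field
PHOSPHENE_MAP = {
    0: (0, 0), 1: (1, 0), 2: (2, 0),
    3: (0, 1), 4: (1, 1), 5: (2, 1),
    6: (0, 2), 7: (1, 2), 8: (2, 2),
}

def image_to_pulses(image, threshold=128):
    """Return the ids of electrodes to pulse so the evoked phosphenes
    approximate the bright pixels of a (tiny) camera image."""
    bright = {(x, y)
              for y, row in enumerate(image)
              for x, value in enumerate(row)
              if value >= threshold}
    return sorted(e for e, pos in PHOSPHENE_MAP.items() if pos in bright)

# A vertical bar of bright pixels down the middle column:
frame = [[0, 255, 0],
         [0, 255, 0],
         [0, 255, 0]]
print(image_to_pulses(frame))   # → [1, 4, 7]
```

The same structure explains why the mapping step matters: without knowing where each electrode's phosphene appears, the device cannot decide which electrodes reproduce a given shape.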

John Pezaris and colleague R. Clay Reid, both at Harvard Medical School in Boston, US, have shown that phosphenes can be produced by stimulating the lateral geniculate nucleus (LGN) – an area deep in the centre of the brain that relays visual signals from the retina to the cortex. The LGN was previously thought to be too difficult to reach. But surgical advances for deep brain stimulation – including treatment used for movement disorders such as Parkinson’s disease – have made accessing it relatively easy, via a single small hole in the skull. Pezaris and Reid tested LGN electrode implants on two adult macaques. Each animal had previously been trained to direct its gaze quickly towards a point of light on a computer screen. The researchers then ran three types of trials: one in which a flash appeared on the screen; another in which the monkey received an electrical pulse from its implant; and a third in which nothing happened at all. With electrical stimulation, the monkeys directed their gaze at specific points in front of them, exactly as if they had just “seen” a flash. When the researchers implanted two separate electrodes, stimulating different parts of the LGN, the monkeys looked in two different directions, one after another.

“This research establishes that there is a new avenue for further exploration,” says Pezaris. “What we created was only two points of light, two pixels. Though the exact numbers haven’t been determined accurately, it’s generally thought that we need some hundreds of them for general vision.” In the coming months, the team will repeat the experiment with eight electrodes, and ultimately plan to apply the technique to humans. Peter Schiller at MIT, who works with implants in the visual cortex, says only further research will reveal what area is best suited for implants. “The geniculate is more promising than the retina, but I am not at all convinced that it is better than the primary visual cortex,” he says. “Given the limitations of how tightly packed you can put in an [electrode] array, and how much the current spreads at the tip of the electrodes, it is highly desirable to place them in an area with the largest amount of visual tissue available,” he told New Scientist.
Source: PNAS

New Scientist
May 1, 2007


Mice given a human photopigment gene have better color discrimination than do their peers

Aside from primates, most mammals are largely colorblind. Now researchers have found that transgenic mice can acquire the ability to detect new color differences if given a gene for making an additional light-sensing eye protein. The findings have implications for understanding how color vision evolved. Primates can distinguish the colors of the rainbow better than other mammals because their eyes contain three photopigment proteins. Each photopigment is sensitive to light of a particular wavelength, and the primate visual system detects colors by comparing the relative activity of cells in the retina that bear each of the three photopigments. Most other mammals, however, only make two photopigments, limiting their color discrimination. Scientists have suggested that trichromatic color vision arose in primates when one of the two photopigment genes they already had mutated to produce a third photopigment.

A sudden mutation like this could have given primates an instant advantage when it came to finding food–but only if their visual system were able to make sense of the new information. Certain differences in retina anatomy between primates and other mammals led many researchers to suspect that only primates had the right kind of wiring to make use of a sudden addition of a third photopigment. But perhaps not. In the new experiment, vision scientist Gerald Jacobs at the University of California, Santa Barbara, teamed up with geneticist Jeremy Nathans at Johns Hopkins Medical School in Baltimore, Maryland, and other colleagues to add a human photopigment gene to mice. Electrical recordings from the retinas of the engineered mice indicated that the added photopigment had enabled their color-sensing cone cells to respond to long wavelength red light, which normal mice can’t see. Next the team gave the mice a battery of behavioral tests that required them to poke their nose at panels in their enclosure to indicate which of three panels was a different color than the other two. Right answers earned a tiny drop of soy milk (“It’s kind of hippie-ish, but they really enjoy it,” Jacobs says.) The engineered mice passed with flying colors, so to speak, making distinctions that regular mice cannot, the researchers report in tomorrow’s Science.
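The scoring logic of the behavioral test described above can be sketched as a short loop. Panel colors, trial data, and function names are invented for illustration; the point is the rule: the animal is rewarded when it picks the single panel that differs from the other two.

```python
# Minimal sketch of the three-panel odd-one-out task described above.
# Trial data are invented; exactly one panel differs in each trial.

def odd_one_out(panels):
    """Return the index of the panel whose color differs from the other two."""
    a, b, c = panels
    if a == b:
        return 2
    if a == c:
        return 1
    return 0

def run_trials(trials, choices):
    """Score a sequence of trials: one reward per correct nose-poke."""
    return sum(1 for panels, choice in zip(trials, choices)
               if choice == odd_one_out(panels))

trials  = [("red", "red", "green"), ("blue", "green", "blue")]
choices = [2, 1]                     # the engineered mouse picks correctly
print(run_trials(trials, choices))   # → 2
```

A normal dichromatic mouse would score at chance on red-versus-green trials like the first one, which is exactly the distinction the engineered mice could make.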

ScienceNow
April 3, 2007

Original web page at ScienceNow

Categories
News

New device could revolutionize eye disease diagnosis, creating eye maps on the high street

A new digital ophthalmoscope, devised by a research team led by the University of Warwick, can provide both doctors and high street optometrists with a handheld eye disease diagnosis device equal in power to bulky hospital-based eye diagnosis cameras. It will also give optometrists the ability to email detailed eye maps of patients to specialist eye doctors. Ophthalmoscopes, which act as an illuminated microscope for the eye, have changed little in design in the last century. As a result, the effective operation of the device is constrained by the skill, expertise and eyesight of the eye specialist.

The new digital ophthalmoscope (developed from a three-year research partnership bringing together the University of Warwick, ophthalmoscope manufacturer Keeler Optics, City University and UCL) uses a combination of specialist lenses, digital imaging and lighting technology which for the first time allows a high-quality digital image to be captured and recorded by an ophthalmoscope. University of Warwick research professor Peter Bryanston-Cross has also been able to apply software used to stitch together detailed map images to assemble the captured images from the digital ophthalmoscope. This produces a single, highly detailed picture of medical significance and usefulness. It provides a map of the eye equal in field of view and resolution to the large “Fundus” cameras typically used in hospital settings to examine eyes, at around a tenth of the cost. This technology will be a powerful tool in the hands of specialist eye doctors, but it will also revolutionize eye care on the high street. Previously, high street opticians have had to rely on notes and hand-drawn sketches when referring customers to eye clinics. The new technology will allow them to create and email detailed eye images to hospital specialists, cutting patient referral and diagnosis times and greatly easing the burden on expensive and overstretched hospital eye equipment.

Warwick Hospital consultant eye surgeon Gary Misson has been working with Professor Peter Bryanston-Cross’s digital ophthalmoscope research programme for several years. He says: “This is an exciting development as it makes an instrument that is traditionally difficult to use much easier to handle and therefore available for use to a wider range of health care workers. It will allow digital images of diseases such as the potentially blinding complications of diabetes and glaucoma to be accurately and quickly sent to specialists, who will then be able to arrange appropriate treatment. I foresee a relatively inexpensive instrument that is about the size of a mobile phone in common use in the near future.”

Science Daily
March 20, 2007

Original web page at Science Daily

Categories
News

Researcher placing eye implants in cats to help humans see

In “Star Trek: The Next Generation,” Geordi La Forge is a blind character who can see through the assistance of special implants in his eyes. While the Star Trek character “lives” in the 24th century, people living in the 21st century may not have to wait that long for the illuminating technology. Kristina Narfstrom, a University of Missouri-Columbia veterinary ophthalmologist, has been working with a microchip implant to help blind animals “see.” According to Narfstrom, the preliminary results are promising. “About one in 3,500 people worldwide is affected with a hereditary disease, retinitis pigmentosa, that causes the death of retinal cells and, eventually, blindness,” Narfstrom said. “Our current study is aimed at determining safety issues in regard to the implants and to further develop surgical techniques. We also are examining the protection the implants might provide to the retinal cells that are dying due to disease progression with the hope that natural sight can be maintained much longer than would be possible in an untreated patient.”

Narfstrom, the Ruth M. Kraeuchi-Missouri Professor in Veterinary Ophthalmology, is working primarily with Abyssinian and Persian cats that are affected with hereditary retinal blinding disease. The cat’s eye is a good model to use for this type of research because it is very similar to a human eye in size and construction, so surgeons can use the same techniques and equipment. Cats also share many of the same eye diseases with humans. The Abyssinian cats that Narfstrom is working with typically start to lose their sight when they are around one or two years old and are completely blind by age four.

To date, Narfstrom has performed surgeries on severely visually impaired or blind cats. During the surgery, Narfstrom makes two small cuts into the sclera, the outer wall of the eyeball. After removing the vitreous, which is the gelatinous fluid inside the back part of the eyeball, Narfstrom creates a small blister in the retina and a small opening, large enough for the microchip, which is just two millimeters in diameter and 23 micrometers (millionths of a meter) thick. The chip includes several thousand microphotodiodes that react to light and produce small electrical impulses in parts of the retina. “We are really excited about the potential uses for this technology and the potential to create improved vision in some of the millions of people affected worldwide with retinal blindness,” Narfstrom said. “This technology also may be beneficial for pets that have similar diseases because this technology can benefit both animals and humans.”

Science Daily
January 23, 2007

Original web page at Science Daily

Categories
News

Blind mice see after cell transplant

Using a technique that may one day help blind people to see, researchers have shown in mice that retinal cells from newborns transplanted into the eyes of blind adults wire up correctly and help them to detect light. The finding challenges conventional biological thinking, because it shows that cells that have stopped dividing are better for transplantation than the stem cells that normally make new cells. For decades, researchers have sought a way to replace the light-detecting cells that carpet the back of our eyes — and which break down in diseases such as retinitis pigmentosa and macular degeneration. But they have struggled to find cells that will work normally after being transplanted into the eye.

To find the best cell type, researchers led by Anand Swaroop at the University of Michigan, Ann Arbor, and Robin Ali at University College London, UK, extracted cells from the retinas of mice at various times when photoreceptors are normally being generated, as embryos and after they are born. They then injected these cells into adult mouse retinas and counted how many new photoreceptors were generated. Cells produced in the few days after birth generated the most new photoreceptors after transplantation and connected to the retina correctly, they found. These cells were destined to be photoreceptors but had not fully matured into rods, the cells that detect low light. The results are published in Nature. Injecting these cells into the eyes of partially blind mice improved the animals’ sight, making their pupils react to light. “For us ophthalmologists it’s very, very, very exciting,” says Robert MacLaren, one of the study’s authors at Moorfields Eye Hospital, London. “We can suddenly see in our minds a potential treatment.”

It would be difficult to obtain equivalent human cells for transplantation, because they would have to come from fetuses in the first or second trimester of pregnancy. But MacLaren says that it may soon be possible to grow the correct retinal cells from adult stem cells or embryonic stem cells. In the past there have been many attempts at transplanting tissue into the adult retina. Some researchers have transferred whole sheets of fetal retina into animals — a method that is now showing good results in tests on humans, says Robert Aramant of the company Ocular Transplantation in Louisville, Kentucky. But these sheets do not join properly to the rest of the retina, says Thomas Reh, who studies retinal development at the University of Washington, Seattle. And transplanted stem cells have not efficiently generated new photoreceptors or restored sight. “This new work is head and shoulders above most of the other studies,” Reh says. MacLaren thinks that his cells are well suited to transplantation, because they are only one step from being adaptable stem cells and can tolerate being moved from one eye to another. Also, they are newly committed to becoming photoreceptors, so that they continue to grow into photoreceptors even after the move.

Nature
December 5, 2006

Original web page at Nature

Categories
News

Light-sensitive photoswitches could restore sight to those with macular degeneration

A research center newly created by the University of California, Berkeley, and Lawrence Berkeley National Laboratory (LBNL) aims to put light-sensitive switches in the body’s cells that can be flipped on and off as easily as a remote control operates a TV. Optical switches like these could trigger a chemical reaction, initiate a muscle contraction, activate a drug or stimulate a nerve cell – all at the flash of a light. One major goal of the UC Berkeley-LBNL Nanomedicine Development Center is to equip cells of the retina with photoswitches, essentially making blind nerve cells see, restoring light sensitivity in people with degenerative blindness such as macular degeneration. “We’re asking the question, ‘Can you control biological nanomolecules – in other words, proteins – with light?'” said center director and neurobiologist Ehud Y. Isacoff, professor of molecular and cell biology and chair of the Graduate Group in Biophysics at UC Berkeley. “If we can control them by light, then we could develop treatments for eye or skin diseases, even blood diseases, that can be activated by light. This challenge lies at the frontier of nanomedicine.”

The nanoscience breakthrough at the core of the research was developed at UC Berkeley and LBNL over the past several years by neuroscientist Richard Kramer, professor of molecular and cell biology, Dirk Trauner, professor of chemistry, and Isacoff – all three members of the Physical Bioscience Division of LBNL. It involves altering an ion channel commonly found in nerve cells so that the channel turns the cell on when zapped by green light and turns the cell off when hit by ultraviolet light. The researchers demonstrated in 2004 that they could turn cultured nerve cells on and off with this optical switch. Since then, with UC Berkeley Professor of Vision Science and Optometry John Flannery, they’ve injected photoswitches into the eyes of rats that have a disease that kills their rods and cones, and have restored some light sensitivity to the remaining retinal cells.

Isacoff, Kramer, Flannery and Trauner have now joined forces with nine other researchers from UC Berkeley and LBNL, as well as from Stanford University, Scripps Institution of Oceanography and the California Institute of Technology, to perfect this fundamental development and bring it closer to medical application. Their group, centered on the optical control of biological function, will develop viruses that can carry the photoswitches into the correct cells, new types of photoswitches based on other chemical structures, and strategies for achieving the desired control of cell processes. “The research will focus on one major application: restoring the response to light in the eyes of people who have lost their photoreceptor cells, in particular, the rods and cones in the most sensitive part of the retina,” Isacoff said. “We plan to develop the tools to create a new layer of optically active cells for the retina.”

Loss of photoreceptors – the light detectors in the retina – is the major cause of blindness in the United States. One in four people over age 65 suffers vision loss as a result of this condition, the most common diagnosis being macular degeneration. The chemistry at the core of the photoswitch is a molecule – an azobenzene compound – that changes its shape when illuminated by light of different colors. Kramer, Trauner and Isacoff created a channel called SPARK, for Synthetic Photoisomerizable Azobenzene-Regulated K (potassium) channel, by attaching the azobenzene compound to a broken potassium channel, which is a valve found in nerve cells. When attached, one end of the compound sticks in the channel pore and blocks it like a drain plug. When hit with UV light, the molecule kinks and pulls the plug, allowing ions to flow through the channel and activate the nerve cell. Green light unkinks it and replugs the channel, blocking ion flow.

Isacoff said that this same photoswitch could be attached to a variety of proteins to push or pull them into various shapes, even making a protein bend in half like a pair of tweezers. In 2006, in a cover article in the new journal Nature Chemical Biology, the researchers described for the first time a re-engineered glutamate receptor that is sensitive to light, which complements the SPARK channel because the same color of light will turn one on while turning the other off. “Now we have photochemical tools for an on switch and an off switch for nerve cells,” Kramer said. “This will allow us to simulate the natural activity of the healthy retina, which has on cells and off cells that respond to light in opposite ways.” Isacoff, Kramer, Trauner and their colleagues are experimenting with other molecules that can force shape changes, looking for improved ways to attach shape-changing molecules to proteins, developing means to shuttle these photoswitches into cells, building artificial genes that can be inserted into a cell’s DNA to express the photoswitches in the correct cell, and searching for ways to get light into areas of the body that cannot be illuminated directly. “I’m struck by how versatile this approach seems to be,” Isacoff said, noting its applications for screening, diagnosing and treating disease. “I’m convinced that we’ll come up with a therapy that will work in the clinic.”

Science Daily
November 21, 2006

Original web page at Science Daily

Categories
News

Giant pandas see in color

They may be black and white, but new research at the Georgia Institute of Technology and Zoo Atlanta shows that giant pandas can see in color. Graduate researcher Angela Kelling tested the ability of two Zoo Atlanta pandas, Yang Yang and Lun Lun, to see color and found that both pandas were able to discriminate between colors and various shades of gray. The research is published in the psychology journal Learning and Behavior. “My study shows that giant pandas have some sort of color vision,” said Kelling, graduate student in Georgia Tech’s Center for Conservation Behavior in the School of Psychology. “Most likely, their vision is dichromatic, since that seems to be the trend for carnivores.”

Vision is not a well-studied aspect of bears, including the giant pandas. It has long been thought that bears have poor vision, perhaps, Kelling said, because they have such excellent senses of smell and hearing. Some experts have thought that bears must have some sort of color vision, as it would help them distinguish edible plants from inedible ones, although there has been little experimental evidence of this. However, one experiment on black bears found some evidence that bears could tell blue from gray and green from gray. Kelling used this study’s design as the basis to test color vision in Zoo Atlanta’s giant pandas. Over a two-year period, Kelling investigated whether giant pandas can tell the difference between colors and shades of gray. In separate tests, the two pandas (Lun Lun, the female, and Yang Yang, the male) were presented with three PVC pipes, two hanging under a piece of paper that contained one of 18 shades of gray and one that contained a color – red, green or blue. If the panda pushed the pipe located under a color, it received a reward. If it pushed one of the pipes under the gray paper, it received nothing.

Kelling tested each color separately against gray. In the green versus gray tests, the bears’ performance in choosing green was variable, but mostly above chance. In the red versus gray tests, both bears performed above chance every single time. Only Lun Lun completed the blue versus gray tests because Yang Yang had a tooth problem that prevented him from eating the treats used as reinforcement. For this trial, Lun Lun performed below chance only once. “While this study shows that giant pandas have some color vision, it wasn’t conclusive as to what level of color vision they have,” said Kelling. “From this study, we can’t tell if the pandas can tell the difference between the colors themselves, like red from blue, or blue from green. But we can see that they can determine if something is gray or colored. That ability and the accompanying visual acuity could lead to the pandas being better able to forage for bamboo. For instance, to determine whether to head for a bamboo patch that is healthy and colorful as opposed to one that is brown and dying.”
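“Above chance” in a task like this is typically judged against the one-in-three guessing rate, since only one of the three pipes hangs under the colored paper. The sketch below shows one way such a tail probability can be computed; the trial counts are hypothetical, as the article does not report the actual numbers per session.

```python
from math import comb

def p_at_least(k, n, p=1/3):
    """Probability of k or more correct picks in n three-choice trials
    if the animal were simply guessing (chance = 1/3)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical session (illustrative numbers only): 13 correct out of 20.
# A small tail probability means guessing is an unlikely explanation,
# i.e. performance is above chance.
p_value = p_at_least(13, 20)
```

A design choice worth noting: the exact binomial tail is used rather than a normal approximation, which keeps the sketch valid even for the small trial counts typical of animal behavioral testing.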

Science Daily
October 24, 2006

Original web page at Science Daily

Categories
News

New growth in old eyes

Nerve cells in the retinas of elderly mice show an unexpected and purposeful burst of growth late in life, according to researchers at UC Davis. “Mostly, the older you are, the more neurons shrivel up and die. This gives us a more optimistic view of aging,” said Leo Chalupa, professor of ophthalmology and neurobiology, chair of neurobiology, physiology and behavior at UC Davis, and senior author on the paper, which was published online Aug. 8 by the journal Proceedings of the National Academy of Sciences of the U.S.A. The nerves of the eye are really a part of the brain, Chalupa said, so this discovery means that it might be possible to encourage other parts of the aging brain to grow back. The group has preliminary evidence that the same process takes place in the eyes of elderly humans.

The nerve cells, or neurons, in the retina form a layer over another layer containing the light-sensitive cells. The neurons collect signals from the light-sensitive layer and relay them back to the brain. Lauren Liets, a researcher in Chalupa’s laboratory, noticed that in mice more than a year old — roughly 70 to 80 years old in human terms — the neurons sprouted tendrils into the photoreceptor layer, and the older the mice, the more growth took place. At the same time, the photoreceptor cells are shrinking and pulling back, so the neurons appear to be following them, perhaps compensating for those effects, said Chalupa. Similar sprouting occurs in damaged or detached retinas, Liets said. But this is the first time such an effect has been seen in the normal, aging eye, she said.

Science Daily
September 26, 2006

Original web page at Science Daily

Categories
News

How much the eye tells the brain

Researchers at the University of Pennsylvania School of Medicine estimate that the human retina can transmit visual input at about the same rate as an Ethernet connection, one of the most common local area network systems used today. They present their findings in the July issue of Current Biology. This line of scientific questioning points to ways in which neural systems compare to artificial ones, and can ultimately inform the design of artificial visual systems. Much research on the basic science of vision asks what types of information the brain receives; this study instead asked how much. Using an intact retina from a guinea pig, the researchers recorded spikes of electrical impulses from ganglion cells using a miniature multi-electrode array. The investigators calculate that the human retina can transmit data at roughly 10 million bits per second. By comparison, an Ethernet can transmit information between computers at speeds of 10 to 100 million bits per second.

The retina is actually a piece of the brain that has grown into the eye and processes neural signals when it detects light. Ganglion cells carry information from the retina to the higher brain centers; other nerve cells within the retina perform the first stages of analysis of the visual world. The axons of the retinal ganglion cells, with the support of other types of cells, form the optic nerve and carry these signals to the brain. Investigators have known for decades that there are 10 to 15 ganglion cell types in the retina that are adapted for picking up different movements and then work together to send a full picture to the brain. The study estimated the amount of information that is carried to the brain by seven of these ganglion cell types. The guinea pig retina was placed in a dish and then presented with movies containing four types of biological motion, for example a salamander swimming in a tank to represent an object-motion stimulus. After recording electrical spikes on an array of electrodes, the researchers classified each cell into one of two broad classes: “brisk” or “sluggish,” so named because of their speed.

The researchers found that the electrical spike patterns differed between cell types. For example, the larger, brisk cells fired many spikes per second and their response was highly reproducible. In contrast, the smaller, sluggish cells fired fewer spikes per second and their responses were less reproducible. But, what’s the relationship between these spikes and information being sent? “It’s the combinations and patterns of spikes that are sending the information. The patterns have various meanings,” says co-author Vijay Balasubramanian, PhD, Professor of Physics at Penn. “We quantify the patterns and work out how much information they convey, measured in bits per second.”

Calculating the proportions of each cell type in the retina, the team estimated that about 100,000 guinea pig ganglion cells transmit about 875,000 bits of information per second. Because sluggish cells are more numerous, they account for most of the information. With about 1,000,000 ganglion cells, the human retina would transmit data at roughly the rate of an Ethernet connection, or 10 million bits per second. “Spikes are metabolically expensive to produce,” says lead author Kristin Koch, a PhD student in the lab of senior author Peter Sterling, PhD, Professor of Neuroscience. “Our findings hint that sluggish cells might be ‘cheaper,’ metabolically speaking, because they send more information per spike. If a message must be sent at a high rate, the brain uses the brisk channels. But if a message can afford to be sent more slowly, the brain uses the sluggish channels and pays a lower metabolic cost.” “In terms of sending visual information to the brain, these brisk cells are the FedEx of the optic system, versus the sluggish cells, which are the equivalent of the U.S. mail,” notes Sterling. “Sluggish cells have not been studied that closely until now. The amazing thing is that when it’s all said and done, the sluggish cells turned out to be the most important in terms of the amount of information sent.”
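The human figure follows from straightforward scaling of the guinea pig measurement. A minimal sketch of that arithmetic, using only the numbers quoted in the article:

```python
# Extrapolating the guinea pig measurement to the human retina.
# Figures are the article's estimates; the per-cell rate in reality
# varies by ganglion-cell type.
guinea_pig_cells = 100_000
guinea_pig_rate_bps = 875_000        # total bits per second, all cells

rate_per_cell = guinea_pig_rate_bps / guinea_pig_cells   # bits/s per cell

human_cells = 1_000_000
human_rate_bps = human_cells * rate_per_cell             # total bits/s

# ~8.75 Mbit/s, which the article rounds to "roughly 10 million bits
# per second" -- comparable to a 10 Mbit/s Ethernet link.
human_rate_mbps = human_rate_bps / 1e6
```

The extrapolation assumes the human per-cell rate matches the guinea pig's, which is why the article phrases the result as a rough estimate.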

Science Daily
August 14, 2006

Original web page at Science Daily

Categories
News

Why cornea is transparent and free of blood vessels, allowing vision

Scientists at the Harvard Department of Ophthalmology’s Schepens Eye Research Institute and Massachusetts Eye and Ear Infirmary (MEEI) are the first to learn why the cornea, the clear window of the eye, is free of blood vessels–a unique phenomenon that makes vision possible. The key, say the researchers, is the unexpected presence of large amounts of the protein VEGFR-3 (vascular endothelial growth factor receptor-3) on the top epithelial layer of normal healthy corneas.

According to their findings, VEGFR-3 halts angiogenesis (blood vessel growth) by acting as a “sink” to bind or neutralize the growth factors sent by the body to stimulate the growth of blood vessels. The cornea has long been known to have the remarkable and unusual property of not having blood vessels, but the exact reasons for this had remained unknown.

These results, published in the July 25, 2006 issue of the Proceedings of the National Academy of Sciences and in the July 17 online edition, not only solve a long-standing scientific mystery but also hold great promise for preventing and curing blinding eye disease, as well as illnesses such as cancer in which blood vessels grow abnormally and uncontrollably, since the mechanism that normally keeps the cornea vessel-free could be used therapeutically in other tissues. “This is a very significant discovery,” says Dr. Reza Dana, Senior Scientist at the Schepens Eye Research Institute, head of the Cornea Service at the Massachusetts Eye and Ear Infirmary, an associate professor at Harvard Medical School, and the senior author and principal investigator of the study. “A clear cornea is essential for vision. Without the ability to maintain a blood-vessel-free cornea, our vision would be significantly impaired,” he says, adding that clear, vessel-free corneas are vital to any animal that needs a high level of visual acuity to survive.

The cornea, one of only a few tissues in the body that actively keep themselves vessel-free (another is cartilage), is the thin transparent tissue that covers the front of the eye. It is the clarity of the cornea that allows light to pass onto the retina and from there to the brain for interpretation. When the cornea is clouded by injury, infection or abnormal blood vessel growth, vision is severely impaired, if not destroyed. Scientists have been wrestling with the “clarity” puzzle for many decades. And, while some previous studies have revealed small clues, none have pointed to one major mechanism, until this study.

In most other tissues of the body, blood vessel growth or angiogenesis occurs in response to a need for increased blood flow to heal an injured or infected area. The immune system sends in growth factors such as vascular endothelial growth factor (VEGF) to bind with a protein receptor called VEGFR-2 on blood vessels to trigger vessel growth. Three forms of VEGF–A, C, and D–bind with this receptor. Two of them, C and D, also bind with VEGFR-3, which is usually found on cells lining lymphatic vessels, to stimulate the growth of lymphatic vessels. Dana’s team began to suspect the involvement of VEGFR-3 in stopping blood vessel growth in corneas when they noticed unexpectedly that large amounts of the protein seemed to exist naturally on healthy corneal epithelium, a previously unknown location for the receptor. Dana and his team were already aware from clinical experience that the epithelium most likely played a role in suppressing blood vessel growth on the cornea, having witnessed blood vessels develop on corneas stripped of their epithelial layers.

They began to theorize that the large amounts of VEGFR-3, in this new, non-vascular location, might be attracting and sucking up all the C and D VEGF growth factors, thereby blocking them from binding with VEGFR-2. And, because this binding took place in a non-vascular setting, the growth factors were neutralized. To test their theory, the team conducted a series of experiments. They conducted chemical analyses that demonstrated that VEGFR-3 and the gene that expressed it were indeed present on the corneal epithelium. Next, in two separate experiments, they injured corneas with and without epithelial layers and compared them. They found that only the corneas without epithelial layers developed blood vessels, implicating the role of the epithelium in suppressing blood vessel growth. To further prove their theory, they added a VEGFR-3 substitute to corneas stripped of their epithelial layers and found that vessel growth continued to be suppressed, replacing the normal anti-angiogenic role of the epithelium. Finally they exposed intact corneas to an agent that blocked VEGFR-3 and found that blood vessels began to grow, formally demonstrating that the corneal epithelium is key to suppression of blood vessels and that the key mechanism is expression of VEGFR-3.

“The results from this series of tests confirmed our belief that the presence of VEGFR-3 is the major factor in preventing blood vessel formation in the cornea,” says Dana, who adds that the discovery will have a far-reaching impact on the development of new therapies for eye and other diseases. “Drugs designed to manipulate the levels of this protein could heal corneas that have undergone severe trauma or help shrink tumors fed by rapidly growing abnormal blood vessels,” he says. “In fact, the next step in our work is exactly this.”

Science Daily
August 1, 2006

Original web page at Science Daily

Categories
News

RNA therapy tackles eye disease

The commonest cause of blindness in the elderly has been treated with small pieces of genetic material that block genes. The result comes from the first clinical trial to assess the effectiveness of a therapy known as RNA interference (RNAi). The trial tested a drug called Bevasiranib on patients suffering from age-related macular degeneration (AMD), which erodes vision as blood vessels grow on the retina at the back of the eyes. The currently incurable condition affects about 1.65 million Americans. Bevasiranib was developed by Acuity Pharmaceuticals of Philadelphia. The company estimates that 11 million people worldwide will have AMD by 2013.

The trial on 129 patients found that Bevasiranib reduced blood-vessel growth in the eyes and improved vision slightly. At the lowest doses, these effects lasted for several months; at higher doses the positive responses are still present, says Dale Pfost, Acuity’s president. No adverse side effects were seen other than the anticipated swelling and inflammation at the site where the drug was injected into the eye. “It’s a very encouraging result,” says Pfost, who announced the preliminary findings at the meeting of the American Society of Gene Therapy in Baltimore on 1 June. Bevasiranib turns off the gene for a molecule called vascular endothelial growth factor (VEGF), which stimulates blood-vessel growth across the retina of AMD patients. The drug uses short sections of RNA, the molecular cousin of DNA, which often ferries genetic information around cells.

Short, interfering RNA (siRNA) strands stick to cellular RNA molecules that trigger the growth factor’s production. The siRNA carries a chemical unit that marks its quarry for destruction by the cell’s proteins. Less RNA means less growth factor, slowing the proliferation of blood vessels. The siRNA molecules do not appear to interact with DNA, easing concerns that the drug will alter patients’ genetic make-up. This is seen as a threat in other types of gene therapy, and has been suggested as the cause of leukaemia in three patients in a recent French gene-therapy trial. Pfost says that Bevasiranib could be used in combination with other drugs, such as the anticancer drug Avastin, that block the effects of VEGF. Ophthalmologists who have given their AMD patients Avastin have seen some encouraging results, but the effects only seem to last a few weeks. Bevasiranib could beat the disease in the longer term, says Pfost.

The successful trial is a milestone in the development of RNA therapies, says Mark Kay, an RNAi researcher at Stanford University, California, and president of the American Society of Gene Therapy. “I’m cautiously optimistic that RNAi will be useful in the clinic and that this will be established relatively soon,” he says. Formal results of the Phase 2 trial of Bevasiranib will be revealed in September. Phase 3 trials are expected to start at the end of 2007, with final results anticipated in 2009.

Nature
June 20, 2006

Original web page at Nature

Factor spurs regeneration in the optic nerve

Researchers at Children’s Hospital Boston have discovered a naturally occurring growth factor that stimulates regeneration of injured nerve fibers (axons) in the central nervous system. Under normal conditions, most axons in the mature central nervous system (which consists of the brain, spinal cord and eye) cannot regrow after injury. The previously unrecognized growth factor, called oncomodulin, is described in the May 14 online edition of Nature Neuroscience.

Rats whose retinas were treated with slow-release beads containing oncomodulin (plus a cofactor that boosts cells’ response to oncomodulin) showed dramatically increased growth of axons into the optic nerve as compared with controls that received empty beads. When oncomodulin was added to retinal nerve cells in a Petri dish, with known growth-promoting factors already present, axon growth nearly doubled. No other growth factor was as potent. In live rats with optic-nerve injury, oncomodulin released from tiny sustained-release capsules increased nerve regeneration 5- to 7-fold when given along with a drug that helps cells respond to oncomodulin. Yin, Benowitz and colleagues also showed that oncomodulin switches on a variety of genes associated with axon growth.
Benowitz, the study’s senior investigator, believes oncomodulin could someday prove useful in reversing optic-nerve damage caused by glaucoma, tumors or traumatic injury. In addition, the lab has shown that oncomodulin works on at least one other type of nerve cell, and now plans to test whether it also works on the types of brain cells that would be relevant to treating conditions like stroke and spinal cord injury. The current study builds on work Benowitz, Yin and colleagues published a few years ago. Studying the optic nerve, they found – quite by accident – that an injury to the eye activated axon growth: it caused an inflammatory reaction that stimulated immune cells known as macrophages to move into the eye. “To make this finding clinically useful, we wanted to understand what was triggering the growth, so we could achieve nerve regeneration without causing an injury,” Benowitz says.

Working in Benowitz’s lab, Yin took a closer look and found that the macrophages secreted an essential but as-yet unidentified protein. Further studies revealed it to be oncomodulin, a little-known molecule first observed in association with cancer cells. “Out of the blue, we found a molecule that causes more nerve regeneration than anything else ever studied,” Benowitz says. “We expect this to spur further research into what else oncomodulin is doing in the nervous system and elsewhere.” For oncomodulin to work, it must be given along with an agent that raises cell levels of cyclic AMP, a “messenger” that initiates various cellular reactions. Increased cyclic AMP levels are needed to make the oncomodulin receptor available on the cell surface.

Benowitz also notes that there is another side to the nerve-regeneration problem: overcoming agents that act as natural inhibitors of axon growth. These inhibitors are the subject of intense study by several labs, including that of Zhigang He, PhD, at Children’s Hospital Boston. In a study published in 2004, Benowitz and postdoctoral fellow Dietmar Fischer, PhD, collaborated with He to combine both approaches – overcoming inhibition and activating the growth state (by injuring the lens of the eye) – and achieved dramatic optic-nerve regeneration. Now that Benowitz has isolated oncomodulin, he believes even greater regeneration is possible by combining it with agents that counteract growth inhibitors.

Science Daily
June 6, 2006

Original web page at Science Daily

Scientists restore sight to chickens with blinding disease

University of Florida scientists have delivered a gene through an eggshell to give sight to a type of chicken normally born blind. The finding, reported May 23 in the online journal Public Library of Science-Medicine, proves in principle that a similar treatment can be developed for an incurable form of childhood blindness. “We were able to restore function to the photoreceptor cells in the retinas of an avian model of a disease that is one of the more common causes of inherited blindness in human infants,” said Sue Semple-Rowland, Ph.D., an associate professor of neuroscience with UF’s Evelyn F. and William L. McKnight Brain Institute. “The vision capabilities of the treated animals far exceeded our expectations.”

The bird — a type of Rhode Island Red chicken — carries a genetic defect that prevents it from producing an enzyme essential for sight. The condition closely models a genetic disease in humans that causes Leber congenital amaurosis type 1, or LCA1. About 2,000 people in the United States are blind because they have a disease that falls in the LCA family. “Enabling chickens that can’t see to peck and eat after treatment is stunning,” said Dr. Jean Bennett, a professor of ophthalmology and cell and developmental biology at the University of Pennsylvania who was not involved in the study but who participated in a landmark gene transfer experiment five years ago that restored vision to blind Briard dogs. “This is proof of concept using a unique vector, animal model and approach. One would hope this could happen in a human.”

Semple-Rowland, a College of Medicine faculty member, has worked since 1986 to first discover the malfunctioning gene, known as GC1, and then to develop a viral therapy to treat it. “I will always remember the first animal that we successfully treated,” said Semple-Rowland, who is also a member of the UF Center for Vision Research and the UF Genetics Institute. “I thought I saw signs that the chick was responding visually to the environment, but I didn’t want to believe it. Scientists always doubt what they see — it’s intrinsic to how we operate. So I did this simple little test, drawing little dots on a piece of paper. The chick, which was standing on the table, came over to the paper and started pecking at all of them. It was so exciting.”

Later, more precise tests showed that of the seven treated chickens, five displayed near-normal visual behavior. Measurement of electrical activity in the retinas of the same five animals showed they responded to light. In comparison, tests on three untreated chickens showed no meaningful responses. “This is an interesting gene-transfer technique that appears to restore function to light-sensitive cells in the retina,” said Dr. Paul A. Sieving, director of the National Eye Institute of the National Institutes of Health, which partially funded the study. “An approach such as this could lead eventually to a vision-restoring therapy for children who suffer from blinding retinal diseases.”

Like people, chickens possess color vision and function best in daylight. The predominant photoreceptor cell type in the chicken retina, the cone cell, is the same cell type that is essential for normal human vision. To develop the treatment, UF scientists constructed a virus able to infect photoreceptors, delivering a normal copy of the GC1 gene to these cells. Using a very fine glass needle, they injected the viral vector into the developing nervous system of a chicken embryo through a tiny hole in the eggshell. The shell was resealed and the egg was incubated to hatching to produce a live chick.

“The process sounds straightforward but it really isn’t,” Semple-Rowland said. “It took quite a long time to build the vector, develop the injection procedure and figure out how to hatch the eggs. By doing the injection early during development, we actually treat the cells before they become photoreceptors.” Infants with LCA1 would receive an injection of the gene transfer agents directly into the eyeball during the first couple of years of life, bypassing embryonic treatment. That’s important, researchers say, because a diagnosis of LCA1 is often not made until months after a child is born.

“There are only a few clues that an infant may have this disease,” Semple-Rowland said. “Often parents will notice that their child doesn’t seem to be smiling at or looking at faces. Children may also poke or rub their eyes, behaviors clinically known as oculo-digital signs that may produce sensations of sight.” Work remains to refine the viral delivery system that transfers the healthy genes to the photoreceptor cells. In addition, solutions have to be found to make the treatment long-lasting — scientists have restored sight and slowed degeneration, but the retinal cells still degenerate.

But Semple-Rowland thinks the time necessary to turn these research results into a treatment for patients will be a fraction of the 20 years that have gone into discovering the genetic defect and developing a therapy for it. “We can do amazing things in animal models,” Semple-Rowland said, “but this work can’t be done quickly. That’s the hardest thing — knowing there are people who need these treatments now. But we work as fast as we can. You’ll see the first treatments for some of these genetic eye diseases soon, especially after the groundwork for an approved therapy is laid and the therapy works.”

Science Daily
June 6, 2006

Original web page at Science Daily

Solar-powered implant could restore vision

A retinal implant that squirts chemicals into the back of your eye may not sound like much fun. But a solar-powered chip that stimulates retinal cells by spraying them with neurotransmitters could restore sight to blind people. Unlike other implants under development that apply an electric charge directly to retinal cells, the device does not cause the cells to heat up. It also uses very little power, so it does not need external batteries. The retina, which lines the back and sides of the eyeball, contains photoreceptor cells that release signalling chemicals called neurotransmitters in response to light. The neurotransmitters pass into nerve cells on top of the photoreceptors, from where the signals are relayed to the brain via a series of electrical and chemical reactions. In people with retinal diseases such as age-related macular degeneration and retinitis pigmentosa, the photoreceptors become damaged, ultimately causing blindness.

Last year engineer Laxman Saggere of the University of Illinois at Chicago unveiled plans for an implant that would replace these damaged photoreceptors with a set of neurotransmitter pumps that respond to light. Now he has built a crucial component: a solar-powered actuator that flexes in response to the very low-intensity light that strikes the retina. Multiple actuators on a single chip pick up the details of the image focused on the retina, allowing some “pixels” to be passed on to the brain. The prototype actuator consists of a flexible silicon disc just 1.5 millimetres in diameter and 15 micrometres thick. When light hits a silicon solar cell next to the disc it produces a voltage. The solar cell is connected to a layer of piezoelectric material called lead zirconate titanate (PZT), which changes shape in response to the voltage, pushing down on the silicon disc. In future, a reservoir will sit underneath the disc, and this action will squeeze the neurotransmitters out onto retinal cells.

New Scientist
May 9, 2006

Original web page at New Scientist

Optic nerve regrown with a nanofibre scaffold

Hamsters blinded following damage to their optic nerve have had their vision partially restored with the help of an implanted nanoscale scaffold that has encouraged nerve tissue to regrow. The technique, likened by its inventors to the way a garden trellis encourages the growth of ivy, holds out the hope that people with diseased or injured optic nerves might one day recover their sight. The optic nerve, which connects the eye to the brain, can be severed by traumatic injuries such as those suffered by people in car crashes. It can also be damaged by glaucoma, when excessive pressure in the eyeball causes tissue at the back of the eye to collapse, pulling nerve fibres apart and so causing progressive loss of vision.

Repairing the optic nerve requires the long, spidery branches of nerve cells, called axons, to grow again and reconnect. Achieving this is a “formidable barrier”, says Rutledge Ellis-Behnke, a biomedical engineer at the Massachusetts Institute of Technology (MIT), US. Axons can be encouraged to extend by exposing them to growth factors, but they rarely extend far enough to bridge the large gaps typical of most optic nerve injuries, he says. To overcome this problem, Ellis-Behnke and colleagues from Hong Kong University and the Institute for Neuroscience in Xi’an, both in China, created a nerve-bridging scaffold, made up of nanoparticle fibres. They attempted to make these fibres the same size as the sugars and proteins on the surface of the torn axon, in the hope that this would encourage cell growth and migration.

To make their scaffold, the team turned to a discovery from the early 1990s by Shuguang Zhang of MIT’s Center for Biomedical Engineering. He found that certain sequences of peptides can be made to self-assemble into mesh-like sheets of nanofibres by immersing them in salt solutions at similar concentrations to those found in the body. To test whether this would help nerves to regenerate, the team took hamsters whose optic nerves had been deliberately severed and injected a peptide mixture into the animals’ brains close to the injury site. They found that after six weeks, the animals had recovered some of their vision. “They could see well enough to find their food, to function well,” says Gerald Schneider, a member of the team at MIT.

Schneider estimates that 30,000 axons had reconnected, compared with only around 30 in previous experiments using other approaches, such as nerve growth factors. The team speculates that the similarity between the size of the fibres and the features on neural material is what encourages the axons to bridge the gap. The scaffold appears to eventually break down harmlessly. Tissue engineer Kevin Shakesheff at the University of Nottingham, UK, says the work is “very exciting”, but urges great caution. The surgical cut made in the hamster’s nerve is not representative of “more messy” injury or disease in people, he warns, and other central nervous system work has shown that species differences mean nerve regeneration in a rodent might not translate into humans.

Shakesheff also notes that how the scaffold regenerates tissue is currently a mystery and that delivering stem cells to further boost the regenerative response might ultimately be an option. The biggest prize would be if the technique could repair spinal cord injuries, or brain tissue damaged by stroke or other neurological conditions. With that in view, the MIT team now plans to extend the work in the hope of developing therapies for some forms of paralysis.

Source: Proceedings of the National Academy of Sciences (DOI: 10.1073/pnas.0600559103)

New Scientist
March 28, 2006

Original web page at New Scientist

Light-sensing cells in retina develop before vision

Investigators at Washington University School of Medicine in St. Louis have found that cells making up a non-visual system in the eye are in place and functioning long before the rods and cones that process light into vision. The discovery should help scientists learn more about the eye’s non-visual functions such as the synchronization of the body’s internal, circadian clock, the pupil’s responses to light and light-regulated release of hormones. The researchers report in the Dec. 22 issue of Neuron that in the mouse retina, intrinsically photosensitive retinal ganglion cells (ipRGCs) are active and functioning at birth. That was surprising because the mouse retina doesn’t develop fully until a mouse is almost three weeks old, and the first rod cells don’t appear until about 10 days after birth.

“We were stunned to find these photoreceptors were firing action potentials on the day of birth,” says Russell N. Van Gelder, M.D., Ph.D., associate professor of ophthalmology and visual sciences and of molecular biology and pharmacology. “Mice are very immature when they’re born. It takes about three weeks after birth for the retina to fully develop. No one previously had detected light-dependent cell firing in a mouse before 10 days.” Van Gelder says the ganglion cells react to light in two ways, sending messages to parts of the brain that control circadian rhythms, and (on the first day or two of life) also setting off a wave of activity that spreads through the retina, possibly helping visual cells develop. Van Gelder and colleagues have spent the last few years learning how blind animals (and people) can sense light and use it to set their circadian clocks. The ipRGCs were first identified in 2002 — by David M. Berson, Ph.D., and colleagues at Brown University — as the cells that could sense light even in visually blind eyes. But it was very difficult and time consuming to isolate and study the cells, requiring precise injection of a tracing dye into the brains of animals to label and identify the ipRGCs.

That has changed as the result of a technical advance developed by Daniel C. Tu and Donald Zhang, both Medical Scientist Training Program students in Van Gelder’s lab, and co-first authors of this study. Tu and Zhang used a multi-electrode array technique in which tiny, individual electrodes are placed about 200 microns apart. Each electrode is a mere 30 microns in size (there are 25,400 microns per inch), and 60 electrodes are contained on a grid. “This spacing turns out to be perfect for a retina,” Van Gelder says. “You can remove the retina and place it, ganglion cell-side down, on this array. Then the electrodes pick up the impulses of the ganglion cells when those cells react to light.” Whereas the original brain injection technique allowed researchers to study only one or two ipRGCs per day, the multi-electrode array allows Van Gelder’s team to study 30 times that many. Those studies have revealed a cell population that reacts quickly and consistently to light.

“If you give the cells a series of identical pulses of light and look at how fast they fire, the reaction is identical every time,” Van Gelder says. “The ganglion cells detect brightness, and they’re extremely good at it. You could make a good light meter for a camera out of these cells because they are consistent in their response to brightness over the equivalent of almost 10 f-stops on a camera. That’s completely different from the rods and cones in the retina. Those visual cells can’t detect brightness very well. They detect contrast, sensitivity and motion.” Studying these populations of ipRGCs, Van Gelder also found the cells require a protein called melanopsin to sense and react to pulses of light. When the group examined retinas of mice that were genetically engineered to lack melanopsin, they found that the ganglion cells lost all sensitivity to light.

The ability to study many of these cells at once allowed Van Gelder’s team to learn that there are three distinct populations of ipRGCs, and each cell type reacts to light differently. Some fire quickly when a light turns on but take longer to stop firing when it goes out. Other cells take a while to ramp up their response but then quickly stop firing when the area gets dark. A third cell type is slow to turn on when exposed to light and takes its time shutting down in darkness. In addition, the cells tend to react to light in groups. Electrically, some of the cells work almost like a chorus, sending several synchronized “harmonies” to the brain as part of one big “song” that responds to light impulses. “We were able to detect about 20 percent of the ganglion cells were coupled to other ganglion cells,” he says. “That’s probably a low estimate because if we had a finer grid and could record the activities of more individual cells, we might well find more interactions.”

Van Gelder believes the early activity and the interactions of the ipRGCs may somehow enhance survival by helping animals detect light and set their circadian clocks prior to the development of vision. And he says because retinas tend to be very similar in most mammals, human ganglion cells also may develop and begin to function earlier than rods and cones. Although ipRGCs sense light in mice and humans, they don’t connect to the brain’s visual cortex. Instead, they send signals to deeper, more ancient parts of the brain, such as the hypothalamus, from which they project to the brain regions that control the circadian clock as well as the response of the pupil to light. “The multi-electrode array technique that Dan Tu and Don Zhang have brought into this field should help us learn a lot more about how these retinal ganglion cells influence all kinds of non-visual functions and reinforce the fact that the eye is responsible for more than just vision,” Van Gelder says.

Washington University
January 3, 2006

Original web page at Washington University

Laser activates gene therapy in rats’ eyes

Laser light has been used to remotely control gene therapy in rats. The technique could make gene therapy more effective by giving precise control over when and where new genes are activated, meaning specific tissues can be targeted while healthy tissues are left alone. Lasers have been used in the past to perforate cells for gene therapy in cultured cells. But the new research – activating marker genes in the eyes of rats – is more sophisticated and the first time lasers have been used for gene therapy in live animals. Kazunori Kataoka, at the University of Tokyo, Japan, and colleagues developed a photosensitive molecular complex that could be activated in rats’ eyes by irradiating them with visible light from a low-power laser.

The synthetic complex is designed to deliver foreign DNA by carrying it past the cell membrane – a process known as transfection. The complex consists of three components: a photosensitive anionic dendrimer, which provides the triggering mechanism; a cationic peptide, which drives the third component, the DNA payload, towards the nucleus of a cell after it has been released. The complex enters the cell by a process known as endocytosis, in which the cell’s plasma membrane envelops the complex at its surface and draws it into the cell. The membrane around the complex then detaches from the cell’s membrane to form a bubble containing the complex within the cell. Applying a pulse of laser light causes the dendrimer to break free from its bubble and simultaneously release the DNA-carrying peptide. Because only the membrane enveloping the complex is damaged, the cell membrane is left intact, reducing cell death.

Jennifer West at Rice University in Houston, Texas, US, has used light to kill cancer cells by inducing localised heating in nanoparticles. She says that the scope of such “biophotonics” does not have to be limited to exposed areas such as the eye. “Some wavelengths of light can penetrate very deeply,” she says. Near-infrared, for example, could potentially be used to treat many different tissue types and organs. And fibre optics can be run through needles, catheters and laparoscopes to deliver light of any wavelength to essentially anywhere in the body. But there are good reasons for concentrating on ocular therapies, says Christopher Norbury at Pennsylvania State University, US. There is a lot of interest in using gene therapy to treat congenital blindness because gene therapy has a greater chance of success in the eye compared with other parts of the body.

There are two reasons why gene therapy often does not work, says Norbury. “Either because the cells are dividing so the gene isn’t passed on or because you get an immune response against the gene.” But in the eye, cells tend not to divide and immune responses are less severe. Norbury also notes that previous methods of breaking into cells often kill a lot of them. But, according to the Japanese researchers, evaluations of the complex carried out on cultured cells showed that the areas targeted by laser light not only exhibited a 100-fold enhancement in gene expression but also suffered minimal cell death.

New Scientist
December 20, 2005

Original web page at New Scientist

How do we look? Cracking the visual system’s code with artificial and natural stimuli

In the early visual system, signals travel from the retina through the visual thalamus near the middle of the brain to area V1 at the back of the brain. V1 sends signals to the rest of visual cortex. Since long before the word neuroscience was coined, the community has devoted substantial resources to studying the visual system, and for good reason. The visual system occupies a huge portion of the brain, about 40% of the cerebral cortex in monkeys. But with roughly 30 processing areas in the cortex, fed either directly or indirectly from the primary visual area, V1, deciding the correct entry point is a challenge.

With a few well characterized exceptions, not much is known about the responses of these processing centers. Indeed, besides conjecturing that it must somehow be advantageous, we don’t really know why visual processing is distributed into so many areas. The place to start therefore is the early visual system, the retina, visual thalamus, and area V1. But how well we can predict how these visual stages respond to an arbitrary stimulus is not clear. This question is the topic of a Society for Neuroscience symposium, which Nicole Rust and I organized. The participants are all involved in a similar effort: They design simple quantitative models of visual responses and test these models on neurons in the early visual system.
Their research concentrates on different portions of the visual system, from the retina (Jonathan Demb), through the visual thalamus (Valerio Mante), to area V1 (David Tolhurst, Yang Dan, and Bruno Olshausen), and to visual areas V2 and V4 (Jack Gallant).

These scientists share a commitment to prediction. We can say that we know what the early visual system does only if we can predict its response to arbitrary stimuli. These stimuli should include both the simple images used commonly in the laboratory (spots, bars, gratings, and plaids) and the more complex images encountered in nature. Very few existing models have been held to this rigorous test.
Efforts to predict responses of neurons in the early visual system to such images have been made for the visual thalamus and for area V1. These mostly have involved the simplest possible model of visual response, one based on pure linear filtering of the images. And as Demb has pointed out, no published efforts to date predict retinal response to complex natural images.

The models used to explain responses should be as simple as possible. They should capture the system’s computation while remaining intuitive and, crucially, have a limited number of parameters, which can be derived directly from the data. But this simplicity has a cost: There is not likely to be a one-to-one mapping between model components and the underlying biophysics and anatomy. While we may be able to predict what the system does, answers will still be lacking as to how it does it. All the models being presented at our symposium share a common arrangement, an elaboration of the classical concept of receptive field.
The receptive field is the window through which a visual neuron observes the scene. Mathematically, it specifies the weights that a neuron applies to each image location. A neuron that responds purely as dictated by its receptive field would operate exactly as one of the linear filters that are well known to engineers: It would compute a weighted sum. This simple and intuitive description has dominated visual neuroscience since the 1960s, allowing the field to form solid ties with germane disciplines such as image processing and visual psychophysics.
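As a minimal sketch of that weighted-sum idea (the 5×5 weights and stimuli below are illustrative, not drawn from any recorded neuron), a purely linear receptive field is just a filter applied to the image:

```python
import numpy as np

# Hypothetical 5x5 receptive field: excitatory centre, inhibitory surround.
rf = -0.1 * np.ones((5, 5))
rf[1:4, 1:4] = 0.3

def linear_response(image, rf):
    """Response of a purely linear neuron: weight each image location
    by the receptive field and sum -- i.e. a linear filter output."""
    return float(np.sum(image * rf))

# A patch lighting up only the excitatory centre drives the neuron strongly...
centre = np.zeros((5, 5))
centre[1:4, 1:4] = 1.0
print(linear_response(centre, rf))           # roughly 2.7 (9 pixels x 0.3)

# ...while uniform illumination partly cancels (centre minus surround).
print(linear_response(np.ones((5, 5)), rf))  # roughly 1.1
```

The same computation over every position of a larger image is exactly the two-dimensional convolution familiar from image processing, which is why the linear receptive-field view ties the field so naturally to engineering.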

In the last couple of decades, however, a number of nonlinear phenomena were discovered that cannot be explained by the receptive field alone. These nonlinearities have been observed at all stages of the early visual system. New models were developed that built upon the receptive field, endowing it with mechanisms that adjust responses based on the prevailing stimulus conditions, making the neuron more responsive if a stimulus is weak, preceded by a weak stimulus, or surrounded by a weak stimulus.

Contemporary models, including those presented by the participants in the symposium, all include a linear receptive field but accompany it with a number of nonlinear mechanisms. At the output of the receptive field it is common to place a “pointwise” nonlinearity, in which output depends on the response of the neuron and not on the responses of its neighbors. At the input of the receptive field it is common to place more complex, “visual processing” nonlinearities, in which output depends on the intensity of the image at a number of spatial locations. This nonlinearity can be simple, such as an adjustment of responsiveness, or it can be more complex, such as the extraction of edges or of the amplitude of a Fourier Transform.
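A minimal linear-nonlinear sketch of that arrangement follows; the half-wave rectification and the gain value are illustrative assumptions, not the specific mechanisms of any model presented at the symposium:

```python
import numpy as np

# Toy 3x3 centre-surround receptive field (illustrative weights).
rf = np.array([[-0.1,  0.2, -0.1],
               [ 0.2,  0.6,  0.2],
               [-0.1,  0.2, -0.1]])

def ln_response(image, rf, gain=10.0):
    """Linear stage: weighted sum of the image through the receptive field.
    Pointwise stage: a nonlinearity applied to that single filter output
    alone (here, half-wave rectification scaled by a gain), yielding a
    non-negative firing rate."""
    linear = float(np.sum(image * rf))
    return gain * max(linear, 0.0)

bright_centre = np.zeros((3, 3))
bright_centre[1, 1] = 1.0
print(ln_response(bright_centre, rf))    # preferred stimulus drives firing
print(ln_response(-bright_centre, rf))   # anti-preferred stimulus: rectified to 0.0
```

The "pointwise" property is visible in the code: the nonlinearity sees only this neuron's own filter output, whereas the input-side nonlinearities described above would instead transform the image intensities before the weighted sum is taken.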

How well these models do depends on the visual stage, and on one’s perspective. Demb and Mante indicate that for the retina and the visual thalamus we know what the ingredients should be to account for a large part of the stimulus-driven responses. The opinions diverge when it comes to area V1. Tolhurst and Dan argue that even the simplest versions of the receptive-field model for V1 capture the gist of neuronal responses in this area. Results from the Gallant laboratory provide an estimate of this performance, and indicate that the receptive field alone explains about 35% of the responses, a good start but certainly one with room for improvement. Adding the known nonlinearities to the receptive field may provide a much higher performance.
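The article does not state exactly how the Gallant laboratory computed its 35% figure; fraction of response variance explained is one standard metric for such statements, sketched here with made-up firing rates:

```python
import numpy as np

def fraction_explained(observed, predicted):
    """Fraction of response variance captured by a model's prediction:
    1 minus the ratio of residual variance to total variance. This is an
    illustrative definition, not necessarily the metric used in the study."""
    observed = np.asarray(observed, dtype=float)
    residual = observed - np.asarray(predicted, dtype=float)
    return 1.0 - np.var(residual) / np.var(observed)

# Hypothetical firing rates (spikes/s) for five stimuli.
observed  = np.array([2.0, 8.0, 4.0, 10.0, 6.0])
predicted = np.array([3.0, 7.0, 4.0,  9.0, 7.0])
print(fraction_explained(observed, observed))   # a perfect model explains all variance
print(fraction_explained(observed, predicted))  # an imperfect model explains part of it
```

On this definition, "explains about 35% of the responses" means the receptive-field prediction removes about a third of the trial-averaged response variance, leaving the rest to be captured by added nonlinearities.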

[Figure caption: The natural images at bottom were taken by a camera on the head of a cat roaming the forests of Switzerland (C. Kayser et al., J Neurophysiol, 90:1910–20, 2003). The responses were recorded in our laboratory from a neuron in the visual thalamus of an anesthetized cat that was observing the visual scene. Each dot corresponds to a spike, and each row to a new repetition of the 2.5-s stimulus. The goal of this research is to predict the average firing rate of the neuron, and thereby determine what the visual thalamus does.]
Olshausen, however, argues that we know very little about what V1 does. He points to a number of limitations in the current approach to the study of area V1: One is strongly biased neuron sampling, which ignores quieter neurons; another is the ecological deviance between simple laboratory stimuli and more complex images encountered in nature. Natural images contain much information about spatial structure, such as shading and cast shadows, which might be crucial in determining the responses of visual neurons even in an early area such as V1.

Indeed, the best use of complex stimuli such as natural images remains a point of contention among symposium participants. Mante and Tolhurst use them only to test a model that has been constrained with simpler stimuli. An alternative approach, proposed for example by Gallant and by Olshausen, is to use them also to discover the appropriate type of model and to constrain it. The first approach posits that appropriate models of neural function are so nonlinear that it would be hopeless to try to fit them to responses to complicated stimuli. Thus it would be better to constrain the model with simpler stimuli such as spots and gratings. On the other hand, neurons in visual cortex, and particularly in areas beyond V1, are likely to have scene-analysis specialization that goes well beyond the extraction of edges and similar low-level image processing. It could become pointless to try to characterize these neurons using simple stimuli. Simple stimuli might perhaps become useful after a wide exploration is made with complex, natural stimuli and the general outlines of the mechanisms underlying the responses have been elucidated.

Clearly, much work lies ahead before we can say that we understand what the early visual system does. The goal of the symposium is to discuss and, if possible, overcome these differences of opinion. The way forward lies in establishing a shared method of analysis across the different visual stages. These efforts will bring the field of visual neuroscience closer to established quantitative fields such as physics, in which there is wide agreement as to what constitutes a “standard theory” and which results should be a source of surprise. Thanks to the participants in the symposium, the coming years will certainly bring great improvements in our understanding of the early visual system.

Matteo Carandini is a scientist at the Smith-Kettlewell Eye Research Institute in San Francisco. He works to decipher what the primary visual cortex and the visual thalamus contribute to early visual processing.

The Scientist
December 6, 2005

Original web page at The Scientist

Retinal scans eyed for New Mexico show cattle

It sounds like science fiction, but New Mexico State University researchers are testing advanced eye-scanning technology on cattle as part of a national tracking system for animal health. “Retinal scans are part of a growing technological trend in cattle identification,” said Manny Encinias, livestock specialist at NMSU’s Clayton Livestock Research Center. “It painlessly flashes a beam of light into the eyeball and records the pattern of veins in the eye.” Each retina, whether bovine or human, is unique and a scan is considered one of the most accurate forms of identification, he said.

The NMSU evaluations are part of an accelerating effort by the U.S. Department of Agriculture to implement a National Animal Identification System. The goal is to track and identify, within 48 hours of an initial diagnosis, all animals and premises that have had contact with an animal disease of concern. In a first-of-its-kind project for New Mexico, scientists tested 35 market steers from 18 Quay County farm families, using a combination of eye scanning and radio frequency identification (RFID) ear tags for animal ID evaluation. Most of the cattle were high-value 4-H and FFA show cattle that spent much of the past season moving between regional livestock fairs.

Encinias used a $3,000 retinal scanner not much bigger than a small video camera to record the IDs at three locations over a six-month period. To make the digital record, the cow is held in what’s known as a squeeze chute, and the scanner’s eye-cup, specially molded for a cow’s face, is held to the animal’s eye. The scanner senses when the eye is open, automatically makes an image, and downloads the data to a computer database. In addition to the retinal image, the device records the date, the time and a Global Positioning System (GPS) coordinate of the location. “It’s as simple as taking a picture,” Encinias said. “Plus, we can do everything at chute side.”

Historically, 4-H and FFA exhibitors have been required to submit hair samples for DNA analysis to serve as a permanent means of identification. “DNA analysis is costly and requires a complex laboratory procedure that takes time,” he said. “Also, hair samples have been lost in transit.” Among the show cattle, researchers found that retinal scanning proved a near-perfect animal ID technology because of its speed and accuracy, Encinias said. However, the eye scans might not be practical on a working ranch because of the cost and technical skill required. Separately, NMSU experts examined RFID tags alone on several hundred cattle on a communal grazing allotment in the Valles Caldera, an 89,000-acre area in the Jemez Mountains. Communal grazing allotments are typically seasonal, running from May to October. “We wanted to show the small-scale producers of northern and central New Mexico that this animal ID technology will work for them, too,” Encinias said.

The $2 tags contain a unique 15-digit electronic code that identifies each animal for life, much like a Social Security number. Located on the cow’s left ear, the tags are about the size of a bottle cap. Of more than 900 RFID tags applied since May, only two have been lost. Using a panel tag reader mounted on a cattle chute, researchers found that the tags could be read readily as cattle passed through a single-file alley; the effective range for this particular electronic reader was about two feet. “A prime goal was not to slow down the normal production process, and I think we did that,” he said. “We maintained a single-file, continuous flow with a reliability rate of more than 93 percent. It’s not 100 percent, and that’s something we’re working on.”
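The tracking database the article describes reduces to a simple pattern: validate the 15-digit tag code, then append a time-and-place record each time a reader sees the tag. A rough sketch in Python (the tag number, function names and record layout are hypothetical; only the 15-digit format and the chute-side reading workflow come from the article):

```python
def is_valid_tag(tag_id: str) -> bool:
    """Check the basic shape of a tag code: exactly 15 decimal digits.

    This checks only what the article states (a unique 15-digit code);
    real national ID schemes impose additional structure on the digits.
    """
    return len(tag_id) == 15 and tag_id.isdigit()

# tag code -> list of (location, timestamp) read events
herd_log = {}

def record_read(tag_id: str, location: str, timestamp: str) -> None:
    """Log one panel-reader event as an animal passes the chute."""
    if not is_valid_tag(tag_id):
        raise ValueError(f"malformed tag ID: {tag_id!r}")
    herd_log.setdefault(tag_id, []).append((location, timestamp))

# Hypothetical read event at a chute-side panel reader
record_read("840003123456789", "Valles Caldera chute", "2005-05-15T09:30")
```

A log keyed by tag code like this is what makes the 48-hour traceback goal plausible: given one diseased animal’s ID, every recorded location it shared with other animals can be looked up directly.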

The animal ID projects, which were conducted in cooperation with the state veterinarian’s office and the New Mexico Livestock Board, were funded through a USDA grant, and were designed to evaluate the technology specifically for New Mexico’s varied production environments. “We need to have producers on board and comfortable with ID technology because it’s inevitable that it’s coming to New Mexico,” Encinias said. When the idea of mandatory cattle identification was introduced several years ago, cattle producers were concerned about expense, inconvenience and loss of privacy.

But experts stress the importance of controlling cattle diseases before they get out of hand, said Clay Mathis, a livestock specialist with NMSU’s Cooperative Extension Service. Even though the new system will cost producers more money, it will eventually mean the full tracking of animal movement throughout the nation, he said. The announced USDA target for a mandatory animal identification program is January 2009, said Ron Parker, an animal ID specialist with NMSU Extension and the New Mexico Livestock Board. At that time, all animals entering marketing channels must be identified. Now, a final report on both NMSU animal ID technology studies is being compiled for USDA, Encinias said. “In the future, we need to look at just how long the RFID tags last under New Mexico’s varied and sometimes brutal climate conditions,” he said. “And, we need to see how these ID technologies work at the sale barn and other commingling areas.”

Science Daily
December 6, 2005

Original web page at Science Daily