Wednesday, September 30, 2015

Plant pest reprograms the roots

from
BIOENGINEER.ORG http://bioengineer.org/plant-pest-reprogram-roots/

Microscopic roundworms (nematodes) live like maggots in bacon: They penetrate into the roots of beets, potatoes or soybeans and feed on plant cells, which are full of energy. But how they do it precisely was previously unknown. Scientists at the University of Bonn together with an international team discovered that nematodes produce a plant hormone to stimulate the growth of specific feeding cells in the roots. These cells provide the parasite with all that it needs. The results are now published in the journal Proceedings of the National Academy of Sciences.

The beet cyst nematode (Heterodera schachtii) sucks at a plant root. The pest reprogrammes the root with a plant hormone. Photo Credit: Copyright Zoran Radakovic

The beet cyst nematode (Heterodera schachtii) is a pipsqueak of less than a millimetre in length, but it causes huge yield losses in sugar beet. Not only are infected beets smaller than normal, but they also develop more lateral roots and yield drastically less sugar. This makes the pest notorious as a cause of the dreaded “beet fatigue,” especially in traditional sugar beet growing regions such as the area around Bonn. To date, however, it was not clear how the nematodes stimulate the development of a nurse cell system inside the root, which they absolutely need as a food source.

This nurse cell system arises as root cells divide repeatedly, merge with one another, and eventually swell. “For a long time it was speculated that plant hormones play a role in the formation of a nurse cell system in roots,” says Prof. Dr. Florian Grundler of Molecular Phytomedicine at the University of Bonn. Since the nematodes lose their ability to move after penetrating the roots, they are particularly dependent on the development of this tumor-like nurse cell system.

Pest uses degradation products of its metabolism

Together with scientists from Columbia (USA), Olomouc (Czech Republic), Warsaw (Poland), Osaka (Japan) and the Freie Universitaet Berlin, the researchers at the University of Bonn have used Arabidopsis thaliana as a model plant to discover that the beet cyst nematode itself produces the plant hormone cytokinin. “The nematode has been able to employ a breakdown product of its own metabolism as a plant hormone to control the development of plant cells,” said lead author and research group leader Dr Shahid Siddique. The pest thus reprograms the plant’s roots to form a special nutritive tissue, which it uses to fuel its own growth.

The research team initially did not know whether the pest exploits the hormone that plants themselves produce or whether it produces and releases the hormone on its own. When the scientists blocked cytokinin production in the plant, the nematode nevertheless continued to grow, showing that it is not dependent on the plant-produced hormone. Only when the researchers blocked a receptor to which the worm-produced hormone docks did the pest starve, demonstrating that the hormone is essential for the formation of the nurse cell system. “In this case, Heterodera schachtii cannot use its ability to produce cytokinin anymore, because a vital pathway was interrupted in the root cells,” explained Dr Siddique.

New options for plant breeding

Although this discovery is a result of basic research, it opens up new avenues in plant breeding. “On the one hand the result is an important contribution to the fundamental understanding of parasitism in plants, and on the other hand it can help to reduce the problem of cyst nematodes in important agricultural crops,” said Prof Grundler. Now that the research has identified an important mechanism, the team is looking for an appropriate strategy to apply these results specifically in resistance breeding.

Story Source:

The above post is reprinted from materials provided by Universität Bonn.

Tuesday, September 29, 2015

New Prosthesis to Help People With Memory Loss

from
BIOENGINEER.ORG http://bioengineer.org/new-prosthesis-help-people-memory-loss/

Researchers at USC and Wake Forest Baptist Medical Center have developed a brain prosthesis that is designed to help individuals suffering from memory loss.

The prosthesis, which includes a small array of electrodes implanted into the brain, has performed well in laboratory testing in animals and is currently being evaluated in human patients.

Designed originally at USC and tested at Wake Forest Baptist, the device builds on decades of research by Ted Berger and relies on a new algorithm created by Dong Song, both of the USC Viterbi School of Engineering. The development also builds on more than a decade of collaboration with Sam Deadwyler and Robert Hampson of the Department of Physiology & Pharmacology of Wake Forest Baptist who have collected the neural data used to construct the models and algorithms.

When your brain receives sensory input, it creates a memory in the form of a complex electrical signal that travels through multiple regions of the hippocampus, the memory center of the brain. At each region, the signal is re-encoded until it reaches the final region as a wholly different signal that is sent off for long-term storage.

If there’s damage at any region that prevents this translation, then there is the possibility that long-term memory will not be formed. That’s why an individual with hippocampal damage (for example, due to Alzheimer’s disease) can recall events from a long time ago — things that were already translated into long-term memories before the brain damage occurred — but has difficulty forming new long-term memories.

Song and Berger found a way to accurately mimic how a memory is translated from short-term memory into long-term memory, using data obtained by Deadwyler and Hampson, first from animals, and then from humans. Their prosthesis is designed to bypass a damaged hippocampal section and provide the next region with the correctly translated memory.

That’s despite the fact that there is currently no way of “reading” a memory just by looking at its electrical signal.

“It’s like being able to translate from Spanish to French without being able to understand either language,” Berger said.

Their research was presented at the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society in Milan on August 27, 2015.

The effectiveness of the model was tested by the USC and Wake Forest Baptist teams. With the permission of patients who had electrodes implanted in their hippocampi to treat chronic seizures, Hampson and Deadwyler read the electrical signals created during memory formation at two regions of the hippocampus, then sent that information to Song and Berger to construct the model. The team then fed those signals into the model and read how the signals generated from the first region of the hippocampus were translated into signals generated by the second region of the hippocampus.

In hundreds of trials conducted with nine patients, the algorithm predicted how the signals would be translated with about 90 percent accuracy.
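
To make the idea of a learned signal "translation" concrete, here is a toy sketch: fit a linear map from multichannel signals recorded in one region to those in the next, then score predictions on held-out trials. Everything here is synthetic and simplified; the actual prosthesis is based on a nonlinear multi-input multi-output (MIMO) model, and its roughly 90 percent figure is not the R-squared score printed below.

```python
# Toy sketch (not the USC model): learn a linear map from signals recorded
# in one hippocampal region to signals in the next, then score predictions.
# All data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_in, n_out = 500, 16, 8          # trials, input/output channels
X = rng.normal(size=(n_trials, n_in))       # stand-in for CA3-side features
W_true = rng.normal(size=(n_in, n_out))     # unknown "translation" to learn
Y = X @ W_true + 0.3 * rng.normal(size=(n_trials, n_out))  # CA1-side signals

# Fit the translation by least squares on a training split.
train, test = slice(0, 400), slice(400, None)
W_hat, *_ = np.linalg.lstsq(X[train], Y[train], rcond=None)

# Evaluate on held-out trials (scored here as pooled R^2).
Y_pred = X[test] @ W_hat
ss_res = ((Y[test] - Y_pred) ** 2).sum()
ss_tot = ((Y[test] - Y[test].mean(axis=0)) ** 2).sum()
print(f"held-out R^2: {1 - ss_res / ss_tot:.2f}")
```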

“Being able to predict neural signals with the USC model suggests that it can be used to design a device to support or replace the function of a damaged part of the brain,” Hampson said.

Next, the team will attempt to send the translated signal back into the brain of a patient with damage at one of the regions in order to try to bypass the damage and enable the formation of an accurate long-term memory.

Story Source:

The above post is reprinted from materials provided by University of Southern California.

Ancestral background can be determined by fingerprints

from
BIOENGINEER.ORG http://bioengineer.org/ancestral-background-determined-fingerprints/

A proof-of-concept study finds that it is possible to identify an individual’s ancestral background based on his or her fingerprint characteristics — a discovery with significant applications for law enforcement and anthropological research.

“This is the first study to look at this issue at this level of detail, and the findings are extremely promising,” says Ann Ross, a professor of anthropology at North Carolina State University and senior author of a paper describing the work. “But more work needs to be done. We need to look at a much larger sample size and evaluate individuals from more diverse ancestral backgrounds.”

Anthropologists have looked at fingerprints for years because they are interested in human variation. But that research has looked at Level 1 details, such as pattern types and ridge counts. Forensic fingerprint analysis, which is used in criminal justice contexts, looks at Level 2 details — the more specific variations, such as bifurcations, where a fingerprint ridge splits.

For this study, researchers looked at Level 1 and Level 2 details of right index-finger fingerprints for 243 individuals: 61 African American women; 61 African American men; 61 European American women; and 60 European American men. The fingerprints were analyzed to determine whether there were patterns that were specific to either sex or ancestral background.

The researchers found no significant differences between men and women, but did find significant differences in the Level 2 details of fingerprints between people of European American and African American ancestry.
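
As a rough illustration of the kind of group comparison such a study rests on, the sketch below tests whether one hypothetical Level 2 feature (bifurcation counts on the right index finger) differs between two samples of the sizes reported above. The feature values are simulated, so the numbers say nothing about the real result; only the shape of the analysis is the point.

```python
# Minimal sketch of a group comparison on one made-up Level 2 feature;
# the study's actual features and statistics are not reproduced here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical bifurcation counts on the right index finger.
group_a = rng.poisson(lam=18, size=122)   # e.g., African American sample
group_b = rng.poisson(lam=15, size=121)   # e.g., European American sample

t, p = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"Welch's t = {t:.2f}, p = {p:.4f}")
```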

“A lot of additional work needs to be done, but this holds promise for helping law enforcement,” Ross says. “And it’s particularly important given that, in 2009, the National Academy of Sciences called for more scientific rigor in forensic science — singling out fingerprints in particular as an area that merited additional study.

“This finding also tells us that there’s a level of variation in fingerprints that is of interest to anthropologists, particularly in the area of global population structures — we just need to start looking at the Level 2 fingerprint details,” Ross says.

Story Source:

The above post is reprinted from materials provided by North Carolina State University.

Antidepressants plus blood thinners cause brain cancer cells to eat themselves in mice

from
BIOENGINEER.ORG http://bioengineer.org/antidepressants-plus-blood-thinners-cause-brain-cancer-cells-eat-mice/

Scientists have been exploring the connection between tricyclic antidepressants and brain cancer since the early 2000s. There’s some evidence that the drugs can lower one’s risk for developing aggressive glioblastomas, but when given to patients after diagnosis in a small clinical trial, the antidepressants showed no effect as a treatment.

In a study appearing in Cancer Cell on September 24, Swiss researchers find that antidepressants work against brain cancer by excessively increasing tumor autophagy (a process that causes the cancer cells to eat themselves). The scientists next combined the antidepressants with blood thinners–also known to increase autophagy–as a treatment for mice with the first stages of human glioblastoma. Mouse lifespan doubled with the drug combination therapy, while either drug alone had no effect.

“It is exciting to envision that combining two relatively inexpensive and non-toxic classes of generic drugs holds promise to make a difference in the treatment of patients with lethal brain cancer,” says senior study author Douglas Hanahan, of the Swiss Federal Institute of Technology (EPFL). “However, it is presently unclear whether patients might benefit from this treatment. This new mechanism-based strategy to therapeutically target glioblastoma is provocative, but at an early stage of evaluation, and will require considerable follow-up to assess its potential.”

Mice received the combination therapy 5 days a week with 10-15 minute intervals between drugs. The antidepressant was given orally, and the other drug (the blood thinner or anti-coagulant) was injected. The data suggest that the drugs act synergistically by disrupting, in two different places, the biological pathway that controls the rate of autophagy–a cellular recycling system that at low levels enhances cell survival in stressful conditions. The two drugs work together to hyper-stimulate autophagy, causing the cancer cells to die.

“Importantly, the combination therapy did not cure the mice; rather, it delayed disease progression and modestly extended their lifespan,” Hanahan says. “It seems likely that these drugs will need to be combined with other classes of anticancer drugs to have benefit in treating glioblastoma patients. One can also envision ‘co-clinical trials’ wherein experimental therapeutic trials in the mouse models of glioblastoma are linked to analogous small proof-of-concept trials in GBM patients. Such trials may not be far off.”

Story Source:

The above post is reprinted from materials provided by Cell Press.

Monday, September 28, 2015

Cancer: Most People in World Can’t Get Surgery

from
BIOENGINEER.ORG http://bioengineer.org/cancer-people-world-cant-surgery/

Over 80% of the 15 million people diagnosed with cancer worldwide in 2015 will need surgery, but less than a quarter will have access to proper, safe, affordable surgical care when they need it, according to a major new Commission examining the state of global cancer surgery, published in The Lancet Oncology, and being presented at the 2015 European Cancer Congress in Vienna, Austria.

The Commission reveals that access is worst in low-income countries where as many as 95% of people with cancer do not receive basic cancer surgery. Yet despite this worldwide shortfall in access to cancer surgery, surgical care is not seen as an essential component of global cancer control by the international community.

According to lead Commissioner, Professor Richard Sullivan, Institute of Cancer Policy, King’s Health Partners Comprehensive Cancer Centre, King’s College London, UK, “With many competing health priorities and substantial financial constraints in many low- and middle-income countries, surgical services for cancer are given low priority within national cancer plans and are allocated few resources. As a result, access to safe, affordable cancer surgical services is dismal. Our new estimates suggest that less than one in twenty (5%) patients in low-income countries and only roughly one in five (22%) patients in middle-income countries can access even the most basic cancer surgery.”

Poor access to basic cancer surgery and good quality cancer care is not just confined to the world’s poorer countries. Survival data across Europe shows that many of the poorer EU member states are not delivering high quality cancer surgery to their populations.

Without urgent investment in surgical services for cancer care, global economic losses from cancers that could have been treated by surgery will reach a staggering US$12 trillion by 2030, equivalent to 1-1.5% of economic output in high-income countries (HICs) and 0.5-1% in low- and middle-income countries (LMICs) every year. Furthermore, the lack of effective action to train more cancer surgeons and improve cancer surgical systems could cost the global economy more than US$6 trillion between now and 2030, says co-author Professor John Meara, Director of the Program in Global Surgery and Social Change at Harvard Medical School in the USA.

Surgery is the mainstay of cancer control and cure, with over 80% of all cancers requiring some type of surgery, in many cases multiple times. Almost 300 different surgical procedures are used for the diagnosis, curative treatment, or palliation of cancers in people of all ages, including one in five children diagnosed with cancer.

The demand for cancer surgery is growing as many of the worst affected countries face rising cancer rates. By 2030, of the almost 22 million new cancer patients, over 17 million will need operations, 10 million of them in LMICs. “The global community can no longer ignore this problem,” says co-author Professor CS Pramesh, Head of Thoracic Surgery at Tata Memorial Centre, Mumbai, India. “By 2030, 45 million operations a year will be needed worldwide. The situation is particularly dire in low-income countries in sub-Saharan Africa and Asia where the need for cancer surgery is projected to increase by around 60% between now and 2030.”

The Commission also reveals that a third of people with cancer in LMICs who have a surgical procedure will incur financial catastrophe–costs that drive them into poverty. Another quarter will stop treatment because they cannot afford it.

With a serious shortfall of cancer surgeons in over 82% of countries, radical action is needed to train general surgeons to deliver basic cancer surgery, produce more gynaecological and surgical oncologists, and create more high quality surgical training programmes, say the authors. Other solutions to improve access to surgery include: better regulated public systems; growing international partnerships between institutions and surgical societies, such as SSO and ESSO; and a firm commitment to universal health coverage.

Educating policymakers, patients, and the public about the key issues in delivering safe, affordable, timely surgical care is also essential, say the authors. “Policy makers at all levels still have little awareness of the central importance of surgery to cancer control. Even recent studies of capacity building for cancer systems in Africa barely acknowledged the importance of surgery, focusing mainly on chemotherapy instead,” says co-author Professor Riccardo Audisio, President of the European Society for Surgical Oncology.

According to the Commission, funding for research in cancer surgery is dire and needs urgent investment. Despite its huge impact on patient outcomes — with over 50% of survival in breast cancer, for example, credited to high quality surgery — just 1.3% of the annual global cancer research and development budget goes towards surgery. This figure is similar in the UK, with only 2.1% of research spending on cancer allocated to surgery. New estimates produced for the Commission find that 93% of global research in cancer surgery is done by just 34 of 195 countries. LMICs only account for 15% of this output, yet these countries urgently need to conduct research that is relevant to their settings.

According to Professor Sullivan, “This Commission clearly outlines the enormous scale of the problem posed by the global shortfall in access to cancer surgery and current deficiencies in pathology and imaging. The evidence outlined by the Commission, contributed by some of the world’s leading experts in the field, leaves no doubt of the dire situation we are facing. It is imperative that surgery is at the heart of global and national cancer plans. A powerful political commitment is needed in all countries to increase investment and training in publicly funded systems of cancer surgery.”

Story Source:

The above post is reprinted from materials provided by The Lancet.

Feeling anxious? Check your orbitofrontal cortex, cultivate your optimism

from
BIOENGINEER.ORG http://bioengineer.org/feeling-anxious-check-orbitofrontal-cortex-cultivate-optimism/

A new study links anxiety, a brain structure called the orbitofrontal cortex, and optimism, finding that healthy adults who have larger OFCs tend to be more optimistic and less anxious.

The new analysis, reported in the journal Social Cognitive and Affective Neuroscience, offers the first evidence that optimism plays a mediating role in the relationship between the size of the OFC and anxiety.

Anxiety disorders afflict roughly 44 million people in the U.S. These disorders disrupt lives and cost an estimated $42 billion to $47 billion annually, scientists report.

The orbitofrontal cortex, a brain region located just behind the eyes, is known to play a role in anxiety. The OFC integrates intellectual and emotional information and is essential to behavioral regulation. Previous studies have found links between the size of a person’s OFC and his or her susceptibility to anxiety. For example, in a well-known study of young adults whose brains were imaged before and after the colossal 2011 earthquake and tsunami in Japan, researchers discovered that the OFC actually shrank in some study subjects within four months of the disaster. Those with more OFC shrinkage were likely to also be diagnosed with post-traumatic stress disorder, the researchers found.

Other studies have shown that more optimistic people tend to be less anxious, and that optimistic thoughts increase OFC activity.

The team on the new study hypothesized that a larger OFC might act as a buffer against anxiety in part by boosting optimism.

Most studies of anxiety focus on those who have been diagnosed with anxiety disorders, said University of Illinois researcher Sanda Dolcos, who led the research with graduate student Yifan Hu and psychology professor Florin Dolcos. “We wanted to go in the opposite direction,” she said. “If there can be shrinkage of the orbitofrontal cortex and that shrinkage is associated with anxiety disorders, what does it mean in healthy populations that have larger OFCs? Could that have a protective role?”

The researchers also wanted to know whether optimism was part of the mechanism linking larger OFC brain volumes to lesser anxiety.

The team collected MRIs of 61 healthy young adults and analyzed the structure of a number of regions in their brains, including the OFC. The researchers calculated the volume of gray matter in each brain region relative to the overall volume of the brain. The study subjects also completed tests that assessed their optimism and anxiety, depression symptoms, and positive (enthusiastic, interested) and negative (irritable, upset) affect.

A statistical analysis and modeling revealed that a thicker orbitofrontal cortex on the left side of the brain corresponded to higher optimism and less anxiety. The model also suggested that optimism played a mediating role in reducing anxiety in those with larger OFCs. Further analyses ruled out the role of other positive traits in reducing anxiety, and no other brain structures appeared to be involved in reducing anxiety by boosting optimism.
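
The mediation claim has a simple statistical skeleton: OFC volume predicts optimism (path a), and optimism predicts lower anxiety after controlling for OFC volume (path b), so the product a*b estimates the mediated effect. Below is a minimal sketch of that logic on synthetic data with ordinary least squares; the effect sizes and noise levels are invented, and the study's actual structural model was more involved.

```python
# Sketch of the mediation logic (OFC volume -> optimism -> lower anxiety)
# on synthetic data; numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n = 61                                   # sample size matching the study

ofc = rng.normal(size=n)                 # relative left-OFC gray matter
optimism = 0.5 * ofc + rng.normal(scale=0.8, size=n)
anxiety = -0.6 * optimism - 0.1 * ofc + rng.normal(scale=0.8, size=n)

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

a = slope(ofc, optimism)                 # path a: OFC -> optimism
# Path b: optimism -> anxiety, controlling for OFC.
Xb = np.column_stack([np.ones(n), ofc, optimism])
b = np.linalg.lstsq(Xb, anxiety, rcond=None)[0][2]

print(f"indirect (mediated) effect a*b = {a * b:.3f}")  # expected negative
```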

“You can say, ‘OK, there is a relationship between the orbitofrontal cortex and anxiety. What do I do to reduce anxiety?'” Sanda Dolcos said. “And our model is saying, this is working partially through optimism. So optimism is one of the factors that can be targeted.”

“Optimism has been investigated in social psychology for years. But somehow only recently did we start to look at functional and structural associations of this trait in the brain,” Hu said. “We wanted to know: If we are consistently optimistic about life, would that leave a mark in the brain?”

Florin Dolcos said future studies should test whether optimism can be increased and anxiety reduced by training people in tasks that engage the orbitofrontal cortex, or by finding ways to boost optimism directly.

“If you can train people’s responses, the theory is that over longer periods, their ability to control their responses on a moment-by-moment basis will eventually be embedded in their brain structure,” he said.

Story Source:

The above post is reprinted from materials provided by University of Illinois at Urbana-Champaign.

20 Questions Study Brings Us A Step Closer To Mind Reading

from
BIOENGINEER.ORG http://bioengineer.org/questions-study-brings-step-closer-mind-reading/

A new study in brain-to-brain communication placed participants a mile apart and asked them to play a game of 20 Questions with just their minds.

Story Source:

The above post is reprinted from materials provided by Newsy Science.

How hunger neurons control bone mass

from
BIOENGINEER.ORG http://bioengineer.org/hunger-neurons-control-bone-mass/

In an advance that helps clarify the role of a cluster of neurons in the brain, Yale School of Medicine researchers have found that these neurons not only control hunger and appetite, but also regulate bone mass.

The study is published Sept. 24 online ahead of print in the journal Cell Reports.

“We have found that the level of your hunger could determine your bone structure,” said one of the senior authors, Tamas L. Horvath, the Jean and David W. Wallace Professor of Comparative Medicine, and professor of neurobiology and obstetrics, gynecology, and reproductive sciences. Horvath is also director of the Yale Program in Integrative Cell Signaling and Neurobiology of Metabolism.

“The less hungry you are, the lower your bone density, and surprisingly, the effects of these neurons on bone mass are independent of the effect of the hormone leptin on these same cells.”

Horvath and his team focused on agouti-related peptide (AgRP) neurons in the hypothalamus, which control feeding and compulsive behaviors. Using genetically engineered mice in which the AgRP neurons could be selectively manipulated, the team found that these same cells are also involved in determining bone mass.

The team further found that when the AgRP circuits were impaired, this resulted in bone loss and osteopenia in mice — the equivalent of osteoporosis in women. But when the team enhanced AgRP neuronal activity in mice, this actually promoted increased bone mass.

“Taken together, these observations establish a significant regulatory role for AgRP neurons in skeletal bone metabolism independent of leptin’s action,” said co-senior author Karl Insogna, M.D., professor of medicine, and director of the Yale Bone Center. “Based on our findings, it seems that the effect of AgRP neurons on bone metabolism in adults is mediated at least in part by the sympathetic nervous system, but more than one pathway is likely involved.”

“There are other mechanisms by which the AgRP system can affect bone mass, including actions on the thyroid, adrenal and gonad systems,” Insogna added. “Further studies are needed to assess the hormonal control of bone metabolism as a pathway modulated by AgRP neurons.”

Story Source:

The above post is reprinted from materials provided by Yale University.

Sunday, September 27, 2015

Chip-based technology enables reliable direct detection of Ebola virus

from
BIOENGINEER.ORG http://bioengineer.org/chip-based-technology-enables-reliable-direct-detection-ebola-virus/

A team led by researchers at UC Santa Cruz has developed chip-based technology for reliable detection of Ebola virus and other viral pathogens. The system uses direct optical detection of viral molecules and can be integrated into a simple, portable instrument for use in field situations where rapid, accurate detection of Ebola infections is needed to control outbreaks.

This hybrid device integrates a microfluidic chip for sample preparation and an optofluidic chip for optical detection of individual molecules of viral RNA. Photo Credit: Joshua Parks

Laboratory tests using preparations of Ebola virus and other hemorrhagic fever viruses showed that the system has the sensitivity and specificity needed to provide a viable clinical assay. The team reported their results in a paper published September 25 in Scientific Reports.

An outbreak of Ebola virus in West Africa has killed more than 11,000 people since 2014, with new cases occurring recently in Guinea and Sierra Leone. The current gold standard for Ebola virus detection relies on a method called polymerase chain reaction (PCR) to amplify the virus’s genetic material for detection. Because PCR works on DNA molecules and Ebola is an RNA virus, the reverse transcriptase enzyme is used to make DNA copies of the viral RNA prior to PCR amplification and detection.

“Compared to our system, PCR detection is more complex and requires a laboratory setting,” said senior author Holger Schmidt, the Kapany Professor of Optoelectronics at UC Santa Cruz. “We’re detecting the nucleic acids directly, and we achieve a comparable limit of detection to PCR and excellent specificity.”

In laboratory tests, the system provided sensitive detection of Ebola virus while giving no positive counts in tests with two related viruses, Sudan virus and Marburg virus. Testing with different concentrations of Ebola virus demonstrated accurate quantification of the virus over six orders of magnitude. Adding a “preconcentration” step during sample processing on the microfluidic chip extended the limit of detection well beyond that achieved by other chip-based approaches, covering a range comparable to PCR analysis.
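
Quantification over six orders of magnitude amounts to a calibration curve: detected single-molecule counts should scale linearly with concentration across the whole range. A minimal sketch of checking that with a log-log fit, using simulated counts rather than the paper's data:

```python
# Sketch of a log-log calibration over six orders of magnitude; the counts
# are simulated, not data from the paper.
import numpy as np

rng = np.random.default_rng(3)

conc = np.logspace(0, 6, 7)                        # e.g., 1 to 1e6 (a.u.)
counts = 5.0 * conc * rng.lognormal(0, 0.1, 7)     # detector counts + noise

# Linear fit in log-log space; slope near 1 means counts track concentration.
slope, intercept = np.polyfit(np.log10(conc), np.log10(counts), 1)
print(f"log-log slope: {slope:.2f} (ideal: 1.00)")
```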

“The measurements were taken at clinical concentrations covering the entire range of what would be seen in an infected person,” Schmidt said.

Schmidt’s lab at UC Santa Cruz worked with researchers at Brigham Young University and UC Berkeley to develop the system. Virologists at Texas Biomedical Research Institute in San Antonio prepared the viral samples for testing.

The system combines two small chips, a microfluidic chip for sample preparation and an optofluidic chip for optical detection. For over a decade, Schmidt and his collaborators have been developing optofluidic chip technology for optical analysis of single molecules as they pass through a tiny fluid-filled channel on the chip. The microfluidic chip for sample processing can be integrated as a second layer next to or on top of the optofluidic chip.

Schmidt’s lab designed and built the microfluidic chip in collaboration with coauthor Richard Mathies at UC Berkeley, who pioneered this technology. It is made of a silicon-based polymer, polydimethylsiloxane (PDMS), and has microvalves and fluidic channels to transport the sample between nodes for various sample preparation steps. The targeted molecules–in this case, Ebola virus RNA–are isolated by binding to a matching sequence of synthetic DNA (called an oligonucleotide) attached to magnetic microbeads. The microbeads are collected with a magnet, nontarget biomolecules are washed off, and the bound targets are then released by heating, labeled with fluorescent markers, and transferred to the optofluidic chip for optical detection.

Schmidt noted that the team has not yet been able to test the system starting with raw blood samples. That will require additional sample preparation steps, and it will also have to be done in a biosafety level 4 facility.

“We are now building a prototype to bring to the Texas facility so that we can start with a blood sample and do a complete front-to-back analysis,” Schmidt said. “We are also working to use the same system for detecting less dangerous pathogens and do the complete analysis here at UC Santa Cruz.”

Story Source:

The above post is reprinted from materials provided by University of California – Santa Cruz.

Scientists discover new system for human genome editing

from
BIOENGINEER.ORG http://bioengineer.org/scientists-discover-new-human-genome-editing/

A team including the scientist who first harnessed the revolutionary CRISPR-Cas9 system for mammalian genome editing has now identified a different CRISPR system with the potential for even simpler and more precise genome engineering.

In a study published in Cell, Feng Zhang and his colleagues at the Broad Institute of MIT and Harvard and the McGovern Institute for Brain Research at MIT, with co-authors Eugene Koonin at the National Institutes of Health, Aviv Regev of the Broad Institute and the MIT Department of Biology, and John van der Oost at Wageningen University, describe the unexpected biological features of this new system and demonstrate that it can be engineered to edit the genomes of human cells.

“This has dramatic potential to advance genetic engineering,” said Eric Lander, Director of the Broad Institute and one of the principal leaders of the Human Genome Project. “The paper not only reveals the function of a previously uncharacterized CRISPR system, but also shows that Cpf1 can be harnessed for human genome editing and has remarkable and powerful features. The Cpf1 system represents a new generation of genome editing technology.”

CRISPR sequences were first described in 1987 and their natural biological function was initially described in 2010 and 2011. The application of the CRISPR-Cas9 system for mammalian genome editing was first reported in 2013, by Zhang and separately by George Church at Harvard.

In the new study, Zhang and his collaborators searched through hundreds of CRISPR systems in different types of bacteria, searching for enzymes with useful properties that could be engineered for use in human cells. Two promising candidates were the Cpf1 enzymes from bacterial species Acidaminococcus and Lachnospiraceae, which Zhang and his colleagues then showed can target genomic loci in human cells.

“We were thrilled to discover completely different CRISPR enzymes that can be harnessed for advancing research and human health,” Zhang said.

The newly described Cpf1 system differs in several important ways from the previously described Cas9, with significant implications for research and therapeutics, as well as for business and intellectual property:

First: In its natural form, the DNA-cutting enzyme Cas9 forms a complex with two small RNAs, both of which are required for the cutting activity. The Cpf1 system is simpler in that it requires only a single RNA. The Cpf1 enzyme is also smaller than the standard SpCas9, making it easier to deliver into cells and tissues.
Second, and perhaps most significantly: Cpf1 cuts DNA in a different manner than Cas9. When the Cas9 complex cuts DNA, it cuts both strands at the same place, leaving ‘blunt ends’ that often undergo mutations as they are rejoined. With the Cpf1 complex the cuts in the two strands are offset, leaving short overhangs on the exposed ends. This is expected to help with precise insertion, allowing researchers to integrate a piece of DNA more efficiently and accurately.
Third: Cpf1 cuts far away from the recognition site, meaning that even if the targeted gene becomes mutated at the cut site, it can likely still be re-cut, allowing multiple opportunities for correct editing to occur.
Fourth: The Cpf1 system provides new flexibility in choosing target sites. Like Cas9, the Cpf1 complex must first attach to a short sequence known as a PAM, and targets must be chosen that are adjacent to naturally occurring PAM sequences. The Cpf1 complex recognizes very different PAM sequences from those of Cas9 (see the sketch after this list). This could be an advantage in targeting some genomes, such as in the malaria parasite as well as in humans.
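
To make the PAM difference concrete, the sketch below scans a DNA string for candidate Cpf1 target sites. It assumes the T-rich TTTV motif commonly cited for Cpf1 (SpCas9, by contrast, uses NGG) and a protospacer lying 3' of the PAM; the exact PAM, protospacer length, and cut geometry vary by enzyme, so the numbers are illustrative placeholders, not design rules.

```python
# Sketch of choosing Cpf1 target sites: scan for a T-rich PAM (shown here
# as TTTV) located 5' of the protospacer. The 23-nt length is illustrative.
import re

def find_cpf1_sites(seq, protospacer_len=23):
    """Yield (pam_start, protospacer) for each TTTV PAM in seq."""
    for m in re.finditer(r"TTT[ACG]", seq):
        start = m.end()                       # protospacer begins after PAM
        target = seq[start:start + protospacer_len]
        if len(target) == protospacer_len:
            yield m.start(), target

seq = "ACGTTTTATGCTGACCTGAAGGTCATGCAGTTTCGGATCCTGA"
for pam_pos, target in find_cpf1_sites(seq):
    print(f"PAM at {pam_pos}: protospacer {target}")
```
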
“The unexpected properties of Cpf1 and more precise editing open the door to all sorts of applications, including in cancer research,” said Levi Garraway, an institute member of the Broad Institute, and the inaugural director of the Joint Center for Cancer Precision Medicine at the Dana-Farber Cancer Institute, Brigham and Women’s Hospital, and the Broad Institute. Garraway was not involved in the research.

Zhang, Broad Institute, and MIT plan to share the Cpf1 system widely. As with earlier Cas9 tools, these groups will make this technology freely available for academic research via the Zhang lab’s page on the plasmid-sharing website Addgene, through which the Zhang lab has already shared Cas9 reagents more than 23,000 times with researchers worldwide to accelerate research. The Zhang lab also offers free online tools and resources for researchers through its website, http://www.genome-engineering.org.

The Broad Institute and MIT plan to offer non-exclusive licenses to enable commercial tool and service providers to add this enzyme to their CRISPR pipeline and services, further ensuring availability of this new enzyme to empower research. These groups plan to offer licenses that best support rapid and safe development for appropriate and important therapeutic uses. “We are committed to making the CRISPR-Cpf1 technology widely accessible,” Zhang said.

“Our goal is to develop tools that can accelerate research and eventually lead to new therapeutic applications. We see much more to come, even beyond Cpf1 and Cas9, with other enzymes that may be repurposed for further genome editing advances.”

Story Source:

The above post is reprinted from materials provided by Broad Institute of MIT and Harvard.

Viruses Are Alive, Science Shows

from
BIOENGINEER.ORG http://bioengineer.org/viruses-alive-science-shows/

A new analysis supports the hypothesis that viruses are living entities that share a long evolutionary history with cells, researchers report. The study offers the first reliable method for tracing viral evolution back to a time when neither viruses nor cells existed in the forms recognized today, the researchers say.

The diverse physical attributes, genome sizes and lifestyles of viruses make them difficult to classify. A new study uses protein folds as evidence that viruses are living entities that belong on their own branch of the tree of life. Photo Credit: Julie McMahon

The new findings appear in the journal Science Advances.

Until now, viruses have been difficult to classify, said University of Illinois crop sciences and Carl R. Woese Institute for Genomic Biology professor Gustavo Caetano-Anollés, who led the new analysis with graduate student Arshan Nasir. In its latest report, the International Committee on Taxonomy of Viruses recognized seven orders of viruses, based on their shapes and sizes, genetic structure and means of reproducing.

“Under this classification, viral families belonging to the same order have likely diverged from a common ancestral virus,” the authors wrote. “However, only 26 (of 104) viral families have been assigned to an order, and the evolutionary relationships of most of them remain unclear.”

Part of the confusion stems from the abundance and diversity of viruses. Fewer than 4,900 viruses have been identified and sequenced so far, even though scientists estimate there are more than a million viral species. Many viruses are tiny — significantly smaller than bacteria or other microbes — and contain only a handful of genes. Others, like the recently discovered mimiviruses, are huge, with genomes bigger than those of some bacteria.

The new study focused on the vast repertoire of protein structures, called “folds,” that are encoded in the genomes of all cells and viruses. Folds are the structural building blocks of proteins, giving them their complex, three-dimensional shapes. By comparing fold structures across different branches of the tree of life, researchers can reconstruct the evolutionary histories of the folds and of the organisms whose genomes code for them.

The researchers chose to analyze protein folds because the sequences that encode viral genomes are subject to rapid change; their high mutation rates can obscure deep evolutionary signals, Caetano-Anollés said. Protein folds are better markers of ancient events because their three-dimensional structures can be maintained even as the sequences that code for them begin to change.

Today, many viruses — including those that cause disease — take over the protein-building machinery of host cells to make copies of themselves that can then spread to other cells. Viruses often insert their own genetic material into the DNA of their hosts. In fact, the remnants of ancient viral infiltrations are now permanent features of the genomes of most cellular organisms, including humans. This knack for moving genetic material around may be evidence of viruses’ primary role as “spreaders of diversity,” Caetano-Anollés said.

The researchers analyzed all of the known folds in 5,080 organisms representing every branch of the tree of life, including 3,460 viruses. Using advanced bioinformatics methods, they identified 442 protein folds that are shared between cells and viruses, and 66 that are unique to viruses.

“This tells you that you can build a tree of life, because you’ve found a multitude of features in viruses that have all the properties that cells have,” Caetano-Anollés said. “Viruses also have unique components besides the components that are shared with cells.”

In fact, the analysis revealed genetic sequences in viruses that are unlike anything seen in cells, Caetano-Anollés said. This contradicts one hypothesis that viruses captured all of their genetic material from cells. This and other findings also support the idea that viruses are “creators of novelty,” he said.

Using the protein-fold data available in online databases, Nasir and Caetano-Anollés used computational methods to build trees of life that included viruses.
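
In spirit, such an analysis starts from a presence/absence matrix recording which fold families appear in which genomes, from which a tree can be computed. The toy sketch below clusters a handful of invented organisms by shared fold content; the actual study applied far more elaborate phylogenomic methods to 5,080 genomes.

```python
# Toy version of the comparative approach: encode fold presence/absence per
# genome, then cluster organisms by fold content. Folds and organisms here
# are made up.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

organisms = ["virus_A", "virus_B", "bacterium", "archaeon", "eukaryote"]
# Rows: organisms; columns: fold families (1 = fold present in genome).
folds = np.array([
    [1, 1, 0, 0, 1, 0],
    [1, 1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1, 0],
    [1, 0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1, 1],
])

# Jaccard distance on fold content, then hierarchical clustering: organisms
# sharing more folds end up on nearby branches.
tree = linkage(pdist(folds, metric="jaccard"), method="average")
dn = dendrogram(tree, labels=organisms, no_plot=True)
print(" <- ".join(dn["ivl"]))   # leaf order of the resulting tree
```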

The data suggest “that viruses originated from multiple ancient cells … and co-existed with the ancestors of modern cells,” the researchers wrote. These ancient cells likely contained segmented RNA genomes, Caetano-Anollés said.

The data also suggest that at some point in their evolutionary history, not long after modern cellular life emerged, most viruses gained the ability to encapsulate themselves in protein coats that protected their genetic payloads, enabling them to spend part of their lifecycle outside of host cells and spread, Caetano-Anollés said. The protein folds that are unique to viruses include those that form these viral “capsids.”

“These capsids became more and more sophisticated with time, allowing viruses to become infectious to cells that had previously resisted them,” Nasir said. “This is the hallmark of parasitism.”

Some scientists have argued that viruses are nonliving entities, bits of DNA and RNA shed by cellular life. They point to the fact that viruses are not able to replicate (reproduce) outside of host cells, and rely on cells’ protein-building machinery to function. But much evidence supports the idea that viruses are not that different from other living entities, Caetano-Anollés said.

“Many organisms require other organisms to live, including bacteria that live inside cells, and fungi that engage in obligate parasitic relationships — they rely on their hosts to complete their lifecycle,” he said. “And this is what viruses do.”

The discovery of the giant mimiviruses in the early 2000s challenged traditional ideas about the nature of viruses, Caetano-Anollés said.

“These giant viruses were not the tiny Ebola virus, which has only seven genes. These are massive in size and massive in genomic repertoire,” he said. “Some are as big physically and with genomes that are as big or bigger than bacteria that are parasitic.”

Some giant viruses also have genes for proteins that are essential to translation, the process by which cells read gene sequences to build proteins, Caetano-Anollés said. The lack of translational machinery in viruses was once cited as a justification for classifying them as nonliving, he said.

“This is no more,” Caetano-Anollés said. “Viruses now merit a place in the tree of life. Obviously, there is much more to viruses than we once thought.”

Story Source:

The above post is reprinted from materials provided by University of Illinois at Urbana-Champaign.

Thursday, September 24, 2015

How the brain encodes time and place

from
BIOENGINEER.ORG http://bioengineer.org/brain-encodes-time-place/

When you remember a particular experience, that memory has three critical elements — what, when, and where. MIT neuroscientists have now identified a brain circuit that processes the “when” and “where” components of memory.

This image shows entorhinal “ocean cells” (red) and “island cells” (blue). Green fluorescence indicates ocean cells that have been genetically altered. Photo Credit: Takashi Kitamura

This circuit, which connects the hippocampus and a region of the cortex known as the entorhinal cortex, separates location and timing into two streams of information. The researchers also identified two populations of neurons in the entorhinal cortex that convey this information, dubbed “ocean cells” and “island cells.”

Previous models of memory had suggested that the hippocampus, a brain structure critical for memory formation, separates timing and context information. However, the new study shows that this information is split even before it reaches the hippocampus.

“It suggests that there is a dichotomy of function upstream of the hippocampus,” says Chen Sun, an MIT graduate student in brain and cognitive sciences and one of the lead authors of the paper, which appears in the Sept. 23 issue of Neuron. “There is one pathway that feeds temporal information into the hippocampus, and another that feeds contextual representations to the hippocampus.”

The paper’s other lead author is MIT postdoc Takashi Kitamura. The senior author is Susumu Tonegawa, the Picower Professor of Biology and Neuroscience and director of the RIKEN-MIT Center for Neural Circuit Genetics at MIT’s Picower Institute for Learning and Memory. Other authors are Picower Institute technical assistant Jared Martin, Stanford University graduate student Lacey Kitch, and Mark Schnitzer, an associate professor of biology and applied physics at Stanford.

When and where

Located just outside the hippocampus, the entorhinal cortex relays sensory information from other cortical areas to the hippocampus, where memories are formed. Tonegawa and colleagues identified island and ocean cells a few years ago, and have been working since then to discover their functions.

In 2014, Tonegawa’s lab reported that island cells, which form small clusters surrounded by ocean cells, are needed for the brain to form memories linking two events that occur in rapid succession. In the new Neuron study, the team found that ocean cells are required to create representations of a location where an event took place.

“Ocean cells are important for contextual representations,” Sun says. “When you’re in the library, when you’re crossing the street, when you’re on the subway, you have different memories associated with each of these contexts.”

To discover these functions, the researchers labeled the two cell populations with a fluorescent molecule that lights up when it binds to calcium — an indication that the neuron is firing. This allowed them to determine which cells were active during tasks requiring mice to discriminate between two different environments, or to link two events in time.

The researchers also used a technique called optogenetics, which allows them to control neuron activity using light, to investigate how the mice’s behavior changed when either island cells or ocean cells were silenced.

When they blocked ocean cell activity, the animals were no longer able to associate a certain environment with fear after receiving a foot shock there. Manipulating the island cells, meanwhile, allowed the researchers to lengthen or shorten the time gap between events that could be linked in the mice’s memory.

Information flow

Previously, Tonegawa’s lab found that the firing rates of island cells depend on how fast the animal is moving, leading the researchers to believe that island cells help the animal navigate through space. Ocean cells, meanwhile, help the animal recognize where it is at a given time.

The researchers also found that these two streams of information flow from the entorhinal cortex to different parts of the hippocampus: Ocean cells send their contextual information to the CA3 and dentate gyrus regions, while island cells project to CA1 cells.

Tonegawa’s lab is now pursuing further studies of how the entorhinal cortex and other parts of the brain represent time and place. The researchers are also investigating how information on timing and location is further processed in the brain to create a complete memory of an event.

“To form an episodic memory, each component has to be recombined together,” Kitamura says. “This is the next question.”

Story Source:

The above post is reprinted from materials provided by Massachusetts Institute of Technology.

Tuesday, September 22, 2015

Kids can remember tomorrow what they forgot today

from
BIOENGINEER.ORG http://bioengineer.org/kids-remember-tomorrow-forgot-today/

For adults, memories tend to fade with time. But a new study has shown that there are circumstances under which the opposite is true for small children: they can remember a piece of information better days later than they can on the day they first learned it.

While playing a video game that asked them to remember associations between objects, 4- and 5-year-olds who re-played the game after a two-day delay scored more than 20 percent higher than kids who re-played it later the same day.

“An implication is that kids can be smarter than we necessarily thought they could be,” said Kevin Darby, a doctoral student in psychology at The Ohio State University and co-author of the study. “They can make complex associations, they just need more time to do it.”

The study, which will appear in an upcoming issue of the journal Psychological Science, is the first to document two different but related cognitive phenomena simultaneously: so-called “extreme forgetting,” when kids learn two similar things in rapid succession and the second thing causes them to forget the first; and delayed remembering, when they can recall the previously forgotten information days later.

The findings “give us a window into understanding memory and, in particular, the issue of encoding new information into memory,” said lead study author Vladimir Sloutsky, professor of psychology at Ohio State and director of the university’s Cognitive Development Lab.

“First, we showed that if children are given pieces of similar information in close proximity, the different pieces interfere with each other, and there is almost complete elimination of memory,” Sloutsky said. “Second, we showed that introducing delays eliminates this interference.”

“It seems surprising that children can almost completely forget what they just learned, but then their memories can actually improve with time.”

The study involved 82 children, ages 4 and 5, from central Ohio preschools. The kids played a picture association game on a computer three separate times.

The first time, they were shown pairs of objects, such as a baseball cap and a rabbit, and told whether the pairs belonged to Mickey Mouse or Winnie the Pooh. To win the game, they had to match the pairs with the correct owner.

Kids learned the associations fairly easily. At the start of the game, they were scoring an average of 60 percent, but by the end of the game their average scores had risen to around 90 percent.

The kids then played the game again immediately after, but the researchers scrambled the pairs belonging to Mickey and Pooh, so that the kids had to learn a completely new set of associations with the exact same objects.

Again, the kids started out scoring around 60 percent, and ended around 90 percent — scores that proved they were able to learn the new picture associations.

The researchers wanted to test whether learning the new associations in the second game caused the kids to forget what they learned in the first game, so they had half of the kids play one more time the same day. For this last game, the researchers brought back the original pair associations from the first game.

And it seemed that the kids did indeed experience extreme forgetting. They began the third game scoring around 60 percent, and ended scoring around 90 percent — as if they were learning the same information all over again from scratch.

The other half of the kids didn’t play the third game until two days later. Darby explained why.

“We know from previous research that kids struggle to form complex associations in the moment, so we thought that with some time off and periods of sleep they might be able to do better,” he said. “And it turned out that when they had time to absorb the information, they did better.”

A lot better, actually: Kids who had a two-day break began the game with an average score of nearly 85 percent, and finished with a score just above 90 percent. Their final scores were similar, but they remembered enough to start out with a 25-point advantage over kids who didn’t get a two-day break.

Sloutsky said that, for kids, learning the pair associations is analogous to learning things like rules, schedules, or arrangements. For example, a child may have to remember that on Saturdays she can use the scooter and her brother plays video games, but on Sunday she plays video games and her brother uses the scooter.

The study suggests that kids may have difficulty remembering such things in the moment, but given a few days to absorb the new information, they can remember it later.

Sloutsky cautioned that the study does not in any way suggest that kids can absorb adult-sized quantities of information if only they are given time to sleep on it. Rather, it means that they can absorb kid-sized quantities of information given time, even if they seem to forget in the moment.

“We’ve shown that it’s possible for children’s memories to improve with time, but it’s not like we uncovered a method for super-charging how much they can remember,” he said.

“The takeaway message is that kids can experience extreme forgetting, and the counter-intuitive way to fight it is to let time pass.”

Story Source:

The above post is reprinted from materials provided by Ohio State University.

Sensor Could Detect Viruses, Kill Cancer Cells

from
BIOENGINEER.ORG http://bioengineer.org/sensor-detect-viruses-kill-cancer-cells/

MIT biological engineers have developed a modular system of proteins that can detect a particular DNA sequence in a cell and then trigger a specific response, such as cell death.

At left, cells glow red to indicate that the detection system has been successfully delivered. The system was designed to produce green fluorescence in cells carrying a viral DNA sequence, as seen at right. Photo Credit: Shimyn Slomovic

This system can be customized to detect any DNA sequence in a mammalian cell and then trigger a desired response, including killing cancer cells or cells infected with a virus, the researchers say.

“There is a range of applications for which this could be important,” says James Collins, the Termeer Professor of Medical Engineering and Science in MIT’s Department of Biological Engineering and Institute of Medical Engineering and Science (IMES). “This allows you to readily design constructs that enable a programmed cell to both detect DNA and act on that detection, with a report system and/or a respond system.”

Collins is the senior author of a Sept. 21 Nature Methods paper describing the technology, which is based on a type of DNA-binding proteins known as zinc fingers. These proteins can be designed to recognize any DNA sequence.

“The technologies are out there to engineer proteins to bind to virtually any DNA sequence that you want,” says Shimyn Slomovic, an IMES postdoc and the paper’s lead author. “This is used in many ways, but not so much for detection. We felt that there was a lot of potential in harnessing this designable DNA-binding technology for detection.”

Sense and respond

To create their new system, the researchers needed to link zinc fingers’ DNA-binding capability with a consequence — either turning on a fluorescent protein to reveal that the target DNA is present or generating another type of action inside the cell.

The researchers achieved this by exploiting a type of protein known as an “intein” — a short protein that can be inserted into a larger protein, splitting it into two pieces. The split protein pieces, known as “exteins,” only become functional once the intein removes itself while rejoining the two halves.

Collins and Slomovic decided to divide an intein in two and then attach each portion to a split extein half and a zinc finger protein. The zinc finger proteins are engineered to recognize adjacent DNA sequences within the targeted gene, so if they both find their sequences, the inteins line up and are then cut out, allowing the extein halves to rejoin and form a functional protein. The extein protein is a transcription factor designed to turn on any gene the researchers want.
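
Functionally, the construct is an AND gate on DNA: output appears only if both zinc fingers find their recognition sites next to each other. Here is a minimal sketch of that detection logic, with made-up site sequences and an arbitrary spacing limit standing in for the geometry the split intein halves require.

```python
# The sensor acts like an AND gate: output fires only if both zinc fingers
# find their adjacent sites. Site sequences and spacing limit are made up.
def sensor_fires(dna, zf1_site, zf2_site, max_gap=10):
    """True if zf2_site occurs shortly downstream of zf1_site in dna."""
    i = dna.find(zf1_site)
    if i == -1:
        return False
    j = dna.find(zf2_site, i + len(zf1_site))
    if j == -1:
        return False
    # Adjacent binding brings the split intein halves together for splicing.
    return (j - (i + len(zf1_site))) <= max_gap

viral_dna = "GGCATTACGGA" + "CTTAGC" + "AC" + "GGATTC" + "TTGACA"
print(sensor_fires(viral_dna, "CTTAGC", "GGATTC"))            # True
print(sensor_fires("GGCATTACGGACTTAGC", "CTTAGC", "GGATTC"))  # False
```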

In this paper, they linked green fluorescent protein (GFP) production to the zinc fingers’ recognition of a DNA sequence from an adenovirus, so that any cell infected with this virus would glow green.

This approach could be used not only to reveal infected cells, but also to kill them. To achieve this, the researchers could program the system to produce proteins that alert immune cells to fight the infection, instead of GFP.

“Since this is modular, you can potentially evoke any response that you want,” Slomovic says. “You could program the cell to kill itself, or to secrete proteins that would allow the immune system to identify it as an enemy cell so the immune system would take care of it.”

The MIT researchers also deployed this system to kill cells by linking detection of the DNA target to production of an enzyme called NTR. This enzyme activates a harmless drug precursor called CB 1954, which the researchers added to the petri dish where the cells were growing. When activated by NTR, CB 1954 kills the cells.

Future versions of the system could be designed to bind to DNA sequences found in cancerous genes and then produce transcription factors that would activate the cells’ own programmed cell death pathways.

Research tool

The researchers are now adapting this system to detect latent HIV proviruses, which remain dormant in some infected cells even after treatment. Learning more about such viruses could help scientists find ways to permanently eliminate them.

“Latent HIV provirus is pretty much the final barrier to curing AIDS, which currently is incurable simply because the provirus sequence is there, dormant, and there aren’t any ways to eradicate it,” Slomovic says.

While treating diseases using this system is likely many years away, it could be used much sooner as a research tool, Collins says. For example, scientists could use it to test whether genetic material has been successfully delivered to cells that scientists are trying to genetically alter. Cells that did not receive the new gene could be induced to undergo cell death, creating a pure population of the desired cells.

It could also be used to study chromosomal inversions and transpositions that occur in cancer cells, or to study the 3-D structure of normal chromosomes by testing whether two genes located far from each other on a chromosome fold in such a way that they end up next to each other, the researchers say.

Story Source:

The above post is reprinted from materials provided by Massachusetts Institute of Technology.

Virus bioengineered to deliver therapies to cells

from
BIOENGINEER.ORG http://bioengineer.org/virus-bioengineered-to-deliver-therapies-to-cells/

Stanford researchers have ripped the guts out of a virus and totally redesigned its core to repurpose its infectious capabilities into a safe vehicle for delivering vaccines and therapies directly where they are needed.


Professor James Swartz holds an enlarged replica of a virus-like particle. Swartz and his team have re-engineered a virus to deliver therapies to cells. Photo Credit: Linda Rice

The study reported today in the Proceedings of the National Academy of Sciences breathes new life into the field of targeted delivery, the ongoing effort to fashion treatments that affect diseased areas but leave healthy tissue alone.

“We call this a smart particle,” said James Swartz, the professor of chemical engineering and of bioengineering at Stanford who led the study. “We make it smart by adding molecular tags that act like addresses to send the therapeutic payload where we want it to go.”

Using the smart particle for immunotherapy would involve tagging its outer surface with molecules designed to teach the body’s disease-fighting cells to recognize and destroy cancers, Swartz said.

For Swartz and his principal collaborator, Yuan Lu, now a pharmacology researcher at the University of Tokyo, the result is a vindication. When they first started the research four years ago, funding agencies said it couldn’t be done.

It will require much more effort to accomplish the second goal — packing tiny quantities of medicines into the smart particles, delivering the particles to and into diseased cells, and engineering them to release their payloads.

‘Proof of principle’

“This was a proof-of-principle experiment so there’s a lot of work to be done,” Swartz said. “But I believe we can use this smart particle to deliver cancer-fighting immunotherapies that will have minimal side effects.”

Massachusetts Institute of Technology Professor Robert Langer, a leader in targeted drug delivery research who was not connected to the Stanford experiments, also read the paper before publication.

“This is terrific work, a beautiful paper,” Langer said. “Dr. Swartz and colleagues have done a remarkable job of stabilizing viruslike particles and re-engineering their surface.”

Targeted drug delivery is one of the ultimate goals of medicine because it seeks to focus remedies on diseased cells, minimizing the side effects that occur when, for instance, radiation or chemotherapies harm healthy cells while treating cancer.

Looking for a model in nature, many researchers focused on viruses, which target specific cells, sneak in and deliver an infectious payload. The new paper describes how the Stanford team designed a viruslike particle that is only a delivery vehicle with no infectious payload.

They started with the virus that causes Hepatitis B. This virus has three layers like an egg, and the researchers focused on the non-infectious middle layer, called the capsid. It is a complex protein structure, and when properly assembled this capsid looks like a skeletal soccer ball with lots of spikes sticking out.

Other researchers have had the same idea for repurposing the Hepatitis B capsid because its hollow structure is large enough, in theory, to carry a significant medical payload. But in practice this had proven so difficult that when Swartz floated the idea to funding agencies they said no.

But Swartz was so certain his approach would work that he found ways to bootstrap the project over the several years that it took to finish his experiments.

Next steps

Biotechnologists know how to build the complex protein structures they find in nature, but the Stanford team took this further. They didn’t just build the capsid nature provided. They studied the DNA that directs the structure to assemble and re-engineered the code to custom-design a capsid that would be invisible to the immune system, sturdy enough to survive a trip through the bloodstream, and easy to attach molecular tags to.

Bioengineering the surface was important. If the researchers wanted the capsid to teach the immune system to destroy cancer cells, they would hang vaccine tags on the spikes. If, on the other hand, they wanted the capsid to deliver medicines to a sick cell, they would hang address tags on the spikes.

Finally, the researchers had to make all these modifications without destroying the miraculous capability of the capsid’s DNA code to direct 240 copies of one protein to self-assemble into a hollow sphere with a spiky surface.

Swartz said the next step is to attach cancer tags to the outside of this smart particle, to use it to train the immune system to recognize certain cancers. Those experiments would likely occur in mice.

After that he will add the next function — further engineering the DNA code to make sure that the protein can self-assemble around a small medicinal payload.

“That will be quite complicated, but we’ve already gotten this far when they said it couldn’t be done,” Swartz said.

Stanford has patented the technology and different aspects are licensed to a biotechnology company in which Swartz has a founding interest. The approach is in its early stages and there is as yet no timetable for commercial development.

Story Source:

The above post is reprinted from materials provided by Stanford University.

Monday, 21 September 2015

Promising Drug for HIV Treatment

from
BIOENGINEER.ORG http://bioengineer.org/promising-drug-hiv-treatment/

A cure for HIV requires the eradication of latent (i.e., dormant and therefore hidden) virus from reservoirs in immune cells throughout the body. HIV latency depends on the activity of proteins from the human host called histone deacetylases (HDAC), and previous work has shown that HDAC inhibitors (HDACi) can disrupt HIV latency. A study published on September 17th in PLOS Pathogens reports results from a clinical trial of an HDAC inhibitor that had shown potential in preclinical studies and answers open questions about the potential use of these drugs in strategies to eliminate HIV from the body.


Ole Søgaard, from Aarhus University Hospital, Denmark, and colleagues designed the trial to investigate further the potential of HDACi as the latency reversal component in the ‘kick and kill approach’ to purge HIV reservoirs. They chose a single HDACi called romidepsin and investigated the drug’s clinical safety and potential for reversing HIV latency in individuals on long-term antiretroviral therapy (ART) as well as its impact on T cells (an earlier study had suggested that HDACi might negatively affect killer T cell function thus impairing the elimination of HIV-infected cells by the immune system).

The trial involved six participants (all Caucasians) with a median age of 56 and a median time on ART of 10 years. The participants received one romidepsin infusion per week for three consecutive weeks and were followed for several weeks after that. While all individuals experienced some side effects (or adverse events), those were generally mild, and all participants completed the full course of treatment.

When the researchers analyzed blood samples from the participants at different points in the trial, they found the expected biochemical response to HDAC inhibition following each administration of the drug. Concurrently, they saw evidence of HIV transcription (the first step of latency reversal) in all participants. And after the second infusion, HIV RNA became detectable and quantifiable with standard clinical assays in blood plasma in five of the six participants. As previous studies had not consistently shown an increase in plasma HIV RNA, even using ultrasensitive assays, the researchers suggest that this “establishes a new benchmark for future trials investigating the in vivo potency of latency reversing agents to be used in HIV eradication efforts.”

Furthermore, romidepsin did not alter the proportion of HIV-specific T cells, inhibit T-cell cytokine production, or induce other changes that suggest an impaired T-cell response. This, the researchers say, is “critically important for future trials combining HDACi with interventions (e.g. therapeutic HIV vaccination) designed to enhance killing of latently infected cells by cytotoxic T cells.”

“The present study,” they summarize, “demonstrated potent in vivo latency reversal with a single drug resulting in increased plasma HIV-1 RNA that was readily quantified with standard commercial assays and did not show negative effects on T cell immunity.” Moreover, “the magnitude of viral induction in the present study was greater than anything previously reported for any latency reversing agent tested in humans.” However, they also acknowledge that, as with previous studies, their data do not answer what proportion of the total pool of inducible latently infected cells was “kicked out of” latency in the participants, and that “despite the increases in viral production and preserved T cell functions, no substantial changes in the size of the HIV reservoir were observed.”

Nonetheless, the researchers feel that their combined results “have important implications for the use of romidepsin as the latency reversal agent in a multi-component HIV eradication strategy where this drug may be combined with interventions designed to enhance killing of latently infected cells.” In fact, they mention one such trial combining romidepsin with therapeutic HIV-1 vaccination that is currently under way at the same institution that, they hope, “will shed light on the mechanisms needed to effectively clear the cells that produce viral particles.”

Story Source:

The above post is reprinted from materials provided by PLOS.

World has lost 3 percent of its forests since 1990

from
BIOENGINEER.ORG http://bioengineer.org/world-lost-percent-forests-1990/

The globe’s forests have shrunk by three per cent since 1990 – an area equivalent to the size of South Africa – despite significant improvements in conservation over the past decade.


The UN’s Global Forest Resources Assessment (GFRA) 2015 was released this week, revealing that while the pace of forest loss has slowed, the damage over the past 25 years has been considerable.

Total forest area has declined by three per cent between 1990 and 2015 from 4,128 million hectares to 3,999 million hectares – a loss of 129 million hectares.
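
As a quick arithmetic check on those figures, a minimal sketch in Python (the area numbers are the ones quoted above):

```python
# Areas in millions of hectares, as quoted above.
area_1990, area_2015 = 4128, 3999

loss = area_1990 - area_2015
print(loss)                               # 129
print(round(100 * loss / area_1990, 1))  # 3.1 -> the "three per cent" figure
```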

Significantly, natural forest declined at double the overall rate, six per cent, while tropical forests took the hardest hit with a ten per cent loss.

Forestry expert at the University of Melbourne Professor Rod Keenan has been involved with the GFRA since 2003. For the 2015 Assessment, he headed a team of academics analysing the GFRA data for the UN’s Food and Agriculture Organisation.

“These are not good stats,” Professor Keenan said of the latest report.

“We really need to be increasing forest area across all domains to provide for the forest benefits and services of a growing population. So there is more work to do.”

Agricultural land development, by large and small scale producers, is believed to be the main driver behind the decreases, with Brazil, Indonesia and Nigeria recording the biggest losses over the past five years.

But there have also been positive signs.

While the annual rate of net forest loss in the 1990s stood at 7.3 million hectares, it has since halved to 3.3 million hectares between 2010 and 2015.

“Halving the loss is a good thing, but we need continued policy focus to ensure the trend can be sustained,” Professor Keenan said.

He believes this should include regulations to stop forest conversion, funding for better forest management and incentives to increase forest area.

Brazil and Indonesia, long among the heaviest deforesters, have improved significantly: Brazil’s current net loss rate is 40 per cent lower than it was in the 1990s.

Indonesia, too, is losing forested area at a rate roughly two-thirds lower than it did between 1990 and 2000.

Professor Keenan said the study showed forest is being more rapidly lost in some of the poorest countries, including India, Vietnam and Ghana.

“In low-income countries with high forest cover, forests are being cleared for direct subsistence by individuals and families and large scale agriculture for broader economic development,” he said.

“Some have policies and regulations to protect forests, but they do not have the capacity and resources to implement them.”

In Australia, conservation efforts are beginning to have an impact. Australia recorded a net gain of 1.5 million hectares of forested land over the past five years, despite an overall fall from 128.5 million hectares in 1990 to 124.7 million hectares in 2015.

Much of that earlier decline is attributed to natural events, such as fire and drought, as well as to land clearance for agriculture.

Significant findings:

In 2015, total forest cover is 3,999 million hectares globally (or 31 per cent of global land)
Since 1990, the world has lost three per cent of total forest area, six per cent of natural forested area and ten per cent of tropical forests
The average annual rate of loss has halved, from 7.3 million hectares in the 1990s to 3.3 million hectares between 2010 and 2015
The decline in natural forests has been partly offset by a 66 per cent rise in planted forest, from 168 million hectares to 278 million hectares
Loss is occurring most quickly in some of the lowest-income countries

Story Source:

The above post is reprinted from materials provided by University of Melbourne.

Personalized 3-D printed heart models for surgical planning

from
BIOENGINEER.ORG http://bioengineer.org/personalized-3-d-printed-heart-models-surgical-planning/

Researchers at MIT and Boston Children’s Hospital have developed a system that can take MRI scans of a patient’s heart and, in a matter of hours, convert them into a tangible, physical model that surgeons can use to plan surgery.


New system from MIT and Boston Children’s Hospital researchers converts MRI scans into 3D-printed heart models (shown here). Photo: Credit Bryce Vickmark

The models could provide a more intuitive way for surgeons to assess and prepare for the anatomical idiosyncrasies of individual patients. “Our collaborators are convinced that this will make a difference,” says Polina Golland, a professor of electrical engineering and computer science at MIT, who led the project. “The phrase I heard is that ‘surgeons see with their hands,’ that the perception is in the touch.”

This fall, seven cardiac surgeons at Boston Children’s Hospital will participate in a study intended to evaluate the models’ usefulness.

Golland and her colleagues will describe their new system at the International Conference on Medical Image Computing and Computer Assisted Intervention in October. Danielle Pace, an MIT graduate student in electrical engineering and computer science, is first author on the paper and spearheaded the development of the software that analyzes the MRI scans. Mehdi Moghari, a physicist at Boston Children’s Hospital, developed new procedures that increase the precision of MRI scans tenfold, and Andrew Powell, a cardiologist at the hospital, leads the project’s clinical work.

The work was funded by both Boston Children’s Hospital and by Harvard Catalyst, a consortium aimed at rapidly moving scientific innovation into the clinic.

MRI data consist of a series of cross sections of a three-dimensional object. Like a black-and-white photograph, each cross section has regions of dark and light, and the boundaries between those regions may indicate the edges of anatomical structures. Then again, they may not.

Determining the boundaries between distinct objects in an image is one of the central problems in computer vision, known as “image segmentation.” But general-purpose image-segmentation algorithms aren’t reliable enough to produce the very precise models that surgical planning requires.

Human factors

Typically, the way to make an image-segmentation algorithm more precise is to augment it with a generic model of the object to be segmented. Human hearts, for instance, have chambers and blood vessels that are usually in roughly the same places relative to each other. That anatomical consistency could give a segmentation algorithm a way to weed out improbable conclusions about object boundaries.

The problem with that approach is that many of the cardiac patients at Boston Children’s Hospital require surgery precisely because the anatomy of their hearts is irregular. Inferences from a generic model could obscure the very features that matter most to the surgeon.

In the past, researchers have produced printable models of the heart by manually indicating boundaries in MRI scans. But with the 200 or so cross sections in one of Moghari’s high-precision scans, that process can take eight to 10 hours.

“They want to bring the kids in for scanning and spend probably a day or two doing planning of how exactly they’re going to operate,” Golland says. “If it takes another day just to process the images, it becomes unwieldy.”

Pace and Golland’s solution was to ask a human expert to identify boundaries in a few of the cross sections and allow algorithms to take over from there. Their strongest results came when they asked the expert to segment only a small patch (one-ninth of the total area) of each cross section.

In that case, segmenting just 14 patches and letting the algorithm infer the rest yielded 90 percent agreement with expert segmentation of the entire collection of 200 cross sections. Human segmentation of just three patches yielded 80 percent agreement.

“I think that if somebody told me that I could segment the whole heart from eight slices out of 200, I would not have believed them,” Golland says. “It was a surprise to us.”

Together, human segmentation of sample patches and the algorithmic generation of a digital, 3-D heart model takes about an hour. The 3-D-printing process takes a couple of hours more.

Prognosis

Currently, the algorithm examines patches of unsegmented cross sections and looks for similar features in the nearest segmented cross sections. But Golland believes that its performance might be improved if it also examined patches that ran obliquely across several cross sections. This and other variations on the algorithm are the subject of ongoing research.
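
That propagation step can be pictured with a toy sketch: for each pixel of an unlabeled cross section, compare a small patch around it against nearby patches in the closest expert-segmented slice, and copy the best match’s label. The patch size, search window and overall scheme below are illustrative assumptions, not the paper’s actual algorithm or parameters.

```python
import numpy as np

PATCH = 2    # patch "radius": compared patches are (2*PATCH+1) pixels square
SEARCH = 3   # how far to look around the same location in the labeled slice

def propagate_labels(unlabeled, labeled_img, labeled_seg):
    """Copy expert labels from the nearest segmented slice: each pixel of
    the unlabeled slice takes the label of the most similar patch (by sum
    of squared differences) in the labeled slice."""
    un = unlabeled.astype(float)
    la = labeled_img.astype(float)
    h, w = un.shape
    out = np.zeros((h, w), dtype=labeled_seg.dtype)
    pad_u = np.pad(un, PATCH, mode="edge")
    pad_l = np.pad(la, PATCH, mode="edge")
    for y in range(h):
        for x in range(w):
            ref = pad_u[y:y + 2 * PATCH + 1, x:x + 2 * PATCH + 1]
            best, best_cost = (y, x), np.inf
            for dy in range(-SEARCH, SEARCH + 1):
                for dx in range(-SEARCH, SEARCH + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    cand = pad_l[yy:yy + 2 * PATCH + 1, xx:xx + 2 * PATCH + 1]
                    cost = float(((ref - cand) ** 2).sum())
                    if cost < best_cost:
                        best, best_cost = (yy, xx), cost
            out[y, x] = labeled_seg[best]
    return out
```

A scheme along these lines needs the expert to label only a small amount of data up front, which is what makes the reported 90 percent agreement from just 14 expert-labeled patches so striking.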

The clinical study in the fall will involve MRIs from 10 patients who have already received treatment at Boston Children’s Hospital. Each of seven surgeons will be given data on all 10 patients — some, probably, more than once. That data will include the raw MRI scans and, on a randomized basis, either a physical model or a computerized 3-D model, based, again at random, on either human segmentations or algorithmic segmentations.

Using that data, the surgeons will draw up surgical plans, which will be compared with documentation of the interventions that were performed on each of the patients. The hope is that the study will shed light on whether 3-D-printed physical models can actually improve surgical outcomes.

Story Source:

The above post is reprinted from materials provided by MIT NEWS.

Sunday, 20 September 2015

Hearts build new muscle with this simple protein patch

from
BIOENGINEER.ORG http://bioengineer.org/hearts-build-new-muscle-simple-protein-patch/

An international team of researchers has identified a protein that helps heart muscle cells regenerate after a heart attack. Researchers also showed that a patch loaded with the protein and placed inside the heart improved cardiac function and survival rates after a heart attack in mice and pigs.


New heart muscle cells (green with yellow nuclei) grow in the infarcted region of a mouse heart treated by the patch loaded with FSTL1. Photo Credit: UC San Diego/SBP

Animal hearts regained close to normal function within four to eight weeks after treatment with the protein patch. It might be possible to test the patch in human clinical trials as early as 2017. The team, led by Professor Pilar Ruiz-Lozano at Stanford University and involving researchers from the University of California, San Diego and Sanford Burnham Prebys Medical Discovery Institute (SBP) published their findings in the Sept. 16 online issue of Nature.

“We are really excited about the prospect of bringing this technology to the clinic,” said Mark Mercola, professor of Bioengineering at UC San Diego and professor in the Development, Aging, and Regeneration Program at SBP. “It’s commercially viable, clinically attractive and you don’t need immunosuppressive drugs.”

High throughput technology in Mercola’s lab was critical in identifying a natural protein, called Follistatin-like 1 (FSTL1), and showing that it can stimulate cultured heart muscle cells to divide. Researchers led by Ruiz-Lozano at Stanford embedded the protein in a patch and applied it to the surface of mouse and pig hearts that had undergone an experimental form of myocardial infarction or “heart attack.” Remarkably, FSTL1 caused heart muscle cells already present within the heart to multiply and re-build the damaged heart and reduce scarring.

Heart muscle regeneration and scarring are two major issues that current treatments for heart attacks do not address, said Ruiz-Lozano. “Treatments don’t deal with this fundamental problem, and consequently many patients progressively lose heart function, leading to long-term disability and eventually death,” she said.

Today, most patients survive a heart attack immediately after it happens. But the organ is damaged and scarred, making it harder to pump blood. Sustained pressure causes scarring to spread and ultimately leads to heart failure. Heart failure is a major source of mortality worldwide, and roughly half of heart failure patients die within five to six years. Treatments available today focus primarily on making it easier for the heart to pump blood, and advances have extended patients’ lives. But they can’t help regenerate heart tissue.

The team initially looked to other species for inspiration. Lower vertebrates, such as fish, can regenerate heart muscle, and prior studies in fish suggested that the epicardium, the heart’s outside layer, might produce regenerative compounds. The researchers joined forces to find a solution.

The team started with the epicardial cells themselves, and showed that they stimulated existing heart muscle cells, or cardiomyocytes, to replicate. To find whether a single compound might be responsible, the Mercola lab used mass spectrometry, a sophisticated technology, to find over 300 proteins produced by the cells that could fit the bill. They then screened a number of these candidates using high throughput assays to look for the ones that had the same activity as the cells, and found that only one did the job: Follistatin-like 1 (FSTL1).

The group at Stanford, including teams led by Ruiz-Lozano, Dan Bernstein, Manish Butte and Phil Yang, led the development effort for a therapeutic patch made of collagen and cast with FSTL1 at its core. The patch has the elasticity of fetal heart tissue and slowly releases the protein. “It could act like a cell nursery,” Ruiz-Lozano said. “It’s a hospitable environment. Over time, it gets remodeled and becomes vascularized as new muscle cells come in.”

Testing the patch loaded with FSTL1 in a heart attack model in mice and pigs showed that it stimulated tissue regeneration even when implanted after the injury. For example, in pigs that had suffered a heart attack, the fraction of blood pumped out of the left ventricle dropped from the normal 50 percent to 30 percent. But after the patch was surgically placed onto the heart a week after injury, function recovered to 40 percent and remained stable. The pigs’ heart tissue also scarred considerably less.

Ruiz-Lozano is the co-founder of EpikaBio, a startup that aims to bring the patches to human clinical trials as soon as possible.

Story Source:

The above post is reprinted from materials provided by UC San Diego/SBP.

How your brain decides blame and punishment

from
BIOENGINEER.ORG http://bioengineer.org/brain-decides-blame-punishment/

Juries in criminal cases typically decide if someone is guilty, then a judge determines a suitable level of punishment. New research confirms that these two separate assessments of guilt and punishment — though related — are calculated in different parts of the brain. In fact, researchers found that they can disrupt and change one decision without affecting the other.


New work by researchers at Vanderbilt University and Harvard University confirms that a specific area of the brain, the dorsolateral prefrontal cortex, is crucial to punishment decisions. Researchers predicted and found that by altering brain activity in this brain area, they could change how subjects punished hypothetical defendants without changing the amount of blame placed on the defendants.

“We were able to significantly change the chain of decision-making and reduce punishment for crimes without affecting blameworthiness,” said René Marois, professor and chair of psychology at Vanderbilt and co-principal author of the study. “This strengthens evidence that the dorsolateral prefrontal cortex integrates information from other parts of the brain to determine punishment and shows a clear neural dissociation between punishment decisions and moral responsibility judgements.”

The research titled “From Blame to Punishment: Disrupting Prefrontal Cortex Activity Reveals Norm Enforcement Mechanisms” was published on Sept. 17 in the journal Neuron.

The Experiment

The researchers used repetitive transcranial magnetic stimulation (rTMS) on a specific area of the dorsolateral prefrontal cortex to briefly alter activity in this brain region and consequently change the amount of punishment a person doled out.

“Many studies show the integrative function of the dorsolateral prefrontal cortex in relatively simple cognitive tasks, and we believe that this relatively basic process forms the foundation for far more complex forms of behavior and decision-making, such as norm enforcement,” said lead author Joshua Buckholtz, now an assistant professor of psychology at Harvard.

The researchers conducted experiments with 66 volunteer men and women. Participants were asked to make punishment and blameworthiness decisions in a series of scenarios in which a suspect committed a crime. The scenarios varied by harm caused (ranging from property loss to grievous injury and death) and by how culpable the suspect was for the act (fully responsible or not, due to mitigating circumstances). Half of the subjects received active rTMS while the other half received a sham, or placebo, version of rTMS.

Level of Harm

Across all participants and all trials, both culpability and level of harm were significant predictors of the amount of punishment the subjects deemed appropriate. But subjects receiving active rTMS chose significantly lower punishments for fully culpable suspects than did those subjects receiving sham rTMS, particularly in scenarios that resulted in low to moderate harm. Additional analyses suggested that the effect was due to impaired integration of signals for harm and culpability.
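
The pattern described here (punishment predicted by harm and culpability, with active rTMS selectively lowering punishment in culpable, low-to-moderate-harm cases) is the kind of three-way interaction one would probe with a model like the sketch below. The simulated data, effect sizes and variable names are hypothetical; this is not the study’s actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative only: simulate ratings that mimic the reported pattern and
# fit a harm x culpability x rTMS interaction model.
rng = np.random.default_rng(0)
n = 1000
harm = rng.integers(1, 6, n)       # 1 = property loss ... 5 = death
culpable = rng.integers(0, 2, n)   # 1 = fully responsible
active = rng.integers(0, 2, n)     # 1 = active rTMS, 0 = sham

punishment = (
    1.0 * harm + 2.0 * culpable
    - 1.5 * active * culpable * (harm <= 3)  # simulated rTMS effect
    + rng.normal(0, 1, n)
)

df = pd.DataFrame(dict(punishment=punishment, harm=harm,
                       culpable=culpable, active=active))
model = smf.ols("punishment ~ harm * culpable * active", data=df).fit()
print(model.summary())  # inspect the harm:culpable:active terms
```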

“Temporarily disrupting dorsolateral prefrontal cortex function appears to alter how people use information about harm and culpability to render these decisions. In other words, punishment requires that people balance these two influences, and the rTMS manipulation interfered with this balance, especially under conditions in which these factors are dissonant, such as when the intent is clear but the harm outcome is mild,” said Buckholtz.

Implications

The research team’s main goal in this work is to expand the knowledge of how the brain assesses and then integrates information relevant to guilt and punishment decisions. It will also advance the burgeoning interdisciplinary study of law and neuroscience.

“This research gives us deeper insights into how people make decisions relevant to law, and particularly how different parts of the brain contribute to decisions about crime and punishment. We hope that these insights will help to build a foundation for better understanding, and perhaps one day better combatting, decision-making biases in the legal system,” said co-author Owen Jones, professor of law and biological sciences at Vanderbilt and director of the MacArthur Foundation Research Network on Law and Neuroscience.

Story Source:

The above post is reprinted from materials provided by Vanderbilt University.

‘Tree of life’ for 2.3 million species released

from
BIOENGINEER.ORG http://bioengineer.org/tree-life-2-3-million-species-released/

A first draft of the “tree of life” for the roughly 2.3 million named species of animals, plants, fungi and microbes has been released, and two University of Michigan biologists played a key role in its creation.


A collaborative effort among 11 institutions, the tree depicts the relationships among living things as they diverged from one another over time, tracing back to the beginning of life on Earth more than 3.5 billion years ago.

Tens of thousands of smaller trees have been published over the years for select branches of the tree of life (some containing upwards of 100,000 species), but this is the first time those results have been combined into a single tree that encompasses all of life. The end result is a digital resource that is available free online for anyone to use or edit, much like a “Wikipedia” for evolutionary trees.

Understanding how the millions of species on Earth are related to one another helps scientists discover new drugs, increase crop and livestock yields, and trace the origins and spread of infectious diseases such as HIV, Ebola and influenza.

“This is the first real attempt to connect the dots and put it all together,” said principal investigator Karen Cranston of Duke University. “Think of it as Version 1.0.” A paper summarizing the findings was published online in Proceedings of the National Academy of Sciences on Sept. 18.

U-M evolutionary biologist Stephen Smith heads the group that tackled the nitty-gritty details of piecing together all the existing branches, stems and twigs of life’s tree into a single diagram. Cody Hinchliff, formerly a postdoctoral researcher in Smith’s lab who is now at the University of Idaho, did much of the heavy lifting on the project and shares first-author credits with Smith on the PNAS paper.

Rather than build the tree of life from scratch, the researchers pieced it together by compiling thousands of smaller chunks that had already been published online and merging them into a gigantic “supertree” that encompasses all named species.

“Many participants on the project contributed hundreds of hours tracking down and cleaning up thousands of trees from the literature, then selecting 484 of them that were used to generate the draft tree of life,” Hinchliff said.

Combining the 484 trees was a painstaking process that took three years to complete, said Smith, an assistant professor in the Department of Ecology and Evolutionary Biology.

Smith and Hinchliff brought both computer savvy and knowledge of evolutionary biology to the project, which required them to write tens of thousands of lines of computer code and to create several new software packages.

“In addition to the process of combining existing trees, much of what was done at the University of Michigan was the development of tools and techniques and the analysis of the tree itself,” Smith said. “To complete this project, we had to code our own solutions. There was nothing out of the box that we could use.”

The aim was to create software tools and algorithms that balanced performance with efficiency when combining large numbers of trees, Hinchliff said.

“Our software, which is called ‘treemachine,’ took a few days to generate the current draft tree of life on a moderately outfitted desktop workstation in Stephen’s office,” he said. “For comparison, other state-of-the-art methods we tried would have taken hundreds of years to finish on that kind of hardware.”
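
To give a feel for what “combining trees” means computationally, here is a deliberately simplified sketch: treat each input tree as a set of clades (sets of taxa), walk the trees in rank order, and keep each clade that is compatible with everything accepted so far. The real treemachine synthesis is graph-based and far more sophisticated; this is only a conceptual stand-in with toy data.

```python
# Toy supertree idea: a clade is a set of taxa; two clades are compatible
# if they are disjoint or nested. Walk source trees in rank order and keep
# each clade that conflicts with nothing accepted so far.

def compatible(a: frozenset, b: frozenset) -> bool:
    return a.isdisjoint(b) or a <= b or b <= a

def build_supertree(ranked_trees):
    """ranked_trees: list of lists of clades, highest-priority tree first."""
    accepted = []
    for tree in ranked_trees:
        for clade in tree:
            if all(compatible(clade, c) for c in accepted):
                accepted.append(clade)
    return accepted

tree1 = [frozenset("ABC"), frozenset("AB")]   # ((A,B),C)
tree2 = [frozenset("ABD"), frozenset("BD")]   # ((B,D),A), conflicts with tree1

print(build_supertree([tree1, tree2]))
# keeps {A,B,C} and {A,B}; {A,B,D} and {B,D} are rejected as conflicting
```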

Another challenge faced by the team: The vast majority of evolutionary trees are published as PDFs and other image files that are impossible to enter into a database or merge with other trees.

“There’s a pretty big gap between the sum of what scientists know about how living things are related, and what’s actually available digitally,” Cranston said.

As a result, the relationships depicted in some parts of the tree, such as the branches representing the pea and sunflower families, don’t always agree with expert opinion.

Other parts of the tree, particularly insects and microbes, remain elusive.

That’s because even the most popular online archive of raw genetic sequences, from which many evolutionary trees are built, contains DNA data for less than 5 percent of the tens of millions of species estimated to exist on Earth.

“As important as showing what we do know about relationships, this first tree of life is also important in revealing what we don’t know,” said co-author Douglas Soltis of the University of Florida.

To help fill in the gaps, the team is also developing software that will enable researchers to log on and update and revise the tree as new data come in for the millions of species still being named or discovered.

“This is just the beginning,” Smith said. “While the tree of life is interesting in its own right, our database of thousands of curated trees is an even more useful resource. We hope that this publication will encourage other researchers to contribute their own studies or to enter information from previously published sources.”

“Twenty-five years ago, people said this goal of huge trees was impossible,” Soltis said. “The Open Tree of Life is an important starting point that other investigators can now refine and improve for decades to come.”

Story Source:

The above post is reprinted from materials provided by Duke University.