Thursday, March 31, 2016

Cold-adapted attenuated polio virus — towards a post-eradication vaccine

Motor learning tied to intelligent control of sensory neurons in muscles

Infections of the heart with common viruses

Living off the fat of the land

How the brain processes emotions

from
BIOENGINEER.ORG http://bioengineer.org/how-the-brain-processes-emotions/

Two neurons of the basolateral amygdala. MIT neuroscientists have found that these neurons play a key role in separating information about positive and negative experiences.

Some mental illnesses may stem, in part, from the brain’s inability to correctly assign emotional associations to events. For example, people who are depressed often do not feel happy even when experiencing something that they normally enjoy.

A new study from MIT reveals how two populations of neurons in the brain contribute to this process. The researchers found that these neurons, located in an almond-sized region known as the amygdala, form parallel channels that carry information about pleasant or unpleasant events.

Learning more about how this information is routed and misrouted could shed light on mental illnesses including depression, addiction, anxiety, and posttraumatic stress disorder, says Kay Tye, the Whitehead Career Development Assistant Professor of Brain and Cognitive Sciences and a member of MIT’s Picower Institute for Learning and Memory.

“I think this project really cuts across specific categorizations of diseases and could be applicable to almost any mental illness,” says Tye, the senior author of the study, which appears in the March 31 online issue of Neuron.

The paper’s lead authors are postdoc Anna Beyeler and graduate student Praneeth Namburi.

Emotional circuits

In a previous study, Tye’s lab identified two populations of neurons involved in processing positive and negative emotions. One of these populations relays information to the nucleus accumbens, which plays a role in learning to seek rewarding experiences, while the other sends input to the centromedial amygdala.

In the new study, the researchers wanted to find out what those neurons actually do as an animal reacts to a frightening or pleasurable stimulus. To do that, they first tagged each population with a light-sensitive protein called channelrhodopsin. In three groups of mice, they labeled cells projecting to the nucleus accumbens, the centromedial amygdala, and a third population that connects to the ventral hippocampus. Tye’s lab has previously shown that the connection to the ventral hippocampus is involved in anxiety.

Tagging the neurons is necessary because the populations that project to different targets are otherwise indistinguishable. “As far as we can tell they’re heavily intermingled,” Tye says. “Unlike some other regions of the brain, there is no topographical separation based on where they go.”

After labeling each cell population, the researchers trained the mice to discriminate between two different sounds, one associated with a reward (sugar water) and the other associated with a bitter taste (quinine). They then recorded electrical activity from each group of neurons as the mice encountered the two stimuli. This technique allows scientists to compare the brain’s anatomy (which neurons are connected to each other) and its physiology (how those neurons respond to environmental input).

The researchers were surprised to find that neurons within each subpopulation did not all respond the same way. Some responded to one cue and some responded to the other, and some responded to both. Some neurons were excited by the cue while others were inhibited.

“The neurons within each projection are very heterogeneous. They don’t all do the same thing,” Tye says.

However, despite these differences, the researchers did find overall patterns for each population. Among the neurons that project to the nucleus accumbens, most were excited by the rewarding stimulus and did not respond to the aversive one. Among neurons that project to the centromedial amygdala, most were excited by the aversive cue but not the rewarding cue. Neurons that project to the ventral hippocampus appeared to be more balanced between responding to the positive and negative cues.

“This is consistent with the previous paper, but we added the actual neural dynamics of the firing and the heterogeneity that was masked by the previous approach of optogenetic manipulation,” Tye says. “The missing piece of that story was what are these neurons actually doing, in real time, when the animal is being presented with stimuli.”

Digging deep

The findings suggest that to fully understand how the brain processes emotions, neuroscientists will have to delve deeper into more specific populations, Tye says.

“Five or 10 years ago, everything was all about specific brain regions. And then in the past four or five years there’s been more focus on specific projections. And now, this study presents a window into the next era, when even specific projections are not specific enough. There’s still heterogeneity even when you subdivide at this level,” she says. “We’ve still got a long way to go in terms of appreciating the full complexities of the brain.”

“Neuroscience is quickly moving beyond the classical idea of ‘one brain region equals one function,’” says Joshua Johansen, a team leader at the RIKEN Brain Science Institute in Japan, who was not involved in the research. “This paper represents an important step in this process by showing that within the amygdala, the way distinct populations of cells process information is a critical determinant of how emotional responses arise.”

Another question still remaining is why these different populations are intermingled in the amygdala. One hypothesis is that the cells responding to different inputs need to be able to quickly interact with each other, coordinating responses to an urgent signal, such as an alert that danger is present. “We are exploring the interactions between these different projections, and we think that could be a key to how we so quickly select an appropriate action when we’re presented with a stimulus,” Tye says.

In the long term, the researchers hope their work will lead to new therapies for mental illnesses. “The first step is to define the circuits and then try to go in animal models of these pathologies and see how these circuits are functioning differently. Then we can try to develop strategies to restore them and try to translate that to human patients,” says Beyeler, who is soon starting her own lab at the University of Lausanne to further pursue this line of research.

Story Source:

The above post is reprinted from materials provided by MIT NEWS

The post How the brain processes emotions appeared first on Scienmag.

Handheld surgical ‘pen’ prints human stem cells

from
BIOENGINEER.ORG http://bioengineer.org/handheld-surgical-pen-prints-human-stem-cells/

In a landmark proof-of-concept experiment, Australian researchers have used a handheld 3D printing pen to ‘draw’ human stem cells in freeform patterns with extremely high survival rates.

The device, developed out of collaboration between ARC Centre of Excellence for Electromaterials Science (ACES) researchers and orthopaedic surgeons at St Vincent’s Hospital, Melbourne, is designed to allow surgeons to sculpt customised cartilage implants during surgery.

Using a hydrogel bio-ink to carry and support living human stem cells, and a low powered light source to solidify the ink, the pen delivers a cell survival rate in excess of 97%.

3D bioprinters have the potential to revolutionise tissue engineering: they can be used to print cells, layer-by-layer, to build up artificial tissues for implantation.

But in some applications, such as cartilage repair, the exact geometry of an implant cannot be precisely known prior to surgery. This makes it extremely difficult to pre-prepare an artificial cartilage implant.

The BioPen is held in the surgeon’s hands, allowing the surgeon unprecedented control in treating defects by filling them with bespoke scaffolds.

Professor Peter Choong, Director of Orthopaedics at St Vincent’s Hospital Melbourne, developed the concept with ACES Director Professor Gordon Wallace.

“The development of this type of technology is only possible with interactions between scientists and clinicians – clinicians to identify the problem and scientists to develop a solution,” Professor Choong said.

The team designed the BioPen with the practical constraints of surgery in mind and fabricated it using 3D printed medical grade plastic and titanium. The device is small, lightweight, ergonomic and sterilisable. A low powered light source is fixed to the device and solidifies the inks during dispensing.

“The biopen project highlights both the challenges and exciting opportunities in multidisciplinary research. When we get it right we can make extraordinary progress at a rapid rate,” Professor Wallace said.

The work is published in the journal Biofabrication.

Design expertise and fabrication of the BioPen was supported by the Materials Node of the Australian National Fabrication Facility.

###

Media Contact

Natalie Foxon
nfoxon@uow.edu.au
@arc_aces

http://www.electromaterials.edu.au

The post Handheld surgical ‘pen’ prints human stem cells appeared first on Scienmag.

Illuminating the inner ‘machines’ that give bacteria an energy boost

Wednesday, March 30, 2016

Cancer gene drives vascular disorder

Identification of a new protein essential for ovule and sperm formation

Study: Simple blood test can detect evidence of concussions up to a week after injury

Stem cells used to successfully regenerate damage in corticospinal injury

from
BIOENGINEER.ORG http://bioengineer.org/stem-cells-used-to-successfully-regenerate-damage-in-corticospinal-injury/

Writing in the March 28, 2016 issue of Nature Medicine, researchers at University of California, San Diego School of Medicine and Veterans Affairs San Diego Healthcare System, with colleagues in Japan and Wisconsin, report that they have successfully directed stem cell-derived neurons to regenerate lost tissue in damaged corticospinal tracts of rats, resulting in functional benefit.

“The corticospinal projection is the most important motor system in humans,” said senior study author Mark Tuszynski, MD, PhD, professor in the UC San Diego School of Medicine Department of Neurosciences and director of the UC San Diego Translational Neuroscience Institute. “It has not been successfully regenerated before. Many have tried, many have failed — including us, in previous efforts.”

“The new thing here was that we used neural stem cells for the first time to determine whether they, unlike any other cell type tested, would support regeneration. And to our surprise, they did.”

Specifically, the researchers grafted multipotent neural progenitor cells into sites of spinal cord injury in rats. The stem cells were directed to specifically develop as a spinal cord, and they did so robustly, forming functional synapses that improved forelimb movements in the rats. The feat upends an existing belief that corticospinal neurons lacked internal mechanisms needed for regeneration.

Previous studies have reported functional recovery in rats following various therapies for spinal cord injury, but none had involved regeneration of corticospinal axons. In humans, the corticospinal tract extends from the cerebral cortex in the upper brain down into the spinal cord.

“We humans use corticospinal axons for voluntary movement,” said Tuszynski. “In the absence of regeneration of this system in previous studies, I was doubtful that most therapies taken to humans would improve function. Now that we can regenerate the most important motor system for humans, I think that the potential for translation is more promising.”

Nonetheless, the road to testing and treatment in people remains long and uncertain.

“There is more work to do prior to moving to humans,” Tuszynski said. “We must establish long-term safety and long-term functional benefit in animals. We must devise methods for transferring this technology to humans in larger animal models. And we must identify the best type of human neural stem cell to bring to the clinic.”

###

Co-authors include Ken Kadoya, UC San Diego and Hokkaido University, Japan; Paul Lu, UC San Diego and VA San Diego Healthcare System; Kenny Nguyen, Corrine Lee-Kubli, Hiromi Kumamaru, Gunnar Poplawski, Jennifer Dulin, Yoshio Takashima, Jeremy Biane and James Conner, UC San Diego; Lin Yao, Joshua Knackert and Su-Chun Zhang, University of Wisconsin.

Funding for this research came, in part, from the Veterans Administration, the National Institutes of Health (grant NS09881), the Craig H. Neilsen Foundation, the Bernard and Anne Spitzer Charitable Trust, the D. Miriam and Sheldon Adelson Medical Research Foundation and Kitami Kobayashi Hospital.

Media Contact

Scott LaFee
slafee@ucsd.edu
619-543-6163
@UCSanDiego

http://www.ucsd.edu

The post Stem cells used to successfully regenerate damage in corticospinal injury appeared first on Scienmag.

Scientists unlock genetic secret that could help fight malaria

Saturday, March 26, 2016

How one gene contributes to two diseases

from
BIOENGINEER.ORG http://bioengineer.org/how-one-gene-contributes-to-two-diseases/

The gene Shank3 has been linked to both autism and schizophrenia. Researchers found that two different mutations of the Shank3 gene produce some distinct molecular and behavioral effects in mice.

Although it is known that psychiatric disorders have a strong genetic component, untangling the web of genes contributing to each disease is a daunting task. Scientists have found hundreds of genes that are mutated in patients with disorders such as autism, but each patient usually has only a handful of these variations.

To further complicate matters, some of these genes contribute to more than one disorder. One such gene, known as Shank3, has been linked to both autism and schizophrenia.

MIT neuroscientists have now shed some light on how a single gene can play a role in more than one disease. In a study appearing in the Dec. 10 online edition of Neuron, they revealed that two different mutations of the Shank3 gene produce some distinct molecular and behavioral effects in mice.

“This study gives a glimpse into the mechanism by which different mutations within the same gene can cause distinct defects in the brain, and may help to explain how they may contribute to different disorders,” says Guoping Feng, the James W. and Patricia Poitras Professor of Neuroscience at MIT, a member of MIT’s McGovern Institute for Brain Research, a member of the Stanley Center for Psychiatric Research at the Broad Institute, and the senior author of the study.

The findings also suggest that identifying the brain circuits affected by mutated genes linked to psychiatric disease could help scientists develop more personalized treatments for patients in the future, Feng says.

The paper’s lead authors are McGovern Institute research scientist Yang Zhou, graduate students Tobias Kaiser and Xiangyu Zhang, and research affiliate Patricia Monteiro.

Disrupted communication

The protein encoded by Shank3 is found in synapses — the junctions between neurons that allow them to communicate with each other. Shank3 is a scaffold protein, meaning it helps to organize hundreds of other proteins clustered on the postsynaptic cell membrane, which are required to coordinate the cell’s response to signals from the presynaptic cell.

In 2011, Feng and colleagues showed that by deleting Shank3 in mice they could induce two of the most common traits of autism — avoidance of social interaction, and compulsive, repetitive behavior. A year earlier, researchers at the University of Montreal identified a Shank3 mutation in patients suffering from schizophrenia, which is characterized by hallucinations, cognitive impairment, and abnormal social behavior.

Feng wanted to find out how these two different mutations in the Shank3 gene could play a role in such different disorders. To do that, he and his colleagues engineered mice with each of the two mutations: The schizophrenia-related mutation results in a truncated version of the Shank3 protein, while the autism-linked mutation leads to a total loss of the Shank3 protein.

Behaviorally, the mice shared many defects, including strong anxiety. However, the mice with the autism mutation had very strong compulsive behavior, manifested by excessive grooming, which was rarely seen in mice with the schizophrenia mutation.

In the mice with the schizophrenia mutation, the researchers saw a type of behavior known as social dominance. These mice trimmed the whiskers and facial hair of the genetically normal mice sharing their cages, to an extreme extent. This is a typical way for mice to display their social dominance, Feng says.

By activating the mutations in different parts of the brain and at different stages of development, the researchers found that the two mutations affected brain circuits in different ways. The autism mutation exerted its effects early in development, primarily in a part of the brain known as the striatum, which is involved in coordinating motor planning, motivation, and habitual behavior. Feng believes that disruption of synapses in the striatum contributes to the compulsive behavior seen in those mice.

In mice carrying the schizophrenia-associated mutation, early development was normal, suggesting that truncated Shank3 can adequately fill in for the normal version during this stage. However, later in life, the truncated version of Shank3 interfered with synaptic functions and connections in the brain’s cortex, where executive functions such as thought and planning occur. This suggests that different segments of the protein — including the stretch that is missing in the schizophrenia-linked mutation — may be crucial for different roles, Feng says.

The new paper represents an important first step in understanding how different mutations in the same gene can lead to different diseases, says Joshua Gordon, an associate professor of psychiatry at Columbia University.

“The key is to identify how the different mutations alter brain function in different ways, as done here,” says Gordon, who was not involved in the research. “Autism strikes early in childhood, while schizophrenia typically arises in adolescence or early adulthood. The finding that the autism-associated mutation has effects at a younger age than the schizophrenia-associated mutation is particularly intriguing in this context.”

Modeling disease

Although only a small percentage of autism patients have mutations in Shank3, many other variant synaptic proteins have been associated with the disorder. Future studies should help to reveal more about the role of the many genes and mutations that contribute to autism and other disorders, Feng says. Shank3 alone has at least 40 identified mutations, he says.

“We cannot consider them all to be the same,” he says. “To really model these diseases, precisely mimicking each human mutation is critical.”

Understanding exactly how these mutations influence brain circuits should help researchers develop drugs that target those circuits and match them with the patients who would benefit most, Feng says, adding that a tremendous amount of work needs to be done to get to that point.

His lab is now investigating what happens in the earliest stages of the development of mice with the autism-related Shank3 mutation, and whether any of those effects can be reversed either during development or later in life.

The research was funded by the Simons Center for the Social Brain at MIT, the Stanley Center for Psychiatric Research at the Broad Institute of MIT and Harvard, the Poitras Center for Affective Disorders Research at MIT, and the National Institute of Mental Health.

Story Source:

The above post is reprinted from materials provided by MIT NEWS

The post How one gene contributes to two diseases appeared first on Scienmag.

Protein imaging reveals detailed brain architecture

from
BIOENGINEER.ORG http://bioengineer.org/protein-imaging-reveals-detailed-brain-architecture/

Labeling different proteins in a single tissue sample offers a new way to classify neurons and other cells. On the top row, pyramidal neurons are shown in green, and different types of inhibitory interneurons are labeled red, blue, and orange. In the bottom row, at far left, interneurons only are labeled. The two middle images show blood vessels in cyan, and astrocytes in purple. At far right, every neuron in the sample is labeled green.

MIT chemical engineers and neuroscientists have developed a new way to classify neurons by labeling and imaging the proteins found in each cell. This type of imaging offers clues to each neuron’s function and should help in mapping the human brain, the researchers say.

“Each cell uses a unique combination of proteins. It’s basically a fingerprint,” says Kwanghun Chung, who is the Samuel A. Goldblith Assistant Professor in the Department of Chemical Engineering, a member of MIT’s Institute for Medical Engineering and Science (IMES) and Picower Institute for Learning and Memory, and the leader of the research team. “If you can look at expression patterns of many proteins, then you can guess each cell’s type and what it’s doing.”

Using this approach, the researchers were able to visualize 22 different proteins inside human brain slices, but the method could be scaled to analyze many more proteins and larger tissue samples. This could help scientists learn more about how diseases alter brain chemistry.

“Now, researchers will be able to investigate the differences between brains from disease models and normal animals, simultaneously looking at potentially dozens of different molecules. This is very important, as the individual variation between brains would make it difficult to make solid connections when looking at those same molecules, one at a time, between dozens of samples,” says graduate student Evan Murray, one of the lead authors of a paper describing the technique in the Dec. 3 issue of Cell.

The paper’s other lead authors are graduate students Jae Hun Cho, Daniel Goodwin, and Justin Swaney, and postdoc Taeyun Ku.

Using a novel method, researchers are able to image and label proteins found in each brain cell from a single tissue sample.

Video: Melanie Gonick/MIT (protein imaging renderings courtesy of Kwanghun Chung and Evan Murray)

Label, rinse, repeat

The key advance of the new technology, known as SWITCH, is the ability to preserve tissue in such a way that it can be imaged repeatedly, with different proteins labeled each time.

To achieve that, the researchers devised a method for controlling the chemical reactions required for tissue preservation and labeling. This allows them to first preserve the tissue, then label a certain protein and image it. They can then wash away the tagging molecule and label a different protein, over and over again.

Controlling the chemical reactions requires a pair of buffers — solutions of weak acids and bases — that alter the tissue’s environment. One of the buffers, known as SWITCH-Off, halts most chemical reactions in the tissue, while the SWITCH-On buffer allows them to resume.

To prepare the tissue samples, the researchers first add the SWITCH-Off buffer, followed by chemicals necessary for tissue preservation, the most important of which is glutaraldehyde. Because the chemicals cannot react with any cells, they diffuse evenly throughout the sample. “It’s like these chemicals are in a stealth mode. They are not detected by tissue,” Chung says.

When the researchers add the SWITCH-On buffer, the glutaraldehyde forms a gel that preserves the tissue. The researchers also add detergent to destroy the lipids of the cell membranes, making the cell interiors more visible to a light microscope.

Once the tissue is preserved and ready for imaging, the researchers add the SWITCH-Off buffer again. With the tissue in an unreactive state, they add labels such as antibodies or dyes, which can be tailored to detect not only proteins but also DNA, neurotransmitters, or lipids. Once the labels have diffused through the tissue, adding the SWITCH-On buffer allows all cells to be exposed to the labels simultaneously.
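The label-rinse-repeat cycle above can be summarized as a small sketch. This is purely illustrative pseudocode in Python, not the authors’ software (the real protocol is wet-lab chemistry); the function and target names are hypothetical. Each round toggles the tissue into an unreactive state, lets a label diffuse evenly, switches reactions back on so all cells bind the label simultaneously, images, and washes out:

```python
# Hypothetical sketch of the SWITCH label-rinse-repeat workflow.
# All names are illustrative; only the control flow mirrors the text above.

def switch_rounds(sample, targets):
    """Image one preserved sample repeatedly, one protein label per round."""
    images = {}
    for target in targets:
        sample["reactive"] = False       # SWITCH-Off: halt reactions so the
        sample["label"] = target         # label diffuses evenly while unbound
        sample["reactive"] = True        # SWITCH-On: all cells bind at once
        images[target] = f"image of {target}"   # acquire the image
        sample["label"] = None           # wash out the tag, repeat
    return images

sample = {"reactive": False, "label": None}
result = switch_rounds(sample, ["parvalbumin", "GFAP", "NeuN"])
```

Because the sample itself is preserved rather than consumed, the loop can in principle run for dozens of rounds, which is what allowed 22 proteins to be imaged in one piece of tissue.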

Protein analysis

In the Cell study, the researchers labeled 22 different proteins in a small section of human brain tissue (roughly 3 millimeters by 3 millimeters by 0.1 millimeters). After 22 rounds of labeling, the tissue was still in good condition, so the researchers believe this technique could be used to image even more proteins.

They also examined the distribution of six proteins in human visual cortex tissue and were able to label and image the myelinated fibers that connect different regions of the brain. “If you can visualize these fibers then you can really understand brain connectivity and the fundamental laws that govern how these wires are formed and connected,” Chung says.

The size of the tissue that can be imaged is limited only by the amount of time required for labeling the proteins and imaging the sample.

It takes about a month for each labeling molecule to diffuse through a cubic-centimeter-sized tissue sample, but Chung and colleagues recently reported in the Proceedings of the National Academy of Sciences that they could speed this up dramatically by exposing the tissue to a randomly changing electric field. This cuts the diffusion time to about a day.

The imaging time depends on the type of microscope used. For this study, the researchers used a light sheet microscope, which can image samples about 100 times faster than a traditional light microscope. Using this microscope, it took about two hours to image an entire mouse brain, compared to about three days with a traditional microscope.

“There are other ways of doing proteomic imaging, but many of them are two-dimensional, or not scalable, or require special equipment,” Chung says. “But with this technique, anyone can do it and it’s scalable.”

Robert Brown, chair of neurology at the University of Massachusetts Medical School, says the new technique is part of a “new generation of imaging technology based on careful manipulation of biochemical structures.”

“It’s extraordinary because it allows one to look for multiple targets simultaneously in the same cell, with three-dimensional resolution, which has not been feasible with previous imaging methods,” he adds.

Chung’s lab will make detailed protocols and other resources available through its website. He now plans to start using SWITCH to study human neurological disorders and is also working on other technologies to help map the human brain.

Story Source:

The above post is reprinted from materials provided by MIT NEWS

The post Protein imaging reveals detailed brain architecture appeared first on Scienmag.

Singing in the brain

from
BIOENGINEER.ORG http://bioengineer.org/singing-in-the-brain/

When zebra finches first begin to sing, they produce only nonsense syllables similar to the babble of human babies. Now researchers at MIT have uncovered the brain activity that supports the birds’ song-learning process.


Male zebra finches, small songbirds native to central Australia, learn their songs by copying what they hear from their fathers. These songs, often used as mating calls, develop early in life as juvenile birds experiment with mimicking the sounds they hear.

MIT neuroscientists have now uncovered the brain activity that supports this learning process. Sequences of neural activity that encode the birds’ first song syllable are duplicated and altered slightly, allowing the birds to produce several variations on the original syllable. Eventually these syllables are strung together into the bird’s signature song, which remains constant for life.

“The advantage here is that in order to learn new syllables, you don’t have to learn them from scratch. You can reuse what you’ve learned and modify it slightly. We think it’s an efficient way to learn various types of syllables,” says Tatsuo Okubo, a former MIT graduate student and lead author of the study, which appears in the Nov. 30 online edition of Nature.

Okubo and his colleagues believe that this type of neural sequence duplication may also underlie other types of motor learning. For example, the sequence used to swing a tennis racket might be repurposed for a similar motion such as playing Ping-Pong. “This seems like a way that sequences might be learned and reused for anything that involves timing,” says Emily Mackevicius, an MIT graduate student who is also an author of the paper.

The paper’s senior author is Michale Fee, a professor of brain and cognitive sciences at MIT and a member of the McGovern Institute for Brain Research.

Bursting into song

Previous studies from Fee’s lab have found that a part of the brain’s cortex known as the HVC is critical for song production.

Typically, each song lasts for about one second and consists of multiple syllables. Fee’s lab has found that in adult birds, individual HVC neurons show a very brief burst of activity — about 10 milliseconds or less — at one moment during the song. Different sets of neurons are active at different times, and collectively the song is represented by this sequence of bursts.

In the new Nature study, the researchers wanted to figure out how those neural patterns develop in newly hatched zebra finches. To do that, they recorded electrical activity in HVC neurons for up to three months after the birds hatched.

When zebra finches begin to sing, about 30 days after hatching, they produce only nonsense syllables known as subsong, similar to the babble of human babies. At first, the duration of these syllables is highly variable, but after a week or so they turn into more consistent sounds called protosyllables, which last about 100 milliseconds. Each bird learns one protosyllable that forms a scaffold for subsequent syllables.

The researchers found that within the HVC, neurons fire in a sequence of short bursts corresponding to the first protosyllable that each bird learns. Most of the neurons in the HVC participate in this original sequence, but as time goes by, some of these neurons are extracted from the original sequence and produce a new, very similar sequence. This chain of neural sequences can be repurposed to produce different syllables.

“From that short sequence it splits into new sequences for the next new syllables,” Mackevicius says. “It starts with that short chain that has a lot of redundancy in it, and splits off some neurons for syllable A and some neurons for syllable B.”

This splitting of neural sequences happens repeatedly until the birds can produce between three and seven different syllables, the researchers found. This entire process takes about two months, at which point each bird has settled on its final song.
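The duplicate-then-diverge process can be illustrated with a toy sketch. This is not the authors’ model, just a hedged analogy in Python: a protosyllable sequence is copied, and each copy is then perturbed independently so the two daughter sequences drift apart, the way the text describes neurons being extracted from the original chain:

```python
# Toy illustration (not the authors' model) of sequence splitting:
# duplicate one burst sequence, then modify each copy independently.

import random

def split_sequence(protosyllable, rng):
    """Return two daughter sequences derived from one protosyllable chain."""
    a = list(protosyllable)   # copy A keeps most of the original chain
    b = list(protosyllable)   # copy B starts identical
    for seq in (a, b):
        # Perturb one randomly chosen "neuron" so the copies diverge.
        idx = rng.randrange(len(seq))
        seq[idx] = seq[idx] + "'"
    return a, b

rng = random.Random(0)
proto = ["n1", "n2", "n3", "n4"]
syllable_a, syllable_b = split_sequence(proto, rng)
```

Applied repeatedly, the same duplicate-and-modify step yields the three to seven distinct syllables the birds eventually settle on, without ever learning a syllable from scratch.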

“This is a very natural way for motor patterns to evolve, by repeating something and then molding it, but until now nobody had any good data to understand how the brain actually does that,” says Ofer Tchernichovski, a professor of psychology at Hunter College who was not involved in the research. “What’s cool about this paper is they managed to follow how brain centers govern these transitions from simple repetitive patterns to more complex patterns.”

Evolution by duplication

The researchers note that this process is similar to what is believed to drive the production of new genes and traits during evolution.

“If you duplicate a gene, then you could have separate mutations in both copies of the gene and they could eventually do different functions,” Okubo says. “It’s similar with motor programs. You can duplicate the sequence and then independently modify the two daughter motor programs so that they can now each do slightly different things.”

Mackevicius is now studying how input from sound-processing parts of the brain to the HVC contributes to the formation of these neural sequences.

Story Source:

The above post is reprinted from materials provided by MIT NEWS

The post Singing in the brain appeared first on Scienmag.

How neurons lose their connections

from
BIOENGINEER.ORG http://bioengineer.org/how-neurons-lose-their-connections/

MIT neuroscientists discovered that the protein CPG2 connects the cytoskeleton (represented by the scaffold of the bridge) and the endocytic machinery (represented by the cars) during the reabsorption of glutamate receptors.

Strengthening and weakening the connections between neurons, known as synapses, is vital to the brain’s development and everyday function. One way that neurons weaken their synapses is by swallowing up receptors on their surfaces that normally respond to glutamate, one of the brain’s excitatory chemicals.

In a new study, MIT neuroscientists have detailed how this receptor reabsorption takes place, allowing neurons to get rid of unwanted connections and to dampen their sensitivity in cases of overexcitation.

“Pulling in and putting out receptors is a dynamic process, and it’s highly regulated by a neuron’s environment,” says Elly Nedivi, a professor of brain and cognitive sciences and member of MIT’s Picower Institute for Learning and Memory. “Our understanding of how receptors are pulled in and how regulatory pathways impact that has been quite poor.”

Nedivi and colleagues found that a protein known as CPG2 is key to this regulation, which is notable because mutations in the human version of CPG2 have been previously linked to bipolar disorder. “This sets the stage for testing various human mutations and their impact at the cellular level,” says Nedivi, who is the senior author of a Jan. 14 Current Biology paper describing the findings.

The paper’s lead author is former Picower Institute postdoc Sven Loebrich. Other authors are technical assistant Marc Benoit, recent MIT graduate Jaclyn Konopka, former postdoc Joanne Gibson, and Jeffrey Cottrell, the director of translational research at the Stanley Center for Psychiatric Research at the Broad Institute.

Forming a bridge

Neurons communicate at synapses via neurotransmitters such as glutamate, which flow from the presynaptic to the postsynaptic neuron. This communication allows the brain to coordinate activity and store information such as new memories.

Previous studies have shown that postsynaptic cells can actively pull in some of their receptors in a phenomenon known as long-term depression (LTD). This important process allows cells to weaken and eventually eliminate poor connections, as well as to recalibrate their set point for further excitation. It can also protect them from overexcitation by making them less sensitive to an ongoing stimulus.

Pulling in receptors requires the cytoskeleton, which provides the physical power, and a specialized complex of proteins known as the endocytic machinery. This machinery performs endocytosis — the process of pulling in a section of the cell membrane in the form of a vesicle, along with anything attached to its surface. At the synapse, this process is used to internalize receptors.

Until now, it was unknown how the cytoskeleton and the endocytic machinery were linked. In the new study, Nedivi’s team found that the CPG2 protein forms a bridge between the cytoskeleton and the endocytic machinery.

“CPG2 acts like a tether for the endocytic machinery, which the cytoskeleton can use to pull in the vesicles,” Nedivi says. “The glutamate receptors that are in the membrane will get pinched off and internalized.”

They also found that CPG2 binds to the endocytic machinery through a protein called EndoB2. This CPG2-EndoB2 interaction occurs only during receptor internalization provoked by synaptic stimulation and is distinct from the constant recycling of glutamate receptors that also occurs in cells. Nedivi’s lab has previously shown that this process, which does not change the cells’ overall sensitivity to glutamate, is also governed by CPG2.

“This study is intriguing because it shows that by engaging different complexes, CPG2 can regulate different types of endocytosis,” says Linda Van Aelst, a professor at Cold Spring Harbor Laboratory who was not involved in the research.

When synapses are too active, it appears that an enzyme called protein kinase A (PKA) binds to CPG2 and causes it to launch activity-dependent receptor absorption. CPG2 may also be controlled by other factors that regulate PKA, including hormone levels, Nedivi says.

Link to bipolar disorder

In 2011, a large consortium including researchers from the Broad Institute discovered that a gene called SYNE1 is number two on the hit list of genes linked to susceptibility for bipolar disorder. They were excited to find that this gene encoded CPG2, a regulator of glutamate receptors, given prior evidence implicating these receptors in bipolar disorder.

In a study published in December, Nedivi and colleagues, including Loebrich and co-lead author Mette Rathje, identified and isolated the human messenger RNA that encodes CPG2. They showed that when rat CPG2 was knocked out, its function could be restored by the human version of the protein, suggesting both versions have the same cellular function.

Rathje, a Picower Institute postdoc in Nedivi’s lab, is now studying mutations in human CPG2 that have been linked to bipolar disorder. She is testing their effect on synaptic function in rats, in hopes of revealing how those mutations might disrupt synapses and influence the development of the disorder.

Nedivi suspects that CPG2 is one player in a constellation of genes that influence susceptibility to bipolar disorder.

“My prediction would be that in the general population there’s a range of CPG2 function, in terms of efficacy,” Nedivi says. “Within that range, it will depend what the rest of the genetic and environmental constellation is, to determine whether it gets to the point of causing a disease state.”

The research was funded by the Picower Institute Innovation Fund and the Gail Steel Fund for Bipolar Research.

Story Source:

The above post is reprinted from materials provided by MIT NEWS

The post How neurons lose their connections appeared first on Scienmag.

Friday, March 25, 2016

Cells in standby mode

Learning more about the brain

from
BIOENGINEER.ORG http://bioengineer.org/learning-more-about-the-brain/

Children participating in an UnrulyArt session.

The question is straightforward enough: How does the brain learn to make sense of the visual world? The full answer is complicated by the fact that infants can’t talk about what they’re taking in. Pawan Sinha seems to have bridged this gap. Through his research, aided by art-making, the professor of vision and computational neuroscience works with children who have gained sight after a lifetime of blindness, and from them he gathers data on how the brain immediately starts learning.

The findings, he says, have multiple applications. They cross over into his work on autism, with the hope of a breakthrough in how cognitive issues are diagnosed and treated. They also have the potential to influence learning technology, creating machines that are both more dynamic and efficient, whether it’s using video for training vision systems, making automated face recognizers or ensuring industrial quality control.

No need for difficulty

One processing question Sinha examines is how a person can still recognize an object whose appearance varies from one occasion to the next. A long-held scientific view was that extracting fine details was crucial for recognition, and much of machine vision was built on this idea, Sinha says.

The challenge with these systems has been their operational brittleness; they often lack robustness. In his research, Sinha discovered a more parsimonious encoding strategy: the brain appears to grab onto coarse information and discard smaller, unnecessary detail. Rather than trying to determine precisely where a fine edge is in an image and exactly how strong it is, he says, many neurons seem to care only about the coarse placement of large regions and a similarly coarse assessment of their brightness relationships. The finding shifted Sinha’s thinking. “The brain may adopt fairly simple strategies to answer seemingly complex questions,” he says.
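As a toy illustration of that coarse strategy (not Sinha’s actual algorithm; the 3×3 grid and the ordinal-comparison signature are assumptions made for this sketch), one can compare images using only the brightness ordering of large regions:

```python
# Compare images by the ordinal brightness relationships of large regions,
# ignoring fine detail. Illustrative only; the grid size and matching rule
# are arbitrary choices, not Sinha's published method.

def coarse_signature(image, grid=3):
    """Reduce a 2-D list of gray levels to pairwise orderings of the
    mean brightness of a coarse grid of regions."""
    h, w = len(image), len(image[0])
    means = []
    for gy in range(grid):
        for gx in range(grid):
            vals = [image[y][x]
                    for y in range(gy * h // grid, (gy + 1) * h // grid)
                    for x in range(gx * w // grid, (gx + 1) * w // grid)]
            means.append(sum(vals) / len(vals))
    return [means[i] > means[j]
            for i in range(len(means)) for j in range(i + 1, len(means))]

def similarity(img_a, img_b):
    """Fraction of pairwise region orderings the two images share."""
    sa, sb = coarse_signature(img_a), coarse_signature(img_b)
    return sum(a == b for a, b in zip(sa, sb)) / len(sa)

# A bright-top, dark-bottom pattern still matches itself after fine-detail
# perturbations, because only coarse relationships are encoded.
pattern = [[200] * 9] * 3 + [[120] * 9] * 3 + [[40] * 9] * 3
perturbed = [[v + (x % 7) for x, v in enumerate(row)] for row in pattern]
print(similarity(pattern, perturbed))  # high: most orderings preserved
```

An edge-precise matcher would treat the perturbed image as a different object; the coarse signature largely does not, which is the robustness Sinha describes.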

Professor of vision and computational neuroscience Pawan Sinha discusses surprising findings from his object recognition research.

Video: MIT Industrial Liaison Program

With this approach, Sinha created vision systems for tasks such as face detection and industrial inspection that he says are both lightweight and robust in challenging real-world settings. A camera could be set up and trained on an object, and the technology could detect new instances of the object or its flaws. The computational simplicity of the approach enables the system to work in real time; in a production setting, anything slower wouldn’t be acceptable, he says.

Another possibility would allow a person to take a picture of a product with a cellphone camera, and a database would identify similar-looking objects. This would, for instance, allow a person to search for products pictured in a magazine or find similar products in a store. The system still needs development, but once completed, “there would be many interesting applications. Patterns and objects in the real world would, in effect, become ‘hyper-links’ to access a variety of related information,” Sinha says.

Science on the receiving end

In 2005, as an outgrowth of his vision research, Sinha started Project Prakash, “light” in Sanskrit. The goal was to go into remote areas of India and treat children who had been blind since birth. Over 40,000 children have been screened; over 450 have gained sight through surgery. It began as a humanitarian effort, but Sinha says that fortuitously, it ended up producing valuable scientific information. Since the children started seeing as soon as bandages were removed, Sinha and his students could study how the brain developed with the onset of sight.

The only similar kind of dynamic is with a newborn, but the limitation is that babies can’t give complex feedback. A 10-year-old Prakash child could. “For neuroscience, this is like a goldmine of data,” Sinha says. The “gateway result” was that even in the face of prolonged deprivation, the brain retains significant plasticity to reorganize quickly and learn.

Following up on this result, Sinha says he discovered that a key element in the learning equation is dynamic information. The brain can and does learn from static images, but movement speeds up and simplifies the otherwise complex process, by highlighting the aspects of the visual world that go together and those that need to be segregated. “You put the world in motion and it’s as if a magical switch goes off,” he says.

The discovery opens up potential applications. Rather than using thousands of images, as has often been the case in computer vision systems, a few minutes of video may produce equivalent results for both people and machines. Not only is this more effective, Sinha says, but it also means people and companies don’t need the luxury of time to achieve useful results.

Better understanding autism

The Project Prakash findings overlap and influence Sinha’s work on autism. At root is a processing issue. Newly sighted and autistic children both focus on the details of an object; their brains seem to over-fragment their visual field. Instead of perceiving the overall gestalts, the children tend to focus on the local bits and pieces. The difference is that the children with autism appear to stay with this bias while the Prakash children grow out of it, Sinha says.

Again, motion may be key. Sinha’s hypothesis is that children with autism may have difficulty anticipating events in dynamic settings. Sinha says the eventual findings could change the approach to language and social interaction processing in autism and lead to better diagnosis and treatment. “If we do validate the theory, then we would have advanced our understanding of one of the great riddles in brain science,” he says.

While he gathers data, Sinha says that he has already received a certain validation, from parents. Scientists can work with children, but the time for such interactions is limited, so the picture is a “small vignette” at best, compromised by numerous practical constraints. Parents are 24-hour observers, and, from their vantage point, Sinha says that they see merit in and a basis for his theory.

Painting in colors

Since he was a child, Sinha has drawn and painted, and he uses art in his work. The questions are the same: How does the brain recognize and then communicate an image? In India, after the Prakash children had gained sight, Sinha observed that they were shy and withdrawn, with limited opportunities to socialize. His group has developed an activity called UnrulyArt, where kids are free to play with colors and splatter at will.

The effects are multifold, he says. The children become more outwardly engaged. They also produce beautiful pictures, which raises their self-confidence. Adults admire their work, adding to the boost. And the parents see that the outside world views their children in a new light. Buoyed by that success, he is also conducting UnrulyArt sessions with special needs children in the United States.

Sinha says that he doesn’t exactly know why art helps children become more verbal. It could be the forced interaction of a project. He says that he might eventually prove his hypothesis, but this is one instance where formal data aren’t necessary. “Even if we never know how art has that beneficial impact, I think it’s an activity worth undertaking,” he says.

Story Source:

The above post is reprinted from materials provided by MIT NEWS

The post Learning more about the brain appeared first on Scienmag.

Thursday, March 24, 2016

Unraveling the mystery of stem cells

from
BIOENGINEER.ORG http://bioengineer.org/unraveling-the-mystery-of-stem-cells/

How do neurons become neurons? They all begin as stem cells, undifferentiated and with the potential to become any cell in the body.

Until now, however, exactly how that happens has been something of a scientific mystery. New research conducted by UC Santa Barbara neuroscientists has deciphered some of the earliest changes that occur before stem cells transform into neurons and other cell types.

Working with human embryonic stem cells in petri dishes, postdoctoral fellow Jiwon Jang discovered a new pathway that plays a key role in cell differentiation. The findings appear in the journal Cell.

“Jiwon’s discovery is very important because it gives us a fundamental understanding of the way stem cells work and the way they begin to undergo differentiation,” said senior author Kenneth S. Kosik, the Harriman Professor of Neuroscience Research in UCSB’s Department of Molecular, Cellular, and Developmental Biology. “It’s a very fundamental piece of knowledge that had been missing in the field.”

When stem cells begin to differentiate, they form precursors: neuroectoderms, which have the potential to become brain cells such as neurons; or mesendoderms, which ultimately become the cells that make up organs, muscles, blood and bone.

Jang discovered a number of steps along what he and Kosik labeled the PAN (Primary cilium, Autophagy Nrf2) axis. This newly identified pathway appears to determine a stem cell’s final form.

“The PAN axis is a very important player in cell fate decisions,” explained Jang. “G1 lengthening induces cilia protrusion and the longer those cellular antennae are exposed, the more signals they can pick up.”

For some time, scientists have known about Gap 1 (G1), the first of four phases in the cell cycle, but they weren’t clear about its role in stem cell differentiation. Jang’s research demonstrates that in stem cells destined to become neurons, the lengthening phase of G1 triggers other actions that cause stem cells to morph into neuroectoderms.

During this elongated G1 interval, cells develop primary cilia, antennalike protrusions capable of sensing their environment. The cilia activate the cells’ trash disposal system in a process known as autophagy.

Another important factor is Nrf2, which monitors cells for dangerous molecules such as free radicals — a particularly important job for healthy cell formation.

“Nrf2 is like a guardian to the cell and makes sure the cell is functioning properly,” said Kosik, co-director of the campus’s Neuroscience Research Institute. “Nrf2 levels are very high in stem cells because stem cells are the future. Without Nrf2 watching out for the integrity of the genome, future progeny are in trouble.”

Jang’s work showed that levels of Nrf2 begin to decline during the elongated G1 interval. This is significant, Kosik noted, because Nrf2 doesn’t usually diminish until the cell has already started to differentiate.

“We thought that, under the same conditions, if the cells are identical, both would differentiate the same way, but that is not what we found,” Jang said. “Cell fate is controlled by G1 lengthening, which extends cilia’s exposure to signals from their environment. That is one cool concept.”

###

Media Contact

Julie Cohen
julie.cohen@ucsb.edu
805-893-7220
@ucsantabarbara

http://www.ucsb.edu

The post Unraveling the mystery of stem cells appeared first on Scienmag.

Study reveals a basis for attention deficits

from
BIOENGINEER.ORG http://bioengineer.org/study-reveals-a-basis-for-attention-deficits/

The brain’s thalamic reticular nucleus (TRN), highlighted in the center, is responsible for blocking out distracting sensory input.

More than 3 million Americans suffer from attention deficit hyperactivity disorder (ADHD), a condition that usually emerges in childhood and can lead to difficulties at school or work.

A new study from MIT and New York University links ADHD and other attention difficulties to the brain’s thalamic reticular nucleus (TRN), which is responsible for blocking out distracting sensory input. In a study of mice, the researchers discovered that a gene mutation found in some patients with ADHD produces a defect in the TRN that leads to attention impairments.

The findings suggest that drugs boosting TRN activity could improve ADHD symptoms and possibly help treat other disorders that affect attention, including autism.

“Understanding these circuits may help explain the converging mechanisms across these disorders. For autism, schizophrenia, and other neurodevelopmental disorders, it seems like TRN dysfunction may be involved in some patients,” says Guoping Feng, the James W. and Patricia Poitras Professor of Neuroscience and a member of MIT’s McGovern Institute for Brain Research and the Stanley Center for Psychiatric Research at the Broad Institute.

Feng and Michael Halassa, an assistant professor of psychiatry, neuroscience, and physiology at New York University, are the senior authors of the study, which appears in the March 23 online edition of Nature. The paper’s lead authors are MIT graduate student Michael Wells and NYU postdoc Ralf Wimmer.

Paying attention

Feng, Halassa, and their colleagues set out to study a gene called Ptchd1, whose loss can produce attention deficits, hyperactivity, intellectual disability, aggression, and autism spectrum disorders. Because the gene is carried on the X chromosome, most individuals with these Ptchd1-related effects are male.

In mice, the researchers found that the part of the brain most affected by the loss of Ptchd1 is the TRN, which is a group of inhibitory nerve cells in the thalamus. It essentially acts as a gatekeeper, preventing unnecessary information from being relayed to the brain’s cortex, where higher cognitive functions such as thought and planning occur.

“We receive all kinds of information from different sensory regions, and it all goes into the thalamus,” Feng says. “All this information has to be filtered. Not everything we sense goes through.”

If this gatekeeper is not functioning properly, too much information gets through, allowing the person to become easily distracted or overwhelmed. This can lead to problems with attention and difficulty in learning.

The researchers found that when the Ptchd1 gene was knocked out in mice, the animals showed many of the same behavioral defects seen in human patients, including aggression, hyperactivity, attention deficit, and motor impairments. When the Ptchd1 gene was knocked out only in the TRN, the mice showed only hyperactivity and attention deficits.

Toward new treatments

At the cellular level, the researchers found that the Ptchd1 mutation disrupts channels that carry potassium ions, which prevents TRN neurons from sufficiently inhibiting thalamic output to the cortex. The researchers were also able to restore the neurons’ normal function with a compound that boosts activity of the potassium channel. This intervention reversed the TRN-related symptoms, but not the symptoms that appear to be caused by deficits in other circuits.

“The authors convincingly demonstrate that specific behavioral consequences of the Ptchd1 mutation — attention and sleep — arise from an alteration of a specific protein in a specific brain region, the thalamic reticular nucleus. These findings provide a clear and straightforward pathway from gene to behavior and suggest a pathway toward novel treatments for neurodevelopmental disorders such as autism,” says Joshua Gordon, an associate professor of psychiatry at Columbia University, who was not involved in the research.

Most people with ADHD are now treated with psychostimulants such as Ritalin, which are effective in about 70 percent of patients. Feng and Halassa are now working on identifying genes that are specifically expressed in the TRN in hopes of developing drug targets that would modulate TRN activity. Such drugs may also help patients who don’t have the Ptchd1 mutation, because their symptoms are also likely caused by TRN impairments, Feng says.

The researchers are also investigating when Ptchd1-related problems in the TRN arise and at what point they can be reversed. And, they hope to discover how and where in the brain Ptchd1 mutations produce other abnormalities, such as aggression.

The research was funded by the Simons Foundation Autism Research Initiative, the National Institutes of Health, the Poitras Center for Affective Disorders Research, and the Stanley Center for Psychiatric Research at the Broad Institute.

Story Source:

The above post is reprinted from materials provided by MIT NEWS

The post Study reveals a basis for attention deficits appeared first on Scienmag.

Modified maggots could help human wound healing

Wednesday, March 23, 2016

Unlocking the secrets of gene expression

New imaging scans track down persistent cancer cells

Scientific art lights up Main Street

from
BIOENGINEER.ORG http://bioengineer.org/scientific-art-lights-up-main-street/

Science art on display at the public galleries of MIT’s Koch Institute for Integrative Cancer Research.

Through large windows on the corner of Vassar and Main streets hangs a series of oversized portraits and landscapes, plainly visible to those on the outside. The display features a breathtaking array of colors and shapes configured in otherworldly patterns — but this is no ordinary art collection.

Each of the striking scientific images adorning the walls of the Koch Institute for Integrative Cancer Research’s public galleries tells a story of hope and progress, capturing a snapshot of potentially groundbreaking research. Together, they comprise the 2016 Image Awards exhibition, a showcase of winning images from the Koch Institute’s annual competition to recognize the best and brightest of biomedical imagery from MIT laboratories.

The art of science

From immune cells devouring cancer to layered nanoparticles speckled across an early-stage tumor, these images are windows into a microscopic world rarely seen outside of the lab.

“Every year anew, these images offer glimpses into the wondrous world of research and really get at what drives [Koch Institute] researchers, which is the promise of scientific and technological advances converging to overcome disease,” says Anne Deconinck, executive director of the Koch Institute, to an auditorium packed with guests from MIT and the general public. The crowd assembled to witness the unveiling of the new collection and to listen to the researchers who captured these images share the stories behind the displays.

“The images really speak for themselves in showcasing how amazing the work here is at the Koch Institute,” Deconinck adds.

This year’s nine winners were selected from more than 150 candidates by a panel of judges whose expertise spans a wide range of disciplines, including biology, visual arts, and media production. The submissions come from the life sciences all across MIT, and the winners are chosen based both on their visual merit and on the research they depict.

“The criteria that we give to judges is that we want them to pick images that are visually stunning and scientifically compelling,” explains Erika Reinfeld, the Koch Institute’s public outreach coordinator. “Once chosen, I work with the winners to craft captions for the display that explain both aspects of their images. The opening event is very much an extension of that process — a chance for the creators to share, in their own words, what these images mean to them.”

“I love how the gallery engages the public,” says Bethany Millard, chair of the MIT Corporation Partners Program and founder of Phosphorous Productions, a company dedicated to creating science podcasts that educate and inspire, and one of the 2016 Image Awards judges. “The images are arresting, fascinating, and ultimately works of art. To experience them is to feel a profound sense of wonder and gratitude for the work of the Koch Institute.”

Unveiling the artwork

The debut of the new collection took place on March 3, the eve of the Koch Institute building’s five-year anniversary.

One by one, presenters gave brief overviews of the research underpinning their images. Many of the pieces represent recent breakthroughs and, ultimately, the possibility of better outcomes for patients.

Dexter Jin of the Whitehead Institute’s Gupta Laboratory presented “Duct Duct Goose,” a purple-and-cyan image showing the architecture of healthy human mammary tissue grown on three-dimensional hydrogel scaffolds. The model, described in a recently published paper, provides researchers with the opportunity to study normal mammary development and cancer formation.

“I love how all the amazing data that come from such a groundbreaking method can be summarized in such a visually stunning way,” Jin says.

“Having an image displayed is meaningful because it brings you closer to the people you may one day help heal,” says Omar F. Khan of the Koch Institute’s Anderson and Langer Laboratories. “In a way, it’s a conversation with the viewer and the scientist’s way of reinforcing to the public that we haven’t forgotten our pledge to make a difference.”

Khan is a member of the group that created the image “Nerves of Gold” — a fractal pattern of golden strands that comes from a close-up of an advanced piece used in restorative bionics. Its design represents a major step forward in improving the elasticity of hard-metal components. The device can be implanted in the body following neural separation to stand in for the nervous system and prevent atrophy by electrically stimulating muscles as the nerves repair.

“I think the efforts made by the Koch [Institute] to make our work accessible to the public are incredibly rewarding for [researchers],” says Asha Patel of both the Anderson and Langer Laboratories, whose image, “Suit Your Cell,” shows how different cell types respond to a wide variety of polymers during an automated screening experiment.

“The gallery brings together, through art, ideas as well as people who otherwise may never have reason to meet,” adds Patel.

Every year, along with the images from MIT, the public galleries also feature a piece from Wellcome Images, a London-based world leader in the collection of biomedical images. This year’s selection, “Stem Education,” features a stem cell cryogenically frozen in a hydrogel matrix and was chosen by the judges from this year’s 20 Wellcome Image Award winners.

“It has been a great honor to be part of the Koch Institute’s Image Awards since the beginning five years ago,” says Catherine Draycott, head of Wellcome Images and Image Award judge. “To see [the] images created through the ground-breaking research of the [Koch] Institute and of MIT provides a thrilling window into the potential of the future of medicine.”

Many of the image creators credited the unique community found at MIT and the Koch Institute with providing the ideal venue for a meeting of art, science, and engineering.

“MIT is a place of progressive, creative, and free thinkers who love to share ideas and stories,” says Khan. “When this exuberance is coupled with a genuine desire to help the world, wonderful things can begin to happen.”

The new images will be on display in the Public Galleries until next spring. To learn more about the Koch Institute Public Galleries, including hours, and to see past years’ winners, visit ki-galleries.mit.edu.

Story Source:

The above post is reprinted from materials provided by MIT NEWS

The post Scientific art lights up Main Street appeared first on Scienmag.

Reconstructing the cell surface in a test tube

Sunday, March 20, 2016

How language gives your brain a break

from
BIOENGINEER.ORG http://bioengineer.org/how-language-gives-your-brain-a-break/


Image: Christine Daniloff/MIT

Here’s a quick task: Take a look at the sentences below and decide which is the most effective.

(1) “John threw out the old trash sitting in the kitchen.”

(2) “John threw the old trash sitting in the kitchen out.”

Either sentence is grammatically acceptable, but you probably found the first one to be more natural. Why? Perhaps because of the placement of the word “out,” which seems to fit better in the middle of this word sequence than the end.

In technical terms, the first sentence has a shorter “dependency length” — a shorter total distance, in words, between the crucial elements of a sentence. Now a new study of 37 languages by three MIT researchers has shown that most languages move toward “dependency length minimization” (DLM) in practice. That means language users have a global preference for more locally grouped dependent words, whenever possible.

“People want words that are related to each other in a sentence to be close together,” says Richard Futrell, a PhD student in the Department of Brain and Cognitive Sciences at MIT, and a lead author of a new paper detailing the results. “There is this idea that the distance between grammatically related words in a sentence should be short, as a principle.”

The paper, published this week in the Proceedings of the National Academy of Sciences, suggests people modify language in this way because it makes things simpler for our minds — as speakers, listeners, and readers.

“When I’m talking to you, and you’re trying to understand what I’m saying, you have to parse it, and figure out which words are related to each other,” Futrell observes. “If there is a large amount of time between one word and another related word, that means you have to hold one of those words in memory, and that can be hard to do.”
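Dependency length is easy to compute once a sentence has been parsed. In this sketch, each word records the index of its grammatical head; the two parses of the example sentences are rough hand-built approximations, not treebank output:

```python
# Total dependency length: the sum over all words of the distance, in
# words, between each word and its grammatical head. The parses below
# are rough hand-written approximations for the two example sentences.

def dependency_length(heads):
    """heads[i] is the index of word i's head, or None for the root."""
    return sum(abs(i - h) for i, h in enumerate(heads) if h is not None)

# (1) John threw out the old trash sitting in the kitchen
heads_1 = [1, None, 1, 5, 5, 1, 5, 6, 9, 7]
# (2) John threw the old trash sitting in the kitchen out
heads_2 = [1, None, 4, 4, 1, 4, 5, 8, 6, 1]

print(dependency_length(heads_1))  # 14
print(dependency_length(heads_2))  # 20
```

Moving “out” to the end lengthens its link back to “threw” from one word to eight, which is why sentence (2) feels harder to process.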

While the existence of DLM had previously been posited and identified in a couple of languages, this is the largest study of its kind to date.

“It was pretty interesting, because people had really only looked at it in one or two languages,” says Edward Gibson, a professor of cognitive science and co-author of the paper. “We thought it was probably true [more widely], but that’s pretty important to show. … We’re not showing perfect optimization, but [DLM] is a factor that’s involved.”

From head to tail

To conduct the study, the researchers used four large databases of sentences that have been parsed grammatically: one from Charles University in Prague, one from Google, one from the Universal Dependencies Consortium (a new group of computational linguists), and a Chinese-language database from the Linguistic Data Consortium at the University of Pennsylvania. The sentences are taken from published texts, and thus represent everyday language use.

To quantify the effect of placing related words closer to each other, the researchers compared the dependency lengths of the sentences to a couple of baselines for dependency length in each language. One baseline randomizes the distance between each “head” word in a sentence (such as “threw,” above) and the “dependent” words (such as “out”). However, since some languages, including English, have relatively strict word-order rules, the researchers also used a second baseline that accounted for the effects of those word-order relationships.
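A minimal sketch of the first kind of baseline, using the same hypothetical toy parse as above (not the study's data): keep the dependency tree fixed, shuffle the word order many times, and compare the attested order's dependency length to the random average.

```python
import random

def dependency_length(arcs):
    return sum(abs(h - d) for h, d in arcs)

def random_order_baseline(arcs, n_words, n_samples=2000, seed=0):
    """Mean dependency length when the same head-dependent tree is
    linearized in random word orders (word-order rules ignored)."""
    rng = random.Random(seed)
    total = 0
    positions = list(range(n_words))
    for _ in range(n_samples):
        rng.shuffle(positions)  # assign each word a random position
        total += sum(abs(positions[h] - positions[d]) for h, d in arcs)
    return total / n_samples

# Toy parse of "John threw out the old trash sitting in the kitchen."
arcs = [(1, 0), (1, 2), (1, 5), (5, 3), (5, 4),
        (5, 6), (6, 7), (7, 9), (9, 8)]
observed = dependency_length(arcs)
baseline = random_order_baseline(arcs, n_words=10)
print(observed, round(baseline, 1))  # attested order is far shorter than chance
```

The second, stricter baseline would additionally constrain the shuffles to respect the language's word-order rules; the comparison logic is the same.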

In both cases, Futrell, Gibson, and co-author Kyle Mahowald found, the DLM tendency exists, to varying degrees, among languages. Italian appears to be highly optimized for short dependencies; German, which has some notoriously indirect sentence constructions, is far less optimized, according to the analysis.

And the researchers also discovered that “head-final” languages such as Japanese, Korean, and Turkish, where the head word comes last, show less length minimization than is typical. This could be because these languages have extensive case markings, which denote the function of a word (whether a noun is the subject, the direct object, and so on). The case markings would thus compensate for the potential confusion of the larger dependency lengths.

“It’s possible, in languages where it’s really obvious from the case marking where the word fits into the sentence, that might mean it’s less important to keep the dependencies local,” Futrell says.

Other scholars who have done research on this topic say the study provides valuable new information.

“It’s interesting and exciting work,” says David Temperley, a professor at the University of Rochester, who along with his Rochester colleague Daniel Gildea has co-authored a study comparing dependency length in English and German. “We wondered how general this phenomenon would turn out to be.”

Futrell, Gibson, and Mahowald readily note that the study leaves larger questions open: Does the DLM tendency occur primarily to help the production of language, its reception, a more strictly cognitive function, or all of the above?

“It could be for the speaker, the listener, or both,” Gibson says. “It’s very difficult to separate those.”

Story Source:

The above post is reprinted from materials provided by MIT NEWS

The post How language gives your brain a break appeared first on Scienmag.

Young brains can take on new functions

from
BIOENGINEER.ORG http://bioengineer.org/young-brains-can-take-on-new-functions/

In 2011, MIT neuroscientist Rebecca Saxe and colleagues reported that in blind adults, brain regions normally dedicated to vision processing instead participate in language tasks such as speech and comprehension. Now, in a study of blind children, Saxe’s lab has found that this transformation occurs very early in life, before the age of 4.

Illustration: Jose-Luis Olivares/MIT

The study, appearing in the Journal of Neuroscience, suggests that the brains of young children are highly plastic, meaning that regions usually specialized for one task can adapt to new and very different roles. The findings also help to define the extent to which this type of remodeling is possible.

“In some circumstances, patches of cortex appear to take on other roles than the ones that they most typically have,” says Saxe, a professor of cognitive neuroscience and an associate member of MIT’s McGovern Institute for Brain Research. “One question that arises from that is, ‘What is the range of possible differences between what a cortical region typically does and what it could possibly do?’”

The paper’s lead author is Marina Bedny, a former MIT postdoc who is now an assistant professor at Johns Hopkins University. MIT graduate student Hilary Richardson is also an author of the paper.

Brain reorganization

The brain’s cortex, which carries out high-level functions such as thought, sensory processing, and initiation of movement, is made of sheets of neurons, each dedicated to a certain role. Within the visual system, located primarily in the occipital lobe, most neurons are tuned to respond only to a very specific aspect of visual input, such as brightness, orientation, or location in the field of view.

“There’s this big fundamental question, which is, ‘How did that organization get there, and to what degree can it be changed?’” Saxe says.

One possibility is that neurons in each patch of cortex have evolved to carry out specific roles, and can do nothing else. At the other extreme is the possibility that any patch of cortex can be recruited to perform any kind of computational task.

“The reality is somewhere in between those two,” Saxe says.

To study the extent to which cortex can change its function, scientists have focused on the visual cortex because they can learn a great deal about it by studying people who were born blind.

A landmark 1996 study of blind people found that their visual regions could participate in a nonvisual task — reading Braille. Some scientists theorized that perhaps the visual cortex is recruited for reading Braille because like vision, it requires discriminating very fine-grained patterns.

However, in their 2011 study, Saxe and Bedny found that the visual cortex of blind adults also responds to spoken language. “That was weird, because processing auditory language doesn’t require the kind of fine-grained spatial discrimination that Braille does,” Saxe says.

She and Bedny hypothesized that auditory language processing may develop in the occipital cortex by piggybacking onto the Braille-reading function. To test that idea, they began studying congenitally blind children, including some who had not learned Braille yet. They reasoned that if their hypothesis were correct, the occipital lobe would be gradually recruited for language processing as the children learned Braille.

However, they found that this was not the case. Instead, children as young as 4 already have language-related activity in the occipital lobe.

“The response of occipital cortex to language is not affected by Braille acquisition,” Saxe says. “It happens before Braille and it doesn’t increase with Braille.”

Language-related occipital activity was similar among all of the 19 blind children, who ranged in age from 4 to 17, suggesting that the entire process of occipital recruitment for language processing takes place before the age of 4, Saxe says. Bedny and Saxe have previously shown that this transition occurs only in people blind from birth, suggesting that there is an early critical period after which the cortex loses much of its plasticity.

The new study represents a huge step forward in understanding how the occipital cortex can take on new functions, says Ione Fine, an associate professor of psychology at the University of Washington.

“One thing that has been missing is an understanding of the developmental timeline,” says Fine, who was not involved in the research. “The insight here is that you get plasticity for language separate from plasticity for Braille and separate from plasticity for auditory processing.”

Language skills

The findings raise the question of how the extra language-processing centers in the occipital lobe affect language skills.

“This is a question we’ve always wondered about,” Saxe says. “Does it mean you’re better at those functions because you have more of your cortex doing it? Does it mean you’re more resilient in those functions because now you have more redundancy in your mechanism for doing it? You could even imagine the opposite: Maybe you’re less good at those functions because they’re distributed in an inefficient or atypical way.”

There are hints that the occipital lobe’s contribution to language-related functions “takes the pressure off the frontal cortex,” where language processing normally occurs, Saxe says. Other researchers have shown that suppressing left frontal cortex activity with transcranial magnetic stimulation interferes with language function in sighted people, but not in the congenitally blind.

This leads to the intriguing prediction that a congenitally blind person who suffers a stroke in the left frontal cortex may retain much more language ability than a sighted person would, Saxe says, although that hypothesis has not been tested.

Saxe’s lab is now studying children under 4 to try to learn more about how cortical functions develop early in life, while Bedny is investigating whether the occipital lobe participates in functions other than language in congenitally blind people.

Possible new weapon against PTSD

from
BIOENGINEER.ORG http://bioengineer.org/possible-new-weapon-against-ptsd/

About 8 million Americans suffer from nightmares and flashbacks to a traumatic event. This condition, known as post-traumatic stress disorder (PTSD), is particularly common among soldiers who have been in combat, though it can also be triggered by physical attack or natural disaster.

This illustration shows a brain with the amygdala highlighted in the center. In the background are models of the serotonin molecule.

Illustration: Jose-Luis Olivares/MIT

Studies have shown that trauma victims are more likely to develop PTSD if they have previously experienced chronic stress, and a new study from MIT may explain why. The researchers found that animals that underwent chronic stress prior to a traumatic experience engaged a distinctive brain pathway that encodes traumatic memories more strongly than in unstressed animals.

Blocking this type of memory formation may offer a new way to prevent PTSD, says Ki Goosens, the senior author of the study, which appears in the journal Biological Psychiatry.

“The idea is not to make people amnesic but to reduce the impact of the trauma in the brain by making the traumatic memory more like a ‘normal,’ unintrusive memory,” says Goosens, an assistant professor of neuroscience and investigator in MIT’s McGovern Institute for Brain Research.

The paper’s lead author is former MIT postdoc Michael Baratta.

Strong memories

Goosens’ lab has sought for several years to find out why chronic stress is so strongly linked with PTSD. “It’s a very potent risk factor, so it must have a profound change on the underlying biology of the brain,” she says.

To investigate this, the researchers focused on the amygdala, an almond-sized brain structure whose functions include encoding fearful memories. They found that in animals that developed PTSD symptoms following chronic stress and a traumatic event, serotonin promotes the process of memory consolidation. When the researchers blocked amygdala cells’ interactions with serotonin after trauma, the stressed animals did not develop PTSD symptoms. Blocking serotonin in unstressed animals after trauma had no effect.

“That was really surprising to us,” Baratta says. “It seems like stress is enabling a serotonergic memory consolidation process that is not present in an unstressed animal.”

Memory consolidation is the process by which short-term memories are converted into long-term memories and stored in the brain. Some memories are consolidated more strongly than others. For example, “flashbulb” memories, formed in response to a highly emotional experience, are usually much more vivid and easier to recall than typical memories.

Goosens and colleagues further discovered that chronic stress causes cells in the amygdala to express many more 5-HT2C receptors, which bind to serotonin. Then, when a traumatic experience occurs, this heightened sensitivity to serotonin causes the memory to be encoded more strongly, which Goosens believes contributes to the strong flashbacks that often occur in patients with PTSD.

“It’s strengthening the consolidation process so the memory that’s generated from a traumatic or fearful event is stronger than it would be if you don’t have this serotonergic consolidation engaged,” Baratta says.

“This study is a very nice dissection of the mechanism by which chronic stress seems to activate new pathways not seen in unstressed animals,” says Mireya Nadal-Vicens, medical director of the Center for Anxiety and Traumatic Stress Disorders at Massachusetts General Hospital, who was not part of the research team.

Drug intervention

This memory consolidation process can take hours to days to complete, but once a memory is consolidated, it is very difficult to erase. However, the findings suggest that it may be possible to either prevent traumatic memories from forming so strongly in the first place, or to weaken them after consolidation, using drugs that interfere with serotonin.

“The consolidation process gives us a window in which we can possibly intervene and prevent the development of PTSD. If you give a drug or intervention that can block fear memory consolidation, that’s a great way to think about treating PTSD,” Goosens says. “Such an intervention won’t cause people to forget the experience of the trauma, but they might not have the intrusive memory that is ultimately going to cause them to have nightmares or be afraid of things that are similar to the traumatic experience.”

A drug called agomelatine, which blocks this type of serotonin receptor, is already approved in Europe as an antidepressant.

Such a drug might also be useful to treat patients who already suffer from PTSD. These patients’ traumatic memories are already consolidated, but some research has shown that when memories are recalled, there is a window of time during which they can be altered and reconsolidated. It may be possible to weaken these memories by using serotonin-blocking drugs to interfere with the reconsolidation process, says Goosens, who plans to begin testing that possibility in animals.

The findings also suggest that the antidepressant Prozac and other selective serotonin reuptake inhibitors (SSRIs), which are commonly given to PTSD patients, likely do not help and may actually worsen their symptoms. Prozac enhances the effects of serotonin by prolonging its exposure to brain cells. While this often helps those suffering from depression, “There’s no biological evidence to support the use of SSRIs for PTSD,” Goosens says.

“The consolidation of traumatic memories requires this serotonergic cascade and we want to block it, not enhance it,” she adds. “This study suggests we should rethink the use of SSRIs in PTSD and also be very careful about how they are used, particularly when somebody is recently traumatized and their memories are still being consolidated, or when a patient is undergoing cognitive behavior therapy where they’re recalling the memory of the trauma and the memory is going through the process of reconsolidation.”

“Lost” memories can be found

from
BIOENGINEER.ORG http://bioengineer.org/lost-memories-can-be-found/

Researchers say the findings raise the possibility of developing future treatments that might reverse some of the memory loss seen in early-stage Alzheimer’s.

In the early stages of Alzheimer’s disease, patients are often unable to remember recent experiences. However, a new study from MIT suggests that those memories are still stored in the brain — they just can’t be easily accessed.

The MIT neuroscientists report in Nature that mice in the early stages of Alzheimer’s can form new memories just as well as normal mice but cannot recall them a few days later.

Furthermore, the researchers were able to artificially stimulate those memories using a technique known as optogenetics, suggesting that those memories can still be retrieved with a little help. Although optogenetics cannot currently be used in humans, the findings raise the possibility of developing future treatments that might reverse some of the memory loss seen in early-stage Alzheimer’s, the researchers say.

“The important point is, this is a proof of concept. That is, even if a memory seems to be gone, it is still there. It’s a matter of how to retrieve it,” says Susumu Tonegawa, the Picower Professor of Biology and Neuroscience and director of the RIKEN-MIT Center for Neural Circuit Genetics at the Picower Institute for Learning and Memory.

Tonegawa is the senior author of the study, which appears in the March 16 online edition of Nature. Dheeraj Roy, an MIT graduate student, is the paper’s lead author.

Lost memories

In recent years, Tonegawa’s lab has identified cells in the brain’s hippocampus that store specific memories. The researchers have also shown that they can manipulate these memory traces, or engrams, to plant false memories, activate existing memories, or alter a memory’s emotional associations.

Last year, Tonegawa, Roy, and colleagues found that mice with retrograde amnesia, which follows traumatic injury or stress, had impaired memory recall but could still form new memories. That led the team to wonder whether this might also be true for the memory loss seen in the early stages of Alzheimer’s disease, which occurs before characteristic amyloid plaques appear in patients’ brains.

To investigate that possibility, the researchers studied two different strains of mice genetically engineered to develop Alzheimer’s symptoms, plus a group of healthy mice.

All of these mice, when exposed to a chamber where they received a foot shock, showed fear when placed in the same chamber an hour later. However, when placed in the chamber again several days later, only the normal mice still showed fear. The Alzheimer’s mice did not appear to remember the foot shock.

“Short-term memory seems to be normal, on the order of hours. But for long-term memory, these early Alzheimer’s mice seem to be impaired,” Roy says.

“An access problem”

The researchers then showed that while the mice cannot recall their experiences when prompted by natural cues, those memories are still there.

To demonstrate this, they first tagged the engram cells associated with the fearful experience with a light-sensitive protein called channelrhodopsin, using a technique they developed in 2012. Whenever these tagged engram cells are activated by light, normal mice recall the memory encoded by that group of cells. Likewise, when the researchers placed the Alzheimer’s mice in a chamber they had never seen before and shined light on the engram cells encoding the fearful experience, the mice immediately showed fear.

“Directly activating the cells that we believe are holding the memory gets them to retrieve it,” Roy says. “This suggests that it is indeed an access problem to the information, not that they’re unable to learn or store this memory.”

The researchers also showed that the engram cells of Alzheimer’s mice had fewer dendritic spines, which are small buds that allow neurons to receive incoming signals from other neurons.

Normally, when a new memory is generated, the engram cells corresponding to that memory grow new dendritic spines, but this did not happen in the Alzheimer’s mice. This suggests that the engram cells are not receiving sensory input from another part of the brain called the entorhinal cortex. The natural cue that should reactivate the memory — being in the chamber again — has no effect because the sensory information doesn’t get into the engram cells.

“If we want to recall a memory, the memory-holding cells have to be reactivated by the correct cue. If the spine density does not go up during learning process, then later, if you give a natural recall cue, it may not be able to reach the nucleus of the engram cells,” Tonegawa says.

“This is a remarkable study providing the first proof that the earliest memory deficit in Alzheimer’s involves retrieval of consolidated information,” says Rudolph Tanzi, a professor of neurology at Harvard Medical School, who was not involved in the research. “As a result, the implications for treatment of memory deficits in Alzheimer’s disease based on strengthening synapses are extremely exciting.”

Long-term connection

The researchers were also able to induce a longer-term reactivation of the “lost” memories by stimulating new connections between the entorhinal cortex and the hippocampus.

To achieve this, they used light to optogenetically stimulate entorhinal cortex cells that feed into the hippocampal engram cells encoding the fearful memory. After three hours of this treatment, the researchers waited a week and tested the mice again. This time, the mice could retrieve the memory on their own when placed in the original chamber, and they had many more dendritic spines on their engram cells.

However, this approach does not work if too large a section of the entorhinal cortex is stimulated, suggesting that any potential treatments for human patients would have to be very targeted. Optogenetics is very precise but too invasive to use in humans, and existing methods for deep brain stimulation — a form of electrical stimulation sometimes used to treat Parkinson’s and other diseases — affect too much of the brain.

“It’s possible that in the future some technology will be developed to activate or inactivate cells deep inside the brain, like the hippocampus or entorhinal cortex, with more precision,” Tonegawa says. “Basic research as conducted in this study provides information on cell populations to be targeted, which is critical for future treatments and technologies.”

A new glimpse into working memory

from
BIOENGINEER.ORG http://bioengineer.org/a-new-glimpse-into-working-memory/

Pictured is an artist’s interpretation of neurons firing in sporadic, coordinated bursts. “By having these different bursts coming at different moments in time, you can keep different items in memory separate from one another,” Earl Miller says.

When you hold in mind a sentence you have just read or a phone number you’re about to dial, you’re engaging a critical brain system known as working memory.

For the past several decades, neuroscientists have believed that as information is held in working memory, brain cells associated with that information fire continuously. However, a new study from MIT has upended that theory, instead finding that as information is held in working memory, neurons fire in sporadic, coordinated bursts.

These cyclical bursts could help the brain to hold multiple items in working memory at the same time, according to the researchers.

“By having these different bursts coming at different moments in time, you can keep different items in memory separate from one another,” says Earl Miller, the Picower Professor in MIT’s Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences.

Miller is the senior author of the study, which appears in the March 17 issue of Neuron. Mikael Lundqvist, a Picower Institute postdoc, and Jonas Rose, now at the University of Tübingen in Germany, are the paper’s lead authors.

Bursts of activity

Starting in the early 1970s, experiments showed that when an item is held in working memory, a subset of neurons fires continuously. However, these and subsequent studies of working memory averaged the brain’s activity over seconds or even minutes of performing the task, Miller says.

“The problem with that is, that’s not the way the brain works,” he says. “We looked more closely at this activity, not by averaging across time, but by looking from moment to moment. That revealed that something way more complex is going on.”

Miller and his colleagues recorded neuron activity in animals as they were shown a sequence of three colored squares, each in a different location. Then, the squares were shown again, but one of them had changed color. The animals were trained to respond when they noticed the square that had changed color — a task requiring them to hold all three squares in working memory for about two seconds.

The researchers found that as items were held in working memory, ensembles of neurons in the prefrontal cortex were active in brief bursts, and these bursts only occurred in recording sites in which information about the squares was stored. The bursting was most frequent at the beginning of the task, when the information was encoded, and at the end, when the memories were read out.

Filling in the details

The findings fit well with a model that Lundqvist had developed as an alternative to the model of sustained activity as the neural basis of working memory. According to the new model, information is stored in rapid changes in the synaptic strength of the neurons. The brief bursts serve to “imprint” information in the synapses of these neurons, and the bursts reoccur periodically to reinforce the information as long as it is needed.

The bursts create waves of coordinated activity in the gamma frequency (45 to 100 hertz), like the ones that were observed in the data. These waves occur sporadically, with gaps between them, and each ensemble of neurons, encoding a specific item, produces a different burst of gamma waves. “It’s like a fingerprint,” Lundqvist says.

When this activity is averaged over several repeated trials, it appears as a smooth curve of continuous activity, just as the older models of working memory suggested. However, the MIT team’s new way of measuring and analyzing the data suggests that the full picture is much different.
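The averaging artifact is easy to reproduce in a small simulation (illustrative, made-up numbers, not the study's recordings): each simulated trial is silent except for a few brief bursts at random times, yet the across-trial average looks like low, sustained activity at every time point.

```python
import random

N_BINS, BURST_LEN, N_BURSTS = 200, 10, 4

def bursty_trial(rng):
    """One simulated trial: activity only during a few brief bursts."""
    trial = [0.0] * N_BINS
    for _ in range(N_BURSTS):
        start = rng.randrange(N_BINS - BURST_LEN + 1)
        for t in range(start, start + BURST_LEN):
            trial[t] = 1.0  # active only inside the burst
    return trial

rng = random.Random(1)
n_trials = 500
avg = [0.0] * N_BINS
for _ in range(n_trials):
    for t, x in enumerate(bursty_trial(rng)):
        avg[t] += x / n_trials

one_trial = bursty_trial(rng)
# Any single trial is mostly zeros; the trial average is smooth and nonzero.
print(sum(1 for x in one_trial if x == 0) / N_BINS)  # fraction of silent bins
print(min(avg[20:180]), max(avg[20:180]))            # modest, near-constant level
```

The single-trial traces and the average tell very different stories, which is why looking moment to moment, rather than averaging, changed the picture.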

“It’s like for years you’ve been listening to music from your neighbor’s apartment and all you can hear is the thumping bass part. You’re missing all the details, but if you get close enough to it you see there’s a lot more going on,” Miller says.

The findings suggest that it would be worthwhile to look for this kind of cyclical activity in other cognitive functions such as attention, the researchers say. Oscillations like those seen in this study may help the brain to package information and keep it separate so that different pieces of information don’t interfere with each other.

“Your brain operates in a very sporadic, periodic way, with lots of gaps in between the information the brain represents,” Miller says. “The mind is papering over all the gaps and bubbly dynamics and giving us an impression that things are happening in a smooth way, when our brain is actually working in a very periodic fashion, sending packets of information around.”

Robert Knight, a professor of psychology and neuroscience at the University of California at Berkeley, says the new study “provides compelling evidence that nonlinear oscillatory dynamics underlie prefrontal dependent working memory capacity.”

“The work calls for a new view of the computational processes supporting goal-directed behavior,” adds Knight, who was not involved in the research. “The control processes supporting nonlinear dynamics are not understood, but this work provides a critical guidepost for future work aimed at understanding how the brain enables fluid cognition.”

New gene identified as cause, early indicator of breast cancer

How cancer cells fuel their growth

from
BIOENGINEER.ORG http://bioengineer.org/how-cancer-cells-fuel-their-growth/

Artist’s interpretation of a cancer cell dividing.

Cancer cells are notorious for their ability to divide uncontrollably and generate hordes of new tumor cells. Most of the fuel consumed by these rapidly proliferating cells is glucose, a type of sugar.

Scientists had believed that most of the cell mass that makes up new cells, including cancer cells, comes from that glucose. However, MIT biologists have now found, to their surprise, that the largest source for new cell material is amino acids, which cells consume in much smaller quantities.

The findings offer a new way to look at cancer cell metabolism, a field of research that scientists hope will yield new drugs that cut off cancer cells’ ability to grow and divide.

“If you want to successfully target cancer metabolism, you need to understand something about how different pathways are being used to actually make mass,” says Matthew Vander Heiden, the Eisen and Chang Career Development Professor and an associate professor in the Department of Biology, and a member of MIT’s Koch Institute for Integrative Cancer Research.

Vander Heiden is the senior author of the study, which appears in the journal Developmental Cell on March 7. The paper’s lead author is MIT graduate student Aaron Hosios.

Burning up

Since the 1920s, scientists have known that cancer cells generate energy differently than normal cells, a phenomenon dubbed the “Warburg effect” after its discoverer, German biochemist Otto Warburg. Human cells normally use glucose as an energy source, breaking it down through a series of complex chemical reactions that requires oxygen. Warburg discovered that tumor cells switch to a less efficient metabolic strategy known as fermentation, which does not require oxygen and produces much less energy.

More recently, scientists have theorized that cancer cells use this alternative pathway to create building blocks for new cells. However, one strike against this hypothesis is that much of the glucose is converted into lactate, a waste product that is not useful to cells. Furthermore, there has been very little research on exactly what goes into the composition of new cancer cells or any kind of rapidly dividing mammalian cells.

“Because mammals eat such a diversity of foods, it seemed like an unanswered question about which foods contribute to what parts of mass,” Vander Heiden says.

To determine where cells, including those in tumors, were getting the building blocks they needed, the researchers grew several different types of cancer cells and normal cells in culture dishes. They fed the cells different nutrients labeled with variant forms of carbon and nitrogen, allowing them to track where the original molecules ended up. They also weighed the cells before and after they divided, enabling them to calculate the percentage of cell mass contributed by each of the available nutrients.

Although cells consume glucose and the amino acid glutamine at very high rates, the researchers found that those two molecules contribute little to the mass of new cells — glucose accounts for 10 to 15 percent of the carbon found in the cells, while glutamine contributes about 10 percent of the carbon. Instead, the largest contributors to cell mass were amino acids, which make up proteins. As a group, amino acids (excluding glutamine) contribute the majority of the carbon atoms found in new cells and 20 to 40 percent of the total mass.

“These experiments reveal important details that reinforce our fundamental understanding of the metabolic underpinnings of molecular biosynthesis and cellular proliferation,” says Jared Rutter, a professor of biochemistry at the University of Utah who was not involved in the research. “The MIT team has performed a rigorous and quantitative assessment of the contributions of glucose, glutamine, and other molecules to the mass of proliferating mammalian cells in culture.”

Although initially surprising, the findings make sense, Vander Heiden says, because cells are made mostly of protein.

“There’s some economy in utilizing the simpler, more direct route to build what you’re made out of,” he says. “If you want to build a house out of bricks, it’s easier if you have a pile of bricks around and use those bricks than to start with mud and make new bricks.”

Refocusing the question

It remains something of a mystery why proliferating human cells consume so much glucose. Consistent with previous studies, the researchers found that most of the glucose burned by these cells is excreted as lactate.

“This led us to conclude that the importance of high glucose consumption is not necessarily the manipulation of carbon that allows you to make cell mass, but more for the other products that it provides, such as energy,” Hosios says.

Vander Heiden’s lab is now pursuing a more comprehensive understanding of how the Warburg effect may help cells reproduce. “It refocuses the question,” he says. “It isn’t necessarily about how the Warburg effect helps cells put glucose into cell mass, but more about why does glucose-to-lactate conversion help cells use amino acids to build more cells.”

Other authors of the paper include Vivian Hecht, a former MIT graduate student; Laura Danai, a Koch Institute postdoc; Scott Manalis, the Andrew (1956) and Erna Viterbi Professor in the MIT departments of Biological Engineering and Mechanical Engineering and a member of the Koch Institute; Jeffrey Rathmell, a professor at Vanderbilt University School of Medicine; Marc Johnson, a Vanderbilt University postdoc; and Matthew Steinhauser, an assistant professor of medicine at Harvard Medical School and Brigham and Women’s Hospital.

The research was funded by the National Institutes of Health, the Burroughs Wellcome Fund, and the Damon Runyon Cancer Research Foundation.

Story Source:

The above post is reprinted from materials provided by MIT NEWS

The post How cancer cells fuel their growth appeared first on Scienmag.