Thursday, December 31, 2015

Music in the brain

from
BIOENGINEER.ORG http://bioengineer.org/music-in-the-brain/

    “One of the core debates surrounding music is to what extent it has dedicated mechanisms in the brain and to what extent it piggybacks off of mechanisms that primarily serve other functions,” Josh McDermott says.

    Illustration: Christine Daniloff/MIT

Scientists have long wondered if the human brain contains neural mechanisms specific to music perception. Now, for the first time, MIT neuroscientists have identified a neural population in the human auditory cortex that responds selectively to sounds that people typically categorize as music, but not to speech or other environmental sounds.

“It has been the subject of widespread speculation,” says Josh McDermott, the Frederick A. and Carole J. Middleton Assistant Professor of Neuroscience in the Department of Brain and Cognitive Sciences at MIT. “One of the core debates surrounding music is to what extent it has dedicated mechanisms in the brain and to what extent it piggybacks off of mechanisms that primarily serve other functions.”

The finding was enabled by a new method designed to identify neural populations from functional magnetic resonance imaging (fMRI) data. Using this method, the researchers identified six neural populations with different functions, including the music-selective population and another set of neurons that responds selectively to speech.

“The music result is notable because people had not been able to clearly see highly selective responses to music before,” says Sam Norman-Haignere, a postdoc at MIT’s McGovern Institute for Brain Research.

“Our findings are hard to reconcile with the idea that music piggybacks entirely on neural machinery that is optimized for other functions, because the neural responses we see are highly specific to music,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT and a member of MIT’s McGovern Institute for Brain Research.

Norman-Haignere is the lead author of a paper describing the findings in the Dec. 16 online edition of Neuron. McDermott and Kanwisher are the paper’s senior authors.

See how researchers identified a neural population in the human auditory cortex that responds to music.

Video: Julie Pryor/McGovern Institute

Mapping responses to sound

For this study, the researchers scanned the brains of 10 human subjects listening to 165 natural sounds, including different types of speech and music, as well as everyday sounds such as footsteps, a car engine starting, and a telephone ringing.

The brain’s auditory system has proven difficult to map, in part because of the coarse spatial resolution of fMRI, which measures blood flow as an index of neural activity. In fMRI, “voxels” — the smallest unit of measurement — reflect the response of hundreds of thousands or millions of neurons.

“As a result, when you measure raw voxel responses you’re measuring something that reflects a mixture of underlying neural responses,” Norman-Haignere says.

To tease apart these responses, the researchers used a technique that models each voxel as a mixture of multiple underlying neural responses. Using this method, they identified six neural populations, each with a unique response pattern to the sounds in the experiment, that best explained the data.

“What we found is we could explain a lot of the response variation across tens of thousands of voxels with just six response patterns,” Norman-Haignere says.

One population responded most to music, another to speech, and the other four to different acoustic properties such as pitch and frequency.
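The decomposition described above can be illustrated with a small numerical sketch. The paper's method is a custom voxel decomposition; here plain non-negative matrix factorization, a related technique, serves as a stand-in on synthetic data. All numbers below (500 voxels, the noise-free data, the iteration count) are illustrative assumptions, not the study's parameters.

```python
import numpy as np

# Hypothetical illustration: each voxel's response to 165 sounds is modeled
# as a non-negative mixture of a few underlying component response profiles.
rng = np.random.default_rng(0)
n_voxels, n_sounds, n_components = 500, 165, 6

# Synthesize ground-truth profiles and mixing weights, then form the data.
true_profiles = rng.random((n_components, n_sounds))  # component x sound
true_weights = rng.random((n_voxels, n_components))   # voxel x component
V = true_weights @ true_profiles                      # voxel x sound

# Multiplicative-update NMF: find W, H >= 0 with V ~ W @ H.
W = rng.random((n_voxels, n_components)) + 1e-3
H = rng.random((n_components, n_sounds)) + 1e-3
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

# Six response patterns (rows of H) explain the voxel-by-sound data well.
error = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {error:.4f}")
```

The rows of `H` play the role of the six response patterns: a handful of profiles that, in weighted combination, account for tens of thousands of voxel responses.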

The key to this advance is the researchers’ new approach to analyzing fMRI data, says Josef Rauschecker, a professor of physiology and biophysics at Georgetown University.

“The whole field is interested in finding specialized areas like those that have been found in the visual cortex, but the problem is the voxel is just not small enough. You have hundreds of thousands of neurons in a voxel, and how do you separate the information they’re encoding? This is a study of the highest caliber of data analysis,” says Rauschecker, who was not part of the research team.

Layers of sound processing

The four acoustically responsive neural populations overlap with regions of “primary” auditory cortex, which performs the first stage of cortical processing of sound. Speech and music-selective neural populations lie beyond this primary region.

“We think this provides evidence that there’s a hierarchy of processing where there are responses to relatively simple acoustic dimensions in this primary auditory area. That’s followed by a second stage of processing that represents more abstract properties of sound related to speech and music,” Norman-Haignere says.

The researchers believe there may be other brain regions involved in processing music, including its emotional components. “It’s inappropriate at this point to conclude that this is the seat of music in the brain,” McDermott says. “This is where you see most of the responses within the auditory cortex, but there’s a lot of the brain that we didn’t even look at.”

Kanwisher also notes that “the existence of music-selective responses in the brain does not imply that the responses reflect an innate brain system. An important question for the future will be how this system arises in development: How early is it found in infancy or childhood, and how dependent is it on experience?”

The researchers are now investigating whether the music-selective population identified in this study contains subpopulations of neurons that respond to different aspects of music, including rhythm, melody, and beat. They also hope to study how musical experience and training might affect this neural population.

Story Source:

The above post is reprinted from materials provided by MIT News.

Study finds altered brain chemistry in people with autism

from
BIOENGINEER.ORG http://bioengineer.org/study-finds-altered-brain-chemistry-in-people-with-autism/

    (Left to right) Caroline Robertson and Nancy Kanwisher.

    Photo: Sham Sthankiya

MIT and Harvard University neuroscientists have found a link between a behavioral symptom of autism and reduced activity of a neurotransmitter whose job is to dampen neuron excitation. The findings suggest that drugs that boost the action of this neurotransmitter, known as GABA, may improve some of the symptoms of autism, the researchers say.

Brain activity is controlled by a constant interplay of inhibition and excitation, which is mediated by different neurotransmitters. GABA is one of the most important inhibitory neurotransmitters, and studies of animals with autism-like symptoms have found reduced GABA activity in the brain. However, until now, there has been no direct evidence for such a link in humans.

“This is the first connection in humans between a neurotransmitter in the brain and an autistic behavioral symptom,” says Caroline Robertson, a postdoc at MIT’s McGovern Institute for Brain Research and a junior fellow of the Harvard Society of Fellows. “It’s possible that increasing GABA would help to ameliorate some of the symptoms of autism, but more work needs to be done.”

Robertson is the lead author of the study, which appears in the Dec. 17 online edition of Current Biology. The paper’s senior author is Nancy Kanwisher, the Walter A. Rosenblith Professor of Brain and Cognitive Sciences and a member of the McGovern Institute. Eva-Maria Ratai, an assistant professor of radiology at Massachusetts General Hospital, also contributed to the research.

Too little inhibition

Many symptoms of autism arise from hypersensitivity to sensory input. For example, children with autism are often very sensitive to things that wouldn’t bother other children as much, such as someone talking elsewhere in the room, or a scratchy sweater. Scientists have speculated that reduced brain inhibition might underlie this hypersensitivity by making it harder to tune out distracting sensations.

In this study, the researchers explored a visual task known as binocular rivalry, which requires brain inhibition and has been shown to be more difficult for people with autism. During the task, researchers show each participant two different images, one to each eye. To see the images, the brain must switch back and forth between input from the right and left eyes.

For the participant, it looks as though the two images are fading in and out, as input from each eye takes its turn inhibiting the input coming in from the other eye.

“Everybody has a different rate at which the brain naturally oscillates between these two images, and that rate is thought to map onto the strength of the inhibitory circuitry between these two populations of cells,” Robertson says.

She found that nonautistic adults switched back and forth between the images nine times per minute, on average, and one of the images fully suppressed the other about 70 percent of the time. However, autistic adults switched back and forth only half as often as nonautistic subjects, and one of the images fully suppressed the other only about 50 percent of the time.

Performance on this task was also linked to patients’ scores on a clinical evaluation of communication and social interaction used to diagnose autism: Worse symptoms correlated with weaker inhibition during the visual task.

The researchers then measured GABA activity using a technique known as magnetic resonance spectroscopy, as autistic and typical subjects performed the binocular rivalry task. In nonautistic participants, higher levels of GABA correlated with a better ability to suppress the nondominant image. But in autistic subjects, there was no relationship between performance and GABA levels. This suggests that GABA is present in the brain but is not performing its usual function in autistic individuals, Robertson says.
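The logic of that group comparison can be sketched numerically: correlate each subject's GABA concentration with their suppression strength, separately per group. The data below are synthetic and the effect sizes are invented for illustration; only the qualitative pattern (a correlation in controls, none in the autism group) comes from the article.

```python
import numpy as np

# Illustrative sketch with synthetic numbers, not the study's data.
rng = np.random.default_rng(1)
n = 20

gaba_control = rng.normal(2.0, 0.3, n)
# Controls: suppression tracks GABA concentration (plus a little noise).
suppression_control = 0.25 * gaba_control + rng.normal(0, 0.02, n)

gaba_autism = rng.normal(2.0, 0.3, n)          # GABA levels are not reduced...
suppression_autism = rng.normal(0.5, 0.05, n)  # ...but suppression is decoupled.

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    return float(np.corrcoef(x, y)[0, 1])

r_control = pearson_r(gaba_control, suppression_control)
r_autism = pearson_r(gaba_autism, suppression_autism)
print(f"control r = {r_control:.2f}, autism r = {r_autism:.2f}")
```

A strong positive `r_control` alongside a near-zero `r_autism` is the signature the study reports: GABA is present, but its level no longer predicts inhibitory performance.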

“GABA is not reduced in the autistic brain, but the action of this inhibitory pathway is reduced,” she says. “The next step is figuring out which part of the pathway is disrupted.”

“This is a really great piece of work,” says Richard Edden, an associate professor of radiology at the Johns Hopkins University School of Medicine. “The role of inhibitory dysfunction in autism is strongly debated, with different camps arguing for elevated and reduced inhibition. This kind of study, which seeks to relate measures of inhibition directly to quantitative measures of function, is what we really need to tease things out.”

Early diagnosis

In addition to offering a possible new drug target, the new finding may also help researchers develop better diagnostic tools for autism, which is now diagnosed by evaluating children’s social interactions. To that end, Robertson is investigating the possibility of using EEG scans to measure brain responses during the binocular rivalry task.

“If autism does trace back on some level to circuitry differences that affect the visual cortex, you can measure those things in a kid who’s even nonverbal, as long as he can see,” she says. “We’d like it to move toward being useful for early diagnostic screenings.”

Story Source:

The above post is reprinted from materials provided by MIT News.

Stimulus plan?

from
BIOENGINEER.ORG http://bioengineer.org/stimulus-plan/

    People building their own devices for transcranial direct current stimulation (tDCS) have often invested substantial time looking at academic research on the subject. “They do look to scientific papers and in a lot of ways they do follow scientific precedent,” PhD student Anna Wexler says. “In other ways, they do their own thing.”

It may sound unusual, but it’s true: In recent years a growing number of people have been hooking their heads up to electrodes, in an attempt to stimulate their brains using a direct electrical current. Some of them do this via homemade devices; others may be using a new direct-to-consumer kit that just hit the market.

But why, exactly, are people doing such a thing at all? And to what extent should this practice — seen only in research labs until a few years ago — be regulated?

The first question is easier to answer than the second, according to Anna Wexler, a PhD student in MIT’s Program in Science, Technology, and Society, who had two new papers on the subject appear in academic journals this fall. Following lab research that started appearing 15 years ago, some people believe they can give themselves a kind of neurological tuneup through electrical stimulation, producing better-functioning brains.

“The common thread in all these people is that they’re interested in self-improvement,” Wexler says. “They’re in two camps. Some are interested in enhancing cognition, learning faster, performing better at memory tasks. And another group is interested in self-treating a variety of mood disorders.”

As Wexler discusses in one paper, appearing recently in the Journal of Medical Ethics, the people building their own devices for transcranial direct current stimulation (tDCS) have often invested substantial time looking at academic research on the subject, some of which suggests positive outcomes from brain stimulation. Their ranks are being joined by more casual consumers who can now purchase inexpensive devices to do the same thing.

Such products have produced a regulatory debate among academic researchers. The U.S. Food and Drug Administration (FDA) has not approved tDCS as a treatment for any malady. On the other hand, if such tools are marketed as helping generalized “wellness,” not as a cure for one problem, they may fall outside the FDA’s purview.

“There are a lot of blurry lines in food, drug, and cosmetic regulation,” observes Wexler, who presents the most comprehensive research overview yet written on the nuances of the regulation issue, appearing this fall in an article for the Journal of Law and Biosciences. “The definition of a medical device is not based on a definition of its action, but on how the device is intended to be used. And the FDA has historically judged intended use by manufacturers’ marketing claims.”

Wexler was also one of the experts asked to speak at an FDA panel held on the topic in November.

Gaining currency

Academic interest in tDCS gained currency after a 2000 paper by two German neurophysiologists showed that passing a weak electrical current through the motor cortex helped people perform motor tasks better. The volume of studies increased slowly for several years — about 100 in all through 2007 — but has shot up recently: There have been over 100 published studies in each of the last four years, with about 300 being published in 2014 alone. Several companies produce tDCS machines used in lab settings where such research takes place.

Researchers have not yet reached a firm consensus on the effects of tDCS, however. As Wexler notes, “no serious adverse effects” have been found among 10,000 human subjects in academic research, but one study, published in the Journal of Neuroscience last year, found that tDCS appeared to impair cognitive function in at least some individuals. Still, numerous studies do show some kind of functional cognitive enhancement due to tDCS.

Wexler’s original research on the do-it-yourselfers — what she terms the “DIY tDCS crowd” — in the Journal of Medical Ethics provides an initial demographic look at who they are. Wexler conducted interviews, and examined online videos, blog posts, and forums, and found that most of the people involved are male; come from one of three dozen countries; and include “at least a handful” of lab researchers.

“They do look to scientific papers and in a lot of ways they do follow scientific precedent,” Wexler says. “In other ways, they do their own thing.” For instance, because “there is no agreement on what level of tDCS is bad for you,” she adds, the DIY tDCS community reports a wide variety of usage patterns, from relatively light to heavy amounts of stimulation.

Other scholars say Wexler’s work is original and significant. Her research into the DIY tDCS community is the “best encapsulation of the near-history of this phenomenon, which has really arisen in the last four to five years,” says Peter Reiner, a professor of psychiatry and expert in neuroethics at the University of British Columbia, who has also studied the issue. Reiner adds that Wexler’s “scholarship is excellent,” and observes that it is unusual for a graduate student to be looked to as a voice for policymakers.

The path ahead

In lieu of a complete scientific consensus on the effects of tDCS, however, it is not yet clear who should regulate the devices, let alone in what ways.

As Wexler puts it in the Journal of Law and Biosciences paper, there is not a “regulatory gap” pertaining to brain stimulation, but rather, “there are multiple, distinct pathways by which consumer tDCS devices can be regulated in the United States.” For example, they could be regulated not by the FDA but as regular consumer devices, subject to consumer safety and advertising laws under federal agencies like the Consumer Product Safety Commission and the Federal Trade Commission.

Whatever path lies ahead, Wexler suggests regulators should follow an “open engagement” model of reaching out to the community of tDCS users to get a sense of the extent of use and the degree to which new guidelines are needed.

“I think the open engagement approach is just more practical,” Wexler says. “You can’t crack down on people building the devices. If anybody wants to go out and buy a battery and wires, it’s their right to do so.”

On the other hand, engagement with users, and perhaps a third-party review of tDCS effects by a group such as the National Academy of Medicine, would encourage at-home tDCS users to follow regulatory prescriptions rather than going their own way.

“We’ll have to wait and see,” Wexler says of the regulatory debate’s outcome. “But the DIY community really looks to scientific papers for guidance. They do value what scientists say.”

Story Source:

The above post is reprinted from materials provided by MIT News.

Friday, December 18, 2015

Scientists manipulate consciousness in rats

from
BIOENGINEER.ORG http://bioengineer.org/scientists-manipulate-consciousness-rats/

Scientists showed that they could alter the brain activity of rats, either waking them up or putting them into an unconscious state, by changing the firing rates of neurons in the central thalamus, a region known to regulate arousal. The study, published in eLife, was partially funded by the National Institutes of Health.

Scientists studied how the thalamus tunes brain activity during different states of consciousness in rats. Photo Credit: Courtesy of Lee lab, Stanford University, CA

“Our results suggest the central thalamus works like a radio dial that tunes the brain to different states of activity and arousal,” said Jin Hyung Lee, Ph.D., assistant professor of neurology, neurosurgery and bioengineering at Stanford University, and a senior author of the study.

Located deep inside the brain, the thalamus acts as a relay station, sending neural signals from the body to the cortex. Damage to neurons in the central part of the thalamus may lead to problems with sleep, attention, and memory. Previous studies suggested that stimulating thalamic neurons may rouse patients who have suffered a traumatic brain injury out of minimally conscious states.

Dr. Lee’s team flashed laser pulses onto light sensitive central thalamic neurons of sleeping rats, which caused the cells to fire. High frequency stimulation of 40 or 100 pulses per second woke the rats. In contrast, low frequency stimulation of 10 pulses per second sent the rats into a state reminiscent of absence seizures that caused them to stiffen and stare before returning to sleep.
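The stimulation frequencies described above can be made concrete with a small sketch that generates pulse-onset times for each condition. The 10, 40, and 100 Hz rates come from the article; the one-second train duration is an illustrative assumption, not a parameter from the study.

```python
import numpy as np

# Sketch of the optogenetic pulse trains described above: onset times for
# low-frequency (10 Hz) and high-frequency (40 or 100 Hz) stimulation.
def pulse_onsets(freq_hz, duration_s=1.0):
    """Return onset times (in seconds) of evenly spaced pulses at freq_hz."""
    return np.arange(0.0, duration_s, 1.0 / freq_hz)

for freq in (10, 40, 100):
    onsets = pulse_onsets(freq)
    print(f"{freq:>3} Hz -> {len(onsets)} pulses in 1 s")
```

The same neurons receive ten times as many pulses per second in the 100 Hz condition as in the 10 Hz condition, which is the only variable separating waking from the seizure-like state.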

“This study takes a big step towards understanding the brain circuitry that controls sleep and arousal,” said Yejun (Janet) He, Ph.D., a program director at NIH’s National Institute of Neurological Disorders and Stroke (NINDS).

When the scientists used functional magnetic resonance imaging (fMRI) to scan brain activity, they saw that high and low frequency stimulation put the rats in completely different states of activity. Cortical brain areas where activity was elevated during high frequency stimulation became inhibited with low frequency stimulation. Electrical recordings confirmed the results. Neurons in the somatosensory cortex fired more during high frequency stimulation of the central thalamus and less during low frequency stimulation.

“Dr. Lee’s innovative work demonstrates the power of using imaging technologies to study the brain at work,” said Guoying Liu, Ph.D., a program director at the NIH’s National Institute of Biomedical Imaging and Bioengineering (NIBIB).

How can changing the firing rate of the same neurons in one region lead to different effects on the rest of the brain?

Further experiments suggested the different effects may be due to a unique firing pattern by inhibitory neurons in a neighboring brain region, the zona incerta, during low frequency stimulation. Cells in this brain region have been shown to send inhibitory signals to cells in the sensory cortex.

Electrical recordings showed that during low frequency stimulation of the central thalamus, zona incerta neurons fired in a spindle pattern that often occurs during sleep. In contrast, sleep spindles did not occur during high frequency stimulation. Moreover, when the scientists blocked the firing of the zona incerta neurons during low frequency stimulation of the central thalamus, the average activity of sensory cortex cells increased.

Although deep brain stimulation of the thalamus has shown promise as a treatment for traumatic brain injury, patients with decreased levels of consciousness have tended to progress slowly with these treatments.

“We showed how the circuits of the brain can regulate arousal states,” said Dr. Lee. “We hope to use this knowledge to develop better treatments for brain injuries and other neurological disorders.”

This work was supported by grants from the NIH (NS087159, EB008738, MH087988); the National Science Foundation (CAREER Award, 1056008); the Okawa Foundation for Information and Telecommunications; the Alfred P. Sloan Foundation; the Mathers Charitable Foundation; Stanford Bio-X; the James and Carrie Anderson Fund for Epilepsy Research; and the Littlefield Funds.

Story Source:

The above post is reprinted from materials provided by NIH/National Institute of Neurological Disorders and Stroke.

Thursday, December 17, 2015

Compound found to trigger innate immunity against viruses

from
BIOENGINEER.ORG http://bioengineer.org/compound-trigger-innate-immunity-viruses/

Research from UW Medicine and collaborators indicates that a drug-like molecule can activate innate immunity and induce genes to control infection in a range of RNA viruses, including West Nile, dengue, hepatitis C, influenza A, respiratory syncytial, Nipah, Lassa and Ebola.

A scientist’s illustration of immunology research at UW Medicine’s South Lake Union campus. Photo Credit: Dennis Wise

The findings, published today in the Journal of Virology, show promising evidence for creating a broad-spectrum antiviral.

“Our compound has an antiviral effect against all these viruses,” said Michael Gale Jr., University of Washington professor of immunology and director of the UW Center for Innate Immunity and Immune Disease. The finding emerged from research by his lab in concert with scientists at Kineta Inc. and the University of Texas Medical Branch at Galveston.

Gale said he thinks the findings are the first to show that innate immunity can be triggered through a molecule present in all our cells, known as RIG-I.

RIG-I is a cellular protein known as a pathogen recognition receptor. These receptors detect viral RNA and signal an innate immune response inside the cell that is essential for limiting and controlling viral infections. The signal induces the expression of many innate immune and antiviral genes and the production of antiviral gene products, pro-inflammatory cytokines, chemokines and interferons.

“These products act in concert to suppress and control virus infection,” the researchers wrote.

Such activation of the innate immune response to control viral infection has been tested successfully in cells and in mice. Next steps would be to test dosing and stability in animal models and then in humans, a process that could take two to five years, Gale said.

Currently, there are no known broad-spectrum antiviral drugs and few therapeutic options against infection by RNA viruses. RNA viruses pose a significant public health problem worldwide because their high mutation rate allows them to escape the immune response. They are a frequent cause of emerging and re-emerging viral infections. West Nile virus infections, for example, appeared in the United States in 1999 and re-emerged in 2012. The World Health Organization reports 50 million to 100 million new cases of dengue fever yearly and 22,000 deaths caused by the related dengue virus. Dengue is now present in the southern U.S.

Hepatitis C, which is transmitted through the blood, infects upward of 4 million people each year; 150 million people are chronically infected and at risk of developing cirrhosis or liver cancer, according to the paper. Direct-acting antivirals can control hepatitis C and show promise of a long-term cure, but viral mutation to drug resistance is a concern with prolonged use. The drugs’ exorbitant costs also make them unaffordable for many or most patients.

There is tremendous interest in triggering innate immunity, said Shawn Iadonato, chief scientific officer at Seattle biotech Kineta. Some viral infections, he pointed out, cannot be treated by traditional antivirals. Activating innate immunity also will make the viruses less likely to resist the drug actions because the therapy targets the cell, via gene action, rather than the virus itself.

“It’s routine for us to think of broad-spectrum antibiotics, but the equivalent for virology doesn’t exist,” Iadonato said.

Story Source:

The above post is reprinted from materials provided by University of Washington.

Wednesday, December 16, 2015

Gif Shows Cell Division

from
BIOENGINEER.ORG http://bioengineer.org/gif-shows-cell-division/

Metaphase is the point in the cell cycle after the chromosomes have condensed and lined up in the middle, just before the cell divides into two daughter cells. The chromosomes are held in place by kinetochore microtubules, forming what is known as the metaphase plate, which spans the equator of the cell.

The following gif shows metaphase as it happens in normal cells obtained from pig kidney epithelium. The microtubules are labeled with the mEmerald fluorescent protein as a marker, while the chromatids are labeled in bright red with mCherry.

The images were made with a Nikon C1si/TE2000 laser scanning confocal microscope and are the property of Nikon’s MicroscopyU. If you haven’t checked out their website, it is completely amazing and you need to go take a look right now.

Story Source:

The above post is reprinted from materials provided by iflscience, Lisa Winter.

First serotonin neurons made from human stem cells

from
BIOENGINEER.ORG http://bioengineer.org/serotonin-neurons-human-stem-cells/

Su-Chun Zhang, a pioneer in developing neurons from stem cells at the University of Wisconsin–Madison, has created a specialized nerve cell that makes serotonin, a signaling chemical with a broad role in the brain.

Human serotonin-producing neurons, generated from induced pluripotent stem cells, created in the lab of Su-Chun Zhang in the Waisman Center. Blue indicates cell nuclei; red and green show typical markers for these neurons, which produce a neurotransmitter that affects large parts of the brain. Photo Credit: Jianfeng Lu and Su-Chun Zhang

Serotonin affects emotions, sleep, anxiety, depression, appetite, pulse and breathing. It also plays a role in serious psychiatric conditions like schizophrenia, bipolar disorder and depression.

“Serotonin essentially modulates every aspect of brain function, including movement,” Zhang says. The transmitter is made by a small number of neurons localized on one structure at the back of the brain. Serotonin exerts its influence because the neurons that make it project to almost every part of the brain.

The study, reported today in the journal Nature Biotechnology, began with two types of stem cells: one derived from embryos, the other from adult cells. Because serotonin neurons form before birth, the researchers had to recreate the chemical environment found in the developing brain in the uterus, Zhang says.

“That sounds reasonably simple, and we have made so many different types of neural cells. Here, we had to instruct the stem cells to develop into one specific fate, using a custom-designed sequence of molecules at exact concentrations. That’s especially difficult if you consider that the conditions needed to make serotonin neurons are scarce, existing in one small location in the brain during development.”

The cells showed the expected response to electrical stimulation and also produced serotonin.

Although other scientists have matured stem cells into something resembling serotonin neurons, the case is much more conclusive this time, says first author Jianfeng Lu, a scientist at UW–Madison’s Waisman Center. “Previously, labs were producing a few percent of serotonin neurons from pluripotent stem cells, and that made it very difficult to study their cells. If you detect 10 neurons, and only two are serotonin neurons, it’s impossible to detect serotonin release; that was the stone in the road.”

Instead, those neurons were identified based on cellular markers, which is “not sufficient to say those are functional serotonin neurons,” Lu says.

To confirm that the new cells act like serotonin neurons, “we showed that the neurons responded to some FDA-approved drugs that regulate depression and anxiety through the serotonin pathway,” Zhang says.

While the previous attempts “followed what was learned from mouse studies,” the current study used other growth factors, Zhang says. “It was not exactly trial and error; we have some rules to follow, but we had to refine it little by little to work out — one chemical at a time — the concentration and timing, and then check and recheck the results. That’s why it took time.”

Although cells derived from stem cells are commonly used to test drug toxicity, Zhang is aiming higher with the serotonin neurons. “We think these can help develop new, more effective drugs, especially related to the higher neural functions that are so difficult to model in mice and rats,” he says. “Particularly because they are from humans, these cells may lead to benefits for patients with depression, bipolar disorder or anxiety. These are some of the most troublesome psychiatric conditions, and we really don’t have great drugs for them now.”

Because the neurons can be generated from induced pluripotent stem cells, which can be produced from a patient’s skin cells, “these could be useful for finding treatments for psychiatric disorders like depression, where we often see quite variable responses to drugs,” says Lu. “By identifying individual differences, this could be a step toward personalized medicine.

“I’m like Su-Chun. I don’t want to just make a publication in a scientific journal. I want our work to affect human health, to improve the human condition.”

Story Source:

The above post is reprinted from materials provided by University of Wisconsin.

15 Aralık 2015 Salı

Cell memory loss enables the production of stem cells

from
BIOENGINEER.ORG http://bioengineer.org/cell-memory-loss-enables-the-production-of-stem-cells/

They say we can’t escape our past–no matter how much we change, we still have the memory of what came before; the same can be said of our cells.


Induced pluripotent stem cell (iPS cell) colonies were generated after researchers at Harvard Stem Cell Institute suppressed the CAF-1 gene. Photo Credit: Sihem Cheloufi

Adult cells, such as skin or blood cells, have a cellular “memory,” or record of how the cell changes as it develops from an uncommitted embryonic cell into a specialized adult cell. Now, Harvard Stem Cell Institute researchers at Massachusetts General Hospital (MGH), in collaboration with scientists from the Research Institutes of Molecular Biotechnology (IMBA) and Molecular Pathology (IMP) in Vienna, have identified genes that, when suppressed, effectively erase a cell’s memory, making the cell more susceptible to reprogramming and, consequently, making the process of reprogramming quicker and more efficient.

The study was recently published in Nature.

“We began this work because we wanted to know why a skin cell is a skin cell, and why does it not change its identity the next day, or the next month, or a year later?” said co-senior author Konrad Hochedlinger, PhD, an HSCI Principal Faculty member at MGH and Harvard’s Department of Stem Cell and Regenerative Biology, and a world expert in cellular reprogramming.

Every cell in the human body has the same genome, or DNA blueprint, explained Hochedlinger, and it is how those genes are turned on and off during development that determines what kind of adult cell each will become. By manipulating those genes and introducing new factors, scientists can unlock dormant parts of an adult cell’s genome and reprogram it into another cell type.

However, “a skin cell knows it is a skin cell,” said IMBA’s Josef Penninger, even after scientists reprogram those skin cells into induced pluripotent stem cells (iPS cells) – a process that would ideally require a cell to “forget” its identity before assuming a new one. Cellular memory is often conserved, acting as a roadblock to reprogramming. “We wanted to find out which factors stabilize this memory and what mechanism prevents iPS cells from forming,” Penninger said.

To identify potential factors, the team established a genetic library targeting known chromatin regulators — genes that control the packaging and bookmarking of DNA, and are involved in creating cellular memory.

Hochedlinger and Sihem Cheloufi, co-first author and a postdoc in Hochedlinger’s lab, designed a screening approach that tested each of these factors.

Of the 615 factors screened, the researchers identified four chromatin regulators, three of which had not yet been described, as potential roadblocks to reprogramming. Compared with the three- to fourfold increase seen when suppressing previously known roadblock factors, inhibiting the newly described CAF-1 (chromatin assembly factor 1) made the process 50 to 200 times more efficient. Moreover, in the absence of CAF-1, reprogramming turned out to be much faster: while the process normally takes nine days, the researchers could detect the first iPS cells after four days.

“The CAF-1 complex ensures that during DNA replication and cell division, daughter cells keep their memory, which is encoded on the histones that the DNA is wrapped around,” said Ulrich Elling, a co-first author from IMBA. “When we block CAF-1, daughter cells fail to wrap their DNA the same way, lose this information and convert into blank sheets of paper. In this state, they respond more sensitively to signals from the outside, meaning we can manipulate them much more easily.”

By suppressing CAF-1 the researchers were also able to facilitate the conversion of one type of adult cell directly into another, skipping the intermediary step of forming iPS cells, via a process called direct reprogramming, or transdifferentiation. Thus, CAF-1 appears to act as a general guardian of cell identity whose depletion facilitates both the interconversion of one adult cell type to another as well as the conversion of specialized cells into iPS cells.

In finding CAF-1, the researchers identified a complex that allows cell memory to be erased and rewritten. “The cells forget who they are, making it easier to trick them into becoming another type of cell,” said Sihem Cheloufi.

CAF-1 may provide a general key to facilitate the “reprogramming” of cells to model disease and test therapeutic agents, IMP’s Johannes Zuber explained. “The best-case scenario,” Zuber said, “is that with this insight, we hold a universal key in our hands that will allow us to model cells at will.”

Story Source:

The above post is reprinted from materials provided by Harvard Stem Cell Institute researchers at Massachusetts General Hospital.

14 Aralık 2015 Pazartesi

Men have better sense of direction than women, study suggests

from
BIOENGINEER.ORG http://bioengineer.org/men-better-sense-direction-women-study-suggests/

It’s been well established that men perform better than women when it comes to specific spatial tasks. But how much of that is linked to sex hormones versus cultural conditioning and other factors?


The lines show how men and women navigated a route. The blue lines are the women’s routes, and the red lines are the men’s. The lines show that the men arrived faster and solved more tasks. Photo Credit: NTNU

Researchers at the Norwegian University of Science and Technology (NTNU) decided to explore this idea by administering testosterone to women and testing how they performed in wayfinding tasks in a virtual environment.

Using fMRI, the researchers saw that men in the study took several shortcuts, oriented themselves more using cardinal directions and used a different part of the brain than the women in the study.

But when women got a drop of testosterone under their tongue, several of them were able to orient themselves better in the four cardinal directions.

“Men’s sense of direction was more effective. They quite simply got to their destination faster,” says Carl Pintzka, a medical doctor and PhD candidate at NTNU’s Department of Neuroscience.

The directional sense findings are part of his doctoral thesis on how the brain functions differently in men and women.

Puzzle solving in a 3D maze

Pintzka used an MRI scanner to see whether there are any differences in brain activity when men and women orient themselves. Using 3D goggles and a joystick, the participants had to orient themselves in a very large virtual maze while functional images of their brains were continuously recorded.

Eighteen men and 18 women first took an hour to learn the layout of the maze before the scanning session. In the MRI scanner, they were given 30 seconds for each of the 45 navigation tasks. One of the tasks, for example, was to “find the yellow car” from different starting points.

Women often use a route

The men solved 50 per cent more of the tasks than the women.

According to Pintzka, women and men have different navigational strategies. Men use cardinal directions during navigation to a greater degree.

“If they’re going to the Student Society building in Trondheim, for example, men usually go in the general direction where it’s located. Women usually orient themselves along a route to get there, for example, ‘go past the hairdresser and then up the street and turn right after the store’,” he says.

The study shows that using the cardinal directions is more efficient because it is a more flexible strategy. The destination can be reached faster because the strategy depends less on where you start.
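The flexibility argument can be illustrated with a toy grid-world comparison (purely illustrative; the coordinates, landmarks and costs below are invented, not taken from the study). A cardinal-direction navigator heads straight for the goal from wherever it starts, while a route-based navigator first rejoins a memorised landmark sequence:

```python
def manhattan(a, b):
    """Grid (city-block) distance between two points."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def cardinal_path_length(start, goal):
    """Cardinal strategy: head straight toward the goal, so the cost
    is just the direct grid distance from wherever you start."""
    return manhattan(start, goal)

def route_path_length(start, landmarks, goal):
    """Route strategy: rejoin a memorised landmark sequence and follow
    it to the goal, regardless of where you start."""
    stops = [start] + landmarks + [goal]
    return sum(manhattan(a, b) for a, b in zip(stops, stops[1:]))

goal = (10, 10)
# Hypothetical memorised route: 'past the hairdresser, up the street...'
landmarks = [(2, 0), (2, 8), (10, 8)]

for start in [(0, 0), (8, 2)]:
    print(start,
          cardinal_path_length(start, goal),
          route_path_length(start, landmarks, goal))
# From (0, 0) both strategies cost 20, but from (8, 2) the cardinal
# strategy costs 10 while the fixed route costs 26.
```

The route strategy only matches the cardinal one when the start happens to lie near the memorised route, which is the sense in which the cardinal strategy depends less on the starting point.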

Women have better local memory

fMRI images of the brain showed that both men and women use large areas of the brain when they navigate, but some areas were different. The men used the hippocampus more, whereas women used their frontal areas to a greater extent.

“That’s in sync with the fact that the hippocampus is necessary to make use of cardinal directions,” says Pintzka.

He explains the findings in evolutionary terms.

“In ancient times, men were hunters and women were gatherers. Therefore, our brains probably evolved differently. For instance, other researchers have documented that women are better at finding objects locally than men. In simple terms, women are faster at finding things in the house, and men are faster at finding the house,” Pintzka says.

A little testosterone under the tongue

Step two was to give some women testosterone just before they were going to solve the maze puzzles.

This was a different group of women than the group that was compared to men. In this step, 42 women were divided into two groups. Twenty-one of them received a drop of placebo, and 21 got a drop of testosterone under the tongue. The study was double-blinded so that neither Pintzka nor the women knew who got what.

“We hoped that they would be able to solve more tasks, but they didn’t. But they had improved knowledge of the layout of the maze, and they used the hippocampus to a greater extent, which tends to be used more by men for navigating,” says Pintzka.

Losing one’s sense of direction is one of the first symptoms in Alzheimer’s disease.

“Almost all brain-related diseases are different in men and women, either in the number of affected individuals or in severity. Therefore, something is likely protecting or harming people of one sex. Since we know that twice as many women as men are diagnosed with Alzheimer’s disease, there might be something related to sex hormones that is harmful,” says Pintzka.

He hopes that by understanding how men and women use different brain areas and strategies to navigate, researchers will be able to enhance the understanding of the disease’s development, and develop coping strategies for those already affected.

Story Source:

The above post is reprinted from materials provided by Norwegian University of Science and Technology.

Scientists teach machines to learn like humans

from
BIOENGINEER.ORG http://bioengineer.org/scientists-teach-machines-learn-like-humans/

A team of scientists has developed an algorithm that captures our learning abilities, enabling computers to recognize and draw simple visual concepts that are mostly indistinguishable from those created by humans.


This paper compares human and machine learning for a wide range of simple visual concepts, or handwritten characters selected from alphabets around the world. This is an artist’s interpretation of that theme. This material relates to a paper that appeared in the Dec. 11, 2015 issue of Science, published by AAAS. The paper, by B.M. Lake at New York University in New York, NY, and colleagues, was titled “Human-level concept learning through probabilistic program induction.” Art by Danqing Wang.

The work, which appears in the latest issue of the journal Science, marks a significant advance in the field — one that dramatically shortens the time it takes computers to ‘learn’ new concepts and broadens their application to more creative tasks.

“Our results show that by reverse engineering how people think about a problem, we can develop better algorithms,” explains Brenden Lake, a Moore-Sloan Data Science Fellow at New York University and the paper’s lead author. “Moreover, this work points to promising methods to narrow the gap for other machine learning tasks.”

The paper’s other authors were Ruslan Salakhutdinov, an assistant professor of Computer Science at the University of Toronto, and Joshua Tenenbaum, a professor at MIT in the Department of Brain and Cognitive Sciences and the Center for Brains, Minds and Machines.

When humans are exposed to a new concept — such as a new piece of kitchen equipment, a new dance move, or a new letter in an unfamiliar alphabet — they often need only a few examples to understand its make-up and recognize new instances. While machines can now replicate some pattern-recognition tasks previously done only by humans — ATMs reading the numbers written on a check, for instance — machines typically need to be given hundreds or thousands of examples to perform with similar accuracy.

“It has been very difficult to build machines that require as little data as humans when learning a new concept,” observes Salakhutdinov. “Replicating these abilities is an exciting area of research connecting machine learning, statistics, computer vision, and cognitive science.”

Salakhutdinov helped to launch recent interest in learning with ‘deep neural networks,’ in a paper published in Science almost 10 years ago with his doctoral advisor Geoffrey Hinton. Their algorithm learned the structure of 10 handwritten character concepts — the digits 0-9 — from 6,000 examples each, or a total of 60,000 training examples.

In the work appearing in Science this week, the researchers sought to shorten the learning process and make it more akin to the way humans acquire and apply new knowledge — i.e., learning from a small number of examples and performing a range of tasks, such as generating new examples of a concept or generating whole new concepts.

To do so, they developed a ‘Bayesian Program Learning’ (BPL) framework, where concepts are represented as simple computer programs. For instance, the letter ‘A’ is represented by computer code — resembling the work of a computer programmer — that generates examples of that letter when the code is run. Yet no programmer is required during the learning process: the algorithm programs itself by constructing code to produce the letter it sees. Also, unlike standard computer programs that produce the same output every time they run, these probabilistic programs produce different outputs at each execution. This allows them to capture the way instances of a concept vary, such as the differences between how two people draw the letter ‘A.’
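The key property — a program that regenerates varying instances of one concept on each run — can be sketched in a few lines. This is a toy illustration of the probabilistic-program idea, not the authors' actual BPL code; the stroke layout and noise model are invented for the example:

```python
import random

def draw_letter_A(rng):
    """A toy probabilistic 'program' for the letter A: two diagonal
    strokes meeting at an apex plus a crossbar. Gaussian jitter on the
    endpoints means each execution yields a different exemplar of the
    same underlying concept."""
    jitter = lambda: rng.gauss(0, 0.05)
    apex = (0.5 + jitter(), 1.0 + jitter())
    left_foot = (0.0 + jitter(), 0.0 + jitter())
    right_foot = (1.0 + jitter(), 0.0 + jitter())
    bar_height = 0.4 + jitter()
    return [
        (left_foot, apex),                       # left diagonal
        (right_foot, apex),                      # right diagonal
        ((0.2, bar_height), (0.8, bar_height)),  # crossbar
    ]

rng = random.Random(0)
a1, a2 = draw_letter_A(rng), draw_letter_A(rng)
print(a1 == a2)  # False: two runs give two distinct instances
```

Recognition in such a framework then amounts to asking which concept's program most plausibly generated an observed drawing, rather than matching pixels directly.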

While standard pattern recognition algorithms represent concepts as configurations of pixels or collections of features, the BPL approach learns “generative models” of processes in the world, making learning a matter of ‘model building’ or ‘explaining’ the data provided to the algorithm. In the case of writing and recognizing letters, BPL is designed to capture both the causal and compositional properties of real-world processes, allowing the algorithm to use data more efficiently. The model also “learns to learn” by using knowledge from previous concepts to speed learning on new concepts — e.g., using knowledge of the Latin alphabet to learn letters in the Greek alphabet. The authors applied their model to over 1,600 types of handwritten characters in 50 of the world’s writing systems, including Sanskrit, Tibetan, Gujarati, Glagolitic — and even invented characters such as those from the television series Futurama.

In addition to testing the algorithm’s ability to recognize new instances of a concept, the authors asked both humans and computers to reproduce a series of handwritten characters after being shown a single example of each character, or in some cases, to create new characters in the style of those it had been shown. The scientists then compared the outputs from both humans and machines through ‘visual Turing tests.’ Here, human judges were given paired examples of both the human and machine output, along with the original prompt, and asked to identify which of the symbols were produced by the computer.

While judges’ correct responses varied across characters, for each visual Turing test, fewer than 25 percent of judges performed significantly better than chance in assessing whether a machine or a human produced a given set of symbols.
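"Significantly better than chance" here is a standard statistical question: how likely is a judge's score under pure guessing? A minimal one-sided binomial test (the 14-of-20 judge below is a hypothetical example, not a figure from the paper) looks like this:

```python
from math import comb

def binom_p_above_chance(correct, trials, p=0.5):
    """One-sided binomial test: the probability of getting at least
    `correct` out of `trials` right by guessing alone (chance = p)."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(correct, trials + 1))

# Hypothetical judge: 14 of 20 human/machine pairs identified correctly.
p_value = binom_p_above_chance(14, 20)
print(round(p_value, 3))  # 0.058: not significantly above chance at alpha = 0.05
```

A judge near this score cannot be distinguished from a coin-flipper, which is the sense in which the machine-drawn characters passed the visual Turing test.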

“Before they get to kindergarten, children learn to recognize new concepts from just a single example, and can even imagine new examples they haven’t seen,” notes Tenenbaum. “I’ve wanted to build models of these remarkable abilities since my own doctoral work in the late nineties. We are still far from building machines as smart as a human child, but this is the first time we have had a machine able to learn and use a large class of real-world concepts — even simple visual concepts such as handwritten characters — in ways that are hard to tell apart from humans.”

The work was supported by grants from the National Science Foundation to MIT’s Center for Brains, Minds and Machines (CCF-1231216), the Army Research Office (W911NF-08-1-0242, W911NF-13-1-2012), the Office of Naval Research (N000141310333), and the Moore-Sloan Data Science Environment at New York University.

Story Source:

The above post is reprinted from materials provided by New York University.

12 Aralık 2015 Cumartesi

Here’s what the world will be like in 2045, according to DARPA’s top scientists

from
BIOENGINEER.ORG http://bioengineer.org/heres-what-the-world-will-be-like-in-2045-according-to-darpas-top-scientists/

Predicting the future is fraught with challenges, but when it comes to technological advances and forward thinking, experts working at the Pentagon’s research agency may be the best people to ask.

Launched in 1958, the Defense Advanced Research Projects Agency is behind some of the biggest innovations in the military — many of which have crossed over to the civilian technology market. These include things like advanced robotics, global-positioning systems, and the internet.

So what’s going to happen in 2045?

It’s pretty likely that robots and artificial intelligence are going to transform a bunch of industries, drone aircraft will continue their leap from the military to the civilian market, and self-driving cars will make your commute a lot more bearable.

But DARPA scientists have even bigger ideas. In a video series from October called “Forward to the Future,” three researchers predict what they imagine will be a reality 30 years from now.

Dr. Justin Sanchez, a neuroscientist and program manager in DARPA’s Biological Technologies Office, believes we’ll be at a point where we can control things simply by using our mind.

“Imagine a world where you could just use your thoughts to control your environment,” Sanchez said. “Think about controlling different aspects of your home just using your brain signals, or maybe communicating with your friends and your family just using neural activity from your brain.”

According to Sanchez, DARPA is currently working on neurotechnologies that can enable this to happen. There are already some examples of these kinds of futuristic breakthroughs in action, like brain implants controlling prosthetic arms.

Stefanie Tompkins, a geologist and director of DARPA’s Defense Sciences Office, thinks we’ll be able to build things that are incredibly strong but also very lightweight. Think of a skyscraper built from materials that are as strong as steel but as light as carbon fiber. That’s a simple explanation for what Tompkins envisions, which gets a little bit more complicated down at the molecular level.

Story Source:

The above post is reprinted from materials provided by Tech Insider.

Monument to Lab Mice: Mouse Knitting a DNA Strand

from
BIOENGINEER.ORG http://bioengineer.org/monument-to-lab-mice-mouse-knitting-a-dna-strand/

A symbol of gratitude for their sacrifices to science. Without rodents, many breakthroughs would not have been possible.


The statue stands six feet tall and sits near the Institute of Cytology and Genetics in Novosibirsk, Russia. According to the artist, Andrew Kharevich:

It combines the image of the laboratory mouse and a scientist, because they are related to each other and serve one cause. The mouse is captured in a moment of scientific discovery. If you look into her eyes, you can see that this little mouse has come up with something. But the whole symphony of scientific discovery, the joy, the “eureka,” has not yet begun to sound.

The Institute hopes to create more displays to honor other laboratory animals, along with plaques containing information on exactly how each animal has helped humanity.

Story Source:

The above post is reprinted from materials provided by politsib.ru.

DNA-based electromechanical switch

from
BIOENGINEER.ORG http://bioengineer.org/dna-based-electromechanical-switch/

A team of researchers from the University of California, Davis, and the University of Washington has demonstrated that the conductance of DNA can be modulated by controlling its structure, opening up the possibility of DNA’s future use as an electromechanical switch for nanoscale computing. Although DNA is commonly known for its biological role as the molecule of life, it has recently garnered significant interest as a nanoscale material for a wide variety of applications.


In their paper published in Nature Communications, the team demonstrated that changing the structure of the DNA double helix by modifying its environment allows the conductance (the ease with which an electric current passes) to be reversibly controlled. This ability to structurally modulate the charge transport properties may enable the design of unique nanodevices based on DNA. These devices would operate using a completely different paradigm than today’s conventional electronics.

“As electronics get smaller they are becoming more difficult and expensive to manufacture, but DNA-based devices could be designed from the bottom-up using directed self-assembly techniques such as ‘DNA origami’,” said Josh Hihath, assistant professor of electrical and computer engineering at UC Davis and senior author on the paper. DNA origami is the folding of DNA to create two- and three-dimensional shapes at the nanoscale level.

“Considerable progress has been made in understanding DNA’s mechanical, structural, and self-assembly properties and the use of these properties to design structures at the nanoscale. The electrical properties, however, have generally been difficult to control,” said Hihath.

New Twist on DNA? Possible Paradigms for Computing

In addition to potential advantages in fabrication at the nanoscale level, such DNA-based devices may also improve the energy efficiency of electronic circuits. The size of devices has been significantly reduced over the last 40 years, but as the size has decreased, the power density on-chip has increased. Scientists and engineers have been exploring novel solutions to improve the efficiency.

“There’s no reason that computation must be done with traditional transistors. Early computers were fully mechanical and later worked on relays and vacuum tubes,” said Hihath. “Moving to an electromechanical platform may eventually allow us to improve the energy efficiency of electronic devices at the nanoscale.”

This work demonstrates that DNA is capable of operating as an electromechanical switch and could lead to new paradigms for computing.

To develop DNA into a reversible switch, the scientists focused on switching between two stable conformations of DNA, known as the A-form and the B-form. In DNA, the B-form is the conventional DNA duplex that is commonly associated with these molecules. The A-form is a more compact version with different spacing and tilting between the base pairs. Exposure to ethanol forces the DNA into the A-form conformation resulting in an increased conductance. Similarly, by removing the ethanol, the DNA can switch back to the B-form and return to its original reduced conductance value.
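The switching behaviour described above can be summarized as a simple two-state model. The sketch below is purely illustrative: the conductance numbers are placeholder values in arbitrary units, not measurements from the paper, and the real device responds to its chemical environment rather than a Boolean flag:

```python
class DNASwitch:
    """Toy two-state model of the reported behaviour: ethanol drives
    the duplex into the compact A-form (higher conductance); removing
    it restores the B-form and the original lower conductance."""
    B_FORM_CONDUCTANCE = 1.0  # arbitrary units (conventional duplex)
    A_FORM_CONDUCTANCE = 2.5  # arbitrary units (compact, higher conductance)

    def __init__(self):
        self.form = "B"  # DNA starts in the conventional B-form

    def set_environment(self, ethanol_present):
        """Ethanol exposure toggles the conformation reversibly."""
        self.form = "A" if ethanol_present else "B"

    @property
    def conductance(self):
        return (self.A_FORM_CONDUCTANCE if self.form == "A"
                else self.B_FORM_CONDUCTANCE)

switch = DNASwitch()
readings = []
for ethanol in (False, True, False):  # one reversible switching cycle
    switch.set_environment(ethanol)
    readings.append(switch.conductance)
print(readings)  # [1.0, 2.5, 1.0]
```

The round trip back to the starting conductance is the point: the conformational change, and hence the switch, is reversible.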

One Step Toward Molecular Computing

In order to develop this finding into a technologically viable platform for electronics, the authors also noted that there is still a great deal of work to be done. Although this discovery provides a proof-of-principle demonstration of electromechanical switching in DNA, there are generally two major hurdles yet to be overcome in the field of molecular electronics. First, billions of active molecular devices must be integrated into the same circuit as is done currently in conventional electronics. Next, scientists must be able to gate specific devices individually in such a large system.

“Eventually, the environmental gating aspect of this work will have to be replaced with a mechanical or electrical signal in order to locally address a single device,” noted Hihath.

Story Source:

The above post is reprinted from materials provided by University of California – Davis.

11 Aralık 2015 Cuma

Periodic table of protein complexes

from
BIOENGINEER.ORG http://bioengineer.org/periodic-table-of-protein-complexes/

The Periodic Table of Protein Complexes, published today in Science, offers a new way of looking at the enormous variety of structures that proteins can build in nature, suggests which ones might be discovered next, and predicts how entirely novel structures could be engineered. Created by an interdisciplinary team led by researchers at the Wellcome Genome Campus and the University of Cambridge, the Table provides a valuable tool for research into evolution and protein engineering.


An interactive Periodic Table of Protein Complexes is available at http://sea31.user.srcf.net/periodictable/. Photo Credit: EMBL-EBI / Spencer Phillips

Almost every biological process depends on proteins interacting and assembling into complexes in a specific way, and many diseases are associated with problems in complex assembly. The principles underpinning this organisation are not yet fully understood, but by defining the fundamental steps in the evolution of protein complexes, the new ‘periodic table’ presents a systematic, ordered view on protein assembly, providing a visual tool for understanding biological function.

“Evolution has given rise to a huge variety of protein complexes, and it can seem a bit chaotic,” explains Joe Marsh, formerly of the Wellcome Genome Campus and now of the MRC Human Genetics Unit at the University of Edinburgh. “But if you break down the steps proteins take to become complexes, there are some basic rules that can explain almost all of the assemblies people have observed so far.”

Different ballroom dances can be seen as an endless combination of a small number of basic steps. Similarly, the ‘dance’ of protein complex assembly can be seen as endless variations on dimerization (one doubles, and becomes two), cyclisation (one forms a ring of three or more) and subunit addition (two different proteins bind to each other). Because these happen in a fairly predictable way, it’s not as hard as you might think to predict how a novel protein would form.
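The three basic steps compose naturally, which is what a program can exploit to enumerate assemblies. The sketch below is a deliberately crude illustration of that compositionality: complexes are modelled as flat tuples of subunit labels, ignoring geometry, and the names and counts are invented for the example:

```python
def dimerize(complex_):
    """One repeating unit doubles: ('A',) -> ('A', 'A')."""
    return complex_ * 2

def cyclize(complex_, n):
    """The repeating unit forms a ring of n copies (n >= 3);
    here modelled simply as n repeats of the unit."""
    return complex_ * n

def add_subunit(complex_, subunit):
    """A different protein binds the existing complex."""
    return complex_ + (subunit,)

# Build a hypothetical hetero-complex from the three basic steps:
c = ("A",)
c = dimerize(c)          # A homodimer: ('A', 'A')
c = add_subunit(c, "B")  # A2B: ('A', 'A', 'B')
c = cyclize(c, 3)        # a ring of three A2B units
print(len(c))  # 9 subunits in total
```

Because each step takes a complex and returns a complex, arbitrarily deep nestings of the same three moves generate the large but structured space of assemblies the table organises.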

“We’re bringing a lot of order into the messy world of protein complexes,” explains Sebastian Ahnert of the Cavendish Laboratory at the University of Cambridge, a physicist who regularly tangles with biological problems. “Proteins can go through several iterations of these simple steps, adding more and more levels of complexity and resulting in a huge variety of structures. What we’ve made is a classification based on these underlying principles that helps people get a handle on the complexity.”

The exceptions to the rule are interesting in their own right, adds Sebastian, and are the subject of ongoing studies.

“By analysing the tens of thousands of protein complexes for which three-dimensional structures have already been experimentally determined, we could see repeating patterns in the assembly transitions that occur – and with new data from mass spectrometry we could start to see the bigger picture,” says Joe.

“The core work for this study is in theoretical physics and computational biology, but it couldn’t have been done without the mass spectrometry work by our colleagues at Oxford University,” adds Sarah Teichmann, Research Group Leader at the European Bioinformatics Institute (EMBL-EBI) and the Wellcome Trust Sanger Institute. “This is yet another excellent example of how extremely valuable interdisciplinary research can be.”

Story Source:

The above post is reprinted from materials provided by EMBL-EBI.

Bacteria bioengineered with synthetic circadian clocks

from
BIOENGINEER.ORG http://bioengineer.org/bacteria-bioengineered-with-synthetic-circadian-clocks/

Many of the body’s processes follow a natural daily rhythm or so-called circadian clock, so there are certain times of the day when a person is most alert, when the heart is most efficient, and when the body prefers sleep. Even bacteria have a circadian clock, and in a December 10 Cell Reports study, researchers designed synthetic microbes to learn what drives this clock and how it might be manipulated.


Photo Source: wikimedia.org

“The answer seems to be especially simple: the clock proteins sense the metabolic activity in the cell,” says senior author Michael Rust, of the University of Chicago’s Institute for Genomics and Systems Biology.

“This is probably because cyanobacteria are naturally photosynthetic–they’re actually responsible for a large fraction of the photosynthesis in the ocean–and so whether the cell is energized or not is a good indication of whether it’s day or night,” he says. For photosynthetic bacteria, every night is a period of starvation, and it is likely that the circadian clock helps them grow during the day in order to prepare for nightfall.

To make their discovery, Rust and his colleagues had to separate metabolism from light exposure, and they did this by using a synthetic biology approach to make photosynthetic bacteria capable of living on sugar rather than sunlight.

“I was surprised that this actually worked–by genetically engineering just one sugar transporter, it was possible to give these bacteria a completely different lifestyle than the one they have had for hundreds of millions of years,” Rust says. The findings indicate that the cyanobacteria’s clock can synchronize to metabolism outside of the context of photosynthesis. “This suggests that in the future this system could be installed in microbes of our own design to carry out scheduled tasks,” he says.

In a related analogy, engineers who developed electrical circuits found that synchronizing each step of a computation to an internal clock made increasingly complicated tasks possible, ultimately leading to the computers we have today. “Perhaps in the future we’ll be able to use synthetic clocks in engineered microbes in a similar way,” Rust says.

Other researchers have shown that molecules involved in the mammalian circadian clock are also sensitive to metabolism, but our metabolism is not so closely tied to daylight as the cyanobacteria’s. Therefore, our bodies’ clocks evolved to also sense light and dark.

“This is presumably why, in mammals, there are specialized networks of neurons that receive light input from the retina and send timing signals to the rest of the body,” Rust explains. “So, for us it’s clearly a mixture of metabolic cues and light exposure that are important.”

The bacteria that live inside of our guts, however, most likely face similar daily challenges as those experienced by cyanobacteria because we give them food during the day when we eat but not during the night. “It’s still an open question whether the bacteria that live inside us have ways of keeping track of time,” Rust says.

Story Source:

The above post is reprinted from materials provided by Cell Press.

Thursday, December 10, 2015

Armor plating with built-in transparent ceramic eyes

from
BIOENGINEER.ORG http://bioengineer.org/armor-plating-with-built-in-transparent-ceramic-eyes/

Usually, it’s a tradeoff: If you want maximum physical protection, whether from biting predators or exploding artillery shells, that generally compromises your ability to see. But sea-dwelling creatures called chitons have figured out a way around that problem: Tiny eyes are embedded within their tough protective shells, with their transparent lenses made of the same ceramic material as the rest of their shells — and just as tough.

biomaterial

Close-up image of part of the shell of a chiton (Acanthopleura granulata) shows the two kinds of sensory organs that cover the shell surface. The eyes are the dark bumps with shiny centers. The exact function of other sensory organs called aesthetes (small bumps with black centers) is not yet known. Photo Credit: Researchers

These armor-plated eyes could provide a model for protective armor for soldiers or workers in hazardous surroundings, say researchers at MIT, Harvard University, and elsewhere who analyzed the structure and properties of these uniquely hardy optical systems. Their work is described this week in the journal Science by MIT professor Christine Ortiz; recent MIT graduate and Harvard postdoc Ling Li; recent MIT graduate Matthew Connors; MIT assistant professor Mathias Kolle; Joanna Aizenberg from Harvard University; and Daniel Speiser from the University of South Carolina.

These chitons, a species called Acanthopleura granulata, have hundreds of tiny eyes dotting the surface of their tough shells. The researchers demonstrated that these are true eyes, capable of forming focused images. They also showed that unlike the eyes of almost all other living creatures, which are made primarily of protein, these eyes are made of the mineral aragonite, the same ceramic material as the rest of the creatures’ shells.

These mineral-based eyes, Li says, “allow the animal to monitor its environment with the protective armor shell. The majority of the shell is opaque, and only the eyes are transparent.”

Unlike most mollusks, chitons’ shells are made of eight overlapping plates, which allow them some flexibility. The little creatures — about the size of a potato chip, and resembling prehistoric trilobites — are found in many parts of the world, but are little noticed, as they resemble the rocks they adhere to.

Since these chitons live in the intertidal zone — meaning they are sometimes underwater and sometimes exposed to air — their eyes need to be able to work in both environments. Using experimental measurements and theoretical modeling, the team was able to show that the eyes are able to focus light and form images within the photoreceptive chamber underneath the lens in both air and water.
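As a rough illustration of why working in both air and water is nontrivial, the focusing power of a curved refracting surface depends on the index contrast between the lens material and its surroundings. The sketch below is a back-of-the-envelope estimate, not the paper’s optical model; the aragonite refractive index and the radius of curvature are assumed, illustrative values:

```python
# Hedged sketch: focal length of a single curved refracting surface
# (object at infinity): f = n_lens * R / (n_lens - n_medium).
# All numbers below are assumptions for illustration, not from the study.
n_aragonite = 1.53   # approximate ordinary refractive index of aragonite
n_air, n_water = 1.00, 1.33
R = 20e-6            # assumed lens radius of curvature, ~20 micrometres

def focal_length(n_lens, n_medium, radius):
    """Focal length of one spherical refracting surface, object at infinity."""
    return n_lens * radius / (n_lens - n_medium)

f_air = focal_length(n_aragonite, n_air, R)
f_water = focal_length(n_aragonite, n_water, R)
print(f"focal length in air:   {f_air * 1e6:.0f} um")
print(f"focal length in water: {f_water * 1e6:.0f} um")
```

With these assumed numbers the focal length roughly triples under water, because the index contrast drops from 0.53 to 0.20 — which is why demonstrating image formation in both media was a real test for the chiton lens.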

The team used specialized high-resolution X-ray tomography equipment at Argonne National Laboratory to probe the 3-D architecture of the tiny eyes, which are each less than a tenth of a millimeter across. Using other material characterization techniques, they were able to determine the size, shape, and crystal orientation of the crystalline grains that make up these lenses — critical to understanding their optical performance, Li says.

While others had long ago noted the chitons’ tiny eyes, it had not been demonstrated that they were capable of forming focused images, as opposed to being simple photoreceptive areas. “A lot of people thought the eyes were so small, there was no way this small lens would be capable of forming an image,” Connors says. But the team isolated some of these lenses and “we were able to produce images,” he says.
Ultimately, the research could lead to the design of bio-inspired materials to provide both physical protection and optical visibility at the same time. “Can we design some kind of structural material,” Li asks, “with additional capabilities for monitoring the environment?”

That remains to be seen, but the new understanding of how Acanthopleura granulata accomplishes that trick should provide some helpful clues, he says.

“High-resolution structure and property studies of the chiton system provide fascinating discoveries into materials-level tradeoffs imposed by the disparate functional requirements, in this case protection and vision, and are key to extracting design principles for multifunctional bio-inspired armor,” says Ortiz, the Morris Cohen Professor of Materials Science and Engineering and MIT’s dean for graduate education.

Peter Fratzl, a professor of biomaterials at the Max Planck Institute of Colloids and Interfaces in Potsdam, Germany, who was not involved in this research, says, “This is a truly impressive example of a multifunctional material.”

“In many instances, materials represent a compromise between conflicting properties, and materials development is often targeted at finding the best possible compromise,” Fratzl adds. “In this paper, the MIT-Harvard collaboration shows that chitons, a variety of mollusks that lives on rocks, has a visual system fully integrated in its armor. It is really astonishing to see how minerals can be used at the same time to focus light and to provide mechanical protection.”

The work was funded by the Department of Defense, the Army Research Office through the MIT Institute for Soldier Nanotechnologies, the National Science Foundation, and the Department of Energy.

Story Source:

The above post is reprinted from materials provided by MIT News.

A new way to deliver microRNAs for cancer treatment

from
BIOENGINEER.ORG http://bioengineer.org/a-new-way-to-deliver-micrornas-for-cancer-treatment/

Scientists exploit gene therapy to shrink tumors in mice with an aggressive form of breast cancer.

mrni-cancer

MIT researchers developed this hydrogel embedded with triple helix microRNA particles and used it to treat cancer in mice. Photo Credit: João Conde, Nuria Oliva, and Natalie Artzi

Twenty years ago, scientists discovered that short strands of RNA known as microRNA help cells to fine-tune their gene expression. Disruption or loss of some microRNAs has been linked to cancer, raising the possibility of treating tumors by adjusting microRNA levels.

Developing such treatments requires delivering microRNA to tumors, which has proven difficult. However, researchers from MIT have now shown that by twisting RNA strands into a triple helix and embedding them in a biocompatible gel, they can not only deliver the strands efficiently but also use them to shrink aggressive tumors in mice.

Using this technique, the researchers dramatically improved cancer survival rates by simultaneously turning on a tumor-suppressing microRNA and de-activating one that causes cancer. They believe their approach could also be used for delivering other types of RNA, as well as DNA and other therapeutic molecules.

“This is a platform that can deliver any gene of interest,” says Natalie Artzi, a research scientist in MIT’s Institute for Medical Engineering and Science (IMES) and an assistant professor of medicine at Brigham and Women’s Hospital. “This work demonstrates the promise of local delivery in combating cancer. In particular, as it relates to gene therapy, the triplex structure improves RNA stability, uptake, and transfection efficacy.”

Artzi is the senior author of a paper describing the technique in the Dec. 7 issue of Nature Materials. The study’s lead author is IMES postdoc João Conde.
Local delivery

The new technique reflects a shift among cancer researchers toward designing more targeted and selective treatments, Artzi says. “Cancer is perceived as a systemic disease that mandates systemic treatment. However, in some cases, solid tumors can benefit from a local therapy that may include gene therapy or chemotherapy,” she says.

To create their new system, the researchers took advantage of a material previously developed by Artzi and her colleagues, made from two polymers known as dextran and dendrimer, as a tissue glue.

In the new study, Artzi and Conde exploited the ability of dendrimer to form a self-assembled structure with the microRNAs of interest. First, they wound three strands of microRNA together in a triple helix, creating a molecule that is much more stable than a single or double RNA strand. These triplexes then bind to dendrimer molecules, some of which form nanoparticles, and when dextran is added, the injectable formulation gels on top of the solid tumor.

Once placed on the tumor, the gel slowly releases microRNA-dendrimer particles, which are absorbed into the tumor cells. After the particles enter the cells, enzymes cut each triple helix into three separate microRNA strands.

MicroRNA alters gene expression by disrupting messenger RNA molecules, which carry DNA’s instructions to cells’ protein-building machinery. The human genome is believed to encode more than 1,000 microRNAs, and many of these can cause disease when not working properly.

In this study, the researchers delivered two targeted microRNA sequences, plus a third strand whose only function is to keep the helix stable. One of the strands mimics the actions of a naturally occurring microRNA called miR-205, which is frequently silenced in cancer cells. The other blocks a microRNA called miR-221, which is often overactive in cancer cells.

The researchers tested the microRNA delivery platform in mice implanted with triple-negative breast tumors, which lack the three most common breast cancer markers: estrogen receptor, progesterone receptor, and Her2. Such tumors are usually very difficult to treat.

Treating these mice with microRNA delivered as a triple helix was far more effective than standard chemotherapy treatments, the researchers found. With the triple helix treatment, tumors shrank 90 percent and the mice survived for up to 75 days, compared with less than a week for other treatments (including single and double strands of the same microRNAs).

The microRNA combination used in this study appears to work by interfering with cancer cells’ ability to grow and to stick to other cells, the researchers found.

“This is a great proof of principle,” says Mauro Ferrari, the president and CEO of the Houston Methodist Research Institute, who was not involved in the study. “In many ways microRNA is the ultimate opportunity for targeted cancer therapy, but the problem of delivering it has been intractable.”
Identifying targets

Artzi and Conde now plan to look for combinations of microRNA that could combat other types of tumors. This delivery technique will likely work best with accessible solid tumors, such as breast, colon, and possibly brain tumors, Artzi says.

This type of microRNA therapy could also be used to prevent tumors from spreading throughout the body. Several microRNA sequences have already been found to play a role in this process, known as metastasis.
“There are so many microRNAs that are involved in metastasis. It’s really an underexplored field,” Conde says.

The researchers are also looking into using this technique for delivering other types of nucleic acids, including short interfering RNA for RNA interference and DNA for gene therapy. “We really want to identify the right targets and use this platform to deliver them in a very effective way,” Artzi says.

Story Source:

The above post is reprinted from materials provided by MIT News.

Electrically induced arrangement improves bacteria detectors

from
BIOENGINEER.ORG http://bioengineer.org/electrically-induced-arrangement-improves-bacteria-detectors/

Viruses that attack bacteria – bacteriophages – can be fussy: they only inject their genetic material into the bacteria that suit them. This fussiness can be exploited to detect specific species of bacteria. Scientists from Warsaw have just demonstrated that bacteriophage-based biosensors are much more efficient if the bacteriophages’ orientation is ordered in an electric field before they are deposited on the sensor surface.

electro

Scientists from the Institute of Physical Chemistry of the Polish Academy of Sciences in Warsaw posing as bacteriophages on a surface, trying to catch bacteria (silver balls). The ‘bacteriophages’ on the left are unordered and ineffective, while those on the right have been electrically ordered. Photo Credit: IPC PAS, Grzegorz Krzyzewski

In the future, bacteriophage-based biosensors may offer an effective method of detecting particular species of bacteria. The sensitivity of current sensors coated with bacteriophages, that is, viruses attacking bacteria, is far from ideal. In the journal Sensors and Actuators B: Chemical, researchers from the Institute of Physical Chemistry of the Polish Academy of Sciences (IPC PAS) in Warsaw, Poland, have presented a method for creating layers of bacteriophages which significantly increases the efficiency of detection. This achievement, funded by the Polish National Science Centre through SONATA and MAESTRO grants, paves the way for the production of low-cost biosensors capable of rapidly and reliably detecting specific species of bacteria.

The late detection and identification of bacteria have been – and, unfortunately, still are – the cause of many a tragedy. Reliable and rapid medical tests are lacking: even these days, doctors only find out after several hours which bacterial species is wreaking havoc in the body of the patient. As a result, instead of administering the optimal antibiotic at an early stage of the disease, they have to guess – and often get it wrong, with disastrous consequences for the patient.

“Hospital-acquired infections, to which 100 thousand patients in the United States alone succumb each year, are just some of the problems arising from the lack of good methods for the detection of undesirable bacteria. Industrial contamination is no less important. Nobody wants to sell – much less buy – for example, carrot juice with the addition of dangerous bacteria causing typhoid fever or sepsis. However, such cases continue to occur,” says Dr. Jan Paczesny (IPC PAS).

Attempts have been under way for some time to construct sensors to detect bacteria in which the key role is played by bacteriophages. A single phage, with a length of about 200 nanometers, consists of a head (capsid) containing DNA or RNA and a tail through which genetic material is injected into the interior of the bacteria. The mouth of the tail is surrounded by fibrils. They perform a very important function: they are receptors detecting the presence of bacteria and recognizing their species. The bacteriophage cannot take any risks: its genetic material must reach the interior of only those bacteria that have suitably matching genetic machinery. If the phage were to make a mistake and inject its genetic code into the wrong bacteria, then rather than duplicating itself, it would self-destruct.

The specific structure of bacteriophages means that when they are deposited on the surface they are arranged at random, and most of them cannot effectively penetrate the space around them with their receptors in search of bacteria. As a result, only a few bacteriophages in the detection layer of current biosensors can fulfill their role and the equipment’s sensitivity is greatly reduced.

“Phage heads are electrically negatively charged, whereas the filaments penetrating the surroundings are positive. The bacteriophage is therefore an electrically polarized entity. This gave us the idea of ‘ordering’ the bacteriophages using an electric field,” says PhD student Kinga Matuła (IPC PAS).

The idea was simple, but its implementation proved to be far from trivial.

“There is a high pressure of up to 50 atm inside phage heads. This is what enables the bacteriophage to inject its genetic material. That’s fine, except it means that bacteriophages prefer highly saline solutions, because then the pressure difference between the head and the environment is reduced. Such solutions are highly conductive, so the electric field inside them is present only in a thin layer at the surface; further in, it drops to zero. That was the problem. Fortunately, we managed to solve it,” explains PhD student Łukasz Richter (IPC PAS).
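The “thin layer at the surface” the researchers describe is set by the Debye screening length, which shrinks as salt concentration rises. The sketch below is a minimal estimate of that length for a 1:1 electrolyte; the 100 mM concentration is an assumed, illustrative value, not a figure from the study:

```python
import math

# Hedged sketch: Debye screening length in a saline solution, i.e. how
# deep an electric field penetrates before mobile ions screen it out.
# lambda_D = sqrt(eps_r * eps0 * kB * T / (2 * NA * e^2 * I)),
# for a 1:1 electrolyte with ionic strength I in mol/m^3.
eps0 = 8.854e-12   # vacuum permittivity, F/m
eps_r = 78.5       # relative permittivity of water near 25 C
kB = 1.381e-23     # Boltzmann constant, J/K
NA = 6.022e23      # Avogadro constant, 1/mol
e = 1.602e-19      # elementary charge, C
T = 298.0          # temperature, K

def debye_length(ionic_strength_molar):
    """Debye length (m) for a 1:1 electrolyte at concentration in mol/L."""
    I = ionic_strength_molar * 1000.0  # mol/L -> mol/m^3
    return math.sqrt(eps_r * eps0 * kB * T / (2 * NA * e**2 * I))

# Assumed 100 mM salt: the field is screened within about a nanometre.
print(f"Debye length at 100 mM: {debye_length(0.1) * 1e9:.2f} nm")
```

A screening length of roughly one nanometre at physiological-like salt levels illustrates why passing current through the solution would not orient phages in the bulk, and why the team instead applied an external field acting through space.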

During their experiments, the Warsaw-based scientists, led by Prof. Robert Holyst, used an appropriately selected constant electric field. Bacteriophages were deposited on a carefully constructed glass substrate, coated first with titanium and then with gold. The titanium served as the glue binding the gold to the glass, while the gold was the main ‘bait’ to which the bacteriophages bound. Unfortunately, bacteriophages are not the only ones that like gold; bacteria do, too. To prevent random bacteria from binding to the gold layer, the empty spaces between the deposited bacteriophages were covered with a neutral protein (casein).

T4 bacteriophages that attack Escherichia coli bacteria were used to construct the new detection layer at the IPC PAS. The phages for the studies were prepared by the team of Prof. Marcin Łoś from the Department of Biology, University of Gdańsk.

“Virtually all of the bacteriophages in our detection layers stand on the substrate’s surface, so they can easily spread out their receptors. The situation is somewhat similar to what is seen at a rock concert, where fans often raise their hands high above their heads in unison and wave them cheerfully in all directions. We have the impression that our phages are even happier, because we try not to place them too close to each other. After all, the neighbours’ receptors should not interfere with each other,” says Prof. Holyst with a smile.

Meticulous laboratory tests have established that the bacteriophage layers produced using the method developed at the IPC PAS trap up to four times more bacteria than existing layers. As a result, their sensitivity approaches that of the best biosensors, which rely on other, more time-consuming and expensive methods for the detection of bacteria.

The method of preparing layers of ordered bacteriophages developed in Warsaw has numerous advantages. The creation of an external electric field, which is necessary to put the bacteriophages in order, is not very costly. The field acts through space and therefore direct contact of the electrodes with the solution is not required. The presence of an external electric field also means there is significantly less physicochemical interference than in the situation where current is passed through the solution. At the same time, the method is fast and universal: it can be used not only for bacteriophages but also for other electrically polarized molecules.

Story Source:

The above post is reprinted from materials provided by the Institute of Physical Chemistry of the Polish Academy of Sciences (Instytut Chemii Fizycznej Polskiej Akademii Nauk) in Warsaw.

Wednesday, December 9, 2015

Accidental discovery of how to stay young for longer works in worms

from
BIOENGINEER.ORG http://bioengineer.org/accidental-discovery-of-how-to-stay-young-for-longer-works-in-worms/

Living longer usually means a longer dotage, but wouldn’t it be enticing to extend young adulthood instead? It’s such an appealing prospect that scientists who are announcing success with roundworms are keen to be clear they are a long way from achieving it in humans.

life

Photo Credit: Candida.Performa via Flickr

“We don’t want people to get the impression they can take the drug we used in our study to extend their own teens or early twenties,” says lead author Michael Petrascheck from The Scripps Research Institute (TSRI), California.

“We may have done this in worms, but there are millions of years of evolution between worms and humans.

“We think it is exciting to see that extending lifespan by extending young adulthood can be done at all,” he says.

In the study to be published in the journal eLife, the TSRI-led team administered an antidepressant called mianserin to Caenorhabditis elegans, a type of roundworm used frequently in research. In 2007, they discovered that the drug increases the lifespan of roundworms by 30-40 per cent. Their new goal was to investigate how.

The team treated thousands of worms with either water or mianserin and looked at the activity of genes as the worms aged. First, they measured the activity of genes in young adults as a reference point against which to monitor the aging process. Reproductive maturity begins in day-old roundworms and they live for 2-3 weeks on average.

As the worms aged, the team observed dramatic changes in gene expression. However, the changes occurred in a way that came as a complete surprise. Groups of genes that together play a role in the same function were found to change expression in opposing directions.

They have called this newly-discovered phenomenon ‘transcriptional drift’. By examining data from mice and from 32 human brains aged 26 to 106 years, they confirmed that it also occurs in mammals.

“The orchestration of gene expression no longer seemed coordinated as the organism aged and the results were confusing because genes related to the same function were going up and down at the same time,” says Petrascheck.

“Transcriptional drift can be used as a new metric for measuring age-associated changes that start in young adulthood,” says first author Sunitha Rangaraju.

“Until now we have been dependent on measuring death rates, which are too low in young adults to provide much data. Having a new tool to study aging could help us make new discoveries, for example to treat genetic predispositions where aging starts earlier, such as Hutchinson-Gilford progeria syndrome,” she says.
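The drift idea can be sketched numerically. The code below is not the paper’s exact statistic; it is a minimal illustration, with made-up expression values, of how a variance-of-log-fold-changes metric flags a gene group whose members move in opposing directions even when the average change is near zero:

```python
import math

# Hedged sketch of a "transcriptional drift"-style metric: the variance
# of per-gene log-fold changes relative to a young-adult reference.
# Genes in one functional group drifting in opposite directions inflate
# the variance even if the mean change is zero. Data are illustrative.
def drift_variance(old_expr, young_expr):
    """Variance of log ratios old/young across genes in one group."""
    lfc = [math.log(o / y) for o, y in zip(old_expr, young_expr)]
    mean = sum(lfc) / len(lfc)
    return sum((x - mean) ** 2 for x in lfc) / len(lfc)

young = [1.0, 1.0, 1.0, 1.0]        # reference expression in young adults
coordinated = [2.0, 2.0, 2.0, 2.0]  # whole group scales up together
drifting = [4.0, 0.25, 4.0, 0.25]   # same group, opposing directions

print(drift_variance(coordinated, young))  # zero: coordinated, no drift
print(drift_variance(drifting, young))     # large: drift
```

Note that the coordinated group has a large mean change but zero drift, while the drifting group has near-zero mean change and large drift — which matches the puzzle the researchers describe, where related genes went up and down at the same time.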

Using this new metric revealed that treatment with mianserin can suppress transcriptional drift, but only when administered at the right time of life. By 10 days old, treated worms still had the gene expression characteristics of a three-day-old — physiologically they were seven days younger. But by 12 days, the physiological changes required to extend lifespan were complete and lifelong exposure to the drug had no additional effect. Mortality rates were shifted in parallel by 7-8 days across the treated worms’ lifespan, confirming the finding.

Mianserin blocked signals related to the regulation of serotonin and this delayed physiological changes associated with age, including the newly-identified transcriptional drift and degenerative processes that lead to death. The effect only occurred during young adulthood and the duration of this period of life was significantly extended.

“How much of our findings with regards to lifespan extension will spill over to mammals is anyone’s guess, for example the extension of lifespan might not be as dramatic,” says Petrascheck.

“However, we are already excited about the fact that we observed the phenomenon of transcriptional drift in species ranging from worms and mice to humans.”

The findings have opened up many new avenues of research for the team and are likely to spawn a wealth of research by others. For example, a significant next step for the team will be to test the effect in mice and to investigate whether there are any side effects. Different environments could produce different results and this will need to be explored. They would also like to test whether the impact is different for different organs in the body.

The discovery of ‘transcriptional drift’ raises the prospect of the phenomenon providing a new general metric for aging, but again this requires further research.

In terms of extending teenage and young adult life in humans, just the idea invites a wealth of questions about the potential social implications and whether this would be as desirable as it first seems.

Story Source:

The above post is reprinted from materials provided by eLife.

Cooperating bacteria isolate cheaters

from
BIOENGINEER.ORG http://bioengineer.org/cooperating-bacteria-isolate-cheaters/

In natural microbial communities, different bacterial species often exchange nutrients by releasing amino acids and vitamins into their growth environment, thus feeding other bacterial cells.

bacteria corperation

The white boxes show the concentrations of amino acids, which are high in the vicinity of cooperative bacteria (above). In contrast, virtually no amino acids were detectable in areas surrounding non-cooperative bacteria (below). This study is the first one to successfully use modern chemical-analytic techniques to visualize the spatial distribution of metabolites and in this way explain the growth of a bacterial colony consisting of cooperating bacteria. Photo Credit: S. Pande, F. Kaftan / Max Planck Institute for Chemical Ecology; S. Lang / Department of Bioinformatics, Friedrich Schiller University Jena.

Even though the released nutrients are energetically costly to produce, bacteria benefit from nutrients their bacterial partners provide in return. Hence, this process is a cooperative exchange of metabolites. Scientists at the Max Planck Institute for Chemical Ecology and the Friedrich Schiller University in Jena have shown that bacteria that do not actively contribute to metabolite production can be excluded from the cooperative benefits. The research team demonstrated that cooperative cross-feeding partners growing on two-dimensional surfaces are protected from being exploited by opportunistic, non-cooperating bacteria. Under these conditions, non-cooperating bacteria are spatially excluded from the exchanged amino acids. This protective effect probably stabilizes cooperative cross-feeding interactions in the long run. (The ISME Journal, December 2015)

The Research Group “Experimental Ecology and Evolution” headed by Dr. Christian Kost is investigating how cooperative interactions between organisms have evolved. In this context, the scientists study a special type of division of labor that is very common in nature, namely the reciprocal exchange of nutrients among unicellular bacteria. For these tiny organisms it is often advantageous to divide the labor of certain metabolic processes rather than performing all biochemical functions autonomously. Bacteria that engage in this cooperative exchange of nutrients can save a significant amount of energy.

Indeed, in a previous study the researchers had already demonstrated that this division of metabolic labor can positively affect bacterial growth. In the new study, they addressed the question of how such cooperative interactions can persist if non-cooperating bacteria consume amino acids without providing nutrients in return. The resulting evolutionary disadvantage for cooperative cells could lead to a collapse of the cross-feeding interaction.

To experimentally verify this possibility, the scientists monitored co-cultures of cooperating and non-cooperating bacteria. For this, they genetically engineered “cooperators” of two bacterial species that released increased amounts of certain amino acids into their environment. “As a matter of fact, non-cooperators grew better than cooperators in a well-mixed liquid medium, because under these conditions, they had unrestricted access to the amino acids in the medium. Their growth, however, was considerably reduced when placed on a two-dimensional surface,” said Kost, summarizing the results of the experiments. A more detailed analysis revealed that non-cooperating bacteria could only exist at the very fringe of colonies consisting of cooperating bacteria.

For their study the scientists combined different methods and techniques. The basis was a new research approach called “synthetic ecology,” in which certain mutations are rationally introduced into bacterial genomes. The resulting bacterial mutants are then co-cultured and their ecological interactions analyzed. In parallel, colleagues from the Department of Bioinformatics at the Friedrich Schiller University developed computer models to simulate these interactions. Finally, chemical analyses using mass-spectrometric imaging were instrumental for visualizing the bacterial metabolites. Only the combination of microbiological methods with chemical-analytic approaches and computer simulations enabled the scientists to understand and elucidate this phenomenon.

“The fact that such a simple principle can effectively stabilize such a complex interaction suggests that similar phenomena may play important roles in natural bacterial communities,” Christian Kost states. After all, bacteria occur predominantly in so-called biofilms — these are surface-attached slime layers that consist of many bacterial species. Known examples include bacteria causing dental plaque or bacterial communities that are used in wastewater treatment plants. Moreover, biofilms are highly relevant for medical research: They do not only play important roles for many infectious diseases by protecting bacterial pathogens from antibiotics or the patients’ immune responses, but are also highly problematic when colonizing and spreading on the surfaces of medical implants.

This new study has elucidated that cooperating bacteria form cell clusters and in this way exclude non-cooperating bacteria from their community. “The importance of this mechanism is due to the fact that no complicated or newly-evolved condition, such as the recognition of potential cooperation partners, needs to be fulfilled to effectively stabilize this long-term partnership. Two cooperating bacterial strains and a two-dimensional surface are sufficient for this protective effect to occur,” explains Kost.

The study raises many new and exciting questions the researchers plan to address in the future. For example, they are interested in whether similar synergistic effects occur when more than two bacterial partners are involved. In their natural habitats, more than two bacterial species likely participate in such cooperative interactions, leading to rather complex interaction networks. Moreover, the amino acid-producing bacterial mutants were synthetically generated for this study; whether naturally evolved “cooperators” occurring in habitats like soil show similar dynamics remains to be verified. Given that bacteria frequently occur in biofilms, cooperative cross-feeding is probably much more widespread than previously thought. Understanding the factors and mechanisms that promote or inhibit bacterial growth could thus provide important clues on how to fight harmful bacteria or make better use of beneficial ones.

Story Source:

The above post is reprinted from materials provided by Max Planck Institute for Chemical Ecology.