MIT Research News' Journal
Thursday, September 29th, 2016
12:00a
Nanosensors could help determine tumors’ ability to remodel tissue

MIT researchers have designed nanosensors that can profile tumors and may yield insight into how they will respond to certain therapies. The system is based on levels of enzymes called proteases, which cancer cells use to remodel their surroundings.
Once adapted for humans, this type of sensor could be used to determine how aggressive a tumor is and help doctors choose the best treatment, says Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and Electrical Engineering and Computer Science and a member of MIT’s Koch Institute for Integrative Cancer Research.
“This approach is exciting because people are developing therapies that are protease-activated,” Bhatia says. “Ideally you’d like to be able to stratify patients based on their protease activity and identify which ones would be good candidates for these therapies.”
Once injected into the tumor site, the nanosensors are activated by a magnetic field that is harmless to healthy tissue. After being cleaved by the target tumor proteases, the sensors release fragments that are excreted in the urine, where they can be easily detected in less than an hour.
Bhatia and Polina Anikeeva, the Class of 1942 Associate Professor of Materials Science and Engineering, are the senior authors of the paper, which appears in the journal Nano Letters. The paper’s lead authors are Koch Institute postdoc Simone Schurle and graduate student Jaideep Dudani.
Heat and release
Tumors, especially aggressive ones, often have elevated protease levels. These enzymes help tumors spread by cleaving proteins that compose the extracellular matrix, which normally surrounds cells and holds them in place.
In 2014, Bhatia and colleagues reported using nanoparticles that interact with a type of protease known as matrix metalloproteinases (MMPs) to diagnose cancer. In that study, the researchers delivered nanoparticles carrying peptides, or short protein fragments, designed to be cleaved by the MMPs. If MMPs were present, hundreds of cleaved peptides would be excreted in the urine, where they could be detected with a simple paper test similar to a pregnancy test.
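The readout logic lends itself to a back-of-the-envelope model. The sketch below is purely illustrative, with assumed Michaelis-Menten kinetics and placeholder constants rather than the authors' measured values; it shows why a protease-rich tumor floods the urine with cleaved reporter in a fixed window while a protease-poor one barely registers.

```python
def cleaved_reporters(mmp_conc, substrate, k_cat=1.0, k_m=5.0, t=60.0):
    """Reporters released after t minutes of cleavage, using a crude
    Michaelis-Menten rate, v = k_cat * [E] * [S] / (K_m + [S]), and
    ignoring substrate depletion except for a hard cap. Every constant
    here is a placeholder, not a measured value."""
    rate = k_cat * mmp_conc * substrate / (k_m + substrate)
    return min(substrate, rate * t)

# A high-MMP tumor saturates the readout; a low-MMP one barely registers.
print(cleaved_reporters(mmp_conc=2.0, substrate=100.0))  # 100.0 (capped)
print(cleaved_reporters(mmp_conc=0.1, substrate=100.0))  # ~5.7
```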
In the new study, the researchers wanted to adapt the sensors so that they could report on the traits of tumors in a known location. To do that, they needed to ensure that the sensors were only producing a signal from the target organ, unaffected by background signals that might be produced in the bloodstream. They first designed sensors that could be activated with light once they reached their target. That required the use of ultraviolet light, however, which doesn’t penetrate very far into tissue.
“We started thinking about what kinds of energy we might use that could penetrate further into the body,” says Bhatia, who is also a member of MIT’s Institute for Medical Engineering and Science.
To achieve that, Bhatia teamed up with Anikeeva, who specializes in using magnetic fields to remotely activate materials. The researchers decided to encapsulate Bhatia’s protease-sensing nanoparticles along with magnetic particles that heat up when exposed to an alternating magnetic field. The field is produced by a small magnetic coil that changes polarity some half million times per second.
The heat-sensitive material that encapsulates the particles disintegrates as the magnetic particles heat up, allowing the protease sensors to be released. However, the particles do not produce enough heat to damage nearby tissue.
“It has been challenging to examine tumor-specific protease activities from patients’ biofluids because these proteases are also present in blood and other organs,” says Ji Ho (Joe) Park, an associate professor of bio and brain engineering at the Korea Advanced Institute of Science and Technology.
“The strength of this work is the magnetothermally responsive protease nanosensors with spatiotemporal controllability,” says Park, who was not involved in the research. “With these nanosensors, the MIT researchers could assay protease activities involved more in tumor progression by reducing off-target activation significantly.”
Choosing treatments
In a study of mice, the researchers showed that they could use these particles to correctly profile different types of colon tumors based on how much protease they produce.
Cancer treatments based on proteases, now in clinical trials, consist of antibodies that target a tumor protein but have “veils” that prevent them from being activated before reaching the tumor. The veils are cleaved by proteases, so this therapy would be most effective for patients with high protease levels.
The MIT team is also exploring using this type of sensor to image cancerous lesions that spread to the liver from other organs. Surgically removing such lesions works best if there are fewer than four, so counting them could help doctors choose the best treatment.
Bhatia says this type of sensor could be adapted to other tumors as well, because the magnetic field can penetrate deep into the body. This approach could also be expanded to make diagnoses based on detecting other kinds of enzymes, including those that cut sugar chains or lipids.
The study was funded in part by the Ludwig Center for Molecular Oncology, a Koch Institute Support Grant from the National Cancer Institute, and a Core Center Grant from the National Institute of Environmental Health Sciences.

10:45a
Algorithm could enable visible-light-based imaging for medical devices, autonomous vehicles

MIT researchers have developed a technique for recovering visual information from light that has scattered because of interactions with the environment — such as passing through human tissue.
The technique could lead to medical-imaging systems that use visible light, which carries much more information than X-rays or ultrasound waves, or to computer vision systems that work in fog or drizzle. The lack of such vision systems has been a major obstacle to the deployment of self-driving cars.
In experiments, the researchers fired a laser beam through a “mask” — a thick sheet of plastic with slits cut through it in a certain configuration, such as the letter A — and then through a 1.5-centimeter “tissue phantom,” a slab of material designed to mimic the optical properties of human tissue for purposes of calibrating imaging systems. Light scattered by the tissue phantom was then collected by a high-speed camera, which could measure the light’s time of arrival.
From that information, the researchers’ algorithms were able to reconstruct an accurate image of the pattern cut into the mask.
“The reason our eyes are sensitive only in this narrow part of the spectrum is because this is where light and matter interact most,” says Guy Satat, a graduate student at the MIT Media Lab and first author on the new paper. “This is why X-ray is able to go inside the body, because there is very little interaction. That’s why it can’t distinguish between different types of tissue, or see bleeding, or see oxygenated or deoxygenated blood.”
The imaging technique’s potential applications in automotive sensing may be even more compelling than those in medical imaging, however. Many experimental algorithms for guiding autonomous vehicles are highly reliable under good illumination, but they fall apart completely in fog or drizzle; computer vision systems misinterpret the scattered light as having reflected off of objects that don’t exist. The new technique could address that problem.
Satat’s coauthors on the new paper, published today in Scientific Reports, are three other members of the Media Lab’s Camera Culture group: Ramesh Raskar, the group’s leader, Satat’s thesis advisor, and an associate professor of media arts and sciences; Barmak Heshmat, a research scientist; and Dan Raviv, a postdoc.
Expanding circles
Like many of the Camera Culture group’s projects, the new system relies on a pulsed laser that emits ultrashort bursts of light, and a high-speed camera that can distinguish the arrival times of different groups of photons, or light particles. When a light burst reaches a scattering medium, such as a tissue phantom, some photons pass through unmolested; some are only slightly deflected from a straight path; and some bounce around inside the medium for a comparatively long time. The first photons to arrive at the sensor have thus undergone the least scattering; the last to arrive have undergone the most.
Where previous techniques have attempted to reconstruct images using only those first, unscattered photons, the MIT researchers’ technique uses the entire optical signal. Hence its name: all-photons imaging.
The data captured by the camera can be thought of as a movie — a two-dimensional image that changes over time. To get a sense of how all-photons imaging works, suppose that light arrives at the camera from only one point in the visual field. The first photons to reach the camera pass through the scattering medium unimpeded: They show up as just a single illuminated pixel in the first frame of the movie.
The next photons to arrive have undergone slightly more scattering, so in the second frame of the video, they show up as a small circle centered on the single pixel from the first frame. With each successive frame, the circle expands in diameter, until the final frame just shows a general, hazy light.
The problem, of course, is that in practice the camera is registering light from many points in the visual field, whose expanding circles overlap. The job of the researchers’ algorithm is to sort out which photons illuminating which pixels of the image originated where.
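To make that picture concrete, here is a toy forward model of the "movie" (a sketch in arbitrary units, not the paper's actual optics): one scene point lights a single pixel in frame 0, then paints progressively wider, fainter rings in later frames, and two scene points produce overlapping rings that the reconstruction must disentangle.

```python
import numpy as np

def point_spread_movie(n_frames, size, center, speed=1.0):
    """Toy time-resolved point spread: frame 0 holds the ballistic
    (unscattered) photons as one lit pixel; frame t holds photons
    delayed by t units, drawn as a ring of radius speed*t whose
    brightness fades as scattering increases."""
    yy, xx = np.mgrid[0:size, 0:size]
    r = np.hypot(yy - center[0], xx - center[1])
    movie = np.zeros((n_frames, size, size))
    movie[0, center[0], center[1]] = 1.0            # ballistic photons
    for t in range(1, n_frames):
        ring = np.abs(r - speed * t) < 0.5          # photons arriving in frame t
        movie[t][ring] = 1.0 / (1.0 + speed * t)    # fainter with more scattering
    return movie

# Two scene points whose expanding rings overlap in later frames --
# exactly the ambiguity the algorithm has to sort out.
movie = point_spread_movie(10, 64, (20, 20)) + point_spread_movie(10, 64, (40, 44))
```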
Cascading probabilities
The first step is to determine how the overall intensity of the image changes in time. This provides an estimate of how much scattering the light has undergone: If the intensity spikes quickly and tails off quickly, the light hasn’t been scattered much. If the intensity increases slowly and tails off slowly, it has.
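In code, that first step amounts to summarizing the movie's total intensity frame by frame. A minimal sketch follows; the full-width-at-half-maximum proxy is our stand-in, and the paper's actual estimator may differ.

```python
import numpy as np

def temporal_spread(movie, frame_rate):
    """Crude scattering estimate: sum each frame to a single intensity
    value, then measure the full width at half maximum of that curve.
    A narrow spike means mostly unscattered light; a broad, slow-tailed
    curve means heavy scattering."""
    intensity = movie.sum(axis=(1, 2))                 # total light per frame
    above = np.where(intensity >= intensity.max() / 2.0)[0]
    return (above[-1] - above[0] + 1) / frame_rate     # width in seconds
```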
On the basis of that estimate, the algorithm considers each pixel of each successive frame and calculates the probability that it corresponds to any given point in the visual field. Then it goes back to the first frame of video and, using the probabilistic model it has just constructed, predicts what the next frame of video will look like. With each successive frame, it compares its prediction to the actual camera measurement and adjusts its model accordingly. Finally, using the final version of the model, it deduces the pattern of light most likely to have produced the sequence of measurements the camera made.
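That loop has the flavor of classic iterative deconvolution. Below is a Richardson-Lucy-style sketch of it, not the authors' exact algorithm; `psf_movie` is an assumed input giving, for each frame, the expected sensor pattern per unit of light at each scene point, and the scene estimate is corrected multiplicatively frame by frame.

```python
import numpy as np

def reconstruct(movie, psf_movie):
    """movie: (T, H, W) camera frames. psf_movie: (T, N, N) matrices
    mapping N scene points to the N = H*W sensor pixels at each frame.
    Returns a probability distribution over scene points."""
    n_frames = movie.shape[0]
    n_points = psf_movie.shape[2]
    scene = np.full(n_points, 1.0 / n_points)           # uniform prior
    for t in range(n_frames):
        predicted = psf_movie[t] @ scene                # model's guess for frame t
        ratio = movie[t].ravel() / np.maximum(predicted, 1e-9)
        scene *= psf_movie[t].T @ ratio                 # boost points that explain the data
        scene /= scene.sum()                            # keep it a distribution
    return scene
```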
One limitation of the current version of the system is that the light emitter and the camera are on opposite sides of the scattering medium. That limits its applicability for medical imaging, although Satat believes that it should be possible to use fluorescent particles known as fluorophores, which can be injected into the bloodstream and are already used in medical imaging, as a light source. And fog scatters light much less than human tissue does, so reflected light from laser pulses fired into the environment could be good enough for automotive sensing.
“People have been using what is known as time gating, the idea that photons not only have intensity but also time-of-arrival information and that if you gate for a particular time of arrival you get photons with certain specific path lengths and therefore [come] from a certain specific depth in the object,” says Ashok Veeraraghavan, an assistant professor of electrical and computer engineering at Rice University. “This paper is taking that concept one level further and saying that even the photons that arrive at slightly different times contribute some spatial information.”
“Looking through scattering media is a problem that’s of large consequence,” he adds. But he cautions that the new paper does not entirely solve it. “There’s maybe one barrier that’s been crossed, but there are maybe three more barriers that need to be crossed before this becomes practical,” he says.

1:19p
Scientists identify neurons devoted to social memory

Mice have brain cells that are dedicated to storing memories of other mice, according to a new study from MIT neuroscientists. These cells, found in a region of the hippocampus known as the ventral CA1, store “social memories” that help shape the mice’s behavior toward each other.
The researchers also showed that they can suppress or stimulate these memories by using a technique known as optogenetics to manipulate the cells that carry these memory traces, or engrams.
“You can change the perception and the behavior of the test mouse by either inhibiting or activating the ventral CA1 cells,” says Susumu Tonegawa, the Picower Professor of Biology and Neuroscience and director of the RIKEN-MIT Center for Neural Circuit Genetics at the Picower Institute for Learning and Memory.
Tonegawa is the senior author of the study, which appears in the Sept. 29 online edition of Science. MIT postdoc Teruhiro Okuyama is the paper’s lead author.
Tracking social memory
In a well-known study published in 2005, researchers at Caltech identified neurons in the human brain that respond specifically to images of celebrities such as Halle Berry or Brad Pitt, leading them to conclude that the brain has cells devoted to storing memories of people who are familiar.
Many of these cells were found in and around the hippocampus, which is also where the brain stores memories of events, known as episodic memories. The MIT team suspected that in mice, social memories may be stored in the hippocampus’ ventral CA1, in part because previous studies have suggested that this region is not involved in storing episodic memories.
The researchers set out to test this hypothesis using optogenetics: By engineering neurons of the ventral CA1 to express light-sensitive proteins, they could artificially activate or inhibit these cells by shining light on them as the mice interacted with each other.
First, the researchers allowed one mouse, known as the “test mouse,” to spend time with another mouse for two hours, letting the mice become familiar with each other. Soon after, the test mouse was placed in a cage with the familiar mouse and a new mouse.
Under normal circumstances, mice prefer to interact with mice they haven’t seen before. However, when the researchers used light to shut off a circuit that connects the ventral CA1 to another part of the brain called the nucleus accumbens, the test mouse interacted with both of the other mice equally, because its memory of the familiar mouse was blocked.
“The inhibition of ventral CA1 leads to impairment of the social memory,” Okuyama says. “They cannot show any preference for the novel mouse. They approach both mice equally.”
On the other hand, when the researchers stimulated ventral CA1 cells while the test mouse was interacting with a novel mouse, the test mouse began to treat the novel mouse as if they were already acquainted.
This effect was specific to social interactions: Interfering with the ventral CA1 did not have any effect on the mice’s ability to recognize objects or locations that they had previously seen.
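A standard way to quantify this kind of preference test (our illustration; the paper's exact metric may differ) is a discrimination index over interaction times:

```python
def discrimination_index(time_with_novel, time_with_familiar):
    """Ranges from -1 to +1: positive values mean the test mouse
    preferred the novel mouse (intact social memory), and values near
    zero mean no preference, as when ventral CA1 output is silenced."""
    total = time_with_novel + time_with_familiar
    return (time_with_novel - time_with_familiar) / total

print(discrimination_index(120.0, 50.0))  # ~0.41: clear novelty preference
print(discrimination_index(85.0, 83.0))   # ~0.01: memory blocked
```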
Re-awakening memories
When the researchers monitored activity of neurons in the ventral CA1, they found that after a mouse was familiarized with another mouse, a certain population of these neurons began to respond specifically to the familiar mouse.
These patterns could be seen even after the mice appeared to “forget” the once-familiar mice. After about 24 hours of separation, the test mice began to treat their former acquaintances as strangers, but the neurons that had been tuned to the familiar mice still fired, although not as frequently. This suggests that the memories are still being stored even though the test mice no longer appear to remember the mice they once knew.
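One way to express that persistence numerically (again an illustrative sketch, not the paper's analysis) is a per-neuron selectivity score that stays positive even as absolute firing rates fall:

```python
import numpy as np

def selectivity(rates_familiar, rates_novel):
    """Per-neuron selectivity for the familiar mouse: +1 if a cell
    fires only for the familiar mouse, -1 if only for the novel one.
    Inputs are assumed trial-averaged firing rates in Hz."""
    rf, rn = np.asarray(rates_familiar), np.asarray(rates_novel)
    return (rf - rn) / (rf + rn + 1e-9)

# A cell tuned to the familiar mouse keeps a positive score after 24 hours,
# even though its rate has dropped and the behavior looks like forgetting.
print(selectivity([8.0], [2.0]))   # [0.6] right after familiarization
print(selectivity([3.0], [1.0]))   # [0.5] a day later: weaker, same tuning
```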
Furthermore, the researchers were able to “re-awaken” these memories using optogenetics. In one experiment, when the test mouse first interacted with another mouse, the researchers used a light-sensitive protein called channelrhodopsin to tag only the ventral CA1 cells that were turned on by the familiarization treatment. When these neurons were re-activated with light 24 hours later, the memory of the once-familiar mouse returned. The researchers were also able to artificially link the memory of the familiar mouse with a positive or negative emotion.
Tonegawa’s lab has previously used this technique to identify hippocampal cells that store engrams representing episodic memories. The new study offers strong evidence that memory traces for specific individuals are being stored in the neurons of the ventral CA1, Tonegawa says. “There is some kind of persistent change that takes place in those cells as long as memory is still detectable,” he says.
Larry Young, a professor of psychiatry and director of the Center for Translational Social Neuroscience at Emory University, described the study as “one of the most fascinating papers related to social neuroscience I’ve ever seen.”
“In this paper, they identified a subset of cells in a particular brain region that is the engram — a set of cells that through its connections in the nucleus accumbens, actually holds the memory of another individual,” says Young, who was not involved in the study. “They showed that the same group of neurons fired repeatedly in response to the same animal, which is absolutely incredible. Then to go in and control those specific cells is really on the cutting edge.”
The MIT researchers are now investigating a possible link between social memory and autism. Some people with autism have a mutation of the receptor for a hormone called oxytocin, which is abundant on the surface of ventral CA1 cells. Tonegawa’s lab hopes to uncover whether these mutations might impair social interactions.
The research was funded by the RIKEN Brain Science Institute, the Howard Hughes Medical Institute, the JPB Foundation, and the Japan Society for the Promotion of Science.