Sunday, April 30, 2017

Visualizing nuclear radiation

Extraordinary decontamination efforts are underway in areas affected by the 2011 nuclear accidents in Japan. The creation of total radioactivity maps is essential for thorough cleanup, but the most common methods, according to Kyoto University's Toru Tanimori, do not 'see' enough ground-level radiation.
"The best methods we have currently are labor intensive, and to measure surface radiation accurately," he says, "complex analysis is needed."
In their latest work published in Scientific Reports, Tanimori and his group explain how gamma-ray imaging spectroscopy is more versatile and robust, resulting in a clearer image.
"We constructed an Electron Tracking Compton Camera (ETCC) to detect nuclear gamma rays quantitatively. Typically this is used to study radiation from space, but we have shown that it can also measure contamination, such as at Fukushima."
The imaging revealed what Tanimori calls "micro hot spots" around the Fukushima Daiichi Nuclear Power Plant, even in regions that had already been considered decontaminated. In fact, decontamination in some regions appeared to have been far less effective than other measurement methods had indicated.
Current methods for measuring gamma rays do not reliably pinpoint the source of the radiation. According to Tanimori, "radiation sources including distant galaxies can disrupt the measurements."
The key to creating a clear image is capturing both the direction and the energy of all gamma rays emitted in the vicinity, effectively producing a "color" image of the radiation.
"Quantitative imaging produces a surface radioactivity distribution that can be converted to show dosage on the ground," says Tanimori. "The ETCC makes true images of the gamma rays based on proper geometrical optics."
This distribution can then be converted relatively easily into ground dosage levels, and it shows that most gamma rays scatter and spread through the air, complicating decontamination efforts.
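A back-of-the-envelope version of that conversion can be sketched in Python. The point-source approximation, the dose-rate constant `k`, and the neglect of air attenuation and scatter are all simplifying assumptions for illustration, not the ETCC's actual reconstruction:

```python
def ground_dose_rate(activity_map, cell_size_m, x, y, height_m=1.0,
                     k=1.3e-13):
    """Estimate the dose rate (Sv/s) at height `height_m` above point (x, y)
    by summing inverse-square contributions from each contaminated surface
    cell. `activity_map` maps (i, j) grid indices to activity in becquerels;
    `k` is an illustrative dose-rate constant, and air attenuation and
    scatter are ignored."""
    total = 0.0
    for (i, j), activity_bq in activity_map.items():
        dx = (i + 0.5) * cell_size_m - x
        dy = (j + 0.5) * cell_size_m - y
        r_squared = dx * dx + dy * dy + height_m * height_m
        total += k * activity_bq / r_squared
    return total
```

Even this toy model shows why localized hot spots are easy to miss: directly above a single contaminated cell, doubling the detector height cuts the estimated dose rate roughly fourfold.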
"Our ETCC will make it easier to respond to nuclear emergencies," continues Tanimori. "Using it, we can detect where and how radiation is being released. This will not only help decontamination, but also the eventual dismantling of nuclear reactors."

Triple-threat cancer-fighting polymer capsules for guided drug delivery

Chemists at the University of Alabama at Birmingham have designed triple-threat cancer-fighting polymer capsules that bring the promise of guided drug delivery closer to preclinical testing.
These multilayer capsules show three traits that have been difficult to achieve in a single entity. They have good imaging contrast that allows detection with low-power ultrasound, they can stably and efficiently encapsulate the cancer drug doxorubicin, and both a low- and higher-power dose of ultrasound can trigger the release of that cargo.
These three features create a guided drug delivery system to target solid tumors. Therapeutic efficacy can be further improved through surface modifications to boost targeting capabilities. Diagnostic low-power ultrasound then could visualize the nanocapsules as they concentrated in a tumor, and therapeutic higher-dose ultrasound would release the drug at ground zero, sparing the rest of the body from dose-limiting toxicity.
This precise control of when and where doxorubicin or other cancer drugs are released could offer a noninvasive alternative to cancer surgery or systemic chemotherapy, the UAB researchers report in the journal ACS Nano, which has an impact factor of 13.3.
"We envision an entirely different approach to treating solid human tumors of numerous pathologic subtypes, including common metastatic malignancies such as breast, melanoma, colon, prostate and lung, utilizing these capsules as a delivery platform," said Eugenia Kharlampieva, Ph.D., an associate professor in the Department of Chemistry, UAB College of Arts and Sciences. "These capsules can protect encapsulated therapeutics from degradation or clearance prior to reaching the target and have ultrasound contrast as a means of visualizing the drug release. They can release their encapsulated drug cargo in specific locations via externally applied ultrasound exposure."
Kharlampieva -- who creates her novel "smart" particles while working at the intersection of polymer chemistry, nanotechnology and biomedical science -- says there is an urgent, and so far unmet, need for such an easily fabricated, guided drug delivery system.
The UAB researchers, led by Kharlampieva and co-first authors Jun Chen and Sithira Ratnayaka, use alternating layers of biocompatible tannic acid and poly(N-vinylpyrrolidone), or TA/PVPON, to build their microcarriers. The layers are formed around a sacrificial core of solid silica or porous calcium carbonate that is dissolved after the layers are complete.
By varying the number of layers, the molecular weight of PVPON or the ratio of shell thickness to capsule diameter, the researchers were able to alter the physical traits of the capsules and their sensitivity to diagnostic ultrasound, at power levels below the FDA maximum for clinical imaging and diagnosis.
For example, one-fourth of empty microcapsules made with four layers of TA/low-molecular weight PVPON were ruptured by three minutes of ultrasound, while capsules made of 15 layers of TA/low-molecular weight PVPON or capsules made from four layers of TA/high-molecular weight PVPON showed no rupture. The ruptured capsules had a lower mechanical rigidity that made them more sensitive to ultrasound pressure changes. Experiments showed that the ratio of the thickness of the capsule wall to the diameter of the capsule is a key variable for sensitivity to rupture.
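The thickness-to-diameter dependence can be captured in a small sketch; the cutoff value below is purely illustrative and is not taken from the paper:

```python
def thickness_to_diameter_ratio(shell_thickness_nm, diameter_um):
    """Dimensionless ratio of shell wall thickness to capsule diameter."""
    return (shell_thickness_nm * 1e-3) / diameter_um  # convert nm to um

def likely_ruptures(shell_thickness_nm, diameter_um, max_ratio=0.012):
    """Capsules whose walls are thin relative to their diameter are less
    rigid and more sensitive to ultrasound pressure changes; `max_ratio`
    is a hypothetical cutoff used here only for illustration."""
    return thickness_to_diameter_ratio(shell_thickness_nm, diameter_um) < max_ratio
```

For instance, a 50-nanometer shell on a 5-micrometer capsule gives a ratio of 0.01.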
To test the ultrasound imaging contrast of the microcapsules, the UAB researchers made capsules that were 5 micrometers wide, about twice as wide as the capsules used in the rupture experiments. This size is still small enough to pass through capillaries in the lung, while the larger size is known to greatly improve ultrasound contrast for various microparticles. For comparison, red blood cells have a diameter of about 6 to 8 micrometers.
Researchers found that 5-micrometer-wide, empty capsules that were made with eight layers of TA/low-molecular weight PVPON showed an ultrasound contrast comparable to the commercially available microsphere contrast agent Definity. When the UAB capsules -- which have a shell thickness of about 50 nanometers -- were loaded with doxorubicin, the ultrasound imaging contrast increased two- to eightfold compared to empty capsules, depending on the mode of ultrasound imaging used. These doxorubicin-loaded capsules were highly stable, with no change in ultrasound imaging contrast after six months of storage. Exposure to serum, known to deposit proteins on various microparticles, did not extinguish the ultrasound imaging contrast of the TA/PVPON microcapsules.
A therapeutic dose of ultrasound was able to rupture 50 percent of the 5-micrometer, doxorubicin-loaded microcapsules, releasing enough doxorubicin to induce 97 percent cytotoxicity in human breast adenocarcinoma cells in culture. Adenocarcinoma cells that were incubated with intact doxorubicin-loaded microcapsules remained viable.
Thus, Kharlampieva says, these TA/PVPON capsules have strong potential as "theranostic" agents for efficient cancer therapy in conjunction with ultrasound. The term theranostic refers to nanoparticles or microcapsules that can double as diagnostic imaging agents and as therapeutic drug-delivery carriers.
The next important preclinical step, Kharlampieva says, will be studies in animal models, conducted in collaboration with Mark Bolding, Ph.D., assistant professor in the UAB Department of Radiology, and Jason Warram, Ph.D., assistant professor in the UAB Department of Otolaryngology, to explore how long the UAB capsules persist in blood circulation and where they distribute in the body.

Quickly assessing brain bleeding in head injuries using new device

In a clinical trial conducted among adults at 11 hospitals, researchers have shown that a commercially available hand-held EEG device, approved by the U.S. Food and Drug Administration in 2016, can quickly determine, with 97 percent accuracy, whether a person with a head injury is likely to have brain bleeding and needs further evaluation and treatment.
According to the Centers for Disease Control and Prevention, about 2.5 million Americans each year show up to the emergency room with suspected head injuries. Most of these people receive a CT scan, and more than 90 percent of the scans show no structural brain injury, creating needless radiation exposure and medical costs estimated at about $1,200 per scan.
In a report on their clinical trial, described online March 31 in Academic Emergency Medicine, the researchers say the new device -- which measures electrical activity in the brain and then uses an algorithm to decide if a patient is likely to have brain bleeding -- can help with clinical decision-making and triage of patients, and could reduce the need for CT scans.
"Before our study, there were no objective, quantitative measures of mild head injury other than imaging," says lead investigator Daniel Hanley Jr., M.D., the Legum Professor of Neurological Medicine and director of the Brain Injury Outcomes Program at the Johns Hopkins University School of Medicine. "This work opens up the possibility of diagnosing head injury in a very early and precise way.
"This technology is not meant to replace the CT scan in patients with mild head injury, but it provides the clinician with additional information to facilitate routine clinical decision-making," says Hanley. "If someone with a mild head injury was evaluated on the sports or battlefield, then this test could assist in the decision of whether or not he or she needs rapid transport to the hospital. Alternatively, if there is an accident with many people injured, medical personnel could use the device to triage which patients would need to have CT scans and who should go first. Those showing a 'positive' for brain injury would go first."
The study only looked at adults and didn't assess how well the device could predict traumatic brain injuries in children or teens.
The study, Hanley says, was designed to test the accuracy and effectiveness of AHEAD 300, a device developed by BrainScope Company Inc. of Bethesda, Maryland, that is now available to a limited audience through a centers of excellence program. Throughout its eight years of development, the company has tested this and prior generations of the device in multiple human trials. The point of the device is to assess the likelihood that a patient has more than 1 milliliter of bleeding in the brain and needs immediate evaluation by medical personnel.
To begin, the researchers recruited 720 adults who came to 11 Emergency Departments across the nation between February and December 2015 with a closed head injury, meaning the skull was intact. Participants were between 18 and 85 years old, and 60 percent were men. Upon entry to the Emergency Department, each physician performed standard clinical assessments for head injuries used at their site. A trained technician then administered the Standardized Assessment of Concussion and the Concussion Symptom Inventory to characterize the patient's symptoms, and then used the AHEAD 300 device to measure electroencephalogram (EEG) data -- essentially tracking and recording brain wave patterns -- from patients while they reclined quietly for five to 10 minutes. The device includes a disposable headset that records the EEG data from five regions on the forehead and feeds the signals back to the hand-held AHEAD 300 device in real time. In addition, the technician entered certain clinical/demographic information into the device, including age; the Glasgow Coma Scale score, which rates how conscious a person is; and if there was a loss of consciousness related to the injury.
The device was programmed to read approximately 30 specific features of brain electrical activity and analyze them with an algorithm that compares the patient's pattern of brain activity against patterns considered normal. For example, it looked at how fast or slow information traveled from one side of the brain to the other, and whether electrical activity in the two sides of the brain was coordinated or whether one side was lagging.
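As a rough illustration of that kind of normative comparison (the device's actual feature set and algorithm are proprietary and not described here), one could flag a recording when several features deviate strongly from population norms:

```python
def classify_eeg(features, norm_mean, norm_std, z_cutoff=2.0, min_flags=3):
    """Count features whose z-score against normative values exceeds
    `z_cutoff`, and return a yes/no judgment. The feature names, cutoff,
    and flag count are hypothetical stand-ins, not BrainScope's algorithm."""
    flags = sum(
        abs((value - norm_mean[name]) / norm_std[name]) > z_cutoff
        for name, value in features.items()
    )
    return "yes" if flags >= min_flags else "no"
```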
The accuracy of the device was tested using CT scans from the participants. The presence of any blood within the intracranial cavity was considered a positive finding, indicating brain bleeding. After 72 to 96 hours, the researchers followed up with phone calls to the patients and/or looked at medical records after 30 days to further confirm the accuracy of each participant's injury status.
Of the 720 patients, 564 turned out not to have traumatic brain injuries, and 156 did have them, as determined by independently measured and judged CT scan assessments.
On the basis of AHEAD 300 classification, the researchers sorted patients into "yes" or "no" categories, indicating likely traumatic brain injury with more than 1 milliliter of bleeding or not. Of the 564 patients without brain bleeding, as confirmed with CT scans, 291 were scored by the AHEAD 300 as likely not having a brain injury. Of the 156 patients with confirmed brain bleeding, 144, or 92 percent, were assessed by the AHEAD 300 classification as likely to have an injury. The remaining 12 participants, or 8 percent, had intracranial bleeding that the device missed, and five of them, or 3 percent, had more than 1 milliliter of blood in the brain.
Because a binary yes/no classification conveys no information about how close a patient is to the cutoff, the researchers then created three categories -- "yes," "no" and "maybe" -- to see whether this boosted the accuracy of the device. The maybe category included a small number of patients with greater-than-usual abnormal EEG activity that was not statistically high enough to be definitively positive. When the results were recalculated on the three-tier system, the sensitivity for detecting a traumatic brain injury increased to 97 percent, with 152 of 156 traumatic head injuries detected, including 99 percent of those with 1 milliliter or more of bleeding in the brain. None of the four false negatives required surgery, returned to the hospital because of the injury or needed additional brain imaging.
The trial results also show that, with the yes/no classification, the device correctly predicted the absence of potentially dangerous brain bleeding 52 percent of the time. Using the yes/no/maybe classification, the device classified 281 patients as likely having a brain injury and correctly predicted the absence of injury 39 percent of the time. The researchers say these predictive capabilities improve on the clinical criteria currently used to decide whether to order a CT scan -- the New Orleans Criteria and the Canadian CT Head Rule -- and that the device predicted the absence of brain bleeding more than 70 percent of the time in people with no more than one symptom of brain injury, such as disorientation, headache or amnesia.
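The headline accuracy figures follow directly from the counts reported above:

```python
def percent(numerator, denominator):
    """Round a fraction to a whole percentage."""
    return round(100 * numerator / denominator)

bleeds = 156      # CT-confirmed brain bleeds among the 720 participants
no_bleeds = 564   # CT-confirmed absence of bleeding

print(percent(144, bleeds))     # two-tier sensitivity: 92
print(percent(152, bleeds))     # three-tier sensitivity: 97
print(percent(291, no_bleeds))  # bleeding correctly ruled out: 52
```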
As with a typical EEG, the test doesn't cause any type of sensation or risk. There is a small chance of skin irritation from the discs that read the electrical activity.
Although an exact cost hasn't been set by BrainScope, the maker of the device, the company says it will be a fraction of the cost of a CT scanner, which starts at $90,000 and goes up to $2.5 million depending on the capabilities, and it will be cheaper and significantly faster to administer. In September 2016, the device was cleared by the Food and Drug Administration for use in a clinical setting.

Realistic 3-D immersive visualization of unborn babies

Parents may soon be able to watch their unborn babies grow in realistic 3-D immersive visualizations, thanks to new technology that transforms MRI and ultrasound data into a 3-D virtual reality model of a fetus, according to research being presented next week at the annual meeting of the Radiological Society of North America (RSNA).
MRI provides high-resolution fetal and placental imaging with excellent contrast. It is generally used in fetal evaluation when ultrasound cannot provide sufficiently high-quality images.
Researchers in Brazil created virtual reality 3-D models based on fetal MRI results. Sequentially acquired MRI slices are used to begin construction of the model. A segmentation process follows, in which the physician selects the body parts to be reconstructed in 3-D. Once an accurate 3-D model is created -- including the womb, umbilical cord, placenta and fetus -- the virtual reality device can be programmed to incorporate the model.
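A minimal sketch of the stack-then-segment step, using a simple intensity threshold in place of the interactive, physician-guided selection the researchers describe:

```python
import numpy as np

def segment_volume(mri_slices, low, high):
    """Stack 2-D MRI slices into a 3-D volume and keep voxels whose
    intensity falls within [low, high]. Real fetal segmentation is
    anatomical and interactive; thresholding is only a stand-in."""
    volume = np.stack(mri_slices, axis=0)      # shape: (slice, row, col)
    return (volume >= low) & (volume <= high)  # boolean 3-D mask
```

The resulting boolean mask marks which voxels belong to the selected structure; a surface extracted from that mask is what the headset renders.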
"The 3-D fetal models combined with virtual reality immersive technologies may improve our understanding of fetal anatomical characteristics and can be used for educational purposes and as a method for parents to visualize their unborn baby," said study co-author Heron Werner Jr., M.D., Ph.D., from the Clínica de Diagnóstico por Imagem, in Rio de Janeiro, Brazil.
The virtual reality fetal 3-D models are remarkably similar to the postnatal appearance of the newborn baby. They recreate the entire internal structure of the fetus, including a detailed view of the respiratory tract, which can aid doctors in assessing abnormalities.
For the virtual reality device, Dr. Werner and colleagues used the latest-generation Oculus Rift 2 headset. Oculus Rift 2 places the user in an immersive environment, complete with heartbeat sounds derived from the ultrasound of the fetus. Users can study the 3-D fetal anatomy simply by moving their head.
"The experience with the Oculus Rift has been wonderful," Dr. Werner said. "It provides fetal images that are sharper and clearer than ultrasound and MR images viewed on a traditional display."
The technology has numerous potential applications, including assessment of fetal airway patency. Airway patency, or the state of airways being open and unblocked, is an important issue for a developing fetus. For example, if ultrasound showed an abnormal mass near the fetal airway, physicians could use the 3-D images and the headset to assess the entire length of the airway and make better informed decisions about delivery.
The technology also can help coordinate care with multidisciplinary teams and provide better visual information to parents to help them understand malformations and treatment decisions.
"The physicians can have access to an immersive experience on the clinical case that they are working on, having the whole internal structure of the fetus in 3-D in order to better visualize and share the morphological information," Dr. Werner said. "We believe that these images will help facilitate a multidisciplinary discussion about some pathologies in addition to bringing a new experience for parents when following the development of their unborn child."
The researchers have used the technique on patients at a clinic in Rio de Janeiro, including cases where the fetus had evidence of an abnormality that required postnatal surgery. They hope to use the technology more broadly over the next year.

New ultrasound technique is first to image inside live cells

Researchers at The University of Nottingham have developed a breakthrough technique that uses sound rather than light to see inside live cells, with potential applications in stem-cell transplants and cancer diagnosis.
The new nanoscale ultrasound technique uses shorter-than-optical wavelengths of sound and could even rival the optical super-resolution techniques that won the 2014 Nobel Prize in Chemistry.
This new kind of sub-optical phonon (sound) imaging provides invaluable information about the structure, mechanical properties and behaviour of individual living cells at a scale not achieved before.
Researchers from the Optics and Photonics group in the Faculty of Engineering, University of Nottingham, are behind the discovery, which is published in the paper 'High resolution 3D imaging of living cells with sub-optical wavelength phonons' in the journal, Scientific Reports.
"People are most familiar with ultrasound as a way of looking inside the body -- in the simplest terms we've engineered it to the point where it can look inside an individual cell. Nottingham is currently the only place in the world with this capability," said Professor Matt Clark, who contributed to the study.
In conventional optical microscopy, which uses light (photons), the size of the smallest object you can see (or the resolution) is limited by the wavelength.
For biological specimens, the wavelength cannot be pushed much below that of blue light, because the energy carried by photons in the ultraviolet (and at shorter wavelengths) is high enough to destroy the bonds that hold biological molecules together, damaging the cells.
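The familiar Abbe diffraction limit makes this concrete; the numerical aperture value below is just an assumed example:

```python
def abbe_limit_nm(wavelength_nm, numerical_aperture=1.0):
    """Smallest resolvable feature under the Abbe diffraction limit:
    d = wavelength / (2 * NA)."""
    return wavelength_nm / (2 * numerical_aperture)

# Blue light at ~450 nm resolves, at best, features around 225 nm;
# going below that optically would require ultraviolet photons energetic
# enough to damage the specimen, which is the constraint described above.
print(abbe_limit_nm(450))  # 225.0
```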
Optical super-resolution imaging also has distinct limitations in biological studies, because the fluorescent dyes it uses are often toxic, and it requires large amounts of light and long observation times to reconstruct an image, both of which can damage cells.
Unlike light, sound does not carry a high-energy payload. This has enabled the Nottingham researchers to use shorter wavelengths, see smaller structures and reach higher resolutions without damaging the cell biology.
"A great thing is that, like ultrasound on the body, ultrasound in the cells causes no damage and requires no toxic chemicals to work. Because of this we can see inside cells that one day might be put back into the body, for instance as stem-cell transplants," adds Professor Clark.

Live cell imaging using a smartphone

A recent study from Uppsala University shows how smartphones can be used to make movies of living cells, without the need for expensive equipment. The study is published in the open access journal PLOS ONE, making it possible for laboratories around the world to replicate the setup.
Live cell imaging is a very powerful tool for studying how cells respond to different treatments, such as drugs or toxins. However, microscopes and equipment for live imaging are often very expensive.
In the present study, old standard inverted microscopes, which are abundant at universities and hospitals, were upgraded to high-quality live imaging stations using a few 3D-printed parts, off-the-shelf electronics and a smartphone. The upgraded systems provided excellent cell culture conditions and enabled high-resolution imaging of living cells.
"What we have done in this project isn't rocket science, but it shows you how 3D-printing will transform the way scientists work around the world. 3D-printing has the potential to give researchers with limited funding access to research methods that were previously too expensive," says Johan Kreuger, senior lecturer at the Department of Medical Cell Biology at Uppsala University.
"The technology presented here can readily be adapted and modified according to the specific need of researchers, at a low cost. Indeed, in the future, it will be much more common that scientists create and modify their own research equipment, and this should greatly propel technology development," says Johan Kreuger.

Brain imaging headband measures how our minds align when we communicate

Great ideas so often get lost in translation -- from the math teacher who can't get through to his students, to a stand-up comedian who bombs during an open mic night.
But how can we measure whether our audiences understand what we're trying to convey? And better yet, how can we improve that exchange?
Drexel University biomedical engineers, in collaboration with Princeton University psychologists, are using a wearable brain-imaging device to see just how brains sync up when humans interact. It is one of many applications for this functional near-infrared spectroscopy (or fNIRS) system, which uses light to measure neural activity during real-life situations and can be worn like a headband.
Published in Scientific Reports, a new study shows that the fNIRS device can successfully measure brain synchronization during conversation. The technology can now be used to study everything from doctor-patient communication, to how people consume cable news.
"Being able to look at how multiple brains interact is an emerging context in social neuroscience," said Hasan Ayaz, PhD, an associate research professor in Drexel's School of Biomedical Engineering, Science and Health Systems, who led the research team. "We live in a social world where everybody is interacting. And we now have a tool that can give us richer information about the brain during everyday tasks -- such as natural communication -- that we could not receive in artificial lab settings or from single brain studies."
The current study is based on previous research from Uri Hasson, PhD, associate professor at Princeton University, who has used functional Magnetic Resonance Imaging (fMRI) to study the brain mechanisms underlying the production and comprehension of language. Hasson has found that a listener's brain activity actually mirrors the speaker's brain when he or she is telling a story about a real-life experience. And higher coupling is associated with better understanding.
However, traditional brain imaging methods have certain limitations. In particular, fMRI requires subjects to lie down motionlessly in a noisy scanning environment. With this kind of set-up, it is not possible to simultaneously scan the brains of multiple individuals who are speaking face-to-face.
This is why the Drexel researchers sought to investigate whether the portable fNIRS system could be a more effective approach to probe the brain-to-brain coupling question in natural settings.
For their study, a native English speaker and two native Turkish speakers told an unrehearsed, real-life story in their native language. Their stories were recorded and their brains were scanned using fNIRS. Fifteen English speakers then listened to the recording, in addition to a story that was recorded at a live storytelling event.
The researchers targeted the prefrontal and parietal areas of the brain, which include cognitive and higher order areas that are involved in a person's capacity to discern beliefs, desires and goals of others. They hypothesized that a listener's brain activity would correlate with the speaker's only when listening to a story they understood (the English version). A second objective of the study was to compare the fNIRS results with data from a similar study that had used fMRI, in order to compare the two methods.
They found that when the fNIRS device measured the oxygenation and deoxygenation of blood in the test subjects' brains, the listeners' brain activity matched only that of the English speaker. These results also correlated with the previous fMRI study.
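Speaker-listener coupling of this kind is typically quantified as a lagged correlation between the two recorded time series. The sketch below assumes a simple Pearson correlation with a fixed lag, not the study's exact analysis pipeline:

```python
import numpy as np

def coupling(speaker, listener, lag=0):
    """Pearson correlation between a speaker's and a listener's signal,
    with the listener shifted back by `lag` samples (listener activity
    is reported to trail the speaker's)."""
    s = np.asarray(speaker, dtype=float)
    l = np.asarray(listener, dtype=float)
    if lag > 0:
        s, l = s[:-lag], l[lag:]
    return float(np.corrcoef(s, l)[0, 1])
```

Scanning over a range of lags and taking the peak correlation is one common way to estimate how far the listener's brain trails the speaker's.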
This new research supports fNIRS as a viable future tool to study brain-to-brain coupling during social interaction. The system can be used to offer important information about how to better communicate in many different environments, including classrooms, business meetings, political rallies and doctors' offices.
"This would not be feasible with fMRI. There are too many challenges," said Banu Onaral, PhD, the H. H. Sun Professor in the School of Biomedical Engineering, Science and Health Systems. "Now that we know fNIRS is a feasible tool, we are moving into an exciting era when we can know so much more about how the brain works as people engage in everyday tasks."