Vulcan Centaur rocket is 'go' for historic Jan. 8 launch of private Peregrine moon lander
Grant to validate blood test for early breast cancer detection
Inhalable sensors could enable early lung cancer detection
A new technology developed at MIT could make diagnosing lung cancer as easy as inhaling nanoparticle sensors and then taking a urine test that reveals whether a tumor is present.
The new diagnostic is based on nanosensors that can be delivered by an inhaler or a nebulizer. If the sensors encounter cancer-linked proteins in the lungs, they produce a signal that accumulates in the urine, where it can be detected with a simple paper test strip.
This approach could potentially replace or supplement the current gold standard for diagnosing lung cancer, low-dose computed tomography (CT). It could have an especially significant impact in low- and middle-income countries that don’t have widespread availability of CT scanners, the researchers say.
“Around the world, cancer is going to become more and more prevalent in low- and middle-income countries. The epidemiology of lung cancer globally is that it’s driven by pollution and smoking, so we know that those are settings where accessibility to this kind of technology could have a big impact,” says Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and of Electrical Engineering and Computer Science at MIT, and a member of MIT’s Koch Institute for Integrative Cancer Research and the Institute for Medical Engineering and Science.
Bhatia is the senior author of the paper, which appears today in Science Advances. Qian Zhong, an MIT research scientist, and Edward Tan, a former MIT postdoc, are the lead authors of the study.
Inhalable particles
To help diagnose lung cancer as early as possible, the U.S. Preventive Services Task Force recommends that heavy smokers over the age of 50 undergo annual CT scans. However, not everyone in this target group receives these scans, and the high false-positive rate of the scans can lead to unnecessary, invasive tests.
Bhatia has spent the last decade developing nanosensors for use in diagnosing cancer and other diseases, and in this study, she and her colleagues explored the possibility of using them as a more accessible alternative to CT screening for lung cancer.
These sensors consist of polymer nanoparticles coated with a reporter, such as a DNA barcode, that is cleaved from the particle when the sensor encounters enzymes called proteases, which are often overactive in tumors. Those reporters eventually accumulate in the urine and are excreted from the body.
Previous versions of the sensors, which targeted other cancer sites such as the liver and ovaries, were designed to be given intravenously. For lung cancer diagnosis, the researchers wanted to create a version that could be inhaled, which could make it easier to deploy in lower-resource settings.
“When we developed this technology, our goal was to provide a method that can detect cancer with high specificity and sensitivity, and also lower the threshold for accessibility, so that hopefully we can improve the resource disparity and inequity in early detection of lung cancer,” Zhong says.
To achieve that, the researchers created two formulations of their particles: a solution that can be aerosolized and delivered with a nebulizer, and a dry powder that can be delivered using an inhaler.
Once the particles reach the lungs, they are absorbed into the tissue, where they encounter any proteases that may be present. Human cells can express hundreds of different proteases, and some of them are overactive in tumors, where they help cancer cells to escape their original locations by cutting through proteins of the extracellular matrix. These cancerous proteases cleave DNA barcodes from the sensors, allowing the barcodes to circulate in the bloodstream until they are excreted in the urine.
In the earlier versions of this technology, the researchers used mass spectrometry to analyze the urine sample and detect DNA barcodes. However, mass spectrometry requires equipment that might not be available in low-resource areas, so for this version, the researchers created a lateral flow assay, which allows the barcodes to be detected using a paper test strip.
The researchers designed the strip to detect up to four different DNA barcodes, each of which indicates the presence of a different protease. No pre-treatment or processing of the urine sample is required, and the results can be read about 20 minutes after the sample is obtained.
“We were really pushing this assay to be point-of-care available in a low-resource setting, so the idea was to not do any sample processing, not do any amplification, just to be able to put the sample right on the paper and read it out in 20 minutes,” Bhatia says.
Accurate diagnosis
The researchers tested their diagnostic system in mice that are genetically engineered to develop lung tumors similar to those seen in humans. The sensors were administered 7.5 weeks after the tumors started to form, a time point that would likely correlate with stage 1 or 2 cancer in humans.
In their first set of experiments in the mice, the researchers measured the levels of 20 different sensors designed to detect different proteases. Using a machine learning algorithm to analyze those results, the researchers identified a combination of just four sensors that was predicted to give accurate diagnostic results. They then tested that combination in the mouse model and found that it could accurately detect early-stage lung tumors.
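The article doesn't name the algorithm, but the panel-selection step is easy to picture. Below is a minimal Python sketch using synthetic data and scikit-learn's forward feature selection as a stand-in for the study's actual pipeline; the readouts, labels, and "informative" sensor indices are all invented for illustration.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic readouts: 100 mice x 20 sensors; tumor status is the label.
n_mice, n_sensors = 100, 20
y = rng.integers(0, 2, size=n_mice)            # 1 = tumor-bearing, 0 = healthy
X = rng.normal(size=(n_mice, n_sensors))
informative = [2, 5, 11, 17]                   # hypothetical "true" panel
X[np.ix_(y == 1, informative)] += 1.0          # tumors shift these four signals

# Greedy forward selection of the 4 sensors that best separate the groups.
clf = LogisticRegression(max_iter=1000)
selector = SequentialFeatureSelector(clf, n_features_to_select=4, cv=5)
selector.fit(X, y)
panel = np.flatnonzero(selector.get_support())
print("Selected sensor panel:", panel)

# Cross-validated accuracy of a classifier restricted to the chosen panel.
accuracy = cross_val_score(clf, X[:, panel], y, cv=5).mean()
print(f"Cross-validated accuracy: {accuracy:.2f}")
```

In the real study the labels would come from the tumor-bearing mouse model and the features from urinary reporter levels; the point here is only the shape of the workflow: evaluate candidate sensors, keep the small subset that best separates tumor-bearing from healthy animals, and validate that panel out of sample.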
For use in humans, it’s possible that more sensors might be needed to make an accurate diagnosis, but that could be achieved by using multiple paper strips, each of which detects four different DNA barcodes, the researchers say.
The researchers now plan to analyze human biopsy samples to see if the sensor panels they are using would also work to detect human cancers. In the longer term, they hope to perform clinical trials in human patients. A company called Sunbird Bio has already run phase 1 trials on a similar sensor developed by Bhatia’s lab, for use in diagnosing liver cancer and a form of hepatitis known as nonalcoholic steatohepatitis (NASH).
In parts of the world where there is limited access to CT scanning, this technology could offer a dramatic improvement in lung cancer screening, especially since the results can be obtained during a single visit.
“The idea would be you come in and then you get an answer about whether you need a follow-up test or not, and we could get patients who have early lesions into the system so that they could get curative surgery or lifesaving medicines,” Bhatia says.
The research was funded by the Johnson & Johnson Lung Cancer Initiative, the Howard Hughes Medical Institute, the Koch Institute Support (core) Grant from the National Cancer Institute, and the National Institute of Environmental Health Sciences.
Additional related work was supported by the Marble Center for Cancer Nanomedicine and the Koch Institute Frontier Research Program via Upstage Lung Cancer.
A Soap Bubble Becomes a Laser
By Katherine Wright
Using a soap bubble, researchers have created a laser that could act as a sensitive sensor for environmental parameters including atmospheric pressure.
[Physics 17, 3] Published Fri Jan 05, 2024
20 of the best places to view the 2024 total solar eclipse
The total eclipse set to take place April 8, 2024, will dazzle everyone who views it. Here are some of the best places to see the 2024 eclipse.
Alien life could thrive in Venus' acidic clouds, new study hints
Is a black hole stuck inside the sun? No, but here's why scientists are asking
NASA to unveil new X-59 'quiet' supersonic jet on Jan. 12
NASA's Perseverance rover captures 360-degree view of Mars' Jezero Crater (video)
Improving patient safety using principles of aerospace engineering
Approximately 13 billion laboratory tests are administered every year in the United States, but not every result is timely or accurate. Laboratory missteps prevent patients from receiving appropriate, necessary, and sometimes lifesaving care. These medical errors are the third-leading cause of death in the nation.
To help reverse this trend, a research team from the MIT Department of Aeronautics and Astronautics (AeroAstro) Engineering Systems Lab and Synensys, a safety management contractor, examined the ecosystem of diagnostic laboratory data. Their findings, including six systemic factors contributing to patient hazards in laboratory diagnostics tests, offer a rare holistic view of this complex network — not just doctors and lab technicians, but also device manufacturers, health information technology (HIT) providers, and even government entities such as the White House. By viewing the diagnostic laboratory data ecosystem as an integrated system, an approach based on systems theory, the MIT researchers have identified specific changes that can lead to safer behaviors for health care workers and healthier outcomes for patients.
A report of the study, which was conducted by AeroAstro Professor Nancy Leveson, who serves as head of the System Safety and Cybersecurity group, along with Research Engineer John Thomas and graduate students Polly Harrington and Rodrigo Rose, was submitted to the U.S. Food and Drug Administration this past fall. Improving the infrastructure of laboratory data has been a priority for the FDA, which contracted the study through Synensys.
Hundreds of hazards, six causes
In a yearlong study that included more than 50 interviews, the Leveson team found the diagnostic laboratory data ecosystem to be vast yet fractured. No one understood how the whole system functioned or the totality of substandard treatment patients received. Well-intentioned workers were being influenced by the system to carry out unsafe actions, MIT engineers wrote.
Test results sent to the wrong patients, incompatible technologies that strain information sharing between the doctor and lab technician, and specimens transported to the lab without guarantees of temperature control were just some of the hundreds of hazards the MIT engineers identified. The sheer volume of potential risks, known as unsafe control actions (UCAs), should not dissuade health care stakeholders from seeking change, Harrington says.
“While there are hundreds of UCAs, there are only six systemic factors that are causing these hazards,” she adds. “Using a system-based methodology, the medical community can address many of these issues with one swoop.”
Four of the systemic factors — decentralization, flawed communication and coordination, insufficient focus on safety-related regulations, and ambiguous or outdated standards — reflect the need for greater oversight and accountability. The two remaining systemic factors — misperceived notions of risk and lack of systems theory integration — call for a fundamental shift in perspective and operations. For instance, the medical community, including doctors themselves, tends to blame physicians when errors occur. Understanding the real risk levels associated with laboratory data and HIT might prompt more action for change, the report’s authors wrote.
“There’s this expectation that doctors will catch every error,” Harrington says. “It’s unreasonable and unfair to expect that, especially when they have no reason to assume the data they're getting is flawed.”
Think like an engineer
Systems theory may be a new concept to the medical community, but the aviation industry has used it for decades.
“After World War II, there were so many commercial aviation crashes that the public was scared to fly,” says Leveson, a leading expert in system and software safety. In the early 2000s, she developed the System-Theoretic Process Analysis (STPA), a technique based on systems theory that offers insights into how complex systems can become safer. The researchers used STPA in their report to the FDA. “Industry and government worked together to put controls and error reporting in place. Today, there are nearly zero crashes in the U.S. What’s happening in health care right now is like having a Boeing 787 crash every day.”
Other engineering principles that work well in aviation, such as control systems, could be applied to health care as well, Thomas says. For instance, closed-loop controls solicit feedback so a system can change and adapt. Having laboratories confirm that physicians received their patients’ test results or investigating all reports of diagnostic errors are examples of closed-loop controls that are not mandated in the current ecosystem, Thomas says.
“Operating without controls is like asking a robot to navigate a city street blindfolded,” Thomas says. “There’s no opportunity for course correction. Closed-loop controls help inform future decision-making, and, at this point in time, they’re missing in the U.S. health-care system.”
The Leveson team will continue working with Synensys on behalf of the FDA. Their next study will investigate diagnostic screenings outside the laboratory, such as at a physician’s office (point of care) or at home (over the counter). Since the start of the Covid-19 pandemic, nonclinical lab testing has surged in the country. About 600 million Covid-19 tests were sent to U.S. households between January and September 2022, according to Synensys. Yet, few systems are in place to aggregate these data or report findings to public health agencies.
“There’s a lot of well-meaning people trying to solve this and other lab data challenges,” Rose says. “If we can convince people to think of health care as an engineered system, we can go a long way in solving some of these entrenched problems.”
The Synensys research contract is part of the Systemic Harmonization and Interoperability Enhancement for Laboratory Data (SHIELD) campaign, an agency initiative that seeks assistance and input in using systems theory to address this challenge.
'Cooling glass' could fight climate change by reflecting solar radiation back into space
Ripples in the oldest known spiral galaxy may shed light on the origins of our Milky Way
James Webb Space Telescope could look for 'carbon-lite' exoplanet atmospheres in search for alien life
Researchers 3D print components for a portable mass spectrometer
Mass spectrometers, devices that identify chemical substances, are widely used in applications like crime scene analysis, toxicology testing, and geological surveying. But these machines are bulky, expensive, and easy to damage, which limits where they can be effectively deployed.
Using additive manufacturing, MIT researchers produced a mass filter, which is the core component of a mass spectrometer, that is far lighter and cheaper than the same type of filter made with traditional techniques and materials.
Their miniaturized filter, known as a quadrupole, can be completely fabricated in a matter of hours for a few dollars. The 3D-printed device is as precise as some commercial-grade mass filters that can cost more than $100,000 and take weeks to manufacture.
Built from durable and heat-resistant glass-ceramic resin, the filter is 3D printed in one step, so no assembly is required. Assembly often introduces defects that can hamper the performance of quadrupoles.
This lightweight, cheap, yet precise quadrupole is one important step in Luis Fernando Velásquez-García’s 20-year quest to produce a 3D-printed, portable mass spectrometer.
“We are not the first ones to try to do this. But we are the first ones who succeeded at doing this. There are other miniaturized quadrupole filters, but they are not comparable with professional-grade mass filters. There are a lot of possibilities for this hardware if the size and cost could be smaller without adversely affecting the performance,” says Velásquez-García, a principal research scientist in MIT’s Microsystems Technology Laboratories (MTL) and senior author of a paper detailing the miniaturized quadrupole.
For instance, a scientist could bring a portable mass spectrometer to remote areas of the rainforest, using it to rapidly analyze potential pollutants without shipping samples back to a lab. And a lightweight device would be cheaper and easier to send into space, where it could monitor chemicals in Earth’s atmosphere or in those of distant planets.
Velásquez-García is joined on the paper by lead author Colin Eckhoff, an MIT graduate student in electrical engineering and computer science (EECS); Nicholas Lubinsky, a former MIT postdoc; and Luke Metzler and Randall Pedder of Ardara Technologies. The research is published in Advanced Science.
Size matters
At the heart of a mass spectrometer is the mass filter. This component uses electric or magnetic fields to sort charged particles based on their mass-to-charge ratio. In this way, the device can measure the chemical components in a sample to identify an unknown substance.
A quadrupole, a common type of mass filter, is composed of four metallic rods surrounding an axis. Voltages are applied to the rods, which produce an electromagnetic field. Depending on the properties of the electromagnetic field, ions with a specific mass-to-charge ratio will swirl through the middle of the filter, while other particles escape out the sides. By varying the mix of voltages, one can target ions with different mass-to-charge ratios.
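This selection mechanism is captured by the standard Mathieu stability analysis of quadrupole fields. The sketch below uses the textbook formulas with invented operating voltages (not the MIT device's actual parameters): only ions whose dimensionless parameters a and q land near the apex of the first stability region pass through; the rest strike the rods.

```python
import numpy as np

E = 1.602176634e-19      # elementary charge (C)
AMU = 1.66053906660e-27  # atomic mass unit (kg)

def mathieu_aq(m_z, U, V, r0, f):
    """Mathieu stability parameters for a singly charged ion.

    m_z: mass-to-charge ratio (Th), U: DC voltage (V), V: RF amplitude (V),
    r0: field radius (m), f: RF drive frequency (Hz). Standard definitions:
        a = 8eU / (m r0^2 Omega^2),   q = 4eV / (m r0^2 Omega^2)
    """
    m = m_z * AMU
    omega_sq = (2 * np.pi * f) ** 2
    a = 8 * E * U / (m * r0**2 * omega_sq)
    q = 4 * E * V / (m * r0**2 * omega_sq)
    return a, q

# Hypothetical operating point: 1 MHz drive, 3 mm field radius, voltages
# chosen so m/z 100 sits near the apex of the first stability region
# (a ~ 0.237, q ~ 0.706); neighboring masses fall outside and are rejected.
for m_z in (95, 100, 105):
    a, q = mathieu_aq(m_z, U=10.9, V=65.0, r0=3e-3, f=1e6)
    print(f"m/z {m_z:>3}: a = {a:.3f}, q = {q:.3f}")
```

Sweeping U and V together while holding their ratio fixed walks the transmitted mass along the operating line, which is how a quadrupole scans out a spectrum.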
While fairly simple in design, a typical stainless-steel quadrupole might weigh several kilograms. But miniaturizing a quadrupole is no easy task. Making the filter smaller usually introduces errors during the manufacturing process. Plus, smaller filters collect fewer ions, which makes chemical analysis less sensitive.
“You can’t make quadrupoles arbitrarily smaller — there is a tradeoff,” Velásquez-García adds.
His team balanced this tradeoff by leveraging additive manufacturing to make miniaturized quadrupoles with the ideal size and shape to maximize precision and sensitivity.
They fabricate the filter from a glass-ceramic resin, which is a relatively new printable material that can withstand temperatures up to 900 degrees Celsius and performs well in a vacuum.
The device is produced using vat photopolymerization, a process where a piston pushes into a vat of liquid resin until it nearly touches an array of LEDs at the bottom. These illuminate, curing the resin that remains in the minuscule gap between the piston and the LEDs. A tiny layer of cured polymer is then stuck to the piston, which rises up and repeats the cycle, building the device one tiny layer at a time.
“This is a relatively new technology for printing ceramics that allows you to make very precise 3D objects. And one key advantage of additive manufacturing is that you can aggressively iterate the designs,” Velásquez-García says.
Since the 3D printer can form practically any shape, the researchers designed a quadrupole with hyperbolic rods. This shape is ideal for mass filtering but difficult to make with conventional methods. Many commercial filters employ rounded rods instead, which can reduce performance.
They also printed an intricate network of triangular lattices surrounding the rods, which provides durability while ensuring the rods remain positioned correctly if the device is moved or shaken.
To finish the quadrupole, the researchers used a technique called electroless plating to coat the rods with a thin metal film, which makes them electrically conductive. They cover everything but the rods with a masking chemical and then submerge the quadrupole in a chemical bath held at a precise temperature under controlled stirring. This deposits a thin metal film on the rods uniformly without damaging the rest of the device or shorting the rods.
“In the end, we made quadrupoles that were the most compact but also the most precise that could be made, given the constraints of our 3D printer,” Velásquez-García says.
Maximizing performance
To test their 3D-printed quadrupoles, the team swapped them into a commercial system and found that they could attain higher resolutions than other types of miniature filters. Their quadrupoles, which are about 12 centimeters in length, are one-quarter the density of comparable stainless-steel filters.
In addition, further experiments suggest that their 3D-printed quadrupoles could achieve precision that is on par with that of large-scale commercial filters.
“Mass spectrometry is one of the most important of all scientific tools, and Velásquez-García and co-workers describe the design, construction, and performance of a quadrupole mass filter that has several advantages over earlier devices,” says Graham Cooks, the Henry Bohn Hass Distinguished Professor of Chemistry in the Aston Laboratories for Mass Spectrometry at Purdue University, who was not involved with this work. “The advantages derive from these facts: It is much smaller and lighter than most commercial counterparts and it is fabricated monolithically, using additive construction. … It is an open question as to how well the performance will compare with that of quadrupole ion traps, which depend on the same electric fields for mass measurement but which do not have the stringent geometrical requirements of quadrupole mass filters.”
“This paper represents a real advance in the manufacture of quadrupole mass filters (QMF). The authors bring together their knowledge of manufacture using advanced materials, QMF drive electronics, and mass spectrometry to produce a novel system with good performance at low cost,” adds Steve Taylor, professor of electrical engineering and electronics at the University of Liverpool, who was also not involved with this paper. “Since QMFs are at the heart of the ‘analytical engine’ in many other types of mass spectrometry systems, the paper has an important significance across the whole mass spectrometry field, which worldwide represents a multibillion-dollar industry.”
In the future, the researchers plan to boost the quadrupole’s performance by making the filters longer. A longer filter can enable more precise measurements since more ions that are supposed to be filtered out will escape as the chemical travels along its length. They also intend to explore different ceramic materials that could better transfer heat.
“Our vision is to make a mass spectrometer where all the key components can be 3D printed, contributing to a device with much less weight and cost without sacrificing performance. There is still a lot of work to do, but this is a great start,” Velásquez-García adds.
This work was funded by Empiriko Corporation.
Targeting kids generates billions in ad revenue for social media
Social media platforms Facebook, Instagram, Snapchat, TikTok, X (formerly Twitter), and YouTube collectively derived nearly $11 billion in advertising revenue from U.S.-based users younger than 18 in 2022, according to a new study led by the Harvard T.H. Chan School of Public Health. The study is the first to offer estimates of the number of youth users on these platforms and how much annual ad revenue is attributable to them.
The study was published Dec. 27 in PLOS ONE.
“As concerns about youth mental health grow, more and more policymakers are trying to introduce legislation to curtail social media platform practices that may drive depression, anxiety, and disordered eating in young people,” said senior author Bryn Austin, professor in the Department of Social and Behavioral Sciences. “Although social media platforms may claim that they can self-regulate their practices to reduce the harms to young people, they have yet to do so, and our study suggests they have overwhelming financial incentives to continue to delay taking meaningful steps to protect children.”
The researchers used a variety of public survey and market research data from 2021 and 2022 to comprehensively estimate Facebook, Instagram, Snapchat, TikTok, X, and YouTube’s number of youth users and related ad revenue. Population data from the U.S. Census and survey data from Common Sense Media and Pew Research were used to estimate the number of people younger than 18 using these platforms in the U.S. Data from eMarketer, a market research company, and Qustodio, a parental control app, provided estimates of each platform’s projected gross ad revenue in 2022 and users’ average minutes per day on each platform. The researchers used these estimates to build a simulation model that estimated how much ad revenue the platforms earned from young U.S. users.
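The study's simulation model is not reproduced here, but the core attribution logic can be sketched in a few lines. This is a deliberately simplified, hypothetical version: every number below is invented, and the actual study simulated many draws across data sources rather than a single deterministic pass.

```python
# Simplified, illustrative attribution logic (not the study's actual model):
# apportion a platform's gross U.S. ad revenue to under-18 users in
# proportion to their estimated share of total user time on the platform.
def youth_ad_revenue(gross_revenue, total_users, youth_users,
                     avg_minutes_all, avg_minutes_youth):
    total_time = total_users * avg_minutes_all
    youth_time = youth_users * avg_minutes_youth
    return gross_revenue * youth_time / total_time

# Hypothetical inputs for a single platform (all figures made up).
est = youth_ad_revenue(
    gross_revenue=30e9,      # projected 2022 U.S. ad revenue, $
    total_users=250e6,       # U.S. users of all ages
    youth_users=50e6,        # U.S. users under 18
    avg_minutes_all=45,      # average minutes/day, all users
    avg_minutes_youth=60,    # average minutes/day, under-18 users
)
print(f"Estimated ad revenue from under-18 users: ${est / 1e9:.1f}B")
```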
The study found that in 2022, YouTube had 49.7 million U.S.-based users under age 18; TikTok, 18.9 million; Snapchat, 18 million; Instagram, 16.7 million; Facebook, 9.9 million; and X, 7 million. The platforms collectively generated nearly $11 billion in ad revenue from these users: $2.1 billion from users ages 12 and under and $8.6 billion from users ages 13-17.
YouTube derived the greatest ad revenue from users 12 and under ($959.1 million), followed by Instagram ($801.1 million) and Facebook ($137.2 million). Instagram derived the greatest ad revenue from users ages 13-17 ($4 billion), followed by TikTok ($2 billion) and YouTube ($1.2 billion). The researchers also calculated that Snapchat derived the greatest share of its overall 2022 ad revenue from users under 18 (41 percent), followed by TikTok (35 percent), YouTube (27 percent), and Instagram (16 percent).
The researchers noted that the study had limitations, including reliance on estimations and projections from public survey and market research sources, as social media platforms don’t disclose user age data or advertising revenue data by age group.
“Our finding that social media platforms generate substantial advertising revenue from youth highlights the need for greater data transparency as well as public health interventions and government regulations,” said lead author Amanda Raffoul, instructor in pediatrics at Harvard Medical School.
Zachary Ward, assistant professor in the Department of Health Policy and Management at Harvard Chan School, was also a co-author.
UGC 12914 and UGC 12915
A long time ago, two spiral galaxies far, far away were slowly drawing closer to each other, until, about 25 million to 30 million years before the image here was taken, they collided head-on. Found 180 million light-years away in the constellation Pegasus, both UGC 12914 and UGC 12915 managed to pull away from each other…
NGC 520
Although this deep-sky object is cataloged as NGC 520, it’s actually a pair of interacting spiral galaxies in the constellation Pisces the Fish. German-born English astronomer William Herschel discovered it in 1784. Even a small scope will show its odd shape, which has led amateur astronomers to christen it the Flying Ghost. It measures 4.6′…
3C 273
Observing 3C 273 is a lot like observing Pluto. In both cases, you’ll only see a faint point of light, but the observations are meaningful because of what the objects are. In the case of 3C 273, you’re looking at the first quasar ever discovered, receiving photons emitted a couple of billion years ago from…
Purgathofer-Weinberger 1
In May 1980, Austrian astronomers Alois Purgathofer and Ronald Weinberger discovered a large, faint planetary nebula while searching Palomar Observatory Sky Survey prints for possible flare stars. As their first co-discovery of a planetary, it was designated Purgathofer-Weinberger 1. This is usually abbreviated PuWe 1, but also carries the catalog designation PN G158.9+17.8. This object…
NGC 3190 galaxy group
You’ll find this grouping of galaxies 2° north-northwest of the 2nd-magnitude star Algieba (Gamma [γ] Leonis). It carries a couple of common names. One is the Gamma Leonis Group because of its nearness to Algieba. The other is Hickson 44, the brightest group in Canadian astronomer Paul Hickson’s catalog of 100 compact galaxy groups. Hickson…
M15 and Pease 1
Globular cluster M15 in Pegasus is the “hey, let me show you this one” autumn object for amateur astronomers north of the equator. It’s also known as NGC 7078. M15 lies some 34,000 light-years from Earth and appears 18′ across. It has a true diameter of 175 light-years and contains more than 100,000 stars. Among…
NASA's Juno spacecraft will get its closest look yet at Jupiter's moon Io on Dec. 30
Meet Taters, the cat who starred in the video streamed from space
Taters, a 3-year-old orange tabby cat, is having his 15 seconds of fame. The world met Taters after NASA used a laser to stream a test video of the feline 19 million miles from the Psyche spacecraft to Earth on Dec. 11. The footage, which took 101 seconds to reach the Hale Telescope at the…
A carbon-lite atmosphere could be a sign of water and life on other terrestrial planets, MIT study finds
Scientists at MIT, the University of Birmingham, and elsewhere say that astronomers’ best chance of finding liquid water, and even life on other planets, is to look for the absence, rather than the presence, of a chemical feature in their atmospheres.
The researchers propose that if a terrestrial planet has substantially less carbon dioxide in its atmosphere compared to other planets in the same system, it could be a sign of liquid water — and possibly life — on that planet’s surface.
What’s more, this new signature is within the sights of NASA’s James Webb Space Telescope (JWST). While scientists have proposed other signs of habitability, those features are challenging if not impossible to measure with current technologies. The team says this new signature, of relatively depleted carbon dioxide, is the only sign of habitability that is detectable now.
“The Holy Grail in exoplanet science is to look for habitable worlds, and the presence of life, but all the features that have been talked about so far have been beyond the reach of the newest observatories,” says Julien de Wit, assistant professor of planetary sciences at MIT. “Now we have a way to find out if there’s liquid water on another planet. And it’s something we can get to in the next few years.”
The team’s findings appear today in Nature Astronomy. De Wit co-led the study with Amaury Triaud of the University of Birmingham in the UK. Their MIT co-authors include Benjamin Rackham, Prajwal Niraula, Ana Glidden, Oliver Jagoutz, Matej Peč, Janusz Petkowski, and Sara Seager, along with Frieder Klein at the Woods Hole Oceanographic Institution (WHOI), Martin Turbet of École Polytechnique in France, and Franck Selsis of the Laboratoire d’astrophysique de Bordeaux.
Beyond a glimmer
Astronomers have so far detected more than 5,200 worlds beyond our solar system. With current telescopes, astronomers can directly measure a planet’s distance to its star and the time it takes to complete an orbit. Those measurements can help scientists infer whether a planet is within a habitable zone. But there’s been no way to directly confirm whether a planet is indeed habitable, meaning that liquid water exists on its surface.
Across our own solar system, scientists can detect the presence of liquid oceans by observing “glints” — flashes of sunlight that reflect off liquid surfaces. These glints, or specular reflections, have been observed, for instance, on Saturn’s largest moon, Titan, which helped to confirm the moon’s large lakes.
Detecting a similar glimmer in far-off planets, however, is out of reach with current technologies. But de Wit and his colleagues realized there’s another habitable feature close to home that could be detectable in distant worlds.
“An idea came to us, by looking at what’s going on with the terrestrial planets in our own system,” Triaud says.
Venus, Earth, and Mars share similarities, in that all three are rocky and inhabit a relatively temperate region with respect to the sun. Earth is the only planet among the trio that currently hosts liquid water. And the team noted another obvious distinction: Earth has significantly less carbon dioxide in its atmosphere.
“We assume that these planets were created in a similar fashion, and if we see one planet with much less carbon now, it must have gone somewhere,” Triaud says. “The only process that could remove that much carbon from an atmosphere is a strong water cycle involving oceans of liquid water.”
Indeed, the Earth’s oceans have played a major and sustained role in absorbing carbon dioxide. Over hundreds of millions of years, the oceans have taken up a huge amount of carbon dioxide, nearly equal to the amount that persists in Venus’ atmosphere today. This planetary-scale effect has left Earth’s atmosphere significantly depleted of carbon dioxide compared to its planetary neighbors.
“On Earth, much of the atmospheric carbon dioxide has been sequestered in seawater and solid rock over geological timescales, which has helped to regulate climate and habitability for billions of years,” says study co-author Frieder Klein.
The team reasoned that if a similar depletion of carbon dioxide were detected in a far-off planet, relative to its neighbors, this would be a reliable signal of liquid oceans and life on its surface.
“After reviewing extensively the literature of many fields from biology, to chemistry, and even carbon sequestration in the context of climate change, we believe that indeed if we detect carbon depletion, it has a good chance of being a strong sign of liquid water and/or life,” de Wit says.
A roadmap to life
In their study, the team lays out a strategy for detecting habitable planets by searching for a signature of depleted carbon dioxide. Such a search would work best for “peas-in-a-pod” systems, in which multiple terrestrial planets, all about the same size, orbit relatively close to each other, similar to our own solar system. The first step the team proposes is to confirm that the planets have atmospheres, by simply looking for the presence of carbon dioxide, which is expected to dominate most planetary atmospheres.
“Carbon dioxide is a very strong absorber in the infrared, and can be easily detected in the atmospheres of exoplanets,” de Wit explains. “A signal of carbon dioxide can then reveal the presence of exoplanet atmospheres.”
Once astronomers determine that multiple planets in a system host atmospheres, they can move on to measure their carbon dioxide content, to see whether one planet has significantly less than the others. If so, the planet is likely habitable, meaning that it hosts significant bodies of liquid water on its surface.
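As a toy illustration of that screening step (with an invented depletion threshold, not a value from the paper), the comparison across sibling planets might look like this:

```python
# Toy sketch of the proposed screening logic (illustrative threshold, not
# from the paper): among terrestrial planets in one system that all show
# CO2 atmospheres, flag any planet whose CO2 signal is far below its peers.
from statistics import median

def flag_co2_depleted(co2_signal, depletion_factor=10.0):
    """co2_signal: dict mapping planet -> relative CO2 absorption strength."""
    baseline = median(co2_signal.values())
    return [planet for planet, signal in co2_signal.items()
            if signal < baseline / depletion_factor]

# Hypothetical system: planet 'c' shows ~100x less CO2 than its siblings,
# hinting that carbon may have been sequestered into a liquid-water ocean.
system = {"b": 1.0, "c": 0.01, "d": 1.2, "e": 0.9}
print(flag_co2_depleted(system))   # -> ['c']
```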
But habitable conditions don’t necessarily mean that a planet is inhabited. To see whether life might actually exist, the team proposes that astronomers look for another feature in a planet’s atmosphere: ozone.
On Earth, the researchers note, plants and some microbes contribute to drawing down carbon dioxide, although not nearly as much as the oceans. Nevertheless, as part of this process, these lifeforms emit oxygen, which reacts with the sun’s photons to transform into ozone — a molecule that is far easier to detect than oxygen itself.
The researchers say that if a planet’s atmosphere shows signs of both ozone and depleted carbon dioxide, it is likely a habitable, and inhabited, world.
“If we see ozone, chances are pretty high that it’s connected to carbon dioxide being consumed by life,” Triaud says. “And if it’s life, it’s glorious life. It would not be just a few bacteria. It would be a planetary-scale biomass that’s able to process a huge amount of carbon, and interact with it.”
The team estimates that NASA’s James Webb Space Telescope would be able to measure carbon dioxide, and possibly ozone, in nearby, multiplanet systems such as TRAPPIST-1 — a seven-planet system that orbits a bright star, just 40 light-years from Earth.
“TRAPPIST-1 is one of only a handful of systems where we could do terrestrial atmospheric studies with JWST,” de Wit says. “Now we have a roadmap for finding habitable planets. If we all work together, paradigm-shifting discoveries could be done within the next few years.”
Powering the green economy: How NUS is advancing sustainability education
In this series, NUS News explores how NUS is accelerating sustainability research and education in response to climate change challenges, and harnessing the knowledge and creativity of our people to pave the way to a greener future for all.
Amid the record-high levels of greenhouse gases, pollution and deforestation, the topic of sustainability has never been more pressing. In tandem with shaping the future of sustainability and contributing to climate action, NUS offers a plethora of sustainability-related postgraduate and executive programmes in both STEM (science, technology, engineering and mathematics) and non-STEM fields. These courses are also updated regularly to ensure they remain relevant in our fast-changing world.
NUS Vice Provost for Masters’ Programmes & Lifelong Education, and Dean of the School of Continuing and Lifelong Education (NUS SCALE) Professor Susanna Leong said, “As Singapore, Asia, and the world work towards a more sustainable future, the courses that the University offers can be applied to solving immediate problems, and those of the future.” The diverse learning opportunities include short executive training courses, professional and graduate certificates, as well as credentialled postgraduate programmes, she added.
NUS offers 11 master’s degrees in various specialised fields of sustainability, from the sciences and engineering to business and climate change. The University also introduces new specialisations in sustainability for postgraduate programmes as part of its regular curriculum review. With a wide range of offerings, students can deep-dive into topics they are passionate about, with many going on to become thought leaders and experts in their respective fields.
Here are highlights of some of our sustainability-related master’s programmes.
MSc in Biodiversity Conservation and Nature-based Climate Solutions
This programme explores problems and strategies related to conservation, environmental sustainability, and climate change. Due to its geographical location in Southeast Asia, Singapore is in a unique position to explore issues where countries may prioritise economic development over conservation concerns.
Students can choose from a range of modules, such as the impact of biological invasions and the integration of spatial and social modelling skills in environmental sustainability, covering in-demand topics and highly valued skillsets.
These are taught by a stellar team of faculty members from the Department of Biological Sciences with expertise in environmental sustainability, biodiversity conservation, and freshwater, marine and terrestrial ecology.
“For many of our modules, guest speakers from different environmental backgrounds and organisations are invited to talk about their work,” said Ms Kayla Lindsey, a 25-year-old student in the programme. “They also share what real-life opportunities are available to us as we look for full-time jobs after graduation.”
MSc in Energy Systems
With the global shift away from fossil fuels to more sustainable energy sources, this uniquely multidisciplinary programme by the NUS College of Design and Engineering (CDE) combines engineering and technology to address the gap in the current energy education landscape, which tends to be single-disciplinary in nature.
Beyond learning about the principles of energy technologies, the impact of policies and market-based mechanisms, as well as cost analysis, students will also acquire skills that prepare them for the global transition to greener sources of energy such as solar energy and hydrogen.
Energy Systems Modelling and Market Mechanisms, Biomass and Energy, and Management of Technological Innovation are just some of the modules they can choose from.
Graduates have gone on to pursue careers in energy analysis and operation management, consulting and policy advisory, as well as technology and innovation management in the energy sector.
MSc in Sustainable and Green Finance
The first of its kind in Asia, this course incorporates social and environmental considerations into conventional financial models. Through partnerships with industry players, students take their learning beyond the classroom and are prepared for a rapidly evolving industry.
Launched in 2021 by NUS Business School in collaboration with the Sustainable and Green Finance Institute (SGFIN), it was set up with support from the Monetary Authority of Singapore.
By equipping students with the ability to take ESG considerations into account when making investment decisions, the course opens up a variety of career options to graduates, ranging from roles in the corporate and financial sectors to positions in government agencies and non-governmental organisations (NGOs).
MSc in Environmental Management
Following a recent curriculum revamp, this long-running flagship programme is now multidisciplinary as well as interdisciplinary – jointly offered by six NUS faculties and schools, including the science and law faculties.
Local as well as global in scope, it grooms graduates for key managerial roles in the private and public sectors. Students gain insights in policymaking, data analysis and other fields.
Mr Nihal Jayantha Mallikaratne, 52, an operations manager at a manufacturing firm who is enrolled in this programme part-time, wants to gain “a sound understanding of the complex environmental challenges we face today, and the strategies needed to address them.” He hopes that at the end of the programme, he will possess the knowledge and skills to make a real impact in the field of environment and sustainability.
Short-term Continuing Education and Training (CET) courses
Aside from the undergraduate and master’s degrees, NUS offers short-term courses for professional executives as well as through organisations for their employees. Currently, these CET courses, which provide a broad view of sustainability and climate change issues, fall into four categories – business, policy, engineering, and science.
With more businesses placing greater emphasis on shaping policies and practices in ESG to better manage the risks and opportunities related to sustainable development, the University is seeing a strong interest in the take-up of such courses. There has also been strong interest from government agencies to upskill policy officers and administrators on sustainability.
Social and Sustainable Investing
With a focus on the Asia-Pacific region, this two-day course covers topics such as sustainable investing, the latest developments in corporate and social responsibility, as well as ways to invest in social impact bonds and green bonds.
It is offered by the NUS Business School. Business leaders and investment managers will find it particularly relevant, although the course is open to anyone with an interest in social and environmentally-related investments.
Senior Management Programme: Policy & Leadership for Innovation and Sustainability
Corporate leaders, leaders of non-profit organisations, and senior policy professionals have taken part in this programme run by the Lee Kuan Yew School of Public Policy.
Over three weeks, participants study how different institutions, economies and societies operate as complex and adaptive systems – and learn how they can create better policies.
The Senior Management Programme (SMP) includes a week-long study trip to Zurich, Switzerland, which offers first-hand insights into the country’s policies and business practices.
One recent graduate was Mr Tan Tok Seng, Senior Deputy Director of the School Campus Department at the Ministry of Education, who shared that he was impressed by the “thoughtfully curated and professionally conducted” discussions led by experts in the field. The programme also had a “good variety of topics that benefited participants from different agencies and backgrounds”, he added.
Deep Decarbonisation: Principles and Analysis Tools
This course, offered by CDE, looks at the challenges and opportunities posed by the energy transition, as well as the analytical tools used to study the underlying issues and manage trade-offs.
Energy System Transformation, Decision-Making Under Uncertainties, and Energy System Modelling and Analysis are among the topics covered in this 14-hour programme.
One of the participants, Mr Adrian Chan, a production line trainer from Shell Jurong Island, noted that understanding decarbonisation is essential in his line of work.
“As the implementation of decarbonisation initiatives brings forth novel technologies and processes, it becomes imperative for operation technicians to receive appropriate training to proficiently operate and maintain these systems,” he said.
Sustainability 101 Course for Policy Officers
Sustainability 101 is symbolic of a “whole-of-nation” sustainability movement to create a robust green talent pipeline as enshrined in the Singapore Green Plan 2030. Launched in November 2022 by NUS Centre for Nature-based Climate Solutions in collaboration with the National Climate Change Secretariat and NUS SCALE, more than 153 government officers from over 35 agencies have taken this programme which covers topics such as international climate negotiations and domestic environmental policies.
To help them develop well-balanced and science-based policies in their respective fields, participants were exposed to seminars, panels, case studies and discussions led by academics, government officials and industry leaders.
Keen to find out more about the university’s wide range of sustainability programmes? View the full list here, or email sustainability@nus.edu.sg for more information.
This is the first in a two-part series on NUS’ sustainable education offerings.
What are neutron stars? The cosmic gold mines, explained
It isn’t a secret that humanity and everything around us are made of star stuff. But not all stars create elements equally. Sure, regular stars can create the basic elements: helium, carbon, neon, oxygen, silicon, and iron. But it takes the collision of two neutron stars — incredibly dense stellar corpses — to create the…
Does “food as medicine” make a big dent in diabetes?
How much can healthy eating improve a case of diabetes? A new health care program attempting to treat diabetes by means of improved nutrition shows a very modest impact, according to the first fully randomized clinical trial on the subject.
The study, co-authored by MIT health care economist Joseph Doyle of the MIT Sloan School of Management, tracks participants in an innovative program that provides healthy meals in order to address diabetes and food insecurity at the same time. The experiment focused on Type 2 diabetes, the most common form.
The program involved people with high blood sugar levels, in this case a hemoglobin A1c (HbA1c) level of 8.0 or more. Participants in the clinical trial who were given food to make 10 nutritious meals per week saw their HbA1c levels fall by 1.5 percentage points over six months. However, trial participants who were not given any food saw their HbA1c levels fall by 1.3 percentage points over the same period, a net difference of just 0.2 percentage points. This suggests the program’s relative effects were limited and that providers need to keep refining such interventions.
“We found that when people gained access to [got food from] the program, their blood sugar did fall, but the control group had an almost identical drop,” says Doyle, the Erwin H. Schell Professor of Management at MIT Sloan.
Given that these kinds of efforts have barely been studied through clinical trials, Doyle adds, he does not want one study to be the last word, and hopes it spurs more research to find methods that will have a large impact. Additionally, programs like this help people who lack access to healthy food in the first place by addressing their food insecurity.
“We do know that food insecurity is problematic for people, so addressing that by itself has its own benefits, but we still need to figure out how best to improve health at the same time if it is going to be addressed through the health care system,” Doyle adds.
The paper, “The Effect of an Intensive Food-as-Medicine Program on Health and Health Care Use: A Randomized Clinical Trial,” is published today in JAMA Internal Medicine.
The authors are Doyle; Marcella Alsan, a professor of public policy at Harvard Kennedy School; Nicholas Skelley, a predoctoral research associate at MIT Sloan Health Systems Initiative; Yutong Lu, a predoctoral technical associate at MIT Sloan Health Systems Initiative; and John Cawley, a professor in the Department of Economics and the Department of Policy Analysis and Management at Cornell University and co-director of Cornell's Institute on Health Economics, Health Behaviors and Disparities.
To conduct the study, the researchers partnered with a large health care provider in the Mid-Atlantic region of the U.S., which has developed food-as-medicine programs. Such programs have become increasingly popular in health care, and could apply to treating diabetes, which involves elevated blood sugar levels and can create serious or even fatal complications. Diabetes affects about 10 percent of the adult population.
The study consisted of a randomized clinical trial of 465 adults with Type 2 diabetes, centered in two locations within the network of the health care provider. One location was part of an urban area, and the other was rural. The study took place from 2019 through 2022, with a year of follow-up testing beyond that. People in the study’s treatment group were given food for 10 healthy meals per week for their families over a six-month period, and had opportunities to consult with a nutritionist and nurses as well. Participants from both the treatment and control groups underwent periodic blood testing.
Adherence to the program was very high. Ultimately, however, the reduction in blood sugar levels experienced by people in the treatment group was only marginally bigger than that of people in the control group.
Those results leave Doyle and his co-authors seeking to explain why the food intervention didn’t have a bigger relative impact. In the first place, he notes, there could be some basic reversion to the mean in play — some people in the control group with high blood sugar levels were likely to improve that even without being enrolled in the program.
“If you examine people on a bad health trajectory, many will naturally improve as they take steps to move away from this danger zone, such as moderate changes in diet and exercise,” Doyle says.
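That reversion-to-the-mean effect is easy to reproduce with synthetic numbers. In the sketch below (purely illustrative, no trial data), patients are "enrolled" whenever a noisy screening HbA1c reading exceeds 8.0, and their average reading falls on remeasurement even though nothing about their underlying health has changed:

```python
# Minimal illustration of reversion to the mean (synthetic numbers only).
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
true_a1c = rng.normal(8.0, 1.0, size=n)           # stable underlying levels
first = true_a1c + rng.normal(0.0, 0.7, size=n)   # noisy screening reading
second = true_a1c + rng.normal(0.0, 0.7, size=n)  # noisy follow-up reading

enrolled = first >= 8.0                           # trial entry criterion
drop = first[enrolled].mean() - second[enrolled].mean()
print(f"Average HbA1c 'improvement' with no intervention: {drop:.2f} points")
```

Because enrollment selects on a reading that is partly noise, the selected group's follow-up average drifts back toward the true mean, which is one reason a randomized control group is essential for judging the program's real effect.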
Moreover, because the healthy eating program was developed by a health care provider staying engaged with all the participants, people in the control group may have still benefitted from medical engagement and thus fared better than a control group without such health care access.
It is also possible the Covid-19 pandemic, unfolding during the experiment’s time frame, affected the outcomes in some way, although results were similar when they examined outcomes prior to the pandemic. Or it could be that the intervention’s effects might appear over a still-longer time frame.
And while the program provided food, it left it to participants to prepare meals, which might be a hurdle for program compliance. Potentially, premade meals might have a bigger impact.
“Experimenting with providing those premade meals seems like a natural next step,” says Doyle, who emphasizes that he would like to see more research about food-as-medicine programs aimed at diabetes, especially if such programs evolve and try out different formats and features.
“When you find a particular intervention doesn’t improve blood sugar, we don’t just say, we shouldn’t try at all,” Doyle says. “Our study definitely raises questions, and gives us some new answers we haven’t seen before.”
Support for the study came from the Robert Wood Johnson Foundation; the Abdul Latif Jameel Poverty Action Lab (J-PAL); and the MIT Sloan Health Systems Initiative. Outside the submitted work, Cawley has reported receiving personal fees from Novo Nordisk, Inc, a pharmaceutical company that manufactures diabetes medication and other treatments.
Hubble Telescope sees a bright 'snowball' of stars in the Milky Way's neighbor (image)
30 years ago, astronauts saved the Hubble Space Telescope
12 out-of-this-world exoplanet discoveries in 2023
10 times the night sky amazed us in 2023
12 James Webb Space Telescope findings that changed our understanding of the universe in 2023
Merry Christmas from the cosmos
Have you ever seen comparison photos of objects on Earth versus objects in space, and found that there are some mind-boggling similarities? I’m talking about ones that look almost identical. A fresh image of an open star cluster, cataloged as NGC 2264 and nicknamed the Christmas Tree Cluster, offers an appearance that helps us perfectly…
Nuking an incoming asteroid will spew out X-rays. This new model shows what happens
Engineers develop a vibrating, ingestible capsule that might help treat obesity
When you eat a large meal, your stomach sends signals to your brain that create a feeling of fullness, which helps you realize it’s time to stop eating. A stomach full of liquid can also send these messages, which is why dieters are often advised to drink a glass of water before eating.
MIT engineers have now come up with a new way to take advantage of that phenomenon, using an ingestible capsule that vibrates within the stomach. These vibrations activate the same stretch receptors that sense when the stomach is distended, creating an illusory sense of fullness.
In animals that were given this pill 20 minutes before eating, the researchers found that this treatment not only stimulated the release of hormones that signal satiety, but also reduced the animals’ food intake by about 40 percent. Scientists have much more to learn about the mechanisms that influence human body weight, but if further research suggests this technology could be safely used in humans, such a pill might offer a minimally invasive way to treat obesity, the researchers say.
“For somebody who wants to lose weight or control their appetite, it could be taken before each meal,” says Shriya Srinivasan PhD ’20, a former MIT graduate student and postdoc who is now an assistant professor of bioengineering at Harvard University. “This could be really interesting in that it would provide an option that could minimize the side effects that we see with the other pharmacological treatments out there.”
Srinivasan is the lead author of the new study, which appears today in Science Advances. Giovanni Traverso, an associate professor of mechanical engineering at MIT and a gastroenterologist at Brigham and Women’s Hospital, is the senior author of the paper.
A sense of fullness
When the stomach becomes distended, specialized cells called mechanoreceptors sense that stretching and send signals to the brain via the vagus nerve. As a result, the brain stimulates production of insulin, as well as hormones such as C-peptide, PYY, and GLP-1. All of these hormones work together to help people digest their food, feel full, and stop eating. At the same time, levels of ghrelin, a hunger-promoting hormone, go down.
While a graduate student at MIT, Srinivasan became interested in the idea of controlling this process by artificially stretching the mechanoreceptors that line the stomach, through vibration. Previous research had shown that vibration applied to a muscle can induce a sense that the muscle has stretched farther than it actually has.
“I wondered if we could activate stretch receptors in the stomach by vibrating them and having them perceive that the entire stomach has been expanded, to create an illusory sense of distension that could modulate hormones and eating patterns,” Srinivasan says.
As a postdoc in MIT’s Koch Institute for Integrative Cancer Research, Srinivasan worked closely with Traverso’s lab, which has developed many novel approaches to oral delivery of drugs and electronic devices. For this study, Srinivasan, Traverso, and a team of researchers designed a capsule, about the size of a multivitamin, that includes a vibrating element. When the pill, which is powered by a small silver oxide battery, reaches the stomach, acidic gastric fluids dissolve a gelatinous membrane that covers the capsule, completing the electronic circuit that activates the vibrating motor.
In a study in animals, the researchers showed that once the pill begins vibrating, it activates mechanoreceptors, which send signals to the brain through stimulation of the vagus nerve. The researchers tracked hormone levels during the periods when the device was vibrating and found that they mirrored the hormone release patterns seen following a meal, even when the animals had fasted.
The researchers then tested the effects of this stimulation on the animals’ appetite. They found that when the pill was activated for about 20 minutes before the animals were offered food, they consumed 40 percent less, on average, than they did when the pill was not activated. The animals also gained weight more slowly during periods when they were treated with the vibrating pill.
“The behavioral change is profound, and that’s using the endogenous system rather than any exogenous therapeutic. We have the potential to overcome some of the challenges and costs associated with delivery of biologic drugs by modulating the enteric nervous system,” Traverso says.
The current version of the pill is designed to vibrate for about 30 minutes after arriving in the stomach, but the researchers plan to explore the possibility of adapting it to remain in the stomach for longer periods of time, where it could be turned on and off wirelessly as needed. In the animal studies, the pills passed through the digestive tract within four or five days.
The study also found that the animals did not show any signs of obstruction, perforation, or other negative impacts while the pill was in their digestive tract.
An alternative approach
This type of pill could offer an alternative to the current approaches to treating obesity, the researchers say. Nonmedical interventions such as diet and exercise don’t always work, and many of the existing medical interventions are fairly invasive. These include gastric bypass surgery, as well as gastric balloons, which are no longer used widely in the United States due to safety concerns.
Drugs such as GLP-1 agonists can also aid weight loss, but most of them have to be injected, and they are unaffordable for many people. According to Srinivasan, the MIT capsules could be manufactured at a cost that would make them available to people who don’t have access to more expensive treatment options.
“For a lot of populations, some of the more effective therapies for obesity are very costly. At scale, our device could be manufactured at a pretty cost-effective price point,” she says. “I’d love to see how this would transform care and therapy for people in global health settings who may not have access to some of the more sophisticated or expensive options that are available today.”
The researchers now plan to explore ways to scale up the manufacturing of the capsules, which could enable clinical trials in humans. Such studies would be important to learn more about the devices’ safety, as well as to determine the best time to swallow the capsule before a meal and how often it would need to be administered.
Other authors of the paper include Amro Alshareef, Alexandria Hwang, Ceara Byrne, Johannes Kuosmann, Keiko Ishida, Joshua Jenkins, Sabrina Liu, Wiam Abdalla Mohammed Madani, Alison Hayward, and Niora Fabian.
The research was funded by the National Institutes of Health, Novo Nordisk, the Department of Mechanical Engineering at MIT, a Schmidt Science Fellowship, and the National Science Foundation.
Johannes Kepler: Everything you need to know
The Sky This Week from December 22 to 29: A Christmastime Cold Moon
Friday, December 22: The Moon passes 3° north of Jupiter at 9 A.M. EST. Shortly after sunset, you can find both high in the east, in Aries. The Moon is now nearly 7° northeast (to the left) of Jupiter, hanging directly below the Ram’s brightest star, magnitude 2 Hamal. Jupiter is a blazing magnitude –2.7…
How the Beagle 2 was lost, then found, on Mars
Just south of Mars’ equator, abutting the Red Planet’s crater-studded highlands and smooth rolling lowlands, lies a broad plain wider than Texas, likely carved by a colossal impact more than 3.9 billion years ago. The blasted terrain of Isidis Planitia, a vast landscape of pitted ridges, light-colored ripples, and low dunes, today provides a forever…
Hubble Telescope gifts us a dazzling starry 'snow globe' just in time for the holidays
Mirrors for the world’s largest optical telescope are on their way to Chile
The first 18 pieces for one of the European Southern Observatory’s Extremely Large Telescope (ELT) mirrors have started their 10,000-kilometer journey from France to Chile. It’s a key step on the way to completing the ELT, according to the European Southern Observatory (ESO). Each section is part of the telescope’s primary mirror, named the M1.
Steve Wozniak's start-up Privateer develops ride-sharing spacecraft to reduce orbital clutter
Can stars form around black holes?
Why are Americans so sick? Researchers point to middle grocery aisles.
Anna Lamb
Harvard Staff Writer
Obesity and disease rising with consumption of ultra-processed foods, say Chan School panelists
According to the Centers for Disease Control, more than 40 percent of Americans are obese, and many struggle with comorbidities such as Type 2 diabetes, heart disease, and cancer.
What is making us so sick? The ultra-processed foods that make up the bulk of the American diet are among the major culprits, according to an online panel hosted by Harvard’s T.H. Chan School of Public Health last week.
Experts from Harvard and the National Institutes of Health joined journalist Larissa Zimberoff, author of “Technically Food: Inside Silicon Valley’s Mission to Change What We Eat,” to discuss why the processing of cereals, breads, and other items typically found in the middle aisles of the grocery store may be driving American weight gain.
Kevin Hall, senior investigator at the National Institute of Diabetes and Digestive and Kidney Diseases at the NIH, said initial research into diets high in ultra-processed foods shows strong links to overconsumption of calories.
Participants in a study by Hall and his team, published in 2019, were randomized to receive either an ultra-processed or an unprocessed diet for two weeks, immediately followed by the alternate diet for two weeks.
“But despite our diets being matched for various nutrients of concern, what we found was that people consuming the ultra-processed foods ate about 500 calories per day more over the two weeks that they were on that diet as compared to the minimally processed diet,” Hall said. “They gained weight and gained body fat. And when they were on the minimally processed diet, they spontaneously lost weight and lost body fat.”
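Because the trial used a crossover design, each participant serves as their own control, and the diet effect can be read off from within-person differences. Here is a minimal sketch of that comparison; the intake numbers are hypothetical, not the study's data.

```python
# Within-subject comparison afforded by a crossover design: each
# participant eats both diets, so we analyze per-person differences.
# Intake values below are hypothetical, not the study's data.
import statistics

ultra = [3100, 2900, 3300, 2800, 3050]    # mean kcal/day, ultra-processed arm
minimal = [2600, 2450, 2700, 2350, 2500]  # mean kcal/day, minimally processed arm

diffs = [u - m for u, m in zip(ultra, minimal)]
mean_diff = statistics.mean(diffs)
sem = statistics.stdev(diffs) / len(diffs) ** 0.5  # standard error of the mean
print(f"mean excess intake on ultra-processed diet: {mean_diff:.0f} kcal/day")
print(f"rough 95% CI: {mean_diff - 1.96 * sem:.0f} to {mean_diff + 1.96 * sem:.0f}")
```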
According to Hall, “ultra-processed foods are one of the four categories of something called the NOVA classification system” developed by the School of Public Health at the University of São Paulo, Brazil.
NOVA food classification system
Group 1: Unprocessed or minimally processed foods
- Natural, packaged, cut, chilled, or frozen vegetables; fruits, potatoes, and other roots and tubers
- Nuts, peanuts, and other seeds without salt or sugar
- Bulk or packaged grains such as brown, white, parboiled, and wholegrain rice, corn kernel, or wheat berry
- Fresh and dried herbs and spices (e.g., oregano, pepper, thyme, cinnamon)
- Fresh or pasteurized vegetable or fruit juices with no added sugar or other substances
- Fresh and dried mushrooms and other fungi or algae
- Grains of wheat, oats and other cereals
- Grits, flakes and flours made from corn, wheat or oats, including those fortified with iron, folic acid or other nutrients lost during processing
- Fresh, chilled or frozen meat, poultry, fish, and seafood, whole or in the form of steaks, fillets and other cuts
- Dried or fresh pasta, couscous, and polenta made from water and the grits/flakes/flours described above
- Fresh or pasteurized milk; yogurt without sugar
- Eggs
- Tea, herbal infusions
- Lentils, chickpeas, beans, and other legumes
- Coffee
- Dried fruits
Group 2: Processed culinary ingredients
- Oils made from seeds, nuts and fruits, to include soybeans, corn, oil palm, sunflower, or olives
- Butter
- White, brown, and other types of sugar, and molasses obtained from cane or beet
- Lard
- Honey extracted from honeycombs
- Coconut fat
- Syrup extracted from maple trees
- Refined or coarse salt, mined or from seawater
- Starches extracted from corn and other plants
- Any food combining two of these, such as “salted butter”
Group 3: Processed foods
- Canned or bottled legumes or vegetables preserved in salt (brine) or vinegar, or by pickling
- Canned fish, such as sardine and tuna, with or without added preservatives
- Tomato extract, pastes or concentrates (with salt and/or sugar)
- Salted, dried, smoked, or cured meat, or fish
- Fruits in sugar syrup (with or without added antioxidants)
- Beef jerky
- Freshly made cheeses
- Bacon
- Freshly-made (unpackaged) breads made of wheat flour, yeast, water, and salt
- Salted or sugared nuts and seeds
- Fermented alcoholic beverages such as beer, alcoholic cider, and wine
Group 4: Ultra-processed foods
- Fatty, sweet, savory or salty packaged snacks
- Pre-prepared (packaged) meat, fish, and vegetables
- Biscuits (cookies)
- Pre-prepared pizza and pasta dishes
- Ice creams and frozen desserts
- Pre-prepared burgers, hot dogs, sausages
- Chocolates, candies, and confectionery in general
- Pre-prepared poultry and fish “nuggets” and “sticks”
- Cola, soda and other carbonated soft drinks
- Other animal products made from remnants
- “Energy” and sports drinks
- Packaged breads, hamburger and hot dog buns
- Canned, packaged, dehydrated (powdered) and other “instant” soups, noodles, sauces, desserts, drink mixes and seasonings
- Baked products made with ingredients such as hydrogenated vegetable fat, sugar, yeast, whey, emulsifiers, and other additives
- Sweetened and flavored yogurts, including fruit yogurts
- Breakfast cereals and bars
- Dairy drinks, including chocolate milk
- Infant formulas and drinks, and meal replacement shakes (e.g., “slim fast”)
- Sweetened juices
- Pastries, cakes, and cake mixes
- Margarines and spreads
- Distilled alcoholic beverages such as whisky, gin, rum, vodka, etc.
Manufacturing techniques to create ultra-processed foods include extrusion, molding, and preprocessing by frying. Panelist Jerold Mande, CEO of Nourish Science and an adjunct professor of nutrition at the Chan School who has previously held positions with the FDA and USDA, pointed out that foods like shelf-stable breads found at the grocery store are often no more than “very sophisticated emulsified foams.”
But Hall noted that not all ultra-processed foods are necessarily equally bad for you. His team is conducting a follow-up study that aims to look at different qualities of ultra-processed versus whole foods, including energy density, palatability, and portions.
“Those are only two potential mechanisms, the calories per gram of food — that’s the energy density of food — and the proportion of foods that have pairs of nutrients that cross certain thresholds, foods that are high in both sugar and fat, salt and fat, and salt and carbohydrates,” he said.
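To make those two candidate mechanisms concrete, here is a small sketch of how a food might be screened against them. All cutoff values and the helper name `flags` are illustrative placeholders, not the criteria Hall's team uses.

```python
# Screen a food against the two mechanisms Hall describes: energy
# density (kcal per gram) and nutrient pairs that jointly cross
# thresholds. All cutoffs below are illustrative, not the study's.

DENSE_KCAL_PER_G = 2.25          # hypothetical energy-density cutoff
PAIR_CUTOFFS = [                 # (nutrient, cutoff, nutrient, cutoff)
    ("sugar_pct_kcal", 20, "fat_pct_kcal", 20),
    ("salt_g_per_100g", 0.3, "fat_pct_kcal", 20),
    ("salt_g_per_100g", 0.3, "carb_pct_kcal", 40),
]

def flags(food: dict) -> list[str]:
    """Return which illustrative criteria a food crosses."""
    out = []
    if food["kcal"] / food["grams"] >= DENSE_KCAL_PER_G:
        out.append("energy dense")
    for a, ca, b, cb in PAIR_CUTOFFS:
        if food.get(a, 0) >= ca and food.get(b, 0) >= cb:
            out.append(f"{a} + {b} pair")
    return out

chips = {"kcal": 536, "grams": 100, "fat_pct_kcal": 55,
         "carb_pct_kcal": 42, "sugar_pct_kcal": 2, "salt_g_per_100g": 1.5}
print(flags(chips))   # energy dense, plus two salt-based nutrient pairs
```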
“We’re starting to see a little bit of that evidence that some ultra-processed foods might have a higher risk of disease and chronic disease than others,” said Josiemer Mattei, the Donald and Sue Pritzker Associate Professor of Nutrition at the Chan School.
Still, Mattei argued for lowering consumption across the board.
“Higher consumption and higher intake of ultra-processed foods overall was associated with higher risk of eventually developing Type 2 diabetes, and more emerging evidence coming with cardiovascular disease, especially for coronary heart disease,” she said.
All the panelists agreed that obesity and negative health outcomes have risen alongside consumption of ultra-processed foods.
“We need to invest more in the science,” Mande said. “We need to make sure our regulatory agencies work, and we need to leverage the biggest programs.”
Study finds one fifth of Australian smokers plan to keep smoking
A new study has revealed that one in five Australian smokers (21%) would prefer to still be smoking in the next one to two years; only 59% would prefer to quit smoking altogether, and the remainder would either prefer to switch to a lower-harm alternative (12%) or are uncertain.
From high-speed electric cars to ETH in space
Voyager 1 is sending binary gibberish to Earth from 15.1 billion miles away
Update: According to NASA’s Jet Propulsion Laboratory spokesperson Calla Coffield on March 8, the problem is still unresolved. However, the NASA team is fairly confident that the flight data system’s (FDS) memory is causing the problems. The fixes so far have been minimal, and there are more ambitious fixes yet to…
Saving lives in the ICU: Clean teeth
‘Striking’ study suggests daily use of a toothbrush lowers risk of hospital-acquired pneumonia, intensive-care mortality
BWH Communications
Researchers have found an inexpensive tool that may save lives in the hospital — and it comes with bristles on one end.
Investigators from Harvard-affiliated Brigham and Women’s Hospital worked with colleagues from Harvard Pilgrim Health Care Institute to examine whether daily toothbrushing among hospitalized patients is associated with lower rates of hospital-acquired pneumonia and other outcomes. The team combined the results of 15 randomized clinical trials that included more than 2,700 patients and found that hospital-acquired pneumonia rates were lower among patients who brushed daily than among those who did not. The results were especially compelling among patients on mechanical ventilation. The findings are published in JAMA Internal Medicine.
“The signal that we see here toward lower mortality is striking — it suggests that regular toothbrushing in the hospital may save lives,” said corresponding author Michael Klompas, an infectious disease physician at the Brigham, professor at Harvard Medical School, and professor of population medicine at Harvard Pilgrim Health Care Institute. “It’s rare in the world of hospital preventative medicine to find something like this that is both effective and cheap. Instead of a new device or drug, our study indicates that something as simple as brushing teeth can make a big difference.”
The team conducted a systematic review and meta-analysis to determine the association between daily toothbrushing and pneumonia. Using a variety of databases, the researchers collected and analyzed randomized clinical trials from around the world that compared the effect of regular oral care with toothbrushing versus oral care without toothbrushing on the occurrence of hospital-acquired pneumonia and other outcomes.
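The pooling step at the heart of such a meta-analysis is simple arithmetic: weight each trial's log risk ratio by the inverse of its variance, then combine. A sketch with invented trial counts, not the paper's data:

```python
# Inverse-variance (fixed-effect) pooling of risk ratios, the basic
# arithmetic behind a meta-analysis. Trial counts are hypothetical.
import math

# (events_brushing, n_brushing, events_control, n_control) per trial
trials = [(10, 100, 18, 100), (7, 80, 12, 80), (15, 150, 25, 150)]

sum_w = sum_wlog = 0.0
for e1, n1, e0, n0 in trials:
    log_rr = math.log((e1 / n1) / (e0 / n0))  # log risk ratio
    var = 1 / e1 - 1 / n1 + 1 / e0 - 1 / n0   # variance of log(RR)
    w = 1 / var                               # inverse-variance weight
    sum_w += w
    sum_wlog += w * log_rr

pooled = math.exp(sum_wlog / sum_w)
se = math.sqrt(1 / sum_w)
lo, hi = (math.exp(sum_wlog / sum_w + z * se) for z in (-1.96, 1.96))
print(f"pooled risk ratio ≈ {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```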
The analysis found that daily toothbrushing was associated with a significantly lower risk for hospital-acquired pneumonia and mortality in the intensive care unit. In addition, the investigators identified that toothbrushing for patients in the ICU was associated with fewer days of mechanical ventilation and a shorter length of stay in intensive care.
Most of the research in the team’s review explored the role of teeth-cleaning in adults in the ICU. Only two of the 15 studies included in the authors’ analysis evaluated the impact of toothbrushing on non-ventilated patients. The team is hopeful that the protective effect of toothbrushing extends to non-ICU patients but noted the need for further research.
“The findings from our study emphasize the importance of implementing an oral health routine that includes toothbrushing for hospitalized patients,” Klompas said. “Our hope is that our study will help catalyze policies and programs to assure that hospitalized patients regularly brush their teeth. If a patient cannot perform the task themselves, we recommend a member of the patient’s care team assist.”
NASA laser-beams adorable cat video to Earth from 19 million miles away (video)
The year in photos
Part of the Photography series
Harvard’s campus and community through the lens of our photographers.
Academic and athletic highs, dramatic scenes on- and offstage, quiet moments, a changing of the presidential guard. The year 2023 added its imprint to the long Crimson line.
Chilly crowds join the Parade for Hasty Pudding 2023 Woman of the Year Jennifer Coolidge.
Jon Chase/Harvard Staff Photographer
Alexander Yang (from left), Katherine Marguerite, and Leen Al Kassab receive their residency assignments during Match Day at Harvard Medical School.
Kris Snibbe/Harvard Staff Photographer
Harvard Climbing Club members socialize before getting in a workout.
Jon Chase/Harvard Staff Photographer
The Harvard Foundation hosts the 37th Annual Cultural Rhythms celebrating 2023 Artist of the Year Issa Rae, flanked by Alta Mauro (left) and Sade Abraham (right).
Jon Chase/Harvard Staff Photographer
Devon Gates performs during the show.
Jon Chase/Harvard Staff Photographer
Students strut during Eleganza, an annual fashion and talent show put on at the Bright-Landry Center.
Stephanie Mitchell/Harvard Staff Photographer
Poetry of the past and present intermingle. Harvard student poet Mia Word ’24 stands for a portrait outside Longfellow House on Brattle Street. She selected the location because of its connection to Phillis Wheatley, the first published African American poet.
Stephanie Mitchell/Harvard Staff Photographer
Government concentrator Quinn Lewis ’23 represents a new generation of graduates focused on climate solutions.
Kris Snibbe/Harvard Staff Photographer
A model poses wearing “Water of Life” during the Marine Debris Fashion Show.
Photo by Scott Eisen
Chloë LeStage ’23 is an undergraduate who trained to become a doula during a COVID gap year and is dedicating her thesis to doulas in prison.
Stephanie Mitchell/Harvard Staff Photographer
Henry Cerbone ’23 is blending studies of animals, philosophy, engineering, and robotics to understand how robotics can learn from biology and biology can, perhaps, be helped by robotics.
Jon Chase/Harvard Staff Photographer
A late afternoon sun creates distinctive shadows on the Carpenter Center.
Jon Chase/Harvard Staff Photographer
Kelly Jenkins carries a large photo of her daughter Shea Jenkins ‘23, captain of the women’s lacrosse team, across the street on her way to Commencement Exercises 2023.
Jon Chase/Harvard Staff Photographer
Harvard College Baccalaureate Service takes place in Tercentenary Theatre. Rakesh Khurana (from left), Larry Bacow, and Matthew Potts process to the event.
Stephanie Mitchell/Harvard Staff Photographer
Harvard Commencement Exercises in Tercentenary Theatre. Katalin Karikó (left) and Tom Hanks are pictured as Hanks’ name is read as the “winner.”
Stephanie Mitchell/Harvard Staff Photographer
As part of the Mental Health and Wellbeing initiative, Goat Yoga on the QUAD is offered to Harvard Medical School students. Miriam Zawadzki (left) and Carla Winter, both M.D./Ph.D. students at HMS, react as goats join their poses.
Stephanie Mitchell/Harvard Staff Photographer
People with umbrellas pass beneath trees outside the courtyard of the Harvard Museum of Natural History on a rainy day.
Kris Snibbe/Harvard Staff Photographer
The columns of Austin Hall are reflected in the entrance to the building at Harvard Law School.
Stephanie Mitchell/Harvard Staff Photographer
Patrice Higonnet, Robert Walton Goelet Research Professor of French History, Emeritus, walks past the murals inside the Busch-Reisinger Museum.
Kris Snibbe/Harvard Staff Photographer
Little Amal is the 12-foot puppet of a 10-year-old Syrian refugee child at the heart of The Walk. Over the last year she has become a global symbol of human rights, especially those of refugees.
Jon Chase/Harvard Staff Photographer
Anhphu Nguyen ‘25 shows a monocle inside the Harvard Science and Engineering Complex (SEC). Nguyen’s studies specialize in human-computer interaction.
Kris Snibbe/Harvard Staff Photographer
The Physics of Sports, taught by Kelly Miller, applies the laws of physics to understand the world of athletics. Students use motion trackers and sensors to analyze motion in its dynamical and kinematic aspects.
Jon Chase/Harvard Staff Photographer
Students and faculty participate in conversation in the garden at the Center for Government and International Studies.
Photo by Dylan Goodman
Inauguration Arts Prelude features a performance by the Asian American Dance Troupe in Sanders Theatre.
Jon Chase/Harvard Staff Photographer
Harvard President Claudine Gay visits Harvard Archives to see the presidential insignia that will play an important ceremonial part of her Inauguration. Gay is pictured with the Harvard Charter of 1650.
Stephanie Mitchell/Harvard Staff Photographer
View of the procession into Tercentenary Theatre for the Inauguration Ceremony of Harvard President Gay.
Kris Snibbe/Harvard Staff Photographer
Harvard Professor Claudia Goldin is named the winner of the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2023. Goldin (pictured) speaks during a press conference.
Stephanie Mitchell/Harvard Staff Photographer
Cross-country star Graham Blanks.
Jon Chase/Harvard Staff Photographer
Ice T (right) speaks about the United Nations graphic in the Day One DNA: 50 Years in HipHop Culture exhibit at the Ethelbert Cooper Gallery of African & African American Art at The Hutchins Center for African & African American Research.
Niles Singer/Harvard Staff Photographer
President Gay met with staff, faculty, and students during a visit to the Business School in November.
Photo courtesy of Harvard Business School
Ryan Adams ’26 (left) and Max Bowman ’26 share a conversation on the rocks outside the Science Center.
Photo by Dylan Goodman
Students await the German National soccer team in Widener Library during their tour of the campus.
Photo by Dylan Goodman
Harvard President Claudine Gay comforts an attendee at an interfaith vigil held this month on the steps of Memorial Church to grieve for victims of the violence in Israel and Gaza.
Niles Singer/Harvard Staff Photographer
The tower of Eliot House is pictured along the Charles River as rowers pass the Weeks Footbridge.
Stephanie Mitchell/Harvard Staff Photographer
Pierce Hall pictured at sunset.
Stephanie Mitchell/Harvard Staff Photographer
Vanessa Valverde ’27, pictured in front of Memorial Hall, is a female veteran who served in the U.S. Marines.
Niles Singer/Harvard Staff Photographer
Overview shows the renovated roof of Sanders Theatre at Memorial Hall as people cross the street below.
Kris Snibbe/Harvard Staff Photographer
Oxford Child-Centred AI shares tips and tools to keep young people's data safe
These scientists want to put a massive 'sunshade' in orbit to help fight climate change
© Planetary Sunshade Foundation
Highlights of the Year
Physics Magazine Editors pick their favorite stories from 2023.
[Physics 16, 213] Published Mon Dec 18, 2023
Survey: Only 43 per cent of companies plan to set out environmental targets and requirements for their leases
- Lianhe Zaobao, 18 December 2023, p19
- Hao 96.3FM, 18 December 2023
- UFM100.3, 18 December 2023
The science behind the Big Bang theory
The first suggestion of the Big Bang was in 1912. Astronomer Vesto Slipher “conducted a series of observations of spiral galaxies (which were believed to be nebulae) and measured their Doppler Redshift. In almost all cases, the spiral galaxies were observed to be moving away from our own,” according to this phys.org article. Later in the…
Scaling Up a Trapped-Ion Quantum Computer
Author(s): Sara Mouradian
Major technical improvements to a quantum computer based on trapped ions could bring a large-scale version closer to reality.
[Physics 16, 209] Published Mon Dec 18, 2023
How Solar Orbiter is decoding the sun's mysterious miniflares: 'What we see is just the tip of the iceberg'
© ESA & NASA/Solar Orbiter/EUI Team
Blue Origin launches New Shepard rocket, aces landing in 1st return to flight since 2022 failure (video)
© Blue Origin
How many times has Earth orbited the sun?
© Getty Images
A star exploded 10,000 years ago and left us with the gorgeous Veil Nebula (photo)
© Miguel Claro
NASA donates Ingenuity Mars Helicopter prototype to Smithsonian
© Smithsonian National Air and Space Museum/Mark Avino
Over a third of Americans worry about getting the flu, RSV, or COVID-19
American adults are worried they or loved ones will succumb to the ‘tripledemic’ illnesses in the next three months, according to a new health survey from the Annenberg Public Policy Center.
How the songs of stars can help perfect Gaia's sweeping map of our galaxy
© ESA
Is time travel possible? An astrophysicist explains
Will it ever be possible for time travel to occur? – Alana C., age 12, Queens, New York. Have you ever dreamed of traveling through time, like characters do in science fiction movies? For centuries, the concept of time travel has captivated people’s imaginations. Time travel is the concept of moving between different points in…
The newest infrared image of Cas A offers a fresh view — and new details
In April, the James Webb Space Telescope’s (JWST) Mid-Infrared Instrument (MIRI) snapped an eye-catching picture of the supernova remnant Cassiopeia A (abbreviated Cas A), located 11,000 light-years away in the constellation Cassiopeia the Queen. The image was colored green and red to represent different wavelengths of infrared light. Now, a second look at the region…
Deep neural networks show promise as models of human hearing
Computational models that mimic the structure and function of the human auditory system could help researchers design better hearing aids, cochlear implants, and brain-machine interfaces. A new study from MIT has found that modern computational models derived from machine learning are moving closer to this goal.
In the largest study yet of deep neural networks that have been trained to perform auditory tasks, the MIT team showed that most of these models generate internal representations that share properties of representations seen in the human brain when people are listening to the same sounds.
The study also offers insight into how to best train this type of model: The researchers found that models trained on auditory input including background noise more closely mimic the activation patterns of the human auditory cortex.
“What sets this study apart is it is the most comprehensive comparison of these kinds of models to the auditory system so far. The study suggests that models that are derived from machine learning are a step in the right direction, and it gives us some clues as to what tends to make them better models of the brain,” says Josh McDermott, an associate professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines, and the senior author of the study.
MIT graduate student Greta Tuckute and Jenelle Feather PhD ’22 are the lead authors of the open-access paper, which appears today in PLOS Biology.
Models of hearing
Deep neural networks are computational models that consist of many layers of information-processing units that can be trained on huge volumes of data to perform specific tasks. This type of model has become widely used in many applications, and neuroscientists have begun to explore the possibility that these systems can also be used to describe how the human brain performs certain tasks.
“These models that are built with machine learning are able to mediate behaviors on a scale that really wasn't possible with previous types of models, and that has led to interest in whether or not the representations in the models might capture things that are happening in the brain,” Tuckute says.
When a neural network is performing a task, its processing units generate activation patterns in response to each audio input it receives, such as a word or other type of sound. Those model representations of the input can be compared to the activation patterns seen in fMRI brain scans of people listening to the same input.
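One common way to score that comparison, shown here as a generic illustration rather than this paper's exact pipeline, is to fit a regularized linear map from a model layer's activations to each voxel's responses and correlate held-out predictions with the measured data:

```python
# Generic illustration of comparing model activations to fMRI data:
# fit a regularized linear map from layer activations to voxel
# responses, then correlate held-out predictions with the measured
# responses. Data here are random stand-ins, so correlations are ~0.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sounds, n_units, n_voxels = 165, 512, 300      # hypothetical sizes
acts = rng.normal(size=(n_sounds, n_units))      # model layer activations
voxels = rng.normal(size=(n_sounds, n_voxels))   # fMRI voxel responses

A_tr, A_te, V_tr, V_te = train_test_split(acts, voxels, random_state=0)
mapping = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(A_tr, V_tr)
pred = mapping.predict(A_te)

rs = [np.corrcoef(pred[:, v], V_te[:, v])[0, 1] for v in range(n_voxels)]
print(f"median voxel correlation: {np.median(rs):.3f}")
```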
In 2018, McDermott and then-graduate student Alexander Kell reported that when they trained a neural network to perform auditory tasks (such as recognizing words from an audio signal), the internal representations generated by the model showed similarity to those seen in fMRI scans of people listening to the same sounds.
Since then, these types of models have become widely used, so McDermott’s research group set out to evaluate a larger set of models, to see if the ability to approximate the neural representations seen in the human brain is a general trait of these models.
For this study, the researchers analyzed nine publicly available deep neural network models that had been trained to perform auditory tasks, and they also created 14 models of their own, based on two different architectures. Most of these models were trained to perform a single task — recognizing words, identifying the speaker, recognizing environmental sounds, and identifying musical genre — while two of them were trained to perform multiple tasks.
When the researchers presented these models with natural sounds that had been used as stimuli in human fMRI experiments, they found that the internal model representations tended to exhibit similarity with those generated by the human brain. The models whose representations were most similar to those seen in the brain were models that had been trained on more than one task and had been trained on auditory input that included background noise.
“If you train models in noise, they give better brain predictions than if you don’t, which is intuitively reasonable because a lot of real-world hearing involves hearing in noise, and that’s plausibly something the auditory system is adapted to,” Feather says.
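In practice, "training in noise" can be as simple as mixing a background recording into each training clip at a controlled signal-to-noise ratio before it reaches the model. A minimal sketch, with arbitrary sizes and SNR range, and a helper name (`mix_at_snr`) of my own choosing:

```python
# Mixing background noise into training audio at a target SNR, one
# simple way to implement noisy training. Values are illustrative.
import numpy as np

def mix_at_snr(signal: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Return signal plus noise scaled to the requested SNR (in dB)."""
    noise = np.resize(noise, signal.shape)       # loop or trim noise to fit
    p_sig = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12        # avoid division by zero
    scale = np.sqrt(p_sig / (p_noise * 10 ** (snr_db / 10)))
    return signal + scale * noise

rng = np.random.default_rng(0)
clip = rng.normal(size=16000)     # stand-in for 1 s of 16 kHz audio
babble = rng.normal(size=48000)   # stand-in for a background recording
noisy = mix_at_snr(clip, babble, snr_db=rng.uniform(0, 20))
```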
Hierarchical processing
The new study also supports the idea that the human auditory cortex has some degree of hierarchical organization, in which processing is divided into stages that support distinct computational functions. As in the 2018 study, the researchers found that representations generated in earlier stages of the model most closely resemble those seen in the primary auditory cortex, while representations generated in later model stages more closely resemble those generated in brain regions beyond the primary cortex.
Additionally, the researchers found that models that had been trained on different tasks were better at replicating different aspects of audition. For example, models trained on a speech-related task more closely resembled speech-selective areas.
“Even though the model has seen the exact same training data and the architecture is the same, when you optimize for one particular task, you can see that it selectively explains specific tuning properties in the brain,” Tuckute says.
McDermott’s lab now plans to make use of their findings to try to develop models that are even more successful at reproducing human brain responses. In addition to helping scientists learn more about how the brain may be organized, such models could also be used to help develop better hearing aids, cochlear implants, and brain-machine interfaces.
“A goal of our field is to end up with a computer model that can predict brain responses and behavior. We think that if we are successful in reaching that goal, it will open a lot of doors,” McDermott says.
The research was funded by the National Institutes of Health, an Amazon Fellowship from the Science Hub, an International Doctoral Fellowship from the American Association of University Women, an MIT Friends of McGovern Institute Fellowship, a fellowship from the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT, and a Department of Energy Computational Science Graduate Fellowship.
© Image: iStock
Closing a Gap in Nuclear Theory
Author(s): Charles Day
Theoretical descriptions of the first excited state of helium-4 are now consistent with experimental data.
[Physics 16, 207] Published Wed Dec 13, 2023
Creating a future in music for children with disabilities
A University of Melbourne research program run in conjunction with Melbourne Youth Orchestras, the Adaptive Music Bridging Program, has proudly seen its first cohort of students take to the stage for their debut performance.
Lift-off! SpIRIT nanosatellite launches aboard SpaceX rocket
The University of Melbourne and the Italian Space Agency have announced the successful launch of SpIRIT, a landmark nanosatellite mission supported by the Australian Space Agency.
Deep brain stimulation improves cognition after injury
AI-generated images map visual functions in the brain
Big-data study explores social factors affecting child health
Holiday giving at Penn
From shoes and coats to Hot Wheels and Paw Patrol, the Netter Center’s Isabel Sampson-Mapp coordinates holiday giving.
Public knowledge varies greatly on flu and COVID-19
The latest Annenberg Public Health and Knowledge Survey finds the answers to eight survey questions—four for the flu and four for COVID—have the strongest ability to independently predict individual vaccine willingness.
Lipid nanoparticles that deliver mRNA to T cells hold promise for autoimmune diseases
A new platform to engineer adoptive cell therapies for specific autoimmune diseases has the potential to create therapies for allergies, organ transplants, and more.
Solar storms could affect train safety by glitching railroad signals
© NASA/SDO and the AIA, EVE, and HMI science teams
NASA's Voyager 1 probe in interstellar space can't phone home (again) due to glitch
© NASA, ESA, and G. Bacon (STScI)
How to be an astronaut
The first question a student asked Warren “Woody” Hoburg ’08 during his visit to MIT's Department of Aeronautics and Astronautics (AeroAstro) this November was: “It seems like there’s no real way to know if being an astronaut is something you could really do. Are there any activities we can try out and see if astronaut-related things are something we might want to do?”
Hoburg’s response: There is no one path to space.
“If you look at all the classes of astronauts, there are all sorts of life paths that lead people to the astronaut corps. Do the things that are fun and exciting — work on things you’re excited to do just because it’s fulfilling in and of itself, not because of where it might lead,” he told a room full of Course 16 students.
Hoburg was the only faculty member among his peers in NASA’s Astronaut Class 22, for example. His own CV includes outdoor sports, computer science and robotics, EMT and search and rescue service, design optimization research, and flying airplanes.
In a two-day visit to the department that included a keynote lecture as well as fireside chats and Q&As with undergraduates and grad students, Hoburg shared his personal journey to becoming an astronaut, lessons and observations from his time aboard the International Space Station, and his excitement for what’s next in space exploration.
From MIT to ISS
For Hoburg, the path that led him first to MIT and eventually to the International Space Station wasn’t straightforward, or focused on a specific goal. From his aerospace studies at MIT, he was torn between going to grad school or getting a job in industry. He decided to pursue computer science in grad school, and from there wasn’t sure if he should stay in academia, join a startup, or join the U.S. Air Force. It was late in grad school when his research started going well that he decided to stick with it, and that decision brought him back to MIT in 2014 as an assistant professor leading a research group in AeroAstro.
He had more or less forgotten his childhood dream of becoming an astronaut. “Not in a bad way,” he clarifies, “just there were other things consuming my time and interest.” But then, a friend suggested they submit applications for the NASA Astronaut Candidate Program. “I remembered that when I was a kid I did think that would just be the coolest job, so I applied. I never thought I’d actually get accepted.”
Performing in an operational environment
Hoburg credits his time at MIT with nurturing a love of adventure and pursuing new ideas and passions. “Everyone here was awesome academically, that was a given. But it seemed like everyone also had a wild unique interest, and I loved that about this community.” As an undergraduate Hoburg remembers rushing through his P-sets so he could go off rock-climbing and skiing for the weekend.
The MIT Alpine Ski team was his first experience on a tight-knit, mission-focused team, which has become a core part of his personal and professional ethos. Before starting grad school at the University of California at Berkeley, he took a year off to be an EMT, and he spent his summers in California on the Yosemite Search and Rescue team.
“That was my first experience doing what I would call real operational stuff, getting called out on a mission to help someone, working with a high-performing team in an austere environment,” he said. “A lot of the civilians who get selected at NASA have something operational in their background, in addition to their technical expertise. I think search and rescue ultimately helped me with my astronaut application, but I don’t know of anyone who had gone that route before me. It did help me grow into a strong operator — but at the time I just wanted to be out in the mountains responding to emergencies.”
This theme of operational capacity emerged throughout Hoburg’s talks and Q&As. He noted that astronaut candidates tend to be natural team players, and the two-plus years of training prepare them to approach every situation with trust and confidence. A comfort level with versatility is critical for an astronaut: they have to fly and dock the spacecraft, operate and perform maintenance on the ISS itself, perform spacewalks, and of course get home again. All of this is in service of their primary mission aboard the ISS:
“We’re just operators up there,” says Hoburg, “we work on literally hundreds of different experiments, while the PIs are on the ground. The science work is definitely the purpose of why we’re there. That place is busy — we are working 12 hour days a lot of the time.”
Moon, Mars, and beyond
Many of the students’ questions and Hoburg’s responses were practical, perhaps unsurprisingly in a department full of aerospace engineers. His ISS wish list — free-flying robots to help with holding and carrying; robotic cameras to better document their experiments and other pursuits onboard; improved automation and manual control interfaces in launch, flight, and docking; better solutions to the challenges of stowage and organization — may be the very projects that this generation of engineers tackles.
Hoburg also shared some broader insights from his career as an astronaut so far, including his personal reflection on the famously profound experience of looking at the Earth from space:
“Earth actually looks really big from the ISS,” he said, adding that he would love to see it from the far-away perspective of the Apollo 8 lunar mission. “The overpowering feeling for me was looking at the atmosphere. When you do a spacewalk, it’s pretty in your face that you’re in a vacuum. There is just pure death all around you. And when you look at the Earth, you see how it’s protected by this very, very thin layer.”
Hoburg is enthusiastic for NASA’s upcoming return to the moon, and for the growing commercialization of low Earth orbit that’s allowing NASA to focus on “a transition period beyond low Earth orbit.” He’s keen to help with the lunar missions that will help prepare the next generations of spacefarers to get to Mars.
Above all, he’s excited about the possibilities ahead. “Looking back at the 20th century, I think the moon landing was truly one of our crowning achievements. So part of it is purely inspirational, that spirit of adventure and exploration. Putting humans farther out into space is an audacious goal, worth our time and effort.”
© Photo: Rachel Ornitz
Gene-editing treatment could replace cholesterol meds
Alvin Powell
Harvard Staff Writer
Early stage test shows promise, but cardiologist notes more study needed into longer-term, unintended effects
A recent trial of a novel gene-editing technique that lowered dangerously high cholesterol by up to 55 percent has generated talk of a new front opening against cardiovascular disease, which kills nearly 700,000 Americans each year and is the nation’s leading cause of death.
In a presentation at the American Heart Association’s November meeting, Boston-based Verve Therapeutics announced results of a Phase 1 trial of 10 participants suffering from familial hypercholesterolemia, an inherited condition causing extremely high cholesterol, which often leads to early death due to cardiovascular disease. The treatment uses a gene-editing technique called base editing, in which precision changes are made to a single base in a patient’s DNA in the liver. In this case, the change was made to a gene that governs the liver’s handling of LDL, popularly termed “bad cholesterol.”
To learn more, the Gazette spoke with Michelle O’Donoghue, associate professor at Harvard Medical School and McGillycuddy-Logue Distinguished Chair in Cardiology at Brigham and Women’s Hospital. O’Donoghue said the advance has generated extraordinary optimism in cardiology circles, tempered by caution due to the risks inherent in changing a patient’s DNA.
Q&A
Michelle O’Donoghue
GAZETTE: This was just an initial, Phase 1 trial, but the results have generated a lot of excitement. How would you characterize the trial’s outcome?
O’DONOGHUE: This was a radical concept developed inside a lab and to see it being translated into the treatment of real people is an extraordinary leap. There’s a mix of enthusiasm and optimism for this novel technology, but also a healthy dose of caution and concern. We still need to completely understand the efficacy and safety profile of this type of approach.
GAZETTE: There were two adverse events among the participants, a heart attack and a cardiac arrest — one of which was fatal. It was ultimately determined that the treatment was likely not the cause. But do these kinds of incidents reflect why there is so much concern over safety and efficacy?
O’DONOGHUE: It’s more conceptual. There were too few patients within that initial cohort to really have a firm handle on the safety profile. As the investigators themselves stated, these were very sick persons in the first place. It makes sense to start the investigation of these therapies in patients who need it most desperately — they have genetic conditions that predispose them to very elevated cholesterol levels and already have established atherosclerotic disease (buildup of fatty plaque in the arteries). That being said, in the absence of a control group, one doesn’t know whether or not the treatment was related to the occurrence of the heart attack or fatal cardiac arrest.
But I think that the concerns are more than just theoretical. For many of these gene-editing strategies, you are, in essence, permanently changing that person’s DNA. There are different approaches, some of which are thought to be in part reversible, but nonetheless this type of strategy toward treating illness is going to come with some skepticism. And this early on, that is certainly appropriate.
GAZETTE: Is it clear that the gene involved, PCSK9, only increases LDL cholesterol? Is there any chance of unintended consequences elsewhere in the body as a result of shutting this gene off?
O’DONOGHUE: As a therapeutic target, PCSK9 is very well established at this point. It’s one of the more elegant stories of drug development. There were individuals in France who had very elevated cholesterol levels, and it was found that they had gain-of-function mutations for the PCSK9 protein. That meant that they had higher levels of this protein being synthesized by their liver cells. On the liver cell surface is an LDL receptor that is important because it mops up excess LDL cholesterol, that “bad” cholesterol in the blood.
They found that PCSK9 targets the LDL receptor on the liver cell surface for degradation. So, with too much PCSK9, that receptor starts to disappear, and you’re not able to mop up that extra LDL cholesterol in the circulation.
Scientists then found the converse. There were individuals who carried a loss-of-function mutation for that PCSK9 protein. They were found to have low rates of cardiovascular disease, and it does not appear to come at the price of other abnormalities.
We also already have treatments, monoclonal antibodies and a small interfering RNA, that target PCSK9, and these have previously been shown to be efficacious, with a very acceptable safety profile.
GAZETTE: The study participants had familial hypercholesterolemia, a genetic aberration that causes very serious health problems. Might this one day be used widely, for people with mildly elevated cholesterol or will it always be reserved for the worst cases?
O’DONOGHUE: There are many factors that go into that decision. We have individuals having heart attacks at younger and younger ages, and if you ask them whether they’d rather be on cholesterol-lowering medication for the next several decades or have a one-and-done approach toward treating it, I think different people might choose different paths.
Of course, there’s going to be an appropriate degree of skepticism about gene editing until we completely understand the long-term safety of permanently editing somebody’s genes. That’s why it is relevant that some of these technologies are reversible. We need to follow people who are being treated with novel gene-editing therapies for several years before we can feel confident about recommending the therapies on a more widespread basis.
Some have asked — for cholesterol in particular — why we need a gene-editing approach when we have treatments available that are not too burdensome. But most people end up discontinuing new therapies due to concerns about side effects, cost, and just the reality of daily compliance. Even for those who are the most motivated, we know that doses get missed.
GAZETTE: You mentioned it being reversible. Is that because if you can change a base with this technology, you can easily go in and change it back?
O’DONOGHUE: It comes down to which technology is applied. The two types of gene-editing approaches that people talk the most about would be the traditional CRISPR approach, which people liken to using a pair of scissors, where an enzyme cuts both strands of DNA at a target point. That would not be expected to be reversible.
The base editing used in this case holds the possibility of being reversible. That’s why some have used the analogy that it’s more like a pencil and eraser than a pair of scissors. You’re making a single base change on a strand of DNA to change the spelling, in essence, of that gene. In theory, one could change it back if required, but that remains unproven.
GAZETTE: Are the patients who’ve been treated cured?
O’DONOGHUE: The hope, eventually, would be that they would no longer require cholesterol-lowering medications. Based on what was presented, their cholesterol levels were reduced by up to 55 percent.
That would be expected to reduce the risk of cardiovascular events, but there are other pathways that contribute to cardiovascular disease. So that treatment, in and of itself, may not be sufficient to make a patient bulletproof when it comes to having a heart attack.
GAZETTE: Before this would be generally available, will it have to go through several more years of trials?
O’DONOGHUE: The question is what will it take for the FDA to feel comfortable going ahead with approval. Typically, any drug or investigational therapy would go through at least three phases of clinical trial testing. I think the question will be what duration of follow-up will be required, because this is different than a medication where if you stop taking it, one expects the drug and the effects of the therapy to go away relatively quickly. We’re talking about a permanent change to the DNA and these are relatively uncharted waters. So how long you need to follow somebody to be confident about the safety profile will be a challenge for the FDA to determine.
Also, the arithmetic, in terms of risk/benefit, of the gene-editing approach may be different for different disease states. For a genetic illness where there currently are no therapies available, it may be more in the patient’s interest to try a novel gene-editing approach, but regulators may be more hesitant to approve the technology in a situation like hypercholesterolemia, where several therapies are available.
GAZETTE: Where does this fit broadly on the landscape of treatments for cardiovascular disease?
O’DONOGHUE: It has opened a new landscape in terms of potential treatments for different types of cardiovascular diseases. An area that’s being hotly explored is a condition called amyloidosis, in which a protein can deposit in the heart, leading to heart failure. Currently, few treatments exist.
There are also rare diseases that have a genetic basis and where gene editing may really offer hope. Ultimately, we’ll have to see where the science leads us, but it’s exciting when what used to be thought of as science fiction is making its way slowly into prime time.
Artificial intelligence for safer bike helmets and better shoe soles
Astronomers capture a green ghost in our atmosphere
High up in the atmosphere, near the boundary of space, a dazzling flash of red sometimes briefly appears above a thunderstorm before evaporating away. These events, which occur high above lightning strikes in the lower atmosphere, are called sprites. They fall under the umbrella of transient luminous events (TLEs) and only in the…
MIT researchers observe a hallmark quantum behavior in bouncing droplets
In our everyday classical world, what you see is what you get. A ball is just a ball, and when lobbed through the air, its trajectory is straightforward and clear. But if that ball were shrunk to the size of an atom or smaller, its behavior would shift into a quantum, fuzzy reality. The ball would exist as not just a physical particle but also a wave of possible particle states. And this wave-particle duality can give rise to some weird and sneaky phenomena.
One of the stranger prospects comes from a thought experiment known as the “quantum bomb tester.” The experiment proposes that a quantum particle, such as a photon, could act as a sort of telekinetic bomb detector. Through its properties as both a particle and a wave, the photon could, in theory, sense the presence of a bomb without physically interacting with it.
The concept checks out mathematically and is in line with what the equations governing quantum mechanics allow. But when it comes to spelling out exactly how a particle would accomplish such a bomb-sniffing feat, physicists are stumped. The conundrum lies in a quantum particle’s inherently shifty, in-between, undefinable state. In other words, scientists just have to trust that it works.
But mathematicians at MIT are hoping to dispel some of the mystery and ultimately establish a more concrete picture of quantum mechanics. They have now shown that they can recreate an analog of the quantum bomb tester and generate the behavior that the experiment predicts. They’ve done so not in an exotic, microscopic, quantum setting, but in a seemingly mundane, classical, tabletop setup.
In a paper appearing today in Physical Review A, the team reports recreating the quantum bomb tester in an experiment with bouncing droplets. The team found that the interaction of the droplet with its own waves is similar to a photon’s quantum wave-particle behavior: When dropped into a configuration similar to what is proposed in the quantum bomb test, the droplet behaves in exactly the same statistical manner that is predicted for the photon. If there were actually a bomb in the setup 50 percent of the time, the droplet, like the photon, would detect it, without physically interacting with it, 25 percent of the time.
The fact that the statistics in both experiments match up suggests that something in the droplet’s classical dynamics may be at the heart of a photon’s otherwise mysterious quantum behavior. The researchers see the study as another bridge between two realities: the observable, classical world and the fuzzier quantum realm.
“Here we have a classical system that gives the same statistics as arises in the quantum bomb test, which is considered one of the wonders of the quantum world,” says study author John Bush, professor of applied mathematics at MIT. “In fact, we find that the phenomenon is not so wonderful after all. And this is another example of quantum behavior that can be understood from a local realist perspective.”
Bush’s co-author is former MIT postdoc Valeri Frumkin.
Making waves
To some physicists, quantum mechanics leaves too much to the imagination and doesn’t say enough about the actual dynamics from which such weird phenomena supposedly arise. In 1927, in an attempt to crystallize quantum mechanics, physicist Louis de Broglie presented pilot wave theory — a still-controversial idea positing that a particle’s quantum behavior is determined not by an intangible, statistical wave of possible states but by a physical “pilot” wave of its own making that guides the particle through space.
The concept was mostly discounted until 2005, when physicist Yves Couder discovered that de Broglie’s quantum waves could be replicated and studied in a classical, fluid-based experiment. The setup involves a bath of fluid that is made to subtly vibrate up and down, though not quite enough to generate waves on its own. A millimeter-sized droplet of the same fluid is then dispensed over the bath, and as it bounces off the surface, the droplet resonates with the bath’s vibrations, creating what physicists know as a standing wave field that “pilots,” or pushes, the droplet along. The effect is of a droplet that appears to walk along a rippled surface in patterns that turn out to be in line with de Broglie’s pilot wave theory.
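A toy discretization, in the spirit of stroboscopic pilot-wave models and with made-up parameters, captures the feedback loop: each bounce deposits a decaying circular wave, and the slope of the accumulated waves nudges the droplet along. This is a cartoon of the mechanism, not the experiment's fitted model.

```python
# Toy pilot-wave walker: every bounce deposits a decaying Bessel-shaped
# standing wave, and the droplet is pushed by the slope of the
# superposed wave field. Parameters are illustrative, not fitted.
import numpy as np
from scipy.special import j1

K_F, MEMORY, DT, KICK, DRAG = 1.0, 30.0, 1.0, 0.05, 0.9

def wave_slope(p, t, impacts):
    """Gradient at point p, time t, of the superposed decaying waves."""
    g = np.zeros(2)
    for q, s in impacts:
        r = p - q
        d = np.linalg.norm(r) + 1e-9
        dh = -K_F * j1(K_F * d)              # radial derivative of J0(K_F r)
        g += np.exp(-(t - s) / MEMORY) * dh * (r / d)
    return g

x, v = np.zeros(2), np.array([0.02, 0.0])
impacts = []                                  # (position, time) of bounces
for n in range(200):                          # one iteration per bounce
    t = n * DT
    impacts.append((x.copy(), t))
    v = DRAG * v - KICK * wave_slope(x, t, impacts)  # drag plus wave kick
    x = x + v * DT
print(x)   # the droplet tends to self-propel, i.e., it "walks"
```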
For the last 13 years, Bush has worked to refine and extend Couder’s hydrodynamic pilot wave experiments and has successfully used the setup to observe droplets exhibiting emergent, quantum-like behavior, including quantum tunneling, single-particle diffraction, and surreal trajectories.
“It turns out that this hydrodynamic pilot-wave experiment exhibits many features of quantum systems which were previously thought to be impossible to understand from a classical perspective,” Bush says.
Bombs away
In their new study, he and Frumkin took on the quantum bomb tester. The thought experiment begins with a conceptual interferometer — essentially, two corridors of the same length that branch out from the same starting point, then turn and converge, forming a rhombus-like configuration as the corridors continue on, each ending in a respective detector.
According to quantum mechanics, if a photon is fired from the interferometer’s starting point, through a beamsplitter, the particle should travel down one of the two corridors with equal probability. Meanwhile, the photon’s mysterious “wave function,” or the sum of all its possible states, travels down both corridors simultaneously. The wave function interferes in such a way as to ensure that the particle only appears at one detector (let’s call this D1) and never the other (D2). Hence, the photon should be detected at D1 100 percent of the time, regardless of which corridor it traveled through.
If there is a bomb in one of the two corridors and a photon heads down that corridor, it triggers the bomb: the setup is blown to bits, and no photon is detected at either detector. But if the photon travels down the corridor without the bomb, something weird happens: Its wave function, in traveling down both corridors, is cut short in one by the bomb. As it’s not quite a particle, the wave does not set off the bomb. But the wave interference is altered in such a way that the particle will be detected with equal probability at D1 and D2. Any signal at D2 therefore means that a photon has detected the presence of the bomb without physically interacting with it. If the bomb is present 50 percent of the time, then this weird quantum bomb detection should occur 25 percent of the time.
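These statistics are easy to check numerically. Below is a minimal sketch, assuming an idealized two-port interferometer with a standard 50/50 beamsplitter matrix and the bomb modeled as a which-path measurement; the port labels and this Python rendering are illustrative, not part of either the thought experiment or the MIT setup.

import numpy as np

# 50/50 beamsplitter acting on (upper, lower) path amplitudes.
BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

def run_once(bomb_present, rng):
    """Send one photon through; returns 'boom', 'D1', or 'D2'."""
    amp = BS @ np.array([1.0, 0.0])  # first beamsplitter
    if bomb_present:
        # The bomb acts as a which-path measurement in the lower arm.
        if rng.random() < abs(amp[1]) ** 2:
            return "boom"  # the photon took the bomb's arm
        amp = np.array([amp[0], 0.0]) / abs(amp[0])  # collapse to upper arm
    amp = BS @ amp  # second beamsplitter
    return "D1" if rng.random() < abs(amp[1]) ** 2 else "D2"

rng = np.random.default_rng(0)
counts = {"boom": 0, "D1": 0, "D2": 0}
N = 100_000
for _ in range(N):
    counts[run_once(True, rng)] += 1
print({k: round(v / N, 3) for k, v in counts.items()})
# With a bomb present: roughly half the runs explode, a quarter end at
# D1, and a quarter end at D2 -- the interaction-free detections.

Without the bomb, the same code routes every photon to D1, reproducing the perfect interference described above.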
In their new study, Bush and Frumkin set up an analogous experiment to see if this quantum behavior could emerge in classical droplets. Into a bath of silicone oil, they submerged a structure similar to the rhombus-like corridors in the thought experiment. They then carefully dispensed tiny oil droplets into the bath and tracked their paths. They added a structure to one side of the rhombus to mimic a bomb-like object and observed how the droplet and its wave patterns changed in response.
In the end, they found that 25 percent of the time a droplet bounced through the corridor without the “bomb,” while its pilot waves interacted with the bomb structure in a way that pushed the droplet away from the bomb. From this perspective, the droplet was able to “sense” the bomb-like object without physically coming into contact with it. While the droplet exhibited quantum-like behavior, the team could plainly see that this behavior emerged from the droplet’s waves, which physically helped to keep the droplet away from the bomb. These dynamics, the team says, may also help to explain the mysterious behavior in quantum particles.
“Not only are the statistics the same, but we also know the dynamics, which was a mystery,” Frumkin says. “And the inference is that an analogous dynamics may underlie the quantum behavior.”
"This system is the only example we know which is not quantum but shares some strong wave-particles properties," says theoretical physicist Matthieu Labousse, of ESPCI Paris, who was not involved in the study. "It is very surprising that many examples thought to be peculiar to the quantum world can be reproduced by such a classical system. It enables to understand the barrier between what it is specific to a quantum system and what is not. The latest results of the group at MIT pushes the barrier very far."
This research is supported, in part, by the National Science Foundation.
Black holes, explained by an astrophysicist
While we know a lot about black holes, they remain one of the greatest mysteries in the universe. So let’s dive in — not literally — to answer all your burning questions about these cosmic enigmas. First, the basics. A black hole is a region of the universe where gravity is so outrageously strong…
The Star of Bethlehem: Can science explain what it really was?
Jupiter and Saturn came together in a “Great Conjunction” in 2020 that was unlike any seen in nearly 800 years. The two planets appeared so close together in Earth’s night sky on the winter solstice that they looked almost like a single object. That prompted some to dub the sight a “Christmas Star,” and others to…
What radiologists can learn from looking at art
Greek grave stele of a woman dying in childbirth, c. 330 BCE.
Medical humanities program inspires exhibit that rewards critical viewing
Samantha Laine Perfas
Harvard Staff Writer
It’s essential that radiologists develop a critical — and empathetic — eye to inspect X-rays, CT scans, and other medical images. Could an arts program help sharpen those skills?
That question sparked the Seeing in Art and Medical Imaging program — a partnership between the Harvard Art Museums and the Department of Radiology at Brigham and Women’s Hospital — now in its sixth year.
It’s also the subject of an exhibition on view at the Art Museums through the end of the month that allows the public to explore the same questions as medical residents.
“This is the first time that we have ever had an exhibition at the museum that is about a curricular collaboration,” said Jen Thum, co-curator of the exhibition and a program instructor. “It’s showing people how radiologists work and giving them a chance to think about the same kinds of big issues.”
“Dorothee Oppermann, 20, Nursing Student,” 1974.
“Shutter (c)” by Rosemarie Trockel, 2006.
“Available Portrait Colors” by Annette Lemieux, 1990.
Hyewon Hyun, a Brigham and Women’s radiologist and co-founder of the program, felt that residents would benefit from interdisciplinary learning. The yearlong program focuses on the “human-to-human” connection, a vital part of working both in hospitals and with art.
“In order to take care of patients, I would want that physician to be as compassionate and as understanding and as human as possible,” Hyun said.
Through the program — and now the exhibit — art is the starting point for in-depth conversations about medicine, humanity, and different ways of seeing the world. It gives residents space and permission to sharpen their observation skills in a low-stakes environment.
“Radiologists, for their entire working lives, are looking at images, which is exactly what I do,” said Thum of her role as an art curator. “But we do it at a different pace. So we slow down; we can remove the residents from the hierarchy of the hospital … it’s a permission to do things that they can’t do in the hospital.”
Residents explore seven themes: narrative, objectivity, embodiment, empathy, power, ambiguity, and care. Each plays an important role in the relationships of medical professionals and their patients.
“We know from data that medical students kind of lose their empathy for patients,” Hyun said. It can be a lot of pressure to work in a fast-paced hospital setting; removing residents from this environment gives them space to explore their own humanity.
Take the sculpture “Shutter (c)” by artist Rosemarie Trockel. In the red-glazed stoneware, people see all sorts of things, Thum explains. As viewers interact with the sculpture, they may see a window, raw meat, ribs, or other objects.
“It’s very different for a radiologist — versus what happens to them in the reading room — to have multiple, equally valid readings of an image that are very different from each other at one time,” Thum said. “Or for there not to be a single correct answer, or no answer at all. That ambiguity can be kind of uncomfortable, but it’s very productive.”
The installation is on view through Dec. 30.
Thum said while the exhibit was born out of the Seeing in Art and Medical Imaging program, she believes it will appeal to more than just medical professionals.
“Most visitors to an art museum don’t slow down either,” Thum said. “You don’t have to be a doctor to engage in the space and to consider these questions.”
For Hyun, her hope is that visitors will consider the connection between medicine and art in a new way.
“For young people, I’m really hoping that they would see that there is not this divisiveness between the humanities and medicine that they think there is, because that’s an artificial division that has been developed over decades,” she said. “But in order to be a good physician for a long period of time … you need to nurture the other side, the humanity side.”
Harvard Art Museums are free and open to the public. Seeing in Art and Medicine is located in the University Research Gallery through Dec. 30.
Students presenting latest research findings in graph neural networks at NeurIPS
Inconsistency Turns Up Again for Cosmological Observations
Author(s): Mijin Yoon
A new analysis of the distribution of matter in the Universe continues to find a discrepancy between the clumpiness of dark matter in the late and early Universe, suggesting a possible fundamental error in the standard cosmological model.
[Physics 16, 193] Published Mon Dec 11, 2023
Coming soon to public toilets – a robot cleaner
- The Straits Times, 4 December 2023, The Big Story pA2
- Tamil Murasu, 4 December 2023, p2
- Money 89.3FM, 4 December 2023
- ONE FM 91.3, 4 December 2023
- Kiss92FM, 4 December 2023
- The New Paper, 4 December 2023
- Berita Harian, 7 December 2023, p11
BOINC — Volunteer Computing
Scientists 3D print self-heating microfluidic devices
MIT researchers have used 3D printing to produce self-heating microfluidic devices, demonstrating a technique which could someday be used to rapidly create cheap, yet accurate, tools to detect a host of diseases.
Microfluidics, miniaturized machines that manipulate fluids and facilitate chemical reactions, can be used to detect disease in tiny samples of blood or fluids. At-home test kits for Covid-19, for example, incorporate a simple type of microfluidic.
But many microfluidic applications require chemical reactions that must be performed at specific temperatures. These more complex microfluidic devices, which are typically manufactured in a clean room, are outfitted with heating elements made from gold or platinum using a complicated and expensive fabrication process that is difficult to scale up.
Instead, the MIT team used multimaterial 3D printing to create self-heating microfluidic devices with built-in heating elements, through a single, inexpensive manufacturing process. They generated devices that can heat fluid to a specific temperature as it flows through microscopic channels inside the tiny machine.
Their technique is customizable, so an engineer could create a microfluidic that heats fluid to a certain temperature or given heating profile within a specific area of the device. The low-cost fabrication process requires about $2 of materials to generate a ready-to-use microfluidic.
The process could be especially useful in creating self-heating microfluidics for remote regions of developing countries where clinicians may not have access to the expensive lab equipment required for many diagnostic procedures.
“Clean rooms in particular, where you would usually make these devices, are incredibly expensive to build and to run. But we can make very capable self-heating microfluidic devices using additive manufacturing, and they can be made a lot faster and cheaper than with these traditional methods. This is really a way to democratize this technology,” says Luis Fernando Velásquez-García, a principal scientist in MIT’s Microsystems Technology Laboratories (MTL) and senior author of a paper describing the fabrication technique.
He is joined on the paper by lead author Jorge Cañada Pérez-Sala, an electrical engineering and computer science graduate student. The research will be presented at the PowerMEMS Conference this month.
An insulator becomes conductive
This new fabrication process utilizes a technique called multimaterial extrusion 3D printing, in which several materials can be squirted through the printer’s many nozzles to build a device layer by layer. The process is monolithic, which means the entire device can be produced in one step on the 3D printer, without the need for any post-assembly.
To create self-heating microfluidics, the researchers used two materials — a biodegradable polymer known as polylactic acid (PLA) that is commonly used in 3D printing, and a modified version of PLA.
The modified PLA has copper nanoparticles mixed into the polymer, which convert this insulating material into an electrical conductor, Velásquez-García explains. When electrical current is fed into a resistor composed of this copper-doped PLA, energy is dissipated as heat.
“It is amazing when you think about it because the PLA material is a dielectric, but when you put in these nanoparticle impurities, it completely changes the physical properties. This is something we don’t fully understand yet, but it happens and it is repeatable,” he says.
Using a multimaterial 3D printer, the researchers fabricate a heating resistor from the copper-doped PLA and then print the microfluidic device, with microscopic channels through which fluid can flow, directly on top in one printing step. Because the components are made from the same base material, they have similar printing temperatures and are compatible.
Heat dissipated from the resistor will warm fluid flowing through the channels in the microfluidic.
In addition to the resistor and microfluidic, they use the printer to add a thin, continuous layer of PLA that is sandwiched between them. It is especially challenging to manufacture this layer because it must be thin enough so heat can transfer from the resistor to the microfluidic, but not so thin that fluid could leak into the resistor.
The resulting machine is about the size of a U.S. quarter and can be produced in a matter of minutes. Channels about 500 micrometers wide and 400 micrometers tall are threaded through the microfluidic to carry fluid and facilitate chemical reactions.
Importantly, the PLA material is translucent, so fluid in the device remains visible. Many processes rely on visualization or the use of light to infer what is happening during chemical reactions, Velásquez-García explains.
Customizable chemical reactors
The researchers used this one-step manufacturing process to generate a prototype that could heat fluid by 4 degrees Celsius as it flowed between the input and the output. This customizable technique could enable them to make devices which would heat fluids in certain patterns or along specific gradients.
“You can use these two materials to create chemical reactors that do exactly what you want. We can set up a particular heating profile while still having all the capabilities of the microfluidic,” he says.
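As a back-of-envelope check on what such heating requires, the steady-state power follows from the energy balance P = ṁ·c_p·ΔT. The sketch below assumes a water-like fluid and a guessed flow rate; both are illustrative assumptions, not values reported by the team.

# Back-of-envelope estimate of the resistor power needed to raise a
# flowing fluid's temperature by a fixed amount (P = m_dot * c_p * dT).
# The fluid properties and flow rate are illustrative assumptions.
rho = 1000.0              # density, kg/m^3 (water-like)
c_p = 4186.0              # specific heat, J/(kg*K)
flow_uL_per_min = 1000.0  # assumed volumetric flow rate, uL/min
delta_T = 4.0             # temperature rise, K (as in the prototype)

m_dot = rho * flow_uL_per_min * 1e-9 / 60.0  # mass flow rate, kg/s
power = m_dot * c_p * delta_T                # required heating power, W
print(f"~{power * 1000:.0f} mW")             # ~280 mW at these assumptions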
However, one limitation comes from the fact that PLA can only be heated to about 50 degrees Celsius before it starts to degrade. Many chemical reactions, such as those used for polymerase chain reaction (PCR) tests, require temperatures of 90 degrees or higher. And to precisely control the temperature of the device, researchers would need to integrate a third material that enables temperature sensing.
In addition to tackling these limitations in future work, Velásquez-García wants to print magnets directly into the microfluidic device. These magnets could enable chemical reactions that require particles to be sorted or aligned.
At the same time, he and his colleagues are exploring the use of other materials that could reach higher temperatures. They are also studying PLA to better understand why it becomes conductive when certain impurities are added to the polymer.
“If we can understand the mechanism that is related to the electrical conductivity of PLA, that would greatly enhance the capability of these devices, but it is going to be a lot harder to solve than some other engineering problems,” he adds.
“In Japanese culture, it’s often said that beauty lies in simplicity. This sentiment is echoed by the work of Cañada and Velasquez-Garcia. Their proposed monolithically 3D-printed microfluidic systems embody simplicity and beauty, offering a wide array of potential derivations and applications that we foresee in the future,” says Norihisa Miki, a professor of mechanical engineering at Keio University in Tokyo, who was not involved with this work.
“Being able to directly print microfluidic chips with fluidic channels and electrical features at the same time opens up very exciting applications when processing biological samples, such as to amplify biomarkers or to actuate and mix liquids. Also, due to the fact that PLA degrades over time, one can even think of implantable applications where the chips dissolve and resorb over time,” adds Niclas Roxhed, an associate professor at Sweden’s KTH Royal Institute of Technology, who was not involved with this study.
This research was funded, in part, by the Empiriko Corporation and a fellowship from La Caixa Foundation.
Life might have been possible just seconds after the Big Bang
Easy CI/CD
Git servers provide features, commonly named "Actions," to handle CI/CD operations. But parallel, self-hosted solutions can achieve the same requirements.
Linux has a built-in facility for repeating scheduled tasks (also known as "cron jobs" in some references), called crontab.
The daemon responsible for crontab will mail each job's output to the user who triggered it, which may cause storage and mail-transfer issues. Unless there is a reason to keep all of these audit logs, disable the mails first by setting an environment variable at the top of the crontab (alternative approaches exist as well):
MAILTO=""
To redirect echo output to stderr instead of stdout, there is a clean method for this:
echoerr() { echo "$@" 1>&2; }
It is optional to prepend the line above to the cicd.sh file containing the script below:
#!/bin/bash
# Resolve the directory this script lives in.
SCRIPTPATH="$( cd -- "$(dirname "$0")" >/dev/null 2>&1 ; pwd -P )"
# Projects to keep up to date, each checked out under $SCRIPTPATH/d/.
declare -a ProjectsArray=("project1" "project2" "project3")
for project in "${ProjectsArray[@]}"; do
cd "$SCRIPTPATH/d/$project" || continue
git fetch
# Rebuild only when the local tree differs from origin/master.
if [[ -n $(git diff origin/master) ]]; then
git pull
sudo rm -rf bin/ obj/ publish/
docker compose -f docker-compose.mssql.yml up -d
docker compose up -d --build | mail -s "CI/CD $(date) $(pwd)" ubuntu
else
echoerr "no changes in $(pwd)"
fi
done
This will loop over the project folders listed in ProjectsArray (under $SCRIPTPATH/d/) and check each one's git status. If there are pending incoming changes, it will pass the condition, pull the changes, and rebuild the containers. This keeps each project updated with low overhead.
Finally, running crontab -e will open an editor and provide a safe way to place the line below in the crontab configuration; the five asterisks schedule the script to run every minute.
* * * * * sudo su <USERNAME_HERE> -c /home/<USERNAME_HERE>/cicd.sh | wall
If mails are not disabled, it is also possible to publish the last 500 lines of logs over the network, using netcat as a simple web server:
while true; do { echo -e "HTTP/1.1 200 OK\r\n"; tail -n 500 /var/mail/<USERNAME_HERE>; } | nc -N -l <PORT_NUMBER_HERE>; done
How to run a Mastodon instance simply with Docker
You can get the latest configuration files from the official Mastodon repository on GitHub, where you can find docker-compose.yml and .env.production as well.
To generate secrets using rake secret and fill in the other configuration values, you would need Ruby libraries installed; but you can keep the host machine clean by running the commands inside a temporary container based on the Mastodon image. Attach to a shell using the command below, then run the following scripts:
docker run -ti --entrypoint /bin/bash ghcr.io/mastodon/mastodon:v4.2.0
In the resulting bash session, you may enter these commands:
bin/rails db:create
rake secret
rake mastodon:webpush:generate_vapid_key
The first command creates the database, the second returns a secret, and the last generates the Web Push VAPID key strings.
Copy the contents of the .env.production file from the repo and write them to a file with your preferred name. Personally, I do not store my Docker environment files in a dotfile: I do a lot of migrations, and dotfiles invite human error because file managers hide them by default. If you agree, the docker-compose file should reference env instead of .env.production.
Lastly, if you grabbed only the docker-compose.yml (and not the whole repo), remove the lines in the .yml that point to a local Dockerfile for building.
There are optional configurations, such as memory overcommit for Redis. Once Redis is up, run docker cp my_mastodon_container:/etc/sysctl.conf sysctl.conf; this copies the Linux sysctl config file out of the Redis-based image. Then prepend vm.overcommit_memory=1 to the file and map it in the .yml with ./sysctl.conf:/etc/sysctl.conf:ro.
Life expectancy
Mrezatayyebi: Added a supplementary explanation of why this life expectancy equals the life expectancy reported in official statistical data.
[[File:Life Expectancy OECD 2013.jpg|thumb|upright=1.6|Life expectancy, total population at birth, OECD 2013]]
[[File:World Life Expectancy 2011 Estimates Map.png|alt=|thumb|400x400px|
{| width=100%
|-
| valign=top |
{{legend|#0000CD|80+}}
{{legend|#4169E1|77.5+}}
{{legend|#00BFFF|75+}}
{{legend|#3CB371|72.5+}}
{{legend|#32CD32|70+}}
{{legend|#ADFF2F|67.5+}}
{{legend|#FFFF00|65+}}
| valign=top |
{{legend|#FFD700|60+}}
{{legend|#FF8C00|55+}}
{{legend|#FF4500|50+}}
{{legend|#FF0000|45+}}
{{legend|#800000|40+}}
{{legend|#000000|under 40}}
|}
]]
'''Average length of life''', or '''mean lifespan''' (in English: ''life expectancy''), is a statistical indicator showing how long the members of a society live on average, or in other words, how long they are expected to live.
In Persian, this measure has been mistranslated as "hope for life" (''omid be zendegi''), which has spread among laypeople and even specialists as a [[common misnomer]]. The phrase appears to be rooted in the [[expected value|mathematical expectation]] of the random variable ''lifespan'', that is, the "expectation of life." On that reading, for a person of age <math>x</math>, the expected time from <math>x</math> until the moment of death is <math>e_x^0 = \int_{0}^{\infty} {}_{t}p_x \, dt</math>; naturally, in official statistics from organizations such as the World Health Organization, since the figure is computed as the life expectancy of a newborn, <math>x</math> is taken to be zero, and as a result <math>(x + e_x^0) = e_x^0</math>.
The higher the level of health care in a society, the higher the average lifespan; this indicator is therefore one of the measures of a country's development or underdevelopment. In every society, women's average lifespan is several years (worldwide, four and a half years) longer than men's.<ref>[http://www.atiye.ir/paper.asp?ID=1558 Atiyeh weekly]{{Dead link|date=October 2019 |bot=InternetArchiveBot }}</ref>
According to [[United Nations]] statistics, [[Japan]], [[Hong Kong]], [[Iceland]], [[Switzerland]], and [[Australia]] rank first through fifth in average lifespan, at roughly 82 years, which is 22 percent above the world average. [[Swaziland]], [[Mozambique]], [[Zimbabwe]], [[Sierra Leone]], and [[Lesotho]] occupy the lowest ranks; average lifespan in these countries is about 42 years, which is 38 percent below the world average and half that of the countries at the top of the table.
[https://www.amar.org.ir/خانه/شاخص-های-کلیدی According to the 1390 (2011) statistics] {{Webarchive|url=https://web.archive.org/web/20181113151644/https://www.amar.org.ir/%D8%AE%D8%A7%D9%86%D9%87/%D8%B4%D8%A7%D8%AE%D8%B5-%D9%87%D8%A7%DB%8C-%DA%A9%D9%84%DB%8C%D8%AF%DB%8C |date=13 November 2018 }}, the average lifespan in Iran is 74.6 years for women and 72.1 for men, which is above the world average and more than one year higher than the 1385 (2006) figures.
== See also ==
* [[List of countries by life expectancy|List of the world's countries by average lifespan]]
== External links ==
* [http://www.bbc.co.uk/persian/science/2013/05/130516_mgh_women_live_longer.shtml Do women live longer than men?]
* [http://www.google.com/publicdata/explore?ds=wb-wdi&met=sp_dyn_le00_in&idim=country:CAN&dl=en&hl=en&q=average+life+expectancy+canada#!ctype=l&strail=false&bcs=d&nselm=h&met_y=sp_dyn_le00_in&scale_y=lin&ind_y=false&rdim=country&idim=country:USA:CHN:IRN&ifdim=country&hl=en_US&dl=en&ind=false Graphical comparison of average lifespan across the world's countries]
* [https://www.cia.gov/library/publications/the-world-factbook/rankorder/2102rank.html Ranking table of average lifespan in the world's countries] {{Webarchive|url=https://web.archive.org/web/20181229134543/https://www.cia.gov/library/publications/the-world-factbook/rankorder/2102rank.html |date=29 December 2018 }}
== References ==
{{Reflist}}
{{Commons category|Life expectancy maps of the world}}
{{Longevity}}
[[Category:Demographic economics]]
[[Category:Actuarial science]]
[[Category:Ageing]]
[[Category:Gerontology]]
[[Category:Society]]
[[Category:Population]]
[[Category:Demography]]
[[Category:Old age]]
Researchers create first logical quantum processor
A team led by quantum expert Mikhail Lukin (right) has achieved a breakthrough in quantum computing. Dolev Bluvstein, a Ph.D. student in Lukin’s lab, was first author on the paper.
Jon Chase/Harvard Staff Photographer
Anne J. Manning
Harvard Staff Writer
Key step toward reliable, game-changing quantum computing
Harvard researchers have realized a key milestone in the quest for stable, scalable quantum computing, an ultra-high-speed technology that will enable game-changing advances in a variety of fields, including medicine, science, and finance.
The team, led by Mikhail Lukin, the Joshua and Beth Friedman University Professor in physics and co-director of the Harvard Quantum Initiative, has created the first programmable, logical quantum processor, capable of encoding up to 48 logical qubits and executing hundreds of logical gate operations, a vast improvement over prior efforts.
Published in Nature, the work was performed in collaboration with Markus Greiner, the George Vasmer Leverett Professor of Physics; colleagues from MIT; and QuEra Computing, a Boston company founded on technology from Harvard labs.
The system is the first demonstration of large-scale algorithm execution on an error-corrected quantum computer, heralding the advent of early fault-tolerant, or reliably uninterrupted, quantum computation.
Lukin described the achievement as a possible inflection point akin to the early days in the field of artificial intelligence: the ideas of quantum error correction and fault tolerance, long theorized, are starting to bear fruit.
“I think this is one of the moments in which it is clear that something very special is coming,” Lukin said. “Although there are still challenges ahead, we expect that this new advance will greatly accelerate the progress toward large-scale, useful quantum computers.”
Denise Caldwell of the National Science Foundation agrees.
“This breakthrough is a tour de force of quantum engineering and design,” said Caldwell, acting assistant director of the Mathematical and Physical Sciences Directorate, which supported the research through NSF’s Physics Frontiers Centers and Quantum Leap Challenge Institutes programs. “The team has not only accelerated the development of quantum information processing by using neutral atoms, but opened a new door to explorations of large-scale logical qubit devices, which could enable transformative benefits for science and society as a whole.”
It’s been a long, complex path.
In quantum computing, a quantum bit or “qubit” is one unit of information, just like a binary bit in classical computing. For more than two decades, physicists and engineers have shown the world that quantum computing is, in principle, possible by manipulating quantum particles — be they atoms, ions, or photons — to create physical qubits.
But successfully exploiting the weirdness of quantum mechanics for computation is more complicated than simply amassing a large-enough number of qubits, which are inherently unstable and prone to collapse out of their quantum states.
The real coins of the realm are so-called logical qubits: bundles of redundant, error-corrected physical qubits, which can store information for use in a quantum algorithm. Creating logical qubits as controllable units — like classical bits — has been a fundamental obstacle for the field, and it’s generally accepted that until quantum computers can run reliably on logical qubits, the technology can’t really take off.
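The intuition behind that redundancy can be illustrated with the simplest classical analog: a three-bit repetition code that outvotes a single flipped bit. This is only a toy sketch; the actual processor relies on genuinely quantum error-correcting codes, which must also handle errors with no classical counterpart.

import random

def encode(bit):
    """One logical bit becomes three redundant physical bits."""
    return [bit, bit, bit]

def add_noise(bits, p):
    """Each physical bit flips independently with probability p."""
    return [b ^ (random.random() < p) for b in bits]

def decode(bits):
    """Majority vote recovers the logical bit despite one flip."""
    return int(sum(bits) >= 2)

p, trials = 0.05, 100_000
failures = sum(decode(add_noise(encode(0), p)) != 0 for _ in range(trials))
print(failures / trials)  # ~0.007, well below the raw error rate of 0.05

The logical error rate falls below the physical one because failure now requires at least two simultaneous flips; quantum codes provide the same leverage in a far richer form.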
To date, the best computing systems have demonstrated one or two logical qubits, and one quantum gate operation — akin to just one unit of code — between them.
The Harvard team’s breakthrough builds on several years of work on a quantum computing architecture known as a neutral atom array, pioneered in Lukin’s lab. It is now being commercialized by QuEra, which recently entered into a licensing agreement with Harvard’s Office of Technology Development for a patent portfolio based on innovations developed by Lukin’s group.
The key component of the system is a block of ultra-cold, suspended rubidium atoms, in which the atoms — the system’s physical qubits — can move about and be connected into pairs — or “entangled” — mid-computation.
Entangled pairs of atoms form gates, which are units of computing power. Previously, the team had demonstrated low error rates in their entangling operations, proving the reliability of their neutral atom array system.
With their logical quantum processor, the researchers now demonstrate parallel, multiplexed control of an entire patch of logical qubits, using lasers. This result is more efficient and scalable than having to control individual physical qubits.
“We are trying to mark a transition in the field, toward starting to test algorithms with error-corrected qubits instead of physical ones, and enabling a path toward larger devices,” said paper first author Dolev Bluvstein, a Griffin School of Arts and Sciences Ph.D. student in Lukin’s lab.
The team will continue to work toward demonstrating more types of operations on their 48 logical qubits and toward configuring their system to run continuously, instead of the manual cycling it requires now.
The work was supported by the Defense Advanced Research Projects Agency through the Optimization with Noisy Intermediate-Scale Quantum devices program; the Center for Ultracold Atoms, a National Science Foundation Physics Frontiers Center; the Army Research Office; the joint Quantum Institute/NIST; and QuEra Computing.
How can we restore public trust in science? (op-ed)
India's Aditya-L1 solar observatory captures 1st gorgeous views of the sun (images)
Are internet satellites a threat to astronomy?
The number of proposed giant constellations of satellites, mostly in low Earth orbit and mostly devoted to providing broadband internet service anyplace on Earth, just keeps growing, with no international regulations yet in place to limit these new megaconstellations. Within a few years, the number of such satellites in orbit — presently more than 5,000 —…
MIT engineers design a robotic replica of the heart’s right chamber
MIT engineers have developed a robotic replica of the heart’s right ventricle, which mimics the beating and blood-pumping action of live hearts.
The robo-ventricle combines real heart tissue with synthetic, balloon-like artificial muscles that enable scientists to control the ventricle’s contractions while observing how its natural valves and other intricate structures function.
The artificial ventricle can be tuned to mimic healthy and diseased states. The team manipulated the model to simulate conditions of right ventricular dysfunction, including pulmonary hypertension and myocardial infarction. They also used the model to test cardiac devices. For instance, the team implanted a mechanical valve to repair a natural malfunctioning valve, then observed how the ventricle’s pumping changed in response.
They say the new robotic right ventricle, or RRV, can be used as a realistic platform to study right ventricle disorders and test devices and therapies aimed at treating those disorders.
“The right ventricle is particularly susceptible to dysfunction in intensive care unit settings, especially in patients on mechanical ventilation,” says Manisha Singh, a postdoc at MIT’s Institute for Medical Engineering and Science (IMES). “The RRV simulator can be used in the future to study the effects of mechanical ventilation on the right ventricle and to develop strategies to prevent right heart failure in these vulnerable patients.”
Singh and her colleagues report details of the new design in an open-access paper appearing today in Nature Cardiovascular Research. Her co-authors include Associate Professor Ellen Roche, who is a core member of IMES and the associate head for research in the Department of Mechanical Engineering at MIT; along with Jean Bonnemain, Caglar Ozturk, Clara Park, Diego Quevedo-Moreno, Meagan Rowlett, and Yiling Fan of MIT; Brian Ayers of Massachusetts General Hospital; Christopher Nguyen of Cleveland Clinic; and Mossab Saeed of Boston Children’s Hospital.
A ballet of beats
The right ventricle is one of the heart’s four chambers, along with the left ventricle and the left and right atria. Of the four chambers, the left ventricle is the heavy lifter, as its thick, cone-shaped musculature is built for pumping blood through the entire body. The right ventricle, Roche says, is a “ballerina” in comparison, as it handles a lighter though no-less-crucial load.
“The right ventricle pumps deoxygenated blood to the lungs, so it doesn’t have to pump as hard,” Roche notes. “It’s a thinner muscle, with more complex architecture and motion.”
This anatomical complexity has made it difficult for clinicians to accurately observe and assess right ventricle function in patients with heart disease.
“Conventional tools often fail to capture the intricate mechanics and dynamics of the right ventricle, leading to potential misdiagnoses and inadequate treatment strategies,” Singh says.
To improve understanding of the lesser-known chamber and speed the development of cardiac devices to treat its dysfunction, the team designed a realistic, functional model of the right ventricle that both captures its anatomical intricacies and reproduces its pumping function.
The model includes real heart tissue, which the team chose to incorporate because it retains natural structures that are too complex to reproduce synthetically.
“There are thin, tiny chordae and valve leaflets with different material properties that are all moving in concert with the ventricle’s muscle. Trying to cast or print these very delicate structures is quite challenging,” Roche explains.
A heart’s shelf-life
In the new study, the team reports explanting a pig’s right ventricle, which they treated to carefully preserve its internal structures. They then fit a silicone wrapping around it, which acted as a soft, synthetic myocardium, or muscular lining. Within this lining, the team embedded several long, balloon-like tubes, which encircled the real heart tissue, in positions that the team determined through computational modeling to be optimal for reproducing the ventricle’s contractions. The researchers connected each tube to a control system, which they then set to inflate and deflate each tube at rates that mimicked the heart’s real rhythm and motion.
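A control loop of this kind needs, at minimum, a periodic pressure command per tube. The sketch below generates such heart-like inflate/deflate schedules; the rate, systole fraction, and phase offsets are invented for illustration and are not the authors' parameters.

import numpy as np

BPM = 80                 # assumed heart-like actuation rate
period = 60.0 / BPM      # seconds per beat
systole_frac = 0.35      # assumed fraction of the cycle spent inflating

def pressure_command(t, phase=0.0, p_max=1.0):
    """Pressure command (arbitrary units) for one tube at time t."""
    u = (t / period + phase) % 1.0
    # Smooth inflation pulse during 'systole', fully deflated otherwise.
    return p_max * np.sin(np.pi * u / systole_frac) if u < systole_frac else 0.0

t = np.arange(0.0, 2 * period, 0.01)
# Three tubes with slight phase staggering to mimic a contraction wave.
tubes = [[pressure_command(ti, phase=0.05 * k) for ti in t] for k in range(3)]
print([round(max(tube), 2) for tube in tubes])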
To test its pumping ability, the team infused the model with a liquid similar in viscosity to blood. This particular liquid was also transparent, allowing the engineers to observe with an internal camera how internal valves and structures responded as the ventricle pumped liquid through.
They found that the artificial ventricle’s pumping power and the function of its internal structures were similar to what they previously observed in live, healthy animals, demonstrating that the model can realistically simulate the right ventricle’s action and anatomy. The researchers could also tune the frequency and power of the pumping tubes to mimic various cardiac conditions, such as irregular heartbeats, muscle weakening, and hypertension.
“We’re reanimating the heart, in some sense, and in a way that we can study and potentially treat its dysfunction,” Roche says.
To show that the artificial ventricle can be used to test cardiac devices, the team surgically implanted ring-like medical devices of various sizes to repair the chamber’s tricuspid valve — a leafy, one-way valve that lets blood into the right ventricle. When this valve is leaky, or physically compromised, it can cause right heart failure or atrial fibrillation, and leads to symptoms such as reduced exercise capacity, swelling of the legs and abdomen, and liver enlargement.
The researchers surgically manipulated the robo-ventricle’s valve to simulate this condition, then either replaced it by implanting a mechanical valve or repaired it using ring-like devices of different sizes. They observed which device improved the ventricle’s fluid flow as it continued to pump.
“With its ability to accurately replicate tricuspid valve dysfunction, the RRV serves as an ideal training ground for surgeons and interventional cardiologists,” Singh says. “They can practice new surgical techniques for repairing or replacing the tricuspid valve on our model before performing them on actual patients.”
Currently, the RRV can simulate realistic function over a few months. The team is working to extend that performance and enable the model to run continuously for longer stretches. They are also working with designers of implantable devices to test their prototypes on the artificial ventricle and possibly speed their path to patients. And looking far in the future, Roche plans to pair the RRV with a similar artificial, functional model of the left ventricle, which the group is currently fine-tuning.
“We envision pairing this with the left ventricle to make a fully tunable, artificial heart, that could potentially function in people,” Roche says. “We’re quite a while off, but that’s the overarching vision.”
This research was supported, in part, by the National Science Foundation.
Researchers safely integrate fragile 2D materials into devices
Two-dimensional materials, which are only a few atoms thick, can exhibit some incredible properties, such as the ability to carry electric charge extremely efficiently, which could boost the performance of next-generation electronic devices.
But integrating 2D materials into devices and systems like computer chips is notoriously difficult. These ultrathin structures can be damaged by conventional fabrication techniques, which often rely on the use of chemicals, high temperatures, or destructive processes like etching.
To overcome this challenge, researchers from MIT and elsewhere have developed a new technique to integrate 2D materials into devices in a single step while keeping the surfaces of the materials and the resulting interfaces pristine and free from defects.
Their method relies on engineering surface forces available at the nanoscale to allow the 2D material to be physically stacked onto other prebuilt device layers. Because the 2D material remains undamaged, the researchers can take full advantage of its unique optical and electrical properties.
They used this approach to fabricate arrays of 2D transistors that achieved new functionalities compared to devices produced using conventional fabrication techniques. Their method, which is versatile enough to be used with many materials, could have diverse applications in high-performance computing, sensing, and flexible electronics.
Core to unlocking these new functionalities is the ability to form clean interfaces, held together by special forces that exist between all matter, called van der Waals forces.
However, such van der Waals integration of materials into fully functional devices is not always easy, says Farnaz Niroui, assistant professor of electrical engineering and computer science (EECS), a member of the Research Laboratory of Electronics (RLE), and senior author of a new paper describing the work.
“Van der Waals integration has a fundamental limit,” she explains. “Since these forces depend on the intrinsic properties of the materials, they cannot be readily tuned. As a result, there are some materials that cannot be directly integrated with each other using their van der Waals interactions alone. We have come up with a platform to address this limit to help make van der Waals integration more versatile, to promote the development of 2D-materials-based devices with new and improved functionalities.”
Niroui wrote the paper with lead author Peter Satterthwaite, an electrical engineering and computer science graduate student; Jing Kong, professor of EECS and a member of RLE; and others at MIT, Boston University, National Tsing Hua University in Taiwan, the National Science and Technology Council of Taiwan, and National Cheng Kung University in Taiwan. The research is published today in Nature Electronics.
Advantageous attraction
Making complex systems such as a computer chip with conventional fabrication techniques can get messy. Typically, a rigid material like silicon is chiseled down to the nanoscale, then interfaced with other components like metal electrodes and insulating layers to form an active device. Such processing can cause damage to the materials.
Recently, researchers have focused on building devices and systems from the bottom up, using 2D materials and a process that requires sequential physical stacking. In this approach, rather than using chemical glues or high temperatures to bond a fragile 2D material to a conventional surface like silicon, researchers leverage van der Waals forces to physically integrate a layer of 2D material onto a device.
Van der Waals forces are natural forces of attraction that exist between all matter. For example, a gecko’s feet can stick to the wall temporarily due to van der Waals forces. Though all materials exhibit a van der Waals interaction, depending on the material, the forces are not always strong enough to hold them together. For instance, a popular semiconducting 2D material known as molybdenum disulfide will stick to gold, a metal, but won’t directly transfer to insulators like silicon dioxide by just coming into physical contact with that surface.
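For a sense of the energies involved, the standard continuum estimate for van der Waals adhesion between two flat surfaces is W = A / (12πD²), with A the Hamaker constant and D the separation. The values below are generic textbook-scale assumptions, not measurements from this work.

import math

A = 1e-19   # assumed Hamaker constant, J (typical order of magnitude)
D = 0.5e-9  # assumed separation at contact, m (about 0.5 nm)

W = A / (12 * math.pi * D ** 2)  # adhesion energy per unit area, J/m^2
print(f"~{W * 1000:.0f} mJ/m^2")  # ~11 mJ/m^2 at these assumptions

Even this rough figure shows why nanometer-scale gaps, which grow D and shrink W quadratically, can make or break a transfer.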
However, heterostructures made by integrating semiconductor and insulating layers are key building blocks of an electronic device. Previously, this integration has been enabled by bonding the 2D material to an intermediate layer like gold, then using this intermediate layer to transfer the 2D material onto the insulator, before removing the intermediate layer using chemicals or high temperatures.
Instead of using this sacrificial layer, the MIT researchers embed the low-adhesion insulator in a high-adhesion matrix. This adhesive matrix is what makes the 2D material stick to the embedded low-adhesion surface, providing the forces needed to create a van der Waals interface between the 2D material and the insulator.
Making the matrix
To make electronic devices, they form a hybrid surface of metals and insulators on a carrier substrate. This surface is then peeled off and flipped over to reveal a completely smooth top surface that contains the building blocks of the desired device.
This smoothness is important, since gaps between the surface and 2D material can hamper van der Waals interactions. Then, the researchers prepare the 2D material separately, in a completely clean environment, and bring it into direct contact with the prepared device stack.
“Once the hybrid surface is brought into contact with the 2D layer, without needing any high-temperatures, solvents, or sacrificial layers, it can pick up the 2D layer and integrate it with the surface. This way, we are allowing a van der Waals integration that would be traditionally forbidden, but now is possible and allows formation of fully functioning devices in a single step,” Satterthwaite explains.
This single-step process keeps the 2D material interface completely clean, which enables the material to reach its fundamental limits of performance without being held back by defects or contamination.
And because the surfaces also remain pristine, researchers can engineer the surface of the 2D material to form features or connections to other components. For example, they used this technique to create p-type transistors, which are generally challenging to make with 2D materials. Their transistors improve on those from previous studies and can provide a platform for studying and achieving the performance needed for practical electronics.
Their approach can be done at scale to make larger arrays of devices. The adhesive matrix technique can also be used with a range of materials, and even with other forces to enhance the versatility of this platform. For instance, the researchers integrated graphene onto a device, forming the desired van der Waals interfaces using a matrix made with a polymer. In this case, adhesion relies on chemical interactions rather than van der Waals forces alone.
In the future, the researchers want to build on this platform to enable integration of a diverse library of 2D materials to study their intrinsic properties without the influence of processing damage, and develop new device platforms that leverage these superior functionalities.
This research is funded, in part, by the U.S. National Science Foundation, the U.S. Department of Energy, the BUnano Cross-Disciplinary Fellowship at Boston University, and the U.S. Army Research Office. The fabrication and characterization procedures were carried out, largely, in the MIT.nano shared facilities.
Automated system teaches users when to collaborate with an AI assistant
Artificial intelligence models that pick out patterns in images can often do so better than human eyes — but not always. If a radiologist is using an AI model to help her determine whether a patient’s X-rays show signs of pneumonia, when should she trust the model’s advice and when should she ignore it?
A customized onboarding process could help this radiologist answer that question, according to researchers at MIT and the MIT-IBM Watson AI Lab. They designed a system that teaches a user when to collaborate with an AI assistant.
In this case, the training method might find situations where the radiologist trusts the model’s advice — except she shouldn’t because the model is wrong. The system automatically learns rules for how she should collaborate with the AI, and describes them with natural language.
During onboarding, the radiologist practices collaborating with the AI using training exercises based on these rules, receiving feedback about her performance and the AI’s performance.
The researchers found that this onboarding procedure led to about a 5 percent improvement in accuracy when humans and AI collaborated on an image prediction task. Their results also show that just telling the user when to trust the AI, without training, led to worse performance.
Importantly, the researchers’ system is fully automated, so it learns to create the onboarding process based on data from the human and AI performing a specific task. It can also adapt to different tasks, so it can be scaled up and used in many situations where humans and AI models work together, such as in social media content moderation, writing, and programming.
“So often, people are given these AI tools to use without any training to help them figure out when it is going to be helpful. That’s not what we do with nearly every other tool that people use — there is almost always some kind of tutorial that comes with it. But for AI, this seems to be missing. We are trying to tackle this problem from a methodological and behavioral perspective,” says Hussein Mozannar, a graduate student in the Social and Engineering Systems doctoral program within the Institute for Data, Systems, and Society (IDSS) and lead author of a paper about this training process.
The researchers envision that such onboarding will be a crucial part of training for medical professionals.
“One could imagine, for example, that doctors making treatment decisions with the help of AI will first have to do training similar to what we propose. We may need to rethink everything from continuing medical education to the way clinical trials are designed,” says senior author David Sontag, a professor of EECS, a member of the MIT-IBM Watson AI Lab and the MIT Jameel Clinic, and the leader of the Clinical Machine Learning Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL).
Mozannar, who is also a researcher with the Clinical Machine Learning Group, is joined on the paper by Jimin J. Lee, an undergraduate in electrical engineering and computer science; Dennis Wei, a senior research scientist at IBM Research; and Prasanna Sattigeri and Subhro Das, research staff members at the MIT-IBM Watson AI Lab. The paper will be presented at the Conference on Neural Information Processing Systems.
Training that evolves
Existing onboarding methods for human-AI collaboration are often composed of training materials produced by human experts for specific use cases, making them difficult to scale up. Some related techniques rely on explanations, where the AI tells the user its confidence in each decision, but research has shown that explanations are rarely helpful, Mozannar says.
“The AI model’s capabilities are constantly evolving, so the use cases where the human could potentially benefit from it are growing over time. At the same time, the user’s perception of the model continues changing. So, we need a training procedure that also evolves over time,” he adds.
To accomplish this, their onboarding method is automatically learned from data. It is built from a dataset that contains many instances of a task, such as detecting the presence of a traffic light from a blurry image.
The system’s first step is to collect data on the human and AI performing this task. In this case, the human would try to predict, with the help of AI, whether blurry images contain traffic lights.
The system embeds these data points onto a latent space, which is a representation of data in which similar data points are closer together. It uses an algorithm to discover regions of this space where the human collaborates incorrectly with the AI. These regions capture instances where the human trusted the AI’s prediction but the prediction was wrong, and vice versa.
Perhaps the human mistakenly trusts the AI when images show a highway at night.
After discovering the regions, a second algorithm utilizes a large language model to describe each region as a rule, using natural language. The algorithm iteratively fine-tunes that rule by finding contrasting examples. It might describe this region as “ignore AI when it is a highway during the night.”
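A minimal sketch of what this region-discovery step could look like, assuming you already have latent embeddings of task instances and a per-instance flag for whether the AI-assisted human answered correctly. The clustering method, threshold, and synthetic data are stand-ins, not the authors' actual algorithm.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))         # stand-in latent embeddings
human_correct = rng.random(500) > 0.3  # stand-in collaboration outcomes

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)
for k in range(8):
    mask = kmeans.labels_ == k
    error_rate = 1.0 - human_correct[mask].mean()
    print(f"region {k}: error rate {error_rate:.2f} on {mask.sum()} instances")
# Regions with unusually high error rates become onboarding material: in
# the full system their examples are sent to a language model, which
# summarizes each region as a natural-language rule for the user.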
These rules are used to build training exercises. The onboarding system shows an example to the human, in this case a blurry highway scene at night, as well as the AI’s prediction, and asks the user if the image shows traffic lights. The user can answer yes, no, or use the AI’s prediction.
If the human is wrong, they are shown the correct answer and performance statistics for the human and AI on these instances of the task. The system does this for each region, and at the end of the training process, repeats the exercises the human got wrong.
“After that, the human has learned something about these regions that we hope they will take away in the future to make more accurate predictions,” Mozannar says.
Onboarding boosts accuracy
The researchers tested this system with users on two tasks — detecting traffic lights in blurry images and answering multiple choice questions from many domains (such as biology, philosophy, computer science, etc.).
They first showed users a card with information about the AI model, how it was trained, and a breakdown of its performance on broad categories. Users were split into five groups: Some were only shown the card, some went through the researchers’ onboarding procedure, some went through a baseline onboarding procedure, some went through the researchers’ onboarding procedure and were given recommendations of when they should or should not trust the AI, and others were only given the recommendations.
Only the researchers’ onboarding procedure without recommendations improved users’ accuracy significantly, boosting their performance on the traffic light prediction task by about 5 percent without slowing them down. However, onboarding was not as effective for the question-answering task. The researchers believe this is because the AI model, ChatGPT, provided explanations with each answer that convey whether it should be trusted.
But providing recommendations without onboarding had the opposite effect — users not only performed worse, they took more time to make predictions.
“When you only give someone recommendations, it seems like they get confused and don’t know what to do. It derails their process. People also don’t like being told what to do, so that is a factor as well,” Mozannar says.
Providing recommendations alone could harm the user if those recommendations are wrong, he adds. With onboarding, on the other hand, the biggest limitation is the amount of available data. If there aren’t enough data, the onboarding stage won’t be as effective, he says.
In the future, he and his collaborators want to conduct larger studies to evaluate the short- and long-term effects of onboarding. They also want to leverage unlabeled data for the onboarding process, and find methods to effectively reduce the number of regions without omitting important examples.
“People are adopting AI systems willy-nilly, and indeed AI offers great potential, but these AI agents still sometimes make mistakes. Thus, it’s crucial for AI developers to devise methods that help humans know when it’s safe to rely on the AI’s suggestions,” says Dan Weld, professor emeritus at the Paul G. Allen School of Computer Science and Engineering at the University of Washington, who was not involved with this research. “Mozannar et al. have created an innovative method for identifying situations where the AI is trustworthy and, importantly, for describing them to people in a way that leads to better human-AI team interactions.”
This work is funded, in part, by the MIT-IBM Watson AI Lab.
Chinese startup Galactic Energy bounces back with successful satellite launch
NASA says Hubble Telescope will resume science operations after gyroscope glitch
Exoplanet hunters: What it takes to find needles in the cosmic haystacks
A purely hypothetical field a few decades ago, exoplanet hunting is today one of the most dynamic areas in astronomy. Each discovery brings new insights. From giant gas planets to so-called super Earths and everything in between, each detection expands our understanding of the universe’s cosmic landscape. For the exoplanet hunters themselves, these discoveries are…
Academics to explore legacy of Genghis Khan
Under the recently signed Memorandum of Understanding, Cambridge’s Mongolia & Inner Asia Studies Unit (MIASU) will work with the Mongolian government to promote and deepen academic links, including the possibility of a programme of visiting research fellowships and travel grants to advance the study of Chinggis Khaan.
The agreement was signed during a visit to the UK by Mongolian Culture Minister Nomin Chinbat, a former media CEO who brought the TV show Mongolia’s Got Talent to the Asian country. The visit adds to a growing awareness of Mongolian culture in the UK, with historic art and precious artefacts from the early years of the nomadic Mongol Empire set to be displayed at the Royal Academy of Arts in London, and the opening of The Mongol Khan theatre production at the London Coliseum.
Professor David Sneath, Director of the Mongolia & Inner Asia Studies Unit at the University of Cambridge, said:
“This is all about exploring the historical reality behind the myth… We are interested not just in the man himself, Chinggis Khaan - although of course he is of great historical interest - but in his legacy. We are trying to encourage a deeper study of Chinggis Khaan and his impact.”
Minister Chinbat said: “Of course Chinggis Khaan is primarily known for his warriorship, but he was also a great diplomat, innovator and ruler. How many people know he invented the postal service, the first passports? That he showed great religious tolerance, and he himself was a peacemaker?
“That’s why we look forward to working with the University of Cambridge to foster the next generation of Mongolian academics and strengthen understanding of the Mongol Empire’s impact across the world.”
MIASU’s Professor Uradyn E Bulag added: “Because in Mongolia we didn’t have a written tradition as strong as our neighbours, to some extent our history – and the history of Chinggis Khan – was written by others… This will be a chance to hopefully reset the balance.”
Researchers at the University of Cambridge have signed an agreement with the Mongolian government which will see them explore the legacy of the legendary figure Genghis Khan - or Chinggis Khaan as he is known in Mongolia.
Making nuclear energy facilities easier to build and transport
For the United States to meet its net zero goals, nuclear energy needs to be on the smorgasbord of options. The problem: Its production still suffers from a lack of scale. To increase access rapidly, we need to stand up reactors quickly, says Isabel Naranjo De Candido, a third-year doctoral student advised by Professor Koroush Shirvan.
One option is to work with microreactors, transportable units that can be wheeled to areas that need clean electricity. Naranjo De Candido's master’s thesis at MIT, supervised by Professor Jacopo Buongiorno, focused on such reactors.
Another way to improve access to nuclear energy is to develop reactors that are modular so their component units can be manufactured quickly while still maintaining quality. “The idea is that you apply the industrialization techniques of manufacturing so companies produce more [nuclear] vessels, with a more predictable supply chain,” she says. The assumption is that working from standardized recipes to manufacture just a few component designs over and over again improves speed and reliability and decreases cost.
As part of her doctoral studies, Naranjo De Candido is working on optimizing the operations and management of these small, modular reactors so they can be efficient in all stages of their lifecycle: building; operations and maintenance; and decommissioning. The motivation for her research is simple: “We need nuclear for climate change because we need a reliable and stable source of energy to fight climate change,” she says.
A childhood in Italy
Despite her passion for nuclear energy and engineering today, Naranjo De Candido was unsure what she wanted to pursue after high school in Padua, Italy. The daughter of an Italian physician mother and a Spanish architect father, she enrolled in a science-based high school after middle school, as she knew that was the track she enjoyed best.
Having earned very high marks in school, she won a full scholarship to study in Pisa at the Sant’Anna School of Advanced Studies. Housed in a centuries-old convent, the school granted only master’s and doctoral degrees. “I had to select what to study but I was unsure. I knew I was interested in engineering,” she recalls, “so I selected mechanical engineering because it’s more generic.”
It turns out Sant’Anna was a perfect fit for Naranjo De Candido to explore her passions. An inspirational nuclear engineering course during her studies set her on the path toward studying the field as part of her master’s studies in Pisa. During her time there, she traveled around the world — to China as part of a student exchange program and to Switzerland and the United States for internships. “I formed a good background and curriculum and that allowed me to [gain admission] to MIT,” she says.
At an internship at NASA’s Jet Propulsion Lab, she met an MIT mechanical engineering student who encouraged her to apply to the school for doctoral studies. Yet another mentor in the Italian nuclear sector had also suggested she apply to MIT to pursue nuclear engineering, so she decided to take the leap.
And she is glad she did.
Improving access to nuclear energy
At MIT, Naranjo De Candido is working on improving access to nuclear energy by scaling down reactor size and, in the case of microreactors, making them mobile enough to travel to places where they’re needed. “The idea with a microreactor is that when the fuel is exhausted, you replace the entire microreactor onsite with a freshly fueled unit and take the old one back to a central facility where it’s going to be refueled,” she says. One of the early use cases for such microreactors has been remote mining sites which need reliable power 24/7.
Modular reactors, about 10 times the size of microreactors, ensure access differently: The components can be manufactured and installed at scale. These reactors don't just deliver electricity but also cater to the market for industrial heat, she says. “You can locate them close to industrial facilities and use the heat directly to power ammonia or hydrogen production or water desalinization for example,” she adds.
As more of these modular reactors are installed, the industry is expected to expand to include enterprises that choose to simply build them and hand off operations to other companies. Whereas traditional nuclear energy reactors might have a full suite of staff on board, smaller-scale reactors such as modular ones cannot afford to staff in large numbers, so talent needs to be optimized and staff shared among many units. “Many of these companies are very interested in knowing exactly how many people and how much money to allocate, and how to organize resources to serve more than one reactor at the same time,” she says.
Naranjo De Candido is working on a complex software program that factors in a large range of variables — from raw materials cost and worker training to reactor size, megawatt output, and more — and leans on historical data to predict what resources newer plants might need. The program also informs operators about the tradeoffs they need to accept. For example, she explains, “if you reduce people below the typical level assigned, how does that impact the reliability of the plant, that is, the number of hours that it is able to operate without malfunctions and failures?”
And managing and operating a nuclear reactor is particularly complex because safety standards limit how much time workers can spend in certain areas and dictate how safety zones must be handled.
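As a purely illustrative toy of that staffing-versus-reliability question (the shape of the curve and every number here are invented, not taken from Naranjo De Candido's model):

```python
# Invented toy: plant availability rises with staffing but with diminishing
# returns, so understaffing trades money saved for lost operating hours.
def plant_availability(staff: int, typical_staff: int = 40) -> float:
    """Toy availability curve, capped at 98%; all parameters are made up."""
    shortfall = max(0.0, 1.0 - staff / typical_staff)
    return 0.98 * (1.0 - shortfall ** 2)

for staff in (20, 30, 40, 50):
    hours = plant_availability(staff) * 8760     # expected operating hours/year
    print(f"{staff:>3} staff -> {hours:,.0f} operating hours per year")
```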
“There’s a shortage of [qualified talent] in the industry so this is not just about reducing costs but also about making it possible to have plants out there,” Naranjo De Candido says. Different types of talent are needed, from professionals who specialize in mechanical components to electronic controls. The model that she is working on considers the need for such specialized skillsets as well as making room for cross-training talent in multiple fields as needed.
In keeping with her goal of making nuclear energy more accessible, the optimization software will be open-source, available for all to use. “We want this to be a common ground for utilities and vendors and other players to be able to communicate better,” Naranjo De Candido says. Doing so, she hopes, will accelerate the operation of nuclear energy plants at scale — an achievement that will come not a moment too soon.
© Photo: Gretchen Ertl
Satellites watch as Japan's new volcanic island continues to grow (image)
© ESA/USGS
Iran launches 'bio-space capsule' prototype, aims to fly astronauts by 2030
© IRNA
Observing Basics: How you can take sharp pictures of the planets
How do astroimagers get such sharp images of the Sun, Moon, and planets? The answer is a technique called lucky imaging. As every astronomer has witnessed, the atmosphere roils with waves as hot and cold air mix, causing our neighboring celestial bodies to blur at the eyepiece or in the camera. Fortunately, lucky imaging can… Continue reading "Observing Basics: How you can take sharp pictures of the planets"
The post Observing Basics: How you can take sharp pictures of the planets appeared first on Astronomy Magazine.
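In practice, lucky imaging means recording thousands of short-exposure frames, scoring each for sharpness, and stacking only the best few percent. Here is a minimal sketch using OpenCV; the input file name is hypothetical, and frame alignment, which real stacking tools also perform before averaging, is omitted.

```python
# Lucky-imaging sketch: keep the sharpest 5% of video frames and average them.
import cv2
import numpy as np

def sharpness(frame: np.ndarray) -> float:
    """Variance of the Laplacian: higher values mean a crisper frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

cap = cv2.VideoCapture("jupiter_capture.avi")   # hypothetical planetary video
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
cap.release()
if not frames:
    raise SystemExit("no frames read from video")

frames.sort(key=sharpness, reverse=True)        # sharpest frames first
best = frames[: max(1, len(frames) // 20)]      # keep the top 5%
stacked = np.mean(best, axis=0).astype(np.uint8)
cv2.imwrite("jupiter_stacked.png", stacked)
```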
Nuclear power on the moon: Rolls-Royce unveils reactor mockup
© Rolls-Royce
Atom Diffraction from a Microscopic Spot
Author(s): Philippe Roncin and William Allison
Researchers have developed an atom-diffraction imaging method with micrometer spatial resolution, which may allow new applications in material characterization.
[Physics 16, 205] Published Wed Dec 06, 2023
Gravitational waves rippling from black hole merger could help test general relativity
© SXS (Simulating eXtreme Spacetimes) Project
Watch the explosive new trailer for astronaut action film 'ISS' (video)
© Bleecker Street
Meet one of the fastest growing astroimaging communities
The sky never fails to AMAZE us with its beauty. An astrophotographer has the joyful job of photographing and documenting it. But just as exciting is what comes next: sharing this beauty with others. Astronomads Bangla, a group of amateur astrophotographers based in Kolkata, India, has been doing both for the last three years. And… Continue reading "Meet one of the fastest growing astroimaging communities"
The post Meet one of the fastest growing astroimaging communities appeared first on Astronomy Magazine.
Diamonds and rust help unveil ‘impossible’ quasi-particles
Researchers led by the University of Cambridge used a technique known as diamond quantum sensing to observe swirling textures and faint magnetic signals on the surface of hematite, a type of iron oxide.
The researchers observed that magnetic monopoles in hematite emerge through the collective behaviour of many spins (the angular momentum of a particle). These monopoles glide across the swirling textures on the surface of the hematite, like tiny hockey pucks of magnetic charge. This is the first time that naturally occurring emergent monopoles have been observed experimentally.
The research has also shown the direct connection between the previously hidden swirling textures and the magnetic charges of materials like hematite, as if there is a secret code linking them together. The results, which could be useful in enabling next-generation logic and memory applications, are reported in the journal Nature Materials.
According to the equations of James Clerk Maxwell, a giant of Cambridge physics, magnetic objects, whether a fridge magnet or the Earth itself, must always exist as a pair of magnetic poles that cannot be isolated.
“The magnets we use every day have two poles: north and south,” said Professor Mete Atatüre, who led the research. “In the 19th century, it was hypothesised that monopoles could exist. But in one of his foundational equations for the study of electromagnetism, James Clerk Maxwell disagreed.”
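The equation in question is Gauss's law for magnetism, one of Maxwell's four equations:

```latex
\nabla \cdot \mathbf{B} = 0
```

It states that the magnetic field has zero divergence everywhere: field lines never begin or end, so an isolated north or south pole, a monopole, cannot exist in classical electromagnetism.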
Atatüre is Head of Cambridge’s Cavendish Laboratory, a position once held by Maxwell himself. “If monopoles did exist, and we were able to isolate them, it would be like finding a missing puzzle piece that was assumed to be lost,” he said.
About 15 years ago, scientists suggested how monopoles could exist in a magnetic material. This theoretical result relied on the extreme separation of north and south poles so that locally each pole appeared isolated in an exotic material called spin ice.
However, there is an alternative strategy for finding monopoles, involving the concept of emergence: the idea that the combination of many physical entities can give rise to properties that are either more than or different from the sum of their parts.
Working with colleagues from the University of Oxford and the National University of Singapore, the Cambridge researchers used emergence to uncover monopoles spread over two-dimensional space, gliding across the swirling textures on the surface of a magnetic material.
The swirling topological textures are found in two main types of materials: ferromagnets and antiferromagnets. Of the two, antiferromagnets are more stable than ferromagnets, but they are more difficult to study, as they don’t have a strong magnetic signature.
To study the behaviour of antiferromagnets, Atatüre and his colleagues use an imaging technique known as diamond quantum magnetometry. This technique uses a single spin – the inherent angular momentum of an electron – in a diamond needle to precisely measure the magnetic field on the surface of a material, without affecting its behaviour.
For the current study, the researchers used the technique to look at hematite, an antiferromagnetic iron oxide material. To their surprise, they found hidden patterns of magnetic charges within hematite, including monopoles, dipoles and quadrupoles.
“Monopoles had been predicted theoretically, but this is the first time we’ve actually seen a two-dimensional monopole in a naturally occurring magnet,” said co-author Professor Paolo Radaelli, from the University of Oxford.
“These monopoles are a collective state of many spins that twirl around a singularity rather than a single fixed particle, so they emerge through many-body interactions. The result is a tiny, localised stable particle with diverging magnetic field coming out of it,” said co-first author Dr Hariom Jani, from the University of Oxford.
“We’ve shown how diamond quantum magnetometry could be used to unravel the mysterious behaviour of magnetism in two-dimensional quantum materials, which could open up new fields of study in this area,” said co-first author Dr Anthony Tan, from the Cavendish Laboratory. “The challenge has always been direct imaging of these textures in antiferromagnets due to their weaker magnetic pull, but now we’re able to do so, with a nice combination of diamonds and rust.”
The study not only highlights the potential of diamond quantum magnetometry but also underscores its capacity to uncover and investigate hidden magnetic phenomena in quantum materials. If controlled, these swirling textures dressed in magnetic charges could power super-fast and energy-efficient computer memory logic.
The research was supported in part by the Royal Society, the Sir Henry Royce Institute, the European Union, and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI).
Reference:
K C Tan, Hariom Jani, Michael Högen et al. ‘Revealing Emergent Magnetic Charge in an Antiferromagnet with Diamond Quantum Magnetometry.’ Nature Materials (2023). DOI: 10.1038/s41563-023-01737-4.
For more information on energy-related research in Cambridge, please visit the Energy IRC, which brings together Cambridge’s research knowledge and expertise, in collaboration with global partners, to create solutions for a sustainable and resilient energy landscape for generations to come.
Researchers have discovered magnetic monopoles – isolated magnetic charges – in a material closely related to rust, a result that could be used to power greener and faster computing technologies.
AI accelerates problem-solving in complex scenarios
While Santa Claus may have a magical sleigh and nine plucky reindeer to help him deliver presents, for companies like FedEx, the optimization problem of efficiently routing holiday packages is so complicated that they often employ specialized software to find a solution.
This software, called a mixed-integer linear programming (MILP) solver, splits a massive optimization problem into smaller pieces and uses generic algorithms to try to find the best solution. However, the solver could take hours — or even days — to arrive at a solution.
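For reference, a mixed-integer linear program has the generic textbook form below (standard notation, not specific to any particular solver): minimize a linear cost over variables satisfying linear constraints, with some variables forced to take integer values.

```latex
\begin{aligned}
\min_{x}\quad & c^{\top}x \\
\text{subject to}\quad & Ax \le b, \\
& x_i \in \mathbb{Z} \quad \text{for } i \in I.
\end{aligned}
```

The integrality constraints on the index set I are what make the problem combinatorial; dropping them leaves an ordinary linear program that can be solved quickly, a fact solvers exploit heavily.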
The process is so onerous that a company often must stop the software partway through, accepting a solution that is not ideal but the best that could be generated in a set amount of time.
Researchers from MIT and ETH Zurich used machine learning to speed things up.
They identified a key intermediate step in MILP solvers that has so many potential solutions it takes an enormous amount of time to unravel, which slows the entire process. The researchers employed a filtering technique to simplify this step, then used machine learning to find the optimal solution for a specific type of problem.
Their data-driven approach enables a company to use its own data to tailor a general-purpose MILP solver to the problem at hand.
This new technique sped up MILP solvers between 30 and 70 percent, without any drop in accuracy. One could use this method to obtain an optimal solution more quickly or, for especially complex problems, a better solution in a tractable amount of time.
This approach could be used wherever MILP solvers are employed, such as by ride-hailing services, electric grid operators, vaccination distributors, or any entity faced with a thorny resource-allocation problem.
“Sometimes, in a field like optimization, it is very common for folks to think of solutions as either purely machine learning or purely classical. I am a firm believer that we want to get the best of both worlds, and this is a really strong instantiation of that hybrid approach,” says senior author Cathy Wu, the Gilbert W. Winslow Career Development Assistant Professor in Civil and Environmental Engineering (CEE), and a member of the Laboratory for Information and Decision Systems (LIDS) and the Institute for Data, Systems, and Society (IDSS).
Wu wrote the paper with co-lead authors Sirui Li, an IDSS graduate student, and Wenbin Ouyang, a CEE graduate student; as well as Max Paulus, a graduate student at ETH Zurich. The research will be presented at the Conference on Neural Information Processing Systems.
Tough to solve
MILP problems have an exponential number of potential solutions. For instance, say a traveling salesperson wants to find the shortest path to visit several cities and then return to their city of origin. If there are many cities which could be visited in any order, the number of potential solutions might be greater than the number of atoms in the universe.
“These problems are called NP-hard, which means it is very unlikely there is an efficient algorithm to solve them. When the problem is big enough, we can only hope to achieve some suboptimal performance,” Wu explains.
An MILP solver employs an array of techniques and practical tricks that can achieve reasonable solutions in a tractable amount of time.
A typical solver uses a divide-and-conquer approach, first splitting the space of potential solutions into smaller pieces with a technique called branching. Then, the solver employs a technique called cutting to tighten up these smaller pieces so they can be searched faster.
Cutting uses a set of rules that tighten the search space without removing any feasible solutions. These rules are generated by a few dozen algorithms, known as separators, that have been created for different kinds of MILP problems.
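To make the branching step concrete, here is a minimal branch-and-bound sketch on a made-up two-variable problem, using SciPy's linear-programming routine for each relaxation. It is an illustration only; it omits cutting entirely, along with everything else a production MILP solver does.

```python
# Toy branch-and-bound: maximize 5*x0 + 4*x1 subject to
# 6*x0 + 4*x1 <= 24 and x0 + 2*x1 <= 6, with x0, x1 nonnegative integers.
import math
from scipy.optimize import linprog

c = [-5, -4]              # linprog minimizes, so negate to maximize
A = [[6, 4], [1, 2]]
b = [24, 6]

best_val, best_x = -math.inf, None
stack = [((0, None), (0, None))]          # per-variable (lower, upper) bounds

while stack:
    bounds = stack.pop()
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
    if not res.success or -res.fun <= best_val:
        continue                          # infeasible, or relaxation can't beat incumbent
    frac = [i for i, v in enumerate(res.x) if abs(v - round(v)) > 1e-6]
    if not frac:                          # all-integer solution: new incumbent
        best_val, best_x = -res.fun, res.x
        continue
    i, v = frac[0], res.x[frac[0]]        # branch on the first fractional variable:
    lo, hi = bounds[i]                    # x_i <= floor(v) on one child,
    left, right = list(bounds), list(bounds)   # x_i >= ceil(v) on the other
    left[i], right[i] = (lo, math.floor(v)), (math.ceil(v), hi)
    stack += [tuple(left), tuple(right)]

print(best_val, best_x)                   # toy optimum: value 20 at x = (4, 0)
```

Cutting would then add extra inequalities to each relaxation, shrinking the search space without excluding any integer-feasible point.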
Wu and her team found that the process of identifying the ideal combination of separator algorithms to use is, in itself, a problem with an exponential number of solutions.
“Separator management is a core part of every solver, but this is an underappreciated aspect of the problem space. One of the contributions of this work is identifying the problem of separator management as a machine learning task to begin with,” she says.
Shrinking the solution space
She and her collaborators devised a filtering mechanism that reduces this separator search space from more than 130,000 potential combinations to around 20 options. This filtering mechanism draws on the principle of diminishing marginal returns, which says that the most benefit would come from a small set of algorithms, and adding additional algorithms won’t bring much extra improvement.
Then they use a machine-learning model to pick the best combination of algorithms from among the 20 remaining options.
This model is trained with a dataset specific to the user’s optimization problem, so it learns to choose algorithms that best suit the user’s particular task. Since a company like FedEx has solved routing problems many times before, using real data gleaned from past experience should lead to better solutions than starting from scratch each time.
The model learns through an iterative process known as contextual bandits, a form of reinforcement learning, in which it picks a potential solution, gets feedback on how good it was, and then tries again in search of a better one.
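As a minimal sketch of that loop, reduced to an epsilon-greedy bandit choosing among 20 hypothetical separator configurations: the per-configuration speedups below are simulated stand-ins, and the real method also conditions on features of each problem instance (the "contextual" part), which is dropped here for brevity.

```python
# Epsilon-greedy bandit over candidate separator configurations. Rewards are
# simulated; in the real setting each round would run the MILP solver on one
# historical instance and measure the speedup over the default configuration.
import random

random.seed(0)
N_CONFIGS = 20                      # combinations surviving the filtering step
true_speedup = [random.uniform(1.0, 1.7) for _ in range(N_CONFIGS)]  # hidden truth

counts = [0] * N_CONFIGS
values = [0.0] * N_CONFIGS          # running mean of observed speedups
EPSILON = 0.1                       # fraction of rounds spent exploring

def choose_config() -> int:
    """Mostly exploit the best-looking configuration, sometimes explore."""
    if random.random() < EPSILON:
        return random.randrange(N_CONFIGS)
    return max(range(N_CONFIGS), key=lambda i: values[i])

def observe_speedup(cfg: int) -> float:
    """Stand-in for running the solver: a noisy draw around the true speedup."""
    return random.gauss(true_speedup[cfg], 0.1)

for _ in range(2000):               # one round per (simulated) training instance
    cfg = choose_config()
    r = observe_speedup(cfg)
    counts[cfg] += 1
    values[cfg] += (r - values[cfg]) / counts[cfg]   # incremental mean update

best = max(range(N_CONFIGS), key=lambda i: values[i])
print(f"learned best config {best}, estimated speedup {values[best]:.2f}x")
```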
This data-driven approach accelerated MILP solvers between 30 and 70 percent without any drop in accuracy. Moreover, the speedup was similar when they applied it to a simpler, open-source solver and a more powerful, commercial solver.
In the future, Wu and her collaborators want to apply this approach to even more complex MILP problems, where gathering labeled data to train the model could be especially challenging. Perhaps they can train the model on a smaller dataset and then tweak it to tackle a much larger optimization problem, she says. The researchers are also interested in interpreting the learned model to better understand the effectiveness of different separator algorithms.
This research is supported, in part, by Mathworks, the National Science Foundation (NSF), the MIT Amazon Science Hub, and MIT’s Research Support Committee.
© Image: iStock
X-ray telescope catches 'spider pulsars' devouring stars like cosmic black widows (image)
© NASA/CXC/SAO; Optical, NASA/ESA/STScI/AURA; IR:NASA/JPL/Caltech; Image Processing, NASA/CXC/SAO/N. Wolk
James Webb Space Telescope gazes into 'The Brick,' a dark nebula near the Milky Way's heart
© Adam Ginsburg
Historic magnetic storms help scientists learn what to expect when one hits
© Gul Meltem Temiz Sahin/Anadolu Agency via Getty Images
How do radio telescopes work?
How are images made with radio telescopes? – Steve Hepp, Montesano, Washington. Radio telescopes come in all shapes and sizes, depending mainly on the radio wavelengths they are designed to receive. The most familiar sort is a curved dish that reflects radio waves to a focal point, ranging from meter to submillimeter wavelengths. Usually, radio waves are… Continue reading "How do radio telescopes work?"
The post How do radio telescopes work? appeared first on Astronomy Magazine.
The best telescopes for beginners to view planets, galaxies, and more
Note: This post contains affiliate links. When you buy a product through the links on this page, we may earn a commission. Maybe you have had a casual interest in astronomy for years, looking up at the night sky every chance you get. Or maybe you’ve just recently become interested in the wonders hanging high… Continue reading "The best telescopes for beginners to view planets, galaxies, and more"
The post The best telescopes for beginners to view planets, galaxies, and more appeared first on Astronomy Magazine.
Explore the Pleiades: This Week in Astronomy with Dave Eicher
The Pleiades star cluster (M45) is one of the great gems of the winter sky. One of the closest open star clusters to us, it lies just 440 light-years away in Taurus. It is also an asterism: In a dark sky, the naked eye can distinguish seven bright stars in a distinctive dipper-shaped arrangement… Continue reading "Explore the Pleiades: This Week in Astronomy with Dave Eicher"
The post Explore the Pleiades: This Week in Astronomy with Dave Eicher appeared first on Astronomy Magazine.
Might There Be No Quantum Gravity After All?
Author(s): Thomas Galley
A proposed model unites quantum theory with classical gravity by assuming that states evolve in a probabilistic way, like a game of chance.
[Physics 16, 203] Published Mon Dec 04, 2023